Science.gov

Sample records for avt-147 computational uncertainty

  1. Technical Evaluation Report for Symposium AVT-147: Computational Uncertainty in Military Vehicle Design

    NASA Technical Reports Server (NTRS)

    Radespiel, Rolf; Hemsch, Michael J.

    2007-01-01

    The complexity of modern military systems, as well as the cost and difficulty associated with experimentally verifying system and subsystem designs, makes the use of high-fidelity, physics-based simulation an attractive alternative for future design and development. The predictive ability of such simulations, including computational fluid dynamics (CFD) and computational structural mechanics (CSM), has matured significantly. However, for numerical simulations to be used with confidence in design and development, quantitative measures of uncertainty must be available. The AVT-147 Symposium was established to compile state-of-the-art methods of assessing computational uncertainty, to identify future research and development needs associated with these methods, and to present examples of how these needs are being addressed and how the methods are being applied. Papers were solicited that address uncertainty estimation associated with high-fidelity, physics-based simulations. The solicitation included papers that identify sources of error and uncertainty in numerical simulation from either the industry perspective or from the disciplinary or cross-disciplinary research perspective. Examples of the industry perspective were to include how computational uncertainty methods are used to reduce system risk in various stages of design or development.

  2. Uncertainty in Computational Aerodynamics

    NASA Technical Reports Server (NTRS)

    Luckring, J. M.; Hemsch, M. J.; Morrison, J. H.

    2003-01-01

    An approach is presented to treat computational aerodynamics as a process, subject to the fundamental quality assurance principles of process control and process improvement. We consider several aspects affecting uncertainty for the computational aerodynamic process and present a set of stages to determine the level of management required to meet risk assumptions desired by the customer of the predictions.

  3. Credible Computations: Standard and Uncertainty

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B.; VanDalsem, William (Technical Monitor)

    1995-01-01

    The discipline of computational fluid dynamics (CFD) is at a crossroad. Most of the significant advances related to computational methods have taken place. The emphasis is now shifting from methods to results. Significant efforts are made in applying CFD to solve design problems. The value of CFD results in design depends on the credibility of computed results for the intended use. The process of establishing credibility requires a standard so that there is consistency and uniformity in this process and in the interpretation of its outcome. The key element in establishing credibility is the quantification of uncertainty. This paper presents salient features of a proposed standard and a procedure for determining the uncertainty. A customer of CFD products (computer codes and computed results) expects the following: that a computer code, in terms of its logic, numerics, and fluid dynamics, and the results generated by this code are in compliance with specified requirements. This expectation is fulfilled by verification and validation of these requirements. The verification process assesses whether the problem is solved correctly and the validation process determines whether the right problem is solved. Standards for these processes are recommended. There is always some uncertainty, even if one uses validated models and verified computed results. The value of this uncertainty is important in the design process. This value is obtained by conducting a sensitivity-uncertainty analysis. Sensitivity analysis is generally defined as the procedure for determining the sensitivities of output parameters to input parameters. This analysis is a necessary step in the uncertainty analysis, and the results of this analysis highlight which computed quantities and integrated quantities in computations need to be determined accurately and which quantities do not require such attention. Uncertainty analysis is generally defined as the analysis of the effect of the uncertainties

  4. Numerical uncertainty in computational engineering and physics

    SciTech Connect

    Hemez, Francois M

    2009-01-01

    Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately, assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes as a distant third to test-analysis comparison and model calibration. This publication aims to raise awareness that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty that include experimental variability, parametric uncertainty and modeling assumptions. The concepts of consistency, convergence and truncation error are reviewed to explain the relationship between the exact solution of the continuous equations, the solution of the modified equations and the discrete solutions computed by a code. The current state-of-the-practice of code and solution verification activities is discussed. An example in the discipline of hydrodynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds of solution uncertainty in cases where the exact solution of the continuous equations, or of its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.
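    A minimal sketch of the kind of bound discussed above, estimating the discretization uncertainty of a fine-grid result from three systematically refined grids in the spirit of Richardson extrapolation. The refinement ratio, sample values, and function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def discretization_uncertainty(f_coarse, f_medium, f_fine, r):
    """Estimate the observed order of convergence and an uncertainty bound for the
    fine-grid solution, in the spirit of Richardson extrapolation.

    f_coarse, f_medium, f_fine: scalar solution values on grids refined by ratio r.
    """
    # Observed order of convergence p from three systematically refined grids.
    p = np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)
    # Richardson-extrapolated estimate of the mesh-independent solution.
    f_exact_est = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    # Discretization uncertainty attributed to the fine grid (half-width of a bound).
    u_num = abs(f_exact_est - f_fine)
    return p, f_exact_est, u_num

# Illustrative values only: e.g. a peak pressure computed on three grids (h, h/2, h/4).
p, f_star, u_num = discretization_uncertainty(0.971, 0.988, 0.9955, r=2.0)
print(f"observed order ~ {p:.2f}, extrapolated value ~ {f_star:.4f}, "
      f"numerical uncertainty ~ {u_num:.4f}")
```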

  5. New insights into faster computation of uncertainties

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Atreyee

    2012-11-01

    Heavy computational power, lengthy simulations, and an exhaustive number of model runs often seem like the only statistical tools that scientists have at their disposal when computing uncertainties associated with predictions, particularly for environmental processes such as groundwater movement. However, a new study shows that the calculation of uncertainties need not be so lengthy. Comparing two approaches, the classical Bayesian "credible interval" and a less commonly used regression-based "confidence interval" method, Lu et al. show that for many practical purposes both methods provide similar estimates of uncertainty. The advantage of the regression method is that it requires only 10 to 1,000 model runs, whereas the classical Bayesian approach requires 10,000 to millions of model runs.
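    A minimal sketch of the flavor of the regression-based "confidence interval" idea mentioned above: linearize the model about calibrated parameters with a few finite-difference runs and form a first-order prediction interval, instead of sampling the posterior with many thousands of runs. The toy model, parameter values, and covariance are illustrative assumptions, not the setup of Lu et al.

```python
import numpy as np
from scipy import stats

# Toy "model": a prediction y(theta) that would normally require an expensive simulation.
def model(theta):
    k, s = theta
    return k * np.exp(-0.3 * s) + 0.1 * s      # stand-in for a groundwater prediction

theta_hat = np.array([2.0, 1.5])               # calibrated parameters (assumed)
cov_theta = np.array([[0.04, 0.01],            # parameter covariance from calibration
                      [0.01, 0.09]])           # (assumed known here)

# Finite-difference sensitivities: one extra model run per parameter.
eps = 1e-4
y0 = model(theta_hat)
J = np.array([(model(theta_hat + eps * np.eye(2)[i]) - y0) / eps for i in range(2)])

# First-order (linearized) prediction variance and a ~95% confidence interval.
var_y = J @ cov_theta @ J
half_width = stats.norm.ppf(0.975) * np.sqrt(var_y)
print(f"prediction = {y0:.3f}  95% CI = [{y0 - half_width:.3f}, {y0 + half_width:.3f}]")
```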

  6. Uncertainty and error in computational simulations

    SciTech Connect

    Oberkampf, W.L.; Diegert, K.V.; Alvin, K.F.; Rutherford, B.M.

    1997-10-01

    The present paper addresses the question: "What are the general classes of uncertainty and error sources in complex, computational simulations?" This is the first step of a two-step process to develop a general methodology for quantitatively estimating the global modeling and simulation uncertainty in computational modeling and simulation. The second step is to develop a general mathematical procedure for representing, combining and propagating all of the individual sources through the simulation. The authors develop a comprehensive view of the general phases of modeling and simulation. The phases proposed are: conceptual modeling of the physical system, mathematical modeling of the system, discretization of the mathematical model, computer programming of the discrete model, numerical solution of the model, and interpretation of the results. This new view is built upon combining phases recognized in the disciplines of operations research and numerical solution methods for partial differential equations. The characteristics and activities of each of these phases are discussed in general terms, but examples are given for the fields of computational fluid dynamics and heat transfer. They argue that a clear distinction should be made between uncertainty and error that can arise in each of these phases. The present definitions for uncertainty and error are inadequate and, therefore, they propose comprehensive definitions for these terms. Specific classes of uncertainty and error sources are then defined that can occur in each phase of modeling and simulation. The numerical sources of error considered apply regardless of whether the discretization procedure is based on finite elements, finite volumes, or finite differences. To better explain the broad types of sources of uncertainty and error, and the utility of their categorization, they discuss a coupled-physics example simulation.

  7. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321
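    As a small illustration of the probabilistic-numerics viewpoint described above, the sketch below performs Bayesian quadrature: a Gaussian-process prior on the integrand is conditioned on a few evaluations, yielding a Gaussian posterior over the value of the integral. The kernel, length-scale, and test integrand are illustrative choices, not from the paper; the prior double integral is evaluated numerically here for brevity.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import dblquad

def f(x):                       # integrand (illustrative choice)
    return np.sin(3 * x) + x**2

a, b, ell, sigma_f = 0.0, 1.0, 0.3, 1.0
X = np.linspace(a, b, 7)        # n = 7 function evaluations
y = f(X)

def k(x1, x2):                  # squared-exponential covariance
    return sigma_f**2 * np.exp(-(x1 - x2)**2 / (2 * ell**2))

K = k(X[:, None], X[None, :]) + 1e-10 * np.eye(len(X))   # jitter for stability
# Kernel mean embedding z_i = integral of k(x, X_i) over [a, b] (closed form for SE kernel).
z = sigma_f**2 * ell * np.sqrt(np.pi / 2) * (
    erf((b - X) / (ell * np.sqrt(2))) - erf((a - X) / (ell * np.sqrt(2))))

w = np.linalg.solve(K, y)
mean_integral = z @ w
# Prior double integral of k over [a,b]^2, computed numerically here for simplicity.
kk, _ = dblquad(k, a, b, lambda x: a, lambda x: b)
var_integral = kk - z @ np.linalg.solve(K, z)

exact = (1 - np.cos(3)) / 3 + 1 / 3
print(f"BQ estimate = {mean_integral:.5f} +/- {np.sqrt(max(var_integral, 0)):.5f}  "
      f"(exact = {exact:.5f})")
```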

  8. Probabilistic numerics and uncertainty in computations.

    PubMed

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.

  9. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    SciTech Connect

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  10. Applying uncertainty quantification to multiphase flow computational fluid dynamics

    SciTech Connect

    Gel, A; Garg, R; Tong, C; Shahnam, M; Guenther, C

    2013-07-01

    Multiphase computational fluid dynamics plays a major role in the design and optimization of fossil-fuel-based reactors. There is a growing interest in accounting for the influence of uncertainties associated with physical systems to increase the reliability of computational-simulation-based engineering analysis. The U.S. Department of Energy's National Energy Technology Laboratory (NETL) has recently undertaken an initiative to characterize uncertainties associated with computer simulation of reacting multiphase flows encountered in energy-producing systems such as a coal gasifier. The current work presents preliminary results in applying non-intrusive parametric uncertainty quantification and propagation techniques with NETL's open-source multiphase computational fluid dynamics software MFIX. For this purpose, an open-source uncertainty quantification toolkit, PSUADE, developed at Lawrence Livermore National Laboratory (LLNL), has been interfaced with the MFIX software. In this study, the sources of uncertainty associated with numerical approximation and model form have been neglected, and only the model input parametric uncertainty with forward propagation has been investigated by constructing a surrogate model based on a data-fitted response surface for a multiphase flow demonstration problem. Monte Carlo simulation was employed for forward propagation of the aleatory type input uncertainties. Several insights gained from these simulations are presented, such as how inadequate characterization of uncertainties can affect the reliability of the prediction results. A global sensitivity study using Sobol' indices was also performed to better understand the contribution of the input parameters to the variability observed in the response variable.
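    A minimal sketch of the non-intrusive workflow described above: forward Monte Carlo propagation of input uncertainty through a surrogate, followed by first-order Sobol' indices via a Saltelli-style pick-and-freeze estimator. The algebraic surrogate, parameter names, and ranges are illustrative stand-ins, not MFIX/PSUADE outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in surrogate for a multiphase-flow response (e.g., an outlet temperature);
# in the study this role is played by a response surface fitted to MFIX runs.
def surrogate(x):
    d_p, u_g, e_s = x.T           # particle diameter, gas velocity, solids fraction
    return 300 + 40 * u_g + 2e5 * d_p * u_g + 80 * np.sin(np.pi * e_s)

# Aleatory input uncertainties (illustrative uniform ranges).
lo = np.array([1e-4, 1.0, 0.30])
hi = np.array([4e-4, 3.0, 0.55])

N = 20000
A = lo + (hi - lo) * rng.random((N, 3))
B = lo + (hi - lo) * rng.random((N, 3))
yA, yB = surrogate(A), surrogate(B)
print(f"mean = {yA.mean():.2f}, std = {yA.std():.2f}   (forward Monte Carlo)")

# First-order Sobol' indices via the Saltelli pick-and-freeze estimator.
varY = np.var(np.concatenate([yA, yB]))
for i, name in enumerate(["d_p", "u_g", "e_s"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]           # replace column i of A with that of B
    S1 = np.mean(yB * (surrogate(ABi) - yA)) / varY
    print(f"S1[{name}] ~ {S1:.3f}")
```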

  11. Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models

    EPA Science Inventory

    The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...

  12. Computing uncertainties in ionosphere-airglow models: I. Electron flux and species production uncertainties for Mars

    NASA Astrophysics Data System (ADS)

    Gronoff, Guillaume; Simon Wedlund, Cyril; Mertens, Christopher J.; Lillis, Robert J.

    2012-04-01

    The ionization and excitation of atoms and molecules in the upper atmospheres of the Earth and planets are computed by a number of physical models. From these calculations, quantities measurable by dedicated satellite experiments, such as airglow and electron fluxes, can be derived. It is then possible to compare model and observation to derive more fundamental physical properties of the upper atmospheres, for example, the density as a function of altitude. To ensure the accuracy of these retrieval techniques, it is important to have an estimation of the uncertainty of these models and to have ways to account for these uncertainties. The complexity of the kinetic models used to compute the secondary production of excited-state species (including ions) makes this evaluation difficult, and studies usually neglect or underestimate it. We present here a Monte-Carlo approach to the computation of model uncertainties. As an example, we studied several aspects of the model uncertainties in the upper atmosphere of Mars, including the computed secondary electron flux and the production of the main ion species. Our simulations show the importance of improving solar flux models, especially their energy binning and photon-impact cross sections, which are the main sources of uncertainty on the dayside. The risk of modifying cross sections on the basis of aeronomical observations is highlighted for the case of Mars, while accurate uncertainties are shown to be crucial for the interpretation of data from the particle detectors onboard Mars Global Surveyor. Finally, the study shows the importance of AtMoCiad, a public database dedicated to the evaluation of aeronomy cross-section uncertainties. A detailed study of the resulting emission cross-section uncertainties is the focus of a forthcoming paper (Gronoff et al., 2012), in which the outputs discussed in the present paper are used to compute airglow uncertainties, and the overall result is compared with the data from the SPICAM UV

  13. Uncertainty and Intelligence in Computational Stochastic Mechanics

    NASA Technical Reports Server (NTRS)

    Ayyub, Bilal M.

    1996-01-01

    Classical structural reliability assessment techniques are based on precise and crisp (sharp) definitions of failure and non-failure (survival) of a structure in meeting a set of strength, function and serviceability criteria. These definitions are provided in the form of performance functions and limit state equations. Thus, the criteria provide a dichotomous definition of what real physical situations represent, in the form of abrupt change from structural survival to failure. However, based on observing the failure and survival of real structures according to the serviceability and strength criteria, the transition from a survival state to a failure state and from serviceability criteria to strength criteria are continuous and gradual rather than crisp and abrupt. That is, an entire spectrum of damage or failure levels (grades) is observed during the transition to total collapse. In the process, serviceability criteria are gradually violated with monotonically increasing level of violation, and progressively lead into the strength criteria violation. Classical structural reliability methods correctly and adequately include the ambiguity sources of uncertainty (physical randomness, statistical and modeling uncertainty) by varying amounts. However, they are unable to adequately incorporate the presence of a damage spectrum, and do not consider in their mathematical framework any sources of uncertainty of the vagueness type. Vagueness can be attributed to sources of fuzziness, unclearness, indistinctiveness, sharplessness and grayness; whereas ambiguity can be attributed to nonspecificity, one-to-many relations, variety, generality, diversity and divergence. Using the nomenclature of structural reliability, vagueness and ambiguity can be accounted for in the form of realistic delineation of structural damage based on subjective judgment of engineers. For situations that require decisions under uncertainty with cost/benefit objectives, the risk of failure should

  14. MOUSE (MODULAR ORIENTED UNCERTAINTY SYSTEM): A COMPUTERIZED UNCERTAINTY ANALYSIS SYSTEM (FOR MICROCOMPUTERS)

    EPA Science Inventory

    Environmental engineering calculations involving uncertainties, either in the model itself or in the data, are far beyond the capabilities of conventional analysis for any but the simplest of models. There exist a number of general-purpose computer simulation languages, using Mon...

  15. Propagation of Computational Uncertainty Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2007-01-01

    This paper describes the use of formally designed experiments to aid in the error analysis of a computational experiment. A method is described by which the underlying code is approximated with relatively low-order polynomial graduating functions represented by truncated Taylor series approximations to the true underlying response function. A resource-minimal approach is outlined by which such graduating functions can be estimated from a minimum number of case runs of the underlying computational code. Certain practical considerations are discussed, including ways and means of coping with high-order response functions. The distributional properties of prediction residuals are presented and discussed. A practical method is presented for quantifying that component of the prediction uncertainty of a computational code that can be attributed to imperfect knowledge of independent variable levels. This method is illustrated with a recent assessment of uncertainty in computational estimates of Space Shuttle thermal and structural reentry loads attributable to ice and foam debris impact on ascent.
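    A minimal sketch of the idea described above: fit a low-order polynomial graduating function to a small designed set of code runs, then propagate imperfect knowledge of the independent-variable levels through that inexpensive surrogate. The stand-in "code", the quadratic model, the design, and the uncertainty levels are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an expensive computational code: response as a function of two
# independent variables (e.g., mass m and velocity v of a debris piece).
def code(m, v):
    return 0.8 * m * v**2 + 5.0 * m + 0.2 * v

# A small designed set of case runs (illustrative 3x3 factorial in [-1, 1]^2,
# mapped to physical ranges).
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                   [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], float)
m = 0.5 + 0.2 * design[:, 0]            # kg
v = 300 + 50 * design[:, 1]             # m/s
y = code(m, v)

# Fit a quadratic graduating function y ~ b0 + b1*m + b2*v + b3*m*v + b4*m^2 + b5*v^2.
X = np.column_stack([np.ones_like(m), m, v, m * v, m**2, v**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Propagate imperfect knowledge of the independent-variable levels through the
# inexpensive graduating function by Monte Carlo.
m_s = rng.normal(0.5, 0.03, 50000)      # assumed uncertainty in mass
v_s = rng.normal(300, 8.0, 50000)       # assumed uncertainty in velocity
Xs = np.column_stack([np.ones_like(m_s), m_s, v_s, m_s * v_s, m_s**2, v_s**2])
ys = Xs @ beta
print(f"predicted load: mean = {ys.mean():.1f}, std (input-induced) = {ys.std():.1f}")
```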

  16. Multilevel model reduction for uncertainty quantification in computational structural dynamics

    NASA Astrophysics Data System (ADS)

    Ezvan, O.; Batou, A.; Soize, C.; Gagliardini, L.

    2017-02-01

    This work deals with an extension of the reduced-order models (ROMs) that are classically constructed by modal analysis in linear structural dynamics, for the case in which the computational models are assumed to be uncertain. It is based on a multilevel projection strategy that introduces three reduced-order bases obtained by a spatial filtering of local displacements. This filtering involves global shape functions for the kinetic energy. The proposed multilevel stochastic ROM is constructed by using the nonparametric probabilistic approach of uncertainties. It allows a specific level of uncertainty to be assigned to each type of displacement associated with the corresponding vibration regime. The proposed methodology is applied to the computational model of an automobile structure, for which the multilevel stochastic ROM is identified with respect to experimental measurements. This identification is performed by solving a statistical inverse problem.

  17. Computer Model Inversion and Uncertainty Quantification in the Geosciences

    NASA Astrophysics Data System (ADS)

    White, Jeremy T.

    The subject of this dissertation is the use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events. In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for

  18. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating the numerical approximation error, computational-model-induced errors and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

  19. Statistical models and computation to evaluate measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio

    2014-08-01

    In the course of the twenty years since the publication of the Guide to the Expression of Uncertainty in Measurement (GUM), the recognition has been steadily growing of the value that statistical models and statistical computing bring to the evaluation of measurement uncertainty, and of how they enable its probabilistic interpretation. These models and computational methods can address all the problems originally discussed and illustrated in the GUM, and enable addressing other, more challenging problems, that measurement science is facing today and that it is expected to face in the years ahead. These problems that lie beyond the reach of the techniques in the GUM include (i) characterizing the uncertainty associated with the assignment of value to measurands of greater complexity than, or altogether different in nature from, the scalar or vectorial measurands entertained in the GUM: for example, sequences of nucleotides in DNA, calibration functions and optical and other spectra, spatial distribution of radioactivity over a geographical region, shape of polymeric scaffolds for bioengineering applications, etc; (ii) incorporating relevant information about the measurand that predates or is otherwise external to the measurement experiment; (iii) combining results from measurements of the same measurand that are mutually independent, obtained by different methods or produced by different laboratories. This review of several of these statistical models and computational methods illustrates some of the advances that they have enabled, and in the process invites a reflection on the interesting historical fact that these very same models and methods, by and large, were already available twenty years ago, when the GUM was first published—but then the dialogue between metrologists, statisticians and mathematicians was still in bud. It is in full bloom today, much to the benefit of all.

  20. A High Performance Bayesian Computing Framework for Spatiotemporal Uncertainty Modeling

    NASA Astrophysics Data System (ADS)

    Cao, G.

    2015-12-01

    All types of spatiotemporal measurements are subject to uncertainty. As spatiotemporal data become increasingly involved in scientific research and decision making, it is important to appropriately model the impact of uncertainty. Quantitatively modeling spatiotemporal uncertainty, however, is a challenging problem because of the complex dependence structures and data heterogeneities. State-space models provide a unifying and intuitive framework for dynamic systems modeling. In this paper, we aim to extend the conventional state-space models for uncertainty modeling in space-time contexts while accounting for spatiotemporal effects and data heterogeneities. Gaussian Markov Random Field (GMRF) models, also known as conditional autoregressive models, are arguably the most commonly used methods for modeling spatially dependent data. GMRF models basically assume that a geo-referenced variable primarily depends on its neighborhood (Markov property), and the spatial dependence structure is described via a precision matrix. Recent studies have shown that GMRFs are efficient approximations to the commonly used Gaussian fields (e.g., kriging), and compared with Gaussian fields, GMRFs enjoy a series of appealing features, such as fast computation and easy accounting for heterogeneities in spatial data (e.g., point and areal). This paper represents each spatial dataset as a GMRF and integrates them into a state-space form to statistically model the temporal dynamics. Different types of spatial measurements (e.g., categorical, count or continuous) can be accounted for by appropriate link functions. A fast alternative to the MCMC framework, the so-called Integrated Nested Laplace Approximation (INLA), was adopted for model inference. Preliminary case studies will be conducted to showcase the advantages of the described framework. In the first case, we apply the proposed method for modeling the water table elevation of the Ogallala aquifer over the past decades. In the second case, we analyze the
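    A minimal sketch of the GMRF building block referred to above: a CAR-type sparse precision matrix on a small lattice, one sampled realisation, and a conditional (posterior) mean given a few noisy point observations. The lattice size, precision parameters, and observation noise are illustrative assumptions; INLA itself (via R-INLA) is not reproduced, and dense linear algebra is used for brevity where sparse solvers would be used in practice.

```python
import numpy as np

# Build a first-order neighbourhood (rook adjacency) on an n x n lattice.
n = 12
N = n * n
W = np.zeros((N, N))
for i in range(n):
    for j in range(n):
        s = i * n + j
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n:
                W[s, ii * n + jj] = 1.0

# Proper CAR-type GMRF precision: Q = tau * (D - rho * W), sparse in practice.
tau, rho = 1.0, 0.95
D = np.diag(W.sum(axis=1))
Q = tau * (D - rho * W)

# Draw one realisation of the field: solve L^T x = z with Q = L L^T.
rng = np.random.default_rng(2)
L = np.linalg.cholesky(Q)
x = np.linalg.solve(L.T, rng.standard_normal(N))

# Condition on a few noisy point observations and compute the posterior mean
# of the whole field (kriging-like smoothing in precision form).
obs_idx = rng.choice(N, size=20, replace=False)
sigma_obs = 0.1
A = np.zeros((len(obs_idx), N))
A[np.arange(len(obs_idx)), obs_idx] = 1.0
y = x[obs_idx] + rng.normal(0, sigma_obs, len(obs_idx))
Q_post = Q + A.T @ A / sigma_obs**2
mu_post = np.linalg.solve(Q_post, A.T @ y / sigma_obs**2)
print("posterior mean at observed sites vs data:",
      np.round(mu_post[obs_idx][:5], 2), np.round(y[:5], 2))
```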

  21. Optimal allocation of computational resources in hydrogeological models under uncertainty

    NASA Astrophysics Data System (ADS)

    Moslehi, Mahsa; Rajagopal, Ram; de Barros, Felipe P. J.

    2015-09-01

    Flow and transport models in heterogeneous geological formations are usually large-scale with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting subsurface flow and transport often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field parameter representing hydrogeological characteristics of the aquifer. The physical resolution (e.g. spatial grid resolution) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We develop an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model prediction and physical errors corresponding to numerical grid resolution. Computational resources are allocated by considering the overall error based on a joint statistical-numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The performance of the framework is tested against computationally extensive simulations of flow and transport in spatially heterogeneous aquifers. Results show that modelers can achieve optimum physical and statistical resolutions while keeping a minimum error for a given computational time. The physical and statistical resolutions obtained through our analysis yield lower computational costs when compared to the results obtained with prevalent recommendations in the literature. Lastly, we highlight the significance of the geometrical characteristics of the contaminant source zone on the
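    A minimal sketch of the trade-off formalized above, assuming an overall error of the form C1*h^p (discretization) plus C2/sqrt(N) (statistical) and a cost proportional to N*h^(-d). The constants, exponents, and budget are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

# Total error model: discretization error C1*h^p plus Monte Carlo sampling error
# C2/sqrt(N), with computational cost proportional to N * h^(-d).
C1, p = 5.0, 2.0          # discretization-error constant and order (assumed)
C2 = 2.0                  # statistical-error constant (assumed)
d = 3                     # spatial dimension
cost_per_cell = 1e-6      # CPU time per cell per realization (assumed)
budget = 200.0            # total CPU-time budget

best = None
for h in np.logspace(-2.5, -0.5, 200):            # candidate grid spacings
    N = int(budget / (cost_per_cell * h**(-d)))   # realizations affordable at this h
    if N < 2:
        continue
    err = C1 * h**p + C2 / np.sqrt(N)
    if best is None or err < best[0]:
        best = (err, h, N)

err, h, N = best
print(f"optimal h ~ {h:.4f}, realizations N ~ {N}, minimum total error ~ {err:.4f}")
```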

  22. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    SciTech Connect

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-09-20

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community.

  23. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...
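    A minimal sketch of the SRSM idea summarized above: expand a model output in Hermite polynomials of a standard normal variable, fit the coefficients by regression on a modest number of model runs, and read the output moments directly from the coefficients. The toy model, truncation order, and number of runs are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

rng = np.random.default_rng(3)

# Toy model output as a function of one uncertain input X ~ N(mu, sigma^2);
# in SRSM the input is first expressed in a standard normal variable xi.
mu, sigma = 2.0, 0.4
def model(x):
    return np.exp(0.5 * x) + 0.1 * x**2      # stand-in for an expensive simulation

# Regression fit of a degree-4 expansion in the probabilists' Hermite polynomials He_n(xi).
deg = 4
xi_train = rng.standard_normal(40)            # 40 model runs
y_train = model(mu + sigma * xi_train)
Psi = He.hermevander(xi_train, deg)           # columns He_0(xi) ... He_4(xi)
coeff, *_ = np.linalg.lstsq(Psi, y_train, rcond=None)

# Moments follow from the coefficients, since E[He_m(xi) He_n(xi)] = n! * delta_mn.
mean_pce = coeff[0]
var_pce = sum(coeff[n]**2 * factorial(n) for n in range(1, deg + 1))

# Reference moments from brute-force Monte Carlo on the original model.
xs = mu + sigma * rng.standard_normal(200000)
print(f"PCE : mean = {mean_pce:.4f}, std = {np.sqrt(var_pce):.4f}")
print(f"MC  : mean = {model(xs).mean():.4f}, std = {model(xs).std():.4f}")
```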

  24. Uncertainty

    USGS Publications Warehouse

    Hunt, Randall J.

    2012-01-01

    Management decisions will often be directly informed by model predictions. However, we now know there can be no expectation of a single ‘true’ model; thus, model results are uncertain. Understandable reporting of underlying uncertainty provides necessary context to decision-makers, as model results are used for management decisions. This, in turn, forms a mechanism by which groundwater models inform a risk-management framework because uncertainty around a prediction provides the basis for estimating the probability or likelihood of some event occurring. Given that the consequences of management decisions vary, it follows that the extent of and resources devoted to an uncertainty analysis may depend on the consequences. For events with low impact, a qualitative, limited uncertainty analysis may be sufficient for informing a decision. For events with a high impact, on the other hand, the risks might be better assessed and associated decisions made using a more robust and comprehensive uncertainty analysis. The purpose of this chapter is to provide guidance on uncertainty analysis through discussion of concepts and approaches, which can vary from heuristic (i.e. the modeller’s assessment of prediction uncertainty based on trial and error and experience) to a comprehensive, sophisticated, statistics-based uncertainty analysis. Most of the material presented here is taken from Doherty et al. (2010) if not otherwise cited. Although the treatment here is necessarily brief, the reader can find citations for the source material and additional references within this chapter.

  25. Computations of uncertainty mediate acute stress responses in humans.

    PubMed

    de Berker, Archy O; Rutledge, Robb B; Mathys, Christoph; Marshall, Louise; Cross, Gemma F; Dolan, Raymond J; Bestmann, Sven

    2016-03-29

    The effects of stress are frequently studied, yet its proximal causes remain unclear. Here we demonstrate that subjective estimates of uncertainty predict the dynamics of subjective and physiological stress responses. Subjects learned a probabilistic mapping between visual stimuli and electric shocks. Salivary cortisol confirmed that our stressor elicited changes in endocrine activity. Using a hierarchical Bayesian learning model, we quantified the relationship between the different forms of subjective task uncertainty and acute stress responses. Subjective stress, pupil diameter and skin conductance all tracked the evolution of irreducible uncertainty. We observed a coupling between emotional and somatic state, with subjective and physiological tuning to uncertainty tightly correlated. Furthermore, the uncertainty tuning of subjective and physiological stress predicted individual task performance, consistent with an adaptive role for stress in learning under uncertain threat. Our finding that stress responses are tuned to environmental uncertainty provides new insight into their generation and likely adaptive function.

  26. Computations of uncertainty mediate acute stress responses in humans

    PubMed Central

    de Berker, Archy O.; Rutledge, Robb B.; Mathys, Christoph; Marshall, Louise; Cross, Gemma F.; Dolan, Raymond J.; Bestmann, Sven

    2016-01-01

    The effects of stress are frequently studied, yet its proximal causes remain unclear. Here we demonstrate that subjective estimates of uncertainty predict the dynamics of subjective and physiological stress responses. Subjects learned a probabilistic mapping between visual stimuli and electric shocks. Salivary cortisol confirmed that our stressor elicited changes in endocrine activity. Using a hierarchical Bayesian learning model, we quantified the relationship between the different forms of subjective task uncertainty and acute stress responses. Subjective stress, pupil diameter and skin conductance all tracked the evolution of irreducible uncertainty. We observed a coupling between emotional and somatic state, with subjective and physiological tuning to uncertainty tightly correlated. Furthermore, the uncertainty tuning of subjective and physiological stress predicted individual task performance, consistent with an adaptive role for stress in learning under uncertain threat. Our finding that stress responses are tuned to environmental uncertainty provides new insight into their generation and likely adaptive function. PMID:27020312

  27. Fast Computation of Hemodynamic Sensitivity to Lumen Segmentation Uncertainty.

    PubMed

    Sankaran, Sethuraman; Grady, Leo; Taylor, Charles A

    2015-12-01

    Patient-specific blood flow modeling combining imaging data and computational fluid dynamics can aid in the assessment of coronary artery disease. Accurate coronary segmentation and realistic physiologic modeling of boundary conditions are important steps to ensure a high diagnostic performance. Segmentation of the coronary arteries can be constructed by a combination of automated algorithms with human review and editing. However, blood pressure and flow are not impacted equally by different local sections of the coronary artery tree. Focusing human review and editing towards regions that will most affect the subsequent simulations can significantly accelerate the review process. We define geometric sensitivity as the standard deviation in hemodynamics-derived metrics due to uncertainty in lumen segmentation. We develop a machine learning framework for estimating the geometric sensitivity in real time. Features used include geometric and clinical variables, and reduced-order models. We develop an anisotropic kernel regression method for assessment of lumen narrowing score, which is used as a feature in the machine learning algorithm. A multi-resolution sensitivity algorithm is introduced to hierarchically refine regions of high sensitivity so that we can quantify sensitivities to a desired spatial resolution. We show that the mean absolute error of the machine learning algorithm compared to 3D simulations is less than 0.01. We further demonstrate that sensitivity is not predicted simply by anatomic reduction but also encodes information about hemodynamics which in turn depends on downstream boundary conditions. This sensitivity approach can be extended to other systems such as cerebral flow, electro-mechanical simulations, etc.
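    A minimal sketch in the spirit of the kernel-regression component mentioned above: a Nadaraya-Watson estimate with a per-feature (anisotropic) Gaussian kernel that maps geometric features to a precomputed sensitivity value without rerunning 3D simulations. The features, bandwidths, and synthetic data are illustrative, not the paper's clinical feature set or algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic training data: geometric features of a vessel segment -> a
# "sensitivity" label (e.g., std of a hemodynamic metric from 3D simulations).
X = rng.random((200, 2))                    # e.g. [normalized radius, stenosis length]
y = 0.05 * np.exp(-4 * X[:, 0]) * (0.5 + X[:, 1]) + 0.002 * rng.standard_normal(200)

def kernel_regress(Xq, X, y, h=(0.08, 0.15)):
    """Nadaraya-Watson estimate with a per-feature (anisotropic) Gaussian kernel."""
    h = np.asarray(h)
    d2 = (((Xq[:, None, :] - X[None, :, :]) / h) ** 2).sum(-1)   # scaled sq. distances
    w = np.exp(-0.5 * d2)                                        # kernel weights
    return (w @ y) / w.sum(axis=1)

# Query a few new geometries in real time (no 3D simulation needed).
Xq = np.array([[0.1, 0.8], [0.6, 0.3]])
print("predicted sensitivity:", np.round(kernel_regress(Xq, X, y), 4))
```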

  28. Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis; Ilie, Marcel; Schallhorn, Paul

    2014-01-01

    Spacecraft components may be damaged due to airflow produced by Environmental Control Systems (ECS). There are uncertainties and errors associated with using Computational Fluid Dynamics (CFD) to predict the flow field around a spacecraft from the ECS System. This paper describes an approach to estimate the uncertainty in using CFD to predict the airflow speeds around an encapsulated spacecraft.

  29. Binding in light nuclei: Statistical NN uncertainties vs Computational accuracy

    NASA Astrophysics Data System (ADS)

    Navarro Pérez, R.; Nogga, A.; Amaro, J. E.; Ruiz Arriola, E.

    2016-08-01

    We analyse the impact of the statistical uncertainties of the nucleon-nucleon interaction, based on the Granada-2013 np-pp database, on the binding energies of the triton and the alpha particle using a bootstrap method, by solving the Faddeev equations for 3H and the Yakubovsky equations for 4He, respectively. We check that in practice about 30 samples prove enough for a reliable error estimate. An extrapolation of the well-fulfilled Tjon-line correlation predicts the experimental binding of the alpha particle within uncertainties. Presented by RNP at the Workshop for Young Scientists with Research Interests Focused on Physics at FAIR, 14-19 February 2016, Garmisch-Partenkirchen, Germany.
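    A minimal sketch of the bootstrap step described above, assuming one already has a modest number of interaction-parameter samples consistent with their statistical uncertainties and a (here stand-in) map from parameters to a binding energy; the Faddeev/Yakubovsky solutions are of course not reproduced, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for the expensive map from interaction parameters to a binding energy;
# in the paper this is a Faddeev (3H) or Yakubovsky (4He) solution per sample.
def binding_energy(params):
    return -8.48 + 0.6 * params[0] - 0.3 * params[1] + 0.1 * params[0] * params[1]

# ~30 interaction-parameter samples consistent with the fit uncertainties
# (assumed here to follow a simple 2-parameter Gaussian for illustration).
n_samples = 30
param_samples = rng.multivariate_normal([0.0, 0.0],
                                        [[0.02, 0.005], [0.005, 0.03]], n_samples)
E = np.array([binding_energy(p) for p in param_samples])

# Bootstrap the sample of computed energies to check the stability of the error bar.
boot_std = np.array([rng.choice(E, size=n_samples, replace=True).std(ddof=1)
                     for _ in range(2000)])
print(f"E = {E.mean():.3f} +/- {E.std(ddof=1):.3f} MeV "
      f"(bootstrap spread of the error bar: {boot_std.std():.3f})")
```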

  30. Computational methods estimating uncertainties for profile reconstruction in scatterometry

    NASA Astrophysics Data System (ADS)

    Gross, H.; Rathsfeld, A.; Scholze, F.; Model, R.; Bär, M.

    2008-04-01

    The solution of the inverse problem in scatterometry, i.e., the determination of periodic surface structures from light diffraction patterns, is incomplete without knowledge of the uncertainties associated with the reconstructed surface parameters. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry, as a non-imaging indirect optical method, is applied to periodic line-space structures in order to determine geometric parameters such as side-wall angles, heights, and top and bottom widths, and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to the efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation, minimizing the deviation of the measured efficiency or phase shift values from the simulated ones. The reconstruction properties and the convergence of the algorithm, however, are controlled by the local conditioning of the non-linear mapping and the uncertainties of the measured efficiencies or phase shifts. In particular, the uncertainties of the reconstructed geometric parameters essentially depend on the uncertainties of the input data and can be estimated by various methods. We compare the results obtained from a Monte Carlo procedure to the estimates obtained from the approximate covariance matrix of the profile parameters close to the optimal solution, and apply them to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
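    A minimal sketch of the two uncertainty estimates compared above: a Gauss-Newton reconstruction with a stand-in forward operator in place of the finite-element diffraction solver, a covariance approximation sigma^2 (J^T J)^(-1) near the optimum, and a Monte Carlo check over perturbed measurements. The forward model, parameters, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Stand-in forward operator: maps grating parameters (height, width) to a few
# "efficiencies"; in scatterometry this is a finite-element Helmholtz solve.
angles = np.linspace(0.2, 1.2, 6)
def forward(p):
    h, w = p
    return h * np.cos(angles * w) + 0.3 * w * angles

p_true = np.array([1.0, 0.7])
sigma_meas = 0.01
y_meas = forward(p_true) + rng.normal(0, sigma_meas, angles.size)

def jacobian(p, eps=1e-6):
    J = np.empty((angles.size, p.size))
    for i in range(p.size):
        dp = np.zeros_like(p); dp[i] = eps
        J[:, i] = (forward(p + dp) - forward(p - dp)) / (2 * eps)
    return J

# Gauss-Newton iteration minimizing the residual to the measured efficiencies.
p = np.array([0.8, 0.5])
for _ in range(20):
    r = y_meas - forward(p)
    J = jacobian(p)
    p = p + np.linalg.solve(J.T @ J, J.T @ r)

# Approximate covariance of the reconstructed parameters near the optimum.
J = jacobian(p)
cov_approx = sigma_meas**2 * np.linalg.inv(J.T @ J)

# Monte Carlo check: repeat the reconstruction for perturbed measurements.
p_mc = []
for _ in range(300):
    y_k = forward(p_true) + rng.normal(0, sigma_meas, angles.size)
    pk = np.array([0.8, 0.5])
    for _ in range(20):
        Jk = jacobian(pk)
        pk = pk + np.linalg.solve(Jk.T @ Jk, Jk.T @ (y_k - forward(pk)))
    p_mc.append(pk)
print("std (covariance approx.):", np.sqrt(np.diag(cov_approx)))
print("std (Monte Carlo)       :", np.array(p_mc).std(axis=0))
```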

  31. Computer simulations in room acoustics: concepts and uncertainties.

    PubMed

    Vorländer, Michael

    2013-03-01

    Geometrical acoustics is used as the standard model for room acoustic design and consulting. Research on room acoustic simulation focuses on more accurate modeling of propagation effects such as diffraction and other wave effects in rooms, and on scattering. Much progress has been made in this field, so that wave models (for example, the boundary element method and finite differences in the time domain) can now also be used for higher frequencies. The concepts and implementations of room simulation methods are briefly reviewed. Simulations in architectural acoustics are indeed powerful tools, but their reliability depends on the skills of the operator, who has to create an adequate polygon model and has to choose correct input data for boundary conditions such as absorption and scattering. Very little is known about the uncertainty of these input data. With the theory of propagation of uncertainties, it can be shown that predicting reverberation times with an accuracy better than the just noticeable difference requires input data of a quality that is not available from reverberation-room measurements.
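    A minimal illustration of the uncertainty-propagation argument above, using Sabine's formula as the simplest possible room model and propagating an assumed uncertainty in the absorption coefficients into the predicted reverberation time, first by linearized (GUM-style) propagation and then by Monte Carlo. The room geometry, coefficients, and 20% relative uncertainty are illustrative assumptions.

```python
import numpy as np

# Sabine's formula T60 = 0.161 * V / A with A = sum(S_i * alpha_i) as the simplest
# room model; propagate the uncertainty of the absorption coefficients into T60.
V = 12.0 * 8.0 * 4.0                                   # room volume, m^3
S = np.array([96.0, 96.0, 32.0, 32.0, 48.0, 48.0])     # surface areas, m^2
alpha = np.array([0.30, 0.05, 0.10, 0.10, 0.08, 0.08]) # absorption coefficients
u_alpha = 0.2 * alpha                                  # assumed 20% relative uncertainty

A = np.sum(S * alpha)
T60 = 0.161 * V / A

# First-order propagation: dT/dalpha_i = -0.161 * V * S_i / A^2.
sens = -0.161 * V * S / A**2
u_T = np.sqrt(np.sum((sens * u_alpha) ** 2))

# Monte Carlo cross-check of the linearized result.
rng = np.random.default_rng(7)
alpha_mc = rng.normal(alpha, u_alpha, size=(50000, alpha.size))
T_mc = 0.161 * V / np.sum(S * np.clip(alpha_mc, 1e-3, 1.0), axis=1)
print(f"T60 = {T60:.2f} s, u(T60) linearized = {u_T:.2f} s, Monte Carlo = {T_mc.std():.2f} s")
```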

  32. Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.

    2013-01-01

    Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This proposal describes an approach to validate the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft. The research described here is absolutely cutting edge. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. The proposed research project includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT and OPEN FOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around

  33. Establishing performance requirements of computer based systems subject to uncertainty

    SciTech Connect

    Robinson, D.

    1997-02-01

    An organized systems design approach is dictated by the increasing complexity of computer-based systems. Computer-based systems are unique in many respects but share many of the same problems that have plagued design engineers for decades. The design of complex systems is difficult at best, but as a design becomes intensively dependent on the computer processing of external and internal information, the design process quickly borders on chaos. This situation is exacerbated by the requirement that these systems operate with a minimal quantity of information, generally corrupted by noise, regarding the current state of the system. Establishing performance requirements for such systems is particularly difficult. This paper briefly sketches a general systems design approach with emphasis on the design of computer-based decision processing systems subject to parameter and environmental variation. The approach is demonstrated with application to an on-board diagnostic (OBD) system for automotive emissions systems now mandated by the state of California and the Federal Clean Air Act. The emphasis is on an approach for establishing probabilistically based performance requirements for computer-based systems.
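    A minimal sketch of what a probabilistically stated performance requirement can look like in practice: Monte Carlo sampling of parameter and environmental variation through a stand-in performance metric, then reading off a percentile requirement and a probability of violating a candidate specification. The detection-delay model, distributions, and specification value are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# Stand-in OBD decision metric: detection delay (s) as a function of a sensor gain,
# a calibration threshold, and an ambient-temperature disturbance.
def detection_delay(gain, threshold, temp):
    return threshold / np.maximum(gain, 1e-6) * (1.0 + 0.01 * (temp - 20.0))

n = 100000
gain = rng.normal(1.0, 0.08, n)          # component-to-component variation (assumed)
threshold = rng.normal(0.5, 0.03, n)     # calibration variation (assumed)
temp = rng.uniform(-10.0, 40.0, n)       # environmental variation (assumed)

delay = detection_delay(gain, threshold, temp)

# A probabilistically stated requirement: the delay met 99% of the time,
# and the probability of violating a candidate 0.65 s specification.
print(f"99th-percentile delay = {np.quantile(delay, 0.99):.3f} s")
print(f"P(delay > 0.65 s)     = {(delay > 0.65).mean():.4f}")
```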

  34. Dissertation Defense: Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward

    2014-01-01

    Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around

  35. Dissertation Defense: Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward

    2014-01-01

    Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions

  36. Effect of Random Geometric Uncertainty on the Computational Design of a 3-D Flexible Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, C. R.; Newman, P. A.; Hou, G. J.-W.

    2002-01-01

    The effect of geometric uncertainty due to statistically independent, random, normally distributed shape parameters is demonstrated in the computational design of a 3-D flexible wing. A first-order second-moment statistical approximation method is used to propagate the assumed input uncertainty through coupled Euler CFD aerodynamic / finite element structural codes for both analysis and sensitivity analysis. First-order sensitivity derivatives obtained by automatic differentiation are used in the input uncertainty propagation. These propagated uncertainties are then used to perform a robust design of a simple 3-D flexible wing at supercritical flow conditions. The effect of the random input uncertainties is shown by comparison with conventional deterministic design results. Sample results are shown for wing planform, airfoil section, and structural sizing variables.
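    A minimal first-order second-moment sketch of the propagation step described above, with finite-difference sensitivities standing in for the automatic differentiation used in the paper and an algebraic stand-in for the coupled aero-structural analysis. The response function, nominal values, and input standard deviations are illustrative assumptions.

```python
import numpy as np

# Stand-in for the coupled aero-structural analysis: objective (e.g., drag or a
# stress measure) as a function of shape parameters x.
def response(x):
    return 0.02 + 0.5 * x[0]**2 + 0.3 * x[0] * x[1] + 0.1 * np.sin(x[2])

x0 = np.array([0.10, 0.05, 0.20])          # nominal shape parameters
sigma = np.array([0.01, 0.01, 0.02])       # std devs of independent normal inputs

# First-order sensitivities (finite differences here; AD in the reference).
eps = 1e-6
f0 = response(x0)
grad = np.array([(response(x0 + eps * np.eye(3)[i]) - f0) / eps for i in range(3)])

# First-order second-moment approximation of the response statistics.
mean_f = f0
std_f = np.sqrt(np.sum((grad * sigma) ** 2))
print(f"response ~ {mean_f:.4f} +/- {std_f:.4f} (FOSM, independent normal inputs)")
```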

  37. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article deals with the problem of determining the statistical characteristics of variable parameters (their variation range and distribution law) when analyzing the uncertainty and sensitivity of calculation results to uncertainty in the input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular in the closing correlations of the loop thermal-hydraulics block, is shown. Such a method should involve a minimal degree of subjectivity and be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in this range, provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated using, as an example, the problem of estimating the uncertainty of a parameter appearing in the model describing the transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in this range by a Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, its application can make it possible to achieve a smaller degree of conservatism in expert estimates of the uncertainties pertinent to the model parameters used in computer codes.

  38. Computed tomography and patient risk: Facts, perceptions and uncertainties.

    PubMed

    Power, Stephen P; Moloney, Fiachra; Twomey, Maria; James, Karl; O'Connor, Owen J; Maher, Michael M

    2016-12-28

    Since its introduction in the 1970s, computed tomography (CT) has revolutionized diagnostic decision-making. One of the major concerns associated with the widespread use of CT is the associated increased radiation exposure incurred by patients. The link between ionizing radiation and the subsequent development of neoplasia has been largely based on extrapolating data from studies of survivors of the atomic bombs dropped in Japan in 1945 and on assessments of the increased relative risk of neoplasia in those occupationally exposed to radiation within the nuclear industry. However, the association between exposure to low-dose radiation from diagnostic imaging examinations and oncogenesis remains unclear. With improved technology, significant advances have already been achieved with regards to radiation dose reduction. There are several dose optimization strategies available that may be readily employed including omitting unnecessary images at the ends of acquired series, minimizing the number of phases acquired, and the use of automated exposure control as opposed to fixed tube current techniques. In addition, new image reconstruction techniques that reduce radiation dose have been developed in recent years with promising results. These techniques use iterative reconstruction algorithms to attain diagnostic quality images with reduced image noise at lower radiation doses.

  19. Computed tomography and patient risk: Facts, perceptions and uncertainties

    PubMed Central

    Power, Stephen P; Moloney, Fiachra; Twomey, Maria; James, Karl; O’Connor, Owen J; Maher, Michael M

    2016-01-01

    Since its introduction in the 1970s, computed tomography (CT) has revolutionized diagnostic decision-making. One of the major concerns associated with the widespread use of CT is the associated increased radiation exposure incurred by patients. The link between ionizing radiation and the subsequent development of neoplasia has been largely based on extrapolating data from studies of survivors of the atomic bombs dropped in Japan in 1945 and on assessments of the increased relative risk of neoplasia in those occupationally exposed to radiation within the nuclear industry. However, the association between exposure to low-dose radiation from diagnostic imaging examinations and oncogenesis remains unclear. With improved technology, significant advances have already been achieved with regards to radiation dose reduction. There are several dose optimization strategies available that may be readily employed including omitting unnecessary images at the ends of acquired series, minimizing the number of phases acquired, and the use of automated exposure control as opposed to fixed tube current techniques. In addition, new image reconstruction techniques that reduce radiation dose have been developed in recent years with promising results. These techniques use iterative reconstruction algorithms to attain diagnostic quality images with reduced image noise at lower radiation doses. PMID:28070242

  20. Computer-assisted uncertainty assessment of k0-NAA measurement results

    NASA Astrophysics Data System (ADS)

    Bučar, T.; Smodiš, B.

    2008-10-01

    In quantifying the measurement uncertainty of results obtained by k0-based neutron activation analysis (k0-NAA), a number of parameters should be considered and appropriately combined in deriving the final budget. To facilitate this process, a program ERON (ERror propagatiON) was developed, which computes uncertainty propagation factors from the relevant formulae and calculates the combined uncertainty. The program calculates the uncertainty of the final result—the mass fraction of an element in the measured sample—taking into account the relevant neutron flux parameters such as α and f, including their uncertainties. Nuclear parameters and their uncertainties are taken from the IUPAC database (V.P. Kolotov and F. De Corte, Compilation of k0 and related data for NAA). Furthermore, the program allows for uncertainty calculations of the measured parameters needed in k0-NAA: α (determined with either the Cd-ratio or the Cd-covered multi-monitor method), f (using the Cd-ratio or the bare method), Q0 (using the Cd-ratio or internal comparator method) and k0 (using the Cd-ratio, internal comparator or the Cd subtraction method). The results of calculations can be printed or exported to text or MS Excel format for further analysis. Special care was taken to make the calculation engine portable so that it can be incorporated into other applications (e.g., as a DLL or a WWW server). The theoretical basis and the program are described in detail, and typical results obtained under real measurement conditions are presented.

  1. Computing uncertainties in ionosphere-airglow models: II. The Martian airglow

    NASA Astrophysics Data System (ADS)

    Gronoff, Guillaume; Simon Wedlund, Cyril; Mertens, Christopher J.; Barthélemy, Mathieu; Lillis, Robert J.; Witasse, Olivier

    2012-05-01

    One of the objectives of spectrometers onboard space missions is to retrieve atmospheric parameters (notably density, composition and temperature). To fulfill this objective, comparisons between observations and model results are necessary. Knowledge of these model uncertainties is therefore necessary, although usually not considered, to estimate the accuracy in planetary upper atmosphere remote sensing of these parameters. In Part I of this study, “Computing uncertainties in ionosphere-airglow models: I. Electron flux and species production uncertainties for Mars” (Gronoff et al., 2012), we presented the uncertainties in the production of excited states and ionized species from photon and electron impacts, computed with a Monte-Carlo approach, and we applied this technique to the Martian upper atmosphere. In the present paper, we present the results of propagation of these production errors to the main UV emissions and the study of other sources of uncertainties. As an example, we studied several aspects of the model uncertainties in the thermosphere of Mars, and especially the O(1S) green line (557.7 nm, with its equivalent, the trans-auroral line at 297.2 nm), the Cameron bands CO(a3Π), and CO2+(B2Σu+) doublet emissions. We first show that the excited species at the origin of these emissions are mainly produced by electron and photon impact. We demonstrate that it is possible to reduce the computation time by decoupling the different sources of uncertainties; moreover, we show that emission uncertainties can be large (>30%) because of the strong sensitivity to the production uncertainties. Our study demonstrates that uncertainty calculations are a crucial step prior to performing remote sensing in the atmosphere of Mars and the other planets and can be used as a guide to subsequent adjustments of cross sections based on aeronomical observations. Finally, we compare the simulations with observations from the SPICAM spectrometer on the Mars Express

  2. Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.

    2013-01-01

    There have been few discussions on using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis only from computational data. The method uses current CFD uncertainty techniques coupled with the Student-T distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied from a specified tolerance or bias error, and the differences in the results are used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations and conclusions drawn about the feasibility of using CFD without experimental data. The results provide a tactic to analytically estimate the uncertainty in a CFD model when experimental data are unavailable.
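
    A minimal sketch of the general idea, assuming a small set of heat-transfer coefficients obtained from repeated CFD runs with inputs perturbed within their tolerances; the spread of these outputs is turned into an interval using the Student-t distribution. The sample values are invented for illustration and do not come from the paper.

    ```python
    # Treat the spread of CFD outputs from perturbed-input runs as a small
    # sample and form a Student-t confidence interval around the mean.
    import numpy as np
    from scipy import stats

    h_runs = np.array([104.2, 101.8, 106.5, 103.1, 105.0])  # W/(m^2 K), hypothetical
    n = len(h_runs)
    mean = h_runs.mean()
    s = h_runs.std(ddof=1)

    t_crit = stats.t.ppf(0.975, df=n - 1)      # 95% two-sided critical value
    half_width = t_crit * s / np.sqrt(n)
    print(f"h = {mean:.1f} +/- {half_width:.1f} W/(m^2 K) (95% confidence)")
    ```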

  3. A Bayes network approach to uncertainty quantification in hierarchically developed computational models

    SciTech Connect

    Urbina, Angel; Mahadevan, Sankaran; Paez, Thomas L.

    2012-03-01

    Here, performance assessment of complex systems is ideally accomplished through system-level testing, but because such tests are expensive, they are seldom performed. On the other hand, for economic reasons, data from tests on individual components that are parts of complex systems are more readily available. The lack of system-level data leads to a need to build computational models of systems and use them for performance prediction in lieu of experiments. Because of their complexity, models are sometimes built in a hierarchical manner, starting with simple components, progressing to collections of components, and finally, to the full system. Quantification of uncertainty in the predicted response of a system model is required in order to establish confidence in the representation of actual system behavior. This paper proposes a framework for the complex, but very practical problem of quantification of uncertainty in system-level model predictions. It is based on Bayes networks and uses the available data at multiple levels of complexity (i.e., components, subsystem, etc.). Because epistemic sources of uncertainty were shown to be secondary in this application, only aleatoric uncertainty is included in the present uncertainty quantification. An example showing application of the techniques to uncertainty quantification of measures of response of a real, complex aerospace system is included.

  4. A Bayes network approach to uncertainty quantification in hierarchically developed computational models

    DOE PAGES

    Urbina, Angel; Mahadevan, Sankaran; Paez, Thomas L.

    2012-03-01

    Here, performance assessment of complex systems is ideally accomplished through system-level testing, but because such tests are expensive, they are seldom performed. On the other hand, for economic reasons, data from tests on individual components that are parts of complex systems are more readily available. The lack of system-level data leads to a need to build computational models of systems and use them for performance prediction in lieu of experiments. Because of their complexity, models are sometimes built in a hierarchical manner, starting with simple components, progressing to collections of components, and finally, to the full system. Quantification of uncertainty in the predicted response of a system model is required in order to establish confidence in the representation of actual system behavior. This paper proposes a framework for the complex, but very practical problem of quantification of uncertainty in system-level model predictions. It is based on Bayes networks and uses the available data at multiple levels of complexity (i.e., components, subsystem, etc.). Because epistemic sources of uncertainty were shown to be secondary in this application, only aleatoric uncertainty is included in the present uncertainty quantification. An example showing application of the techniques to uncertainty quantification of measures of response of a real, complex aerospace system is included.

  5. Modeling uncertainty in classification design of a computer-aided detection system

    NASA Astrophysics Data System (ADS)

    Hosseini, Rahil; Dehmeshki, Jamshid; Barman, Sarah; Mazinani, Mahdi; Qanadli, Salah

    2010-03-01

    A computerized image analysis technology suffers from imperfection, imprecision and vagueness of the input data and its propagation in all individual components of the technology including image enhancement, segmentation and pattern recognition. Furthermore, a Computerized Medical Image Analysis System (CMIAS) such as computer aided detection (CAD) technology deals with another source of uncertainty that is inherent in image-based practice of medicine. While there are several technology-oriented studies reported in developing CAD applications, no attempt has been made to address, model and integrate these types of uncertainty in the design of the system components, even though uncertainty issues directly affect the performance and its accuracy. In this paper, the main uncertainty paradigms associated with CAD technologies are addressed. The influence of the vagueness and imprecision in the classification of the CAD, as a second reader, on the validity of ROC analysis results is defined. In order to tackle the problem of uncertainty in the classification design of the CAD, two fuzzy methods are applied and evaluated for a lung nodule CAD application. Type-1 fuzzy logic system (T1FLS) and an extension of it, interval type-2 fuzzy logic system (IT2FLS) are employed as methods with high potential for managing uncertainty issues. The novelty of the proposed classification methods is to address and handle all sources of uncertainty associated with a CAD system. The results reveal that IT2FLS is superior to T1FLS for tackling all sources of uncertainty and significantly, the problem of inter and intra operator observer variability.

  6. Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Groves, Curtis; Ilie, Marcel; Schallhorn, Paul

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's Extrapolation. This method is based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or another uncertainty method to approximate errors.
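
    For reference, a minimal Richardson-extrapolation sketch for three systematically refined grids with a constant refinement ratio is shown below. The grid solutions and refinement ratio are illustrative; on unstructured grids the solutions must first be interpolated to common locations, which is exactly the interpolation problem the paper studies.

    ```python
    # Richardson extrapolation from three grid solutions with refinement ratio r,
    # plus a grid convergence index (GCI) on the fine grid. Values are invented.
    import math

    f_coarse, f_medium, f_fine = 1.120, 1.060, 1.030   # hypothetical grid solutions
    r = 2.0                                            # grid refinement ratio

    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)  # observed order
    f_exact_est = f_fine + (f_fine - f_medium) / (r**p - 1.0)                # extrapolated value

    gci_fine = 1.25 * abs((f_fine - f_medium) / f_fine) / (r**p - 1.0)       # factor of safety 1.25
    print(p, f_exact_est, gci_fine)
    ```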

  7. Bayesian uncertainty quantification and propagation in molecular dynamics simulations: A high performance computing framework

    NASA Astrophysics Data System (ADS)

    Angelikopoulos, Panagiotis; Papadimitriou, Costas; Koumoutsakos, Petros

    2012-10-01

    We present a Bayesian probabilistic framework for quantifying and propagating the uncertainties in the parameters of force fields employed in molecular dynamics (MD) simulations. We propose a highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters. Efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures. Furthermore, adaptive surrogate models are proposed in order to reduce the computational cost associated with the large number of MD model runs. The effectiveness and computational efficiency of the proposed Bayesian framework is demonstrated in MD simulations of liquid and gaseous argon.

  8. Bayesian uncertainty quantification and propagation in molecular dynamics simulations: a high performance computing framework.

    PubMed

    Angelikopoulos, Panagiotis; Papadimitriou, Costas; Koumoutsakos, Petros

    2012-10-14

    We present a Bayesian probabilistic framework for quantifying and propagating the uncertainties in the parameters of force fields employed in molecular dynamics (MD) simulations. We propose a highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters. Efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures. Furthermore, adaptive surrogate models are proposed in order to reduce the computational cost associated with the large number of MD model runs. The effectiveness and computational efficiency of the proposed Bayesian framework is demonstrated in MD simulations of liquid and gaseous argon.

  9. Analysis of Uncertainty and Variability in Finite Element Computational Models for Biomedical Engineering: Characterization and Propagation

    PubMed Central

    Mangado, Nerea; Piella, Gemma; Noailly, Jérôme; Pons-Prats, Jordi; Ballester, Miguel Ángel González

    2016-01-01

    Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering. PMID:27872840

  10. Analysis of Uncertainty and Variability in Finite Element Computational Models for Biomedical Engineering: Characterization and Propagation.

    PubMed

    Mangado, Nerea; Piella, Gemma; Noailly, Jérôme; Pons-Prats, Jordi; Ballester, Miguel Ángel González

    2016-01-01

    Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering.

  11. Uncertainty and variability in computational and mathematical models of cardiac physiology

    PubMed Central

    Mirams, Gary R.; Pathmanathan, Pras; Gray, Richard A.; Challenor, Peter

    2016-01-01

    Key points: Mathematical and computational models of cardiac physiology have been an integral component of cardiac electrophysiology since its inception, and are collectively known as the Cardiac Physiome. We identify and classify the numerous sources of variability and uncertainty in model formulation, parameters and other inputs that arise from both natural variation in experimental data and lack of knowledge. The impact of uncertainty on the outputs of Cardiac Physiome models is not well understood, and this limits their utility as clinical tools. We argue that incorporating variability and uncertainty should be a high priority for the future of the Cardiac Physiome. We suggest investigating the adoption of approaches developed in other areas of science and engineering while recognising unique challenges for the Cardiac Physiome; it is likely that novel methods will be necessary that require engagement with the mathematics and statistics community. Abstract: The Cardiac Physiome effort is one of the most mature and successful applications of mathematical and computational modelling for describing and advancing the understanding of physiology. After five decades of development, physiological cardiac models are poised to realise the promise of translational research via clinical applications such as drug development and patient-specific approaches as well as ablation, cardiac resynchronisation and contractility modulation therapies. For models to be included as a vital component of the decision process in safety-critical applications, rigorous assessment of model credibility will be required. This White Paper describes one aspect of this process by identifying and classifying sources of variability and uncertainty in models as well as their implications for the application and development of cardiac models. We stress the need to understand and quantify the sources of variability and uncertainty in model inputs, and the impact of model structure and complexity and

  12. A Novel Method for the Evaluation of Uncertainty in Dose-Volume Histogram Computation

    SciTech Connect

    Henriquez, Francisco Cutanda; Castrillon, Silvia Vargas

    2008-03-15

    Purpose: Dose-volume histograms (DVHs) are a useful tool in state-of-the-art radiotherapy treatment planning, and it is essential to recognize their limitations. Even after a specific dose-calculation model is optimized, dose distributions computed by using treatment-planning systems are affected by several sources of uncertainty, such as algorithm limitations, measurement uncertainty in the data used to model the beam, and residual differences between measured and computed dose. This report presents a novel method to take them into account. Methods and Materials: To take into account the effect of associated uncertainties, a probabilistic approach using a new kind of histogram, a dose-expected volume histogram, is introduced. The expected value of the volume in the region of interest receiving an absorbed dose equal to or greater than a certain value is found by using the probability distribution of the dose at each point. A rectangular probability distribution is assumed for this point dose, and a formulation that accounts for uncertainties associated with point dose is presented for practical computations. Results: This method is applied to a set of DVHs for different regions of interest, including 6 brain patients, 8 lung patients, 8 pelvis patients, and 6 prostate patients planned for intensity-modulated radiation therapy. Conclusions: Results show a greater effect on planning target volume coverage than in organs at risk. In cases of steep DVH gradients, such as planning target volumes, this new method shows the largest differences with the corresponding DVH; thus, the effect of the uncertainty is larger.
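
    A small sketch of the expected-volume idea described above: instead of counting a voxel as fully in or out of a DVH bin, count the probability that its uncertain dose exceeds the threshold, assuming a rectangular (uniform) point-dose distribution of half-width delta around the computed dose. The voxel doses and the value of delta below are synthetic, not patient data.

    ```python
    # Dose-expected volume histogram sketch with a uniform point-dose distribution.
    import numpy as np

    doses = np.random.default_rng(0).normal(60.0, 3.0, size=10000)  # Gy, synthetic voxel doses
    delta = 1.5                                                     # Gy, assumed half-width

    def expected_volume_fraction(threshold):
        # P(D_i >= threshold) for a uniform distribution on [d - delta, d + delta]
        p = (doses + delta - threshold) / (2.0 * delta)
        return np.clip(p, 0.0, 1.0).mean()

    def conventional_volume_fraction(threshold):
        return (doses >= threshold).mean()

    for t in (57.0, 60.0, 63.0):
        print(t, conventional_volume_fraction(t), expected_volume_fraction(t))
    ```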

  13. Prediction and Uncertainty in Computational Modeling of Complex Phenomena: A Whitepaper

    SciTech Connect

    Trucano, T.G.

    1999-01-20

    This report summarizes some challenges associated with the use of computational science to predict the behavior of complex phenomena. As such, the document is a compendium of ideas that have been generated by various staff at Sandia. The report emphasizes key components of the use of computational science to predict complex phenomena, including computational complexity and correctness of implementations, the nature of the comparison with data, the importance of uncertainty quantification in comprehending what the prediction is telling us, and the role of risk in making and using computational predictions. Both broad and more narrowly focused technical recommendations for research are given. Several computational problems are summarized that help to illustrate the issues we have emphasized. The tone of the report is informal, with virtually no mathematics. However, we have attempted to provide a useful bibliography that would assist the interested reader in pursuing the content of this report in greater depth.

  14. Numerical Uncertainty Analysis for Computational Fluid Dynamics using Student T Distribution -- Application of CFD Uncertainty Analysis Compared to Exact Analytical Solution

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.

    2014-01-01

    Computational Fluid Dynamics (CFD) is the standard numerical tool used by Fluid Dynamists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student-T distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-T distribution can encompass the exact solution.

  15. Flood risk assessment at the regional scale: Computational challenges and the monster of uncertainty

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Papalexiou, Simon-Michael; Markonis, Yiannis; Koukouvinos, Antonis; Vasiliades, Lampros; Papaioannou, George; Loukas, Athanasios

    2016-04-01

    We present a methodological framework for flood risk assessment at the regional scale, developed within the implementation of the EU Directive 2007/60 in Greece. This comprises three phases: (a) statistical analysis of extreme rainfall data, resulting in spatially distributed parameters of intensity-duration-frequency (IDF) relationships and their confidence intervals, (b) hydrological simulations, using event-based semi-distributed rainfall-runoff approaches, and (c) hydraulic simulations, employing the propagation of flood hydrographs across the river network and the mapping of inundated areas. The flood risk assessment procedure is employed over the River Basin District of Thessaly, Greece, which requires schematization and modelling of hundreds of sub-catchments, each one examined for several risk scenarios. This is a challenging task, involving multiple computational issues to handle, such as the organization, control and processing of huge amounts of hydrometeorological and geographical data, the configuration of model inputs and outputs, and the co-operation of several software tools. In this context, we have developed supporting applications allowing massive data processing and effective model coupling, thus drastically reducing the need for manual interventions and, consequently, the time of the study. Within the flood risk computations we also account for three major sources of uncertainty, in an attempt to provide upper and lower confidence bounds of flood maps, i.e. (a) statistical uncertainty of IDF curves, (b) structural uncertainty of hydrological models, due to varying antecedent soil moisture conditions, and (c) parameter uncertainty of hydraulic models, with emphasis on roughness coefficients. Our investigations indicate that the combined effect of the above uncertainties (which are certainly not the only ones) results in extremely large bounds of potential inundation, thus raising many questions about the interpretation and usefulness of current flood

  16. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.

    SciTech Connect

    Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)

    2006-10-01

    Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
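
    A toy sampling-based propagation for a single uncertain input is sketched below to illustrate the belief/plausibility idea: sampling inside each interval focal element approximates the output extrema, which then decide whether the element's basic probability assignment counts toward belief, plausibility, or neither. The model, intervals, and BPAs are placeholders, not the computationally intensive analyses targeted by the report.

    ```python
    # Sampling-based evidence-theory propagation for interval focal elements.
    import numpy as np

    def model(x):
        return x**2 + 1.0          # hypothetical response

    # focal elements: (interval, BPA); BPAs sum to 1
    focal_elements = [((0.0, 1.0), 0.5), ((0.5, 2.0), 0.3), ((1.5, 3.0), 0.2)]
    threshold = 3.0                # condition of interest: response <= threshold
    rng = np.random.default_rng(1)

    belief = plausibility = 0.0
    for (lo, hi), bpa in focal_elements:
        y = model(rng.uniform(lo, hi, size=1000))   # sample within the focal element
        if y.max() <= threshold:                    # whole element satisfies -> belief
            belief += bpa
        if y.min() <= threshold:                    # part of the element satisfies -> plausibility
            plausibility += bpa

    print(belief, plausibility)
    ```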

  17. Uncertainty Management in Seismic Vulnerability Assessment Using Granular Computing Based on Covering of Universe

    NASA Astrophysics Data System (ADS)

    Khamespanah, F.; Delavar, M. R.; Zare, M.

    2013-05-01

    An earthquake is an abrupt displacement of the earth's crust caused by the release of strain accumulated along faults or by volcanic eruptions. Earthquakes, as a recurring natural cataclysm, have always been a matter of concern in Tehran, the capital of Iran, a city lying on a number of known and unknown faults. Earthquakes can cause severe physical, psychological and financial damage. Consequently, procedures should be developed to assist in modelling the potential casualties and their spatial uncertainty. One of these procedures is the production of seismic vulnerability maps so that preventive measures can be taken to mitigate the corporeal and financial losses of future earthquakes. Since vulnerability assessment is a multi-criteria decision-making problem depending on several parameters and expert judgments, it is undoubtedly characterized by intrinsic uncertainties. In this study, it is attempted to use a granular computing (GrC) model based on a covering of the universe to handle the spatial uncertainty. The granular computing model concentrates on a general theory and methodology for problem solving as well as information processing by assuming multiple levels of granularity. Basic elements in granular computing are subsets, classes, and clusters of a universe, called granules. In this research, GrC is used for extracting classification rules for seismic vulnerability with minimum entropy, to handle the uncertainty related to earthquake data. Tehran was selected as the study area. In our previous research, a granular computing model based on a partition model of the universe was employed. That model has limitations in defining similarity between elements of the universe and in defining granules: similarity between elements is defined based on an equivalence relation, according to which two objects are similar with respect to some attributes provided that, for each attribute, the values of these objects are equal. In this research a general relation for defining similarity between

  18. Anthropometric approaches and their uncertainties to assigning computational phantoms to individual patients in pediatric dosimetry studies

    NASA Astrophysics Data System (ADS)

    Whalen, Scott; Lee, Choonsik; Williams, Jonathan L.; Bolch, Wesley E.

    2008-01-01

    Current efforts to reconstruct organ doses in children undergoing diagnostic imaging or therapeutic interventions using ionizing radiation typically rely upon the use of reference anthropomorphic computational phantoms coupled to Monte Carlo radiation transport codes. These phantoms are generally matched to individual patients based upon nearest age or sometimes total body mass. In this study, we explore alternative methods of phantom-to-patient matching with the goal of identifying those methods which yield the lowest residual errors in internal organ volumes. Various thoracic and abdominal organs were segmented and organ volumes obtained from chest-abdominal-pelvic (CAP) computed tomography (CT) image sets from 38 pediatric patients ranging in age from 2 months to 15 years. The organs segmented included the skeleton, heart, kidneys, liver, lungs and spleen. For each organ, least-squared regression lines, 95th percentile confidence intervals and 95th percentile prediction intervals were established as a function of patient age, trunk volume, estimated trunk mass, trunk height, and three estimates of the ventral body cavity volume based on trunk height alone, or in combination with circumferential, width and/or breadth measurements in the mid-chest of the patient. When matching phantom to patient based upon age, residual uncertainties in organ volumes ranged from 53% (lungs) to 33% (kidneys), and when trunk mass was used (surrogate for total body mass as we did not have images of patient head, arms or legs), these uncertainties ranged from 56% (spleen) to 32% (liver). When trunk height is used as the matching parameter, residual uncertainties in organ volumes were reduced to between 21 and 29% for all organs except the spleen (40%). In the case of the lungs and skeleton, the two-fold reduction in organ volume uncertainties was seen in moving from patient age to trunk height—a parameter easily measured in the clinic. When ventral body cavity volumes were used

  19. Uncertainty quantification through the Monte Carlo method in a cloud computing setting

    NASA Astrophysics Data System (ADS)

    Cunha, Americo; Nasser, Rafael; Sampaio, Rubens; Lopes, Hélio; Breitman, Karin

    2014-05-01

    The Monte Carlo (MC) method is the most common technique used for uncertainty quantification, due to its simplicity and good statistical results. However, its computational cost is extremely high, and, in many cases, prohibitive. Fortunately, the MC algorithm is easily parallelizable, which allows its use in simulations where the computation of a single realization is very costly. This work presents a methodology for the parallelization of the MC method, in the context of cloud computing. This strategy is based on the MapReduce paradigm, and allows an efficient distribution of tasks in the cloud. This methodology is illustrated on a problem of structural dynamics that is subject to uncertainties. The results show that the technique is capable of producing good results concerning statistical moments of low order. It is shown that even a simple problem may require many realizations for convergence of histograms, which makes the cloud computing strategy very attractive (due to its high scalability capacity and low-cost). Additionally, the results regarding the time of processing and storage space usage allow one to qualify this new methodology as a solution for simulations that require a number of MC realizations beyond the standard.
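
    The map/reduce flavour of the parallelization can be sketched as follows: each "map" task computes a batch of realizations independently and returns batch statistics, and the "reduce" step combines them. Python's multiprocessing stands in here for the cloud back-end used in the paper, and the one-degree-of-freedom oscillator with an uncertain stiffness is purely illustrative.

    ```python
    # Map/reduce-style parallel Monte Carlo sketch using multiprocessing.
    import numpy as np
    from multiprocessing import Pool

    def map_task(args):
        seed, n = args
        rng = np.random.default_rng(seed)
        k = rng.normal(100.0, 10.0, size=n)        # uncertain stiffness
        omega = np.sqrt(k / 1.0)                   # natural frequency, unit mass
        return n, omega.sum(), (omega**2).sum()    # sufficient statistics of the batch

    def reduce_step(batches):
        n = sum(b[0] for b in batches)
        s1 = sum(b[1] for b in batches)
        s2 = sum(b[2] for b in batches)
        mean = s1 / n
        var = s2 / n - mean**2
        return mean, np.sqrt(var)

    if __name__ == "__main__":
        with Pool(4) as pool:
            batches = pool.map(map_task, [(seed, 25000) for seed in range(4)])
        print(reduce_step(batches))
    ```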

  20. Identification of intestinal wall abnormalities and ischemia by modeling spatial uncertainty in computed tomography imaging findings.

    PubMed

    Tsunoyama, Taichiro; Pham, Tuan D; Fujita, Takashi; Sakamoto, Tetsuya

    2014-10-01

    Intestinal abnormalities and ischemia are medical conditions in which inflammation and injury of the intestine are caused by inadequate blood supply. Acute ischemia of the small bowel can be life-threatening. Computed tomography (CT) is currently a gold standard for the diagnosis of acute intestinal ischemia in the emergency department. However, the assessment of the diagnostic performance of CT findings in the detection of intestinal abnormalities and ischemia has been a difficult task for both radiologists and surgeons. Little effort has been found in developing computerized systems for the automated identification of these types of complex gastrointestinal disorders. In this paper, a geostatistical mapping of spatial uncertainty in CT scans is introduced for medical image feature extraction, which can be effectively applied for diagnostic detection of intestinal abnormalities and ischemia from control patterns. Experimental results obtained from the analysis of clinical data suggest the usefulness of the proposed uncertainty mapping model.

  1. Advanced computational tools for optimization and uncertainty quantification of carbon capture processes

    SciTech Connect

    Miller, David C.; Ng, Brenda; Eslick, John

    2014-01-01

    Advanced multi-scale modeling and simulation has the potential to dramatically reduce development time, resulting in considerable cost savings. The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and universities that is developing, demonstrating, and deploying a suite of multi-scale modeling and simulation tools. One significant computational tool is FOQUS, a Framework for Optimization and Quantification of Uncertainty and Sensitivity, which enables basic data submodels, including thermodynamics and kinetics, to be used within detailed process models to rapidly synthesize and optimize a process and determine the level of uncertainty associated with the resulting process. The overall approach of CCSI is described with a more detailed discussion of FOQUS and its application to carbon capture systems.

  2. On correlated sources of uncertainty in four dimensional computed tomography data sets.

    PubMed

    Ehler, Eric D; Tome, Wolfgang A

    2010-06-01

    The purpose of this work is to estimate the degree of uncertainty inherent to a given four-dimensional computed tomography (4D-CT) imaging modality and to test for interaction of the investigated factors (i.e., object displacement, velocity, and the period of motion) when determining the object motion coordinates, the motion envelope, and the conformality with which it can be defined within a time-based data series. A motion phantom consisting of four glass spheres embedded in low-density foam on a one-dimensional moving platform was used to investigate the interaction of uncertainty factors in motion trajectory that could be used in the comparison of trajectory definition, motion envelope definition and conformality in an optimal 4D-CT imaging environment. The motion platform allowed for a highly defined motion trajectory that could be used as the ground truth in the comparison with observed motion in 4D-CT data sets. 4D-CT data sets were acquired for 9 different motion patterns. Multifactor analysis of variance (ANOVA) was performed where the factors considered were the phantom maximum velocity, object volume, and the image intensity used to delineate the high density objects. No statistical significance was found for three-factor interaction for definition of the motion trajectory, motion envelope, or Dice Similarity Coefficient (DSC) conformality. Two-factor interactions were found to be statistically significant for the DSC for the interactions of 1) object volume and the HU threshold used for delineation and 2) the object velocity and object volume. Moreover, a statistically significant single-factor direct proportionality was observed between the maximum velocity and the mean tracking error. In this work, multiple factors impacting the uncertainty in 4D data sets have been considered and some statistically significant two-factor interactions have been identified. Therefore, the detailed evaluation of errors and uncertainties in 4D imaging modalities is recommended in

  3. Uncertainty quantification based on pillars of experiment, theory, and computation. Part I: Data analysis

    NASA Astrophysics Data System (ADS)

    Elishakoff, I.; Sarlin, N.

    2016-06-01

    In this paper we provide a general methodology for the analysis and design of systems involving uncertainties. Available experimental data are enclosed by geometric figures (triangle, rectangle, ellipse, parallelogram, super ellipse) of minimum area. These areas are then inflated by resorting to the Chebyshev inequality in order to take the forecasted data into account. The next step consists in evaluating the response of the system when uncertainties are confined to one of the above five suitably inflated geometric figures. This step involves a combined theoretical and computational analysis. We evaluate the maximum response of the system subjected to variation of the uncertain parameters in each hypothesized region. The results of the triangular, interval, ellipsoidal, parallelogram, and super ellipsoidal calculi are compared with a view to identifying the region that leads to the minimum of the maximum response. That response is identified as a result of the suggested predictive inference. The methodology thus synthesizes the probabilistic notion with each of the five calculi. The use of the term "pillar" in the title was inspired by the News Release (2013) on the awarding of the Honda Prize to J. Tinsley Oden, stating, among other things, that "Dr. Oden refers to computational science as the "third pillar" of scientific inquiry, standing beside theoretical and experimental science. Computational science serves as a new paradigm for acquiring knowledge and informing decisions important to humankind". Analysis of systems with uncertainties necessitates employment of all three pillars. The analysis is based on the assumption that the five shapes are each different conservative estimates of the true bounding region. The smallest of the maximal displacements in the x and y directions (for a 2D system) therefore provides the closest estimate of the true displacements based on the above assumption.
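
    A minimal sketch of the Chebyshev inflation step for the rectangular case is given below: a minimum-area rectangle enclosing the observed data is widened so that, by the Chebyshev inequality, at least a fraction 1 - 1/k^2 of future data falls inside it along each coordinate, regardless of the underlying distribution. The data and the target coverage are invented for illustration.

    ```python
    # Chebyshev-based inflation of a rectangular bounding region.
    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.normal([1.0, 3.0], [0.2, 0.5], size=(50, 2))   # observed 2-D samples

    coverage = 0.95                     # desired per-coordinate fraction of future data
    k = 1.0 / np.sqrt(1.0 - coverage)   # Chebyshev: P(|X - mu| >= k*sigma) <= 1/k^2

    mu = data.mean(axis=0)
    sigma = data.std(axis=0, ddof=1)

    lower_obs, upper_obs = data.min(axis=0), data.max(axis=0)   # minimum bounding rectangle
    lower_inf = np.minimum(lower_obs, mu - k * sigma)           # inflated rectangle
    upper_inf = np.maximum(upper_obs, mu + k * sigma)
    print(lower_inf, upper_inf)
    ```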

  4. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations

    NASA Technical Reports Server (NTRS)

    Navard, Sharon E.

    1989-01-01

    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.

  5. Measuring, using, and reducing experimental and computational uncertainty in reliability analysis of composite laminates

    NASA Astrophysics Data System (ADS)

    Smarslok, Benjamin P.

    The failure of the composite hydrogen tanks on the X-33 Reusable Launch Vehicle (RLV) from combined thermal and mechanical failure modes created a situation where the design weight was highly sensitive to uncertainties. Through previous research on sensitivity and reliability analysis of this problem, three areas of potential uncertainty reduction were recognized and became the focal points for this dissertation. The transverse elastic modulus and coefficient of thermal expansion were cited as being particularly sensitive input parameters with respect to weight. Measurement uncertainty analysis was performed on transverse modulus experiments, where the intermediate thickness measurements proved to be the greatest contributor to uncertainty. Data regarding correlations in the material properties of composite laminates are not always available; however, the significance of correlated properties for the probability of failure was detected. Therefore, a model was developed for correlations in composite properties based on micromechanics, specifically fiber volume fraction. The correlations from fiber volume fraction were combined with experimental data to give an estimate of the complete uncertainty, including material variability and measurement error. The probability of failure was compared for correlated material properties and independent random variables in an example pressure vessel problem. Including the correlations had a significant effect on the failure probability; however, whether a design is unsafe or inefficient can depend on the material system. Reliability-based design simulations often use the traditional, crude Monte Carlo method as a sampling procedure for predicting failure. The combination of designing for very small failure probabilities (~10^-8 to 10^-6) and using computationally expensive finite element models makes traditional Monte Carlo very costly. The separable Monte Carlo method, which is an extension of conditional expectation, takes advantage of statistical
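
    To make the contrast concrete, the sketch below compares crude Monte Carlo with the separable Monte Carlo idea for a limit state of the form capacity minus response with independent samples: crude Monte Carlo pairs samples one-to-one, whereas separable Monte Carlo reuses every response sample against every capacity sample, sharpening the estimate of a small failure probability for the same number of model evaluations. The distributions are illustrative placeholders, not the X-33 tank problem.

    ```python
    # Crude vs. separable Monte Carlo for P(capacity < response), independent samples.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 2000
    response = rng.normal(100.0, 10.0, size=n)   # e.g. computed stress
    capacity = rng.normal(150.0, 12.0, size=n)   # e.g. measured strength

    pf_crude = np.mean(capacity - response < 0.0)                        # one-to-one pairing
    pf_separable = np.mean(capacity[:, None] - response[None, :] < 0.0)  # all pairs reused
    print(pf_crude, pf_separable)
    ```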

  6. Dynamic rating curve assessment for hydrometric stations and computation of the associated uncertainties: Quality and station management indicators

    NASA Astrophysics Data System (ADS)

    Morlot, Thomas; Perret, Christian; Favre, Anne-Catherine; Jalbert, Jonathan

    2014-09-01

    A rating curve is used to indirectly estimate the discharge in rivers based on water level measurements. The discharge values obtained from a rating curve include uncertainties related to the direct stage-discharge measurements (gaugings) used to build the curves, the quality of fit of the curve to these measurements and the constant changes in the river bed morphology. Moreover, the uncertainty of discharges estimated from a rating curve increases with the “age” of the rating curve. The level of uncertainty at a given point in time is therefore particularly difficult to assess. A “dynamic” method has been developed to compute rating curves while calculating associated uncertainties, thus making it possible to regenerate streamflow data with uncertainty estimates. The method is based on historical gaugings at hydrometric stations. A rating curve is computed for each gauging and a model of the uncertainty is fitted for each of them. The model of uncertainty takes into account the uncertainties in the measurement of the water level, the quality of fit of the curve, the uncertainty of gaugings and the increase of the uncertainty of discharge estimates with the age of the rating curve computed with a variographic analysis (Jalbert et al., 2011). The presented dynamic method can answer important questions in the field of hydrometry such as “How many gaugings a year are required to produce streamflow data with an average uncertainty of X%?” and “When and in what range of water flow rates should these gaugings be carried out?”. The Rocherousse hydrometric station (France, Haute-Durance watershed, 946 [km2]) is used as an example throughout the paper. Other stations are used to illustrate certain points.
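
    For orientation only, a minimal rating-curve sketch is shown below: a power-law stage-discharge relation Q = a (h - h0)^b is fitted to gaugings and the residual scatter gives a first, static uncertainty estimate. The gaugings are synthetic, and the time-varying (variographic) aging term of the dynamic method described above is not reproduced here.

    ```python
    # Power-law rating curve fit with a simple residual-based uncertainty estimate.
    import numpy as np
    from scipy.optimize import curve_fit

    def rating_curve(h, a, h0, b):
        return a * np.clip(h - h0, 1e-6, None) ** b

    h_gaugings = np.array([0.8, 1.0, 1.3, 1.7, 2.2, 2.8, 3.5])       # stage [m], synthetic
    q_gaugings = np.array([2.1, 4.0, 8.5, 17.0, 32.0, 55.0, 92.0])   # discharge [m^3/s]

    params, _ = curve_fit(rating_curve, h_gaugings, q_gaugings,
                          p0=[10.0, 0.5, 1.8],
                          bounds=([0.0, 0.0, 0.5], [100.0, 0.7, 3.0]))
    residuals = q_gaugings - rating_curve(h_gaugings, *params)
    rel_sigma = np.std(residuals / q_gaugings, ddof=1)               # relative scatter
    print(params, f"~{100 * rel_sigma:.1f}% relative uncertainty")
    ```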

  7. NASTRAN variance analysis and plotting of HBDY elements. [analysis of uncertainties of the computer results as a function of uncertainties in the input data

    NASA Technical Reports Server (NTRS)

    Harder, R. L.

    1974-01-01

    The NASTRAN Thermal Analyzer has been extended to perform variance analysis and to plot the thermal boundary elements. The objective of the variance analysis addition is to assess the sensitivity of temperature variances resulting from uncertainties inherent in input parameters for heat conduction analysis. The plotting capability provides the ability to check the geometry (location, size and orientation) of the boundary elements of a model in relation to the conduction elements. Variance analysis is the study of uncertainties of the computed results as a function of uncertainties of the input data. To study this problem using NASTRAN, a solution is made for the expected values of all inputs, plus another solution for each uncertain variable. A variance analysis module subtracts the results to form derivatives and can then determine the expected deviations of the output quantities.

  8. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework, for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models, that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  9. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    SciTech Connect

    Hadjidoukas, P.E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework, for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models, that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  10. Multi Objective Optimization for Calibration and Efficient Uncertainty Analysis of Computationally Expensive Watershed Models

    NASA Astrophysics Data System (ADS)

    Akhtar, T.; Shoemaker, C. A.

    2011-12-01

    Assessing the sensitivity of calibration results to different calibration criteria can be done through multi-objective optimization that considers multiple calibration criteria. This analysis can be extended to uncertainty analysis by comparing the results of simulation of the model with parameter sets from many points along a Pareto Front. In this study we employ multi-objective optimization in order to understand which parameter values should be used for the flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville Reservoir in upstate New York. The comprehensive analysis procedure encapsulates identification of suitable objectives, analysis of trade-offs obtained through multi-objective optimization, and the impact of the trade-offs on uncertainty. Examples of multiple criteria can include a) quality of the fit in different seasons, b) quality of the fit for high flow events and for low flow events, c) quality of the fit for different constituents (e.g. water versus nutrients). Many distributed watershed models are computationally expensive and include a large number of parameters that are to be calibrated. Efficient optimization algorithms are hence needed to find good solutions to multi-criteria calibration problems in a feasible amount of time. We apply a new algorithm called Gap Optimized Multi-Objective Optimization using Response Surfaces (GOMORS) for efficient multi-criteria optimization of the Cannonsville SWAT watershed calibration problem. GOMORS is a stochastic optimization method which makes use of radial basis functions for approximation of the computationally expensive objectives. GOMORS performance is also compared against two other multi-objective algorithms, ParEGO and NSGA-II. ParEGO is a kriging-based efficient multi-objective optimization algorithm, whereas NSGA-II is a well-known multi-objective evolutionary optimization algorithm. GOMORS is more efficient than both ParEGO and NSGA-II in providing

  11. "I Guess My Question Is": What Is the Co-Occurrence of Uncertainty and Learning in Computer-Mediated Discourse?

    ERIC Educational Resources Information Center

    Jordan, Michelle E.; Cheng, An-Chih Janne; Schallert, Diane; Song, Kwangok; Lee, SoonAh; Park, Yangjoo

    2014-01-01

    The purpose of this study was to contribute to a better understanding of learning in computer-supported collaborative learning (CSCL) environments by investigating the co-occurrence of uncertainty expressions and expressions of learning in a graduate course in which students collaborated in classroom computer-mediated discussions. Results showed…

  12. Experimental evaluation of the uncertainty associated with the result of feature-of-size measurements through computed tomography

    NASA Astrophysics Data System (ADS)

    Fernandes, T. L.; Donatelli, G. D.; Baldo, C. R.

    2016-07-01

    Computed tomography for dimensional metrology was introduced into the quality control loop about a decade ago. Owing to the complex system of measurement-error causes, measurement uncertainty has generally not been reported consistently. The ISO 15530-3 experimental approach, which makes use of calibrated parts, has been tested for estimating the uncertainty of CT-based measurements of features of size of a test object made of POM. Particular attention is given to the design of the experiment and to the measurement uncertainty components. The most significant experimental findings are outlined and discussed in this paper.

  13. Differential effects of reward and punishment in decision making under uncertainty: a computational study

    PubMed Central

    Duffin, Elaine; Bland, Amy R.; Schaefer, Alexandre; de Kamps, Marc

    2014-01-01

    Computational models of learning have proved largely successful in characterizing potential mechanisms which allow humans to make decisions in uncertain and volatile contexts. We report here findings that extend existing knowledge and show that a modified reinforcement learning model, which has separate parameters according to whether the previous trial gave a reward or a punishment, can provide the best fit to human behavior in decision making under uncertainty. More specifically, we examined the fit of our modified reinforcement learning model to human behavioral data in a probabilistic two-alternative decision making task with rule reversals. Our results demonstrate that this model predicted human behavior better than a series of other models based on reinforcement learning or Bayesian reasoning. Unlike the Bayesian models, our modified reinforcement learning model does not include any representation of rule switches. When our task is considered purely as a machine learning task, to gain as many rewards as possible without trying to describe human behavior, the performance of modified reinforcement learning and Bayesian methods is similar. Others have used various computational models to describe human behavior in similar tasks, however, we are not aware of any who have compared Bayesian reasoning with reinforcement learning modified to differentiate rewards and punishments. PMID:24600342
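
    A minimal sketch of the modified reinforcement-learning idea is given below: a standard delta-rule value update, but with a different learning rate depending on whether the previous trial returned a reward or a punishment. The parameter values, softmax choice rule, and task loop are illustrative assumptions, not fits to the behavioural data of the paper.

    ```python
    # Delta-rule learner with separate learning rates for rewards and punishments.
    import numpy as np

    def softmax_choice(values, beta, rng):
        p = np.exp(beta * values)
        p /= p.sum()
        return rng.choice(len(values), p=p)

    alpha_reward, alpha_punish, beta = 0.30, 0.10, 3.0   # hypothetical parameters
    values = np.zeros(2)                                 # two-alternative task
    p_reward = np.array([0.8, 0.2])                      # true (hidden) reward probabilities
    rng = np.random.default_rng(4)

    for trial in range(200):
        choice = softmax_choice(values, beta, rng)
        outcome = 1.0 if rng.random() < p_reward[choice] else -1.0   # reward or punishment
        alpha = alpha_reward if outcome > 0 else alpha_punish        # separate learning rates
        values[choice] += alpha * (outcome - values[choice])

    print(values)
    ```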

  14. Overcoming computational uncertainties to reveal chemical sensitivity in single molecule conduction calculations

    NASA Astrophysics Data System (ADS)

    Solomon, Gemma C.; Reimers, Jeffrey R.; Hush, Noel S.

    2005-06-01

    In the calculation of conduction through single molecules, approximations about the geometry and electronic structure of the system are usually made in order to simplify the problem. Previously [G. C. Solomon, J. R. Reimers, and N. S. Hush, J. Chem. Phys. 121, 6615 (2004)], we have shown that, in calculations employing cluster models for the electrodes, proper treatment of the open-shell nature of the clusters is the most important computational feature required to make the results sensitive to variations in the structural and chemical features of the system. Here, we expand this and establish a general hierarchy of requirements involving treatment of geometrical approximations. These approximations are categorized into two classes: those associated with finite-dimensional methods for representing the semi-infinite electrodes, and those associated with the chemisorption topology. We show that ca. 100 unique atoms are required in order to properly characterize each electrode: using fewer atoms leads to nonsystematic variations in conductivity that can overwhelm the subtler changes. The choice of binding site is shown to be the next most important feature, while some effects that are difficult to control experimentally concerning the orientations at each binding site are actually shown to be insignificant. Verification of this result provides a general test for the precision of computational procedures for molecular conductivity. Predictions concerning the dependence of conduction on substituent and other effects on the central molecule are found to be meaningful only when they exceed the uncertainties of the effects associated with binding-site variation.

  15. Overcoming computational uncertainties to reveal chemical sensitivity in single molecule conduction calculations.

    PubMed

    Solomon, Gemma C; Reimers, Jeffrey R; Hush, Noel S

    2005-06-08

    In the calculation of conduction through single molecules, approximations about the geometry and electronic structure of the system are usually made in order to simplify the problem. Previously [G. C. Solomon, J. R. Reimers, and N. S. Hush, J. Chem. Phys. 121, 6615 (2004)], we have shown that, in calculations employing cluster models for the electrodes, proper treatment of the open-shell nature of the clusters is the most important computational feature required to make the results sensitive to variations in the structural and chemical features of the system. Here, we expand this and establish a general hierarchy of requirements involving treatment of geometrical approximations. These approximations are categorized into two classes: those associated with finite-dimensional methods for representing the semi-infinite electrodes, and those associated with the chemisorption topology. We show that ca. 100 unique atoms are required in order to properly characterize each electrode: using fewer atoms leads to nonsystematic variations in conductivity that can overwhelm the subtler changes. The choice of binding site is shown to be the next most important feature, while some effects that are difficult to control experimentally concerning the orientations at each binding site are actually shown to be insignificant. Verification of this result provides a general test for the precision of computational procedures for molecular conductivity. Predictions concerning the dependence of conduction on substituent and other effects on the central molecule are found to be meaningful only when they exceed the uncertainties of the effects associated with binding-site variation.

  16. Fast computation of statistical uncertainty for spatiotemporal distributions estimated directly from dynamic cone beam SPECT projections

    SciTech Connect

    Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.

    2001-04-09

    The estimation of time-activity curves and kinetic model parameters directly from projection data is potentially useful for clinical dynamic single photon emission computed tomography (SPECT) studies, particularly in those clinics that have only single-detector systems and thus are not able to perform rapid tomographic acquisitions. Because the radiopharmaceutical distribution changes while the SPECT gantry rotates, projections at different angles come from different tracer distributions. A dynamic image sequence reconstructed from the inconsistent projections acquired by a slowly rotating gantry can contain artifacts that lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying regions of interest on the images. If cone beam collimators are used and the focal point of the collimators always remains in a particular transaxial plane, additional artifacts can arise in other planes reconstructed using insufficient projection samples [1]. If the projection samples truncate the patient's body, this can result in additional image artifacts. To overcome these sources of bias in conventional image-based dynamic data analysis, we and others have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view [2-8]. In our previous work we developed a computationally efficient method for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions from dynamic SPECT projection data [5], which extended Formiconi's least squares algorithm for reconstructing temporally static distributions [9]. In addition, we studied the biases that result from modeling various orders of temporal continuity and using various time samplings [5]. In the present work, we address computational issues associated with evaluating the statistical uncertainty of
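
    As a generic illustration of how statistical uncertainty can be attached to parameters estimated by weighted least squares, the sketch below forms the normal-equations covariance (AᵀWA)⁻¹ for a small linear problem. The actual 4-D spatiotemporal estimator described above is far larger; the matrix A, basis size, and noise level here are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical linear model: projections y = A x + noise, where x are
    # coefficients of temporal basis functions (a stand-in for the spatiotemporal
    # distribution estimated directly from projections).
    n_proj, n_coef = 120, 4
    A = rng.normal(size=(n_proj, n_coef))
    x_true = np.array([2.0, -1.0, 0.5, 0.3])
    sigma = 0.2
    y = A @ x_true + rng.normal(scale=sigma, size=n_proj)

    # Weighted least-squares estimate and its covariance (statistical uncertainty).
    W = np.eye(n_proj) / sigma**2
    cov_x = np.linalg.inv(A.T @ W @ A)   # covariance of the estimated coefficients
    x_hat = cov_x @ (A.T @ W @ y)

    for i, (xi, var) in enumerate(zip(x_hat, np.diag(cov_x))):
        print(f"x[{i}] = {xi:.3f} +/- {np.sqrt(var):.3f}")
    ```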

  17. Dose computation in conformal radiation therapy including geometric uncertainties: Methods and clinical implications

    NASA Astrophysics Data System (ADS)

    Rosu, Mihaela

    The aim of any radiotherapy is to tailor the tumoricidal radiation dose to the target volume and to deliver as little radiation dose as possible to all other normal tissues. However, the motion and deformation induced in human tissue by breathing are a major issue, as standard practice usually uses only one computed tomography (CT) scan (and hence one instance of the patient's anatomy) for treatment planning. The interfraction movement that occurs due to physiological processes over time scales shorter than the delivery of one treatment fraction leads to differences between the planned and delivered dose distributions. Because of the influence of these differences on tumors and normal tissues, the tumor control probabilities and normal tissue complication probabilities are likely to be affected by organ motion. In this thesis we apply several methods to compute dose distributions that include the effects of the treatment geometric uncertainties by using the time-varying anatomical information as an alternative to the conventional Planning Target Volume (PTV) approach. The proposed methods depend on the model used to describe the patient's anatomy. The dose and fluence convolution approaches for rigid organ motion are discussed first, with application to liver tumors and the rigid component of lung tumor movements. For non-rigid behavior a dose reconstruction method that allows the accumulation of the dose to the deforming anatomy is introduced, and applied for lung tumor treatments. Furthermore, we apply the cumulative dose approach to investigate how much information regarding the deforming patient anatomy is needed at the time of treatment planning for tumors located in the thorax. The results are evaluated from a clinical perspective. All dose calculations are performed using a Monte Carlo based algorithm to ensure more realistic and more accurate handling of tissue heterogeneities, which are of particular importance in lung cancer treatment planning.
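
    A much-simplified sketch of the dose-convolution idea for rigid motion: the expected dose is the static dose profile convolved with the probability density of the displacement. The 1-D profile and Gaussian breathing amplitude below are illustrative assumptions, not clinical data.

    ```python
    import numpy as np

    # Static 1-D dose profile along one axis (illustrative values, in Gy).
    x = np.linspace(-50, 50, 201)                    # mm
    static_dose = np.where(np.abs(x) < 20, 2.0, 0.0) # flat field over the target

    # Probability density of rigid breathing displacement (assumed Gaussian here).
    sigma_motion = 5.0                               # mm
    pdf = np.exp(-0.5 * (x / sigma_motion) ** 2)
    pdf /= pdf.sum()

    # Expected (motion-blurred) dose: convolution of the static dose with the
    # displacement PDF, the rigid-motion analogue of the dose-convolution approach.
    blurred_dose = np.convolve(static_dose, pdf, mode="same")

    edge = np.searchsorted(x, 20.0)
    print("static dose at field edge :", static_dose[edge])
    print("blurred dose at field edge:", round(blurred_dose[edge], 2))
    ```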

  18. Expressing Uncertainty in Computer-Mediated Discourse: Language as a Marker of Intellectual Work

    ERIC Educational Resources Information Center

    Jordan, Michelle E.; Schallert, Diane L.; Park, Yangjoo; Lee, SoonAh; Chiang, Yueh-hui Vanessa; Cheng, An-Chih Janne; Song, Kwangok; Chu, Hsiang-Ning Rebecca; Kim, Taehee; Lee, Haekyung

    2012-01-01

    Learning and dialogue may naturally engender feelings and expressions of uncertainty for a variety of reasons and purposes. Yet, little research has examined how patterns of linguistic uncertainty are enacted and changed over time as students reciprocally influence one another and the dialogical system they are creating. This study describes the…

  19. Probability distributions of molecular observables computed from Markov models. II. Uncertainties in observables and their time-evolution

    NASA Astrophysics Data System (ADS)

    Chodera, John D.; Noé, Frank

    2010-09-01

    Discrete-state Markov (or master equation) models provide a useful simplified representation for characterizing the long-time statistical evolution of biomolecules in a manner that allows direct comparison with experiments as well as the elucidation of mechanistic pathways for an inherently stochastic process. A vital part of meaningful comparison with experiment is the characterization of the statistical uncertainty in the predicted experimental measurement, which may take the form of an equilibrium measurement of some spectroscopic signal, the time-evolution of this signal following a perturbation, or the observation of some statistic (such as the correlation function) of the equilibrium dynamics of a single molecule. Without meaningful error bars (which arise from both approximation and statistical error), there is no way to determine whether the deviations between model and experiment are statistically meaningful. Previous work has demonstrated that a Bayesian method that enforces microscopic reversibility can be used to characterize the statistical component of correlated uncertainties in state-to-state transition probabilities (and functions thereof) for a model inferred from molecular simulation data. Here, we extend this approach to include the uncertainty in observables that are functions of molecular conformation (such as surrogate spectroscopic signals) characterizing each state, permitting the full statistical uncertainty in computed spectroscopic experiments to be assessed. We test the approach in a simple model system to demonstrate that the computed uncertainties provide a useful indicator of statistical variation, and then apply it to the computation of the fluorescence autocorrelation function measured for a dye-labeled peptide previously studied by both experiment and simulation.
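
    A simplified sketch of propagating statistical uncertainty from observed transition counts to an equilibrium observable by sampling row-wise Dirichlet posteriors. Unlike the Bayesian approach described above, this toy version does not enforce microscopic reversibility, and the counts and per-state observable values are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Observed transition counts between 3 discrete states (hypothetical data).
    counts = np.array([[90, 8, 2],
                       [10, 70, 20],
                       [3, 15, 82]])

    # Observable value assigned to each state (e.g., a surrogate spectroscopic signal).
    obs = np.array([0.1, 0.5, 0.9])

    def stationary(T):
        """Stationary distribution of a row-stochastic matrix."""
        vals, vecs = np.linalg.eig(T.T)
        pi = np.real(vecs[:, np.argmax(np.real(vals))])
        return pi / pi.sum()

    # Sample transition matrices from independent Dirichlet posteriors per row
    # (a simplification: the cited method also enforces reversibility).
    samples = []
    for _ in range(2000):
        T = np.vstack([rng.dirichlet(row + 1) for row in counts])
        samples.append(stationary(T) @ obs)

    samples = np.array(samples)
    print(f"equilibrium observable = {samples.mean():.3f} "
          f"+/- {samples.std():.3f} (posterior std)")
    ```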

  20. A methodology for computing uncertainty bounds of multivariable systems based on sector stability theory concepts

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1992-01-01

    The application of a sector-based stability theory approach to the formulation of useful uncertainty descriptions for linear, time-invariant, multivariable systems is explored. A review of basic sector properties and the sector-based approach is presented first. The sector-based approach is then applied to several general forms of parameter uncertainty to investigate its advantages and limitations. The results indicate that the sector uncertainty bound can be used effectively to evaluate the impact of parameter uncertainties on the frequency response of the design model. Inherent conservatism is a potential limitation of the sector-based approach, especially for highly dependent uncertain parameters. In addition, the representation of the system dynamics can affect the amount of conservatism reflected in the sector bound. Careful application of the model can help to reduce this conservatism, however, and the solution approach has some degrees of freedom that may be further exploited to reduce the conservatism.

  1. Computer-Based Model Calibration and Uncertainty Analysis: Terms and Concepts

    DTIC Science & Technology

    2015-07-01

    uncertainty analyses throughout the lifecycle of planning, designing, and operating Civil Works flood risk management projects as described in...value 95% of the time. In the frequentist approach to PE, model parameters are regarded as having true values, and their estimate is based on the...in catchment models. 1. Evaluating parameter uncertainty. Water Resources Research 19(5):1151–1172. Lee, P. M. 2012. Bayesian statistics: An

  2. Computing continuous record of discharge with quantified uncertainty using index velocity observations: A probabilistic machine learning approach

    NASA Astrophysics Data System (ADS)

    Farahmand, Touraj; Hamilton, Stuart

    2016-04-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs). In general, the index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate and recommended when more than one specific discharge can be measured for a specific stage, such as backwater and unsteady flow conditions caused by, but not limited to, the following: stream confluences, streams flowing into lakes or reservoirs, tide-affected streams, regulated streamflows (dams or control structures), or streams affected by meteorological forcing, such as strong prevailing winds. In existing index velocity modeling techniques, two models (ratings) are required: an index velocity model and a stage-area model. The outputs from each of these models, mean channel velocity (Vm) and cross-sectional area (A), are then multiplied together to compute a discharge. Mean channel velocity (Vm) can generally be determined by a multivariate parametric regression model, such as linear regression in the simplest case. The main challenges in the existing index velocity modeling techniques are: 1) preprocessing and QA/QC of continuous index velocity data and synchronizing them with discharge measurements; 2) the nonlinear relationship between mean velocity and index velocity, which is not uncommon at monitoring locations; 3) model exploration and analysis in order to find the optimal regression model predictor(s) and model type (linear vs. nonlinear and, if nonlinear, the number of parameters); 4) model changes caused by dynamical changes in the environment (geomorphic, biological) over time; 5) deployment of the final model into the Data Management Systems (DMS) for real-time discharge calculation; and 6) objective estimation of uncertainty caused by: field measurement errors; structural uncertainty; parameter uncertainty; and continuous sensor data
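
    A minimal sketch of the two-rating structure and a Monte Carlo treatment of its uncertainty: a linear index-velocity rating gives mean channel velocity, a stage-area rating gives cross-sectional area, and discharge is their product. The rating coefficients, channel geometry, and error magnitudes below are illustrative assumptions, not a real site rating.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Illustrative index-velocity rating: Vm = b0 + b1 * Vi (linear case),
    # and a stage-area rating for a trapezoidal channel (hypothetical).
    b0, b1 = 0.05, 0.92            # regression coefficients (fitted elsewhere)
    se_rating = 0.04               # standard error of the velocity rating (m/s)

    def area_from_stage(stage_m, bottom_width=20.0, side_slope=2.0):
        return stage_m * (bottom_width + side_slope * stage_m)

    # One time step of sensor data.
    index_velocity = 0.60          # m/s from the ADVM
    stage = 2.10                   # m, with assumed gauge uncertainty
    sigma_stage = 0.01             # m

    # Monte Carlo propagation of rating and stage errors to discharge Q = Vm * A.
    n = 10000
    vm = b0 + b1 * index_velocity + rng.normal(0, se_rating, n)
    area = area_from_stage(stage + rng.normal(0, sigma_stage, n))
    q = vm * area

    print(f"discharge = {q.mean():.1f} m^3/s, 95% interval "
          f"[{np.percentile(q, 2.5):.1f}, {np.percentile(q, 97.5):.1f}]")
    ```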

  3. Computation of the intervals of uncertainties about the parameters found for identification

    NASA Technical Reports Server (NTRS)

    Mereau, P.; Raymond, J.

    1982-01-01

    A modeling method to calculate the intervals of uncertainty for parameters found by identification is described. The region of confidence and the general approach to the calculation of these intervals are discussed. The general subprograms for determination of dimensions are described. They provide the organizational charts for the subprograms, the tests carried out and the listings of the different subprograms.

  4. PABS: A Computer Program to Normalize Emission Probabilities and Calculate Realistic Uncertainties

    SciTech Connect

    Caron, D. S.; Browne, E.; Norman, E. B.

    2009-08-21

    The program PABS normalizes relative particle emission probabilities to an absolute scale and calculates the relevant uncertainties on this scale. The program is written in Java using the JDK 1.6 library. For additional information about system requirements, the code itself, and compiling from source, see the README file distributed with this program. The mathematical procedures used are given below.
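
    The PABS source is not reproduced here; the sketch below only illustrates the underlying arithmetic of normalizing relative emission intensities to an absolute scale and propagating the fractional uncertainties in quadrature, with invented input values.

    ```python
    import numpy as np

    # Relative particle emission intensities and their absolute uncertainties
    # (hypothetical values, not from an evaluated data file).
    relative = np.array([100.0, 45.0, 12.0])
    rel_unc = np.array([2.0, 1.5, 0.6])

    # Absolute emission probability known for the reference transition (index 0).
    p_ref, p_ref_unc = 0.85, 0.02

    # Normalize the non-reference transitions to the absolute scale.
    scale = p_ref / relative[0]
    absolute = relative[1:] * scale

    # First-order propagation: fractional uncertainties combined in quadrature
    # (treating the relative intensities, the reference intensity, and the
    # reference absolute probability as independent).
    frac = np.sqrt((rel_unc[1:] / relative[1:]) ** 2
                   + (rel_unc[0] / relative[0]) ** 2
                   + (p_ref_unc / p_ref) ** 2)
    abs_unc = absolute * frac

    for p, u in zip(absolute, abs_unc):
        print(f"{p:.4f} +/- {u:.4f}")
    ```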

  5. Uncertainty Evaluation of Computational Model Used to Support the Integrated Powerhead Demonstration Project

    NASA Technical Reports Server (NTRS)

    Steele, W. G.; Molder, K. J.; Hudson, S. T.; Vadasy, K. V.; Rieder, P. T.; Giel, T.

    2005-01-01

    NASA and the U.S. Air Force are working on a joint project to develop a new hydrogen-fueled, full-flow, staged combustion rocket engine. The initial testing and modeling work for the Integrated Powerhead Demonstrator (IPD) project is being performed by NASA Marshall and Stennis Space Centers. A key factor in the testing of this engine is the ability to predict and measure the transient fluid flow during engine start and shutdown phases of operation. A model built by NASA Marshall in the ROCket Engine Transient Simulation (ROCETS) program is used to predict transient engine fluid flows. The model is initially calibrated to data from previous tests on the Stennis E1 test stand. The model is then used to predict the next run. Data from this run can then be used to recalibrate the model, providing a tool to guide the test program in incremental steps to reduce the risk to the prototype engine. In this paper, we define this type of model as a calibrated model. This paper proposes a method to estimate the uncertainty of a model calibrated to a set of experimental test data. The method is similar to that used in the calibration of experimental instrumentation. For the IPD example used in this paper, the model uncertainty is determined for both LOX and LH flow rates using previous data. The successful use of this model to predict another similar test run within the uncertainty bounds is then demonstrated. The paper summarizes the uncertainty methodology when a model is continually recalibrated with new test data. The methodology is general and can be applied to other calibrated models.
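
    A hedged sketch of one way to quote the uncertainty of such a calibrated model, by analogy with instrument calibration: combine the scatter of the calibration residuals with the reference measurement uncertainty in root-sum-square fashion. The flow-rate values and uncertainty magnitudes are invented, and this is not necessarily the exact formulation used in the paper.

    ```python
    import numpy as np

    # Calibration data: measured vs. model-predicted LOX flow rate at matching
    # times (hypothetical values, kg/s).
    measured = np.array([101.0, 148.0, 196.0, 252.0, 300.0])
    predicted = np.array([103.5, 146.0, 199.0, 249.5, 303.0])
    u_measurement = 2.0     # assumed standard uncertainty of the test-stand meters

    # Scatter of the calibrated model about the data.
    residuals = predicted - measured
    s_cal = residuals.std(ddof=1)

    # Combined standard uncertainty of a model prediction (RSS), expanded with k=2,
    # mirroring the way instrument calibration uncertainties are combined.
    u_model = np.sqrt(s_cal**2 + u_measurement**2)
    print(f"calibration scatter = {s_cal:.2f} kg/s")
    print(f"expanded model uncertainty (k=2) = {2 * u_model:.2f} kg/s")
    ```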

  6. Real-time, mixed-mode computing architecture for waveform-resolved lidar systems with total propagated uncertainty

    NASA Astrophysics Data System (ADS)

    Ortman, Robert L.; Carr, Domenic A.; James, Ryan; Long, Daniel; O'Shaughnessy, Matthew R.; Valenta, Christopher R.; Tuell, Grady H.

    2016-05-01

    We have developed a prototype real-time computer for a bathymetric lidar capable of producing point clouds attributed with total propagated uncertainty (TPU). This real-time computer employs a "mixed-mode" architecture comprised of an FPGA, CPU, and GPU. Noise reduction and ranging are performed in the digitizer's user-programmable FPGA, and coordinates and TPU are calculated on the GPU. A Keysight M9703A digitizer with user-programmable Xilinx Virtex 6 FPGAs digitizes as many as eight channels of lidar data, performs ranging, and delivers the data to the CPU via PCIe. The floating-point-intensive coordinate and TPU calculations are performed on an NVIDIA Tesla K20 GPU. Raw data and computed products are written to an SSD RAID, and an attributed point cloud is displayed to the user. This prototype computer has been tested using 7m-deep waveforms measured at a water tank on the Georgia Tech campus, and with simulated waveforms to a depth of 20m. Preliminary results show the system can compute, store, and display about 20 million points per second.

  7. Mathematical and Computational Tools for Predictive Simulation of Complex Coupled Systems under Uncertainty

    SciTech Connect

    Ghanem, Roger

    2013-03-25

    Methods and algorithms are developed to enable the accurate analysis of problems that exhibit interacting physical processes with uncertainties. These uncertainties can pertain either to each of the physical processes or to the manner in which they depend on each other. These problems are cast within a polynomial chaos framework and their solution then involves either solving a large system of algebraic equations or a high-dimensional numerical quadrature. In both cases, the curse of dimensionality is manifested. Procedures are developed for the efficient evaluation of the resulting linear equations that take advantage of the block-sparse structure of these equations, resulting in a block-recursive Schur complement construction. In addition, embedded quadratures are constructed that permit the evaluation of very high-dimensional integrals using low-dimensional quadratures adapted to particular quantities of interest. The low-dimensional integration is carried out in a transformed measure space in which the quantity of interest is low-dimensional. Finally, a procedure is also developed to discover a low-dimensional manifold, embedded in the initial high-dimensional one, in which scalar quantities of interest exist. This approach permits the functional expression of the reduced space in terms of the original space, thus permitting cross-scale sensitivity analysis.
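
    A minimal one-dimensional polynomial chaos example, assuming a single standard-normal input: project a model response onto Hermite polynomials with Gauss-Hermite quadrature and read off the mean and variance. The block-Schur and embedded-quadrature machinery described above targets much higher-dimensional problems; the model function here is invented.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss, hermeval
    from math import factorial, sqrt, pi

    # Model with a single standard-normal uncertain input (illustrative).
    def model(xi):
        return np.exp(0.3 * xi) + 0.5 * xi**2

    # Gauss-Hermite (probabilists') quadrature for the N(0,1) measure.
    nodes, weights = hermegauss(12)
    weights = weights / sqrt(2 * pi)        # normalize so the weights sum to 1

    # Project the response onto Hermite polynomials: c_k = E[f(X) He_k(X)] / k!
    order = 4
    coeffs = []
    for k in range(order + 1):
        basis_k = hermeval(nodes, [0] * k + [1])      # He_k evaluated at the nodes
        coeffs.append(np.sum(weights * model(nodes) * basis_k) / factorial(k))

    mean = coeffs[0]
    variance = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
    print(f"PCE mean = {mean:.4f}, PCE std = {np.sqrt(variance):.4f}")
    ```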

  8. Development of a Computational Framework for Stochastic Co-optimization of Water and Energy Resource Allocations under Climatic Uncertainty

    NASA Astrophysics Data System (ADS)

    Xuan, Y.; Mahinthakumar, K.; Arumugam, S.; DeCarolis, J.

    2015-12-01

    Owing to the lack of a consistent approach to assimilate probabilistic forecasts for water and energy systems, utilization of climate forecasts for conjunctive management of these two systems is very limited. Prognostic management of these two systems presents a stochastic co-optimization problem that seeks to determine reservoir releases and power allocation strategies while minimizing the expected operational costs subject to probabilistic climate forecast constraints. To address these issues, we propose a high performance computing (HPC) enabled computational framework for stochastic co-optimization of water and energy resource allocations under climate uncertainty. The computational framework embodies a new paradigm shift in which attributes of climate (e.g., precipitation, temperature) and its forecasted probability distribution are employed conjointly to inform seasonal water availability and electricity demand. The HPC enabled cyberinfrastructure framework is developed to perform detailed stochastic analyses, and to better quantify and reduce the uncertainties associated with water and power systems management by utilizing improved hydro-climatic forecasts. In this presentation, our stochastic multi-objective solver, extended from Optimus (Optimization Methods for Universal Simulators), is introduced. The solver uses a parallel cooperative multi-swarm method for the efficient solution of large-scale simulation-optimization problems on parallel supercomputers. The cyberinfrastructure harnesses HPC resources to perform intensive computations using ensemble forecast models of streamflow and power demand. The stochastic multi-objective particle swarm optimizer we developed is used to co-optimize water and power system models under constraints over a large number of ensembles. The framework sheds light on the application of climate forecasts and a cyber-innovation framework to improve management and promote the sustainability of water and energy systems.

  9. Computing quality scores and uncertainty for approximate pattern matching in geospatial semantic graphs

    DOE PAGES

    Stracuzzi, David John; Brost, Randolph C.; Phillips, Cynthia A.; ...

    2015-09-26

    Geospatial semantic graphs provide a robust foundation for representing and analyzing remote sensor data. In particular, they support a variety of pattern search operations that capture the spatial and temporal relationships among the objects and events in the data. However, in the presence of large data corpora, even a carefully constructed search query may return a large number of unintended matches. This work considers the problem of calculating a quality score for each match to the query, given that the underlying data are uncertain. As a result, we present a preliminary evaluation of three methods for determining both match quality scores and associated uncertainty bounds, illustrated in the context of an example based on overhead imagery data.

  10. Computing quality scores and uncertainty for approximate pattern matching in geospatial semantic graphs

    SciTech Connect

    Stracuzzi, David John; Brost, Randolph C.; Phillips, Cynthia A.; Robinson, David G.; Wilson, Alyson G.; Woodbridge, Diane M. -K.

    2015-09-26

    Geospatial semantic graphs provide a robust foundation for representing and analyzing remote sensor data. In particular, they support a variety of pattern search operations that capture the spatial and temporal relationships among the objects and events in the data. However, in the presence of large data corpora, even a carefully constructed search query may return a large number of unintended matches. This work considers the problem of calculating a quality score for each match to the query, given that the underlying data are uncertain. As a result, we present a preliminary evaluation of three methods for determining both match quality scores and associated uncertainty bounds, illustrated in the context of an example based on overhead imagery data.

  11. Can megavoltage computed tomography reduce proton range uncertainties in treatment plans for patients with large metal implants?

    NASA Astrophysics Data System (ADS)

    Newhauser, Wayne D.; Giebeler, Annelise; Langen, Katja M.; Mirkovic, Dragan; Mohan, Radhe

    2008-05-01

    Treatment planning calculations for proton therapy require an accurate knowledge of radiological path length, or range, to the distal edge of the target volume. In most cases, the range may be calculated with sufficient accuracy using kilovoltage (kV) computed tomography (CT) images. However, metal implants such as hip prostheses can cause severe streak artifacts that lead to large uncertainties in proton range. The purposes of this study were to quantify streak-related range errors and to determine if they could be avoided by using artifact-free megavoltage (MV) CT images in treatment planning. Proton treatment plans were prepared for a rigid, heterogeneous phantom and for a prostate cancer patient with a metal hip prosthesis using corrected and uncorrected kVCT images alone, uncorrected MVCT images and a combination of registered MVCT and kVCT images (the hybrid approach). Streak-induced range errors of 5-12 mm were present in the uncorrected kVCT-based patient plan. Correcting the streaks by manually assigning estimated true Hounsfield units improved the range accuracy. In a rigid heterogeneous phantom, the implant-related range uncertainty was estimated at <3 mm for both the corrected kVCT-based plan and the uncorrected MVCT-based plan. The hybrid planning approach yielded the best overall result. In this approach, the kVCT images provided good delineation of soft tissues due to high-contrast resolution, and the streak-free MVCT images provided smaller range uncertainties because they did not require artifact correction.

  12. Personalized mitral valve closure computation and uncertainty analysis from 3D echocardiography.

    PubMed

    Grbic, Sasa; Easley, Thomas F; Mansi, Tommaso; Bloodworth, Charles H; Pierce, Eric L; Voigt, Ingmar; Neumann, Dominik; Krebs, Julian; Yuh, David D; Jensen, Morten O; Comaniciu, Dorin; Yoganathan, Ajit P

    2017-01-01

    Intervention planning is essential for successful Mitral Valve (MV) repair procedures. Finite-element models (FEM) of the MV could be used to achieve this goal, but the translation to the clinical domain is challenging. Many input parameters for the FEM models, such as tissue properties, are not known. In addition, only simplified MV geometry models can be extracted from non-invasive modalities such as echocardiography imaging, lacking major anatomical details such as the complex chordae topology. A traditional approach for FEM computation is to use a simplified model (also known as parachute model) of the chordae topology, which connects the papillary muscle tips to the free-edges and select basal points. Building on the existing parachute model a new and comprehensive MV model was developed that utilizes a novel chordae representation capable of approximating regional connectivity. In addition, a fully automated personalization approach was developed for the chordae rest length, removing the need for tedious manual parameter selection. Based on the MV model extracted during mid-diastole (open MV) the MV geometric configuration at peak systole (closed MV) was computed according to the FEM model. In this work the focus was placed on validating MV closure computation. The method is evaluated on ten in vitro ovine cases, where in addition to echocardiography imaging, high-resolution μCT imaging is available for accurate validation.

  13. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a

  14. Numerical study of premixed HCCI engine combustion and its sensitivity to computational mesh and model uncertainties

    NASA Astrophysics Data System (ADS)

    Kong, Song-Charng; Reitz, Rolf D.

    2003-06-01

    This study used a numerical model to investigate the combustion process in a premixed iso-octane homogeneous charge compression ignition (HCCI) engine. The engine was a supercharged Cummins C engine operated under HCCI conditions. The CHEMKIN code was implemented into an updated KIVA-3V code so that the combustion could be modelled using detailed chemistry in the context of engine CFD simulations. The model was able to accurately simulate the ignition timing and combustion phasing for various engine conditions. The unburned hydrocarbon emissions were also well predicted while the carbon monoxide emissions were underpredicted. Model results showed that the majority of unburned hydrocarbon is located in the piston-ring crevice region and the carbon monoxide resides in the vicinity of the cylinder walls. A sensitivity study of the computational grid resolution indicated that the combustion predictions were relatively insensitive to the grid density. However, the piston-ring crevice region needed to be simulated with high resolution to obtain accurate emissions predictions. The model results also indicated that HCCI combustion and emissions are very sensitive to the initial mixture temperature. The computations also show that the carbon monoxide emissions prediction can be significantly improved by modifying a key oxidation reaction rate constant.

  15. Reducing annotation cost and uncertainty in computer-aided diagnosis through selective iterative classification

    NASA Astrophysics Data System (ADS)

    Riely, Amelia; Sablan, Kyle; Xiaotao, Thomas; Furst, Jacob; Raicu, Daniela

    2015-03-01

    Medical imaging technology has always provided radiologists with the opportunity to view and keep records of patient anatomy. With the development of machine learning and intelligent computing, these images can be used to create Computer-Aided Diagnosis (CAD) systems, which can assist radiologists in analyzing image data in various ways to provide better health care to patients. This paper looks at increasing accuracy and reducing cost in creating CAD systems, specifically in predicting the malignancy of lung nodules in the Lung Image Database Consortium (LIDC). Much of the cost in creating an accurate CAD system stems from the need for multiple radiologist diagnoses or annotations of each image, since there is rarely a ground truth diagnosis and even different radiologists' diagnoses of the same nodule often disagree. To resolve this issue, this paper outlines a method of selective iterative classification that predicts lung nodule malignancy by using multiple radiologist diagnoses only for cases that can benefit from them. Our method achieved 81% accuracy at only 46% of the cost of the method that indiscriminately used all annotations, which achieved a lower accuracy of 70%.
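
    A simplified sketch of the selective idea (not the authors' exact pipeline): train on one annotation per case, then request the remaining radiologist reads only for cases whose predicted probability falls near the decision boundary. The synthetic features, labels, and the 0.15 uncertainty band are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(4)

    # Synthetic stand-in for nodule features and per-case radiologist labels
    # (4 annotations per nodule); real LIDC data would replace this block.
    n_cases = 300
    X = rng.normal(size=(n_cases, 6))
    true_prob = 1 / (1 + np.exp(-X[:, 0] - 0.5 * X[:, 1]))
    annotations = rng.binomial(1, true_prob[:, None], size=(n_cases, 4))

    # Step 1: train using only the first (cheap) annotation per case.
    first_label = annotations[:, 0]
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, first_label)
    proba = clf.predict_proba(X)[:, 1]

    # Step 2: only uncertain cases (probability near 0.5) get the remaining,
    # costly annotations; confident cases keep the single-label prediction.
    uncertain = np.abs(proba - 0.5) < 0.15
    consensus = (annotations.mean(axis=1) >= 0.5).astype(int)
    final = np.where(uncertain, consensus, (proba >= 0.5).astype(int))

    annotation_cost = n_cases + 3 * uncertain.sum()      # labels actually used
    print(f"cases needing extra reads: {uncertain.sum()} / {n_cases}")
    print(f"annotation cost: {annotation_cost} vs {4 * n_cases} if all reads were used")
    ```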

  16. Uncertainty in aspiration efficiency estimates from torso simplifications in computational fluid dynamics simulations.

    PubMed

    Anderson, Kimberly R; Anthony, T Renée

    2013-03-01

    Computational fluid dynamics (CFD) has been used to report particle inhalability in low velocity freestreams, where realistic faces but simplified, truncated, and cylindrical human torsos were used. When compared to wind tunnel velocity studies, the truncated models were found to underestimate the air's upward velocity near the humans, raising questions about aspiration estimation. This work compares aspiration efficiencies for particles ranging from 7 to 116 µm using three torso geometries: (i) a simplified truncated cylinder, (ii) a non-truncated cylinder, and (iii) an anthropometrically realistic humanoid body. The primary aim of this work is to (i) quantify the errors introduced by using a simplified geometry and (ii) determine the required level of detail to adequately represent a human form in CFD studies of aspiration efficiency. Fluid simulations used the standard k-epsilon turbulence models, with freestream velocities at 0.1, 0.2, and 0.4 m s(-1) and breathing velocities at 1.81 and 12.11 m s(-1) to represent at-rest and heavy breathing rates, respectively. Laminar particle trajectory simulations were used to determine the upstream area, also known as the critical area, where particles would be inhaled. These areas were used to compute aspiration efficiencies for facing the wind. Significant differences were found in both vertical velocity estimates and the location of the critical area between the three models. However, differences in aspiration efficiencies between the three forms were <8.8% over all particle sizes, indicating that there is little difference in aspiration efficiency between torso models.

  17. Uncertainty in Aspiration Efficiency Estimates from Torso Simplifications in Computational Fluid Dynamics Simulations

    PubMed Central

    Anthony, T. Renée

    2013-01-01

    Computational fluid dynamics (CFD) has been used to report particle inhalability in low velocity freestreams, where realistic faces but simplified, truncated, and cylindrical human torsos were used. When compared to wind tunnel velocity studies, the truncated models were found to underestimate the air’s upward velocity near the humans, raising questions about aspiration estimation. This work compares aspiration efficiencies for particles ranging from 7 to 116 µm using three torso geometries: (i) a simplified truncated cylinder, (ii) a non-truncated cylinder, and (iii) an anthropometrically realistic humanoid body. The primary aim of this work is to (i) quantify the errors introduced by using a simplified geometry and (ii) determine the required level of detail to adequately represent a human form in CFD studies of aspiration efficiency. Fluid simulations used the standard k-epsilon turbulence models, with freestream velocities at 0.1, 0.2, and 0.4 m s−1 and breathing velocities at 1.81 and 12.11 m s−1 to represent at-rest and heavy breathing rates, respectively. Laminar particle trajectory simulations were used to determine the upstream area, also known as the critical area, where particles would be inhaled. These areas were used to compute aspiration efficiencies for facing the wind. Significant differences were found in both vertical velocity estimates and the location of the critical area between the three models. However, differences in aspiration efficiencies between the three forms were <8.8% over all particle sizes, indicating that there is little difference in aspiration efficiency between torso models. PMID:23006817

  18. ESTIMATION OF INTERNAL EXPOSURE TO URANIUM WITH UNCERTAINTY FROM URINALYSIS DATA USING THE InDEP COMPUTER CODE

    PubMed Central

    Anderson, Jeri L.; Apostoaei, A. Iulian; Thomas, Brian A.

    2015-01-01

    The National Institute for Occupational Safety and Health (NIOSH) is currently studying mortality in a cohort of 6409 workers at a former uranium processing facility. As part of this study, over 220 000 urine samples were used to reconstruct organ doses due to internal exposure to uranium. Most of the available computational programs designed for analysis of bioassay data handle a single case at a time, and thus require a significant outlay of time and resources for the exposure assessment of a large cohort. NIOSH is currently supporting the development of a computer program, InDEP (Internal Dose Evaluation Program), to facilitate internal radiation exposure assessment as part of epidemiological studies of both uranium- and plutonium-exposed cohorts. A novel feature of InDEP is its batch processing capability which allows for the evaluation of multiple study subjects simultaneously. InDEP analyses bioassay data and derives intakes and organ doses with uncertainty estimates using least-squares regression techniques or using the Bayes’ Theorem as applied to internal dosimetry (Bayesian method). This paper describes the application of the current version of InDEP to formulate assumptions about the characteristics of exposure at the study facility that were used in a detailed retrospective intake and organ dose assessment of the cohort. PMID:22683620

  19. The VIMOS VLT deep survey. Computing the two point correlation statistics and associated uncertainties

    NASA Astrophysics Data System (ADS)

    Pollo, A.; Meneux, B.; Guzzo, L.; Le Fèvre, O.; Blaizot, J.; Cappi, A.; Iovino, A.; Marinoni, C.; McCracken, H. J.; Bottini, D.; Garilli, B.; Le Brun, V.; Maccagni, D.; Picat, J. P.; Scaramella, R.; Scodeggio, M.; Tresse, L.; Vettolani, G.; Zanichelli, A.; Adami, C.; Arnaboldi, M.; Arnouts, S.; Bardelli, S.; Bolzonella, M.; Charlot, S.; Ciliegi, P.; Contini, T.; Foucaud, S.; Franzetti, P.; Gavignaud, I.; Ilbert, O.; Marano, B.; Mathez, G.; Mazure, A.; Merighi, R.; Paltani, S.; Pellò, R.; Pozzetti, L.; Radovich, M.; Zamorani, G.; Zucca, E.; Bondi, M.; Bongiorno, A.; Busarello, G.; Gregorini, L.; Lamareille, F.; Mellier, Y.; Merluzzi, P.; Ripepi, V.; Rizzo, D.

    2005-09-01

    We present a detailed description of the methods used to compute the three-dimensional two-point galaxy correlation function in the VIMOS-VLT deep survey (VVDS). We investigate how instrumental selection effects and observational biases affect the measurements and identify the methods to correct for them. We quantify the accuracy of our corrections using an ensemble of 50 mock galaxy surveys generated with the GalICS semi-analytic model of galaxy formation which incorporate the selection biases and tiling strategy of the real data. We demonstrate that we are able to recover the real-space two-point correlation function ξ(s) and the projected correlation function w_p(r_p) to an accuracy better than 10% on scales larger than 1 h-1 Mpc with the sampling strategy used for the first epoch VVDS data. The large number of simulated surveys allows us to provide a reliable estimate of the cosmic variance on the measurements of the correlation length r0 at z ˜ 1, of about 15-20% for the first epoch VVDS observation while any residual systematic effect in the measurements of r0 is always below 5%. The error estimation and measurement techniques outlined in this paper are being used in several parallel studies which investigate in detail the clustering properties of galaxies in the VVDS.

  20. Uncertainties in radiative transfer computations: consequences on the ocean color products

    NASA Astrophysics Data System (ADS)

    Dilligeard, Eric; Zagolski, Francis; Fischer, Juergen; Santer, Richard P.

    2003-05-01

    Operational MERIS (MEdium Resolution Imaging Spectrometer) level-2 processing uses auxiliary data generated by two radiative transfer tools. These two codes simulate upwelling radiances within a coupled 'Atmosphere-Ocean' system, using different approaches based on the matrix-operator method (MOMO) and the successive orders (SO) technique. Inter-validation of these two radiative transfer codes was performed in order to implement them in the MERIS level-2 processing. MOMO and SO simulations were then conducted on a set of representative test cases. The results showed good agreement for all test cases; the scattering processes are retrieved within a few tenths of a percent. Nevertheless, some substantial discrepancies occur if polarization is not taken into account, mainly in the Rayleigh scattering computations. A preliminary study indicates that the impact of this code inaccuracy on the retrieval of water-leaving radiances (a level-2 MERIS product) is large, up to 50% in relative difference. Applying the OC2 algorithm, the effect on the retrieved chlorophyll concentration is less than 10%.

  1. Computing with Epistemic Uncertainty

    DTIC Science & Technology

    2015-01-01

    Converting the crisp TPDs to epistemic intervals; converting the fuzzy TPDs to epistemic intervals. ...distributions, crisp triangular probability distributions (TPD), and fuzzy TPD where the three vertices are given as crisp epistemic intervals. The...The traditional type of confidence interval referred to above is associated with a given

  2. The Personality Trait of Intolerance to Uncertainty Affects Behavior in a Novel Computer-Based Conditioned Place Preference Task

    PubMed Central

    Radell, Milen L.; Myers, Catherine E.; Beck, Kevin D.; Moustafa, Ahmed A.; Allen, Michael Todd

    2016-01-01

    Recent work has found that personality factors that confer vulnerability to addiction can also affect learning and economic decision making. One personality trait which has been implicated in vulnerability to addiction is intolerance to uncertainty (IU), i.e., a preference for familiar over unknown (possibly better) options. In animals, the motivation to obtain drugs is often assessed through conditioned place preference (CPP), which compares preference for contexts where drug reward was previously received. It is an open question whether participants with high IU also show heightened preference for previously rewarded contexts. To address this question, we developed a novel computer-based CPP task for humans in which participants guide an avatar through a paradigm in which one room contains frequent reward (i.e., rich) and one contains less frequent reward (i.e., poor). Following exposure to both contexts, subjects are assessed for preference to enter the previously rich and previously poor room. Individuals with low IU showed little bias to enter the previously rich room first, and instead entered both rooms at about the same rate which may indicate a foraging behavior. By contrast, those with high IU showed a strong bias to enter the previously rich room first. This suggests an increased tendency to chase reward in the intolerant group, consistent with previously observed behavior in opioid-addicted individuals. Thus, the personality factor of high IU may produce a pre-existing cognitive bias that provides a mechanism to promote decision-making processes that increase vulnerability to addiction. PMID:27555829

  3. Direct Aerosol Forcing Uncertainty

    DOE Data Explorer

    Mccomiskey, Allison

    2008-01-15

    Understanding sources of uncertainty in aerosol direct radiative forcing (DRF), the difference in a given radiative flux component with and without aerosol, is essential to quantifying changes in Earth's radiation budget. We examine the uncertainty in DRF due to measurement uncertainty in the quantities on which it depends: aerosol optical depth, single scattering albedo, asymmetry parameter, solar geometry, and surface albedo. Direct radiative forcing at the top of the atmosphere and at the surface as well as sensitivities, the changes in DRF in response to unit changes in individual aerosol or surface properties, are calculated at three locations representing distinct aerosol types and radiative environments. The uncertainty in DRF associated with a given property is computed as the product of the sensitivity and typical measurement uncertainty in the respective aerosol or surface property. Sensitivity and uncertainty values permit estimation of total uncertainty in calculated DRF and identification of properties that most limit accuracy in estimating forcing. Total uncertainties in modeled local diurnally averaged forcing range from 0.2 to 1.3 W m-2 (42 to 20%) depending on location (from tropical to polar sites), solar zenith angle, surface reflectance, aerosol type, and aerosol optical depth. The largest contributor to total uncertainty in DRF is usually single scattering albedo; however decreasing measurement uncertainties for any property would increase accuracy in DRF. Comparison of two radiative transfer models suggests the contribution of modeling error is small compared to the total uncertainty although comparable to uncertainty arising from some individual properties.
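
    The sketch below illustrates the stated recipe: the uncertainty contribution of each property is its sensitivity multiplied by its typical measurement uncertainty, and the total is their quadrature sum. The sensitivity and uncertainty numbers are placeholders, not the values reported in the dataset.

    ```python
    import numpy as np

    # Sensitivities of diurnally averaged direct radiative forcing to each
    # property (W m^-2 per unit of the property) and the typical measurement
    # uncertainty of that property -- illustrative numbers only.
    properties = {
        #                           sensitivity, measurement uncertainty
        "aerosol optical depth":    (-30.0, 0.01),
        "single scattering albedo": (25.0, 0.03),
        "asymmetry parameter":      (10.0, 0.02),
        "surface albedo":           (15.0, 0.01),
    }

    # Per-property contribution = sensitivity * measurement uncertainty;
    # the total uncertainty combines the contributions in quadrature.
    contributions = {name: s * u for name, (s, u) in properties.items()}
    total = np.sqrt(sum(c**2 for c in contributions.values()))

    for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"{name:25s} {abs(c):.2f} W m^-2")
    print(f"{'total (quadrature)':25s} {total:.2f} W m^-2")
    ```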

  4. Uncertainty and Cognitive Control

    PubMed Central

    Mushtaq, Faisal; Bland, Amy R.; Schaefer, Alexandre

    2011-01-01

    A growing trend of neuroimaging, behavioral, and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty are still a matter of debate. In this article, we reviewed evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) There is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) There is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) The perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the “need for control”; (4) Potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders. PMID:22007181

  5. Application of an Adaptive Polynomial Chaos Expansion on Computationally Expensive Three-Dimensional Cardiovascular Models for Uncertainty Quantification and Sensitivity Analysis.

    PubMed

    Quicken, Sjeng; Donders, Wouter P; van Disseldorp, Emiel M J; Gashi, Kujtim; Mees, Barend M E; van de Vosse, Frans N; Lopata, Richard G P; Delhaas, Tammo; Huberts, Wouter

    2016-12-01

    When applying models to patient-specific situations, the impact of model input uncertainty on the model output uncertainty has to be assessed. Proper uncertainty quantification (UQ) and sensitivity analysis (SA) techniques are indispensable for this purpose. An efficient approach for UQ and SA is the generalized polynomial chaos expansion (gPCE) method, where model response is expanded into a finite series of polynomials that depend on the model input (i.e., a meta-model). However, because of the intrinsic high computational cost of three-dimensional (3D) cardiovascular models, performing the number of model evaluations required for the gPCE is often computationally prohibitively expensive. Recently, Blatman and Sudret (2010, "An Adaptive Algorithm to Build Up Sparse Polynomial Chaos Expansions for Stochastic Finite Element Analysis," Probab. Eng. Mech., 25(2), pp. 183-197) introduced the adaptive sparse gPCE (agPCE) in the field of structural engineering. This approach reduces the computational cost with respect to the gPCE, by only including polynomials that significantly increase the meta-model's quality. In this study, we demonstrate the agPCE by applying it to a 3D abdominal aortic aneurysm (AAA) wall mechanics model and a 3D model of flow through an arteriovenous fistula (AVF). The agPCE method was indeed able to perform UQ and SA at a significantly lower computational cost than the gPCE, while still retaining accurate results. Cost reductions ranged between 70-80% and 50-90% for the AAA and AVF model, respectively.

  6. THE USE OF COMPUTER MODELING PACKAGES TO ILLUSTRATE UNCERTAINTY IN RISK ASSESSMENTS: AN EASE OF USE AND INTERPRETATION COMPARISON

    EPA Science Inventory

    Consistent improvements in processor speed and computer access have substantially increased the use of computer modeling by experts and non-experts alike. Several new computer modeling packages operating under graphical operating systems (i.e. Microsoft Windows or Macintosh) m...

  7. Uncertainty analysis

    SciTech Connect

    Thomas, R.E.

    1982-03-01

    An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
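
    For the linear case Ax = b mentioned above, a scalar response J = cᵀx admits a complete sensitivity analysis from a single adjoint solve Aᵀλ = c, since dJ/db_i = λ_i and dJ/dA_ij = -λ_i x_j. The sketch below verifies this against a finite difference on an invented test matrix; it illustrates the adjoint idea generically rather than reproducing the report's formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Linear model A x = b with a scalar response J = c^T x.
    n = 5
    A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test matrix
    b = rng.normal(size=n)
    c = rng.normal(size=n)

    x = np.linalg.solve(A, b)
    lam = np.linalg.solve(A.T, c)                 # single adjoint solve

    # Adjoint sensitivities: dJ/db_i = lambda_i, dJ/dA_ij = -lambda_i * x_j.
    dJ_db = lam
    dJ_dA = -np.outer(lam, x)

    # Spot-check one entry against a finite difference.
    eps = 1e-6
    A_pert = A.copy()
    A_pert[1, 2] += eps
    fd = (c @ np.linalg.solve(A_pert, b) - c @ x) / eps
    print("adjoint dJ/dA[1,2] :", dJ_dA[1, 2])
    print("finite difference  :", fd)
    ```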

  8. Mini-max feedback control as a computational theory of sensorimotor control in the presence of structural uncertainty

    PubMed Central

    Ueyama, Yuki

    2014-01-01

    We propose a mini-max feedback control (MMFC) model as a robust approach to human motor control under conditions of uncertain dynamics, such as structural uncertainty. The MMFC model is an expansion of the optimal feedback control (OFC) model. According to this scheme, motor commands are generated to minimize the maximal cost, based on an assumption of worst-case uncertainty, characterized by familiarity with novel dynamics. We simulated linear dynamic systems with different types of force fields (stable and unstable dynamics) and compared the performance of MMFC to that of OFC. MMFC delivered better performance than OFC in terms of stability and the achievement of tasks. Moreover, the gain in positional feedback with the MMFC model in the unstable dynamics was tuned to the direction of instability. It is assumed that the shape modulations of the gain in positional feedback in unstable dynamics played the same role as that played by end-point stiffness observed in human studies. Accordingly, we suggest that MMFC is a plausible model that predicts motor behavior under conditions of uncertain dynamics. PMID:25309415

  9. Artifacts in Conventional Computed Tomography (CT) and Free Breathing Four-Dimensional CT Induce Uncertainty in Gross Tumor Volume Determination

    SciTech Connect

    Fredberg Persson, Gitte; Nygaard, Ditte Eklund; Munch af Rosenschoeld, Per; Richter Vogelius, Ivan; Josipovic, Mirjana; Specht, Lena; Korreman, Stine Sofia

    2011-08-01

    Purpose: Artifacts impacting the imaged tumor volume can be seen in conventional three-dimensional CT (3DCT) scans for planning of lung cancer radiotherapy but can be reduced with the use of respiration-correlated imaging, i.e., 4DCT or breathhold CT (BHCT) scans. The aim of this study was to compare delineated gross tumor volume (GTV) sizes in 3DCT, 4DCT, and BHCT scans of patients with lung tumors. Methods and Materials: A total of 36 patients with 46 tumors referred for stereotactic radiotherapy of lung tumors were included. All patients underwent positron emission tomography (PET)/CT, 4DCT, and BHCT scans. GTVs in all CT scans of individual patients were delineated during one session by a single physician to minimize systematic delineation uncertainty. The GTV size from the BHCT was considered the closest to true tumor volume and was chosen as the reference. The reference GTV size was compared to GTV sizes in 3DCT, at midventilation (MidV), at end-inspiration (Insp), and at end-expiration (Exp) bins from the 4DCT scan. Results: The median BHCT GTV size was 4.9 cm³ (0.1-53.3 cm³). Median deviation between 3DCT and BHCT GTV size was 0.3 cm³ (-3.3 to 30.0 cm³), between MidV and BHCT size was 0.2 cm³ (-5.7 to 19.7 cm³), between Insp and BHCT size was 0.3 cm³ (-4.7 to 24.8 cm³), and between Exp and BHCT size was 0.3 cm³ (-4.8 to 25.5 cm³). The 3DCT, MidV, Insp, and Exp median GTV sizes were all significantly larger than the BHCT median GTV size. Conclusions: In the present study, the choice of CT method significantly influenced the delineated GTV size, on average, leading to an increase in GTV size compared to the reference BHCT. The uncertainty caused by artifacts is estimated to be of the same magnitude as delineation uncertainty and should be considered in the design of margins for radiotherapy.

  10. Computing the Risk of Postprandial Hypo- and Hyperglycemia in Type 1 Diabetes Mellitus Considering Intrapatient Variability and Other Sources of Uncertainty

    PubMed Central

    García-Jaramillo, Maira; Calm, Remei; Bondia, Jorge; Tarín, Cristina; Vehí, Josep

    2009-01-01

    Objective The objective of this article was to develop a methodology to quantify the risk of suffering different grades of hypo- and hyperglycemia episodes in the postprandial state. Methods Interval predictions of patient postprandial glucose were performed during a 5-hour period after a meal for a set of 3315 scenarios. Uncertainty in the patient's insulin sensitivities and carbohydrate (CHO) contents of the planned meal was considered. A normalized area under the curve of the worst-case predicted glucose excursion for severe and mild hypo- and hyperglycemia glucose ranges was obtained and weighted according to their importance. As a result, a comprehensive risk measure was obtained. A reference model of preprandial glucose values representing the behavior in different ranges was chosen by a χ² test. The relationship between the computed risk index and the probability of occurrence of events was analyzed for these reference models through 19,500 Monte Carlo simulations. Results The obtained reference models for each preprandial glucose range were 100, 160, and 220 mg/dl. A relationship between the risk index ranges <10, 10–60, 60–120, and >120 and the probability of occurrence of mild and severe postprandial hyper- and hypoglycemia can be derived. Conclusions When intrapatient variability and uncertainty in the CHO content of the meal are considered, a safer prediction of possible hyper- and hypoglycemia episodes induced by the tested insulin therapy can be calculated. PMID:20144339

  11. Measurement Uncertainty

    NASA Astrophysics Data System (ADS)

    Koch, Michael

    Measurement uncertainty is one of the key issues in quality assurance. It has become increasingly important for analytical chemistry laboratories with accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for deciding whether a measurement result is fit for purpose. It also helps in deciding whether a specification limit is exceeded or not. Estimation of measurement uncertainty is often not trivial. Several strategies have been developed for this purpose and are briefly described in this chapter. In addition, the different ways of taking uncertainty into account in compliance assessment are explained.

  12. A computer-aided approach to compare the production economics of fed-batch and perfusion culture under uncertainty.

    PubMed

    Lim, Ai Chye; Washbrook, John; Titchener-Hooker, Nigel John; Farid, Suzanne S

    2006-03-05

    Fed-batch and perfusion culture dominate mammalian cell culture production processes. In this paper, a decision-support tool was employed to evaluate the economic feasibility of both culture modes via a case study based upon the large-scale production of monoclonal antibodies. The trade-offs between the relative simplicity but higher start-up costs of fed-batch processes and the high productivity but higher chance of equipment failure of perfusion processes were analysed. Deterministic analysis showed that whilst there was an insignificant difference (3%) between the cost of goods per gram (COG/g) values, the perfusion option benefited from a 42% reduction in capital investment and a 12% higher projected net present value (NPV). When Monte Carlo simulations were used to account for uncertainties in titre and yield, as well as the risks of contamination and filter fouling, the frequency distributions for the output metrics revealed that neither process route offered the best of both NPV and product output. A product output criterion was formulated and the options that met the criterion were compared based on their reward/risk ratio. The perfusion option was no longer feasible as it failed to meet the product output criterion, and the fed-batch option had a 100% higher reward/risk ratio. The tool indicated that, in this particular case, the probabilities of contamination and fouling in the perfusion option would need to be reduced from 10% to 3% for this option to have the higher reward/risk ratio. The case study highlighted the limitations of relying on deterministic analysis alone.
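
    A stylized Monte Carlo sketch of the kind of stochastic comparison described above is given below. It is not the paper's decision-support tool; the titres, batch counts, failure probabilities, costs, and the 350 kg output criterion are hypothetical placeholders.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 10_000

      def simulate(titre_mean, titre_sd, batches_per_year, kg_per_batch_per_titre,
                   fail_prob, annual_cost_musd):
          titre = rng.normal(titre_mean, titre_sd, N)                    # g/L
          survived = rng.random((N, batches_per_year)) > fail_prob       # contamination/fouling
          output_kg = titre * kg_per_batch_per_titre * survived.sum(axis=1)
          cog_per_g = annual_cost_musd * 1e6 / (output_kg * 1e3)         # $/g
          return output_kg, cog_per_g

      fb_out, fb_cog = simulate(2.0, 0.4, 20, 10.0, 0.02, 40.0)   # "fed-batch"-like option
      pf_out, pf_cog = simulate(0.8, 0.2, 50, 8.0, 0.10, 35.0)    # "perfusion"-like option

      for name, out, cog in [("fed-batch", fb_out, fb_cog), ("perfusion", pf_out, pf_cog)]:
          print(f"{name:9s}  P(output >= 350 kg) = {np.mean(out >= 350):.2f}  "
                f"median COG = {np.median(cog):.1f} $/g")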

  13. Conundrums with uncertainty factors.

    PubMed

    Cooke, Roger

    2010-03-01

    The practice of uncertainty factors as applied to noncancer endpoints in the IRIS database harkens back to traditional safety factors. In the era before risk quantification, these were used to build in a "margin of safety." As risk quantification takes hold, the safety factor methods yield to quantitative risk calculations to guarantee safety. Many authors believe that uncertainty factors can be given a probabilistic interpretation as ratios of response rates, and that the reference values computed according to the IRIS methodology can thus be converted to random variables whose distributions can be computed with Monte Carlo methods, based on the distributions of the uncertainty factors. Recent proposals from the National Research Council echo this view. Based on probabilistic arguments, several authors claim that the current practice of uncertainty factors is overprotective. When interpreted probabilistically, uncertainty factors entail very strong assumptions on the underlying response rates. For example, the factor for extrapolating from animal to human is the same whether the dosage is chronic or subchronic. Together with independence assumptions, these assumptions entail that the covariance matrix of the logged response rates is singular. In other words, the accumulated assumptions entail a log-linear dependence between the response rates. This in turn means that any uncertainty analysis based on these assumptions is ill-conditioned; it effectively computes uncertainty conditional on a set of zero probability. The practice of uncertainty factors is due for a thorough review. Two directions are briefly sketched, one based on standard regression models, and one based on nonparametric continuous Bayesian belief nets.
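
    The log-linear dependence argument can be made concrete with a small numerical sketch. The factor structure below is a stylized assumption for illustration only, not Cooke's analysis: four logged response rates are built as exact linear combinations of three independent logged uncertainty factors, so their covariance matrix is singular.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000

      # Three independent logged factors: animal-to-human (A), subchronic-to-chronic (S),
      # average-to-sensitive-human (H).  The same A applies to chronic and subchronic dosing.
      A, S, H = (rng.normal(0, 1, n) for _ in range(3))

      # Four logged response rates built deterministically from the three factors.
      log_rates = np.vstack([
          A + H,          # human chronic, extrapolated from animal chronic
          A + S + H,      # human chronic, extrapolated from animal subchronic
          A,              # animal chronic
          A + S,          # animal subchronic
      ])

      cov = np.cov(log_rates)
      print("covariance matrix rank:", np.linalg.matrix_rank(cov), "of", cov.shape[0])
      # rank 3 < 4: the four logged response rates are linearly dependent (log-linear dependence),
      # so a Monte Carlo analysis built on these assumptions conditions on a zero-probability set.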

  14. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    SciTech Connect

    Kersaudy, Pierric; Sudret, Bruno; Varsier, Nadège; Picon, Odile; Wiart, Joe

    2015-04-01

    In numerical dosimetry, recent advances in high performance computing have led to a strong reduction of the computational time required to assess the specific absorption rate (SAR) characterizing human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. Leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performance of the LARS-Kriging-PC approach is compared to that of an ordinary Kriging model and of a classical sparse polynomial chaos expansion. The LARS-Kriging-PC approach appears to perform better than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos, depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.

  15. A new surrogate modeling technique combining Kriging and polynomial chaos expansions - Application to uncertainty analysis in computational dosimetry

    NASA Astrophysics Data System (ADS)

    Kersaudy, Pierric; Sudret, Bruno; Varsier, Nadège; Picon, Odile; Wiart, Joe

    2015-04-01

    In numerical dosimetry, recent advances in high performance computing have led to a strong reduction of the computational time required to assess the specific absorption rate (SAR) characterizing human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. Leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performance of the LARS-Kriging-PC approach is compared to that of an ordinary Kriging model and of a classical sparse polynomial chaos expansion. The LARS-Kriging-PC approach appears to perform better than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos, depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.

  16. Quantification of Emission Factor Uncertainty

    EPA Science Inventory

    Emissions factors are important for estimating and characterizing emissions from sources of air pollution. There is no quantitative indication of uncertainty for these emission factors, most factors do not have an adequate data set to compute uncertainty, and it is very difficult...

  17. Teaching Uncertainties

    ERIC Educational Resources Information Center

    Duerdoth, Ian

    2009-01-01

    The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…

  18. Calibration Under Uncertainty.

    SciTech Connect

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
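
    A minimal sketch of the deterministic least-squares formulation summarized above is given below, with a synthetic two-parameter model standing in for the computer model and Gaussian noise standing in for the experimental error. The CUU treatment discussed in the report would additionally model error in the simulator itself, e.g. through a Bayesian discrepancy term and a full posterior rather than a single best-fit point.

      import numpy as np
      from scipy.optimize import least_squares

      def model(theta, x):
          # hypothetical computer model with two parameters (illustration only)
          a, b = theta
          return a * np.exp(-b * x)

      rng = np.random.default_rng(2)
      x_obs = np.linspace(0.0, 2.0, 15)
      y_obs = model([3.0, 1.5], x_obs) + rng.normal(0.0, 0.05, x_obs.size)  # noisy "experiment"

      # Deterministic calibration: minimize the squared model/data mismatch.
      res = least_squares(lambda th: model(th, x_obs) - y_obs, x0=[1.0, 1.0])
      print("calibrated parameters:", res.x)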

  19. Uncertainty Quantification in Aeroelasticity

    NASA Astrophysics Data System (ADS)

    Beran, Philip; Stanford, Bret; Schrock, Christopher

    2017-01-01

    Physical interactions between a fluid and structure, potentially manifested as self-sustained or divergent oscillations, can be sensitive to many parameters whose values are uncertain. Of interest here are aircraft aeroelastic interactions, which must be accounted for in aircraft certification and design. Deterministic prediction of these aeroelastic behaviors can be difficult owing to physical and computational complexity. New challenges are introduced when physical parameters and elements of the modeling process are uncertain. By viewing aeroelasticity through a nondeterministic prism, where key quantities are assumed stochastic, one may gain insights into how to reduce system uncertainty, increase system robustness, and maintain aeroelastic safety. This article reviews uncertainty quantification in aeroelasticity using traditional analytical techniques not reliant on computational fluid dynamics; compares and contrasts this work with emerging methods based on computational fluid dynamics, which target richer physics; and reviews the state of the art in aeroelastic optimization under uncertainty. Barriers to continued progress, for example, the so-called curse of dimensionality, are discussed.

  20. Uncertainties in successive measurements

    NASA Astrophysics Data System (ADS)

    Distler, Jacques; Paban, Sonia

    2013-06-01

    When you measure an observable, A, in quantum mechanics, the state of the system changes. This, in turn, affects the quantum-mechanical uncertainty in some noncommuting observable, B. The standard uncertainty relation puts a lower bound on the uncertainty of B in the initial state. What is relevant for a subsequent measurement of B, however, is the uncertainty of B in the postmeasurement state. We re-examine this problem, both in the case where A has a pure point spectrum and in the case where A has a continuous spectrum. In the latter case, the need to include a finite detector resolution, as part of what it means to measure such an observable, has dramatic implications for the result of successive measurements. Ozawa [Phys. Rev. A 67, 042105 (2003)] proposed an inequality satisfied in the case of successive measurements. Among our results, we show that his inequality is ineffective (can never come close to being saturated). For the cases of interest, we compute a sharper lower bound.

  1. Orbital State Uncertainty Realism

    NASA Astrophysics Data System (ADS)

    Horwood, J.; Poore, A. B.

    2012-09-01

    Fundamental to the success of the space situational awareness (SSA) mission is the rigorous inclusion of uncertainty in the space surveillance network. The proper characterization of uncertainty in the orbital state of a space object is a common requirement of many SSA functions including tracking and data association, resolution of uncorrelated tracks (UCTs), conjunction analysis and probability of collision, sensor resource management, and anomaly detection. While tracking environments, such as air and missile defense, make extensive use of Gaussian and local linearity assumptions within algorithms for uncertainty management, space surveillance is inherently different due to long time gaps between updates, high misdetection rates, nonlinear and non-conservative dynamics, and non-Gaussian phenomena. The latter implies that "covariance realism" is not always sufficient. SSA also requires "uncertainty realism": the proper characterization of both the state and covariance and all non-zero higher-order cumulants. In other words, a proper characterization of a space object's full state probability density function (PDF) is required. In order to provide a more statistically rigorous treatment of uncertainty in the space surveillance tracking environment and to better support the aforementioned SSA functions, a new class of multivariate PDFs is formulated which more accurately characterizes the uncertainty of a space object's state or orbit. The new distribution contains a parameter set controlling the higher-order cumulants which gives the level sets a distinctive "banana" or "boomerang" shape and degenerates to a Gaussian in a suitable limit. Using the new class of PDFs within the general Bayesian nonlinear filter, the resulting filter prediction step (i.e., uncertainty propagation) is shown to have the same computational cost as the traditional unscented Kalman filter, with the former able to maintain a proper characterization of the uncertainty for up to ten

  2. Pharmacological Fingerprints of Contextual Uncertainty

    PubMed Central

    Ruge, Diane; Stephan, Klaas E.

    2016-01-01

    Successful interaction with the environment requires flexible updating of our beliefs about the world. By estimating the likelihood of future events, it is possible to prepare appropriate actions in advance and execute fast, accurate motor responses. According to theoretical proposals, agents track the variability arising from changing environments by computing various forms of uncertainty. Several neuromodulators have been linked to uncertainty signalling, but comprehensive empirical characterisation of their relative contributions to perceptual belief updating, and to the selection of motor responses, is lacking. Here we assess the roles of noradrenaline, acetylcholine, and dopamine within a single, unified computational framework of uncertainty. Using pharmacological interventions in a sample of 128 healthy human volunteers and a hierarchical Bayesian learning model, we characterise the influences of noradrenergic, cholinergic, and dopaminergic receptor antagonism on individual computations of uncertainty during a probabilistic serial reaction time task. We propose that noradrenaline influences learning of uncertain events arising from unexpected changes in the environment. In contrast, acetylcholine balances attribution of uncertainty to chance fluctuations within an environmental context, defined by a stable set of probabilistic associations, or to gross environmental violations following a contextual switch. Dopamine supports the use of uncertainty representations to engender fast, adaptive responses. PMID:27846219

  3. How Uncertain is Uncertainty?

    NASA Astrophysics Data System (ADS)

    Vámos, Tibor

    The gist of the paper is the fundamentally uncertain nature of all kinds of uncertainty and, consequently, a critical epistemic review of historical and recent approaches, computational methods, and algorithms. The review follows the development of the notion from the beginnings of thinking, via the Aristotelian and Skeptic views, medieval nominalism, and the influential pioneering metaphors of ancient India and Persia, to the birth of modern mathematical disciplinary reasoning. Discussing models of uncertainty, e.g. their statistical, physical, and psychological background, we reach a pragmatic, model-related estimation perspective: a balanced application orientation for different problem areas. Data mining, game theory, and recent advances in approximation algorithms are discussed in this spirit of modest reasoning.

  4. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  5. Uncertainty Quantification for Airfoil Icing

    NASA Astrophysics Data System (ADS)

    DeGennaro, Anthony Matteo

    Ensuring the safety of airplane flight in icing conditions is an important and active arena of research in the aerospace community. Notwithstanding the research, development, and legislation aimed at certifying airplanes for safe operation, an analysis of the effects of icing uncertainties on certification quantities of interest is generally lacking. The central objective of this thesis is to examine and analyze problems in airfoil ice accretion from the standpoint of uncertainty quantification. We focus on three distinct areas: user-informed, data-driven, and computational uncertainty quantification. In the user-informed approach to uncertainty quantification, we discuss important canonical icing classifications and show how these categories can be modeled using a few shape parameters. We then investigate the statistical effects of these parameters. In the data-driven approach, we build statistical models of airfoil ice shapes from databases of actual ice shapes, and quantify the effects of these parameters. Finally, in the computational approach, we investigate the effects of uncertainty in the physics of the ice accretion process, by perturbing the input to an in-house numerical ice accretion code that we develop in this thesis.

  6. Uncertainty of empirical correlation equations

    NASA Astrophysics Data System (ADS)

    Feistel, R.; Lovell-Smith, J. W.; Saunders, P.; Seitz, S.

    2016-08-01

    The International Association for the Properties of Water and Steam (IAPWS) has published a set of empirical reference equations of state, forming the basis of the 2010 Thermodynamic Equation of Seawater (TEOS-10), from which all thermodynamic properties of seawater, ice, and humid air can be derived in a thermodynamically consistent manner. For each of the equations of state, the parameters have been found by simultaneously fitting equations for a range of different derived quantities using large sets of measurements of these quantities. In some cases, uncertainties in these fitted equations have been assigned based on the uncertainties of the measurement results. However, because uncertainties in the parameter values have not been determined, it is not possible to estimate the uncertainty in many of the useful quantities that can be calculated using the parameters. In this paper we demonstrate how the method of generalised least squares (GLS), in which the covariance of the input data is propagated into the values calculated by the fitted equation, and in particular into the covariance matrix of the fitted parameters, can be applied to one of the TEOS-10 equations of state, namely IAPWS-95 for fluid pure water. Using the calculated parameter covariance matrix, we provide some preliminary estimates of the uncertainties in derived quantities, namely the second and third virial coefficients for water. We recommend further investigation of the GLS method for use as a standard method for calculating and propagating the uncertainties of values computed from empirical equations.
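
    The GLS idea can be sketched on a toy problem: fit a model to data with a known error covariance, keep the resulting parameter covariance matrix, and propagate it to a derived quantity. The quadratic model, the exponential error covariance, and the derived quantity below are placeholders, not the IAPWS-95 formulation.

      import numpy as np

      rng = np.random.default_rng(3)
      x = np.linspace(0.0, 1.0, 20)
      X = np.vander(x, 3, increasing=True)            # design matrix for a quadratic
      beta_true = np.array([1.0, -0.5, 2.0])

      # Correlated measurement errors with known covariance V (illustrative model)
      V = 0.02**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)
      y = X @ beta_true + rng.multivariate_normal(np.zeros(x.size), V)

      Vinv = np.linalg.inv(V)
      cov_beta = np.linalg.inv(X.T @ Vinv @ X)        # GLS parameter covariance
      beta_hat = cov_beta @ X.T @ Vinv @ y            # GLS estimate

      # Propagate the parameter covariance to a derived quantity, here dy/dx at x = 0.5,
      # which is linear in the parameters: g(beta) = beta1 + 2*beta2*x0.
      x0 = 0.5
      J = np.array([0.0, 1.0, 2.0 * x0])              # Jacobian of the derived quantity
      g = J @ beta_hat
      u_g = np.sqrt(J @ cov_beta @ J)
      print(f"dy/dx at x={x0}: {g:.3f} +/- {u_g:.3f}")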

  7. Uncertainty and Engagement with Learning Games

    ERIC Educational Resources Information Center

    Howard-Jones, Paul A.; Demetriou, Skevi

    2009-01-01

    Uncertainty may be an important component of the motivation provided by learning games, especially when associated with gaming rather than learning. Three studies are reported that explore the influence of gaming uncertainty on engagement with computer-based learning games. In the first study, children (10-11 years) played a simple maths quiz.…

  8. The uncertainties in estimating measurement uncertainties

    SciTech Connect

    Clark, J.P.; Shull, A.H.

    1994-07-01

    All measurements include some error. Whether measurements are used for accountability, environmental programs or process support, they are of little value unless accompanied by an estimate of the measurement's uncertainty. This fact is often overlooked by the individuals who need measurements to make decisions. This paper will discuss the concepts of measurement, measurement errors (accuracy or bias and precision or random error), physical and error models, measurement control programs, examples of measurement uncertainty, and uncertainty as related to measurement quality. Measurements are comparisons of unknowns to knowns, estimates of some true value plus an uncertainty, and are no better than the standards to which they are compared. Direct comparisons of unknowns that match the composition of known standards will normally have small uncertainties. In the real world, measurements usually involve indirect comparisons of significantly different materials (e.g., measuring a physical property of a chemical element in a sample having a matrix that is significantly different from that of the calibration standards). Consequently, there are many sources of error involved in measurement processes that can affect the quality of a measurement and its associated uncertainty. How the uncertainty estimates are determined and what they mean is as important as the measurement itself. The process of calculating the uncertainty of a measurement itself has uncertainties that must be handled correctly. Examples of chemistry laboratory measurements are reviewed in this report and recommendations are made for improving measurement uncertainty estimates.

  9. Collaborative Project: The problem of bias in defining uncertainty in computationally enabled strategies for data-driven climate model development. Final Technical Report.

    SciTech Connect

    Huerta, Gabriel

    2016-05-10

    The objective of the project is to develop strategies for better representing scientific sensibilities within statistical measures of model skill that can then be used within a Bayesian statistical framework for data-driven climate model development and improved measures of model scientific uncertainty. One of the thorny issues in model evaluation is quantifying the effect of biases on climate projections. While no bias is desirable, only those biases that affect feedbacks affect the scatter in climate projections. The effort at the University of Texas is to analyze previously calculated ensembles of CAM3.1 with perturbed parameters to discover how biases affect projections of global warming. The hypothesis is that compensating errors in the control model can be identified by their effect on a combination of processes, and that developing metrics that are sensitive to dependencies among state variables would provide a way to select versions of climate models that may reduce scatter in climate projections. Gabriel Huerta at the University of New Mexico is responsible for developing statistical methods for evaluating these field dependencies. The UT effort will incorporate these developments into MECS, a set of Python scripts being developed at the University of Texas for managing the workflow associated with data-driven climate model development over HPC resources. This report reflects the main activities at the University of New Mexico, where the PI (Huerta) and the postdocs (Nosedal, Hattab and Karki) worked on the project.

  10. Picturing Data With Uncertainty

    NASA Technical Reports Server (NTRS)

    Kao, David; Love, Alison; Dungan, Jennifer L.; Pang, Alex

    2004-01-01

    NASA is in the business of creating maps for scientific purposes to represent important biophysical or geophysical quantities over space and time. For example, maps of surface temperature over the globe tell scientists where and when the Earth is heating up; regional maps of the greenness of vegetation tell scientists where and when plants are photosynthesizing. There is always uncertainty associated with each value in any such map due to various factors. When uncertainty is fully modeled, instead of a single value at each map location, there is a distribution expressing a set of possible outcomes at each location. We consider such distribution data as multi-valued data since they consist of a collection of values about a single variable. Thus, multi-valued data represent both the map and its uncertainty. We have been working on ways to visualize spatial multi-valued data sets effectively for fields with regularly spaced units or grid cells such as those in NASA's Earth science applications. A new way to display distributions at multiple grid locations is to project the distributions from an individual row, column or other user-selectable straight transect from the 2D domain. First, at each grid cell in a given slice (row, column or transect), we compute a smooth density estimate from the underlying data. Such a density estimate of the probability density function (PDF) is generally more useful than a histogram, which is a classic density estimate. Then, the collection of PDFs along a given slice is presented vertically above the slice and forms a wall. To minimize occlusion of intersecting slices, the corresponding walls are positioned at the far edges of the boundary. The PDF wall depicts the shapes of the distributions very clearly since peaks represent the modes (or bumps) in the PDFs. We have defined roughness as the number of peaks in the distribution. Roughness is another useful summary measure for multimodal distributions. The uncertainty of the multi

  11. Uncertainty as knowledge

    PubMed Central

    Lewandowsky, Stephan; Ballard, Timothy; Pancost, Richard D.

    2015-01-01

    This issue of Philosophical Transactions examines the relationship between scientific uncertainty about climate change and knowledge. Uncertainty is an inherent feature of the climate system. Considerable effort has therefore been devoted to understanding how to effectively respond to a changing, yet uncertain climate. Politicians and the public often appeal to uncertainty as an argument to delay mitigative action. We argue that the appropriate response to uncertainty is exactly the opposite: uncertainty provides an impetus to be concerned about climate change, because greater uncertainty increases the risks associated with climate change. We therefore suggest that uncertainty can be a source of actionable knowledge. We survey the papers in this issue, which address the relationship between uncertainty and knowledge from physical, economic and social perspectives. We also summarize the pervasive psychological effects of uncertainty, some of which may militate against a meaningful response to climate change, and we provide pointers to how those difficulties may be ameliorated. PMID:26460108

  12. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    SciTech Connect

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
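
    For intuition, the sketch below computes bounds on two of the descriptive statistics discussed in the report for a small set of interval-valued measurements; the intervals themselves are invented. Bounds on the mean and median follow directly from the interval endpoints, whereas sharp bounds on the variance are in general much harder (NP-hard) to compute.

      import numpy as np

      # Each measurement is an interval [lo, hi] capturing epistemic uncertainty.
      intervals = np.array([[1.0, 1.4],
                            [0.8, 1.1],
                            [1.2, 1.9],
                            [0.9, 1.3],
                            [1.1, 1.2]])
      lo, hi = intervals[:, 0], intervals[:, 1]

      # Monotonicity of the mean and median in each data point gives the bounds directly.
      mean_bounds   = (lo.mean(),     hi.mean())
      median_bounds = (np.median(lo), np.median(hi))
      print("mean   in", mean_bounds)
      print("median in", median_bounds)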

  13. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.

  14. Coastal 'Big Data' and nature-inspired computation: Prediction potentials, uncertainties, and knowledge derivation of neural networks for an algal metric

    NASA Astrophysics Data System (ADS)

    Millie, David F.; Weckman, Gary R.; Young, William A.; Ivey, James E.; Fries, David P.; Ardjmand, Ehsan; Fahnenstiel, Gary L.

    2013-07-01

    Coastal monitoring has become reliant upon automated sensors for data acquisition. Such a technical commitment comes with a cost; particularly, the generation of large, high-dimensional data streams ('Big Data') that personnel must search through to identify data structures. Nature-inspired computation, inclusive of artificial neural networks (ANNs), affords the unearthing of complex, recurring patterns within sizable data volumes. In 2009, select meteorological and hydrological data were acquired via autonomous instruments in Sarasota Bay, Florida (USA). ANNs estimated continuous chlorophyll (CHL) a concentrations from abiotic predictors, with correlations between measured:modeled concentrations >0.90 and model efficiencies ranging from 0.80 to 0.90. Salinity and water temperature were the principal influences for modeled CHL within the Bay; concentrations steadily increased at temperatures >28 °C and were greatest at salinities <36 (maximizing at ca. 35.3). Categorical ANNs modeled CHL classes of 6.1 and 11 μg CHL L⁻¹ (representative of local and state-imposed constraint thresholds, respectively), with an accuracy of ca. 83% and class precision ranging from 0.79 to 0.91. The occurrence likelihood of concentrations >6.1 μg CHL L⁻¹ maximized at a salinity of ca. 36.3 and a temperature of ca. 29.5 °C. A 10th-order Chebyshev bivariate polynomial equation was fit (adj. r² = 0.99, p < 0.001) to a three-dimensional response surface portraying modeled CHL concentrations, conditional to the temperature-salinity interaction. The TREPAN algorithm queried a continuous ANN to extract a decision tree for delineation of CHL classes; turbidity, temperature, and salinity (and to lesser degrees, wind speed, wind/current direction, irradiance, and urea-nitrogen) were key variables for quantitative rules in tree formalisms. Taken together, computations enabled knowledge provision for and quantifiable representations of the non-linear relationships between environmental

  15. Estimating uncertainties in complex joint inverse problems

    NASA Astrophysics Data System (ADS)

    Afonso, Juan Carlos

    2016-04-01

    Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should be always conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related

  16. Uncertainty Propagation for Terrestrial Mobile Laser Scanner

    NASA Astrophysics Data System (ADS)

    Mezian, c.; Vallet, Bruno; Soheilian, Bahman; Paparoditis, Nicolas

    2016-06-01

    Laser scanners are used more and more in mobile mapping systems. They provide 3D point clouds that are used for object reconstruction and registration of the system. For both of these applications, uncertainty analysis of the 3D points is of great interest but rarely investigated in the literature. In this paper we present a complete pipeline that takes into account all the sources of uncertainty and allows a covariance matrix to be computed for each 3D point. The sources of uncertainty are the laser scanner, the calibration of the scanner with respect to the vehicle, and the direct georeferencing system. We assume that all uncertainties are Gaussian. The variances of the laser scanner measurements (two angles and one distance) are usually provided by the manufacturer. This is also the case for integrated direct georeferencing devices. Residuals of the calibration process were used to estimate the covariance matrix of the 6D transformation between the laser scanner and the vehicle frame. Knowing the variances of all sources of uncertainty, we applied uncertainty propagation to compute the variance-covariance matrix of every 3D point. Such an uncertainty analysis makes it possible to estimate the impact of different laser scanners and georeferencing devices on the quality of the obtained 3D points. The obtained uncertainty values were illustrated using error ellipsoids on different datasets.
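
    A minimal sketch of the per-point covariance computation described above is shown below for the scanner contribution alone: the range and two angles, with assumed manufacturer variances, are propagated through the Jacobian of a spherical-to-Cartesian conversion. The angle convention and the noise values are illustrative assumptions; the full pipeline in the paper also folds in the calibration and georeferencing covariances.

      import numpy as np

      def spherical_to_xyz(r, theta, phi):
          return np.array([r * np.cos(phi) * np.cos(theta),
                           r * np.cos(phi) * np.sin(theta),
                           r * np.sin(phi)])

      def point_covariance(r, theta, phi, var_r, var_theta, var_phi):
          ct, st, cp, sp = np.cos(theta), np.sin(theta), np.cos(phi), np.sin(phi)
          # Jacobian of (x, y, z) with respect to (r, theta, phi)
          J = np.array([[cp * ct, -r * cp * st, -r * sp * ct],
                        [cp * st,  r * cp * ct, -r * sp * st],
                        [sp,       0.0,          r * cp     ]])
          C_meas = np.diag([var_r, var_theta, var_phi])
          return J @ C_meas @ J.T          # 3x3 covariance of the 3D point

      r, theta, phi = 25.0, np.deg2rad(30.0), np.deg2rad(10.0)          # metres, radians
      C_xyz = point_covariance(r, theta, phi,
                               var_r=0.005**2,                          # 5 mm ranging noise (assumed)
                               var_theta=np.deg2rad(0.01)**2,           # 0.01 deg angular noise (assumed)
                               var_phi=np.deg2rad(0.01)**2)
      print("point:", spherical_to_xyz(r, theta, phi))
      print("1-sigma errors (m):", np.sqrt(np.diag(C_xyz)))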

  17. Numerical Uncertainty Quantification for Radiation Analysis Tools

    NASA Technical Reports Server (NTRS)

    Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha

    2007-01-01

    Recently a new emphasis has been placed on engineering applications of space radiation analyses, and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing: as the number of rays increases, the associated uncertainty decreases, but the computational expense increases. Thus, a cost-benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is how many thicknesses are needed to obtain an accurate result. Convergence testing is therefore performed to quantify the uncertainty associated with interpolating over different shield-thickness spatial grids.

  18. MOUSE (MODULAR ORIENTED UNCERTAINTY SYSTEM): A COMPUTERIZED UNCERTAINTY ANALYSIS SYSTEM. OPERATIONAL MANUAL.

    EPA Science Inventory

    MOUSE (Modular Oriented Uncertainty SystEm) deals with the problem of uncertainties in models that consist of one or more algebraic equations. It was especially designed for use by those with little or no knowledge of computer languages or programming. It is compact (and thus can...

  19. Uncertainty estimation and prediction for interdisciplinary ocean dynamics

    SciTech Connect

    Lermusiaux, Pierre F.J. E-mail: pierrel@pacific.harvard.edu

    2006-09-01

    Scientific computations for the quantification, estimation and prediction of uncertainties for ocean dynamics are developed and exemplified. Primary characteristics of ocean data, models and uncertainties are reviewed and quantitative data assimilation concepts defined. Challenges involved in realistic data-driven simulations of uncertainties for four-dimensional interdisciplinary ocean processes are emphasized. Equations governing uncertainties in the Bayesian probabilistic sense are summarized. Stochastic forcing formulations are introduced and a new stochastic-deterministic ocean model is presented. The computational methodology and numerical system, Error Subspace Statistical Estimation, that is used for the efficient estimation and prediction of oceanic uncertainties based on these equations is then outlined. Capabilities of the ESSE system are illustrated in three data-assimilative applications: estimation of uncertainties for physical-biogeochemical fields, transfers of ocean physics uncertainties to acoustics, and real-time stochastic ensemble predictions with assimilation of a wide range of data types. Relationships with other modern uncertainty quantification schemes and promising research directions are discussed.
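
    The error-subspace idea can be illustrated with a small synthetic ensemble: a truncated SVD of the ensemble anomaly matrix yields the leading error directions and variances. This is only a conceptual sketch, not the ESSE code; the state size, ensemble size, and error patterns are invented.

      import numpy as np

      rng = np.random.default_rng(4)
      n_state, n_members = 500, 40                      # state dimension, ensemble size

      # Synthetic ensemble: a few large-scale error patterns plus small-scale noise.
      patterns = rng.normal(size=(n_state, 3))
      amplitudes = rng.normal(size=(3, n_members)) * np.array([[5.0], [2.0], [1.0]])
      ensemble = patterns @ amplitudes + 0.1 * rng.normal(size=(n_state, n_members))

      anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
      U, s, _ = np.linalg.svd(anomalies / np.sqrt(n_members - 1), full_matrices=False)

      variances = s**2                                   # eigenvalues of the sample covariance
      explained = np.cumsum(variances) / variances.sum()
      k = int(np.searchsorted(explained, 0.99)) + 1      # rank retaining 99% of error variance
      print(f"error-subspace rank for 99% variance: {k}")
      print("leading error variances:", np.round(variances[:5], 2))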

  20. Uncertainty quantification for holographic interferographic images

    NASA Astrophysics Data System (ADS)

    Centauri, Laurie Ann

    Current comparison methods for experimental and simulated holographic interferometric images are qualitative in nature. Previous comparisons of holographic interferometric images with computational fluid dynamics (CFD) simulations for validation have been performed qualitatively through visual comparison by a data analyst. By validating the experiments and CFD simulations in a quantifiable manner using a consistency analysis, the validation becomes a repeatable process that gives a consistency measure and a range of inputs over which the experiments and CFD simulations give consistent results. The quantification of uncertainty in four holographic interferometric experiments was performed for use in a data collaboration with CFD simulations for the purpose of validation. The model uncertainty from image processing, the measurement uncertainty from experimental data variation, and the scenario uncertainty from bias and parameter uncertainty were quantified. The scenario uncertainty was determined through comparison with an analytical solution at the helium inlet (height x = 0), including the uncertainty in the experimental parameters from historical weather data. The model uncertainty was calculated through a Box-Behnken sensitivity analysis on three image-processing code parameters. Measurement uncertainty was determined through a statistical analysis of the time average and standard deviation of the interference fringe positions. An experimental design matrix of CFD simulations was performed by Weston Eldredge using a Box-Behnken design with helium velocity, temperature, and air co-flow velocity as parameters, to provide simulated measurements for the data-collaboration data set. Over 3,200 holographic interferometric images were processed in the course of this study. When each permutation of these images is taken into account through all the image-processing steps, the total number of images processed is over 13,000. Probability

  1. Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.

    2002-01-01

    Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
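
    The sketch below illustrates two of the propagation methods named above (first-order method of moments and Monte Carlo sampling) on a toy weight function with two uncertain configuration variables. The function and the input statistics are hypothetical stand-ins for the aircraft analysis code, chosen only to show how the two estimates are formed and compared.

      import numpy as np

      def weight(aspect_ratio, wing_area):
          # toy nonlinear response standing in for the design code's weight build-up
          return 5000.0 + 120.0 * wing_area + 35.0 * aspect_ratio**1.5

      mu = np.array([9.0, 40.0])        # mean aspect ratio, wing area (hypothetical units)
      sigma = np.array([0.3, 1.5])      # input standard deviations (hypothetical)

      # Method of moments: linearize about the mean using finite-difference gradients.
      eps = 1e-4
      grad = np.array([(weight(mu[0] + eps, mu[1]) - weight(mu[0] - eps, mu[1])) / (2 * eps),
                       (weight(mu[0], mu[1] + eps) - weight(mu[0], mu[1] - eps)) / (2 * eps)])
      mom_mean = weight(*mu)
      mom_std = np.sqrt(np.sum((grad * sigma)**2))      # independent inputs assumed

      # Monte Carlo: sample the inputs and push them through the function.
      rng = np.random.default_rng(5)
      samples = rng.normal(mu, sigma, size=(100_000, 2))
      w = weight(samples[:, 0], samples[:, 1])

      print(f"method of moments: {mom_mean:.0f} +/- {mom_std:.0f}")
      print(f"Monte Carlo:       {w.mean():.0f} +/- {w.std():.0f}")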

  2. Uncertainty Quantification in Climate Modeling

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Safta, C.; Berry, R.; Debusschere, B.; Najm, H.

    2011-12-01

    We address challenges that sensitivity analysis and uncertainty quantification methods face when dealing with complex computational models. In particular, climate models are computationally expensive and typically depend on a large number of input parameters. We consider the Community Land Model (CLM), which consists of a nested computational grid hierarchy designed to represent the spatial heterogeneity of the land surface. Each computational cell can be composed of multiple land types, and each land type can incorporate one or more sub-models describing the spatial and depth variability. Even for simulations at a regional scale, the computational cost of a single run is quite high and the number of parameters that control the model behavior is very large. Therefore, the parameter sensitivity analysis and uncertainty propagation face significant difficulties for climate models. This work employs several algorithmic avenues to address some of the challenges encountered by classical uncertainty quantification methodologies when dealing with expensive computational models, specifically focusing on the CLM as a primary application. First of all, since the available climate model predictions are extremely sparse due to the high computational cost of model runs, we adopt a Bayesian framework that effectively incorporates this lack-of-knowledge as a source of uncertainty, and produces robust predictions with quantified uncertainty even if the model runs are extremely sparse. In particular, we infer Polynomial Chaos spectral expansions that effectively encode the uncertain input-output relationship and allow efficient propagation of all sources of input uncertainties to outputs of interest. Secondly, the predictability analysis of climate models strongly suffers from the curse of dimensionality, i.e. the large number of input parameters. While single-parameter perturbation studies can be efficiently performed in a parallel fashion, the multivariate uncertainty analysis
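
    The Polynomial Chaos surrogate idea can be sketched for a toy model: infer the spectral coefficients by least-squares regression from a small number of "expensive" runs and then propagate input uncertainty through the cheap surrogate. The two-parameter toy model and the 30-run design below are placeholders, not the CLM setup.

      import numpy as np
      from numpy.polynomial.hermite_e import hermeval

      def expensive_model(x1, x2):
          # stand-in for a costly model output as a function of two uncertain parameters
          return np.exp(0.3 * x1) + 0.5 * x1 * x2

      def pc_basis(x1, x2, order=2):
          # tensor-product probabilists' Hermite polynomials up to total order 2
          terms = []
          for i in range(order + 1):
              for j in range(order + 1 - i):
                  ci = np.zeros(i + 1); ci[i] = 1.0
                  cj = np.zeros(j + 1); cj[j] = 1.0
                  terms.append(hermeval(x1, ci) * hermeval(x2, cj))
          return np.column_stack(terms)

      rng = np.random.default_rng(6)
      x_train = rng.normal(size=(30, 2))                       # only 30 sparse model runs
      y_train = expensive_model(x_train[:, 0], x_train[:, 1])

      coeffs, *_ = np.linalg.lstsq(pc_basis(x_train[:, 0], x_train[:, 1]), y_train, rcond=None)

      x_test = rng.normal(size=(100_000, 2))                   # cheap surrogate evaluations
      y_surrogate = pc_basis(x_test[:, 0], x_test[:, 1]) @ coeffs
      print("surrogate mean/std:", y_surrogate.mean().round(3), y_surrogate.std().round(3))
      print("true mean/std     :", expensive_model(x_test[:, 0], x_test[:, 1]).mean().round(3),
            expensive_model(x_test[:, 0], x_test[:, 1]).std().round(3))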

  3. Analysis of the Uncertainty in the Computation of Receiver Functions and Improvement in the Estimation of Receiver, PP and SS functions

    NASA Astrophysics Data System (ADS)

    Huang, X.; Gurrola, H.

    2013-12-01

    methods. All of these methods performed well in terms of stdev, but we chose ARU for its high-quality data and low signal-to-noise ratios (the average S/N ratio for these data was 4%). With real data, we tend to assume that the method with the lowest stdev is the best. But stdev does not account for a systematic bias toward incorrect values. In this case the LSD once again had the lowest stdev in computed amplitudes of Pds phases, but it also had the smallest values. The FID, FWLD and MID tended to produce the largest amplitudes while the LSD and TID tended toward the lower amplitudes. Considering that in the synthetics all these methods showed bias toward low amplitude, we believe that with real data those methods producing the largest amplitudes will be closest to the 'true values', and that this is a better measure of the better method than a small stdev in amplitude estimates. We will also present results for applying the TID and FID methods to the production of PP and SS precursor functions. When applied to these data, it is possible to moveout-correct the cross-correlation functions before extracting the signal from each PdP (or SdS) phase. As a result, a much cleaner Earth function is produced and the frequency content is significantly improved.

  4. Interpolation Method Needed for Numerical Uncertainty

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. The errors in CFD can be approximated via Richardson extrapolation, a method based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson extrapolation or another uncertainty method to approximate errors.
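
    A short worked example of the grid-refinement procedure referenced above is given below: from a quantity of interest computed on three systematically refined grids, one can estimate the observed order of accuracy, the Richardson-extrapolated value, and a grid convergence index (GCI). The three solution values and the refinement ratio are hypothetical; the formulas follow standard three-grid GCI practice.

      import numpy as np

      f1, f2, f3 = 0.9713, 0.9702, 0.9658    # fine, medium, coarse grid solutions (hypothetical)
      r = 2.0                                 # constant grid refinement ratio
      Fs = 1.25                               # GCI safety factor for three-grid studies

      p = np.log(abs(f3 - f2) / abs(f2 - f1)) / np.log(r)   # observed order of accuracy
      f_exact = f1 + (f1 - f2) / (r**p - 1.0)                # Richardson-extrapolated value
      gci_fine = Fs * abs((f2 - f1) / f1) / (r**p - 1.0)     # relative fine-grid uncertainty

      print(f"observed order p   = {p:.2f}")
      print(f"extrapolated value = {f_exact:.5f}")
      print(f"GCI (fine grid)    = {100 * gci_fine:.2f} %")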

  5. Addressing biological uncertainties in engineering gene circuits.

    PubMed

    Zhang, Carolyn; Tsoi, Ryan; You, Lingchong

    2016-04-18

    Synthetic biology has grown tremendously over the past fifteen years. It represents a new strategy to develop biological understanding and holds great promise for diverse practical applications. Engineering of a gene circuit typically involves computational design of the circuit, selection of circuit components, and test and optimization of circuit functions. A fundamental challenge in this process is the predictable control of circuit function due to multiple layers of biological uncertainties. These uncertainties can arise from different sources. We categorize these uncertainties into incomplete quantification of parts, interactions between heterologous components and the host, or stochastic dynamics of chemical reactions and outline potential design strategies to minimize or exploit them.

  6. Uncertainty in hydrological signatures

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; McMillan, H. K.

    2015-09-01

    Information about rainfall-runoff processes is essential for hydrological analyses, modelling and water-management applications. A hydrological, or diagnostic, signature quantifies such information from observed data as an index value. Signatures are widely used, e.g. for catchment classification, model calibration and change detection. Uncertainties in the observed data - including measurement inaccuracy and representativeness as well as errors relating to data management - propagate to the signature values and reduce their information content. Subjective choices in the calculation method are a further source of uncertainty. We review the uncertainties relevant to different signatures based on rainfall and flow data. We propose a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrate it in two catchments for common signatures including rainfall-runoff thresholds, recession analysis and basic descriptive signatures of flow distribution and dynamics. Our intention is to contribute to awareness and knowledge of signature uncertainty, including typical sources, magnitude and methods for its assessment. We found that the uncertainties were often large (i.e. typical intervals of ±10-40 % relative uncertainty) and highly variable between signatures. There was greater uncertainty in signatures that use high-frequency responses, small data subsets, or subsets prone to measurement errors. There was lower uncertainty in signatures that use spatial or temporal averages. Some signatures were sensitive to particular uncertainty types such as rating-curve form. We found that signatures can be designed to be robust to some uncertainty sources. Signature uncertainties of the magnitudes we found have the potential to change the conclusions of hydrological and ecohydrological analyses, such as cross-catchment comparisons or inferences about dominant processes.
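
    A minimal sketch of the Monte Carlo approach described above is given below for one simple signature, the runoff ratio: the flow and rainfall series are perturbed within assumed multiplicative observation errors and the resulting spread of the signature is summarized. The synthetic data and the error magnitudes are placeholders, not the paper's datasets or error models.

      import numpy as np

      rng = np.random.default_rng(7)
      n_days = 365
      rain = rng.gamma(shape=0.4, scale=8.0, size=n_days)                 # mm/day, synthetic
      flow = 0.45 * rain + rng.gamma(shape=0.3, scale=1.0, size=n_days)   # mm/day, synthetic

      def runoff_ratio(q, p):
          return q.sum() / p.sum()

      n_mc = 5000
      ratios = np.empty(n_mc)
      for i in range(n_mc):
          # multiplicative observation errors (crude stand-ins for rating-curve and gauge error)
          q_perturbed = flow * rng.normal(1.0, 0.15, size=n_days)
          p_perturbed = rain * rng.normal(1.0, 0.10, size=n_days)
          ratios[i] = runoff_ratio(q_perturbed, p_perturbed)

      lo, hi = np.percentile(ratios, [2.5, 97.5])
      print(f"runoff ratio: best estimate {runoff_ratio(flow, rain):.2f}, "
            f"95% interval [{lo:.2f}, {hi:.2f}]")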

  7. MODEL VALIDATION AND UNCERTAINTY QUANTIFICATION.

    SciTech Connect

    Hemez, F.M.; Doebling, S.W.

    2000-10-01

    This session offers an open forum to discuss issues and directions of research in the areas of model updating, predictive quality of computer simulations, model validation and uncertainty quantification. Technical presentations review the state of the art in nonlinear dynamics and model validation for structural dynamics. A panel discussion introduces the discussion on technology needs, future trends and challenges ahead, with an emphasis placed on soliciting participation from the audience. One of the goals is to show, through invited contributions, how other scientific communities are approaching and solving difficulties similar to those encountered in structural dynamics. The session also serves the purpose of presenting the ongoing organization of technical meetings sponsored by the U.S. Department of Energy and dedicated to health monitoring, damage prognosis, model validation and uncertainty quantification in engineering applications. The session is part of the SD-2000 Forum, a forum to identify research trends and funding opportunities and to discuss the future of structural dynamics.

  8. [Ethics, empiricism and uncertainty].

    PubMed

    Porz, R; Zimmermann, H; Exadaktylos, A K

    2011-01-01

    Accidents can lead to difficult boundary situations. Such situations often take place in the emergency units. The medical team thus often and inevitably faces professional uncertainty in their decision-making. It is essential to communicate these uncertainties within the medical team, instead of downplaying or overriding existential hurdles in decision-making. Acknowledging uncertainties might lead to alert and prudent decisions. Thus uncertainty can have ethical value in treatment or withdrawal of treatment. It does not need to be covered in evidence-based arguments, especially as some singular situations of individual tragedies cannot be grasped in terms of evidence-based medicine.

  9. Impact of uncertainty on modeling and testing

    NASA Technical Reports Server (NTRS)

    Coleman, Hugh W.; Brown, Kendall K.

    1995-01-01

    A thorough understanding of the uncertainties associated with the modeling and testing of the Space Shuttle Main Engine (SSME) Engine will greatly aid decisions concerning hardware performance and future development efforts. This report will describe the determination of the uncertainties in the modeling and testing of the Space Shuttle Main Engine test program at the Technology Test Bed facility at Marshall Space Flight Center. Section 2 will present a summary of the uncertainty analysis methodology used and discuss the specific applications to the TTB SSME test program. Section 3 will discuss the application of the uncertainty analysis to the test program and the results obtained. Section 4 presents the results of the analysis of the SSME modeling effort from an uncertainty analysis point of view. The appendices at the end of the report contain a significant amount of information relative to the analysis, including discussions of venturi flowmeter data reduction and uncertainty propagation, bias uncertainty documentations, technical papers published, the computer code generated to determine the venturi uncertainties, and the venturi data and results used in the analysis.

  10. Uncertainty Quantification Techniques of SCALE/TSUNAMI

    SciTech Connect

    Rearden, Bradley T; Mueller, Don

    2011-01-01

    The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as k{sub eff}, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to select an
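
    The propagation step described above is often written as the "sandwich rule": the relative variance of the response is S C S^T, with S the vector of relative sensitivity coefficients and C the relative covariance matrix of the cross-section data. The sketch below evaluates it for three notional nuclide-reaction pairs with invented numbers; it is not the TSUNAMI implementation.

      import numpy as np

      # Relative sensitivities dk/k per dsigma/sigma for three notional nuclide-reaction pairs.
      S = np.array([0.30, -0.12, 0.05])

      # Relative covariance matrix of the corresponding cross-section data (illustrative).
      C = np.array([[ 2.5e-4,  5.0e-5,  0.0    ],
                    [ 5.0e-5,  9.0e-4,  1.0e-5 ],
                    [ 0.0,     1.0e-5,  4.0e-4 ]])

      rel_var_keff = S @ C @ S          # sandwich rule: S C S^T
      print(f"relative uncertainty in k-eff: {100 * np.sqrt(rel_var_keff):.3f} %")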

  11. Electoral Knowledge and Uncertainty.

    ERIC Educational Resources Information Center

    Blood, R. Warwick; And Others

    Research indicates that the media play a role in shaping the information that voters have about election options. Knowledge of those options has been related to actual vote, but has not been shown to be strongly related to uncertainty. Uncertainty, however, does seem to motivate voters to engage in communication activities, some of which may…

  12. Intolerance of Uncertainty

    PubMed Central

    Beier, Meghan L.

    2015-01-01

    Multiple sclerosis (MS) is a chronic and progressive neurologic condition that, by its nature, carries uncertainty as a hallmark characteristic. Although all patients face uncertainty, there is variability in how individuals cope with its presence. In other populations, the concept of “intolerance of uncertainty” has been conceptualized to explain this variability such that individuals who have difficulty tolerating the possibility of future occurrences may engage in thoughts or behaviors by which they attempt to exert control over that possibility or lessen the uncertainty but may, as a result, experience worse outcomes, particularly in terms of psychological well-being. This topical review introduces MS-focused researchers, clinicians, and patients to intolerance of uncertainty, integrates the concept with what is already understood about coping with MS, and suggests future steps for conceptual, assessment, and treatment-focused research that may benefit from integrating intolerance of uncertainty as a central feature. PMID:26300700

  13. Economic uncertainty and econophysics

    NASA Astrophysics Data System (ADS)

    Schinckus, Christophe

    2009-10-01

    The objective of this paper is to provide a methodological link between econophysics and economics. I will study a key notion of both fields: uncertainty and the ways of thinking about it developed by the two disciplines. After having presented the main economic theories of uncertainty (provided by Knight, Keynes and Hayek), I show how this notion is paradoxically excluded from the economic field. In economics, uncertainty is totally reduced by an a priori Gaussian framework, in contrast to econophysics, which does not use a priori models because it works directly on data. Uncertainty is then not shaped by a specific model, and is partially and temporally reduced as models improve. This way of thinking about uncertainty has echoes in the economic literature. By presenting econophysics as a Knightian method, and a complementary approach to a Hayekian framework, this paper shows that econophysics can be methodologically justified from an economic point of view.

  14. Physical Uncertainty Bounds (PUB)

    SciTech Connect

    Vaughan, Diane Elizabeth; Preston, Dean L.

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  15. Fragility, uncertainty, and healthcare.

    PubMed

    Rogers, Wendy A; Walker, Mary J

    2016-02-01

    Medicine seeks to overcome one of the most fundamental fragilities of being human, the fragility of good health. No matter how robust our current state of health, we are inevitably susceptible to future illness and disease, while current disease serves to remind us of various frailties inherent in the human condition. This article examines the relationship between fragility and uncertainty with regard to health, and argues that there are reasons to accept rather than deny at least some forms of uncertainty. In situations of current ill health, both patients and doctors seek to manage this fragility through diagnoses that explain suffering and provide some certainty about prognosis as well as treatment. However, both diagnosis and prognosis are inevitably uncertain to some degree, leading to questions about how much uncertainty health professionals should disclose, and how to manage when diagnosis is elusive, leaving patients in uncertainty. We argue that patients can benefit when they are able to acknowledge, and appropriately accept, some uncertainty. Healthy people may seek to protect the fragility of their good health by undertaking preventative measures including various tests and screenings. However, these attempts to secure oneself against the onset of biological fragility can cause harm by creating rather than eliminating uncertainty. Finally, we argue that there are good reasons for accepting the fragility of health, along with the associated uncertainties.

  16. Quantum preparation uncertainty and lack of information

    NASA Astrophysics Data System (ADS)

    Rozpędek, Filip; Kaniewski, Jędrzej; Coles, Patrick J.; Wehner, Stephanie

    2017-02-01

    The quantum uncertainty principle famously predicts that there exist measurements that are inherently incompatible, in the sense that their outcomes cannot be predicted simultaneously. In contrast, no such uncertainty exists in the classical domain, where all uncertainty results from ignorance about the exact state of the physical system. Here, we critically examine the concept of preparation uncertainty and ask whether similarly in the quantum regime, some of the uncertainty that we observe can actually also be understood as a lack of information (LOI), albeit a lack of quantum information. We answer this question affirmatively by showing that for the well known measurements employed in BB84 quantum key distribution (Bennett and Brassard 1984 Int. Conf. on Computer System and Signal Processing), the amount of uncertainty can indeed be related to the amount of available information about additional registers determining the choice of the measurement. We proceed to show that also for other measurements the amount of uncertainty is in part connected to a LOI. Finally, we discuss the conceptual implications of our observation to the security of cryptographic protocols that make use of BB84 states.

  17. Information-theoretic approach to uncertainty importance

    SciTech Connect

    Park, C.K.; Bari, R.A.

    1985-01-01

    A method is presented for importance analysis in probabilistic risk assessments (PRA) for which the results of interest are characterized by full uncertainty distributions and not just point estimates. The method is based on information theory in which entropy is a measure of uncertainty of a probability density function. We define the relative uncertainty importance between two events as the ratio of the two exponents of the entropies. For the log-normal and log-uniform distributions the importance measure is comprised of the median (central tendency) and of the logarithm of the error factor (uncertainty). Thus, if accident sequences are ranked this way, and the error factors are not all equal, then a different rank order would result than if the sequences were ranked by the central tendency measure alone. As an illustration, the relative importance of internal events and in-plant fires was computed on the basis of existing PRA results.
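
    A minimal sketch of the entropy-ratio importance measure described above, assuming two log-normally distributed event frequencies. The medians, the error factors, and the convention that the error factor is the ratio of the 95th percentile to the median are illustrative assumptions, not values from the paper:

      import numpy as np
      from scipy.stats import lognorm

      def relative_importance(median_a, ef_a, median_b, ef_b):
          """Ratio of exp(entropy) for two log-normal uncertainty distributions.

          Assumes the error factor EF is the ratio of the 95th percentile to
          the median, so sigma = ln(EF) / 1.645.
          """
          sig_a = np.log(ef_a) / 1.645
          sig_b = np.log(ef_b) / 1.645
          # scipy's lognorm uses shape s = sigma and scale = median
          h_a = lognorm(s=sig_a, scale=median_a).entropy()  # differential entropy
          h_b = lognorm(s=sig_b, scale=median_b).entropy()
          return float(np.exp(h_a) / np.exp(h_b))

      # illustrative: hypothetical internal-event and in-plant-fire frequencies (per year)
      print(relative_importance(median_a=1e-5, ef_a=10.0,
                                median_b=3e-6, ef_b=30.0))

    Consistent with the abstract, exp(entropy) of a log-normal is proportional to the median times a power of the error factor, so the ratio reflects both central tendency and spread.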

  18. Utilizing general information theories for uncertainty quantification

    SciTech Connect

    Booker, J. M.

    2002-01-01

    Uncertainties enter into a complex problem from many sources: variability, errors, and lack of knowledge. A fundamental question arises in how to characterize the various kinds of uncertainty and then combine within a problem such as the verification and validation of a structural dynamics computer model, reliability of a dynamic system, or a complex decision problem. Because uncertainties are of different types (e.g., random noise, numerical error, vagueness of classification), it is difficult to quantify all of them within the constructs of a single mathematical theory, such as probability theory. Because different kinds of uncertainty occur within a complex modeling problem, linkages between these mathematical theories are necessary. A brief overview of some of these theories and their constituents under the label of Generalized Information Theory (GIT) is presented, and a brief decision example illustrates the importance of linking at least two such theories.

  19. Mutually Exclusive Uncertainty Relations

    NASA Astrophysics Data System (ADS)

    Xiao, Yunlong; Jing, Naihuan

    2016-11-01

    The uncertainty principle is one of the characteristic properties of quantum theory based on incompatibility. Apart from the incompatible relation of quantum states, mutual exclusiveness is another remarkable phenomenon in the information-theoretic foundation of quantum theory. We investigate the role of mutually exclusive physical states in the recent work of stronger uncertainty relations for all incompatible observables by Maccone and Pati and generalize the weighted uncertainty relation to the product form as well as their multi-observable analogues. The new bounds capture both incompatibility and mutual exclusiveness, and are tighter compared with the existing bounds.

  20. Mutually Exclusive Uncertainty Relations.

    PubMed

    Xiao, Yunlong; Jing, Naihuan

    2016-11-08

    The uncertainty principle is one of the characteristic properties of quantum theory based on incompatibility. Apart from the incompatible relation of quantum states, mutual exclusiveness is another remarkable phenomenon in the information-theoretic foundation of quantum theory. We investigate the role of mutually exclusive physical states in the recent work of stronger uncertainty relations for all incompatible observables by Maccone and Pati and generalize the weighted uncertainty relation to the product form as well as their multi-observable analogues. The new bounds capture both incompatibility and mutual exclusiveness, and are tighter compared with the existing bounds.

  1. Mutually Exclusive Uncertainty Relations

    PubMed Central

    Xiao, Yunlong; Jing, Naihuan

    2016-01-01

    The uncertainty principle is one of the characteristic properties of quantum theory based on incompatibility. Apart from the incompatible relation of quantum states, mutual exclusiveness is another remarkable phenomenon in the information-theoretic foundation of quantum theory. We investigate the role of mutually exclusive physical states in the recent work of stronger uncertainty relations for all incompatible observables by Maccone and Pati and generalize the weighted uncertainty relation to the product form as well as their multi-observable analogues. The new bounds capture both incompatibility and mutual exclusiveness, and are tighter compared with the existing bounds. PMID:27824161

  2. Optimal Universal Uncertainty Relations

    PubMed Central

    Li, Tao; Xiao, Yunlong; Ma, Teng; Fei, Shao-Ming; Jing, Naihuan; Li-Jost, Xianqing; Wang, Zhi-Xi

    2016-01-01

    We study universal uncertainty relations and present a method called joint probability distribution diagram to improve the majorization bounds constructed independently in [Phys. Rev. Lett. 111, 230401 (2013)] and [J. Phys. A. 46, 272002 (2013)]. The results give rise to state independent uncertainty relations satisfied by any nonnegative Schur-concave functions. On the other hand, a remarkable recent result of entropic uncertainty relation is the direct-sum majorization relation. In this paper, we illustrate our bounds by showing how they provide a complement to that in [Phys. Rev. A. 89, 052115 (2014)]. PMID:27775010

  3. A surrogate-based uncertainty quantification with quantifiable errors

    SciTech Connect

    Bang, Y.; Abdel-Khalik, H. S.

    2012-07-01

    Surrogate models are often employed to reduce the computational cost required to complete uncertainty quantification, where one is interested in propagating input parameter uncertainties throughout a complex engineering model to estimate response uncertainties. An improved surrogate construction approach is introduced here which places a premium on reducing the associated computational cost. Unlike existing methods where the surrogate is constructed first, then employed to propagate uncertainties, the new approach combines both sensitivity and uncertainty information to render further reduction in the computational cost. Mathematically, the reduction is described by a range finding algorithm that identifies a subspace in the parameter space, whereby parameter uncertainties orthogonal to the subspace contribute a negligible amount to the propagated uncertainties. Moreover, the error resulting from the reduction can be upper-bounded. The new approach is demonstrated using a realistic nuclear assembly model and compared to existing methods in terms of computational cost and accuracy of uncertainties. Although we believe the algorithm is general, it will be applied here for linear-based surrogates and Gaussian parameter uncertainties. The generalization to nonlinear models will be detailed in a separate article. (authors)
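
    The following is a schematic numpy sketch of the general idea of combining sensitivity information with a reduced subspace before propagating Gaussian input uncertainties; it is not the authors' range-finding algorithm, and the model, dimensions, and truncation tolerance are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(0)
      n_params = 50

      # illustrative linear(ised) model: response = g . x, with x ~ N(0, C)
      g = rng.normal(size=n_params)                    # sensitivity (gradient) vector
      C = np.diag(rng.uniform(0.1, 1.0, n_params))     # input parameter covariance

      # identify a low-dimensional subspace from (noisy) sensitivity samples via SVD
      G = np.column_stack([g + 0.01 * rng.normal(size=n_params) for _ in range(10)])
      U, s, _ = np.linalg.svd(G, full_matrices=False)
      k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999)) + 1
      Uk = U[:, :k]                                    # retained subspace

      # propagate the uncertainty through the full and the reduced representations
      var_full    = g @ C @ g
      var_reduced = (Uk.T @ g) @ (Uk.T @ C @ Uk) @ (Uk.T @ g)

      # discarded singular values give a schematic (not rigorous) truncation estimate
      err_est = np.sum(s[k:]**2) * np.max(np.diag(C))
      print(var_full, var_reduced, err_est)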

  4. Quantum computing and probability.

    PubMed

    Ferry, David K

    2009-11-25

    Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.

  5. Predictive uncertainty in auditory sequence processing

    PubMed Central

    Hansen, Niels Chr.; Pearce, Marcus T.

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music. PMID:25295018
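
    A small sketch of the entropy computation that underlies "predictive uncertainty" here, assuming a toy first-order Markov model of melodic pitch; the transition counts are invented, and the study itself used an unsupervised, variable-order Markov model:

      import numpy as np

      # toy first-order transition counts between four pitch classes (illustrative)
      counts = np.array([
          [10,  5,  1,  1],
          [ 4, 12,  4,  2],
          [ 1,  6, 10,  6],
          [ 1,  2,  5, 15],
      ], dtype=float)

      probs = counts / counts.sum(axis=1, keepdims=True)   # P(next note | current note)

      def shannon_entropy(p):
          p = p[p > 0]
          return -np.sum(p * np.log2(p))                   # bits

      # predictive uncertainty prior to the onset of the next note, per context
      for i, row in enumerate(probs):
          print(f"context {i}: entropy = {shannon_entropy(row):.3f} bits")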

  6. Communicating scientific uncertainty.

    PubMed

    Fischhoff, Baruch; Davis, Alex L

    2014-09-16

    All science has uncertainty. Unless that uncertainty is communicated effectively, decision makers may put too much or too little faith in it. The information that needs to be communicated depends on the decisions that people face. Are they (i) looking for a signal (e.g., whether to evacuate before a hurricane), (ii) choosing among fixed options (e.g., which medical treatment is best), or (iii) learning to create options (e.g., how to regulate nanotechnology)? We examine these three classes of decisions in terms of how to characterize, assess, and convey the uncertainties relevant to each. We then offer a protocol for summarizing the many possible sources of uncertainty in standard terms, designed to impose a minimal burden on scientists, while gradually educating those whose decisions depend on their work. Its goals are better decisions, better science, and better support for science.

  7. Uncertainty in chemistry.

    PubMed

    Menger, Fredric M

    2010-09-01

    It might come as a disappointment to some chemists, but just as there are uncertainties in physics and mathematics, there are some chemistry questions we may never know the answer to either, suggests Fredric M. Menger.

  8. Communicating scientific uncertainty

    PubMed Central

    Fischhoff, Baruch; Davis, Alex L.

    2014-01-01

    All science has uncertainty. Unless that uncertainty is communicated effectively, decision makers may put too much or too little faith in it. The information that needs to be communicated depends on the decisions that people face. Are they (i) looking for a signal (e.g., whether to evacuate before a hurricane), (ii) choosing among fixed options (e.g., which medical treatment is best), or (iii) learning to create options (e.g., how to regulate nanotechnology)? We examine these three classes of decisions in terms of how to characterize, assess, and convey the uncertainties relevant to each. We then offer a protocol for summarizing the many possible sources of uncertainty in standard terms, designed to impose a minimal burden on scientists, while gradually educating those whose decisions depend on their work. Its goals are better decisions, better science, and better support for science. PMID:25225390

  9. Uncertainty in QSAR predictions.

    PubMed

    Sahlin, Ullrika

    2013-03-01

    It is relevant to consider uncertainty in individual predictions when quantitative structure-activity (or property) relationships (QSARs) are used to support decisions of high societal concern. Successful communication of uncertainty in the integration of QSARs in chemical safety assessment under the EU Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) system can be facilitated by a common understanding of how to define, characterise, assess and evaluate uncertainty in QSAR predictions. A QSAR prediction is, compared to experimental estimates, subject to added uncertainty that comes from the use of a model instead of empirically-based estimates. A framework is provided to aid the distinction between different types of uncertainty in a QSAR prediction: quantitative, i.e. for regressions related to the error in a prediction and characterised by a predictive distribution; and qualitative, by expressing our confidence in the model for predicting a particular compound based on a quantitative measure of predictive reliability. It is possible to assess a quantitative (i.e. probabilistic) predictive distribution, given the supervised learning algorithm, the underlying QSAR data, a probability model for uncertainty and a statistical principle for inference. The integration of QSARs into risk assessment may be facilitated by the inclusion of the assessment of predictive error and predictive reliability into the "unambiguous algorithm", as outlined in the second OECD principle.

  10. Uncertainty Analysis for a Jet Flap Airfoil

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Cruz, Josue

    2006-01-01

    An analysis of variance (ANOVA) study was performed to quantify the potential uncertainties of lift and pitching moment coefficient calculations from a computational fluid dynamics code, relative to an experiment, for a jet flap airfoil configuration. Uncertainties due to a number of factors including grid density, angle of attack and jet flap blowing coefficient were examined. The ANOVA software produced a numerical model of the input coefficient data, as functions of the selected factors, to a user-specified order (linear, 2-factor interaction, quadratic, or cubic). Residuals between the model and actual data were also produced at each of the input conditions, and uncertainty confidence intervals (in the form of Least Significant Differences or LSD) for experimental, computational, and combined experimental / computational data sets were computed. The LSD bars indicate the smallest resolvable differences in the functional values (lift or pitching moment coefficient) attributable solely to changes in independent variable, given just the input data points from selected data sets. The software also provided a collection of diagnostics which evaluate the suitability of the input data set for use within the ANOVA process, and which examine the behavior of the resultant data, possibly suggesting transformations which should be applied to the data to reduce the LSD. The results illustrate some of the key features of, and results from, the uncertainty analysis studies, including the use of both numerical (continuous) and categorical (discrete) factors, the effects of the number and range of the input data points, and the effects of the number of factors considered simultaneously.
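
    A compact sketch of this kind of ANOVA study, using statsmodels as a stand-in for the (unnamed) ANOVA software, with synthetic lift-coefficient data over a categorical grid-density factor and continuous angle-of-attack and blowing-coefficient factors; the model form, the data, and the LSD convention are illustrative assumptions:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf
      from statsmodels.stats.anova import anova_lm
      from scipy.stats import t

      rng = np.random.default_rng(1)
      grids = np.repeat(["coarse", "medium", "fine"], 12)      # categorical factor
      alpha = np.tile(np.linspace(0.0, 10.0, 12), 3)           # angle of attack, deg
      cmu   = rng.uniform(0.0, 0.1, size=36)                   # blowing coefficient
      cl    = 0.1 * alpha + 2.0 * cmu + rng.normal(0, 0.02, 36)

      df = pd.DataFrame({"grid": grids, "alpha": alpha, "cmu": cmu, "cl": cl})

      # quadratic response-surface model of CL in the selected factors
      model = smf.ols("cl ~ C(grid) + alpha + I(alpha**2) + cmu", data=df).fit()
      print(anova_lm(model, typ=2))

      # Least Significant Difference: smallest resolvable change in CL
      mse, dof, n_rep = model.mse_resid, model.df_resid, 12
      lsd = t.ppf(0.975, dof) * np.sqrt(2.0 * mse / n_rep)
      print("LSD for CL:", lsd)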

  11. Calculational methodology and associated uncertainties: Sensitivity and uncertainty analysis of reactor performance parameters

    SciTech Connect

    Kujawski, E.; Weisbin, C.R.

    1982-01-01

    This chapter considers the calculational methodology and associated uncertainties both for the design of large LMFBR's and the analysis of critical assemblies (fast critical experiments) as performed by several groups within the US. Discusses cross-section processing; calculational methodology for the design problem; core physics computations; design-oriented approximations; benchmark analyses; and determination of calculational corrections and associated uncertainties for a critical assembly. Presents a detailed analysis of the sources of calculational uncertainties for the critical assembly ZPR-6/7 to illustrate the quantitative assessment of calculational correction factors and uncertainties. Examines calculational uncertainties that arise from many different sources including intrinsic limitations of computational methods; design-oriented approximations related to reactor modeling; computational capability and code availability; economic limitations; and the skill of the reactor analyst. Emphasizes that the actual design uncertainties in most of the parameters, with the possible exception of burnup, are likely to be less than might be indicated by the results presented in this chapter because reactor designers routinely apply bias factors (usually derived from critical experiments) to their calculated results.

  12. Remaining Useful Life Estimation in Prognosis: An Uncertainty Propagation Problem

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Goebel, Kai

    2013-01-01

    The estimation of remaining useful life is significant in the context of prognostics and health monitoring, and the prediction of remaining useful life is essential for online operations and decision-making. However, it is challenging to accurately predict the remaining useful life in practical aerospace applications due to the presence of various uncertainties that affect prognostic calculations, and in turn, render the remaining useful life prediction uncertain. It is challenging to identify and characterize the various sources of uncertainty in prognosis, understand how each of these sources of uncertainty affect the uncertainty in the remaining useful life prediction, and thereby compute the overall uncertainty in the remaining useful life prediction. In order to achieve these goals, this paper proposes that the task of estimating the remaining useful life must be approached as an uncertainty propagation problem. In this context, uncertainty propagation methods which are available in the literature are reviewed, and their applicability to prognostics and health monitoring are discussed.
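
    A minimal Monte Carlo sketch of treating remaining-useful-life estimation as an uncertainty propagation problem, assuming a hypothetical linear degradation model with an uncertain current damage level and degradation rate; the model and all numbers are illustrative, not from the paper:

      import numpy as np

      rng = np.random.default_rng(42)
      n_samples = 100_000

      # uncertain current damage level and degradation rate (illustrative)
      damage_now = rng.normal(loc=0.60, scale=0.05, size=n_samples)             # fraction of life
      rate       = rng.lognormal(mean=np.log(0.01), sigma=0.3, size=n_samples)  # per hour

      failure_threshold = 1.0
      rul = (failure_threshold - damage_now) / rate    # hours until threshold is reached

      print("median RUL [h]:", np.median(rul))
      print("90% interval  :", np.percentile(rul, [5, 95]))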

  13. Uncertainty quantification of measured quantities for a HCCI engine: composition or temperatures

    SciTech Connect

    Petitpas, Guillaume; Whitesides, Russell

    2016-12-15

    UQHCCI_1 computes the measurement uncertainties of an HCCI engine test bench using the pressure trace and the estimated uncertainties of the measured quantities as inputs, and then propagates them through Bayesian inference and a mixing model.

  14. Quantifying uncertainty in Bayesian calibrated animal-to-human PBPK models with informative prior distributions

    EPA Science Inventory

    Understanding and quantifying the uncertainty of model parameters and predictions has gained more interest in recent years with the increased use of computational models in chemical risk assessment. Fully characterizing the uncertainty in risk metrics derived from linked quantita...

  15. Network planning under uncertainties

    NASA Astrophysics Data System (ADS)

    Ho, Kwok Shing; Cheung, Kwok Wai

    2008-11-01

    One of the main focuses of network planning is the optimization of network resources required to build a network under a certain traffic demand projection. Traditionally, the inputs to this type of network planning problem are treated as deterministic. In reality, the varying traffic requirements and fluctuations in network resources can cause uncertainties in the decision models. The failure to include the uncertainties in the network design process can severely affect the feasibility and economics of the network. Therefore, it is essential to find a solution that can be insensitive to the uncertain conditions during the network planning process. As early as the 1960s, network planning problems with varying traffic requirements over time were studied. This kind of network planning problem is still being actively researched today, especially for VPN network design. Another kind of network planning problem under uncertainty, studied actively in the past decade, addresses the fluctuations in network resources. One such hotly pursued research topic is survivable network planning. It considers the design of a network under uncertainties brought by the fluctuations in topology to meet the requirement that the network remains intact up to a certain number of faults occurring anywhere in the network. Recently, the authors proposed a new planning methodology called Generalized Survivable Network that tackles the network design problem under both varying traffic requirements and fluctuations of topology. Although all the above network planning problems handle various kinds of uncertainties, it is hard to find a generic framework under more general uncertainty conditions that allows a more systematic way to solve the problems. With a unified framework, the seemingly diverse models and algorithms can be intimately related and possibly more insights and improvements can be brought out for solving the problem. This motivates us to seek a

  16. Interpreting uncertainty terms.

    PubMed

    Holtgraves, Thomas

    2014-08-01

    Uncertainty terms (e.g., some, possible, good, etc.) are words that do not have a fixed referent and hence are relatively ambiguous. A model is proposed that specifies how, from the hearer's perspective, recognition of facework as a potential motive for the use of an uncertainty term results in a calibration of the intended meaning of that term. Four experiments are reported that examine the impact of face threat, and the variables that affect it (e.g., power), on the manner in which a variety of uncertainty terms (probability terms, quantifiers, frequency terms, etc.) are interpreted. Overall, the results demonstrate that increased face threat in a situation will result in a more negative interpretation of an utterance containing an uncertainty term. That the interpretation of so many different types of uncertainty terms is affected in the same way suggests the operation of a fundamental principle of language use, one with important implications for the communication of risk, subjective experience, and so on.

  17. Measurement uncertainty relations

    SciTech Connect

    Busch, Paul; Lahti, Pekka; Werner, Reinhard F.

    2014-04-15

    Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.

  18. SU(2) uncertainty limits

    NASA Astrophysics Data System (ADS)

    Shabbir, Saroosh; Björk, Gunnar

    2016-05-01

    Although progress has been made recently in defining nontrivial uncertainty limits for the SU(2) group, a description of the intermediate states bound by these limits remains lacking. In this paper we enumerate possible uncertainty relations for the SU(2) group that involve all three observables and that are, moreover, invariant under SU(2) transformations. We demonstrate that these relations, however, even taken as a group, do not provide sharp, saturable bounds. To find sharp bounds, we systematically calculate the variance of the SU(2) operators for all pure states belonging to the N =2 and N =3 polarization excitation manifold (corresponding to spin 1 and spin 3/2). Lastly, and perhaps counter to expectation, we note that even pure states can reach the maximum uncertainty limit.
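
    As a small numerical cross-check of the variance bookkeeping involved, the sketch below evaluates the three SU(2) operator variances for random pure spin-1 (N = 2) states and verifies that their sum equals s(s+1) - |<S>|^2, which follows from Sx^2 + Sy^2 + Sz^2 = s(s+1)I; it does not reproduce the paper's enumeration of invariant uncertainty relations:

      import numpy as np

      rng = np.random.default_rng(7)
      sq2 = np.sqrt(2.0)

      # spin-1 angular momentum matrices (hbar = 1)
      Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / sq2
      Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / sq2
      Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

      def variance(op, psi):
          mean = np.vdot(psi, op @ psi).real
          mean_sq = np.vdot(psi, op @ op @ psi).real
          return mean_sq - mean**2

      for _ in range(3):
          psi = rng.normal(size=3) + 1j * rng.normal(size=3)
          psi /= np.linalg.norm(psi)                       # random pure state
          total = sum(variance(S, psi) for S in (Sx, Sy, Sz))
          spin_vec = np.array([np.vdot(psi, S @ psi).real for S in (Sx, Sy, Sz)])
          print(total, 2.0 - spin_vec @ spin_vec)          # the two numbers agree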

  19. Uncertainty and Dimensional Calibrations

    PubMed Central

    Doiron, Ted; Stoup, John

    1997-01-01

    The calculation of uncertainty for a measurement is an effort to set reasonable bounds for the measurement result according to standardized rules. Since every measurement produces only an estimate of the answer, the primary requisite of an uncertainty statement is to inform the reader of how sure the writer is that the answer is in a certain range. This report explains how we have implemented these rules for dimensional calibrations of nine different types of gages: gage blocks, gage wires, ring gages, gage balls, roundness standards, optical flats, indexing tables, angle blocks, and sieves. PMID:27805114

  20. A Certain Uncertainty

    NASA Astrophysics Data System (ADS)

    Silverman, Mark P.

    2014-07-01

    1. Tools of the trade; 2. The 'fundamental problem' of a practical physicist; 3. Mother of all randomness I: the random disintegration of matter; 4. Mother of all randomness II: the random creation of light; 5. A certain uncertainty; 6. Doing the numbers: nuclear physics and the stock market; 7. On target: uncertainties of projectile flight; 8. The guesses of groups; 9. The random flow of energy I: power to the people; 10. The random flow of energy II: warning from the weather underground; Index.

  1. The legacy of uncertainty

    NASA Technical Reports Server (NTRS)

    Brown, Laurie M.

    1993-01-01

    An historical account is given of the circumstances whereby the uncertainty relations were introduced into physics by Heisenberg. The criticisms of QED on measurement-theoretical grounds by Landau and Peierls are then discussed, as well as the response to them by Bohr and Rosenfeld. Finally, some examples are given of how the new freedom to advance radical proposals, in part the result of the revolution brought about by 'uncertainty,' was implemented in dealing with the new phenomena encountered in elementary particle physics in the 1930's.

  2. Uncertainty and Dimensional Calibrations.

    PubMed

    Doiron, Ted; Stoup, John

    1997-01-01

    The calculation of uncertainty for a measurement is an effort to set reasonable bounds for the measurement result according to standardized rules. Since every measurement produces only an estimate of the answer, the primary requisite of an uncertainty statement is to inform the reader of how sure the writer is that the answer is in a certain range. This report explains how we have implemented these rules for dimensional calibrations of nine different types of gages: gage blocks, gage wires, ring gages, gage balls, roundness standards, optical flats, indexing tables, angle blocks, and sieves.

  3. Weighted Uncertainty Relations

    PubMed Central

    Xiao, Yunlong; Jing, Naihuan; Li-Jost, Xianqing; Fei, Shao-Ming

    2016-01-01

    Recently, Maccone and Pati have given two stronger uncertainty relations based on the sum of variances and one of them is nontrivial when the quantum state is not an eigenstate of the sum of the observables. We derive a family of weighted uncertainty relations to provide an optimal lower bound for all situations and remove the restriction on the quantum state. Generalization to multi-observable cases is also given and an optimal lower bound for the weighted sum of the variances is obtained in general quantum situation. PMID:26984295

  4. Quantification and Propagation of Nuclear Data Uncertainties

    NASA Astrophysics Data System (ADS)

    Rising, Michael E.

    The use of several uncertainty quantification and propagation methodologies is investigated in the context of the prompt fission neutron spectrum (PFNS) uncertainties and its impact on critical reactor assemblies. First, the first-order, linear Kalman filter is used as a nuclear data evaluation and uncertainty quantification tool combining available PFNS experimental data and a modified version of the Los Alamos (LA) model. The experimental covariance matrices, not generally given in the EXFOR database, are computed using the GMA methodology used by the IAEA to establish more appropriate correlations within each experiment. Then, using systematics relating the LA model parameters across a suite of isotopes, the PFNS for both the uranium and plutonium actinides are evaluated leading to a new evaluation including cross-isotope correlations. Next, an alternative evaluation approach, the unified Monte Carlo (UMC) method, is studied for the evaluation of the PFNS for the n(0.5 MeV)+Pu-239 fission reaction and compared to the Kalman filter. The UMC approach to nuclear data evaluation is implemented in a variety of ways to test convergence toward the Kalman filter results and to determine the nonlinearities present in the LA model. Ultimately, the UMC approach is shown to be comparable to the Kalman filter for a realistic data evaluation of the PFNS and is capable of capturing the nonlinearities present in the LA model. Next, the impact that the PFNS uncertainties have on important critical assemblies is investigated. Using the PFNS covariance matrices in the ENDF/B-VII.1 nuclear data library, the uncertainties of the effective multiplication factor, leakage, and spectral indices of the Lady Godiva and Jezebel critical assemblies are quantified. Using principal component analysis on the PFNS covariance matrices results in needing only 2-3 principal components to retain the PFNS uncertainties. Then, using the polynomial chaos expansion (PCE) on the uncertain output
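
    A small sketch of the principal-component step mentioned above: eigendecompose a covariance matrix and keep the few leading components that retain most of the variance. The covariance below is synthetic and stands in for the ENDF/B-VII.1 PFNS covariance matrices, which are not reproduced here:

      import numpy as np

      rng = np.random.default_rng(3)
      n_groups = 30

      # synthetic, symmetric positive semi-definite covariance for a 30-group PFNS
      A = rng.normal(size=(n_groups, 3))         # strong low-rank correlation structure
      cov = A @ A.T + 1e-3 * np.eye(n_groups)

      # principal component analysis of the covariance matrix
      eigval, eigvec = np.linalg.eigh(cov)
      eigval, eigvec = eigval[::-1], eigvec[:, ::-1]        # descending order

      explained = np.cumsum(eigval) / np.sum(eigval)
      k = int(np.searchsorted(explained, 0.99)) + 1
      print(f"{k} principal components retain {explained[k-1]:.1%} of the variance")

      # reduced-rank representation used to sample uncertain spectra
      L = eigvec[:, :k] * np.sqrt(eigval[:k])
      samples = L @ rng.normal(size=(k, 1000))   # zero-mean perturbations of the spectrum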

  5. Uncertainty in NIST Force Measurements.

    PubMed

    Bartel, Tom

    2005-01-01

    This paper focuses upon the uncertainty of force calibration measurements at the National Institute of Standards and Technology (NIST). The uncertainty of the realization of force for the national deadweight force standards at NIST is discussed, as well as the uncertainties associated with NIST's voltage-ratio measuring instruments and with the characteristics of transducers being calibrated. The combined uncertainty is related to the uncertainty of dissemination for force transfer standards sent to NIST for calibration.

  6. An uncertainty inventory demonstration - a primary step in uncertainty quantification

    SciTech Connect

    Langenbrunner, James R.; Booker, Jane M; Hemez, Francois M; Salazar, Issac F; Ross, Timothy J

    2009-01-01

    Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore, PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with words, 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.

  7. Uncertainties in repository modeling

    SciTech Connect

    Wilson, J.R.

    1996-12-31

    The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend the regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.

  8. Uncertainties in Transfer Impedance Calculations

    NASA Astrophysics Data System (ADS)

    Schippers, H.; Verpoorte, J.

    2016-05-01

    The shielding effectiveness of metal braids of cables is governed by the geometry and the materials of the braid. The shielding effectiveness can be characterised by the transfer impedance of the metal braid. Analytical models for the transfer impedance contain in general two components, one representing diffusion of electromagnetic energy through the metal braid, and a second part representing leakage of magnetic fields through the braid. Possible sources of uncertainties in the modelling are inaccurate input data (for instance, the exact size of the braid diameter or wire diameter are not known) and imperfections in the computational model. The aim of the present paper is to estimate effects of variations of input data on the calculated transfer impedance.

  9. Strategy under uncertainty.

    PubMed

    Courtney, H; Kirkland, J; Viguerie, P

    1997-01-01

    At the heart of the traditional approach to strategy lies the assumption that by applying a set of powerful analytic tools, executives can predict the future of any business accurately enough to allow them to choose a clear strategic direction. But what happens when the environment is so uncertain that no amount of analysis will allow us to predict the future? What makes for a good strategy in highly uncertain business environments? The authors, consultants at McKinsey & Company, argue that uncertainty requires a new way of thinking about strategy. All too often, they say, executives take a binary view: either they underestimate uncertainty to come up with the forecasts required by their companies' planning or capital-budgeting processes, or they overestimate it, abandon all analysis, and go with their gut instinct. The authors outline a new approach that begins by making a crucial distinction among four discrete levels of uncertainty that any company might face. They then explain how a set of generic strategies--shaping the market, adapting to it, or reserving the right to play at a later time--can be used in each of the four levels. And they illustrate how these strategies can be implemented through a combination of three basic types of actions: big bets, options, and no-regrets moves. The framework can help managers determine which analytic tools can inform decision making under uncertainty--and which cannot. At a broader level, it offers executives a discipline for thinking rigorously and systematically about uncertainty and its implications for strategy.

  10. Multi-scenario modelling of uncertainty in stochastic chemical systems

    SciTech Connect

    Evans, R. David; Ricardez-Sandoval, Luis A.

    2014-09-15

    Uncertainty analysis has not been well studied at the molecular scale, despite extensive knowledge of uncertainty in macroscale systems. The ability to predict the effect of uncertainty allows for robust control of small scale systems such as nanoreactors, surface reactions, and gene toggle switches. However, it is difficult to model uncertainty in such chemical systems as they are stochastic in nature, and require a large computational cost. To address this issue, a new model of uncertainty propagation in stochastic chemical systems, based on the Chemical Master Equation, is proposed in the present study. The uncertain solution is approximated by a composite state comprised of the averaged effect of samples from the uncertain parameter distributions. This model is then used to study the effect of uncertainty on an isomerization system and a two gene regulation network called a repressilator. The results of this model show that uncertainty in stochastic systems is dependent on both the uncertain distribution, and the system under investigation. -- Highlights: •A method to model uncertainty on stochastic systems was developed. •The method is based on the Chemical Master Equation. •Uncertainty in an isomerization reaction and a gene regulation network was modelled. •Effects were significant and dependent on the uncertain input and reaction system. •The model was computationally more efficient than Kinetic Monte Carlo.
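
    A plain sampling baseline for the kind of problem the composite-state model addresses: sample an uncertain forward rate for a reversible isomerization A <-> B, run a Gillespie stochastic simulation for each sample, and average the final states. This is not the authors' Chemical-Master-Equation-based uncertainty model, and all rate constants and copy numbers are illustrative:

      import numpy as np

      rng = np.random.default_rng(5)

      def gillespie_isomerization(k_ab, k_ba, n_a0, n_b0, t_end):
          """Stochastic simulation of A <-> B; returns the copy number of A at t_end."""
          n_a, n_b, t = n_a0, n_b0, 0.0
          while True:
              a1, a2 = k_ab * n_a, k_ba * n_b      # reaction propensities
              a0 = a1 + a2
              if a0 == 0.0:
                  return n_a
              t += rng.exponential(1.0 / a0)       # time to next reaction
              if t > t_end:
                  return n_a
              if rng.random() < a1 / a0:
                  n_a, n_b = n_a - 1, n_b + 1      # A -> B fires
              else:
                  n_a, n_b = n_a + 1, n_b - 1      # B -> A fires

      # parametric uncertainty in the forward rate (illustrative log-normal)
      k_ab_samples = rng.lognormal(mean=np.log(1.0), sigma=0.2, size=200)

      # composite (averaged) state over the uncertain-parameter samples
      final_a = [gillespie_isomerization(k, 0.5, n_a0=100, n_b0=0, t_end=5.0)
                 for k in k_ab_samples]
      print("mean A copies:", np.mean(final_a), " std:", np.std(final_a))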

  11. Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.

  12. Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.

  13. A review of uncertainty propagation in orbital mechanics

    NASA Astrophysics Data System (ADS)

    Luo, Ya-zhong; Yang, Zhen

    2017-02-01

    Orbital uncertainty propagation plays an important role in space situational awareness related missions such as tracking and data association, conjunction assessment, sensor resource management and anomaly detection. Linear models and Monte Carlo simulation were primarily used to propagate uncertainties. However, due to the nonlinear nature of orbital dynamics, problems such as low precision and intensive computation have greatly hampered the application of these methods. Aiming at solving these problems, many nonlinear uncertainty propagators have been proposed in the past two decades. To motivate this research area and facilitate the development of orbital uncertainty propagation, this paper summarizes the existing linear and nonlinear uncertainty propagators and their associated applications in the field of orbital mechanics. Frameworks of methods for orbital uncertainty propagation, the advantages and drawbacks of different methods, as well as potential directions for future efforts are also discussed.
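
    A toy comparison of the two classical approaches mentioned above, linear covariance propagation versus Monte Carlo, using a simple polar-to-Cartesian mapping as a stand-in for an orbital state transformation; it is meant only to show how linearization degrades as the nonlinearity grows, not to model any real orbit:

      import numpy as np

      rng = np.random.default_rng(11)

      # uncertain "orbit-like" state in polar form: radius r and angle theta (illustrative)
      mean = np.array([7000.0, 0.1])                  # km, rad
      cov = np.diag([5.0**2, 0.05**2])

      def to_cartesian(x):
          r, th = x[..., 0], x[..., 1]
          return np.stack([r * np.cos(th), r * np.sin(th)], axis=-1)

      # 1) linear propagation: y_cov = J cov J^T with the Jacobian at the mean
      r, th = mean
      J = np.array([[np.cos(th), -r * np.sin(th)],
                    [np.sin(th),  r * np.cos(th)]])
      cov_lin = J @ cov @ J.T

      # 2) Monte Carlo propagation of the same uncertainty
      samples = rng.multivariate_normal(mean, cov, size=200_000)
      cov_mc = np.cov(to_cartesian(samples).T)

      print("linear:\n", cov_lin)
      print("Monte Carlo:\n", cov_mc)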

  14. Optimal test selection for prediction uncertainty reduction

    DOE PAGES

    Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel

    2016-12-02

    Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.

  15. Optimal test selection for prediction uncertainty reduction

    SciTech Connect

    Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel

    2016-12-02

    Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.
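
    A brute-force sketch of the constrained discrete selection idea: enumerate how many calibration and validation tests to perform under a budget, score each allocation with a simple prediction-uncertainty model in which variance shrinks with added data, and keep the best feasible allocation. The cost figures and the variance model are placeholders, not the authors' MEMS formulation:

      from itertools import product

      # placeholder costs and budget (arbitrary units)
      COST_CAL, COST_VAL, BUDGET = 3.0, 5.0, 30.0

      def prediction_variance(n_cal, n_val):
          """Toy model: calibration and validation data both shrink the
          prediction variance, with diminishing returns."""
          calibration_term = 1.0 / (1.0 + n_cal)
          validation_term = 0.5 / (1.0 + n_val)
          return calibration_term + validation_term

      best = None
      for n_cal, n_val in product(range(0, 11), repeat=2):
          cost = n_cal * COST_CAL + n_val * COST_VAL
          if cost > BUDGET:
              continue                              # infeasible under the budget
          var = prediction_variance(n_cal, n_val)
          if best is None or var < best[0]:
              best = (var, n_cal, n_val, cost)

      var, n_cal, n_val, cost = best
      print(f"best allocation: {n_cal} calibration + {n_val} validation tests "
            f"(cost {cost}, predicted variance {var:.3f})")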

  16. Estimating uncertainty of inference for validation

    SciTech Connect

    Booker, Jane M; Langenbrunner, James R; Hemez, Francois M; Ross, Timothy J

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Imbedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10{sup 13}-10{sup 14} neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  17. Asymmetric Uncertainty Expression for High Gradient Aerodynamics

    NASA Technical Reports Server (NTRS)

    Pinier, Jeremy T

    2012-01-01

    When the physics of the flow around an aircraft changes very abruptly either in time or space (e.g., flow separation/reattachment, boundary layer transition, unsteadiness, shocks, etc.), the measurements that are performed in a simulated environment like a wind tunnel test or a computational simulation will most likely incorrectly predict the exact location of where (or when) the change in physics happens. There are many reasons for this, including the error introduced by simulating a real system at a smaller scale and at non-ideal conditions, or the error due to turbulence models in a computational simulation. The uncertainty analysis principles that have been developed and are being implemented today do not fully account for uncertainty in the knowledge of the location of abrupt physics changes or sharp gradients, leading to a potentially underestimated uncertainty in those areas. To address this problem, a new asymmetric aerodynamic uncertainty expression containing an extra term to account for a phase-uncertainty, the magnitude of which is emphasized in the high-gradient aerodynamic regions, is proposed in this paper. Additionally, based on previous work, a method for dispersing aerodynamic data within asymmetric uncertainty bounds in a more realistic way has been developed for use within Monte Carlo-type analyses.

  18. Uncertainty Analysis of Air Radiation for Lunar Return Shock Layers

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Johnston, Christopher O.

    2008-01-01

    By leveraging a new uncertainty markup technique, two risk analysis methods are used to compute the uncertainty of lunar-return shock layer radiation predicted by the High temperature Aerothermodynamic Radiation Algorithm (HARA). The effects of epistemic uncertainty, or uncertainty due to a lack of knowledge, are considered for the following modeling parameters: atomic line oscillator strengths, atomic line Stark broadening widths, atomic photoionization cross sections, negative ion photodetachment cross sections, molecular band oscillator strengths, and electron impact excitation rates. First, a simplified shock layer problem consisting of two constant-property equilibrium layers is considered. The results of this simplified problem show that the atomic nitrogen oscillator strengths and Stark broadening widths in both the vacuum ultraviolet and infrared spectral regions, along with the negative ion continuum, are the dominant uncertainty contributors. Next, three variable property stagnation-line shock layer cases are analyzed: a typical lunar return case and two Fire II cases. For the near-equilibrium lunar return and Fire 1643-second cases, the resulting uncertainties are very similar to the simplified case. Conversely, the relatively nonequilibrium 1636-second case shows significantly larger influence from electron impact excitation rates of both atoms and molecules. For all cases, the total uncertainty in radiative heat flux to the wall due to epistemic uncertainty in modeling parameters is 30% as opposed to the erroneously-small uncertainty levels (plus or minus 6%) found when treating model parameter uncertainties as aleatory (due to chance) instead of epistemic (due to lack of knowledge).
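
    A toy illustration of the epistemic-versus-aleatory distinction drawn above: for an invented multiplicative flux model, an interval (epistemic) treatment takes worst-case corners of the parameter ranges, while an aleatory treatment samples the same ranges as probability distributions and reports a much narrower spread. The model and parameter ranges are illustrative only:

      import numpy as np
      from itertools import product

      rng = np.random.default_rng(2)

      # nominal multiplicative model: flux = f0 * p1 * p2 * p3 (illustrative)
      f0 = 100.0                                        # W/cm^2, nominal radiative flux
      ranges = [(0.8, 1.3), (0.9, 1.2), (0.85, 1.25)]   # parameter uncertainty factors

      # epistemic treatment: interval analysis over all corner combinations
      corner_fluxes = [f0 * np.prod(c) for c in product(*ranges)]
      print("interval bounds: %.1f to %.1f" % (min(corner_fluxes), max(corner_fluxes)))

      # aleatory treatment: sample each factor uniformly and look at the spread
      factors = np.column_stack([rng.uniform(lo, hi, 100_000) for lo, hi in ranges])
      fluxes = f0 * factors.prod(axis=1)
      print("aleatory 2.5-97.5%% range: %.1f to %.1f"
            % tuple(np.percentile(fluxes, [2.5, 97.5])))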

  19. Radar stage uncertainty

    USGS Publications Warehouse

    Fulford, J.M.; Davies, W.J.

    2005-01-01

    The U.S. Geological Survey is investigating the performance of radars used for stage (or water-level) measurement. This paper presents a comparison of estimated uncertainties and data for radar water-level measurements with float, bubbler, and wire weight water-level measurements. The radar sensor was also temperature-tested in a laboratory. The uncertainty estimates indicate that radar measurements are more accurate than uncorrected pressure sensors at higher water stages, but are less accurate than pressure sensors at low stages. Field data at two sites indicate that radar sensors may have a small negative bias. Comparison of field radar measurements with wire weight measurements found that the radar tends to measure slightly lower values as stage increases. Copyright ASCE 2005.

  20. Uncertainties in transpiration estimates.

    PubMed

    Coenders-Gerrits, A M J; van der Ent, R J; Bogaard, T A; Wang-Erlandsson, L; Hrachowitz, M; Savenije, H H G

    2014-02-13

    arising from S. Jasechko et al. Nature 496, 347-350 (2013); doi:10.1038/nature11983. How best to assess the respective importance of plant transpiration over evaporation from open waters, soils and short-term storage such as tree canopies and understories (interception) has long been debated. On the basis of data from lake catchments, Jasechko et al. conclude that transpiration accounts for 80-90% of total land evaporation globally (Fig. 1a). However, another choice of input data, together with more conservative accounting of the related uncertainties, reduces and widens the transpiration ratio estimation to 35-80%. Hence, climate models do not necessarily conflict with observations, but more measurements on the catchment scale are needed to reduce the uncertainty range. There is a Reply to this Brief Communications Arising by Jasechko, S. et al. Nature 506, http://dx.doi.org/10.1038/nature12926 (2014).

  1. Assessment of Radiative Heating Uncertainty for Hyperbolic Earth Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Mazaheri, Alireza; Gnoffo, Peter A.; Kleb, W. L.; Sutton, Kenneth; Prabhu, Dinesh K.; Brandis, Aaron M.; Bose, Deepak

    2011-01-01

    This paper investigates the shock-layer radiative heating uncertainty for hyperbolic Earth entry, with the main focus being a Mars return. In Part I of this work, a baseline simulation approach involving the LAURA Navier-Stokes code with coupled ablation and radiation is presented, with the HARA radiation code being used for the radiation predictions. Flight cases representative of peak-heating Mars or asteroid return are defined and the strong influence of coupled ablation and radiation on their aerothermodynamic environments is shown. Structural uncertainties inherent in the baseline simulations are identified, with turbulence modeling, precursor absorption, grid convergence, and radiation transport uncertainties combining for a +34% and -24% structural uncertainty on the radiative heating. A parametric uncertainty analysis, which assumes interval uncertainties, is presented. This analysis accounts for uncertainties in the radiation models as well as heat of formation uncertainties in the flow field model. Discussions and references are provided to support the uncertainty range chosen for each parameter. A parametric uncertainty of +47.3% and -28.3% is computed for the stagnation-point radiative heating for the 15 km/s Mars-return case. A breakdown of the largest individual uncertainty contributors is presented, which includes C3 Swings cross-section, photoionization edge shift, and Opacity Project atomic lines. Combining the structural and parametric uncertainty components results in a total uncertainty of +81.3% and -52.3% for the Mars-return case. In Part II, the computational technique and uncertainty analysis presented in Part I are applied to 1960s era shock-tube and constricted-arc experimental cases. It is shown that experiments contain shock layer temperatures and radiative flux values relevant to the Mars-return cases of present interest. Comparisons between the predictions and measurements, accounting for the uncertainty in both, are made for a range
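
    The quoted totals are consistent with adding the structural and parametric components side by side (+34% + 47.3% = +81.3%, -24% - 28.3% = -52.3%). The snippet below simply reproduces that arithmetic; whether the authors used plain addition or a more formal interval argument is an assumption here.

```python
# Combine asymmetric uncertainty components by summing each side separately.
# This reproduces the totals quoted in the abstract (+81.3% / -52.3%) from the
# structural (+34% / -24%) and parametric (+47.3% / -28.3%) components.
structural = (+34.0, -24.0)
parametric = (+47.3, -28.3)

total_plus = structural[0] + parametric[0]
total_minus = structural[1] + parametric[1]
print(f"total uncertainty: +{total_plus:.1f}% / {total_minus:.1f}%")  # +81.3% / -52.3%
```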

  2. Aggregating and Communicating Uncertainty.

    DTIC Science & Technology

    1980-04-01

    Indexing excerpt: the report discusses means for identifying and communicating uncertainty. Its Appendix A bibliography cites Ajzen, Icek, "Intuitive Theories of Events and the Effects of Base-Rate Information on Prediction," quoted on attending to the criterion "while disregarding valid but noncausal information."

  3. Uncertainties in climate stabilization

    SciTech Connect

    Wigley, T. M.; Clarke, Leon E.; Edmonds, James A.; Jacoby, H. D.; Paltsev, S.; Pitcher, Hugh M.; Reilly, J. M.; Richels, Richard G.; Sarofim, M. C.; Smith, Steven J.

    2009-11-01

    We explore the atmospheric composition, temperature and sea level implications of new reference and cost-optimized stabilization emissions scenarios produced using three different Integrated Assessment (IA) models for U.S. Climate Change Science Program (CCSP) Synthesis and Assessment Product 2.1a. We also consider an extension of one of these sets of scenarios out to 2300. Stabilization is defined in terms of radiative forcing targets for the sum of gases potentially controlled under the Kyoto Protocol. For the most stringent stabilization case (“Level 1” with CO2 concentration stabilizing at about 450 ppm), peak CO2 emissions occur close to today, implying a need for immediate CO2 emissions abatement if we wish to stabilize at this level. In the extended reference case, CO2 stabilizes at 1000 ppm in 2200 – but even to achieve this target requires large and rapid CO2 emissions reductions over the 22nd century. Future temperature changes for the Level 1 stabilization case show considerable uncertainty even when a common set of climate model parameters is used (a result of different assumptions for non-Kyoto gases). Uncertainties are about a factor of three when climate sensitivity uncertainties are accounted for. We estimate the probability that warming from pre-industrial times will be less than 2°C to be about 50%. For one of the IA models, warming in the Level 1 case is greater out to 2050 than in the reference case, due to the effect of decreasing SO2 emissions that occur as a side effect of the policy-driven reduction in CO2 emissions. Sea level rise uncertainties for the Level 1 case are very large, with increases ranging from 12 to 100 cm over 2000 to 2300.

  4. Variants of Uncertainty

    DTIC Science & Technology

    1981-05-15

    Indexing excerpt: "Variants of Uncertainty," Daniel Kahneman (University of British Columbia) and Amos Tversky (Stanford University), May 15, 1981. The excerpt refers to a view (Dennett, 1979) in which different parts have access to different data, assign them different weights and hold different views of the situation, and cites The Probable and the Provable (Oxford: Clarendon Press, 1977); Dennett, D.C., Brainstorms (Hassocks: Harvester, 1979); and Donchin, E., Ritter, W. & McCallum, W.C.

  5. ENHANCED UNCERTAINTY ANALYSIS FOR SRS COMPOSITE ANALYSIS

    SciTech Connect

    Smith, F.; Phifer, M.

    2011-06-30

    The Composite Analysis (CA) performed for the Savannah River Site (SRS) in 2009 (SRS CA 2009) included a simplified uncertainty analysis. The uncertainty analysis in the CA (Smith et al. 2009b) was limited to considering at most five sources in a separate uncertainty calculation performed for each POA. To perform the uncertainty calculations in a reasonable amount of time, the analysis was limited to 400 realizations and 2,000 years of simulated transport time, and the time steps used for the uncertainty analysis were increased from those used in the CA base case analysis. As part of the CA maintenance plan, the Savannah River National Laboratory (SRNL) committed to improving the CA uncertainty/sensitivity analysis. The previous uncertainty analysis was constrained by the standard GoldSim licensing, which limits the user to running at most four Monte Carlo uncertainty calculations (also called realizations) simultaneously. Some of the limitations on the number of realizations that could be practically run and on the simulation time steps were removed by building a cluster of three HP ProLiant Windows servers with a total of 36 64-bit processors and by licensing the GoldSim DP-Plus distributed processing software. This allowed running as many as 35 realizations simultaneously (one processor is reserved as a master process that controls running the realizations). These enhancements to SRNL computing capabilities made feasible an uncertainty analysis that uses 1,000 realizations, uses the time steps employed in the base case CA calculations, includes more sources, and simulates radionuclide transport for 10,000 years. In addition, an importance screening analysis was performed to identify the class of stochastic variables that have the most significant impact on model uncertainty. This analysis ran the uncertainty model separately, testing the response to variations in the following five sets of model parameters: (a) Kd values (72 parameters for the 36 CA elements in

  6. Transforming Binary Uncertainties for Robust Speech Recognition

    DTIC Science & Technology

    2006-08-01

    Indexing excerpt: the report acknowledges an AFRL grant via Veridian and an NSF grant (IIS-0534707) and thanks A. Acero and M. L. Seltzer for helpful suggestions. Cited works include L. Deng, J. Droppo, and A. Acero, "Dynamic compensation of HMM variances using the feature enhancement uncertainty computed from a parametric model of ..."; a paper on amplitude modulation in IEEE Trans. on Neural Networks, vol. 15, pp. 1135-1150, 2004; and X. Huang, A. Acero, and H. Hon, Spoken Language Processing.

  7. Planning Under Uncertainty: Methods and Applications

    DTIC Science & Technology

    2010-06-09

    Indexing excerpt: in collaboration with Schmedders (Chicago) and Judd (Hoover, Stanford) the authors examine issues related to approximating equilibria. Cited works include an optimization framework for conformal radiation treatment planning (with M. Shepard and M. A. Earl), INFORMS Journal on Computing, 19:366-380, 2007, and "Optimization Tools in an Uncertain Environment," Workshop on Modeling Uncertainty in Integrated Assessment Models, Chicago, July 2008.

  8. On estimation of uncertainties in analog measurements

    SciTech Connect

    Adibi, M.M.; Stovall, J.P.

    1990-11-01

    Computer control of power systems requires evaluation of uncertainties in analog measurements and their reduction to a level that allows satisfactory control. In this paper a range of measurements is obtained from a substation to span peak- and light-load conditions and to include bus voltages, phase angles and line flows. Then the redundancies in the measurements are used to formulate several functions relating these measurements with their attending errors. Minimization of these functions has yielded the required corrective coefficients.
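
    The specific functions minimized in the paper are not given in this record; the sketch below shows the generic idea of exploiting measurement redundancy with a weighted least-squares reconciliation that adjusts redundant line-flow measurements to satisfy a balance constraint. The constraint and all numbers are hypothetical.

```python
import numpy as np

# Generic weighted least-squares reconciliation of redundant measurements
# (a textbook sketch, not the paper's specific functions): adjust three line-flow
# measurements that should sum to zero at a bus, weighting by measurement variance.
z = np.array([100.3, -60.9, -38.1])      # measured flows (MW), hypothetical values
sigma = np.array([0.5, 0.4, 0.6])        # assumed standard uncertainties
W_inv = np.diag(sigma**2)                # inverse weights = variances
A = np.array([[1.0, 1.0, 1.0]])          # constraint: flows sum to zero

# Minimize (z_hat - z)' W (z_hat - z) subject to A z_hat = 0
correction = W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, A @ z)
z_hat = z - correction
print("reconciled flows:", z_hat, "constraint residual:", A @ z_hat)
```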

  9. Pauli effects in uncertainty relations

    NASA Astrophysics Data System (ADS)

    Toranzo, I. V.; Sánchez-Moreno, P.; Esquivel, R. O.; Dehesa, J. S.

    2014-10-01

    In this Letter we analyze the effect of the spin dimensionality of a physical system in two mathematical formulations of the uncertainty principle: a generalized Heisenberg uncertainty relation valid for all antisymmetric N-fermion wavefunctions, and the Fisher-information-based uncertainty relation valid for all antisymmetric N-fermion wavefunctions of central potentials. The accuracy of these spin-modified uncertainty relations is examined for all atoms from Hydrogen to Lawrencium in a self-consistent framework.

  10. Dimensionality reduction for uncertainty quantification of nuclear engineering models.

    SciTech Connect

    Roderick, O.; Wang, Z.; Anitescu, M.

    2011-01-01

    The task of uncertainty quantification consists of relating the available information on uncertainties in the model setup to the resulting variation in the outputs of the model. Uncertainty quantification plays an important role in complex simulation models of nuclear engineering, where better understanding of uncertainty results in greater confidence in the model and in the improved safety and efficiency of engineering projects. In our previous work, we have shown that the effect of uncertainty can be approximated by polynomial regression with derivatives (PRD): a hybrid regression method that uses first-order derivatives of the model output as additional fitting conditions for a polynomial expansion. Numerical experiments have demonstrated the advantage of this approach over classical methods of uncertainty analysis: in precision, computational efficiency, or both. To obtain derivatives, we used automatic differentiation (AD) on the simulation code; hand-coded derivatives are acceptable for simpler models. We now present improvements on the method. We use a tuned version of the method of snapshots, a technique based on proper orthogonal decomposition (POD), to set up the reduced order representation of essential information on uncertainty in the model inputs. The automatically obtained sensitivity information is required to set up the method. Dimensionality reduction in combination with PRD allows analysis on a larger dimension of the uncertainty space (>100), at modest computational cost.
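
    A minimal one-dimensional illustration of the polynomial-regression-with-derivatives (PRD) idea described above: function values and first derivatives are stacked into a single least-squares system so that the derivatives act as additional fitting conditions. The model and data are toy stand-ins, not the nuclear-engineering application.

```python
import numpy as np

# PRD sketch in 1-D: fit y ~ c0 + c1*x + c2*x^2 using both function values and
# first derivatives as fitting conditions.  Model and data here are hypothetical.
def true_f(x):  return 1.0 + 0.5 * x - 0.3 * x**2
def true_df(x): return 0.5 - 0.6 * x

x = np.linspace(-1.0, 1.0, 5)
y, dy = true_f(x), true_df(x)

# Design rows for values: [1, x, x^2]; for derivatives: [0, 1, 2x]
A_val = np.column_stack([np.ones_like(x), x, x**2])
A_der = np.column_stack([np.zeros_like(x), np.ones_like(x), 2.0 * x])
A = np.vstack([A_val, A_der])
b = np.concatenate([y, dy])

coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
print("recovered coefficients:", coeffs)   # ~ [1.0, 0.5, -0.3]
```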

  11. Uncertainty Quantification in Climate Modeling and Projection

    SciTech Connect

    Qian, Yun; Jackson, Charles; Giorgi, Filippo; Booth, Ben; Duan, Qingyun; Forest, Chris; Higdon, Dave; Hou, Z. Jason; Huerta, Gabriel

    2016-05-01

    The projection of future climate is one of the most complex problems undertaken by the scientific community. Although scientists have been striving to better understand the physical basis of the climate system and to improve climate models, the overall uncertainty in projections of future climate has not been significantly reduced (e.g., from the IPCC AR4 to AR5). With the rapid increase of complexity in Earth system models, reducing uncertainties in climate projections becomes extremely challenging. Since uncertainties always exist in climate models, interpreting the strengths and limitations of future climate projections is key to evaluating risks, and climate change information for use in Vulnerability, Impact, and Adaptation (VIA) studies should be provided with both well-characterized and well-quantified uncertainty. The workshop aimed at providing participants, many of them from developing countries, information on strategies to quantify the uncertainty in climate model projections and assess the reliability of climate change information for decision-making. The program included a mixture of lectures on fundamental concepts in Bayesian inference and sampling, applications, and hands-on computer laboratory exercises employing software packages for Bayesian inference, Markov Chain Monte Carlo methods, and global sensitivity analyses. The lectures covered a range of scientific issues underlying the evaluation of uncertainties in climate projections, such as the effects of uncertain initial and boundary conditions, uncertain physics, and limitations of observational records. Progress in quantitatively estimating uncertainties in hydrologic, land surface, and atmospheric models at both regional and global scales was also reviewed. The application of Uncertainty Quantification (UQ) concepts to coupled climate system models is still in its infancy. The Coupled Model Intercomparison Project (CMIP) multi-model ensemble currently represents the primary data for

  12. Nonlinear dynamics and numerical uncertainties in CFD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1996-01-01

    The application of nonlinear dynamics to improve the understanding of numerical uncertainties in computational fluid dynamics (CFD) is reviewed. Elementary examples in the use of dynamics to explain the nonlinear phenomena and spurious behavior that occur in numerics are given. The role of dynamics in the understanding of long time behavior of numerical integrations and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in CFD is explained. The study is complemented with spurious behavior observed in CFD computations.

  13. Uncertainty in prediction of disinfection performance.

    PubMed

    Neumann, Marc B; von Gunten, Urs; Gujer, Willi

    2007-06-01

    Predicting the disinfection performance of a full-scale reactor in drinking water treatment is associated with considerable uncertainty. In view of quantitative risk analysis, this study assesses the uncertainty involved in predicting inactivation of Cryptosporidium parvum oocysts for an ozone reactor treating lake water. A micromodel is suggested which quantifies inactivation by stochastic sampling from density distributions of ozone exposure and lethal ozone dose. The ozone exposure distribution is computed with a tank in series model that is derived from tracer data and measurements of flow, ozone concentration and ozone decay. The distribution of lethal ozone doses is computed with a delayed Chick-Watson model which was calibrated by Sivaganesan and Marinas [2005. Development of a Ct equation taking into consideration the effect of Lot variability on the inactivation of Cryptosporidium parvum oocysts with ozone. Water Res. 39(11), 2429-2437] utilizing a large number of inactivation studies. Parameter uncertainty is propagated with Monte Carlo simulation and the probability of attaining given inactivation levels is assessed. Regional sensitivity analysis based on variance decomposition ranks the influence of parameters in determining the variance of the model result. The lethal dose model turns out to be responsible for over 90% of the output variance. The entire analysis is re-run for three exemplary scenarios to assess the robustness of the results in view of changing inputs, differing operational parameters or revised assumptions about the appropriate model. We argue that the suggested micromodel is a versatile approach for characterization of disinfection reactors. The scheme developed for uncertainty assessment is optimal for model diagnostics and effectively supports the management of uncertainty.
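
    A much-simplified sketch of the stochastic-sampling micromodel described above: draw ozone exposures and lethal doses from assumed distributions and count the fraction of oocysts whose exposure exceeds their lethal dose. The distributions and parameters below are placeholders, not the calibrated tank-in-series or delayed Chick-Watson models from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Placeholder distributions (not the paper's calibrated models): ozone exposure (CT)
# from a gamma distribution, lethal ozone dose from a lognormal distribution.
ct_exposure = rng.gamma(shape=4.0, scale=1.5, size=n)            # mg*min/L
lethal_dose = rng.lognormal(mean=np.log(6.0), sigma=0.5, size=n)  # mg*min/L

inactivated = ct_exposure > lethal_dose
log_inactivation = -np.log10(1.0 - inactivated.mean())
print(f"fraction inactivated: {inactivated.mean():.3f} "
      f"(~{log_inactivation:.2f} log10 units)")
```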

  14. Satellite altitude determination uncertainties

    NASA Technical Reports Server (NTRS)

    Siry, J. W.

    1972-01-01

    Satellite altitude determination uncertainties are discussed from the standpoint of the GEOS-C satellite and from the longer range viewpoint afforded by the Geopause concept. The discussion focuses on methods for short-arc tracking which are essentially geometric in nature. One uses combinations of lasers and collocated cameras. The other method relies only on lasers, using three or more to obtain the position fix. Two typical locales are examined: the Caribbean area, and a region associated with tracking sites at Goddard, Bermuda and Canada which encompasses a portion of the Gulf Stream in which meanders develop.

  15. Uncertainty bounds using sector theory

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Schmidt, David K.

    1989-01-01

    An approach based on sector-stability theory can furnish a description of the uncertainty associated with the frequency response of a model, given sector-bounds on the individual parameters of the model. The application of the sector-based approach to the formulation of useful uncertainty descriptions for linear, time-invariant multivariable systems is presently explored, and the approach is applied to two generic forms of parameter uncertainty in order to investigate its advantages and limitations. The results obtained show that sector-uncertainty bounds can be used to evaluate the impact of parameter uncertainties on the frequency response of the design model.

  16. mu analysis with real parametric uncertainty

    NASA Technical Reports Server (NTRS)

    Young, Peter M.; Newlin, Matthew P.; Doyle, John C.

    1991-01-01

    The authors give a broad overview, from a LFT (linear fractional transformation)/mu perspective, of some of the theoretical and practical issues associated with robustness in the presence of real parametric uncertainty, with a focus on computation. Recent results on the properties of mu in the mixed case are reviewed, including issues of NP completeness, continuity, computation of bounds, the equivalence of mu and its bounds, and some direct comparisons with Kharitonov-type analysis methods. In addition, some advances in the computational aspects of the problem, including a novel branch and bound algorithm, are briefly presented together with numerical results. The results suggest that while the mixed mu problem may have inherently combinatoric worst-case behavior, practical algorithms with modest computational requirements can be developed for problems of medium size (less than 100 parameters) that are of engineering interest.

  17. Measurement uncertainty analysis techniques applied to PV performance measurements

    SciTech Connect

    Wells, C.

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. Valid data are defined as data having known and documented paths of: origin, including theory; measurements; traceability to measurement standards; computations; and uncertainty analysis of results.
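
    As a concrete illustration of the kind of analysis described (not taken from the presentation itself), the following combines assumed systematic and random standard-uncertainty components in quadrature and reports an expanded interval with a coverage factor of two. All numbers are hypothetical.

```python
import math

# Illustrative combination of uncertainty components for a PV power measurement
# (hypothetical numbers): systematic and random standard uncertainties are combined
# in quadrature and expanded with a coverage factor k=2 (~95% interval).
measured_power_W = 52.4
u_systematic = 0.30        # W, e.g. calibration of the reference cell
u_random = 0.18            # W, e.g. scatter of repeated readings

u_combined = math.hypot(u_systematic, u_random)
k = 2.0
U_expanded = k * u_combined
print(f"result: {measured_power_W:.1f} W +/- {U_expanded:.2f} W (k={k})")
```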

  18. Database uncertainty as a limiting factor in reactive transport prognosis

    NASA Astrophysics Data System (ADS)

    Nitzsche, O.; Meinrath, G.; Merkel, B.

    2000-08-01

    The effect of uncertainties in thermodynamic databases on the prediction performance of reactive transport modeling of uranium (VI) is investigated with a Monte Carlo approach using the transport code TReaC. TReaC couples the transport model to the speciation code PHREEQC by a particle tracking method. A speciation example is given to illustrate the effect of uncertainty in thermodynamic data on the predicted solution composition. The transport calculations consequently show the prediction uncertainty resulting from uncertainty in thermodynamic data. A conceptually simple scenario of elution of uranium from a sand column is used as an illustrating example. Two different cases are investigated: a carbonate-enriched drinking water and an acid mine water associated with uranium mine remediation problems. Due to the uncertainty in the relative amounts of positively charged and neutral solution species, the uncertainty in the thermodynamic data also leads to uncertainty in the retardation behavior. The carbonated water system shows the largest uncertainties in the speciation calculation. Therefore, the model predictions of total uranium solubility have a broad range. The effect of data uncertainty on transport prediction is further illustrated by a prediction of the time when eluted uranium from the column exceeds a threshold value. All of these Monte Carlo transport calculations consume large amounts of computing time.
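
    The full TReaC/PHREEQC Monte Carlo study cannot be condensed into a few lines, but the sketch below illustrates the basic propagation idea with an assumed lognormal sorption coefficient: database uncertainty maps into uncertainty in the retardation factor and hence in the breakthrough time of a simple column. All parameter values are hypothetical.

```python
import numpy as np

# Much-simplified stand-in for the Monte Carlo transport study: propagate
# uncertainty in a sorption coefficient Kd (lognormal, hypothetical parameters)
# to the retardation factor and breakthrough time of a 1-D column.
rng = np.random.default_rng(0)
n = 20_000

kd = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=n)   # L/kg
rho_b, porosity = 1.6, 0.35                               # kg/L, -
R = 1.0 + rho_b * kd / porosity                           # retardation factor

column_length_m, pore_velocity_m_per_d = 0.5, 0.2
t_breakthrough = R * column_length_m / pore_velocity_m_per_d   # days

print("retardation factor 5-95%:", np.percentile(R, [5, 95]))
print("breakthrough time 5-95% (d):", np.percentile(t_breakthrough, [5, 95]))
```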

  19. Teaching Quantum Uncertainty

    NASA Astrophysics Data System (ADS)

    Hobson, Art

    2011-10-01

    An earlier paper introduces quantum physics by means of four experiments: Young's double-slit interference experiment using (1) a light beam, (2) a low-intensity light beam with time-lapse photography, (3) an electron beam, and (4) a low-intensity electron beam with time-lapse photography. It's ironic that, although these experiments demonstrate most of the quantum fundamentals, conventional pedagogy stresses their difficult and paradoxical nature. These paradoxes (i.e., logical contradictions) vanish, and understanding becomes simpler, if one takes seriously the fact that quantum mechanics is the nonrelativistic limit of our most accurate physical theory, namely quantum field theory, and treats the Schroedinger wave function, as well as the electromagnetic field, as quantized fields. Both the Schroedinger field, or "matter field," and the EM field are made of "quanta"—spatially extended but energetically discrete chunks or bundles of energy. Each quantum comes nonlocally from the entire space-filling field and interacts with macroscopic systems such as the viewing screen by collapsing into an atom instantaneously and randomly in accordance with the probability amplitude specified by the field. Thus, uncertainty and nonlocality are inherent in quantum physics. This paper is about quantum uncertainty. A planned later paper will take up quantum nonlocality.

  20. Antarctic Photochemistry: Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Stewart, Richard W.; McConnell, Joseph R.

    1999-01-01

    Understanding the photochemistry of the Antarctic region is important for several reasons. Analysis of ice cores provides historical information on several species such as hydrogen peroxide and sulfur-bearing compounds. The former can potentially provide information on the history of oxidants in the troposphere and the latter may shed light on DMS-climate relationships. Extracting such information requires that we be able to model the photochemistry of the Antarctic troposphere and relate atmospheric concentrations to deposition rates and sequestration in the polar ice. This paper deals with one aspect of the uncertainty inherent in photochemical models of the high latitude troposphere: that arising from imprecision in the kinetic data used in the calculations. Such uncertainties in Antarctic models tend to be larger than those in models of mid to low latitude clean air. One reason is the lower temperatures which result in increased imprecision in kinetic data, assumed to be best characterized at 298K. Another is the inclusion of a DMS oxidation scheme in the present model. Many of the rates in this scheme are less precisely known than are rates in the standard chemistry used in many stratospheric and tropospheric models.

  1. Probabilistic Mass Growth Uncertainties

    NASA Technical Reports Server (NTRS)

    Plumer, Eric; Elliott, Darren

    2013-01-01

    Mass has been widely used as a variable input parameter for Cost Estimating Relationships (CER) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, hence adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of an ongoing NASA effort to publish mass growth datasheets for adjusting single-point Technical Baseline Estimates (TBE) of masses of space instruments as well as spacecraft, for both Earth orbiting and deep space missions at various stages of a project's lifecycle. This paper also discusses the long term strategy of NASA Headquarters in publishing similar results, using a variety of cost driving metrics, on an annual basis. This paper provides quantitative results that show decreasing mass growth uncertainties as mass estimate maturity increases. The analysis is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.

  2. Uncertainty in adaptive capacity

    NASA Astrophysics Data System (ADS)

    Adger, W. Neil; Vincent, Katharine

    2005-03-01

    The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, and literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. To cite this article: W.N. Adger, K. Vincent, C. R. Geoscience 337 (2005).

  3. The Neural Representation of Unexpected Uncertainty During Value-Based Decision Making

    PubMed Central

    Payzan-LeNestour, Elise; Dunne, Simon; Bossaerts, Peter; O'Doherty, John P.

    2016-01-01

    Summary Uncertainty is an inherent property of the environment and a central feature of models of decision-making and learning. Theoretical propositions suggest that one form, unexpected uncertainty, may be used to rapidly adapt to changes in the environment, while being influenced by two other forms: risk and estimation uncertainty. While previous studies have reported neural representations of estimation uncertainty and risk, relatively little is known about unexpected uncertainty. Here, participants performed a decision-making task while undergoing functional magnetic resonance imaging (fMRI), which, in combination with a Bayesian model-based analysis, enabled each form of uncertainty to be separately measured. We found representations of unexpected uncertainty in multiple cortical areas, as well as the noradrenergic brainstem nucleus locus coeruleus. Other unique cortical regions were found to encode risk, estimation uncertainty and learning rate. Collectively, these findings support theoretical models in which several formally separable uncertainty computations determine the speed of learning. PMID:23849203

  4. LCA data quality: sensitivity and uncertainty analysis.

    PubMed

    Guo, M; Murphy, R J

    2012-10-01

    Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes, which could introduce an inadvertent bias into LCA outcomes, especially in toxicity impact categories; thus dynamic LCA characterization models with varying time horizons are recommended as a measure of the robustness of LCAs, especially comparative assessments. This study also presents an approach to integrate statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this has enabled assigning confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions.

  5. Environmental adversity and uncertainty favour cooperation

    PubMed Central

    Andras, Peter; Lazarus, John; Roberts, Gilbert

    2007-01-01

    Background A major cornerstone of evolutionary biology theory is the explanation of the emergence of cooperation in communities of selfish individuals. There is an unexplained tendency in the plant and animal world – with examples from alpine plants, worms, fish, mole-rats, monkeys and humans – for cooperation to flourish where the environment is more adverse (harsher) or more unpredictable. Results Using mathematical arguments and computer simulations we show that in more adverse environments individuals perceive their resources to be more unpredictable, and that this unpredictability favours cooperation. First we show analytically that in a more adverse environment the individual experiences greater perceived uncertainty. Second we show through a simulation study that more perceived uncertainty implies higher level of cooperation in communities of selfish individuals. Conclusion This study captures the essential features of the natural examples: the positive impact of resource adversity or uncertainty on cooperation. These newly discovered connections between environmental adversity, uncertainty and cooperation help to explain the emergence and evolution of cooperation in animal and human societies. PMID:18053138

  6. On the worst case uncertainty and its evaluation

    NASA Astrophysics Data System (ADS)

    Fabbiano, L.; Giaquinto, N.; Savino, M.; Vacca, G.

    2016-02-01

    The paper is a review of the worst case uncertainty (WCU) concept, neglected in the Guide to the Expression of Uncertainty in Measurement (GUM) but necessary for a correct uncertainty assessment in a number of practical cases involving distributions with compact support. First, it is highlighted that knowledge of the WCU is necessary to choose a sensible coverage factor, associated with a sensible coverage probability: the Maximum Acceptable Coverage Factor (MACF) is introduced as a convenient index to guide this choice. Second, propagation rules for the worst-case uncertainty are provided in matrix and scalar form. It is highlighted that when WCU propagation cannot be computed, the Monte Carlo approach is the only way to obtain a correct expanded uncertainty assessment, in contrast to what can be inferred from the GUM. Third, examples of applications of the formulae to ordinary instruments and measurements are given. An example taken from the GUM is also discussed, underlining some inconsistencies in it.
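
    The paper's matrix propagation rules and the MACF are not reproduced here; the sketch below only contrasts the common scalar worst-case combination, a sensitivity-weighted sum of input half-widths, with the GUM-style quadrature combination for an illustrative two-input function.

```python
import numpy as np

# Scalar worst-case vs. quadrature propagation for y = f(x1, x2) (illustrative f).
# For inputs bounded by half-widths a_i (compact support), the worst-case
# uncertainty is commonly taken as sum(|df/dxi| * a_i); the GUM-style combined
# standard uncertainty uses quadrature of sensitivity-weighted standard deviations.
def f(x1, x2):
    return x1 * x2            # e.g. power = voltage * current

x = np.array([10.0, 2.0])
a = np.array([0.05, 0.02])    # half-widths of the (assumed uniform) input bounds

eps = 1e-6                    # central finite differences for the sensitivities
grad = np.array([(f(x[0] + eps, x[1]) - f(x[0] - eps, x[1])) / (2 * eps),
                 (f(x[0], x[1] + eps) - f(x[0], x[1] - eps)) / (2 * eps)])

u_std = a / np.sqrt(3.0)                       # std. dev. of a uniform distribution
worst_case = np.sum(np.abs(grad) * a)
gum_combined = np.sqrt(np.sum((grad * u_std) ** 2))
print(f"worst-case: {worst_case:.3f}, GUM combined (1-sigma): {gum_combined:.3f}")
```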

  7. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
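
    The recursive formulation itself is not reproduced here; the sketch below illustrates, on synthetic data with AR(1) residuals, how a sandwich-type covariance built from the estimated residual autocorrelation inflates parameter standard errors relative to the white-residual formula. The lag truncation and noise model are assumptions.

```python
import numpy as np

# Colored-residual correction to least-squares parameter uncertainty (sketch).
rng = np.random.default_rng(3)
N = 500
x = rng.standard_normal(N)
X = np.column_stack([np.ones(N), x])

# Colored noise: AR(1) residuals
e = np.zeros(N)
w = 0.2 * rng.standard_normal(N)
for k in range(1, N):
    e[k] = 0.8 * e[k - 1] + w[k]
y = 1.0 + 0.5 * x + e

theta, *_ = np.linalg.lstsq(X, y, rcond=None)
v = y - X @ theta                              # residuals

# White-residual covariance (what standard formulas assume)
XtX_inv = np.linalg.inv(X.T @ X)
cov_white = (v @ v / (N - X.shape[1])) * XtX_inv

# Corrected covariance using the estimated residual autocorrelation (truncated lags)
max_lag = 50
R = np.zeros((N, N))
for k in range(max_lag + 1):
    r_k = np.dot(v[:N - k], v[k:]) / N
    R += r_k * (np.eye(N, k=k) + (np.eye(N, k=-k) if k > 0 else 0))
cov_corrected = XtX_inv @ X.T @ R @ X @ XtX_inv

print("white-residual std errors:    ", np.sqrt(np.diag(cov_white)))
print("autocorrelation-corrected std:", np.sqrt(np.diag(cov_corrected)))
```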

  8. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  9. Theoretical Analysis of Positional Uncertainty in Direct Georeferencing

    NASA Astrophysics Data System (ADS)

    Coskun Kiraci, Ali; Toz, Gonul

    2016-10-01

    A GNSS/INS system, composed of a Global Navigation Satellite System and an Inertial Navigation System together, can provide orientation parameters directly from the observations collected during the flight. Thus orientation parameters can be obtained by the GNSS/INS integration process without any need for aerotriangulation after the flight. In general, positional uncertainty can be estimated with known coordinates of Ground Control Points (GCP), which require field work such as marker construction and GNSS measurement, adding cost to the project. Here the question arises: what should the theoretical uncertainty of point coordinates be, given the uncertainties of the orientation parameters? In this study the contribution of each orientation parameter to positional uncertainty is examined and the theoretical positional uncertainty is computed without GCP measurement for direct georeferencing, using a graphical user interface developed in MATLAB.

  10. Earthquake Loss Estimation Uncertainties

    NASA Astrophysics Data System (ADS)

    Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Aleksander

    2013-04-01

    The paper addresses the reliability of loss assessment following strong earthquakes when worldwide systems are applied in emergency mode. Timely and correct action just after an event can result in significant benefits in saving lives. In this case information about possible damage and the expected number of casualties is very critical for making decisions about search and rescue operations and offering humanitarian assistance. Such rough information may be provided, first of all, by global systems operating in emergency mode. The experience of earthquake disasters in different earthquake-prone countries shows that the officials who are in charge of emergency response at national and international levels often lack prompt and reliable information on the disaster scope. Uncertainties in the parameters used in the estimation process are numerous and large: knowledge about the physical phenomena and uncertainties in the parameters used to describe them; the global adequacy of modeling techniques to the actual physical phenomena; the actual distribution of the population at risk at the very time of the shaking (with respect to the immediate threat: buildings or the like); knowledge about the source of shaking, etc. One need not be a specialist to understand, for example, that the way a given building responds to a given shaking obeys mechanical laws which are poorly known (if not out of the reach of engineers for a large portion of the building stock); even if a carefully engineered modern building is approximately predictable, this is far from the case for older buildings, which make up the bulk of inhabited buildings. The way the population inside the buildings at the time of shaking is affected by the physical damage caused to the buildings is, by far, not precisely known. The paper analyzes the influence of uncertainties in strong event parameters determined by Alert Seismological Surveys, and of the simulation models used at all stages, from estimating shaking intensity

  11. Pragmatic aspects of uncertainty propagation: A conceptual review

    NASA Astrophysics Data System (ADS)

    Thacker, W. Carlisle; Iskandarani, Mohamed; Gonçalves, Rafael C.; Srinivasan, Ashwanth; Knio, Omar M.

    2015-11-01

    When quantifying the uncertainty of the response of a computationally costly oceanographic or meteorological model stemming from the uncertainty of its inputs, practicality demands getting the most information using the fewest simulations. It is widely recognized that, by interpolating the results of a small number of simulations, results of additional simulations can be inexpensively approximated to provide a useful estimate of the variability of the response. Even so, as computing the simulations to be interpolated remains the biggest expense, the choice of these simulations deserves attention. When making this choice, two requirements should be considered: (i) the nature of the interpolation and (ii) the available information about input uncertainty. Examples comparing polynomial interpolation and Gaussian process interpolation are presented for three different views of input uncertainty.
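
    A toy illustration of the comparison discussed above, using a hypothetical one-dimensional "model" evaluated at only six simulations: a polynomial fit and a Gaussian-process interpolant are built from the same simulations, and an assumed Gaussian input uncertainty is propagated through each surrogate by Monte Carlo.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy stand-in for a costly model: a 1-D response evaluated at only 6 "simulations".
def model(x): return np.sin(3.0 * x) + 0.5 * x

x_train = np.linspace(-1.0, 1.0, 6)
y_train = model(x_train)

# Surrogate 1: degree-4 polynomial fit of the simulations
poly_coef = P.polyfit(x_train, y_train, deg=4)

# Surrogate 2: Gaussian-process interpolation of the same simulations
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-10)
gp.fit(x_train.reshape(-1, 1), y_train)

# Propagate an assumed input uncertainty (Gaussian) through both surrogates
rng = np.random.default_rng(7)
x_mc = rng.normal(0.2, 0.3, size=20_000)
y_poly = P.polyval(x_mc, poly_coef)
y_gp = gp.predict(x_mc.reshape(-1, 1))

print("polynomial surrogate: mean %.3f, std %.3f" % (y_poly.mean(), y_poly.std()))
print("GP surrogate:         mean %.3f, std %.3f" % (y_gp.mean(), y_gp.std()))
```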

  12. Comprehensive Approach to Verification and Validation of CFD Simulations Applied to Backward Facing Step-Application of CFD Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; LLie, Marcel; Shallhorn, Paul A.

    2012-01-01

    There are inherent uncertainties and errors associated with using Computational Fluid Dynamics (CFD) to predict the flow field, and there is no standard method for evaluating uncertainty in the CFD community. This paper describes an approach to validate the uncertainty in using CFD. The method will use state-of-the-art uncertainty analysis applying different turbulence models and draw conclusions on which models provide the least uncertainty and which models most accurately predict the flow of a backward facing step.

  13. Uncertainty relation in Schwarzschild spacetime

    NASA Astrophysics Data System (ADS)

    Feng, Jun; Zhang, Yao-Zhong; Gould, Mark D.; Fan, Heng

    2015-04-01

    We explore the entropic uncertainty relation in the curved background outside a Schwarzschild black hole, and find that Hawking radiation introduces a nontrivial modification of the uncertainty bound for a particular observer; therefore it could be witnessed by a proper uncertainty game experimentally. We first investigate an uncertainty game between a free-falling observer and his static partner holding a quantum memory initially entangled with the quantum system to be measured. Due to the information loss from Hawking decoherence, we find an inevitable increase of the uncertainty on the outcome of measurements in the view of the static observer, which depends on the mass of the black hole, the distance of the observer from the event horizon, and the mode frequency of the quantum memory. To illustrate the generality of this paradigm, we relate the entropic uncertainty bound to other uncertainty probes, e.g., time-energy uncertainty. In an alternative game between two static players, we show that the quantum information of a qubit can be transferred to the quantum memory through a bath of fluctuating quantum fields outside the black hole. For a particular choice of initial state, we show that the Hawking decoherence cannot counteract entanglement generation after the dynamical evolution of the system, which triggers an effectively reduced uncertainty bound that violates the intrinsic limit -log2 c. Numerical estimation for a proper choice of initial state shows that our result is comparable with possible real experiments. Finally, a discussion of the black hole firewall paradox in the context of the entropic uncertainty relation is given.
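
    For reference, the intrinsic limit -log2 c mentioned above appears in the entropic uncertainty relation in the presence of a quantum memory (the Berta et al. form); the Schwarzschild-specific modification studied in the paper is not reproduced here.

```latex
% Entropic uncertainty relation with quantum memory B (the relation whose
% intrinsic limit -log2(c) the abstract refers to); Q and R are the measured
% observables on system A and c is the maximal overlap of their eigenbases.
\[
  S(Q \mid B) + S(R \mid B) \;\ge\; \log_2 \frac{1}{c} + S(A \mid B),
  \qquad
  c \;=\; \max_{i,j} \bigl| \langle \psi_i^{Q} \mid \phi_j^{R} \rangle \bigr|^{2} .
\]
```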

  14. DOD ELAP Lab Uncertainties

    DTIC Science & Technology

    2012-03-01

    Indexing excerpt: the briefing covers DoD ELAP laboratory accreditation topics, including training programs (IPV6, NLLAP, NEFAP), certification bodies accredited to ISO/IEC 17021 for management system certification (e.g., ISO 9001 (QMS), ISO 14001 (EMS), TS 16949 (US automotive)), the DoD QSM 4.2 standard, and ISO/IEC 17025:2005, each of which carries its own uncertainty considerations. Presented at the 9th Annual DoD Environmental Monitoring and Data Quality (EDMQ) Workshop, held 26-29 March 2012 in La Jolla, CA.

  15. Generalized uncertainty relations

    NASA Astrophysics Data System (ADS)

    Herdegen, Andrzej; Ziobro, Piotr

    2017-04-01

    The standard uncertainty relations (UR) in quantum mechanics are typically used for unbounded operators (like the canonical pair). This implies the need for the control of the domain problems. On the other hand, the use of (possibly bounded) functions of basic observables usually leads to more complex and less readily interpretable relations. In addition, UR may turn trivial for certain states if the commutator of observables is not proportional to a positive operator. In this letter we consider a generalization of standard UR resulting from the use of two, instead of one, vector states. The possibility to link these states to each other in various ways adds additional flexibility to UR, which may compensate some of the above-mentioned drawbacks. We discuss applications of the general scheme, leading not only to technical improvements, but also to interesting new insight.

  16. Uncertainty as Certainty

    NASA Astrophysics Data System (ADS)

    Petzinger, Tom

    I am trying to make money in the biotech industry from complexity science. And I am doing it with inspiration that I picked up on the edge of Appalachia spending time with June Holley and ACEnet when I was a Wall Street Journal reporter. I took some of those ideas to Pittsburgh, in biotechnology, in a completely private setting with an economic development focus, but also with a mission to return profit to private capital. And we are doing that. I submit as a hypothesis, something we are figuring out in the post-industrial era, that business evolves. It is not the definition of business, but business critically involves the design of systems in which uncertainty is treated as a certainty. That is what I have seen and what I have tried to put into practice.

  17. Medical decisions under uncertainty.

    PubMed

    Carmi, A

    1993-01-01

    The court applies the criteria of the reasonable doctor and common practice in order to consider the behaviour of a defendant physician. The meaning of our demand that the doctor expects that his or her acts or omissions will bring about certain implications is that, according to the present circumstances and subject to the limited knowledge of the common practice, the course of certain events or situations in the future may be assumed in spite of the fog of uncertainty which surrounds us. The miracles and wonders of creation are concealed from us, and we are not aware of the way and the nature of our bodily functioning. Therefore, there seems to be no way to avoid mistakes, because in several cases the correct diagnosis cannot be determined even with the most advanced application of all information available. Doctors find it difficult to admit that they grope in the dark. They wish to form clear and accurate diagnoses for their patients. The fact that their profession is faced with innumerable and unavoidable risks and mistakes is hard to swallow, and many of them claim that in their everyday work this does not happen. They should not content themselves by changing their style. A radical metamorphosis is needed. They should not be tempted to formulate their diagnoses in 'neutral' statements in order to be on the safe side. Uncertainty should be accepted and acknowledged by the profession and by the public at large as a human phenomenon, as an integral part of any human decision, and as a clear characteristic of any legal or medical diagnosis.(ABSTRACT TRUNCATED AT 250 WORDS)

  18. Impact of discharge data uncertainty on nutrient load uncertainty

    NASA Astrophysics Data System (ADS)

    Westerberg, Ida; Gustavsson, Hanna; Sonesten, Lars

    2016-04-01

    Uncertainty in the rating-curve model of the stage-discharge relationship leads to uncertainty in discharge time series. These uncertainties in turn affect many other analyses based on discharge data, such as nutrient load estimations. It is important to understand how large the impact of discharge data uncertainty is on such analyses, since they are often used as the basis to take important environmental management decisions. In the Baltic Sea basin, nutrient load estimates from river mouths are a central information basis for managing and reducing eutrophication in the Baltic Sea. In this study we investigated rating curve uncertainty and its propagation to discharge data uncertainty and thereafter to uncertainty in the load of phosphorous and nitrogen for twelve Swedish river mouths. We estimated rating curve uncertainty using the Voting Point method, which accounts for random and epistemic errors in the stage-discharge relation and allows drawing multiple rating-curve realisations consistent with the total uncertainty. We sampled 40,000 rating curves, and for each sampled curve we calculated a discharge time series from 15-minute water level data for the period 2005-2014. Each discharge time series was then aggregated to daily scale and used to calculate the load of phosphorous and nitrogen from linearly interpolated monthly water samples, following the currently used methodology for load estimation. Finally the yearly load estimates were calculated and we thus obtained distributions with 40,000 load realisations per year - one for each rating curve. We analysed how the rating curve uncertainty propagated to the discharge time series at different temporal resolutions, and its impact on the yearly load estimates. Two shorter periods of daily water quality sampling around the spring flood peak allowed a comparison of load uncertainty magnitudes resulting from discharge data with those resulting from the monthly water quality sampling.
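
    A strongly simplified sketch of the propagation chain described above (the Voting Point rating-curve sampling itself is not reproduced): rating-curve parameters are drawn from assumed distributions, a synthetic stage series is converted to discharge with each sampled curve, and the resulting discharge is combined with concentrations to yield a distribution of annual loads. All distributions and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n_curves, n_days = 5_000, 365

stage = 1.0 + 0.6 * np.abs(np.sin(np.linspace(0, 4 * np.pi, n_days)))  # m, synthetic
conc = 0.05 + 0.02 * rng.random(n_days)                                # kg/m3, synthetic

# Power-law rating curve Q = a * (h - h0)^b with uncertain parameters (assumed)
a = rng.normal(12.0, 1.0, size=n_curves)
b = rng.normal(1.8, 0.1, size=n_curves)
h0 = rng.normal(0.2, 0.05, size=n_curves)

Q = a[:, None] * np.clip(stage[None, :] - h0[:, None], 0.0, None) ** b[:, None]  # m3/s
daily_load = Q * conc[None, :] * 86_400                   # kg/day
annual_load = daily_load.sum(axis=1) / 1_000              # tonnes/year

print("annual load 5-50-95% (t):", np.percentile(annual_load, [5, 50, 95]))
```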

  19. Uncertainty quantification in molecular dynamics

    NASA Astrophysics Data System (ADS)

    Rizzi, Francesco

    This dissertation focuses on uncertainty quantification (UQ) in molecular dynamics (MD) simulations. The application of UQ to molecular dynamics is motivated by the broad uncertainty characterizing MD potential functions and by the complexity of the MD setting, where even small uncertainties can be amplified to yield large uncertainties in the model predictions. Two fundamental, distinct sources of uncertainty are investigated in this work, namely parametric uncertainty and intrinsic noise. Intrinsic noise is inherently present in the MD setting, due to fluctuations originating from thermal effects. Averaging methods can be exploited to reduce the fluctuations, but due to finite sampling, this effect cannot be completely filtered, thus yielding a residual uncertainty in the MD predictions. Parametric uncertainty, on the contrary, is introduced in the form of uncertain potential parameters, geometry, and/or boundary conditions. We address the UQ problem in both its main components, namely the forward propagation, which aims at characterizing how uncertainty in model parameters affects selected observables, and the inverse problem, which involves the estimation of target model parameters based on a set of observations. The dissertation highlights the challenges arising when parametric uncertainty and intrinsic noise combine to yield non-deterministic, noisy MD predictions of target macroscale observables. Two key probabilistic UQ methods, namely Polynomial Chaos (PC) expansions and Bayesian inference, are exploited to develop a framework that enables one to isolate the impact of parametric uncertainty on the MD predictions and, at the same time, properly quantify the effect of the intrinsic noise. Systematic applications to a suite of problems of increasing complexity lead to the observation that an uncertain PC representation built via Bayesian regression is the most suitable model for the representation of uncertain MD predictions of target observables in the

  20. Uncertainty and Surprise: An Introduction

    NASA Astrophysics Data System (ADS)

    McDaniel, Reuben R.; Driebe, Dean J.

    Much of the traditional scientific and applied scientific work in the social and natural sciences has been built on the supposition that the unknowability of situations is the result of a lack of information. This has led to an emphasis on uncertainty reduction through ever-increasing information seeking and processing, including better measurement and observational instrumentation. Pending uncertainty reduction through better information, efforts are devoted to uncertainty management and hierarchies of controls. A central goal has been the avoidance of surprise.

  1. Dealing with Uncertainties in Initial Orbit Determination

    NASA Technical Reports Server (NTRS)

    Armellin, Roberto; Di Lizia, Pierluigi; Zanetti, Renato

    2015-01-01

    A method to deal with uncertainties in initial orbit determination (IOD) is presented. This is based on the use of Taylor differential algebra (DA) to nonlinearly map the observation uncertainties from the observation space to the state space. When a minimum set of observations is available, DA is used to expand the solution of the IOD problem in Taylor series with respect to measurement errors. When more observations are available, high order inversion tools are exploited to obtain full state pseudo-observations at a common epoch. The mean and covariance of these pseudo-observations are nonlinearly computed by evaluating the expectation of high order Taylor polynomials. Finally, a linear scheme is employed to update the current knowledge of the orbit. Angles-only observations are considered and simplified Keplerian dynamics are adopted to ease the explanation. Three test cases of orbit determination of artificial satellites in different orbital regimes are presented to discuss the features and performance of the proposed methodology.

  2. Adaptive Strategies for Materials Design using Uncertainties.

    PubMed

    Balachandran, Prasanna V; Xue, Dezhen; Theiler, James; Hogden, John; Lookman, Turab

    2016-01-21

    We compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young's (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don't. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.
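
    A minimal sketch of the regressor-plus-selector loop described above, using a Gaussian-process regressor and a simple upper-confidence-bound selector that rewards both predicted value and prediction uncertainty. The objective function and search space are toy stand-ins, not the M2AX elastic-property data set.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(11)

def objective(x):                      # hypothetical property to maximize
    return -(x - 0.6) ** 2 + 0.1 * np.sin(12 * x)

candidates = np.linspace(0.0, 1.0, 200)
measured_x = list(rng.choice(candidates, size=4, replace=False))
measured_y = [objective(x) for x in measured_x]

gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-4))
for iteration in range(10):
    gp.fit(np.array(measured_x).reshape(-1, 1), np.array(measured_y))
    mu, sigma = gp.predict(candidates.reshape(-1, 1), return_std=True)
    score = mu + 1.0 * sigma           # selector: exploit prediction, explore uncertainty
    x_next = candidates[np.argmax(score)]
    measured_x.append(x_next)
    measured_y.append(objective(x_next))   # "run the next experiment/calculation"

print(f"best property found after {iteration + 1} iterations: {max(measured_y):.4f}")
```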

  3. Uncertainty quantification for porous media flows

    SciTech Connect

    Christie, Mike (E-mail: mike.christie@pet.hw.ac.uk); Demyanov, Vasily; Erbas, Demet

    2006-09-01

    Uncertainty quantification is an increasingly important aspect of many areas of computational science, where the challenge is to make reliable predictions about the performance of complex physical systems in the absence of complete or reliable data. Predicting flows of oil and water through oil reservoirs is an example of a complex system where accuracy in prediction is needed primarily for financial reasons. Simulation of fluid flow in oil reservoirs is usually carried out using large commercially written finite difference simulators solving conservation equations describing the multi-phase flow through the porous reservoir rocks. This paper examines a Bayesian Framework for uncertainty quantification in porous media flows that uses a stochastic sampling algorithm to generate models that match observed data. Machine learning algorithms are used to speed up the identification of regions in parameter space where good matches to observed data can be found.

  4. Back to the future: The Grassroots of Hydrological Uncertainty

    NASA Astrophysics Data System (ADS)

    Smith, K. A.

    2013-12-01

    Uncertainties are widespread within hydrological science, and as society is looking to models to provide answers as to how climate change may affect our future water resources, the performance of hydrological models should be evaluated. With uncertainties being introduced from input data, parameterisation, model structure, validation data, and 'unknown unknowns', it is easy to be pessimistic about model outputs. But uncertainties are an opportunity for scientific endeavour, not a threat. Investigation and suitable presentation of uncertainties, which results in a range of potential outcomes, provides more insight into model projections than just one answer. This paper aims to demonstrate the feasibility of conducting computationally demanding parameter uncertainty estimation experiments on global hydrological models (GHMs). Presently, individual GHMs tend to present their one, best projection, but this leads to spurious precision, a false impression of certainty, which can be misleading to decision makers. Whilst uncertainty estimation is firmly established in catchment hydrology, GHM uncertainty, and parameter uncertainty in particular, has remained largely overlooked. Model inter-comparison studies that investigate model structure uncertainty have been undertaken (e.g. ISI-MIP, EU-WATCH etc.), but these studies seem premature when the uncertainties within each individual model itself have not yet been considered. This study takes a few steps back, going down to one of the first introductions of assumptions in model development, the assignment of model parameter values. Making use of the University of Nottingham's High Performance Computer Cluster (HPC), the Mac-PDM.09 GHM has been subjected to rigorous uncertainty experiments. The Generalised Likelihood Uncertainty Estimation method (GLUE) with Latin Hypercube Sampling has been applied to a GHM for the first time, to produce 100,000 simultaneous parameter perturbations. The results of this ensemble of 100
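
    As a hedged illustration of the GLUE-with-Latin-Hypercube workflow described above, the following sketch applies the method to a two-parameter toy model; the model, the prior ranges, and the acceptance threshold are assumptions, not the Mac-PDM.09 configuration used in the study.

    ```python
    # Minimal GLUE-style sketch: Latin Hypercube sampling of parameters, a
    # likelihood score per parameter set, and prediction bounds from the
    # behavioural sets. The toy "model", prior ranges and threshold are
    # illustrative assumptions.
    import numpy as np
    from scipy.stats import qmc

    rng = np.random.default_rng(1)

    def toy_model(params, t):
        """Stand-in hydrological model: exponential recession with a gain."""
        gain, rate = params
        return gain * np.exp(-rate * t)

    t = np.linspace(0.0, 10.0, 50)
    obs = toy_model((2.0, 0.3), t) + rng.normal(0.0, 0.1, size=t.size)

    # Latin Hypercube sample of the two parameters over assumed prior ranges.
    sampler = qmc.LatinHypercube(d=2, seed=1)
    params = qmc.scale(sampler.random(n=5000), l_bounds=[0.5, 0.05], u_bounds=[4.0, 1.0])

    # Nash-Sutcliffe efficiency as an informal GLUE likelihood measure.
    sims = np.array([toy_model(p, t) for p in params])
    nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

    behavioural = sims[nse > 0.7]                       # assumed acceptance threshold
    lower, upper = np.percentile(behavioural, [5, 95], axis=0)
    print(f"{len(behavioural)} behavioural sets; bounds at t=0: {lower[0]:.2f}-{upper[0]:.2f}")
    ```

    A full GLUE analysis would weight the prediction quantiles by the likelihood measure; plain percentiles are used here only to keep the sketch short.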

  5. Higher-order uncertainty relations

    NASA Astrophysics Data System (ADS)

    Wünsche, A.

    2006-07-01

    Using the non-negativity of Gram determinants of arbitrary order, we derive higher-order uncertainty relations for the symmetric uncertainty matrices of corresponding order n > 2 for n Hermitean operators (n = 2 is the usual case). The special cases of third-order and fourth-order uncertainty relations are considered in detail. The obtained third-order uncertainty relations are applied to the Lie groups SU(1,1) with three Hermitean basis operators (K1,K2,K0) and SU(2) with three Hermitean basis operators (J1,J2,J3) where, in particular, the group-coherent states of Perelomov type and of Barut-Girardello type for SU(1,1) and the spin or atomic coherent states for SU(2) are investigated. The uncertainty relations for the determinant of the third-order uncertainty matrix are satisfied with the equality sign for coherent states, and this determinant vanishes for the Perelomov type of coherent states for SU(1,1) and SU(2). As an example of the application of fourth-order uncertainty relations, we consider the canonical operators (Q1,P1,Q2,P2) of two boson modes and the corresponding uncertainty matrix formed by the operators of the corresponding mean deviations, taking into account the correlations between the two modes. In two mathematical appendices, we prove the non-negativity of the determinant of correlation matrices of arbitrary order and clarify the principal structure of higher-order uncertainty relations.
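
    For orientation, the n = 2 member of this family reduces to the familiar Robertson-Schrödinger relation, which likewise follows from the non-negativity of a 2x2 Gram determinant; the display below is a standard statement quoted for context, in notation that is not necessarily that of the paper.

    ```latex
    % n = 2 case: Robertson-Schrodinger relation for Hermitean operators A, B.
    % \sigma_{AB} is the symmetrized covariance in the given state.
    \sigma_{AB} \equiv \tfrac{1}{2}\langle \hat{A}\hat{B} + \hat{B}\hat{A} \rangle
                  - \langle \hat{A} \rangle \langle \hat{B} \rangle ,
    \qquad
    \det\!\begin{pmatrix} \sigma_{AA} & \sigma_{AB} \\ \sigma_{AB} & \sigma_{BB} \end{pmatrix}
     = \sigma_{AA}\,\sigma_{BB} - \sigma_{AB}^{2}
     \;\ge\; \Bigl| \tfrac{1}{2i} \langle [\hat{A},\hat{B}] \rangle \Bigr|^{2}.
    ```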

  6. Simplified propagation of standard uncertainties

    SciTech Connect

    Shull, A.H.

    1997-06-09

    An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine if the standard is adequate for its intended use or can one calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined or determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. A system of simplifying the calculations by dividing them into subgroups of absolute and relative uncertainties is utilized. These methods also incorporate the International Standards Organization (ISO) concepts for combining systematic and random uncertainties as published in their Guide to the Expression of Measurement Uncertainty. Details of the simplified methods and examples of their use are included in the paper.
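
    A minimal sketch of the kind of shortcut calculation described above, assuming a purely multiplicative preparation model c = m*p/V with uncorrelated inputs; the model and all numbers are hypothetical.

    ```python
    # Spreadsheet-style propagation without partial derivatives: for a purely
    # multiplicative model, relative standard uncertainties combine in
    # quadrature. Hypothetical prepared-standard example.
    import math

    m, u_m = 0.50000, 0.00020      # mass of reference material, g
    p, u_p = 0.9990, 0.0005        # purity, mass fraction
    V, u_V = 0.100000, 0.000060    # solution volume, L

    c = m * p / V                  # concentration of the prepared standard, g/L

    # Relative standard uncertainties of the multiplicative inputs.
    rel = [u_m / m, u_p / p, u_V / V]
    u_c_rel = math.sqrt(sum(r ** 2 for r in rel))
    u_c = c * u_c_rel

    print(f"c = {c:.4f} g/L, u(c) = {u_c:.4f} g/L ({100 * u_c_rel:.3f} % relative)")
    print(f"expanded uncertainty (k=2): {2 * u_c:.4f} g/L")
    ```

    For additive steps, absolute standard uncertainties would combine in quadrature instead; splitting the calculation into these relative and absolute subgroups is the simplification the abstract refers to.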

  7. Quantitative Assessment of Parametric Uncertainty in Northern Hemisphere PAH Concentrations.

    PubMed

    Thackray, Colin P; Friedman, Carey L; Zhang, Yanxu; Selin, Noelle E

    2015-08-04

    We quantitatively examine the relative importance of uncertainty in emissions and physicochemical properties (including reaction rate constants) to Northern Hemisphere (NH) and Arctic polycyclic aromatic hydrocarbon (PAH) concentrations, using a computationally efficient numerical uncertainty technique applied to the global-scale chemical transport model GEOS-Chem. Using polynomial chaos (PC) methods, we propagate uncertainties in physicochemical properties and emissions for the PAHs benzo[a]pyrene, pyrene and phenanthrene to simulated spatially resolved concentration uncertainties. We find that the leading contributors to parametric uncertainty in simulated concentrations are the black carbon-air partition coefficient and oxidation rate constant for benzo[a]pyrene, and the oxidation rate constants for phenanthrene and pyrene. NH geometric average concentrations are more sensitive to uncertainty in the atmospheric lifetime than to emissions rate. We use the PC expansions and measurement data to constrain parameter uncertainty distributions to observations. This narrows a priori parameter uncertainty distributions for phenanthrene and pyrene, and leads to higher values for OH oxidation rate constants and lower values for European PHE emission rates.

  8. Controllable set analysis for planetary landing under model uncertainties

    NASA Astrophysics Data System (ADS)

    Long, Jiateng; Gao, Ai; Cui, Pingyuan

    2015-07-01

    Controllable set analysis is a beneficial method in planetary landing mission design: it supports the selection of feasible entry states that achieve landing accuracy and satisfy entry path constraints. In view of the severe impact of model uncertainties on planetary landing safety and accuracy, the purpose of this paper is to investigate the controllable set under uncertainties between the on-board model and the real situation. Controllable set analysis under model uncertainties is composed of controllable union set (CUS) analysis and controllable intersection set (CIS) analysis. Definitions of CUS and CIS are given, and a computational method for them based on the Gauss pseudospectral method is presented. Their application to the analysis of entry-state distributions under uncertainty, and to the robustness of nominal entry-state selection, is illustrated for Mars entry cases with ballistic-coefficient, lift-to-drag-ratio, and atmospheric uncertainties. With analysis of CUS and CIS, the robustness of entry-state selection and entry trajectory to model uncertainties can be guaranteed, thus enhancing safety, reliability, and accuracy under model uncertainties during planetary entry and landing.

  9. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    SciTech Connect

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-09-28

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  10. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    SciTech Connect

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
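
    The two records above describe localized straight-line fitting near short-circuit current; the sketch below illustrates the basic idea with ordinary least squares on synthetic I-V points. The Bayesian treatment, data, and automatic window selection of the actual work are not reproduced here.

    ```python
    # Straight-line extrapolation of I-V points to V = 0 to estimate Isc with
    # a fit uncertainty. Ordinary least squares stands in for the objective
    # Bayesian regression of the records above; the synthetic I-V data and
    # window choice are assumptions.
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic I-V points near short circuit: gentle slope plus noise.
    v = np.linspace(0.0, 0.08, 15)                 # volts
    i_true = 8.20 - 1.5 * v                        # amperes, hypothetical device
    i_meas = i_true + rng.normal(0.0, 0.004, v.size)

    # Straight-line fit I(V) = a*V + b; Isc is the intercept b at V = 0.
    coeffs, cov = np.polyfit(v, i_meas, deg=1, cov=True)
    isc = coeffs[1]
    u_isc = np.sqrt(cov[1, 1])                     # standard uncertainty of the intercept

    print(f"Isc = {isc:.4f} A, u(Isc) = {u_isc:.4f} A")
    ```

    Widening the data window shrinks this fit uncertainty even when the straight-line model stops being locally valid, which is the model-discrepancy pitfall the records address.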

  11. Are models, uncertainty, and dispute resolution compatible?

    NASA Astrophysics Data System (ADS)

    Anderson, J. D.; Wilson, J. L.

    2013-12-01

    Models and their uncertainty often move from an objective use in planning and decision making into the regulatory environment, then sometimes on to dispute resolution through litigation or other legal forums. Through this last transition, whatever objectivity the models and uncertainty assessment may have once possessed becomes biased (or more biased) as each party chooses to exaggerate either the goodness of a model, or its worthlessness, depending on which view is in its best interest. If worthlessness is desired, then what was uncertain becomes unknown, or even unknowable. If goodness is desired, then precision and accuracy are often exaggerated and uncertainty, if it is explicitly recognized, encompasses only some parameters or conceptual issues, ignores others, and may minimize the uncertainty that it accounts for. In dispute resolution, how well is the adversarial process able to deal with these biases? The challenge is that they are often cloaked in computer graphics and animations that appear to lend realism to what could be mostly fancy, or even a manufactured outcome. While junk science can be challenged through appropriate motions in federal court, and in most state courts, it is not unusual for biased or even incorrect modeling results, or conclusions based on incorrect results, to be permitted to be presented at trial. Courts allow opinions that are based on a "reasonable degree of scientific certainty," but when that 'certainty' is grossly exaggerated by an expert, one way or the other, how well do the courts determine that someone has stepped over the line? Trials are based on the adversary system of justice, so opposing and often irreconcilable views are commonly allowed, leaving it to the judge or jury to sort out the truth. Can advances in scientific theory and engineering practice, related to both modeling and uncertainty, help address this situation and better ensure that juries and judges see more objective modeling results, or at least see

  12. Optimizing Integrated Terminal Airspace Operations Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Bosson, Christabelle; Xue, Min; Zelinski, Shannon

    2014-01-01

    In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.

  13. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2011-09-01

    Verification and validation (V&V) are playing more important roles to quantify uncertainties and realize high fidelity simulations in engineering system analyses, such as transients occurring in a complex nuclear reactor system. Traditional V&V in reactor system analysis has focused more on the validation part or has not differentiated verification from validation. The traditional approach to uncertainty quantification is based on a 'black box' approach. The simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also not efficient to perform sensitivity analysis. Contrary to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents the forward sensitivity analysis as a method to help uncertainty quantification. By including time step and potentially spatial step as special sensitivity parameters, the forward sensitivity method is extended as one method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflect global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty quantification. By knowing the relative sensitivity of time and space steps with other interested physical parameters, the simulation is allowed

  14. SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data

    SciTech Connect

    Williams, Mark L; Rearden, Bradley T

    2008-01-01

    Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.

  15. Uncertainties in Arctic Precipitation

    NASA Astrophysics Data System (ADS)

    Majhi, I.; Alexeev, V. A.; Cherry, J. E.; Cohen, J. L.; Groisman, P. Y.

    2012-12-01

    Arctic precipitation is riddled with measurement biases, and addressing this problem is imperative. Our study focuses on comparing various datasets, analyzing their biases for the Siberia region, and the caution that is needed when using them. Five sources of data were used: NOAA products (the raw data and Bogdanova's correction), Yang's correction technique, and two reanalysis products (ERA-Interim and NCEP). The reanalysis datasets performed better for some months than Yang's product, which tends to overestimate precipitation, and the raw dataset, which tends to underestimate it. The sources of bias vary from topography to wind to missing data. The final three products chosen show higher biases during the winter and spring seasons. Emphasis on equations that incorporate blizzards, blowing snow, and higher wind speeds is necessary for regions influenced by any or all of these factors; Bogdanova's correction technique is the most robust of all the datasets analyzed and gives the most reasonable results. One of our future goals is to analyze the impact of precipitation uncertainties on water budget analysis for the Siberian rivers.

  16. Pandemic influenza: certain uncertainties

    PubMed Central

    Morens, David M.; Taubenberger, Jeffery K.

    2011-01-01

    For at least five centuries, major epidemics and pandemics of influenza have occurred unexpectedly and at irregular intervals. Despite the modern notion that pandemic influenza is a distinct phenomenon obeying such constant (if incompletely understood) rules as dramatic genetic change, cyclicity, “wave” patterning, virus replacement, and predictable epidemic behavior, much evidence suggests the opposite. Although there is much that we know about pandemic influenza, there appears to be much more that we do not know. Pandemics arise as a result of various genetic mechanisms, have no predictable patterns of mortality among different age groups, and vary greatly in how and when they arise and recur. Some are followed by new pandemics, whereas others fade gradually or abruptly into long-term endemicity. Human influenza pandemics have been caused by viruses that evolved singly or in co-circulation with other pandemic virus descendants and often have involved significant transmission between, or establishment of, viral reservoirs within other animal hosts. In recent decades, pandemic influenza has continued to produce numerous unanticipated events that expose fundamental gaps in scientific knowledge. Influenza pandemics appear to be not a single phenomenon but a heterogeneous collection of viral evolutionary events whose similarities are overshadowed by important differences, the determinants of which remain poorly understood. These uncertainties make it difficult to predict influenza pandemics and, therefore, to adequately plan to prevent them. PMID:21706672

  17. Uncertainties in nuclear fission data

    NASA Astrophysics Data System (ADS)

    Talou, Patrick; Kawano, Toshihiko; Chadwick, Mark B.; Neudecker, Denise; Rising, Michael E.

    2015-03-01

    We review the current status of our knowledge of nuclear fission data, and quantify uncertainties related to each fission observable whenever possible. We also discuss the roles that theory and experiment play in reducing those uncertainties, contributing to the improvement of our fundamental understanding of the nuclear fission process as well as of evaluated nuclear data libraries used in nuclear applications.

  18. Mama Software Features: Uncertainty Testing

    SciTech Connect

    Ruggiero, Christy E.; Porter, Reid B.

    2014-05-30

    This document reviews how the uncertainty in the calculations is being determined with test image data. The results of this testing give an ‘initial uncertainty’ number that can be used to estimate the ‘back end’ uncertainty in digital image quantification. Statisticians are refining these numbers as part of a UQ effort.

  19. Research strategies for addressing uncertainties

    USGS Publications Warehouse

    Busch, David E.; Brekke, Levi D.; Averyt, Kristen; Jardine, Angela; Welling, Leigh; Garfin, Gregg; Jardine, Angela; Merideth, Robert; Black, Mary; LeRoy, Sarah

    2013-01-01

    Research Strategies for Addressing Uncertainties builds on descriptions of research needs presented elsewhere in the book; describes current research efforts and the challenges and opportunities to reduce the uncertainties of climate change; explores ways to improve the understanding of changes in climate and hydrology; and emphasizes the use of research to inform decision making.

  20. Uncertainty in Integrated Assessment Scenarios

    SciTech Connect

    Mort Webster

    2005-10-17

    The determination of climate policy is a decision under uncertainty. The uncertainty in future climate change impacts is large, as is the uncertainty in the costs of potential policies. Rational and economically efficient policy choices will therefore seek to balance the expected marginal costs with the expected marginal benefits. This approach requires that the risks of future climate change be assessed. The decision process need not be formal or quantitative for descriptions of the risks to be useful. Whatever the decision procedure, a useful starting point is to have as accurate a description of climate risks as possible. Given the goal of describing uncertainty in future climate change, we need to characterize the uncertainty in the main causes of uncertainty in climate impacts. One of the major drivers of uncertainty in future climate change is the uncertainty in future emissions, both of greenhouse gases and other radiatively important species such as sulfur dioxide. In turn, the drivers of uncertainty in emissions are uncertainties in the determinants of the rate of economic growth and in the technologies of production and how those technologies will change over time. This project uses historical experience and observations from a large number of countries to construct statistical descriptions of variability and correlation in labor productivity growth and in AEEI. The observed variability then provides a basis for constructing probability distributions for these drivers. The variance of uncertainty in growth rates can be further modified by expert judgment if it is believed that future variability will differ from the past. But often, expert judgment is more readily applied to projected median or expected paths through time. Analysis of past variance and covariance provides initial assumptions about future uncertainty for quantities that are less intuitive and difficult for experts to estimate, and these variances can be normalized and then applied to mean

  1. Inverse covariance simplification for efficient uncertainty management

    NASA Astrophysics Data System (ADS)

    Jalobeanu, A.; Gutiérrez, J. A.

    2007-11-01

    When it comes to manipulating uncertain knowledge such as noisy observations of physical quantities, one may ask how to do it in a simple way. Processing corrupted signals or images always propagates the uncertainties from the data to the final results, whether these errors are explicitly computed or not. When such error estimates are provided, it is crucial to handle them in such a way that their interpretation, or their use in subsequent processing steps, remain user-friendly and computationally tractable. A few authors follow a Bayesian approach and provide uncertainties as an inverse covariance matrix. Despite its apparent sparsity, this matrix contains many small terms that carry little information. Methods have been developed to select the most significant entries, through the use of information-theoretic tools for instance. One has to find a Gaussian pdf that is close enough to the posterior pdf, and with a small number of non-zero coefficients in the inverse covariance matrix. We propose to restrict the search space to Markovian models (where only neighbors can interact), well-suited to signals or images. The originality of our approach is in conserving the covariances between neighbors while setting to zero the entries of the inverse covariance matrix for all other variables. This fully constrains the solution, and the computation is performed via a fast, alternate minimization scheme involving quadratic forms. The Markovian structure advantageously reduces the complexity of Bayesian updating (where the simplified pdf is used as a prior). Moreover, uncertainties exhibit the same temporal or spatial structure as the data.
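
    The connection between Markovian (neighbour-only) interactions and a sparse inverse covariance can be illustrated with a first-order autoregressive sequence, whose dense covariance has an exactly tridiagonal inverse; the sketch below demonstrates only that structural fact, not the authors' alternate-minimization scheme.

    ```python
    # Illustration of the Markov/sparse-precision connection exploited above:
    # an AR(1) (hence Markovian) sequence has a dense covariance matrix but an
    # exactly tridiagonal inverse covariance.
    import numpy as np

    n, rho = 8, 0.8
    idx = np.arange(n)
    cov = rho ** np.abs(idx[:, None] - idx[None, :])   # dense AR(1) covariance

    precision = np.linalg.inv(cov)
    off_tridiag = precision.copy()
    for k in (-1, 0, 1):
        off_tridiag -= np.diag(np.diag(precision, k=k), k=k)

    print("largest |entry| outside the tridiagonal band:",
          np.abs(off_tridiag).max())                   # numerically zero
    ```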

  2. Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.

    NASA Technical Reports Server (NTRS)

    Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.

    2013-01-01

    We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic

  3. UNCERTAINTIES IN ATOMIC DATA AND THEIR PROPAGATION THROUGH SPECTRAL MODELS. I

    SciTech Connect

    Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.

    2013-06-10

    We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II].

  4. Equivalence theorem of uncertainty relations

    NASA Astrophysics Data System (ADS)

    Li, Jun-Li; Qiao, Cong-Feng

    2017-01-01

    We present an equivalence theorem to unify the two classes of uncertainty relations, i.e. the variance-based ones and the entropic forms, showing that the entropy of an operator in a quantum system can be built from the variances of a set of commutative operators. This means that an uncertainty relation in the language of entropy may be mapped onto a variance-based one, and vice versa. Employing the equivalence theorem, alternative formulations of entropic uncertainty relations are obtained for the qubit system that are stronger than the existing ones in the literature, and variance-based uncertainty relations for spin systems are reached from the corresponding entropic uncertainty relations.

  5. Reformulating the Quantum Uncertainty Relation.

    PubMed

    Li, Jun-Li; Qiao, Cong-Feng

    2015-08-03

    The uncertainty principle is one of the cornerstones of quantum theory. In the literature, there are two types of uncertainty relations, the operator form concerning the variances of physical observables and the entropy form related to entropic quantities. Both these forms are inequalities involving pairwise observables, and are found to be nontrivial to extend to multiple observables. In this work we introduce a new form of uncertainty relation which may give complete trade-off relations for variances of observables in pure and mixed quantum systems. Unlike the prevailing uncertainty relations, which are either quantum state dependent or not directly measurable, our bounds for variances of observables are quantum state independent and immune from the "triviality" problem of having zero expectation values. Furthermore, the new uncertainty relation may provide a geometric explanation for the reason why there are limitations on the simultaneous determination of different observables in N-dimensional Hilbert space.

  6. SENSIT: a cross-section and design sensitivity and uncertainty analysis code. [In FORTRAN for CDC-7600, IBM 360]

    SciTech Connect

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
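
    The core propagation step in this kind of sensitivity/uncertainty analysis is the first-order "sandwich rule", which combines a sensitivity profile with a cross-section covariance matrix to give a response variance; a minimal sketch with hypothetical three-group numbers follows.

    ```python
    # First-order sensitivity/uncertainty propagation (the "sandwich rule"):
    # response variance = S^T C S, with S the vector of relative sensitivities
    # and C the relative covariance matrix of the input cross sections.
    # The three-group numbers are hypothetical.
    import numpy as np

    # Relative sensitivities dR/R per dx/x for three energy groups.
    S = np.array([0.12, -0.35, 0.08])

    # Relative covariance matrix of the group cross sections (correlated).
    C = np.array([[0.0025, 0.0010, 0.0000],
                  [0.0010, 0.0040, 0.0005],
                  [0.0000, 0.0005, 0.0016]])

    var_R = S @ C @ S                 # relative variance of the response
    print(f"relative standard deviation of the response: {np.sqrt(var_R):.4%}")
    ```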

  7. Development of a Prototype Model-Form Uncertainty Knowledge Base

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.

    2016-01-01

    Uncertainties are generally classified as either aleatory or epistemic. Aleatory uncertainties are those attributed to random variation, either naturally or through manufacturing processes. Epistemic uncertainties are generally attributed to a lack of knowledge. One type of epistemic uncertainty is called model-form uncertainty. The term model-form means that, among the choices to be made during an analysis within a design process, there are different forms of the analysis process, each of which gives different results for the same configuration at the same flight conditions. Examples of model-form uncertainties include the grid density, grid type, and solver type used within a computational fluid dynamics code, or the choice of the number and type of model elements within a structures analysis. The objectives of this work are to identify and quantify a representative set of model-form uncertainties and to make this information available to designers through an interactive knowledge base (KB). The KB can then be used during probabilistic design sessions, so as to enable the possible reduction of uncertainties in the design process through resource investment. An extensive literature search has been conducted to identify and quantify typical model-form uncertainties present within aerospace design. An initial attempt has been made to assemble the results of this literature search into a searchable KB, usable in real time during probabilistic design sessions. A concept of operations and the basic structure of a model-form uncertainty KB are described. Key operations within the KB are illustrated. Current limitations of the KB and possible workarounds are explained.

  8. Understanding and reducing statistical uncertainties in nebular abundance determinations

    NASA Astrophysics Data System (ADS)

    Wesson, R.; Stock, D. J.; Scicluna, P.

    2012-06-01

    Whenever observations are compared to theories, an estimate of the uncertainties associated with the observations is vital if the comparison is to be meaningful. However, many or even most determinations of temperatures, densities and abundances in photoionized nebulae do not quote the associated uncertainty. Those that do typically propagate the uncertainties using analytical techniques which rely on assumptions that generally do not hold. Motivated by this issue, we have developed Nebular Empirical Analysis Tool (NEAT), a new code for calculating chemical abundances in photoionized nebulae. The code carries out a standard analysis of lists of emission lines using long-established techniques to estimate the amount of interstellar extinction, calculate representative temperatures and densities, compute ionic abundances from both collisionally excited lines and recombination lines, and finally to estimate total elemental abundances using an ionization correction scheme. NEAT uses a Monte Carlo technique to robustly propagate uncertainties from line flux measurements through to the derived abundances. We show that, for typical observational data, this approach is superior to analytic estimates of uncertainties. NEAT also accounts for the effect of upward biasing on measurements of lines with low signal-to-noise ratio, allowing us to accurately quantify the effect of this bias on abundance determinations. We find not only that the effect can result in significant overestimates of heavy element abundances derived from weak lines, but also that taking it into account reduces the uncertainty of these abundance determinations. Finally, we investigate the effect of possible uncertainties in R, the ratio of selective-to-total extinction, on abundance determinations. We find that the uncertainty due to this parameter is negligible compared to the statistical uncertainties due to typical line flux measurement uncertainties.
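
    A generic Monte Carlo propagation of line-flux uncertainties through a derived quantity looks like the sketch below; this is not the NEAT code itself, and the fluxes, uncertainties, and simple flux-ratio diagnostic are hypothetical.

    ```python
    # Monte Carlo propagation of emission-line flux uncertainties through a
    # derived quantity, in the spirit of the approach above. All numbers and
    # the flux-ratio diagnostic are hypothetical.
    import numpy as np

    rng = np.random.default_rng(3)
    n_draws = 100_000

    # Measured fluxes (arbitrary units) with Gaussian standard uncertainties;
    # the second line is weak, i.e. low signal-to-noise.
    f_strong = rng.normal(100.0, 2.0, n_draws)
    f_weak = rng.normal(5.0, 1.0, n_draws)

    ratio = f_strong / f_weak        # stand-in for a temperature/abundance diagnostic

    med = np.median(ratio)
    lo, hi = np.percentile(ratio, [15.87, 84.13])   # ~1-sigma equivalent interval
    print(f"ratio = {med:.1f} (+{hi - med:.1f} / -{med - lo:.1f})")
    ```

    The skewed distribution of the ratio, driven by the weak line in the denominator, shows why analytic Gaussian propagation can misstate the uncertainty for low signal-to-noise measurements.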

  9. An Iterative Uncertainty Assessment Technique for Environmental Modeling

    SciTech Connect

    Engel, David W.; Liebetrau, Albert M.; Jarman, Kenneth D.; Ferryman, Thomas A.; Scheibe, Timothy D.; Didier, Brett T.

    2004-06-28

    The reliability of and confidence in predictions from model simulations are crucial; these predictions can significantly affect risk assessment decisions. For example, the fate of contaminants at the U.S. Department of Energy's Hanford Site has critical impacts on long-term waste management strategies. In the uncertainty estimation efforts for the Hanford Site-Wide Groundwater Modeling program, computational issues severely constrain both the number of uncertain parameters that can be considered and the degree of realism that can be included in the models. Substantial improvements in the overall efficiency of uncertainty analysis are needed to fully explore and quantify significant sources of uncertainty. We have combined state-of-the-art statistical and mathematical techniques in a unique iterative, limited sampling approach to efficiently quantify both local and global prediction uncertainties resulting from model input uncertainties. The approach is designed for application to widely diverse problems across multiple scientific domains. Results are presented for both an analytical model where the response surface is 'known' and a simplified contaminant fate transport and groundwater flow model. The results show that our iterative method for approximating a response surface (for subsequent calculation of uncertainty estimates) of specified precision requires less computing time than traditional approaches based upon noniterative sampling methods.

  10. TSUNAMI Primer: A Primer for Sensitivity/Uncertainty Calculations with SCALE

    SciTech Connect

    Rearden, Bradley T; Mueller, Don; Bowman, Stephen M; Busch, Robert D.; Emerson, Scott

    2009-01-01

    This primer presents examples in the application of the SCALE/TSUNAMI tools to generate keff sensitivity data for one- and three-dimensional models using TSUNAMI-1D and -3D and to examine uncertainties in the computed keff values due to uncertainties in the cross-section data used in their calculation. The proper use of unit cell data and need for confirming the appropriate selection of input parameters through direct perturbations are described. The uses of sensitivity and uncertainty data to identify and rank potential sources of computational bias in an application system and TSUNAMI tools for assessment of system similarity using sensitivity and uncertainty criteria are demonstrated. Uses of these criteria in trending analyses to assess computational biases, bias uncertainties, and gap analyses are also described. Additionally, an application of the data adjustment tool TSURFER is provided, including identification of specific details of sources of computational bias.

  11. Exploration of Uncertainty in Glacier Modelling

    NASA Technical Reports Server (NTRS)

    Thompson, David E.

    1999-01-01

    There are procedures and methods for verification of coding algebra and for validation of models and calculations that are in use in the aerospace computational fluid dynamics (CFD) community. These methods would be efficacious if used by the glacier dynamics modelling community. This paper is a presentation of some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modelling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modelling community, and establishes a context for these within overall solution quality assessment. Finally, an information architecture and interactive interface are introduced and advocated. This Integrated Cryospheric Exploration (ICE) Environment is proposed for exploring and managing sources of uncertainty in glacier modelling codes and methods, and for supporting scientific numerical exploration and verification. The details and functionality of this Environment are described based on modifications of a system already developed for CFD modelling and analysis.

  12. Design of forging process variables under uncertainties

    NASA Astrophysics Data System (ADS)

    Repalle, Jalaja; Grandhi, Ramana V.

    2005-02-01

    Forging is a complex nonlinear process that is vulnerable to various manufacturing anomalies, such as variations in billet geometry, billet/die temperatures, material properties, and workpiece and forging equipment positional errors. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion, and reduced productivity. Identifying, quantifying, and controlling the uncertainties will reduce variability risk in a manufacturing environment, which will minimize the overall production cost. In this article, various uncertainties that affect the forging process are identified, and their cumulative effect on the forging tool life is evaluated. Because the forging process simulation is time-consuming, a response surface model is used to reduce computation time by establishing a relationship between the process performance and the critical process variables. A robust design methodology is developed by incorporating reliability-based optimization techniques to obtain sound forging components. A case study of an automotive-component forging-process design is presented to demonstrate the applicability of the method.

  13. Fluid flow dynamics under location uncertainty

    NASA Astrophysics Data System (ADS)

    Mémin, Etienne

    2014-03-01

    We present a derivation of a stochastic model of the Navier-Stokes equations that relies on a decomposition of the velocity fields into a differentiable drift component and a time-uncorrelated uncertainty random term. This type of decomposition is reminiscent in spirit of the classical Reynolds decomposition. However, the random velocity fluctuations considered here are not differentiable with respect to time, and they must be handled through stochastic calculus. The dynamics associated with the differentiable drift component is derived from a stochastic version of the Reynolds transport theorem. It includes in its general form an uncertainty-dependent "subgrid" bulk formula that cannot be immediately related to the usual Boussinesq eddy viscosity assumption constructed from the thermal molecular agitation analogy. This formulation, emerging from uncertainties on the fluid parcels' location, explains from another viewpoint some subgrid eddy diffusion models currently used in computational fluid dynamics or in geophysical sciences and paves the way for new large-scale flow modelling. We finally describe an application of our formalism to the derivation of stochastic versions of the shallow water equations or to the definition of reduced-order dynamical systems.

  14. Quantifying and Qualifying USGS ShakeMap Uncertainty

    USGS Publications Warehouse

    Wald, David J.; Lin, Kuo-Wan; Quitoriano, Vincent

    2008-01-01

    We describe algorithms for quantifying and qualifying uncertainties associated with USGS ShakeMap ground motions. The uncertainty values computed consist of latitude/longitude grid-based multiplicative factors that scale the standard deviation associated with the ground motion prediction equation (GMPE) used within the ShakeMap algorithm for estimating ground motions. The resulting grid-based 'uncertainty map' is essential for evaluation of losses derived using ShakeMaps as the hazard input. For ShakeMap, ground motion uncertainty at any point is dominated by two main factors: (i) the influence of any proximal ground motion observations, and (ii) the uncertainty of estimating ground motions from the GMPE, most notably, elevated uncertainty due to initial, unconstrained source rupture geometry. The uncertainty is highest for larger magnitude earthquakes when source finiteness is not yet constrained and, hence, the distance to rupture is also uncertain. In addition to a spatially dependent, quantitative assessment, many users may prefer a simple, qualitative grading for the entire ShakeMap. We developed a grading scale that allows one to quickly gauge the appropriate level of confidence when using rapidly produced ShakeMaps as part of the post-earthquake decision-making process or for qualitative assessments of archived or historical earthquake ShakeMaps. We describe an uncertainty letter grading ('A' through 'F', for high to poor quality, respectively) based on the uncertainty map. A middle-range ('C') grade corresponds to a ShakeMap for a moderate-magnitude earthquake suitably represented with a point-source location. Lower grades 'D' and 'F' are assigned for larger events (M>6) where finite-source dimensions are not yet constrained. The addition of ground motion observations (or observed macroseismic intensities) reduces uncertainties over data-constrained portions of the map. Higher grades ('A' and 'B') correspond to ShakeMaps with constrained fault dimensions

  15. Uncertainties of Mayak urine data

    SciTech Connect

    Miller, Guthrie; Vostrotin, Vadim; Vvdensky, Vladimir

    2008-01-01

    For internal dose calculations for the Mayak worker epidemiological study, quantitative estimates of uncertainty of the urine measurements are necessary. Some of the data consist of measurements of 24h urine excretion on successive days (e.g. 3 or 4 days). In a recent publication, dose calculations were done where the uncertainty of the urine measurements was estimated starting from the statistical standard deviation of these replicate measurements. This approach is straightforward and accurate when the number of replicate measurements is large; however, a Monte Carlo study showed it to be problematic for the actual number of replicate measurements (median from 3 to 4). Also, it is sometimes important to characterize the uncertainty of a single urine measurement. Therefore, this alternate method has been developed. A method of parameterizing the uncertainty of Mayak urine bioassay measurements is described. The Poisson lognormal model is assumed and data from 63 cases (1099 urine measurements in all) are used to empirically determine the lognormal normalization uncertainty, given the measurement uncertainties obtained from count quantities. The natural logarithm of the geometric standard deviation of the normalization uncertainty is found to be in the range 0.31 to 0.35 including a measurement component estimated to be 0.2.

  16. Shock Layer Radiation Modeling and Uncertainty for Mars Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Brandis, Aaron M.; Sutton, Kenneth

    2012-01-01

    A model for simulating nonequilibrium radiation from Mars entry shock layers is presented. A new chemical kinetic rate model is developed that provides good agreement with recent EAST and X2 shock tube radiation measurements. This model includes a CO dissociation rate that is a factor of 13 larger than the rate used widely in previous models. Uncertainties in the proposed rates are assessed along with uncertainties in translational-vibrational relaxation modeling parameters. The stagnation point radiative flux uncertainty due to these flowfield modeling parameter uncertainties is computed to vary from 50 to 200% for a range of free-stream conditions, with densities ranging from 5e-5 to 5e-4 kg/m3 and velocities ranging from 6.3 to 7.7 km/s. These conditions cover the range of anticipated peak radiative heating conditions for proposed hypersonic inflatable aerodynamic decelerators (HIADs). Modeling parameters for the radiative spectrum are compiled along with a non-Boltzmann rate model for the dominant radiating molecules, CO, CN, and C2. A method for treating non-local absorption in the non-Boltzmann model is developed, which is shown to result in up to a 50% increase in the radiative flux through absorption by the CO 4th Positive band. The sensitivity of the radiative flux to the radiation modeling parameters is presented and the uncertainty for each parameter is assessed. The stagnation point radiative flux uncertainty due to these radiation modeling parameter uncertainties is computed to vary from 18 to 167% for the considered range of free-stream conditions. The total radiative flux uncertainty is computed as the root sum square of the flowfield and radiation parametric uncertainties, which results in total uncertainties ranging from 50 to 260%. The main contributors to these significant uncertainties are the CO dissociation rate and the CO heavy-particle excitation rates. Applying the baseline flowfield and radiation models developed in this work, the
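
    The total uncertainty quoted above is assembled as the root sum square of the flowfield and radiation parametric contributions; the quoted endpoints can be checked directly from the ranges given in the abstract.

    ```python
    # Root-sum-square (RSS) combination of independent parametric uncertainty
    # contributions, using the percentage ranges quoted in the abstract.
    import math

    def rss(*components):
        return math.sqrt(sum(c ** 2 for c in components))

    print(rss(50.0, 18.0))    # ~53%  -> consistent with the reported ~50% lower bound
    print(rss(200.0, 167.0))  # ~260% -> consistent with the reported 260% upper bound
    ```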

  17. Measurement uncertainty of lactase-containing tablets analyzed with FTIR.

    PubMed

    Paakkunainen, Maaret; Kohonen, Jarno; Reinikainen, Satu-Pia

    2014-01-01

    Uncertainty is one of the most critical aspects in determination of measurement reliability. In order to ensure accurate measurements, results need to be traceable and uncertainty measurable. In this study, homogeneity of FTIR samples is determined with a combination of variographic and multivariate approaches. An approach for estimating the uncertainty within an individual sample, as well as within repeated samples, is introduced. FTIR samples containing two commercial pharmaceutical lactase products (LactaNON and Lactrase) are applied as an example of the procedure. The results showed that the approach is suitable for the purpose. The sample pellets were quite homogeneous, since the total uncertainty of each pellet varied between 1.5% and 2.5%. The heterogeneity within a tablet strip was found to be dominant, as 15-20 tablets have to be analyzed in order to achieve an expanded uncertainty level below 5.0%. Uncertainty arising from the FTIR instrument was below 1.0%. The uncertainty estimates are computed directly from FTIR spectra without any concentration information of the analyte.

  18. The uncertainty of counting at a defined solid angle

    NASA Astrophysics Data System (ADS)

    Pommé, S.

    2015-06-01

    Specific uncertainty components of counting at a defined solid angle are discussed. It is potentially an extremely accurate technique for primary standardisation of activity of alpha emitters and low-energy x-ray emitters. Owing to its reproducibility, it is very well suited for half-life measurements. Considered sources of uncertainty are 1) source-detector geometry, 2) solid-angle calculation, 3) energy loss and self-absorption, 4) scattering, 5) detection efficiency. Other sources of uncertainty, such as source weighing, counting, dead time and decay data are common to other standardisation methods. Statistical uncertainty propagation formulas are presented for the solid angle subtended by a circular detector to radioactive sources. Computer simulations were performed to investigate aspects of particle scattering.
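
    For the idealized case of an on-axis point source and a circular aperture, the solid angle and a first-order propagation of the geometry uncertainties can be sketched as follows; the dimensions are illustrative, and real defined-solid-angle setups additionally require the extended-source, self-absorption, and scattering corrections discussed above.

    ```python
    # Solid angle subtended by a circular aperture of radius r at an on-axis
    # point source a distance d away: Omega = 2*pi*(1 - d/sqrt(d**2 + r**2)),
    # with first-order propagation of u(d) and u(r). Numbers are illustrative.
    import math

    d, u_d = 100.00e-3, 0.02e-3       # source-aperture distance and uncertainty, m
    r, u_r = 10.000e-3, 0.005e-3      # aperture radius and uncertainty, m

    hyp = math.hypot(d, r)
    omega = 2.0 * math.pi * (1.0 - d / hyp)

    # Analytic partial derivatives of Omega with respect to d and r.
    dOmega_dd = -2.0 * math.pi * r ** 2 / hyp ** 3
    dOmega_dr = 2.0 * math.pi * d * r / hyp ** 3

    u_omega = math.sqrt((dOmega_dd * u_d) ** 2 + (dOmega_dr * u_r) ** 2)
    print(f"Omega = {omega:.6e} sr, u(Omega)/Omega = {u_omega / omega:.2e}")
    ```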

  19. Analysis and Reduction of Complex Networks Under Uncertainty

    SciTech Connect

    Knio, Omar M

    2014-04-09

    This is a collaborative proposal that aims at developing new methods for the analysis and reduction of complex multiscale networks under uncertainty. The approach is based on combining methods of computational singular perturbation (CSP) and probabilistic uncertainty quantification. In deterministic settings, CSP yields asymptotic approximations of reduced-dimensionality “slow manifolds” on which a multiscale dynamical system evolves. Introducing uncertainty raises fundamentally new issues, particularly concerning its impact on the topology of slow manifolds, and means to represent and quantify associated variability. To address these challenges, this project uses polynomial chaos (PC) methods to reformulate uncertain network models, and to analyze them using CSP in probabilistic terms. Specific objectives include (1) developing effective algorithms that can be used to illuminate fundamental and unexplored connections among model reduction, multiscale behavior, and uncertainty, and (2) demonstrating the performance of these algorithms through applications to model problems.

  20. Uncertainty relations for characteristic functions

    NASA Astrophysics Data System (ADS)

    Rudnicki, Łukasz; Tasca, D. S.; Walborn, S. P.

    2016-02-01

    We present the uncertainty relation for the characteristic functions (ChUR) of the quantum mechanical position and momentum probability distributions. This inequality is more general than the Heisenberg uncertainty relation and is saturated in two extreme cases for wave functions described by periodic Dirac combs. We further discuss a broad spectrum of applications of the ChUR; in particular, we constrain quantum optical measurements involving general detection apertures and provide the uncertainty relation that is relevant for loop quantum cosmology. A method to measure the characteristic function directly using an auxiliary qubit is also briefly discussed.

  1. Assessing uncertainty in physical constants

    NASA Astrophysics Data System (ADS)

    Henrion, Max; Fischhoff, Baruch

    1986-09-01

    Assessing the uncertainty due to possible systematic errors in a physical measurement unavoidably involves an element of subjective judgment. Examination of historical measurements and recommended values for the fundamental physical constants shows that the reported uncertainties have a consistent bias towards underestimating the actual errors. These findings are comparable to findings of persistent overconfidence in psychological research on the assessment of subjective probability distributions. Awareness of these biases could help in interpreting the precision of measurements, as well as provide a basis for improving the assessment of uncertainty in measurements.

  2. Uncertainty quantification of squeal instability via surrogate modelling

    NASA Astrophysics Data System (ADS)

    Nobari, Amir; Ouyang, Huajiang; Bannister, Paul

    2015-08-01

    One of the major issues that car manufacturers are facing is the noise and vibration of brake systems. Of the different sorts of noise and vibration which a brake system may generate, squeal, an irritating high-frequency noise, costs the manufacturers significantly. Despite considerable research that has been conducted on brake squeal, the root cause of squeal is still not fully understood. The most common assumption, however, is mode-coupling. Complex eigenvalue analysis is the most widely used approach to the analysis of brake squeal problems. One of the major drawbacks of this technique, nevertheless, is that the effects of variability and uncertainty are not included in the results. Apparently, uncertainty and variability are two inseparable parts of any brake system. Uncertainty is mainly caused by friction, contact, wear and thermal effects while variability mostly stems from the manufacturing process, material properties and component geometries. Evaluating the effects of uncertainty and variability in the complex eigenvalue analysis improves the predictability of noise propensity and helps produce a more robust design. The biggest hurdle in the uncertainty analysis of brake systems is the computational cost and time. Most uncertainty analysis techniques rely on the results of many deterministic analyses. A full finite element model of a brake system typically consists of millions of degrees-of-freedom and many load cases. Running time of such models is so long that automotive industry is reluctant to do many deterministic analyses. This paper, instead, proposes an efficient method of uncertainty propagation via surrogate modelling. A surrogate model of a brake system is constructed in order to reproduce the outputs of the large-scale finite element model and overcome the issue of computational workloads. The probability distribution of the real part of an unstable mode can then be obtained by using the surrogate model with a massive saving of
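
    A minimal sketch of the surrogate-modelling idea follows, with a quadratic response surface standing in for the full finite-element complex-eigenvalue analysis; the toy function, the training design, and the input distributions for friction and contact stiffness are assumptions.

    ```python
    # Surrogate-based uncertainty propagation: fit a quadratic response surface
    # to a few runs of an "expensive" model, then Monte Carlo the cheap
    # surrogate to estimate the output distribution.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(4)

    def expensive_model(mu, k):
        """Stand-in for the real part of an unstable mode vs. friction and stiffness."""
        return 5.0 * (mu - 0.4) ** 2 + 0.8 * mu * k - 0.3 * k + 0.1

    # A small design of "expensive" runs over friction coefficient and a
    # normalized contact stiffness.
    X_train = rng.uniform([0.3, 0.5], [0.6, 1.5], size=(30, 2))
    y_train = expensive_model(X_train[:, 0], X_train[:, 1])

    surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    surrogate.fit(X_train, y_train)

    # Monte Carlo on the surrogate with assumed input distributions.
    n = 100_000
    mu = rng.normal(0.45, 0.03, n)          # uncertain friction coefficient
    k = rng.normal(1.0, 0.10, n)            # variable contact stiffness
    samples = surrogate.predict(np.column_stack([mu, k]))

    print(f"mean = {samples.mean():.3f}, "
          f"P(real part > 0.2) = {(samples > 0.2).mean():.3f}")
    ```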

  3. Estimations of uncertainties of frequencies

    NASA Astrophysics Data System (ADS)

    Eyer, Laurent; Nicoletti, Jean-Marc; Morgenthaler, Stephan

    2016-10-01

    Diverse variable phenomena in the Universe are periodic. Astonishingly, many of the periodic signals present in stars have timescales coinciding with human ones (from minutes to years). The periods of signals often have to be deduced from time series which are irregularly sampled and sparse; furthermore, correlations between the brightness measurements and their estimated uncertainties are common. The uncertainty on the frequency estimation is reviewed. We explore the astronomical and statistical literature, in both cases of regular and irregular samplings. The frequency uncertainty depends on the signal-to-noise ratio, the frequency, and the observational timespan. The shape of the light curve should also intervene, since sharp features such as exoplanet transits, stellar eclipses, and rising branches of pulsating stars give stringent constraints. We propose several procedures (parametric and nonparametric) to estimate the uncertainty on the frequency, which are subsequently tested against simulated data to assess their performance.

  4. Climate Projections and Uncertainty Communication.

    PubMed

    Joslyn, Susan L; LeClerc, Jared E

    2016-01-01

    Lingering skepticism about climate change might be due in part to the way climate projections are perceived by members of the public. Variability between scientists' estimates might give the impression that scientists disagree about the fact of climate change rather than about details concerning the extent or timing. Providing uncertainty estimates might clarify that the variability is due in part to quantifiable uncertainty inherent in the prediction process, thereby increasing people's trust in climate projections. This hypothesis was tested in two experiments. Results suggest that including uncertainty estimates along with climate projections leads to an increase in participants' trust in the information. Analyses explored the roles of time, place, demographic differences (e.g., age, gender, education level, political party affiliation), and initial belief in climate change. Implications are discussed in terms of the potential benefit of adding uncertainty estimates to public climate projections.

  5. Uncertainty quantification in reacting flow modeling.

    SciTech Connect

    Le Maître, Olivier P.; Reagan, Matthew T.; Knio, Omar M.; Ghanem, Roger Georges; Najm, Habib N.

    2003-10-01

    Uncertainty quantification (UQ) in the computational modeling of physical systems is important for scientific investigation, engineering design, and model validation. In this work we develop techniques for UQ based on spectral and pseudo-spectral polynomial chaos (PC) expansions, and we apply these constructions in computations of reacting flow. We develop and compare both intrusive and non-intrusive spectral PC techniques. In the intrusive construction, the deterministic model equations are reformulated using Galerkin projection into a set of equations for the time evolution of the field variable PC expansion mode strengths. The mode strengths relate specific parametric uncertainties to their effects on model outputs. The non-intrusive construction uses sampling of many realizations of the original deterministic model, and projects the resulting statistics onto the PC modes, arriving at the PC expansions of the model outputs. We investigate and discuss the strengths and weaknesses of each approach, and identify their utility under different conditions. We also outline areas where ongoing and future research are needed to address challenges with both approaches.
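    The non-intrusive construction described above can be sketched compactly. The following is a minimal, generic illustration (not the authors' implementation) of projecting a scalar model output onto a one-dimensional probabilists' Hermite chaos basis by Monte Carlo sampling; the model function and truncation order are placeholders.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials

def model(xi):
    """Hypothetical deterministic model driven by a standard-normal germ xi."""
    return np.exp(0.3 * xi) + 0.1 * xi**2

def nipc_coefficients(model, order=4, n_samples=200000, seed=0):
    """Estimate PC mode strengths c_k = E[f(xi) He_k(xi)] / E[He_k(xi)^2]
    by Monte Carlo projection (non-intrusive spectral projection)."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_samples)
    f = model(xi)
    coeffs = []
    for k in range(order + 1):
        basis_k = He.hermeval(xi, [0] * k + [1])                  # He_k at the samples
        coeffs.append(np.mean(f * basis_k) / math.factorial(k))   # E[He_k^2] = k! for N(0,1)
    return np.array(coeffs)

c = nipc_coefficients(model)
print("PC mode strengths:", np.round(c, 4))
print("mean ~", c[0])
print("variance ~", sum(math.factorial(k) * c[k]**2 for k in range(1, len(c))))
```

    The mode strengths returned by the sketch play the role described in the abstract: each coefficient links a specific input uncertainty (here, the single Gaussian germ) to its contribution to the output statistics.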

  6. A method for approximating acoustic-field-amplitude uncertainty caused by environmental uncertainties.

    PubMed

    James, Kevin R; Dowling, David R

    2008-09-01

    In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations and up to 10^N when N>1.
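    One ingredient of the shift-based approximation above is locating the optimum spatial offset between field calculations performed at two values of an uncertain parameter. The sketch below illustrates only that single step on synthetic stand-in curves; it is not the full construction of PDF(A).

```python
import numpy as np

def best_shift(field_a, field_b, max_shift=50):
    """Return the integer range-bin shift of field_b that best matches field_a
    in a least-squares sense (one ingredient of the shift-based approximation)."""
    shifts = list(range(-max_shift, max_shift + 1))
    lo, hi = max_shift, len(field_a) - max_shift   # compare interior only (avoid wrap-around)
    errors = [np.mean((field_a[lo:hi] - np.roll(field_b, s)[lo:hi])**2) for s in shifts]
    return shifts[int(np.argmin(errors))]

# synthetic stand-ins for field amplitude versus range at two parameter values
r = np.linspace(1.0, 8.0, 1000)                                   # range, km
field_nominal = np.sin(2 * np.pi * r / 0.7) * np.exp(-0.1 * r)
field_perturbed = np.sin(2 * np.pi * (r - 0.05) / 0.7) * np.exp(-0.1 * r)

print("optimum shift (range bins):", best_shift(field_nominal, field_perturbed))
```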

  7. Dynamical Realism and Uncertainty Propagation

    NASA Astrophysics Data System (ADS)

    Park, Inkwan

    In recent years, Space Situational Awareness (SSA) has become increasingly important as the number of tracked Resident Space Objects (RSOs) continues to grow. One of the most significant technical discussions in SSA is how to propagate state uncertainty in a way consistent with the highly nonlinear dynamical environment. In order to keep pace with this situation, various methods have been proposed to propagate uncertainty accurately by capturing the nonlinearity of the dynamical system. We notice that all of these methods focus on describing the dynamical system as precisely as possible from a mathematical perspective. This study proposes a new perspective based on understanding the dynamics of the evolution of uncertainty itself. We expect that deeper insight into the dynamical system could open the possibility of developing a new method for accurate uncertainty propagation. These considerations lead naturally to the goals of the study. First, we investigate the most dominant factors in the evolution of uncertainty in order to realize the dynamical system more rigorously. Second, we aim at developing a new method, based on the first investigation, that enables efficient orbit uncertainty propagation while maintaining accuracy. We eliminate the short-period variations from the dynamical system, called a simplified dynamical system (SDS), to investigate the most dominant factors. In order to achieve this goal, the Lie transformation method is introduced, since this transformation can define the solutions for each variation separately. From the first investigation, we conclude that the secular variations, including the long-period variations, are dominant for the propagation of uncertainty, i.e., short-period variations are negligible. Then, we develop the new method by combining the SDS and the higher-order nonlinear expansion method, called state transition tensors (STTs). The new method retains advantages of the SDS and the STTs and propagates

  8. Thermodynamic and relativistic uncertainty relations

    NASA Astrophysics Data System (ADS)

    Artamonov, A. A.; Plotnikov, E. M.

    2017-01-01

    Thermodynamic uncertainty relation (UR) was verified experimentally. The experiments have shown the validity of the quantum analogue of the zeroth law of stochastic thermodynamics in the form of the saturated Schrödinger UR. We have also proposed a new type of UR for the relativistic mechanics. These relations allow us to consider macroscopic phenomena within the limits of the ratio of the uncertainty relations for different physical quantities.

  9. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  10. Measurement uncertainty analysis techniques applied to PV performance measurements

    SciTech Connect

    Wells, C

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. Here, valid data are defined as data having known and documented paths of origin (including theory), measurements, traceability to measurement standards, computations, and uncertainty analysis of results.
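    The mechanics of combining elemental uncertainties into an interval about a result can be shown with a short, generic sketch in the spirit of standard practice; the categories and numbers below are invented placeholders, not PV test data.

```python
import numpy as np

# Hypothetical elemental standard uncertainties for a PV power measurement (all in %)
type_a = {"repeatability": 0.30}                      # from repeated readings
type_b = {"irradiance_reference_cell": 0.80,          # from calibration certificates,
          "temperature_correction": 0.25,             # data sheets, judgment, etc.
          "data_acquisition": 0.10}

# Root-sum-square combination of independent elemental standard uncertainties
u_combined = np.sqrt(sum(u**2 for u in {**type_a, **type_b}.values()))
U_expanded = 2.0 * u_combined   # coverage factor k = 2 (~95% interval)

measured_power = 100.0  # W, placeholder result
print(f"combined standard uncertainty: {u_combined:.2f} %")
print(f"result: {measured_power:.1f} W +/- {U_expanded:.2f} % (k = 2)")
```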

  11. Wildfire Decision Making Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Thompson, M.

    2013-12-01

    Decisions relating to wildfire management are subject to multiple sources of uncertainty, and are made by a broad range of individuals, across a multitude of environmental and socioeconomic contexts. In this presentation I will review progress towards identification and characterization of uncertainties and how this information can support wildfire decision-making. First, I will review a typology of uncertainties common to wildfire management, highlighting some of the more salient sources of uncertainty and how they present challenges to assessing wildfire risk. This discussion will cover the expanding role of burn probability modeling, approaches for characterizing fire effects, and the role of multi-criteria decision analysis, and will provide illustrative examples of integrated wildfire risk assessment across a variety of planning scales. Second, I will describe a related uncertainty typology that focuses on the human dimensions of wildfire management, specifically addressing how social, psychological, and institutional factors may impair cost-effective risk mitigation. This discussion will encompass decision processes before, during, and after fire events, with a specific focus on active management of complex wildfire incidents. An improved ability to characterize uncertainties faced in wildfire management could lead to improved delivery of decision support, targeted communication strategies, and ultimately to improved wildfire management outcomes.

  12. Formal modeling of a system of chemical reactions under uncertainty.

    PubMed

    Ghosh, Krishnendu; Schlipf, John

    2014-10-01

    We describe a novel formalism representing a system of chemical reactions, with imprecise rates of reactions and concentrations of chemicals, and describe a model reduction method, pruning, based on the chemical properties. We present two algorithms, midpoint approximation and interval approximation, for construction of efficient model abstractions with uncertainty in data. We evaluate computational feasibility by posing queries in computation tree logic (CTL) on a prototype of extracellular-signal-regulated kinase (ERK) pathway.

  13. Principals' Sense of Uncertainty and Organizational Learning Mechanisms

    ERIC Educational Resources Information Center

    Schechter, Chen; Asher, Neomi

    2012-01-01

    Purpose: The purpose of the present study is to examine the effect of principals' sense of uncertainty on organizational learning mechanisms (OLMs) in schools. Design/methodology/approach: Data were collected from 130 school principals (90 women and 40 men) from both Tel-Aviv and Central districts in Israel. After computing the correlation between…

  14. Propagation of Uncertainty in System Parameters of a LWR Model by Sampling MCNPX Calculations - Burnup Analysis

    NASA Astrophysics Data System (ADS)

    Campolina, Daniel de A. M.; Lima, Claubia P. B.; Veloso, Maria Auxiliadora F.

    2014-06-01

    For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for best-estimate calculations, which have been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in input parameters of the reactor considered included geometry dimensions and densities. The results showed the capability of the sampling-based method for burnup calculations when the sample size is optimized and many parameter uncertainties are investigated together in the same input.
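    The Wilks sample-size optimization mentioned above can be reproduced in a few lines. The sketch below finds the smallest number of code runs for which the extreme order statistics form a two-sided 95%/95% tolerance interval (first-order Wilks); it is a generic calculation, not tied to the MCNPX model of the paper.

```python
def wilks_two_sided(gamma=0.95, beta=0.95):
    """Smallest n such that [min, max] of n runs covers at least a fraction
    gamma of the output population with confidence beta (first-order Wilks)."""
    n = 2
    while True:
        coverage_conf = 1.0 - n * gamma**(n - 1) + (n - 1) * gamma**n
        if coverage_conf >= beta:
            return n
        n += 1

def wilks_one_sided(gamma=0.95, beta=0.95):
    """Smallest n such that the sample maximum bounds the gamma-quantile
    of the output with confidence beta."""
    n = 1
    while 1.0 - gamma**n < beta:
        n += 1
    return n

print("one-sided 95/95 sample size:", wilks_one_sided())   # 59
print("two-sided 95/95 sample size:", wilks_two_sided())   # 93
```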

  15. A comparative study of new non-linear uncertainty propagation methods for space surveillance

    NASA Astrophysics Data System (ADS)

    Horwood, Joshua T.; Aristoff, Jeffrey M.; Singh, Navraj; Poore, Aubrey B.

    2014-06-01

    We propose a unified testing framework for assessing uncertainty realism during non-linear uncertainty propagation under the perturbed two-body problem of celestial mechanics, with an accompanying suite of metrics and benchmark test cases on which to validate different methods. We subsequently apply the testing framework to different combinations of uncertainty propagation techniques and coordinate systems for representing the uncertainty. In particular, we recommend the use of a newly-derived system of orbital element coordinates that mitigate the non-linearities in uncertainty propagation and the recently-developed Gauss von Mises filter which, when used in tandem, provide uncertainty realism over much longer periods of time compared to Gaussian representations of uncertainty in Cartesian spaces, at roughly the same computational cost.

  16. Position-momentum uncertainty relations based on moments of arbitrary order

    SciTech Connect

    Zozor, Steeve; Portesi, Mariela; Sanchez-Moreno, Pablo; Dehesa, Jesus S.

    2011-05-15

    The position-momentum uncertainty-like inequality based on moments of arbitrary order for d-dimensional quantum systems, which is a generalization of the celebrated Heisenberg formulation of the uncertainty principle, is improved here by use of the Renyi-entropy-based uncertainty relation. The accuracy of the resulting lower bound is physico-computationally analyzed for the two main prototypes in d-dimensional physics: the hydrogenic and oscillator-like systems.

  17. On the formulation of a minimal uncertainty model for robust control with structured uncertainty

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert

    1991-01-01

    In the design and analysis of robust control systems for uncertain plants, representing the system transfer matrix in the form of what has come to be termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents a transfer function matrix M(s) of the nominal closed loop system, and the delta represents an uncertainty matrix acting on M(s). The nominal closed loop system M(s) results from closing the feedback control system, K(s), around a nominal plant interconnection structure P(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or unstructured uncertainty from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, but for real parameter variations delta is a diagonal matrix of real elements. Conceptually, the M-delta structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the currently available literature addresses computational methods for obtaining this structure, and none of this literature addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty, where the term minimal refers to the dimension of the delta matrix. Since having a minimally dimensioned delta matrix would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta would be useful. Hence, a method of obtaining the interconnection system P(s) is required. A generalized procedure for obtaining a minimal P-delta structure for systems with real parameter variations is presented. Using this model, the minimal M-delta model can then be easily obtained by closing the feedback loop. The procedure involves representing the system in a cascade-form state-space realization, determining the minimal uncertainty matrix

  18. RUMINATIONS ON NDA MEASUREMENT UNCERTAINTY COMPARED TO DA UNCERTAINTY

    SciTech Connect

    Salaymeh, S.; Ashley, W.; Jeffcoat, R.

    2010-06-17

    It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support: criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete, or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail suitable for the subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non-routine situations. This is because data interpretation often involves complex algorithms and logic combined in a highly intertwined way. The methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to developing an understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice and embracing the continuous-improvement philosophy, a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in nondestructive assay of nuclear materials. This paper outlines the needs analysis, objectives and on-going development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community, we hope this article will encourage dialog and sharing of best practice and furthermore motivate developers to revisit the treatment of measurement uncertainty.

  19. Intolerance of uncertainty in emotional disorders: What uncertainties remain?

    PubMed

    Shihata, Sarah; McEvoy, Peter M; Mullan, Barbara Ann; Carleton, R Nicholas

    2016-06-01

    The current paper presents a future research agenda for intolerance of uncertainty (IU), which is a transdiagnostic risk and maintaining factor for emotional disorders. In light of the accumulating interest and promising research on IU, it is timely to emphasize the theoretical and therapeutic significance of IU, as well as to highlight what remains unknown about IU across areas such as development, assessment, behavior, threat and risk, and relationships to cognitive vulnerability factors and emotional disorders. The present paper was designed to provide a synthesis of what is known and unknown about IU, and, in doing so, proposes broad and novel directions for future research to address the remaining uncertainties in the literature.

  20. Extending BEAMS to incorporate correlated systematic uncertainties

    SciTech Connect

    Knights, Michelle; Bassett, Bruce A.; Varughese, Melvin; Newling, James; Hlozek, Renée; Kunz, Martin; Smith, Mat E-mail: bruce@saao.ac.za E-mail: renee.hlozek@gmail.com E-mail: matsmith2@gmail.com

    2013-01-01

    New supernova surveys such as the Dark Energy Survey, Pan-STARRS and the LSST will produce an unprecedented number of photometric supernova candidates, most with no spectroscopic data. Avoiding biases in cosmological parameters due to the resulting inevitable contamination from non-Ia supernovae can be achieved with the BEAMS formalism, allowing for fully photometric supernova cosmology studies. Here we extend BEAMS to deal with the case in which the supernovae are correlated by systematic uncertainties. The analytical form of the full BEAMS posterior requires evaluating 2^N terms, where N is the number of supernova candidates. This 'exponential catastrophe' is computationally unfeasible even for N of order 100. We circumvent the exponential catastrophe by marginalising numerically instead of analytically over the possible supernova types: we augment the cosmological parameters with nuisance parameters describing the covariance matrix and the types of all the supernovae, τ_i, that we include in our MCMC analysis. We show that this method deals well even with large, unknown systematic uncertainties without a major increase in computational time, whereas ignoring the correlations can lead to significant biases and incorrect credible contours. We then compare the numerical marginalisation technique with a perturbative expansion of the posterior based on the insight that future surveys will have exquisite light curves and hence the probability that a given candidate is a Type Ia will be close to unity or zero, for most objects. Although this perturbative approach changes computation of the posterior from a 2^N problem into an N^2 or N^3 one, we show that it leads to biases in general through a small number of misclassifications, implying that numerical marginalisation is superior.
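    The starting point that BEAMS extends, a per-object mixture of a Type-Ia and a non-Ia likelihood weighted by the photometric type probability, can be conveyed with a toy log-likelihood for uncorrelated candidates. The sketch below is schematic only (populations, offsets and scatters are invented); the paper's contribution is the treatment of correlated systematics on top of this structure.

```python
import numpy as np

def beams_loglike(mu_model, mu_obs, sigma_obs, p_ia, sigma_nonia_extra=2.0):
    """Schematic BEAMS log-likelihood for *uncorrelated* candidates: each object
    contributes a mixture of a Type-Ia term and a broader non-Ia term,
    weighted by its photometric type probability p_ia."""
    def gauss(x, mu, sig):
        return np.exp(-0.5 * ((x - mu) / sig)**2) / (np.sqrt(2.0 * np.pi) * sig)

    like_ia = gauss(mu_obs, mu_model, sigma_obs)
    like_nonia = gauss(mu_obs, mu_model + 1.0,                 # toy non-Ia population offset
                       np.hypot(sigma_obs, sigma_nonia_extra))
    return np.sum(np.log(p_ia * like_ia + (1.0 - p_ia) * like_nonia))

# toy data: distance moduli for 200 candidates with known type probabilities
rng = np.random.default_rng(1)
n = 200
mu_true = 40.0
p_ia = rng.uniform(0.5, 1.0, n)
is_ia = rng.random(n) < p_ia
sigma_obs = np.full(n, 0.15)
mu_obs = np.where(is_ia, rng.normal(mu_true, 0.15, n), rng.normal(mu_true + 1.0, 2.0, n))

print("log-likelihood at truth:", beams_loglike(mu_true, mu_obs, sigma_obs, p_ia))
```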

  1. Entropic uncertainty and measurement reversibility

    NASA Astrophysics Data System (ADS)

    Berta, Mario; Wehner, Stephanie; Wilde, Mark M.

    2015-11-01

    The entropic uncertainty relation with quantum side information (EUR-QSI) from [Berta et al., Nat. Phys. 6, 659 (2010)] is a unifying principle relating two distinctive features of quantum mechanics: quantum uncertainty due to measurement incompatibility, and entanglement. In these relations, quantum uncertainty takes the form of preparation uncertainty where one of two incompatible measurements is applied. In particular, the "uncertainty witness" lower bound in the EUR-QSI is not a function of a post-measurement state. An insightful proof of the EUR-QSI from [Coles et al., Phys. Rev. Lett. 108, 210405 (2012)] makes use of a fundamental mathematical consequence of the postulates of quantum mechanics known as the non-increase of quantum relative entropy under quantum channels. Here, we exploit this perspective to establish a tightening of the EUR-QSI which adds a new state-dependent term in the lower bound, related to how well one can reverse the action of a quantum measurement. As such, this new term is a direct function of the post-measurement state and can be thought of as quantifying how much disturbance a given measurement causes. Our result thus quantitatively unifies this feature of quantum mechanics with the others mentioned above. We have experimentally tested our theoretical predictions on the IBM Quantum Experience and find reasonable agreement between our predictions and experimental outcomes.
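    For reference, the bipartite EUR-QSI discussed in this record is commonly written as below (our transcription of the relation as usually stated in the cited literature, not a formula quoted from this paper); the result described above adds a state-dependent measurement-reversibility term to the right-hand side.

```latex
% EUR-QSI as commonly stated: X and Z are outcomes of two incompatible
% measurements on system A, B is the quantum memory, and
% c = max_{x,z} |<x|z>|^2 quantifies the overlap of the measurement bases.
H(X|B) + H(Z|B) \;\ge\; \log_2\!\frac{1}{c} \;+\; H(A|B)
```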

  2. Entropic uncertainty and measurement reversibility

    NASA Astrophysics Data System (ADS)

    Berta, Mario; Wehner, Stephanie; Wilde, Mark M.

    2016-07-01

    The entropic uncertainty relation with quantum side information (EUR-QSI) from (Berta et al 2010 Nat. Phys. 6 659) is a unifying principle relating two distinctive features of quantum mechanics: quantum uncertainty due to measurement incompatibility, and entanglement. In these relations, quantum uncertainty takes the form of preparation uncertainty where one of two incompatible measurements is applied. In particular, the ‘uncertainty witness’ lower bound in the EUR-QSI is not a function of a post-measurement state. An insightful proof of the EUR-QSI from (Coles et al 2012 Phys. Rev. Lett. 108 210405) makes use of a fundamental mathematical consequence of the postulates of quantum mechanics known as the non-increase of quantum relative entropy under quantum channels. Here, we exploit this perspective to establish a tightening of the EUR-QSI which adds a new state-dependent term in the lower bound, related to how well one can reverse the action of a quantum measurement. As such, this new term is a direct function of the post-measurement state and can be thought of as quantifying how much disturbance a given measurement causes. Our result thus quantitatively unifies this feature of quantum mechanics with the others mentioned above. We have experimentally tested our theoretical predictions on the IBM quantum experience and find reasonable agreement between our predictions and experimental outcomes.

  3. Planning for robust reserve networks using uncertainty analysis

    USGS Publications Warehouse

    Moilanen, A.; Runge, M.C.; Elith, J.; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.

    2006-01-01

    Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence/absence in sites, or on species-specific distributions of model predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence/absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer-programming and stochastic global search.

  4. Uncertainty Estimation Improves Energy Measurement and Verification Procedures

    SciTech Connect

    Walter, Travis; Price, Phillip N.; Sohn, Michael D.

    2014-05-14

    Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy used to how much energy would have been used in absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties with deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use using short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates which can provide actionable decision making information for investing in energy conservation measures.
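    As a hedged illustration of the cross-validation idea (not the authors' regression model), the sketch below fits a simple least-squares baseline to synthetic interval meter data and uses k-fold cross-validated residuals to attach an uncertainty band to baseline predictions.

```python
import numpy as np

def kfold_prediction_std(X, y, k=5, seed=0):
    """Estimate the out-of-sample prediction standard deviation of an
    ordinary-least-squares baseline model via k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    residuals = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        residuals.append(y[test] - X[test] @ beta)
    return np.std(np.concatenate(residuals), ddof=1)

# synthetic hourly data: energy use driven by outdoor temperature plus noise
rng = np.random.default_rng(2)
temp = rng.uniform(0, 35, 2000)
energy = 50 + 1.8 * temp + rng.normal(0, 5, 2000)
X = np.column_stack([np.ones_like(temp), temp])

sigma_pred = kfold_prediction_std(X, energy, k=5)
print(f"cross-validated baseline uncertainty: +/- {1.96 * sigma_pred:.1f} (95% band)")
```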

  5. Users manual for the FORSS sensitivity and uncertainty analysis code system

    SciTech Connect

    Lucius, J.L.; Weisbin, C.R.; Marable, J.H.; Drischler, J.D.; Wright, R.Q.; White, J.E.

    1981-01-01

    FORSS is a code system used to study relationships between nuclear reaction cross sections, integral experiments, reactor performance parameter predictions and associated uncertainties. This report describes the computing environment and the modules currently used to implement FORSS Sensitivity and Uncertainty Methodology.

  6. Uncertainty and Sensitivity Analyses Plan

    SciTech Connect

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.

  7. Sub-Heisenberg phase uncertainties

    NASA Astrophysics Data System (ADS)

    Pezzé, Luca

    2013-12-01

    Phase shift estimation with uncertainty below the Heisenberg limit, Δϕ_HL ∝ 1/N̄_T, where N̄_T is the total average number of particles employed, is a mirage of linear quantum interferometry. Recently, Rivas and Luis [New J. Phys. 14, 093052 (2012)] proposed a scheme to achieve a phase uncertainty Δϕ ∝ 1/N̄_T^k, with k an arbitrary exponent. This sparked an intense debate in the literature which, ultimately, does not exclude the possibility to overcome Δϕ_HL at specific phase values. Our numerical analysis of the Rivas and Luis proposal shows that sub-Heisenberg uncertainties are obtained only when the estimator is strongly biased. No violation of the Heisenberg limit is found after bias correction or when using a bias-free Bayesian analysis.

  8. Climate negotiations under scientific uncertainty

    PubMed Central

    Barrett, Scott; Dannenberg, Astrid

    2012-01-01

    How does uncertainty about “dangerous” climate change affect the prospects for international cooperation? Climate negotiations usually are depicted as a prisoners’ dilemma game; collectively, countries are better off reducing their emissions, but self-interest impels them to keep on emitting. We provide experimental evidence, grounded in an analytical framework, showing that the fear of crossing a dangerous threshold can turn climate negotiations into a coordination game, making collective action to avoid a dangerous threshold virtually assured. These results are robust to uncertainty about the impact of crossing a threshold, but uncertainty about the location of the threshold turns the game back into a prisoners’ dilemma, causing cooperation to collapse. Our research explains the paradox of why countries would agree to a collective goal, aimed at reducing the risk of catastrophe, but act as if they were blind to this risk. PMID:23045685

  9. Uncertainty Reduction using Bayesian Inference and Sensitivity Analysis: A Sequential Approach to the NASA Langley Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar

    2016-01-01

    This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then, Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
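    The variance-based global sensitivity step can be illustrated with a crude binning estimator of first-order Sobol indices; the test function below is the standard Ishigami function, a stand-in rather than the GTM subsystem models of the challenge.

```python
import numpy as np

def first_order_sobol(sample_fn, model, n=100000, bins=40, seed=0):
    """Crude first-order Sobol indices S_i = Var(E[Y | X_i]) / Var(Y),
    estimated by binning each input and averaging the output within bins."""
    rng = np.random.default_rng(seed)
    X = sample_fn(rng, n)
    Y = model(X)
    var_y = np.var(Y)
    S = []
    for i in range(X.shape[1]):
        edges = np.quantile(X[:, i], np.linspace(0, 1, bins + 1))
        which = np.clip(np.digitize(X[:, i], edges[1:-1]), 0, bins - 1)
        cond_means = np.array([Y[which == b].mean() for b in range(bins)])
        S.append(np.var(cond_means) / var_y)
    return np.array(S)

def ishigami(X, a=7.0, b=0.1):
    """Standard sensitivity-analysis test function with uniform inputs on [-pi, pi]."""
    return np.sin(X[:, 0]) + a * np.sin(X[:, 1])**2 + b * X[:, 2]**4 * np.sin(X[:, 0])

sample = lambda rng, n: rng.uniform(-np.pi, np.pi, size=(n, 3))
print("first-order Sobol indices (~0.31, 0.44, 0.00):",
      np.round(first_order_sobol(sample, ishigami), 3))
```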

  10. Survey and Evaluate Uncertainty Quantification Methodologies

    SciTech Connect

    Lin, Guang; Engel, David W.; Eslinger, Paul W.

    2012-02-01

    The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that will develop and deploy state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models with uncertainty quantification, optimization, risk analysis and decision making capabilities. The CCSI Toolset will incorporate commercial and open-source software currently in use by industry and will also develop new software tools as necessary to fill technology gaps identified during execution of the project. The CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. The goal of CCSI is to deliver a toolset that can simulate the scale-up of a broad set of new carbon capture technologies from laboratory scale to full commercial scale. To provide a framework around which the toolset can be developed and demonstrated, we will focus on three Industrial Challenge Problems (ICPs) related to carbon capture technologies relevant to U.S. pulverized coal (PC) power plants. Post combustion capture by solid sorbents is the technology focus of the initial ICP (referred to as ICP A). The goal of the uncertainty quantification (UQ) task (Task 6) is to provide a set of capabilities to the user community for the quantification of uncertainties associated with the carbon

  11. Numerical Continuation Methods for Intrusive Uncertainty Quantification Studies

    SciTech Connect

    Safta, Cosmin; Najm, Habib N.; Phipps, Eric Todd

    2014-09-01

    Rigorous modeling of engineering systems relies on efficient propagation of uncertainty from input parameters to model outputs. In recent years, there has been substantial development of probabilistic polynomial chaos (PC) Uncertainty Quantification (UQ) methods, enabling studies in expensive computational models. One approach, termed ”intrusive”, involving reformulation of the governing equations, has been found to have superior computational performance compared to non-intrusive sampling-based methods in relevant large-scale problems, particularly in the context of emerging architectures. However, the utility of intrusive methods has been severely limited due to detrimental numerical instabilities associated with strong nonlinear physics. Previous methods for stabilizing these constructions tend to add unacceptably high computational costs, particularly in problems with many uncertain parameters. In order to address these challenges, we propose to adapt and improve numerical continuation methods for the robust time integration of intrusive PC system dynamics. We propose adaptive methods, starting with a small uncertainty for which the model has stable behavior and gradually moving to larger uncertainty where the instabilities are rampant, in a manner that provides a suitable solution.

  12. Linear Programming Problems for Generalized Uncertainty

    ERIC Educational Resources Information Center

    Thipwiwatpotjana, Phantipa

    2010-01-01

    Uncertainty occurs when there is more than one realization that can represent an information. This dissertation concerns merely discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well known model that can handle…

  13. Uncertainty quantification in lattice QCD calculations for nuclear physics

    SciTech Connect

    Beane, Silas R.; Detmold, William; Orginos, Kostas; Savage, Martin J.

    2015-02-05

    The numerical technique of Lattice QCD holds the promise of connecting the nuclear forces, nuclei, the spectrum and structure of hadrons, and the properties of matter under extreme conditions with the underlying theory of the strong interactions, quantum chromodynamics. A distinguishing, and thus far unique, feature of this formulation is that all of the associated uncertainties, both statistical and systematic can, in principle, be systematically reduced to any desired precision with sufficient computational and human resources. As a result, we review the sources of uncertainty inherent in Lattice QCD calculations for nuclear physics, and discuss how each is quantified in current efforts.

  14. Probabilistic simulation of uncertainties in composite uniaxial strengths

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Stock, T. A.

    1990-01-01

    Probabilistic composite micromechanics methods are developed that simulate uncertainties in unidirectional fiber composite strengths. These methods are in the form of computational procedures using composite mechanics with Monte Carlo simulation. The variables for which uncertainties are accounted include constituent strengths and their respective scatter. A graphite/epoxy unidirectional composite (ply) is studied to illustrate the procedure and its effectiveness to formally estimate the probable scatter in the composite uniaxial strengths. The results show that ply longitudinal tensile and compressive, transverse compressive and intralaminar shear strengths are not sensitive to single fiber anomalies (breaks, interfacial disbonds, matrix microcracks); however, the ply transverse tensile strength is.
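    The Monte Carlo flavour of the procedure can be conveyed with a deliberately simplified sketch in which a rule-of-mixtures expression stands in for the composite micromechanics model; the constituent strength statistics are invented placeholders, not graphite/epoxy data.

```python
import numpy as np

def longitudinal_strength(fiber_strength, matrix_stress, vf):
    """Simplified rule-of-mixtures stand-in for ply longitudinal tensile strength."""
    return vf * fiber_strength + (1.0 - vf) * matrix_stress

rng = np.random.default_rng(0)
n = 100000
# placeholder constituent scatter (MPa) and fiber volume fraction scatter
fiber_strength = rng.normal(3500.0, 350.0, n)   # fiber tensile strength
matrix_stress = rng.normal(60.0, 9.0, n)        # matrix stress at fiber failure strain
vf = rng.normal(0.60, 0.02, n)                  # fiber volume fraction

S11 = longitudinal_strength(fiber_strength, matrix_stress, vf)
print(f"mean = {S11.mean():.0f} MPa, c.o.v. = {100 * S11.std() / S11.mean():.1f} %")
print("5th / 95th percentiles (MPa):", np.percentile(S11, [5, 95]).round(0))
```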

  15. Statistics, Uncertainty, and Transmitted Variation

    SciTech Connect

    Wendelberger, Joanne Roth

    2014-11-05

    The field of Statistics provides methods for modeling and understanding data and making decisions in the presence of uncertainty. When examining response functions, variation present in the input variables will be transmitted via the response function to the output variables. This phenomenon can potentially have significant impacts on the uncertainty associated with results from subsequent analysis. This presentation will examine the concept of transmitted variation, its impact on designed experiments, and a method for identifying and estimating sources of transmitted variation in certain settings.

  16. Uncertainties in offsite consequence analysis

    SciTech Connect

    Young, M.L.; Harper, F.T.; Lui, C.H.

    1996-03-01

    The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the U.S. Nuclear Regulatory Commission and the European Commission began co-sponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables using a formal expert judgment elicitation and evaluation process. This paper focuses on the methods used in and results of this on-going joint effort.

  17. Awe, uncertainty, and agency detection.

    PubMed

    Valdesolo, Piercarlo; Graham, Jesse

    2014-01-01

    Across five studies, we found that awe increases both supernatural belief (Studies 1, 2, and 5) and intentional-pattern perception (Studies 3 and 4)-two phenomena that have been linked to agency detection, or the tendency to interpret events as the consequence of intentional and purpose-driven agents. Effects were both directly and conceptually replicated, and mediational analyses revealed that these effects were driven by the influence of awe on tolerance for uncertainty. Experiences of awe decreased tolerance for uncertainty, which, in turn, increased the tendency to believe in nonhuman agents and to perceive human agency in random events.

  18. Systemic change increases model projection uncertainty

    NASA Astrophysics Data System (ADS)

    Verstegen, Judith; Karssenberg, Derek; van der Hilst, Floor; Faaij, André

    2014-05-01

    the neighbourhood doubled, while the influence of slope and potential yield decreased by 75% and 25% respectively. Allowing these systemic changes to occur in our CA in the future (up to 2022) resulted in an increase in model projection uncertainty by a factor two compared to the assumption of a stationary system. This means that the assumption of a constant model structure is not adequate and largely underestimates uncertainty in the projection. References: Verstegen, J.A., Karssenberg, D., van der Hilst, F., Faaij, A.P.C., 2014. Identifying a land use change cellular automaton by Bayesian data assimilation. Environmental Modelling & Software 53, 121-136. Verstegen, J.A., Karssenberg, D., van der Hilst, F., Faaij, A.P.C., 2012. Spatio-Temporal Uncertainty in Spatial Decision Support Systems: a Case Study of Changing Land Availability for Bioenergy Crops in Mozambique. Computers, Environment and Urban Systems 36, 30-42. Wald, A., Wolfowitz, J., 1940. On a test whether two samples are from the same population. The Annals of Mathematical Statistics 11, 147-162.

  19. SUNPLIN: Simulation with Uncertainty for Phylogenetic Investigations

    PubMed Central

    2013-01-01

    Background Phylogenetic comparative analyses usually rely on a single consensus phylogenetic tree in order to study evolutionary processes. However, most phylogenetic trees are incomplete with regard to species sampling, which may critically compromise analyses. Some approaches have been proposed to integrate non-molecular phylogenetic information into incomplete molecular phylogenies. An expanded tree approach consists of adding missing species to random locations within their clade. The information contained in the topology of the resulting expanded trees can be captured by the pairwise phylogenetic distance between species and stored in a matrix for further statistical analysis. Thus, the random expansion and processing of multiple phylogenetic trees can be used to estimate the phylogenetic uncertainty through a simulation procedure. Because of the computational burden required, unless this procedure is efficiently implemented, the analyses are of limited applicability. Results In this paper, we present efficient algorithms and implementations for randomly expanding and processing phylogenetic trees so that simulations involved in comparative phylogenetic analysis with uncertainty can be conducted in a reasonable time. We propose algorithms for both randomly expanding trees and calculating distance matrices. We made available the source code, which was written in the C++ language. The code may be used as a standalone program or as a shared object in the R system. The software can also be used as a web service through the link: http://purl.oclc.org/NET/sunplin/. Conclusion We compare our implementations to similar solutions and show that significant performance gains can be obtained. Our results open up the possibility of accounting for phylogenetic uncertainty in evolutionary and ecological analyses of large datasets. PMID:24229408

  20. I Am Sure There May Be a Planet There: Student articulation of uncertainty in argumentation tasks

    NASA Astrophysics Data System (ADS)

    Buck, Zoë E.; Lee, Hee-Sun; Flores, Joanna

    2014-09-01

    We investigated how students articulate uncertainty when they are engaged in structured scientific argumentation tasks where they generate, examine, and interpret data to determine the existence of exoplanets. In this study, 302 high school students completed 4 structured scientific arguments that followed a series of computer-model-based curriculum module activities simulating the radial velocity and/or the transit method. Structured scientific argumentation tasks involved claim, explanation, uncertainty rating, and uncertainty rationale. We explored (1) how students are articulating uncertainty within the various elements of the task and (2) the relationship between the way the task is presented and the way students are articulating uncertainty. We found that (1) while the majority of students did not express uncertainty in either explanation or uncertainty rationale, students who did express uncertainty in their explanations did so scientifically without being prompted explicitly, (2) students' uncertainty ratings and rationales revealed a mix of their personal confidence and uncertainty related to science, and (3) if a task presented noisy data, students were less likely to express uncertainty in their explanations.

  1. Sonic Boom Pressure Signature Uncertainty Calculation and Propagation to Ground Noise

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Bretl, Katherine N.; Walker, Eric L.; Pinier, Jeremy T.

    2015-01-01

    The objective of this study was to outline an approach for the quantification of uncertainty in sonic boom measurements and to investigate the effect of various near-field uncertainty representation approaches on ground noise predictions. These approaches included a symmetric versus asymmetric uncertainty band representation and a dispersion technique based on a partial sum Fourier series that allows for the inclusion of random error sources in the uncertainty. The near-field uncertainty was propagated to the ground level, along with additional uncertainty in the propagation modeling. Estimates of perceived loudness were obtained for the various types of uncertainty representation in the near-field. Analyses were performed on three configurations of interest to the sonic boom community: the SEEB-ALR, the 69° Delta Wing, and the LM 1021-01. Results showed that representation of the near-field uncertainty plays a key role in ground noise predictions. Using a Fourier series based dispersion approach can double the amount of uncertainty in the ground noise compared to a pure bias representation. Compared to previous computational fluid dynamics results, uncertainty in ground noise predictions was greater when considering the near-field experimental uncertainty.
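    One way to realize a partial-sum Fourier dispersion of a near-field uncertainty band (random-phase harmonics rescaled to stay inside a prescribed band) is sketched below on a synthetic signature; it illustrates the general idea rather than the study's exact procedure.

```python
import numpy as np

def dispersed_signatures(x, band_halfwidth, n_modes=8, n_samples=200, seed=0):
    """Generate perturbations of a near-field signature as partial-sum Fourier
    series with random coefficients and phases, rescaled so each realization
    stays within the +/- band_halfwidth uncertainty band."""
    rng = np.random.default_rng(seed)
    L = x[-1] - x[0]
    out = np.zeros((n_samples, x.size))
    for i in range(n_samples):
        a = rng.standard_normal(n_modes)
        phi = rng.uniform(0.0, 2.0 * np.pi, n_modes)
        p = sum(a[k] * np.sin(2.0 * np.pi * (k + 1) * (x - x[0]) / L + phi[k])
                for k in range(n_modes))
        p *= band_halfwidth * rng.uniform(0.0, 1.0) / np.max(np.abs(p))
        out[i] = p
    return out

# synthetic axial coordinate and a constant near-field uncertainty band (dp/p)
x = np.linspace(0.0, 1.0, 500)
samples = dispersed_signatures(x, band_halfwidth=0.002)
print("max |perturbation| over all samples:", np.abs(samples).max(), "<= 0.002")
```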

  2. Probabilistic simulation of uncertainties in thermal structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Shiao, Michael

    1990-01-01

    Development of probabilistic structural analysis methods for hot structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) blade temperature, pressure, and torque of the Space Shuttle Main Engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; (3) evaluation of the failure probability; (4) reliability and risk-cost assessment, and (5) an outline of an emerging approach for eventual hot structures certification. Collectively, the results demonstrate that the structural durability/reliability of hot structural components can be effectively evaluated in a formal probabilistic framework. In addition, the approach can be readily extended to computationally simulate certification of hot structures for aerospace environments.

  3. Adaptive strategies for materials design using uncertainties

    SciTech Connect

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James; Hogden, John; Lookman, Turab

    2016-01-21

    Here, we compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young’s (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don’t. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.
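    A common example of a selector that uses prediction uncertainty is expected improvement; the sketch below is a generic version of that idea (not necessarily the selector used in this study), with the regressor's predicted means and standard deviations passed in as arrays.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_so_far, xi=0.01):
    """Expected improvement (for maximization) of each candidate material, given
    a regressor's predicted mean `mu` and prediction uncertainty `sigma`."""
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - best_so_far - xi) / sigma
    return (mu - best_so_far - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# toy candidate pool: predicted bulk moduli (GPa) and prediction uncertainties
mu = np.array([210.0, 235.0, 228.0, 240.0])
sigma = np.array([5.0, 25.0, 2.0, 1.0])
best = 238.0   # best bulk modulus measured so far

ei = expected_improvement(mu, sigma, best)
print("EI per candidate:", ei.round(3), "-> next experiment: candidate", int(np.argmax(ei)))
```

    The design choice this highlights is the trade-off the abstract alludes to: a candidate with a mediocre predicted value but large uncertainty can out-rank a confidently predicted but unexceptional one.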

  4. Adaptive strategies for materials design using uncertainties

    DOE PAGES

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James; ...

    2016-01-21

    Here, we compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young’s (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don’t. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.

  5. Adaptive Strategies for Materials Design using Uncertainties

    PubMed Central

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James; Hogden, John; Lookman, Turab

    2016-01-01

    We compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young’s (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don’t. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties. PMID:26792532

  6. Uncertainty in Vs30-based site response

    USGS Publications Warehouse

    Thompson, Eric; Wald, David J.

    2016-01-01

    Methods that account for site response range in complexity from simple linear categorical adjustment factors to sophisticated nonlinear constitutive models. Seismic‐hazard analysis usually relies on ground‐motion prediction equations (GMPEs); within this framework site response is modeled statistically with simplified site parameters that include the time‐averaged shear‐wave velocity to 30 m (VS30) and basin depth parameters. Because VS30 is not known in most locations, it must be interpolated or inferred through secondary information such as geology or topography. In this article, we analyze a subset of stations for which VS30 has been measured to address effects of VS30 proxies on the uncertainty in the ground motions as modeled by GMPEs. The stations we analyze also include multiple recordings, which allow us to compute the repeatable site effects (or empirical amplification factors [EAFs]) from the ground motions. Although all methods exhibit similar bias, the proxy methods only reduce the ground‐motion standard deviations at long periods when compared to GMPEs without a site term, whereas measured VS30 values reduce the standard deviations at all periods. The standard deviation of the ground motions are much lower when the EAFs are used, indicating that future refinements of the site term in GMPEs have the potential to substantially reduce the overall uncertainty in the prediction of ground motions by GMPEs.

  7. Exploring Uncertainty with Projectile Launchers

    ERIC Educational Resources Information Center

    Orzel, Chad; Reich, Gary; Marr, Jonathan

    2012-01-01

    The proper choice of a measurement technique that minimizes systematic and random uncertainty is an essential part of experimental physics. These issues are difficult to teach in the introductory laboratory, though. Because most experiments involve only a single measurement technique, students are often unable to make a clear distinction between…

  8. Uncertainty Can Increase Explanatory Credibility

    DTIC Science & Technology

    2013-08-01

    metacognitive cue to infer their conversational partner’s depth of processing. Keywords: explanations, confidence, uncertainty, collaborative reasoning ... scope, i.e., those that account for only observed phenomena (Khemlani, Sussman, & Oppenheimer, 2011). These preferences show that properties intrinsic ... Fischhoff, & Phillips, 1982; Lindley, 1982; McClelland & Bolger, 1994). Much of the research on subjective confidence addresses how individuals

  9. Spatial uncertainty and ecological models

    SciTech Connect

    Jager, Yetta; King, Anthony Wayne

    2004-07-01

    Applied ecological models that are used to understand and manage natural systems often rely on spatial data as input. Spatial uncertainty in these data can propagate into model predictions. Uncertainty analysis, sensitivity analysis, error analysis, error budget analysis, spatial decision analysis, and hypothesis testing using neutral models are all techniques designed to explore the relationship between variation in model inputs and variation in model predictions. Although similar methods can be used to answer them, these approaches address different questions. These approaches differ in (a) whether the focus is forward or backward (forward to evaluate the magnitude of variation in model predictions propagated or backward to rank input parameters by their influence); (b) whether the question involves model robustness to large variations in spatial pattern or to small deviations from a reference map; and (c) whether processes that generate input uncertainty (for example, cartographic error) are of interest. In this commentary, we propose a taxonomy of approaches, all of which clarify the relationship between spatial uncertainty and the predictions of ecological models. We describe existing techniques and indicate a few areas where research is needed.

  10. Saccade Adaptation and Visual Uncertainty

    PubMed Central

    Souto, David; Gegenfurtner, Karl R.; Schütz, Alexander C.

    2016-01-01

    Visual uncertainty may affect saccade adaptation in two complementary ways. First, an ideal adaptor should take into account the reliability of visual information for determining the amount of correction, predicting that increasing visual uncertainty should decrease adaptation rates. We tested this by comparing observers' direction discrimination and adaptation rates in an intra-saccadic-step paradigm. Second, clearly visible target steps may generate a slower adaptation rate since the error can be attributed to an external cause, instead of an internal change in the visuo-motor mapping that needs to be compensated. We tested this prediction by measuring saccade adaptation to different step sizes. Most remarkably, we found little correlation between estimates of visual uncertainty and adaptation rates and no slower adaptation rates with more visible step sizes. Additionally, we show that for low contrast targets backward steps are perceived as stationary after the saccade, but that adaptation rates are independent of contrast. We suggest that the saccadic system uses different position signals for adapting dysmetric saccades and for generating a trans-saccadic stable visual percept, explaining that saccade adaptation is found to be independent of visual uncertainty. PMID:27252635

  11. The face of uncertainty eats.

    PubMed

    Corwin, Rebecca L W

    2011-09-01

    The idea that foods rich in fat and sugar may be addictive has generated much interest, as well as controversy, among both scientific and lay communities. Recent research indicates that fatty and sugary food in-and-of itself is not addictive. Rather, the food and the context in which it is consumed interact to produce an addiction-like state. One of the contexts that appears to be important is the intermittent opportunity to consume foods rich in fat and sugar in environments where food is plentiful. Animal research indicates that, under these conditions, intake of the fatty sugary food escalates across time and binge-type behavior develops. However, the mechanisms that account for the powerful effect of intermittency on ingestive behavior have only begun to be elucidated. In this review, it is proposed that intermittency stimulates appetitive behavior that is associated with uncertainty regarding what, when, and how much of the highly palatable food to consume. Uncertainty may stimulate consumption of optional fatty and sugary treats due to differential firing of midbrain dopamine neurons, activation of the stress axis, and involvement of orexin signaling. In short, uncertainty may produce an aversive state that bingeing on palatable food can alleviate, however temporarily. "Food addiction" may not be "addiction" to food at all; it may be a response to uncertainty within environments of food abundance.

  12. Evaluating conflation methods using uncertainty modeling

    NASA Astrophysics Data System (ADS)

    Doucette, Peter; Dolloff, John; Canavosio-Zuzelski, Roberto; Lenihan, Michael; Motsko, Dennis

    2013-05-01

    The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline, or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features. The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for conflation methods. Performance results are compiled for DCGIS street centerline features.

  13. Structural Damage Assessment under Uncertainty

    NASA Astrophysics Data System (ADS)

    Lopez Martinez, Israel

    Structural damage assessment has applications in the majority of engineering structures and mechanical systems ranging from aerospace vehicles to manufacturing equipment. The primary goals of any structural damage assessment and health monitoring systems are to ascertain the condition of a structure and to provide an evaluation of changes as a function of time as well as providing an early warning of an unsafe condition. There are many structural health monitoring and assessment techniques developed for research using numerical simulations and scaled structural experiments. However, the transition from research to real-world structures has been rather slow. One major reason for this slow progress is the existence of uncertainty in every step of the damage assessment process. This dissertation research involved the experimental and numerical investigation of uncertainty in vibration-based structural health monitoring and development of robust detection and localization methods. The basic premise of vibration-based structural health monitoring is that changes in structural characteristics, such as stiffness, mass and damping, will affect the global vibration response of the structure. The diagnostic performance of a vibration-based monitoring system is affected by uncertainty sources such as measurement errors, environmental disturbances and parametric modeling uncertainties. To address diagnostic errors due to irreducible uncertainty, a pattern recognition framework for damage detection has been developed to be used for continuous monitoring of structures. The robust damage detection approach developed is based on the ensemble of dimensional reduction algorithms for improved damage-sensitive feature extraction. For damage localization, the determination of an experimental structural model was performed based on output-only modal analysis. An experimental model correlation technique is developed in which the discrepancies between the undamaged and damaged modal data are

  14. Quantifying and reducing uncertainties in cancer therapy

    NASA Astrophysics Data System (ADS)

    Barrett, Harrison H.; Alberts, David S.; Woolfenden, James M.; Liu, Zhonglin; Caucci, Luca; Hoppin, John W.

    2015-03-01

    There are two basic sources of uncertainty in cancer chemotherapy: how much of the therapeutic agent reaches the cancer cells, and how effective it is in reducing or controlling the tumor when it gets there. There is also a concern about adverse effects of the therapy drug. Similarly in external-beam radiation therapy or radionuclide therapy, there are two sources of uncertainty: delivery and efficacy of the radiation absorbed dose, and again there is a concern about radiation damage to normal tissues. The therapy operating characteristic (TOC) curve, developed in the context of radiation therapy, is a plot of the probability of tumor control vs. the probability of normal-tissue complications as the overall radiation dose level is varied, e.g. by varying the beam current in external-beam radiotherapy or the total injected activity in radionuclide therapy. The TOC can be applied to chemotherapy with the administered drug dosage as the variable. The area under a TOC curve (AUTOC) can be used as a figure of merit for therapeutic efficacy, analogous to the area under an ROC curve (AUROC), which is a figure of merit for diagnostic efficacy. In radiation therapy AUTOC can be computed for a single patient by using image data along with radiobiological models for tumor response and adverse side effects. In this paper we discuss the potential of using mathematical models of drug delivery and tumor response with imaging data to estimate AUTOC for chemotherapy, again for a single patient. This approach provides a basis for truly personalized therapy and for rigorously assessing and optimizing the therapy regimen for the particular patient. A key role is played by Emission Computed Tomography (PET or SPECT) of radiolabeled chemotherapy drugs.
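    As a concrete illustration of the AUTOC figure of merit, the sketch below integrates a TOC curve numerically; the logistic dose-response curves are generic placeholders, not the authors' radiobiological or drug-delivery models.

```python
# Sketch: computing the area under a therapy operating characteristic (TOC) curve.
import numpy as np

dose = np.linspace(0.0, 100.0, 400)                      # overall dose / dosage levels

def logistic(d, d50, slope):
    return 1.0 / (1.0 + np.exp(-slope * (d - d50)))

p_control = logistic(dose, d50=50.0, slope=0.15)         # probability of tumor control
p_complication = logistic(dose, d50=70.0, slope=0.20)    # probability of normal-tissue complications

# The TOC curve plots tumor control against complications as the dose level is swept;
# AUTOC is the area under that curve (trapezoidal rule), analogous to AUROC.
x, y = p_complication, p_control
autoc = np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0)
print(f"AUTOC = {autoc:.3f}")    # values near 1 mean control is reached before complications
```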

  15. Groundwater management under sustainable yield uncertainty

    NASA Astrophysics Data System (ADS)

    Delottier, Hugo; Pryet, Alexandre; Dupuy, Alain

    2015-04-01

    The definition of the sustainable yield (SY) of a groundwater system consists in adjusting pumping rates so as to avoid groundwater depletion and preserve environmental flows. Once stakeholders have defined which impacts can be considered as "acceptable" for both environmental and societal aspects, hydrogeologists use groundwater models to estimate the SY. Yet, these models are based on a simplification of actual groundwater systems, whose hydraulic properties are largely unknown. As a result, the estimated SY is subject to "predictive" uncertainty. We illustrate the issue with a synthetic homogeneous aquifer system in interaction with a stream for steady state and transient conditions. Simulations are conducted with the USGS MODFLOW finite difference model with the river package. A synthetic dataset is first generated with the numerical model that will further be considered as the "observed" state. In a second step, we conduct the calibration operation as hydrogeologists dealing with real-world, unknown groundwater systems. The RMSE between simulated hydraulic heads and the synthetic "observed" values is used as the objective function. But instead of simply "calibrating" model parameters, we explore the value of the objective function in the parameter space (hydraulic conductivity, storage coefficient and total recharge). We highlight the occurrence of an ellipsoidal "null space", where distinct parameter sets lead to equally low values for the objective function. The optimum of the objective function is not unique, which leads to a range of possible values for the SY. With a large confidence interval for the SY, the use of modeling results for decision-making is challenging. We argue that prior to modeling operations, efforts must be invested so as to narrow the intervals of likely parameter values. Parameter space exploration is effective for estimating SY uncertainty, but not efficient because of its computational burden and is therefore inapplicable for real world
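    The kind of parameter-space exploration described here can be sketched with a toy forward model standing in for the MODFLOW run; the model, grids, and threshold below are illustrative assumptions only.

```python
# Sketch: explore an objective function over hydraulic parameters to expose non-uniqueness.
import itertools
import numpy as np

def forward_heads(K, S, R):
    """Toy groundwater model: returns simulated heads at 5 observation wells."""
    x = np.linspace(0.0, 1.0, 5)
    return 10.0 + (R / K) * (1.0 - x ** 2) + 0.1 * S   # heads depend mostly on R/K

# Synthetic "observed" heads generated with known parameters (the truth).
obs = forward_heads(K=1e-3, S=1e-4, R=2e-4)

grid_K = np.logspace(-4, -2, 30)
grid_S = np.logspace(-5, -3, 10)
grid_R = np.logspace(-5, -3, 30)

results = []
for K, S, R in itertools.product(grid_K, grid_S, grid_R):
    rmse = np.sqrt(np.mean((forward_heads(K, S, R) - obs) ** 2))
    results.append((rmse, K, S, R))

# Many distinct parameter sets reach (nearly) the same low objective value: the "null space".
good = [r for r in results if r[0] < 0.01]
print(f"{len(good)} parameter sets fit the heads almost equally well")
```

    Because the toy heads depend essentially on the ratio R/K, many distinct (K, R) pairs reach nearly the same objective value, which is the "null space" behaviour the abstract describes.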

  16. LDRD Final Report: Capabilities for Uncertainty in Predictive Science.

    SciTech Connect

    Phipps, Eric Todd; Eldred, Michael S; Salinger, Andrew G.; Webster, Clayton G.

    2008-10-01

    Predictive simulation of systems comprised of numerous interconnected, tightly coupled components promises to help solve many problems of scientific and national interest. However, predictive simulation of such systems is extremely challenging due to the coupling of a diverse set of physical and biological length and time scales. This report investigates uncertainty quantification methods for such systems that attempt to exploit their structure to gain computational efficiency. The traditional layering of uncertainty quantification around nonlinear solution processes is inverted to allow for heterogeneous uncertainty quantification methods to be applied to each component in a coupled system. Moreover, this approach allows stochastic dimension reduction techniques to be applied at each coupling interface. The mathematical feasibility of these ideas is investigated in this report, and mathematical formulations for the resulting stochastically coupled nonlinear systems are developed.

  17. Stereo-particle image velocimetry uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sayantan; Charonko, John J.; Vlachos, Pavlos P.

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from 2014 PIV challenge. Thorough sensitivity analysis was performed to assess the relative impact of the various parameters to the overall uncertainty. The results suggest that in absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric
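    A minimal sketch of the propagation step, assuming a standard angular stereo reconstruction for the out-of-plane component; the formula, angles, and uncertainty values are generic assumptions, not the paper's framework.

```python
# Sketch: combine planar (per-camera) and calibration-angle uncertainties into the
# out-of-plane stereo PIV component via first-order propagation.
import numpy as np

u1, u2 = 2.00, 1.60                      # in-plane displacements seen by cameras 1 and 2
th1 = th2 = np.deg2rad(35.0)             # camera half-angles from the calibration
su1 = su2 = 0.05                         # planar displacement uncertainties (per camera)
sth = np.deg2rad(0.2)                    # angle uncertainty from the mapping-function fit

denom = np.tan(th1) + np.tan(th2)
w = (u1 - u2) / denom                    # out-of-plane component

# Analytic sensitivity coefficients for the propagation equation.
dw_du1 = 1.0 / denom
dw_du2 = -1.0 / denom
dw_dth1 = -(u1 - u2) / np.cos(th1) ** 2 / denom ** 2
dw_dth2 = -(u1 - u2) / np.cos(th2) ** 2 / denom ** 2

var_w = (dw_du1 * su1) ** 2 + (dw_du2 * su2) ** 2 + (dw_dth1 * sth) ** 2 + (dw_dth2 * sth) ** 2
print(f"w = {w:.3f} +/- {np.sqrt(var_w):.3f}")
```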

  18. Multifidelity, Multidisciplinary Design Under Uncertainty with Non-Intrusive Polynomial Chaos

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Gumbert, Clyde

    2017-01-01

    The primary objective of this work is to develop an approach for multifidelity uncertainty quantification and to lay the framework for future design under uncertainty efforts. In this study, multifidelity is used to describe both the fidelity of the modeling of the physical systems and the difference in the uncertainty in each of the models. For computational efficiency, a multifidelity surrogate modeling approach based on non-intrusive polynomial chaos using the point-collocation technique is developed for the treatment of both multifidelity modeling and multifidelity uncertainty modeling. Two stochastic model problems are used to demonstrate the developed methodologies: a transonic airfoil model and a multidisciplinary aircraft analysis model. The results of both showed that the multifidelity modeling approach was able to reproduce the output uncertainty predicted by the high-fidelity model at a significant reduction in computational cost.
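    A minimal sketch of single-fidelity, non-intrusive polynomial chaos with point collocation for one standard-normal input; the placeholder model and sampling choices are assumptions, not the paper's multifidelity formulation.

```python
# Sketch: non-intrusive polynomial chaos by point collocation (one uncertain input).
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def model(xi):
    """Stand-in for the deterministic solver evaluated at an input realization xi."""
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

order = 4
rng = np.random.default_rng(1)
xi = rng.standard_normal(2 * (order + 1))      # oversampled collocation points
y = model(xi)

# Least-squares fit of the probabilists' Hermite expansion coefficients.
Psi = hermevander(xi, order)                   # He_0..He_order evaluated at the samples
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Output statistics follow from orthogonality: E[He_j He_k] = k! * delta_jk.
norms = np.array([math.factorial(k) for k in range(order + 1)], dtype=float)
mean = coeffs[0]
std = np.sqrt(np.sum(coeffs[1:] ** 2 * norms[1:]))
print(f"mean ~ {mean:.4f}, std ~ {std:.4f}")
```

    In a multifidelity variant, a second expansion would typically be fit to the discrepancy between high- and low-fidelity evaluations and added to the low-fidelity surrogate.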

  19. An Uncertainty Quantification Framework for Prognostics and Condition-Based Monitoring

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Goebel, Kai

    2014-01-01

    This paper presents a computational framework for uncertainty quantification in prognostics in the context of condition-based monitoring of aerospace systems. The different sources of uncertainty and the various uncertainty quantification activities in condition-based prognostics are outlined in detail, and it is demonstrated that the Bayesian subjective approach is suitable for interpreting uncertainty in online monitoring. A state-space model-based framework for prognostics, that can rigorously account for the various sources of uncertainty, is presented. Prognostics consists of two important steps. First, the state of the system is estimated using Bayesian tracking, and then, the future states of the system are predicted until failure, thereby computing the remaining useful life of the system. The proposed framework is illustrated using the power system of a planetary rover test-bed, which is being developed and studied at NASA Ames Research Center.
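    The two-step structure (state estimation, then propagation to failure) can be sketched with a Monte Carlo remaining-useful-life calculation; the degradation model, threshold, and distributions below are illustrative assumptions, not the rover test-bed model.

```python
# Sketch: Monte Carlo remaining-useful-life (RUL) prediction after a state estimate.
import numpy as np

rng = np.random.default_rng(2)

# Step 1 (tracking) is assumed done: current health state with estimation uncertainty.
state_mean, state_std = 0.70, 0.03          # e.g., normalized battery capacity
failure_threshold = 0.50

# Step 2: propagate sampled states forward with an uncertain degradation rate.
n_samples = 5000
state = rng.normal(state_mean, state_std, n_samples)
rate = rng.normal(0.004, 0.001, n_samples)   # loss per cycle (uncertain model parameter)

rul = np.full(n_samples, np.nan)
for cycle in range(1, 500):
    state = state - np.clip(rate, 0.0, None) + rng.normal(0.0, 0.001, n_samples)
    newly_failed = np.isnan(rul) & (state <= failure_threshold)
    rul[newly_failed] = cycle

valid = rul[~np.isnan(rul)]
print(f"RUL median = {np.median(valid):.0f} cycles, "
      f"90% interval = [{np.percentile(valid, 5):.0f}, {np.percentile(valid, 95):.0f}]")
```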

  20. Assessment of Laminar, Convective Aeroheating Prediction Uncertainties for Mars Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Prabhu, Dinesh K.

    2011-01-01

    An assessment of computational uncertainties is presented for numerical methods used by NASA to predict laminar, convective aeroheating environments for Mars entry vehicles. A survey was conducted of existing experimental heat-transfer and shock-shape data for high enthalpy, reacting-gas CO2 flows and five relevant test series were selected for comparison to predictions. Solutions were generated at the experimental test conditions using NASA state-of-the-art computational tools and compared to these data. The comparisons were evaluated to establish predictive uncertainties as a function of total enthalpy and to provide guidance for future experimental testing requirements to help lower these uncertainties.

  1. Assessment of Laminar, Convective Aeroheating Prediction Uncertainties for Mars-Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Prabhu, Dinesh K.

    2013-01-01

    An assessment of computational uncertainties is presented for numerical methods used by NASA to predict laminar, convective aeroheating environments for Mars-entry vehicles. A survey was conducted of existing experimental heat transfer and shock-shape data for high-enthalpy reacting-gas CO2 flows, and five relevant test series were selected for comparison with predictions. Solutions were generated at the experimental test conditions using NASA state-of-the-art computational tools and compared with these data. The comparisons were evaluated to establish predictive uncertainties as a function of total enthalpy and to provide guidance for future experimental testing requirements to help lower these uncertainties.

  2. Visualizing Flow of Uncertainty through Analytical Processes.

    PubMed

    Wu, Yingcai; Yuan, Guo-Xun; Ma, Kwan-Liu

    2012-12-01

    Uncertainty can arise in any stage of a visual analytics process, especially in data-intensive applications with a sequence of data transformations. Additionally, throughout the process of multidimensional, multivariate data analysis, uncertainty due to data transformation and integration may split, merge, increase, or decrease. This dynamic characteristic along with other features of uncertainty pose a great challenge to effective uncertainty-aware visualization. This paper presents a new framework for modeling uncertainty and characterizing the evolution of the uncertainty information through analytical processes. Based on the framework, we have designed a visual metaphor called uncertainty flow to visually and intuitively summarize how uncertainty information propagates over the whole analysis pipeline. Our system allows analysts to interact with and analyze the uncertainty information at different levels of detail. Three experiments were conducted to demonstrate the effectiveness and intuitiveness of our design.

  3. Laser triangulation: fundamental uncertainty in distance measurement.

    PubMed

    Dorsch, R G; Häusler, G; Herrmann, J M

    1994-03-01

    We discuss the uncertainty limit in distance sensing by laser triangulation. The uncertainty in distance measurement of laser triangulation sensors and other coherent sensors is limited by speckle noise. Speckle arises because of the coherent illumination in combination with rough surfaces. A minimum limit on the distance uncertainty is derived through speckle statistics. This uncertainty is a function of wavelength, observation aperture, and speckle contrast in the spot image. Surprisingly, it is the same distance uncertainty that we obtained from a single-photon experiment and from Heisenberg's uncertainty principle. Experiments confirm the theory. An uncertainty principle connecting lateral resolution and distance uncertainty is introduced. Design criteria for a sensor with minimum distance uncertainty are determined: small temporal coherence, small spatial coherence, a large observation aperture.

  4. Probabilistic forecasts based on radar rainfall uncertainty

    NASA Astrophysics Data System (ADS)

    Liguori, S.; Rico-Ramirez, M. A.

    2012-04-01

    The potential advantages resulting from integrating weather radar rainfall estimates in hydro-meteorological forecasting systems are limited by the inherent uncertainty affecting radar rainfall measurements, which is due to various sources of error [1-3]. The improvement of quality control and correction techniques is recognized to play a role in the future improvement of radar-based flow predictions. However, the knowledge of the uncertainty affecting radar rainfall data can also be effectively used to build a hydro-meteorological forecasting system in a probabilistic framework. This work discusses the results of the implementation of a novel probabilistic forecasting system developed to improve ensemble predictions over a small urban area located in the North of England. An ensemble of radar rainfall fields can be determined as the sum of a deterministic component and a perturbation field, the latter being informed by the knowledge of the spatial-temporal characteristics of the radar error assessed with reference to rain-gauge measurements. This approach is similar to the REAL system [4] developed for use in the Southern Alps. The radar uncertainty estimate can then be propagated with a nowcasting model, used to extrapolate an ensemble of radar rainfall forecasts, which can ultimately drive hydrological ensemble predictions. A radar ensemble generator has been calibrated using radar rainfall data made available by the UK Met Office after applying post-processing and correction algorithms [5-6]. One-hour rainfall accumulations from 235 rain gauges recorded for the year 2007 have provided the reference to determine the radar error. Statistics describing the spatial characteristics of the error (i.e. mean and covariance) have been computed off-line at gauge locations, along with the parameters describing the error temporal correlation. A system has then been set up to impose the space-time error properties on stochastic perturbations, generated in real-time at
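    The ensemble-generation idea (deterministic radar field plus spatially correlated perturbations) can be sketched as below; the exponential error covariance, grid, and parameter values are illustrative assumptions rather than the calibrated UK radar error model.

```python
# Sketch: radar-rainfall ensemble members as deterministic field + correlated noise.
import numpy as np

rng = np.random.default_rng(3)

n = 20                                        # 20 x 20 km grid, 1-km cells
xs, ys = np.meshgrid(np.arange(n), np.arange(n))
coords = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

radar_field = 2.0 + 0.05 * xs.ravel()         # deterministic radar rainfall estimate (mm/h)

# Spatial covariance of the radar error, as estimated offline against rain gauges.
sigma, corr_length = 0.8, 5.0                 # mm/h, km
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
cov = sigma ** 2 * np.exp(-dist / corr_length)

L = np.linalg.cholesky(cov + 1e-8 * np.eye(n * n))
ensemble = [np.clip(radar_field + L @ rng.standard_normal(n * n), 0.0, None)
            for _ in range(50)]               # 50 equally likely rainfall realizations
print("ensemble member 0, mean rainfall:", ensemble[0].mean())
```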

  5. Uncertainty quantification in volumetric Particle Image Velocimetry

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos

    2016-11-01

    Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV. However, no framework exists for three-dimensional volumetric PIV. In volumetric PIV the measurement uncertainty is a function of reconstructed three-dimensional particle location that in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position which is a function of particle detection and mapping function uncertainty. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty that is estimated as an extension of the 2D PIV uncertainty framework. Finally the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases the variation of estimated uncertainty with the elemental volumetric PIV error sources are also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.

  6. Estimation of uncertainty for fatigue growth rate at cryogenic temperatures

    NASA Astrophysics Data System (ADS)

    Nyilas, Arman; Weiss, Klaus P.; Urbach, Elisabeth; Marcinek, Dawid J.

    2014-01-01

    Fatigue crack growth rate (FCGR) measurement data for high-strength austenitic alloys in cryogenic environments generally suffer from a high degree of data scatter, in particular in the ΔK regime below 25 MPa√m. Standard mathematical smoothing techniques ultimately force a linear relationship in the stage II regime (crack propagation rate versus ΔK) on a double-log scale, known as the Paris law. However, the bandwidth of uncertainty depends somewhat arbitrarily on the researcher's interpretation. The present paper deals with the use of the uncertainty concept on FCGR data as given by GUM (Guide to the Expression of Uncertainty in Measurement), which has been a recommended procedure since 1993 to avoid subjective estimation of error bands. Within this context, the lack of a true value requires evaluating the best estimate by a statistical method, using the crack propagation law as the mathematical measurement model equation and identifying all input parameters. Each parameter necessary for the measurement technique was processed, assuming a Gaussian distribution, by partial differentiation of the terms to estimate the sensitivity coefficients. The combined standard uncertainty determined for each term with its computed sensitivity coefficient finally resulted in the measurement uncertainty of the FCGR test result. The described uncertainty procedure has been applied within the framework of ITER to a recent FCGR measurement for the high-strength, high-toughness Type 316LN material tested at 7 K using a standard ASTM proportional compact tension specimen. The determined values of the Paris law constants, such as C0 and the exponent m, as best estimates along with their uncertainty values may serve as a realistic basis for the life expectancy of cyclically loaded members.
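    A minimal sketch of the GUM-style combination of sensitivity coefficients and input uncertainties, using the Paris law as the measurement model; the numerical values are illustrative, not the ITER 316LN test data.

```python
# Sketch: GUM-style combined standard uncertainty via sensitivity coefficients.
import numpy as np

def paris_rate(C, m, dK):
    """Measurement model: crack growth per cycle da/dN = C * dK**m."""
    return C * dK ** m

vals = dict(C=2.0e-11, m=3.2, dK=20.0)         # best estimates (illustrative units)
uncs = dict(C=0.3e-11, m=0.10, dK=0.5)         # standard uncertainties of the inputs

var = 0.0
for name, u in uncs.items():
    hi, lo = dict(vals), dict(vals)
    step = 1e-6 * abs(vals[name])
    hi[name] += step
    lo[name] -= step
    c_i = (paris_rate(**hi) - paris_rate(**lo)) / (2 * step)   # sensitivity coefficient
    var += (c_i * u) ** 2                       # contributions combined in quadrature

y = paris_rate(**vals)
print(f"da/dN = {y:.3e} +/- {np.sqrt(var):.3e} (combined standard uncertainty)")
```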

  7. 'spup' - an R package for uncertainty propagation in spatial environmental modelling

    NASA Astrophysics Data System (ADS)

    Sawicka, Kasia; Heuvelink, Gerard

    2016-04-01

    Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability, including case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. Selected static and interactive visualization methods that are understandable by non-experts with limited background in

  8. Uncertainties in stellar ages provided by grid techniques

    NASA Astrophysics Data System (ADS)

    Prada Moroni, P. G.; Valle, G.; Dell'Omodarme, M.; Degl'Innocenti, S.

    2016-09-01

    The determination of the age of single stars by means of grid-based techniques is a well established method. We discuss the impact on these estimates of the uncertainties in several ingredients routinely adopted in stellar computations. The systematic bias on age determination caused by varying the assumed initial helium abundance, the mixing-length and convective core overshooting parameters, and the microscopic diffusion are quantified and compared with the statistical error owing to the current uncertainty in the observations. The typical uncertainty in the observations accounts for 1 σ statistical relative error in age determination ranging on average from about -35 % to +42 %, depending on the mass. However, the age's relative error strongly depends on the evolutionary phase and can be higher than 120 % for stars near the zero-age main-sequence, while it is typically about 20 % or lower in the advanced main-sequence phase. A variation of ± 1 in the helium-to-metal enrichment ratio induces a quite modest systematic bias on age estimates. The maximum bias due to the presence of the convective core overshooting is -7 % for β = 0.2 and -13 % for β = 0.4. The main sources of bias are the uncertainty in the mixing-length value and the neglect of microscopic diffusion, which account each for a bias comparable to the random error uncertainty.

  9. Uncertainty Analysis in Humidity Measurements by the Psychrometer Method

    PubMed Central

    Chen, Jiunyuan; Chen, Chiachung

    2017-01-01

    The most common and cheapest indirect technique to measure relative humidity is to use a psychrometer based on a dry and a wet temperature sensor. In this study, the measurement uncertainty of relative humidity was evaluated by this indirect method with some empirical equations for calculating relative humidity. Among the six equations tested, the Penman equation had the best predictive ability for the dry bulb temperature range of 15–50 °C. At a fixed dry bulb temperature, an increase in the wet bulb depression increased the error. A new equation for the psychrometer constant was established by regression analysis. This equation can be computed with a calculator. The average predictive error of relative humidity was <0.1% by this new equation. The measurement uncertainty of the relative humidity, as affected by the accuracy of the dry and wet bulb temperatures, and its numeric values were evaluated for various conditions. The uncertainty of the wet bulb temperature was the main contributor to the RH measurement uncertainty. PMID:28216599
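    A minimal sketch of the psychrometric calculation and its uncertainty; the Magnus saturation-pressure formula and the psychrometer constant below are standard textbook approximations used as assumptions, not the paper's regression equation.

```python
# Sketch: relative humidity from dry/wet bulb temperatures and its Monte Carlo uncertainty.
import numpy as np

def e_sat(t_c):
    """Saturation vapour pressure (hPa), Magnus approximation."""
    return 6.112 * np.exp(17.62 * t_c / (243.12 + t_c))

def rel_humidity(t_dry, t_wet, pressure_hpa=1013.25, gamma=6.6e-4):
    e = e_sat(t_wet) - gamma * pressure_hpa * (t_dry - t_wet)   # actual vapour pressure
    return 100.0 * e / e_sat(t_dry)

rng = np.random.default_rng(4)
t_dry, t_wet = 30.0, 25.0                 # measured values (deg C)
u_dry, u_wet = 0.2, 0.2                   # standard uncertainties of the sensors (deg C)

samples = rel_humidity(rng.normal(t_dry, u_dry, 100_000),
                       rng.normal(t_wet, u_wet, 100_000))
print(f"RH = {samples.mean():.1f}% +/- {samples.std():.1f}%")
```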

  10. Integrated Arrival and Departure Schedule Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Xue, Min; Zelinski, Shannon

    2014-01-01

    In terminal airspace, integrating arrivals and departures with shared waypoints provides the potential of improving operational efficiency by allowing direct routes when possible. Incorporating stochastic evaluation as a post-analysis process of deterministic optimization, and imposing a safety buffer in deterministic optimization, are two ways to learn and alleviate the impact of uncertainty and to avoid unexpected outcomes. This work presents a third and direct way to take uncertainty into consideration during the optimization. The impact of uncertainty was incorporated into cost evaluations when searching for the optimal solutions. The controller intervention count was computed using a heuristic model and served as another stochastic cost besides total delay. Costs under uncertainty were evaluated using Monte Carlo simulations. The Pareto fronts that contain a set of solutions were identified and the trade-off between delays and controller intervention count was shown. Solutions that shared similar delays but had different intervention counts were investigated. The results showed that optimization under uncertainty could identify compromise solutions on Pareto fronts, which is better than deterministic optimization with extra safety buffers. It helps decision-makers reduce controller intervention while achieving low delays.
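    The Monte Carlo cost evaluation and Pareto-front identification can be sketched as follows; the candidate schedules and cost distributions are synthetic assumptions, not the terminal-airspace scheduler.

```python
# Sketch: identify Pareto-optimal schedules from Monte Carlo cost estimates
# (total delay vs. expected controller interventions).
import numpy as np

rng = np.random.default_rng(5)
n_schedules, n_mc = 200, 300

# For each candidate schedule, Monte Carlo samples of the two costs under uncertainty.
delay_mc = rng.gamma(shape=2.0, scale=5.0, size=(n_schedules, n_mc))
interventions_mc = rng.poisson(lam=rng.uniform(1, 8, n_schedules)[:, None],
                               size=(n_schedules, n_mc))

mean_delay = delay_mc.mean(axis=1)
mean_interv = interventions_mc.mean(axis=1)

def pareto_mask(costs):
    """True for points not dominated in all objectives (minimization)."""
    mask = np.ones(len(costs), dtype=bool)
    for i, c in enumerate(costs):
        dominated = np.all(costs <= c, axis=1) & np.any(costs < c, axis=1)
        if dominated.any():
            mask[i] = False
    return mask

costs = np.column_stack([mean_delay, mean_interv])
front = costs[pareto_mask(costs)]
print(f"{len(front)} non-dominated schedules out of {n_schedules}")
```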

  11. Uncertainty Quantification for Monitoring of Civil Structures from Vibration Measurements

    NASA Astrophysics Data System (ADS)

    Döhler, Michael; Mevel, Laurent

    2014-05-01

    Health Monitoring of civil structures can be performed by detecting changes in the modal parameters of a structure, or more directly in the measured vibration signals. For a continuous monitoring the excitation of a structure is usually ambient, thus unknown and assumed to be noise. Hence, all estimates from the vibration measurements are realizations of random variables with inherent uncertainty due to (unknown) process and measurement noise and finite data length. In this talk, a strategy for quantifying the uncertainties of modal parameter estimates from a subspace-based system identification approach is presented and the importance of uncertainty quantification in monitoring approaches is shown. Furthermore, a damage detection method is presented, which is based on the direct comparison of the measured vibration signals without estimating modal parameters, while taking the statistical uncertainty in the signals correctly into account. The usefulness of both strategies is illustrated on data from a progressive damage action on a prestressed concrete bridge. References E. Carden and P. Fanning. Vibration based condition monitoring: a review. Structural Health Monitoring, 3(4):355-377, 2004. M. Döhler and L. Mevel. Efficient multi-order uncertainty computation for stochastic subspace identification. Mechanical Systems and Signal Processing, 38(2):346-366, 2013. M. Döhler, L. Mevel, and F. Hille. Subspace-based damage detection under changes in the ambient excitation statistics. Mechanical Systems and Signal Processing, 45(1):207-224, 2014.

  12. An Uncertainty-Aware Approach for Exploratory Microblog Retrieval.

    PubMed

    Liu, Mengchen; Liu, Shixia; Zhu, Xizhou; Liao, Qinying; Wei, Furu; Pan, Shimei

    2016-01-01

    Although there has been a great deal of interest in analyzing customer opinions and breaking news in microblogs, progress has been hampered by the lack of an effective mechanism to discover and retrieve data of interest from microblogs. To address this problem, we have developed an uncertainty-aware visual analytics approach to retrieve salient posts, users, and hashtags. We extend an existing ranking technique to compute a multifaceted retrieval result: the mutual reinforcement rank of a graph node, the uncertainty of each rank, and the propagation of uncertainty among different graph nodes. To illustrate the three facets, we have also designed a composite visualization with three visual components: a graph visualization, an uncertainty glyph, and a flow map. The graph visualization with glyphs, the flow map, and the uncertainty analysis together enable analysts to effectively find the most uncertain results and interactively refine them. We have applied our approach to several Twitter datasets. Qualitative evaluation and two real-world case studies demonstrate the promise of our approach for retrieving high-quality microblog data.

  13. Uncertainty Analysis in Humidity Measurements by the Psychrometer Method.

    PubMed

    Chen, Jiunyuan; Chen, Chiachung

    2017-02-14

    The most common and cheapest indirect technique to measure relative humidity is to use a psychrometer based on a dry and a wet temperature sensor. In this study, the measurement uncertainty of relative humidity was evaluated by this indirect method with some empirical equations for calculating relative humidity. Among the six equations tested, the Penman equation had the best predictive ability for the dry bulb temperature range of 15-50 °C. At a fixed dry bulb temperature, an increase in the wet bulb depression increased the error. A new equation for the psychrometer constant was established by regression analysis. This equation can be computed with a calculator. The average predictive error of relative humidity was <0.1% by this new equation. The measurement uncertainty of the relative humidity, as affected by the accuracy of the dry and wet bulb temperatures, and its numeric values were evaluated for various conditions. The uncertainty of the wet bulb temperature was the main contributor to the RH measurement uncertainty.

  14. NASA Team 2 Sea Ice Concentration Algorithm Retrieval Uncertainty

    NASA Technical Reports Server (NTRS)

    Brucker, Ludovic; Cavalieri, Donald J.; Markus, Thorsten; Ivanoff, Alvaro

    2014-01-01

    Satellite microwave radiometers are widely used to estimate sea ice cover properties (concentration, extent, and area) through the use of sea ice concentration (IC) algorithms. Rare are the algorithms providing associated IC uncertainty estimates. Algorithm uncertainty estimates are needed to assess accurately global and regional trends in IC (and thus extent and area), and to improve sea ice predictions on seasonal to interannual timescales using data assimilation approaches. This paper presents a method to provide relative IC uncertainty estimates using the enhanced NASA Team (NT2) IC algorithm. The proposed approach takes advantage of the NT2 calculations and solely relies on the brightness temperatures (TBs) used as input. NT2 IC and its associated relative uncertainty are obtained for both the Northern and Southern Hemispheres using the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) TB. NT2 IC relative uncertainties estimated on a footprint-by-footprint swath-by-swath basis were averaged daily over each 12.5-km grid cell of the polar stereographic grid. For both hemispheres and throughout the year, the NT2 relative uncertainty is less than 5%. In the Southern Hemisphere, it is low in the interior ice pack, and it increases in the marginal ice zone up to 5%. In the Northern Hemisphere, areas with high uncertainties are also found in the high IC area of the Central Arctic. Retrieval uncertainties are greater in areas corresponding to NT2 ice types associated with deep snow and new ice. Seasonal variations in uncertainty show larger values in summer as a result of melt conditions and greater atmospheric contributions. Our analysis also includes an evaluation of the NT2 algorithm sensitivity to AMSR-E sensor noise. There is a 60% probability that the IC does not change (to within the computed retrieval precision of 1%) due to sensor noise, and the cumulated probability shows that there is a 90% chance that the IC varies by less than

  15. Adjoint-Based Uncertainty Quantification with MCNP

    SciTech Connect

    Seifried, Jeffrey E.

    2011-09-01

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.

  16. A Stronger Multi-observable Uncertainty Relation

    NASA Astrophysics Data System (ADS)

    Song, Qiu-Cheng; Li, Jun-Li; Peng, Guang-Xiong; Qiao, Cong-Feng

    2017-03-01

    Uncertainty relation lies at the heart of quantum mechanics, characterizing the incompatibility of non-commuting observables in the preparation of quantum states. An important question is how to improve the lower bound of uncertainty relation. Here we present a variance-based sum uncertainty relation for N incompatible observables stronger than the simple generalization of an existing uncertainty relation for two observables. Further comparisons of our uncertainty relation with other related ones for spin-1/2 and spin-1 particles indicate that the obtained uncertainty relation gives a better lower bound.
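    For context, the worked example below shows the standard two-observable product and sum bounds (the kind of simple generalization such papers aim to strengthen) evaluated for a qubit; it is not the paper's stronger N-observable relation.

```latex
% Standard textbook bounds (not the paper's relation), evaluated for a qubit in the
% \sigma_z eigenstate |0\rangle.
\[
  \Delta A\,\Delta B \;\ge\; \tfrac{1}{2}\,\bigl|\langle[A,B]\rangle\bigr|
  \qquad\Longrightarrow\qquad
  \Delta A^{2} + \Delta B^{2} \;\ge\; 2\,\Delta A\,\Delta B \;\ge\; \bigl|\langle[A,B]\rangle\bigr| .
\]
\[
  \text{For } |0\rangle:\quad
  \Delta\sigma_x^{2} = \Delta\sigma_y^{2} = 1,\qquad
  \bigl|\langle[\sigma_x,\sigma_y]\rangle\bigr| = 2\,|\langle\sigma_z\rangle| = 2,
  \quad\text{so}\quad
  \Delta\sigma_x^{2} + \Delta\sigma_y^{2} = 2 \;\ge\; 2 .
\]
```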

  17. Adjoint-Based Uncertainty Quantification with MCNP

    NASA Astrophysics Data System (ADS)

    Seifried, Jeffrey Edwin

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.

  18. A Stronger Multi-observable Uncertainty Relation

    PubMed Central

    Song, Qiu-Cheng; Li, Jun-Li; Peng, Guang-Xiong; Qiao, Cong-Feng

    2017-01-01

    Uncertainty relation lies at the heart of quantum mechanics, characterizing the incompatibility of non-commuting observables in the preparation of quantum states. An important question is how to improve the lower bound of uncertainty relation. Here we present a variance-based sum uncertainty relation for N incompatible observables stronger than the simple generalization of an existing uncertainty relation for two observables. Further comparisons of our uncertainty relation with other related ones for spin-1/2 and spin-1 particles indicate that the obtained uncertainty relation gives a better lower bound. PMID:28317917

  19. The effect of model uncertainty on cooperation in sensorimotor interactions

    PubMed Central

    Grau-Moya, J.; Hez, E.; Pezzulo, G.; Braun, D. A.

    2013-01-01

    Decision-makers have been shown to rely on probabilistic models for perception and action. However, these models can be incorrect or partially wrong in which case the decision-maker has to cope with model uncertainty. Model uncertainty has recently also been shown to be an important determinant of sensorimotor behaviour in humans that can lead to risk-sensitive deviations from Bayes optimal behaviour towards worst-case or best-case outcomes. Here, we investigate the effect of model uncertainty on cooperation in sensorimotor interactions similar to the stag-hunt game, where players develop models about the other player and decide between a pay-off-dominant cooperative solution and a risk-dominant, non-cooperative solution. In simulations, we show that players who allow for optimistic deviations from their opponent model are much more likely to converge to cooperative outcomes. We also implemented this agent model in a virtual reality environment, and let human subjects play against a virtual player. In this game, subjects' pay-offs were experienced as forces opposing their movements. During the experiment, we manipulated the risk sensitivity of the computer player and observed human responses. We found not only that humans adaptively changed their level of cooperation depending on the risk sensitivity of the computer player but also that their initial play exhibited characteristic risk-sensitive biases. Our results suggest that model uncertainty is an important determinant of cooperation in two-player sensorimotor interactions. PMID:23945266

  20. Statistical Modeling of Epistemic Uncertainty in RANS Turbulence Models

    NASA Astrophysics Data System (ADS)

    Rahbari, Iman; Esfahanian, Vahid

    2014-11-01

    RANS turbulence models are widely used in industrial applications thanks to their low computational cost. However, they introduce model-form uncertainty originating from the eddy-viscosity hypothesis, the assumptions behind the transport equations of turbulent properties, free parameters in the models, and wall functions. In contrast, DNS provides detailed and accurate results but at high computational cost, making it unaffordable for industrial use. Therefore, quantification of structural uncertainty in RANS models using DNS data could help engineers make better decisions from the results of turbulence models. In this study, a new and efficient method for statistical modeling of uncertainties in RANS models is presented, in which the deviation of the predicted Reynolds stress tensor from DNS data is modeled as a Gaussian random field. A new covariance kernel is proposed based on the eigendecomposition of a sample kernel, and its hyperparameters are found by minimizing the negative log likelihood with a Particle Swarm Optimization algorithm. Thereafter, the random field is sampled using a Karhunen-Loève expansion, and the RANS equations are solved for each sample to propagate the uncertainty to the quantity of interest. Fully developed channel flow as well as flow in a converging-diverging channel are considered as test cases.
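    The sampling step can be sketched with a truncated Karhunen-Loève expansion of a one-dimensional Gaussian random field; the squared-exponential covariance and its parameters are generic assumptions, not the eigendecomposition-based kernel proposed in the work.

```python
# Sketch: sample a 1-D Gaussian random field by truncated Karhunen-Loeve expansion.
import numpy as np

rng = np.random.default_rng(6)

y = np.linspace(0.0, 1.0, 200)                      # e.g., a wall-normal coordinate
sigma, ell = 0.1, 0.15                              # field std and correlation length
cov = sigma ** 2 * np.exp(-0.5 * ((y[:, None] - y[None, :]) / ell) ** 2)

# Karhunen-Loeve: eigen-decompose the covariance, keep the dominant modes.
eigvals, eigvecs = np.linalg.eigh(cov)
idx = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]
k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.99)) + 1   # 99% of variance

# Each sample is one zero-mean perturbation field (e.g., a Reynolds-stress discrepancy).
xi = rng.standard_normal(k)
sample = eigvecs[:, :k] @ (np.sqrt(np.maximum(eigvals[:k], 0.0)) * xi)
print(f"kept {k} KL modes; sample field std ~ {sample.std():.3f}")
```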

  1. Automated Generation of Tabular Equations of State with Uncertainty Information

    NASA Astrophysics Data System (ADS)

    Carpenter, John H.; Robinson, Allen C.; Debusschere, Bert J.; Mattsson, Ann E.

    2015-06-01

    As computational science pushes toward higher fidelity prediction, understanding the uncertainty associated with closure models, such as the equation of state (EOS), has become a key focus. Traditional EOS development often involves a fair amount of art, where expert modelers may appear as magicians, providing what is felt to be the closest possible representation of the truth. Automation of the development process gives a means by which one may demystify the art of EOS, while simultaneously obtaining uncertainty information in a manner that is both quantifiable and reproducible. We describe our progress on the implementation of such a system to provide tabular EOS tables with uncertainty information to hydrocodes. Key challenges include encoding the artistic expert opinion into an algorithmic form and preserving the analytic models and uncertainty information in a manner that is both accurate and computationally efficient. Results are demonstrated on a multi-phase aluminum model. *Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  2. An Uncertainty Quantification System for Tabular Equations of State

    NASA Astrophysics Data System (ADS)

    Carpenter, John; Robinson, Allen; Debusschere, Bert; Mattsson, Ann; Drake, Richard; Rider, William

    2013-06-01

    Providing analysts with information regarding the accuracy of computational models is key for enabling predictive design and engineering. Uncertainty in material models can make significant contributions to the overall uncertainty in calculations. As a first step toward tackling this large problem, we present an uncertainty quantification system for tabular equations of state (EOS). First a posterior distribution of EOS model parameters is inferred using Bayes rule and a set of experimental and computational data. EOS tables are generated for parameter states sampled from the posterior distribution. A new unstructured triangular table format allows for capturing multi-phase model behavior. A principal component analysis then reduces this set of tables to a mean table and most significant perturbations. This final set of tables is provided to hydrocodes for performing simulations using standard non-intrusive uncertainty propagation methods. A multi-phase aluminum model is used to demonstrate the system. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
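    The reduction of an ensemble of sampled tables to a mean table plus dominant perturbations can be sketched with a PCA/SVD step; the synthetic ensemble below stands in for tables generated from the Bayesian posterior.

```python
# Sketch: reduce an ensemble of sampled EOS tables to a mean table plus the most
# significant perturbation modes via PCA (SVD).
import numpy as np

rng = np.random.default_rng(7)

n_tables, n_cells = 200, 1000                 # 200 posterior samples, 1000 table entries each
base = np.linspace(1.0, 10.0, n_cells)        # nominal values along the table
ensemble = (base
            + rng.normal(0, 0.05, (n_tables, 1)) * base      # correlated scale variation
            + rng.normal(0, 0.02, (n_tables, n_cells)))      # cell-level noise

mean_table = ensemble.mean(axis=0)
centered = ensemble - mean_table

# SVD of the centered ensemble: rows of Vt are the principal perturbation tables.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
n_keep = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1

perturbations = (s[:n_keep, None] / np.sqrt(n_tables - 1)) * Vt[:n_keep]
print(f"kept {n_keep} perturbation tables alongside the mean table")
# A hydrocode can then run the mean table +/- each scaled perturbation for non-intrusive UQ.
```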

  3. Uncertainty quantification for personalized analyses of human proximal femurs.

    PubMed

    Wille, Hagen; Ruess, Martin; Rank, Ernst; Yosibash, Zohar

    2016-02-29

    Computational models for the personalized analysis of human femurs contain uncertainties in bone material properties and loads, which affect the simulation results. To quantify the influence we developed a probabilistic framework based on polynomial chaos (PC) that propagates stochastic input variables through any computational model. We considered a stochastic E-ρ relationship and a stochastic hip contact force, representing realistic variability of experimental data. Their influence on the prediction of principal strains (ϵ1 and ϵ3) was quantified for one human proximal femur, including sensitivity and reliability analysis. Large variabilities in the principal strain predictions were found in the cortical shell of the femoral neck, with coefficients of variation of ≈40%. Between 60 and 80% of the variance in ϵ1 and ϵ3 are attributable to the uncertainty in the E-ρ relationship, while ≈10% are caused by the load magnitude and 5-30% by the load direction. Principal strain directions were unaffected by material and loading uncertainties. The antero-superior and medial inferior sides of the neck exhibited the largest probabilities for tensile and compression failure, however all were very small (pf<0.001). In summary, uncertainty quantification with PC has been demonstrated to efficiently and accurately describe the influence of very different stochastic inputs, which increases the credibility and explanatory power of personalized analyses of human proximal femurs.

  4. Uncertainty of climate change impact on groundwater reserves - Application to a chalk aquifer

    NASA Astrophysics Data System (ADS)

    Goderniaux, Pascal; Brouyère, Serge; Wildemeersch, Samuel; Therrien, René; Dassargues, Alain

    2015-09-01

    Recent studies have evaluated the impact of climate change on groundwater resources for different geographical and climatic contexts. However, most studies have either not estimated the uncertainty around projected impacts or have limited the analysis to the uncertainty related to climate models. In this study, the uncertainties around impact projections from several sources (climate models, natural variability of the weather, hydrological model calibration) are calculated and compared for the Geer catchment (465 km2) in Belgium. We use a surface-subsurface integrated model implemented using the finite element code HydroGeoSphere, coupled with climate change scenarios (2010-2085) and the UCODE_2005 inverse model, to assess the uncertainty related to the calibration of the hydrological model. This integrated model provides a more realistic representation of the water exchanges between surface and subsurface domains and constrains more the calibration with the use of both surface and subsurface observed data. Sensitivity and uncertainty analyses were performed on predictions. The linear uncertainty analysis is approximate for this nonlinear system, but it provides some measure of uncertainty for computationally demanding models. Results show that, for the Geer catchment, the most important uncertainty is related to calibration of the hydrological model. The total uncertainty associated with the prediction of groundwater levels remains large. By the end of the century, however, the uncertainty becomes smaller than the predicted decline in groundwater levels.

  5. Uncertainty relation for mutual information

    NASA Astrophysics Data System (ADS)

    Schneeloch, James; Broadbent, Curtis J.; Howell, John C.

    2014-12-01

    We postulate the existence of a universal uncertainty relation between the quantum and classical mutual informations between pairs of quantum systems. Specifically, we propose that the sum of the classical mutual information, determined by two mutually unbiased pairs of observables, never exceeds the quantum mutual information. We call this the complementary-quantum correlation (CQC) relation and prove its validity for pure states, for states with one maximally mixed subsystem, and for all states when one measurement is minimally disturbing. We provide results of a Monte Carlo simulation suggesting that the CQC relation is generally valid. Importantly, we also show that the CQC relation represents an improvement to an entropic uncertainty principle in the presence of a quantum memory, and that it can be used to verify an achievable secret key rate in the quantum one-time pad cryptographic protocol.

  6. Aspects of complementarity and uncertainty

    NASA Astrophysics Data System (ADS)

    Vathsan, Radhika; Qureshi, Tabish

    2016-08-01

    The two-slit experiment with quantum particles provides many insights into the behavior of quantum mechanics, including Bohr’s complementarity principle. Here, we analyze Einstein’s recoiling slit version of the experiment and show how the inevitable entanglement between the particle and the recoiling slit as a which-way detector is responsible for complementarity. We derive the Englert-Greenberger-Yasin duality from this entanglement, which can also be thought of as a consequence of sum-uncertainty relations between certain complementary observables of the recoiling slit. Thus, entanglement is an integral part of the which-way detection process, and so is uncertainty, though in a completely different way from that envisaged by Bohr and Einstein.

  7. Improved Event Location Uncertainty Estimates

    DTIC Science & Technology

    2008-06-30

    model (such as Gaussian, spherical or exponential) typically used in geostatistics ... We define the robust variogram model as the median regression curve of the residual difference squares for station pairs of ... develop methodologies that improve location uncertainties in the presence of correlated, systematic model errors and non-Gaussian measurement errors. We

  8. Quantifying uncertainty from material inhomogeneity.

    SciTech Connect

    Battaile, Corbett Chandler; Emery, John M.; Brewer, Luke N.; Boyce, Brad Lee

    2009-09-01

    Most engineering materials are inherently inhomogeneous in their processing, internal structure, properties, and performance. Their properties are therefore statistical rather than deterministic. These inhomogeneities manifest across multiple length and time scales, leading to variabilities, i.e. statistical distributions, that are necessary to accurately describe each stage in the process-structure-properties hierarchy, and are ultimately the primary source of uncertainty in performance of the material and component. When localized events are responsible for component failure, or when component dimensions are on the order of microstructural features, this uncertainty is particularly important. For ultra-high reliability applications, the uncertainty is compounded by a lack of data describing the extremely rare events. Hands-on testing alone cannot supply sufficient data for this purpose. To date, there is no robust or coherent method to quantify this uncertainty so that it can be used in a predictive manner at the component length scale. The research presented in this report begins to address this lack of capability through a systematic study of the effects of microstructure on the strain concentration at a hole. To achieve the strain concentration, small circular holes (approximately 100 µm in diameter) were machined into brass tensile specimens using a femto-second laser. The brass was annealed at 450 °C, 600 °C, and 800 °C to produce three hole-to-grain size ratios of approximately 7, 1, and 1/7. Electron backscatter diffraction experiments were used to guide the construction of digital microstructures for finite element simulations of uniaxial tension. Digital image correlation experiments were used to qualitatively validate the numerical simulations. The simulations were performed iteratively to generate statistics describing the distribution of plastic strain at the hole in varying microstructural environments. In both the experiments and simulations, the

  9. Multilevel model reduction for uncertainty quantification in computational structural dynamics

    NASA Astrophysics Data System (ADS)

    Ezvan, O.; Batou, A.; Soize, C.; Gagliardini, L.

    2016-11-01

    Within the continuum mechanics framework, there are two main approaches to model interfaces: classical cohesive zone modeling (CZM) and interface elasticity theory. The classical CZM deals with geometrically non-coherent interfaces for which the constitutive relation is expressed in terms of traction-separation laws. However, CZM lacks any response related to the stretch of the mid-plane of the interface. This issue becomes problematic particularly at small scales with increasing interface area to bulk volume ratios, where interface elasticity is no longer negligible. The interface elasticity theory, in contrast to CZM, deals with coherent interfaces that are endowed with their own energetic structures, and thus is capable of capturing elastic resistance to tangential stretch. Nonetheless, the interface elasticity theory suffers from the lack of inelastic material response, regardless of the strain level. The objective of this contribution therefore is to introduce a generalized mechanical interface model that couples both the elastic response along the interface and the cohesive response across the interface whereby interface degradation is taken into account. The material degradation of the interface mid-plane is captured by a non-local damage model of integral-type. The out-of-plane decohesion is described by a classical cohesive zone model. These models are then coupled through their corresponding damage variables. The non-linear governing equations and the weak forms thereof are derived. The numerical implementation is carried out using the finite element method and consistent tangents are derived. Finally, a series of numerical examples is studied to provide further insight into the problem and to carefully elucidate key features of the proposed theory.

  10. Uncertainty propagation in nuclear forensics.

    PubMed

    Pommé, S; Jerome, S M; Venchiarutti, C

    2014-07-01

    Uncertainty propagation formulae are presented for age dating in support of nuclear forensics. The age of radioactive material in this context refers to the time elapsed since a particular radionuclide was chemically separated from its decay product(s). The decay of the parent radionuclide and ingrowth of the daughter nuclide are governed by statistical decay laws. Mathematical equations allow calculation of the age of specific nuclear material through the atom ratio between parent and daughter nuclides, or through the activity ratio provided that the daughter nuclide is also unstable. The derivation of the uncertainty formulae of the age may present some difficulty to the user community and so the exact solutions, some approximations, a graphical representation and their interpretation are presented in this work. Typical nuclides of interest are actinides in the context of non-proliferation commitments. The uncertainty analysis is applied to a set of important parent-daughter pairs and the need for more precise half-life data is examined.
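
    The age-dating calculation described above can be sketched with a standard parent-daughter ingrowth law and a first-order numerical propagation of the measured uncertainties. The decay constants, the atom ratio, and their uncertainties below are placeholders (not data from the paper), and the formula assumes complete chemical separation at t = 0 and an unstable daughter.

      import numpy as np

      def age_from_ratio(R, lam_p, lam_d):
          """Elapsed time from the daughter/parent atom ratio R = N_d/N_p.
          Ingrowth law: N_d/N_p = lam_p/(lam_d - lam_p) * (1 - exp(-(lam_d - lam_p) t))."""
          return -np.log(1.0 - R * (lam_d - lam_p) / lam_p) / (lam_d - lam_p)

      R, u_R = 2.5e-3, 2.0e-5          # measured atom ratio and its standard uncertainty (placeholder)
      lam_p, u_p = 3.08e-5, 2.0e-8     # parent decay constant (1/yr) and uncertainty (placeholder)
      lam_d, u_d = 9.85e-6, 1.0e-8     # daughter decay constant (1/yr) and uncertainty (placeholder)

      t = age_from_ratio(R, lam_p, lam_d)

      # first-order (GUM-style) propagation via central-difference partial derivatives
      def partial(f, args, i, h):
          a_hi = list(args); a_hi[i] += h
          a_lo = list(args); a_lo[i] -= h
          return (f(*a_hi) - f(*a_lo)) / (2.0 * h)

      args, uncs = (R, lam_p, lam_d), (u_R, u_p, u_d)
      u_t = np.sqrt(sum((partial(age_from_ratio, args, i, 1e-6 * args[i]) * u) ** 2
                        for i, u in enumerate(uncs)))
      print(f"age = {t:.1f} +/- {u_t:.1f} years")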

  11. Blade tip timing (BTT) uncertainties

    NASA Astrophysics Data System (ADS)

    Russhard, Pete

    2016-06-01

    Blade Tip Timing (BTT) is an alternative technique for characterising blade vibration in which non-contact timing probes (e.g. capacitance or optical probes), typically mounted on the engine casing (figure 1), are used to measure the time at which a blade passes each probe. This time is compared with the time at which the blade would have passed the probe if it had been undergoing no vibration. For a number of years the aerospace industry has been sponsoring research into Blade Tip Timing technologies that have been developed as tools to obtain rotor blade tip deflections. These have been successful in demonstrating the potential of the technology, but rarely produced quantitative data, along with a demonstration of a traceable value for measurement uncertainty. BTT technologies have been developed under a cloak of secrecy by the gas turbine OEMs due to the competitive advantage it offered if it could be shown to work. BTT measurements are sensitive to many variables and there is a need to quantify the measurement uncertainty of the complete technology and to define a set of guidelines as to how BTT should be applied to different vehicles. The data shown in figure 2 were developed from a US government sponsored program that brought together four different tip timing systems and a gas turbine engine test. Comparisons showed that they were just capable of obtaining measurements within a +/-25% uncertainty band when compared to strain gauges even when using the same input data sets.

  12. Quantifying Uncertainty in Epidemiological Models

    SciTech Connect

    Ramanathan, Arvind; Jha, Sumit Kumar

    2012-01-01

    Modern epidemiology has made use of a number of mathematical models, including ordinary differential equation (ODE) based models and agent based models (ABMs) to describe the dynamics of how a disease may spread within a population and enable the rational design of strategies for intervention that effectively contain the spread of the disease. Although such predictions are of fundamental importance in preventing the next global pandemic, there is a significant gap in trusting the outcomes/predictions solely based on such models. Hence, there is a need to develop approaches such that mathematical models can be calibrated against historical data. In addition, there is a need to develop rigorous uncertainty quantification approaches that can provide insights into when a model will fail and characterize the confidence in the (possibly multiple) model outcomes/predictions, when such retrospective analysis cannot be performed. In this paper, we outline an approach to develop uncertainty quantification approaches for epidemiological models using formal methods and model checking. By specifying the outcomes expected from a model in a suitable spatio-temporal logic, we use probabilistic model checking methods to quantify the probability with which the epidemiological model satisfies the specification. We argue that statistical model checking methods can solve the uncertainty quantification problem for complex epidemiological models.
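
    A minimal sketch of the statistical-model-checking idea described above: sample uncertain epidemiological parameters, simulate a simple compartmental model, and estimate the probability that a specification holds. The SIR model, parameter ranges, and the property below are illustrative placeholders, not the authors' tool or data.

      import numpy as np

      rng = np.random.default_rng(0)

      def sir_peak_fraction(beta, gamma, days=160, dt=0.2, i0=1e-4):
          # simple forward-Euler SIR integration; returns the peak infected fraction
          s, i = 1.0 - i0, i0
          peak = i
          for _ in range(int(days / dt)):
              ds = -beta * s * i
              di = beta * s * i - gamma * i
              s += ds * dt
              i += di * dt
              peak = max(peak, i)
          return peak

      # specification (informally): "the infected fraction never exceeds 20%"
      n, hold = 2000, 0
      for _ in range(n):
          beta = rng.uniform(0.2, 0.5)      # uncertain transmission rate (assumed range)
          gamma = rng.uniform(0.05, 0.2)    # uncertain recovery rate (assumed range)
          if sir_peak_fraction(beta, gamma) <= 0.20:
              hold += 1

      print(f"estimated probability the specification holds: {hold / n:.3f}")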

  13. Characterizing Epistemic Uncertainty for Launch Vehicle Designs

    NASA Technical Reports Server (NTRS)

    Novack, Steven D.; Rogers, Jim; Hark, Frank; Al Hassan, Mohammad

    2016-01-01

    NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counter intuitive results and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty are rendered obsolete since sensitivity to uncertainty changes are not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis of the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper shows how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.

  14. Characterizing Epistemic Uncertainty for Launch Vehicle Designs

    NASA Technical Reports Server (NTRS)

    Novack, Steven D.; Rogers, Jim; Al Hassan, Mohammad; Hark, Frank

    2016-01-01

    NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk, and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counter intuitive results, and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods, such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty, are rendered obsolete, since sensitivity to uncertainty changes are not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis of the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper describes how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.

  15. Measuring uncertainty by extracting fuzzy rules using rough sets

    NASA Technical Reports Server (NTRS)

    Worm, Jeffrey A.

    1991-01-01

    Despite the advancements in the computer industry in the past 30 years, there is still one major deficiency. Computers are not designed to handle terms where uncertainty is present. To deal with uncertainty, techniques other than classical logic must be developed. The methods of statistical analysis, Dempster-Shafer theory, rough set theory, and fuzzy set theory are examined as ways to solve this problem. The fundamentals of these theories are combined to possibly provide the optimal solution. By incorporating principles from these theories, a decision making process may be simulated by extracting two sets of fuzzy rules: certain rules and possible rules. From these rules, a corresponding measure of how strongly the rules are believed is constructed. From this, the idea of how much a fuzzy diagnosis is definable in terms of a set of fuzzy attributes is studied.

  16. Facility Measurement Uncertainty Analysis at NASA GRC

    NASA Technical Reports Server (NTRS)

    Stephens, Julia; Hubbard, Erin

    2016-01-01

    This presentation provides an overview of the measurement uncertainty analysis currently being implemented in various facilities at NASA GRC. It includes examples pertinent to the turbine engine community (mass flow and fan efficiency calculation uncertainties).

  17. Two new kinds of uncertainty relations

    NASA Technical Reports Server (NTRS)

    Uffink, Jos

    1994-01-01

    We review a statistical-geometrical and a generalized entropic approach to the uncertainty principle. Both approaches provide a strengthening and generalization of the standard Heisenberg uncertainty relations, but in different directions.

  18. Conservation laws, uncertainty relations, and quantum limits of measurements.

    PubMed

    Ozawa, Masanao

    2002-02-04

    The uncertainty relation between the noise operator and the conserved quantity leads to a bound on the accuracy of general measurements. The bound extends the assertion by Wigner, Araki, and Yanase that conservation laws limit the accuracy of "repeatable," or "nondisturbing," measurements to general measurements, and improves the one previously obtained by Yanase for spin measurements. The bound represents an obstacle to making a small quantum computer.

  19. Measuring, Estimating, and Deciding under Uncertainty.

    PubMed

    Michel, Rolf

    2016-03-01

    The problem of uncertainty as a general consequence of incomplete information and the approach to quantify uncertainty in metrology are addressed. Then, this paper discusses some of the controversial aspects of the statistical foundation of the concepts of uncertainty in measurements. The basics of the ISO Guide to the Expression of Uncertainty in Measurement as well as of characteristic limits according to ISO 11929 are described and the need for a revision of the latter standard is explained.

  20. Quantifying Snow Volume Uncertainty from Repeat Terrestrial Laser Scanning Observations

    NASA Astrophysics Data System (ADS)

    Gadomski, P. J.; Hartzell, P. J.; Finnegan, D. C.; Glennie, C. L.; Deems, J. S.

    2014-12-01

    Terrestrial laser scanning (TLS) systems are capable of providing rapid, high density, 3D topographic measurements of snow surfaces from increasing standoff distances. By differencing snow-covered and snow-free surface measurements within a common scene, snow depths and volumes can be estimated. These data can support operational water management decision-making when combined with measured or modeled snow densities to estimate basin water content, evaluate in-situ data, or drive operational hydrologic models. In addition, change maps from differential TLS scans can be used to support avalanche control operations by quantifying loading patterns for both pre-control planning and post-control assessment. However, while methods for computing volume from TLS point cloud data are well documented, a rigorous quantification of the volumetric uncertainty has yet to be presented. Using repeat TLS data collected at the Arapahoe Basin Ski Area in Summit County, Colorado, we demonstrate the propagation of TLS point measurement and cloud registration uncertainties into 3D covariance matrices at the point level. The point covariances are then propagated through a volume computation to arrive at a single volume uncertainty value. Results from two volume computation methods are compared and the influence of data voids produced by occlusions is examined.
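
    A much-simplified sketch of propagating elevation-change uncertainty into a volume uncertainty follows. It assumes a gridded difference surface with independent per-cell errors; the study itself propagates full 3D point and registration covariances, which this toy example does not attempt. All values are invented.

      import numpy as np

      rng = np.random.default_rng(1)
      cell_area = 0.25                         # m^2 per grid cell (assumed)
      dz = rng.uniform(0.0, 1.5, size=10_000)  # snow depth change per cell (m), synthetic
      sigma_dz = np.full_like(dz, 0.02)        # per-cell standard uncertainty (m), assumed

      volume = cell_area * dz.sum()
      sigma_volume = cell_area * np.sqrt((sigma_dz ** 2).sum())   # independent-error case

      print(f"volume = {volume:.1f} m^3 +/- {sigma_volume:.2f} m^3 (1-sigma)")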

  1. Probabilistic evaluation of uncertainties and risks in aerospace components

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Shiao, M. C.; Nagpal, V. K.; Chamis, C. C.

    1992-01-01

    This paper summarizes a methodology developed at NASA Lewis Research Center which computationally simulates the structural, material, and load uncertainties associated with Space Shuttle Main Engine (SSME) components. The methodology was applied to evaluate the scatter in static, buckling, dynamic, fatigue, and damage behavior of the SSME turbo pump blade. Also calculated are the probability densities of typical critical blade responses, such as effective stress, natural frequency, damage initiation, most probable damage path, etc. Risk assessments were performed for different failure modes, and the effect of material degradation on the fatigue and damage behaviors of a blade was calculated using a multi-factor interaction equation. Failure probabilities for different fatigue cycles were computed and the uncertainties associated with damage initiation and damage propagation due to different load cycles were quantified. Evaluations of the effects of mistuned blades on a rotor were made; uncertainties in the excitation frequency were found to significantly amplify the blade responses of a mistuned rotor. The effects of the number of blades on a rotor were studied. The autocorrelation function of displacements and the probability density function of the first passage time for deterministic and random barriers for structures subjected to random processes also were computed. A brief discussion was included on the future direction of probabilistic structural analysis.

  2. Regarding Uncertainty in Teachers and Teaching

    ERIC Educational Resources Information Center

    Helsing, Deborah

    2007-01-01

    The literature on teacher uncertainty suggests that it is a significant and perhaps inherent feature of teaching. Yet there is disagreement about the effects of these uncertainties on teachers as well as about the ways that teachers should regard them. Recognition of uncertainties can be viewed alternatively as a liability or an asset to effective…

  3. Risk Analysis and Uncertainty: Implications for Counselling

    ERIC Educational Resources Information Center

    Hassenzahl, David

    2004-01-01

    Over the past two decades, the risk analysis community has made substantial advances in understanding and describing uncertainty. Uncertainty is ubiquitous, complex, both quantitative and qualitative in nature, and often irreducible. Uncertainty thus creates a challenge when using risk analysis to evaluate the rationality of group and individual…

  4. Identifying the Rhetoric of Uncertainty Reduction.

    ERIC Educational Resources Information Center

    Williams, David E.

    Offering a rhetorical perspective of uncertainty reduction, this paper (1) discusses uncertainty reduction theory and dramatism; (2) identifies rhetorical strategies inherent in C. W. Berger and R. J. Calabrese's theory; (3) extends predicted outcome value to influenced outcome value; and (4) argues that the goal of uncertainty reduction and…

  5. Quantum mechanics and the generalized uncertainty principle

    SciTech Connect

    Bang, Jang Young; Berger, Micheal S.

    2006-12-15

    The generalized uncertainty principle has been described as a general consequence of incorporating a minimal length from a theory of quantum gravity. We consider a simple quantum mechanical model where the operator corresponding to position has discrete eigenvalues and show how the generalized uncertainty principle results for minimum uncertainty wave packets.
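
    A commonly quoted form of the generalized uncertainty principle with a minimal length (our rendering, not taken verbatim from the record) is

      \[
        \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^2\right),
      \]

    where the parameter beta sets the minimal length scale; the minimum-uncertainty wave packets discussed above are those that saturate a relation of this type.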

  6. Uncertainty quantification in capacitive RF MEMS switches

    NASA Astrophysics Data System (ADS)

    Pax, Benjamin J.

    propagation of uncertainty are performed using this surrogate model. The first step in the analysis is Bayesian calibration of the creep related parameters. A computational model of the frog-leg varactor is created, and the computed creep deflection of the device over 800 hours is used to generate a surrogate model using a polynomial chaos expansion in Hermite polynomials. Parameters related to the creep phenomenon are calibrated using Bayesian calibration with experimental deflection data from the frog-leg device. The calibrated input distributions are subsequently propagated through a surrogate gPC model for the PRISM MEMS switch to produce probability density functions of the maximum membrane deflection of the membrane over several thousand hours. The assumptions related to the Bayesian calibration and forward propagation are analyzed to determine the sensitivity to these assumptions of the calibrated input distributions and propagated output distributions of the PRISM device. The work is an early step in understanding the role of geometric variability, model uncertainty, numerical errors and experimental uncertainties in the long-term performance of RF-MEMS.

  7. Extension of sensitivity and uncertainty analysis for long term dose assessment of high level nuclear waste disposal sites to uncertainties in the human behaviour.

    PubMed

    Albrecht, Achim; Miquel, Stéphan

    2010-01-01

    Biosphere dose conversion factors are computed for the French high-level geological waste disposal concept to illustrate the combined probabilistic and deterministic approach. Both (135)Cs and (79)Se are used as examples. Probabilistic analyses of the system considering all parameters, as well as physical and societal parameters independently, allow quantification of their mutual impact on overall uncertainty. As physical parameter uncertainties decrease, for example with the availability of further experimental and field data, the societal uncertainties, which are less easily constrained, particularly for the long term, become more and more significant. One also has to distinguish uncertainties impacting the low dose portion of a distribution from those impacting the high dose range, the latter having logically a greater impact in an assessment situation. The use of cumulative probability curves allows us to quantify probability variations as a function of the dose estimate, with the ratio of the probability variation (slope of the curve) indicative of uncertainties of different radionuclides. In the case of (135)Cs with better constrained physical parameters, the uncertainty in human behaviour is more significant, even in the high dose range, where it increases the probability of higher doses. For both radionuclides, uncertainties impact more strongly in the intermediate than in the high dose range. In an assessment context, the focus will be on probabilities of higher dose values. The probabilistic approach can furthermore be used to construct critical groups based on a predefined probability level and to ensure that critical groups cover the expected range of uncertainty.

  8. Simple uncertainty propagation for early design phase aircraft sizing

    NASA Astrophysics Data System (ADS)

    Lenz, Annelise

    Many designers and systems analysts are aware of the uncertainty inherent in their aircraft sizing studies; however, few incorporate methods to address and quantify this uncertainty. Many aircraft design studies use semi-empirical predictors based on a historical database and contain uncertainty -- a portion of which can be measured and quantified. In cases where historical information is not available, surrogate models built from higher-fidelity analyses often provide predictors for design studies where the computational cost of directly using the high-fidelity analyses is prohibitive. These surrogate models contain uncertainty, some of which is quantifiable. However, rather than quantifying this uncertainty, many designers merely include a safety factor or design margin in the constraints to account for the variability between the predicted and actual results. This can become problematic if a designer does not estimate the amount of variability correctly, which then can result in either an "over-designed" or "under-designed" aircraft. "Under-designed" and some "over-designed" aircraft will likely require design changes late in the process and will ultimately require more time and money to create; other "over-designed" aircraft concepts may not require design changes, but could end up being more costly than necessary. Including and propagating uncertainty early in the design phase so designers can quantify some of the errors in the predictors could help mitigate the extent of this additional cost. The method proposed here seeks to provide a systematic approach for characterizing a portion of the uncertainties that designers are aware of and propagating it throughout the design process in a procedure that is easy to understand and implement. Using Monte Carlo simulations that sample from quantified distributions will allow a systems analyst to use a carpet plot-like approach to make statements like: "The aircraft is 'P'% likely to weigh 'X' lbs or less, given the
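
    A toy version of the carpet-plot style statement quoted above ("the aircraft is 'P'% likely to weigh 'X' lbs or less"): sample the quantified input distributions and read percentiles off the resulting weight distribution. The sizing relation and distributions are invented placeholders, not the author's method.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 100_000

      empty_weight_frac = rng.normal(0.55, 0.02, n)    # uncertain empirical predictor (assumed)
      payload = rng.triangular(3500, 4000, 4500, n)    # lbs (assumed)
      fuel = rng.normal(6000, 300, n)                  # lbs (assumed)

      gross_weight = (payload + fuel) / (1.0 - empty_weight_frac)

      for p in (50, 90, 95):
          print(f"{p}% likely to weigh {np.percentile(gross_weight, p):,.0f} lbs or less")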

  9. Systematic and random uncertainties of HOAPS-3.2 evaporation

    NASA Astrophysics Data System (ADS)

    Kinzel, Julian; Fennig, Karsten; Schröder, Marc; Andersson, Axel; Bumke, Karl; Dietzsch, Felix

    2015-04-01

    The German Research Foundation (DFG) funds the research programme 'FOR1740 - Atlantic freshwater cycle', which aims at analysing and better understanding the freshwater budget of the Atlantic Ocean and the role of freshwater fluxes (evaporation minus precipitation) in the context of oceanic surface salinity variability. It is well-known that these freshwater fluxes play an essential role in the global hydrological cycle and thus act as a key boundary condition for coupled ocean-atmosphere general circulation models. However, it remains unclear as to how uncertain evaporation (E) and precipitation (P) are. Once quantified, freshwater flux fields and their underlying total uncertainty (systematic plus random) may be assimilated into ocean models to compute ocean transports and run-off estimates, which in turn serve as a stringent test on the quality of the input data. The Hamburg Ocean Atmosphere Parameters and Fluxes from Satellite Data (HOAPS) (Andersson et al. (2010), Fennig et al. (2012)) is an entirely satellite-based climatology, based on microwave radiometers, overcoming the lack of oceanic in-situ records. Its most current version, HOAPS-3.2, comprises 21 years (1987-2008) of pixel-level resolution data of numerous geophysical parameters over the global ice-free oceans. Amongst others, these include wind speed (u), near-surface specific humidity (q), and sea surface temperature (SST). Their uncertainties essentially contribute to the uncertainty in latent heat flux (LHF) and consequently to that of evaporation (E). Here, we will present HOAPS-3.2 pixel-level total uncertainty estimates of evaporation, based on a full error propagation of uncertainties in u, q, and SST. Both systematic and random uncertainty components are derived on the basis of collocated match-ups of satellite pixels, selected buoys, and ship records. The in-situ data is restricted to 1995 until 2008 and is provided by the Seewetteramt Hamburg as well as ICOADS Version 2.5 (Woodruff et al

  10. Calculating Measurement Uncertainties for Mass Spectrometry Data

    NASA Astrophysics Data System (ADS)

    Essex, R. M.; Goldberg, S. A.

    2006-12-01

    A complete and transparent characterization of measurement uncertainty is fundamentally important to the interpretation of analytical results. We have observed that the calculation and reporting of uncertainty estimates for isotopic measurement from a variety of analytical facilities are inconsistent, making it difficult to compare and evaluate data. Therefore, we recommend an approach to uncertainty estimation that has been adopted by both US national metrology facilities and is becoming widely accepted within the analytical community. This approach is outlined in the ISO "Guide to the Expression of Uncertainty in Measurement" (GUM). The GUM approach to uncertainty estimation includes four major steps: 1) Specify the measurand; 2) Identify uncertainty sources; 3) Quantify components by determining the standard uncertainty (u) for each component; and 4) Calculate combined standard uncertainty (u_c) by using established propagation laws to combine the various components. To obtain a desired confidence level, the combined standard uncertainty is multiplied by a coverage factor (k) to yield an expanded uncertainty (U). To be consistent with the GUM principles, it is also necessary to create an uncertainty budget, which is a listing of all the components comprising the uncertainty and their relative contribution to the combined standard uncertainty. In mass spectrometry, Step 1 is normally the determination of an isotopic ratio for a particular element. Step 2 requires the identification of the many potential sources of measurement variability and bias including: gain, baseline, cup efficiency, Schottky noise, counting statistics, CRM uncertainties, yield calibrations, linearity calibrations, run conditions, and filament geometry. Then an equation expressing the relationship of all of the components to the measurement value must be written. To complete Step 3, these potential sources of uncertainty must be characterized (Type A or Type B) and quantified. This information
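
    Steps 3 and 4 above can be illustrated with a short sketch: combine component standard uncertainties via the law of propagation, expand with a coverage factor, and report the budget. The component names, values, and unit sensitivity coefficients are placeholders, not an actual isotope-ratio uncertainty budget.

      import numpy as np

      components = {                     # name: (standard uncertainty u_i, sensitivity c_i)
          "counting statistics": (4.0e-5, 1.0),
          "baseline":            (1.5e-5, 1.0),
          "gain calibration":    (2.0e-5, 1.0),
          "CRM reference value": (3.0e-5, 1.0),
      }

      u_c = np.sqrt(sum((c * u) ** 2 for u, c in components.values()))
      k = 2.0                            # coverage factor for roughly 95% confidence
      U = k * u_c

      print("uncertainty budget (relative contribution to u_c^2):")
      for name, (u, c) in components.items():
          print(f"  {name:22s} {(c * u) ** 2 / u_c ** 2:5.1%}")
      print(f"combined standard uncertainty u_c = {u_c:.2e}")
      print(f"expanded uncertainty U (k={k}) = {U:.2e}")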

  11. Perspectives on optimization under uncertainty: Algorithms and applications.

    SciTech Connect

    Swiler, Laura Painton; Wojtkiewicz, Steven F., Jr.; Eldred, Michael Scott; Giunta, Anthony Andrew; Trucano, Timothy Guy

    2005-12-01

    This paper provides an overview of several approaches to formulating and solving optimization under uncertainty (OUU) engineering design problems. In addition, the topic of high-performance computing and OUU is addressed, with a discussion of the coarse- and fine-grained parallel computing opportunities in the various OUU problem formulations. The OUU approaches covered here are: sampling-based OUU, surrogate model-based OUU, analytic reliability-based OUU (also known as reliability-based design optimization), polynomial chaos-based OUU, and stochastic perturbation-based OUU.

  12. Confronting uncertainty in flood damage predictions

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno

    2015-04-01

    Reliable flood damage models are a prerequisite for the practical usefulness of the model results. Oftentimes, traditional uni-variate damage models, such as depth-damage curves, fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks. For model evaluation we use empirical damage data which are available from computer-aided telephone interviews compiled after the floods of 2002, 2005 and 2006 in the Elbe and Danube catchments in Germany. We carry out a split sample test by sub-setting the damage records. One sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of the individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error) as well as in terms of reliability, which is represented by the proportion of observations that fall within the 5%- and 95%-quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty, which is crucial for assessing the reliability of model predictions and improves the usefulness of model results.
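
    A minimal sketch of the split-sample evaluation described above, covering only the bagged-tree half of the comparison and using synthetic "damage" data in place of the survey records. Reliability is measured, as in the study, by the coverage of the 5%-95% predictive interval; the data-generating relation below is invented.

      import numpy as np
      from sklearn.ensemble import BaggingRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      n = 2000
      X = rng.uniform(0, 1, size=(n, 4))                              # e.g. water depth, duration, ...
      y = 0.6 * X[:, 0] + 0.2 * X[:, 1] ** 2 + rng.normal(0, 0.1, n)  # synthetic relative damage

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      model = BaggingRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

      # per-tree predictions give an approximate predictive distribution
      preds = np.stack([est.predict(X_te) for est in model.estimators_])
      lo, hi = np.percentile(preds, [5, 95], axis=0)
      mean_pred = preds.mean(axis=0)

      print("mean bias:", np.mean(mean_pred - y_te))
      print("mean absolute error:", np.mean(np.abs(mean_pred - y_te)))
      print("coverage of 5-95% interval:", np.mean((y_te >= lo) & (y_te <= hi)))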

  13. Failure probability under parameter uncertainty.

    PubMed

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decisionmaker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications.

  14. Measurement uncertainty evaluation of conicity error inspected on CMM

    NASA Astrophysics Data System (ADS)

    Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang

    2016-01-01

    The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence its assembly accuracy and working performance. According to the new generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are firstly generated by using quasi-random sequences and two kinds of affinities are calculated. Then, each antibody clone is generated and they are self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the Expression of Uncertainty in Measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. The cone parts were machined on a CK6140 lathe and measured on a Miracle NC 454 Coordinate Measuring Machine (CMM). The experiment results confirm that the proposed method not only can search for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also can evaluate measurement uncertainty and give control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% less than those computed by the NC454 CMM software and the evaluation accuracy improves significantly.

  15. Medical Need, Equality, and Uncertainty.

    PubMed

    Horne, L Chad

    2016-10-01

    Many hold that distributing healthcare according to medical need is a requirement of equality. Most egalitarians believe, however, that people ought to be equal on the whole, by some overall measure of well-being or life-prospects; it would be a massive coincidence if distributing healthcare according to medical need turned out to be an effective way of promoting equality overall. I argue that distributing healthcare according to medical need is important for reducing individuals' uncertainty surrounding their future medical needs. In other words, distributing healthcare according to medical need is a natural feature of healthcare insurance; it is about indemnity, not equality.

  16. Uncertainty Relation and Inseparability Criterion

    NASA Astrophysics Data System (ADS)

    Goswami, Ashutosh K.; Panigrahi, Prasanta K.

    2017-02-01

    We investigate the Peres-Horodecki positive partial transpose criterion in the context of conserved quantities and derive a condition of inseparability for a composite bipartite system depending only on the dimensions of its subsystems, which leads to a bi-linear entanglement witness for the two qubit system. A separability inequality using the generalized Schrödinger-Robertson uncertainty relation with suitable operators has been derived, which proves to be stronger than the bi-linear entanglement witness operator. In the case of mixed density matrices, it identically distinguishes the separable and non-separable Werner states.
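
    For reference, the Schrödinger-Robertson uncertainty relation underlying the separability inequality reads, in standard notation,

      \[
        \sigma_A^2\,\sigma_B^2 \;\ge\; \left|\tfrac{1}{2}\langle\{A,B\}\rangle - \langle A\rangle\langle B\rangle\right|^2 + \left|\tfrac{1}{2i}\langle[A,B]\rangle\right|^2 .
      \]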

  17. Quantifying Global Uncertainties in a Simple Microwave Rainfall Algorithm

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Berg, Wesley; Thomas-Stahle, Jody; Masunaga, Hirohiko

    2006-01-01

    While a large number of methods exist in the literature for retrieving rainfall from passive microwave brightness temperatures, little has been written about the quantitative assessment of the expected uncertainties in these rainfall products at various time and space scales. The latter is the result of two factors: sparse validation sites over most of the world's oceans, and algorithm sensitivities to rainfall regimes that cause inconsistencies against validation data collected at different locations. To make progress in this area, a simple probabilistic algorithm is developed. The algorithm uses an a priori database constructed from the Tropical Rainfall Measuring Mission (TRMM) radar data coupled with radiative transfer computations. Unlike efforts designed to improve rainfall products, this algorithm takes a step backward in order to focus on uncertainties. In addition to inversion uncertainties, the construction of the algorithm allows errors resulting from incorrect databases, incomplete databases, and time- and space-varying databases to be examined. These are quantified. Results show that the simple algorithm reduces errors introduced by imperfect knowledge of precipitation radar (PR) rain by a factor of 4 relative to an algorithm that is tuned to the PR rainfall. Database completeness does not introduce any additional uncertainty at the global scale, while climatologically distinct space/time domains add approximately 25% uncertainty that cannot be detected by a radiometer alone. Of this value, 20% is attributed to changes in cloud morphology and microphysics, while 5% is a result of changes in the rain/no-rain thresholds. All but 2%-3% of this variability can be accounted for by considering the implicit assumptions in the algorithm. Additional uncertainties introduced by the details of the algorithm formulation are not quantified in this study because of the need for independent measurements that are beyond the scope of this paper. A validation strategy

  18. Performance of Trajectory Models with Wind Uncertainty

    NASA Technical Reports Server (NTRS)

    Lee, Alan G.; Weygandt, Stephen S.; Schwartz, Barry; Murphy, James R.

    2009-01-01

    Typical aircraft trajectory predictors use wind forecasts but do not account for the forecast uncertainty. A method for generating estimates of wind prediction uncertainty is described and its effect on aircraft trajectory prediction uncertainty is investigated. The procedure for estimating the wind prediction uncertainty relies on a time-lagged ensemble of weather model forecasts from the hourly updated Rapid Update Cycle (RUC) weather prediction system. Forecast uncertainty is estimated using measures of the spread amongst various RUC time-lagged ensemble forecasts. This proof of concept study illustrates the estimated uncertainty and the actual wind errors, and documents the validity of the assumed ensemble-forecast accuracy relationship. Aircraft trajectory predictions are made using RUC winds with provision for the estimated uncertainty. Results for a set of simulated flights indicate this simple approach effectively translates the wind uncertainty estimate into an aircraft trajectory uncertainty. A key strength of the method is the ability to relate uncertainty to specific weather phenomena (contained in the various ensemble members) allowing identification of regional variations in uncertainty.
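
    The time-lagged ensemble idea can be illustrated in a few lines: treat successive forecasts valid at the same time as an ensemble and use their spread as the wind-uncertainty estimate. The forecast values below are synthetic stand-ins, not RUC output.

      import numpy as np

      rng = np.random.default_rng(4)
      true_wind = 18.0                                      # m/s at some grid point (assumed)
      # forecasts issued 1, 2, ..., 6 hours ago, all valid "now" (synthetic)
      lagged_forecasts = true_wind + rng.normal(0, 2.0, size=6)

      best_estimate = lagged_forecasts.mean()
      uncertainty_estimate = lagged_forecasts.std(ddof=1)   # ensemble spread

      print(f"wind estimate: {best_estimate:.1f} +/- {uncertainty_estimate:.1f} m/s")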

  19. Differentiate climate change uncertainty from other uncertainty sources for water quality modeling with Bayesian framework

    NASA Astrophysics Data System (ADS)

    Jiang, S.; Liu, M.; Rode, M.

    2011-12-01

    Prediction of water quality under future climate changes is always associated with significant uncertainty resulting from the use of climate models and stochastic weather generators. The future-related uncertainty is usually mixed with intrinsic uncertainty sources arising from model structure and parameterization, which are also present when modeling past and current events. For an effective water quality management policy, the uncertainty sources have to be differentiated and quantified separately. This work applies the Bayesian framework in two steps to quantify the climate change uncertainty as input uncertainty and the parameter uncertainty respectively. The HYPE model (Hydrological Prediction for the Environment) from SMHI is applied to simulate the nutrient (N, P) sources in the Weida catchment, a 100 km2 agricultural lowland catchment in Germany. The results show that climate change shifts the uncertainty space in terms of probability density function (PDF), and a large portion of future uncertainty is not covered by current uncertainty.

  20. Aspects of universally valid Heisenberg uncertainty relation

    NASA Astrophysics Data System (ADS)

    Fujikawa, Kazuo; Umetsu, Koichiro

    2013-01-01

    A numerical illustration of a universally valid Heisenberg uncertainty relation, which was proposed recently, is presented by using the experimental data on spin-measurements by J. Erhart et al. [Nat. Phys. 8, 185 (2012)]. This uncertainty relation is closely related to a modified form of the Arthurs-Kelly uncertainty relation, which is also tested by the spin-measurements. The universally valid Heisenberg uncertainty relation always holds, but both the modified Arthurs-Kelly uncertainty relation and the Heisenberg error-disturbance relation proposed by Ozawa, which was analyzed in the original experiment, fail in the present context of spin-measurements, and the cause of their failure is identified with the assumptions of unbiased measurement and disturbance. It is also shown that all the universally valid uncertainty relations are derived from Robertson's relation and thus the essence of the uncertainty relation is exhausted by Robertson's relation, as is widely accepted.
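
    The Robertson relation referred to above, from which the universally valid relations are said to follow, is (in standard notation)

      \[
        \sigma(A)\,\sigma(B) \;\ge\; \tfrac{1}{2}\,\bigl|\langle[A,B]\rangle\bigr| .
      \]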

  1. Path planning under spatial uncertainty.

    PubMed

    Wiener, Jan M; Lafon, Matthieu; Berthoz, Alain

    2008-04-01

    In this article, we present experiments studying path planning under spatial uncertainties. In the main experiment, the participants' task was to navigate the shortest possible path to find an object hidden in one of four places and to bring it to the final destination. The probability of finding the object (probability matrix) was different for each of the four places and varied between conditions. Given such uncertainties about the object's location, planning a single path is not sufficient. Participants had to generate multiple consecutive plans (metaplans)--for example: If the object is found in A, proceed to the destination; if the object is not found, proceed to B; and so on. The optimal solution depends on the specific probability matrix. In each condition, participants learned a different probability matrix and were then asked to report the optimal metaplan. Results demonstrate effective integration of the probabilistic information about the object's location during planning. We present a hierarchical planning scheme that could account for participants' behavior, as well as for systematic errors and differences between conditions.

  2. Parameter estimation uncertainty: Comparing apples and apples?

    NASA Astrophysics Data System (ADS)

    Hart, D.; Yoon, H.; McKenna, S. A.

    2012-12-01

    Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests
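
    A bare-bones sketch of the null-space Monte Carlo idea described above: split the parameter space into solution and null spaces with an SVD of the Jacobian at the calibrated parameters, then add randomly generated parameter differences projected onto the null space so the calibrated fit is approximately preserved. The Jacobian, calibrated parameters, prior, and dimensions are placeholders, not the Culebra model.

      import numpy as np

      rng = np.random.default_rng(5)
      n_obs, n_par, k = 30, 100, 12          # k = dimension retained as the "solution space"

      J = rng.normal(size=(n_obs, n_par))    # sensitivity (Jacobian) at the calibrated point (placeholder)
      p_cal = rng.normal(size=n_par)         # single calibrated parameter set (placeholder)

      _, _, Vt = np.linalg.svd(J, full_matrices=True)
      V_null = Vt[k:].T                      # columns spanning the (truncated) null space

      ensemble = []
      for _ in range(100):
          p_rand = rng.normal(scale=1.5, size=n_par)        # draw from the prior (assumed)
          d_null = V_null @ (V_null.T @ (p_rand - p_cal))   # project the difference onto the null space
          ensemble.append(p_cal + d_null)                   # calibration-constrained parameter set

      ensemble = np.array(ensemble)
      print("ensemble shape:", ensemble.shape)
      # In practice each member would be re-checked (or briefly re-calibrated)
      # before being used for the follow-on transport prediction.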

  3. Sustainable design of complex industrial and energy systems under uncertainty

    NASA Astrophysics Data System (ADS)

    Liu, Zheng

    Depletion of natural resources, environmental pressure, economic globalization, etc., place serious demands on industrial organizations to ensure that their manufacturing is sustainable. On the other hand, the effort of pursuing sustainability also gives rise to potential opportunities for improvements and collaborations among various types of industries. Owing to inherent complexity and uncertainty, however, sustainability problems of industrial and energy systems are always very difficult to deal with, which has made industrial practice mostly experience based. For existing research efforts on the study of industrial sustainability, although systems approaches have been applied in dealing with the challenge of system complexity, most of them still lack the ability to handle inherent uncertainty. To overcome this limit, there is a research need to develop a new generation of systems approaches by integrating techniques and methods for handling various types of uncertainties. To achieve this objective, this research introduced a series of holistic methodologies for sustainable design and decision-making of industrial and energy systems. The introduced methodologies are developed from a systems point of view around the functional components of modeling, assessment, analysis, and decision-making. For different methodologies, the interval-parameter-based, fuzzy-logic-based, and Monte Carlo based methods are selected and applied respectively for handling various types of uncertainties involved, and the optimality of solutions is guaranteed by thorough search or system optimization. The proposed methods are generally applicable to any type of industrial system, and their efficacy has been successfully demonstrated by the given case studies. Beyond that, a computational tool was designed, which provides functions for industrial sustainability assessment and decision-making through several convenient and interactive steps of computer operation. This

  4. Uncertainty Quantification Bayesian Framework for Porous Media Flows

    NASA Astrophysics Data System (ADS)

    Demyanov, V.; Christie, M.; Erbas, D.

    2005-12-01

    Uncertainty quantification is an increasingly important aspect of many areas of applied science, where the challenge is to make reliable predictions about the performance of complex physical systems in the absence of complete or reliable data. Predicting flows of fluids through subsurface reservoirs is an example of a complex system where accuracy in prediction is needed (e.g. in the oil industry it is essential for financial reasons). Simulation of fluid flow in oil reservoirs is usually carried out using large commercially written finite difference simulators solving conservation equations describing the multi-phase flow through the porous reservoir rocks, which is a highly computationally expensive task. This work examines a Bayesian Framework for uncertainty quantification in porous media flows that uses a stochastic sampling algorithm to generate models that match observed time series data. The framework is flexible for a wide range of general physical/statistical parametric models, which are used to describe the underlying hydro-geological process in its temporal dynamics. The approach is based on exploration of the parameter space and update of the prior beliefs about what the most likely model definitions are. Optimization problems for a highly parametric physical model usually have multiple solutions, which affect the uncertainty of the resulting predictions. A stochastic search algorithm (e.g. a genetic algorithm) allows multiple "good enough" models to be identified in the parameter space. Furthermore, inference on the generated model ensemble via an MCMC-based algorithm evaluates the posterior probability of the generated models and quantifies uncertainty of the predictions. Machine learning algorithms - artificial neural networks (ANNs) - are used to speed up the identification of regions in parameter space where good matches to observed data can be found. The adaptive nature of ANNs allows different ways of integrating them into the Bayesian framework: as direct time

  5. When Does the Uncertainty Become Non-Gaussian

    NASA Astrophysics Data System (ADS)

    Alfriend, K.; Park, I.

    2016-09-01

    The orbit state covariance is used in the conjunction assessment/probability of collision calculation. It can also be a valuable tool in track association, maneuver detection and sensor tasking. These uses all assume that the uncertainty is Gaussian. Studies have shown that the uncertainty at epoch (time of last observation) is reasonably Gaussian, but the neglected nonlinearities in the covariance propagation eventually result in the uncertainty becoming non-Gaussian. Numerical studies have shown that for space objects in low Earth orbit the covariance remains Gaussian the longest in orbital element space. It has been shown that the covariance remains Gaussian for up to 10 days in orbital element space, but becomes non-Gaussian after 2-3 days in Cartesian coordinates for a typical LEO orbit. The fundamental question is when the uncertainty becomes non-Gaussian and how one can, given the orbit state and covariance at epoch, determine when this occurs. A tool that an operator could use to compute the approximate time at which the uncertainty becomes non-Gaussian would be useful. This paper addresses the development of such a tool.

  6. Isokinetic TWC Evaporator Probe: Calculations and Systemic Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Davison, Craig R.; Strapp, J. Walter; Lilie, Lyle; Ratvasky, Thomas P.; Dumont, Christopher

    2016-01-01

    A new Isokinetic Total Water Content Evaporator (IKP2) was downsized from a prototype instrument, specifically to make airborne measurements of hydrometeor total water content (TWC) in deep tropical convective clouds to assess the new ice crystal Appendix D icing envelope. The probe underwent numerous laboratory and wind tunnel investigations to ensure reliable operation under the difficult high altitude/speed/TWC conditions under which other TWC instruments have been known to either fail or have unknown performance characteristics; the results are presented in a companion paper. This paper presents the equations used to determine the total water content (TWC) of the sampled atmosphere from the values measured by the IKP2 or necessary ancillary data from other instruments. The uncertainty in the final TWC is determined by propagating the uncertainty in the measured values through the calculations to the final result. Two techniques were used and the results compared. The first is a typical analytical method of propagating uncertainty and the second performs a Monte Carlo simulation. The results are very similar with differences that are insignificant for practical purposes. The uncertainty is between 2 percent and 3 percent at most practical operating conditions. The capture efficiency of the IKP2 was also examined based on a computational fluid dynamic simulation of the original IKP and scaled down to the IKP2. Particles above 24 microns were found to have a capture efficiency greater than 99 percent at all operating conditions.
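
    The two propagation techniques compared above can be illustrated generically: first-order analytic propagation versus Monte Carlo simulation, applied here to a placeholder measurement equation rather than the actual IKP2 TWC equations. The equation, nominal values, and uncertainties are invented.

      import numpy as np

      rng = np.random.default_rng(6)

      def twc(m_dot, rho_air, flow):            # placeholder: TWC ~ mass rate / (density * volume flow)
          return m_dot / (rho_air * flow)

      # nominal values and standard uncertainties (illustrative only, units arbitrary)
      x0 = np.array([2.0e-3, 0.41, 1.2e-3])
      u = np.array([1.0e-5, 0.004, 1.0e-5])

      # analytic (first-order) propagation with central-difference partials
      g = np.array([(twc(*(x0 + h)) - twc(*(x0 - h))) / (2 * h[i])
                    for i, h in enumerate(np.diag(1e-6 * x0))])
      u_analytic = np.sqrt(np.sum((g * u) ** 2))

      # Monte Carlo propagation
      samples = rng.normal(x0, u, size=(200_000, 3))
      u_mc = twc(samples[:, 0], samples[:, 1], samples[:, 2]).std()

      print(f"analytic: {u_analytic:.3e}   Monte Carlo: {u_mc:.3e}")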

  7. Isokinetic TWC Evaporator Probe: Calculations and Systemic Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Davison, Craig R.; Strapp, John W.; Lilie, Lyle E.; Ratvasky, Thomas P.; Dumont, Christopher

    2016-01-01

    A new Isokinetic Total Water Content Evaporator (IKP2) was downsized from a prototype instrument, specifically to make airborne measurements of hydrometeor total water content (TWC) in deep tropical convective clouds to assess the new ice crystal Appendix D icing envelope. The probe underwent numerous laboratory and wind tunnel investigations to ensure reliable operation under the difficult high altitude/speed/TWC conditions under which other TWC instruments have been known to either fail or have unknown performance characteristics; the results are presented in a companion paper (Ref. 1). This paper presents the equations used to determine the total water content (TWC) of the sampled atmosphere from the values measured by the IKP2 or necessary ancillary data from other instruments. The uncertainty in the final TWC is determined by propagating the uncertainty in the measured values through the calculations to the final result. Two techniques were used and the results compared. The first is a typical analytical method of propagating uncertainty and the second performs a Monte Carlo simulation. The results are very similar with differences that are insignificant for practical purposes. The uncertainty is between 2 and 3 percent at most practical operating conditions. The capture efficiency of the IKP2 was also examined based on a computational fluid dynamic simulation of the original IKP and scaled down to the IKP2. Particles above 24 micrometers were found to have a capture efficiency greater than 99 percent at all operating conditions.

  8. Uncertainty quantification in virtual surgery predictions for single ventricle palliation

    NASA Astrophysics Data System (ADS)

    Schiavazzi, Daniele; Marsden, Alison

    2014-11-01

    Hemodynamic results from numerical simulations of physiology in patients are invariably presented as deterministic quantities without assessment of associated confidence. Recent advances in cardiovascular simulation and Uncertainty Analysis can be leveraged to challenge this paradigm and to quantify the variability of output quantities of interest, of paramount importance to complement clinical decision making. Physiological variability and errors are responsible for the uncertainty typically associated with measurements in the clinic; starting from a characterization of these quantities in probability, we present applications in the context of estimating the distributions of lumped parameters in 0D models of single-ventricle circulation. We also present results in virtual Fontan palliation surgery, where the variability of both local and systemic hemodynamic indicators is inferred from the uncertainty in pre-operative clinical measurements. Efficient numerical algorithms are required to mitigate the computational cost of propagating the uncertainty through multiscale coupled 0D-3D models of pulsatile flow at the cavopulmonary connection. This work constitutes a first step towards systematic application of robust numerical simulations to virtual surgery predictions.

  9. Uncertainty quantification for large-scale ocean circulation predictions.

    SciTech Connect

    Safta, Cosmin; Debusschere, Bert J.; Najm, Habib N.; Sargsyan, Khachik

    2010-09-01

    Uncertainty quantification in climate models is challenged by the sparsity of the available climate data due to the high computational cost of the model runs. Another feature that prevents classical uncertainty analyses from being easily applicable is the bifurcative behavior in the climate data with respect to certain parameters. A typical example is the Meridional Overturning Circulation in the Atlantic Ocean. The maximum overturning stream function exhibits a discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We develop a methodology that performs uncertainty quantification in the presence of limited data that have a discontinuous character. Our approach is two-fold. First, we detect the discontinuity location with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve location in the presence of arbitrarily distributed input parameter values. Second, we develop a spectral approach that relies on Polynomial Chaos (PC) expansions on each side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification and propagation. The methodology is tested on synthetic examples of discontinuous data with adjustable sharpness and structure.

  10. Uncertainty and Sensitivity Analyses of Duct Propagation Models

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Watson, Willie R.; Jones, Michael G.

    2008-01-01

    This paper presents results of uncertainty and sensitivity analyses conducted to assess the relative merits of three duct propagation codes. Results from this study are intended to support identification of a "working envelope" within which to use the various approaches underlying these propagation codes. This investigation considers a segmented liner configuration that models the NASA Langley Grazing Incidence Tube, for which a large set of measured data was available. For the uncertainty analysis, the selected input parameters (source sound pressure level, average Mach number, liner impedance, exit impedance, static pressure and static temperature) are randomly varied over a range of values. Uncertainty limits (95% confidence levels) are computed for the predicted values from each code, and are compared with the corresponding 95% confidence intervals in the measured data. Generally, the mean values of the predicted attenuation are observed to track the mean values of the measured attenuation quite well, and the predicted confidence intervals tend to be larger in the presence of mean flow. A two-level, six-factor sensitivity study is also conducted in which the six inputs are varied one at a time to assess their effect on the predicted attenuation. As expected, the results demonstrate the liner resistance and reactance to be the most important input parameters. They also indicate that the exit impedance is a significant contributor to uncertainty in the predicted attenuation.
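
    As a concrete illustration of the two analyses described above, the sketch below uses a cheap algebraic surrogate in place of a duct propagation code (the surrogate, its coefficients and the input ranges are invented): all six inputs are varied at random to obtain a 95% interval on the predicted attenuation, and a two-level, one-factor-at-a-time sweep ranks the individual inputs.

      import numpy as np

      rng = np.random.default_rng(1)

      # stand-in for a duct propagation code: attenuation (dB) from six inputs;
      # purely illustrative coefficients, not a physical liner model
      def attenuation(spl, mach, resistance, reactance, exit_imp, temperature):
          return (30.0 + 8.0 * resistance - 5.0 * abs(reactance - 1.0)
                  - 10.0 * mach + 0.5 * exit_imp
                  + 0.01 * (temperature - 288.0) + 0.02 * (spl - 120.0))

      names = ["SPL", "Mach", "resistance", "reactance", "exit imp.", "temperature"]
      lo = np.array([118.0, 0.0, 0.5, 0.5, 0.5, 283.0])
      hi = np.array([122.0, 0.5, 3.0, 2.0, 2.0, 293.0])

      # uncertainty analysis: vary all six inputs at random, report a 95% interval
      x = rng.uniform(lo, hi, size=(5000, 6))
      att = np.array([attenuation(*row) for row in x])
      print("95% interval on attenuation:", np.round(np.percentile(att, [2.5, 97.5]), 2))

      # two-level, six-factor sensitivity study: one input at a time at its extremes
      base = 0.5 * (lo + hi)
      for i, name in enumerate(names):
          x_lo, x_hi = base.copy(), base.copy()
          x_lo[i], x_hi[i] = lo[i], hi[i]
          print(f"{name:12s} effect = {attenuation(*x_hi) - attenuation(*x_lo):+6.2f} dB")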

  11. Synthesis and Control of Flexible Systems with Component-Level Uncertainties

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Lim, Kyong B.

    2009-01-01

    An efficient and computationally robust method for synthesis of component dynamics is developed. The method defines the interface forces/moments as feasible vectors in transformed coordinates to ensure that connectivity requirements of the combined structure are met. The synthesized system is then defined in a transformed set of feasible coordinates. The simplicity of form is exploited to effectively deal with modeling parametric and non-parametric uncertainties at the substructure level. Uncertainty models of reasonable size and complexity are synthesized for the combined structure from those in the substructure models. In particular, we address frequency and damping uncertainties at the component level. The approach first considers the robustness of synthesized flexible systems. It is then extended to deal with non-synthesized dynamic models with component-level uncertainties by projecting uncertainties to the system level. A numerical example is given to demonstrate the feasibility of the proposed approach.

  12. Evaluating the uncertainty of input quantities in measurement models

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio; Elster, Clemens

    2014-06-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in

  13. Dosimetric uncertainty in prostate cancer proton radiotherapy

    SciTech Connect

    Lin Liyong; Vargas, Carlos; Hsi Wen; Indelicato, Daniel; Slopsema, Roelf; Li Zuofeng; Yeung, Daniel; Horne, Dave; Palta, Jatinder

    2008-11-15

    Purpose: The authors evaluate the uncertainty in proton therapy dose distribution for prostate cancer due to organ displacement, varying penumbra width of proton beams, and the amount of rectal gas inside the rectum. Methods and Materials: Proton beam treatment plans were generated for ten prostate patients with a minimum dose of 74.1 cobalt gray equivalent (CGE) to the planning target volume (PTV) while 95% of the PTV received 78 CGE. Two lateral or lateral oblique proton beams were used for each plan. The authors investigated the uncertainty in dose to the rectal wall (RW) and the bladder wall (BW) due to organ displacement by comparing the dose-volume histograms (DVH) calculated with the original or shifted contours. The variation between DVHs was also evaluated for patients with and without rectal gas in the rectum for five patients who had 16 to 47 cc of visible rectal gas in their planning computed tomography (CT) imaging set. The uncertainty due to the varying penumbra width of the delivered protons for different beam setting options on the proton delivery system was also evaluated. Results: For a 5 mm anterior shift, the relative change in the RW volume receiving 70 CGE dose (V70) was 37.9% (5.0% absolute change in 13.2% of a mean V70). The relative change in the BW volume receiving 70 CGE dose (V70) was 20.9% (4.3% absolute change in 20.6% of a mean V70) with a 5 mm inferior shift. A 2 mm penumbra difference in beam setting options on the proton delivery system resulted in relative variations of 6.1% (0.8% absolute change) and 4.4% (0.9% absolute change) in V70 of the RW and BW, respectively. The data show that the organ displacements produce absolute DVH changes that generally shift the entire isodose line while maintaining the same shape. The overall shape of the DVH curve for each organ is determined by the penumbra and the distance of the target in the beam's eye view (BEV) from the block edge. The beam setting option

  14. The uncertainty of reference standards--a guide to understanding factors impacting uncertainty, uncertainty calculations, and vendor certifications.

    PubMed

    Gates, Kevin; Chang, Ning; Dilek, Isil; Jian, Huahua; Pogue, Sherri; Sreenivasan, Uma

    2009-10-01

    Certified solution standards are widely used in forensic toxicological, clinical/diagnostic, and environmental testing. Typically, these standards are purchased as ampouled solutions with a certified concentration. Vendors present concentration and uncertainty differently on their Certificates of Analysis. Understanding the factors that impact uncertainty and which factors have been considered in the vendor's assignment of uncertainty are critical to understanding the accuracy of the standard and the impact on testing results. Understanding these variables is also important for laboratories seeking to comply with ISO/IEC 17025 requirements and for those preparing reference solutions from neat materials at the bench. The impact of uncertainty associated with the neat material purity (including residual water, residual solvent, and inorganic content), mass measurement (weighing techniques), and solvent addition (solution density) on the overall uncertainty of the certified concentration is described along with uncertainty calculations.
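
    For readers preparing reference solutions at the bench, the sketch below shows how purity, mass and solvent-volume contributions combine in quadrature for a simple concentration model c = purity × mass / volume; all numbers are hypothetical, and a full budget would treat residual water/solvent, density and weighing technique in more detail.

      import math

      # Illustrative combined-uncertainty budget for a solution standard prepared
      # from a neat material; values below are hypothetical.
      purity, u_purity = 0.985, 0.003      # mass fraction and its standard uncertainty
      mass_mg, u_mass = 10.00, 0.02        # mg, balance/weighing-technique uncertainty
      vol_ml, u_vol = 10.00, 0.012         # mL, from solvent addition and density

      c = purity * mass_mg / vol_ml        # certified concentration, mg/mL

      # for a pure product/quotient model, relative standard uncertainties
      # combine as the root sum of squares
      u_rel = math.sqrt((u_purity / purity) ** 2
                        + (u_mass / mass_mg) ** 2
                        + (u_vol / vol_ml) ** 2)
      print(f"c = {c:.4f} mg/mL")
      print(f"combined standard uncertainty u(c) = {c * u_rel:.4f} mg/mL")
      print(f"expanded uncertainty U (k = 2)     = {2 * c * u_rel:.4f} mg/mL")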

  15. The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation

    DOE PAGES

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.; ...

    2015-12-01

    Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, the typical SDDR calculations do not consider how uncertainties in MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.

  16. The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation

    SciTech Connect

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.; Peterson, Joshua L.; Johnson, Seth R.

    2015-12-01

    Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, the typical SDDR calculations do not consider how uncertainties in MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.

  17. Uncertainty in gridded CO2 emissions estimates

    NASA Astrophysics Data System (ADS)

    Hogue, Susannah; Marland, Eric; Andres, Robert J.; Marland, Gregg; Woodard, Dawn

    2016-05-01

    We are interested in the spatial distribution of fossil-fuel-related emissions of CO2 for both geochemical and geopolitical reasons, but it is important to understand the uncertainty that exists in spatially explicit emissions estimates. Working from one of the widely used gridded data sets of CO2 emissions, we examine the elements of uncertainty, focusing on gridded data for the United States at the scale of 1° latitude by 1° longitude. Uncertainty is introduced in the magnitude of total United States emissions, the magnitude and location of large point sources, the magnitude and distribution of non-point sources, and from the use of proxy data to characterize emissions. For the United States, we develop estimates of the contribution of each component of uncertainty. At 1° resolution, in most grid cells, the largest contribution to uncertainty comes from how well the distribution of the proxy (in this case population density) represents the distribution of emissions. In other grid cells, the magnitude and location of large point sources make the major contribution to uncertainty. Uncertainty in population density can be important where a large gradient in population density occurs near a grid cell boundary. Uncertainty is strongly scale-dependent with uncertainty increasing as grid size decreases. Uncertainty for our data set with 1° grid cells for the United States is typically on the order of ±150%, but this is perhaps not excessive in a data set where emissions per grid cell vary over 8 orders of magnitude.

  18. Optimal invasive species management under multiple uncertainties.

    PubMed

    Kotani, Koji; Kakinaka, Makoto; Matsuda, Hiroyuki

    2011-09-01

    The management programs for invasive species have been proposed and implemented in many regions of the world. However, practitioners and scientists have not reached a consensus on how to control them yet. One reason is the presence of various uncertainties associated with the management. To give some guidance on this issue, we characterize the optimal strategy by developing a dynamic model of invasive species management under uncertainties. In particular, focusing on (i) growth uncertainty and (ii) measurement uncertainty, we identify how these uncertainties affect optimal strategies and value functions. Our results suggest that a rise in growth uncertainty causes the optimal strategy to involve more restrained removals and the corresponding value function to shift up. Furthermore, we also find that a rise in measurement uncertainty affects optimal policies in a highly complex manner, but their corresponding value functions generally shift down as measurement uncertainty rises. Overall, a rise in growth uncertainty can be beneficial, while a rise in measurement uncertainty brings about an adverse effect, which implies the potential gain of precisely identifying the current stock size of invasive species.

  19. The Role of Uncertainty in Climate Science

    NASA Astrophysics Data System (ADS)

    Oreskes, N.

    2012-12-01

    Scientific discussions of climate change place considerable weight on uncertainty. The research frontier, by definition, rests at the interface between the known and the unknown and our scientific investigations necessarily track this interface. Yet, other areas of active scientific research are not necessarily characterized by a similar focus on uncertainty; previous assessments of science for policy, for example, do not reveal such extensive efforts at uncertainty quantification. Why has uncertainty loomed so large in climate science? This paper argues that the extensive discussions of uncertainty surrounding climate change are at least in part a response to the social and political context of climate change. Skeptics and contrarians focus on uncertainty as a political strategy, emphasizing or exaggerating uncertainties as a means to undermine public concern about climate change and delay policy action. The strategy works in part because it appeals to a certain logic: if our knowledge is uncertain, then it makes sense to do more research. Change, as the tobacco industry famously realized, requires justification; doubt favors the status quo. However, the strategy also works by pulling scientists into an "uncertainty framework," inspiring them to respond to the challenge by addressing and quantifying the uncertainties. The problem is that all science is uncertain—nothing in science is ever proven absolutely, positively—so as soon as one uncertainty is addressed, another can be raised, which is precisely what contrarians have done over the past twenty years.

  20. Optimisation of decision making under uncertainty throughout field lifetime: A fractured reservoir example

    NASA Astrophysics Data System (ADS)

    Arnold, Dan; Demyanov, Vasily; Christie, Mike; Bakay, Alexander; Gopa, Konstantin

    2016-10-01

    Assessing the change in uncertainty in reservoir production forecasts over field lifetime is rarely undertaken because of the complexity of joining together the individual workflows. This becomes particularly important in complex fields such as naturally fractured reservoirs. The impact of this problem has been identified in previous studies, and many solutions have been proposed but never implemented on complex reservoir problems due to the computational cost of quantifying uncertainty and optimising the reservoir development, specifically knowing how many and what kind of simulations to run. This paper demonstrates a workflow that propagates uncertainty throughout field lifetime, and into the decision making process by a combination of a metric-based approach, multi-objective optimisation and Bayesian estimation of uncertainty. The workflow propagates uncertainty estimates from appraisal into initial development optimisation, then updates uncertainty through history matching and finally propagates it into late-life optimisation. The combination of techniques applied, namely the metric approach and multi-objective optimisation, helps evaluate development options under uncertainty. This was achieved with a significantly reduced number of flow simulations, such that the combined workflow is computationally feasible to run for a real-field problem. This workflow is applied to two synthetic naturally fractured reservoir (NFR) case studies in appraisal, field development, history matching and mid-life EOR stages. The first is a simple sector model, while the second is a more complex full field example based on a real-life analogue. This study infers geological uncertainty from an ensemble of models based on the carbonate Brazilian outcrop, which are propagated through the field lifetime, before and after the start of production, with the inclusion of production data significantly collapsing the spread of P10-P90 in reservoir forecasts. The workflow links uncertainty

  1. Data Assimilation and Propagation of Uncertainty in Multiscale Cardiovascular Simulation

    NASA Astrophysics Data System (ADS)

    Schiavazzi, Daniele; Marsden, Alison

    2015-11-01

    Cardiovascular modeling is the application of computational tools to predict hemodynamics. State-of-the-art techniques couple a 3D incompressible Navier-Stokes solver with a boundary circulation model and can predict local and peripheral hemodynamics, analyze the post-operative performance of surgical designs and complement clinical data collection, minimizing invasive and risky measurement practices. The ability of these tools to make useful predictions is directly related to their accuracy in representing measured physiologies. Tuning of model parameters is therefore a topic of paramount importance and should include clinical data uncertainty, revealing how this uncertainty will affect the predictions. We propose a fully Bayesian, multi-level approach to data assimilation of uncertain clinical data in multiscale circulation models. To reduce the computational cost, we use a stable, condensed approximation of the 3D model built by linear sparse regression of the pressure/flow rate relationship at the outlets. Finally, we consider the problem of non-invasively propagating the uncertainty in model parameters to the resulting hemodynamics and compare Monte Carlo simulation with Stochastic Collocation approaches based on Polynomial or Multi-resolution Chaos expansions.

  2. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115

  3. Stochastic reduced order models for inverse problems under uncertainty.

    PubMed

    Warner, James E; Aquino, Wilkins; Grigoriu, Mircea D

    2015-03-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well.
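
    The central SROM idea, replacing a continuous random quantity by a small set of samples and probabilities chosen to match its statistics, can be illustrated with a short optimization. The objective used below (a mismatch of the CDF and the first two moments) and the lognormal stand-in for a random shear modulus are assumptions made for this sketch, not the paper's exact formulation.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import lognorm

      # Target random quantity to be compressed into an SROM of size m
      target = lognorm(s=0.25, scale=10.0)        # stand-in for a random shear modulus
      m = 8
      grid = np.linspace(target.ppf(0.001), target.ppf(0.999), 200)
      mom1, mom2 = target.mean(), target.var() + target.mean() ** 2

      def unpack(z):
          x, p = z[:m], np.abs(z[m:])
          return x, p / p.sum()                   # probabilities sum to one

      def objective(z):
          x, p = unpack(z)
          cdf_srom = np.array([(p * (x <= g)).sum() for g in grid])
          e_cdf = np.mean((cdf_srom - target.cdf(grid)) ** 2)
          e1 = (np.sum(p * x) - mom1) / mom1
          e2 = (np.sum(p * x ** 2) - mom2) / mom2
          return e_cdf + e1 ** 2 + e2 ** 2        # assumed weighting: all terms equal

      z0 = np.concatenate([target.ppf(np.linspace(0.05, 0.95, m)), np.ones(m) / m])
      res = minimize(objective, z0, method="Nelder-Mead",
                     options={"maxiter": 40000, "maxfev": 40000})
      x_srom, p_srom = unpack(res.x)
      print("SROM samples      :", np.round(np.sort(x_srom), 2))
      print("SROM mean / target:", round(float(np.sum(p_srom * x_srom)), 3), "/", round(mom1, 3))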

  4. Uncertainty Handling in Disaster Management Using Hierarchical Rough Set Granulation

    NASA Astrophysics Data System (ADS)

    Sheikhian, H.; Delavar, M. R.; Stein, A.

    2015-08-01

    Uncertainty is one of the main concerns in geospatial data analysis. It affects different parts of decision making based on such data. In this paper, a new methodology to handle uncertainty for multi-criteria decision making problems is proposed. It integrates hierarchical rough granulation and rule extraction to build an accurate classifier. Rough granulation provides information granules with a detailed quality assessment. The granules are the basis for the rule extraction in granular computing, which applies quality measures on the rules to obtain the best set of classification rules. The proposed methodology is applied to assess seismic physical vulnerability in Tehran. Six effective criteria reflecting building age, height and material, topographic slope and earthquake intensity of the North Tehran fault have been tested. The criteria were discretized and the data set was granulated using a hierarchical rough method, where the best describing granules are determined according to the quality measures. The granules are fed into the granular computing algorithm resulting in classification rules that provide the highest prediction quality. This detailed uncertainty management resulted in 84% accuracy in prediction in a training data set. It was applied next to the whole study area to obtain the seismic vulnerability map of Tehran. A sensitivity analysis proved that earthquake intensity is the most effective criterion in the seismic vulnerability assessment of Tehran.

  5. Uncertainty estimation for resource assessment-an application to coal

    USGS Publications Warehouse

    Schuenemeyer, J.H.; Power, H.C.

    2000-01-01

    The U.S. Geological Survey is conducting a national assessment of coal resources. As part of that assessment, a geostatistical procedure has been developed to estimate the uncertainty of coal resources for the historical categories of geological assurance: measured, indicated, inferred, and hypothetical coal. Data consist of spatially clustered coal thickness measurements from coal beds and/or zones that cover, in some cases, several thousand square kilometers. Our procedure involved trend removal, an examination of spatial correlation, computation of a sample semivariogram, and fitting a semivariogram model. This model provided standard deviations for the uncertainty estimates. The number of sample points (drill holes) in each historical category also was estimated. Measurement error in the thickness of the coal bed/zone was obtained from the fitted model or supplied exogenously. From this information approximate estimates of uncertainty on the historical categories were computed. We illustrate the methodology using drill hole data from the Harmon coal bed located in southwestern North Dakota. The methodology will be applied to approximately 50 coal data sets.
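
    The sketch below walks through the same sequence on synthetic drill-hole data (the study itself uses Harmon bed measurements): remove a first-order trend, compute an empirical semivariogram from binned squared differences, and fit an exponential model whose nugget, sill and range then feed the uncertainty estimates.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.spatial.distance import pdist, squareform

      # Synthetic stand-in for clustered coal-thickness drill-hole data
      rng = np.random.default_rng(7)
      n = 300
      xy = rng.uniform(0.0, 50.0, size=(n, 2))                      # km
      dist = squareform(pdist(xy))
      cov = 0.20 * np.exp(-dist / 8.0) + 0.02 * np.eye(n)           # exponential covariance
      thickness = 3.0 + 0.02 * xy[:, 0] + np.linalg.cholesky(cov) @ rng.standard_normal(n)

      # 1) trend removal (first-order surface)
      A = np.column_stack([np.ones(n), xy])
      resid = thickness - A @ np.linalg.lstsq(A, thickness, rcond=None)[0]

      # 2) empirical semivariogram: 0.5 * mean squared difference, binned by separation
      h = pdist(xy)
      semivar = 0.5 * pdist(resid[:, None], metric="sqeuclidean")
      edges = np.linspace(0.0, 25.0, 13)
      centers = 0.5 * (edges[1:] + edges[:-1])
      gamma = np.array([semivar[(h >= a) & (h < b)].mean()
                        for a, b in zip(edges[:-1], edges[1:])])

      # 3) fit an exponential semivariogram model: nugget + sill * (1 - exp(-h / range))
      model = lambda h, nugget, sill, rnge: nugget + sill * (1.0 - np.exp(-h / rnge))
      (nugget, sill, rnge), _ = curve_fit(model, centers, gamma, p0=[0.02, 0.2, 8.0])
      print(f"nugget = {nugget:.3f}, sill = {sill:.3f}, range = {rnge:.1f} km")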

  6. Measuring the uncertainty of coupling

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaojun; Shang, Pengjian

    2015-06-01

    A new information-theoretic measure, called coupling entropy, is proposed here to detect the causal links in complex systems by taking into account the inner composition alignment of temporal structure. It is a permutation-based asymmetric association measure to infer the uncertainty of coupling between two time series. The coupling entropy is found to be effective in the analysis of Hénon maps, where different noises are added to test its accuracy and sensitivity. The coupling entropy is also applied to analyze the relationship between unemployment rate and CPI change in the U.S., where the CPI change turns out to be the driving variable while the unemployment rate is the responding one.

  7. Induction of models under uncertainty

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter

    1986-01-01

    This paper outlines a procedure for performing induction under uncertainty. This procedure uses a probabilistic representation and uses Bayes' theorem to decide between alternative hypotheses (theories). This procedure is illustrated by a robot with no prior world experience performing induction on data it has gathered about the world. The particular inductive problem is the formation of class descriptions both for the tutored and untutored cases. The resulting class definitions are inherently probabilistic and so do not have any sharply defined membership criterion. This robot example raises some fundamental problems about induction; particularly, it is shown that inductively formed theories are not the best way to make predictions. Another difficulty is the need to provide prior probabilities for the set of possible theories. The main criterion for such priors is a pragmatic one aimed at keeping the theory structure as simple as possible, while still reflecting any structure discovered in the data.

  8. [Uncertainty, agency relationships and miscommunication].

    PubMed

    Ruiz Moreno, J

    1999-04-01

    The author finds it curious that, although everyone to whom an agency relationship is explained grasps the concept with ease ("It's common sense," they usually respond), agency relationships are nonetheless ignored by those with managerial responsibilities in the majority of businesses in any industrial or service field. We use this observation to introduce the fundamental purpose of this article. Having read it, any professional will probably have a clearer idea of concepts that are essential for understanding how costs function and how they are generated inside a health organization: uncertainty, agency relationships and asymmetrical information. Using a British Airways flight as a comparison, it will be difficult not to understand these terms in their practical applications to the health field.

  9. Uncertainty and instream flow standards

    USGS Publications Warehouse

    Castleberry, D.; Cech, J.; Erman, D.; Hankin, D.; Healey, M.; Kondolf, M.; Mengel, M.; Mohr, M.; Moyle, P.; Nielsen, Jennifer; Speed, T.; Williams, J.

    1996-01-01

    Several years ago, Science published an important essay (Ludwig et al. 1993) on the need to confront the scientific uncertainty associated with managing natural resources. The essay did not discuss instream flow standards explicitly, but its arguments apply. At an April 1995 workshop in Davis, California, all 12 participants agreed that currently no scientifically defensible method exists for defining the instream flows needed to protect particular species of fish or aquatic ecosystems (Williams, in press). We also agreed that acknowledging this fact is an essential step in dealing rationally and effectively with the problem.Practical necessity and the protection of fishery resources require that new instream flow standards be established and that existing standards be revised. However, if standards cannot be defined scientifically, how can this be done? We join others in recommending the approach of adaptive management. Applied to instream flow standards, this approach involves at least three elements.

  10. Multiplatform application for calculating a combined standard uncertainty using a Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Niewinski, Marek; Gurnecki, Pawel

    2016-12-01

    The paper presents a new computer program for calculating a combined standard uncertainty. It implements the algorithm described in JCGM 101:2008, which is concerned with the use of a Monte Carlo method as an implementation of the propagation of distributions for uncertainty evaluation. The accuracy of the calculation is ensured by using high-quality random number generators. The paper describes the main principles of the program and compares the obtained results with the example problems presented in JCGM Supplement 1.
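
    A minimal version of the propagation of distributions described in JCGM 101:2008 is sketched below for a hypothetical three-input measurement model (not one of the Supplement's example problems): draw M samples of the inputs from their assigned distributions, evaluate the model, and report the estimate, the standard uncertainty and the shortest 95% coverage interval.

      import numpy as np

      rng = np.random.default_rng(2)
      M = 10 ** 6                                  # number of Monte Carlo trials

      # assigned input distributions (illustrative): Gaussian, rectangular, triangular
      x1 = rng.normal(10.0, 0.1, M)
      x2 = rng.uniform(4.95, 5.05, M)
      x3 = rng.triangular(0.9, 1.0, 1.1, M)

      y = np.sort(x1 * x2 / x3)                    # hypothetical measurement model

      q = int(round(0.95 * M))                     # points inside a 95% interval
      widths = y[q - 1:] - y[: M - q + 1]          # all candidate intervals of that size
      i = int(np.argmin(widths))                   # shortest coverage interval
      print(f"estimate y      = {y.mean():.4f}")
      print(f"standard u(y)   = {y.std(ddof=1):.4f}")
      print(f"shortest 95% CI = [{y[i]:.4f}, {y[i + q - 1]:.4f}]")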

  11. Algorithms for propagating uncertainty across heterogeneous domains

    SciTech Connect

    Cho, Heyrim; Yang, Xiu; Venturi, D.; Karniadakis, George E.

    2015-12-30

    We address an important research area in stochastic multi-scale modeling, namely the propagation of uncertainty across heterogeneous domains characterized by partially correlated processes with vastly different correlation lengths. This class of problems arises very often when computing stochastic PDEs and particle models with stochastic/stochastic domain interaction, but also with stochastic/deterministic coupling. The domains may be fully embedded, adjacent or partially overlapping. The fundamental open question we address is the construction of proper transmission boundary conditions that preserve global statistical properties of the solution across different subdomains. Often, the codes that model different parts of the domains are black-box and hence a domain decomposition technique is required. No rigorous theory or even effective empirical algorithms have yet been developed for this purpose, although interfaces defined in terms of functionals of random fields (e.g., multi-point cumulants) can overcome the computationally prohibitive problem of preserving sample-path continuity across domains. The key idea of the different methods we propose relies on combining local reduced-order representations of random fields with multi-level domain decomposition. Specifically, we propose two new algorithms: The first one enforces the continuity of the conditional mean and variance of the solution across adjacent subdomains by using Schwarz iterations. The second algorithm is based on PDE-constrained multi-objective optimization, and it allows us to set more general interface conditions. The effectiveness of these new algorithms is demonstrated in numerical examples involving elliptic problems with random diffusion coefficients, stochastically advected scalar fields, and nonlinear advection-reaction problems with random reaction rates.

  12. Decision Making under Uncertainty: A Quasimetric Approach

    PubMed Central

    N'Guyen, Steve; Moulin-Frier, Clément; Droulez, Jacques

    2013-01-01

    We propose a new approach for solving a class of discrete decision making problems under uncertainty with positive cost. This issue concerns multiple and diverse fields such as engineering, economics, artificial intelligence, cognitive science and many others. Basically, an agent has to choose a single or series of actions from a set of options, without knowing for sure their consequences. Schematically, two main approaches have been followed: either the agent learns which option is the correct one to choose in a given situation by trial and error, or the agent already has some knowledge on the possible consequences of his decisions; this knowledge being generally expressed as a conditional probability distribution. In the latter case, several optimal or suboptimal methods have been proposed to exploit this uncertain knowledge in various contexts. In this work, we propose following a different approach, based on the geometric intuition of distance. More precisely, we define a goal independent quasimetric structure on the state space, taking into account both cost function and transition probability. We then compare precision and computation time with classical approaches. PMID:24376697

  13. Unscented transform-based uncertainty analysis of rotating coil transducers for field mapping

    NASA Astrophysics Data System (ADS)

    Arpaia, P.; De Matteis, E.; Schiano Lo Moriello, R.

    2016-03-01

    The uncertainty of a rotating coil transducer for magnetic field mapping is analyzed. Unscented transform and statistical design of experiments are combined to determine magnetic field expectation, standard uncertainty, and separate contributions of the uncertainty sources. For nonlinear measurement models, the unscented transform-based approach is more error-proof than the linearization underlying the "Guide to the Expression of Uncertainty in Measurement" (GUM), owing to the absence of model approximations and derivative computations. When GUM assumptions are not met, the deterministic sampling strategy strongly reduces computational burden with respect to Monte Carlo-based methods proposed by Supplement 1 of the GUM. Furthermore, the design of experiments and the associated statistical analysis allow the uncertainty sources domain to be explored efficiently, as well as their significance and single contributions to be assessed for an effective setup configuration. A straightforward experimental case study highlights that a one-order-of-magnitude reduction in the relative uncertainty of the coil area produces a decrease in uncertainty of the field mapping transducer by a factor of 25 with respect to the worst condition. Moreover, about 700 trials and the related processing achieve results corresponding to 5 × 10⁶ brute-force Monte Carlo simulations.
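
    A bare-bones unscented-transform sketch for a generic nonlinear measurement model with two independent Gaussian inputs follows; the model, the numerical values and the kappa scaling are illustrative assumptions, and a brute-force Monte Carlo run is included as the reference against which such deterministic sampling is usually compared.

      import numpy as np

      # generic nonlinear measurement model (illustrative, not the transducer's):
      # e.g. a field estimate proportional to signal / (coil area * rotation speed)
      def f(x):
          area, speed = x
          return 1.0e-3 / (area * speed)

      mean = np.array([1.0e-2, 10.0])              # coil area (m^2), speed (rad/s)
      cov = np.diag([(1.0e-4) ** 2, 0.05 ** 2])

      n = mean.size
      kappa = 3.0 - n                              # common heuristic scaling choice
      root = np.linalg.cholesky((n + kappa) * cov)
      sigma_pts = np.vstack([mean, mean + root.T, mean - root.T])   # 2n + 1 points
      w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
      w[0] = kappa / (n + kappa)

      y = np.array([f(p) for p in sigma_pts])
      y_mean = w @ y
      y_std = np.sqrt(w @ (y - y_mean) ** 2)
      print(f"unscented transform: {y_mean:.5f} +/- {y_std:.5f}")

      # brute-force Monte Carlo reference
      rng = np.random.default_rng(3)
      y_mc = np.array([f(p) for p in rng.multivariate_normal(mean, cov, 200_000)])
      print(f"Monte Carlo        : {y_mc.mean():.5f} +/- {y_mc.std(ddof=1):.5f}")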

  14. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    SciTech Connect

    Yankov, Artem; Collins, Benjamin; Klein, Markus; Jessee, Matthew A.; Zwermann, Winfried; Velkov, Kiril; Pautz, Andreas; Downar, Thomas

    2012-01-01

    For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.

  15. A stochastic collocation method for uncertainty quantification and propagation in cardiovascular simulations.

    PubMed

    Sankaran, Sethuraman; Marsden, Alison L

    2011-03-01

    Simulations of blood flow in both healthy and diseased vascular models can be used to compute a range of hemodynamic parameters including velocities, time varying wall shear stress, pressure drops, and energy losses. The confidence in the data output from cardiovascular simulations depends directly on our level of certainty in simulation input parameters. In this work, we develop a general set of tools to evaluate the sensitivity of output parameters to input uncertainties in cardiovascular simulations. Uncertainties can arise from boundary conditions, geometrical parameters, or clinical data. These uncertainties result in a range of possible outputs which are quantified using probability density functions (PDFs). The objective is to systemically model the input uncertainties and quantify the confidence in the output of hemodynamic simulations. Input uncertainties are quantified and mapped to the stochastic space using the stochastic collocation technique. We develop an adaptive collocation algorithm for Gauss-Lobatto-Chebyshev grid points that significantly reduces computational cost. This analysis is performed on two idealized problems--an abdominal aortic aneurysm and a carotid artery bifurcation, and one patient specific problem--a Fontan procedure for congenital heart defects. In each case, relevant hemodynamic features are extracted and their uncertainty is quantified. Uncertainty quantification of the hemodynamic simulations is done using (a) stochastic space representations, (b) PDFs, and (c) the confidence intervals for a specified level of confidence in each problem.
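
    The collocation step itself can be sketched in one dimension: evaluate the model only at Gauss-Lobatto-Chebyshev points, build a polynomial surrogate, and integrate the surrogate to obtain the output mean and standard deviation. The toy model below stands in for an expensive hemodynamic simulation; the adaptive, multi-dimensional algorithm of the paper is not reproduced.

      import numpy as np
      from numpy.polynomial import chebyshev as C

      # cheap stand-in for an expensive simulation, y = model(xi), xi ~ U(-1, 1)
      def model(xi):
          return np.exp(0.3 * xi) / (1.0 + 0.5 * xi ** 2)

      N = 8                                              # polynomial degree
      nodes = np.cos(np.pi * np.arange(N + 1) / N)       # Gauss-Lobatto-Chebyshev points
      values = model(nodes)                              # only N + 1 "simulations"

      coeffs = C.chebfit(nodes, values, N)               # interpolating surrogate

      def expect(c):
          # E[g(xi)] for xi ~ U(-1, 1) equals half the integral of g over [-1, 1]
          antideriv = C.chebint(c)
          return 0.5 * (C.chebval(1.0, antideriv) - C.chebval(-1.0, antideriv))

      mean = expect(coeffs)
      second_moment = expect(C.chebmul(coeffs, coeffs))
      print(f"collocation: mean = {mean:.5f}, std = {np.sqrt(second_moment - mean ** 2):.5f}")

      # dense Monte Carlo check on the true model
      xi = np.random.default_rng(4).uniform(-1.0, 1.0, 500_000)
      print(f"Monte Carlo: mean = {model(xi).mean():.5f}, std = {model(xi).std():.5f}")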

  16. A Cascade Approach to Uncertainty Estimation for the Hydrological Simulation of Droughts

    NASA Astrophysics Data System (ADS)

    Smith, Katie; Tanguy, Maliko; Parry, Simon; Prudhomme, Christel

    2016-04-01

    Uncertainty poses a significant challenge in environmental research and the characterisation and quantification of uncertainty has become a research priority over the past decade. Studies of extreme events are particularly affected by issues of uncertainty. This study focusses on the sources of uncertainty in the modelling of streamflow droughts in the United Kingdom. Droughts are a poorly understood natural hazard with no universally accepted definition. Meteorological, hydrological and agricultural droughts have different meanings and vary both spatially and temporally, yet each is inextricably linked. The work presented here is part of two extensive interdisciplinary projects investigating drought reconstruction and drought forecasting capabilities in the UK. Lumped catchment models are applied to simulate streamflow drought, and uncertainties from 5 different sources are investigated: climate input data, potential evapotranspiration (PET) method, hydrological model, within model structure, and model parameterisation. Latin Hypercube sampling is applied to develop large parameter ensembles for each model structure which are run using parallel computing on a high performance computer cluster. Parameterisations are assessed using multi-objective evaluation criteria which include both general and drought performance metrics. The effect of different climate input data and PET methods on model output is then considered using the accepted model parameterisations. The uncertainty from each of the sources creates a cascade, and when presented as such the relative importance of each aspect of uncertainty can be determined.
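
    As an indication of how such ensembles are built, the sketch below draws a Latin hypercube sample over three parameters of a toy two-reservoir runoff model and scores each parameterisation with a general metric and a low-flow metric; the model, parameter ranges and acceptance thresholds are invented, whereas the study applies full lumped catchment models on an HPC cluster.

      import numpy as np
      from scipy.stats import qmc

      rng = np.random.default_rng(5)

      # toy conceptual model: two linear reservoirs in parallel (illustrative only)
      def simulate(rain, k_fast, split, k_slow):
          q, s_fast, s_slow = [], 0.0, 0.0
          for r in rain:
              s_fast += split * r
              s_slow += (1.0 - split) * r
              q_fast, q_slow = s_fast / k_fast, s_slow / k_slow
              s_fast -= q_fast
              s_slow -= q_slow
              q.append(q_fast + q_slow)
          return np.array(q)

      def nse(sim, obs):                     # Nash-Sutcliffe efficiency
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      rain = rng.gamma(0.4, 5.0, 365)
      observed = simulate(rain, 30.0, 0.6, 120.0) + rng.normal(0.0, 0.02, 365)
      low_flows = observed <= np.quantile(observed, 0.3)   # drought-relevant periods

      # Latin hypercube ensemble over the three parameters
      lower, upper = np.array([5.0, 0.1, 20.0]), np.array([80.0, 0.9, 400.0])
      params = qmc.scale(qmc.LatinHypercube(d=3, seed=1).random(n=2000), lower, upper)

      scores = []
      for k_fast, split, k_slow in params:
          sim = simulate(rain, k_fast, split, k_slow)
          scores.append((nse(sim, observed), nse(sim[low_flows], observed[low_flows])))
      kept = sum(1 for general, low in scores if general > 0.7 and low > 0.5)
      print(f"{kept} of {len(params)} parameterisations pass both criteria")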

  17. Monte Carlo uncertainty estimation for an oscillating-vessel viscosity measurement

    SciTech Connect

    K. Horne; H. Ban; R. Fielding; R. Kennedy

    2012-08-01

    This paper discusses the initial design and evaluation of a high temperature viscosity measurement system with a focus on uncertainty assessment. Numerical simulation of the viscometer is used to estimate viscosity uncertainties through the Monte Carlo method. The simulation computes the system response for a particular set of inputs (viscosity, moment of inertia, spring constant and hysteretic damping), and the viscosity is calculated using two methods: the Roscoe approximate solution and a numerical-fit method. For numerical fitting, a residual function of the logarithmic decay of oscillation amplitude and oscillation period is developed to replace the residual function of angular oscillation, which is mathematically stiff. The results of this study indicate that the method using computational solution of the equations and fitting for the parameters should be used, since it almost always outperforms the Roscoe approximation in uncertainty. The hysteretic damping and spring stiffness uncertainties translate into viscosity uncertainties almost directly, whereas the moment of inertia and vessel-height uncertainties are magnified approximately two-fold. As the hysteretic damping increases, so does the magnification of its uncertainty; therefore, it should be minimized in the system design. The result of this study provides a general guide for the design and application of all oscillating-vessel viscosity measurement systems.

  18. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    DOE PAGES

    Yankov, Artem; Collins, Benjamin; Klein, Markus; ...

    2012-01-01

    For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.

  19. Quantum entropy and uncertainty for two-mode squeezed, coherent and intelligent spin states

    NASA Technical Reports Server (NTRS)

    Aragone, C.; Mundarain, D.

    1993-01-01

    We compute the quantum entropy for monomode and two-mode systems set in squeezed states. Thereafter, the quantum entropy is also calculated for angular momentum algebra when the system is either in a coherent or in an intelligent spin state. These values are compared with the corresponding values of the respective uncertainties. In general, quantum entropies and uncertainties have the same minimum and maximum points. However, for coherent and intelligent spin states, it is found that some minima for the quantum entropy turn out to be uncertainty maxima. We feel that the quantum entropy we use provides the right answer, since it is given in an essentially unique way.

  20. Dakota uncertainty quantification methods applied to the NEK-5000 SAHEX model.

    SciTech Connect

    Weirs, V. Gregory

    2014-03-01

    This report summarizes the results of a NEAMS project focused on the use of uncertainty and sensitivity analysis methods within the NEK-5000 and Dakota software framework for assessing failure probabilities as part of probabilistic risk assessment. NEK-5000 is a software tool under development at Argonne National Laboratory to perform computational fluid dynamics calculations for applications such as thermohydraulics of nuclear reactor cores. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. The goal of this work is to demonstrate the use of uncertainty quantification methods in Dakota with NEK-5000.

  1. Capturing the uncertainty in adversary attack simulations.

    SciTech Connect

    Darby, John L.; Brooks, Traci N.; Berry, Robert Bruce

    2008-09-01

    This work provides a comprehensive technique to evaluate uncertainty, resulting in a more realistic evaluation of PI, thereby requiring fewer resources to address scenarios and allowing resources to be used across more scenarios. For a given set of adversary resources, two types of uncertainty are associated with PI for a scenario: (1) aleatory (random) uncertainty for detection probabilities and time delays and (2) epistemic (state of knowledge) uncertainty for the adversary resources applied during an attack. Adversary resources consist of attributes (such as equipment and training) and knowledge about the security system; to date, most evaluations have assumed an adversary with very high resources, adding to the conservatism in the evaluation of PI. The aleatory uncertainty in PI is addressed by assigning probability distributions to detection probabilities and time delays. A numerical sampling technique is used to evaluate PI, addressing the repeated variable dependence in the equation for PI.
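
    To make the aleatory part concrete, the sketch below estimates PI for a single, hypothetical adversary path by sampling detection outcomes, delay times and a response force time, and counting the fraction of trials in which some detection leaves more delay ahead of the adversary than the response force needs; the layer values and distributions are invented, not taken from the report.

      import numpy as np

      rng = np.random.default_rng(6)
      trials = 100_000

      # hypothetical four-layer path: detection probability and mean delay per layer
      p_detect = np.array([0.3, 0.6, 0.8, 0.5])
      delay_mean = np.array([40.0, 90.0, 120.0, 30.0])          # seconds
      response_time = rng.normal(180.0, 30.0, trials)           # response force time

      detected = rng.random((trials, 4)) < p_detect             # aleatory detection outcomes
      delays = rng.normal(delay_mean, 0.2 * delay_mean, (trials, 4))

      # delay remaining along the path from each layer onward
      remaining = np.cumsum(delays[:, ::-1], axis=1)[:, ::-1]

      # interruption: some layer both detects the adversary and leaves more delay
      # ahead of them than the sampled response force time
      interrupted = np.any(detected & (remaining > response_time[:, None]), axis=1)
      print(f"PI = {interrupted.mean():.3f}")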

  2. Assessment of SFR Wire Wrap Simulation Uncertainties

    SciTech Connect

    Delchini, Marc-Olivier G.; Popov, Emilian L.; Pointer, William David; Swiler, Laura P.

    2016-09-30

    Predictive modeling and simulation of nuclear reactor performance and fuel are challenging due to the large number of coupled physical phenomena that must be addressed. Models that will be used for design or operational decisions must be analyzed for uncertainty to ascertain impacts to safety or performance. Rigorous, structured uncertainty analyses are performed by characterizing the model’s input uncertainties and then propagating the uncertainties through the model to estimate output uncertainty. This project is part of the ongoing effort to assess modeling uncertainty in Nek5000 simulations of flow configurations relevant to the advanced reactor applications of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. Three geometries are under investigation in these preliminary assessments: a 3-D pipe, a 3-D 7-pin bundle, and a single pin from the Thermal-Hydraulic Out-of-Reactor Safety (THORS) facility.

  3. Sensitivity analysis of uncertainty in model prediction.

    PubMed

    Russi, Trent; Packard, Andrew; Feeley, Ryan; Frenklach, Michael

    2008-03-27

    Data Collaboration is a framework designed to make inferences from experimental observations in the context of an underlying model. In the prior studies, the methodology was applied to prediction on chemical kinetics models, consistency of a reaction system, and discrimination among competing reaction models. The present work advances Data Collaboration by developing sensitivity analysis of uncertainty in model prediction with respect to uncertainty in experimental observations and model parameters. Evaluation of sensitivity coefficients is performed alongside the solution of the general optimization ansatz of Data Collaboration. The obtained sensitivity coefficients allow one to determine which experiment/parameter uncertainty contributes the most to the uncertainty in model prediction, rank such effects, consider new or even hypothetical experiments to perform, and combine the uncertainty analysis with the cost of uncertainty reduction, thereby providing guidance in selecting an experimental/theoretical strategy for community action.

  4. Evacuation decision-making: process and uncertainty

    SciTech Connect

    Mileti, D.; Sorensen, J.; Bogard, W.

    1985-09-01

    The purpose was to describe the processes of evacuation decision-making, identify and document uncertainties in that process and discuss implications for federal assumption of liability for precautionary evacuations at nuclear facilities under the Price-Anderson Act. Four major categories of uncertainty are identified, concerning the interpretation of hazard, communication problems, perceived impacts of evacuation decisions and exogenous influences. Over 40 historical accounts are reviewed and cases of these uncertainties are documented. The major findings are that all levels of government, including federal agencies, experience uncertainties in some evacuation situations. Second, private sector organizations are subject to uncertainties at a variety of decision points. Third, uncertainties documented in the historical record have provided the grounds for liability although few legal actions have ensued. Finally, it is concluded that if liability for evacuations is assumed by the federal government, the concept of a "precautionary" evacuation is not useful in establishing criteria for that assumption. 55 refs., 1 fig., 4 tabs.

  5. Uncertainty in tsunami sediment transport modeling

    USGS Publications Warehouse

    Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.

    2016-01-01

    Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.

  6. Entropic uncertainty relations and their applications

    NASA Astrophysics Data System (ADS)

    Coles, Patrick J.; Berta, Mario; Tomamichel, Marco; Wehner, Stephanie

    2017-01-01

    Heisenberg's uncertainty principle forms a fundamental element of quantum mechanics. Uncertainty relations in terms of entropies were initially proposed to deal with conceptual shortcomings in the original formulation of the uncertainty principle and, hence, play an important role in quantum foundations. More recently, entropic uncertainty relations have emerged as the central ingredient in the security analysis of almost all quantum cryptographic protocols, such as quantum key distribution and two-party quantum cryptography. This review surveys entropic uncertainty relations that capture Heisenberg's idea that the results of incompatible measurements are impossible to predict, covering both finite- and infinite-dimensional measurements. These ideas are then extended to incorporate quantum correlations between the observed object and its environment, allowing for a variety of recent, more general formulations of the uncertainty principle. Finally, various applications are discussed, ranging from entanglement witnessing to wave-particle duality to quantum cryptography.
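
    For orientation, the best-known finite-dimensional relation of the kind surveyed here is the Maassen-Uffink bound, together with its extension to quantum side information; both are stated below in LaTeX, with notation chosen for this summary rather than taken from the article.

      \[
        H(X) + H(Z) \;\ge\; \log_2 \frac{1}{c},
        \qquad
        c \;=\; \max_{x,z}\,\bigl|\langle x \mid z \rangle\bigr|^{2},
      \]
      \[
        H(X \mid B) + H(Z \mid B) \;\ge\; \log_2 \frac{1}{c} + H(A \mid B).
      \]
      % X and Z are outcomes of two incompatible measurements on system A, |x> and |z>
      % their eigenvectors, and B a (possibly entangled) quantum memory.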

  7. Quantifying Mixed Uncertainties in Cyber Attacker Payoffs

    SciTech Connect

    Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna; Oster, Matthew R.; Saha, Sudip

    2015-04-15

    Representation and propagation of uncertainty in cyber attacker payoffs is a key aspect of security games. Past research has primarily focused on representing the defender’s beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and intervals. Within cyber-settings, continuous probability distributions may still be appropriate for addressing statistical (aleatory) uncertainties where the defender may assume that the attacker’s payoffs differ over time. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker’s payoff generation mechanism. Such epistemic uncertainties are more suitably represented as probability boxes with intervals. In this study, we explore the mathematical treatment of such mixed payoff uncertainties.
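
    As a minimal illustration of the probability-box idea discussed above (a sketch, not code from the paper), the snippet below bounds the distribution of an attacker payoff when the defender only knows that the payoff is normally distributed with a mean lying somewhere in an interval; the payoff parameters and threshold are hypothetical.

        # Minimal probability-box sketch: epistemic uncertainty in the mean of a
        # normally distributed attacker payoff, expressed as lower/upper CDF bounds.
        from scipy.stats import norm

        mu_lo, mu_hi, sigma = 2.0, 5.0, 1.0   # hypothetical payoff parameters

        def pbox_bounds(x):
            """Bounds on P(payoff <= x) over all means in [mu_lo, mu_hi]."""
            # The CDF decreases as the mean grows, so the envelope is attained
            # at the interval endpoints.
            return norm.cdf(x, loc=mu_hi, scale=sigma), norm.cdf(x, loc=mu_lo, scale=sigma)

        lo, hi = pbox_bounds(4.0)
        print(f"P(payoff <= 4) lies in [{lo:.3f}, {hi:.3f}]")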

  8. Incorporating Forecast Uncertainty in Utility Control Center

    SciTech Connect

    Makarov, Yuri V.; Etingov, Pavel V.; Ma, Jian

    2014-07-09

    Uncertainties in forecasting the output of intermittent resources such as wind and solar generation, as well as system loads, are not adequately reflected in existing industry-grade tools used for transmission system management, generation commitment, dispatch and market operation. There are other sources of uncertainty such as uninstructed deviations of conventional generators from their dispatch set points, generator forced outages and failures to start up, load drops, losses of major transmission facilities and frequency variation. These uncertainties can cause deviations from the system balance, which sometimes require inefficient and costly last-minute solutions in the near real-time timeframe. This Chapter considers sources of uncertainty and variability, an overall system uncertainty model, a possible plan for transition from deterministic to probabilistic methods in planning and operations, and two examples of uncertainty-based tools for grid operations. This chapter is based on work conducted at the Pacific Northwest National Laboratory (PNNL).

  9. Whitepaper on Uncertainty Quantification for MPACT

    SciTech Connect

    Williams, Mark L.

    2015-12-17

    The MPACT code provides the ability to perform high-fidelity deterministic calculations to obtain a wide variety of detailed results for very complex reactor core models. However, MPACT currently does not have the capability to propagate the effects of input data uncertainties to provide uncertainties in the calculated results. This white paper discusses a potential method for MPACT uncertainty quantification (UQ) based on stochastic sampling.
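
    The white paper only names stochastic sampling as the candidate approach; the sketch below is a generic illustration of that idea (not MPACT-specific): draw random samples of the uncertain inputs, run the deterministic model once per sample, and summarize the spread of the output. The model function and input distributions are placeholders.

        # Generic stochastic-sampling UQ loop (placeholder model, not MPACT).
        import numpy as np

        rng = np.random.default_rng(0)

        def model(cross_section, density):
            """Stand-in for one run of an expensive deterministic code."""
            return cross_section * density + 0.1 * cross_section ** 2

        n_samples = 1000
        xs = rng.normal(1.0, 0.05, n_samples)    # uncertain input 1 (hypothetical)
        rho = rng.normal(10.0, 0.2, n_samples)   # uncertain input 2 (hypothetical)

        outputs = np.array([model(a, b) for a, b in zip(xs, rho)])
        print("mean      :", outputs.mean())
        print("std dev   :", outputs.std(ddof=1))
        print("95% range :", np.percentile(outputs, [2.5, 97.5]))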

  10. Uncertainty quantification of effective nuclear interactions

    SciTech Connect

    Pérez, R. Navarro; Amaro, J. E.; Arriola, E. Ruiz

    2016-03-02

    We give a brief review of the development of phenomenological NN interactions and the corresponding quantification of statistical uncertainties. We look into the uncertainty of effective interactions broadly used in mean-field calculations, through the Skyrme parameters and effective field theory counter-terms, by estimating both statistical and systematic uncertainties stemming from the NN interaction. We also comment on the role played by different fitting strategies in the light of recent developments.

  11. Calculation of Measurement Uncertainty Using Prior Information

    PubMed Central

    Phillips, S. D.; Estler, W. T.; Levenson, M. S.; Eberhardt, K. R.

    1998-01-01

    We describe the use of Bayesian inference to include prior information about the value of the measurand in the calculation of measurement uncertainty. Typical examples show this can, in effect, reduce the expanded uncertainty by up to 85 %. The application of the Bayesian approach to proving workpiece conformance to specification (as given by international standard ISO 14253-1) is presented and a procedure for increasing the conformance zone by modifying the expanded uncertainty guard bands is discussed. PMID:28009370
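
    As a point of reference (the textbook Gaussian conjugate case, not reproduced from the paper): if prior knowledge of the measurand is represented by N(y_0, u_0^2) and the measurement result by N(y_m, u_m^2), the posterior standard uncertainty never exceeds either input, which is the mechanism behind the reduction in expanded uncertainty reported above:

        u_{\mathrm{post}}^{2} = \left( \frac{1}{u_0^{2}} + \frac{1}{u_m^{2}} \right)^{-1},
        \qquad
        y_{\mathrm{post}} = u_{\mathrm{post}}^{2} \left( \frac{y_0}{u_0^{2}} + \frac{y_m}{u_m^{2}} \right)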

  12. A Strategy for Uncertainty Visualization Design

    DTIC Science & Technology

    2009-10-01

    As an example application of the UVDS, it is applied to current research regarding uncertainty visualization for the Canadian... This work is part of a series of technical memoranda (TMs) aimed at creating foundational documents on the topic of uncertainty visualization which can be used in defence applications.

  13. UK coastal flood risk; understanding the uncertainty

    NASA Astrophysics Data System (ADS)

    Lewis, Matt; Bates, Paul; Horsburgh, Kevin; Smith, Ros

    2010-05-01

    The sensitivity of flood risk mapping to the major sources of future climate uncertainty was investigated by propagating these uncertainties through a LISFLOOD inundation model of a significant flood event on the North Somerset coast, in the west of the UK. The largest source of uncertainty was found to be the effect of the global Mean Sea Level rise range of 18-59 cm (as reported by the Intergovernmental Panel on Climate Change), with an approximate upper limit of 1 m, by 2100. Therefore, MSL rise uncertainty needs to be quantified in future flood risk predictions. However, the uncertainty of the storm tide height along the coastline (i.e. the maximum water-level at the coast excluding wave effects) was found to significantly affect our results. Our evidence suggests that the current flood mapping approach of forcing the inundation model with an extreme water-level of constant return period is incorrect. We present a new technique which is based on the spatial characteristics of real events. This provides a more reliable spatial treatment of the storm tide uncertainty. The uncertainty of land roughness coefficients (0.018-0.09 for the study area, depending upon land use), used within the inundation model to control flood wave propagation, was found to affect inundation extents, especially for larger inundation events. However, the sensitivity to roughness uncertainty was found to be much smaller than that to other factors, such as Mean Sea Level rise uncertainty. We present the results of propagating these uncertainties through an inundation model and develop probabilistic techniques to quantify these sources of future flood risk uncertainty. Keywords: future flood risk, uncertainty, inundation, LISFLOOD, sea-level rise, climate change

  14. Dealing with uncertainties in angles-only initial orbit determination

    NASA Astrophysics Data System (ADS)

    Armellin, Roberto; Di Lizia, Pierluigi; Zanetti, Renato

    2016-08-01

    A method to deal with uncertainties in initial orbit determination (IOD) is presented. This is based on the use of Taylor differential algebra (DA) to nonlinearly map uncertainties from the observation space to the state space. When a minimum set of observations is available, DA is used to expand the solution of the IOD problem in Taylor series with respect to measurement errors. When more observations are available, high order inversion tools are exploited to obtain full state pseudo-observations at a common epoch. The mean and covariance of these pseudo-observations are nonlinearly computed by evaluating the expectation of high order Taylor polynomials. Finally, a linear scheme is employed to update the current knowledge of the orbit. Angles-only observations are considered and simplified Keplerian dynamics adopted to ease the explanation. Three test cases of orbit determination of artificial satellites in different orbital regimes are presented to discuss the features and performance of the proposed methodology.

  15. Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III

    2004-01-01

    A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first- and second-order sensitivity derivatives. For each robust optimization, the effects of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
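
    A minimal sketch of the first-order statistical moment (first-order second-moment) propagation described above, applied to a cheap placeholder function rather than a CFD code: the output mean is approximated at the input means, and the output variance from finite-difference sensitivity derivatives, assuming independent normally distributed inputs.

        # First-order second-moment propagation with finite-difference sensitivities;
        # "aero_output" is a stand-in for a CFD code output (e.g. a lift-like quantity).
        import numpy as np

        def aero_output(x):
            mach, alpha = x                       # hypothetical input variables
            return 0.1 + 0.08 * alpha + 0.02 * mach * alpha ** 2

        mean = np.array([0.5, 2.0])               # input means (hypothetical)
        std = np.array([0.01, 0.1])               # input standard deviations (hypothetical)

        f0 = aero_output(mean)
        grad = np.zeros_like(mean)
        h = 1e-6
        for i in range(mean.size):
            xp = mean.copy()
            xp[i] += h
            grad[i] = (aero_output(xp) - f0) / h  # d(output)/d(input_i)

        var_f = np.sum((grad * std) ** 2)         # independent inputs
        print("expected value ~", f0, " standard deviation ~", np.sqrt(var_f))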

  16. “Stringy” coherent states inspired by generalized uncertainty principle

    NASA Astrophysics Data System (ADS)

    Ghosh, Subir; Roy, Pinaki

    2012-05-01

    Coherent States with Fractional Revival property, that explicitly satisfy the Generalized Uncertainty Principle (GUP), have been constructed in the context of Generalized Harmonic Oscillator. The existence of such states is essential in motivating the GUP based phenomenological results present in the literature which otherwise would be of purely academic interest. The effective phase space is Non-Canonical (or Non-Commutative in popular terminology). Our results have a smooth commutative limit, equivalent to Heisenberg Uncertainty Principle. The Fractional Revival time analysis yields an independent bound on the GUP parameter. Using this and similar bounds obtained here, we derive the largest possible value of the (GUP induced) minimum length scale. Mandel parameter analysis shows that the statistics is Sub-Poissonian. Correspondence Principle is deformed in an interesting way. Our computational scheme is very simple as it requires only first order corrected energy values and undeformed basis states.
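
    For context, a commonly used form of the GUP is the Kempf-Mangano-Mann deformation shown below; the abstract does not state which variant the authors adopt, so this is quoted only as the standard reference form:

        [\hat{x}, \hat{p}] = i\hbar \left( 1 + \beta \hat{p}^{2} \right)
        \;\Longrightarrow\;
        \Delta x \, \Delta p \ge \frac{\hbar}{2} \left( 1 + \beta (\Delta p)^{2} + \beta \langle \hat{p} \rangle^{2} \right),
        \qquad
        (\Delta x)_{\min} = \hbar \sqrt{\beta}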

  17. Learning Weight Uncertainty with Stochastic Gradient MCMC for Shape Classification

    SciTech Connect

    Li, Chunyuan; Stevens, Andrew J.; Chen, Changyou; Pu, Yunchen; Gan, Zhe; Carin, Lawrence

    2016-08-10

    Learning the representation of shape cues in 2D & 3D objects for recognition is a fundamental task in computer vision. Deep neural networks (DNNs) have shown promising performance on this task. Due to the large variability of shapes, accurate recognition relies on good estimates of model uncertainty, ignored in traditional training of DNNs, typically learned via stochastic optimization. This paper leverages recent advances in stochastic gradient Markov Chain Monte Carlo (SG-MCMC) to learn weight uncertainty in DNNs. It yields principled Bayesian interpretations for the commonly used Dropout/DropConnect techniques and incorporates them into the SG-MCMC framework. Extensive experiments on 2D & 3D shape datasets and various DNN models demonstrate the superiority of the proposed approach over stochastic optimization. Our approach yields higher recognition accuracy when used in conjunction with Dropout and Batch-Normalization.
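
    A minimal sketch of one SG-MCMC variant, stochastic gradient Langevin dynamics (SGLD), applied to a toy Bayesian linear model rather than a DNN; the paper's actual samplers, architectures, and Dropout/DropConnect connections are not reproduced here.

        # SGLD on a toy Bayesian linear model:
        #   w <- w + (eps/2) * (grad log prior + (N/n) * minibatch grad log lik) + N(0, eps)
        import numpy as np

        rng = np.random.default_rng(1)
        N, d = 1000, 3
        X = rng.normal(size=(N, d))
        w_true = np.array([1.0, -2.0, 0.5])
        y = X @ w_true + 0.3 * rng.normal(size=N)

        noise_var, prior_var = 0.3 ** 2, 10.0
        eps, batch = 1e-5, 100                    # fixed small step size for illustration
        w = np.zeros(d)
        samples = []
        for t in range(10000):
            idx = rng.choice(N, size=batch, replace=False)
            grad_log_lik = (N / batch) * X[idx].T @ (y[idx] - X[idx] @ w) / noise_var
            grad_log_prior = -w / prior_var
            w = (w + 0.5 * eps * (grad_log_prior + grad_log_lik)
                 + rng.normal(scale=np.sqrt(eps), size=d))
            if t >= 2000:                         # discard burn-in
                samples.append(w.copy())

        samples = np.array(samples)
        print("posterior mean:", samples.mean(axis=0).round(3))
        print("posterior std :", samples.std(axis=0).round(3))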

  18. Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods

    NASA Astrophysics Data System (ADS)

    Davis, A. D.

    2015-12-01

    The marine ice sheet instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI)---here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost. Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. We use global sensitivity

  19. Uncertainty propagation within the UNEDF models

    NASA Astrophysics Data System (ADS)

    Haverinen, T.; Kortelainen, M.

    2017-04-01

    The parameters of the nuclear energy density functional have to be adjusted to experimental data. As a result, they carry a certain uncertainty which then propagates to calculated values of observables. In the present work we quantify the statistical uncertainties of binding energies, proton quadrupole moments and proton matter radius for three UNEDF Skyrme energy density functionals by taking advantage of the knowledge of the model parameter uncertainties. We find that the uncertainty of the UNEDF models increases rapidly when going towards proton- or neutron-rich nuclei. We also investigate the impact of each model parameter on the total error budget.
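
    The standard linear ("sandwich") rule for this kind of statistical parameter uncertainty propagation, stated here generically rather than as the paper's exact prescription, maps the covariance matrix C of the fitted functional parameters p to the variance of a calculated observable O:

        \sigma_{O}^{2} \;\approx\; \sum_{i,j} \frac{\partial O}{\partial p_i} \, C_{ij} \, \frac{\partial O}{\partial p_j}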

  20. Uncertainty relations for general unitary operators

    NASA Astrophysics Data System (ADS)

    Bagchi, Shrobona; Pati, Arun Kumar

    2016-10-01

    We derive several uncertainty relations for two arbitrary unitary operators acting on physical states of a Hilbert space. We show that our bounds are tighter in various cases than the ones existing in the current literature. Using the uncertainty relation for the unitary operators, we obtain the tight state-independent lower bound for the uncertainty of two Pauli observables and anticommuting observables in higher dimensions. With regard to the minimum-uncertainty states, we derive the minimum-uncertainty state equation by the analytic method and relate this to the ground-state problem of the Harper Hamiltonian. Furthermore, the higher-dimensional limit of the uncertainty relations and minimum-uncertainty states are explored. From an operational point of view, we show that the uncertainty in the unitary operator is directly related to the visibility of quantum interference in an interferometer where one arm of the interferometer is affected by a unitary operator. This shows a principle of preparation uncertainty, i.e., for any quantum system, the amount of visibility for two general noncommuting unitary operators is nontrivially upper bounded.

  1. Finite Frames and Graph Theoretic Uncertainty Principles

    NASA Astrophysics Data System (ADS)

    Koprowski, Paul J.

    The subject of analytical uncertainty principles is an important field within harmonic analysis, quantum physics, and electrical engineering. We explore uncertainty principles in the context of the graph Fourier transform, and we prove additive results analogous to the multiplicative version of the classical uncertainty principle. We establish additive uncertainty principles for finite Parseval frames. Lastly, we examine the feasibility region of simultaneous values of the norms of a graph differential operator acting on a function f ∈ l2(G) and its graph Fourier transform.
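
    A minimal sketch of the graph Fourier transform used in this setting: the eigenvectors of the graph Laplacian play the role of Fourier modes, and a signal on the vertices is transformed by projecting onto them. The small example graph and signal are arbitrary.

        # Graph Fourier transform via the eigendecomposition of the graph Laplacian.
        import numpy as np

        # Adjacency matrix of a small undirected graph (a 4-cycle).
        A = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)
        L = np.diag(A.sum(axis=1)) - A        # combinatorial Laplacian

        eigvals, U = np.linalg.eigh(L)        # eigenvalues act as "graph frequencies"

        f = np.array([1.0, 0.0, -1.0, 0.0])   # a signal f in l2(G)
        f_hat = U.T @ f                        # graph Fourier transform
        f_back = U @ f_hat                     # inverse transform (U is orthogonal)

        print("graph frequencies:", np.round(eigvals, 3))
        print("Parseval check   :", np.linalg.norm(f), "=", np.linalg.norm(f_hat))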

  2. Force calibration using errors-in-variables regression and Monte Carlo uncertainty evaluation

    NASA Astrophysics Data System (ADS)

    Bartel, Thomas; Stoudt, Sara; Possolo, Antonio

    2016-06-01

    An errors-in-variables regression method is presented as an alternative to the ordinary least-squares regression computation currently employed for determining the calibration function for force measuring instruments from data acquired during calibration. A Monte Carlo uncertainty evaluation for the errors-in-variables regression is also presented. The corresponding function (which we call measurement function, often called analysis function in gas metrology) necessary for the subsequent use of the calibrated device to measure force, and the associated uncertainty evaluation, are also derived from the calibration results. Comparisons are made, using real force calibration data, between the results from the errors-in-variables and ordinary least-squares analyses, as well as between the Monte Carlo uncertainty assessment and the conventional uncertainty propagation employed at the National Institute of Standards and Technology (NIST). The results show that the errors-in-variables analysis properly accounts for the uncertainty in the applied calibrated forces, and that the Monte Carlo method, owing to its intrinsic ability to model uncertainty contributions accurately, yields a better representation of the calibration uncertainty throughout the transducer’s force range than the methods currently in use. These improvements notwithstanding, the differences between the results produced by the current and by the proposed new methods generally are small because the relative uncertainties of the inputs are small and most contemporary load cells respond approximately linearly to such inputs. For this reason, there will be no compelling need to revise any of the force calibration reports previously issued by NIST.
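
    A minimal Monte Carlo uncertainty evaluation in the spirit of GUM Supplement 1 (a sketch with toy numbers, not the NIST implementation): both the applied forces and the instrument readings are perturbed within assumed standard uncertainties and the calibration line is refitted each time; ordinary least squares is used here as a stand-in for the errors-in-variables fit described in the abstract.

        # Monte Carlo evaluation of calibration-curve parameter uncertainty.
        import numpy as np

        rng = np.random.default_rng(2)
        force = np.array([10.0, 20.0, 30.0, 40.0, 50.0])          # applied forces (hypothetical)
        reading = np.array([0.101, 0.199, 0.302, 0.398, 0.501])   # transducer output (hypothetical)
        u_force, u_reading = 0.02, 0.0005                          # standard uncertainties (assumed)

        slopes, intercepts = [], []
        for _ in range(10000):
            f = force + rng.normal(0.0, u_force, force.size)
            r = reading + rng.normal(0.0, u_reading, reading.size)
            b, a = np.polyfit(f, r, 1)                             # reading = a + b * force
            slopes.append(b)
            intercepts.append(a)

        print("slope    :", np.mean(slopes), "+/-", np.std(slopes, ddof=1))
        print("intercept:", np.mean(intercepts), "+/-", np.std(intercepts, ddof=1))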

  3. An Efficient Deterministic Approach to Model-based Prediction Uncertainty Estimation

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the future inputs to the system. Prognostics algorithms must account for this inherent uncertainty. In addition, these algorithms never know exactly the state of the system at the desired time of prediction, or the exact model describing the future evolution of the system, accumulating additional uncertainty into the predicted EOL. Prediction algorithms that do not account for these sources of uncertainty are misrepresenting the EOL and can lead to poor decisions based on their results. In this paper, we explore the impact of uncertainty in the prediction problem. We develop a general model-based prediction algorithm that incorporates these sources of uncertainty, and propose a novel approach to efficiently handle uncertainty in the future input trajectories of a system by using the unscented transformation. Using this approach, we are not only able to reduce the computational load but also estimate the bounds of uncertainty in a deterministic manner, which can be useful to consider during decision-making. Using a lithium-ion battery as a case study, we perform several simulation-based experiments to explore these issues, and validate the overall approach using experimental data from a battery testbed.
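
    A minimal sketch of the unscented transformation used above to propagate uncertainty deterministically: 2n+1 sigma points are pushed through a nonlinear function and recombined into an output mean and variance. The nonlinear map below is a placeholder, not the battery EOL model.

        # Basic (unscaled) unscented transform for a Gaussian input and scalar output.
        import numpy as np

        def unscented_transform(func, mean, cov, kappa=None):
            n = mean.size
            if kappa is None:
                kappa = 3.0 - n                      # common choice for Gaussian inputs
            S = np.linalg.cholesky((n + kappa) * cov)
            pts = [mean] + [mean + S[:, i] for i in range(n)] \
                         + [mean - S[:, i] for i in range(n)]
            w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
            w[0] = kappa / (n + kappa)
            ys = np.array([func(x) for x in pts])    # deterministic model evaluations
            y_mean = w @ ys
            y_var = w @ (ys - y_mean) ** 2
            return y_mean, y_var

        f = lambda x: x[0] ** 2 + np.sin(x[1])       # placeholder nonlinear map
        mean = np.array([1.0, 0.5])
        cov = np.diag([0.04, 0.09])
        print(unscented_transform(f, mean, cov))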

  4. A Multi-Band Uncertainty Set Based Robust SCUC With Spatial and Temporal Budget Constraints

    SciTech Connect

    Dai, Chenxi; Wu, Lei; Wu, Hongyu

    2016-11-01

    The dramatic increase of renewable energy resources in recent years, together with the long-existing load forecast errors and increasingly involved price sensitive demands, has introduced significant uncertainties into power systems operation. In order to guarantee the operational security of power systems with such uncertainties, robust optimization has been extensively studied in security-constrained unit commitment (SCUC) problems, for immunizing the system against worst uncertainty realizations. However, traditional robust SCUC models with single-band uncertainty sets may yield over-conservative solutions in most cases. This paper proposes a multi-band robust model to accurately formulate various uncertainties with higher resolution. By properly tuning band intervals and weight coefficients of individual bands, the proposed multi-band robust model can rigorously and realistically reflect spatial/temporal relationships and asymmetric characteristics of various uncertainties, and in turn could effectively leverage the tradeoff between robustness and economics of robust SCUC solutions. The proposed multi-band robust SCUC model is solved by Benders decomposition (BD) and outer approximation (OA), while taking the advantage of integral property of the proposed multi-band uncertainty set. In addition, several accelerating techniques are developed for enhancing the computational performance and the convergence speed. Numerical studies on a 6-bus system and the modified IEEE 118-bus system verify the effectiveness of the proposed robust SCUC approach for enhancing uncertainty modeling capabilities and mitigating conservativeness of the robust SCUC solution.

  5. Uncertainty characterization in the retrieval of an atmospheric point release

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Kumar, Pramod; Turbelin, Grégory; Rani, Raj

    2017-03-01

    The study proposes a methodology within a recent inversion technique, called Renormalization, to characterize the uncertainties in the reconstruction of a point source. Estimates are derived for the inversion error, the degree of model fit to the measurements (model determination coefficient) and the confidence intervals for the retrieved point source parameters (mainly location and strength). The inversion error is reflected through an angular estimate which measures the deviation between the measured and predicted concentrations. The uncertainty estimation methodology is evaluated for point source reconstruction studies using real measurements from two field experiments, the Fusion Field Trials 2007 (FFT07) in flat terrain and the Mock Urban Setting Test (MUST) in urban-like terrain. In the FFT07 and MUST experiments, the point source location is retrieved with an average Euclidean distance of 22 m and 15 m, respectively. The source strength is retrieved, on average, within a factor of 1.5 in both datasets. The inversion error is observed as 24° and 21° in the FFT07 and MUST experiments, respectively. The 95% confidence interval estimates show that the uncertainty in the retrieved parameters is relatively large in approximately 50% of FFT07 and 30% of MUST trials, in spite of their closeness to the true source parameters. For comparison, the interval estimates are also compared with a more general method of uncertainty estimation, Residual Bootstrap Sampling. In most of the trials, we observed that the interval estimates from the present method are comparable (within 10-20% variation) to the bootstrap estimates. The proposed methodology provides near-accurate and computationally efficient uncertainty estimates in comparison to methods based on Hessian and sampling procedures.

  6. A Critical Appraisal of Uncertainty Challenges in Climate Change (Invited)

    NASA Astrophysics Data System (ADS)

    Ghanem, R.

    2010-12-01

    Climate characterization, let alone prediction, is fraught with technological and scientific challenges. While some of these are procedural, others are fundamental. The net effect of these challenges is the introduction of uncertainty at many junctures throughout the process of climate prediction. A quantification of this uncertainty is paramount for the confident design of effective mitigation actions, especially as these pertain to the dynamics of socio-economic systems intricately coupled to the future climate. Uncertainties are compounded as the chain of custody on evidence is transferred from a sparsely observed reality through a series of actions where assumptions are indiscriminately applied. These actions include modeling of both 1) instruments and 2) physical reality (which includes both the choice of reality to model and the mathematics by which to model it), 3) assimilation of data through those models in accordance with specific criteria, 4) the choice of probabilistic models by which to parametrize the weight of evidence, 5) statistical assumptions associated with the deterministic propagation of this parametrization, and 6) the synthesis of reduced models that are adapted to such constraints as computational resources and paucity of data. In climate modeling, this list is further complicated by the necessity or practice of having recourse to an ensemble of models. This talk will interpret current concepts, methods, tools and practice used in uncertainty quantification (UQ) for climate prediction in light of recent UQ developments in other domains of science and engineering. Issues of uncertainty identification, characterization, propagation, and management are addressed. The talk will highlight the requirements that must be met in order to certify and validate various statements associated with climate change science. These requirements can be viewed as drivers for innovation in science and technology. These required innovations will also be described.

  7. Solving navigational uncertainty using grid cells on robots.

    PubMed

    Milford, Michael J; Wiles, Janet; Wyeth, Gordon F

    2010-11-11

    To successfully navigate their habitats, many mammals use a combination of two mechanisms, path integration and calibration using landmarks, which together enable them to estimate their location and orientation, or pose. In large natural environments, both these mechanisms are characterized by uncertainty: the path integration process is subject to the accumulation of error, while landmark calibration is limited by perceptual ambiguity. It remains unclear how animals form coherent spatial representations in the presence of such uncertainty. Navigation research using robots has determined that uncertainty can be effectively addressed by maintaining multiple probabilistic estimates of a robot's pose. Here we show how conjunctive grid cells in dorsocaudal medial entorhinal cortex (dMEC) may maintain multiple estimates of pose using a brain-based robot navigation system known as RatSLAM. Based both on rodent spatially-responsive cells and functional engineering principles, the cells at the core of the RatSLAM computational model have similar characteristics to rodent grid cells, which we demonstrate by replicating the seminal Moser experiments. We apply the RatSLAM model to a new experimental paradigm designed to examine the responses of a robot or animal in the presence of perceptual ambiguity. Our computational approach enables us to observe short-term population coding of multiple location hypotheses, a phenomenon which would not be easily observable in rodent recordings. We present behavioral and neural evidence demonstrating that the conjunctive grid cells maintain and propagate multiple estimates of pose, enabling the correct pose estimate to be resolved over time even without uniquely identifying cues. While recent research has focused on the grid-like firing characteristics, accuracy and representational capacity of grid cells, our results identify a possible critical and unique role for conjunctive grid cells in filtering sensory uncertainty. We anticipate our

  8. Evolution Time and Energy Uncertainty

    ERIC Educational Resources Information Center

    Boykin, Timothy B.; Kharche, Neerav; Klimeck, Gerhard

    2007-01-01

    Often one needs to calculate the evolution time of a state under a Hamiltonian with no explicit time dependence when only numerical methods are available. In cases such as this, the usual application of Fermi's golden rule and first-order perturbation theory is inadequate as well as being computationally inconvenient. Instead, what one needs are…

  9. Final Report. Analysis and Reduction of Complex Networks Under Uncertainty

    SciTech Connect

    Marzouk, Youssef M.; Coles, T.; Spantini, A.; Tosatto, L.

    2013-09-30

    The project was a collaborative effort among MIT, Sandia National Laboratories (local PI Dr. Habib Najm), the University of Southern California (local PI Prof. Roger Ghanem), and The Johns Hopkins University (local PI Prof. Omar Knio, now at Duke University). Our focus was the analysis and reduction of large-scale dynamical systems emerging from networks of interacting components. Such networks underlie myriad natural and engineered systems. Examples important to DOE include chemical models of energy conversion processes, and elements of national infrastructure—e.g., electric power grids. Time scales in chemical systems span orders of magnitude, while infrastructure networks feature both local and long-distance connectivity, with associated clusters of time scales. These systems also blend continuous and discrete behavior; examples include saturation phenomena in surface chemistry and catalysis, and switching in electrical networks. Reducing size and stiffness is essential to tractable and predictive simulation of these systems. Computational singular perturbation (CSP) has been effectively used to identify and decouple dynamics at disparate time scales in chemical systems, allowing reduction of model complexity and stiffness. In realistic settings, however, model reduction must contend with uncertainties, which are often greatest in large-scale systems most in need of reduction. Uncertainty is not limited to parameters; one must also address structural uncertainties—e.g., whether a link is present in a network—and the impact of random perturbations, e.g., fluctuating loads or sources. Research under this project developed new methods for the analysis and reduction of complex multiscale networks under uncertainty, by combining computational singular perturbation (CSP) with probabilistic uncertainty quantification. CSP yields asymptotic approximations of reduced-dimensionality “slow manifolds” on which a multiscale dynamical system evolves. Introducing

  10. Satellite Re-entry Modeling and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Horsley, M.

    2012-09-01

    LEO trajectory modeling is a fundamental aerospace capability and has applications in many areas of aerospace, such as maneuver planning, sensor scheduling, re-entry prediction, collision avoidance, risk analysis, and formation flying. Somewhat surprisingly, modeling the trajectory of an object in low Earth orbit is still a challenging task. This is primarily due to the large uncertainty in the upper atmospheric density, about 15-20% (1-sigma) for most thermosphere models. Other contributions come from our inability to precisely model future solar and geomagnetic activities, the potentially unknown shape, material construction and attitude history of the satellite, and intermittent, noisy tracking data. Current methods to predict a satellite's re-entry trajectory typically involve making a single prediction, with the uncertainty dealt with in an ad-hoc manner, usually based on past experience. However, due to the extreme speed of a LEO satellite, even small uncertainties in the re-entry time translate into a very large uncertainty in the location of the re-entry event. Currently, most methods simply update the re-entry estimate on a regular basis. This results in a wide range of estimates that are literally spread over the entire globe. With no understanding of the underlying distribution of potential impact points, the sequence of impact points predicted by the current methodology is largely useless until just a few hours before re-entry. This paper will discuss the development of a set of High Performance Computing (HPC)-based capabilities to support near real-time quantification of the uncertainty inherent in uncontrolled satellite re-entries. An appropriate management of the uncertainties is essential for a rigorous treatment of the re-entry/LEO trajectory problem. The development of HPC-based tools for re-entry analysis is important as it will allow a rigorous and robust approach to risk assessment by decision makers in an operational setting. Uncertainty

  11. Data Fusion: A decision analysis tool that quantifies geological and parametric uncertainty

    SciTech Connect

    Porter, D.W.

    1996-04-01

    Engineering projects such as siting waste facilities and performing remediation are often driven by geological and hydrogeological uncertainties. Geological understanding and hydrogeological parameters such as hydraulic conductivity are needed to achieve reliable engineering design. Information from non-invasive and minimally invasive data sets offers potential for reduction in uncertainty, but a single data type does not usually meet all needs. Data Fusion uses Bayesian statistics to update prior knowledge with information from diverse data sets as the data are acquired. Prior knowledge takes the form of first principles models (e.g., groundwater flow) and spatial continuity models for heterogeneous properties. The variability of heterogeneous properties is modeled in a form motivated by statistical physics as a Markov random field. A computer reconstruction of targets of interest is produced within a quantified statistical uncertainty. The computed uncertainty provides a rational basis for identifying data gaps and for assessing data worth to optimize data acquisition. Further, the computed uncertainty provides a way to determine the confidence of achieving adequate safety margins in engineering design. Beyond design, Data Fusion provides the basis for real time computer monitoring of remediation. Working with the DOE Office of Technology Development (OTD), the author has developed and patented a Data Fusion Workstation system that has been used on jobs at the Hanford, Savannah River, Pantex and Fernald DOE sites. Further applications include an army depot at Letterkenny, PA and commercial industrial sites.

  12. Uncertainty reasoning in expert systems

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik

    1993-01-01

    Intelligent control is a very successful way to transform the expert's knowledge of the type 'if the velocity is big and the distance from the object is small, hit the brakes and decelerate as fast as possible' into an actual control. To apply this transformation, one must choose appropriate methods for reasoning with uncertainty, i.e., one must: (1) choose the representation for words like 'small', 'big'; (2) choose operations corresponding to 'and' and 'or'; (3) choose a method that transforms the resulting uncertain control recommendations into a precise control strategy. The wrong choice can drastically affect the quality of the resulting control, so the problem of choosing the right procedure is very important. From a mathematical viewpoint these choice problems correspond to non-linear optimization and are therefore extremely difficult. In this project, a new mathematical formalism (based on group theory) is developed that allows us to solve the problem of optimal choice and thus: (1) explain why the existing choices are really the best (in some situations); (2) explain a rather mysterious fact that fuzzy control (i.e., control based on the experts' knowledge) is often better than the control by these same experts; and (3) give choice recommendations for the cases when traditional choices do not work.

  13. The Species Delimitation Uncertainty Principle

    PubMed Central

    Adams, Byron J.

    2001-01-01

    If, as Einstein said, "it is the theory which decides what we can observe," then "the species problem" could be solved by simply improving our theoretical definition of what a species is. However, because delimiting species entails predicting the historical fate of evolutionary lineages, species appear to behave according to the Heisenberg Uncertainty Principle, which states that the most philosophically satisfying definitions of species are the least operational, and as species concepts are modified to become more operational they tend to lose their philosophical integrity. Can species be delimited operationally without losing their philosophical rigor? To mitigate the contingent properties of species that tend to make them difficult for us to delimit, I advocate a set of operations that takes into account the prospective nature of delimiting species. Given the fundamental role of species in studies of evolution and biodiversity, I also suggest that species delimitation proceed within the context of explicit hypothesis testing, like other scientific endeavors. The real challenge is not so much the inherent fallibility of predicting the future but rather adequately sampling and interpreting the evidence available to us in the present. PMID:19265874

  14. Communicating Storm Surge Forecast Uncertainty

    NASA Astrophysics Data System (ADS)

    Troutman, J. A.; Rhome, J.

    2015-12-01

    When it comes to tropical cyclones, storm surge is often the greatest threat to life and property along the coastal United States. The coastal population density has dramatically increased over the past 20 years, putting more people at risk. Informing emergency managers, decision-makers and the public about the potential for wind driven storm surge, however, has been extremely difficult. Recently, the Storm Surge Unit at the National Hurricane Center in Miami, Florida has developed a prototype experimental storm surge watch/warning graphic to help communicate this threat more effectively by identifying areas most at risk for life-threatening storm surge. This prototype is the initial step in the transition toward a NWS storm surge watch/warning system and highlights the inundation levels that have a 10% chance of being exceeded. The guidance for this product is the Probabilistic Hurricane Storm Surge (P-Surge) model, which predicts the probability of various storm surge heights by statistically evaluating numerous SLOSH model simulations. Questions remain, however, if exceedance values in addition to the 10% may be of equal importance to forecasters. P-Surge data from 2014 Hurricane Arthur is used to ascertain the practicality of incorporating other exceedance data into storm surge forecasts. Extracting forecast uncertainty information through analyzing P-surge exceedances overlaid with track and wind intensity forecasts proves to be beneficial for forecasters and decision support.

  15. On the dominant uncertainty source of climate change projections at the local scale

    NASA Astrophysics Data System (ADS)

    Fatichi, Simone; Ivanov, Valeriy; Paschalis, Athanasios; Molnar, Peter; Rimkus, Stefan; Kim, Jongho; Peleg, Nadav; Burlando, Paolo; Caporali, Enrica

    2016-04-01

    Decision makers and stakeholders are usually concerned about climate change projections at local spatial scales and fine temporal resolutions. This contrasts with the reliability of climate models, which is typically higher at the global and regional scales. Therefore, there is a demand for advanced methodologies that offer the capability of transferring predictions of climate models and the associated uncertainty to scales commensurate with practical applications and for higher order statistics (e.g., a few square kilometres and sub-daily scales). A stochastic downscaling technique that makes use of an hourly weather generator (AWE-GEN) and of a Bayesian methodology to weight realizations from different climate models is used to generate local-scale meteorological time series of plausible "futures". We computed factors of change from realizations of 32 climate models used in the Coupled Model Intercomparison Project Phase 5 (CMIP5) and for different emission scenarios (RCP 4.5 and RCP 8.5). Future climate projections for several meteorological variables (precipitation, air temperature, relative humidity, shortwave radiation) are simulated at three locations characterized by remarkably different climates: Zurich (Switzerland), Miami and San Francisco (USA). The methodology is designed to partition three main sources of uncertainty: uncertainty due to climate models (model epistemic uncertainty), anthropogenic forcings (scenario uncertainty), and internal climate variability (stochastic uncertainty). The three types of uncertainty sources are considered as dependent, implicitly accounting for possible co-variances among the sources. For air temperature, the magnitude of the different uncertainty sources is comparable for mid-of-the-century projections, while scenario uncertainty dominates at large lead times. The dominant source of uncertainty for changes in precipitation mean and extremes is internal climate variability, which accounts for more than 80% of the total

  16. PIV Uncertainty Methodologies for CFD Code Validation at the MIR Facility

    SciTech Connect

    Piyush Sabharwall; Richard Skifton; Carl Stoots; Eung Soo Kim; Thomas Conder

    2013-12-01

    Currently, computational fluid dynamics (CFD) is widely used in the nuclear thermal hydraulics field for design and safety analyses. To validate CFD codes, high-quality multi-dimensional flow field data are essential. The Matched Index of Refraction (MIR) Flow Facility at Idaho National Laboratory has a unique capability to contribute to the development of validated CFD codes through the use of Particle Image Velocimetry (PIV). The significance of the MIR facility is that it permits non-intrusive velocity measurement techniques, such as PIV, through complex models without requiring probes and other instrumentation that disturb the flow. At the heart of any PIV calculation is the cross-correlation, which is used to estimate the displacement of particles in some small part of the image over the time span between two images. This image displacement is indicated by the location of the largest correlation peak. In the MIR facility, uncertainty quantification is a challenging task due to the use of optical measurement techniques. Currently, this study is developing a reliable method to analyze the uncertainty and sensitivity of the measured data, along with a computer code to automate that analysis. The main objective of this study is to develop a well-established uncertainty quantification method for the MIR Flow Facility, which involves many complicated uncertainty factors. In this study, the uncertainty sources are resolved in depth by categorizing them into uncertainties from the MIR flow loop and from the PIV system (including particle motion, image distortion, and data processing). Then, each uncertainty source is mathematically modeled or adequately defined. Finally, this study will provide a method and procedure to quantify the experimental uncertainty in the MIR Flow Facility, with sample test results.
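
    A minimal sketch of the cross-correlation step described above: the FFT-based cross-correlation of two interrogation windows is computed and the location of the largest peak gives the integer-pixel displacement. No sub-pixel peak fit, calibration, or uncertainty analysis is attempted here, and the synthetic images are random noise rather than real particle images.

        # FFT-based cross-correlation of two PIV interrogation windows; the peak
        # location gives the (integer-pixel) displacement between the two frames.
        import numpy as np

        rng = np.random.default_rng(3)
        win = 64
        frame_a = rng.random((win, win))
        true_shift = (3, -5)                                  # rows, cols (synthetic)
        frame_b = np.roll(frame_a, true_shift, axis=(0, 1))   # frame A displaced

        # Circular cross-correlation via FFT; subtract the mean to sharpen the peak.
        fa = np.fft.fft2(frame_a - frame_a.mean())
        fb = np.fft.fft2(frame_b - frame_b.mean())
        corr = np.fft.fftshift(np.real(np.fft.ifft2(fa.conj() * fb)))

        peak = np.unravel_index(np.argmax(corr), corr.shape)
        shift = (peak[0] - win // 2, peak[1] - win // 2)
        print("estimated displacement (rows, cols):", shift)  # expect (3, -5)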

  17. Uncertainty propagation for nonlinear vibrations: A non-intrusive approach

    NASA Astrophysics Data System (ADS)

    Panunzio, A. M.; Salles, Loic; Schwingshackl, C. W.

    2017-02-01

    The propagation of uncertain input parameters in a linear dynamic analysis is reasonably well established today, but with the focus of dynamic analysis shifting towards nonlinear systems, new approaches are required to compute the uncertain nonlinear responses. A combination of stochastic methods (Polynomial Chaos Expansion, PCE) with an Asymptotic Numerical Method (ANM) for the solution of the nonlinear dynamic systems is presented to predict the propagation of random input uncertainties and assess their influence on the nonlinear vibrational behaviour of a system. The proposed method allows the computation of stochastic resonance frequencies and peak amplitudes based on multiple input uncertainties, leading to a series of uncertain nonlinear dynamic responses. One of the main challenges when using the PCE is the Gibbs phenomenon, which can heavily impact the resulting stochastic nonlinear response by introducing spurious oscillations. A novel technique to avoid the Gibbs phenomenon is presented in this paper, leading to high-quality frequency response predictions. A comparison of the proposed stochastic nonlinear analysis technique to traditional Monte Carlo simulations demonstrates comparable accuracy at a significantly reduced computational cost, thereby validating the proposed approach.
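
    A minimal non-intrusive PCE sketch for a scalar response of a single standard normal input, using probabilists' Hermite polynomials and Gauss-Hermite quadrature; the paper's coupling of the PCE with the asymptotic numerical method and its Gibbs-mitigation technique are not reproduced here, and the response function is a placeholder.

        # Non-intrusive polynomial chaos expansion of f(X), X ~ N(0,1), using
        # probabilists' Hermite polynomials He_k (orthogonality: E[He_j He_k] = k! delta_jk).
        import numpy as np
        from numpy.polynomial import hermite_e as He
        from math import factorial, sqrt, pi

        def f(x):                                  # placeholder nonlinear response
            return np.exp(0.3 * x) + 0.1 * x ** 2

        order, nquad = 6, 20
        x_q, w_q = He.hermegauss(nquad)            # nodes/weights for weight exp(-x^2/2)
        w_q = w_q / sqrt(2.0 * pi)                 # normalize to the N(0,1) density

        coeffs = np.array([
            np.sum(w_q * f(x_q) * He.hermeval(x_q, [0] * k + [1])) / factorial(k)
            for k in range(order + 1)
        ])

        mean_pce = coeffs[0]
        var_pce = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))

        xs = np.random.default_rng(4).normal(size=200_000)   # Monte Carlo check
        print("mean:", mean_pce, "MC:", f(xs).mean())
        print("var :", var_pce, "MC:", f(xs).var())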

  18. The detectability of brown dwarfs - Predictions and uncertainties

    NASA Technical Reports Server (NTRS)

    Nelson, L. A.; Rappaport, S.; Joss, P. C.

    1993-01-01

    In order to determine the likelihood for the detection of isolated brown dwarfs in ground-based observations as well as in future space-based astronomy missions, and in order to evaluate the significance of any detections that might be made, we must first know the expected surface density of brown dwarfs on the celestial sphere as a function of limiting magnitude, wavelength band, and Galactic latitude. It is the purpose of this paper to provide theoretical estimates of this surface density, as well as the range of uncertainty in these estimates resulting from various theoretical uncertainties. We first present theoretical cooling curves for low-mass stars that we have computed with the latest version of our stellar evolution code. We use our evolutionary results to compute theoretical brown-dwarf luminosity functions for a wide range of assumed initial mass functions and stellar birth rate functions. The luminosity functions, in turn, are utilized to compute theoretical surface density functions for brown dwarfs on the celestial sphere. We find, in particular, that for reasonable theoretical assumptions, the currently available upper bounds on the brown-dwarf surface density are consistent with the possibility that brown dwarfs contribute a substantial fraction of the mass of the Galactic disk.

  19. The need for model uncertainty analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phosphorous (P) loss models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. All P loss models, however, have an inherent amount of uncertainty associated with them. In this study, we conducted an uncertainty analysis with ...

  20. Assessment of Uncertainty-Infused Scientific Argumentation

    ERIC Educational Resources Information Center

    Lee, Hee-Sun; Liu, Ou Lydia; Pallant, Amy; Roohr, Katrina Crotts; Pryputniewicz, Sarah; Buck, Zoë E.

    2014-01-01

    Though addressing sources of uncertainty is an important part of doing science, it has largely been neglected in assessing students' scientific argumentation. In this study, we initially defined a scientific argumentation construct in four structural elements consisting of claim, justification, uncertainty qualifier, and uncertainty…

  1. Comparison of Classical and Quantum Mechanical Uncertainties.

    ERIC Educational Resources Information Center

    Peslak, John, Jr.

    1979-01-01

    Comparisons are made for the particle-in-a-box, the harmonic oscillator, and the one-electron atom. A classical uncertainty principle is derived and compared with its quantum-mechanical counterpart. The results are discussed in terms of the statistical interpretation of the uncertainty principle. (Author/BB)

  2. Uncertainties in the JPL planetary ephemeris

    NASA Astrophysics Data System (ADS)

    Folkner, W. M.

    2011-10-01

    The numerically integrated planetary ephemerides by JPL, IMCCE, and IPA are largely based on the same observation set and dynamical models. The differences between ephemerides are expected to be consistent within uncertainties. Uncertainties in the orbits of the major planets and the dwarf planet Pluto based on recent analysis at JPL are described.

  3. Worry, Intolerance of Uncertainty, and Statistics Anxiety

    ERIC Educational Resources Information Center

    Williams, Amanda S.

    2013-01-01

    Statistics anxiety is a problem for most graduate students. This study investigates the relationship between intolerance of uncertainty, worry, and statistics anxiety. Intolerance of uncertainty was significantly related to worry, and worry was significantly related to three types of statistics anxiety. Six types of statistics anxiety were…

  4. Methods of Dealing with Uncertainty: Panel Presentation.

    ERIC Educational Resources Information Center

    Pearce, Frank C.

    Rising energy costs, changing tax bases, increasing numbers of non-traditional students, and ever changing educational technology point to the fact that community college administrators will have to accept uncertainty as a normal planning component. Rather than ignoring uncertainty, a tendency that was evidenced in the reluctance of administrators…

  5. Disturbance, the uncertainty principle and quantum optics

    NASA Technical Reports Server (NTRS)

    Martens, Hans; Demuynck, Willem M.

    1993-01-01

    It is shown how a disturbance-type uncertainty principle can be derived from an uncertainty principle for joint measurements. To achieve this, we first clarify the meaning of 'inaccuracy' and 'disturbance' in quantum mechanical measurements. The case of photon number and phase is treated as an example, and it is applied to a quantum non-demolition measurement using the optical Kerr effect.

  6. Curriculum in Art Education: The Uncertainty Principle.

    ERIC Educational Resources Information Center

    Sullivan, Graeme

    1989-01-01

    Identifies curriculum as the pivotal link between theory and practice, noting that all stages of curriculum research and development are characterized by elements of uncertainty. States that this uncertainty principle reflects the reality of practice as it mirrors the contradictory nature of art, the pluralism of schools and society, and the…

  7. Gamma-Ray Telescope and Uncertainty Principle

    ERIC Educational Resources Information Center

    Shivalingaswamy, T.; Kagali, B. A.

    2012-01-01

    Heisenberg's Uncertainty Principle is one of the important basic principles of quantum mechanics. In most of the books on quantum mechanics, this uncertainty principle is generally illustrated with the help of a gamma ray microscope, wherein neither the image formation criterion nor the lens properties are taken into account. Thus a better…

  8. Estimating the uncertainty in underresolved nonlinear dynamics

    SciTech Connect

    Chorin, Alexandre; Hald, Ole

    2013-06-12

    The Mori-Zwanzig formalism of statistical mechanics is used to estimate the uncertainty caused by underresolution in the solution of a nonlinear dynamical system. A general approach is outlined and applied to a simple example. The noise term that describes the uncertainty turns out to be neither Markovian nor Gaussian. It is argued that this is the general situation.

  9. Uncertainty Propagation in an Ecosystem Nutrient Budget.

    EPA Science Inventory

    New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of fr...

  10. Generalized Entropic Uncertainty Relations with Tsallis' Entropy

    NASA Technical Reports Server (NTRS)

    Portesi, M.; Plastino, A.

    1996-01-01

    A generalization of the entropic formulation of the Uncertainty Principle of Quantum Mechanics is considered with the introduction of the q-entropies recently proposed by Tsallis. The concomitant generalized measure is illustrated for the case of phase and number operators in quantum optics. Interesting results are obtained when making use of q-entropies as the basis for constructing generalized entropic uncertainty measures.
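
    For reference, the Tsallis q-entropy underlying the generalized measures mentioned above is (standard definition, not quoted from the abstract):

        S_q(p) = \frac{1 - \sum_i p_i^{\,q}}{q - 1},
        \qquad
        \lim_{q \to 1} S_q(p) = -\sum_i p_i \ln p_i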

  11. Identifying uncertainties in Arctic climate change projections

    NASA Astrophysics Data System (ADS)

    Hodson, Daniel L. R.; Keeley, Sarah P. E.; West, Alex; Ridley, Jeff; Hawkins, Ed; Hewitt, Helene T.

    2013-06-01

    Wide-ranging climate changes are expected in the Arctic by the end of the 21st century, but projections of the size of these changes vary widely across current global climate models. This variation represents a large source of uncertainty in our understanding of the evolution of Arctic climate. Here we systematically quantify and assess the model uncertainty in Arctic climate changes in two CO2 doubling experiments: a multimodel ensemble (CMIP3) and an ensemble constructed using a single model (HadCM3) with multiple parameter perturbations (THC-QUMP). These two ensembles allow us to assess the contribution that both structural and parameter variations across models make to the total uncertainty and to begin to attribute sources of uncertainty in projected changes. We find that parameter uncertainty is a major source of uncertainty in certain aspects of Arctic climate, but also that uncertainties in the mean climate state in the 20th century, most notably in the northward Atlantic ocean heat transport and Arctic sea ice volume, are a significant source of uncertainty for projections of future Arctic change. We suggest that better observational constraints on these quantities will lead to significant improvements in the precision of projections of future Arctic climate change.

  12. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Uncertainty analyses. 436.24 Section 436.24 Energy... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... by conducting additional analyses using any standard engineering economics method such as...

  13. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Uncertainty analyses. 436.24 Section 436.24 Energy... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... by conducting additional analyses using any standard engineering economics method such as...

  14. Nonclassicality in phase-number uncertainty relations

    SciTech Connect

    Matia-Hernando, Paloma; Luis, Alfredo

    2011-12-15

    We show that there are nonclassical states with lesser joint fluctuations of phase and number than any classical state. This is rather paradoxical since one would expect classical coherent states to be always of minimum uncertainty. The same result is obtained when we replace phase by a phase-dependent field quadrature. Number and phase uncertainties are assessed using variance and Holevo relation.

  15. The Stock Market: Risk vs. Uncertainty.

    ERIC Educational Resources Information Center

    Griffitts, Dawn

    2002-01-01

    This economics education publication focuses on the U.S. stock market and the risk and uncertainty that an individual faces when investing in the market. The material explains that risk and uncertainty relate to the same underlying concept: randomness. It defines and discusses both concepts and notes that although risk is quantifiable, uncertainty…

  16. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  17. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  18. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  19. Quantification of uncertainty sources in a 2D hydraulic model for the river Rhine using expert opinions

    NASA Astrophysics Data System (ADS)

    Warmink, J. J.; van der Klis, H.; Booij, M. J.; Hulscher, S. J. M. H.

    2009-04-01

    Hydrodynamic river models are applied to design and evaluate measures for purposes such as safety against flooding. These numerical models are all based on a deterministic approach. However, the modeling of river processes involves numerous uncertainties, resulting in uncertain model results. Uncertainty is defined as any deviation from the unachievable ideal of complete determinism. Uncertainty in models comprises (1) the difference between a model outcome and a measurement and (2) the possible variation around the computed value or measurements. Knowledge of the type and magnitude of these uncertainties is crucial for a meaningful interpretation of the model results. The aim of this study is to identify the sources of uncertainty that induce the largest uncertainties in the model outcomes and quantify this uncertainty using expert opinions. In this study, the two-dimensional WAQUA model for the Dutch river Rh