Science.gov

Sample records for avt-147 computational uncertainty

  1. Technical Evaluation Report for Symposium AVT-147: Computational Uncertainty in Military Vehicle Design

    NASA Technical Reports Server (NTRS)

    Radespiel, Rolf; Hemsch, Michael J.

    2007-01-01

    The complexity of modern military systems, as well as the cost and difficulty associated with experimentally verifying system and subsystem designs, makes the use of high-fidelity simulation an alternative for future design and development. The predictive ability of simulations such as computational fluid dynamics (CFD) and computational structural mechanics (CSM) has matured significantly. However, for numerical simulations to be used with confidence in design and development, quantitative measures of uncertainty must be available. The AVT-147 Symposium was established to compile state-of-the-art methods of assessing computational uncertainty, to identify future research and development needs associated with these methods, and to present examples of how these needs are being addressed and how the methods are being applied. Papers were solicited that address uncertainty estimation associated with high-fidelity, physics-based simulations. The solicitation included papers that identify sources of error and uncertainty in numerical simulation from either the industry perspective or from the disciplinary or cross-disciplinary research perspective. Examples of the industry perspective were to include how computational uncertainty methods are used to reduce system risk in various stages of design or development.

  2. Uncertainty in Computational Aerodynamics

    NASA Technical Reports Server (NTRS)

    Luckring, J. M.; Hemsch, M. J.; Morrison, J. H.

    2003-01-01

    An approach is presented to treat computational aerodynamics as a process, subject to the fundamental quality assurance principles of process control and process improvement. We consider several aspects affecting uncertainty for the computational aerodynamic process and present a set of stages to determine the level of management required to meet risk assumptions desired by the customer of the predictions.

  3. Uncertainty in Computational Aerodynamics

    NASA Technical Reports Server (NTRS)

    Luckring, J. M.; Hemsch, M. J.; Morrison, J. H.

    2003-01-01

    An approach is presented to treat computational aerodynamics as a process, subject to the fundamental quality assurance principles of process control and process improvement. We consider several aspects affecting uncertainty for the computational aerodynamic process and present a set of stages to determine the level of management required to meet risk assumptions desired by the customer of the predictions.

  4. Credible Computations: Standard and Uncertainty

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B.; VanDalsem, William (Technical Monitor)

    1995-01-01

    The discipline of computational fluid dynamics (CFD) is at a crossroad. Most of the significant advances related to computational methods have taken place. The emphasis is now shifting from methods to results. Significant efforts are made in applying CFD to solve design problems. The value of CFD results in design depends on the credibility of the computed results for the intended use. The process of establishing credibility requires a standard so that there is consistency and uniformity in this process and in the interpretation of its outcome. The key element in establishing credibility is the quantification of uncertainty. This paper presents salient features of a proposed standard and a procedure for determining the uncertainty. A customer of CFD products - computer codes and computed results - expects that a computer code, in terms of its logic, numerics, and fluid dynamics, and the results generated by this code comply with specified requirements. This expectation is fulfilled by verification and validation of these requirements. The verification process assesses whether the problem is solved correctly, and the validation process determines whether the right problem is solved. Standards for these processes are recommended. There is always some uncertainty, even if one uses validated models and verified computed results. The value of this uncertainty is important in the design process. This value is obtained by conducting a sensitivity-uncertainty analysis. Sensitivity analysis is generally defined as the procedure for determining the sensitivities of output parameters to input parameters. This analysis is a necessary step in the uncertainty analysis, and its results highlight which computed and integrated quantities need to be determined accurately and which quantities do not require such attention. Uncertainty analysis is generally defined as the analysis of the effect of the uncertainties
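
    As a minimal illustration of the sensitivity-uncertainty analysis described above, the sketch below ranks the inputs of a hypothetical computed quantity by their first-order contributions to the output uncertainty; the model, the input values, and their uncertainties are invented for illustration and are not taken from the paper.

      import numpy as np

      def drag_model(x):
          # Hypothetical computed quantity: a toy "drag" response of three inputs.
          mach, alpha, re = x
          return 0.02 + 0.001 * mach**2 + 0.0004 * alpha**2 + 0.5 / np.sqrt(re)

      x0 = np.array([0.8, 2.0, 5.0e6])       # nominal input values (illustrative)
      sigma = np.array([0.01, 0.1, 2.0e5])   # assumed 1-sigma input uncertainties

      # Central-difference sensitivities df/dx_i at the nominal point.
      sens = np.empty_like(x0)
      for i in range(x0.size):
          h = 1e-4 * max(abs(x0[i]), 1.0)
          xp, xm = x0.copy(), x0.copy()
          xp[i] += h
          xm[i] -= h
          sens[i] = (drag_model(xp) - drag_model(xm)) / (2.0 * h)

      # First-order (linear) uncertainty contribution of each input and the total.
      contrib = (sens * sigma) ** 2
      u_total = np.sqrt(contrib.sum())
      for name, c in zip(["mach", "alpha", "re"], contrib):
          print(f"{name:5s} contributes {100.0 * c / contrib.sum():5.1f}% of the output variance")
      print(f"combined standard uncertainty: {u_total:.2e}")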

  5. Numerical uncertainty in computational engineering and physics

    SciTech Connect

    Hemez, Francois M

    2009-01-01

    Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately, assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes a distant third to test-analysis comparison and model calibration. This publication aims to raise awareness that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty, including experimental variability, parametric uncertainty and modeling assumptions. The concepts of consistency, convergence and truncation error are reviewed to explain the relationship between the exact solution of the continuous equations, the solution of the modified equations and the discrete solutions computed by a code. The current state of the practice of code and solution verification activities is discussed. An example in the discipline of hydrodynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds of solution uncertainty in cases where the exact solution of the continuous equations, or of its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.
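
    A minimal sketch of the mesh-refinement bookkeeping advocated above: from solutions on three systematically refined grids, estimate the observed order of convergence, Richardson-extrapolate toward the exact solution, and use the difference from the finest-grid value (with a safety factor) as a rough bound on numerical uncertainty. The numbers and the safety factor are illustrative assumptions, not results from the publication.

      import numpy as np

      # Solutions of the same quantity on three grids, coarse to fine,
      # with a constant refinement ratio r (illustrative numbers).
      f_coarse, f_medium, f_fine = 1.120, 1.052, 1.021
      r = 2.0

      # Observed order of convergence p from the three solutions.
      p = np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)

      # Richardson extrapolation toward the (unknown) exact solution.
      f_exact_est = f_fine + (f_fine - f_medium) / (r**p - 1.0)

      # A simple numerical-uncertainty bound for the finest-grid solution,
      # in the spirit of a grid convergence index with a safety factor.
      safety = 1.25
      u_num = safety * abs(f_fine - f_exact_est)

      print(f"observed order of convergence: {p:.2f}")
      print(f"Richardson-extrapolated value: {f_exact_est:.4f}")
      print(f"numerical uncertainty bound on fine-grid value: +/- {u_num:.4f}")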

  6. New insights into faster computation of uncertainties

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Atreyee

    2012-11-01

    Heavy computational power, lengthy simulations, and an exhaustive number of model runs—often these seem like the only statistical tools that scientists have at their disposal when computing uncertainties associated with predictions, particularly in cases of environmental processes such as groundwater movement. However, the calculation of uncertainties need not be as lengthy, a new study shows. Comparing two approaches—the classical Bayesian “credible interval” and a less commonly used regression-based “confidence interval” method—Lu et al. show that for many practical purposes both methods provide similar estimates of uncertainties. The advantage of the regression method is that it demands only 10-1,000 model runs, whereas the classical Bayesian approach requires 10,000 to millions of model runs.
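
    A toy illustration of the comparison summarized above, under invented assumptions: for a one-parameter exponential-decay model with synthetic data, a linearized regression-based confidence interval and a brute-force Bayesian credible interval give similar answers while requiring very different numbers of model evaluations. This is not the groundwater model analyzed by Lu et al.

      import numpy as np

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 5.0, 20)
      k_true, sigma = 0.7, 0.02
      y = np.exp(-k_true * t) + rng.normal(0.0, sigma, t.size)   # synthetic observations

      def model(k):
          return np.exp(-k * t)

      # Regression-based confidence interval (few model runs):
      # Gauss-Newton fit of k, then a linearized 95% interval from the Jacobian.
      k_hat = 1.0
      for _ in range(20):
          r = y - model(k_hat)
          J = -t * model(k_hat)              # d(model)/dk
          k_hat += (J @ r) / (J @ J)
      s2 = np.sum((y - model(k_hat))**2) / (t.size - 1)
      se = np.sqrt(s2 / (J @ J))
      print(f"regression: k = {k_hat:.3f}, 95% CI = [{k_hat - 1.96*se:.3f}, {k_hat + 1.96*se:.3f}]")

      # Bayesian credible interval (many model runs):
      # flat prior on a dense grid of k values, posterior from the Gaussian likelihood.
      k_grid = np.linspace(0.3, 1.2, 20000)
      dk = k_grid[1] - k_grid[0]
      sse = np.array([np.sum((y - model(k))**2) for k in k_grid])
      post = np.exp(-0.5 * (sse - sse.min()) / sigma**2)
      post /= post.sum() * dk
      cdf = np.cumsum(post) * dk
      lo, hi = np.interp([0.025, 0.975], cdf, k_grid)
      print(f"Bayesian:   95% credible interval = [{lo:.3f}, {hi:.3f}] "
            f"(used {k_grid.size} model runs)")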

  7. Uncertainty and error in computational simulations

    SciTech Connect

    Oberkampf, W.L.; Diegert, K.V.; Alvin, K.F.; Rutherford, B.M.

    1997-10-01

    The present paper addresses the question: "What are the general classes of uncertainty and error sources in complex, computational simulations?" This is the first step of a two-step process to develop a general methodology for quantitatively estimating the global modeling and simulation uncertainty in computational modeling and simulation. The second step is to develop a general mathematical procedure for representing, combining and propagating all of the individual sources through the simulation. The authors develop a comprehensive view of the general phases of modeling and simulation. The phases proposed are: conceptual modeling of the physical system, mathematical modeling of the system, discretization of the mathematical model, computer programming of the discrete model, numerical solution of the model, and interpretation of the results. This new view is built by combining phases recognized in the disciplines of operations research and numerical solution methods for partial differential equations. The characteristics and activities of each of these phases are discussed in general, and examples are given for the fields of computational fluid dynamics and heat transfer. They argue that a clear distinction should be made between uncertainty and error that can arise in each of these phases. The present definitions of uncertainty and error are inadequate and, therefore, they propose comprehensive definitions for these terms. Specific classes of uncertainty and error sources are then defined that can occur in each phase of modeling and simulation. The numerical sources of error considered apply regardless of whether the discretization procedure is based on finite elements, finite volumes, or finite differences. To better explain the broad types of sources of uncertainty and error, and the utility of their categorization, they discuss a coupled-physics example simulation.

  8. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  9. Probabilistic numerics and uncertainty in computations.

    PubMed

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.

  10. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    SciTech Connect

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  11. Applying uncertainty quantification to multiphase flow computational fluid dynamics

    SciTech Connect

    Gel, A; Garg, R; Tong, C; Shahnam, M; Guenther, C

    2013-07-01

    Multiphase computational fluid dynamics plays a major role in the design and optimization of fossil-fuel-based reactors. There is a growing interest in accounting for the influence of uncertainties associated with physical systems to increase the reliability of computational simulation based engineering analysis. The U.S. Department of Energy's National Energy Technology Laboratory (NETL) has recently undertaken an initiative to characterize uncertainties associated with computer simulation of reacting multiphase flows encountered in energy producing systems such as a coal gasifier. The current work presents preliminary results in applying non-intrusive parametric uncertainty quantification and propagation techniques with NETL's open-source multiphase computational fluid dynamics software MFIX. For this purpose, an open-source uncertainty quantification toolkit, PSUADE, developed at Lawrence Livermore National Laboratory (LLNL), has been interfaced with the MFIX software. In this study, the sources of uncertainty associated with numerical approximation and model form have been neglected, and only the model input parametric uncertainty with forward propagation has been investigated by constructing a surrogate model based on a data-fitted response surface for a multiphase flow demonstration problem. Monte Carlo simulation was employed for forward propagation of the aleatory-type input uncertainties. Several insights gained from the outcome of these simulations are presented, such as how inadequate characterization of uncertainties can affect the reliability of the prediction results. In addition, a global sensitivity study using Sobol' indices was performed to better understand the contribution of input parameters to the variability observed in the response variable.
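
    As a generic illustration of the workflow summarized above (not MFIX or PSUADE output), the sketch below propagates assumed input uncertainties through a cheap stand-in surrogate by Monte Carlo and estimates first-order Sobol' indices with a standard pick-freeze estimator; the response-surface form, input distributions, and variable names are all invented.

      import numpy as np

      rng = np.random.default_rng(1)

      def surrogate(x):
          # Stand-in for a data-fitted response surface of a multiphase-flow
          # quantity of interest (e.g., pressure drop); the form is invented.
          d_p, u_g, eps = x[..., 0], x[..., 1], x[..., 2]
          return 3.0 * u_g / d_p + 10.0 * (1.0 - eps) ** 2 + 0.5 * u_g * (1.0 - eps)

      def sample_inputs(n):
          # Aleatory input uncertainties (assumed distributions, illustration only).
          return np.column_stack([
              rng.uniform(3e-4, 6e-4, n),      # particle diameter d_p [m]
              rng.normal(2.0, 0.2, n),         # superficial gas velocity u_g [m/s]
              rng.uniform(0.4, 0.5, n),        # void fraction eps [-]
          ])

      # Forward Monte Carlo propagation through the surrogate.
      n = 20000
      A, B = sample_inputs(n), sample_inputs(n)
      fA, fB = surrogate(A), surrogate(B)
      print(f"output mean = {fA.mean():.2f}, std = {fA.std(ddof=1):.2f}")

      # First-order Sobol' indices with the Saltelli pick-freeze estimator.
      var = np.var(np.concatenate([fA, fB]), ddof=1)
      for i, name in enumerate(["d_p", "u_g", "eps"]):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                  # replace column i of A with that of B
          S1 = np.mean(fB * (surrogate(ABi) - fA)) / var
          print(f"first-order Sobol index for {name}: {S1:.2f}")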

  12. Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models

    EPA Science Inventory

    The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...

  13. Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models

    EPA Science Inventory

    The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...

  14. Computing uncertainties in ionosphere-airglow models: I. Electron flux and species production uncertainties for Mars

    NASA Astrophysics Data System (ADS)

    Gronoff, Guillaume; Simon Wedlund, Cyril; Mertens, Christopher J.; Lillis, Robert J.

    2012-04-01

    The ionization and excitation of atoms and molecules in the upper atmospheres of the Earth and planets are computed by a number of physical models. From these calculations, quantities measurable by dedicated satellite experiments, such as airglow and electron fluxes, can be derived. It is then possible to compare model and observation to derive more fundamental physical properties of the upper atmospheres, for example, the density as a function of altitude. To ensure the accuracy of these retrieval techniques, it is important to have an estimate of the uncertainty of these models and to have ways to account for these uncertainties. The complexity of kinetic models for computing the secondary production of excited-state species (including ions) makes this evaluation difficult, and studies usually neglect or underestimate it. We present here a Monte Carlo approach to the computation of model uncertainties. As an example, we studied several aspects of the model uncertainties in the upper atmosphere of Mars, including the computed secondary electron flux and the production of the main ion species. Our simulations show the importance of improving solar flux models, especially the energy binning and the photon-impact cross sections, which are the main sources of uncertainty on the dayside. The risk of modifying cross sections on the basis of aeronomical observations is highlighted for the case of Mars, while accurate uncertainties are shown to be crucial for the interpretation of data from the particle detectors onboard Mars Global Surveyor. Finally, the study shows the importance of AtMoCiad, a public database dedicated to the evaluation of aeronomy cross-section uncertainties. A detailed study of the resulting emission cross-section uncertainties is the focus of a forthcoming paper (Gronoff et al., 2012) in which the outputs discussed in the present paper are used to compute airglow uncertainty, and the overall result is compared with the data from the SPICAM UV

  15. Uncertainty and Intelligence in Computational Stochastic Mechanics

    NASA Technical Reports Server (NTRS)

    Ayyub, Bilal M.

    1996-01-01

    Classical structural reliability assessment techniques are based on precise and crisp (sharp) definitions of failure and non-failure (survival) of a structure in meeting a set of strength, function and serviceability criteria. These definitions are provided in the form of performance functions and limit state equations. Thus, the criteria provide a dichotomous definition of what real physical situations represent, in the form of abrupt change from structural survival to failure. However, based on observing the failure and survival of real structures according to the serviceability and strength criteria, the transition from a survival state to a failure state and from serviceability criteria to strength criteria are continuous and gradual rather than crisp and abrupt. That is, an entire spectrum of damage or failure levels (grades) is observed during the transition to total collapse. In the process, serviceability criteria are gradually violated with monotonically increasing level of violation, and progressively lead into the strength criteria violation. Classical structural reliability methods correctly and adequately include the ambiguity sources of uncertainty (physical randomness, statistical and modeling uncertainty) by varying amounts. However, they are unable to adequately incorporate the presence of a damage spectrum, and do not consider in their mathematical framework any sources of uncertainty of the vagueness type. Vagueness can be attributed to sources of fuzziness, unclearness, indistinctiveness, sharplessness and grayness; whereas ambiguity can be attributed to nonspecificity, one-to-many relations, variety, generality, diversity and divergence. Using the nomenclature of structural reliability, vagueness and ambiguity can be accounted for in the form of realistic delineation of structural damage based on subjective judgment of engineers. For situations that require decisions under uncertainty with cost/benefit objectives, the risk of failure should

  16. Quantified PIRT and Uncertainty Quantification for Computer Code Validation

    NASA Astrophysics Data System (ADS)

    Luo, Hu

    This study is intended to investigate and propose a systematic method for uncertainty quantification for computer code validation applications. Uncertainty quantification has gained more and more attention in recent years. The U.S. Nuclear Regulatory Commission (NRC) requires the use of realistic best-estimate (BE) computer codes following the rigorous Code Scaling, Applicability and Uncertainty (CSAU) methodology. In CSAU, the Phenomena Identification and Ranking Table (PIRT) was developed to identify important code uncertainty contributors. To support and examine the traditional PIRT with quantified judgments, this study proposes a novel approach, the Quantified PIRT (QPIRT), to identify important code models and parameters for uncertainty quantification. Dimensionless analysis of the code field equations, which generates dimensionless (pi) groups from code simulation results, serves as the foundation for QPIRT. Uncertainty quantification using the DAKOTA code, based on a sampling approach, is proposed in this study. Nonparametric statistical theory identifies the fixed number of code runs needed to assure 95 percent probability with 95 percent confidence in the code uncertainty intervals.
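
    The "fixed number of code runs" mentioned above follows from nonparametric order statistics (Wilks' formula). The short calculation below reproduces the standard first-order, one-sided result; it is a generic textbook computation, not an output of this study.

      def wilks_runs(coverage=0.95, confidence=0.95):
          """Smallest N such that the sample maximum bounds the 'coverage' quantile
          of the output with the requested confidence (first-order, one-sided)."""
          n = 1
          # Confidence that the maximum of n runs exceeds the coverage quantile
          # is 1 - coverage**n.
          while 1.0 - coverage**n < confidence:
              n += 1
          return n

      print(wilks_runs())            # 59 runs for a one-sided 95/95 statement
      print(wilks_runs(0.95, 0.99))  # more runs for higher confidence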

  17. MOUSE (MODULAR ORIENTED UNCERTAINTY SYSTEM): A COMPUTERIZED UNCERTAINTY ANALYSIS SYSTEM (FOR MICRO- COMPUTERS)

    EPA Science Inventory

    Environmental engineering calculations involving uncertainties, either in the model itself or in the data, are far beyond the capabilities of conventional analysis for any but the simplest of models. There exist a number of general-purpose computer simulation languages, using Mon...

  18. MOUSE (MODULAR ORIENTED UNCERTAINTY SYSTEM): A COMPUTERIZED UNCERTAINTY ANALYSIS SYSTEM (FOR MICRO- COMPUTERS)

    EPA Science Inventory

    Environmental engineering calculations involving uncertainties, either in the model itself or in the data, are far beyond the capabilities of conventional analysis for any but the simplest of models. There exist a number of general-purpose computer simulation languages, using Mon...

  19. Propagation of Computational Uncertainty Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2007-01-01

    This paper describes the use of formally designed experiments to aid in the error analysis of a computational experiment. A method is described by which the underlying code is approximated with relatively low-order polynomial graduating functions represented by truncated Taylor series approximations to the true underlying response function. A resource-minimal approach is outlined by which such graduating functions can be estimated from a minimum number of case runs of the underlying computational code. Certain practical considerations are discussed, including ways and means of coping with high-order response functions. The distributional properties of prediction residuals are presented and discussed. A practical method is presented for quantifying that component of the prediction uncertainty of a computational code that can be attributed to imperfect knowledge of independent variable levels. This method is illustrated with a recent assessment of uncertainty in computational estimates of Space Shuttle thermal and structural reentry loads attributable to ice and foam debris impact on ascent.
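
    A minimal sketch of the idea, under invented assumptions: a low-order polynomial graduating function is fitted to a small designed set of case runs of a stand-in "code", and input-level uncertainty is then propagated through the cheap surrogate instead of the code itself. The function, design, and uncertainty levels are illustrative, not those of the Space Shuttle application.

      import numpy as np
      from itertools import product

      rng = np.random.default_rng(2)

      def expensive_code(x1, x2):
          # Stand-in for the underlying computational code (illustrative only).
          return 1.0 + 0.8 * x1 - 0.5 * x2 + 0.3 * x1 * x2 + 0.2 * x1**2

      # A small three-level factorial design in coded units (9 case runs).
      design = np.array(list(product([-1.0, 0.0, 1.0], repeat=2)))
      y = np.array([expensive_code(a, b) for a, b in design])

      # Fit a quadratic graduating function (truncated Taylor-series surrogate).
      def basis(x1, x2):
          return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

      coef, *_ = np.linalg.lstsq(basis(design[:, 0], design[:, 1]), y, rcond=None)

      # Propagate assumed uncertainty in the independent-variable levels through
      # the cheap surrogate instead of the expensive code.
      x1 = rng.normal(0.0, 0.1, 50000)
      x2 = rng.normal(0.0, 0.2, 50000)
      pred = basis(x1, x2) @ coef
      print(f"prediction mean = {pred.mean():.3f}, "
            f"std from input-level uncertainty = {pred.std():.3f}")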

  20. Model Uncertainty and Robustness: A Computational Framework for Multimodel Analysis

    ERIC Educational Resources Information Center

    Young, Cristobal; Holsteen, Katherine

    2017-01-01

    Model uncertainty is pervasive in social science. A key question is how robust empirical results are to sensible changes in model specification. We present a new approach and applied statistical software for computational multimodel analysis. Our approach proceeds in two steps: First, we estimate the modeling distribution of estimates across all…

  1. Uncertainty analysis for computer model projections of hurricane losses.

    PubMed

    Iman, Ronald L; Johnson, Mark E; Watson, Charles C

    2005-10-01

    Projecting losses associated with hurricanes is a complex and difficult undertaking that is fraught with uncertainties. Hurricane Charley, which struck southwest Florida on August 13, 2004, illustrates the uncertainty of forecasting damages from these storms. Due to shifts in the track and the rapid intensification of the storm, real-time estimates grew from 2 to 3 billion dollars in losses late on August 12 to a peak of 50 billion dollars for a brief time as the storm appeared to be headed for the Tampa Bay area. The storm hit the resort areas of Charlotte Harbor near Punta Gorda and then went on to Orlando in the central part of the state, with early poststorm estimates converging on a damage estimate in the 28 to 31 billion dollars range. Comparable damage to central Florida had not been seen since Hurricane Donna in 1960. The Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) has recognized the role of computer models in projecting losses from hurricanes. The FCHLPM established a professional team to perform onsite (confidential) audits of computer models developed by several different companies in the United States that seek to have their models approved for use in insurance rate filings in Florida. The team's members represent the fields of actuarial science, computer science, meteorology, statistics, and wind and structural engineering. An important part of the auditing process requires uncertainty and sensitivity analyses to be performed with the applicant's proprietary model. To influence future such analyses, an uncertainty and sensitivity analysis has been completed for loss projections arising from use of a Holland B parameter hurricane wind field model. Uncertainty analysis quantifies the expected percentage reduction in the uncertainty of wind speed and loss that is attributable to each of the input variables.

  2. Multilevel model reduction for uncertainty quantification in computational structural dynamics

    NASA Astrophysics Data System (ADS)

    Ezvan, O.; Batou, A.; Soize, C.; Gagliardini, L.

    2017-02-01

    This work deals with an extension of the reduced-order models (ROMs) that are classically constructed by modal analysis in linear structural dynamics for which the computational models are assumed to be uncertain. It is based on a multilevel projection strategy that introduces three reduced-order bases obtained by using a spatial filtering methodology of local displacements. This filtering involves global shape functions for the kinetic energy. The proposed multilevel stochastic ROM is constructed by using the nonparametric probabilistic approach of uncertainties. It allows a specific level of uncertainty to be assigned to each type of displacement associated with the corresponding vibration regime. The proposed methodology is applied to the computational model of an automobile structure, for which the multilevel stochastic ROM is identified with respect to experimental measurements. This identification is performed by solving a statistical inverse problem.

  3. Elicitation of natural language representations of uncertainty using computer technology

    SciTech Connect

    Tonn, B.; Goeltz, R.; Travis, C.; Tennessee Univ., Knoxville, TN )

    1989-01-01

    Knowledge elicitation is an important aspect of risk analysis. Knowledge about risks must be accurately elicited from experts for use in risk assessments. Knowledge and perceptions of risks must also be accurately elicited from the public in order to intelligently perform policy analysis and develop and implement programs. Oak Ridge National Laboratory is developing computer technology to effectively and efficiently elicit knowledge from experts and the public. This paper discusses software developed to elicit natural language representations of uncertainty. The software is written in Common Lisp and resides on VAX computer systems and Symbolics Lisp machines. The software has three goals: to determine preferences for using natural language terms to represent uncertainty; to determine likelihood rankings of the terms; and to determine how likelihood estimates are combined to form new terms. The first two goals relate to providing useful results for those interested in risk communication. The third relates to providing cognitive data to further our understanding of people's decision making under uncertainty. The software is used to elicit natural language terms used to express the likelihood of various agents causing cancer in humans, of cancer resulting in various maladies, and of everyday events. 6 refs., 4 figs., 4 tabs.

  4. Computer Model Inversion and Uncertainty Quantification in the Geosciences

    NASA Astrophysics Data System (ADS)

    White, Jeremy T.

    The subject of this dissertation is use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum of the objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events. In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for

  5. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

  6. Statistical models and computation to evaluate measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio

    2014-08-01

    In the course of the twenty years since the publication of the Guide to the Expression of Uncertainty in Measurement (GUM), the recognition has been steadily growing of the value that statistical models and statistical computing bring to the evaluation of measurement uncertainty, and of how they enable its probabilistic interpretation. These models and computational methods can address all the problems originally discussed and illustrated in the GUM, and enable addressing other, more challenging problems, that measurement science is facing today and that it is expected to face in the years ahead. These problems that lie beyond the reach of the techniques in the GUM include (i) characterizing the uncertainty associated with the assignment of value to measurands of greater complexity than, or altogether different in nature from, the scalar or vectorial measurands entertained in the GUM: for example, sequences of nucleotides in DNA, calibration functions and optical and other spectra, spatial distribution of radioactivity over a geographical region, shape of polymeric scaffolds for bioengineering applications, etc; (ii) incorporating relevant information about the measurand that predates or is otherwise external to the measurement experiment; (iii) combining results from measurements of the same measurand that are mutually independent, obtained by different methods or produced by different laboratories. This review of several of these statistical models and computational methods illustrates some of the advances that they have enabled, and in the process invites a reflection on the interesting historical fact that these very same models and methods, by and large, were already available twenty years ago, when the GUM was first published—but then the dialogue between metrologists, statisticians and mathematicians was still in bud. It is in full bloom today, much to the benefit of all.
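
    One of the computations that such statistical models and software routinely support is forward uncertainty propagation for a simple measurement model. The sketch below compares the GUM law of propagation of uncertainty with a Monte Carlo evaluation in the spirit of GUM Supplement 1 for a toy resistance measurement; the estimates and uncertainties are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(3)

      # Measurement model R = V / I with assumed estimates and standard uncertainties.
      V, u_V = 5.004, 0.002      # volts
      I, u_I = 0.1002, 0.0004    # amperes

      # GUM law of propagation of uncertainty (first-order, uncorrelated inputs):
      # u_R**2 = (dR/dV)**2 * u_V**2 + (dR/dI)**2 * u_I**2
      dR_dV = 1.0 / I
      dR_dI = -V / I**2
      u_R_lpu = np.sqrt((dR_dV * u_V) ** 2 + (dR_dI * u_I) ** 2)

      # Monte Carlo evaluation in the spirit of GUM Supplement 1.
      n = 1_000_000
      R = rng.normal(V, u_V, n) / rng.normal(I, u_I, n)
      u_R_mc = R.std(ddof=1)

      print(f"R = {V / I:.3f} ohm")
      print(f"standard uncertainty, law of propagation: {u_R_lpu:.4f} ohm")
      print(f"standard uncertainty, Monte Carlo:        {u_R_mc:.4f} ohm")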

  7. A High Performance Bayesian Computing Framework for Spatiotemporal Uncertainty Modeling

    NASA Astrophysics Data System (ADS)

    Cao, G.

    2015-12-01

    All types of spatiotemporal measurements are subject to uncertainty. As spatiotemporal data become increasingly involved in scientific research and decision making, it is important to appropriately model the impact of uncertainty. Quantitatively modeling spatiotemporal uncertainty, however, is a challenging problem considering the complex dependence and data heterogeneities. State-space models provide a unifying and intuitive framework for dynamic systems modeling. In this paper, we aim to extend the conventional state-space models for uncertainty modeling in space-time contexts while accounting for spatiotemporal effects and data heterogeneities. Gaussian Markov Random Field (GMRF) models, also known as conditional autoregressive models, are arguably the most commonly used methods for modeling spatially dependent data. GMRF models assume that a geo-referenced variable primarily depends on its neighborhood (Markov property), and the spatial dependence structure is described via a precision matrix. Recent studies have shown that GMRFs are efficient approximations to the commonly used Gaussian fields (e.g., kriging), and compared with Gaussian fields, GMRFs enjoy a series of appealing features, such as fast computation and easy accounting for heterogeneities in spatial data (e.g., point and areal). This paper represents each spatial dataset as a GMRF and integrates them into a state-space form to statistically model the temporal dynamics. Different types of spatial measurements (e.g., categorical, count or continuous) can be accounted for through corresponding link functions. A fast alternative to the MCMC framework, the Integrated Nested Laplace Approximation (INLA), was adopted for model inference. Preliminary case studies will be conducted to showcase the advantages of the described framework. In the first case, we apply the proposed method to model the water table elevation of the Ogallala aquifer over the past decades. In the second case, we analyze the

  8. Optimal allocation of computational resources in hydrogeological models under uncertainty

    NASA Astrophysics Data System (ADS)

    Moslehi, Mahsa; Rajagopal, Ram; de Barros, Felipe P. J.

    2015-09-01

    Flow and transport models in heterogeneous geological formations are usually large-scale with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting subsurface flow and transport often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field parameter representing hydrogeological characteristics of the aquifer. The physical resolution (e.g. spatial grid resolution) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We develop an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model prediction and physical errors corresponding to numerical grid resolution. Computational resources are allocated by considering the overall error based on a joint statistical-numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The performance of the framework is tested against computationally extensive simulations of flow and transport in spatially heterogeneous aquifers. Results show that modelers can achieve optimum physical and statistical resolutions while keeping a minimum error for a given computational time. The physical and statistical resolutions obtained through our analysis yield lower computational costs when compared to the results obtained with prevalent recommendations in the literature. Lastly, we highlight the significance of the geometrical characteristics of the contaminant source zone on the
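
    A minimal sketch of the trade-off described above, under an assumed error model: the discretization error shrinks with finer grids while the Monte Carlo statistical error shrinks with more realizations, and a fixed computational budget forces a compromise. The constants, exponents, and cost model are illustrative assumptions, not values from the paper.

      import numpy as np

      # Illustrative joint statistical-numerical error model: the overall error
      # combines a grid-discretization term and a Monte Carlo statistical term.
      c_disc, p = 0.8, 2.0            # discretization error ~ c_disc * h**p
      c_stat = 1.5                    # statistical error   ~ c_stat / sqrt(N)
      budget = 2000.0                 # total CPU time available

      def cost_per_run(h):
          return 1e-3 / h**3          # CPU time of one realization on grid spacing h

      best = None
      for h in np.linspace(0.02, 0.2, 200):
          n_real = int(budget / cost_per_run(h))   # realizations affordable at this h
          if n_real < 2:
              continue
          err = np.hypot(c_disc * h**p, c_stat / np.sqrt(n_real))
          if best is None or err < best[0]:
              best = (err, h, n_real)

      err_opt, h_opt, n_opt = best
      print(f"optimal grid spacing h = {h_opt:.3f}, realizations N = {n_opt}, "
            f"overall error = {err_opt:.4f}")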

  9. Quantifying uncertainty and computational complexity for pore-scale simulations

    NASA Astrophysics Data System (ADS)

    Chen, C.; Yuan, Z.; Wang, P.; Yang, X.; Zhenyan, L.

    2016-12-01

    Pore-scale simulation is an essential tool for understanding the complex physical processes in many environmental problems, from multi-phase flow in the subsurface to fuel cells. However, in practice, factors such as sample heterogeneity, data sparsity and, in general, our insufficient knowledge of the underlying process render many simulation parameters, and hence the prediction results, uncertain. Meanwhile, most pore-scale simulations (in particular, direct numerical simulation) incur high computational cost due to finely resolved spatio-temporal scales, which further limits our data/sample collection. To address those challenges, we propose a novel framework based on generalized polynomial chaos (gPC) and build a surrogate model representing the essential features of the underlying system. Specifically, we apply the novel framework to analyze the uncertainties of the system behavior based on a series of pore-scale numerical experiments, such as flow and reactive transport in 2D heterogeneous porous media and 3D packed beds. Compared with recent pore-scale uncertainty quantification studies using Monte Carlo techniques, our new framework requires fewer realizations and hence considerably reduces the overall computational cost, while maintaining the desired accuracy.
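
    A one-dimensional sketch of a gPC surrogate, under invented assumptions: a probabilists' Hermite expansion is fitted by least squares to a handful of evaluations of a stand-in model with a standard-normal input, and its mean and standard deviation are compared against brute-force Monte Carlo. The model form and sample sizes are illustrative only.

      import math
      import numpy as np
      from numpy.polynomial import hermite_e as He

      rng = np.random.default_rng(4)

      def pore_scale_model(xi):
          # Stand-in for an expensive pore-scale quantity of interest driven by one
          # standard-normal random input (e.g., a log-permeability); form is invented.
          return np.exp(0.3 * xi) + 0.1 * xi**2

      # Build a gPC surrogate from a small number of "expensive" model evaluations.
      deg = 5
      xi_train = rng.standard_normal(40)
      Psi = He.hermevander(xi_train, deg)              # probabilists' Hermite basis
      coef, *_ = np.linalg.lstsq(Psi, pore_scale_model(xi_train), rcond=None)

      # For standard-normal inputs, E[He_m * He_n] = n! * delta_mn, so the surrogate
      # mean and variance follow directly from the expansion coefficients.
      fact = np.array([math.factorial(n) for n in range(deg + 1)], dtype=float)
      mean_gpc = coef[0]
      std_gpc = np.sqrt(np.sum(coef[1:] ** 2 * fact[1:]))

      # Reference: brute-force Monte Carlo on the original model.
      y_mc = pore_scale_model(rng.standard_normal(200000))
      print(f"gPC: mean = {mean_gpc:.3f}, std = {std_gpc:.3f}  (40 model runs)")
      print(f"MC : mean = {y_mc.mean():.3f}, std = {y_mc.std():.3f}  (200000 model runs)")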

  10. Explaining Delusions: Reducing Uncertainty Through Basic and Computational Neuroscience.

    PubMed

    Feeney, Erin J; Groman, Stephanie M; Taylor, Jane R; Corlett, Philip R

    2017-03-01

    Delusions, the fixed false beliefs characteristic of psychotic illness, have long defied understanding despite their response to pharmacological treatments (e.g., D2 receptor antagonists). However, it can be challenging to discern what makes beliefs delusional compared with other unusual or erroneous beliefs. We suggest mapping the putative biology to clinical phenomenology with a cognitive psychology of belief, culminating in a teleological approach to beliefs and brain function supported by animal and computational models. We argue that organisms strive to minimize uncertainty about their future states by forming and maintaining a set of beliefs (about the organism and the world) that are robust, but flexible. If uncertainty is generated endogenously, beliefs begin to depart from consensual reality and can manifest into delusions. Central to this scheme is the notion that formal associative learning theory can provide an explanation for the development and persistence of delusions. Beliefs, in animals and humans, may be associations between representations (e.g., of cause and effect) that are formed by minimizing uncertainty via new learning and attentional allocation. Animal research has equipped us with a deep mechanistic basis of these processes, which is now being applied to delusions. This work offers the exciting possibility of completing revolutions of translation, from the bedside to the bench and back again. The more we learn about animal beliefs, the more we may be able to apply to human beliefs and their aberrations, enabling a deeper mechanistic understanding.

  11. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    SciTech Connect

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-09-20

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community.

  12. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...

  13. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...

  14. Uncertainty

    USGS Publications Warehouse

    Hunt, Randall J.

    2012-01-01

    Management decisions will often be directly informed by model predictions. However, we now know there can be no expectation of a single ‘true’ model; thus, model results are uncertain. Understandable reporting of underlying uncertainty provides necessary context to decision-makers, as model results are used for management decisions. This, in turn, forms a mechanism by which groundwater models inform a risk-management framework because uncertainty around a prediction provides the basis for estimating the probability or likelihood of some event occurring. Given that the consequences of management decisions vary, it follows that the extent of and resources devoted to an uncertainty analysis may depend on the consequences. For events with low impact, a qualitative, limited uncertainty analysis may be sufficient for informing a decision. For events with a high impact, on the other hand, the risks might be better assessed and associated decisions made using a more robust and comprehensive uncertainty analysis. The purpose of this chapter is to provide guidance on uncertainty analysis through discussion of concepts and approaches, which can vary from heuristic (i.e. the modeller’s assessment of prediction uncertainty based on trial and error and experience) to a comprehensive, sophisticated, statistics-based uncertainty analysis. Most of the material presented here is taken from Doherty et al. (2010) if not otherwise cited. Although the treatment here is necessarily brief, the reader can find citations for the source material and additional references within this chapter.

  15. Computations of uncertainty mediate acute stress responses in humans

    PubMed Central

    de Berker, Archy O.; Rutledge, Robb B.; Mathys, Christoph; Marshall, Louise; Cross, Gemma F.; Dolan, Raymond J.; Bestmann, Sven

    2016-01-01

    The effects of stress are frequently studied, yet its proximal causes remain unclear. Here we demonstrate that subjective estimates of uncertainty predict the dynamics of subjective and physiological stress responses. Subjects learned a probabilistic mapping between visual stimuli and electric shocks. Salivary cortisol confirmed that our stressor elicited changes in endocrine activity. Using a hierarchical Bayesian learning model, we quantified the relationship between the different forms of subjective task uncertainty and acute stress responses. Subjective stress, pupil diameter and skin conductance all tracked the evolution of irreducible uncertainty. We observed a coupling between emotional and somatic state, with subjective and physiological tuning to uncertainty tightly correlated. Furthermore, the uncertainty tuning of subjective and physiological stress predicted individual task performance, consistent with an adaptive role for stress in learning under uncertain threat. Our finding that stress responses are tuned to environmental uncertainty provides new insight into their generation and likely adaptive function. PMID:27020312

  16. Computations of uncertainty mediate acute stress responses in humans.

    PubMed

    de Berker, Archy O; Rutledge, Robb B; Mathys, Christoph; Marshall, Louise; Cross, Gemma F; Dolan, Raymond J; Bestmann, Sven

    2016-03-29

    The effects of stress are frequently studied, yet its proximal causes remain unclear. Here we demonstrate that subjective estimates of uncertainty predict the dynamics of subjective and physiological stress responses. Subjects learned a probabilistic mapping between visual stimuli and electric shocks. Salivary cortisol confirmed that our stressor elicited changes in endocrine activity. Using a hierarchical Bayesian learning model, we quantified the relationship between the different forms of subjective task uncertainty and acute stress responses. Subjective stress, pupil diameter and skin conductance all tracked the evolution of irreducible uncertainty. We observed a coupling between emotional and somatic state, with subjective and physiological tuning to uncertainty tightly correlated. Furthermore, the uncertainty tuning of subjective and physiological stress predicted individual task performance, consistent with an adaptive role for stress in learning under uncertain threat. Our finding that stress responses are tuned to environmental uncertainty provides new insight into their generation and likely adaptive function.

  17. Uncertainty of mantle geophysical properties computed from phase equilibrium models

    NASA Astrophysics Data System (ADS)

    Connolly, J. A. D.; Khan, A.

    2016-05-01

    Phase equilibrium models are used routinely to predict geophysically relevant mantle properties. A limitation of this approach is that nonlinearity of the phase equilibrium problem precludes direct assessment of the resultant uncertainties. To overcome this obstacle, we stochastically assess uncertainties along self-consistent mantle adiabats for pyrolitic and basaltic bulk compositions to 2000 km depth. The dominant components of the uncertainty are the identity, composition and elastic properties of the minerals. For P wave speed and density, the latter components vary little, whereas the first is confined to the upper mantle. Consequently, P wave speeds, densities, and adiabatic temperatures and pressures predicted by phase equilibrium models are more uncertain in the upper mantle than in the lower mantle. In contrast, uncertainties in S wave speeds are dominated by the uncertainty in shear moduli and are approximately constant throughout the model depth range.

  18. Fast Computation of Hemodynamic Sensitivity to Lumen Segmentation Uncertainty.

    PubMed

    Sankaran, Sethuraman; Grady, Leo; Taylor, Charles A

    2015-12-01

    Patient-specific blood flow modeling combining imaging data and computational fluid dynamics can aid in the assessment of coronary artery disease. Accurate coronary segmentation and realistic physiologic modeling of boundary conditions are important steps to ensure a high diagnostic performance. Segmentation of the coronary arteries can be constructed by a combination of automated algorithms with human review and editing. However, blood pressure and flow are not impacted equally by different local sections of the coronary artery tree. Focusing human review and editing towards regions that will most affect the subsequent simulations can significantly accelerate the review process. We define geometric sensitivity as the standard deviation in hemodynamics-derived metrics due to uncertainty in lumen segmentation. We develop a machine learning framework for estimating the geometric sensitivity in real time. Features used include geometric and clinical variables, and reduced-order models. We develop an anisotropic kernel regression method for assessment of lumen narrowing score, which is used as a feature in the machine learning algorithm. A multi-resolution sensitivity algorithm is introduced to hierarchically refine regions of high sensitivity so that we can quantify sensitivities to a desired spatial resolution. We show that the mean absolute error of the machine learning algorithm compared to 3D simulations is less than 0.01. We further demonstrate that sensitivity is not predicted simply by anatomic reduction but also encodes information about hemodynamics which in turn depends on downstream boundary conditions. This sensitivity approach can be extended to other systems such as cerebral flow, electro-mechanical simulations, etc.

  19. Uncertainty Modeling of Pollutant Transport in Atmosphere and Aquatic Route Using Soft Computing

    SciTech Connect

    Datta, D.

    2010-10-26

    Hazardous radionuclides are released as pollutants into the atmospheric and aquatic environment (ATAQE) during the normal operation of nuclear power plants. Atmospheric and aquatic dispersion models are routinely used to assess the impact of releases of radionuclides from any nuclear facility, or of hazardous chemicals from any chemical plant, on the ATAQE. The effect of exposure to the hazardous nuclides or chemicals is measured in terms of risk. Uncertainty modeling is an integral part of the risk assessment. This paper focuses on uncertainty modeling of pollutant transport in the atmospheric and aquatic environment using soft computing. Soft computing is adopted because of the lack of information on the parameters that represent the corresponding models. Soft computing in this domain uses fuzzy set theory to explore the uncertainty of the model parameters; this type of uncertainty is called epistemic uncertainty. Each uncertain input parameter of the model is described by a triangular membership function.
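
    A minimal sketch of the fuzzy (alpha-cut) propagation described above, with an invented toy concentration model and assumed triangular membership functions; it is not the dispersion model used in the paper.

      import numpy as np

      def model(q, u):
          # Toy pollutant-concentration model (form invented for illustration):
          # source strength q diluted by wind speed u with a fixed decay term.
          return q / u * np.exp(-1.0 / u)

      def alpha_cut(tri, alpha):
          """Interval of a triangular fuzzy number (low, mode, high) at level alpha."""
          low, mode, high = tri
          return low + alpha * (mode - low), high - alpha * (high - mode)

      # Fuzzy (epistemic) inputs as triangular membership functions (assumed values).
      q_fuzzy = (8.0, 10.0, 13.0)     # release rate
      u_fuzzy = (1.5, 3.0, 4.0)       # wind speed

      # Propagate each alpha-cut through the model with the extension principle,
      # scanning the input box because the model need not be monotone.
      for alpha in (0.0, 0.5, 1.0):
          q_lo, q_hi = alpha_cut(q_fuzzy, alpha)
          u_lo, u_hi = alpha_cut(u_fuzzy, alpha)
          qq, uu = np.meshgrid(np.linspace(q_lo, q_hi, 50), np.linspace(u_lo, u_hi, 50))
          c = model(qq, uu)
          print(f"alpha = {alpha:.1f}: concentration interval [{c.min():.3f}, {c.max():.3f}]")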

  20. Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis; Ilie, Marcel; Schallhorn, Paul

    2014-01-01

    Spacecraft components may be damaged due to airflow produced by Environmental Control Systems (ECS). There are uncertainties and errors associated with using Computational Fluid Dynamics (CFD) to predict the flow field around a spacecraft from the ECS System. This paper describes an approach to estimate the uncertainty in using CFD to predict the airflow speeds around an encapsulated spacecraft.

  1. Binding in light nuclei: Statistical NN uncertainties vs Computational accuracy

    NASA Astrophysics Data System (ADS)

    Navarro Pérez, R.; Nogga, A.; Amaro, J. E.; Ruiz Arriola, E.

    2016-08-01

    We analyse the impact of the statistical uncertainties of the nucleon-nucleon interaction, based on the Granada-2013 np-pp database, on the binding energies of the triton and the alpha particle using a bootstrap method, by solving the Faddeev equations for 3H and the Yakubovsky equations for 4He, respectively. We check that in practice about 30 samples prove enough for a reliable error estimate. An extrapolation of the well-fulfilled Tjon-line correlation predicts the experimental binding of the alpha particle within uncertainties. Presented by RNP at the Workshop for Young Scientists with Research Interests Focused on Physics at FAIR, 14-19 February 2016, Garmisch-Partenkirchen (Germany).
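
    A schematic of the parametric-bootstrap idea, under invented numbers: interaction parameters are redrawn from an assumed statistical distribution, a stand-in "binding energy" is recomputed for each draw, and the error estimate is monitored as the number of samples grows. The Faddeev/Yakubovsky solvers are replaced here by a trivial function.

      import numpy as np

      rng = np.random.default_rng(5)

      # Stand-in for the expensive step: a derived observable computed from a
      # parameter vector drawn from the statistical uncertainty of a fitted
      # interaction (multivariate normal with an assumed covariance; all
      # numbers are illustrative).
      mean = np.array([1.0, -0.5, 0.2])
      cov = np.diag([0.02, 0.01, 0.005]) ** 2

      def binding_energy(p):
          return -8.0 + 3.0 * p[0] + 1.5 * p[1] ** 2 - 2.0 * p[2]

      # Bootstrap: redraw the parameters and recompute the observable, monitoring
      # how the error estimate stabilizes with the number of samples.
      samples = [binding_energy(rng.multivariate_normal(mean, cov)) for _ in range(200)]
      for n in (10, 30, 100, 200):
          print(f"{n:3d} samples: E = {np.mean(samples[:n]):.3f} "
                f"+/- {np.std(samples[:n], ddof=1):.3f} MeV")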

  2. Computational methods estimating uncertainties for profile reconstruction in scatterometry

    NASA Astrophysics Data System (ADS)

    Gross, H.; Rathsfeld, A.; Scholze, F.; Model, R.; Bär, M.

    2008-04-01

    The solution of the inverse problem in scatterometry, i.e. the determination of periodic surface structures from light diffraction patterns, is incomplete without knowledge of the uncertainties associated with the reconstructed surface parameters. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry, as a non-imaging indirect optical method, is applied to periodic line-space structures in order to determine geometric parameters like side-wall angles, heights, top and bottom widths and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to the efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation and end up minimizing the deviation of the measured efficiency or phase shift values from the simulated ones. The reconstruction properties and the convergence of the algorithm, however, are controlled by the local conditioning of the non-linear mapping and the uncertainties of the measured efficiencies or phase shifts. In particular, the uncertainties of the reconstructed geometric parameters essentially depend on the uncertainties of the input data and can be estimated by various methods. We compare the results obtained from a Monte Carlo procedure to the estimates gained from the approximate covariance matrix of the profile parameters close to the optimal solution and apply them to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
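
    A compact sketch of the Gauss-Newton reconstruction and the approximate covariance of the retrieved parameters, with an invented two-parameter forward operator standing in for the finite-element simulation; the measurement uncertainty and parameter values are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(6)

      # Stand-in forward operator mapping two "profile parameters" to a vector of
      # simulated efficiencies; the form is invented and does not represent the
      # finite-element Helmholtz solver used in scatterometry.
      t = 0.3 * np.arange(1, 8)
      def forward(p):
          h, w = p
          return h * np.exp(-w * t)

      p_true = np.array([2.0, 1.5])
      u_meas = 0.005                                   # measurement standard uncertainty
      y_meas = forward(p_true) + rng.normal(0.0, u_meas, t.size)

      def jacobian(p, eps=1e-6):
          J = np.empty((t.size, p.size))
          for j in range(p.size):
              dp = np.zeros_like(p)
              dp[j] = eps
              J[:, j] = (forward(p + dp) - forward(p - dp)) / (2.0 * eps)
          return J

      # Gauss-Newton iteration for the non-linear operator equation.
      p = np.array([1.0, 1.0])
      for _ in range(30):
          r = y_meas - forward(p)
          J = jacobian(p)
          p = p + np.linalg.solve(J.T @ J, J.T @ r)

      # Approximate covariance of the reconstructed parameters from the local
      # linearization and the measurement uncertainties.
      J = jacobian(p)
      cov = u_meas**2 * np.linalg.inv(J.T @ J)
      print("reconstructed parameters:  ", np.round(p, 3))
      print("parameter standard uncert.:", np.round(np.sqrt(np.diag(cov)), 4))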

  3. Computer simulations in room acoustics: concepts and uncertainties.

    PubMed

    Vorländer, Michael

    2013-03-01

    Geometrical acoustics is used as the standard model for room acoustic design and consulting. Research on room acoustic simulation focuses on more accurate modeling of propagation effects such as diffraction and other wave effects in rooms, and on scattering. Much progress has been made in this field, so that wave models (for example, the boundary element method and finite differences in the time domain) can now also be used at higher frequencies. The concepts and implementations of room simulation methods are briefly reviewed. Simulations in architectural acoustics are indeed powerful tools, but their reliability depends on the skills of the operator, who has to create an adequate polygon model and has to choose correct input data for boundary conditions such as absorption and scattering. Very little is known about the uncertainty of these input data. With the theory of propagation of uncertainties it can be shown that predicting reverberation times with an accuracy better than the just noticeable difference requires input data of a quality that is not available from reverberation room measurements.
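
    The error-propagation argument can be made concrete with a small sketch that pushes assumed absorption-coefficient uncertainties through Sabine's reverberation formula; the room geometry and the 10 % input uncertainty are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # Illustrative room: volume and surface patches with absorption coefficients
    V = 200.0                                  # m^3
    S = np.array([80.0, 80.0, 60.0, 40.0])     # surface areas, m^2
    alpha = np.array([0.10, 0.30, 0.05, 0.60]) # absorption coefficients
    u_alpha = 0.10 * alpha                     # assumed 10 % relative uncertainty

    A = np.sum(S * alpha)                      # total absorption
    T = 0.161 * V / A                          # Sabine reverberation time

    # First-order error propagation: dT/dalpha_i = -0.161 V S_i / A^2
    dT_dalpha = -0.161 * V * S / A**2
    u_T = np.sqrt(np.sum((dT_dalpha * u_alpha) ** 2))

    print(f"T = {T:.2f} s +/- {u_T:.2f} s  (JND ~ 5 % -> {0.05 * T:.2f} s)")
    ```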

  4. Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.

    2013-01-01

    Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This proposal describes an approach to validate the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft. The research described here is absolutely cutting edge. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. The proposed research project includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around

  5. Establishing performance requirements of computer based systems subject to uncertainty

    SciTech Connect

    Robinson, D.

    1997-02-01

    An organized systems design approach is dictated by the increasing complexity of computer based systems. Computer based systems are unique in many respects but share many of the same problems that have plagued design engineers for decades. The design of complex systems is difficult at best, but as a design becomes intensively dependent on the computer processing of external and internal information, the design process quickly borders chaos. This situation is exacerbated with the requirement that these systems operate with a minimal quantity of information, generally corrupted by noise, regarding the current state of the system. Establishing performance requirements for such systems is particularly difficult. This paper briefly sketches a general systems design approach with emphasis on the design of computer based decision processing systems subject to parameter and environmental variation. The approach will be demonstrated with application to an on-board diagnostic (OBD) system for automotive emissions systems now mandated by the state of California and the Federal Clean Air Act. The emphasis is on an approach for establishing probabilistically based performance requirements for computer based systems.

  6. Dissertation Defense: Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward

    2014-01-01

    Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations. This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions

  7. Dissertation Defense Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward

    2014-01-01

    Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid

  8. A computational framework for uncertainty quantification and stochastic optimization in unit commitment with wind power generation.

    SciTech Connect

    Constantinescu, E. M; Zavala, V. M.; Rocklin, M.; Lee, S.; Anitescu, M.

    2011-02-01

    We present a computational framework for integrating a state-of-the-art numerical weather prediction (NWP) model in stochastic unit commitment/economic dispatch formulations that account for wind power uncertainty. We first enhance the NWP model with an ensemble-based uncertainty quantification strategy implemented in a distributed-memory parallel computing architecture. We discuss computational issues arising in the implementation of the framework and validate the model using real wind-speed data obtained from a set of meteorological stations. We build a simulated power system to demonstrate the developments.

  9. A Computer Program to Evaluate Timber Production Investments Under Uncertainty

    Treesearch

    Dennis L. Schweitzer

    1968-01-01

    A computer program has been written in Fortran IV to calculate probability distributions of present worths of investments in timber production. Inputs can include both point and probabilistic estimates of future costs, prices, and yields. Distributions of rates of return can also be constructed.
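
    A minimal sketch, in Python rather than the original Fortran IV, of the kind of calculation such a program performs: Monte Carlo sampling of probabilistic cost, yield and price inputs to build a distribution of present worths. All distributions and figures below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    rate = 0.05                                   # assumed discount rate
    years = 30                                    # assumed rotation length

    # Probabilistic inputs (illustrative distributions)
    cost = rng.normal(500.0, 50.0, n)             # establishment cost per acre
    harvest = rng.normal(12.0, 2.0, n)            # harvest yield per acre
    price = rng.lognormal(np.log(300.0), 0.2, n)  # stumpage price per unit of yield

    present_worth = harvest * price / (1 + rate) ** years - cost

    print(f"mean PW = {present_worth.mean():.0f},  P(PW < 0) = {(present_worth < 0).mean():.2%}")
    ```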

  10. Effect of Random Geometric Uncertainty on the Computational Design of a 3-D Flexible Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, C. R.; Newman, P. A.; Hou, G. J.-W.

    2002-01-01

    The effect of geometric uncertainty due to statistically independent, random, normally distributed shape parameters is demonstrated in the computational design of a 3-D flexible wing. A first-order second-moment statistical approximation method is used to propagate the assumed input uncertainty through coupled Euler CFD aerodynamic / finite element structural codes for both analysis and sensitivity analysis. First-order sensitivity derivatives obtained by automatic differentiation are used in the input uncertainty propagation. These propagated uncertainties are then used to perform a robust design of a simple 3-D flexible wing at supercritical flow conditions. The effect of the random input uncertainties is shown by comparison with conventional deterministic design results. Sample results are shown for wing planform, airfoil section, and structural sizing variables.
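
    The first-order second-moment propagation used above can be summarised in a few lines: with statistically independent normal inputs, the output variance is approximated from the sensitivity derivatives as sigma_f^2 = sum_i (df/dx_i)^2 sigma_i^2. The sketch below assumes illustrative sensitivities and input standard deviations in place of the coupled Euler CFD / finite element codes.

    ```python
    import numpy as np

    # Illustrative response f (e.g., a drag-like quantity) and its first-order
    # sensitivities to independent, normally distributed shape parameters.
    grad_f = np.array([0.8, -0.3, 1.5, 0.05])     # df/dx_i from sensitivity analysis
    sigma_x = np.array([0.01, 0.02, 0.005, 0.1])  # input standard deviations

    # First-order second-moment approximation of the output variance
    var_f = np.sum((grad_f * sigma_x) ** 2)
    print(f"sigma_f = {np.sqrt(var_f):.4f}")
    ```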

  11. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article deals with matters concerned with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method shall feature the minimal degree of subjectivism and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.

  12. Modeling decision uncertainties in total situation awareness using cloud computation theory

    NASA Astrophysics Data System (ADS)

    Zein-Sabatto, Saleh; Khoshnaw, Abdulqadir; Shetty, Sachin; Malkani, Mohan; Mitra, Atindra

    2011-06-01

    Uncertainty plays a decisive role in the confidence of the decisions made about events. For example, in situation awareness, decision-making is faced with two types of uncertainty: information uncertainty and data uncertainty. Data uncertainty exists due to noise in sensor measurements and is classified as randomness. Information uncertainty is due to the ambiguity of the words used to describe events; this uncertainty is known as fuzziness. Typically, these two types of uncertainty are handled separately using two different theories: randomness is modeled by probability theory, while fuzzy logic is used to address fuzziness. In this paper we used Cloud computation theory to treat data randomness and information fuzziness in one single model. First, we described the Cloud theory and then used it to generate one- and two-dimensional Cloud models. Second, we used the Cloud models to capture and process data randomness and fuzziness in information relevant to decision-making in situation awareness. Finally, we applied the models to generate security decisions for the security monitoring of a sensitive area. Testing results are reported at the end of the paper.

  13. Computed tomography and patient risk: Facts, perceptions and uncertainties.

    PubMed

    Power, Stephen P; Moloney, Fiachra; Twomey, Maria; James, Karl; O'Connor, Owen J; Maher, Michael M

    2016-12-28

    Since its introduction in the 1970s, computed tomography (CT) has revolutionized diagnostic decision-making. One of the major concerns associated with the widespread use of CT is the associated increased radiation exposure incurred by patients. The link between ionizing radiation and the subsequent development of neoplasia has been largely based on extrapolating data from studies of survivors of the atomic bombs dropped in Japan in 1945 and on assessments of the increased relative risk of neoplasia in those occupationally exposed to radiation within the nuclear industry. However, the association between exposure to low-dose radiation from diagnostic imaging examinations and oncogenesis remains unclear. With improved technology, significant advances have already been achieved with regards to radiation dose reduction. There are several dose optimization strategies available that may be readily employed including omitting unnecessary images at the ends of acquired series, minimizing the number of phases acquired, and the use of automated exposure control as opposed to fixed tube current techniques. In addition, new image reconstruction techniques that reduce radiation dose have been developed in recent years with promising results. These techniques use iterative reconstruction algorithms to attain diagnostic quality images with reduced image noise at lower radiation doses.

  14. Computed tomography and patient risk: Facts, perceptions and uncertainties

    PubMed Central

    Power, Stephen P; Moloney, Fiachra; Twomey, Maria; James, Karl; O’Connor, Owen J; Maher, Michael M

    2016-01-01

    Since its introduction in the 1970s, computed tomography (CT) has revolutionized diagnostic decision-making. One of the major concerns associated with the widespread use of CT is the associated increased radiation exposure incurred by patients. The link between ionizing radiation and the subsequent development of neoplasia has been largely based on extrapolating data from studies of survivors of the atomic bombs dropped in Japan in 1945 and on assessments of the increased relative risk of neoplasia in those occupationally exposed to radiation within the nuclear industry. However, the association between exposure to low-dose radiation from diagnostic imaging examinations and oncogenesis remains unclear. With improved technology, significant advances have already been achieved with regards to radiation dose reduction. There are several dose optimization strategies available that may be readily employed including omitting unnecessary images at the ends of acquired series, minimizing the number of phases acquired, and the use of automated exposure control as opposed to fixed tube current techniques. In addition, new image reconstruction techniques that reduce radiation dose have been developed in recent years with promising results. These techniques use iterative reconstruction algorithms to attain diagnostic quality images with reduced image noise at lower radiation doses. PMID:28070242

  15. Computer-assisted uncertainty assessment of k0-NAA measurement results

    NASA Astrophysics Data System (ADS)

    Bučar, T.; Smodiš, B.

    2008-10-01

    In quantifying the measurement uncertainty of results obtained by k0-based neutron activation analysis (k0-NAA), a number of parameters should be considered and appropriately combined in deriving the final budget. To facilitate this process, a program ERON (ERror propagatiON) was developed, which computes uncertainty propagation factors from the relevant formulae and calculates the combined uncertainty. The program calculates the uncertainty of the final result (the mass fraction of an element in the measured sample), taking into account the relevant neutron flux parameters such as α and f, including their uncertainties. Nuclear parameters and their uncertainties are taken from the IUPAC database (V.P. Kolotov and F. De Corte, Compilation of k0 and related data for NAA). Furthermore, the program allows for uncertainty calculations of the measured parameters needed in k0-NAA: α (determined with either the Cd-ratio or the Cd-covered multi-monitor method), f (using the Cd-ratio or the bare method), Q0 (using the Cd-ratio or internal comparator method) and k0 (using the Cd-ratio, internal comparator or the Cd subtraction method). The results of calculations can be printed or exported to text or MS Excel format for further analysis. Special care was taken to make the calculation engine portable so that it can be incorporated into other applications (e.g., DLL and WWW server). The theoretical basis and the program are described in detail, and typical results obtained under real measurement conditions are presented.

  16. Computing uncertainties in ionosphere-airglow models: II. The Martian airglow

    NASA Astrophysics Data System (ADS)

    Gronoff, Guillaume; Simon Wedlund, Cyril; Mertens, Christopher J.; Barthélemy, Mathieu; Lillis, Robert J.; Witasse, Olivier

    2012-05-01

    One of the objectives of spectrometers onboard space missions is to retrieve atmospheric parameters (notably density, composition and temperature). To fulfill this objective, comparisons between observations and model results are necessary. Knowledge of these model uncertainties is therefore necessary, although usually not considered, to estimate the accuracy in planetary upper atmosphere remote sensing of these parameters. In Part I of this study, “Computing uncertainties in ionosphere-airglow models: I. Electron flux and species production uncertainties for Mars” (Gronoff et al., 2012), we presented the uncertainties in the production of excited states and ionized species from photon and electron impacts, computed with a Monte-Carlo approach, and we applied this technique to the Martian upper atmosphere. In the present paper, we present the results of propagation of these production errors to the main UV emissions and the study of other sources of uncertainties. As an example, we studied several aspects of the model uncertainties in the thermosphere of Mars, and especially the O(1S) green line (557.7 nm, with its equivalent, the trans-auroral line at 297.2 nm), the Cameron bands CO(a3Π), and CO2+(B2Σu+) doublet emissions. We first show that the excited species at the origin of these emissions are mainly produced by electron and photon impact. We demonstrate that it is possible to reduce the computation time by decoupling the different sources of uncertainties; moreover, we show that emission uncertainties can be large (>30%) because of the strong sensitivity to the production uncertainties. Our study demonstrates that uncertainty calculations are a crucial step prior to performing remote sensing in the atmosphere of Mars and the other planets and can be used as a guide to subsequent adjustments of cross sections based on aeronomical observations. Finally, we compare the simulations with observations from the SPICAM spectrometer on the Mars Express

  17. Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.

    2013-01-01

    There have been few discussions on using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis only from computational data. The method uses current CFD uncertainty techniques coupled with the Student-T distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied from a specified tolerance or bias error and the difference in the results is used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations and conclusions drawn about the feasibility of using CFD without experimental data. The results provide a tactic to analytically estimate the uncertainty in a CFD model when experimental data is unavailable.

  18. A Bayes network approach to uncertainty quantification in hierarchically developed computational models

    DOE PAGES

    Urbina, Angel; Mahadevan, Sankaran; Paez, Thomas L.

    2012-03-01

    Here, performance assessment of complex systems is ideally accomplished through system-level testing, but because they are expensive, such tests are seldom performed. On the other hand, for economic reasons, data from tests on individual components that are parts of complex systems are more readily available. The lack of system-level data leads to a need to build computational models of systems and use them for performance prediction in lieu of experiments. Because of their complexity, models are sometimes built in a hierarchical manner, starting with simple components, progressing to collections of components, and finally, to the full system. Quantification of uncertainty in the predicted response of a system model is required in order to establish confidence in the representation of actual system behavior. This paper proposes a framework for the complex, but very practical problem of quantification of uncertainty in system-level model predictions. It is based on Bayes networks and uses the available data at multiple levels of complexity (i.e., components, subsystem, etc.). Because epistemic sources of uncertainty were shown to be secondary in this application, only aleatoric uncertainty is included in the present uncertainty quantification. An example showing application of the techniques to uncertainty quantification of measures of response of a real, complex aerospace system is included.

  19. A Bayes network approach to uncertainty quantification in hierarchically developed computational models

    SciTech Connect

    Urbina, Angel; Mahadevan, Sankaran; Paez, Thomas L.

    2012-03-01

    Here, performance assessment of complex systems is ideally accomplished through system-level testing, but because they are expensive, such tests are seldom performed. On the other hand, for economic reasons, data from tests on individual components that are parts of complex systems are more readily available. The lack of system-level data leads to a need to build computational models of systems and use them for performance prediction in lieu of experiments. Because of their complexity, models are sometimes built in a hierarchical manner, starting with simple components, progressing to collections of components, and finally, to the full system. Quantification of uncertainty in the predicted response of a system model is required in order to establish confidence in the representation of actual system behavior. This paper proposes a framework for the complex, but very practical problem of quantification of uncertainty in system-level model predictions. It is based on Bayes networks and uses the available data at multiple levels of complexity (i.e., components, subsystem, etc.). Because epistemic sources of uncertainty were shown to be secondary in this application, only aleatoric uncertainty is included in the present uncertainty quantification. An example showing application of the techniques to uncertainty quantification of measures of response of a real, complex aerospace system is included.

  20. Modeling uncertainty in classification design of a computer-aided detection system

    NASA Astrophysics Data System (ADS)

    Hosseini, Rahil; Dehmeshki, Jamshid; Barman, Sarah; Mazinani, Mahdi; Qanadli, Salah

    2010-03-01

    A computerized image analysis technology suffers from imperfection, imprecision and vagueness of the input data and their propagation in all individual components of the technology, including image enhancement, segmentation and pattern recognition. Furthermore, a Computerized Medical Image Analysis System (CMIAS) such as computer aided detection (CAD) technology deals with another source of uncertainty that is inherent in the image-based practice of medicine. While there are several technology-oriented studies reported in developing CAD applications, no attempt has been made to address, model and integrate these types of uncertainty in the design of the system components, even though uncertainty issues directly affect the performance and its accuracy. In this paper, the main uncertainty paradigms associated with CAD technologies are addressed. The influence of the vagueness and imprecision in the classification of the CAD, as a second reader, on the validity of ROC analysis results is defined. In order to tackle the problem of uncertainty in the classification design of the CAD, two fuzzy methods are applied and evaluated for a lung nodule CAD application. The type-1 fuzzy logic system (T1FLS) and an extension of it, the interval type-2 fuzzy logic system (IT2FLS), are employed as methods with high potential for managing uncertainty issues. The novelty of the proposed classification methods is to address and handle all sources of uncertainty associated with a CAD system. The results reveal that IT2FLS is superior to T1FLS for tackling all sources of uncertainty and, significantly, the problem of inter- and intra-operator observer variability.

  1. Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Groves, Curtis; Ilie, Marcel; Schallhorn, Paul

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's Extrapolation. This method is based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or another uncertainty method to approximate errors.
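
    A brief sketch of Richardson extrapolation on three systematically refined grid solutions, the procedure this study builds on; the solution values and refinement ratio are illustrative, and the factor of safety of 1.25 in the grid convergence index follows common practice rather than anything stated in the record.

    ```python
    import numpy as np

    # Solutions on three systematically refined grids (fine, medium, coarse)
    f1, f2, f3 = 0.9713, 0.9688, 0.9610   # illustrative values
    r = 2.0                               # grid refinement ratio

    p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)         # observed order of accuracy
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)               # Richardson-extrapolated value
    gci_fine = 1.25 * abs((f1 - f2) / f1) / (r**p - 1.0)  # grid convergence index

    print(f"p = {p:.2f}, extrapolated = {f_exact:.4f}, GCI = {100 * gci_fine:.2f} %")
    ```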

  2. Bayesian uncertainty quantification and propagation in molecular dynamics simulations: A high performance computing framework

    NASA Astrophysics Data System (ADS)

    Angelikopoulos, Panagiotis; Papadimitriou, Costas; Koumoutsakos, Petros

    2012-10-01

    We present a Bayesian probabilistic framework for quantifying and propagating the uncertainties in the parameters of force fields employed in molecular dynamics (MD) simulations. We propose a highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters. Efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures. Furthermore, adaptive surrogate models are proposed in order to reduce the computational cost associated with the large number of MD model runs. The effectiveness and computational efficiency of the proposed Bayesian framework is demonstrated in MD simulations of liquid and gaseous argon.

  3. Bayesian uncertainty quantification and propagation in molecular dynamics simulations: a high performance computing framework.

    PubMed

    Angelikopoulos, Panagiotis; Papadimitriou, Costas; Koumoutsakos, Petros

    2012-10-14

    We present a Bayesian probabilistic framework for quantifying and propagating the uncertainties in the parameters of force fields employed in molecular dynamics (MD) simulations. We propose a highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters. Efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures. Furthermore, adaptive surrogate models are proposed in order to reduce the computational cost associated with the large number of MD model runs. The effectiveness and computational efficiency of the proposed Bayesian framework is demonstrated in MD simulations of liquid and gaseous argon.
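
    As a greatly simplified stand-in for the transitional Markov chain Monte Carlo used in this framework, the sketch below runs a plain random-walk Metropolis sampler on a one-parameter toy likelihood to show how a posterior over a force-field-like parameter is populated; the synthetic data, flat prior and proposal width are assumptions for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=1.2, scale=0.3, size=50)   # synthetic "observations"

    def log_posterior(theta):
        # Flat prior on theta (a stand-in force-field parameter), Gaussian likelihood
        return -0.5 * np.sum((data - theta) ** 2) / 0.3**2

    theta, chain = 0.0, []
    lp = log_posterior(theta)
    for _ in range(20_000):
        prop = theta + 0.1 * rng.normal()            # random-walk proposal
        lp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)

    chain = np.array(chain[5_000:])                  # discard burn-in
    print(f"posterior mean = {chain.mean():.3f} +/- {chain.std():.3f}")
    ```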

  4. Analysis of Uncertainty and Variability in Finite Element Computational Models for Biomedical Engineering: Characterization and Propagation.

    PubMed

    Mangado, Nerea; Piella, Gemma; Noailly, Jérôme; Pons-Prats, Jordi; Ballester, Miguel Ángel González

    2016-01-01

    Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering.

  5. Analysis of Uncertainty and Variability in Finite Element Computational Models for Biomedical Engineering: Characterization and Propagation

    PubMed Central

    Mangado, Nerea; Piella, Gemma; Noailly, Jérôme; Pons-Prats, Jordi; Ballester, Miguel Ángel González

    2016-01-01

    Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering. PMID:27872840

  6. Uncertainty and variability in computational and mathematical models of cardiac physiology

    PubMed Central

    Mirams, Gary R.; Pathmanathan, Pras; Gray, Richard A.; Challenor, Peter

    2016-01-01

    Key points: Mathematical and computational models of cardiac physiology have been an integral component of cardiac electrophysiology since its inception, and are collectively known as the Cardiac Physiome. We identify and classify the numerous sources of variability and uncertainty in model formulation, parameters and other inputs that arise from both natural variation in experimental data and lack of knowledge. The impact of uncertainty on the outputs of Cardiac Physiome models is not well understood, and this limits their utility as clinical tools. We argue that incorporating variability and uncertainty should be a high priority for the future of the Cardiac Physiome. We suggest investigating the adoption of approaches developed in other areas of science and engineering while recognising unique challenges for the Cardiac Physiome; it is likely that novel methods will be necessary that require engagement with the mathematics and statistics community. Abstract: The Cardiac Physiome effort is one of the most mature and successful applications of mathematical and computational modelling for describing and advancing the understanding of physiology. After five decades of development, physiological cardiac models are poised to realise the promise of translational research via clinical applications such as drug development and patient-specific approaches as well as ablation, cardiac resynchronisation and contractility modulation therapies. For models to be included as a vital component of the decision process in safety-critical applications, rigorous assessment of model credibility will be required. This White Paper describes one aspect of this process by identifying and classifying sources of variability and uncertainty in models as well as their implications for the application and development of cardiac models. We stress the need to understand and quantify the sources of variability and uncertainty in model inputs, and the impact of model structure and complexity and

  7. Biased pain reports through vicarious information: A computational approach to investigate the role of uncertainty.

    PubMed

    Zaman, J; Vanpaemel, W; Aelbrecht, C; Tuerlinckx, F; Vlaeyen, J W S

    2017-08-18

    Expectations about an impending pain stimulus strongly shape its perception, yet the degree to which uncertainty might affect perception is far less understood. To explore the influence of uncertainty on pain ratings, we performed a close replication of the study of Yoshida, Seymour, Koltzenburg, and Dolan (2013), who manipulated vicarious information about upcoming heat pain and found evidence for uncertainty-induced hyperalgesia. In our study, we presented eight fictitious ratings of previous participants prior to the delivery of electrocutaneous pain. The vicarious information was biased to either over- or underreport pain levels based on the participant's psychometric function. We induced uncertainty by manipulating the variation of the vicarious information. As in Yoshida et al. (2013), four computational models were formulated, such that each model represented a different way in which the pain ratings might have been generated by the physical stimulus and the vicarious information. The four competing models were tested against the data of each participant separately. Using a formal model selection criterion, the best model was selected and interpreted. Contrary to the original study, the preferred model for the majority of participants suggested that pain ratings were biased towards the average vicarious information, ignoring the degree of uncertainty. Possible reasons for these diverging results are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. A Novel Method for the Evaluation of Uncertainty in Dose-Volume Histogram Computation

    SciTech Connect

    Henriquez, Francisco Cutanda; Castrillon, Silvia Vargas

    2008-03-15

    Purpose: Dose-volume histograms (DVHs) are a useful tool in state-of-the-art radiotherapy treatment planning, and it is essential to recognize their limitations. Even after a specific dose-calculation model is optimized, dose distributions computed by using treatment-planning systems are affected by several sources of uncertainty, such as algorithm limitations, measurement uncertainty in the data used to model the beam, and residual differences between measured and computed dose. This report presents a novel method to take them into account. Methods and Materials: To take into account the effect of associated uncertainties, a probabilistic approach using a new kind of histogram, a dose-expected volume histogram, is introduced. The expected value of the volume in the region of interest receiving an absorbed dose equal to or greater than a certain value is found by using the probability distribution of the dose at each point. A rectangular probability distribution is assumed for this point dose, and a formulation that accounts for uncertainties associated with point dose is presented for practical computations. Results: This method is applied to a set of DVHs for different regions of interest, including 6 brain patients, 8 lung patients, 8 pelvis patients, and 6 prostate patients planned for intensity-modulated radiation therapy. Conclusions: Results show a greater effect on planning target volume coverage than in organs at risk. In cases of steep DVH gradients, such as planning target volumes, this new method shows the largest differences with the corresponding DVH; thus, the effect of the uncertainty is larger.
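
    A sketch of the idea behind the dose-expected volume histogram described above: each voxel contributes the probability, under an assumed rectangular point-dose uncertainty, that its true dose exceeds the threshold, instead of a hard 0/1 count. The dose distribution and half-width below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dose = rng.normal(60.0, 2.0, size=20_000)   # computed voxel doses (Gy), illustrative
    delta = 1.5                                 # half-width of rectangular dose uncertainty (Gy)

    thresholds = np.linspace(50.0, 70.0, 201)

    def dvh(d, thr):
        # Conventional cumulative DVH: fraction of volume with dose >= threshold
        return np.array([(d >= t).mean() for t in thr])

    def expected_dvh(d, thr, half_width):
        # Dose-expected volume histogram: each voxel contributes the probability
        # that its true dose exceeds the threshold under a rectangular distribution.
        p = np.clip((d[:, None] + half_width - thr[None, :]) / (2 * half_width), 0.0, 1.0)
        return p.mean(axis=0)

    v_plain = dvh(dose, thresholds)
    v_expected = expected_dvh(dose, thresholds, delta)
    print("max |difference| =", np.abs(v_plain - v_expected).max())
    ```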

  9. Numerical Uncertainty Analysis for Computational Fluid Dynamics using Student T Distribution -- Application of CFD Uncertainty Analysis Compared to Exact Analytical Solution

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.

    2014-01-01

    Computational Fluid Dynamics (CFD) is the standard numerical tool used by Fluid Dynamists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student-T distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-T distribution can encompass the exact solution.
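
    A minimal sketch of the Student-T step described above: given outputs from a handful of runs in which the CFD inputs were perturbed within their tolerances, the sample statistics and the t-distribution give a confidence half-width on the prediction. The run values below are invented, and SciPy is assumed to be available.

    ```python
    import numpy as np
    from scipy import stats

    # Outputs from N runs with inputs perturbed within their tolerances
    # (illustrative values for a predicted velocity, m/s)
    runs = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0])

    n = runs.size
    mean = runs.mean()
    s = runs.std(ddof=1)
    t_crit = stats.t.ppf(0.975, df=n - 1)      # 95 % two-sided critical value
    half_width = t_crit * s / np.sqrt(n)

    print(f"predicted value = {mean:.2f} +/- {half_width:.2f} m/s (95 %)")
    ```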

  10. PUQ: A code for non-intrusive uncertainty propagation in computer simulations

    NASA Astrophysics Data System (ADS)

    Hunt, Martin; Haley, Benjamin; McLennan, Michael; Koslowski, Marisol; Murthy, Jayathi; Strachan, Alejandro

    2015-09-01

    We present a software package for the non-intrusive propagation of uncertainties in input parameters through computer simulation codes or mathematical models and associated analysis; we demonstrate its use to drive micromechanical simulations using a phase field approach to dislocation dynamics. The PRISM uncertainty quantification framework (PUQ) offers several methods to sample the distribution of input variables and to obtain surrogate models (or response functions) that relate the uncertain inputs with the quantities of interest (QoIs); the surrogate models are ultimately used to propagate uncertainties. PUQ requires minimal changes in the simulation code, just those required to annotate the QoI(s) for its analysis. Collocation methods include Monte Carlo, Latin Hypercube and Smolyak sparse grids and surrogate models can be obtained in terms of radial basis functions and via generalized polynomial chaos. PUQ uses the method of elementary effects for sensitivity analysis in Smolyak runs. The code is available for download and also available for cloud computing in nanoHUB. PUQ orchestrates runs of the nanoPLASTICITY tool at nanoHUB where users can propagate uncertainties in dislocation dynamics simulations using simply a web browser, without downloading or installing any software.
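
    The workflow that PUQ automates (sample the uncertain inputs, fit a surrogate, propagate through the surrogate) can be sketched generically as below; this is not the PUQ API, and the simulation function, sample sizes and polynomial degree are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import qmc

    def simulation(x):
        # Stand-in for an expensive simulation with one uncertain input
        return np.sin(3.0 * x) + 0.5 * x**2

    # Latin hypercube sample of the uncertain input on [0, 1]
    sampler = qmc.LatinHypercube(d=1, seed=0)
    x_train = sampler.random(n=20).ravel()
    y_train = simulation(x_train)

    # Cheap polynomial surrogate (response function) fitted to the samples
    surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=4))

    # Propagate uncertainty through the surrogate with a large Monte Carlo sample
    x_mc = np.random.default_rng(1).uniform(0.0, 1.0, 100_000)
    y_mc = surrogate(x_mc)
    print(f"QoI mean = {y_mc.mean():.3f}, std = {y_mc.std():.3f}")
    ```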

  11. Prediction and Uncertainty in Computational Modeling of Complex Phenomena: A Whitepaper

    SciTech Connect

    Trucano, T.G.

    1999-01-20

    This report summarizes some challenges associated with the use of computational science to predict the behavior of complex phenomena. As such, the document is a compendium of ideas that have been generated by various staff at Sandia. The report emphasizes key components of the use of computational science to predict complex phenomena, including computational complexity and correctness of implementations, the nature of the comparison with data, the importance of uncertainty quantification in comprehending what the prediction is telling us, and the role of risk in making and using computational predictions. Both broad and more narrowly focused technical recommendations for research are given. Several computational problems are summarized that help to illustrate the issues we have emphasized. The tone of the report is informal, with virtually no mathematics. However, we have attempted to provide a useful bibliography that would assist the interested reader in pursuing the content of this report in greater depth.

  12. Flood risk assessment at the regional scale: Computational challenges and the monster of uncertainty

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Papalexiou, Simon-Michael; Markonis, Yiannis; Koukouvinos, Antonis; Vasiliades, Lampros; Papaioannou, George; Loukas, Athanasios

    2016-04-01

    We present a methodological framework for flood risk assessment at the regional scale, developed within the implementation of the EU Directive 2007/60 in Greece. This comprises three phases: (a) statistical analysis of extreme rainfall data, resulting to spatially-distributed parameters of intensity-duration-frequency (IDF) relationships and their confidence intervals, (b) hydrological simulations, using event-based semi-distributed rainfall-runoff approaches, and (c) hydraulic simulations, employing the propagation of flood hydrographs across the river network and the mapping of inundated areas. The flood risk assessment procedure is employed over the River Basin District of Thessaly, Greece, which requires schematization and modelling of hundreds of sub-catchments, each one examined for several risk scenarios. This is a challenging task, involving multiple computational issues to handle, such as the organization, control and processing of huge amount of hydrometeorological and geographical data, the configuration of model inputs and outputs, and the co-operation of several software tools. In this context, we have developed supporting applications allowing massive data processing and effective model coupling, thus drastically reducing the need for manual interventions and, consequently, the time of the study. Within flood risk computations we also account for three major sources of uncertainty, in an attempt to provide upper and lower confidence bounds of flood maps, i.e. (a) statistical uncertainty of IDF curves, (b) structural uncertainty of hydrological models, due to varying anteceded soil moisture conditions, and (c) parameter uncertainty of hydraulic models, with emphasis to roughness coefficients. Our investigations indicate that the combined effect of the above uncertainties (which are certainly not the unique ones) result to extremely large bounds of potential inundation, thus rising many questions about the interpretation and usefulness of current flood

  13. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.

    SciTech Connect

    Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)

    2006-10-01

    Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
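
    A toy sketch of an evidence-theory propagation for a single uncertain input and a monotone model, where interval endpoints suffice and belief/plausibility of an output event are accumulated from the basic probability assignments; for many inputs or non-monotone models one would fall back on the sampling strategy this presentation describes. The focal elements, model and threshold below are assumptions.

    ```python
    def model(x):
        # Monotone response used for interval propagation (illustrative)
        return 2.0 * x + 1.0

    # Evidence specification for one uncertain input: focal elements (intervals)
    # with basic probability assignments (BPAs) summing to one.
    focal = [((0.0, 0.4), 0.3), ((0.3, 0.7), 0.5), ((0.6, 1.0), 0.2)]

    threshold = 2.2          # question: is the model output <= threshold?
    belief = plausibility = 0.0
    for (lo, hi), bpa in focal:
        y_lo, y_hi = model(lo), model(hi)     # monotone model: endpoints map the interval
        if y_hi <= threshold:                 # whole image inside the event -> belief
            belief += bpa
        if y_lo <= threshold:                 # image intersects the event -> plausibility
            plausibility += bpa

    print(f"Bel = {belief:.2f} <= P <= Pl = {plausibility:.2f}")
    ```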

  14. Uncertainty Management in Seismic Vulnerability Assessment Using Granular Computing Based on Covering of Universe

    NASA Astrophysics Data System (ADS)

    Khamespanah, F.; Delavar, M. R.; Zare, M.

    2013-05-01

    An earthquake is an abrupt displacement of the earth's crust caused by the discharge of strain accumulated along faults or by volcanic eruptions. As a recurring natural cataclysm, earthquakes have always been a matter of concern in Tehran, the capital of Iran, a city lying on a number of known and unknown faults. Earthquakes can cause severe physical, psychological and financial damage. Consequently, some procedures should be developed to assist in modelling the potential casualties and their spatial uncertainty. One of these procedures is the production of seismic vulnerability maps to take preventive measures to mitigate corporeal and financial losses of future earthquakes. Since vulnerability assessment is a multi-criteria decision-making problem depending on some parameters and expert judgments, it undoubtedly is characterized by intrinsic uncertainties. In this study, it is attempted to use a granular computing (GrC) model based on a covering of the universe to handle the spatial uncertainty. The granular computing model concentrates on a general theory and methodology for problem solving as well as information processing by assuming multiple levels of granularity. Basic elements in granular computing are subsets, classes, and clusters of a universe called elements. In this research GrC is used for extracting classification rules based on seismic vulnerability with minimum entropy to handle uncertainty related to earthquake data. Tehran was selected as the study area. In our previous research, a granular computing model based on a partition model of the universe was employed. That model has some limitations in defining similarity between elements of the universe and defining granules. In the model, similarity between elements is defined based on an equivalence relation. According to this relation, two objects are similar based on some attributes, provided that for each attribute the values of these objects are equal. In this research a general relation for defining similarity between

  15. Computation of uncertainty for atmospheric emission projections from key pollutant sources in Spain

    NASA Astrophysics Data System (ADS)

    Lumbreras, Julio; García-Martos, Carolina; Mira, José; Borge, Rafael

    Emission projections are important for environmental policy, both to evaluate the effectiveness of abatement strategies and to determine legislation compliance in the future. Moreover, including uncertainty is an essential added value for decision makers. In this work, projection values and their associated uncertainty are computed for pollutant emissions corresponding to the most significant activities from the national atmospheric emission inventory in Spain. Until now, projections had been calculated under three main scenarios: "without measures" (WoM), "with measures" (WM) and "with additional measures" (WAM). For the first one, regression techniques had been applied, which are inadequate for time-dependent data. For the other scenarios, values had been computed taking into account expected activity growth, as well as policies and measures. However, only point forecasts had been computed. In this work statistical methodology has been applied for: a) inclusion of projection intervals for future time points, where the width of the intervals is a measure of uncertainty; b) for the WoM scenario, ARIMA models are applied to model the dynamics of the processes; c) in the WM scenario, bootstrap is applied as an additional non-parametric tool, which does not rely on distributional assumptions and is thus more general. The advantages of using ARIMA models for the WoM scenario including uncertainty are shown. Moreover, presenting the WM scenario allows one to observe whether projected emission values fall within the intervals, thus showing whether the measures to be taken to reach the scenario imply a significant improvement. Results also show how bootstrap techniques incorporate stochastic modelling to produce forecast intervals for the WM scenario.
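
    A small sketch of an ARIMA-based "without measures" projection with intervals, using statsmodels as an assumed tool (not necessarily the authors' own software); the synthetic emission series, model order and forecast horizon are illustrative.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Illustrative historical emission series (kt/yr)
    rng = np.random.default_rng(0)
    emissions = 100 + 2.0 * np.arange(18) + rng.normal(0, 3, 18)

    # "Without measures" scenario: let an ARIMA model capture the dynamics
    result = ARIMA(emissions, order=(1, 1, 1)).fit()
    forecast = result.get_forecast(steps=10)

    point = forecast.predicted_mean            # projected emissions
    intervals = forecast.conf_int(alpha=0.05)  # 95 % projection intervals
    for k, (lo, hi) in enumerate(intervals, start=1):
        print(f"year +{k}: {point[k - 1]:.1f}  [{lo:.1f}, {hi:.1f}]")
    ```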

  16. The effects of geometric uncertainties on computational modelling of knee biomechanics.

    PubMed

    Meng, Qingen; Fisher, John; Wilcox, Ruth

    2017-08-01

    The geometry of the articular components of the knee is an important factor in predicting joint mechanics in computational models. There are a number of uncertainties in the definition of the geometry of cartilage and meniscus, and evaluating the effects of these uncertainties is fundamental to understanding the level of reliability of the models. In this study, the sensitivity of knee mechanics to geometric uncertainties was investigated by comparing polynomial-based and image-based knee models and varying the size of the meniscus. The results suggested that the geometric uncertainties in cartilage and meniscus resulting from the resolution of MRI and the accuracy of segmentation caused considerable effects on the predicted knee mechanics. Moreover, even if the mathematical geometric descriptors can be very close to the image-based articular surfaces, the detailed contact pressure distribution produced by the mathematical geometric descriptors was not the same as that of the image-based model. However, the trends predicted by the models based on mathematical geometric descriptors were similar to those of the image-based models.

  17. Anthropometric approaches and their uncertainties to assigning computational phantoms to individual patients in pediatric dosimetry studies

    NASA Astrophysics Data System (ADS)

    Whalen, Scott; Lee, Choonsik; Williams, Jonathan L.; Bolch, Wesley E.

    2008-01-01

    Current efforts to reconstruct organ doses in children undergoing diagnostic imaging or therapeutic interventions using ionizing radiation typically rely upon the use of reference anthropomorphic computational phantoms coupled to Monte Carlo radiation transport codes. These phantoms are generally matched to individual patients based upon nearest age or sometimes total body mass. In this study, we explore alternative methods of phantom-to-patient matching with the goal of identifying those methods which yield the lowest residual errors in internal organ volumes. Various thoracic and abdominal organs were segmented and organ volumes obtained from chest-abdominal-pelvic (CAP) computed tomography (CT) image sets from 38 pediatric patients ranging in age from 2 months to 15 years. The organs segmented included the skeleton, heart, kidneys, liver, lungs and spleen. For each organ, least-squared regression lines, 95th percentile confidence intervals and 95th percentile prediction intervals were established as a function of patient age, trunk volume, estimated trunk mass, trunk height, and three estimates of the ventral body cavity volume based on trunk height alone, or in combination with circumferential, width and/or breadth measurements in the mid-chest of the patient. When matching phantom to patient based upon age, residual uncertainties in organ volumes ranged from 53% (lungs) to 33% (kidneys), and when trunk mass was used (surrogate for total body mass as we did not have images of patient head, arms or legs), these uncertainties ranged from 56% (spleen) to 32% (liver). When trunk height is used as the matching parameter, residual uncertainties in organ volumes were reduced to between 21 and 29% for all organs except the spleen (40%). In the case of the lungs and skeleton, the two-fold reduction in organ volume uncertainties was seen in moving from patient age to trunk height—a parameter easily measured in the clinic. When ventral body cavity volumes were used

  18. Embedded ensemble propagation for improving performance, portability, and scalability of uncertainty quantification on emerging computational architectures

    DOE PAGES

    Phipps, Eric T.; D'Elia, Marta; Edwards, Harold C.; ...

    2017-04-18

    In this study, quantifying simulation uncertainties is a critical component of rigorous predictive simulation. A key component of this is forward propagation of uncertainties in simulation input data to output quantities of interest. Typical approaches involve repeated sampling of the simulation over the uncertain input data, and can require numerous samples when accurately propagating uncertainties from large numbers of sources. Often simulation processes from sample to sample are similar and much of the data generated from each sample evaluation could be reused. We explore a new method for implementing sampling methods that simultaneously propagates groups of samples together in an embedded fashion, which we call embedded ensemble propagation. We show how this approach takes advantage of properties of modern computer architectures to improve performance by enabling reuse between samples, reducing memory bandwidth requirements, improving memory access patterns, improving opportunities for fine-grained parallelization, and reducing communication costs. We describe a software technique for implementing embedded ensemble propagation based on the use of C++ templates and describe its integration with various scientific computing libraries within Trilinos. We demonstrate improved performance, portability and scalability for the approach applied to the simulation of partial differential equations on a variety of CPU, GPU, and accelerator architectures, including up to 131,072 cores on a Cray XK7 (Titan).

  19. Uncertainty quantification through the Monte Carlo method in a cloud computing setting

    NASA Astrophysics Data System (ADS)

    Cunha, Americo; Nasser, Rafael; Sampaio, Rubens; Lopes, Hélio; Breitman, Karin

    2014-05-01

    The Monte Carlo (MC) method is the most common technique used for uncertainty quantification, due to its simplicity and good statistical results. However, its computational cost is extremely high, and, in many cases, prohibitive. Fortunately, the MC algorithm is easily parallelizable, which allows its use in simulations where the computation of a single realization is very costly. This work presents a methodology for the parallelization of the MC method, in the context of cloud computing. This strategy is based on the MapReduce paradigm, and allows an efficient distribution of tasks in the cloud. This methodology is illustrated on a problem of structural dynamics that is subject to uncertainties. The results show that the technique is capable of producing good results concerning statistical moments of low order. It is shown that even a simple problem may require many realizations for convergence of histograms, which makes the cloud computing strategy very attractive (due to its high scalability capacity and low-cost). Additionally, the results regarding the time of processing and storage space usage allow one to qualify this new methodology as a solution for simulations that require a number of MC realizations beyond the standard.
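    A minimal map/reduce-style analogue of the strategy described above is sketched below with Python's multiprocessing module: each map task evaluates one realization of an uncertain structural response and the reduce step aggregates low-order moments. The oscillator model, its parameters, and the sample count are hypothetical, and local multiprocessing stands in for the paper's cloud deployment.

```python
# Map/reduce-style Monte Carlo sketch: the "map" step computes one realization
# of a structural-dynamics response with an uncertain stiffness; the "reduce"
# step aggregates low-order statistical moments. Illustrative only.
import numpy as np
from multiprocessing import Pool

def one_realization(seed):
    """Map step: peak response of a 1-DOF oscillator with uncertain stiffness."""
    rng = np.random.default_rng(seed)
    k = rng.normal(1.0, 0.1)            # uncertain stiffness
    m, c, f0 = 1.0, 0.05, 1.0
    t = np.linspace(0.0, 50.0, 5001)
    dt = t[1] - t[0]
    x, v, peak = 0.0, 0.0, 0.0
    for ti in t:                         # semi-implicit Euler time stepping
        a = (f0 * np.sin(ti) - c * v - k * x) / m
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

if __name__ == "__main__":
    n_mc = 200
    with Pool() as pool:
        peaks = np.asarray(pool.map(one_realization, range(n_mc)))   # map
    print("mean peak response:", peaks.mean())                        # reduce
    print("std  peak response:", peaks.std(ddof=1))
```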

  20. Advanced computational tools for optimization and uncertainty quantification of carbon capture processes

    SciTech Connect

    Miller, David C.; Ng, Brenda; Eslick, John

    2014-01-01

    Advanced multi-scale modeling and simulation has the potential to dramatically reduce development time, resulting in considerable cost savings. The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and universities that is developing, demonstrating, and deploying a suite of multi-scale modeling and simulation tools. One significant computational tool is FOQUS, a Framework for Optimization and Quantification of Uncertainty and Sensitivity, which enables basic data submodels, including thermodynamics and kinetics, to be used within detailed process models to rapidly synthesize and optimize a process and determine the level of uncertainty associated with the resulting process. The overall approach of CCSI is described with a more detailed discussion of FOQUS and its application to carbon capture systems.

  1. Identification of intestinal wall abnormalities and ischemia by modeling spatial uncertainty in computed tomography imaging findings.

    PubMed

    Tsunoyama, Taichiro; Pham, Tuan D; Fujita, Takashi; Sakamoto, Tetsuya

    2014-10-01

    Intestinal abnormalities and ischemia are medical conditions in which inflammation and injury of the intestine are caused by inadequate blood supply. Acute ischemia of the small bowel can be life-threatening. Computed tomography (CT) is currently a gold standard for the diagnosis of acute intestinal ischemia in the emergency department. However, the assessment of the diagnostic performance of CT findings in the detection of intestinal abnormalities and ischemia has been a difficult task for both radiologists and surgeons. Little effort has been devoted to developing computerized systems for the automated identification of these types of complex gastrointestinal disorders. In this paper, a geostatistical mapping of spatial uncertainty in CT scans is introduced for medical image feature extraction, which can be effectively applied for diagnostic detection of intestinal abnormalities and ischemia from control patterns. Experimental results obtained from the analysis of clinical data suggest the usefulness of the proposed uncertainty mapping model.

  2. On correlated sources of uncertainty in four dimensional computed tomography data sets.

    PubMed

    Ehler, Eric D; Tome, Wolfgang A

    2010-06-01

    The purpose of this work is to estimate the degree of uncertainty inherent to a given four dimensional computed tomography (4D-CT) imaging modality and to test for interaction of the investigated factors (i.e., object displacement, velocity, and the period of motion) when determining the object motion coordinates, motion envelope, and the conformality with which it can be defined within a time based data series. A motion phantom consisting of four glass spheres embedded in low density foam on a one dimensional moving platform was used to investigate the interaction of uncertainty factors in motion trajectory that could be used in comparison of trajectory definition, motion envelope definition and conformality in an optimal 4D-CT imaging environment. The motion platform allowed for a highly defined motion trajectory that could be used as the ground truth in the comparison with observed motion in 4D-CT data sets. 4D-CT data sets were acquired for 9 different motion patterns. Multifactor analysis of variance (ANOVA) was performed where the factors considered were the phantom maximum velocity, object volume, and the image intensity used to delineate the high density objects. No statistical significance was found for three factor interaction for definition of the motion trajectory, motion envelope, or Dice Similarity Coefficient (DSC) conformality. Two factor interactions were found to be statistically significant for the DSC for the interactions of 1) object volume and the HU threshold used for delineation and 2) the object velocity and object volume. Moreover, a statistically significant single factor direct proportionality was observed between the maximum velocity and the mean tracking error. In this work multiple factors impacting the uncertainty in 4D data sets have been considered and some statistically significant two-factor interactions have been identified. Therefore, the detailed evaluation of errors and uncertainties in 4D imaging modalities is recommended in

  3. Uncertainty quantification based on pillars of experiment, theory, and computation. Part I: Data analysis

    NASA Astrophysics Data System (ADS)

    Elishakoff, I.; Sarlin, N.

    2016-06-01

    In this paper we provide a general methodology of analysis and design of systems involving uncertainties. Available experimental data is enclosed by some geometric figures (triangle, rectangle, ellipse, parallelogram, super ellipse) of minimum area. Then these areas are inflated by resorting to the Chebyshev inequality in order to take into account the forecasted data. The next step consists of evaluating the response of the system when uncertainties are confined to one of the above five suitably inflated geometric figures. This step involves a combined theoretical and computational analysis. We evaluate the maximum response of the system subjected to variation of uncertain parameters in each hypothesized region. The results of triangular, interval, ellipsoidal, parallelogram, and super ellipsoidal calculi are compared with the view of identifying the region that leads to the minimum of the maximum response. That response is identified as a result of the suggested predictive inference. The methodology thus synthesizes the probabilistic notion with each of the five calculi. Using the term "pillar" in the title was inspired by the News Release (2013) on according the Honda Prize to J. Tinsley Oden, stating, among others, that "Dr. Oden refers to computational science as the "third pillar" of scientific inquiry, standing beside theoretical and experimental science. Computational science serves as a new paradigm for acquiring knowledge and informing decisions important to humankind". Analysis of systems with uncertainties necessitates employment of all three pillars. The analysis is based on the assumption that the five shapes are each different conservative estimates of the true bounding region. The smallest of the maximal displacements in x and y directions (for a 2D system) therefore provides the closest estimate of the true displacements based on the above assumption.
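    The inflation step described above can be illustrated with the Chebyshev inequality alone: enclose the observed data in a coordinate-aligned box and widen it so that each uncertain parameter is expected to fall inside with probability at least 1 - alpha. The sketch below uses synthetic two-dimensional data; the paper's five minimum-area figures and its response evaluation are not reproduced.

```python
# Chebyshev-inequality inflation sketch: the box enclosing the observed data is
# widened so that, per coordinate, P(|X - mu| >= k*sigma) <= 1/k^2 = alpha.
# Synthetic data; the paper's minimum-area geometric fits are not reproduced.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=[2.0, -1.0], scale=[0.3, 0.5], size=(50, 2))  # hypothetical samples

alpha = 0.05                       # allowed exceedance probability per coordinate
k = 1.0 / np.sqrt(alpha)           # Chebyshev inflation factor

mu = data.mean(axis=0)
sigma = data.std(axis=0, ddof=1)

raw_box = np.c_[data.min(axis=0), data.max(axis=0)]         # box around observed data
inflated_box = np.c_[mu - k * sigma, mu + k * sigma]         # inflated to anticipate future data

print("observed box:\n", raw_box)
print("Chebyshev-inflated box (>= 95% per coordinate):\n", inflated_box)
```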

  4. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations

    NASA Technical Reports Server (NTRS)

    Navard, Sharon E.

    1989-01-01

    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.
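    Two of the computations mentioned above, combining independent calibration error sources and checking the four-to-one ratio of instrument specification to calibration-standard uncertainty, can be sketched in a few lines. The error sources, their magnitudes, and the specification below are hypothetical; root-sum-square combination is assumed rather than taken from the report.

```python
# Sketch: (1) combine independent calibration error sources by root-sum-square,
# and (2) test the four-to-one ratio of instrument specification to calibrating-
# standard uncertainty. All values are hypothetical.
import math

# Hypothetical error sources for a force calibration, in newtons (1-sigma)
error_sources = {
    "reference load cell": 0.8,
    "readout resolution":  0.3,
    "alignment":           0.5,
    "temperature drift":   0.2,
}

u_cal = math.sqrt(sum(u**2 for u in error_sources.values()))   # root-sum-square combination
instrument_spec = 2.5                                          # unit-under-test specification (N)
ratio = instrument_spec / u_cal

print(f"combined calibration uncertainty: {u_cal:.2f} N")
print(f"spec-to-standard ratio: {ratio:.1f}:1 "
      f"({'meets' if ratio >= 4.0 else 'does not meet'} the 4:1 requirement)")
```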

  5. Uncertainty studies of real anode surface area in computational analysis for molten salt electrorefining

    NASA Astrophysics Data System (ADS)

    Choi, Sungyeol; Park, Jaeyeong; Hoover, Robert O.; Phongikaroon, Supathorn; Simpson, Michael F.; Kim, Kwang-Rag; Hwang, Il Soon

    2011-09-01

    This study examines how much cell potential changes with five differently assumed real anode surface area cases. Determining the real anode surface area is a significant issue to be resolved for precisely modeling molten salt electrorefining. Based on a three-dimensional electrorefining model, calculated cell potentials are compared with an experimental cell potential variation over 80 h of operation of the Mark-IV electrorefiner with driver fuel from the Experimental Breeder Reactor II. We succeeded in achieving good agreement with the overall trend of the experimental data with an appropriate selection of the mode for the real anode surface area, but there are still local inconsistencies between the theoretical calculation and the experimental observation. In addition, the results were validated and compared with two-dimensional results to identify possible uncertainty factors that had to be further considered in a computational electrorefining analysis. These uncertainty factors include material properties, heterogeneous material distribution, surface roughness, and current efficiency. Zirconium's abundance and complex behavior have more impact on uncertainty towards the latter period of electrorefining for a given batch of fuel. The benchmark results found that anode materials would be dissolved from both axial and radial directions, at least for low burn-up metallic fuels, after the active liquid sodium bonding was dissolved.

  6. Uncertainty Studies of Real Anode Surface Area in Computational Analysis for Molten Salt Electrorefining

    SciTech Connect

    Sungyeol Choi; Jaeyeong Park; Robert O. Hoover; Supathorn Phongikaroon; Michael F. Simpson; Kwang-Rag Kim; Il Soon Hwang

    2011-09-01

    This study examines how much cell potential changes with five differently assumed real anode surface area cases. Determining the real anode surface area is a significant issue to be resolved for precisely modeling molten salt electrorefining. Based on a three-dimensional electrorefining model, calculated cell potentials are compared with an experimental cell potential variation over 80 hours of operation of the Mark-IV electrorefiner with driver fuel from the Experimental Breeder Reactor II. We succeeded in achieving good agreement with the overall trend of the experimental data with an appropriate selection of the mode for the real anode surface area, but there are still local inconsistencies between the theoretical calculation and the experimental observation. In addition, the results were validated and compared with two-dimensional results to identify possible uncertainty factors that had to be further considered in a computational electrorefining analysis. These uncertainty factors include material properties, heterogeneous material distribution, surface roughness, and current efficiency. Zirconium's abundance and complex behavior have more impact on uncertainty towards the latter period of electrorefining for a given batch of fuel. The benchmark results found that anode materials would be dissolved from both axial and radial directions, at least for low burn-up metallic fuels, after the active liquid sodium bonding was dissolved.

  7. Measuring, using, and reducing experimental and computational uncertainty in reliability analysis of composite laminates

    NASA Astrophysics Data System (ADS)

    Smarslok, Benjamin P.

    The failure of the composite hydrogen tanks on the X-33 Reusable Launch Vehicle (RLV) from combined thermal and mechanical failure modes created a situation where the design weight was highly sensitive to uncertainties. Through previous research on sensitivity and reliability analysis of this problem, three areas of potential uncertainty reduction were recognized and became the focal points for this dissertation. The transverse elastic modulus and coefficient of thermal expansion were cited as being particularly sensitive input parameters with respect to weight. Measurement uncertainty analysis was performed on transverse modulus experiments, where the intermediate thickness measurements proved to be the greatest contributor to uncertainty. Data regarding correlations in the material properties of composite laminates are not always available; however, the significance of correlated properties for the probability of failure was detected. Therefore, a model was developed for correlations in composite properties based on micromechanics, specifically fiber volume fraction. The correlations from fiber volume fraction were combined with experimental data to give an estimate of the complete uncertainty, including material variability and measurement error. The probability of failure was compared for correlated material properties and independent random variables in an example pressure vessel problem. Including the correlations had a significant effect on the failure probability; however, whether the design is unsafe or inefficient can depend on the material system. Reliability-based design simulations often use the traditional, crude Monte Carlo method as a sampling procedure for predicting failure. The combination of designing for very small failure probabilities (~10⁻⁸ to 10⁻⁶) and using computationally expensive finite element models makes traditional Monte Carlo very costly. The separable Monte Carlo method, which is an extension of conditional expectation, takes advantage of statistical

  8. Dynamic rating curve assessment for hydrometric stations and computation of the associated uncertainties: Quality and station management indicators

    NASA Astrophysics Data System (ADS)

    Morlot, Thomas; Perret, Christian; Favre, Anne-Catherine; Jalbert, Jonathan

    2014-09-01

    A rating curve is used to indirectly estimate the discharge in rivers based on water level measurements. The discharge values obtained from a rating curve include uncertainties related to the direct stage-discharge measurements (gaugings) used to build the curves, the quality of fit of the curve to these measurements and the constant changes in the river bed morphology. Moreover, the uncertainty of discharges estimated from a rating curve increases with the “age” of the rating curve. The level of uncertainty at a given point in time is therefore particularly difficult to assess. A “dynamic” method has been developed to compute rating curves while calculating associated uncertainties, thus making it possible to regenerate streamflow data with uncertainty estimates. The method is based on historical gaugings at hydrometric stations. A rating curve is computed for each gauging and a model of the uncertainty is fitted for each of them. The model of uncertainty takes into account the uncertainties in the measurement of the water level, the quality of fit of the curve, the uncertainty of gaugings and the increase of the uncertainty of discharge estimates with the age of the rating curve computed with a variographic analysis (Jalbert et al., 2011). The presented dynamic method can answer important questions in the field of hydrometry such as “How many gaugings a year are required to produce streamflow data with an average uncertainty of X%?” and “When and in what range of water flow rates should these gaugings be carried out?”. The Rocherousse hydrometric station (France, Haute-Durance watershed, 946 [km2]) is used as an example throughout the paper. Other stations are used to illustrate certain points.
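    As a simplified illustration of the rating-curve fitting that underlies the method above, the sketch below fits a power-law rating to a handful of hypothetical gaugings and reports a residual-based relative uncertainty. The dynamic, age-dependent uncertainty model of the paper (one curve per gauging, variographic growth of uncertainty) is not reproduced.

```python
# Fit a power-law rating curve Q = a * (h - h0)^b to gaugings and report a
# residual-based relative uncertainty for estimated discharges. Data are
# hypothetical; the paper's age-dependent uncertainty model is not reproduced.
import numpy as np
from scipy.optimize import curve_fit

stage = np.array([0.42, 0.55, 0.70, 0.88, 1.05, 1.30, 1.62, 2.01])   # water level h [m]
discharge = np.array([1.1, 2.0, 3.4, 5.6, 8.2, 13.0, 21.5, 35.0])    # gauged Q [m^3/s]

def rating(h, a, b, h0):
    return a * (h - h0) ** b

params, cov = curve_fit(rating, stage, discharge,
                        p0=[5.0, 1.8, 0.0],
                        bounds=([0.1, 0.5, -1.0], [100.0, 5.0, 0.4]))

resid = np.log(discharge) - np.log(rating(stage, *params))
rel_sigma = np.sqrt(np.mean(resid**2))          # relative (log-space) residual scatter

h_new = 1.2
q_new = rating(h_new, *params)
print(f"Q({h_new} m) ~ {q_new:.1f} m^3/s  +/- {100 * rel_sigma:.1f}% (1-sigma, residual-based)")
```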

  9. NASTRAN variance analysis and plotting of HBDY elements. [analysis of uncertainties of the computer results as a function of uncertainties in the input data

    NASA Technical Reports Server (NTRS)

    Harder, R. L.

    1974-01-01

    The NASTRAN Thermal Analyzer has been extended to perform variance analysis and to plot the thermal boundary (HBDY) elements. The objective of the variance analysis addition is to assess the sensitivity of temperature variances resulting from uncertainties inherent in input parameters for heat conduction analysis. The plotting capability provides the ability to check the geometry (location, size and orientation) of the boundary elements of a model in relation to the conduction elements. Variance analysis is the study of uncertainties of the computed results as a function of uncertainties of the input data. To study this problem using NASTRAN, a solution is made for the expected values of all inputs, plus another solution for each uncertain variable. A variance analysis module subtracts the results to form derivatives, and can then determine the expected deviations of output quantities.
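    The variance-analysis procedure described above (one solution at the expected inputs plus one per uncertain variable, differenced to form derivatives) can be sketched generically as below. The stand-in model and its input uncertainties are hypothetical and do not represent a NASTRAN thermal solution.

```python
# Generic variance-analysis sketch: one run at the expected input values plus
# one run per uncertain input gives finite-difference sensitivities, from which
# the output variance is propagated to first order. The "model" is a stand-in.
import numpy as np

def model(params):
    """Stand-in for a heat-conduction solve: returns a nodal 'temperature'."""
    k, h, q = params                       # conductivity, film coefficient, heat load
    return q / (k + h) + 25.0

nominal = np.array([2.0, 0.5, 100.0])      # expected values of the inputs
sigmas  = np.array([0.2, 0.05, 5.0])       # 1-sigma uncertainties of the inputs

t0 = model(nominal)
derivs = np.empty_like(nominal)
for i, d in enumerate(sigmas):
    perturbed = nominal.copy()
    perturbed[i] += d                      # one extra solution per uncertain variable
    derivs[i] = (model(perturbed) - t0) / d

var_t = np.sum((derivs * sigmas) ** 2)     # first-order variance propagation
print(f"nominal temperature: {t0:.2f}, expected deviation: {np.sqrt(var_t):.2f}")
```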

  10. Balancing Exploration, Uncertainty Representation and Computational Time in Many-Objective Reservoir Policy Optimization

    NASA Astrophysics Data System (ADS)

    Zatarain-Salazar, J.; Reed, P. M.; Quinn, J.; Giuliani, M.; Castelletti, A.

    2016-12-01

    As we confront the challenges of managing river basin systems with a large number of reservoirs and increasingly uncertain tradeoffs impacting their operations (due to, e.g. climate change, changing energy markets, population pressures, ecosystem services, etc.), evolutionary many-objective direct policy search (EMODPS) solution strategies will need to address the computational demands associated with simulating more uncertainties and therefore optimizing over increasingly noisy objective evaluations. Diagnostic assessments of state-of-the-art many-objective evolutionary algorithms (MOEAs) to support EMODPS have highlighted that search time (or number of function evaluations) and auto-adaptive search are key features for successful optimization. Furthermore, auto-adaptive MOEA search operators are themselves sensitive to having a sufficient number of function evaluations to learn successful strategies for exploring complex spaces and for escaping from local optima when stagnation is detected. Fortunately, recent parallel developments allow coordinated runs that enhance auto-adaptive algorithmic learning and can handle scalable and reliable search with limited wall-clock time, but at the expense of the total number of function evaluations. In this study, we analyze this tradeoff between parallel coordination and depth of search using different parallelization schemes of the Multi-Master Borg on a many-objective stochastic control problem. We also consider the tradeoff between better representing uncertainty in the stochastic optimization, and simplifying this representation to shorten the function evaluation time and allow for greater search. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple competing objectives for hydropower production, urban water supply, recreation and environmental flows need to be balanced. Our results provide guidance for balancing exploration, uncertainty, and computational demands when using the EMODPS

  11. Probabilistic approaches to compute uncertainty intervals and sensitivity factors of ultrasonic simulations of a weld inspection.

    PubMed

    Rupin, F; Blatman, G; Lacaze, S; Fouquet, T; Chassignole, B

    2014-04-01

    For comprehension purposes, numerical computations are more and more used to simulate the propagation phenomena observed during experimental inspections. However, good agreement between experimental and simulated data necessitates the use of accurate input data and thus a good characterization of the inspected material. Generally the input data are provided by experimental measurements and are consequently tainted with uncertainties. Thus, it becomes necessary to evaluate the impact of these uncertainties on the outputs of the numerical model. The aim of this study is to perform a probabilistic analysis of an ultrasonic inspection of an austenitic weld containing a manufactured defect based on advanced techniques such as polynomial chaos expansions and computation of sensitivity factors (Sobol, DGSM). The simulation of this configuration with the finite element code ATHENA2D was performed 6000 times with variations of the input parameters (the columnar grain orientation and the elastic constants of the material). The 6000 sets of input parameters were obtained from adapted statistical laws. The distributions of the output parameters (the amplitude and the position of the defect echo) were then analyzed and the 95% confidence intervals were determined.
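    The probabilistic loop described above, drawing inputs from assumed statistical laws, running the model for each set, and extracting 95% intervals of the outputs, is sketched below with a stand-in model. The ATHENA2D finite element code and the polynomial-chaos and Sobol/DGSM sensitivity post-processing are not reproduced; the input laws and the echo-amplitude function are invented.

```python
# Generic probabilistic-analysis loop: sample input parameters from assumed
# statistical laws, evaluate the model, and report 95% intervals of the output.
# The "model" below is a stand-in, not a finite element simulation.
import numpy as np

rng = np.random.default_rng(42)
n_runs = 6000

# Assumed input laws (illustrative): grain orientation [deg] and one elastic constant [GPa]
orientation = rng.normal(loc=0.0, scale=10.0, size=n_runs)
c11 = rng.normal(loc=240.0, scale=12.0, size=n_runs)

def defect_echo(theta_deg, c):
    """Stand-in for the simulated defect echo amplitude (arbitrary units)."""
    return np.cos(np.radians(theta_deg)) * (c / 240.0) ** 0.5

amplitude = defect_echo(orientation, c11)

lo, hi = np.percentile(amplitude, [2.5, 97.5])
print(f"echo amplitude 95% interval: [{lo:.3f}, {hi:.3f}], mean {amplitude.mean():.3f}")
```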

  12. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    SciTech Connect

    Hadjidoukas, P.E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  13. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  14. Multi Objective Optimization for Calibration and Efficient Uncertainty Analysis of Computationally Expensive Watershed Models

    NASA Astrophysics Data System (ADS)

    Akhtar, T.; Shoemaker, C. A.

    2011-12-01

    Assessing the sensitivity of calibration results to different calibration criteria can be done through multi objective optimization that considers multiple calibration criteria. This analysis can be extended to uncertainty analysis by comparing the results of simulation of the model with parameter sets from many points along a Pareto Front. In this study we employ multi-objective optimization in order to understand which parameter values should be used for the flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville Reservoir in upstate New York. The comprehensive analysis procedure encapsulates identification of suitable objectives, analysis of trade-offs obtained through multi-objective optimization, and the impact of the trade-offs on uncertainty. Examples of multiple criteria can include a) quality of the fit in different seasons, b) quality of the fit for high flow events and for low flow events, c) quality of the fit for different constituents (e.g. water versus nutrients). Many distributed watershed models are computationally expensive and include a large number of parameters that are to be calibrated. Efficient optimization algorithms are hence needed to find good solutions to multi-criteria calibration problems in a feasible amount of time. We apply a new algorithm called Gap Optimized Multi-Objective Optimization using Response Surfaces (GOMORS) for efficient multi-criteria optimization of the Cannonsville SWAT watershed calibration problem. GOMORS is a stochastic optimization method, which makes use of Radial Basis Functions for approximation of the computationally expensive objectives. GOMORS performance is also compared against other multi-objective algorithms ParEGO and NSGA-II. ParEGO is a kriging based efficient multi-objective optimization algorithm, whereas NSGA-II is a well-known multi-objective evolutionary optimization algorithm. GOMORS is more efficient than both ParEGO and NSGA-II in providing

  15. "I Guess My Question Is": What Is the Co-Occurrence of Uncertainty and Learning in Computer-Mediated Discourse?

    ERIC Educational Resources Information Center

    Jordan, Michelle E.; Cheng, An-Chih Janne; Schallert, Diane; Song, Kwangok; Lee, SoonAh; Park, Yangjoo

    2014-01-01

    The purpose of this study was to contribute to a better understanding of learning in computer-supported collaborative learning (CSCL) environments by investigating the co-occurrence of uncertainty expressions and expressions of learning in a graduate course in which students collaborated in classroom computer-mediated discussions. Results showed…

  16. Experimental evaluation of the uncertainty associated with the result of feature-of-size measurements through computed tomography

    NASA Astrophysics Data System (ADS)

    Fernandes, T. L.; Donatelli, G. D.; Baldo, C. R.

    2016-07-01

    Computed tomography for dimensional metrology has been used in the quality control loop for about a decade. Due to the complex system of measurement error causes, measurement uncertainty has generally not been reported consistently. The ISO 15530-3 experimental approach, which makes use of calibrated parts, has been tested for estimating the uncertainty of CT-based measurements of features of size of a test object made of POM. Particular attention is given to the design of experiment and to the measurement uncertainty components. The most significant experimental findings are outlined and discussed in this paper.

  17. Differential effects of reward and punishment in decision making under uncertainty: a computational study.

    PubMed

    Duffin, Elaine; Bland, Amy R; Schaefer, Alexandre; de Kamps, Marc

    2014-01-01

    Computational models of learning have proved largely successful in characterizing potential mechanisms which allow humans to make decisions in uncertain and volatile contexts. We report here findings that extend existing knowledge and show that a modified reinforcement learning model, which has separate parameters according to whether the previous trial gave a reward or a punishment, can provide the best fit to human behavior in decision making under uncertainty. More specifically, we examined the fit of our modified reinforcement learning model to human behavioral data in a probabilistic two-alternative decision making task with rule reversals. Our results demonstrate that this model predicted human behavior better than a series of other models based on reinforcement learning or Bayesian reasoning. Unlike the Bayesian models, our modified reinforcement learning model does not include any representation of rule switches. When our task is considered purely as a machine learning task, to gain as many rewards as possible without trying to describe human behavior, the performance of modified reinforcement learning and Bayesian methods is similar. Others have used various computational models to describe human behavior in similar tasks; however, we are not aware of any who have compared Bayesian reasoning with reinforcement learning modified to differentiate rewards and punishments.
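    A minimal sketch of the kind of modified reinforcement learning model described above is given below: a delta-rule learner with separate learning rates for rewarded and punished trials, run on a two-alternative probabilistic task with rule reversals. The parameter values, reward probabilities, and reversal schedule are illustrative, not the fitted values from the study.

```python
# Delta-rule reinforcement learning with outcome-dependent learning rates,
# run on a probabilistic two-alternative task with rule reversals.
# Parameter values and task settings are illustrative only.
import numpy as np

rng = np.random.default_rng(7)

alpha_reward, alpha_punish = 0.30, 0.10   # separate learning rates
beta = 5.0                                # softmax inverse temperature
n_trials, reversal_every = 400, 100

Q = np.zeros(2)                           # action values for the two options
good_option, correct = 0, 0               # currently rewarded option, running score

for t in range(n_trials):
    if t > 0 and t % reversal_every == 0:
        good_option = 1 - good_option     # rule reversal

    # softmax choice between the two options
    p1 = 1.0 / (1.0 + np.exp(-beta * (Q[1] - Q[0])))
    choice = int(rng.random() < p1)

    # probabilistic outcome: +1 (reward) or -1 (punishment)
    p_reward = 0.8 if choice == good_option else 0.2
    outcome = 1.0 if rng.random() < p_reward else -1.0

    # delta rule with a learning rate that depends on the outcome type
    alpha = alpha_reward if outcome > 0 else alpha_punish
    Q[choice] += alpha * (outcome - Q[choice])

    correct += int(choice == good_option)

print(f"fraction of choices matching the current rule: {correct / n_trials:.2f}")
```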

  18. Overcoming computational uncertainties to reveal chemical sensitivity in single molecule conduction calculations

    NASA Astrophysics Data System (ADS)

    Solomon, Gemma C.; Reimers, Jeffrey R.; Hush, Noel S.

    2005-06-01

    In the calculation of conduction through single molecules, approximations about the geometry and electronic structure of the system are usually made in order to simplify the problem. Previously [G. C. Solomon, J. R. Reimers, and N. S. Hush, J. Chem. Phys. 121, 6615 (2004)], we have shown that, in calculations employing cluster models for the electrodes, proper treatment of the open-shell nature of the clusters is the most important computational feature required to make the results sensitive to variations in the structural and chemical features of the system. Here, we expand this and establish a general hierarchy of requirements involving treatment of geometrical approximations. These approximations are categorized into two classes: those associated with finite-dimensional methods for representing the semi-infinite electrodes, and those associated with the chemisorption topology. We show that ca. 100 unique atoms are required in order to properly characterize each electrode: using fewer atoms leads to nonsystematic variations in conductivity that can overwhelm the subtler changes. The choice of binding site is shown to be the next most important feature, while some effects that are difficult to control experimentally concerning the orientations at each binding site are actually shown to be insignificant. Verification of this result provides a general test for the precision of computational procedures for molecular conductivity. Predictions concerning the dependence of conduction on substituent and other effects on the central molecule are found to be meaningful only when they exceed the uncertainties of the effects associated with binding-site variation.

  19. Overcoming computational uncertainties to reveal chemical sensitivity in single molecule conduction calculations.

    PubMed

    Solomon, Gemma C; Reimers, Jeffrey R; Hush, Noel S

    2005-06-08

    In the calculation of conduction through single molecules, approximations about the geometry and electronic structure of the system are usually made in order to simplify the problem. Previously [G. C. Solomon, J. R. Reimers, and N. S. Hush, J. Chem. Phys. 121, 6615 (2004)], we have shown that, in calculations employing cluster models for the electrodes, proper treatment of the open-shell nature of the clusters is the most important computational feature required to make the results sensitive to variations in the structural and chemical features of the system. Here, we expand this and establish a general hierarchy of requirements involving treatment of geometrical approximations. These approximations are categorized into two classes: those associated with finite-dimensional methods for representing the semi-infinite electrodes, and those associated with the chemisorption topology. We show that ca. 100 unique atoms are required in order to properly characterize each electrode: using fewer atoms leads to nonsystematic variations in conductivity that can overwhelm the subtler changes. The choice of binding site is shown to be the next most important feature, while some effects that are difficult to control experimentally concerning the orientations at each binding site are actually shown to be insignificant. Verification of this result provides a general test for the precision of computational procedures for molecular conductivity. Predictions concerning the dependence of conduction on substituent and other effects on the central molecule are found to be meaningful only when they exceed the uncertainties of the effects associated with binding-site variation.

  20. Differential effects of reward and punishment in decision making under uncertainty: a computational study

    PubMed Central

    Duffin, Elaine; Bland, Amy R.; Schaefer, Alexandre; de Kamps, Marc

    2014-01-01

    Computational models of learning have proved largely successful in characterizing potential mechanisms which allow humans to make decisions in uncertain and volatile contexts. We report here findings that extend existing knowledge and show that a modified reinforcement learning model, which has separate parameters according to whether the previous trial gave a reward or a punishment, can provide the best fit to human behavior in decision making under uncertainty. More specifically, we examined the fit of our modified reinforcement learning model to human behavioral data in a probabilistic two-alternative decision making task with rule reversals. Our results demonstrate that this model predicted human behavior better than a series of other models based on reinforcement learning or Bayesian reasoning. Unlike the Bayesian models, our modified reinforcement learning model does not include any representation of rule switches. When our task is considered purely as a machine learning task, to gain as many rewards as possible without trying to describe human behavior, the performance of modified reinforcement learning and Bayesian methods is similar. Others have used various computational models to describe human behavior in similar tasks; however, we are not aware of any who have compared Bayesian reasoning with reinforcement learning modified to differentiate rewards and punishments. PMID:24600342

  1. A computer program for uncertainty analysis integrating regression and Bayesian methods

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary

    2014-01-01

    This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
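    The third interval type listed above, MCMC-based Bayesian credible intervals, reduces to taking percentiles of posterior samples. The toy sketch below uses a plain Metropolis sampler on a one-parameter Gaussian problem to show that step; it is not UCODE_2014 or the DREAM algorithm, and the data and prior are synthetic.

```python
# Toy illustration: draw posterior samples with a simple Metropolis sampler and
# take percentiles as a Bayesian credible interval. Not UCODE_2014 or DREAM.
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: observations of a single parameter theta with Gaussian noise
data = rng.normal(loc=2.5, scale=0.4, size=20)

def log_posterior(theta):
    log_prior = -0.5 * (theta / 10.0) ** 2               # weak Gaussian prior
    log_like = -0.5 * np.sum(((data - theta) / 0.4) ** 2)
    return log_prior + log_like

n_steps, step = 20000, 0.2
theta = 0.0
lp = log_posterior(theta)
samples = np.empty(n_steps)
for i in range(n_steps):
    prop = theta + step * rng.normal()
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:               # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples[i] = theta

posterior = samples[5000:]                                # discard burn-in
lo, hi = np.percentile(posterior, [2.5, 97.5])
print(f"95% credible interval for theta: [{lo:.2f}, {hi:.2f}]")
```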

  2. Fast computation of statistical uncertainty for spatiotemporal distributions estimated directly from dynamic cone beam SPECT projections

    SciTech Connect

    Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.

    2001-04-09

    The estimation of time-activity curves and kinetic model parameters directly from projection data is potentially useful for clinical dynamic single photon emission computed tomography (SPECT) studies, particularly in those clinics that have only single-detector systems and thus are not able to perform rapid tomographic acquisitions. Because the radiopharmaceutical distribution changes while the SPECT gantry rotates, projections at different angles come from different tracer distributions. A dynamic image sequence reconstructed from the inconsistent projections acquired by a slowly rotating gantry can contain artifacts that lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying regions of interest on the images. If cone beam collimators are used and the focal point of the collimators always remains in a particular transaxial plane, additional artifacts can arise in other planes reconstructed using insufficient projection samples [1]. If the projection samples truncate the patient's body, this can result in additional image artifacts. To overcome these sources of bias in conventional image based dynamic data analysis, we and others have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view [2-8]. In our previous work we developed a computationally efficient method for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions from dynamic SPECT projection data [5], which extended Formiconi's least squares algorithm for reconstructing temporally static distributions [9]. In addition, we studied the biases that result from modeling various orders of temporal continuity and using various time samplings [5]. In the present work, we address computational issues associated with evaluating the statistical uncertainty of

  3. Dose computation in conformal radiation therapy including geometric uncertainties: Methods and clinical implications

    NASA Astrophysics Data System (ADS)

    Rosu, Mihaela

    The aim of any radiotherapy is to tailor the tumoricidal radiation dose to the target volume and to deliver as little radiation dose as possible to all other normal tissues. However, the motion and deformation induced in human tissue by ventilatory motion is a major issue, as standard practice usually uses only one computed tomography (CT) scan (and hence one instance of the patient's anatomy) for treatment planning. The interfraction movement that occurs due to physiological processes over time scales shorter than the delivery of one treatment fraction leads to differences between the planned and delivered dose distributions. Due to the influence of these differences on tumors and normal tissues, the tumor control probabilities and normal tissue complication probabilities are likely to be impacted upon in the face of organ motion. In this thesis we apply several methods to compute dose distributions that include the effects of the treatment geometric uncertainties by using the time-varying anatomical information as an alternative to the conventional Planning Target Volume (PTV) approach. The proposed methods depend on the model used to describe the patient's anatomy. The dose and fluence convolution approaches for rigid organ motion are discussed first, with application to liver tumors and the rigid component of the lung tumor movements. For non-rigid behavior a dose reconstruction method that allows the accumulation of the dose to the deforming anatomy is introduced, and applied for lung tumor treatments. Furthermore, we apply the cumulative dose approach to investigate how much information regarding the deforming patient anatomy is needed at the time of treatment planning for tumors located in the thorax. The results are evaluated from a clinical perspective. All dose calculations are performed using a Monte Carlo based algorithm to ensure more realistic and more accurate handling of tissue heterogeneities, which is of particular importance in lung cancer treatment planning.

  4. Expressing Uncertainty in Computer-Mediated Discourse: Language as a Marker of Intellectual Work

    ERIC Educational Resources Information Center

    Jordan, Michelle E.; Schallert, Diane L.; Park, Yangjoo; Lee, SoonAh; Chiang, Yueh-hui Vanessa; Cheng, An-Chih Janne; Song, Kwangok; Chu, Hsiang-Ning Rebecca; Kim, Taehee; Lee, Haekyung

    2012-01-01

    Learning and dialogue may naturally engender feelings and expressions of uncertainty for a variety of reasons and purposes. Yet, little research has examined how patterns of linguistic uncertainty are enacted and changed over time as students reciprocally influence one another and the dialogical system they are creating. This study describes the…

  5. Quantifying the Uncertainty in a Computational Fluid Dynamics Turbulent Twin Jet Model

    NASA Astrophysics Data System (ADS)

    Lawrence, Seth Sheldon

    Ubiquitous application of CFD motivates the need to verify and validate CFD models and quantify uncertainty in the results. The objective of this research was to investigate the uncertainty interval over which an ANSYS Fluent CFD model predicted the axial velocity in a turbulent twin jet flow regime with 95% confidence. The modeling domain was composed of water, injected through a nozzle into a static holding tank. This configuration was described by the American Society of Mechanical Engineers (ASME), Nuclear System Thermal Fluids Behavior (V&V30) Standard Committee as a twin jet benchmark verification and validation problem. The steady Reynolds-Averaged Navier-Stokes (RANS) approach utilizing a realizable k-ε turbulence model was chosen. The system response quantity under consideration was the axial velocity of the flowfield in both the pre- and post-combining regions of the twin jet flow. The model input uncertainty in the jet width and spacing was treated as epistemic, with aleatory uncertainties in the mass-flow-rate and turbulence inputs for each jet. Numerical uncertainty was considered at the discretization level, through grid refinement and the Grid Convergence Index (GCI) method. Validation uncertainty was estimated through a validation procedure using experimental data. The uncertainties in the model inputs, numerics, and validation were combined to quantify the total uncertainty in the Fluent CFD model. The results indicated that numerical uncertainty was the dominant factor in the region near the jet nozzle, located before the two jets merge together. Moving further away from the nozzle, to the region where the jets merge to form a single jet, the numerical uncertainty was reduced significantly. In this region, differences between the model and experiment resulted in a dominant validation uncertainty. This uncertainty was observed as the consistent underprediction of axial velocity in the combined flow region. The final results offered a prediction
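    The Grid Convergence Index mentioned above is commonly evaluated with the standard three-grid procedure sketched below (observed order of accuracy from three systematically refined grids, then a safety-factored relative error). The velocity values and refinement ratio are hypothetical, not the twin-jet results of the study.

```python
# Standard three-grid Grid Convergence Index (GCI) calculation for quantifying
# discretization uncertainty. Solution values and the refinement ratio are
# hypothetical placeholders.
import math

r = 2.0                         # constant grid refinement ratio
f1, f2, f3 = 4.95, 4.80, 4.50   # axial velocity on fine, medium, coarse grids
Fs = 1.25                       # safety factor commonly used for three-grid studies

p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order of accuracy
e21 = (f2 - f1) / f1                                # relative error, fine vs. medium
gci_fine = Fs * abs(e21) / (r**p - 1.0)

print(f"observed order p = {p:.2f}")
print(f"GCI (fine grid)  = {100 * gci_fine:.2f}% of the fine-grid solution")
```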

  6. Probability distributions of molecular observables computed from Markov models. II. Uncertainties in observables and their time-evolution

    NASA Astrophysics Data System (ADS)

    Chodera, John D.; Noé, Frank

    2010-09-01

    Discrete-state Markov (or master equation) models provide a useful simplified representation for characterizing the long-time statistical evolution of biomolecules in a manner that allows direct comparison with experiments as well as the elucidation of mechanistic pathways for an inherently stochastic process. A vital part of meaningful comparison with experiment is the characterization of the statistical uncertainty in the predicted experimental measurement, which may take the form of an equilibrium measurement of some spectroscopic signal, the time-evolution of this signal following a perturbation, or the observation of some statistic (such as the correlation function) of the equilibrium dynamics of a single molecule. Without meaningful error bars (which arise from both approximation and statistical error), there is no way to determine whether the deviations between model and experiment are statistically meaningful. Previous work has demonstrated that a Bayesian method that enforces microscopic reversibility can be used to characterize the statistical component of correlated uncertainties in state-to-state transition probabilities (and functions thereof) for a model inferred from molecular simulation data. Here, we extend this approach to include the uncertainty in observables that are functions of molecular conformation (such as surrogate spectroscopic signals) characterizing each state, permitting the full statistical uncertainty in computed spectroscopic experiments to be assessed. We test the approach in a simple model system to demonstrate that the computed uncertainties provide a useful indicator of statistical variation, and then apply it to the computation of the fluorescence autocorrelation function measured for a dye-labeled peptide previously studied by both experiment and simulation.

  7. A methodology for computing uncertainty bounds of multivariable systems based on sector stability theory concepts

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1992-01-01

    The application of a sector-based stability theory approach to the formulation of useful uncertainty descriptions for linear, time-invariant, multivariable systems is explored. A review of basic sector properties and the sector-based approach is presented first. The sector-based approach is then applied to several general forms of parameter uncertainty to investigate its advantages and limitations. The results indicate that the sector uncertainty bound can be used effectively to evaluate the impact of parameter uncertainties on the frequency response of the design model. Inherent conservatism is a potential limitation of the sector-based approach, especially for highly dependent uncertain parameters. In addition, the representation of the system dynamics can affect the amount of conservatism reflected in the sector bound. Careful application of the model can help to reduce this conservatism, however, and the solution approach has some degrees of freedom that may be further exploited to reduce it.

  8. Computing continuous record of discharge with quantified uncertainty using index velocity observations: A probabilistic machine learning approach

    NASA Astrophysics Data System (ADS)

    Farahmand, Touraj; Hamilton, Stuart

    2016-04-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs). In general, the index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate and recommended when more than one specific discharge can be measured for a specific stage, such as in backwater and unsteady flow conditions caused by, but not limited to, the following: stream confluences, streams flowing into lakes or reservoirs, tide-affected streams, regulated streamflows (dams or control structures), or streams affected by meteorological forcing, such as strong prevailing winds. In existing index velocity modeling techniques, two models (ratings) are required: an index velocity model and a stage-area model. The outputs from each of these models, mean channel velocity (Vm) and cross-sectional area (A), are then multiplied together to compute a discharge. Mean channel velocity (Vm) can generally be determined by a multivariate parametric regression model, such as linear regression in the simplest case. The main challenges in the existing index velocity modeling techniques are: 1) preprocessing and QA/QC of continuous index velocity data and synchronizing them with discharge measurements; 2) the nonlinear relationship between mean velocity and index velocity, which is not uncommon at monitoring locations; 3) model exploration and analysis in order to find the optimal regression model predictor(s) and model type (linear vs. nonlinear and, if nonlinear, the number of parameters); 4) model changes caused by dynamical changes in the environment (geomorphic, biological) over time; 5) deployment of the final model into the Data Management Systems (DMS) for real-time discharge calculation; and 6) objective estimation of uncertainty caused by: field measurement errors; structural uncertainty; parameter uncertainty; and continuous sensor data
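    The two conventional ratings described above, a linear index-velocity rating and a stage-area relation whose product gives discharge, are sketched below with a residual-based uncertainty attached to the velocity rating. All measurements are hypothetical, and the probabilistic machine-learning extension proposed in the abstract is not reproduced.

```python
# Conventional index velocity method: a linear index-velocity rating for mean
# channel velocity and a stage-area rating, multiplied to give discharge.
# A residual-based 1-sigma uncertainty is attached to the velocity rating.
# All calibration data are hypothetical.
import numpy as np

index_velocity = np.array([0.21, 0.35, 0.48, 0.62, 0.80, 0.95])   # ADVM index velocity [m/s]
mean_velocity  = np.array([0.25, 0.41, 0.55, 0.70, 0.92, 1.08])   # measured mean velocity [m/s]
stage          = np.array([1.2, 1.5, 1.8, 2.1, 2.5, 2.8])         # stage [m]
area           = np.array([14.5, 18.2, 22.0, 25.9, 31.2, 35.3])   # measured cross-section area [m^2]

# Index-velocity rating: Vm = b0 + b1 * Vi (least squares)
b1, b0 = np.polyfit(index_velocity, mean_velocity, 1)
v_resid = mean_velocity - (b0 + b1 * index_velocity)
v_sigma = np.sqrt(np.sum(v_resid**2) / (len(v_resid) - 2))

# Stage-area rating: quadratic in stage
area_coeffs = np.polyfit(stage, area, 2)

def discharge(vi, h):
    vm = b0 + b1 * vi
    a = np.polyval(area_coeffs, h)
    q_sigma = a * v_sigma            # first-order: the area rating is treated as exact here
    return vm * a, q_sigma

q, dq = discharge(0.70, 2.3)
print(f"Q = {q:.1f} m^3/s  +/- {dq:.1f} m^3/s (1-sigma, velocity-rating residuals only)")
```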

  9. Computation of the intervals of uncertainties about the parameters found for identification

    NASA Technical Reports Server (NTRS)

    Mereau, P.; Raymond, J.

    1982-01-01

    A modeling method to calculate the intervals of uncertainty for parameters found by identification is described. The region of confidence and the general approach to the calculation of these intervals are discussed. The general subprograms for the determination of dimensions are described, together with their organizational charts, the tests carried out, and the listings of the different subprograms.

  10. PABS: A Computer Program to Normalize Emission Probabilities and Calculate Realistic Uncertainties

    SciTech Connect

    Caron, D. S.; Browne, E.; Norman, E. B.

    2009-08-21

    The program PABS normalizes relative particle emission probabilities to an absolute scale and calculates the relevant uncertainties on this scale. The program is written in Java using the JDK 1.6 library. For additional information about system requirements, the code itself, and compiling from source, see the README file distributed with this program. The mathematical procedures used are given below.
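    The normalization task described above can be illustrated generically: scale relative emission probabilities so they sum to one and propagate their uncertainties to first order, assuming uncorrelated inputs. This sketch is not the algorithm implemented in PABS; the relative probabilities and uncertainties are invented.

```python
# Generic sketch: normalize relative particle emission probabilities to an
# absolute scale (sum = 1) and propagate their uncertainties to first order,
# treating the relative values as uncorrelated. Not the PABS algorithm itself.
import numpy as np

rel = np.array([100.0, 45.0, 12.0, 3.0])     # relative emission probabilities (hypothetical)
rel_unc = np.array([2.0, 1.5, 0.6, 0.3])     # their 1-sigma uncertainties

S = rel.sum()
p = rel / S                                   # normalized (absolute) probabilities

# First-order propagation: dp_i/dr_i = (S - r_i)/S^2, dp_i/dr_j = -r_i/S^2 (j != i)
n = rel.size
J = -np.outer(rel, np.ones(n)) / S**2
J[np.diag_indices(n)] = (S - rel) / S**2
p_unc = np.sqrt((J**2) @ rel_unc**2)

for pi, ui in zip(p, p_unc):
    print(f"{100 * pi:6.2f} % +/- {100 * ui:4.2f} %")
print("sum of normalized probabilities:", p.sum())
```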

  11. Uncertainty Evaluation of Computational Model Used to Support the Integrated Powerhead Demonstration Project

    NASA Technical Reports Server (NTRS)

    Steele, W. G.; Molder, K. J.; Hudson, S. T.; Vadasy, K. V.; Rieder, P. T.; Giel, T.

    2005-01-01

    NASA and the U.S. Air Force are working on a joint project to develop a new hydrogen-fueled, full-flow, staged combustion rocket engine. The initial testing and modeling work for the Integrated Powerhead Demonstrator (IPD) project is being performed by NASA Marshall and Stennis Space Centers. A key factor in the testing of this engine is the ability to predict and measure the transient fluid flow during engine start and shutdown phases of operation. A model built by NASA Marshall in the ROCket Engine Transient Simulation (ROCETS) program is used to predict transient engine fluid flows. The model is initially calibrated to data from previous tests on the Stennis E1 test stand. The model is then used to predict the next run. Data from this run can then be used to recalibrate the model, providing a tool to guide the test program in incremental steps and reduce the risk to the prototype engine. In this paper, this type of model is referred to as a calibrated model. This paper proposes a method to estimate the uncertainty of a model calibrated to a set of experimental test data. The method is similar to that used in the calibration of experiment instrumentation. For the IPD example used in this paper, the model uncertainty is determined for both LOX and LH flow rates using previous data. The model is then shown to successfully predict another, similar test run within the uncertainty bounds. The paper summarizes the uncertainty methodology when a model is continually recalibrated with new test data. The methodology is general and can be applied to other calibrated models.

  12. Computer-Based Model Calibration and Uncertainty Analysis: Terms and Concepts

    DTIC Science & Technology

    2015-07-01

    in different ways (Madsen 2000; Boyle et al. 2000; Doherty and Johnston 2003). In some cases, the data is transformed before fitting to ERDC/CHL ... used to obtain probability distributions of an unknown variable (Robert and Casella 2004). MCS is a commonly used method for uncertainty propagation ... Hydrological Processes 6(3):279–298. Boyle, D. P., H. V. Gupta, and S. Sorooshian. 2000. Toward improved calibration of hydrologic models

  13. Real-time, mixed-mode computing architecture for waveform-resolved lidar systems with total propagated uncertainty

    NASA Astrophysics Data System (ADS)

    Ortman, Robert L.; Carr, Domenic A.; James, Ryan; Long, Daniel; O'Shaughnessy, Matthew R.; Valenta, Christopher R.; Tuell, Grady H.

    2016-05-01

    We have developed a prototype real-time computer for a bathymetric lidar capable of producing point clouds attributed with total propagated uncertainty (TPU). This real-time computer employs a "mixed-mode" architecture comprised of an FPGA, CPU, and GPU. Noise reduction and ranging are performed in the digitizer's user-programmable FPGA, and coordinates and TPU are calculated on the GPU. A Keysight M9703A digitizer with user-programmable Xilinx Virtex 6 FPGAs digitizes as many as eight channels of lidar data, performs ranging, and delivers the data to the CPU via PCIe. The floating-point-intensive coordinate and TPU calculations are performed on an NVIDIA Tesla K20 GPU. Raw data and computed products are written to an SSD RAID, and an attributed point cloud is displayed to the user. This prototype computer has been tested using 7m-deep waveforms measured at a water tank on the Georgia Tech campus, and with simulated waveforms to a depth of 20m. Preliminary results show the system can compute, store, and display about 20 million points per second.

  14. Mathematical and Computational Tools for Predictive Simulation of Complex Coupled Systems under Uncertainty

    SciTech Connect

    Ghanem, Roger

    2013-03-25

    Methods and algorithms are developed to enable the accurate analysis of problems that exhibit interacting physical processes with uncertainties. These uncertainties can pertain either to each of the physical processes or to the manner in which they depend on each other. These problems are cast within a polynomial chaos framework and their solution then involves either solving a large system of algebraic equations or a high dimensional numerical quadrature. In both cases, the curse of dimensionality is manifested. Procedures are developed for the efficient evaluation of the resulting linear equations that take advantage of the block sparse structure of these equations, resulting in a block recursive Schur complement construction. In addition, embedded quadratures are constructed that permit the evaluation of very high-dimensional integrals using low-dimensional quadratures adapted to particular quantities of interest. The low-dimensional integration is carried out in a transformed measure space in which the quantity of interest is low-dimensional. Finally, a procedure is also developed to discover a low-dimensional manifold, embedded in the initial high-dimensional one, in which scalar quantities of interest exist. This approach permits the functional expression of the reduced space in terms of the original space, thus permitting cross-scale sensitivity analysis.

  15. Development of a Computational Framework for Stochastic Co-optimization of Water and Energy Resource Allocations under Climatic Uncertainty

    NASA Astrophysics Data System (ADS)

    Xuan, Y.; Mahinthakumar, K.; Arumugam, S.; DeCarolis, J.

    2015-12-01

    Owing to the lack of a consistent approach to assimilate probabilistic forecasts for water and energy systems, utilization of climate forecasts for conjunctive management of these two systems is very limited. Prognostic management of these two systems presents a stochastic co-optimization problem that seeks to determine reservoir releases and power allocation strategies while minimizing the expected operational costs subject to probabilistic climate forecast constraints. To address these issues, we propose a high performance computing (HPC) enabled computational framework for stochastic co-optimization of water and energy resource allocations under climate uncertainty. The computational framework embodies a paradigm shift in which attributes of climate (e.g., precipitation, temperature) and their forecasted probability distributions are employed conjointly to inform seasonal water availability and electricity demand. The HPC enabled cyberinfrastructure framework is developed to perform detailed stochastic analyses, and to better quantify and reduce the uncertainties associated with water and power systems management by utilizing improved hydro-climatic forecasts. In this presentation, our stochastic multi-objective solver, extended from Optimus (Optimization Methods for Universal Simulators), is introduced. The solver uses a parallel cooperative multi-swarm method for the efficient solution of large-scale simulation-optimization problems on parallel supercomputers. The cyberinfrastructure harnesses HPC resources to perform intensive computations using ensemble forecast models of streamflow and power demand. The stochastic multi-objective particle swarm optimizer we developed is used to co-optimize water and power system models under constraints over a large number of ensembles. The framework sheds light on the application of climate forecasts and a cyber-innovation framework to improve management and promote the sustainability of water and energy systems.

  16. A Modeling Framework for Optimal Computational Resource Allocation Estimation: Considering the Trade-offs between Physical Resolutions, Uncertainty and Computational Costs

    NASA Astrophysics Data System (ADS)

    Moslehi, M.; de Barros, F.; Rajagopal, R.

    2014-12-01

    Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing hydrogeological characteristics of the field. The physical resolution (e.g. grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model predictions and physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a modeling framework for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally extensive examples. Having this framework at hand helps hydrogeologists achieve the optimum physical and statistical resolutions to minimize the error with a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions are investigated. We conclude that the computational
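
    A minimal sketch of this kind of trade-off, assuming an invented error model e(h, N) = sqrt((C_h h^p)^2 + (C_N / sqrt(N))^2) and an invented cost model: for each candidate grid spacing h, spend the available budget on Monte Carlo realizations and keep the combination with the smallest total error. All constants are placeholders, not values from the study.

      import numpy as np

      # Illustrative sketch (invented constants): trade off grid spacing h against the
      # number of Monte Carlo realizations N under a fixed computational budget.

      C_h, p = 5.0, 2.0        # assumed discretization-error constant and order
      C_N    = 1.0             # assumed statistical-error constant
      c0, d  = 1e-6, 3         # assumed cost per cell-update and spatial dimension
      budget = 1e4             # total CPU-time budget (arbitrary units)

      def total_error(h, N):
          # Joint error model: numerical (grid) error plus Monte Carlo sampling error.
          return np.hypot(C_h * h**p, C_N / np.sqrt(N))

      best = None
      for h in np.logspace(-2, -0.5, 60):          # candidate grid spacings
          cost_per_run = c0 * h**(-d)              # cost of one realization at resolution h
          N = int(budget / cost_per_run)           # realizations affordable at this h
          if N < 2:
              continue
          e = total_error(h, N)
          if best is None or e < best[0]:
              best = (e, h, N)

      e, h, N = best
      print(f"optimal h ≈ {h:.3g}, N ≈ {N}, total error ≈ {e:.3g}")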

  17. Computing quality scores and uncertainty for approximate pattern matching in geospatial semantic graphs

    DOE PAGES

    Stracuzzi, David John; Brost, Randolph C.; Phillips, Cynthia A.; ...

    2015-09-26

    Geospatial semantic graphs provide a robust foundation for representing and analyzing remote sensor data. In particular, they support a variety of pattern search operations that capture the spatial and temporal relationships among the objects and events in the data. However, in the presence of large data corpora, even a carefully constructed search query may return a large number of unintended matches. This work considers the problem of calculating a quality score for each match to the query, given that the underlying data are uncertain. As a result, we present a preliminary evaluation of three methods for determining both match quality scores and associated uncertainty bounds, illustrated in the context of an example based on overhead imagery data.

  18. Computing quality scores and uncertainty for approximate pattern matching in geospatial semantic graphs

    SciTech Connect

    Stracuzzi, David John; Brost, Randolph C.; Phillips, Cynthia A.; Robinson, David G.; Wilson, Alyson G.; Woodbridge, Diane M. -K.

    2015-09-26

    Geospatial semantic graphs provide a robust foundation for representing and analyzing remote sensor data. In particular, they support a variety of pattern search operations that capture the spatial and temporal relationships among the objects and events in the data. However, in the presence of large data corpora, even a carefully constructed search query may return a large number of unintended matches. This work considers the problem of calculating a quality score for each match to the query, given that the underlying data are uncertain. As a result, we present a preliminary evaluation of three methods for determining both match quality scores and associated uncertainty bounds, illustrated in the context of an example based on overhead imagery data.

  19. Alleviating Search Uncertainty through Concept Associations: Automatic Indexing, Co-Occurrence Analysis, and Parallel Computing.

    ERIC Educational Resources Information Center

    Chen, Hsinchun; Martinez, Joanne; Kirchhoff, Amy; Ng, Tobun D.; Schatz, Bruce R.

    1998-01-01

    Grounded on object filtering, automatic indexing, and co-occurrence analysis, an experiment was performed using a parallel supercomputer to analyze over 400,000 abstracts in an INSPEC computer engineering collection. A user evaluation revealed that system-generated thesauri were better than the human-generated INSPEC subject thesaurus in concept…

  20. Quantifying Uncertainty from Computational Factors in Simulations of a Model Ballistic System

    DTIC Science & Technology

    2017-08-01

    target is tracked as the primary quantity of interest. Overall, computational time serves as a secondary performance metric. Two different simulation...for public release; distribution is unlimited. 3 spatial dimensions. These codes belong to a general class of software known as hydrocodes.7...setting CONVECTION = 0. The Sandia Modified Youngs’ Reconstruction Algorithm was employed for tracking material interfaces. A special fragment

  1. Can megavoltage computed tomography reduce proton range uncertainties in treatment plans for patients with large metal implants?

    NASA Astrophysics Data System (ADS)

    Newhauser, Wayne D.; Giebeler, Annelise; Langen, Katja M.; Mirkovic, Dragan; Mohan, Radhe

    2008-05-01

    Treatment planning calculations for proton therapy require an accurate knowledge of radiological path length, or range, to the distal edge of the target volume. In most cases, the range may be calculated with sufficient accuracy using kilovoltage (kV) computed tomography (CT) images. However, metal implants such as hip prostheses can cause severe streak artifacts that lead to large uncertainties in proton range. The purposes of this study were to quantify streak-related range errors and to determine if they could be avoided by using artifact-free megavoltage (MV) CT images in treatment planning. Proton treatment plans were prepared for a rigid, heterogeneous phantom and for a prostate cancer patient with a metal hip prosthesis using corrected and uncorrected kVCT images alone, uncorrected MVCT images and a combination of registered MVCT and kVCT images (the hybrid approach). Streak-induced range errors of 5-12 mm were present in the uncorrected kVCT-based patient plan. Correcting the streaks by manually assigning estimated true Hounsfield units improved the range accuracy. In a rigid heterogeneous phantom, the implant-related range uncertainty was estimated at <3 mm for both the corrected kVCT-based plan and the uncorrected MVCT-based plan. The hybrid planning approach yielded the best overall result. In this approach, the kVCT images provided good delineation of soft tissues due to high-contrast resolution, and the streak-free MVCT images provided smaller range uncertainties because they did not require artifact correction.

  2. A review of computed tomography and manual dissection for calibration of devices for pig carcass classification - Evaluation of uncertainty.

    PubMed

    Olsen, Eli V; Christensen, Lars Bager; Nielsen, Dennis Brandborg

    2017-01-01

    Online pig carcass classification methods require calibration against a reference standard. More than 30 years ago, the first reference standard in the EU was defined as the total amount of lean meat in the carcass obtained by manual dissection. Later, the definition was simplified to include only the most important parts of the carcass to obtain a better balance between accuracy and cost. Recently, computed tomography (CT) obtained using medical X-ray scanners has been proposed as a reference standard. The error sources of both traditional (manual) dissection methods and the new methods based on images from CT scanning of pig carcasses are discussed in this paper. The uncertainty resulting from the effect of various error sources is estimated. We conclude that, without standardisation, the uncertainty is considerable for all the methods. However, methods based on volume estimation using CT and image analysis might lead to higher accuracy if necessary precautions are taken with respect to measuring protocol and reference materials.

  3. Personalized mitral valve closure computation and uncertainty analysis from 3D echocardiography.

    PubMed

    Grbic, Sasa; Easley, Thomas F; Mansi, Tommaso; Bloodworth, Charles H; Pierce, Eric L; Voigt, Ingmar; Neumann, Dominik; Krebs, Julian; Yuh, David D; Jensen, Morten O; Comaniciu, Dorin; Yoganathan, Ajit P

    2017-01-01

    Intervention planning is essential for successful Mitral Valve (MV) repair procedures. Finite-element models (FEM) of the MV could be used to achieve this goal, but the translation to the clinical domain is challenging. Many input parameters for the FEM models, such as tissue properties, are not known. In addition, only simplified MV geometry models can be extracted from non-invasive modalities such as echocardiography imaging, lacking major anatomical details such as the complex chordae topology. A traditional approach for FEM computation is to use a simplified model (also known as parachute model) of the chordae topology, which connects the papillary muscle tips to the free-edges and select basal points. Building on the existing parachute model a new and comprehensive MV model was developed that utilizes a novel chordae representation capable of approximating regional connectivity. In addition, a fully automated personalization approach was developed for the chordae rest length, removing the need for tedious manual parameter selection. Based on the MV model extracted during mid-diastole (open MV) the MV geometric configuration at peak systole (closed MV) was computed according to the FEM model. In this work the focus was placed on validating MV closure computation. The method is evaluated on ten in vitro ovine cases, where in addition to echocardiography imaging, high-resolution μCT imaging is available for accurate validation.

  4. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a
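
    The weighted least-squares objective that underlies this kind of calibration can be illustrated with a generic nonlinear-regression sketch such as the one below. This is not UCODE_2005 itself; the process model, observations, and weights are invented.

      import numpy as np
      from scipy.optimize import least_squares

      # Toy sketch: weighted nonlinear regression of a "process model" against
      # observations, mirroring the weighted least-squares objective described above.

      def process_model(params, x):
          # Hypothetical process model with two estimable parameters.
          a, b = params
          return a * (1.0 - np.exp(-b * x))

      x_obs = np.linspace(1.0, 10.0, 8)
      y_obs = np.array([1.8, 3.1, 4.0, 4.6, 5.0, 5.3, 5.5, 5.6])
      weights = 1.0 / 0.2**2 * np.ones_like(y_obs)   # w_i = 1 / variance of each observation

      def weighted_residuals(params):
          # least_squares minimizes sum(r_i^2); scaling by sqrt(w_i) recovers
          # the weighted objective sum w_i (obs_i - sim_i)^2.
          return np.sqrt(weights) * (y_obs - process_model(params, x_obs))

      fit = least_squares(weighted_residuals, x0=[5.0, 0.5])
      objective = np.sum(fit.fun**2)                 # weighted least-squares objective S(b)
      print("estimated parameters:", fit.x, " objective:", round(objective, 3))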

  5. Numerical study of premixed HCCI engine combustion and its sensitivity to computational mesh and model uncertainties

    NASA Astrophysics Data System (ADS)

    Kong, Song-Charng; Reitz, Rolf D.

    2003-06-01

    This study used a numerical model to investigate the combustion process in a premixed iso-octane homogeneous charge compression ignition (HCCI) engine. The engine was a supercharged Cummins C engine operated under HCCI conditions. The CHEMKIN code was implemented into an updated KIVA-3V code so that the combustion could be modelled using detailed chemistry in the context of engine CFD simulations. The model was able to accurately simulate the ignition timing and combustion phasing for various engine conditions. The unburned hydrocarbon emissions were also well predicted while the carbon monoxide emissions were under-predicted. Model results showed that the majority of unburned hydrocarbon is located in the piston-ring crevice region and the carbon monoxide resides in the vicinity of the cylinder walls. A sensitivity study of the computational grid resolution indicated that the combustion predictions were relatively insensitive to the grid density. However, the piston-ring crevice region needed to be simulated with high resolution to obtain accurate emissions predictions. The model results also indicated that HCCI combustion and emissions are very sensitive to the initial mixture temperature. The computations also show that the carbon monoxide emissions prediction can be significantly improved by modifying a key oxidation reaction rate constant.

  6. Reducing annotation cost and uncertainty in computer-aided diagnosis through selective iterative classification

    NASA Astrophysics Data System (ADS)

    Riely, Amelia; Sablan, Kyle; Xiaotao, Thomas; Furst, Jacob; Raicu, Daniela

    2015-03-01

    Medical imaging technology has always provided radiologists with the opportunity to view and keep records of patient anatomy. With the development of machine learning and intelligent computing, these images can be used to create Computer-Aided Diagnosis (CAD) systems, which can assist radiologists in analyzing image data in various ways to provide better health care to patients. This paper looks at increasing accuracy and reducing cost in creating CAD systems, specifically in predicting the malignancy of lung nodules in the Lung Image Database Consortium (LIDC). Much of the cost in creating an accurate CAD system stems from the need for multiple radiologist diagnoses or annotations of each image, since there is rarely a ground truth diagnosis and even different radiologists' diagnoses of the same nodule often disagree. To resolve this issue, this paper outlines a method of selective iterative classification that predicts lung nodule malignancy by using multiple radiologist diagnoses only for cases that can benefit from them. Our method achieved 81% accuracy at only 46% of the cost of a method that indiscriminately used all annotations, which achieved a lower accuracy of 70%.

  7. Uncertainty in Aspiration Efficiency Estimates from Torso Simplifications in Computational Fluid Dynamics Simulations

    PubMed Central

    Anthony, T. Renée

    2013-01-01

    Computational fluid dynamics (CFD) has been used to report particle inhalability in low velocity freestreams, where realistic faces but simplified, truncated, and cylindrical human torsos were used. When compared to wind tunnel velocity studies, the truncated models were found to underestimate the air’s upward velocity near the humans, raising questions about aspiration estimation. This work compares aspiration efficiencies for particles ranging from 7 to 116 µm using three torso geometries: (i) a simplified truncated cylinder, (ii) a non-truncated cylinder, and (iii) an anthropometrically realistic humanoid body. The primary aim of this work is to (i) quantify the errors introduced by using a simplified geometry and (ii) determine the required level of detail to adequately represent a human form in CFD studies of aspiration efficiency. Fluid simulations used the standard k-epsilon turbulence models, with freestream velocities at 0.1, 0.2, and 0.4 m s−1 and breathing velocities at 1.81 and 12.11 m s−1 to represent at-rest and heavy breathing rates, respectively. Laminar particle trajectory simulations were used to determine the upstream area, also known as the critical area, where particles would be inhaled. These areas were used to compute aspiration efficiencies for facing the wind. Significant differences were found in both vertical velocity estimates and the location of the critical area between the three models. However, differences in aspiration efficiencies between the three forms were <8.8% over all particle sizes, indicating that there is little difference in aspiration efficiency between torso models. PMID:23006817

  8. Uncertainty in aspiration efficiency estimates from torso simplifications in computational fluid dynamics simulations.

    PubMed

    Anderson, Kimberly R; Anthony, T Renée

    2013-03-01

    Computational fluid dynamics (CFD) has been used to report particle inhalability in low velocity freestreams, where realistic faces but simplified, truncated, and cylindrical human torsos were used. When compared to wind tunnel velocity studies, the truncated models were found to underestimate the air's upward velocity near the humans, raising questions about aspiration estimation. This work compares aspiration efficiencies for particles ranging from 7 to 116 µm using three torso geometries: (i) a simplified truncated cylinder, (ii) a non-truncated cylinder, and (iii) an anthropometrically realistic humanoid body. The primary aim of this work is to (i) quantify the errors introduced by using a simplified geometry and (ii) determine the required level of detail to adequately represent a human form in CFD studies of aspiration efficiency. Fluid simulations used the standard k-epsilon turbulence models, with freestream velocities at 0.1, 0.2, and 0.4 m s(-1) and breathing velocities at 1.81 and 12.11 m s(-1) to represent at-rest and heavy breathing rates, respectively. Laminar particle trajectory simulations were used to determine the upstream area, also known as the critical area, where particles would be inhaled. These areas were used to compute aspiration efficiencies for facing the wind. Significant differences were found in both vertical velocity estimates and the location of the critical area between the three models. However, differences in aspiration efficiencies between the three forms were <8.8% over all particle sizes, indicating that there is little difference in aspiration efficiency between torso models.

  9. ESTIMATION OF INTERNAL EXPOSURE TO URANIUM WITH UNCERTAINTY FROM URINALYSIS DATA USING THE InDEP COMPUTER CODE

    PubMed Central

    Anderson, Jeri L.; Apostoaei, A. Iulian; Thomas, Brian A.

    2015-01-01

    The National Institute for Occupational Safety and Health (NIOSH) is currently studying mortality in a cohort of 6409 workers at a former uranium processing facility. As part of this study, over 220 000 urine samples were used to reconstruct organ doses due to internal exposure to uranium. Most of the available computational programs designed for analysis of bioassay data handle a single case at a time, and thus require a significant outlay of time and resources for the exposure assessment of a large cohort. NIOSH is currently supporting the development of a computer program, InDEP (Internal Dose Evaluation Program), to facilitate internal radiation exposure assessment as part of epidemiological studies of both uranium- and plutonium-exposed cohorts. A novel feature of InDEP is its batch processing capability which allows for the evaluation of multiple study subjects simultaneously. InDEP analyses bioassay data and derives intakes and organ doses with uncertainty estimates using least-squares regression techniques or using Bayes' theorem as applied to internal dosimetry (Bayesian method). This paper describes the application of the current version of InDEP to formulate assumptions about the characteristics of exposure at the study facility that were used in a detailed retrospective intake and organ dose assessment of the cohort. PMID:22683620

  10. Uncertainties in radiative transfer computations: consequences on the ocean color products

    NASA Astrophysics Data System (ADS)

    Dilligeard, Eric; Zagolski, Francis; Fischer, Juergen; Santer, Richard P.

    2003-05-01

    Operational MERIS (MEdium Resolution Imaging Spectrometer) level-2 processing uses auxiliary data generated by two radiative transfer tools. These two codes simulate upwelling radiances within a coupled 'Atmosphere-Ocean' system, using different approaches based on the matrix-operator method (MOMO) and the successive orders (SO) technique. Intervalidation of these two radiative transfer codes was performed in order to implement them in the MERIS level-2 processing. MOMO and SO simulations were then conducted on a set of representative test cases. Good agreement was observed for all test cases; the scattering processes are retrieved within a few tenths of a percent. Nevertheless, substantial discrepancies occurred when polarization was not taken into account, mainly in the Rayleigh scattering computations. A preliminary study indicates that the impact of this code inaccuracy on the retrieval of water-leaving radiances (a level-2 MERIS product) is large, up to 50% in relative difference. Applying the OC2 algorithm, the effect on the retrieved chlorophyll concentration is less than 10%.

  11. The VIMOS VLT deep survey. Computing the two point correlation statistics and associated uncertainties

    NASA Astrophysics Data System (ADS)

    Pollo, A.; Meneux, B.; Guzzo, L.; Le Fèvre, O.; Blaizot, J.; Cappi, A.; Iovino, A.; Marinoni, C.; McCracken, H. J.; Bottini, D.; Garilli, B.; Le Brun, V.; Maccagni, D.; Picat, J. P.; Scaramella, R.; Scodeggio, M.; Tresse, L.; Vettolani, G.; Zanichelli, A.; Adami, C.; Arnaboldi, M.; Arnouts, S.; Bardelli, S.; Bolzonella, M.; Charlot, S.; Ciliegi, P.; Contini, T.; Foucaud, S.; Franzetti, P.; Gavignaud, I.; Ilbert, O.; Marano, B.; Mathez, G.; Mazure, A.; Merighi, R.; Paltani, S.; Pellò, R.; Pozzetti, L.; Radovich, M.; Zamorani, G.; Zucca, E.; Bondi, M.; Bongiorno, A.; Busarello, G.; Gregorini, L.; Lamareille, F.; Mellier, Y.; Merluzzi, P.; Ripepi, V.; Rizzo, D.

    2005-09-01

    We present a detailed description of the methods used to compute the three-dimensional two-point galaxy correlation function in the VIMOS-VLT deep survey (VVDS). We investigate how instrumental selection effects and observational biases affect the measurements and identify the methods to correct for them. We quantify the accuracy of our corrections using an ensemble of 50 mock galaxy surveys generated with the GalICS semi-analytic model of galaxy formation which incorporate the selection biases and tiling strategy of the real data. We demonstrate that we are able to recover the real-space two-point correlation function ξ(s) and the projected correlation function w_p(r_p) to an accuracy better than 10% on scales larger than 1 h-1 Mpc with the sampling strategy used for the first epoch VVDS data. The large number of simulated surveys allows us to provide a reliable estimate of the cosmic variance on the measurements of the correlation length r0 at z ˜ 1, of about 15-20% for the first epoch VVDS observation while any residual systematic effect in the measurements of r0 is always below 5%. The error estimation and measurement techniques outlined in this paper are being used in several parallel studies which investigate in detail the clustering properties of galaxies in the VVDS.
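
    For orientation, a toy sketch of a two-point correlation estimate using the Landy-Szalay pair-count estimator is shown below; it omits the survey-specific selection, tiling, and mock-based error-estimation machinery that the paper is actually about, and the catalogues are random placeholders.

      import numpy as np

      # Toy sketch of a two-point correlation estimate with the Landy-Szalay estimator,
      # xi = (DD - 2 DR + RR) / RR, on random 3-D positions.

      rng = np.random.default_rng(0)
      data    = rng.uniform(0, 100, size=(500, 3))       # toy "galaxy" positions (h^-1 Mpc)
      randoms = rng.uniform(0, 100, size=(1500, 3))      # unclustered random catalogue

      def pair_counts(a, b, edges, same=False):
          d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
          if same:
              d = d[np.triu_indices_from(d, k=1)]        # unique pairs, no self-pairs
          counts, _ = np.histogram(d, bins=edges)
          return counts.astype(float)

      edges = np.logspace(0.3, 1.5, 9)                   # separation bins, ~2 to ~32 h^-1 Mpc
      n_d, n_r = len(data), len(randoms)
      DD = pair_counts(data, data, edges, same=True) / (n_d * (n_d - 1) / 2)
      RR = pair_counts(randoms, randoms, edges, same=True) / (n_r * (n_r - 1) / 2)
      DR = pair_counts(data, randoms, edges) / (n_d * n_r)

      xi = (DD - 2 * DR + RR) / RR                       # ~0 for an unclustered toy catalogue
      print(np.round(xi, 3))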

  12. The Personality Trait of Intolerance to Uncertainty Affects Behavior in a Novel Computer-Based Conditioned Place Preference Task.

    PubMed

    Radell, Milen L; Myers, Catherine E; Beck, Kevin D; Moustafa, Ahmed A; Allen, Michael Todd

    2016-01-01

    Recent work has found that personality factors that confer vulnerability to addiction can also affect learning and economic decision making. One personality trait which has been implicated in vulnerability to addiction is intolerance to uncertainty (IU), i.e., a preference for familiar over unknown (possibly better) options. In animals, the motivation to obtain drugs is often assessed through conditioned place preference (CPP), which compares preference for contexts where drug reward was previously received. It is an open question whether participants with high IU also show heightened preference for previously rewarded contexts. To address this question, we developed a novel computer-based CPP task for humans in which participants guide an avatar through a paradigm in which one room contains frequent reward (i.e., rich) and one contains less frequent reward (i.e., poor). Following exposure to both contexts, subjects are assessed for preference to enter the previously rich and previously poor room. Individuals with low IU showed little bias to enter the previously rich room first, and instead entered both rooms at about the same rate which may indicate a foraging behavior. By contrast, those with high IU showed a strong bias to enter the previously rich room first. This suggests an increased tendency to chase reward in the intolerant group, consistent with previously observed behavior in opioid-addicted individuals. Thus, the personality factor of high IU may produce a pre-existing cognitive bias that provides a mechanism to promote decision-making processes that increase vulnerability to addiction.

  13. Application of GUM Supplement 1 to uncertainty of Monte Carlo computed efficiency in gamma-ray spectrometry.

    PubMed

    Sima, Octavian; Lépy, Marie-Christine

    2016-03-01

    The uncertainty of quantities relevant in gamma-ray spectrometry (efficiency, transfer factor, self-attenuation FA and coincidence summing FC correction factors) is realistically evaluated by Monte Carlo propagation of the distributions characterizing the parameters on which these quantities depend. Probability density functions are constructed and summarized as recommended in the GUM Supplement 1 and compared with the values obtained using the traditional approach (GUM uncertainty framework). Special cases when this approach encounters difficulties (FC uncertainty due to the uncertainty of decay scheme parameters, effect of activity and matrix inhomogeneity on efficiency) are also discussed.
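
    A minimal sketch of GUM Supplement 1 style propagation of distributions for a generic activity-type measurand is shown below; the measurement model, input PDFs, and numbers are illustrative assumptions, not the quantities evaluated in the paper.

      import numpy as np

      # Sketch of Monte Carlo propagation of distributions (GUM Supplement 1 style) for a
      # generic measurand A = N / (t * eps * p). All inputs and numbers are invented.

      rng = np.random.default_rng(1)
      M = 200_000                                        # number of Monte Carlo trials

      eff  = rng.normal(0.052, 0.002, M)                 # detection efficiency, Gaussian PDF
      p_br = rng.uniform(0.846, 0.854, M)                # branching ratio, rectangular PDF
      counts, live_time = 12_500, 3600.0                 # observed counts and live time (s)

      activity = counts / (live_time * eff * p_br)       # propagate the distributions through the model

      mean = activity.mean()
      u    = activity.std(ddof=1)                        # standard uncertainty from the output PDF
      lo, hi = np.percentile(activity, [2.5, 97.5])      # probabilistically symmetric 95% interval

      print(f"A = {mean:.2f} Bq, u = {u:.2f} Bq, 95% coverage interval = [{lo:.2f}, {hi:.2f}] Bq")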

  14. Computing with Epistemic Uncertainty

    DTIC Science & Technology

    2015-01-01

    Converting the crisp TPDs to epistemic intervals .............................. 5 4.3.3 Converting the fuzzy TPDs to epistemic intervals...distributions, crisp triangular probability distributions (TPD), and fuzzy TPD where the three vertices are given as crisp epistemic intervals. The...4.3.2 Converting the crisp TPDs to epistemic intervals The traditional type of confidence interval referred to above is associated with a given

  15. The Personality Trait of Intolerance to Uncertainty Affects Behavior in a Novel Computer-Based Conditioned Place Preference Task

    PubMed Central

    Radell, Milen L.; Myers, Catherine E.; Beck, Kevin D.; Moustafa, Ahmed A.; Allen, Michael Todd

    2016-01-01

    Recent work has found that personality factors that confer vulnerability to addiction can also affect learning and economic decision making. One personality trait which has been implicated in vulnerability to addiction is intolerance to uncertainty (IU), i.e., a preference for familiar over unknown (possibly better) options. In animals, the motivation to obtain drugs is often assessed through conditioned place preference (CPP), which compares preference for contexts where drug reward was previously received. It is an open question whether participants with high IU also show heightened preference for previously rewarded contexts. To address this question, we developed a novel computer-based CPP task for humans in which participants guide an avatar through a paradigm in which one room contains frequent reward (i.e., rich) and one contains less frequent reward (i.e., poor). Following exposure to both contexts, subjects are assessed for preference to enter the previously rich and previously poor room. Individuals with low IU showed little bias to enter the previously rich room first, and instead entered both rooms at about the same rate which may indicate a foraging behavior. By contrast, those with high IU showed a strong bias to enter the previously rich room first. This suggests an increased tendency to chase reward in the intolerant group, consistent with previously observed behavior in opioid-addicted individuals. Thus, the personality factor of high IU may produce a pre-existing cognitive bias that provides a mechanism to promote decision-making processes that increase vulnerability to addiction. PMID:27555829

  16. Direct Aerosol Forcing Uncertainty

    DOE Data Explorer

    Mccomiskey, Allison

    2008-01-15

    Understanding sources of uncertainty in aerosol direct radiative forcing (DRF), the difference in a given radiative flux component with and without aerosol, is essential to quantifying changes in Earth's radiation budget. We examine the uncertainty in DRF due to measurement uncertainty in the quantities on which it depends: aerosol optical depth, single scattering albedo, asymmetry parameter, solar geometry, and surface albedo. Direct radiative forcing at the top of the atmosphere and at the surface as well as sensitivities, the changes in DRF in response to unit changes in individual aerosol or surface properties, are calculated at three locations representing distinct aerosol types and radiative environments. The uncertainty in DRF associated with a given property is computed as the product of the sensitivity and typical measurement uncertainty in the respective aerosol or surface property. Sensitivity and uncertainty values permit estimation of total uncertainty in calculated DRF and identification of properties that most limit accuracy in estimating forcing. Total uncertainties in modeled local diurnally averaged forcing range from 0.2 to 1.3 W m-2 (42 to 20%) depending on location (from tropical to polar sites), solar zenith angle, surface reflectance, aerosol type, and aerosol optical depth. The largest contributor to total uncertainty in DRF is usually single scattering albedo; however decreasing measurement uncertainties for any property would increase accuracy in DRF. Comparison of two radiative transfer models suggests the contribution of modeling error is small compared to the total uncertainty although comparable to uncertainty arising from some individual properties.
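
    The propagation described above amounts to multiplying each sensitivity by the corresponding measurement uncertainty and combining the contributions in quadrature; a back-of-envelope sketch with placeholder values (not ARM measurements) follows.

      import numpy as np

      # Sketch: DRF uncertainty from each property = sensitivity x measurement uncertainty,
      # combined in quadrature. Sensitivities and uncertainties below are placeholders.

      properties  = ["optical depth", "single-scattering albedo", "asymmetry parameter", "surface albedo"]
      sensitivity = np.array([-25.0, 30.0, 10.0, 8.0])   # assumed dDRF/dx in W m^-2 per unit property
      meas_unc    = np.array([0.01, 0.03, 0.02, 0.02])   # typical measurement uncertainty of each property

      contrib = sensitivity * meas_unc                   # W m^-2 contribution of each property
      total   = np.sqrt(np.sum(contrib**2))              # combined (quadrature) DRF uncertainty

      for name, c in zip(properties, contrib):
          print(f"{name:>26s}: {abs(c):.2f} W m^-2")
      print(f"{'total':>26s}: {total:.2f} W m^-2")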

  17. Uncertainty and Cognitive Control

    PubMed Central

    Mushtaq, Faisal; Bland, Amy R.; Schaefer, Alexandre

    2011-01-01

    A growing trend of neuroimaging, behavioral, and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty are still a matter of debate. In this article, we reviewed evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) There is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) There is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) The perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the “need for control”; (4) Potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders. PMID:22007181

  18. Accounting for both random errors and systematic errors in uncertainty propagation analysis of computer models involving experimental measurements with Monte Carlo methods.

    PubMed

    Vasquez, Victor R; Whiting, Wallace B

    2005-12-01

    A Monte Carlo method is presented to study the effect of systematic and random errors on computer models mainly dealing with experimental data. It is a common assumption in this type of model (linear and nonlinear regression, and nonregression computer models) involving experimental measurements that the error sources are mainly random and independent with no constant background errors (systematic errors). However, comparisons of different experimental data sources often reveal evidence of significant bias or calibration errors. The uncertainty analysis approach presented in this work is based on the analysis of cumulative probability distributions for output variables of the models involved, taking into account the effect of both types of errors. The probability distributions are obtained by performing Monte Carlo simulation coupled with appropriate definitions for the random and systematic errors. The main objectives are to detect the error source with stochastic dominance on the uncertainty propagation and the combined effect on output variables of the models. The results from the case studies analyzed show that the approach is able to distinguish which error type has a more significant effect on the performance of the model. Also, it was found that systematic or calibration errors, if present, cannot be neglected in uncertainty analysis of models dependent on experimental measurements such as chemical and physical properties. The approach can be used to facilitate decision making in fields related to safety factors selection, modeling, experimental data measurement, and experimental design.
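
    A stripped-down version of this kind of Monte Carlo study is sketched below: random errors are redrawn for every measurement, while a systematic (calibration) bias is drawn once per trial, and the two output distributions are compared. The property model and error magnitudes are invented.

      import numpy as np

      # Illustrative Monte Carlo sketch: propagate both random (per-measurement) and
      # systematic (constant-bias) errors through a simple property model.

      rng = np.random.default_rng(2)
      trials = 50_000
      true_density, true_cp = 850.0, 2.10                # "true" property values (kg/m3, kJ/kg-K)

      def output(density, cp):
          # Toy downstream model that consumes the experimental properties.
          return density * cp * 0.001

      # Case 1: random errors only (independent, zero-mean).
      rho_r = true_density + rng.normal(0, 5.0, trials)
      cp_r  = true_cp + rng.normal(0, 0.03, trials)
      y_random = output(rho_r, cp_r)

      # Case 2: random errors plus a systematic calibration bias, drawn once per trial
      # (the bias is constant within a data source but unknown across sources).
      bias_rho = rng.uniform(-10.0, 10.0, trials)
      bias_cp  = rng.uniform(-0.05, 0.05, trials)
      y_both = output(rho_r + bias_rho, cp_r + bias_cp)

      for label, y in [("random only", y_random), ("random + systematic", y_both)]:
          lo, hi = np.percentile(y, [2.5, 97.5])
          print(f"{label:>22s}: 95% range [{lo:.3f}, {hi:.3f}]")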

  19. Application of an Adaptive Polynomial Chaos Expansion on Computationally Expensive Three-Dimensional Cardiovascular Models for Uncertainty Quantification and Sensitivity Analysis.

    PubMed

    Quicken, Sjeng; Donders, Wouter P; van Disseldorp, Emiel M J; Gashi, Kujtim; Mees, Barend M E; van de Vosse, Frans N; Lopata, Richard G P; Delhaas, Tammo; Huberts, Wouter

    2016-12-01

    When applying models to patient-specific situations, the impact of model input uncertainty on the model output uncertainty has to be assessed. Proper uncertainty quantification (UQ) and sensitivity analysis (SA) techniques are indispensable for this purpose. An efficient approach for UQ and SA is the generalized polynomial chaos expansion (gPCE) method, where model response is expanded into a finite series of polynomials that depend on the model input (i.e., a meta-model). However, because of the intrinsic high computational cost of three-dimensional (3D) cardiovascular models, performing the number of model evaluations required for the gPCE is often computationally prohibitively expensive. Recently, Blatman and Sudret (2010, "An Adaptive Algorithm to Build Up Sparse Polynomial Chaos Expansions for Stochastic Finite Element Analysis," Probab. Eng. Mech., 25(2), pp. 183-197) introduced the adaptive sparse gPCE (agPCE) in the field of structural engineering. This approach reduces the computational cost with respect to the gPCE, by only including polynomials that significantly increase the meta-model's quality. In this study, we demonstrate the agPCE by applying it to a 3D abdominal aortic aneurysm (AAA) wall mechanics model and a 3D model of flow through an arteriovenous fistula (AVF). The agPCE method was indeed able to perform UQ and SA at a significantly lower computational cost than the gPCE, while still retaining accurate results. Cost reductions ranged between 70-80% and 50-90% for the AAA and AVF model, respectively.
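
    To show the basic meta-model idea (without the adaptive term selection that distinguishes the agPCE), the sketch below fits a fixed total-degree Legendre polynomial chaos expansion to a cheap stand-in for an expensive model by least squares. The response surface and training design are invented.

      import numpy as np
      from numpy.polynomial import legendre as L
      from itertools import product

      # Toy sketch of a (non-adaptive) polynomial chaos meta-model for two uniform inputs.

      def model(x1, x2):
          # Stand-in for an expensive 3-D simulation (invented response surface).
          return np.exp(0.3 * x1) * (1.0 + 0.5 * x2**2)

      rng = np.random.default_rng(3)
      n_train = 60
      X = rng.uniform(-1.0, 1.0, size=(n_train, 2))       # inputs scaled to [-1, 1]
      y = model(X[:, 0], X[:, 1])

      degree = 3
      multi_indices = [(i, j) for i, j in product(range(degree + 1), repeat=2) if i + j <= degree]

      def basis_matrix(X):
          cols = []
          for i, j in multi_indices:
              Pi = L.legval(X[:, 0], np.eye(degree + 1)[i])  # Legendre P_i(x1)
              Pj = L.legval(X[:, 1], np.eye(degree + 1)[j])  # Legendre P_j(x2)
              cols.append(Pi * Pj)
          return np.column_stack(cols)

      coeffs, *_ = np.linalg.lstsq(basis_matrix(X), y, rcond=None)

      # Use the cheap meta-model in place of the expensive model for UQ.
      X_test = rng.uniform(-1.0, 1.0, size=(5, 2))
      print(np.round(basis_matrix(X_test) @ coeffs, 3))        # meta-model predictions
      print(np.round(model(X_test[:, 0], X_test[:, 1]), 3))    # true model values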

  20. Uncertainty analysis

    SciTech Connect

    Thomas, R.E.

    1982-03-01

    An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
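
    For the linear case y = c^T x with A x = b, the textbook adjoint construction gives all sensitivities of y from a single adjoint solve; a small sketch follows. This illustrates the general adjoint method, not the report's specific reformulation that avoids solving the adjoint problem.

      import numpy as np

      # Adjoint-sensitivity sketch for A x = b with scalar response y = c^T x:
      # one adjoint solve A^T lam = c yields all sensitivities dy/db_i = lam_i.

      A = np.array([[4.0, 1.0, 0.0],
                    [1.0, 3.0, 1.0],
                    [0.0, 1.0, 2.0]])
      b = np.array([1.0, 2.0, 0.5])
      c = np.array([0.0, 1.0, 0.0])          # response: second component of x

      x = np.linalg.solve(A, b)
      y = c @ x

      lam = np.linalg.solve(A.T, c)          # single adjoint solve
      dy_db = lam                            # sensitivities of y to every entry of b
      dy_dA = -np.outer(lam, x)              # sensitivities of y to every entry of A

      # Verify one coefficient by finite differences.
      eps = 1e-6
      b_pert = b.copy(); b_pert[0] += eps
      y_pert = c @ np.linalg.solve(A, b_pert)
      print(dy_db[0], (y_pert - y) / eps)    # the two numbers should agree closely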

  1. THE USE OF COMPUTER MODELING PACKAGES TO ILLUSTRATE UNCERTAINTY IN RISK ASSESSMENTS: AN EASE OF USE AND INTERPRETATION COMPARISON

    EPA Science Inventory

    Consistent improvements in processor speed and computer access have substantially increased the use of computer modeling by experts and non-experts alike. Several new computer modeling packages operating under graphical operating systems (i.e. Microsoft Windows or Macintosh) m...

  2. THE USE OF COMPUTER MODELING PACKAGES TO ILLUSTRATE UNCERTAINTY IN RISK ASSESSMENTS: AN EASE OF USE AND INTERPRETATION COMPARISON

    EPA Science Inventory

    Consistent improvements in processor speed and computer access have substantially increased the use of computer modeling by experts and non-experts alike. Several new computer modeling packages operating under graphical operating systems (i.e. Microsoft Windows or Macintosh) m...

  3. Uncertainty, neuromodulation, and attention.

    PubMed

    Yu, Angela J; Dayan, Peter

    2005-05-19

    Uncertainty in various forms plagues our interactions with the environment. In a Bayesian statistical framework, optimal inference and prediction, based on unreliable observations in changing contexts, require the representation and manipulation of different forms of uncertainty. We propose that the neuromodulators acetylcholine and norepinephrine play a major role in the brain's implementation of these uncertainty computations. Acetylcholine signals expected uncertainty, coming from known unreliability of predictive cues within a context. Norepinephrine signals unexpected uncertainty, as when unsignaled context switches produce strongly unexpected observations. These uncertainty signals interact to enable optimal inference and learning in noisy and changeable environments. This formulation is consistent with a wealth of physiological, pharmacological, and behavioral data implicating acetylcholine and norepinephrine in specific aspects of a range of cognitive processes. Moreover, the model suggests a class of attentional cueing tasks that involve both neuromodulators and shows how their interactions may be part-antagonistic, part-synergistic.

  4. The uncertainty of estimating the thickness of soft sediments with the HVSR method: A computational point of view on weak lateral variations

    NASA Astrophysics Data System (ADS)

    Bignardi, Samuel

    2017-10-01

    The use of the ratio of microtremor spectra, as computed by Nakamura's technique, has recently proved successful for evaluating the thickness of sedimentary covers lying over both shallow and deep rocky bedrocks, thus enabling bedrock mapping. The experimental success of this application and its experimental uncertainties are today reported in many publications. To map bedrock, two approaches exist. The first is to assume a constant shear wave velocity profile for the sediments. The second, and most preferable, is that of Ibs-von Seht and Wohlenberg, based on correlating the main peak of Nakamura's curves with well information. In the latter approach, the main sources of uncertainty addressed by authors, despite the lack of formal proof, comprise local deviations of the subsurface from the assumed model. I first discuss the reliability of the simplified constant velocity approach, showing its limitations. As a second task, I evaluate the uncertainty of the Ibs-von Seht and Wohlenberg approach with a focus on local subsurface variations. Since the experimental basis is well established, I focus the investigation entirely on numerical simulations to evaluate to what extent local subsurface deviations from the assumed model may affect the outcome of a bedrock mapping survey. Further, the present investigation strategy suggests that modeling and inversion, through the investigation of the parameter space around the reference model, may prove a very convenient tool when lateral variations are suspected to exist or when the number of available wells is not sufficient to obtain an accurate frequency-depth regression.
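
    The frequency-depth regression at the core of the Ibs-von Seht and Wohlenberg style approach is a power law h = a * f0^b fitted to well data; a small sketch with synthetic well data follows (the fitted coefficients are illustrative, not site constants).

      import numpy as np

      # Sketch of a frequency-depth regression: fit h = a * f0**b between the HVSR
      # resonance frequency f0 at well locations and the known sediment thickness h,
      # then apply the relation elsewhere. The well data below are synthetic.

      f0 = np.array([0.45, 0.8, 1.3, 2.1, 3.5, 5.0])         # fundamental HVSR peak (Hz) at wells
      h  = np.array([260.0, 120.0, 65.0, 34.0, 17.0, 11.0])  # sediment thickness from wells (m)

      # Linearize: log h = log a + b log f0, and fit by ordinary least squares.
      b_fit, log_a = np.polyfit(np.log(f0), np.log(h), 1)
      a_fit = np.exp(log_a)
      print(f"h ≈ {a_fit:.1f} * f0^{b_fit:.2f}")

      # Predict thickness at a new station from its measured peak frequency.
      f0_new = 1.7
      print(f"predicted thickness at f0 = {f0_new} Hz: {a_fit * f0_new**b_fit:.0f} m")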

  5. Measurement uncertainty.

    PubMed

    Bartley, David; Lidén, Göran

    2008-08-01

    The reporting of measurement uncertainty has recently undergone a major harmonization whereby characteristics of a measurement method obtained during establishment and application are combined componentwise. For example, the sometimes-pesky systematic error is included. A bias component of uncertainty can often be easily established as the uncertainty in the bias. However, beyond simply arriving at a value for uncertainty, meaning can sometimes be given to this uncertainty, if needed, in terms of prediction confidence in uncertainty-based intervals covering what is to be measured. To this end, a link between concepts of accuracy and uncertainty is established through a simple yet accurate approximation to a random variable known as the non-central Student's t-distribution. Without a measureless and perpetual uncertainty, the drama of human life would be destroyed. Winston Churchill.

  6. Mini-max feedback control as a computational theory of sensorimotor control in the presence of structural uncertainty

    PubMed Central

    Ueyama, Yuki

    2014-01-01

    We propose a mini-max feedback control (MMFC) model as a robust approach to human motor control under conditions of uncertain dynamics, such as structural uncertainty. The MMFC model is an expansion of the optimal feedback control (OFC) model. According to this scheme, motor commands are generated to minimize the maximal cost, based on an assumption of worst-case uncertainty, characterized by familiarity with novel dynamics. We simulated linear dynamic systems with different types of force fields–stable and unstable dynamics–and compared the performance of MMFC to that of OFC. MMFC delivered better performance than OFC in terms of stability and the achievement of tasks. Moreover, the gain in positional feedback with the MMFC model in the unstable dynamics was tuned to the direction of instability. It is assumed that the shape modulations of the gain in positional feedback in unstable dynamics played the same role as that played by end-point stiffness observed in human studies. Accordingly, we suggest that MMFC is a plausible model that predicts motor behavior under conditions of uncertain dynamics. PMID:25309415

  7. Artifacts in Conventional Computed Tomography (CT) and Free Breathing Four-Dimensional CT Induce Uncertainty in Gross Tumor Volume Determination

    SciTech Connect

    Fredberg Persson, Gitte; Nygaard, Ditte Eklund; Munch af Rosenschoeld, Per; Richter Vogelius, Ivan; Josipovic, Mirjana; Specht, Lena; Korreman, Stine Sofia

    2011-08-01

    Purpose: Artifacts impacting the imaged tumor volume can be seen in conventional three-dimensional CT (3DCT) scans for planning of lung cancer radiotherapy but can be reduced with the use of respiration-correlated imaging, i.e., 4DCT or breathhold CT (BHCT) scans. The aim of this study was to compare delineated gross tumor volume (GTV) sizes in 3DCT, 4DCT, and BHCT scans of patients with lung tumors. Methods and Materials: A total of 36 patients with 46 tumors referred for stereotactic radiotherapy of lung tumors were included. All patients underwent positron emission tomography (PET)/CT, 4DCT, and BHCT scans. GTVs in all CT scans of individual patients were delineated during one session by a single physician to minimize systematic delineation uncertainty. The GTV size from the BHCT was considered the closest to true tumor volume and was chosen as the reference. The reference GTV size was compared to GTV sizes in 3DCT, at midventilation (MidV), at end-inspiration (Insp), and at end-expiration (Exp) bins from the 4DCT scan. Results: The median BHCT GTV size was 4.9 cm³ (0.1-53.3 cm³). Median deviation between 3DCT and BHCT GTV size was 0.3 cm³ (-3.3 to 30.0 cm³), between MidV and BHCT size was 0.2 cm³ (-5.7 to 19.7 cm³), between Insp and BHCT size was 0.3 cm³ (-4.7 to 24.8 cm³), and between Exp and BHCT size was 0.3 cm³ (-4.8 to 25.5 cm³). The 3DCT, MidV, Insp, and Exp median GTV sizes were all significantly larger than the BHCT median GTV size. Conclusions: In the present study, the choice of CT method significantly influenced the delineated GTV size, on average, leading to an increase in GTV size compared to the reference BHCT. The uncertainty caused by artifacts is estimated to be in the same magnitude as delineation uncertainty and should be considered in the design of margins for radiotherapy.

  8. Measurement Uncertainty

    NASA Astrophysics Data System (ADS)

    Koch, Michael

    Measurement uncertainty is one of the key issues in quality assurance. It became increasingly important for analytical chemistry laboratories with the accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for the decision whether a measurement result is fit for purpose. It also delivers help for the decision whether a specification limit is exceeded or not. Estimation of measurement uncertainty often is not trivial. Several strategies have been developed for this purpose that will shortly be described in this chapter. In addition the different possibilities to take into account the uncertainty in compliance assessment are explained.

  9. Computing the Risk of Postprandial Hypo- and Hyperglycemia in Type 1 Diabetes Mellitus Considering Intrapatient Variability and Other Sources of Uncertainty

    PubMed Central

    García-Jaramillo, Maira; Calm, Remei; Bondia, Jorge; Tarín, Cristina; Vehí, Josep

    2009-01-01

    Objective The objective of this article was to develop a methodology to quantify the risk of suffering different grades of hypo- and hyperglycemia episodes in the postprandial state. Methods Interval predictions of patient postprandial glucose were performed during a 5-hour period after a meal for a set of 3315 scenarios. Uncertainty in the patient's insulin sensitivities and carbohydrate (CHO) contents of the planned meal was considered. A normalized area under the curve of the worst-case predicted glucose excursion for severe and mild hypo- and hyperglycemia glucose ranges was obtained and weighted according to their importance. As a result, a comprehensive risk measure was obtained. A reference model of preprandial glucose values representing the behavior in different ranges was chosen by a χ² test. The relationship between the computed risk index and the probability of occurrence of events was analyzed for these reference models through 19,500 Monte Carlo simulations. Results The obtained reference models for each preprandial glucose range were 100, 160, and 220 mg/dl. A relationship between the risk index ranges <10, 10–60, 60–120, and >120 and the probability of occurrence of mild and severe postprandial hyper- and hypoglycemia can be derived. Conclusions When intrapatient variability and uncertainty in the CHO content of the meal are considered, a safer prediction of possible hyper- and hypoglycemia episodes induced by the tested insulin therapy can be calculated. PMID:20144339
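
    A rough sketch of a weighted, time-normalized risk index of this kind is given below; the glucose trajectory, band thresholds, and weights are illustrative assumptions rather than the paper's values.

      import numpy as np

      # Rough sketch: take a worst-case predicted postprandial glucose trajectory, measure
      # the time-normalized depth of the excursion into mild/severe hypo- and hyperglycemia
      # bands, and sum with weights. Trajectory, thresholds, and weights are invented.

      t = np.linspace(0.0, 300.0, 301)                                 # minutes after the meal
      glucose = 150 + 90 * np.exp(-((t - 80) / 55.0) ** 2) - 0.30 * t  # invented worst case (mg/dl)

      def band_depth(g, lo, hi):
          # Average depth of the trajectory inside the band [lo, hi) (uniform time sampling).
          return (np.clip(g, lo, hi) - lo).mean()

      # Hypoglycemia bands are handled by mirroring the trajectory (depth below a threshold).
      risk = (4.0 * band_depth(-glucose, -70.0, -50.0)    # mild hypo: 50-70 mg/dl
              + 8.0 * band_depth(-glucose, -50.0, 1e9)    # severe hypo: below 50 mg/dl
              + 1.0 * band_depth(glucose, 180.0, 250.0)   # mild hyper: 180-250 mg/dl
              + 3.0 * band_depth(glucose, 250.0, 1e9))    # severe hyper: above 250 mg/dl

      print(f"weighted risk index ≈ {risk:.1f}")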

  10. Exploring the Impact of Nuclear Data Uncertainties in Ultra-high Resolution Gamma Spectroscopy for Isotopic Analysis Using Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Burr, T.; Hoover, A.; Croft, S.; Rabin, M.

    2015-01-01

    High purity germanium (HPGe) currently provides the highest readily available resolution gamma detection for a broad range of radiation measurements, but microcalorimetry is a developing option that has considerably higher resolution even than HPGe. Superior microcalorimetry resolution offers the potential to better distinguish closely spaced X-rays and gamma-rays, a common challenge for the low energy spectral region near 100 keV from special nuclear materials, and the higher signal-to-background ratio also confers an advantage in detection limit. As microcalorimetry continues to develop, it is timely to assess the impact of uncertainties in detector and item response functions and in basic nuclear data, such as branching ratios and half-lives, used to interpret spectra in terms of the contributory radioactive isotopes. We illustrate that a new inference option known as approximate Bayesian computation (ABC) is effective and convenient both for isotopic inference and for uncertainty quantification for microcalorimetry. The ABC approach opens a pathway to new and more powerful implementations for practical applications than currently available.
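
    A minimal ABC rejection sketch in this spirit is shown below: draw an isotopic fraction from the prior, simulate a toy two-peak spectrum, and keep draws whose spectra fall within a tolerance of the observation. The forward model, line energies, and tolerance are invented stand-ins for real detector response functions and nuclear data.

      import numpy as np

      # Minimal approximate Bayesian computation (ABC) rejection sketch for a toy
      # two-peak spectrum; all model details are invented.

      rng = np.random.default_rng(4)
      energies = np.linspace(95.0, 105.0, 200)             # keV

      def simulate(fraction, n_counts=20_000):
          # Two Gaussian lines whose relative intensity depends on the isotopic fraction.
          shape = fraction * np.exp(-0.5 * ((energies - 98.4) / 0.15) ** 2) \
                + (1 - fraction) * np.exp(-0.5 * ((energies - 101.1) / 0.15) ** 2)
          shape /= shape.sum()
          return rng.poisson(n_counts * shape)

      observed = simulate(0.62)                             # pretend measurement, true value 0.62

      # ABC rejection: sample from the prior, simulate, keep parameters whose simulated
      # spectrum is close enough to the observed one.
      prior_draws = rng.uniform(0.0, 1.0, 20_000)
      tolerance = 250.0
      accepted = np.array([f for f in prior_draws
                           if np.linalg.norm(simulate(f) - observed) < tolerance])

      print(f"posterior mean ≈ {accepted.mean():.3f}, std ≈ {accepted.std(ddof=1):.3f}, "
            f"accepted {len(accepted)} of {len(prior_draws)}")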

  11. Evaluating the role of land cover and climate uncertainties in computing gross primary production in Hawaiian Island ecosystems

    USGS Publications Warehouse

    Kimball, Heather L.; Selmants, Paul; Moreno, Alvaro; Running, Steve W.; Giardina, Christian P.

    2017-01-01

    Gross primary production (GPP) is the Earth’s largest carbon flux into the terrestrial biosphere and plays a critical role in regulating atmospheric chemistry and global climate. The Moderate Resolution Imaging Spectrometer (MODIS)-MOD17 data product is a widely used remote sensing-based model that provides global estimates of spatiotemporal trends in GPP. When the MOD17 algorithm is applied to regional scale heterogeneous landscapes, input data from coarse resolution land cover and climate products may increase uncertainty in GPP estimates, especially in high productivity tropical ecosystems. We examined the influence of using locally specific land cover and high-resolution local climate input data on MOD17 estimates of GPP for the State of Hawaii, a heterogeneous and discontinuous tropical landscape. Replacing the global land cover data input product (MOD12Q1) with Hawaii-specific land cover data reduced statewide GPP estimates by ~8%, primarily because the Hawaii-specific land cover map had less vegetated land area compared to the global land cover product. Replacing coarse resolution GMAO climate data with Hawaii-specific high-resolution climate data also reduced statewide GPP estimates by ~8% because of the higher spatial variability of photosynthetically active radiation (PAR) in the Hawaii-specific climate data. The combined use of both Hawaii-specific land cover and high-resolution Hawaii climate data inputs reduced statewide GPP by ~16%, suggesting equal and independent influence on MOD17 GPP estimates. Our sensitivity analyses within a heterogeneous tropical landscape suggest that refined global land cover and climate data sets may contribute to an enhanced MOD17 product at a variety of spatial scales.
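
    To show where the land-cover and climate inputs enter, a simplified MOD17-style light-use-efficiency calculation is sketched below; the parameter values are placeholders, not the MOD17 biome lookup table.

      import numpy as np

      # Simplified light-use-efficiency GPP sketch: the land-cover input sets epsilon_max
      # (via a biome lookup) and the climate input supplies PAR, Tmin, and VPD.
      # All parameter values below are placeholders.

      def ramp(x, x_min, x_max):
          # Linear 0-1 scalar: 0 at or below x_min, 1 at or above x_max.
          return np.clip((x - x_min) / (x_max - x_min), 0.0, 1.0)

      def gpp(epsilon_max, fpar, sw_rad, t_min, vpd):
          par = 0.45 * sw_rad                           # photosynthetically active radiation (MJ m-2 d-1)
          f_t = ramp(t_min, -8.0, 10.0)                 # temperature limitation scalar
          f_v = 1.0 - ramp(vpd, 650.0, 3000.0)          # VPD limitation scalar (Pa)
          return epsilon_max * f_t * f_v * fpar * par   # gC m-2 d-1

      # Same pixel driven by a coarse vs. a locally specific land-cover/climate combination.
      coarse = gpp(epsilon_max=1.2, fpar=0.80, sw_rad=22.0, t_min=18.0, vpd=900.0)
      local  = gpp(epsilon_max=1.1, fpar=0.75, sw_rad=20.0, t_min=18.0, vpd=1100.0)
      print(f"coarse inputs: {coarse:.2f}  local inputs: {local:.2f}  (gC m-2 d-1)")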

  12. Evaluating the role of land cover and climate uncertainties in computing gross primary production in Hawaiian Island ecosystems.

    PubMed

    Kimball, Heather L; Selmants, Paul C; Moreno, Alvaro; Running, Steve W; Giardina, Christian P

    2017-01-01

    Gross primary production (GPP) is the Earth's largest carbon flux into the terrestrial biosphere and plays a critical role in regulating atmospheric chemistry and global climate. The Moderate Resolution Imaging Spectrometer (MODIS)-MOD17 data product is a widely used remote sensing-based model that provides global estimates of spatiotemporal trends in GPP. When the MOD17 algorithm is applied to regional scale heterogeneous landscapes, input data from coarse resolution land cover and climate products may increase uncertainty in GPP estimates, especially in high productivity tropical ecosystems. We examined the influence of using locally specific land cover and high-resolution local climate input data on MOD17 estimates of GPP for the State of Hawaii, a heterogeneous and discontinuous tropical landscape. Replacing the global land cover data input product (MOD12Q1) with Hawaii-specific land cover data reduced statewide GPP estimates by ~8%, primarily because the Hawaii-specific land cover map had less vegetated land area compared to the global land cover product. Replacing coarse resolution GMAO climate data with Hawaii-specific high-resolution climate data also reduced statewide GPP estimates by ~8% because of the higher spatial variability of photosynthetically active radiation (PAR) in the Hawaii-specific climate data. The combined use of both Hawaii-specific land cover and high-resolution Hawaii climate data inputs reduced statewide GPP by ~16%, suggesting equal and independent influence on MOD17 GPP estimates. Our sensitivity analyses within a heterogeneous tropical landscape suggest that refined global land cover and climate data sets may contribute to an enhanced MOD17 product at a variety of spatial scales.

  13. Evaluating the role of land cover and climate uncertainties in computing gross primary production in Hawaiian Island ecosystems

    PubMed Central

    Selmants, Paul C.; Moreno, Alvaro; Running, Steve W.; Giardina, Christian P.

    2017-01-01

    Gross primary production (GPP) is the Earth’s largest carbon flux into the terrestrial biosphere and plays a critical role in regulating atmospheric chemistry and global climate. The Moderate Resolution Imaging Spectrometer (MODIS)-MOD17 data product is a widely used remote sensing-based model that provides global estimates of spatiotemporal trends in GPP. When the MOD17 algorithm is applied to regional scale heterogeneous landscapes, input data from coarse resolution land cover and climate products may increase uncertainty in GPP estimates, especially in high productivity tropical ecosystems. We examined the influence of using locally specific land cover and high-resolution local climate input data on MOD17 estimates of GPP for the State of Hawaii, a heterogeneous and discontinuous tropical landscape. Replacing the global land cover data input product (MOD12Q1) with Hawaii-specific land cover data reduced statewide GPP estimates by ~8%, primarily because the Hawaii-specific land cover map had less vegetated land area compared to the global land cover product. Replacing coarse resolution GMAO climate data with Hawaii-specific high-resolution climate data also reduced statewide GPP estimates by ~8% because of the higher spatial variability of photosynthetically active radiation (PAR) in the Hawaii-specific climate data. The combined use of both Hawaii-specific land cover and high-resolution Hawaii climate data inputs reduced statewide GPP by ~16%, suggesting equal and independent influence on MOD17 GPP estimates. Our sensitivity analyses within a heterogeneous tropical landscape suggest that refined global land cover and climate data sets may contribute to an enhanced MOD17 product at a variety of spatial scales. PMID:28886187

  14. A computer-aided approach to compare the production economics of fed-batch and perfusion culture under uncertainty.

    PubMed

    Lim, Ai Chye; Washbrook, John; Titchener-Hooker, Nigel John; Farid, Suzanne S

    2006-03-05

    Fed-batch and perfusion culture dominate mammalian cell culture production processes. In this paper, a decision-support tool was employed to evaluate the economic feasibility of both culture modes via a case study based upon the large-scale production of monoclonal antibodies. The trade-offs between the relative simplicity but higher start-up costs of fed-batch processes and the high productivity but higher chances of equipment failure of perfusion processes were analysed. Deterministic analysis showed that whilst there was an insignificant difference (3%) between the cost of goods per gram (COG/g) values, the perfusion option benefited from a 42% reduction in capital investment and a 12% higher projected net present value (NPV). When Monte Carlo simulations were used to account for uncertainties in titre and yield, as well as the risks of contamination and filter fouling, the frequency distributions for the output metrics revealed that neither process route offered the best of both NPV or product output. A product output criterion was formulated and the options that met the criterion were compared based on their reward/risk ratio. The perfusion option was no longer feasible as it failed to meet the product output criterion and the fed-batch option had a 100% higher reward/risk ratio. The tool indicated that in this particular case, the probabilities of contamination and fouling in the perfusion option need to be reduced from 10% to 3% for this option to have the higher reward/risk ratio. The case study highlighted the limitations of relying on deterministic analysis alone.
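
    A minimal Monte Carlo sketch of the kind of reward/risk comparison described above; all process figures (titres, batch counts, contamination probabilities, capital costs, the product-output criterion) are hypothetical placeholders, not values from the paper or its decision-support tool:

      import numpy as np

      rng = np.random.default_rng(0)
      N = 20_000                                          # Monte Carlo trials

      def simulate(batches, titre_mean, titre_sd, contamination_p, capital):
          # crude single-period economics: a contaminated batch is lost entirely
          titre = rng.normal(titre_mean, titre_sd, size=(N, batches))
          lost = rng.random((N, batches)) < contamination_p
          output = (titre * ~lost).sum(axis=1)            # kg product per campaign
          npv = 2.0 * output - capital                    # $M; assumed price of 2 $M/kg
          return output, npv

      out_fb, npv_fb = simulate(batches=20, titre_mean=5.0, titre_sd=1.0,
                                contamination_p=0.02, capital=60.0)   # "fed-batch"
      out_pf, npv_pf = simulate(batches=60, titre_mean=2.0, titre_sd=0.6,
                                contamination_p=0.10, capital=35.0)   # "perfusion"

      for name, out, npv in (("fed-batch", out_fb, npv_fb), ("perfusion", out_pf, npv_pf)):
          print(f"{name}: P(output > 80 kg) = {(out > 80.0).mean():.2f}, "
                f"reward/risk = {npv.mean() / npv.std():.2f}")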

  15. Experimental Approach for the Uncertainty Assessment of 3D Complex Geometry Dimensional Measurements Using Computed Tomography at the mm and Sub-mm Scales.

    PubMed

    Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A; Ontiveros, Sinué; Tosello, Guido

    2017-05-16

    The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems' traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such CMS would still hold all the typical limitations of optical and tactile techniques, particularly when measuring miniaturized components with complex 3D geometries and their inability to measure inner parts. To validate the presented method, the most accepted standard currently available for CT sensors, the Verein Deutscher Ingenieure/Verband Deutscher Elektrotechniker (VDI/VDE) guideline 2630-2.1 is applied. Considering the high number of influence factors in CT and their impact on the measuring result, two different techniques for surface extraction are also considered to obtain a realistic determination of the influence of data processing on uncertainty. The uncertainty assessment of a workpiece used for micro mechanical material testing is firstly used to confirm the method, due to its feasible calibration by an optical CMS. Secondly, the measurement of a miniaturized dental file with 3D complex geometry is carried out. The estimated uncertainties are eventually compared with the component's calibration and the micro manufacturing tolerances to demonstrate the suitability of the presented CT calibration procedure. The 2U/T ratios resulting from the

  16. Experimental Approach for the Uncertainty Assessment of 3D Complex Geometry Dimensional Measurements Using Computed Tomography at the mm and Sub-mm Scales

    PubMed Central

    Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A.; Ontiveros, Sinué; Tosello, Guido

    2017-01-01

    The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems’ traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such CMS would still hold all the typical limitations of optical and tactile techniques, particularly when measuring miniaturized components with complex 3D geometries and their inability to measure inner parts. To validate the presented method, the most accepted standard currently available for CT sensors, the Verein Deutscher Ingenieure/Verband Deutscher Elektrotechniker (VDI/VDE) guideline 2630-2.1 is applied. Considering the high number of influence factors in CT and their impact on the measuring result, two different techniques for surface extraction are also considered to obtain a realistic determination of the influence of data processing on uncertainty. The uncertainty assessment of a workpiece used for micro mechanical material testing is firstly used to confirm the method, due to its feasible calibration by an optical CMS. Secondly, the measurement of a miniaturized dental file with 3D complex geometry is carried out. The estimated uncertainties are eventually compared with the component’s calibration and the micro manufacturing tolerances to demonstrate the suitability of the presented CT calibration procedure. The 2U/T ratios resulting from
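
    The suitability check named at the end of both records (the 2U/T ratio) follows from a GUM-style uncertainty budget built around the experimentally determined MPE. A generic sketch with hypothetical numbers, not the budget or values of the study above:

      import math

      MPE   = 0.009                    # mm, experimentally determined maximum permissible error
      u_mpe = MPE / math.sqrt(3)       # standard uncertainty, assuming a rectangular error distribution
      u_p   = 0.002                    # mm, repeatability of the measurement procedure (repeated scans)
      u_w   = 0.001                    # mm, workpiece-related term (e.g. surface determination)

      u_c = math.sqrt(u_mpe**2 + u_p**2 + u_w**2)   # combined standard uncertainty
      U   = 2 * u_c                                  # expanded uncertainty, coverage factor k = 2

      T = 0.05                         # mm, manufacturing tolerance of the measured feature
      print(f"U = {1000 * U:.1f} um, 2U/T = {2 * U / T:.2f}")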

  17. Conundrums with uncertainty factors.

    PubMed

    Cooke, Roger

    2010-03-01

    The practice of uncertainty factors as applied to noncancer endpoints in the IRIS database harkens back to traditional safety factors. In the era before risk quantification, these were used to build in a "margin of safety." As risk quantification takes hold, the safety factor methods yield to quantitative risk calculations to guarantee safety. Many authors believe that uncertainty factors can be given a probabilistic interpretation as ratios of response rates, and that the reference values computed according to the IRIS methodology can thus be converted to random variables whose distributions can be computed with Monte Carlo methods, based on the distributions of the uncertainty factors. Recent proposals from the National Research Council echo this view. Based on probabilistic arguments, several authors claim that the current practice of uncertainty factors is overprotective. When interpreted probabilistically, uncertainty factors entail very strong assumptions on the underlying response rates. For example, the factor for extrapolating from animal to human is the same whether the dosage is chronic or subchronic. Together with independence assumptions, these assumptions entail that the covariance matrix of the logged response rates is singular. In other words, the accumulated assumptions entail a log-linear dependence between the response rates. This in turn means that any uncertainty analysis based on these assumptions is ill-conditioned; it effectively computes uncertainty conditional on a set of zero probability. The practice of uncertainty factors is due for a thorough review. Two directions are briefly sketched, one based on standard regression models, and one based on nonparametric continuous Bayesian belief nets.
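
    The probabilistic reinterpretation that the abstract questions treats each uncertainty factor as a random variable (typically lognormal) and divides a point of departure by their product. A minimal sketch of that calculation, with hypothetical medians and geometric standard deviations, which also makes the independence assumption explicit:

      import numpy as np

      rng = np.random.default_rng(1)
      N = 100_000
      NOAEL = 10.0                                 # mg/kg-day, hypothetical point of departure

      def uf(median=10.0, gsd=2.0):
          # one lognormal uncertainty factor (animal-to-human, inter-human, etc.)
          return rng.lognormal(np.log(median), np.log(gsd), N)

      # independence of the factors is assumed here, exactly the strong assumption criticised above
      rfd = NOAEL / (uf() * uf() * uf())
      print("median reference dose:", np.median(rfd))
      print("5th percentile:", np.percentile(rfd, 5))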

  18. A new surrogate modeling technique combining Kriging and polynomial chaos expansions - Application to uncertainty analysis in computational dosimetry

    NASA Astrophysics Data System (ADS)

    Kersaudy, Pierric; Sudret, Bruno; Varsier, Nadège; Picon, Odile; Wiart, Joe

    2015-04-01

    In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can request several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performances of the LARS-Kriging-PC are compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performances than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
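
    A rough scikit-learn sketch of the idea, not the authors' implementation: least-angle regression selects a sparse polynomial trend, and a Gaussian process (Kriging) model is fitted on the trend residuals. Plain polynomial features stand in for an orthonormal chaos basis, the leave-one-out selection of the number of terms is omitted, and an analytical test function replaces the expensive dosimetry solver:

      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import Lars
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(2)
      X = rng.uniform(-1, 1, size=(60, 3))                    # design of experiments
      y = np.sin(3 * X[:, 0]) + X[:, 1]**2 + 0.1 * X[:, 2]    # stand-in for the SAR solver

      # 1) sparse polynomial trend selected by least-angle regression
      poly = PolynomialFeatures(degree=4)
      P = poly.fit_transform(X)
      lars = Lars(n_nonzero_coefs=10).fit(P, y)

      # 2) Kriging (Gaussian process) model on the trend residuals
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
      gp.fit(X, y - lars.predict(P))

      def surrogate(X_new):
          return lars.predict(poly.transform(X_new)) + gp.predict(X_new)

      print(surrogate(rng.uniform(-1, 1, size=(5, 3))))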

  19. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    SciTech Connect

    Kersaudy, Pierric; Sudret, Bruno; Varsier, Nadège; Picon, Odile; Wiart, Joe

    2015-04-01

    In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can request several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performances of the LARS-Kriging-PC are compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performances than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.

  20. Quantification of Emission Factor Uncertainty

    EPA Science Inventory

    Emissions factors are important for estimating and characterizing emissions from sources of air pollution. There is no quantitative indication of uncertainty for these emission factors, most factors do not have an adequate data set to compute uncertainty, and it is very difficult...

  2. Teaching Uncertainties

    ERIC Educational Resources Information Center

    Duerdoth, Ian

    2009-01-01

    The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…

  3. Calibration Under Uncertainty.

    SciTech Connect

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
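
    A toy sketch of the CUU idea with a one-parameter simulator: both the measurement error and a crude model-inadequacy variance enter the likelihood, and the posterior of the calibration parameter is sampled with a basic random-walk Metropolis algorithm. The model, the prior, and all numbers are made up for illustration:

      import numpy as np

      rng = np.random.default_rng(3)
      x_obs = np.linspace(0, 1, 15)
      y_obs = 2.5 * x_obs**2 + rng.normal(0, 0.05, x_obs.size)   # synthetic "experiment"

      def log_post(theta, sig_obs=0.05, sig_model=0.05):
          s2 = sig_obs**2 + sig_model**2                 # data error + model-inadequacy term
          resid = y_obs - theta * x_obs**2               # simulator: y = theta * x^2
          return -0.5 * np.sum(resid**2) / s2 - 0.5 * (theta - 2.0)**2   # N(2, 1) prior

      theta, samples = 2.0, []
      lp = log_post(theta)
      for _ in range(20_000):                            # random-walk Metropolis
          prop = theta + rng.normal(0, 0.1)
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          samples.append(theta)

      post = np.array(samples[5_000:])
      print(f"posterior: {post.mean():.3f} +/- {post.std():.3f}")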

  4. Uncertainty Quantification in Aeroelasticity

    NASA Astrophysics Data System (ADS)

    Beran, Philip; Stanford, Bret; Schrock, Christopher

    2017-01-01

    Physical interactions between a fluid and structure, potentially manifested as self-sustained or divergent oscillations, can be sensitive to many parameters whose values are uncertain. Of interest here are aircraft aeroelastic interactions, which must be accounted for in aircraft certification and design. Deterministic prediction of these aeroelastic behaviors can be difficult owing to physical and computational complexity. New challenges are introduced when physical parameters and elements of the modeling process are uncertain. By viewing aeroelasticity through a nondeterministic prism, where key quantities are assumed stochastic, one may gain insights into how to reduce system uncertainty, increase system robustness, and maintain aeroelastic safety. This article reviews uncertainty quantification in aeroelasticity using traditional analytical techniques not reliant on computational fluid dynamics; compares and contrasts this work with emerging methods based on computational fluid dynamics, which target richer physics; and reviews the state of the art in aeroelastic optimization under uncertainty. Barriers to continued progress, for example, the so-called curse of dimensionality, are discussed.

  5. Uncertainties in successive measurements

    NASA Astrophysics Data System (ADS)

    Distler, Jacques; Paban, Sonia

    2013-06-01

    When you measure an observable, A, in quantum mechanics, the state of the system changes. This, in turn, affects the quantum-mechanical uncertainty in some noncommuting observable, B. The standard uncertainty relation puts a lower bound on the uncertainty of B in the initial state. What is relevant for a subsequent measurement of B, however, is the uncertainty of B in the postmeasurement state. We re-examine this problem, both in the case where A has a pure point spectrum and in the case where A has a continuous spectrum. In the latter case, the need to include a finite detector resolution, as part of what it means to measure such an observable, has dramatic implications for the result of successive measurements. Ozawa [Phys. Rev. A 67, 042105 (2003)] proposed an inequality satisfied in the case of successive measurements. Among our results, we show that his inequality is ineffective (can never come close to being saturated). For the cases of interest, we compute a sharper lower bound.
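
    For the pure-point-spectrum case the point is easy to reproduce numerically. A small spin-1/2 illustration (A = sigma_z, B = sigma_x) comparing the Robertson bound evaluated in the initial state with the uncertainty of B in each post-measurement state; the initial state used here is arbitrary:

      import numpy as np

      sz = np.array([[1, 0], [0, -1]], dtype=complex)
      sx = np.array([[0, 1], [1, 0]], dtype=complex)

      def var(op, psi):
          mean = np.vdot(psi, op @ psi).real
          return np.vdot(psi, op @ op @ psi).real - mean**2

      psi = np.array([np.cos(0.3), np.exp(0.5j) * np.sin(0.3)])    # arbitrary initial state

      comm = sz @ sx - sx @ sz                                     # Robertson: dA*dB >= |<[A,B]>|/2
      print("initial dA*dB =", np.sqrt(var(sz, psi) * var(sx, psi)),
            " bound =", abs(np.vdot(psi, comm @ psi)) / 2)

      # projective measurement of A = sigma_z, then the uncertainty of B in the collapsed state
      for outcome, eigvec in ((+1, np.array([1, 0], dtype=complex)),
                              (-1, np.array([0, 1], dtype=complex))):
          p = abs(np.vdot(eigvec, psi))**2
          print(f"outcome {outcome:+d}: probability {p:.3f}, "
                f"post-measurement Var(B) = {var(sx, eigvec):.3f}")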

  6. The parameter uncertainty inflation fallacy.

    PubMed

    Pernot, Pascal

    2017-09-14

    Statistical estimation of the prediction uncertainty of physical models is typically hindered by the inadequacy of these models due to various approximations they are built upon. The prediction errors caused by model inadequacy can be handled either by correcting the model's results or by adapting the model's parameter uncertainty to generate prediction uncertainties representative, in a way to be defined, of model inadequacy errors. The main advantage of the latter approach (thereafter called PUI, for Parameter Uncertainty Inflation) is its transferability to the prediction of other quantities of interest based on the same parameters. A critical review of implementations of PUI in several areas of computational chemistry shows that it is biased, in the sense that it does not produce prediction uncertainty bands conforming to model inadequacy errors.
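
    The simplest and most widespread PUI implementation rescales the covariance of a weighted least-squares fit by the reduced chi-square (Birge ratio). A sketch showing the mechanics on a deliberately inadequate model (a linear fit to curved data); the uniform inflation it produces is exactly what cannot reproduce inadequacy errors that vary across the prediction range:

      import numpy as np

      rng = np.random.default_rng(4)
      x = np.linspace(0, 10, 40)
      y = 1.0 + 0.5 * x + 0.05 * x**2 + rng.normal(0, 0.2, x.size)   # reality has curvature
      sigma = np.full_like(x, 0.2)                                   # stated measurement uncertainty

      X = np.column_stack([np.ones_like(x), x])                      # inadequate (linear) model
      W = np.diag(1 / sigma**2)
      cov = np.linalg.inv(X.T @ W @ X)
      beta = cov @ X.T @ W @ y

      birge2 = np.sum((y - X @ beta)**2 / sigma**2) / (x.size - 2)   # reduced chi-square
      cov_inflated = birge2 * cov                                    # parameter-uncertainty inflation
      print("Birge ratio squared:", round(birge2, 2))
      print("inflated parameter std devs:", np.sqrt(np.diag(cov_inflated)))
      # The resulting prediction band is inflated uniformly, while the model-inadequacy
      # error grows with x: the mismatch discussed in the abstract.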

  7. The parameter uncertainty inflation fallacy

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal

    2017-09-01

    Statistical estimation of the prediction uncertainty of physical models is typically hindered by the inadequacy of these models due to various approximations they are built upon. The prediction errors caused by model inadequacy can be handled either by correcting the model's results or by adapting the model's parameter uncertainty to generate prediction uncertainties representative, in a way to be defined, of model inadequacy errors. The main advantage of the latter approach (thereafter called PUI, for Parameter Uncertainty Inflation) is its transferability to the prediction of other quantities of interest based on the same parameters. A critical review of implementations of PUI in several areas of computational chemistry shows that it is biased, in the sense that it does not produce prediction uncertainty bands conforming to model inadequacy errors.

  8. Orbital State Uncertainty Realism

    NASA Astrophysics Data System (ADS)

    Horwood, J.; Poore, A. B.

    2012-09-01

    Fundamental to the success of the space situational awareness (SSA) mission is the rigorous inclusion of uncertainty in the space surveillance network. The proper characterization of uncertainty in the orbital state of a space object is a common requirement to many SSA functions including tracking and data association, resolution of uncorrelated tracks (UCTs), conjunction analysis and probability of collision, sensor resource management, and anomaly detection. While tracking environments, such as air and missile defense, make extensive use of Gaussian and local linearity assumptions within algorithms for uncertainty management, space surveillance is inherently different due to long time gaps between updates, high misdetection rates, nonlinear and non-conservative dynamics, and non-Gaussian phenomena. The latter implies that "covariance realism" is not always sufficient. SSA also requires "uncertainty realism"; the proper characterization of both the state and covariance and all non-zero higher-order cumulants. In other words, a proper characterization of a space object's full state probability density function (PDF) is required. In order to provide a more statistically rigorous treatment of uncertainty in the space surveillance tracking environment and to better support the aforementioned SSA functions, a new class of multivariate PDFs are formulated which more accurately characterize the uncertainty of a space object's state or orbit. The new distribution contains a parameter set controlling the higher-order cumulants which gives the level sets a distinctive "banana" or "boomerang" shape and degenerates to a Gaussian in a suitable limit. Using the new class of PDFs within the general Bayesian nonlinear filter, the resulting filter prediction step (i.e., uncertainty propagation) is shown to have the same computational cost as the traditional unscented Kalman filter with the former able to maintain a proper characterization of the uncertainty for up to ten
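
    The curved, non-Gaussian shape arises naturally when an initially Gaussian uncertainty in the orbital elements is propagated over a long arc and mapped to Cartesian coordinates. A minimal Monte Carlo illustration for a circular orbit; this only motivates the "banana", it is not the distribution family introduced in the paper:

      import numpy as np

      rng = np.random.default_rng(5)
      mu = 398600.4418                     # km^3/s^2, Earth's gravitational parameter
      a = rng.normal(7000.0, 5.0, 50_000)  # uncertain semi-major axis (km), circular orbit assumed
      t = 6 * 3600.0                       # six hours between observations

      theta = np.sqrt(mu / a**3) * t       # angular position after time t
      x, y = a * np.cos(theta), a * np.sin(theta)

      # The point cloud curves along the orbit; a mean and covariance alone misrepresent it.
      print("std(x), std(y) [km]:", x.std(), y.std())
      print("x-y correlation:", np.corrcoef(x, y)[0, 1])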

  9. Pharmacological Fingerprints of Contextual Uncertainty

    PubMed Central

    Ruge, Diane; Stephan, Klaas E.

    2016-01-01

    Successful interaction with the environment requires flexible updating of our beliefs about the world. By estimating the likelihood of future events, it is possible to prepare appropriate actions in advance and execute fast, accurate motor responses. According to theoretical proposals, agents track the variability arising from changing environments by computing various forms of uncertainty. Several neuromodulators have been linked to uncertainty signalling, but comprehensive empirical characterisation of their relative contributions to perceptual belief updating, and to the selection of motor responses, is lacking. Here we assess the roles of noradrenaline, acetylcholine, and dopamine within a single, unified computational framework of uncertainty. Using pharmacological interventions in a sample of 128 healthy human volunteers and a hierarchical Bayesian learning model, we characterise the influences of noradrenergic, cholinergic, and dopaminergic receptor antagonism on individual computations of uncertainty during a probabilistic serial reaction time task. We propose that noradrenaline influences learning of uncertain events arising from unexpected changes in the environment. In contrast, acetylcholine balances attribution of uncertainty to chance fluctuations within an environmental context, defined by a stable set of probabilistic associations, or to gross environmental violations following a contextual switch. Dopamine supports the use of uncertainty representations to engender fast, adaptive responses. PMID:27846219

  10. How Uncertain is Uncertainty?

    NASA Astrophysics Data System (ADS)

    Vámos, Tibor

    The gist of the paper is the fundamental uncertain nature of all kinds of uncertainties and consequently a critical epistemic review of historical and recent approaches, computational methods, algorithms. The review follows the development of the notion from the beginnings of thinking, via the Aristotelian and Skeptic view, the medieval nominalism and the influential pioneering metaphors of ancient India and Persia to the birth of modern mathematical disciplinary reasoning. Discussing the models of uncertainty, e.g. the statistical, other physical and psychological background we reach a pragmatic model related estimation perspective, a balanced application orientation for different problem areas. Data mining, game theories and recent advances in approximation algorithms are discussed in this spirit of modest reasoning.

  11. Quantifying Groundwater Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Poeter, E.; Foglia, L.

    2007-12-01

    Groundwater models are characterized by the (a) processes simulated, (b) boundary conditions, (c) initial conditions, (d) method of solving the equation, (e) parameterization, and (f) parameter values. Models are related to the system of concern using data, some of which form the basis of observations used most directly, through objective functions, to estimate parameter values. Here we consider situations in which parameter values are determined by minimizing an objective function. Other methods of model development are not considered because their ad hoc nature generally prohibits clear quantification of uncertainty. Quantifying prediction uncertainty ideally includes contributions from (a) to (f). The parameter values of (f) tend to be continuous with respect to both the simulated equivalents of the observations and the predictions, while many aspects of (a) through (e) are discrete. This fundamental difference means that there are options for evaluating the uncertainty related to parameter values that generally do not exist for other aspects of a model. While the methods available for (a) to (e) can be used for the parameter values (f), the inferential methods uniquely available for (f) generally are less computationally intensive and often can be used to considerable advantage. However, inferential approaches require calculation of sensitivities. Whether the numerical accuracy and stability of the model solution required for accurate sensitivities is more broadly important to other model uses is an issue that needs to be addressed. Alternative global methods can require 100 or even 1,000 times the number of runs needed by inferential methods, though methods of reducing the number of needed runs are being developed and tested. Here we present three approaches for quantifying model uncertainty and investigate their strengths and weaknesses. (1) Represent more aspects as parameters so that the computationally efficient methods can be broadly applied. This
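
    A toy sketch of the computationally frugal inferential route mentioned above: finite-difference sensitivities (the Jacobian whose accuracy the abstract worries about) around the optimal parameter values give a linear parameter covariance, which is then propagated to a prediction. The exponential "model" and all numbers are placeholders for a real groundwater simulator:

      import numpy as np

      rng = np.random.default_rng(6)

      def model(p, x):                                  # stand-in for the simulator
          return p[0] * np.exp(-p[1] * x)

      x_obs = np.linspace(0, 5, 12)
      obs = model([10.0, 0.4], x_obs) + rng.normal(0, 0.3, x_obs.size)
      p_hat = np.array([10.0, 0.4])                     # assume the regression converged here

      eps = 1e-6                                        # finite-difference sensitivities
      J = np.column_stack([(model(p_hat + eps * np.eye(2)[i], x_obs) -
                            model(p_hat, x_obs)) / eps for i in range(2)])

      s2 = np.sum((obs - model(p_hat, x_obs))**2) / (x_obs.size - 2)
      cov_p = s2 * np.linalg.inv(J.T @ J)               # linear (inferential) parameter covariance

      x_new = 8.0                                       # prediction outside the calibration range
      g = np.array([(model(p_hat + eps * np.eye(2)[i], x_new) -
                     model(p_hat, x_new)) / eps for i in range(2)])
      print("prediction:", model(p_hat, x_new), "+/- (2 sigma):", 2 * np.sqrt(g @ cov_p @ g))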

  12. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  13. Generalized Uncertainty Principle and angular momentum

    NASA Astrophysics Data System (ADS)

    Bosso, Pasquale; Das, Saurya

    2017-08-01

    Various models of quantum gravity suggest a modification of the Heisenberg's Uncertainty Principle, to the so-called Generalized Uncertainty Principle, between position and momentum. In this work we show how this modification influences the theory of angular momentum in Quantum Mechanics. In particular, we compute Planck scale corrections to angular momentum eigenvalues, the hydrogen atom spectrum, the Stern-Gerlach experiment and the Clebsch-Gordan coefficients. We also examine effects of the Generalized Uncertainty Principle on multi-particle systems.

  14. Uncertainty Quantification for Airfoil Icing

    NASA Astrophysics Data System (ADS)

    DeGennaro, Anthony Matteo

    Ensuring the safety of airplane flight in icing conditions is an important and active arena of research in the aerospace community. Notwithstanding the research, development, and legislation aimed at certifying airplanes for safe operation, an analysis of the effects of icing uncertainties on certification quantities of interest is generally lacking. The central objective of this thesis is to examine and analyze problems in airfoil ice accretion from the standpoint of uncertainty quantification. We focus on three distinct areas: user-informed, data-driven, and computational uncertainty quantification. In the user-informed approach to uncertainty quantification, we discuss important canonical icing classifications and show how these categories can be modeled using a few shape parameters. We then investigate the statistical effects of these parameters. In the data-driven approach, we build statistical models of airfoil ice shapes from databases of actual ice shapes, and quantify the effects of these parameters. Finally, in the computational approach, we investigate the effects of uncertainty in the physics of the ice accretion process, by perturbing the input to an in-house numerical ice accretion code that we develop in this thesis.

  15. Uncertainty of empirical correlation equations

    NASA Astrophysics Data System (ADS)

    Feistel, R.; Lovell-Smith, J. W.; Saunders, P.; Seitz, S.

    2016-08-01

    The International Association for the Properties of Water and Steam (IAPWS) has published a set of empirical reference equations of state, forming the basis of the 2010 Thermodynamic Equation of Seawater (TEOS-10), from which all thermodynamic properties of seawater, ice, and humid air can be derived in a thermodynamically consistent manner. For each of the equations of state, the parameters have been found by simultaneously fitting equations for a range of different derived quantities using large sets of measurements of these quantities. In some cases, uncertainties in these fitted equations have been assigned based on the uncertainties of the measurement results. However, because uncertainties in the parameter values have not been determined, it is not possible to estimate the uncertainty in many of the useful quantities that can be calculated using the parameters. In this paper we demonstrate how the method of generalised least squares (GLS), in which the covariance of the input data is propagated into the values calculated by the fitted equation, and in particular into the covariance matrix of the fitted parameters, can be applied to one of the TEOS-10 equations of state, namely IAPWS-95 for fluid pure water. Using the calculated parameter covariance matrix, we provide some preliminary estimates of the uncertainties in derived quantities, namely the second and third virial coefficients for water. We recommend further investigation of the GLS method for use as a standard method for calculating and propagating the uncertainties of values computed from empirical equations.
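
    A compact illustration of the GLS machinery described above, with a quadratic toy standing in for an IAPWS-style empirical equation: the parameter covariance (X^T V^-1 X)^-1 retained from the fit is propagated through the Jacobian of a derived quantity, here a temperature derivative. The covariance structure and coefficients are invented:

      import numpy as np

      rng = np.random.default_rng(7)
      T = np.linspace(270, 370, 25)                        # "temperature" grid
      X = np.column_stack([np.ones_like(T), T, T**2])      # basis of the empirical equation

      # correlated input data: independent noise plus a common systematic component
      V = 0.05**2 * np.eye(T.size) + 0.03**2 * np.ones((T.size, T.size))
      y = X @ np.array([1.0, -2e-3, 4e-6]) + rng.multivariate_normal(np.zeros(T.size), V)

      Vi = np.linalg.inv(V)
      cov_beta = np.linalg.inv(X.T @ Vi @ X)               # GLS parameter covariance
      beta = cov_beta @ X.T @ Vi @ y

      T0 = 300.0
      G = np.array([0.0, 1.0, 2 * T0])                     # Jacobian of the derived quantity d/dT at T0
      print(f"derivative at {T0} K: {G @ beta:.3e} +/- {np.sqrt(G @ cov_beta @ G):.1e}")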

  16. Uncertainty and Engagement with Learning Games

    ERIC Educational Resources Information Center

    Howard-Jones, Paul A.; Demetriou, Skevi

    2009-01-01

    Uncertainty may be an important component of the motivation provided by learning games, especially when associated with gaming rather than learning. Three studies are reported that explore the influence of gaming uncertainty on engagement with computer-based learning games. In the first study, children (10-11 years) played a simple maths quiz.…

  18. The uncertainties in estimating measurement uncertainties

    SciTech Connect

    Clark, J.P.; Shull, A.H.

    1994-07-01

    All measurements include some error. Whether measurements are used for accountability, environmental programs or process support, they are of little value unless accompanied by an estimate of the measurement's uncertainty. This fact is often overlooked by the individuals who need measurements to make decisions. This paper will discuss the concepts of measurement, measurement errors (accuracy or bias and precision or random error), physical and error models, measurement control programs, examples of measurement uncertainty, and uncertainty as related to measurement quality. Measurements are comparisons of unknowns to knowns, estimates of some true value plus uncertainty, and are no better than the standards to which they are compared. Direct comparisons of unknowns that match the composition of known standards will normally have small uncertainties. In the real world, measurements usually involve indirect comparisons of significantly different materials (e.g., measuring a physical property of a chemical element in a sample having a matrix that is significantly different from the calibration standards' matrix). Consequently, there are many sources of error involved in measurement processes that can affect the quality of a measurement and its associated uncertainty. How the uncertainty estimates are determined and what they mean is as important as the measurement. The process of calculating the uncertainty of a measurement itself has uncertainties that must be handled correctly. Examples of chemistry laboratory measurements will be reviewed in this report and recommendations made for improving measurement uncertainties.
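
    One concrete piece of the "uncertainty of the uncertainty estimate" is that a standard deviation computed from a handful of replicates is itself quite uncertain; for normally distributed data the confidence interval follows from the chi-square distribution. A short sketch with invented replicate values:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      replicates = rng.normal(100.0, 0.5, size=8)       # e.g. repeated measurements of a standard
      n, s = replicates.size, replicates.std(ddof=1)    # estimated standard uncertainty

      # 95 % confidence interval for the true standard deviation (normality assumed)
      lo = s * np.sqrt((n - 1) / stats.chi2.ppf(0.975, n - 1))
      hi = s * np.sqrt((n - 1) / stats.chi2.ppf(0.025, n - 1))
      print(f"s = {s:.3f}; 95% CI on sigma: [{lo:.3f}, {hi:.3f}]")
      # With only 8 replicates the interval spans roughly a factor of three:
      # the uncertainty estimate itself carries substantial uncertainty.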

  19. Picturing Data With Uncertainty

    NASA Technical Reports Server (NTRS)

    Kao, David; Love, Alison; Dungan, Jennifer L.; Pang, Alex

    2004-01-01

    NASA is in the business of creating maps for scientific purposes to represent important biophysical or geophysical quantities over space and time. For example, maps of surface temperature over the globe tell scientists where and when the Earth is heating up; regional maps of the greenness of vegetation tell scientists where and when plants are photosynthesizing. There is always uncertainty associated with each value in any such map due to various factors. When uncertainty is fully modeled, instead of a single value at each map location, there is a distribution expressing a set of possible outcomes at each location. We consider such distribution data as multi-valued data since it consists of a collection of values about a single variable. Thus, multi-valued data represent both the map and its uncertainty. We have been working on ways to visualize spatial multi-valued data sets effectively for fields with regularly spaced units or grid cells such as those in NASA's Earth science applications. A new way to display distributions at multiple grid locations is to project the distributions from an individual row, column or other user-selectable straight transect from the 2D domain. First, at each grid cell in a given slice (row, column or transect), we compute a smooth density estimate from the underlying data. Such a density estimate for the probability density function (PDF) is generally more useful than a histogram, which is a classic density estimate. Then, the collection of PDFs along a given slice is presented vertically above the slice and forms a wall. To minimize occlusion of intersecting slices, the corresponding walls are positioned at the far edges of the boundary. The PDF wall depicts the shapes of the distributions very clearly since peaks represent the modes (or bumps) in the PDFs. We've defined roughness as the number of peaks in the distribution. Roughness is another useful summary measure for multimodal distributions. The uncertainty of the multi
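
    The per-cell density estimate and the peak-count ("roughness") summary are easy to reproduce; a sketch for a single slice of a synthetic multi-valued field, using a Gaussian kernel density estimate per grid cell (the data are invented):

      import numpy as np
      from scipy.stats import gaussian_kde
      from scipy.signal import find_peaks

      rng = np.random.default_rng(9)
      # one slice (row) of the 2D domain: each grid cell holds a distribution of outcomes
      cells = [rng.normal(290, 2, 200) for _ in range(5)]                       # unimodal cells
      cells += [np.concatenate([rng.normal(285, 1.5, 100),
                                rng.normal(295, 1.5, 100)]) for _ in range(5)]  # bimodal cells

      grid = np.linspace(275, 305, 300)
      for i, values in enumerate(cells):
          pdf = gaussian_kde(values)(grid)      # smooth density estimate forming the "PDF wall"
          peaks, _ = find_peaks(pdf)
          print(f"cell {i}: roughness (number of peaks) = {len(peaks)}")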

  1. Collaborative Project: The problem of bias in defining uncertainty in computationally enabled strategies for data-driven climate model development. Final Technical Report.

    SciTech Connect

    Huerta, Gabriel

    2016-05-10

    The objective of the project is to develop strategies for better representing scientific sensibilities within statistical measures of model skill that can then be used within a Bayesian statistical framework for data-driven climate model development and improved measures of model scientific uncertainty. One of the thorny issues in model evaluation is quantifying the effect of biases on climate projections. While any bias is not desirable, only those biases that affect feedbacks affect scatter in climate projections. The effort at the University of Texas is to analyze previously calculated ensembles of CAM3.1 with perturbed parameters to discover how biases affect projections of global warming. The hypothesis is that compensating errors in the control model can be identified by their effect on a combination of processes and that developing metrics that are sensitive to dependencies among state variables would provide a way to select versions of climate models that may reduce scatter in climate projections. Gabriel Huerta at the University of New Mexico is responsible for developing statistical methods for evaluating these field dependencies. The UT effort will incorporate these developments into MECS, a set of Python scripts being developed at the University of Texas for managing the workflow associated with data-driven climate model development over HPC resources. This report reflects the main activities at the University of New Mexico, where the PI (Huerta) and the postdocs (Nosedal, Hattab and Karki) worked on the project.

  2. Uncertainty as knowledge

    PubMed Central

    Lewandowsky, Stephan; Ballard, Timothy; Pancost, Richard D.

    2015-01-01

    This issue of Philosophical Transactions examines the relationship between scientific uncertainty about climate change and knowledge. Uncertainty is an inherent feature of the climate system. Considerable effort has therefore been devoted to understanding how to effectively respond to a changing, yet uncertain climate. Politicians and the public often appeal to uncertainty as an argument to delay mitigative action. We argue that the appropriate response to uncertainty is exactly the opposite: uncertainty provides an impetus to be concerned about climate change, because greater uncertainty increases the risks associated with climate change. We therefore suggest that uncertainty can be a source of actionable knowledge. We survey the papers in this issue, which address the relationship between uncertainty and knowledge from physical, economic and social perspectives. We also summarize the pervasive psychological effects of uncertainty, some of which may militate against a meaningful response to climate change, and we provide pointers to how those difficulties may be ameliorated. PMID:26460108

  3. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    SciTech Connect

    Kreinovich, Vladik; Oberkampf, William Louis; Ginzburg, Lev; Ferson, Scott; Hajagos, Janos

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
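
    For interval data the descriptive statistics become interval-valued themselves. Bounds on the mean are immediate, and because the sample variance is convex in each data coordinate its maximum over the data box is attained at interval endpoints, so a small data set can be handled by enumeration (the general problem, and the lower bound, are harder, as the report discusses). A sketch with made-up intervals:

      import numpy as np
      from itertools import product

      # each measurement is known only to lie in [lo, hi]
      data = np.array([[1.0, 1.4], [2.1, 2.5], [1.8, 2.6], [0.9, 1.1], [2.0, 2.2]])
      lo, hi = data[:, 0], data[:, 1]

      print("mean lies in", (lo.mean(), hi.mean()))       # exact bounds on the mean

      # upper bound on the variance: enumerate the 2^n endpoint configurations
      var_max = max(np.var(np.where(choice, hi, lo), ddof=1)
                    for choice in product([False, True], repeat=len(data)))
      print("upper bound on the variance:", round(var_max, 4))
      # the lower bound needs more care, since its minimiser can lie inside the intervals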

  4. Uncertainty quantification for environmental models

    USGS Publications Warehouse

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10

  5. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.

  6. Assessment of the computational uncertainty of temperature rise and SAR in the eyes and brain under far-field exposure from 1 to 10 GHz.

    PubMed

    Laakso, Ilkka

    2009-06-07

    This paper presents finite-difference time-domain (FDTD) calculations of specific absorption rate (SAR) values in the head under plane-wave exposure from 1 to 10 GHz using a resolution of 0.5 mm in adult male and female voxel models. Temperature rise due to the power absorption is calculated by the bioheat equation using a multigrid method solver. The computational accuracy is investigated by repeating the calculations with resolutions of 1 mm and 2 mm and comparing the results. Cubically averaged 10 g SAR in the eyes and brain and eye-averaged SAR are calculated and compared to the corresponding temperature rise as well as the recommended limits for exposure. The results suggest that 2 mm resolution should only be used for frequencies smaller than 2.5 GHz, and 1 mm resolution only under 5 GHz. Morphological differences in models seemed to be an important cause of variation: differences in results between the two different models were usually larger than the computational error due to the grid resolution, and larger than the difference between the results for open and closed eyes. Limiting the incident plane-wave power density to smaller than 100 W m-2 was sufficient for ensuring that the temperature rise in the eyes and brain was less than 1 °C in the whole frequency range.
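
    The three-resolution comparison described above (0.5 mm, 1 mm, 2 mm) is the standard setting for estimating discretization uncertainty from an observed order of convergence, a Richardson extrapolation and a grid-convergence-index style error band. A generic sketch with invented SAR values, not the study's data:

      import math

      # peak 10 g SAR (W/kg) on three grids with constant refinement ratio r = 2
      f_fine, f_med, f_coarse = 1.000, 1.012, 1.055       # 0.5 mm, 1 mm, 2 mm (illustrative)
      r = 2.0

      p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)   # observed order
      f_extrap = f_fine + (f_fine - f_med) / (r**p - 1)                   # Richardson extrapolation
      gci_fine = 1.25 * abs((f_med - f_fine) / f_fine) / (r**p - 1)       # GCI with safety factor 1.25

      print(f"observed order of convergence: {p:.2f}")
      print(f"extrapolated SAR: {f_extrap:.4f} W/kg")
      print(f"fine-grid discretization uncertainty: ~{100 * gci_fine:.2f} %")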

  7. Assessment of the computational uncertainty of temperature rise and SAR in the eyes and brain under far-field exposure from 1 to 10 GHz

    NASA Astrophysics Data System (ADS)

    Laakso, Ilkka

    2009-06-01

    This paper presents finite-difference time-domain (FDTD) calculations of specific absorption rate (SAR) values in the head under plane-wave exposure from 1 to 10 GHz using a resolution of 0.5 mm in adult male and female voxel models. Temperature rise due to the power absorption is calculated by the bioheat equation using a multigrid method solver. The computational accuracy is investigated by repeating the calculations with resolutions of 1 mm and 2 mm and comparing the results. Cubically averaged 10 g SAR in the eyes and brain and eye-averaged SAR are calculated and compared to the corresponding temperature rise as well as the recommended limits for exposure. The results suggest that 2 mm resolution should only be used for frequencies smaller than 2.5 GHz, and 1 mm resolution only under 5 GHz. Morphological differences in models seemed to be an important cause of variation: differences in results between the two different models were usually larger than the computational error due to the grid resolution, and larger than the difference between the results for open and closed eyes. Limiting the incident plane-wave power density to smaller than 100 W m-2 was sufficient for ensuring that the temperature rise in the eyes and brain were less than 1 °C in the whole frequency range.

  8. Estimating uncertainties in complex joint inverse problems

    NASA Astrophysics Data System (ADS)

    Afonso, Juan Carlos

    2016-04-01

    Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should be always conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related

  9. Coastal 'Big Data' and nature-inspired computation: Prediction potentials, uncertainties, and knowledge derivation of neural networks for an algal metric

    NASA Astrophysics Data System (ADS)

    Millie, David F.; Weckman, Gary R.; Young, William A.; Ivey, James E.; Fries, David P.; Ardjmand, Ehsan; Fahnenstiel, Gary L.

    2013-07-01

    Coastal monitoring has become reliant upon automated sensors for data acquisition. Such a technical commitment comes with a cost; particularly, the generation of large, high-dimensional data streams ('Big Data') that personnel must search through to identify data structures. Nature-inspired computation, inclusive of artificial neural networks (ANNs), affords the unearthing of complex, recurring patterns within sizable data volumes. In 2009, select meteorological and hydrological data were acquired via autonomous instruments in Sarasota Bay, Florida (USA). ANNs estimated continuous chlorophyll (CHL) a concentrations from abiotic predictors, with correlations between measured:modeled concentrations >0.90 and model efficiencies ranging from 0.80 to 0.90. Salinity and water temperature were the principal influences for modeled CHL within the Bay; concentrations steadily increased at temperatures >28° C and were greatest at salinities <36 (maximizing at ca. 35.3). Categorical ANNs modeled CHL classes of 6.1 and 11 μg CHL L-1 (representative of local and state-imposed constraint thresholds, respectively), with an accuracy of ca. 83% and class precision ranging from 0.79 to 0.91. The occurrence likelihood of concentrations > 6.1 μg CHL L-1 maximized at a salinity of ca. 36.3 and a temperature of ca. 29.5 °C. A 10th-order Chebyshev bivariate polynomial equation was fit (adj. r2 = 0.99, p < 0.001) to a three-dimensional response surface portraying modeled CHL concentrations, conditional to the temperature-salinity interaction. The TREPAN algorithm queried a continuous ANN to extract a decision tree for delineation of CHL classes; turbidity, temperature, and salinity (and to lesser degrees, wind speed, wind/current direction, irradiance, and urea-nitrogen) were key variables for quantitative rules in tree formalisms. Taken together, computations enabled knowledge provision for and quantifiable representations of the non-linear relationships between environmental

  10. Numerical Uncertainty Quantification for Radiation Analysis Tools

    NASA Technical Reports Server (NTRS)

    Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha

    2007-01-01

    Recently a new emphasis has been placed on engineering applications of space radiation analyses and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing, as the number of rays increase the associated uncertainty decreases, but the computational expense increases. Thus, a cost benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is what is the number of thicknesses that is needed to get an accurate result. So convergence testing is performed to quantify the uncertainty associated with interpolating over different shield thickness spatial grids.

  11. Uncertainty Propagation for Terrestrial Mobile Laser Scanner

    NASA Astrophysics Data System (ADS)

    Mezian, c.; Vallet, Bruno; Soheilian, Bahman; Paparoditis, Nicolas

    2016-06-01

    Laser scanners are used increasingly in mobile mapping systems. They provide 3D point clouds that are used for object reconstruction and for registration of the system. For both of these applications, uncertainty analysis of the 3D points is of great interest but is rarely investigated in the literature. In this paper we present a complete pipeline that takes all sources of uncertainty into account and computes a covariance matrix for each 3D point. The sources of uncertainty are the laser scanner itself, the calibration of the scanner with respect to the vehicle, and the direct georeferencing system. We assume that all uncertainties are Gaussian. The variances of the laser scanner measurements (two angles and one distance) are usually provided by the manufacturer, as they are for integrated direct georeferencing devices. Residuals of the calibration process were used to estimate the covariance matrix of the 6D transformation between the laser scanner and the vehicle frame. Knowing the variances of all uncertainty sources, we applied uncertainty propagation to compute the variance-covariance matrix of every 3D point. Such an analysis makes it possible to estimate the impact of different laser scanners and georeferencing devices on the quality of the resulting 3D points. The obtained uncertainty values are illustrated using error ellipsoids on different datasets.
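
    A minimal sketch of the kind of first-order (Gaussian) propagation described above, assuming a simple spherical measurement model (range and two angles) and a diagonal measurement covariance; calibration and georeferencing terms would be added with the same J Sigma J^T pattern. The numbers and conventions are illustrative, not the authors' pipeline.

    ```python
    # First-order propagation: Sigma_xyz = J * Sigma_meas * J^T (values are illustrative).
    import numpy as np

    def spherical_to_cartesian(r, theta, phi):
        return np.array([r * np.cos(theta) * np.cos(phi),
                         r * np.cos(theta) * np.sin(phi),
                         r * np.sin(theta)])

    def jacobian(r, theta, phi):
        ct, st = np.cos(theta), np.sin(theta)
        cp, sp = np.cos(phi), np.sin(phi)
        # d(x, y, z) / d(r, theta, phi)
        return np.array([[ct * cp, -r * st * cp, -r * ct * sp],
                         [ct * sp, -r * st * sp,  r * ct * cp],
                         [st,       r * ct,       0.0]])

    r, theta, phi = 25.0, np.radians(10.0), np.radians(40.0)          # one measurement
    sigma_meas = np.diag([0.01 ** 2, np.radians(0.01) ** 2, np.radians(0.01) ** 2])

    J = jacobian(r, theta, phi)
    sigma_xyz = J @ sigma_meas @ J.T                                  # 3x3 point covariance
    print("point (m):", spherical_to_cartesian(r, theta, phi))
    print("1-sigma per axis (m):", np.sqrt(np.diag(sigma_xyz)))
    ```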

  12. MOUSE (MODULAR ORIENTED UNCERTAINTY SYSTEM): A COMPUTERIZED UNCERTAINTY ANALYSIS SYSTEM. OPERATIONAL MANUAL.

    EPA Science Inventory

    MOUSE (Modular Oriented Uncertainty SystEm) deals with the problem of uncertainties in models that consist of one or more algebraic equations. It was especially designed for use by those with little or no knowledge of computer languages or programming. It is compact (and thus can...

  14. Uncertainty estimation and prediction for interdisciplinary ocean dynamics

    SciTech Connect

    Lermusiaux, Pierre F.J. . E-mail: pierrel@pacific.harvard.edu

    2006-09-01

    Scientific computations for the quantification, estimation and prediction of uncertainties for ocean dynamics are developed and exemplified. Primary characteristics of ocean data, models and uncertainties are reviewed and quantitative data assimilation concepts defined. Challenges involved in realistic data-driven simulations of uncertainties for four-dimensional interdisciplinary ocean processes are emphasized. Equations governing uncertainties in the Bayesian probabilistic sense are summarized. Stochastic forcing formulations are introduced and a new stochastic-deterministic ocean model is presented. The computational methodology and numerical system, Error Subspace Statistical Estimation, that is used for the efficient estimation and prediction of oceanic uncertainties based on these equations is then outlined. Capabilities of the ESSE system are illustrated in three data-assimilative applications: estimation of uncertainties for physical-biogeochemical fields, transfers of ocean physics uncertainties to acoustics, and real-time stochastic ensemble predictions with assimilation of a wide range of data types. Relationships with other modern uncertainty quantification schemes and promising research directions are discussed.

  15. Uncertainty in audiometer calibration

    NASA Astrophysics Data System (ADS)

    Aurélio Pedroso, Marcos; Gerges, Samir N. Y.; Gonçalves, Armando A., Jr.

    2004-02-01

    The objective of this work is to present a metrology study necessary for the accreditation of audiometer calibration procedures at the National Brazilian Institute of Metrology Standardization and Industrial Quality—INMETRO. A model for the calculation of measurement uncertainty was developed. Metrological aspects relating to audiometer calibration, traceability and measurement uncertainty were quantified through comparison between results obtained at the Industrial Noise Laboratory—LARI of the Federal University of Santa Catarina—UFSC and the Laboratory of Electric/acoustics—LAETA of INMETRO. Similar metrological performance of the measurement system used in both laboratories was obtained, indicating that the interlaboratory results are compatible with the expected values. The uncertainty calculation was based on the documents: EA-4/02 Expression of the Uncertainty of Measurement in Calibration (European Co-operation for Accreditation 1999 EA-4/02 p 79) and Guide to the Expression of Uncertainty in Measurement (International Organization for Standardization 1993 1st edn, corrected and reprinted in 1995, Geneva, Switzerland). Some sources of uncertainty were calculated theoretically (uncertainty type B) and other sources were measured experimentally (uncertainty type A). The global value of uncertainty calculated for the sound pressure levels (SPLs) is similar to that given by other calibration institutions. The results of uncertainty related to measurements of SPL were compared with the maximum uncertainties Umax given in the standard IEC 60645-1: 2001 (International Electrotechnical Commission 2001 IEC 60645-1 Electroacoustics—Audiological Equipment—Part 1:—Pure-Tone Audiometers).
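
    The sketch below illustrates a generic GUM-style uncertainty budget of the kind referenced above: a Type A contribution from repeated readings and assumed rectangular Type B contributions are combined in quadrature and expanded with a coverage factor k = 2. The component names and magnitudes are placeholders, not INMETRO/LARI values.

    ```python
    # Generic GUM-style budget; all component values are made-up placeholders.
    import math

    repeat_readings_dB = [93.98, 94.02, 94.01, 93.97, 94.03]   # Type A source
    n = len(repeat_readings_dB)
    mean = sum(repeat_readings_dB) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in repeat_readings_dB) / (n - 1))
    u_typeA = s / math.sqrt(n)                                  # std. uncertainty of the mean

    u_typeB = [
        0.15 / math.sqrt(3),   # reference calibrator spec, rectangular distribution (assumed)
        0.05 / math.sqrt(3),   # instrument resolution (assumed)
        0.10 / math.sqrt(3),   # coupler/ambient effects (assumed)
    ]

    u_combined = math.sqrt(u_typeA ** 2 + sum(u ** 2 for u in u_typeB))
    U_expanded = 2.0 * u_combined                               # coverage factor k = 2 (~95 %)
    print(f"SPL = {mean:.2f} dB +/- {U_expanded:.2f} dB (k=2)")
    ```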

  16. Uncertainty quantification for holographic interferographic images

    NASA Astrophysics Data System (ADS)

    Centauri, Laurie Ann

    Current comparison methods for experimental and simulated holographic interferometric images are qualitative in nature. Previous comparisons of holographic interferometric images with computational fluid dynamics (CFD) simulations for validation have been performed qualitatively, through visual comparison by a data analyst. By validating the experiments and CFD simulations in a quantifiable manner using a consistency analysis, the validation becomes a repeatable process that gives a consistency measure and a range of inputs over which the experiments and CFD simulations give consistent results. The uncertainty in four holographic interferometric experiments was quantified for use in a data collaboration with CFD simulations for the purpose of validation. The model uncertainty from image-processing, the measurement uncertainty from experimental data variation, and the scenario uncertainty from bias and parameter uncertainty were quantified. The scenario uncertainty was determined through comparison with an analytical solution at the helium inlet (height x = 0), including the uncertainty in the experimental parameters from historical weather data. The model uncertainty was calculated through a Box-Behnken sensitivity analysis on three image-processing code parameters. Measurement uncertainty was determined through a statistical analysis of the time-average and standard deviation of the interference fringe positions. An experimental design matrix of CFD simulations was performed by Weston Eldredge using a Box-Behnken design with helium velocity, temperature, and air co-flow velocity as parameters, to provide simulated measurements for the data collaboration dataset. Over 3,200 holographic interferometric images were processed through the course of this study. When each permutation of these images is taken into account through all the image-processing steps, the total number of images processed is over 13,000. Probability

  17. Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.

    2002-01-01

    Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
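
    A hedged sketch of plain Monte Carlo propagation through a toy design function with a discontinuity, standing in for the aircraft sizing code discussed above; the function, input distributions, and thresholds are invented for illustration.

    ```python
    # Toy Monte Carlo propagation; the weight model and distributions are assumptions.
    import numpy as np

    def toy_weight_model(wing_area, aspect_ratio):
        w = 1000.0 + 35.0 * wing_area + 120.0 * aspect_ratio
        # discontinuous penalty, e.g. a constraint or technology switch in the design space
        return w + np.where(wing_area > 52.0, 250.0, 0.0)

    rng = np.random.default_rng(1)
    n = 100_000
    wing_area = rng.normal(50.0, 1.0, n)       # uncertain design variable 1
    aspect_ratio = rng.normal(9.0, 0.2, n)     # uncertain design variable 2

    weight = toy_weight_model(wing_area, aspect_ratio)
    print("mean weight :", weight.mean())
    print("std  weight :", weight.std(ddof=1))
    print("P(weight > 3100):", (weight > 3100.0).mean())
    ```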

  18. Uncertainty Quantification in Climate Modeling

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Safta, C.; Berry, R.; Debusschere, B.; Najm, H.

    2011-12-01

    We address challenges that sensitivity analysis and uncertainty quantification methods face when dealing with complex computational models. In particular, climate models are computationally expensive and typically depend on a large number of input parameters. We consider the Community Land Model (CLM), which consists of a nested computational grid hierarchy designed to represent the spatial heterogeneity of the land surface. Each computational cell can be composed of multiple land types, and each land type can incorporate one or more sub-models describing the spatial and depth variability. Even for simulations at a regional scale, the computational cost of a single run is quite high and the number of parameters that control the model behavior is very large. Therefore, the parameter sensitivity analysis and uncertainty propagation face significant difficulties for climate models. This work employs several algorithmic avenues to address some of the challenges encountered by classical uncertainty quantification methodologies when dealing with expensive computational models, specifically focusing on the CLM as a primary application. First of all, since the available climate model predictions are extremely sparse due to the high computational cost of model runs, we adopt a Bayesian framework that effectively incorporates this lack-of-knowledge as a source of uncertainty, and produces robust predictions with quantified uncertainty even if the model runs are extremely sparse. In particular, we infer Polynomial Chaos spectral expansions that effectively encode the uncertain input-output relationship and allow efficient propagation of all sources of input uncertainties to outputs of interest. Secondly, the predictability analysis of climate models strongly suffers from the curse of dimensionality, i.e. the large number of input parameters. While single-parameter perturbation studies can be efficiently performed in a parallel fashion, the multivariate uncertainty analysis

  19. Analysis of the Uncertainty in the Computation of Receiver Functions and Improvement in the Estimation of Receiver, PP and SS functions

    NASA Astrophysics Data System (ADS)

    Huang, X.; Gurrola, H.

    2013-12-01

    methods. All of these methods performed well in terms of stdev, but we chose ARU for its high-quality data and low signal-to-noise ratios (the average S/N ratio for these data was 4%). With real data, we tend to assume that the method with the lowest stdev is the best, but stdev does not account for a systematic bias toward incorrect values. In this case the LSD once again had the lowest stdev in computed amplitudes of Pds phases, but it also had the smallest values. The FID, FWLD and MID tended to produce the largest amplitudes, while the LSD and TID tended toward the lower amplitudes. Considering that with the synthetics all of these methods showed a bias toward low amplitude, we believe that with real data the methods producing the largest amplitudes will be closest to the 'true values', and that this is a better measure of method quality than a small stdev in amplitude estimates. We will also present results from applying the TID and FID methods to the production of PP and SS precursor functions. When applied to these data, it is possible to moveout-correct the cross-correlation functions before extracting the signal from each PdP (or SdS) phase. As a result, a much cleaner Earth function is produced and the frequency content is significantly improved.

  20. Use and uncertainties of mutual information for computed tomography/ magnetic resonance (CT/MR) registration post permanent implant of the prostate.

    PubMed

    Roberson, Peter L; McLaughlin, P William; Narayana, Vrinda; Troyer, Sara; Hixson, George V; Kessler, Marc L

    2005-02-01

    Post-implant dosimetric analysis for permanent implant of the prostate benefits from the use of a computed tomography (CT) dataset for optimal identification of the radioactive source (seed) positions and a magnetic resonance (MR) dataset for optimal description of the target and normal tissue volumes. The CT/MR registration process should be fast and sufficiently accurate to yield a reliable dosimetric analysis. Since critical normal tissues typically reside in dose gradient regions, small shifts in the dose distribution could impact the prediction of complication or complication severity. Standard procedures include the use of the seed distribution as fiducial markers (seed match), a time consuming process that relies on the proper identification of signals due to the same seed on both datasets. Mutual information (MI) is more efficient because it uses image data requiring minimal preparation effort. A comparison of MI registration and seed-match registration was performed for twelve patients. MI was applied to a volume limited to the prostate and surrounding structures, excluding most of the pelvic bone structures (margins around the prostate gland were approximately 2 cm right-left, approximately 1 cm anterior-posterior, and approximately 2 cm superior-inferior). Seeds were identified on a 2 mm slice CT dataset using an automatic seed identification procedure on reconstructed three-dimensional data. Seed positions on the 3 mm slice thickness T2 MR data set were identified using a point-and-click method on each image. Seed images were identified on more than one MR slice, and the results used to determine average seed coordinates for MR images and matched seed pairs between CT and MR images. On average, 42% (19%-64%) of the seeds (19-54 seeds) were identified and matched to their CT counterparts. A least-squares method applied to the CT and MR seed coordinates was used to produce the optimum seed-match registration. MI registration and seed match registration

  1. Interpolation Method Needed for Numerical Uncertainty

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. One method to approximate the errors in CFD is Richardson's extrapolation, which is based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or other uncertainty methods to approximate errors.
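
    For context, a minimal sketch of Richardson extrapolation from three systematically refined grids with a constant refinement ratio, including the observed order of accuracy and a grid convergence index (GCI) style error estimate; the sample solution values are assumed, not from the paper.

    ```python
    # Richardson extrapolation / GCI sketch; f1, f2, f3 are assumed sample values.
    import math

    f1, f2, f3 = 0.9712, 0.9654, 0.9463   # fine, medium, coarse grid solutions (assumed)
    r = 2.0                               # grid refinement ratio

    p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)   # observed order of accuracy
    f_exact = f1 + (f1 - f2) / (r ** p - 1.0)                  # Richardson-extrapolated value
    gci_fine = 1.25 * abs((f2 - f1) / f1) / (r ** p - 1.0)     # GCI with safety factor 1.25

    print(f"observed order p      = {p:.2f}")
    print(f"extrapolated solution = {f_exact:.4f}")
    print(f"fine-grid GCI         = {100 * gci_fine:.2f} %")
    ```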

  2. Tighter uncertainty and reverse uncertainty relations

    NASA Astrophysics Data System (ADS)

    Mondal, Debasis; Bagchi, Shrobona; Pati, Arun Kumar

    2017-05-01

    We prove a few state-dependent uncertainty relations for the product as well as the sum of variances of two incompatible observables. These uncertainty relations are shown to be tighter than the Robertson-Schrödinger uncertainty relation and other ones existing in the current literature. Also, we derive a state-dependent upper bound to the sum and the product of variances using the reverse Cauchy-Schwarz inequality and the Dunkl-Williams inequality. Our results suggest that not only can we not prepare quantum states for which two incompatible observables can have sharp values, but also we have both the lower and the upper limits on the variances of quantum mechanical observables at a fundamental level.
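
    As a quick numerical illustration (not taken from the paper), the sketch below checks the standard Robertson lower bound Var(A)·Var(B) >= |<[A,B]>|^2/4 for the Pauli observables X and Y in a random qubit state.

    ```python
    # Numerical check of Var(A)*Var(B) >= |<[A,B]>|^2 / 4 (illustration, not from the paper).
    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

    rng = np.random.default_rng(7)
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)                      # random pure qubit state

    def expval(op):
        return np.vdot(psi, op @ psi).real          # real for Hermitian operators

    def variance(op):
        return expval(op @ op) - expval(op) ** 2

    commutator = X @ Y - Y @ X
    lhs = variance(X) * variance(Y)
    rhs = abs(np.vdot(psi, commutator @ psi)) ** 2 / 4.0
    print(f"Var(X)*Var(Y) = {lhs:.4f} >= {rhs:.4f} (Robertson bound)")
    ```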

  3. Addressing biological uncertainties in engineering gene circuits.

    PubMed

    Zhang, Carolyn; Tsoi, Ryan; You, Lingchong

    2016-04-18

    Synthetic biology has grown tremendously over the past fifteen years. It represents a new strategy to develop biological understanding and holds great promise for diverse practical applications. Engineering of a gene circuit typically involves computational design of the circuit, selection of circuit components, and test and optimization of circuit functions. A fundamental challenge in this process is the predictable control of circuit function due to multiple layers of biological uncertainties. These uncertainties can arise from different sources. We categorize these uncertainties into incomplete quantification of parts, interactions between heterologous components and the host, or stochastic dynamics of chemical reactions and outline potential design strategies to minimize or exploit them.

  4. Uncertainty in hydrological signatures

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; McMillan, H. K.

    2015-09-01

    Information about rainfall-runoff processes is essential for hydrological analyses, modelling and water-management applications. A hydrological, or diagnostic, signature quantifies such information from observed data as an index value. Signatures are widely used, e.g. for catchment classification, model calibration and change detection. Uncertainties in the observed data - including measurement inaccuracy and representativeness as well as errors relating to data management - propagate to the signature values and reduce their information content. Subjective choices in the calculation method are a further source of uncertainty. We review the uncertainties relevant to different signatures based on rainfall and flow data. We propose a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrate it in two catchments for common signatures including rainfall-runoff thresholds, recession analysis and basic descriptive signatures of flow distribution and dynamics. Our intention is to contribute to awareness and knowledge of signature uncertainty, including typical sources, magnitude and methods for its assessment. We found that the uncertainties were often large (i.e. typical intervals of ±10-40 % relative uncertainty) and highly variable between signatures. There was greater uncertainty in signatures that use high-frequency responses, small data subsets, or subsets prone to measurement errors. There was lower uncertainty in signatures that use spatial or temporal averages. Some signatures were sensitive to particular uncertainty types such as rating-curve form. We found that signatures can be designed to be robust to some uncertainty sources. Signature uncertainties of the magnitudes we found have the potential to change the conclusions of hydrological and ecohydrological analyses, such as cross-catchment comparisons or inferences about dominant processes.
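
    A toy Monte Carlo sketch in the spirit of the method described above: multiplicative rainfall and rating-curve errors are sampled and propagated to a simple signature, the runoff ratio. The error magnitudes and the synthetic records are assumptions for demonstration only.

    ```python
    # Toy Monte Carlo propagation of data errors to a signature; all values assumed.
    import numpy as np

    rng = np.random.default_rng(42)
    days = 365
    rain = rng.gamma(shape=0.4, scale=8.0, size=days)          # synthetic daily rainfall (mm)
    flow = (0.45 * rain + rng.normal(0.0, 0.3, days)).clip(0)  # synthetic daily runoff (mm)

    n_mc = 5000
    runoff_ratio = np.empty(n_mc)
    for i in range(n_mc):
        rain_err = rng.normal(1.0, 0.05)          # ~5 % systematic rainfall error (assumed)
        flow_err = rng.normal(1.0, 0.10, days)    # ~10 % per-day rating-curve error (assumed)
        runoff_ratio[i] = (flow * flow_err).sum() / (rain * rain_err).sum()

    lo, med, hi = np.percentile(runoff_ratio, [5, 50, 95])
    print(f"runoff ratio = {med:.3f} (90 % interval {lo:.3f}-{hi:.3f})")
    ```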

  5. Quantifying the uncertainty in heritability.

    PubMed

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-05-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.

  6. MODEL VALIDATION AND UNCERTAINTY QUANTIFICATION.

    SciTech Connect

    Hemez, F.M.; Doebling, S.W.

    2000-10-01

    This session offers an open forum to discuss issues and directions of research in the areas of model updating, predictive quality of computer simulations, model validation and uncertainty quantification. Technical presentations review the state of the art in nonlinear dynamics and model validation for structural dynamics. A panel discussion addresses technology needs, future trends and challenges ahead, with an emphasis on soliciting participation from the audience. One of the goals is to show, through invited contributions, how other scientific communities are approaching and solving difficulties similar to those encountered in structural dynamics. The session also serves the purpose of presenting the ongoing organization of technical meetings sponsored by the U.S. Department of Energy and dedicated to health monitoring, damage prognosis, model validation and uncertainty quantification in engineering applications. The session is part of the SD-2000 Forum, a forum to identify research trends and funding opportunities and to discuss the future of structural dynamics.

  7. [Ethics, empiricism and uncertainty].

    PubMed

    Porz, R; Zimmermann, H; Exadaktylos, A K

    2011-01-01

    Accidents can lead to difficult boundary situations. Such situations often take place in the emergency units. The medical team thus often and inevitably faces professional uncertainty in their decision-making. It is essential to communicate these uncertainties within the medical team, instead of downplaying or overriding existential hurdles in decision-making. Acknowledging uncertainties might lead to alert and prudent decisions. Thus uncertainty can have ethical value in treatment or withdrawal of treatment. It does not need to be covered in evidence-based arguments, especially as some singular situations of individual tragedies cannot be grasped in terms of evidence-based medicine.

  8. Uncertainty in hydrological signatures

    NASA Astrophysics Data System (ADS)

    McMillan, Hilary; Westerberg, Ida

    2015-04-01

    Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing responses of different catchments to understand catchment organisation and similarity, and for many other modelling and water-management applications. Such information types derived as an index value from observed data are known as hydrological signatures, and can include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), the flow variability, flow duration curve, and runoff ratio. Because the hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitude and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures. The study was made for two data rich catchments, the 50 km2 Mahurangi catchment in New Zealand and the 135 km2 Brue catchment in the UK. For rainfall data the uncertainty sources included point measurement uncertainty, the number of gauges used in calculation of the catchment spatial average, and uncertainties relating to lack of quality control. For flow data the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve. The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty

  9. Impact of uncertainty on modeling and testing

    NASA Technical Reports Server (NTRS)

    Coleman, Hugh W.; Brown, Kendall K.

    1995-01-01

    A thorough understanding of the uncertainties associated with the modeling and testing of the Space Shuttle Main Engine (SSME) will greatly aid decisions concerning hardware performance and future development efforts. This report describes the determination of the uncertainties in the modeling and testing of the SSME test program at the Technology Test Bed facility at Marshall Space Flight Center. Section 2 presents a summary of the uncertainty analysis methodology used and discusses the specific applications to the TTB SSME test program. Section 3 discusses the application of the uncertainty analysis to the test program and the results obtained. Section 4 presents the results of the analysis of the SSME modeling effort from an uncertainty analysis point of view. The appendices at the end of the report contain a significant amount of information relevant to the analysis, including discussions of venturi flowmeter data reduction and uncertainty propagation, bias uncertainty documentation, technical papers published, the computer code generated to determine the venturi uncertainties, and the venturi data and results used in the analysis.

  10. Error models for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Josset, L.; Scheidt, C.; Lunati, I.

    2012-12-01

    In groundwater modeling, uncertainty on the permeability field leads to a stochastic description of the aquifer system, in which the quantities of interests (e.g., groundwater fluxes or contaminant concentrations) are considered as stochastic variables and described by their probability density functions (PDF) or by a finite number of quantiles. Uncertainty quantification is often evaluated using Monte-Carlo simulations, which employ a large number of realizations. As this leads to prohibitive computational costs, techniques have to be developed to keep the problem computationally tractable. The Distance-based Kernel Method (DKM) [1] limits the computational cost of the uncertainty quantification by reducing the stochastic space: first, the realizations are clustered based on the response of a proxy; then, the full model is solved only for a subset of realizations defined by the clustering and the quantiles are estimated from this limited number of realizations. Here, we present a slightly different strategy that employs an approximate model rather than a proxy: we use the Multiscale Finite Volume method (MsFV) [2,3] to compute an approximate solution for each realization, and to obtain a first assessment of the PDF. In this context, DKM is then used to identify a subset of realizations for which the exact model is solved and compared with the solution of the approximate model. This allows highlighting and correcting possible errors introduced by the approximate model, while keeping full statistical information on the ensemble of realizations. Here, we test several strategies to compute the model error, correct the approximate model and achieve an optimal PDF estimation. We present a case study in which we predict the breakthrough curve of an ideal tracer for an ensemble of realizations generated via Multiple Point Direct Sampling [4] with a training image obtained from a 2D section of the Herten permeability field [5]. [1] C. Scheidt and J. Caers, "Representing

  11. Uncertainty Quantification Techniques of SCALE/TSUNAMI

    SciTech Connect

    Rearden, Bradley T; Mueller, Don

    2011-01-01

    The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as k{sub eff}, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to select an
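
    A schematic example of the "sandwich rule" underlying this kind of propagation, u^2 = S C S^T, where S holds relative sensitivity coefficients and C a relative covariance matrix of the nuclear data; the numbers are toy values, not SCALE/TSUNAMI output.

    ```python
    # Sandwich rule u^2 = S C S^T with toy sensitivities and covariances (not TSUNAMI data).
    import numpy as np

    # relative sensitivities (dk/k per dSigma/Sigma) for three notional cross sections
    S = np.array([0.30, -0.12, 0.05])

    # relative covariance matrix of those cross sections (assumed values)
    C = np.array([[4.0e-4, 1.0e-4, 0.0],
                  [1.0e-4, 9.0e-4, 0.0],
                  [0.0,    0.0,    2.5e-3]])

    var_k = S @ C @ S          # relative variance of k_eff
    print(f"relative uncertainty in k_eff: {100 * np.sqrt(var_k):.3f} %")
    ```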

  12. Uncertainty in flood risk mapping

    NASA Astrophysics Data System (ADS)

    Gonçalves, Luisa M. S.; Fonte, Cidália C.; Gomes, Ricardo

    2014-05-01

    A flood refers to a sharp increase of water level or volume in rivers and seas caused by sudden rainstorms or melting ice due to natural factors. In this paper, the flooding of riverside urban areas caused by sudden rainstorms will be studied. In this context, flooding occurs when the water runs above the level of the minor river bed and enters the major river bed. The level of the major bed determines the magnitude and risk of the flooding. The prediction of the flooding extent is usually deterministic, and corresponds to the expected limit of the flooded area. However, there are many sources of uncertainty in the process of obtaining these limits, which influence the obtained flood maps used for watershed management or as instruments for territorial and emergency planning. In addition, small variations in the delineation of the flooded area can be translated into erroneous risk prediction. Therefore, maps that reflect the uncertainty associated with the flood modeling process have started to be developed, associating a degree of likelihood with the boundaries of the flooded areas. In this paper an approach is presented that enables the influence of the parameters uncertainty to be evaluated, dependent on the type of Land Cover Map (LCM) and Digital Elevation Model (DEM), on the estimated values of the peak flow and the delineation of flooded areas (different peak flows correspond to different flood areas). The approach requires modeling the DEM uncertainty and its propagation to the catchment delineation. The results obtained in this step enable a catchment with fuzzy geographical extent to be generated, where a degree of possibility of belonging to the basin is assigned to each elementary spatial unit. Since the fuzzy basin may be considered as a fuzzy set, the fuzzy area of the basin may be computed, generating a fuzzy number. The catchment peak flow is then evaluated using fuzzy arithmetic. With this methodology a fuzzy number is obtained for the peak flow

  13. Electoral Knowledge and Uncertainty.

    ERIC Educational Resources Information Center

    Blood, R. Warwick; And Others

    Research indicates that the media play a role in shaping the information that voters have about election options. Knowledge of those options has been related to actual vote, but has not been shown to be strongly related to uncertainty. Uncertainty, however, does seem to motivate voters to engage in communication activities, some of which may…

  14. Intolerance of Uncertainty

    PubMed Central

    Beier, Meghan L.

    2015-01-01

    Multiple sclerosis (MS) is a chronic and progressive neurologic condition that, by its nature, carries uncertainty as a hallmark characteristic. Although all patients face uncertainty, there is variability in how individuals cope with its presence. In other populations, the concept of “intolerance of uncertainty” has been conceptualized to explain this variability such that individuals who have difficulty tolerating the possibility of future occurrences may engage in thoughts or behaviors by which they attempt to exert control over that possibility or lessen the uncertainty but may, as a result, experience worse outcomes, particularly in terms of psychological well-being. This topical review introduces MS-focused researchers, clinicians, and patients to intolerance of uncertainty, integrates the concept with what is already understood about coping with MS, and suggests future steps for conceptual, assessment, and treatment-focused research that may benefit from integrating intolerance of uncertainty as a central feature. PMID:26300700

  15. Economic uncertainty and econophysics

    NASA Astrophysics Data System (ADS)

    Schinckus, Christophe

    2009-10-01

    The objective of this paper is to provide a methodological link between econophysics and economics. I will study a key notion of both fields: uncertainty and the ways of thinking about it developed by the two disciplines. After having presented the main economic theories of uncertainty (provided by Knight, Keynes and Hayek), I show how this notion is paradoxically excluded from the economic field. In economics, uncertainty is totally reduced by an a priori Gaussian framework, in contrast to econophysics, which does not use a priori models because it works directly on data. Uncertainty is then not shaped by a specific model, and is partially and temporally reduced as models improve. This way of thinking about uncertainty has echoes in the economic literature. By presenting econophysics as a Knightian method, and a complementary approach to a Hayekian framework, this paper shows that econophysics can be methodologically justified from an economic point of view.

  16. Physical Uncertainty Bounds (PUB)

    SciTech Connect

    Vaughan, Diane Elizabeth; Preston, Dean L.

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  17. Capturing and Displaying Uncertainty in the Common Tactical/Environmental Picture

    DTIC Science & Technology

    2016-06-07

    for modeling and computing the distribution of the uncertainty in Signal Excess (SE) prediction for multistatic active detection of submarines... resulting from uncertainty in environmental predictions, and (2) to develop methods for accounting for this uncertainty in a Likelihood Ratio Tracker (LRT)... APPROACH: We characterized and quantified the uncertainty in the environmental predictions for the components of the sonar equation for

  18. Fragility, uncertainty, and healthcare.

    PubMed

    Rogers, Wendy A; Walker, Mary J

    2016-02-01

    Medicine seeks to overcome one of the most fundamental fragilities of being human, the fragility of good health. No matter how robust our current state of health, we are inevitably susceptible to future illness and disease, while current disease serves to remind us of various frailties inherent in the human condition. This article examines the relationship between fragility and uncertainty with regard to health, and argues that there are reasons to accept rather than deny at least some forms of uncertainty. In situations of current ill health, both patients and doctors seek to manage this fragility through diagnoses that explain suffering and provide some certainty about prognosis as well as treatment. However, both diagnosis and prognosis are inevitably uncertain to some degree, leading to questions about how much uncertainty health professionals should disclose, and how to manage when diagnosis is elusive, leaving patients in uncertainty. We argue that patients can benefit when they are able to acknowledge, and appropriately accept, some uncertainty. Healthy people may seek to protect the fragility of their good health by undertaking preventative measures including various tests and screenings. However, these attempts to secure oneself against the onset of biological fragility can cause harm by creating rather than eliminating uncertainty. Finally, we argue that there are good reasons for accepting the fragility of health, along with the associated uncertainties.

  19. Quantum preparation uncertainty and lack of information

    NASA Astrophysics Data System (ADS)

    Rozpędek, Filip; Kaniewski, Jędrzej; Coles, Patrick J.; Wehner, Stephanie

    2017-02-01

    The quantum uncertainty principle famously predicts that there exist measurements that are inherently incompatible, in the sense that their outcomes cannot be predicted simultaneously. In contrast, no such uncertainty exists in the classical domain, where all uncertainty results from ignorance about the exact state of the physical system. Here, we critically examine the concept of preparation uncertainty and ask whether similarly in the quantum regime, some of the uncertainty that we observe can actually also be understood as a lack of information (LOI), albeit a lack of quantum information. We answer this question affirmatively by showing that for the well known measurements employed in BB84 quantum key distribution (Bennett and Brassard 1984 Int. Conf. on Computer System and Signal Processing), the amount of uncertainty can indeed be related to the amount of available information about additional registers determining the choice of the measurement. We proceed to show that also for other measurements the amount of uncertainty is in part connected to a LOI. Finally, we discuss the conceptual implications of our observation to the security of cryptographic protocols that make use of BB84 states.

  20. Information-theoretic approach to uncertainty importance

    SciTech Connect

    Park, C.K.; Bari, R.A.

    1985-01-01

    A method is presented for importance analysis in probabilistic risk assessments (PRA) for which the results of interest are characterized by full uncertainty distributions and not just point estimates. The method is based on information theory, in which entropy is a measure of the uncertainty of a probability density function. We define the relative uncertainty importance between two events as the ratio of the exponentials of their entropies. For the log-normal and log-uniform distributions the importance measure is composed of the median (central tendency) and the logarithm of the error factor (uncertainty). Thus, if accident sequences are ranked this way and the error factors are not all equal, a different rank order results than if the sequences were ranked by the central tendency measure alone. As an illustration, the relative importance of internal events and in-plant fires was computed on the basis of existing PRA results.
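
    A back-of-the-envelope sketch of the entropy-based importance measure described above, interpreting it as the ratio of the exponentials of the differential entropies of two log-normal distributions; the medians and error factors are invented.

    ```python
    # Entropy-based relative importance for two log-normals; medians/EFs are assumed values.
    import math

    def lognormal_params(median, error_factor):
        """mu, sigma of ln(x) from a median and a 95th-percentile error factor."""
        mu = math.log(median)
        sigma = math.log(error_factor) / 1.645      # EF defined at the 95th percentile
        return mu, sigma

    def lognormal_entropy(mu, sigma):
        """Differential entropy of a log-normal distribution."""
        return mu + 0.5 * math.log(2.0 * math.pi * math.e * sigma ** 2)

    h_fire     = lognormal_entropy(*lognormal_params(median=1e-5, error_factor=10.0))
    h_internal = lognormal_entropy(*lognormal_params(median=3e-5, error_factor=3.0))

    relative_importance = math.exp(h_fire) / math.exp(h_internal)
    print(f"relative uncertainty importance (fire vs internal): {relative_importance:.2f}")
    ```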

  1. Utilizing general information theories for uncertainty quantification

    SciTech Connect

    Booker, J. M.

    2002-01-01

    Uncertainties enter into a complex problem from many sources: variability, errors, and lack of knowledge. A fundamental question is how to characterize the various kinds of uncertainty and then combine them within a problem such as the verification and validation of a structural dynamics computer model, the reliability of a dynamic system, or a complex decision problem. Because uncertainties are of different types (e.g., random noise, numerical error, vagueness of classification), it is difficult to quantify all of them within the constructs of a single mathematical theory, such as probability theory. Because different kinds of uncertainty occur within a complex modeling problem, linkages between these mathematical theories are necessary. A brief overview of some of these theories and their constituents under the label of Generalized Information Theory (GIT) is presented, and a brief decision example illustrates the importance of linking at least two such theories.

  2. Mutually Exclusive Uncertainty Relations

    PubMed Central

    Xiao, Yunlong; Jing, Naihuan

    2016-01-01

    The uncertainty principle is one of the characteristic properties of quantum theory and is based on incompatibility. Apart from the incompatibility of quantum states, mutual exclusiveness is another remarkable phenomenon in the information-theoretic foundation of quantum theory. We investigate the role of mutually exclusive physical states in the recent work on stronger uncertainty relations for all incompatible observables by Maccone and Pati, and generalize the weighted uncertainty relation to the product form as well as to their multi-observable analogues. The new bounds capture both incompatibility and mutual exclusiveness, and are tighter than the existing bounds. PMID:27824161

  3. Mutually Exclusive Uncertainty Relations.

    PubMed

    Xiao, Yunlong; Jing, Naihuan

    2016-11-08

    The uncertainty principle is one of the characteristic properties of quantum theory and is based on incompatibility. Apart from the incompatibility of quantum states, mutual exclusiveness is another remarkable phenomenon in the information-theoretic foundation of quantum theory. We investigate the role of mutually exclusive physical states in the recent work on stronger uncertainty relations for all incompatible observables by Maccone and Pati, and generalize the weighted uncertainty relation to the product form as well as to their multi-observable analogues. The new bounds capture both incompatibility and mutual exclusiveness, and are tighter than the existing bounds.

  4. Mutually Exclusive Uncertainty Relations

    NASA Astrophysics Data System (ADS)

    Xiao, Yunlong; Jing, Naihuan

    2016-11-01

    The uncertainty principle is one of the characteristic properties of quantum theory and is based on incompatibility. Apart from the incompatibility of quantum states, mutual exclusiveness is another remarkable phenomenon in the information-theoretic foundation of quantum theory. We investigate the role of mutually exclusive physical states in the recent work on stronger uncertainty relations for all incompatible observables by Maccone and Pati, and generalize the weighted uncertainty relation to the product form as well as to their multi-observable analogues. The new bounds capture both incompatibility and mutual exclusiveness, and are tighter than the existing bounds.

  5. Optimal Universal Uncertainty Relations

    PubMed Central

    Li, Tao; Xiao, Yunlong; Ma, Teng; Fei, Shao-Ming; Jing, Naihuan; Li-Jost, Xianqing; Wang, Zhi-Xi

    2016-01-01

    We study universal uncertainty relations and present a method called joint probability distribution diagram to improve the majorization bounds constructed independently in [Phys. Rev. Lett. 111, 230401 (2013)] and [J. Phys. A. 46, 272002 (2013)]. The results give rise to state independent uncertainty relations satisfied by any nonnegative Schur-concave functions. On the other hand, a remarkable recent result of entropic uncertainty relation is the direct-sum majorization relation. In this paper, we illustrate our bounds by showing how they provide a complement to that in [Phys. Rev. A. 89, 052115 (2014)]. PMID:27775010

  6. A surrogate-based uncertainty quantification with quantifiable errors

    SciTech Connect

    Bang, Y.; Abdel-Khalik, H. S.

    2012-07-01

    Surrogate models are often employed to reduce the computational cost of uncertainty quantification, where one is interested in propagating input parameter uncertainties through a complex engineering model to estimate response uncertainties. An improved surrogate construction approach is introduced here which places a premium on reducing the associated computational cost. Unlike existing methods, where the surrogate is constructed first and then employed to propagate uncertainties, the new approach combines both sensitivity and uncertainty information to achieve a further reduction in the computational cost. Mathematically, the reduction is described by a range-finding algorithm that identifies a subspace in the parameter space such that parameter uncertainties orthogonal to the subspace contribute a negligible amount to the propagated uncertainties. Moreover, the error resulting from the reduction can be upper-bounded. The new approach is demonstrated using a realistic nuclear assembly model and compared to existing methods in terms of computational cost and accuracy of uncertainties. Although we believe the algorithm is general, it is applied here to linear-based surrogates and Gaussian parameter uncertainties. The generalization to nonlinear models will be detailed in a separate article. (authors)

  7. Predictive uncertainty in auditory sequence processing

    PubMed Central

    Hansen, Niels Chr.; Pearce, Marcus T.

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music. PMID:25295018
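
    As a simplified illustration of predictive uncertainty as Shannon entropy, the sketch below computes the entropy of the next-symbol distribution under a first-order Markov (bigram) model of a toy melody; the cited work uses a richer variable-order model, and the corpus here is invented.

    ```python
    # Entropy of the next-symbol distribution from a bigram model; the corpus is invented.
    import math
    from collections import Counter, defaultdict

    corpus = "CDEFGEDCDEFGGEDC"          # pretend melody encoded as pitch letters
    bigrams = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        bigrams[a][b] += 1

    def next_distribution(context):
        counts = bigrams[context]
        total = sum(counts.values())
        return {sym: c / total for sym, c in counts.items()}

    def entropy(dist):
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    for context in "CG":
        dist = next_distribution(context)
        print(f"after '{context}': {dist}  entropy = {entropy(dist):.2f} bits")
    ```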

  8. Assessment of SFR Wire Wrap Simulation Uncertainties

    SciTech Connect

    Delchini, Marc-Olivier G.; Popov, Emilian L.; Pointer, William David; Swiler, Laura P.

    2016-09-30

    Predictive modeling and simulation of nuclear reactor performance and fuel are challenging due to the large number of coupled physical phenomena that must be addressed. Models that will be used for design or operational decisions must be analyzed for uncertainty to ascertain impacts to safety or performance. Rigorous, structured uncertainty analyses are performed by characterizing the model’s input uncertainties and then propagating the uncertainties through the model to estimate output uncertainty. This project is part of the ongoing effort to assess modeling uncertainty in Nek5000 simulations of flow configurations relevant to the advanced reactor applications of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. Three geometries are under investigation in these preliminary assessments: a 3-D pipe, a 3-D 7-pin bundle, and a single pin from the Thermal-Hydraulic Out-of-Reactor Safety (THORS) facility. Initial efforts have focused on gaining an understanding of Nek5000 modeling options and integrating Nek5000 with Dakota. These tasks are being accomplished by demonstrating the use of Dakota to assess parametric uncertainties in a simple pipe flow problem. This problem is used to optimize performance of the uncertainty quantification strategy and to estimate computational requirements for assessments of complex geometries. A sensitivity analysis with respect to three turbulence models was conducted for turbulent flow in the single wire-wrapped pin (THORS) geometry. Section 2 briefly describes the software tools used in this study and provides appropriate references. Section 3 presents the coupling interface between Dakota and a computational fluid dynamics (CFD) code (Nek5000 or STARCCM+), with details on the workflow, the scripts used for setting up the run, and the scripts used for post-processing the output files. In Section 4, the meshing methods used to generate the THORS and 7-pin bundle meshes are explained. Sections 5, 6 and 7 present numerical results

  9. Communicating scientific uncertainty

    PubMed Central

    Fischhoff, Baruch; Davis, Alex L.

    2014-01-01

    All science has uncertainty. Unless that uncertainty is communicated effectively, decision makers may put too much or too little faith in it. The information that needs to be communicated depends on the decisions that people face. Are they (i) looking for a signal (e.g., whether to evacuate before a hurricane), (ii) choosing among fixed options (e.g., which medical treatment is best), or (iii) learning to create options (e.g., how to regulate nanotechnology)? We examine these three classes of decisions in terms of how to characterize, assess, and convey the uncertainties relevant to each. We then offer a protocol for summarizing the many possible sources of uncertainty in standard terms, designed to impose a minimal burden on scientists, while gradually educating those whose decisions depend on their work. Its goals are better decisions, better science, and better support for science. PMID:25225390

  10. Communicating scientific uncertainty.

    PubMed

    Fischhoff, Baruch; Davis, Alex L

    2014-09-16

    All science has uncertainty. Unless that uncertainty is communicated effectively, decision makers may put too much or too little faith in it. The information that needs to be communicated depends on the decisions that people face. Are they (i) looking for a signal (e.g., whether to evacuate before a hurricane), (ii) choosing among fixed options (e.g., which medical treatment is best), or (iii) learning to create options (e.g., how to regulate nanotechnology)? We examine these three classes of decisions in terms of how to characterize, assess, and convey the uncertainties relevant to each. We then offer a protocol for summarizing the many possible sources of uncertainty in standard terms, designed to impose a minimal burden on scientists, while gradually educating those whose decisions depend on their work. Its goals are better decisions, better science, and better support for science.

  11. Uncertainty in chemistry.

    PubMed

    Menger, Fredric M

    2010-09-01

    It might come as a disappointment to some chemists, but just as there are uncertainties in physics and mathematics, there are some chemistry questions we may never know the answer to either, suggests Fredric M. Menger.

  12. Quantum computing and probability.

    PubMed

    Ferry, David K

    2009-11-25

    Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.

  13. Uncertainty in QSAR predictions.

    PubMed

    Sahlin, Ullrika

    2013-03-01

    It is relevant to consider uncertainty in individual predictions when quantitative structure-activity (or property) relationships (QSARs) are used to support decisions of high societal concern. Successful communication of uncertainty in the integration of QSARs in chemical safety assessment under the EU Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) system can be facilitated by a common understanding of how to define, characterise, assess and evaluate uncertainty in QSAR predictions. A QSAR prediction is, compared to experimental estimates, subject to added uncertainty that comes from the use of a model instead of empirically-based estimates. A framework is provided to aid the distinction between different types of uncertainty in a QSAR prediction: quantitative, i.e. for regressions related to the error in a prediction and characterised by a predictive distribution; and qualitative, by expressing our confidence in the model for predicting a particular compound based on a quantitative measure of predictive reliability. It is possible to assess a quantitative (i.e. probabilistic) predictive distribution, given the supervised learning algorithm, the underlying QSAR data, a probability model for uncertainty and a statistical principle for inference. The integration of QSARs into risk assessment may be facilitated by the inclusion of the assessment of predictive error and predictive reliability into the "unambiguous algorithm", as outlined in the second OECD principle.

  14. Uncertainty Analysis for a Jet Flap Airfoil

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Cruz, Josue

    2006-01-01

    An analysis of variance (ANOVA) study was performed to quantify the potential uncertainties of lift and pitching moment coefficient calculations from a computational fluid dynamics code, relative to an experiment, for a jet flap airfoil configuration. Uncertainties due to a number of factors including grid density, angle of attack and jet flap blowing coefficient were examined. The ANOVA software produced a numerical model of the input coefficient data, as functions of the selected factors, to a user-specified order (linear, 2-factor interaction, quadratic, or cubic). Residuals between the model and actual data were also produced at each of the input conditions, and uncertainty confidence intervals (in the form of Least Significant Differences or LSD) for experimental, computational, and combined experimental/computational data sets were computed. The LSD bars indicate the smallest resolvable differences in the functional values (lift or pitching moment coefficient) attributable solely to changes in the independent variables, given just the input data points from selected data sets. The software also provided a collection of diagnostics which evaluate the suitability of the input data set for use within the ANOVA process, and which examine the behavior of the resultant data, possibly suggesting transformations which should be applied to the data to reduce the LSD. The results illustrate some of the key features of, and results from, the uncertainty analysis studies, including the use of both numerical (continuous) and categorical (discrete) factors, the effects of the number and range of the input data points, and the effects of the number of factors considered simultaneously.
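
    A small, assumed-data illustration of the Least Significant Difference idea mentioned above: after a one-way ANOVA of a coefficient measured at several grid densities, LSD = t(0.975, df_err)·sqrt(2·MSE/n) gives the smallest group-mean difference resolvable above the residual noise.

    ```python
    # One-way ANOVA residual variance and LSD; the lift-coefficient values are toy numbers.
    import numpy as np
    from scipy import stats

    groups = {                               # CL at three grid densities (assumed data)
        "coarse": [0.912, 0.905, 0.918, 0.909],
        "medium": [0.931, 0.927, 0.935, 0.929],
        "fine":   [0.938, 0.934, 0.940, 0.937],
    }
    data = [np.array(v) for v in groups.values()]
    k, n = len(data), len(data[0])

    ss_within = sum(((g - g.mean()) ** 2).sum() for g in data)
    df_err = k * (n - 1)
    mse = ss_within / df_err                 # residual (within-group) mean square

    t_crit = stats.t.ppf(0.975, df_err)
    lsd = t_crit * np.sqrt(2.0 * mse / n)
    print(f"MSE = {mse:.2e}, LSD = {lsd:.4f}")
    for name, g in groups.items():
        print(f"{name:6s} mean CL = {np.mean(g):.4f}")
    ```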

  15. Calculational methodology and associated uncertainties: Sensitivity and uncertainty analysis of reactor performance parameters

    SciTech Connect

    Kujawski, E.; Weisbin, C.R.

    1982-01-01

    This chapter considers the calculational methodology and associated uncertainties both for the design of large LMFBR's and the analysis of critical assemblies (fast critical experiments) as performed by several groups within the US. Discusses cross-section processing; calculational methodology for the design problem; core physics computations; design-oriented approximations; benchmark analyses; and determination of calculational corrections and associated uncertainties for a critical assembly. Presents a detailed analysis of the sources of calculational uncertainties for the critical assembly ZPR-6/7 to illustrate the quantitative assessment of calculational correction factors and uncertainties. Examines calculational uncertainties that arise from many different sources including intrinsic limitations of computational methods; design-oriented approximations related to reactor modeling; computational capability and code availability; economic limitations; and the skill of the reactor analyst. Emphasizes that the actual design uncertainties in most of the parameters, with the possible exception of burnup, are likely to be less than might be indicated by the results presented in this chapter because reactor designers routinely apply bias factors (usually derived from critical experiments) to their calculated results.

  16. Remaining Useful Life Estimation in Prognosis: An Uncertainty Propagation Problem

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Goebel, Kai

    2013-01-01

    The estimation of remaining useful life is significant in the context of prognostics and health monitoring, and the prediction of remaining useful life is essential for online operations and decision-making. However, it is challenging to accurately predict the remaining useful life in practical aerospace applications due to the presence of various uncertainties that affect prognostic calculations, and in turn, render the remaining useful life prediction uncertain. It is challenging to identify and characterize the various sources of uncertainty in prognosis, understand how each of these sources of uncertainty affect the uncertainty in the remaining useful life prediction, and thereby compute the overall uncertainty in the remaining useful life prediction. In order to achieve these goals, this paper proposes that the task of estimating the remaining useful life must be approached as an uncertainty propagation problem. In this context, uncertainty propagation methods which are available in the literature are reviewed, and their applicability to prognostics and health monitoring are discussed.
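
    A toy sketch of treating remaining-useful-life estimation as an uncertainty propagation problem: sample the uncertain current state, degradation rate, and failure threshold, push each sample through a simple (assumed) linear degradation model, and summarize the resulting RUL distribution. All distributions and the model form are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 10_000
        state_now = rng.normal(0.40, 0.02, n)        # current damage level (uncertain estimate)
        rate = rng.lognormal(np.log(0.01), 0.2, n)   # damage growth per cycle (uncertain)
        threshold = rng.normal(1.00, 0.05, n)        # damage level at failure (uncertain)

        # Assumed linear degradation model: damage(t) = state_now + rate * t.
        rul = (threshold - state_now) / rate         # cycles remaining, one value per sample
        rul = rul[rul > 0]

        print(f"median RUL  : {np.median(rul):8.0f} cycles")
        print(f"90% interval: [{np.percentile(rul, 5):.0f}, {np.percentile(rul, 95):.0f}] cycles")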

  17. Uncertainty quantification of measured quantities for a HCCI engine: composition or temperatures

    SciTech Connect

    Petitpas, Guillaume; Whitesides, Russell

    2016-12-15

    UQHCCI_1 computes the measurement uncertainties of an HCCI engine test bench, using the pressure trace and the estimated uncertainties of the measured quantities as inputs and propagating them through Bayesian inference and a mixing model.

  18. Quantifying uncertainty in Bayesian calibrated animal-to-human PBPK models with informative prior distributions

    EPA Science Inventory

    Understanding and quantifying the uncertainty of model parameters and predictions has gained more interest in recent years with the increased use of computational models in chemical risk assessment. Fully characterizing the uncertainty in risk metrics derived from linked quantita...

  19. Quantifying uncertainty in Bayesian calibrated animal-to-human PBPK models with informative prior distributions

    EPA Science Inventory

    Understanding and quantifying the uncertainty of model parameters and predictions has gained more interest in recent years with the increased use of computational models in chemical risk assessment. Fully characterizing the uncertainty in risk metrics derived from linked quantita...

  20. Network planning under uncertainties

    NASA Astrophysics Data System (ADS)

    Ho, Kwok Shing; Cheung, Kwok Wai

    2008-11-01

    One of the main focuses of network planning is the optimization of network resources required to build a network under a certain traffic demand projection. Traditionally, the inputs to this type of network planning problem are treated as deterministic. In reality, the varying traffic requirements and fluctuations in network resources can cause uncertainties in the decision models. The failure to include the uncertainties in the network design process can severely affect the feasibility and economics of the network. Therefore, it is essential to find a solution that can be insensitive to the uncertain conditions during the network planning process. As early as the 1960s, a network planning problem with varying traffic requirements over time had been studied. Up to now, this kind of network planning problem is still being actively researched, especially for VPN network design. Another kind of network planning problem under uncertainty that has been studied actively in the past decade addresses the fluctuations in network resources. One such hotly pursued research topic is survivable network planning. It considers the design of a network under uncertainties brought by the fluctuations in topology to meet the requirement that the network remains intact up to a certain number of faults occurring anywhere in the network. Recently, the authors proposed a new planning methodology called Generalized Survivable Network that tackles the network design problem under both varying traffic requirements and fluctuations of topology. Although all the above network planning problems handle various kinds of uncertainties, it is hard to find a generic framework under more general uncertainty conditions that allows a more systematic way to solve the problems. With a unified framework, the seemingly diverse models and algorithms can be intimately related and possibly more insights and improvements can be brought out for solving the problem. This motivates us to seek a

  1. Interpreting uncertainty terms.

    PubMed

    Holtgraves, Thomas

    2014-08-01

    Uncertainty terms (e.g., some, possible, good, etc.) are words that do not have a fixed referent and hence are relatively ambiguous. A model is proposed that specifies how, from the hearer's perspective, recognition of facework as a potential motive for the use of an uncertainty term results in a calibration of the intended meaning of that term. Four experiments are reported that examine the impact of face threat, and the variables that affect it (e.g., power), on the manner in which a variety of uncertainty terms (probability terms, quantifiers, frequency terms, etc.) are interpreted. Overall, the results demonstrate that increased face threat in a situation will result in a more negative interpretation of an utterance containing an uncertainty term. That the interpretation of so many different types of uncertainty terms is affected in the same way suggests the operation of a fundamental principle of language use, one with important implications for the communication of risk, subjective experience, and so on.

  2. Random Field Solutions Including Boundary Condition Uncertainty for the Steady-state Generalized Burgers Equation

    DTIC Science & Technology

    2001-10-01

    We specifically discuss the various sources of uncertainty. The need for advanced uncertainty modeling is illustrated by means of a computationally inexpensive 1-D Burgers equation model. The two types can be caused by either “errors of ignorance” or “errors of simplification”. Note that use of the word “error” means that these

  3. SU(2) uncertainty limits

    NASA Astrophysics Data System (ADS)

    Shabbir, Saroosh; Björk, Gunnar

    2016-05-01

    Although progress has been made recently in defining nontrivial uncertainty limits for the SU(2) group, a description of the intermediate states bound by these limits remains lacking. In this paper we enumerate possible uncertainty relations for the SU(2) group that involve all three observables and that are, moreover, invariant under SU(2) transformations. We demonstrate, however, that these relations, even taken as a group, do not provide sharp, saturable bounds. To find sharp bounds, we systematically calculate the variance of the SU(2) operators for all pure states belonging to the N = 2 and N = 3 polarization excitation manifolds (corresponding to spin 1 and spin 3/2). Lastly, and perhaps counter to expectation, we note that even pure states can reach the maximum uncertainty limit.

  4. Measurement uncertainty relations

    SciTech Connect

    Busch, Paul; Lahti, Pekka; Werner, Reinhard F.

    2014-04-15

    Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.

  5. The legacy of uncertainty

    NASA Technical Reports Server (NTRS)

    Brown, Laurie M.

    1993-01-01

    An historical account is given of the circumstances whereby the uncertainty relations were introduced into physics by Heisenberg. The criticisms of QED on measurement-theoretical grounds by Landau and Peierls are then discussed, as well as the response to them by Bohr and Rosenfeld. Finally, some examples are given of how the new freedom to advance radical proposals, in part the result of the revolution brought about by 'uncertainty,' was implemented in dealing with the new phenomena encountered in elementary particle physics in the 1930's.

  6. A Certain Uncertainty

    NASA Astrophysics Data System (ADS)

    Silverman, Mark P.

    2014-07-01

    1. Tools of the trade; 2. The 'fundamental problem' of a practical physicist; 3. Mother of all randomness I: the random disintegration of matter; 4. Mother of all randomness II: the random creation of light; 5. A certain uncertainty; 6. Doing the numbers: nuclear physics and the stock market; 7. On target: uncertainties of projectile flight; 8. The guesses of groups; 9. The random flow of energy I: power to the people; 10. The random flow of energy II: warning from the weather underground; Index.

  7. Uncertainty and Dimensional Calibrations.

    PubMed

    Doiron, Ted; Stoup, John

    1997-01-01

    The calculation of uncertainty for a measurement is an effort to set reasonable bounds for the measurement result according to standardized rules. Since every measurement produces only an estimate of the answer, the primary requisite of an uncertainty statement is to inform the reader of how sure the writer is that the answer is in a certain range. This report explains how we have implemented these rules for dimensional calibrations of nine different types of gages: gage blocks, gage wires, ring gages, gage balls, roundness standards, optical flats, indexing tables, angle blocks, and sieves.

  8. Uncertainty and Dimensional Calibrations

    PubMed Central

    Doiron, Ted; Stoup, John

    1997-01-01

    The calculation of uncertainty for a measurement is an effort to set reasonable bounds for the measurement result according to standardized rules. Since every measurement produces only an estimate of the answer, the primary requisite of an uncertainty statement is to inform the reader of how sure the writer is that the answer is in a certain range. This report explains how we have implemented these rules for dimensional calibrations of nine different types of gages: gage blocks, gage wires, ring gages, gage balls, roundness standards, optical flats, indexing tables, angle blocks, and sieves. PMID:27805114

  9. Weighted Uncertainty Relations

    PubMed Central

    Xiao, Yunlong; Jing, Naihuan; Li-Jost, Xianqing; Fei, Shao-Ming

    2016-01-01

    Recently, Maccone and Pati have given two stronger uncertainty relations based on the sum of variances and one of them is nontrivial when the quantum state is not an eigenstate of the sum of the observables. We derive a family of weighted uncertainty relations to provide an optimal lower bound for all situations and remove the restriction on the quantum state. Generalization to multi-observable cases is also given and an optimal lower bound for the weighted sum of the variances is obtained in general quantum situation. PMID:26984295

  10. A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report

    SciTech Connect

    Campos, E; Sisterson, Douglas

    2016-12-01

    The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams, concise expression of measurement uncertainty is quite challenging. The ARM Facility currently provides data and supporting metadata (information about the data or data quality) to its users through a number of sources. Because the continued success of the ARM Facility depends on the known quality of its measurements, the Facility relies on instrument mentors and the ARM Data Quality Office (DQO) to ensure, assess, and report measurement quality. Therefore, an easily accessible, well-articulated estimate of ARM measurement uncertainty is needed. Note that some of the instrument observations require mathematical algorithms (retrievals) to convert a measured engineering variable into a useful geophysical measurement. While those types of retrieval measurements are identified, this study does not address particular methods for retrieval uncertainty. As well, the ARM Facility also provides engineered data products, or value-added products (VAPs), based on multiple instrument measurements. This study does not include uncertainty estimates for those data products. We propose here that a total measurement uncertainty should be calculated as a function of the instrument uncertainty (calibration factors), the field uncertainty (environmental factors), and the retrieval uncertainty (algorithm factors). The study will not expand on methods for computing these uncertainties. Instead, it will focus on the practical identification, characterization, and inventory of the measurement uncertainties already available in the ARM community through the ARM instrument mentors and their ARM instrument handbooks. As a result, this study will address the first steps towards reporting ARM measurement uncertainty
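
    A minimal sketch of the decomposition proposed above, assuming the instrument, field, and retrieval components are independent standard uncertainties so that they combine in quadrature; the numerical values are placeholders, not ARM estimates.

        import math

        # Placeholder standard uncertainties for one hypothetical datastream.
        u_instrument = 0.5   # calibration factors
        u_field = 0.3        # environmental factors
        u_retrieval = 0.8    # algorithm factors (only for retrieved geophysical quantities)

        u_total = math.sqrt(u_instrument**2 + u_field**2 + u_retrieval**2)
        print(f"combined standard uncertainty {u_total:.2f}, expanded (k=2) {2.0 * u_total:.2f}")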

  11. Quantification and Propagation of Nuclear Data Uncertainties

    NASA Astrophysics Data System (ADS)

    Rising, Michael E.

    The use of several uncertainty quantification and propagation methodologies is investigated in the context of the prompt fission neutron spectrum (PFNS) uncertainties and its impact on critical reactor assemblies. First, the first-order, linear Kalman filter is used as a nuclear data evaluation and uncertainty quantification tool combining available PFNS experimental data and a modified version of the Los Alamos (LA) model. The experimental covariance matrices, not generally given in the EXFOR database, are computed using the GMA methodology used by the IAEA to establish more appropriate correlations within each experiment. Then, using systematics relating the LA model parameters across a suite of isotopes, the PFNS for both the uranium and plutonium actinides are evaluated leading to a new evaluation including cross-isotope correlations. Next, an alternative evaluation approach, the unified Monte Carlo (UMC) method, is studied for the evaluation of the PFNS for the n(0.5 MeV)+Pu-239 fission reaction and compared to the Kalman filter. The UMC approach to nuclear data evaluation is implemented in a variety of ways to test convergence toward the Kalman filter results and to determine the nonlinearities present in the LA model. Ultimately, the UMC approach is shown to be comparable to the Kalman filter for a realistic data evaluation of the PFNS and is capable of capturing the nonlinearities present in the LA model. Next, the impact that the PFNS uncertainties have on important critical assemblies is investigated. Using the PFNS covariance matrices in the ENDF/B-VII.1 nuclear data library, the uncertainties of the effective multiplication factor, leakage, and spectral indices of the Lady Godiva and Jezebel critical assemblies are quantified. Using principal component analysis on the PFNS covariance matrices results in needing only 2-3 principal components to retain the PFNS uncertainties. Then, using the polynomial chaos expansion (PCE) on the uncertain output
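
    The principal-component reduction step mentioned above can be sketched as follows: eigendecompose a covariance matrix, keep the few leading components that retain most of the variance, and sample perturbations from that reduced basis. The covariance matrix below is a smooth synthetic stand-in, not the ENDF/B-VII.1 PFNS covariance.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 50                                          # number of energy groups (assumed)
        x = np.linspace(0.0, 1.0, n)
        corr = np.exp(-((x[:, None] - x[None, :]) / 0.3) ** 2)   # smooth synthetic correlation
        sigma = 0.05 * (1.0 + x)                        # synthetic std. dev. per group
        cov = corr * np.outer(sigma, sigma)             # stand-in covariance matrix

        eigval, eigvec = np.linalg.eigh(cov)
        eigval, eigvec = eigval[::-1], eigvec[:, ::-1]  # sort descending
        frac = np.cumsum(eigval) / eigval.sum()
        k = int(np.searchsorted(frac, 0.99)) + 1        # components retaining 99% of the trace
        print(f"{k} principal components retain {frac[k - 1]:.1%} of the covariance trace")

        # Reduced-order sampling: perturbations built from only the k leading components.
        z = rng.normal(size=(k, 1000))
        samples = eigvec[:, :k] @ (np.sqrt(eigval[:k])[:, None] * z)
        print("sampled std (first 5 groups):", np.round(samples.std(axis=1)[:5], 4))
        print("target std  (first 5 groups):", np.round(np.sqrt(np.diag(cov))[:5], 4))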

  12. Uncertainty assessment of synthetic design hydrographs

    NASA Astrophysics Data System (ADS)

    Brunner, Manuela Irene; Sikorska, Anna E.; Viviroli, Daniel; Seibert, Jan; Favre, Anne-Catherine

    2017-04-01

    Uncertainty must be considered when design hydrographs need to be regionalized to catchments without runoff observations. The uncertainty coming from the choice of the regionalization method was quantified by computing regionalized design hydrographs using different methods. The uncertainty introduced by regionalization was considerable, especially for the magnitude of the event.

  13. Uncertainty in NIST Force Measurements.

    PubMed

    Bartel, Tom

    2005-01-01

    This paper focuses upon the uncertainty of force calibration measurements at the National Institute of Standards and Technology (NIST). The uncertainty of the realization of force for the national deadweight force standards at NIST is discussed, as well as the uncertainties associated with NIST's voltage-ratio measuring instruments and with the characteristics of transducers being calibrated. The combined uncertainty is related to the uncertainty of dissemination for force transfer standards sent to NIST for calibration.

  14. An uncertainty inventory demonstration - a primary step in uncertainty quantification

    SciTech Connect

    Langenbrunner, James R.; Booker, Jane M; Hemez, Francois M; Salazar, Issac F; Ross, Timothy J

    2009-01-01

    Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore, PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with words, 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.

  15. Uncertainties in repository modeling

    SciTech Connect

    Wilson, J.R.

    1996-12-31

    The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend the regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.

  16. Uncertainties and recommendations.

    PubMed

    Callaghan, Terry V; Björn, Lars Olof; Chernov, Yuri; Chapin, Terry; Christensen, Torben R; Huntley, Brian; Ims, Rolf A; Johansson, Margareta; Jolly, Dyanna; Jonasson, Sven; Matveyeva, Nadya; Panikov, Nicolai; Oechel, Walter; Shaver, Gus

    2004-11-01

    An assessment of the impacts of changes in climate and UV-B radiation on Arctic terrestrial ecosystems, made within the Arctic Climate Impacts Assessment (ACIA), highlighted the profound implications of projected warming in particular for future ecosystem services, biodiversity and feedbacks to climate. However, although our current understanding of ecological processes and changes driven by climate and UV-B is strong in some geographical areas and in some disciplines, it is weak in others. Even though recently the strength of our predictions has increased dramatically with increased research effort in the Arctic and the introduction of new technologies, our current understanding is still constrained by various uncertainties. The assessment is based on a range of approaches that each have uncertainties, and on data sets that are often far from complete. Uncertainties arise from methodologies and conceptual frameworks, from unpredictable surprises, from lack of validation of models, and from the use of particular scenarios, rather than predictions, of future greenhouse gas emissions and climates. Recommendations to reduce the uncertainties are wide-ranging and relate to all disciplines within the assessment. However, a repeated theme is the critical importance of achieving an adequate spatial and long-term coverage of experiments, observations and monitoring of environmental changes and their impacts throughout the sparsely populated and remote region that is the Arctic.

  17. Uncertainties in Transfer Impedance Calculations

    NASA Astrophysics Data System (ADS)

    Schippers, H.; Verpoorte, J.

    2016-05-01

    The shielding effectiveness of metal braids of cables is governed by the geometry and the materials of the braid. The shielding effectiveness can be characterised by the transfer impedance of the metal braid. Analytical models for the transfer impedance contain in general two components, one representing diffusion of electromagnetic energy through the metal braid, and a second part representing leakage of magnetic fields through the braid. Possible sources of uncertainties in the modelling are inaccurate input data (for instance, the exact size of the braid diameter or wire diameter are not known) and imperfections in the computational model. The aim of the present paper is to estimate effects of variations of input data on the calculated transfer impedance.

  18. Strategy under uncertainty.

    PubMed

    Courtney, H; Kirkland, J; Viguerie, P

    1997-01-01

    At the heart of the traditional approach to strategy lies the assumption that by applying a set of powerful analytic tools, executives can predict the future of any business accurately enough to allow them to choose a clear strategic direction. But what happens when the environment is so uncertain that no amount of analysis will allow us to predict the future? What makes for a good strategy in highly uncertain business environments? The authors, consultants at McKinsey & Company, argue that uncertainty requires a new way of thinking about strategy. All too often, they say, executives take a binary view: either they underestimate uncertainty to come up with the forecasts required by their companies' planning or capital-budgeting processes, or they overestimate it, abandon all analysis, and go with their gut instinct. The authors outline a new approach that begins by making a crucial distinction among four discrete levels of uncertainty that any company might face. They then explain how a set of generic strategies--shaping the market, adapting to it, or reserving the right to play at a later time--can be used in each of the four levels. And they illustrate how these strategies can be implemented through a combination of three basic types of actions: big bets, options, and no-regrets moves. The framework can help managers determine which analytic tools can inform decision making under uncertainty--and which cannot. At a broader level, it offers executives a discipline for thinking rigorously and systematically about uncertainty and its implications for strategy.

  19. Multi-scenario modelling of uncertainty in stochastic chemical systems

    SciTech Connect

    Evans, R. David; Ricardez-Sandoval, Luis A.

    2014-09-15

    Uncertainty analysis has not been well studied at the molecular scale, despite extensive knowledge of uncertainty in macroscale systems. The ability to predict the effect of uncertainty allows for robust control of small scale systems such as nanoreactors, surface reactions, and gene toggle switches. However, it is difficult to model uncertainty in such chemical systems as they are stochastic in nature, and require a large computational cost. To address this issue, a new model of uncertainty propagation in stochastic chemical systems, based on the Chemical Master Equation, is proposed in the present study. The uncertain solution is approximated by a composite state comprised of the averaged effect of samples from the uncertain parameter distributions. This model is then used to study the effect of uncertainty on an isomerization system and a two gene regulation network called a repressilator. The results of this model show that uncertainty in stochastic systems is dependent on both the uncertain distribution, and the system under investigation. -- Highlights: •A method to model uncertainty on stochastic systems was developed. •The method is based on the Chemical Master Equation. •Uncertainty in an isomerization reaction and a gene regulation network was modelled. •Effects were significant and dependent on the uncertain input and reaction system. •The model was computationally more efficient than Kinetic Monte Carlo.
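
    In the spirit of the composite-state idea described above, the sketch below samples an uncertain rate constant, runs a Gillespie stochastic simulation of a simple A <-> B isomerization for each sample, and averages over the samples to separate parametric uncertainty from intrinsic noise. Rates, distributions, and molecule counts are illustrative, not the paper's.

        import numpy as np

        rng = np.random.default_rng(4)

        def gillespie_isomerization(k_f, k_r, a0=100, b0=0, t_end=5.0):
            """Return the number of A molecules at t_end for A <-> B (Gillespie SSA)."""
            a, b, t = a0, b0, 0.0
            while True:
                prop = np.array([k_f * a, k_r * b])      # reaction propensities
                total = prop.sum()
                if total == 0.0:
                    return a
                t += rng.exponential(1.0 / total)        # time to the next reaction
                if t > t_end:
                    return a
                if rng.random() < prop[0] / total:       # A -> B fires
                    a, b = a - 1, b + 1
                else:                                    # B -> A fires
                    a, b = a + 1, b - 1

        # Assumed uncertainty on the forward rate: lognormal about a nominal value of 1.
        k_f_samples = rng.lognormal(np.log(1.0), 0.3, 40)
        means = [np.mean([gillespie_isomerization(kf, 0.5) for _ in range(10)])
                 for kf in k_f_samples]                  # intrinsic noise averaged per sample

        print(f"A(t_end): mean {np.mean(means):.1f}, parametric-uncertainty std {np.std(means):.1f}")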

  20. Uncertainty propagation within source term estimation

    NASA Astrophysics Data System (ADS)

    Rodriguez, Luna Marie

    While numerous approaches to the STE problem exist, each with its own strengths and weaknesses, this approach addresses STE in an operational environment where computational resources are limited and a timely solution is critical. VIRSA is an adjoint method, computationally efficient and fast, but like any gradient-descent minimization it can fall prey to local minima in the solution space. In the work presented here we incorporate new methods to address issues related to uncertainty and use what we know about that uncertainty to reduce the tendency to find local minima rather than the global minimum. We explore approaches to map the uncertainty in our observations and link it back to the background error covariance matrix utilized by the adjoint minimization.

  1. Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.

  2. Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.

  3. Technical note: Design flood under hydrological uncertainty

    NASA Astrophysics Data System (ADS)

    Botto, Anna; Ganora, Daniele; Claps, Pierluigi; Laio, Francesco

    2017-07-01

    Planning and verification of hydraulic infrastructures require a design estimate of hydrologic variables, usually provided by frequency analysis, and neglecting hydrologic uncertainty. However, when hydrologic uncertainty is accounted for, the design flood value for a specific return period is no longer a unique value, but is represented by a distribution of values. As a consequence, the design flood is no longer univocally defined, making the design process undetermined. The Uncertainty Compliant Design Flood Estimation (UNCODE) procedure is a novel approach that, starting from a range of possible design flood estimates obtained in uncertain conditions, converges to a single design value. This is obtained through a cost-benefit criterion with additional constraints that is numerically solved in a simulation framework. This paper contributes to promoting a practical use of the UNCODE procedure without resorting to numerical computation. A modified procedure is proposed by using a correction coefficient that modifies the standard (i.e., uncertainty-free) design value on the basis of sample length and return period only. The procedure is robust and parsimonious, as it does not require additional parameters with respect to the traditional uncertainty-free analysis. Simple equations to compute the correction term are provided for a number of probability distributions commonly used to represent the flood frequency curve. The UNCODE procedure, when coupled with this simple correction factor, provides a robust way to manage the hydrologic uncertainty and to go beyond the use of traditional safety factors. With all the other parameters being equal, an increase in the sample length reduces the correction factor, and thus the construction costs, while still keeping the same safety level.
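
    The workflow described above can be sketched as an uncertainty-free design quantile multiplied by a correction coefficient that depends on sample length and return period. The Gumbel fit below is standard; the functional form of the correction coefficient is a placeholder assumption, since the paper supplies its own distribution-specific equations.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        annual_max = rng.gumbel(300.0, 80.0, 40)          # synthetic annual maxima, m^3/s

        T = 100.0                                         # return period, years
        loc, scale = stats.gumbel_r.fit(annual_max)
        q_free = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc, scale)   # uncertainty-free design flood

        # Placeholder correction coefficient, growing with return period and shrinking
        # with record length n; the paper tabulates distribution-specific equations.
        n = annual_max.size
        c = 1.0 + 0.8 * np.sqrt(np.log(T)) / np.sqrt(n)
        q_corrected = c * q_free
        print(f"uncertainty-free Q100: {q_free:7.1f} m^3/s")
        print(f"corrected Q100:        {q_corrected:7.1f} m^3/s  (correction factor {c:.3f})")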

  4. Evaluation of Objective Uncertainty in the Visual System

    PubMed Central

    Barthelmé, Simon; Mamassian, Pascal

    2009-01-01

    The role of sensory systems is to provide an organism with information about its environment. Because sensory information is noisy and insufficient to uniquely determine the environment, natural perceptual systems have to cope with systematic uncertainty. The extent of that uncertainty is often crucial to the organism: for instance, in judging the potential threat in a stimulus. Inducing uncertainty by using visual noise, we had human observers perform a task where they could improve their performance by choosing the less uncertain among pairs of visual stimuli. Results show that observers had access to a reliable measure of visual uncertainty in their decision-making, showing that subjective uncertainty in this case is connected to objective uncertainty. Based on a Bayesian model of the task, we discuss plausible computational schemes for that ability. PMID:19750003

  5. A review of uncertainty propagation in orbital mechanics

    NASA Astrophysics Data System (ADS)

    Luo, Ya-zhong; Yang, Zhen

    2017-02-01

    Orbital uncertainty propagation plays an important role in space situational awareness related missions such as tracking and data association, conjunction assessment, sensor resource management and anomaly detection. Linear models and Monte Carlo simulation were primarily used to propagate uncertainties. However, due to the nonlinear nature of orbital dynamics, problems such as low precision and intensive computation have greatly hampered the application of these methods. Aiming at solving these problems, many nonlinear uncertainty propagators have been proposed in the past two decades. To motivate this research area and facilitate the development of orbital uncertainty propagation, this paper summarizes the existing linear and nonlinear uncertainty propagators and their associated applications in the field of orbital mechanics. Frameworks of methods for orbital uncertainty propagation, the advantages and drawbacks of different methods, as well as potential directions for future efforts are also discussed.

  6. Comment on "A procedure for the estimation of the numerical uncertainty of CFD calculations based on grid refinement studies" (L. Eça and M. Hoekstra, Journal of Computational Physics 262 (2014) 104-130)

    NASA Astrophysics Data System (ADS)

    Xing, Tao; Stern, Frederick

    2015-11-01

    Eça and Hoekstra [1] proposed a procedure for the estimation of the numerical uncertainty of CFD calculations based on the least squares root (LSR) method. We believe that the LSR method has potential value for providing an extended Richardson-extrapolation solution verification procedure for mixed monotonic and oscillatory or only oscillatory convergent solutions (based on the usual systematic grid-triplet convergence condition R). Current Richardson-extrapolation solution verification procedures [2-7] are restricted to monotonically convergent solutions (0 < R < 1). Procedures for oscillatory convergence simply either use an uncertainty estimate based on the average maximum-minus-minimum of the solutions [8,9] or arbitrarily large factors of safety (FS) [2]. However, in our opinion several issues preclude the usefulness of the presented LSR method; five criticisms follow. The solution verification literature needs technical discussion in order to put the LSR method in context. The LSR method has many options, making it very difficult to follow. Fig. 1 provides a block diagram, which summarizes the LSR procedure and options, including some with which we are in disagreement. Compared to the grid-triplet and three-step procedure followed by most solution verification methods (convergence condition followed by error and uncertainty estimates), the LSR method follows a four-grid (minimum) and four-step procedure (error estimate, data range parameter Δϕ, FS, and uncertainty estimate).
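
    For context, the standard grid-triplet, three-step procedure referred to above (convergence condition, error estimate, uncertainty estimate) can be sketched as follows; the three grid solutions, refinement ratio, and factor of safety are illustrative numbers, not values from either paper.

        import math

        f1, f2, f3 = 0.2413, 0.2435, 0.2489     # fine, medium, coarse grid solutions (assumed)
        r = 2.0                                 # grid refinement ratio (assumed)

        R = (f1 - f2) / (f2 - f3)               # convergence condition
        if 0.0 < R < 1.0:                       # monotonic convergence
            p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order of accuracy
            delta = (f1 - f2) / (r**p - 1.0)    # Richardson error estimate on the fine grid
            Fs = 1.25                           # factor of safety (illustrative value)
            U_num = Fs * abs(delta)
            print(f"R = {R:.3f}, p = {p:.2f}, error = {delta:.2e}, U_num = {U_num:.2e}")
        elif R < 0.0:
            # Oscillatory convergence: bound the uncertainty by half the solution spread.
            U_num = 0.5 * (max(f1, f2, f3) - min(f1, f2, f3))
            print(f"oscillatory (R = {R:.3f}); U_num from solution range = {U_num:.2e}")
        else:
            print(f"divergent (R = {R:.3f}); no numerical uncertainty estimate")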

  7. Estimating uncertainty of inference for validation

    SciTech Connect

    Booker, Jane M; Langenbrunner, James R; Hemez, Francois M; Ross, Timothy J

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Imbedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10{sup 13}-10{sup 14} neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  8. Optimal test selection for prediction uncertainty reduction

    DOE PAGES

    Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel

    2016-12-02

    Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.

  9. Optimal test selection for prediction uncertainty reduction

    SciTech Connect

    Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel

    2016-12-02

    Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.

  10. Uncertainty Quantification in High Throughput Screening ...

    EPA Pesticide Factsheets

    Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for
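
    The bootstrap-resampling idea described above can be sketched on a single synthetic concentration-response curve: fit a Hill model, resample the data points with replacement, refit, and read a percentile confidence interval for the potency parameter (AC50). The model form, data, and resample count are illustrative, not ToxCast pipeline settings.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(6)

        def hill(conc, top, ac50, n):
            """Hill concentration-response model (assumed functional form)."""
            return top * conc**n / (ac50**n + conc**n)

        conc = np.logspace(-2, 2, 9)                              # synthetic test concentrations
        resp = hill(conc, 80.0, 3.0, 1.2) + rng.normal(0.0, 6.0, conc.size)

        p0 = (100.0, 1.0, 1.0)
        bounds = ([0.0, 1e-3, 0.3], [150.0, 1e3, 8.0])
        popt, _ = curve_fit(hill, conc, resp, p0=p0, bounds=bounds)

        boot_ac50 = []
        for _ in range(500):
            idx = rng.integers(0, conc.size, conc.size)           # resample points with replacement
            try:
                p_b, _ = curve_fit(hill, conc[idx], resp[idx], p0=p0, bounds=bounds)
                boot_ac50.append(p_b[1])
            except RuntimeError:
                continue                                          # skip refits that fail to converge

        lo, hi = np.percentile(boot_ac50, [2.5, 97.5])
        print(f"AC50 fit {popt[1]:.2f}, bootstrap 95% CI [{lo:.2f}, {hi:.2f}]")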

  11. Uncertainty Analysis of Air Radiation for Lunar Return Shock Layers

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Johnston, Christopher O.

    2008-01-01

    By leveraging a new uncertainty markup technique, two risk analysis methods are used to compute the uncertainty of lunar-return shock layer radiation predicted by the High temperature Aerothermodynamic Radiation Algorithm (HARA). The effects of epistemic uncertainty, or uncertainty due to a lack of knowledge, is considered for the following modeling parameters: atomic line oscillator strengths, atomic line Stark broadening widths, atomic photoionization cross sections, negative ion photodetachment cross sections, molecular bands oscillator strengths, and electron impact excitation rates. First, a simplified shock layer problem consisting of two constant-property equilibrium layers is considered. The results of this simplified problem show that the atomic nitrogen oscillator strengths and Stark broadening widths in both the vacuum ultraviolet and infrared spectral regions, along with the negative ion continuum, are the dominant uncertainty contributors. Next, three variable property stagnation-line shock layer cases are analyzed: a typical lunar return case and two Fire II cases. For the near-equilibrium lunar return and Fire 1643-second cases, the resulting uncertainties are very similar to the simplified case. Conversely, the relatively nonequilibrium 1636-second case shows significantly larger influence from electron impact excitation rates of both atoms and molecules. For all cases, the total uncertainty in radiative heat flux to the wall due to epistemic uncertainty in modeling parameters is 30% as opposed to the erroneously-small uncertainty levels (plus or minus 6%) found when treating model parameter uncertainties as aleatory (due to chance) instead of epistemic (due to lack of knowledge).
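
    A toy sketch of the epistemic-versus-aleatory distinction drawn above: the same parameter ranges typically produce a wider output band when treated as epistemic intervals (bounding over samples) than when treated as aleatory scatter (a plus-or-minus two-sigma band). The stand-in model and parameter ranges are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(7)

        def flux(p0, p1, p2):
            """Stand-in nonlinear model for a radiative quantity; not HARA."""
            return 100.0 * p0**1.5 * np.exp(0.3 * p1) / (1.0 + 0.5 * p2)

        n = 200_000
        p = rng.uniform(0.7, 1.3, size=(n, 3))          # each parameter known only to +/-30%
        q = flux(p[:, 0], p[:, 1], p[:, 2])
        nominal = flux(1.0, 1.0, 1.0)

        epistemic = (q.min(), q.max())                  # interval treatment: bound the output
        aleatory = (q.mean() - 2 * q.std(), q.mean() + 2 * q.std())   # +/- 2 sigma treatment

        print(f"nominal {nominal:.1f}")
        print(f"epistemic interval : {epistemic[0]/nominal - 1:+.1%} / {epistemic[1]/nominal - 1:+.1%}")
        print(f"aleatory +/-2 sigma: {aleatory[0]/nominal - 1:+.1%} / {aleatory[1]/nominal - 1:+.1%}")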

  12. Asymmetric Uncertainty Expression for High Gradient Aerodynamics

    NASA Technical Reports Server (NTRS)

    Pinier, Jeremy T

    2012-01-01

    When the physics of the flow around an aircraft changes very abruptly either in time or space (e.g., flow separation/reattachment, boundary layer transition, unsteadiness, shocks, etc.), the measurements that are performed in a simulated environment like a wind tunnel test or a computational simulation will most likely incorrectly predict the exact location of where (or when) the change in physics happens. There are many reasons for this, including the error introduced by simulating a real system at a smaller scale and at non-ideal conditions, or the error due to turbulence models in a computational simulation. The uncertainty analysis principles that have been developed and are being implemented today do not fully account for uncertainty in the knowledge of the location of abrupt physics changes or sharp gradients, leading to a potentially underestimated uncertainty in those areas. To address this problem, this paper proposes a new asymmetric aerodynamic uncertainty expression containing an extra term to account for a phase uncertainty, the magnitude of which is emphasized in the high-gradient aerodynamic regions. Additionally, based on previous work, a method for dispersing aerodynamic data within asymmetric uncertainty bounds in a more realistic way has been developed for use within Monte Carlo-type analyses.
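
    One simple way to disperse data within asymmetric uncertainty bounds in a Monte Carlo analysis is a two-piece ("split") normal with different standard deviations below and above the nominal value, as sketched here; the bounds and the split-normal choice are illustrative assumptions, not the specific expression proposed in the paper.

        import numpy as np

        rng = np.random.default_rng(8)

        def split_normal(nominal, u_minus, u_plus, size):
            """Sample with standard deviation u_minus below the nominal and u_plus above it."""
            upper = rng.random(size) < u_plus / (u_plus + u_minus)   # pick a side by its weight
            mag = np.abs(rng.normal(0.0, 1.0, size))
            return nominal + np.where(upper, u_plus * mag, -u_minus * mag)

        cm_nominal = -0.012                       # pitching-moment coefficient (placeholder)
        samples = split_normal(cm_nominal, u_minus=0.002, u_plus=0.006, size=100_000)
        print(f"mean {samples.mean():+.4f}, 2.5/97.5 percentiles "
              f"{np.percentile(samples, 2.5):+.4f} / {np.percentile(samples, 97.5):+.4f}")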

  13. Radar stage uncertainty

    USGS Publications Warehouse

    Fulford, J.M.; Davies, W.J.

    2005-01-01

    The U.S. Geological Survey is investigating the performance of radars used for stage (or water-level) measurement. This paper presents a comparison of estimated uncertainties and data for radar water-level measurements with float, bubbler, and wire weight water-level measurements. The radar sensor was also temperature-tested in a laboratory. The uncertainty estimates indicate that radar measurements are more accurate than uncorrected pressure sensors at higher water stages, but are less accurate than pressure sensors at low stages. Field data at two sites indicate that radar sensors may have a small negative bias. Comparison of field radar measurements with wire weight measurements found that the radar tends to measure slightly lower values as stage increases. Copyright ASCE 2005.

  14. Uncertainties in transpiration estimates.

    PubMed

    Coenders-Gerrits, A M J; van der Ent, R J; Bogaard, T A; Wang-Erlandsson, L; Hrachowitz, M; Savenije, H H G

    2014-02-13

    Arising from S. Jasechko et al. Nature 496, 347-350 (2013); doi:10.1038/nature11983. How best to assess the respective importance of plant transpiration over evaporation from open waters, soils and short-term storage such as tree canopies and understories (interception) has long been debated. On the basis of data from lake catchments, Jasechko et al. conclude that transpiration accounts for 80-90% of total land evaporation globally (Fig. 1a). However, another choice of input data, together with more conservative accounting of the related uncertainties, reduces and widens the transpiration ratio estimation to 35-80%. Hence, climate models do not necessarily conflict with observations, but more measurements on the catchment scale are needed to reduce the uncertainty range. There is a Reply to this Brief Communications Arising by Jasechko, S. et al. Nature 506, http://dx.doi.org/10.1038/nature12926 (2014).

  15. Assessment of Radiative Heating Uncertainty for Hyperbolic Earth Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Mazaheri, Alireza; Gnoffo, Peter A.; Kleb, W. L.; Sutton, Kenneth; Prabhu, Dinesh K.; Brandis, Aaron M.; Bose, Deepak

    2011-01-01

    This paper investigates the shock-layer radiative heating uncertainty for hyperbolic Earth entry, with the main focus being a Mars return. In Part I of this work, a baseline simulation approach involving the LAURA Navier-Stokes code with coupled ablation and radiation is presented, with the HARA radiation code being used for the radiation predictions. Flight cases representative of peak-heating Mars or asteroid return are defined, and the strong influence of coupled ablation and radiation on their aerothermodynamic environments is shown. Structural uncertainties inherent in the baseline simulations are identified, with turbulence modeling, precursor absorption, grid convergence, and radiation transport uncertainties combining for a +34% and -24% structural uncertainty on the radiative heating. A parametric uncertainty analysis, which assumes interval uncertainties, is presented. This analysis accounts for uncertainties in the radiation models as well as heat of formation uncertainties in the flow field model. Discussions and references are provided to support the uncertainty range chosen for each parameter. A parametric uncertainty of +47.3% and -28.3% is computed for the stagnation-point radiative heating for the 15 km/s Mars-return case. A breakdown of the largest individual uncertainty contributors is presented, which includes C3 Swings cross-section, photoionization edge shift, and Opacity Project atomic lines. Combining the structural and parametric uncertainty components results in a total uncertainty of +81.3% and -52.3% for the Mars-return case. In Part II, the computational technique and uncertainty analysis presented in Part I are applied to 1960s era shock-tube and constricted-arc experimental cases. It is shown that experiments contain shock layer temperatures and radiative flux values relevant to the Mars-return cases of present interest. Comparisons between the predictions and measurements, accounting for the uncertainty in both, are made for a range

  16. Variants of Uncertainty

    DTIC Science & Technology

    1981-05-15

    Daniel Kahneman, University of British Columbia; Amos Tversky, Stanford University. May 15, 1981.

  17. Aggregating and Communicating Uncertainty.

    DTIC Science & Technology

    1980-04-01

    ...means for identifying and communicating uncertainty. (Only fragments of the report's Appendix A bibliography survive, including Ajzen, Icek, "Intuitive Theories of Events and the Effects of Base-Rate Information on Prediction.")

  18. Uncertainties in climate stabilization

    SciTech Connect

    Wigley, T. M.; Clarke, Leon E.; Edmonds, James A.; Jacoby, H. D.; Paltsev, S.; Pitcher, Hugh M.; Reilly, J. M.; Richels, Richard G.; Sarofim, M. C.; Smith, Steven J.

    2009-11-01

    We explore the atmospheric composition, temperature and sea level implications of new reference and cost-optimized stabilization emissions scenarios produced using three different Integrated Assessment (IA) models for U.S. Climate Change Science Program (CCSP) Synthesis and Assessment Product 2.1a. We also consider an extension of one of these sets of scenarios out to 2300. Stabilization is defined in terms of radiative forcing targets for the sum of gases potentially controlled under the Kyoto Protocol. For the most stringent stabilization case (“Level 1” with CO2 concentration stabilizing at about 450 ppm), peak CO2 emissions occur close to today, implying a need for immediate CO2 emissions abatement if we wish to stabilize at this level. In the extended reference case, CO2 stabilizes at 1000 ppm in 2200 – but even to achieve this target requires large and rapid CO2 emissions reductions over the 22nd century. Future temperature changes for the Level 1 stabilization case show considerable uncertainty even when a common set of climate model parameters is used (a result of different assumptions for non-Kyoto gases). Uncertainties are about a factor of three when climate sensitivity uncertainties are accounted for. We estimate the probability that warming from pre-industrial times will be less than 2 °C to be about 50%. For one of the IA models, warming in the Level 1 case is greater out to 2050 than in the reference case, due to the effect of decreasing SO2 emissions that occur as a side effect of the policy-driven reduction in CO2 emissions. Sea level rise uncertainties for the Level 1 case are very large, with increases ranging from 12 to 100 cm over 2000 to 2300.

  19. Uncertainty quantified trait predictions

    NASA Astrophysics Data System (ADS)

    Fazayeli, Farideh; Kattge, Jens; Banerjee, Arindam; Schrodt, Franziska; Reich, Peter

    2015-04-01

    Functional traits of organisms are key to understanding and predicting biodiversity and ecological change, which motivates continuous collection of traits and their integration into global databases. Such composite trait matrices are inherently sparse, severely limiting their usefulness for further analyses. On the other hand, traits are characterized by the phylogenetic trait signal, trait-trait correlations and environmental constraints, all of which provide information that could be used to statistically fill gaps. We propose the application of probabilistic models which, for the first time, utilize all three characteristics to fill gaps in trait databases and predict trait values at larger spatial scales. For this purpose we introduce BHPMF, a hierarchical Bayesian extension of Probabilistic Matrix Factorization (PMF). PMF is a machine learning technique which exploits the correlation structure of sparse matrices to impute missing entries. BHPMF additionally utilizes the taxonomic hierarchy for trait prediction. Implemented in the context of a Gibbs Sampler MCMC approach BHPMF provides uncertainty estimates for each trait prediction. We present comprehensive experimental results on the problem of plant trait prediction using the largest database of plant traits, where BHPMF shows strong empirical performance in uncertainty quantified trait prediction, outperforming the state-of-the-art based on point estimates. Further, we show that BHPMF is more accurate when it is confident, whereas the error is high when the uncertainty is high.
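
    The core gap-filling step can be sketched with plain low-rank matrix factorization fit only on observed entries (omitting BHPMF's Bayesian hierarchy, taxonomic information, and uncertainty estimates). Matrix size, rank, and regularization below are illustrative.

        import numpy as np

        rng = np.random.default_rng(9)
        n_species, n_traits, rank = 200, 8, 3           # sizes and rank are illustrative

        # Synthetic low-rank "true" trait matrix; hide 60% of the entries as gaps.
        T_true = rng.normal(size=(n_species, rank)) @ rng.normal(size=(rank, n_traits))
        T = T_true + 0.1 * rng.normal(size=T_true.shape)
        mask = rng.random(T.shape) < 0.4                # True where a trait was measured

        U = np.zeros((n_species, rank))                 # species factors (to be learned)
        V = rng.normal(size=(n_traits, rank))           # trait factors (to be learned)
        lam = 0.1                                       # ridge regularization

        for _ in range(30):                             # alternating least squares on observed entries
            for i in range(n_species):
                obs = mask[i]
                Vi = V[obs]
                U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(rank), Vi.T @ T[i, obs])
            for j in range(n_traits):
                obs = mask[:, j]
                Uj = U[obs]
                V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(rank), Uj.T @ T[obs, j])

        pred = U @ V.T                                  # gap-filled trait matrix
        rmse = np.sqrt(np.mean((pred[~mask] - T[~mask]) ** 2))
        print(f"RMSE on the hidden (gap) entries: {rmse:.3f}")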

  20. ENHANCED UNCERTAINTY ANALYSIS FOR SRS COMPOSITE ANALYSIS

    SciTech Connect

    Smith, F.; Phifer, M.

    2011-06-30

    The Composite Analysis (CA) performed for the Savannah River Site (SRS) in 2009 (SRS CA 2009) included a simplified uncertainty analysis. The uncertainty analysis in the CA (Smith et al. 2009b) was limited to considering at most five sources in a separate uncertainty calculation performed for each POA. To perform the uncertainty calculations in a reasonable amount of time, the analysis was limited to 400 realizations and 2,000 years of simulated transport time, and the time steps used for the uncertainty analysis were increased from those used in the CA base case analysis. As part of the CA maintenance plan, the Savannah River National Laboratory (SRNL) committed to improving the CA uncertainty/sensitivity analysis. The previous uncertainty analysis was constrained by the standard GoldSim licensing, which limits the user to running at most four Monte Carlo uncertainty calculations (also called realizations) simultaneously. Some of the limitations on the number of realizations that could be practically run and the simulation time steps were removed by building a cluster of three HP ProLiant Windows servers with a total of 36 64-bit processors and by licensing the GoldSim DP-Plus distributed processing software. This allowed running as many as 35 realizations simultaneously (one processor is reserved as a master process that controls running the realizations). These enhancements to SRNL computing capabilities made it feasible to run the uncertainty analysis with 1,000 realizations, with the time steps employed in the base case CA calculations, with more sources, and with radionuclide transport simulated for 10,000 years. In addition, an importance screening analysis was performed to identify the class of stochastic variables that have the most significant impact on model uncertainty. This analysis ran the uncertainty model separately, testing the response to variations in the following five sets of model parameters: (a) K{sub d} values (72 parameters for the 36 CA elements in

  1. Transforming Binary Uncertainties for Robust Speech Recognition

    DTIC Science & Technology

    2006-08-01

    1-0117), an AFRL grant via Veridian and an NSF grant (IIS-0534707). We thank A. Acero and M. L. Seltzer for helpful suggestions. A preliminary...Deng, J. Droppo, and A. Acero, “Dynamic compensation of HMM variances using the feature enhancement uncertainty computed from a parametric model of...amplitude modulation,” IEEE Trans. on Neural Networks, vol. 15, pp. 1135–1150, 2004. [16] X. Huang, A. Acero, and H. Hon, Spoken Language Processing

  2. On estimation of uncertainties in analog measurements

    SciTech Connect

    Adibi, M.M.; Stovall, J.P.

    1990-11-01

    Computer control of power systems requires evaluation of uncertainties in analog measurements and their reduction to a level that allows satisfactory control. In this paper a range of measurements is obtained from a substation to span peak- and light-load conditions and to include bus voltages, phase angles and line flows. The redundancies in the measurements are then used to formulate several functions relating these measurements to their attendant errors. Minimization of these functions has yielded the required corrective coefficients.

  3. Pauli effects in uncertainty relations

    NASA Astrophysics Data System (ADS)

    Toranzo, I. V.; Sánchez-Moreno, P.; Esquivel, R. O.; Dehesa, J. S.

    2014-10-01

    In this Letter we analyze the effect of the spin dimensionality of a physical system in two mathematical formulations of the uncertainty principle: a generalized Heisenberg uncertainty relation valid for all antisymmetric N-fermion wavefunctions, and the Fisher-information-based uncertainty relation valid for all antisymmetric N-fermion wavefunctions of central potentials. The accuracy of these spin-modified uncertainty relations is examined for all atoms from Hydrogen to Lawrencium in a self-consistent framework.

  4. Dimensionality reduction for uncertainty quantification of nuclear engineering models.

    SciTech Connect

    Roderick, O.; Wang, Z.; Anitescu, M.

    2011-01-01

    The task of uncertainty quantification consists of relating the available information on uncertainties in the model setup to the resulting variation in the outputs of the model. Uncertainty quantification plays an important role in complex simulation models of nuclear engineering, where better understanding of uncertainty results in greater confidence in the model and in the improved safety and efficiency of engineering projects. In our previous work, we have shown that the effect of uncertainty can be approximated by polynomial regression with derivatives (PRD): a hybrid regression method that uses first-order derivatives of the model output as additional fitting conditions for a polynomial expansion. Numerical experiments have demonstrated the advantage of this approach over classical methods of uncertainty analysis: in precision, computational efficiency, or both. To obtain derivatives, we used automatic differentiation (AD) on the simulation code; hand-coded derivatives are acceptable for simpler models. We now present improvements on the method. We use a tuned version of the method of snapshots, a technique based on proper orthogonal decomposition (POD), to set up the reduced order representation of essential information on uncertainty in the model inputs. The automatically obtained sensitivity information is required to set up the method. Dimensionality reduction in combination with PRD allows analysis on a larger dimension of the uncertainty space (>100), at modest computational cost.
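
    As an illustration of the regression step described above, the sketch below fits a quadratic surrogate to a toy one-dimensional model using both output values and output derivatives as least-squares conditions, which is the core idea of polynomial regression with derivatives (PRD). The toy model, its hand-coded derivative and the polynomial degree are assumptions; the POD-based dimensionality reduction discussed in the abstract is not shown.

```python
# Sketch of polynomial regression with derivatives (PRD) in one dimension:
# output values and output derivatives at the sample points are stacked as
# least-squares conditions for a quadratic surrogate. The toy model, its
# hand-coded derivative and the polynomial degree are assumptions.
import numpy as np

def model(x):            # stand-in for an expensive simulation output
    return np.exp(0.3 * x) + 0.5 * x

def dmodel(x):           # its derivative (from AD or hand-coding in practice)
    return 0.3 * np.exp(0.3 * x) + 0.5

x = np.linspace(-1.0, 1.0, 5)
y, dy = model(x), dmodel(x)

# surrogate y ~ c0 + c1*x + c2*x^2 fitted to both values and derivatives
A_val = np.column_stack([np.ones_like(x), x, x**2])                    # value rows
A_der = np.column_stack([np.zeros_like(x), np.ones_like(x), 2 * x])    # derivative rows
A = np.vstack([A_val, A_der])
b = np.concatenate([y, dy])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

x_test = 0.37
print(coef @ np.array([1.0, x_test, x_test**2]), model(x_test))   # surrogate vs model
```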

  5. Uncertainty Quantification in Climate Modeling and Projection

    SciTech Connect

    Qian, Yun; Jackson, Charles; Giorgi, Filippo; Booth, Ben; Duan, Qingyun; Forest, Chris; Higdon, Dave; Hou, Z. Jason; Huerta, Gabriel

    2016-05-01

    The projection of future climate is one of the most complex problems undertaken by the scientific community. Although scientists have been striving to better understand the physical basis of the climate system and to improve climate models, the overall uncertainty in projections of future climate has not been significantly reduced (e.g., from the IPCC AR4 to AR5). With the rapid increase of complexity in Earth system models, reducing uncertainties in climate projections becomes extremely challenging. Since uncertainties always exist in climate models, interpreting the strengths and limitations of future climate projections is key to evaluating risks, and climate change information for use in Vulnerability, Impact, and Adaptation (VIA) studies should be provided with both well-characterized and well-quantified uncertainty. The workshop aimed at providing participants, many of them from developing countries, information on strategies to quantify the uncertainty in climate model projections and assess the reliability of climate change information for decision-making. The program included a mixture of lectures on fundamental concepts in Bayesian inference and sampling, applications, and hands-on computer laboratory exercises employing software packages for Bayesian inference, Markov Chain Monte Carlo methods, and global sensitivity analyses. The lectures covered a range of scientific issues underlying the evaluation of uncertainties in climate projections, such as the effects of uncertain initial and boundary conditions, uncertain physics, and limitations of observational records. Progress in quantitatively estimating uncertainties in hydrologic, land surface, and atmospheric models at both regional and global scales was also reviewed. The application of Uncertainty Quantification (UQ) concepts to coupled climate system models is still in its infancy. The Coupled Model Intercomparison Project (CMIP) multi-model ensemble currently represents the primary data for

  6. Nonlinear dynamics and numerical uncertainties in CFD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1996-01-01

    The application of nonlinear dynamics to improve the understanding of numerical uncertainties in computational fluid dynamics (CFD) is reviewed. Elementary examples in the use of dynamics to explain the nonlinear phenomena and spurious behavior that occur in numerics are given. The role of dynamics in the understanding of the long-time behavior of numerical integrations and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in CFD is explained. The study is complemented with spurious behavior observed in CFD computations.

  7. Strain Gauge Balance Uncertainty Analysis at NASA Langley: A Technical Review

    NASA Technical Reports Server (NTRS)

    Tripp, John S.

    1999-01-01

    This paper describes a method to determine the uncertainties of measured forces and moments from multi-component force balances used in wind tunnel tests. A multivariate regression technique is first employed to estimate the uncertainties of the six balance sensitivities and 156 interaction coefficients derived from established balance calibration procedures. These uncertainties are then employed to calculate the uncertainties of force-moment values computed from observed balance output readings obtained during tests. Confidence and prediction intervals are obtained for each computed force and moment as functions of the actual measurands. Techniques are discussed for separate estimation of balance bias and precision uncertainties.

  8. Uncertainty in prediction of disinfection performance.

    PubMed

    Neumann, Marc B; von Gunten, Urs; Gujer, Willi

    2007-06-01

    Predicting the disinfection performance of a full-scale reactor in drinking water treatment is associated with considerable uncertainty. In view of quantitative risk analysis, this study assesses the uncertainty involved in predicting inactivation of Cryptosporidium parvum oocysts for an ozone reactor treating lake water. A micromodel is suggested which quantifies inactivation by stochastic sampling from density distributions of ozone exposure and lethal ozone dose. The ozone exposure distribution is computed with a tank in series model that is derived from tracer data and measurements of flow, ozone concentration and ozone decay. The distribution of lethal ozone doses is computed with a delayed Chick-Watson model which was calibrated by Sivaganesan and Marinas [2005. Development of a Ct equation taking into consideration the effect of Lot variability on the inactivation of Cryptosporidium parvum oocysts with ozone. Water Res. 39(11), 2429-2437] utilizing a large number of inactivation studies. Parameter uncertainty is propagated with Monte Carlo simulation and the probability of attaining given inactivation levels is assessed. Regional sensitivity analysis based on variance decomposition ranks the influence of parameters in determining the variance of the model result. The lethal dose model turns out to be responsible for over 90% of the output variance. The entire analysis is re-run for three exemplary scenarios to assess the robustness of the results in view of changing inputs, differing operational parameters or revised assumptions about the appropriate model. We argue that the suggested micromodel is a versatile approach for characterization of disinfection reactors. The scheme developed for uncertainty assessment is optimal for model diagnostics and effectively supports the management of uncertainty.
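
    The sketch below conveys the micromodel idea in the abstract: inactivation is estimated by stochastically comparing samples of ozone exposure (CT) against samples of the lethal dose, and the comparison can be repeated over posterior draws of the distribution parameters to propagate parameter uncertainty. The lognormal forms and all numerical values are placeholders, not the calibrated distributions of the study.

```python
# Sketch of the micromodel idea: compare stochastic samples of ozone exposure
# (CT) with samples of the lethal dose; the fraction of oocysts whose exposure
# exceeds their lethal dose estimates the inactivation level. Lognormal forms
# and all parameter values are placeholders, not the calibrated distributions.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

ct_exposure = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=n)   # mg*min/L
lethal_dose = rng.lognormal(mean=np.log(3.0), sigma=0.6, size=n)   # mg*min/L

inactivation_fraction = (ct_exposure > lethal_dose).mean()
print(f"estimated inactivation fraction: {inactivation_fraction:.3f}")

# Repeating the calculation over posterior draws of the distribution parameters
# (not shown) would propagate parameter uncertainty, as in the Monte Carlo
# scheme described in the abstract.
```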

  9. Satellite altitude determination uncertainties

    NASA Technical Reports Server (NTRS)

    Siry, J. W.

    1972-01-01

    Satellite altitude determination uncertainties are discussed from the standpoint of the GEOS-C satellite and from the longer-range viewpoint afforded by the Geopause concept. Attention is focused on methods for short-arc tracking which are essentially geometric in nature. One uses combinations of lasers and collocated cameras. The other method relies only on lasers, using three or more to obtain the position fix. Two typical locales are examined: the Caribbean area, and a region associated with tracking sites at Goddard, Bermuda and Canada which encompasses a portion of the Gulf Stream in which meanders develop.

  10. Uncertainty bounds using sector theory

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Schmidt, David K.

    1989-01-01

    An approach based on sector-stability theory can furnish a description of the uncertainty associated with the frequency response of a model, given sector-bounds on the individual parameters of the model. The application of the sector-based approach to the formulation of useful uncertainty descriptions for linear, time-invariant multivariable systems is presently explored, and the approach is applied to two generic forms of parameter uncertainty in order to investigate its advantages and limitations. The results obtained show that sector-uncertainty bounds can be used to evaluate the impact of parameter uncertainties on the frequency response of the design model.

  11. mu analysis with real parametric uncertainty

    NASA Technical Reports Server (NTRS)

    Young, Peter M.; Newlin, Matthew P.; Doyle, John C.

    1991-01-01

    The authors give a broad overview, from a LFT (linear fractional transformation)/mu perspective, of some of the theoretical and practical issues associated with robustness in the presence of real parametric uncertainty, with a focus on computation. Recent results on the properties of mu in the mixed case are reviewed, including issues of NP completeness, continuity, computation of bounds, the equivalence of mu and its bounds, and some direct comparisons with Kharitonov-type analysis methods. In addition, some advances in the computational aspects of the problem, including a novel branch and bound algorithm, are briefly presented together with numerical results. The results suggest that while the mixed mu problem may have inherently combinatoric worst-case behavior, practical algorithms with modest computational requirements can be developed for problems of medium size (less than 100 parameters) that are of engineering interest.

  12. Database uncertainty as a limiting factor in reactive transport prognosis

    NASA Astrophysics Data System (ADS)

    Nitzsche, O.; Meinrath, G.; Merkel, B.

    2000-08-01

    The effect of uncertainties in thermodynamic databases on the prediction performance of reactive transport modeling of uranium (VI) is investigated with a Monte Carlo approach using the transport code TReaC. TReaC couples the transport model to the speciation code PHREEQC by a particle tracking method. A speciation example is given to illustrate the effect of uncertainty in thermodynamic data on the predicted solution composition. The transport calculations consequently show the prediction uncertainty resulting from uncertainty in thermodynamic data. A conceptually simple scenario of elution of uranium from a sand column is used as an illustrating example. Two different cases are investigated: a carbonate-enriched drinking water and an acid mine water associated with uranium mine remediation problems. Due to the uncertainty in the relative amount of positively charged and neutral solution species, the uncertainty in the thermodynamic data also induces uncertainty in the retardation behavior. The carbonated water system shows the largest uncertainties in the speciation calculation. Therefore, the model predictions of total uranium solubility have a broad range. The effect of data uncertainty in transport prediction is further illustrated by a prediction of the time when eluted uranium from the column exceeds a threshold value. All of these Monte Carlo transport calculations consume large amounts of computing time.

  13. Measurement uncertainty analysis techniques applied to PV performance measurements

    SciTech Connect

    Wells, C.

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates that quality assurance and quality control measures have been accomplished; and defines valid data as data having known and documented paths of origin (including theory), measurements, traceability to measurement standards, computations, and uncertainty analysis of results.
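
    As a minimal numerical illustration of the kind of pre-test analysis mentioned above, the sketch below combines assumed systematic and random standard uncertainties of a voltage and a current reading by root-sum-square, propagates them to the measured PV power, and reports an expanded uncertainty with a coverage factor of 2. All instrument values are invented for the example.

```python
# Toy pre-test uncertainty estimate for a PV power measurement P = V * I:
# systematic and random standard uncertainties of each input are combined by
# root-sum-square and propagated through first-order sensitivities. All
# instrument values are invented for the example.
import math

V, I = 20.0, 5.0                   # measured voltage (V) and current (A)
u_V = math.hypot(0.05, 0.02)       # combined standard uncertainty of V (V)
u_I = math.hypot(0.02, 0.01)       # combined standard uncertainty of I (A)

P = V * I
u_P = math.hypot(I * u_V, V * u_I)   # first-order propagation to power (W)
U_P = 2.0 * u_P                      # expanded uncertainty, coverage factor k = 2
print(f"P = {P:.1f} W +/- {U_P:.2f} W (approximately 95% interval)")
```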

  14. Quantum uncertainty relation using coherence

    NASA Astrophysics Data System (ADS)

    Yuan, Xiao; Bai, Ge; Peng, Tianyi; Ma, Xiongfeng

    2017-09-01

    Measurement outcomes of a quantum state can be genuinely random (unpredictable) according to the basic laws of quantum mechanics. The Heisenberg-Robertson uncertainty relation puts constraints on the accuracy of two noncommuting observables. The existing uncertainty relations adopt variance or entropic measures, which are functions of observed outcome distributions, to quantify the uncertainty. According to recent studies of quantum coherence, such uncertainty measures contain both classical (predictable) and quantum (unpredictable) components. In order to extract the quantum effects, we define quantum uncertainty to be the coherence of the state on the measurement basis. We discover a quantum uncertainty relation in terms of coherence between two noncommuting measurement bases. Furthermore, we analytically derive the quantum uncertainty relation for the qubit case with three widely adopted coherence measures: the relative entropy of coherence, the coherence of formation, and the l1 norm of coherence.
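
    For concreteness, the sketch below evaluates two of the coherence measures named in the abstract, the l1 norm of coherence and the relative entropy of coherence, for an arbitrary single-qubit density matrix in the computational basis. The example state is an assumption; the paper's qubit uncertainty relation itself is not reproduced here.

```python
# Evaluate two coherence measures named in the abstract for a single-qubit
# density matrix in the computational basis. The example state is arbitrary;
# the paper's qubit uncertainty relation itself is not reproduced here.
import numpy as np

def l1_coherence(rho):
    """l1 norm of coherence: sum of absolute off-diagonal elements."""
    return np.abs(rho - np.diag(np.diag(rho))).sum()

def relative_entropy_coherence(rho):
    """S(diag(rho)) - S(rho), with S the von Neumann entropy in bits."""
    def entropy(r):
        w = np.linalg.eigvalsh(r)
        w = w[w > 1e-12]
        return float(-(w * np.log2(w)).sum())
    return entropy(np.diag(np.diag(rho))) - entropy(rho)

# example: |psi> = cos(t)|0> + sin(t)|1>, slightly mixed with white noise
t = 0.4
psi = np.array([np.cos(t), np.sin(t)])
rho = 0.9 * np.outer(psi, psi) + 0.1 * np.eye(2) / 2

print(l1_coherence(rho), relative_entropy_coherence(rho))
```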

  15. Uncertainty in adaptive capacity

    NASA Astrophysics Data System (ADS)

    Adger, W. Neil; Vincent, Katharine

    2005-03-01

    The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. To cite this article: W.N. Adger, K. Vincent, C. R. Geoscience 337 (2005).

  16. Teaching Quantum Uncertainty

    NASA Astrophysics Data System (ADS)

    Hobson, Art

    2011-10-01

    An earlier paper introduces quantum physics by means of four experiments: Young's double-slit interference experiment using (1) a light beam, (2) a low-intensity light beam with time-lapse photography, (3) an electron beam, and (4) a low-intensity electron beam with time-lapse photography. It's ironic that, although these experiments demonstrate most of the quantum fundamentals, conventional pedagogy stresses their difficult and paradoxical nature. These paradoxes (i.e., logical contradictions) vanish, and understanding becomes simpler, if one takes seriously the fact that quantum mechanics is the nonrelativistic limit of our most accurate physical theory, namely quantum field theory, and treats the Schroedinger wave function, as well as the electromagnetic field, as quantized fields. Both the Schroedinger field, or "matter field," and the EM field are made of "quanta"—spatially extended but energetically discrete chunks or bundles of energy. Each quantum comes nonlocally from the entire space-filling field and interacts with macroscopic systems such as the viewing screen by collapsing into an atom instantaneously and randomly in accordance with the probability amplitude specified by the field. Thus, uncertainty and nonlocality are inherent in quantum physics. This paper is about quantum uncertainty. A planned later paper will take up quantum nonlocality.

  17. Probabilistic Mass Growth Uncertainties

    NASA Technical Reports Server (NTRS)

    Plumer, Eric; Elliott, Darren

    2013-01-01

    Mass has been widely used as a variable input parameter for Cost Estimating Relationships (CER) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, hence adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of an ongoing NASA effort to publish mass growth datasheets for adjusting single-point Technical Baseline Estimates (TBE) of masses of space instruments as well as spacecraft, for both Earth-orbiting and deep space missions at various stages of a project's lifecycle. This paper also discusses the long-term strategy of NASA Headquarters in publishing similar results, using a variety of cost-driving metrics, on an annual basis. This paper provides quantitative results that show decreasing mass growth uncertainties as mass estimate maturity increases. The analysis is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.

  18. Antarctic Photochemistry: Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Stewart, Richard W.; McConnell, Joseph R.

    1999-01-01

    Understanding the photochemistry of the Antarctic region is important for several reasons. Analysis of ice cores provides historical information on several species such as hydrogen peroxide and sulfur-bearing compounds. The former can potentially provide information on the history of oxidants in the troposphere and the latter may shed light on DMS-climate relationships. Extracting such information requires that we be able to model the photochemistry of the Antarctic troposphere and relate atmospheric concentrations to deposition rates and sequestration in the polar ice. This paper deals with one aspect of the uncertainty inherent in photochemical models of the high latitude troposphere: that arising from imprecision in the kinetic data used in the calculations. Such uncertainties in Antarctic models tend to be larger than those in models of mid to low latitude clean air. One reason is the lower temperatures which result in increased imprecision in kinetic data, assumed to be best characterized at 298K. Another is the inclusion of a DMS oxidation scheme in the present model. Many of the rates in this scheme are less precisely known than are rates in the standard chemistry used in many stratospheric and tropospheric models.

  19. Uncertainty in Wildfire Behavior

    NASA Astrophysics Data System (ADS)

    Finney, M.; Cohen, J. D.

    2013-12-01

    The challenge of predicting or modeling fire behavior is well recognized by scientists and managers who attempt predictions of fire spread rate or growth. At the scale of the spreading fire, the uncertainty in winds, moisture, fuel structure, and fire location makes accurate predictions difficult, and the non-linear response of fire spread to these conditions means that average behavior is poorly represented by average environmental parameters. Even more difficult are estimations of threshold behaviors (e.g. spread/no-spread, crown fire initiation, ember generation and spotting) because the fire responds as a step-function to small changes in one or more environmental variables, translating to dynamical feedbacks and unpredictability. Recent research shows that ignition of fuel particles, itself a threshold phenomenon, depends on flame contact, which is not steady or uniform. Recent studies of flame structure in both spreading and stationary fires reveal that much of the non-steadiness of the flames as they contact fuel particles results from buoyant instabilities that produce quasi-periodic flame structures. With fuel particle ignition produced by time-varying heating and short-range flame contact, future improvements in fire behavior modeling will likely require statistical approaches to deal with the uncertainty at all scales, including the level of heat transfer, the fuel arrangement, and weather.

  20. Investment, regulation, and uncertainty

    PubMed Central

    Smyth, Stuart J; McDonald, Jillian; Falck-Zepeda, Jose

    2014-01-01

    As with any technological innovation, time refines the technology, improving upon the original version of the innovative product. The initial GM crops had single traits for either herbicide tolerance or insect resistance. Current varieties have both of these traits stacked together, and in many cases other abiotic and biotic traits have also been stacked. This innovation requires investment. While this is relatively straightforward, certain conditions need to exist such that investments can be facilitated. The principal requirement for investment is that regulatory frameworks render consistent and timely decisions. If the certainty of regulatory outcomes weakens, the potential for changes in investment patterns increases. This article provides a summary background to the leading plant breeding technologies that are either currently being used to develop new crop varieties or are in the pipeline to be applied to plant breeding within the next few years. Challenges for existing regulatory systems are highlighted. Utilizing an option value approach from the investment literature, an assessment of uncertainty regarding the regulatory approval for these varying techniques is undertaken. This research highlights which technology development options have the greatest degree of uncertainty and hence which ones might be expected to see an investment decline. PMID:24499745

  1. The Neural Representation of Unexpected Uncertainty During Value-Based Decision Making

    PubMed Central

    Payzan-LeNestour, Elise; Dunne, Simon; Bossaerts, Peter; O'Doherty, John P.

    2016-01-01

    Uncertainty is an inherent property of the environment and a central feature of models of decision-making and learning. Theoretical propositions suggest that one form, unexpected uncertainty, may be used to rapidly adapt to changes in the environment, while being influenced by two other forms: risk and estimation uncertainty. While previous studies have reported neural representations of estimation uncertainty and risk, relatively little is known about unexpected uncertainty. Here, participants performed a decision-making task while undergoing functional magnetic resonance imaging (fMRI), which, in combination with a Bayesian model-based analysis, enabled each form of uncertainty to be separately measured. We found representations of unexpected uncertainty in multiple cortical areas, as well as the noradrenergic brainstem nucleus locus coeruleus. Other unique cortical regions were found to encode risk, estimation uncertainty and learning rate. Collectively, these findings support theoretical models in which several formally separable uncertainty computations determine the speed of learning. PMID:23849203

  2. Phylogenetic uncertainty revisited: Implications for ecological analyses.

    PubMed

    Rangel, Thiago F; Colwell, Robert K; Graves, Gary R; Fučíková, Karolina; Rahbek, Carsten; Diniz-Filho, José Alexandre F

    2015-05-01

    Ecologists and biogeographers usually rely on a single phylogenetic tree to study evolutionary processes that affect macroecological patterns. This approach ignores the fact that each phylogenetic tree is a hypothesis about the evolutionary history of a clade, and cannot be directly observed in nature. Also, trees often leave out many extant species, or include missing species as polytomies because of a lack of information on the relationship among taxa. Still, researchers usually do not quantify the effects of phylogenetic uncertainty in ecological analyses. We propose here a novel analytical strategy to maximize the use of incomplete phylogenetic information, while simultaneously accounting for several sources of phylogenetic uncertainty that may distort statistical inferences about evolutionary processes. We illustrate the approach using a clade-wide analysis of the hummingbirds, evaluating how different sources of uncertainty affect several phylogenetic comparative analyses of trait evolution and biogeographic patterns. Although no statistical approximation can fully substitute for a complete and robust phylogeny, the method we describe and illustrate enables researchers to broaden the number of clades for which studies informed by evolutionary relationships are possible, while allowing the estimation and control of statistical error that arises from phylogenetic uncertainty. Software tools to carry out the necessary computations are offered. © 2015 The Author(s).

  3. LCA data quality: sensitivity and uncertainty analysis.

    PubMed

    Guo, M; Murphy, R J

    2012-10-01

    Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes and could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness, especially for comparative assessments. This study also presents an approach to integrate statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this has enabled us to assign confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. The uncertainty analysis, combined with the sensitivity analysis carried out in this study, has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions.

  4. Statistical assessment of predictive modeling uncertainty

    NASA Astrophysics Data System (ADS)

    Barzaghi, Riccardo; Marotta, Anna Maria

    2017-04-01

    When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area that extends from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities while taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values that have better statistical significance and might help a sharper identification of the best-fitting geophysical models.
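
    The sketch below illustrates the χ2 comparison described in the abstract: the covariance of the model predictions is added to the covariance of the observations before the misfit is evaluated, which lowers the statistic relative to a data-only test. The residual vector and both covariance matrices are toy values, not the Mediterranean model output.

```python
# Toy version of the chi-squared comparison: adding the model covariance to the
# data covariance before computing the misfit lowers the statistic relative to
# a data-only test. The residual vector and both covariances are invented.
import numpy as np

residual = np.array([0.8, -0.3, 0.5])            # model minus observed quantities
C_data = np.diag([0.25, 0.20, 0.30])             # covariance of the observations
C_model = np.array([[0.30, 0.05, 0.00],          # covariance of the model predictions
                    [0.05, 0.25, 0.02],
                    [0.00, 0.02, 0.20]])

chi2_data_only = residual @ np.linalg.solve(C_data, residual)
chi2_combined = residual @ np.linalg.solve(C_data + C_model, residual)
print(chi2_data_only, chi2_combined)
```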

  5. Environmental adversity and uncertainty favour cooperation

    PubMed Central

    Andras, Peter; Lazarus, John; Roberts, Gilbert

    2007-01-01

    Background: A major cornerstone of evolutionary biology theory is the explanation of the emergence of cooperation in communities of selfish individuals. There is an unexplained tendency in the plant and animal world – with examples from alpine plants, worms, fish, mole-rats, monkeys and humans – for cooperation to flourish where the environment is more adverse (harsher) or more unpredictable. Results: Using mathematical arguments and computer simulations we show that in more adverse environments individuals perceive their resources to be more unpredictable, and that this unpredictability favours cooperation. First we show analytically that in a more adverse environment the individual experiences greater perceived uncertainty. Second we show through a simulation study that greater perceived uncertainty implies a higher level of cooperation in communities of selfish individuals. Conclusion: This study captures the essential features of the natural examples: the positive impact of resource adversity or uncertainty on cooperation. These newly discovered connections between environmental adversity, uncertainty and cooperation help to explain the emergence and evolution of cooperation in animal and human societies. PMID:18053138

  6. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  7. Theoretical Analysis of Positional Uncertainty in Direct Georeferencing

    NASA Astrophysics Data System (ADS)

    Coskun Kiraci, Ali; Toz, Gonul

    2016-10-01

    A GNSS/INS system, composed of a Global Navigation Satellite System and an Inertial Navigation System together, can provide orientation parameters directly from the observations collected during the flight. Thus orientation parameters can be obtained by the GNSS/INS integration process without any need for aerotriangulation after the flight. In general, positional uncertainty can be estimated with known coordinates of Ground Control Points (GCP), which require field work such as marker construction and GNSS measurement, adding cost to the project. Here the question arises: what should the theoretical uncertainty of point coordinates be, given the uncertainties of the orientation parameters? In this study the contribution of each orientation parameter to positional uncertainty is examined, and the theoretical positional uncertainty is computed without GCP measurement for direct georeferencing using a graphical user interface developed in MATLAB.
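
    The sketch below shows the generic first-order mechanism behind such a theoretical estimate: a numerical Jacobian of a georeferencing function maps the covariance of the orientation parameters to a covariance of the ground coordinates, C_X = J C_p J^T. The georeferencing function here is a deliberately simplified stand-in (nadir-looking sensor over flat terrain), not the full collinearity model, and all parameter values are assumed.

```python
# First-order propagation of orientation-parameter uncertainty to ground-point
# uncertainty, C_X = J C_p J^T, with a numerical Jacobian. The georeferencing
# function is a simplified stand-in (nadir-looking sensor over flat terrain),
# not the full collinearity model; all parameter values are assumed.
import numpy as np

def georef(p):
    """Toy mapping from (X0, Y0, Z0, omega, phi, kappa) to a ground point (X, Y)."""
    X0, Y0, Z0, omega, phi, kappa = p
    h = Z0                                   # flying height above flat terrain
    return np.array([X0 + h * np.tan(phi), Y0 - h * np.tan(omega)])

p0 = np.array([500.0, 800.0, 1000.0, 0.001, 0.002, 0.0])     # nominal parameters
sig = np.array([0.05, 0.05, 0.08, 2e-5, 2e-5, 5e-5])         # std devs (m, rad)
C_p = np.diag(sig**2)

eps = 1e-6
J = np.zeros((2, 6))
for j in range(6):
    dp = np.zeros(6)
    dp[j] = eps
    J[:, j] = (georef(p0 + dp) - georef(p0 - dp)) / (2 * eps)   # central differences

C_X = J @ C_p @ J.T
print(np.sqrt(np.diag(C_X)))     # horizontal positional uncertainty in metres
```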

  8. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
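
    As background to the correction described above, the sketch below contrasts the standard white-residual covariance of least-squares parameter estimates with a sandwich estimate weighted by the empirical residual autocorrelation, on a toy regression with first-order autoregressive noise. It is a generic colored-residual correction for ordinary batch least squares, not the paper's recursive formulation, and the regressors, noise model and lag truncation are assumptions.

```python
# Contrast the standard white-residual covariance of least-squares estimates
# with a sandwich estimate weighted by the empirical residual autocorrelation,
# on a toy regression with AR(1) noise. Generic batch least-squares correction,
# not the paper's recursive formulation; all settings are assumed.
import numpy as np

rng = np.random.default_rng(2)
N = 400
X = np.column_stack([np.ones(N), np.sin(0.05 * np.arange(N))])
theta_true = np.array([1.0, -2.0])

e = np.zeros(N)                                   # first-order autoregressive noise
for k in range(1, N):
    e[k] = 0.8 * e[k - 1] + 0.1 * rng.standard_normal()
y = X @ theta_true + e

XtX_inv = np.linalg.inv(X.T @ X)
theta_hat = XtX_inv @ X.T @ y
v = y - X @ theta_hat                             # residuals

# standard covariance, valid only for white residuals
cov_white = XtX_inv * (v @ v / (N - X.shape[1]))

def autocov(v, lag):
    return (v[:N - lag] @ v[lag:]) / N

# corrected covariance: (X'X)^-1 [ sum_lag R(lag) sum_i x_i x_{i+lag}' ] (X'X)^-1
max_lag = 50
middle = np.zeros((2, 2))
for lag in range(-max_lag, max_lag + 1):
    r = autocov(v, abs(lag))
    if lag >= 0:
        middle += r * (X[:N - lag].T @ X[lag:])
    else:
        middle += r * (X[-lag:].T @ X[:N + lag])
cov_colored = XtX_inv @ middle @ XtX_inv

print("white-residual std devs:  ", np.sqrt(np.diag(cov_white)))
print("colored-residual std devs:", np.sqrt(np.diag(cov_colored)))
```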

  9. On the worst case uncertainty and its evaluation

    NASA Astrophysics Data System (ADS)

    Fabbiano, L.; Giaquinto, N.; Savino, M.; Vacca, G.

    2016-02-01

    The paper is a review of the worst-case uncertainty (WCU) concept, neglected in the Guide to the Expression of Uncertainty in Measurement (GUM) but necessary for a correct uncertainty assessment in a number of practical cases involving distributions with compact support. First, it is highlighted that knowledge of the WCU is necessary to choose a sensible coverage factor, associated with a sensible coverage probability: the Maximum Acceptable Coverage Factor (MACF) is introduced as a convenient index to guide this choice. Second, propagation rules for the worst-case uncertainty are provided in matrix and scalar form. It is highlighted that when WCU propagation cannot be computed, the Monte Carlo approach is the only way to obtain a correct expanded uncertainty assessment, in contrast to what can be inferred from the GUM. Third, examples of applications of the formulae to ordinary instruments and measurements are given. An example taken from the GUM is also discussed, underlining some inconsistencies in it.
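
    A minimal sketch of the scalar propagation idea mentioned above: to first order, the worst-case uncertainty of y = f(x1, ..., xn) is the sum of |∂f/∂xi| times the worst-case (maximum-error) half-width of each input, which can be contrasted with the usual root-sum-square combination of standard uncertainties. The resistance example and all numbers are assumptions, not taken from the paper.

```python
# Scalar worst-case uncertainty (WCU) propagation: to first order the WCU of
# y = f(x1, ..., xn) is sum_i |df/dxi| * WCU(xi), shown next to the usual
# root-sum-square combination. The resistance example and numbers are assumed.
import math

def wcu_propagate(sensitivities, wcu_inputs):
    return sum(abs(s) * w for s, w in zip(sensitivities, wcu_inputs))

V, I = 20.0, 5.0                    # measured voltage (V) and current (A)
R = V / I                           # derived resistance (ohm)
sens = [1.0 / I, -V / I**2]         # dR/dV, dR/dI
max_err = [0.1, 0.05]               # maximum (worst-case) errors of V and I

wcu = wcu_propagate(sens, max_err)
rss = math.hypot(sens[0] * max_err[0], sens[1] * max_err[1])
print(f"R = {R:.2f} ohm, worst-case uncertainty {wcu:.3f}, RSS {rss:.3f}")
```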

  10. Earthquake Loss Estimation Uncertainties

    NASA Astrophysics Data System (ADS)

    Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Aleksander

    2013-04-01

    The paper addresses the reliability of loss assessment performed by worldwide systems in emergency mode following strong earthquakes. Timely and correct action just after an event can yield significant benefits in saving lives; in this case, information about possible damage and the expected number of casualties is critical for decisions about search and rescue operations and the offering of humanitarian assistance. Such rough information can be provided, first of all, by global systems operating in emergency mode. The experience of earthquake disasters in different earthquake-prone countries shows that the officials in charge of emergency response at national and international levels often lack prompt and reliable information on the scope of the disaster. Uncertainties on the parameters used in the estimation process are numerous and large: knowledge about the physical phenomena and uncertainties on the parameters used to describe them; the global adequacy of modeling techniques to the actual physical phenomena; the actual distribution of the population at risk at the very time of the shaking (with respect to the immediate threat: buildings or the like); knowledge about the source of the shaking, etc. One need not be a sharp specialist to understand, for example, that the way a given building responds to a given shaking obeys mechanical laws which are poorly known (if not out of the reach of engineers for a large portion of the building stock); if a carefully engineered modern building is approximately predictable, this is far from the case for older buildings, which make up the bulk of inhabited buildings. The way the population inside the buildings at the time of shaking is affected by the physical damage caused to the buildings is, by far, not precisely known. The paper analyzes the influence of uncertainties in the determination of strong event parameters by alert seismological surveys, and of the simulation models used at all stages from estimating shaking intensity

  11. Pragmatic aspects of uncertainty propagation: A conceptual review

    NASA Astrophysics Data System (ADS)

    Thacker, W. Carlisle; Iskandarani, Mohamed; Gonçalves, Rafael C.; Srinivasan, Ashwanth; Knio, Omar M.

    2015-11-01

    When quantifying the uncertainty of the response of a computationally costly oceanographic or meteorological model stemming from the uncertainty of its inputs, practicality demands getting the most information using the fewest simulations. It is widely recognized that, by interpolating the results of a small number of simulations, results of additional simulations can be inexpensively approximated to provide a useful estimate of the variability of the response. Even so, as computing the simulations to be interpolated remains the biggest expense, the choice of these simulations deserves attention. When making this choice, two requirements should be considered: (i) the nature of the interpolation and (ii) the available information about input uncertainty. Examples comparing polynomial interpolation and Gaussian process interpolation are presented for three different views of input uncertainty.
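
    The sketch below mirrors this setting on a toy problem: six "expensive" model evaluations are turned into two cheap surrogates, a low-order polynomial fit and a simple Gaussian-process interpolant with an RBF kernel, and each surrogate is then sampled to estimate the mean and spread of the response under an assumed Gaussian input uncertainty. The model, kernel length scale and input distribution are all illustrative assumptions.

```python
# Toy surrogate-based propagation: six "expensive" model runs are approximated
# by a low-order polynomial fit and by a simple Gaussian-process interpolant
# (RBF kernel), and each cheap surrogate is sampled to estimate the response
# statistics under an assumed Gaussian input uncertainty.
import numpy as np

rng = np.random.default_rng(3)
model = lambda x: np.sin(3 * x) + 0.5 * x          # stand-in for a costly model

x_train = np.linspace(-1.0, 1.0, 6)                # only six simulations
y_train = model(x_train)

# surrogate 1: cubic polynomial fit
poly = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)

# surrogate 2: zero-mean Gaussian-process interpolant with an RBF kernel
def rbf(a, b, ell=0.4):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

K = rbf(x_train, x_train) + 1e-10 * np.eye(x_train.size)   # jitter for stability
alpha = np.linalg.solve(K, y_train)
gp = lambda x: rbf(x, x_train) @ alpha

# propagate input uncertainty x ~ N(0, 0.3^2) through both surrogates
x_samples = rng.normal(0.0, 0.3, size=20_000)
for name, f in [("polynomial", poly), ("gaussian process", gp)]:
    y = f(x_samples)
    print(f"{name:17s} mean = {y.mean():.3f}, std = {y.std():.3f}")
```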

  12. Uncertainty relation in Schwarzschild spacetime

    NASA Astrophysics Data System (ADS)

    Feng, Jun; Zhang, Yao-Zhong; Gould, Mark D.; Fan, Heng

    2015-04-01

    We explore the entropic uncertainty relation in the curved background outside a Schwarzschild black hole, and find that Hawking radiation introduces a nontrivial modification of the uncertainty bound for a particular observer, which could therefore be witnessed experimentally by a proper uncertainty game. We first investigate an uncertainty game between a free-falling observer and his static partner holding a quantum memory initially entangled with the quantum system to be measured. Due to the information loss from Hawking decoherence, we find an inevitable increase of the uncertainty on the outcome of measurements in the view of the static observer, which depends on the mass of the black hole, the distance of the observer from the event horizon, and the mode frequency of the quantum memory. To illustrate the generality of this paradigm, we relate the entropic uncertainty bound to other uncertainty probes, e.g., time-energy uncertainty. In an alternative game between two static players, we show that quantum information of a qubit can be transferred to the quantum memory through a bath of fluctuating quantum fields outside the black hole. For a particular choice of initial state, we show that the Hawking decoherence cannot counteract entanglement generation after the dynamical evolution of the system, which triggers an effectively reduced uncertainty bound that violates the intrinsic limit -log2 c. Numerical estimation for a proper choice of initial state shows that our result is comparable with possible real experiments. Finally, a discussion of the black hole firewall paradox in the context of the entropic uncertainty relation is given.
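
    For reference, the intrinsic limit -log2 c mentioned above comes from the quantum-memory entropic uncertainty relation, which in its standard textbook form (not quoted from this paper) reads:

```latex
H(Q \mid B) + H(R \mid B) \;\ge\; \log_2 \frac{1}{c} + S(A \mid B),
\qquad c = \max_{i,j} \bigl|\langle q_i \mid r_j \rangle\bigr|^2 .
```

    Here H(Q|B) and H(R|B) are the entropies of the measurement outcomes conditioned on the quantum memory B, S(A|B) is the conditional von Neumann entropy of the pre-measurement state, and c is the maximum overlap between the two measurement bases; a negative S(A|B), i.e. entanglement with the memory, is what allows the effective bound to drop below -log2 c, as discussed in the abstract.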

  13. Comprehensive Approach to Verification and Validation of CFD Simulations Applied to Backward Facing Step-Application of CFD Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; LLie, Marcel; Shallhorn, Paul A.

    2012-01-01

    There are inherent uncertainties and errors associated with using Computational Fluid Dynamics (CFD) to predict the flow field, and there is no standard method for evaluating uncertainty in the CFD community. This paper describes an approach to validate the uncertainty in using CFD. The method uses state-of-the-art uncertainty analysis, applying different turbulence models, and draws conclusions on which models provide the least uncertainty and which models most accurately predict the flow over a backward facing step.

  14. DOD ELAP Lab Uncertainties

    DTIC Science & Technology

    2012-03-01

    IPV6, NLLAP, NEFAP TRAINING Programs. Certification Bodies – ISO/IEC 17021: accreditation for management system certification bodies that...certify to: ISO 9001 (QMS), ISO 14001 (EMS), TS 16949 (US Automotive), etc. DoD QSM 4.2 standard; ISO/IEC 17025:2005; each has uncertainty... Laboratories – ISO/IEC 17025; Inspection Bodies – ISO/IEC 17020; RMPs – ISO Guide 34 (Reference

  15. Uncertainty as Certainty

    NASA Astrophysics Data System (ADS)

    Petzinger, Tom

    I am trying to make money in the biotech industry from complexity science. And I am doing it with inspiration that I picked up on the edge of Appalachia spending time with June Holley and ACEnet when I was a Wall Street Journal reporter. I took some of those ideas to Pittsburgh, in biotechnology, in a completely private setting with an economic development focus, but also with a mission to return profit to private capital. And we are doing that. I submit as a hypothesis, something we are figuring out in the post-industrial era, that business evolves. It is not the definition of business, but business critically involves the design of systems in which uncertainty is treated as a certainty. That is what I have seen and what I have tried to put into practice.

  16. Generalized uncertainty relations

    NASA Astrophysics Data System (ADS)

    Herdegen, Andrzej; Ziobro, Piotr

    2017-04-01

    The standard uncertainty relations (UR) in quantum mechanics are typically used for unbounded operators (like the canonical pair). This implies the need for the control of the domain problems. On the other hand, the use of (possibly bounded) functions of basic observables usually leads to more complex and less readily interpretable relations. In addition, UR may turn trivial for certain states if the commutator of observables is not proportional to a positive operator. In this letter we consider a generalization of standard UR resulting from the use of two, instead of one, vector states. The possibility to link these states to each other in various ways adds additional flexibility to UR, which may compensate some of the above-mentioned drawbacks. We discuss applications of the general scheme, leading not only to technical improvements, but also to interesting new insight.

  17. Medical decisions under uncertainty.

    PubMed

    Carmi, A

    1993-01-01

    The court applies the criteria of the reasonable doctor and common practice in order to consider the behaviour of a defendant physician. The meaning of our demand that the doctor expects that his or her acts or omissions will bring about certain implications is that, according to the present circumstances and subject to the limited knowledge of the common practice, the course of certain events or situations in the future may be assumed in spite of the fog of uncertainty which surrounds us. The miracles and wonders of creation are concealed from us, and we are not aware of the way and the nature of our bodily functioning. Therefore, there seems to be no way to avoid mistakes, because in several cases the correct diagnosis cannot be determined even with the most advanced application of all information available. Doctors find it difficult to admit that they grope in the dark. They wish to form clear and accurate diagnoses for their patients. The fact that their profession is faced with innumerable and unavoidable risks and mistakes is hard to swallow, and many of them claim that in their everyday work this does not happen. They should not content themselves by changing their style. A radical metamorphosis is needed. They should not be tempted to formulate their diagnoses in 'neutral' statements in order to be on the safe side. Uncertainty should be accepted and acknowledged by the profession and by the public at large as a human phenomenon, as an integral part of any human decision, and as a clear characteristic of any legal or medical diagnosis.(ABSTRACT TRUNCATED AT 250 WORDS)

  18. Quantum corrections to newtonian potential and generalized uncertainty principle

    NASA Astrophysics Data System (ADS)

    Scardigli, Fabio; Lambiase, Gaetano; Vagenas, Elias

    2017-08-01

    We use the leading quantum corrections to the newtonian potential to compute the deformation parameter of the generalized uncertainty principle. By assuming only General Relativity as the theory of gravitation, and the thermal nature of the GUP corrections to the Hawking spectrum, our calculation gives, to first order, a specific numerical result. We briefly discuss the physical meaning of this value, and compare it with the previously obtained bounds on the deformation parameter of the generalized uncertainty principle.

  19. Transport Behavior in Fractured Rock under Conceptual and Parametric Uncertainty

    NASA Astrophysics Data System (ADS)

    Pham, H. V.; Parashar, R.; Sund, N. L.; Pohlmann, K.

    2016-12-01

    Lack of hydrogeological data and knowledge leads to uncertainty in numerical modeling, and many conceptualizations are often proposed to represent uncertain model components derived from the same data. This study investigates how conceptual and parametric uncertainty influence transport behavior in three-dimensional discrete fracture networks (DFN). dfnWorks, a parallelized computational suite developed at the Los Alamos National Laboratory, is used to simulate flow and transport in simple 3D percolating DFNs. Model averaging techniques in a Monte Carlo framework are adopted to effectively predict contaminant plumes and to quantify prediction uncertainty arising from conceptual and parametric uncertainties. The method is applied to stochastic fracture networks with orthogonal sets of background fractures and domain-spanning faults. The sources of uncertainty are the boundary conditions and the fault characteristics. Spatial and temporal analyses of the contaminant plumes are conducted to compute the influence of the uncertainty sources on the transport behavior. The flow and transport characteristics of 3D stochastic DFNs under uncertainty help in laying the groundwork for model development and analysis of field-scale fractured rock systems.

  20. Uncertainty Quantification for Safety Verification Applications in Nuclear Power Plants

    NASA Astrophysics Data System (ADS)

    Boafo, Emmanuel

    There is an increasing interest in computational reactor safety analysis to systematically replace the conservative calculations by best estimate calculations augmented by quantitative uncertainty analysis methods. This has been necessitated by recent regulatory requirements that have permitted the use of such methods in reactor safety analysis. Stochastic uncertainty quantification methods have shown great promise, as they are better suited to capture the complexities in real engineering problems. This study proposes a framework for performing uncertainty quantification based on the stochastic approach, which can be applied to enhance safety analysis. (Abstract shortened by ProQuest.).

  1. Impact of discharge data uncertainty on nutrient load uncertainty

    NASA Astrophysics Data System (ADS)

    Westerberg, Ida; Gustavsson, Hanna; Sonesten, Lars

    2016-04-01

    Uncertainty in the rating-curve model of the stage-discharge relationship leads to uncertainty in discharge time series. These uncertainties in turn affect many other analyses based on discharge data, such as nutrient load estimations. It is important to understand how large the impact of discharge data uncertainty is on such analyses, since they are often used as the basis to take important environmental management decisions. In the Baltic Sea basin, nutrient load estimates from river mouths are a central information basis for managing and reducing eutrophication in the Baltic Sea. In this study we investigated rating curve uncertainty and its propagation to discharge data uncertainty and thereafter to uncertainty in the load of phosphorous and nitrogen for twelve Swedish river mouths. We estimated rating curve uncertainty using the Voting Point method, which accounts for random and epistemic errors in the stage-discharge relation and allows drawing multiple rating-curve realisations consistent with the total uncertainty. We sampled 40,000 rating curves, and for each sampled curve we calculated a discharge time series from 15-minute water level data for the period 2005-2014. Each discharge time series was then aggregated to daily scale and used to calculate the load of phosphorous and nitrogen from linearly interpolated monthly water samples, following the currently used methodology for load estimation. Finally the yearly load estimates were calculated and we thus obtained distributions with 40,000 load realisations per year - one for each rating curve. We analysed how the rating curve uncertainty propagated to the discharge time series at different temporal resolutions, and its impact on the yearly load estimates. Two shorter periods of daily water quality sampling around the spring flood peak allowed a comparison of load uncertainty magnitudes resulting from discharge data with those resulting from the monthly water quality sampling.
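
    The sketch below mimics the propagation chain described above on synthetic data: each Monte Carlo realisation draws an uncertain rating curve (here by perturbing power-law parameters rather than by the Voting Point method), converts stage to discharge, combines it with linearly interpolated monthly concentrations, and accumulates an annual load, so the spread of the resulting loads reflects the discharge-data uncertainty. All series, parameter distributions and units are invented for illustration.

```python
# Synthetic propagation chain: sampled rating curves convert stage to
# discharge, and each realisation yields a different annual load from
# interpolated monthly concentrations. All values are invented.
import numpy as np

rng = np.random.default_rng(4)
days = 365
stage = (0.8 + 0.4 * np.abs(np.sin(np.linspace(0, 6, days)))
         + 0.05 * rng.standard_normal(days))                       # water level, m
conc_monthly = 0.04 + 0.01 * rng.standard_normal(days // 30 + 1)   # mg/L
conc_daily = np.interp(np.arange(days), np.arange(0, days, 30), conc_monthly)

loads = []
for _ in range(5000):
    a = rng.normal(10.0, 1.0)        # uncertain rating-curve coefficient
    b = rng.normal(1.8, 0.1)         # uncertain rating-curve exponent
    q = a * stage ** b                                   # discharge, m3/s
    loads.append((q * conc_daily * 86400 * 1e-6).sum())  # tonnes per year

loads = np.array(loads)
print(f"annual load: median {np.median(loads):.1f} t, "
      f"90% interval [{np.percentile(loads, 5):.1f}, {np.percentile(loads, 95):.1f}] t")
```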

  2. Uncertainty quantification in molecular dynamics

    NASA Astrophysics Data System (ADS)

    Rizzi, Francesco

    This dissertation focuses on uncertainty quantification (UQ) in molecular dynamics (MD) simulations. The application of UQ to molecular dynamics is motivated by the broad uncertainty characterizing MD potential functions and by the complexity of the MD setting, where even small uncertainties can be amplified to yield large uncertainties in the model predictions. Two fundamental, distinct sources of uncertainty are investigated in this work, namely parametric uncertainty and intrinsic noise. Intrinsic noise is inherently present in the MD setting, due to fluctuations originating from thermal effects. Averaging methods can be exploited to reduce the fluctuations, but due to finite sampling, this effect cannot be completely filtered, thus yielding a residual uncertainty in the MD predictions. Parametric uncertainty, on the contrary, is introduced in the form of uncertain potential parameters, geometry, and/or boundary conditions. We address the UQ problem in both its main components, namely the forward propagation, which aims at characterizing how uncertainty in model parameters affects selected observables, and the inverse problem, which involves the estimation of target model parameters based on a set of observations. The dissertation highlights the challenges arising when parametric uncertainty and intrinsic noise combine to yield non-deterministic, noisy MD predictions of target macroscale observables. Two key probabilistic UQ methods, namely Polynomial Chaos (PC) expansions and Bayesian inference, are exploited to develop a framework that enables one to isolate the impact of parametric uncertainty on the MD predictions and, at the same time, properly quantify the effect of the intrinsic noise. Systematic applications to a suite of problems of increasing complexity lead to the observation that an uncertain PC representation built via Bayesian regression is the most suitable model for the representation of uncertain MD predictions of target observables in the

  3. Uncertainty and Surprise: An Introduction

    NASA Astrophysics Data System (ADS)

    McDaniel, Reuben R.; Driebe, Dean J.

    Much of the traditional scientific and applied scientific work in the social and natural sciences has been built on the supposition that the unknowability of situations is the result of a lack of information. This has led to an emphasis on uncertainty reduction through ever-increasing information seeking and processing, including better measurement and observational instrumentation. Pending uncertainty reduction through better information, efforts are devoted to uncertainty management and hierarchies of controls. A central goal has been the avoidance of surprise.

  4. Adaptive Strategies for Materials Design using Uncertainties.

    PubMed

    Balachandran, Prasanna V; Xue, Dezhen; Theiler, James; Hogden, John; Lookman, Turab

    2016-01-21

    We compare several adaptive design strategies using a data set of 223 compounds from the M2AX family for which the elastic properties [bulk (B), shear (G), and Young's (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impacts the design. We find that selectors that use information about the prediction uncertainty outperform those that do not. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.
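
    As a toy illustration of an uncertainty-aware selector of the kind compared in the paper, the sketch below scores four hypothetical candidates by expected improvement (EI) over the current best measured value, using the regressor's predictions and their standard deviations; a selector that ignores uncertainty would simply take the largest prediction. The numbers, and the specific use of EI, are assumptions for illustration.

```python
# Toy uncertainty-aware selector: candidates are scored by expected improvement
# (EI) over the best property value measured so far, using the regressor's
# predictions and standard deviations. All numbers are assumptions.
import numpy as np
from scipy.stats import norm

pred = np.array([310.0, 295.0, 300.0, 305.0])    # predicted modulus (GPa)
sigma = np.array([2.0, 20.0, 5.0, 15.0])         # prediction uncertainties (GPa)
best_so_far = 308.0                              # best measured value to date

z = (pred - best_so_far) / sigma
ei = (pred - best_so_far) * norm.cdf(z) + sigma * norm.pdf(z)

print("greedy pick (ignores uncertainty):", int(np.argmax(pred)))
print("EI pick (uses uncertainty):       ", int(np.argmax(ei)))
```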

  5. Dealing with Uncertainties in Initial Orbit Determination

    NASA Technical Reports Server (NTRS)

    Armellin, Roberto; Di Lizia, Pierluigi; Zanetti, Renato

    2015-01-01

    A method to deal with uncertainties in initial orbit determination (IOD) is presented. This is based on the use of Taylor differential algebra (DA) to nonlinearly map the observation uncertainties from the observation space to the state space. When a minimum set of observations is available, DA is used to expand the solution of the IOD problem in Taylor series with respect to measurement errors. When more observations are available, high order inversion tools are exploited to obtain full state pseudo-observations at a common epoch. The mean and covariance of these pseudo-observations are nonlinearly computed by evaluating the expectation of high order Taylor polynomials. Finally, a linear scheme is employed to update the current knowledge of the orbit. Angles-only observations are considered and simplified Keplerian dynamics adopted to ease the explanation. Three test cases of orbit determination of artificial satellites in different orbital regimes are presented to discuss the features and performance of the proposed methodology.

  6. Uncertainty in Partner Selection for Virtual Enterprises

    NASA Astrophysics Data System (ADS)

    Crispim, José; de Sousa, Jorge Pinho

    A virtual enterprise (VE) is a temporary organization that pools the core competencies of its member enterprises and exploits fast-changing market opportunities. The success of such an organization is strongly dependent on its composition, and the selection of partners therefore becomes a crucial issue. This problem is particularly difficult because of the uncertainties related to information, market dynamics, customer expectations and the pace of technological change. In this paper we propose an integrated approach to rank alternative VE configurations in business environments with uncertainty, using an extension of the TOPSIS method for fuzzy data, improved through the use of a stochastic multiobjective tabu search meta-heuristic. Preliminary computational results clearly demonstrate the potential of this approach for practical application.

  7. Uncertainty quantification for porous media flows

    SciTech Connect

    Christie, Mike. E-mail: mike.christie@pet.hw.ac.uk; Demyanov, Vasily; Erbas, Demet

    2006-09-01

    Uncertainty quantification is an increasingly important aspect of many areas of computational science, where the challenge is to make reliable predictions about the performance of complex physical systems in the absence of complete or reliable data. Predicting flows of oil and water through oil reservoirs is an example of a complex system where accuracy in prediction is needed primarily for financial reasons. Simulation of fluid flow in oil reservoirs is usually carried out using large commercially written finite difference simulators solving conservation equations describing the multi-phase flow through the porous reservoir rocks. This paper examines a Bayesian Framework for uncertainty quantification in porous media flows that uses a stochastic sampling algorithm to generate models that match observed data. Machine learning algorithms are used to speed up the identification of regions in parameter space where good matches to observed data can be found.
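
    A hedged sketch of the kind of stochastic sampling described above is given below: a random-walk Metropolis sampler generates models whose simulated output matches observed data. The "simulator" is a cheap analytic stand-in, not a reservoir flow simulator, and all parameter values are invented for illustration.

```python
# Random-walk Metropolis sampling for history matching (toy stand-in model).
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta, t):
    # toy production-decline proxy: rate = q0 * exp(-decay * t)
    q0, decay = theta
    return q0 * np.exp(-decay * t)

t_obs = np.linspace(0.0, 5.0, 20)
truth = np.array([100.0, 0.4])
sigma_obs = 3.0
d_obs = simulator(truth, t_obs) + rng.normal(0, sigma_obs, t_obs.size)

def log_post(theta):
    if np.any(theta <= 0):                     # simple positivity prior
        return -np.inf
    misfit = d_obs - simulator(theta, t_obs)
    return -0.5 * np.sum((misfit / sigma_obs) ** 2)

theta = np.array([80.0, 0.2])
samples = []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, [2.0, 0.02])
    if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
        theta = proposal
    samples.append(theta.copy())

samples = np.array(samples[5000:])             # discard burn-in
print("posterior mean of (q0, decay):", samples.mean(axis=0))
```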

  8. Back to the future: The Grassroots of Hydrological Uncertainty

    NASA Astrophysics Data System (ADS)

    Smith, K. A.

    2013-12-01

    Uncertainties are widespread within hydrological science, and as society is looking to models to provide answers as to how climate change may affect our future water resources, the performance of hydrological models should be evaluated. With uncertainties being introduced from input data, parameterisation, model structure, validation data, and 'unknown unknowns', it is easy to be pessimistic about model outputs. But uncertainties are an opportunity for scientific endeavour, not a threat. Investigation and suitable presentation of uncertainties, which results in a range of potential outcomes, provides more insight into model projections than just one answer. This paper aims to demonstrate the feasibility of conducting computationally demanding parameter uncertainty estimation experiments on global hydrological models (GHMs). Presently, individual GHMs tend to present their single best projection, but this leads to spurious precision - a false impression of certainty - which can be misleading to decision makers. Whilst uncertainty estimation is firmly established in catchment hydrology, GHM uncertainty, and parameter uncertainty in particular, has remained largely overlooked. Model inter-comparison studies that investigate model structure uncertainty have been undertaken (e.g. ISI-MIP, EU-WATCH etc.), but these studies seem premature when the uncertainties within each individual model itself have not yet been considered. This study takes a few steps back, going down to one of the first introductions of assumptions in model development, the assignment of model parameter values. Making use of the University of Nottingham's High Performance Computer Cluster (HPC), the Mac-PDM.09 GHM has been subjected to rigorous uncertainty experiments. The Generalised Likelihood Uncertainty Estimation method (GLUE) with Latin Hypercube Sampling has been applied to a GHM for the first time, to produce 100,000 simultaneous parameter perturbations. The results of this ensemble of 100
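
    For readers unfamiliar with GLUE, the sketch below shows the core of the method on a toy model: Latin hypercube sampling of parameter sets, an informal likelihood, a behavioural threshold, and percentile bounds on the predictions. The rainfall-runoff "model", thresholds and bounds are placeholders, not Mac-PDM.09 or its parameterisation.

```python
# GLUE with Latin hypercube sampling on a toy rainfall-runoff model.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(2)
rain = rng.gamma(2.0, 2.0, size=100)                      # synthetic forcing
q_obs = 0.6 * rain + 1.5 + rng.normal(0, 0.5, rain.size)  # synthetic "observed" runoff

def model(params, rain):
    runoff_coeff, baseflow = params
    return runoff_coeff * rain + baseflow

lhs = qmc.LatinHypercube(d=2, seed=3)
unit = lhs.random(n=5000)
params = qmc.scale(unit, l_bounds=[0.0, 0.0], u_bounds=[1.0, 5.0])

def nse(sim, obs):
    # informal GLUE likelihood: Nash-Sutcliffe efficiency
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

scores = np.array([nse(model(p, rain), q_obs) for p in params])
behavioural = params[scores > 0.7]                        # behavioural threshold (assumed)
preds = np.array([model(p, rain).mean() for p in behavioural])
print(f"{len(behavioural)} behavioural sets; mean-runoff 5-95% range:",
      np.percentile(preds, [5, 95]))
```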

  9. Uncertainty Quantification in Internal Dose Calculations for Seven Selected Radiopharmaceuticals.

    PubMed

    Spielmann, Vladimir; Li, Wei Bo; Zankl, Maria; Oeh, Uwe; Hoeschen, Christoph

    2016-01-01

    Dose coefficients of radiopharmaceuticals have been published by the International Commission on Radiological Protection (ICRP) and the MIRD Committee but without information concerning uncertainties. The uncertainty information of dose coefficients is important, for example, to compare alternative diagnostic methods and choose the method that causes the lowest patient exposure with appropriate and comparable diagnostic quality. For the study presented here, an uncertainty analysis method was developed and used to calculate the uncertainty of the internal doses of 7 common radiopharmaceuticals. On the basis of the generalized schema of dose calculation recommended by the ICRP and MIRD Committee, an analysis based on propagation of uncertainty was developed and applied for 7 radiopharmaceuticals. The method takes into account the uncertainties contributed from pharmacokinetic models and the so-called S values derived from several voxel computational phantoms previously developed at Helmholtz Zentrum München. Random and Latin hypercube sampling techniques were used to sample parameters of pharmacokinetic models and S values, and the uncertainties of absorbed doses and effective doses were calculated. The uncertainty factors (square root of the ratio between 97.5th and 2.5th percentiles) for organ-absorbed doses are in the range of 1.1-3.3. Uncertainty values of effective doses are lower in comparison to absorbed doses, the maximum value being approximately 1.4. The ICRP reference values showed a deviation comparable to the effective dose calculated in this study. A general statistical method was developed for calculating the uncertainty of absorbed doses and effective doses for 7 radiopharmaceuticals. The dose uncertainties can be used to further identify the most important parameters in the dose calculation and provide reliable dose coefficients for risk analysis of the patients in nuclear medicine. © 2016 by the Society of Nuclear Medicine and Molecular Imaging
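
    A hedged sketch of this kind of sampling-based propagation is shown below: absorbed dose to a target organ is the sum over source regions of time-integrated activity times the corresponding S value, and the "uncertainty factor" is the square root of the ratio of the 97.5th to the 2.5th percentile. The parameter values and spreads are illustrative placeholders, not the ICRP/MIRD data used in the study.

```python
# Monte Carlo propagation of pharmacokinetic and S-value uncertainties to dose.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# time-integrated activity coefficients (Bq*s per Bq administered), lognormal spread
a_tilde = {"liver": rng.lognormal(np.log(0.8), 0.2, n),
           "kidneys": rng.lognormal(np.log(0.3), 0.3, n)}
# S values (Gy per Bq*s) from each source region to the target organ, with spread
s_value = {"liver": rng.lognormal(np.log(2.0e-5), 0.15, n),
           "kidneys": rng.lognormal(np.log(5.0e-6), 0.15, n)}

dose = sum(a_tilde[src] * s_value[src] for src in a_tilde)  # Gy per Bq administered
p2_5, p97_5 = np.percentile(dose, [2.5, 97.5])
print("uncertainty factor (sqrt of P97.5/P2.5):", np.sqrt(p97_5 / p2_5))
```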

  10. Higher-order uncertainty relations

    NASA Astrophysics Data System (ADS)

    Wünsche, A.

    2006-07-01

    Using the non-negativity of Gram determinants of arbitrary order, we derive higher-order uncertainty relations for the symmetric uncertainty matrices of corresponding order n > 2 for n Hermitean operators (n = 2 is the usual case). The special cases of third-order and fourth-order uncertainty relations are considered in detail. The obtained third-order uncertainty relations are applied to the Lie groups SU(1,1) with three Hermitean basis operators (K1,K2,K0) and SU(2) with three Hermitean basis operators (J1,J2,J3), where, in particular, the group-coherent states of Perelomov type and of Barut-Girardello type for SU(1,1) and the spin or atomic coherent states for SU(2) are investigated. The uncertainty relations for the determinant of the third-order uncertainty matrix are satisfied with the equality sign for coherent states and this determinant becomes vanishing for the Perelomov type of coherent states for SU(1,1) and SU(2). As an example of the application of fourth-order uncertainty relations, we consider the canonical operators (Q1,P1,Q2,P2) of two boson modes and the corresponding uncertainty matrix formed by the operators of the corresponding mean deviations, taking into account the correlations between the two modes. In two mathematical appendices, we prove the non-negativity of the determinant of correlation matrices of arbitrary order and clarify the principal structure of higher-order uncertainty relations.
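
    For orientation, the second-order case of the Gram-determinant construction reduces to the familiar Robertson-Schrodinger relation; the block below states that standard reduction as background, not the paper's higher-order result.

```latex
% For deviation operators \Delta A = A - \langle A\rangle and
% \Delta B = B - \langle B\rangle, the Gram matrix of the vectors
% \Delta A|\psi\rangle and \Delta B|\psi\rangle is positive semidefinite,
% so its determinant is non-negative:
\[
\det\begin{pmatrix}
\langle (\Delta A)^2\rangle & \langle \Delta A\,\Delta B\rangle\\[2pt]
\langle \Delta B\,\Delta A\rangle & \langle (\Delta B)^2\rangle
\end{pmatrix}\;\ge\;0
\quad\Longrightarrow\quad
\sigma_A^2\,\sigma_B^2\;\ge\;
\Bigl|\tfrac{1}{2}\langle\{\Delta A,\Delta B\}\rangle\Bigr|^2
+\Bigl|\tfrac{1}{2i}\langle[A,B]\rangle\Bigr|^2 .
\]
% The higher-order relations of the paper follow from the same non-negativity
% applied to the Gram matrix of n > 2 operators.
```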

  11. Uncertainty and Anticipation in Anxiety

    PubMed Central

    Grupe, Dan W.; Nitschke, Jack B.

    2014-01-01

    Uncertainty about a possible future threat disrupts our ability to avoid it or to mitigate its negative impact, and thus results in anxiety. Here, we focus the broad literature on the neurobiology of anxiety through the lens of uncertainty. We identify five processes essential for adaptive anticipatory responses to future threat uncertainty, and propose that alterations to the neural instantiation of these processes result in maladaptive responses to uncertainty in pathological anxiety. This framework has the potential to advance the classification, diagnosis, and treatment of clinical anxiety. PMID:23783199

  12. Simplified propagation of standard uncertainties

    SciTech Connect

    Shull, A.H.

    1997-06-09

    An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine if the standard is adequate for its intended use or can one calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined or determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. A system of simplifying the calculations by dividing into subgroups of absolute and relative uncertainties is utilized. These methods also incorporate the International Standards Organization (ISO) concepts for combining systematic and random uncertainties as published in their Guide to the Expression of Measurement Uncertainty. Details of the simplified methods and examples of their use are included in the paper.
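
    A worked example of the shortcut rules alluded to above (absolute standard uncertainties add in quadrature for sums and differences, relative standard uncertainties add in quadrature for products and quotients) is sketched below for a standard prepared by dissolving a weighed mass in a measured volume; the numbers are illustrative, not taken from the paper.

```python
# Shortcut propagation for a quotient: combine relative uncertainties in quadrature.
import math

mass_mg, u_mass_mg = 250.0, 0.12        # weighed solute and its standard uncertainty
volume_ml, u_volume_ml = 100.0, 0.08    # delivered volume and its standard uncertainty

conc = mass_mg / volume_ml              # concentration of the prepared standard, mg/mL
rel_u = math.sqrt((u_mass_mg / mass_mg) ** 2 + (u_volume_ml / volume_ml) ** 2)
print(f"concentration = {conc:.3f} mg/mL, relative u = {rel_u:.2%}, "
      f"absolute u = {conc * rel_u:.4f} mg/mL")
```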

  13. Uncertainty in Agricultural Impact Assessment

    NASA Technical Reports Server (NTRS)

    Wallach, Daniel; Mearns, Linda O.; Rivington, Michael; Antle, John M.; Ruane, Alexander C.

    2014-01-01

    This chapter considers issues concerning uncertainty associated with modeling and its use within agricultural impact assessments. Information about uncertainty is important for those who develop assessment methods, since that information indicates the need for, and the possibility of, improvement of the methods and databases. Such information also allows one to compare alternative methods. Information about the sources of uncertainties is an aid in prioritizing further work on the impact assessment method. Uncertainty information is also necessary for those who apply assessment methods, e.g., for projecting climate change impacts on agricultural production and for stakeholders who want to use the results as part of a decision-making process (e.g., for adaptation planning). For them, uncertainty information indicates the degree of confidence they can place in the simulated results. Quantification of uncertainty also provides stakeholders with an important guideline for making decisions that are robust across the known uncertainties. Thus, uncertainty information is important for any decision based on impact assessment. Ultimately, we are interested in knowledge about uncertainty so that information can be used to achieve positive outcomes from agricultural modeling and impact assessment.

  14. Strain gauge measurement uncertainties on hydraulic turbine runner blade

    NASA Astrophysics Data System (ADS)

    Arpin-Pont, J.; Gagnon, M.; Tahan, S. A.; Coutu, A.; Thibault, D.

    2012-11-01

    Strains experimentally measured with strain gauges can differ from those evaluated using the Finite Element (FE) method. This difference is due mainly to the assumptions and uncertainties inherent to each method. To circumvent this difficulty, we developed a numerical method based on Monte Carlo simulations to evaluate measurement uncertainties produced by the behaviour of a unidirectional welded gauge, its position uncertainty and its integration effect. This numerical method uses the displacement fields of the studied part evaluated by an FE analysis. The paper presents a study case using in situ data measured on a hydraulic turbine runner. The FE analysis of the turbine runner blade was computed, and our numerical method used to evaluate uncertainties on strains measured at five locations with welded strain gauges. Then, measured strains and their uncertainty ranges are compared to the estimated strains. The uncertainty ranges obtained extended from 74 με to 165 με. Furthermore, the biases observed between the median of the uncertainty ranges and the FE strains varied from -36 με to 36 με. Note that strain gauge measurement uncertainties depend mainly on displacement fields and gauge geometry.

  15. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    SciTech Connect

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-09-28

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
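
    The sketch below illustrates the basic idea of treating the straight-line interpolation near short circuit as a regression: the intercept at V = 0 estimates Isc and the fit covariance gives its standard uncertainty. It uses ordinary least squares as a stand-in for the paper's objective Bayesian fit, and the I-V points are synthetic.

```python
# Straight-line fit near short circuit; Isc and its fit uncertainty from the covariance.
import numpy as np

rng = np.random.default_rng(5)
v = np.linspace(-0.02, 0.08, 15)                    # volts, window near V = 0
i_true = 5.20 - 1.5 * v                             # nearly linear region near Isc
i_meas = i_true + rng.normal(0, 0.002, v.size)      # measurement noise

coef, cov = np.polyfit(v, i_meas, deg=1, cov=True)  # slope, intercept and their covariance
isc, u_isc = coef[1], np.sqrt(cov[1, 1])
print(f"Isc = {isc:.4f} A +/- {u_isc:.4f} A (fit standard uncertainty)")
```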

  16. Controllable set analysis for planetary landing under model uncertainties

    NASA Astrophysics Data System (ADS)

    Long, Jiateng; Gao, Ai; Cui, Pingyuan

    2015-07-01

    Controllable set analysis is a useful method in planetary landing mission design: it supports the selection of feasible entry states that achieve landing accuracy and satisfy entry path constraints. In view of the severe impact of model uncertainties on planetary landing safety and accuracy, the purpose of this paper is to investigate the controllable set under uncertainties between the on-board model and the real situation. Controllable set analysis under model uncertainties is composed of controllable union set (CUS) analysis and controllable intersection set (CIS) analysis. Definitions of CUS and CIS are given, and a computational method for them based on the Gauss pseudospectral method is presented. Their application to the analysis of entry-state distributions under uncertainty, and to the robustness of nominal entry-state selection, is illustrated for Mars entry cases with uncertain ballistic coefficient, lift-to-drag ratio and atmospheric properties. With analysis of CUS and CIS, the robustness of entry state selection and entry trajectory to model uncertainties can be guaranteed, thus enhancing safety, reliability and accuracy during planetary entry and landing.

  17. Spatial uncertainty model for visual features using a Kinect™ sensor.

    PubMed

    Park, Jae-Han; Shin, Yong-Deuk; Bae, Ji-Hun; Baeg, Moon-Hong

    2012-01-01

    This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.
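
    A hedged sketch of the first-order version of this propagation is given below for a generic disparity model (Z = f*b/d, X = Z*(u - cx)/f, Y = Z*(v - cy)/f): the disparity-space covariance is pushed through the Jacobian of the mapping to obtain a Cartesian covariance. The intrinsics and variances are assumed placeholder values, not calibrated Kinect parameters.

```python
# First-order covariance propagation from (u, v, d) disparity space to (X, Y, Z).
import numpy as np

f, b, cx, cy = 580.0, 0.075, 320.0, 240.0       # assumed intrinsics (pixels, metres)

def to_cartesian(u, v, d):
    z = f * b / d
    return np.array([z * (u - cx) / f, z * (v - cy) / f, z])

def jacobian(u, v, d, eps=1e-4):
    # numerical Jacobian of (X, Y, Z) with respect to (u, v, d)
    p0 = np.array([u, v, d], dtype=float)
    J = np.empty((3, 3))
    for k in range(3):
        dp = np.zeros(3); dp[k] = eps
        J[:, k] = (to_cartesian(*(p0 + dp)) - to_cartesian(*(p0 - dp))) / (2 * eps)
    return J

cov_uvd = np.diag([0.5**2, 0.5**2, 0.3**2])     # assumed feature/disparity variances
u, v, d = 400.0, 260.0, 30.0
J = jacobian(u, v, d)
cov_xyz = J @ cov_uvd @ J.T                     # propagated spatial covariance
print("spatial standard deviations (m):", np.sqrt(np.diag(cov_xyz)))
```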

  18. Quantitative Assessment of Parametric Uncertainty in Northern Hemisphere PAH Concentrations.

    PubMed

    Thackray, Colin P; Friedman, Carey L; Zhang, Yanxu; Selin, Noelle E

    2015-08-04

    We quantitatively examine the relative importance of uncertainty in emissions and physicochemical properties (including reaction rate constants) to Northern Hemisphere (NH) and Arctic polycyclic aromatic hydrocarbon (PAH) concentrations, using a computationally efficient numerical uncertainty technique applied to the global-scale chemical transport model GEOS-Chem. Using polynomial chaos (PC) methods, we propagate uncertainties in physicochemical properties and emissions for the PAHs benzo[a]pyrene, pyrene and phenanthrene to simulated spatially resolved concentration uncertainties. We find that the leading contributors to parametric uncertainty in simulated concentrations are the black carbon-air partition coefficient and oxidation rate constant for benzo[a]pyrene, and the oxidation rate constants for phenanthrene and pyrene. NH geometric average concentrations are more sensitive to uncertainty in the atmospheric lifetime than to emissions rate. We use the PC expansions and measurement data to constrain parameter uncertainty distributions to observations. This narrows a priori parameter uncertainty distributions for phenanthrene and pyrene, and leads to higher values for OH oxidation rate constants and lower values for European PHE emission rates.
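
    The snippet below sketches non-intrusive polynomial chaos for a single uncertain parameter: the output is expanded in probabilists' Hermite polynomials of a standard normal germ, the coefficients are fitted by regression on model runs, and the output variance is read off the coefficients. The "model" is a toy surrogate, not GEOS-Chem, and the parameter distribution is invented.

```python
# One-dimensional non-intrusive polynomial chaos expansion by regression.
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

rng = np.random.default_rng(6)

def model(k_ox):
    # toy response: concentration falls off with the oxidation rate constant
    return 10.0 * np.exp(-0.8 * k_ox)

order = 4
xi = rng.standard_normal(400)                    # standard normal germ samples
k_samples = 1.0 + 0.2 * xi                       # uncertain rate constant ~ N(1, 0.2^2)
y = model(k_samples)

# regression matrix of He_0..He_order evaluated at the germ samples
Psi = np.column_stack([He.hermeval(xi, np.eye(order + 1)[j]) for j in range(order + 1)])
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

mean = coeffs[0]
variance = sum(coeffs[j] ** 2 * factorial(j) for j in range(1, order + 1))
print(f"PC mean = {mean:.3f}, PC std = {np.sqrt(variance):.3f}")
print(f"MC check: mean = {y.mean():.3f}, std = {y.std():.3f}")
```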

  19. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    SciTech Connect

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  20. Are models, uncertainty, and dispute resolution compatible?

    NASA Astrophysics Data System (ADS)

    Anderson, J. D.; Wilson, J. L.

    2013-12-01

    Models and their uncertainty often move from an objective use in planning and decision making into the regulatory environment, then sometimes on to dispute resolution through litigation or other legal forums. Through this last transition whatever objectivity the models and uncertainty assessment may have once possessed becomes biased (or more biased) as each party chooses to exaggerate either the goodness of a model, or its worthlessness, depending on which view is in its best interest. If worthlessness is desired, then what was uncertain becomes unknown, or even unknowable. If goodness is desired, then precision and accuracy are often exaggerated and uncertainty, if it is explicitly recognized, encompasses only some parameters or conceptual issues, ignores others, and may minimize the uncertainty that it accounts for. In dispute resolution, how well is the adversarial process able to deal with these biases? The challenge is that they are often cloaked in computer graphics and animations that appear to lend realism to what could be mostly fancy, or even a manufactured outcome. While junk science can be challenged through appropriate motions in federal court, and in most state courts, it is not unusual for biased or even incorrect modeling results, or conclusions based on incorrect results, to be permitted to be presented at trial. Courts allow opinions that are based on a "reasonable degree of scientific certainty," but when that 'certainty' is grossly exaggerated by an expert, one way or the other, how well do the courts determine that someone has stepped over the line? Trials are based on the adversary system of justice, so opposing and often irreconcilable views are commonly allowed, leaving it to the judge or jury to sort out the truth. Can advances in scientific theory and engineering practice, related to both modeling and uncertainty, help address this situation and better ensure that juries and judges see more objective modeling results, or at least see

  1. Optimizing Integrated Terminal Airspace Operations Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Bosson, Christabelle; Xue, Min; Zelinski, Shannon

    2014-01-01

    In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.

  2. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2011-09-01

    Verification and validation (V&V) are playing increasingly important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients in a complex nuclear reactor system. Traditional V&V in reactor system analysis focused more on the validation part or did not differentiate verification and validation. The traditional approach to uncertainty quantification is based on a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also inefficient for sensitivity analysis. Contrary to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents forward sensitivity analysis as a method to help uncertainty quantification. By including the time step and potentially the spatial step as special sensitivity parameters, the forward sensitivity method is extended as one method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflect global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty quantification. By knowing the relative sensitivity of time and space steps with respect to other physical parameters of interest, the simulation is allowed
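
    A minimal sketch of forward sensitivity analysis for an ODE model is given below: the state is augmented with the sensitivity s = dy/dtheta and both are integrated together, so the sensitivity is solved for directly rather than estimated by black-box re-runs. The decay model and parameter values are toy stand-ins, not a reactor-systems code.

```python
# Forward sensitivity analysis: integrate state and sensitivity equations together.
import numpy as np
from scipy.integrate import solve_ivp

theta = 0.7                                   # uncertain rate parameter

def rhs(t, z):
    y, s = z                                  # state and sensitivity dy/dtheta
    f = -theta * y                            # model equation  dy/dt = f(y, theta)
    # sensitivity equation: ds/dt = (df/dy) * s + df/dtheta
    return [f, -theta * s - y]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], dense_output=True)
y_end, s_end = sol.y[:, -1]
print(f"y(5) = {y_end:.4f}, dy/dtheta at t = 5: {s_end:.4f}")
# first-order uncertainty estimate, assuming a parameter standard deviation of 0.05:
print(f"approx std of y(5) due to theta: {abs(s_end) * 0.05:.4f}")
```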

  3. Precaution, uncertainty and causation in environmental decisions.

    PubMed

    Ricci, Paolo F; Rice, Dave; Ziagos, John; Cox, Louis A

    2003-04-01

    What measures of uncertainty and what causal analysis can improve the management of potentially severe, irreversible or dreaded environmental outcomes? Environmental choices show that policies intended to be precautionary (such as adding MTBE to petrol) can cause unanticipated harm (by mobilizing benzene, a known leukemogen, in the ground water). Many environmental law principles set the boundaries of what should be done but do not provide an operational construct to answer this question. Those principles, ranging from the precautionary principle to protecting human health from a significant risk of material health impairment, do not explain how to make environmental management choices when incomplete, inconsistent and complex scientific evidence characterizes potentially adverse environmental outcomes. Rather, they pass the task to lower jurisdictions such as agencies or authorities. To achieve the goals of the principle, those who draft it must deal with scientific causal conjectures, partial knowledge and variable data. In this paper we specifically deal with the qualitative and quantitative aspects of the European Union's (EU) explanation of consistency and with the examination of scientific developments relevant to variability, uncertain data and causation. Managing hazards under the precautionary principle requires inductive, empirical methods of assessment. However, acting on a scientific conjecture can also be socially unfair, costly, and detrimental when applied to complex environmental choices. We describe a constructive framework to rationally meet the command of the precautionary principle, using alternative measures of uncertainty and recent statistical methods of causal analysis. These measures and methods can bridge the gap between conjectured future irreversible or severe harm and scant scientific evidence, thus leading to more confident and resilient social choices. We review two sets of measures and computational systems to deal with uncertainty

  4. SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data

    SciTech Connect

    Williams, Mark L; Rearden, Bradley T

    2008-01-01

    Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.
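
    The core of this kind of sensitivity/uncertainty propagation is the "sandwich rule": the relative variance of a response is S^T C S, with S the vector of relative sensitivity coefficients and C the relative covariance matrix of the nuclear data. The sketch below uses illustrative numbers, not SCALE/TSUNAMI output.

```python
# Sandwich rule: response uncertainty from sensitivities and a data covariance matrix.
import numpy as np

# relative sensitivities of k-eff to three cross sections (dk/k per dsigma/sigma)
S = np.array([0.35, -0.12, 0.08])

# relative covariance matrix of those cross sections (variances on the diagonal)
C = np.array([[0.02**2, 0.5 * 0.02 * 0.03, 0.0],
              [0.5 * 0.02 * 0.03, 0.03**2, 0.0],
              [0.0, 0.0, 0.05**2]])

rel_var = S @ C @ S
print(f"relative uncertainty in response: {np.sqrt(rel_var):.4%}")
```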

  5. Uncertainty vs. Information (Invited)

    NASA Astrophysics Data System (ADS)

    Nearing, Grey

    2017-04-01

    Information theory is the branch of logic that describes how rational epistemic states evolve in the presence of empirical data (Knuth, 2005), and any logic of science is incomplete without such a theory. Developing a formal philosophy of science that recognizes this fact results in essentially trivial solutions to several longstanding problems that are generally considered intractable, including: • Alleviating the need for any likelihood function or error model. • Derivation of purely logical falsification criteria for hypothesis testing. • Specification of a general quantitative method for process-level model diagnostics. More generally, I make the following arguments: 1. Model evaluation should not proceed by quantifying and/or reducing error or uncertainty, and instead should be approached as a problem of ensuring that our models contain as much information as our experimental data. I propose that the latter is the only question a scientist actually has the ability to ask. 2. Instead of building geophysical models as solutions to differential equations that represent conservation laws, we should build models as maximum entropy distributions constrained by conservation symmetries. This will allow us to derive predictive probabilities directly from first principles. Knuth, K. H. (2005) 'Lattice duality: The origin of probability and entropy', Neurocomputing, 67, pp. 245-274.

  6. Uncertainties in Supernova Yields

    NASA Astrophysics Data System (ADS)

    Young, Patrick A.; Fryer, C. L.

    2006-12-01

    Theoretical nucleosynthetic yields from supernovae are sensitive to both the details of the progenitor star and the explosion calculation. We attempt to comprehensively identify the sources of uncertainties in these yields. In this poster we concentrate on the variations in yields from a single progenitor arising from common 1-dimensional methods of approximating a supernova explosion. 3-dimensional effects in the explosion and the progenitor and improved physics in the progenitor evolution are also given preliminary consideration. For the 1-dimensional explosions we find that both elemental and isotopic yields for Si and heavier elements are a sensitive function of explosion energy. Also, piston-driven and thermal bomb type explosions have different yields for the same explosion energy. Yields derived from 1-dimensional explosions are non-unique. Bulk yields of common elements can vary by factors of several depending upon the assumptions of the calculation. This work was carried out in part under the auspices of the National Nuclear Security Administration of the U.S. Department of Energy at Los Alamos National Laboratory and supported by Contract No. DE-AC52-06NA25396, by a DOE SciDAC grant DE-FC02-01ER41176, an NNSA ASC grant, and a subcontract to the ASCI FLASH Center at the University of Chicago.

  7. Pandemic influenza: certain uncertainties

    PubMed Central

    Morens, David M.; Taubenberger, Jeffery K.

    2011-01-01

    For at least five centuries, major epidemics and pandemics of influenza have occurred unexpectedly and at irregular intervals. Despite the modern notion that pandemic influenza is a distinct phenomenon obeying such constant (if incompletely understood) rules as dramatic genetic change, cyclicity, "wave" patterning, virus replacement, and predictable epidemic behavior, much evidence suggests the opposite. Although there is much that we know about pandemic influenza, there appears to be much more that we do not know. Pandemics arise as a result of various genetic mechanisms, have no predictable patterns of mortality among different age groups, and vary greatly in how and when they arise and recur. Some are followed by new pandemics, whereas others fade gradually or abruptly into long-term endemicity. Human influenza pandemics have been caused by viruses that evolved singly or in co-circulation with other pandemic virus descendants and often have involved significant transmission between, or establishment of, viral reservoirs within other animal hosts. In recent decades, pandemic influenza has continued to produce numerous unanticipated events that expose fundamental gaps in scientific knowledge. Influenza pandemics appear to be not a single phenomenon but a heterogeneous collection of viral evolutionary events whose similarities are overshadowed by important differences, the determinants of which remain poorly understood. These uncertainties make it difficult to predict influenza pandemics and, therefore, to adequately plan to prevent them. PMID:21706672

  8. Uncertainties in Arctic Precipitation

    NASA Astrophysics Data System (ADS)

    Majhi, I.; Alexeev, V. A.; Cherry, J. E.; Cohen, J. L.; Groisman, P. Y.

    2012-12-01

    Arctic precipitation is riddled with measurement biases, and addressing the problem is imperative. Our study compares various datasets, analyzes their biases for the Siberian region, and highlights the caution needed when using them. Five sources of data were used: NOAA's products (the raw data and Bogdanova's correction), Yang's correction technique, and two reanalysis products (ERA-Interim and NCEP). The reanalysis datasets performed better for some months in comparison to Yang's product, which tends to overestimate precipitation, and the raw dataset, which tends to underestimate it. The sources of bias vary from topography to wind to missing data. The final three products chosen show higher biases during the winter and spring seasons. Emphasis on equations which incorporate blizzards, blowing snow and higher wind speeds is necessary for regions influenced by any or all of these factors; Bogdanova's correction technique is the most robust of all the datasets analyzed and gives the most reasonable results. One of our future goals is to analyze the impact of precipitation uncertainties on water budget analysis for the Siberian rivers.

  9. FRAM's Isotopic Uncertainty Analysis.

    SciTech Connect

    Vo, Duc T.

    2005-01-01

    The Fixed-Energy Response-Function Analysis with Multiple Efficiency (FRAM) code was developed at Los Alamos National Laboratory to determine the isotopic composition of plutonium, uranium, and other actinides by gamma-ray spectrometry. The authors have studied and identified two different kinds of errors in FRAM analysis: random and systematic. The random errors come mainly from statistics and are easily determined. The systematic errors can come from a variety of sources and can be very difficult to determine. The authors carefully examined the FRAM analytical results of the archival plutonium data and of the data specifically acquired for this isotopic uncertainty analysis project, and found the relationship between the systematic errors and other parameters. They determined that FRAM's systematic errors could be expressed as functions of the peak resolution and shape, region of analysis, and burnup (for plutonium) or enrichment (for uranium). All other parameters, such as weight, matrix material, shape, size, container, electronics, detector, input rate, etc., either contribute little to the systematic error or contribute through the peak resolution and shape, in which case their contributions can be determined from the peak resolution and shape.

  10. Probabilistic Uncertainty of Parameters and Conceptual Models in Geophysical Inversion

    NASA Astrophysics Data System (ADS)

    Sambridge, M.; Hawkins, R.; Dettmer, J.

    2016-12-01

    Stochastic uncertainty in parameters estimated from geophysical observations has a long history. In the situation where the data model relationship is linear or may be linearized, and data noise can be characterized, then in principle the uncertainty can be estimated in a straightforward manner. In the optimistic case where data noise can be assumed to follow Gaussian errors with known variances and co-variances then much favoured matrix expressions are available that quantify stochastic model uncertainty for linear problems. As the number of data or unknowns increase, nonlinearity and/or non-uniqueness can become severe, or knowledge of data errors itself becomes uncertain, then there are significant practical challenges in the computation and interpretation of uncertainty. These challenges are well known and much effort has recently been devoted to finding efficient ways to quantify uncertainty for such cases. A major aspect of uncertainty that is often acknowledged but seldom addressed is conceptual uncertainty in the inversion process itself. By this we mean assumptions about the physics, chemistry or geology captured in the forward problem, assumptions about the level or type of data noise, and assumptions about the appropriate complexity and form of the model parameterization. Conceptual assumptions are made in building the inference framework in the first place and conceptual uncertainty can have a significant influence on and feedback with uncertainty quantification. This area is receiving increasing attention in the geosciences utilizing techniques from the field of computational Bayesian statistics, where they are referred to as model selection. This presentation will summarize recent, and not so recent, developments in this field, and point to some promising directions.

  11. Hydrology, society, change and uncertainty

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, Demetris

    2014-05-01

    Heraclitus, who asserted that "panta rhei", also proclaimed that "time is a child playing, throwing dice". Indeed, change and uncertainty are tightly connected. The type of change that can be predicted with accuracy is usually trivial. Also, decision making under certainty is mostly trivial. The current acceleration of change, due to unprecedented human achievements in technology, inevitably results in increased uncertainty. In turn, the increased uncertainty makes society apprehensive about the future, insecure, and credulous toward a developing future-telling industry. Several scientific disciplines, including hydrology, tend to become part of this industry. The social demand for certainties, even if these are delusional, is compounded by a misconception in the scientific community that confuses science with uncertainty elimination. However, recognizing that uncertainty is inevitable and tightly connected with change will help to appreciate the positive sides of both. Hence, uncertainty becomes an important object to study, understand and model. Decision making under uncertainty, developing adaptability and resilience for an uncertain future, and using technology and engineering means for planned change to control the environment are important and feasible tasks, all of which will benefit from advancements in the Hydrology of Uncertainty.

  12. Housing Uncertainty and Childhood Impatience

    ERIC Educational Resources Information Center

    Anil, Bulent; Jordan, Jeffrey L.; Zahirovic-Herbert, Velma

    2011-01-01

    The study demonstrates a direct link between housing uncertainty and children's time preferences, or patience. We show that students who face housing uncertainties through mortgage foreclosures and eviction learn impatient behavior and are therefore at greater risk of making poor intertemporal choices such as dropping out of school. We find that…

  13. Mama Software Features: Uncertainty Testing

    SciTech Connect

    Ruggiero, Christy E.; Porter, Reid B.

    2014-05-30

    This document reviews how the uncertainty in the calculations is being determined with test image data. The results of this testing give an ‘initial uncertainty’ number that can be used to estimate the ‘back end’ uncertainty in digital image quantification. Statisticians are refining these numbers as part of a UQ effort.

  14. Uncertainties in nuclear fission data

    NASA Astrophysics Data System (ADS)

    Talou, Patrick; Kawano, Toshihiko; Chadwick, Mark B.; Neudecker, Denise; Rising, Michael E.

    2015-03-01

    We review the current status of our knowledge of nuclear fission data, and quantify uncertainties related to each fission observable whenever possible. We also discuss the roles that theory and experiment play in reducing those uncertainties, contributing to the improvement of our fundamental understanding of the nuclear fission process as well as of evaluated nuclear data libraries used in nuclear applications.

  15. Research strategies for addressing uncertainties

    USGS Publications Warehouse

    Busch, David E.; Brekke, Levi D.; Averyt, Kristen; Jardine, Angela; Welling, Leigh; Garfin, Gregg; Jardine, Angela; Merideth, Robert; Black, Mary; LeRoy, Sarah

    2013-01-01

    Research Strategies for Addressing Uncertainties builds on descriptions of research needs presented elsewhere in the book; describes current research efforts and the challenges and opportunities to reduce the uncertainties of climate change; explores ways to improve the understanding of changes in climate and hydrology; and emphasizes the use of research to inform decision making.

  16. Uncertainty in Integrated Assessment Scenarios

    SciTech Connect

    Mort Webster

    2005-10-17

    The determination of climate policy is a decision under uncertainty. The uncertainty in future climate change impacts is large, as is the uncertainty in the costs of potential policies. Rational and economically efficient policy choices will therefore seek to balance the expected marginal costs with the expected marginal benefits. This approach requires that the risks of future climate change be assessed. The decision process need not be formal or quantitative for descriptions of the risks to be useful. Whatever the decision procedure, a useful starting point is to have as accurate a description of climate risks as possible. Given the goal of describing uncertainty in future climate change, we need to characterize the uncertainty in the main causes of uncertainty in climate impacts. One of the major drivers of uncertainty in future climate change is the uncertainty in future emissions, both of greenhouse gases and other radiatively important species such as sulfur dioxide. In turn, the drivers of uncertainty in emissions are uncertainties in the determinants of the rate of economic growth and in the technologies of production and how those technologies will change over time. This project uses historical experience and observations from a large number of countries to construct statistical descriptions of variability and correlation in labor productivity growth and in AEEI. The observed variability then provides a basis for constructing probability distributions for these drivers. The variance of uncertainty in growth rates can be further modified by expert judgment if it is believed that future variability will differ from the past. But often, expert judgment is more readily applied to projected median or expected paths through time. Analysis of past variance and covariance provides initial assumptions about future uncertainty for quantities that are less intuitive and difficult for experts to estimate, and these variances can be normalized and then applied to mean

  17. UNCERTAINTIES IN ATOMIC DATA AND THEIR PROPAGATION THROUGH SPECTRAL MODELS. I

    SciTech Connect

    Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.

    2013-06-10

    We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II].
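
    The dispersion-based estimate mentioned at the end of the abstract can be illustrated in a few lines: take several independent calculations of the same quantity (for example an A-value) and use their sample spread as an estimate of the intrinsic uncertainty. The three values below are invented for illustration.

```python
# Estimate the intrinsic uncertainty of an atomic quantity from the dispersion
# among independent calculations (illustrative numbers only).
import numpy as np

a_values = np.array([4.74e-3, 5.10e-3, 4.52e-3])    # s^-1, independent calculations
mean = a_values.mean()
rel_uncertainty = a_values.std(ddof=1) / mean        # sample dispersion as uncertainty
print(f"adopted A-value = {mean:.2e} s^-1, relative uncertainty ~ {rel_uncertainty:.1%}")
```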

  18. Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.

    NASA Technical Reports Server (NTRS)

    Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.

    2013-01-01

    We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic

  19. Inverse covariance simplification for efficient uncertainty management

    NASA Astrophysics Data System (ADS)

    Jalobeanu, A.; Gutiérrez, J. A.

    2007-11-01

    When it comes to manipulating uncertain knowledge such as noisy observations of physical quantities, one may ask how to do it in a simple way. Processing corrupted signals or images always propagates the uncertainties from the data to the final results, whether these errors are explicitly computed or not. When such error estimates are provided, it is crucial to handle them in such a way that their interpretation, or their use in subsequent processing steps, remain user-friendly and computationally tractable. A few authors follow a Bayesian approach and provide uncertainties as an inverse covariance matrix. Despite its apparent sparsity, this matrix contains many small terms that carry little information. Methods have been developed to select the most significant entries, through the use of information-theoretic tools for instance. One has to find a Gaussian pdf that is close enough to the posterior pdf, and with a small number of non-zero coefficients in the inverse covariance matrix. We propose to restrict the search space to Markovian models (where only neighbors can interact), well-suited to signals or images. The originality of our approach is in conserving the covariances between neighbors while setting to zero the entries of the inverse covariance matrix for all other variables. This fully constrains the solution, and the computation is performed via a fast, alternate minimization scheme involving quadratic forms. The Markovian structure advantageously reduces the complexity of Bayesian updating (where the simplified pdf is used as a prior). Moreover, uncertainties exhibit the same temporal or spatial structure as the data.

  20. Reformulating the Quantum Uncertainty Relation.

    PubMed

    Li, Jun-Li; Qiao, Cong-Feng

    2015-08-03

    The uncertainty principle is one of the cornerstones of quantum theory. In the literature, there are two types of uncertainty relations, the operator form concerning the variances of physical observables and the entropy form related to entropic quantities. Both these forms are inequalities involving pairwise observables, and extending them to multiple observables is found to be nontrivial. In this work we introduce a new form of uncertainty relation which may give complete trade-off relations for variances of observables in pure and mixed quantum systems. Unlike the prevailing uncertainty relations, which are either quantum state dependent or not directly measurable, our bounds for variances of observables are quantum state independent and immune from the "triviality" problem of having zero expectation values. Furthermore, the new uncertainty relation may provide a geometric explanation for the reason why there are limitations on the simultaneous determination of different observables in N-dimensional Hilbert space.
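
    For context, the two prevailing forms the abstract contrasts with are, in their standard statements, the variance-based Robertson bound and the Maassen-Uffink entropic bound; they are quoted below as background only, not as the paper's new relation.

```latex
% Standard pairwise relations for observables A and B with eigenbases
% {|a_i>} and {|b_j>}: the Robertson variance bound and the Maassen-Uffink
% entropic bound (Shannon entropies in base 2).
\[
\sigma_A\,\sigma_B \;\ge\; \tfrac{1}{2}\,\bigl|\langle[A,B]\rangle\bigr| ,
\qquad
H(A) + H(B) \;\ge\; -2\log_2 c ,
\quad c=\max_{i,j}\bigl|\langle a_i|b_j\rangle\bigr| .
\]
```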

  1. Equivalence theorem of uncertainty relations

    NASA Astrophysics Data System (ADS)

    Li, Jun-Li; Qiao, Cong-Feng

    2017-01-01

    We present an equivalence theorem to unify the two classes of uncertainty relations, i.e. the variance-based ones and the entropic forms, showing that the entropy of an operator in a quantum system can be built from the variances of a set of commutative operators. This means that an uncertainty relation in the language of entropy may be mapped onto a variance-based one, and vice versa. Employing the equivalence theorem, alternative formulations of entropic uncertainty relations are obtained for the qubit system that are stronger than the existing ones in the literature, and variance-based uncertainty relations for spin systems are reached from the corresponding entropic uncertainty relations.

  2. SENSIT: a cross-section and design sensitivity and uncertainty analysis code. [In FORTRAN for CDC-7600, IBM 360]

    SciTech Connect

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.

  3. Aspects of modeling uncertainty and prediction

    SciTech Connect

    McKay, M.D.

    1993-12-31

    Probabilistic assessment of variability in model prediction considers input uncertainty and structural uncertainty. For input uncertainty, understanding of practical origins of probabilistic treatments as well as restrictions and limitations of methodology is much more developed than for structural uncertainty. There is a simple basis for structural uncertainty that parallels that for input uncertainty. Although methodologies for assessing structural uncertainty for models in general are very limited, more options are available for submodels.

  4. Understanding and reducing statistical uncertainties in nebular abundance determinations

    NASA Astrophysics Data System (ADS)

    Wesson, R.; Stock, D. J.; Scicluna, P.

    2012-06-01

    Whenever observations are compared to theories, an estimate of the uncertainties associated with the observations is vital if the comparison is to be meaningful. However, many or even most determinations of temperatures, densities and abundances in photoionized nebulae do not quote the associated uncertainty. Those that do typically propagate the uncertainties using analytical techniques which rely on assumptions that generally do not hold. Motivated by this issue, we have developed the Nebular Empirical Analysis Tool (NEAT), a new code for calculating chemical abundances in photoionized nebulae. The code carries out a standard analysis of lists of emission lines using long-established techniques to estimate the amount of interstellar extinction, calculate representative temperatures and densities, compute ionic abundances from both collisionally excited lines and recombination lines, and finally to estimate total elemental abundances using an ionization correction scheme. NEAT uses a Monte Carlo technique to robustly propagate uncertainties from line flux measurements through to the derived abundances. We show that, for typical observational data, this approach is superior to analytic estimates of uncertainties. NEAT also accounts for the effect of upward biasing on measurements of lines with low signal-to-noise ratio, allowing us to accurately quantify the effect of this bias on abundance determinations. We find not only that the effect can result in significant overestimates of heavy element abundances derived from weak lines, but also that taking it into account reduces the uncertainty of these abundance determinations. Finally, we investigate the effect of possible uncertainties in R, the ratio of selective-to-total extinction, on abundance determinations. We find that the uncertainty due to this parameter is negligible compared to the statistical uncertainties due to typical line flux measurement uncertainties.
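
    The essence of the Monte Carlo propagation described above can be sketched in a few lines: resample each measured line flux within its quoted uncertainty, recompute the diagnostic each time, and take percentiles of the results. The diagnostic below is a simple flux ratio with invented fluxes, not NEAT's full abundance analysis.

```python
# Monte Carlo propagation of line-flux uncertainties to a derived flux ratio.
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# measured fluxes (relative to H-beta = 100) and their standard uncertainties
f_5007, u_5007 = 480.0, 15.0
f_4363, u_4363 = 6.0, 1.2          # weak line: low signal-to-noise

ratio = (rng.normal(f_5007, u_5007, n) /
         rng.normal(f_4363, u_4363, n))          # temperature-sensitive line ratio
lo, med, hi = np.percentile(ratio, [16, 50, 84])
print(f"ratio = {med:.1f} (+{hi - med:.1f} / -{med - lo:.1f})")
# note the asymmetric interval: dividing by a low-S/N flux skews the distribution,
# which analytic (linear) propagation would miss
```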

  5. Development of a Prototype Model-Form Uncertainty Knowledge Base

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.

    2016-01-01

    Uncertainties are generally classified as either aleatory or epistemic. Aleatory uncertainties are those attributed to random variation, either naturally or through manufacturing processes. Epistemic uncertainties are generally attributed to a lack of knowledge. One type of epistemic uncertainty is called model-form uncertainty. The term model-form means that among the choices to be made during a design process within an analysis, there are different forms of the analysis process, which each give different results for the same configuration at the same flight conditions. Examples of model-form uncertainties include the grid density, grid type, and solver type used within a computational fluid dynamics code, or the choice of the number and type of model elements within a structures analysis. The objectives of this work are to identify and quantify a representative set of model-form uncertainties and to make this information available to designers through an interactive knowledge base (KB). The KB can then be used during probabilistic design sessions, so as to enable the possible reduction of uncertainties in the design process through resource investment. An extensive literature search has been conducted to identify and quantify typical model-form uncertainties present within aerospace design. An initial attempt has been made to assemble the results of this literature search into a searchable KB, usable in real time during probabilistic design sessions. A concept of operations and the basic structure of a model-form uncertainty KB are described. Key operations within the KB are illustrated. Current limitations in the KB, and possible workarounds are explained.

  6. An Iterative Uncertainty Assessment Technique for Environmental Modeling

    SciTech Connect

    Engel, David W.; Liebetrau, Albert M.; Jarman, Kenneth D.; Ferryman, Thomas A.; Scheibe, Timothy D.; Didier, Brett T.

    2004-06-28

    The reliability of and confidence in predictions from model simulations are crucial; these predictions can significantly affect risk assessment decisions. For example, the fate of contaminants at the U.S. Department of Energy's Hanford Site has critical impacts on long-term waste management strategies. In the uncertainty estimation efforts for the Hanford Site-Wide Groundwater Modeling program, computational issues severely constrain both the number of uncertain parameters that can be considered and the degree of realism that can be included in the models. Substantial improvements in the overall efficiency of uncertainty analysis are needed to fully explore and quantify significant sources of uncertainty. We have combined state-of-the-art statistical and mathematical techniques in a unique iterative, limited sampling approach to efficiently quantify both local and global prediction uncertainties resulting from model input uncertainties. The approach is designed for application to widely diverse problems across multiple scientific domains. Results are presented for both an analytical model where the response surface is 'known' and a simplified contaminant fate transport and groundwater flow model. The results show that our iterative method for approximating a response surface (for subsequent calculation of uncertainty estimates) of specified precision requires less computing time than traditional approaches based upon noniterative sampling methods.
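
    The iterative, limited-sampling idea can be sketched as follows: fit a cheap surrogate to a small initial sample, check it against a few fresh model runs, and refine only until a stated precision target is met. The one-parameter toy model, polynomial surrogate, and tolerance below are assumptions for illustration; they are not the Hanford groundwater codes or the authors' statistical machinery.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_model(x):
        """Stand-in for a costly groundwater flow/transport simulation with one uncertain input."""
        return np.exp(-0.5 * x) * np.sin(3.0 * x)

    # Small initial design over the uncertain-parameter range [0, 2].
    xs = list(np.linspace(0.0, 2.0, 5))
    ys = [expensive_model(x) for x in xs]
    target = 1e-2            # requested absolute precision of the response surface
    converged = False

    for iteration in range(30):
        degree = min(len(xs) - 1, 8)
        surrogate = np.polynomial.Polynomial.fit(xs, ys, degree)
        # Check the surrogate against a few fresh model runs (the "limited sampling" step).
        checks = rng.uniform(0.0, 2.0, size=4)
        truth = np.array([expensive_model(c) for c in checks])
        errors = np.abs(surrogate(checks) - truth)
        if errors.max() < target:
            converged = True
            break
        # Reuse the check runs as new design points, refining where the surrogate was tested.
        xs.extend(checks)
        ys.extend(truth)

    print(f"converged={converged} after {len(xs)} model runs; worst check error = {errors.max():.2e}")
    ```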

  7. TSUNAMI Primer: A Primer for Sensitivity/Uncertainty Calculations with SCALE

    SciTech Connect

    Rearden, Bradley T; Mueller, Don; Bowman, Stephen M; Busch, Robert D.; Emerson, Scott

    2009-01-01

    This primer presents examples in the application of the SCALE/TSUNAMI tools to generate k{sub eff} sensitivity data for one- and three-dimensional models using TSUNAMI-1D and -3D and to examine uncertainties in the computed k{sub eff} values due to uncertainties in the cross-section data used in their calculation. The proper use of unit cell data and need for confirming the appropriate selection of input parameters through direct perturbations are described. The uses of sensitivity and uncertainty data to identify and rank potential sources of computational bias in an application system and TSUNAMI tools for assessment of system similarity using sensitivity and uncertainty criteria are demonstrated. Uses of these criteria in trending analyses to assess computational biases, bias uncertainties, and gap analyses are also described. Additionally, an application of the data adjustment tool TSURFER is provided, including identification of specific details of sources of computational bias.

  8. Quantifying and Qualifying USGS ShakeMap Uncertainty

    USGS Publications Warehouse

    Wald, David J.; Lin, Kuo-Wan; Quitoriano, Vincent

    2008-01-01

    We describe algorithms for quantifying and qualifying uncertainties associated with USGS ShakeMap ground motions. The uncertainty values computed consist of latitude/longitude grid-based multiplicative factors that scale the standard deviation associated with the ground motion prediction equation (GMPE) used within the ShakeMap algorithm for estimating ground motions. The resulting grid-based 'uncertainty map' is essential for evaluation of losses derived using ShakeMaps as the hazard input. For ShakeMap, ground motion uncertainty at any point is dominated by two main factors: (i) the influence of any proximal ground motion observations, and (ii) the uncertainty of estimating ground motions from the GMPE, most notably, elevated uncertainty due to initial, unconstrained source rupture geometry. The uncertainty is highest for larger magnitude earthquakes when source finiteness is not yet constrained and, hence, the distance to rupture is also uncertain. In addition to a spatially dependent, quantitative assessment, many users may prefer a simple, qualitative grading for the entire ShakeMap. We developed a grading scale that allows one to quickly gauge the appropriate level of confidence when using rapidly produced ShakeMaps as part of the post-earthquake decision-making process or for qualitative assessments of archived or historical earthquake ShakeMaps. We describe an uncertainty letter grading ('A' through 'F', for high to poor quality, respectively) based on the uncertainty map. A middle-range ('C') grade corresponds to a ShakeMap for a moderate-magnitude earthquake suitably represented with a point-source location. Lower grades 'D' and 'F' are assigned for larger events (M>6) where finite-source dimensions are not yet constrained. The addition of ground motion observations (or observed macroseismic intensities) reduces uncertainties over data-constrained portions of the map. Higher grades ('A' and 'B') correspond to ShakeMaps with constrained fault dimensions

  9. Exploration of Uncertainty in Glacier Modelling

    NASA Technical Reports Server (NTRS)

    Thompson, David E.

    1999-01-01

    There are procedures and methods for verification of coding algebra and for validations of models and calculations that are in use in the aerospace computational fluid dynamics (CFD) community. These methods would be efficacious if used by the glacier dynamics modelling community. This paper is a presentation of some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modelling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modelling community, and establishes a context for these within overall solution quality assessment. Finally, an information architecture and interactive interface is introduced and advocated. This Integrated Cryospheric Exploration (ICE) Environment is proposed for exploring and managing sources of uncertainty in glacier modelling codes and methods, and for supporting scientific numerical exploration and verification. The details and functionality of this Environment are described based on modifications of a system already developed for CFD modelling and analysis.

  10. Design of forging process variables under uncertainties

    NASA Astrophysics Data System (ADS)

    Repalle, Jalaja; Grandhi, Ramana V.

    2005-02-01

    Forging is a complex nonlinear process that is vulnerable to various manufacturing anomalies, such as variations in billet geometry, billet/die temperatures, material properties, and workpiece and forging equipment positional errors. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion, and reduced productivity. Identifying, quantifying, and controlling the uncertainties will reduce variability risk in a manufacturing environment, which will minimize the overall production cost. In this article, various uncertainties that affect the forging process are identified, and their cumulative effect on the forging tool life is evaluated. Because the forging process simulation is time-consuming, a response surface model is used to reduce computation time by establishing a relationship between the process performance and the critical process variables. A robust design methodology is developed by incorporating reliability-based optimization techniques to obtain sound forging components. A case study of an automotive-component forging-process design is presented to demonstrate the applicability of the method.

  11. Improvement of Statistical Decisions under Parametric Uncertainty

    NASA Astrophysics Data System (ADS)

    Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Rozevskis, Uldis

    2011-10-01

    A large number of problems in production planning and scheduling, location, transportation, finance, and engineering design require that decisions be made in the presence of uncertainty. Decision-making under uncertainty is a central problem in statistical inference, and has been formally studied in virtually all approaches to inference. The aim of the present paper is to show how the invariant embedding technique, the idea of which belongs to the authors, may be employed in the particular case of finding the improved statistical decisions under parametric uncertainty. This technique represents a simple and computationally attractive statistical method based on the constructive use of the invariance principle in mathematical statistics. Unlike the Bayesian approach, an invariant embedding technique is independent of the choice of priors. It allows one to eliminate unknown parameters from the problem and to find the best invariant decision rule, which has smaller risk than any of the well-known decision rules. To illustrate the proposed technique, application examples are given.

  12. Fluid flow dynamics under location uncertainty

    NASA Astrophysics Data System (ADS)

    Mémin, Etienne

    2014-03-01

    We present a derivation of a stochastic model of Navier-Stokes equations that relies on a decomposition of the velocity fields into a differentiable drift component and a time uncorrelated uncertainty random term. This type of decomposition is reminiscent in spirit of the classical Reynolds decomposition. However, the random velocity fluctuations considered here are not differentiable with respect to time, and they must be handled through stochastic calculus. The dynamics associated with the differentiable drift component is derived from a stochastic version of the Reynolds transport theorem. It includes in its general form an uncertainty dependent "subgrid" bulk formula that cannot be immediately related to the usual Boussinesq eddy viscosity assumption constructed from thermal molecular agitation analogy. This formulation, emerging from uncertainties on the fluid parcels location, explains with another viewpoint some subgrid eddy diffusion models currently used in computational fluid dynamics or in geophysical sciences and paves the way for new large-scale flow modelling. We finally describe an application of our formalism to the derivation of stochastic versions of the shallow water equations or to the definition of reduced order dynamical systems.

  13. Uncertainties of Mayak urine data

    SciTech Connect

    Miller, Guthrie; Vostrotin, Vadim; Vvdensky, Vladimir

    2008-01-01

    For internal dose calculations for the Mayak worker epidemiological study, quantitative estimates of uncertainty of the urine measurements are necessary. Some of the data consist of measurements of 24h urine excretion on successive days (e.g. 3 or 4 days). In a recent publication, dose calculations were done where the uncertainty of the urine measurements was estimated starting from the statistical standard deviation of these replicate measurements. This approach is straightforward and accurate when the number of replicate measurements is large; however, a Monte Carlo study showed it to be problematic for the actual number of replicate measurements (median from 3 to 4). Also, it is sometimes important to characterize the uncertainty of a single urine measurement. Therefore this alternate method has been developed. A method of parameterizing the uncertainty of Mayak urine bioassay measurements is described. The Poisson lognormal model is assumed and data from 63 cases (1099 urine measurements in all) are used to empirically determine the lognormal normalization uncertainty, given the measurement uncertainties obtained from count quantities. The natural logarithm of the geometric standard deviation of the normalization uncertainty is found to be in the range 0.31 to 0.35 including a measurement component estimated to be 0.2.

  14. Shock Layer Radiation Modeling and Uncertainty for Mars Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Brandis, Aaron M.; Sutton, Kenneth

    2012-01-01

    A model for simulating nonequilibrium radiation from Mars entry shock layers is presented. A new chemical kinetic rate model is developed that provides good agreement with recent EAST and X2 shock tube radiation measurements. This model includes a CO dissociation rate that is a factor of 13 larger than the rate used widely in previous models. Uncertainties in the proposed rates are assessed along with uncertainties in translational-vibrational relaxation modeling parameters. The stagnation point radiative flux uncertainty due to these flowfield modeling parameter uncertainties is computed to vary from 50 to 200% for a range of free-stream conditions, with densities ranging from 5e-5 to 5e-4 kg/m3 and velocities ranging from 6.3 to 7.7 km/s. These conditions cover the range of anticipated peak radiative heating conditions for proposed hypersonic inflatable aerodynamic decelerators (HIADs). Modeling parameters for the radiative spectrum are compiled along with a non-Boltzmann rate model for the dominant radiating molecules, CO, CN, and C2. A method for treating non-local absorption in the non-Boltzmann model is developed, which is shown to result in up to a 50% increase in the radiative flux through absorption by the CO 4th Positive band. The sensitivity of the radiative flux to the radiation modeling parameters is presented and the uncertainty for each parameter is assessed. The stagnation point radiative flux uncertainty due to these radiation modeling parameter uncertainties is computed to vary from 18 to 167% for the considered range of free-stream conditions. The total radiative flux uncertainty is computed as the root sum square of the flowfield and radiation parametric uncertainties, which results in total uncertainties ranging from 50 to 260%. The main contributors to these significant uncertainties are the CO dissociation rate and the CO heavy-particle excitation rates. Applying the baseline flowfield and radiation models developed in this work, the
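
    The root-sum-square combination quoted above can be checked directly: combining the flowfield (50 to 200%) and radiation (18 to 167%) parametric uncertainties in quadrature reproduces the stated total of roughly 50 to 260%. A minimal sketch:

    ```python
    import math

    # Parametric uncertainty ranges quoted in the abstract (percent of radiative flux).
    flowfield = (50.0, 200.0)
    radiation = (18.0, 167.0)

    # Root-sum-square combination of the two independent contributions at each end of the range.
    total_low = math.hypot(flowfield[0], radiation[0])
    total_high = math.hypot(flowfield[1], radiation[1])
    print(f"total uncertainty range: {total_low:.0f}% to {total_high:.0f}%")  # ~53% to ~261%
    ```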

  15. Dynamic optimization of biological networks under parametric uncertainty.

    PubMed

    Nimmegeers, Philippe; Telen, Dries; Logist, Filip; Impe, Jan Van

    2016-08-31

    Micro-organisms play an important role in various industrial sectors (including biochemical, food and pharmaceutical industries). A profound insight into the biochemical reactions inside micro-organisms enables an improved biochemical process control. Biological networks are an important tool in systems biology for incorporating microscopic level knowledge. Biochemical processes are typically dynamic, and the cells often have more than one objective; these objectives are typically conflicting, e.g., minimizing the energy consumption while maximizing the production of a specific metabolite. Therefore multi-objective optimization is needed to compute trade-offs between those conflicting objectives. In model-based optimization, one of the inherent problems is the presence of uncertainty. In biological processes, this uncertainty can be present due to, e.g., inherent biological variability. Not taking this uncertainty into account possibly leads to the violation of constraints and erroneous estimates of the actual objective function(s). To account for the variance in model predictions and compute a prediction interval, this uncertainty should be taken into account during process optimization. This leads to a challenging optimization problem under uncertainty, which requires a robustified solution. Three techniques for uncertainty propagation (linearization, sigma points and polynomial chaos expansion) are compared for the dynamic optimization of biological networks under parametric uncertainty. These approaches are compared in two case studies: (i) a three-step linear pathway model in which the accumulation of intermediate metabolites has to be minimized and (ii) a glycolysis inspired network model in which a multi-objective optimization problem is considered, being the minimization of the enzymatic cost and the minimization of the end time before reaching a minimum extracellular metabolite concentration. A Monte Carlo simulation procedure has been applied for the assessment of the
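
    Of the three propagation techniques compared, the sigma-point approach is the easiest to sketch for a scalar output: a small, deterministic set of points built from the input mean and covariance is pushed through the model and reassembled into an output mean and variance. The two-parameter toy model, parameter values, and tuning constants below are assumptions for illustration, not one of the paper's network models.

    ```python
    import numpy as np

    def toy_model(theta):
        """Placeholder model output, e.g. a metabolite concentration at the final time."""
        k1, k2 = theta
        return k1 / (k1 + k2)

    def sigma_point_propagation(mean, cov, f, alpha=1e-1, beta=2.0, kappa=0.0):
        """Unscented-transform style propagation of a Gaussian input through f."""
        n = len(mean)
        lam = alpha**2 * (n + kappa) - n
        sqrt_cov = np.linalg.cholesky((n + lam) * cov)
        points = [mean] + [mean + sqrt_cov[:, i] for i in range(n)] \
                        + [mean - sqrt_cov[:, i] for i in range(n)]
        wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
        ys = np.array([f(p) for p in points])
        y_mean = np.dot(wm, ys)
        y_var = np.dot(wc, (ys - y_mean) ** 2)
        return y_mean, y_var

    mean = np.array([1.0, 0.5])      # hypothetical kinetic parameter means
    cov = np.diag([0.04, 0.01])      # hypothetical parameter covariance
    m, v = sigma_point_propagation(mean, cov, toy_model)
    print(f"output mean ~ {m:.3f}, output std ~ {np.sqrt(v):.3f}")
    ```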

  16. Measurement uncertainty of lactase-containing tablets analyzed with FTIR.

    PubMed

    Paakkunainen, Maaret; Kohonen, Jarno; Reinikainen, Satu-Pia

    2014-01-01

    Uncertainty is one of the most critical aspects in determination of measurement reliability. In order to ensure accurate measurements, results need to be traceable and uncertainty measurable. In this study, homogeneity of FTIR samples is determined with a combination of variographic and multivariate approaches. An approach for estimation of uncertainty within an individual sample, as well as within repeated samples, is introduced. FTIR samples containing two commercial pharmaceutical lactase products (LactaNON and Lactrase) are applied as an example of the procedure. The results showed that the approach is suitable for the purpose. The sample pellets were quite homogeneous, since the total uncertainty of each pellet varied between 1.5% and 2.5%. The heterogeneity within a tablet strip was found to be dominant, as 15-20 tablets have to be analyzed in order to achieve a <5.0% expanded uncertainty level. Uncertainty arising from the FTIR instrument was <1.0%. The uncertainty estimates are computed directly from FTIR spectra without any concentration information of the analyte.
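
    The link between the number of tablets pooled and the expanded uncertainty of the mean can be sketched with the usual k = 2 coverage factor: the per-tablet relative standard deviation is divided by the square root of n. The per-tablet value below is an assumed placeholder chosen to be roughly consistent with the 15-20 tablet figure, not the paper's variographic estimate.

    ```python
    import math

    # Assumed per-tablet relative standard deviation from strip heterogeneity (illustrative only).
    per_tablet_rsd = 0.10
    coverage_factor = 2.0   # k = 2 expanded uncertainty (~95 % coverage)
    target_expanded = 0.05  # 5 % expanded uncertainty on the mean

    n = 1
    while coverage_factor * per_tablet_rsd / math.sqrt(n) >= target_expanded:
        n += 1
    print(f"tablets needed for <{target_expanded:.0%} expanded uncertainty: {n}")  # 17 with these numbers
    ```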

  17. Subspace-based Inverse Uncertainty Quantification for Nuclear Data Assessment

    SciTech Connect

    Khuwaileh, B.A.; Abdel-Khalik, H.S.

    2015-01-15

    Safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. An inverse problem can be defined and solved to assess the sources of uncertainty, and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. In this work a subspace-based algorithm for inverse sensitivity/uncertainty quantification (IS/UQ) has been developed to enable analysts to account for all sources of nuclear data uncertainties in support of target accuracy assessment-type analysis. An approximate analytical solution of the optimization problem is used to guide the search for the dominant uncertainty subspace. By limiting the search to a subspace, the degrees of freedom available for the optimization search are significantly reduced. A quarter PWR fuel assembly is modeled and the accuracy of the multiplication factor and the fission reaction rate are used as reactor attributes whose uncertainties are to be reduced. Numerical experiments are used to demonstrate the computational efficiency of the proposed algorithm. Our ongoing work is focusing on extending the proposed algorithm to account for various forms of feedback, e.g., thermal-hydraulics and depletion effects.

  18. Uncertainty in Regional Air Quality Modeling

    NASA Astrophysics Data System (ADS)

    Digar, Antara

    Effective pollution mitigation is the key to successful air quality management. Although states invest millions of dollars to predict future air quality, the regulatory modeling and analysis process to inform pollution control strategy remains uncertain. Traditionally deterministic ‘bright-line’ tests are applied to evaluate the sufficiency of a control strategy to attain an air quality standard. A critical part of regulatory attainment demonstration is the prediction of future pollutant levels using photochemical air quality models. However, because models are uncertain, they yield a false sense of precision that pollutant response to emission controls is perfectly known and may eventually mislead the selection of control policies. These uncertainties in turn affect the health impact assessment of air pollution control strategies. This thesis explores beyond the conventional practice of deterministic attainment demonstration and presents novel approaches to yield probabilistic representations of pollutant response to emission controls by accounting for uncertainties in regional air quality planning. Computationally-efficient methods are developed and validated to characterize uncertainty in the prediction of secondary pollutant (ozone and particulate matter) sensitivities to precursor emissions in the presence of uncertainties in model assumptions and input parameters. We also introduce impact factors that enable identification of model inputs and scenarios that strongly influence pollutant concentrations and sensitivity to precursor emissions. We demonstrate how these probabilistic approaches could be applied to determine the likelihood that any control measure will yield regulatory attainment, or could be extended to evaluate probabilistic health benefits of emission controls, considering uncertainties in both air quality models and epidemiological concentration-response relationships. Finally, ground-level observations for pollutant (ozone) and precursor

  19. The uncertainty of counting at a defined solid angle

    NASA Astrophysics Data System (ADS)

    Pommé, S.

    2015-06-01

    Specific uncertainty components of counting at a defined solid angle are discussed. It is potentially an extremely accurate technique for primary standardisation of activity of alpha emitters and low-energy x-ray emitters. Owing to its reproducibility, it is very well suited for half-life measurements. Considered sources of uncertainty are 1) source-detector geometry, 2) solid-angle calculation, 3) energy loss and self-absorption, 4) scattering, 5) detection efficiency. Other sources of uncertainty, such as source weighing, counting, dead time and decay data are common to other standardisation methods. Statistical uncertainty propagation formulas are presented for the solid angle subtended by a circular detector to radioactive sources. Computer simulations were performed to investigate aspects of particle scattering.
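
    For the simplest geometry covered by this technique, a point source on the axis of a circular aperture, the solid angle and the first-order propagation of the geometric uncertainties can be written down directly. The sketch below uses that on-axis formula with illustrative dimensions; the paper's formulas treat more general source and detector geometries.

    ```python
    import math

    def solid_angle(d, r):
        """Solid angle (sr) subtended by a circular aperture of radius r at an on-axis point
        source a distance d away: Omega = 2*pi*(1 - d / sqrt(d**2 + r**2))."""
        return 2.0 * math.pi * (1.0 - d / math.hypot(d, r))

    # Illustrative geometry and 1-sigma uncertainties (mm); real values come from metrology.
    d, u_d = 200.0, 0.02
    r, u_r = 4.0, 0.005

    omega = solid_angle(d, r)

    # First-order propagation with numerical partial derivatives.
    eps = 1e-6
    d_omega_dd = (solid_angle(d + eps, r) - solid_angle(d - eps, r)) / (2 * eps)
    d_omega_dr = (solid_angle(d, r + eps) - solid_angle(d, r - eps)) / (2 * eps)
    u_omega = math.sqrt((d_omega_dd * u_d) ** 2 + (d_omega_dr * u_r) ** 2)

    print(f"Omega = {omega:.6e} sr, relative uncertainty = {u_omega / omega:.2%}")
    ```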

  20. Collaborative framework for PIV uncertainty quantification: comparative assessment of methods

    NASA Astrophysics Data System (ADS)

    Sciacchitano, Andrea; Neal, Douglas R.; Smith, Barton L.; Warner, Scott O.; Vlachos, Pavlos P.; Wieneke, Bernhard; Scarano, Fulvio

    2015-07-01

    A posteriori uncertainty quantification of particle image velocimetry (PIV) data is essential to obtain accurate estimates of the uncertainty associated with a given experiment. This is particularly relevant when measurements are used to validate computational models or in design and decision processes. In spite of the importance of the subject, the first PIV uncertainty quantification (PIV-UQ) methods have been developed only in the last three years. The present work is a comparative assessment of four approaches recently proposed in the literature: the uncertainty surface method (Timmins et al 2012), the particle disparity approach (Sciacchitano et al 2013), the peak ratio criterion (Charonko and Vlachos 2013) and the correlation statistics method (Wieneke 2015). The analysis is based upon experiments conducted for this specific purpose, where several measurement techniques are employed simultaneously. The performances of the above approaches are surveyed across different measurement conditions and flow regimes.

  1. Optimal uncertainty relations in a modified Heisenberg algebra

    NASA Astrophysics Data System (ADS)

    Abdelkhalek, Kais; Chemissany, Wissam; Fiedler, Leander; Mangano, Gianpiero; Schwonnek, René

    2016-12-01

    Various theories that aim at unifying gravity with quantum mechanics suggest modifications of the Heisenberg algebra for position and momentum. From the perspective of quantum mechanics, such modifications lead to new uncertainty relations that are thought (but not proven) to imply the existence of a minimal observable length. Here we prove this statement in a framework of sufficient physical and structural assumptions. Moreover, we present a general method that allows us to formulate optimal and state-independent variance-based uncertainty relations. In addition, instead of variances, we make use of entropies as a measure of uncertainty and provide uncertainty relations in terms of min and Shannon entropies. We compute the corresponding entropic minimal lengths and find that the minimal length in terms of min entropy is exactly 1 bit.

  2. Analysis and Reduction of Complex Networks Under Uncertainty

    SciTech Connect

    Knio, Omar M

    2014-04-09

    This is a collaborative proposal that aims at developing new methods for the analysis and reduction of complex multiscale networks under uncertainty. The approach is based on combining methods of computational singular perturbation (CSP) and probabilistic uncertainty quantification. In deterministic settings, CSP yields asymptotic approximations of reduced-dimensionality “slow manifolds” on which a multiscale dynamical system evolves. Introducing uncertainty raises fundamentally new issues, particularly concerning its impact on the topology of slow manifolds, and means to represent and quantify associated variability. To address these challenges, this project uses polynomial chaos (PC) methods to reformulate uncertain network models, and to analyze them using CSP in probabilistic terms. Specific objectives include (1) developing effective algorithms that can be used to illuminate fundamental and unexplored connections among model reduction, multiscale behavior, and uncertainty, and (2) demonstrating the performance of these algorithms through applications to model problems.

  3. On the Morphology of Uncertainty in Human Perception and Cognition

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Solo, Ashu M. G.

    Human cognitive and perception processes have a great tolerance for imprecision or uncertainty. For this reason, the notions of perception and cognition have great importance in solving many decision making problems in engineering, medicine, science, and social science as there are innumerable uncertainties in real-world phenomena. These uncertainties can be broadly classified as either uncertainties arising from the random behavior of physical processes or uncertainties arising from human perception and cognition processes. Statistical theory can be used to model the former, but lacks the sophistication to process the latter. The theory of fuzzy logic has proven to be very effective in processing the latter. The methodology of computing with words and the computational theory of perceptions are branches of fuzzy logic that deal with the manipulation of words that act as labels for perceptions expressed in natural language propositions. New computing methods based on fuzzy logic can lead to greater adaptability, tractability, robustness, a lower cost solution, and better rapport with reality in the development of intelligent systems.

  4. Uncertainty relations for characteristic functions

    NASA Astrophysics Data System (ADS)

    Rudnicki, Łukasz; Tasca, D. S.; Walborn, S. P.

    2016-02-01

    We present the uncertainty relation for the characteristic functions (ChUR) of the quantum mechanical position and momentum probability distributions. This inequality is more general than the Heisenberg uncertainty relation and is saturated in two extreme cases for wave functions described by periodic Dirac combs. We further discuss a broad spectrum of applications of the ChUR; in particular, we constrain quantum optical measurements involving general detection apertures and provide the uncertainty relation that is relevant for loop quantum cosmology. A method to measure the characteristic function directly using an auxiliary qubit is also briefly discussed.

  5. Assessing uncertainty in physical constants

    NASA Astrophysics Data System (ADS)

    Henrion, Max; Fischhoff, Baruch

    1986-09-01

    Assessing the uncertainty due to possible systematic errors in a physical measurement unavoidably involves an element of subjective judgment. Examination of historical measurements and recommended values for the fundamental physical constants shows that the reported uncertainties have a consistent bias towards underestimating the actual errors. These findings are comparable to findings of persistent overconfidence in psychological research on the assessment of subjective probability distributions. Awareness of these biases could help in interpreting the precision of measurements, as well as provide a basis for improving the assessment of uncertainty in measurements.

  6. Realising the Uncertainty Enabled Model Web

    NASA Astrophysics Data System (ADS)

    Cornford, D.; Bastin, L.; Pebesma, E. J.; Williams, M.; Stasch, C.; Jones, R.; Gerharz, L.

    2012-12-01

    conversion between uncertainty types, and between the spatial / temporal support of service inputs / outputs. Finally we describe the tools being generated within the UncertWeb project, considering three main aspects: i) Elicitation of uncertainties on model inputs. We are developing tools to enable domain experts to provide judgements about input uncertainties from UncertWeb model components (e.g. parameters in meteorological models) which allow panels of experts to engage in the process and reach a consensus view on the current knowledge / beliefs about that parameter or variable. We are developing systems for continuous and categorical variables as well as stationary spatial fields. ii) Visualisation of the resulting uncertain outputs from the end of the workflow, but also at intermediate steps. At this point we have prototype implementations driven by the requirements from the use cases that motivate UncertWeb. iii) Sensitivity and uncertainty analysis on model outputs. Here we show the design of the overall system we are developing, including the deployment of an emulator framework to allow computationally efficient approaches. We conclude with a summary of the open issues and remaining challenges we are facing in UncertWeb, and provide a brief overview of how we plan to tackle these.

  7. Uncertainty quantification of squeal instability via surrogate modelling

    NASA Astrophysics Data System (ADS)

    Nobari, Amir; Ouyang, Huajiang; Bannister, Paul

    2015-08-01

    One of the major issues that car manufacturers are facing is the noise and vibration of brake systems. Of the different sorts of noise and vibration, which a brake system may generate, squeal as an irritating high-frequency noise costs the manufacturers significantly. Despite considerable research that has been conducted on brake squeal, the root cause of squeal is still not fully understood. The most common assumption, however, is mode-coupling. Complex eigenvalue analysis is the most widely used approach to the analysis of brake squeal problems. One of the major drawbacks of this technique, nevertheless, is that the effects of variability and uncertainty are not included in the results. Apparently, uncertainty and variability are two inseparable parts of any brake system. Uncertainty is mainly caused by friction, contact, wear and thermal effects while variability mostly stems from the manufacturing process, material properties and component geometries. Evaluating the effects of uncertainty and variability in the complex eigenvalue analysis improves the predictability of noise propensity and helps produce a more robust design. The biggest hurdle in the uncertainty analysis of brake systems is the computational cost and time. Most uncertainty analysis techniques rely on the results of many deterministic analyses. A full finite element model of a brake system typically consists of millions of degrees-of-freedom and many load cases. Running time of such models is so long that automotive industry is reluctant to do many deterministic analyses. This paper, instead, proposes an efficient method of uncertainty propagation via surrogate modelling. A surrogate model of a brake system is constructed in order to reproduce the outputs of the large-scale finite element model and overcome the issue of computational workloads. The probability distribution of the real part of an unstable mode can then be obtained by using the surrogate model with a massive saving of

  8. Non-scalar uncertainty: Uncertainty in dynamic systems

    NASA Technical Reports Server (NTRS)

    Martinez, Salvador Gutierrez

    1992-01-01

    The following point is stated throughout the paper: dynamic systems are usually subject to uncertainty, whether it is the unavoidable quantum uncertainty that arises at sufficiently small scales, uncertainty deliberately allowed by the researcher at large scales in order to simplify the problem, or uncertainty introduced by nonlinear interactions. Even though non-quantum uncertainty can generally be dealt with by using the ordinary probability formalisms, it can also be studied with the proposed non-scalar formalism. Thus, non-scalar uncertainty is a more general theoretical framework giving insight into the nature of uncertainty and providing a practical tool in those cases in which scalar uncertainty is not enough, such as when studying highly nonlinear dynamic systems. This paper's specific contribution is the general concept of non-scalar uncertainty and a first proposal for a methodology. Applications should be based upon this methodology. The advantage of this approach is to provide simpler mathematical models for prediction of the system states. Present conventional tools for dealing with uncertainty prove insufficient for an effective description of some dynamic systems. The main limitations are overcome by abandoning ordinary scalar algebra in the real interval (0, 1) in favor of a tensor field with a much richer structure and generality. This approach gives insight into the interpretation of Quantum Mechanics and will have its most profound consequences in the fields of elementary particle physics and nonlinear dynamic systems. Concepts like 'interfering alternatives' and 'discrete states' have an elegant explanation in this framework in terms of properties of dynamic systems such as strange attractors and chaos. The tensor formalism proves especially useful to describe the mechanics of representing dynamic systems with models that are closer to reality and have relatively much simpler solutions. It was found to be wise to get an approximate solution to an

  9. Climate Projections and Uncertainty Communication.

    PubMed

    Joslyn, Susan L; LeClerc, Jared E

    2016-01-01

    Lingering skepticism about climate change might be due in part to the way climate projections are perceived by members of the public. Variability between scientists' estimates might give the impression that scientists disagree about the fact of climate change rather than about details concerning the extent or timing. Providing uncertainty estimates might clarify that the variability is due in part to quantifiable uncertainty inherent in the prediction process, thereby increasing people's trust in climate projections. This hypothesis was tested in two experiments. Results suggest that including uncertainty estimates along with climate projections leads to an increase in participants' trust in the information. Analyses explored the roles of time, place, demographic differences (e.g., age, gender, education level, political party affiliation), and initial belief in climate change. Implications are discussed in terms of the potential benefit of adding uncertainty estimates to public climate projections. Copyright © 2015 Cognitive Science Society, Inc.

  10. Estimations of uncertainties of frequencies

    NASA Astrophysics Data System (ADS)

    Eyer, Laurent; Nicoletti, Jean-Marc; Morgenthaler, Stephan

    2016-10-01

    Diverse variable phenomena in the Universe are periodic. Astonishingly many of the periodic signals present in stars have timescales coinciding with human ones (from minutes to years). The periods of signals often have to be deduced from time series which are irregularly sampled and sparse; furthermore, correlations between the brightness measurements and their estimated uncertainties are common. The uncertainty on the frequency estimation is reviewed. We explore the astronomical and statistical literature, in both cases of regular and irregular samplings. The frequency uncertainty depends on the signal-to-noise ratio, the frequency, and the observational timespan. The shape of the light curve should also intervene, since sharp features such as exoplanet transits, stellar eclipses, and rising branches of pulsating stars give stringent constraints. We propose several procedures (parametric and nonparametric) to estimate the uncertainty on the frequency which are subsequently tested against simulated data to assess their performances.

  11. Estimations of uncertainties of frequencies

    NASA Astrophysics Data System (ADS)

    Eyer, Laurent; Nicoletti, Jean-Marc; Morgenthaler, Stephan

    2015-08-01

    Diverse variable phenomena in the Universe are periodic. Astonishingly many of the periodic signals present in stars have timescales coinciding with human ones (from minutes to years). The periods of signals often have to be deduced from time series which are irregularly sampled and sparse; furthermore, correlations between the brightness measurements and their estimated uncertainties are common. The uncertainty on the frequency estimation is reviewed. We explore the astronomical and statistical literature, in both cases of regular and irregular samplings. The frequency uncertainty depends on the signal-to-noise ratio, the frequency, and the observational timespan. The shape of the light curve should also intervene, since sharp features such as exoplanet transits, stellar eclipses, and rising branches of pulsating stars give stringent constraints. We propose several procedures (parametric and nonparametric) to estimate the uncertainty on the frequency which are subsequently tested against simulated data to assess their performances.

  12. Acoustic field and array response uncertainties in stratified ocean media.

    PubMed

    Hayward, Thomas J; Dhakal, Sagar

    2012-07-01

    The change-of-variables theorem of probability theory is applied to compute acoustic field and array beam power probability density functions (pdfs) in uncertain ocean environments represented by stratified, attenuating ocean waveguide models. Computational studies for one and two-layer waveguides investigate the functional properties of the acoustic field and array beam power pdfs. For the studies, the acoustic parameter uncertainties are represented by parametric pdfs. The field and beam response pdfs are computed directly from the parameter pdfs using the normal-mode representation and the change-of-variables theorem. For two-dimensional acoustic parameter uncertainties of sound speed and attenuation, the field and beam power pdfs exhibit irregular functional behavior and singularities associated with stationary points of the mapping, defined by acoustic propagation, from the parameter space to the field or beam power space. Implications for the assessment of orthogonal polynomial expansion and other methods for computing acoustic field pdfs are discussed.
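
    The change-of-variables step can be illustrated for a scalar, monotonic mapping: if a received quantity B = g(c) depends on one uncertain parameter c with density p_c, then p_B(b) = p_c(g^{-1}(b)) |dg^{-1}/db|. The exponential mapping and uniform parameter pdf below are placeholders for the normal-mode field calculation, chosen only to show the mechanics.

    ```python
    import numpy as np

    # Placeholder monotonic mapping from an uncertain attenuation parameter c to a received level B.
    g = lambda c: np.exp(-2.0 * c)          # "propagation" model (illustrative only)
    g_inv = lambda b: -0.5 * np.log(b)      # inverse mapping c = g^{-1}(b)
    dginv_db = lambda b: -0.5 / b           # derivative of the inverse

    # Uniform parametric uncertainty on c over [0.1, 0.5] (illustrative pdf).
    c_lo, c_hi = 0.1, 0.5
    p_c = lambda c: np.where((c >= c_lo) & (c <= c_hi), 1.0 / (c_hi - c_lo), 0.0)

    # Change-of-variables theorem: p_B(b) = p_c(g^{-1}(b)) * |d g^{-1}(b) / db|
    b = np.linspace(g(c_hi), g(c_lo), 400)
    p_b = p_c(g_inv(b)) * np.abs(dginv_db(b))

    # Sanity check: the transformed density integrates to one over its support.
    area = np.sum(0.5 * (p_b[1:] + p_b[:-1]) * np.diff(b))
    print(f"integral of p_B over its support ~ {area:.4f}")
    ```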

  13. Uncertainty quantification in reacting flow modeling.

    SciTech Connect

    Le Maître, Olivier P.; Reagan, Matthew T.; Knio, Omar M.; Ghanem, Roger Georges; Najm, Habib N.

    2003-10-01

    Uncertainty quantification (UQ) in the computational modeling of physical systems is important for scientific investigation, engineering design, and model validation. In this work we develop techniques for UQ based on spectral and pseudo-spectral polynomial chaos (PC) expansions, and we apply these constructions in computations of reacting flow. We develop and compare both intrusive and non-intrusive spectral PC techniques. In the intrusive construction, the deterministic model equations are reformulated using Galerkin projection into a set of equations for the time evolution of the field variable PC expansion mode strengths. The mode strengths relate specific parametric uncertainties to their effects on model outputs. The non-intrusive construction uses sampling of many realizations of the original deterministic model, and projects the resulting statistics onto the PC modes, arriving at the PC expansions of the model outputs. We investigate and discuss the strengths and weaknesses of each approach, and identify their utility under different conditions. We also outline areas where ongoing and future research are needed to address challenges with both approaches.
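
    The non-intrusive construction can be sketched for a single Gaussian uncertain parameter: evaluate the deterministic model at Gauss-Hermite quadrature nodes and project the samples onto Hermite polynomials to recover the PC mode strengths, from which the output mean and variance follow. The scalar toy model below is a stand-in for a reacting-flow solver; the expansion order and parameter values are illustrative.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss, hermeval
    from math import factorial, sqrt, pi

    def model(xi):
        """Stand-in for an expensive deterministic solver driven by one standard-normal germ xi."""
        return np.exp(0.3 * xi) + 0.1 * xi**2

    order = 4                                 # PC expansion order
    nodes, weights = hermegauss(order + 1)    # probabilists' Gauss-Hermite quadrature

    # Non-intrusive spectral projection: c_k = E[model(xi) He_k(xi)] / k!
    coeffs = []
    for k in range(order + 1):
        basis_k = np.zeros(order + 1)
        basis_k[k] = 1.0
        ek = hermeval(nodes, basis_k)                       # He_k evaluated at the quadrature nodes
        ck = np.sum(weights * model(nodes) * ek) / (sqrt(2 * pi) * factorial(k))
        coeffs.append(ck)

    # Mean and variance follow directly from the mode strengths.
    mean = coeffs[0]
    variance = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
    print(f"PC mean ~ {mean:.4f}, PC std ~ {np.sqrt(variance):.4f}")
    ```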

  14. A method for approximating acoustic-field-amplitude uncertainty caused by environmental uncertainties.

    PubMed

    James, Kevin R; Dowling, David R

    2008-09-01

    In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L(1) error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L(1) values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations and up to 10(N) when N>1.

  15. Thermodynamic and relativistic uncertainty relations

    NASA Astrophysics Data System (ADS)

    Artamonov, A. A.; Plotnikov, E. M.

    2017-01-01

    The thermodynamic uncertainty relation (UR) was verified experimentally. The experiments have shown the validity of the quantum analogue of the zeroth law of stochastic thermodynamics in the form of the saturated Schrödinger UR. We have also proposed a new type of UR for relativistic mechanics. These relations allow us to consider macroscopic phenomena within the limits of the ratio of the uncertainty relations for different physical quantities.

  16. Dynamical Realism and Uncertainty Propagation

    NASA Astrophysics Data System (ADS)

    Park, Inkwan

    In recent years, Space Situational Awareness (SSA) has become increasingly important as the number of tracked Resident Space Objects (RSOs) continues to grow. One of the most significant technical discussions in SSA is how to propagate state uncertainty in a way that is consistent with the highly nonlinear dynamical environment. In order to keep pace with this situation, various methods have been proposed to propagate uncertainty accurately by capturing the nonlinearity of the dynamical system. We notice that all of the methods commonly focus on a way to describe the dynamical system as precisely as possible based on a mathematical perspective. This study proposes a new perspective based on understanding dynamics of the evolution of uncertainty itself. We expect that profound insights into the dynamical system could present the possibility to develop a new method for accurate uncertainty propagation. These approaches lead naturally to the goals of the study. At first, we investigate the most dominant factors in the evolution of uncertainty to realize the dynamical system more rigorously. Second, we aim at developing the new method based on the first investigation, enabling efficient orbit uncertainty propagation while maintaining accuracy. We eliminate the short-period variations from the dynamical system, called a simplified dynamical system (SDS), to investigate the most dominant factors. In order to achieve this goal, the Lie transformation method is introduced since this transformation can define the solutions for each variation separately. From the first investigation, we conclude that the secular variations, including the long-period variations, are dominant for the propagation of uncertainty, i.e., short-period variations are negligible. Then, we develop the new method by combining the SDS and the higher-order nonlinear expansion method, called state transition tensors (STTs). The new method retains advantages of the SDS and the STTs and propagates

  17. Uncertainty Analysis Principles and Methods

    DTIC Science & Technology

    2007-09-01

    WARFARE CENTER WEAPONS DIVISION, CHINA LAKE NAVAL AIR WARFARE CENTER AIRCRAFT DIVISION, PATUXENT RIVER NAVAL UNDERSEA WARFARE CENTER DIVISION...total systematic uncertainties be combined in RSS. In many instances, the student's t-statistic, t95, is set equal to 2 and URSS is replaced by U95...GUM, the total uncertainty UADD, URSS or U95, was offered as a type of confidence limit: x − U95 ≤ true value ≤ x + U95. In some respects, these limits
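
    A minimal numeric sketch of the two combinations mentioned in the excerpt, assuming the conventional definitions U_ADD = B + t95*S and U_RSS = sqrt(B^2 + (t95*S)^2) for a systematic estimate B and a random component S; these definitions and the numbers below are assumptions for illustration, not values taken from the report.

    ```python
    import math

    # Assumed conventional definitions (not quoted from the report):
    #   U_ADD = B + t95 * S                  additive combination
    #   U_RSS = sqrt(B**2 + (t95 * S)**2)    root-sum-square combination
    B = 0.8      # combined systematic (bias) uncertainty estimate, illustrative units
    S = 0.3      # random (precision) standard uncertainty of the result
    t95 = 2.0    # Student's t taken as 2, as noted in the excerpt above

    U_add = B + t95 * S
    U_rss = math.sqrt(B**2 + (t95 * S)**2)
    print(f"U_ADD = {U_add:.2f}, U_RSS = {U_rss:.2f}")
    # The reported interval is then: x - U <= true value <= x + U
    ```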

  18. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  19. Wildfire Decision Making Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Thompson, M.

    2013-12-01

    Decisions relating to wildfire management are subject to multiple sources of uncertainty, and are made by a broad range of individuals, across a multitude of environmental and socioeconomic contexts. In this presentation I will review progress towards identification and characterization of uncertainties and how this information can support wildfire decision-making. First, I will review a typology of uncertainties common to wildfire management, highlighting some of the more salient sources of uncertainty and how they present challenges to assessing wildfire risk. This discussion will cover the expanding role of burn probability modeling, approaches for characterizing fire effects, and the role of multi-criteria decision analysis, and will provide illustrative examples of integrated wildfire risk assessment across a variety of planning scales. Second, I will describe a related uncertainty typology that focuses on the human dimensions of wildfire management, specifically addressing how social, psychological, and institutional factors may impair cost-effective risk mitigation. This discussion will encompass decision processes before, during, and after fire events, with a specific focus on active management of complex wildfire incidents. An improved ability to characterize uncertainties faced in wildfire management could lead to improved delivery of decision support, targeted communication strategies, and ultimately to improved wildfire management outcomes.

  20. Uncertainty in measurements by counting

    NASA Astrophysics Data System (ADS)

    Bich, Walter; Pennecchi, Francesca

    2012-02-01

    Counting is at the base of many high-level measurements, such as, for example, frequency measurements. In some instances the measurand itself is a number of events, such as spontaneous decays in activity measurements, or objects, such as colonies of bacteria in microbiology. Countings also play a fundamental role in everyday life. In any case, a counting is a measurement. A measurement result, according to its present definition, as given in the 'International Vocabulary of Metrology—Basic and general concepts and associated terms (VIM)', must include a specification concerning the estimated uncertainty. As concerns measurements by counting, this specification is not easy to encompass in the well-known framework of the 'Guide to the Expression of Uncertainty in Measurement', known as GUM, in which there is no guidance on the topic. Furthermore, the issue of uncertainty in countings has received little or no attention in the literature, so that it is commonly accepted that this category of measurements constitutes an exception in which the concept of uncertainty is not applicable, or, alternatively, that results of measurements by counting have essentially no uncertainty. In this paper we propose a general model for measurements by counting which allows an uncertainty evaluation compliant with the general framework of the GUM.

  1. Measurement uncertainty analysis techniques applied to PV performance measurements

    SciTech Connect

    Wells, C

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: Increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates that quality assurance and quality control measures have been accomplished; and defines valid data as data having known and documented paths of origin (including theory), measurements, traceability to measurement standards, computations, and uncertainty analysis of results.

  2. Formal modeling of a system of chemical reactions under uncertainty.

    PubMed

    Ghosh, Krishnendu; Schlipf, John

    2014-10-01

    We describe a novel formalism representing a system of chemical reactions, with imprecise rates of reactions and concentrations of chemicals, and describe a model reduction method, pruning, based on the chemical properties. We present two algorithms, midpoint approximation and interval approximation, for construction of efficient model abstractions with uncertainty in data. We evaluate computational feasibility by posing queries in computation tree logic (CTL) on a prototype of extracellular-signal-regulated kinase (ERK) pathway.
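
    The interval-approximation idea can be illustrated with ordinary interval arithmetic on a single mass-action rate: propagating imprecise rate constants and concentrations through d[P]/dt = k[A][B] yields an interval that encloses every consistent rate, while the midpoint approximation collapses each interval to its centre. The helpers and numbers below are a generic sketch, not the authors' CTL model-checking machinery.

    ```python
    # Minimal interval arithmetic for non-negative quantities represented as (lower, upper).
    def i_mul(x, y):
        return (x[0] * y[0], x[1] * y[1])

    def i_mid(x):
        return 0.5 * (x[0] + x[1])

    # Imprecise rate constant and concentrations (illustrative ranges).
    k = (0.8, 1.2)     # 1/(mM*s)
    A = (0.9, 1.1)     # mM
    B = (0.4, 0.6)     # mM

    rate_interval = i_mul(i_mul(k, A), B)           # interval approximation of d[P]/dt = k*[A]*[B]
    rate_midpoint = i_mid(k) * i_mid(A) * i_mid(B)  # midpoint approximation uses point values
    print(f"interval: [{rate_interval[0]:.3f}, {rate_interval[1]:.3f}] mM/s, "
          f"midpoint: {rate_midpoint:.3f} mM/s")
    ```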

  3. A New Mathematical Framework for Design Under Uncertainty

    DTIC Science & Technology

    2016-05-05

    sion and machine learning approaches; see Fig. 1. This will lead to a comprehensive description of system performance with less uncertainty than in the...blending multiple information sources via auto-regressive stochastic modeling. A computationally efficient machine learning framework is developed based on

  4. Principals' Sense of Uncertainty and Organizational Learning Mechanisms

    ERIC Educational Resources Information Center

    Schechter, Chen; Asher, Neomi

    2012-01-01

    Purpose: The purpose of the present study is to examine the effect of principals' sense of uncertainty on organizational learning mechanisms (OLMs) in schools. Design/methodology/approach: Data were collected from 130 school principals (90 women and 40 men) from both Tel-Aviv and Central districts in Israel. After computing the correlation between…

  5. Propagation of Uncertainty in System Parameters of a LWR Model by Sampling MCNPX Calculations - Burnup Analysis

    NASA Astrophysics Data System (ADS)

    Campolina, Daniel de A. M.; Lima, Claubia P. B.; Veloso, Maria Auxiliadora F.

    2014-06-01

    For all the physical components that comprise a nuclear system, there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for a best estimate calculation that has been replacing the conservative model calculations as the computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in input parameters of the reactor considered included geometry dimensions and densities. The capacity of the sampling-based method for burnup calculations was demonstrated when the sample size is optimized and many parameter uncertainties are investigated together in the same input.
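
    The Wilks criterion used to fix the sample size can be made concrete: for a one-sided 95/95 statement the smallest n with 1 - beta^n >= gamma is 59, while the first-order two-sided 95/95 tolerance interval used here requires 93 code runs. The helper below simply searches for the smallest n satisfying each criterion.

    ```python
    def wilks_one_sided(beta=0.95, gamma=0.95):
        """Smallest n such that 1 - beta**n >= gamma (first-order, one-sided)."""
        n = 1
        while 1.0 - beta**n < gamma:
            n += 1
        return n

    def wilks_two_sided(beta=0.95, gamma=0.95):
        """Smallest n such that 1 - beta**n - n*(1-beta)*beta**(n-1) >= gamma (first-order, two-sided)."""
        n = 2
        while 1.0 - beta**n - n * (1.0 - beta) * beta**(n - 1) < gamma:
            n += 1
        return n

    print(wilks_one_sided())  # 59
    print(wilks_two_sided())  # 93
    ```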

  6. A comparative study of new non-linear uncertainty propagation methods for space surveillance

    NASA Astrophysics Data System (ADS)

    Horwood, Joshua T.; Aristoff, Jeffrey M.; Singh, Navraj; Poore, Aubrey B.

    2014-06-01

    We propose a unified testing framework for assessing uncertainty realism during non-linear uncertainty propagation under the perturbed two-body problem of celestial mechanics, with an accompanying suite of metrics and benchmark test cases on which to validate different methods. We subsequently apply the testing framework to different combinations of uncertainty propagation techniques and coordinate systems for representing the uncertainty. In particular, we recommend the use of a newly-derived system of orbital element coordinates that mitigate the non-linearities in uncertainty propagation and the recently-developed Gauss von Mises filter which, when used in tandem, provide uncertainty realism over much longer periods of time compared to Gaussian representations of uncertainty in Cartesian spaces, at roughly the same computational cost.

  7. Do Orthopaedic Surgeons Acknowledge Uncertainty?

    PubMed

    Teunis, Teun; Janssen, Stein; Guitton, Thierry G; Ring, David; Parisien, Robert

    2016-06-01

    Much of the decision-making in orthopaedics rests on uncertain evidence. Uncertainty is therefore part of our normal daily practice, and yet physician uncertainty regarding treatment could diminish patients' health. It is not known if physician uncertainty is a function of the evidence alone or if other factors are involved. With added experience, uncertainty could be expected to diminish, but perhaps more influential are things like physician confidence, belief in the veracity of what is published, and even one's religious beliefs. In addition, it is plausible that the kind of practice a physician works in can affect the experience of uncertainty. Practicing physicians may not be immediately aware of these effects on how uncertainty is experienced in their clinical decision-making. We asked: (1) Does uncertainty and overconfidence bias decrease with years of practice? (2) What sociodemographic factors are independently associated with less recognition of uncertainty, in particular belief in God or other deity or deities, and how is atheism associated with recognition of uncertainty? (3) Do confidence bias (confidence that one's skill is greater than it actually is), degree of trust in the orthopaedic evidence, and degree of statistical sophistication correlate independently with recognition of uncertainty? We created a survey to establish an overall recognition of uncertainty score (four questions), trust in the orthopaedic evidence base (four questions), confidence bias (three questions), and statistical understanding (six questions). Seven hundred six members of the Science of Variation Group, a collaboration that aims to study variation in the definition and treatment of human illness, were approached to complete our survey. This group represents mainly orthopaedic surgeons specializing in trauma or hand and wrist surgery, practicing in Europe and North America, of whom the majority is involved in teaching. Approximately half of the group has more than 10 years

  8. Position-momentum uncertainty relations based on moments of arbitrary order

    SciTech Connect

    Zozor, Steeve; Portesi, Mariela; Sanchez-Moreno, Pablo; Dehesa, Jesus S.

    2011-05-15

    The position-momentum uncertainty-like inequality based on moments of arbitrary order for d-dimensional quantum systems, which is a generalization of the celebrated Heisenberg formulation of the uncertainty principle, is improved here by use of the Renyi-entropy-based uncertainty relation. The accuracy of the resulting lower bound is physico-computationally analyzed for the two main prototypes in d-dimensional physics: the hydrogenic and oscillator-like systems.
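    For orientation only (not reproduced from the paper): the lowest-order member of this family of moment-based relations is the usual d-dimensional Heisenberg inequality, which the higher-order and Renyi-entropy-based bounds generalize and sharpen. In units with hbar = 1 it reads, in LaTeX notation,

      \langle r^2 \rangle \, \langle p^2 \rangle \;\ge\; \frac{d^2}{4} .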

  9. On the formulation of a minimal uncertainty model for robust control with structured uncertainty

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert

    1991-01-01

    In the design and analysis of robust control systems for uncertain plants, representing the system transfer matrix in the form of what has come to be termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents a transfer function matrix M(s) of the nominal closed loop system, and the delta represents an uncertainty matrix acting on M(s). The nominal closed loop system M(s) results from closing the feedback control system, K(s), around a nominal plant interconnection structure P(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or unstructured uncertainty from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, but for real parameter variations delta is a diagonal matrix of real elements. Conceptually, the M-delta structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the currently available literature addresses computational methods for obtaining this structure, and none of this literature addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty, where the term minimal refers to the dimension of the delta matrix. Since having a minimally dimensioned delta matrix would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta would be useful. Hence, a method of obtaining the interconnection system P(s) is required. A generalized procedure for obtaining a minimal P-delta structure for systems with real parameter variations is presented. Using this model, the minimal M-delta model can then be easily obtained by closing the feedback loop. The procedure involves representing the system in a cascade-form state-space realization, determining the minimal uncertainty matrix

  10. RUMINATIONS ON NDA MEASUREMENT UNCERTAINTY COMPARED TO DA UNCERTAINTY

    SciTech Connect

    Salaymeh, S.; Ashley, W.; Jeffcoat, R.

    2010-06-17

    It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete, or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail for the subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non-routine situations, because data interpretation often involves complex algorithms and logic combined in a highly intertwined way. The methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to developing understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice and embracing the continuous-improvement philosophy, a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in nondestructive assay of nuclear materials. This paper outlines the needs analysis, objectives, and ongoing development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community, we hope this article will encourage dialogue and the sharing of best practice, and motivate developers to revisit the treatment of measurement uncertainty.

  11. Uncertainty quantification in coronary blood flow simulations: Impact of geometry, boundary conditions and blood viscosity.

    PubMed

    Sankaran, Sethuraman; Kim, Hyun Jin; Choi, Gilwoo; Taylor, Charles A

    2016-08-16

    Computational fluid dynamic methods are currently being used clinically to simulate blood flow and pressure and predict the functional significance of atherosclerotic lesions in patient-specific models of the coronary arteries extracted from noninvasive coronary computed tomography angiography (cCTA) data. One such technology, FFRCT, or noninvasive fractional flow reserve derived from CT data, has demonstrated high diagnostic accuracy as compared to invasively measured fractional flow reserve (FFR) obtained with a pressure wire inserted in the coronary arteries during diagnostic cardiac catheterization. However, uncertainties in modeling as well as in measurement result in differences between these predicted and measured hemodynamic indices. Uncertainty in modeling can manifest in two forms: anatomic uncertainty, resulting in errors in the reconstructed 3D model, and physiologic uncertainty, resulting in errors in boundary conditions or blood viscosity. We present a data-driven framework for modeling these uncertainties and study their impact on blood flow simulations. The incompressible Navier-Stokes equations are used to model blood flow and an adaptive stochastic collocation method is used to model uncertainty propagation in the Navier-Stokes equations. We perform uncertainty quantification in two geometries, an idealized stenosis model and a patient-specific model. We show that uncertainty in minimum lumen diameter (MLD) has the largest impact on hemodynamic simulations, followed by boundary resistance, viscosity and lesion length. We show that near the diagnostic cutoff (FFRCT=0.8), the uncertainty due to the latter three variables is lower than measurement uncertainty, while the uncertainty due to MLD is only slightly higher than measurement uncertainty. We also show that the uncertainties are not additive but only slightly higher than the highest single-parameter uncertainty. The method presented here can be used to output interval estimates of hemodynamic indices
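    The adaptive stochastic collocation mentioned above is, at its core, a non-intrusive quadrature over the uncertain inputs. The sketch below is a generic illustration, not the FFRCT code: a single Gaussian uncertain parameter, standing in for blood viscosity, is propagated through a placeholder model by evaluating it only at Gauss-Hermite nodes; the model and the mean and standard deviation are invented for the example.

      import numpy as np

      def model(viscosity):
          # stand-in for one deterministic flow solve returning a scalar index
          return 0.9 - 2.0 * viscosity

      mu, sigma = 0.04, 0.004            # assumed mean and std of the uncertain input
      nodes, weights = np.polynomial.hermite.hermgauss(7)   # physicists' Gauss-Hermite rule

      # change of variables for Z ~ N(0,1): x = mu + sigma*sqrt(2)*node,
      # E[g(X)] = (1/sqrt(pi)) * sum_i w_i * g(x_i)
      x = mu + sigma * np.sqrt(2.0) * nodes
      vals = np.array([model(xi) for xi in x])
      mean = np.sum(weights * vals) / np.sqrt(np.pi)
      var = np.sum(weights * vals**2) / np.sqrt(np.pi) - mean**2
      print(mean, np.sqrt(var))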

  12. Intolerance of uncertainty in emotional disorders: What uncertainties remain?

    PubMed

    Shihata, Sarah; McEvoy, Peter M; Mullan, Barbara Ann; Carleton, R Nicholas

    2016-06-01

    The current paper presents a future research agenda for intolerance of uncertainty (IU), which is a transdiagnostic risk and maintaining factor for emotional disorders. In light of the accumulating interest and promising research on IU, it is timely to emphasize the theoretical and therapeutic significance of IU, as well as to highlight what remains unknown about IU across areas such as development, assessment, behavior, threat and risk, and relationships to cognitive vulnerability factors and emotional disorders. The present paper was designed to provide a synthesis of what is known and unknown about IU, and, in doing so, proposes broad and novel directions for future research to address the remaining uncertainties in the literature.

  13. Extending BEAMS to incorporate correlated systematic uncertainties

    SciTech Connect

    Knights, Michelle; Bassett, Bruce A.; Varughese, Melvin; Newling, James; Hlozek, Renée; Kunz, Martin; Smith, Mat

    2013-01-01

    New supernova surveys such as the Dark Energy Survey, Pan-STARRS and the LSST will produce an unprecedented number of photometric supernova candidates, most with no spectroscopic data. Avoiding biases in cosmological parameters due to the resulting inevitable contamination from non-Ia supernovae can be achieved with the BEAMS formalism, allowing for fully photometric supernova cosmology studies. Here we extend BEAMS to deal with the case in which the supernovae are correlated by systematic uncertainties. The analytical form of the full BEAMS posterior requires evaluating 2^N terms, where N is the number of supernova candidates. This 'exponential catastrophe' is computationally unfeasible even for N of order 100. We circumvent the exponential catastrophe by marginalising numerically instead of analytically over the possible supernova types: we augment the cosmological parameters with nuisance parameters describing the covariance matrix and the types of all the supernovae, τ_i, that we include in our MCMC analysis. We show that this method deals well even with large, unknown systematic uncertainties without a major increase in computational time, whereas ignoring the correlations can lead to significant biases and incorrect credible contours. We then compare the numerical marginalisation technique with a perturbative expansion of the posterior based on the insight that future surveys will have exquisite light curves and hence the probability that a given candidate is a Type Ia will be close to unity or zero, for most objects. Although this perturbative approach changes computation of the posterior from a 2^N problem into an N^2 or N^3 one, we show that it leads to biases in general through a small number of misclassifications, implying that numerical marginalisation is superior.
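    The computational trick described above, replacing the analytic sum over 2^N type assignments with sampling of per-object type parameters τ_i, can be illustrated with a toy Gibbs sampler. The sketch below is not the BEAMS code and all of its numbers are invented; it only shows how augmenting the chain with latent types makes the marginalisation cost linear in N per iteration.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 200
      is_ia = rng.random(N) < 0.7                           # true (unknown) types
      y = np.where(is_ia, rng.normal(0.1, 0.15, N),         # "Ia" residuals around mu = 0.1
                          rng.normal(1.0, 0.40, N))         # contaminants around 1.0
      p_ia = np.clip(0.7 + 0.1 * rng.standard_normal(N), 0.05, 0.95)  # photometric type probabilities

      s1, s2, b = 0.15, 0.40, 1.0      # assumed population widths and contaminant offset
      tau0 = 10.0                      # broad prior std on the parameter of interest mu
      mu, draws = 0.0, []
      for it in range(3000):
          # 1) sample a type for each candidate given mu: this replaces the
          #    analytic sum over all 2**N type assignments
          w1 = p_ia * np.exp(-0.5 * ((y - mu) / s1) ** 2) / s1
          w0 = (1 - p_ia) * np.exp(-0.5 * ((y - b) / s2) ** 2) / s2
          tau = rng.random(N) < w1 / (w1 + w0)
          # 2) sample mu given the types (conjugate normal update over the "Ia" subset)
          prec = 1.0 / tau0**2 + tau.sum() / s1**2
          mean = (y[tau].sum() / s1**2) / prec
          mu = rng.normal(mean, 1.0 / np.sqrt(prec))
          draws.append(mu)
      print(np.mean(draws[500:]), np.std(draws[500:]))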

  14. Extending BEAMS to incorporate correlated systematic uncertainties

    NASA Astrophysics Data System (ADS)

    Knights, Michelle; Bassett, Bruce A.; Varughese, Melvin; Hlozek, Renée; Kunz, Martin; Smith, Mat; Newling, James

    2013-01-01

    New supernova surveys such as the Dark Energy Survey, Pan-STARRS and the LSST will produce an unprecedented number of photometric supernova candidates, most with no spectroscopic data. Avoiding biases in cosmological parameters due to the resulting inevitable contamination from non-Ia supernovae can be achieved with the BEAMS formalism, allowing for fully photometric supernova cosmology studies. Here we extend BEAMS to deal with the case in which the supernovae are correlated by systematic uncertainties. The analytical form of the full BEAMS posterior requires evaluating 2^N terms, where N is the number of supernova candidates. This 'exponential catastrophe' is computationally unfeasible even for N of order 100. We circumvent the exponential catastrophe by marginalising numerically instead of analytically over the possible supernova types: we augment the cosmological parameters with nuisance parameters describing the covariance matrix and the types of all the supernovae, τ_i, that we include in our MCMC analysis. We show that this method deals well even with large, unknown systematic uncertainties without a major increase in computational time, whereas ignoring the correlations can lead to significant biases and incorrect credible contours. We then compare the numerical marginalisation technique with a perturbative expansion of the posterior based on the insight that future surveys will have exquisite light curves and hence the probability that a given candidate is a Type Ia will be close to unity or zero, for most objects. Although this perturbative approach changes computation of the posterior from a 2^N problem into an N^2 or N^3 one, we show that it leads to biases in general through a small number of misclassifications, implying that numerical marginalisation is superior.

  15. Entropic uncertainty and measurement reversibility

    NASA Astrophysics Data System (ADS)

    Berta, Mario; Wehner, Stephanie; Wilde, Mark M.

    2015-11-01

    The entropic uncertainty relation with quantum side information (EUR-QSI) from [Berta et al., Nat. Phys. 6, 659 (2010)] is a unifying principle relating two distinctive features of quantum mechanics: quantum uncertainty due to measurement incompatibility, and entanglement. In these relations, quantum uncertainty takes the form of preparation uncertainty where one of two incompatible measurements is applied. In particular, the "uncertainty witness" lower bound in the EUR-QSI is not a function of a post-measurement state. An insightful proof of the EUR-QSI from [Coles et al., Phys. Rev. Lett. 108, 210405 (2012)] makes use of a fundamental mathematical consequence of the postulates of quantum mechanics known as the non-increase of quantum relative entropy under quantum channels. Here, we exploit this perspective to establish a tightening of the EUR-QSI which adds a new state-dependent term in the lower bound, related to how well one can reverse the action of a quantum measurement. As such, this new term is a direct function of the post-measurement state and can be thought of as quantifying how much disturbance a given measurement causes. Our result thus quantitatively unifies this feature of quantum mechanics with the others mentioned above. We have experimentally tested our theoretical predictions on the IBM Quantum Experience and find reasonable agreement between our predictions and experimental outcomes.
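    For reference, the base relation being tightened is the EUR-QSI of Berta et al., quoted here from memory rather than from the paper: for incompatible measurements X and Z on system A, with quantum memory B and maximal overlap c between the measurement bases,

      H(X|B) + H(Z|B) \;\ge\; \log_2\frac{1}{c} + H(A|B), \qquad c = \max_{x,z}\,\bigl|\langle \phi_x | \psi_z \rangle\bigr|^2 .

    The contribution of the paper is an additional state-dependent term on the right-hand side quantifying how well the measurement can be reversed.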

  16. Entropic uncertainty and measurement reversibility

    NASA Astrophysics Data System (ADS)

    Berta, Mario; Wehner, Stephanie; Wilde, Mark M.

    2016-07-01

    The entropic uncertainty relation with quantum side information (EUR-QSI) from (Berta et al 2010 Nat. Phys. 6 659) is a unifying principle relating two distinctive features of quantum mechanics: quantum uncertainty due to measurement incompatibility, and entanglement. In these relations, quantum uncertainty takes the form of preparation uncertainty where one of two incompatible measurements is applied. In particular, the ‘uncertainty witness’ lower bound in the EUR-QSI is not a function of a post-measurement state. An insightful proof of the EUR-QSI from (Coles et al 2012 Phys. Rev. Lett. 108 210405) makes use of a fundamental mathematical consequence of the postulates of quantum mechanics known as the non-increase of quantum relative entropy under quantum channels. Here, we exploit this perspective to establish a tightening of the EUR-QSI which adds a new state-dependent term in the lower bound, related to how well one can reverse the action of a quantum measurement. As such, this new term is a direct function of the post-measurement state and can be thought of as quantifying how much disturbance a given measurement causes. Our result thus quantitatively unifies this feature of quantum mechanics with the others mentioned above. We have experimentally tested our theoretical predictions on the IBM quantum experience and find reasonable agreement between our predictions and experimental outcomes.

  17. Uncertainty Estimation Improves Energy Measurement and Verification Procedures

    SciTech Connect

    Walter, Travis; Price, Phillip N.; Sohn, Michael D.

    2014-05-14

    Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy used to how much energy would have been used in absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties with deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use using short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates which can provide actionable decision making information for investing in energy conservation measures.
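    The cross-validation idea above is straightforward to sketch. The code below is a generic illustration, not the authors' implementation: a simple linear baseline model is fit on k-1 folds of interval meter data and the spread of its out-of-sample residuals on the held-out folds is used as the uncertainty estimate; the synthetic temperature-driven data are invented for the example.

      import numpy as np

      def fit_baseline(X, y):
          # simple linear baseline, e.g. energy ~ [1, outdoor temperature]
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          return coef

      def cv_uncertainty(X, y, k=10, seed=0):
          """Std of out-of-sample residuals from k-fold cross-validation."""
          rng = np.random.default_rng(seed)
          idx = rng.permutation(len(y))
          residuals = []
          for fold in np.array_split(idx, k):
              train = np.setdiff1d(idx, fold)
              coef = fit_baseline(X[train], y[train])
              residuals.append(y[fold] - X[fold] @ coef)
          return np.std(np.concatenate(residuals))

      # synthetic interval data: energy driven by temperature plus noise
      rng = np.random.default_rng(1)
      temp = rng.uniform(0.0, 30.0, 2000)
      X = np.column_stack([np.ones_like(temp), temp])
      y = 50.0 + 2.5 * temp + rng.normal(0.0, 5.0, temp.size)
      print(cv_uncertainty(X, y))   # close to the true noise level of 5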

  18. Planning for robust reserve networks using uncertainty analysis

    USGS Publications Warehouse

    Moilanen, A.; Runge, M.C.; Elith, J.; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.

    2006-01-01

    Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence/absence in sites, or on species-specific distributions of model predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence/absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer-programming and stochastic global search.

  19. [DNA computing].

    PubMed

    Błasiak, Janusz; Krasiński, Tadeusz; Popławski, Tomasz; Sakowski, Sebastian

    2011-01-01

    Biocomputers can be an alternative to traditional "silicon-based" computers, whose continued development may be limited by further miniaturization (constrained by the Heisenberg uncertainty principle) and by the growing volume of information exchanged between the central processing unit and the main memory (the von Neumann bottleneck). The idea of DNA computing was first realized in 1994, when Adleman solved the Hamiltonian path problem using short DNA oligomers and DNA ligase. In the early 2000s a series of biocomputer models was presented, including the seminal work of Shapiro and colleagues, who built a molecular two-state finite automaton in which the restriction enzyme FokI constituted the hardware and short DNA oligomers served as the software as well as the input/output signals; the DNA molecules also provided energy for this machine. DNA computing can be exploited in many applications, from studies of gene expression patterns to the diagnosis and therapy of cancer. Research on DNA computing is still in progress, both in vitro and in vivo, and its promising results give hope for a breakthrough in computer science.

  20. Users manual for the FORSS sensitivity and uncertainty analysis code system

    SciTech Connect

    Lucius, J.L.; Weisbin, C.R.; Marable, J.H.; Drischler, J.D.; Wright, R.Q.; White, J.E.

    1981-01-01

    FORSS is a code system used to study relationships between nuclear reaction cross sections, integral experiments, reactor performance parameter predictions and associated uncertainties. This report describes the computing environment and the modules currently used to implement FORSS Sensitivity and Uncertainty Methodology.

  1. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
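    The key mechanism described above, turning the stochastic problem into independent calls to a deterministic model, can be sketched in a few lines. The snippet below is illustrative only and is not the authors' framework: it assumes an SROM for the uncertain input has already been constructed (sample points x_k with probabilities p_k chosen to match the target distribution) and shows how output statistics then follow from weighted deterministic evaluations.

      import numpy as np

      def deterministic_model(x):
          # stand-in for one deterministic FE / topology-optimization analysis
          return np.sin(x) + 0.1 * x**2

      # an assumed, already-constructed SROM for the uncertain input: in practice
      # the sample points x_k and probabilities p_k are optimized to match the
      # target distribution (CDF and moments)
      x_k = np.array([-1.5, -0.5, 0.0, 0.7, 1.8])
      p_k = np.array([0.10, 0.25, 0.30, 0.25, 0.10])   # sums to one

      outputs = np.array([deterministic_model(x) for x in x_k])   # independent calls
      mean = np.sum(p_k * outputs)
      var = np.sum(p_k * (outputs - mean) ** 2)
      print(mean, np.sqrt(var))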

  2. Incorporating Uncertainty into Spacecraft Mission and Trajectory Design

    NASA Astrophysics Data System (ADS)

    Feldhacker, Juliana D.

    The complex nature of many astrodynamic systems often leads to high computational costs or degraded accuracy in the analysis and design of spacecraft missions, and the incorporation of uncertainty into the trajectory optimization process often becomes intractable. This research applies mathematical modeling techniques to reduce computational cost and improve tractability for design, optimization, uncertainty quantification (UQ) and sensitivity analysis (SA) in astrodynamic systems and develops a method for trajectory optimization under uncertainty (OUU). This thesis demonstrates the use of surrogate regression models and polynomial chaos expansions for the purpose of design and UQ in the complex three-body system. Results are presented for the application of the models to the design of mid-field rendezvous maneuvers for spacecraft in three-body orbits. The models are shown to provide high accuracy with no a priori knowledge on the sample size required for convergence. Additionally, a method is developed for the direct incorporation of system uncertainties into the design process for the purpose of OUU and robust design; these methods are also applied to the rendezvous problem. It is shown that the models can be used for constrained optimization with orders of magnitude fewer samples than is required for a Monte Carlo approach to the same problem. Finally, this research considers an application for which regression models are not well-suited, namely UQ for the kinetic deflection of potentially hazardous asteroids under the assumptions of real asteroid shape models and uncertainties in the impact trajectory and the surface material properties of the asteroid, which produce a non-smooth system response. An alternate set of models is presented that enables analytic computation of the uncertainties in the imparted momentum from impact. Use of these models for a survey of asteroids allows conclusions to be drawn on the effects of an asteroid's shape on the ability to
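    As a generic illustration of the surrogate idea described above (not code from the thesis), the sketch below fits a one-dimensional polynomial chaos surrogate in a standard normal input by least squares on a modest number of model runs; the orthogonality of the probabilists' Hermite polynomials then gives the output mean and variance directly from the coefficients. The "expensive model" is a stand-in.

      import math
      import numpy as np
      from numpy.polynomial import hermite_e as He

      def expensive_model(z):
          # stand-in for a costly trajectory simulation driven by z ~ N(0, 1)
          return np.exp(0.3 * z) + 0.05 * z**3

      deg = 5
      rng = np.random.default_rng(0)
      z_train = rng.standard_normal(200)
      y_train = expensive_model(z_train)

      Phi = He.hermevander(z_train, deg)                # He_0 .. He_deg at the samples
      coef, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

      # orthogonality of He_k under N(0,1), with <He_k^2> = k!, gives cheap statistics
      k_fact = np.array([math.factorial(k) for k in range(deg + 1)])
      mean_pce = coef[0]
      var_pce = np.sum(coef[1:] ** 2 * k_fact[1:])
      print(mean_pce, var_pce)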

  3. Climate negotiations under scientific uncertainty

    PubMed Central

    Barrett, Scott; Dannenberg, Astrid

    2012-01-01

    How does uncertainty about “dangerous” climate change affect the prospects for international cooperation? Climate negotiations usually are depicted as a prisoners’ dilemma game; collectively, countries are better off reducing their emissions, but self-interest impels them to keep on emitting. We provide experimental evidence, grounded in an analytical framework, showing that the fear of crossing a dangerous threshold can turn climate negotiations into a coordination game, making collective action to avoid a dangerous threshold virtually assured. These results are robust to uncertainty about the impact of crossing a threshold, but uncertainty about the location of the threshold turns the game back into a prisoners’ dilemma, causing cooperation to collapse. Our research explains the paradox of why countries would agree to a collective goal, aimed at reducing the risk of catastrophe, but act as if they were blind to this risk. PMID:23045685

  4. Sub-Heisenberg phase uncertainties

    NASA Astrophysics Data System (ADS)

    Pezzé, Luca

    2013-12-01

    Phase shift estimation with uncertainty below the Heisenberg limit, Δϕ_HL ∝ 1/N̄_T, where N̄_T is the total average number of particles employed, is a mirage of linear quantum interferometry. Recently, Rivas and Luis [New J. Phys. 14, 093052 (2012)] proposed a scheme to achieve a phase uncertainty Δϕ ∝ 1/N̄_T^k, with k an arbitrary exponent. This sparked an intense debate in the literature which, ultimately, does not exclude the possibility to overcome Δϕ_HL at specific phase values. Our numerical analysis of the Rivas and Luis proposal shows that sub-Heisenberg uncertainties are obtained only when the estimator is strongly biased. No violation of the Heisenberg limit is found after bias correction or when using a bias-free Bayesian analysis.

  5. Uncertainty and Sensitivity Analyses Plan

    SciTech Connect

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.

  6. Uncertainty Reduction using Bayesian Inference and Sensitivity Analysis: A Sequential Approach to the NASA Langley Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar

    2016-01-01

    This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then, Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
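    The variance-based global sensitivity step referred to above is typically computed with a pick-freeze (Saltelli-type) estimator. The sketch below is generic; the toy model is not the GTM and the numbers are invented. It ranks four inputs by their first-order Sobol' indices, which is the kind of ranking used to pick the next epistemic variable to refine.

      import numpy as np

      def model(x):
          # toy response with unequal sensitivities to four inputs in [0, 1]
          return 4.0 * x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 2] + 0.1 * x[:, 3]

      rng = np.random.default_rng(0)
      n, d = 20000, 4
      A, B = rng.random((n, d)), rng.random((n, d))
      yA, yB = model(A), model(B)
      var_y = np.var(np.concatenate([yA, yB]))

      S1 = []
      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                                  # freeze all inputs but i
          S1.append(np.mean(yB * (model(ABi) - yA)) / var_y)   # Saltelli-style estimator
      print(np.round(S1, 3))                                   # first-order importance ranking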

  7. CO2 transport uncertainties from the uncertainties in meteorological fields

    NASA Astrophysics Data System (ADS)

    Liu, Junjie; Fung, Inez; Kalnay, Eugenia; Kang, Ji-Sun

    2011-06-01

    Inference of surface CO2 fluxes from atmospheric CO2 observations requires information about large-scale transport and turbulent mixing in the atmosphere, so transport errors and the statistics of the transport errors have significant impact on surface CO2 flux estimation. In this paper, we assimilate raw meteorological observations every 6 hours into a general circulation model with a prognostic carbon cycle (CAM3.5) using the Local Ensemble Transform Kalman Filter (LETKF) to produce an ensemble of meteorological analyses that represent the best approximation to the atmospheric circulation and its uncertainty. We quantify CO2 transport uncertainties resulting from the uncertainties in meteorological fields by running CO2 ensemble forecasts within the LETKF-CAM3.5 system forced by prescribed surface fluxes. We show that CO2 transport uncertainties are largest over the tropical land and the areas with large fossil fuel emissions, and are between 1.2 and 3.5 ppm at the surface and between 0.8 and 1.8 ppm in the column-integrated CO2 (with OCO-2-like averaging kernel) over these regions. We further show that the current practice of using a single meteorological field to transport CO2 has weaker vertical mixing and stronger CO2 vertical gradient when compared to the mean of the ensemble CO2 forecasts initialized by the ensemble meteorological fields, especially over land areas. The magnitude of the difference at the surface can be up to 1.5 ppm.

  8. Survey and Evaluate Uncertainty Quantification Methodologies

    SciTech Connect

    Lin, Guang; Engel, David W.; Eslinger, Paul W.

    2012-02-01

    The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that will develop and deploy state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models with uncertainty quantification, optimization, risk analysis and decision making capabilities. The CCSI Toolset will incorporate commercial and open-source software currently in use by industry and will also develop new software tools as necessary to fill technology gaps identified during execution of the project. The CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. The goal of CCSI is to deliver a toolset that can simulate the scale-up of a broad set of new carbon capture technologies from laboratory scale to full commercial scale. To provide a framework around which the toolset can be developed and demonstrated, we will focus on three Industrial Challenge Problems (ICPs) related to carbon capture technologies relevant to U.S. pulverized coal (PC) power plants. Post combustion capture by solid sorbents is the technology focus of the initial ICP (referred to as ICP A). The goal of the uncertainty quantification (UQ) task (Task 6) is to provide a set of capabilities to the user community for the quantification of uncertainties associated with the carbon

  9. Numerical Continuation Methods for Intrusive Uncertainty Quantification Studies

    SciTech Connect

    Safta, Cosmin; Najm, Habib N.; Phipps, Eric Todd

    2014-09-01

    Rigorous modeling of engineering systems relies on efficient propagation of uncertainty from input parameters to model outputs. In recent years, there has been substantial development of probabilistic polynomial chaos (PC) Uncertainty Quantification (UQ) methods, enabling studies in expensive computational models. One approach, termed "intrusive", involving reformulation of the governing equations, has been found to have superior computational performance compared to non-intrusive sampling-based methods in relevant large-scale problems, particularly in the context of emerging architectures. However, the utility of intrusive methods has been severely limited due to detrimental numerical instabilities associated with strong nonlinear physics. Previous methods for stabilizing these constructions tend to add unacceptably high computational costs, particularly in problems with many uncertain parameters. In order to address these challenges, we propose to adapt and improve numerical continuation methods for the robust time integration of intrusive PC system dynamics. We propose adaptive methods, starting with a small uncertainty for which the model has stable behavior and gradually moving to larger uncertainty where the instabilities are rampant, in a manner that provides a suitable solution.
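    The continuation strategy described above, solving at a benign parameter value and walking the parameter toward the difficult regime while reusing each converged state as the next initial guess, is illustrated below on a scalar toy residual. This is a sketch of natural-parameter continuation with a Newton corrector, not the intrusive-PC implementation.

      import numpy as np

      def f(u, lam):
          # toy residual that stiffens as lam (the "uncertainty level") grows
          return u + lam * np.sinh(u) - 2.0

      def dfdu(u, lam):
          return 1.0 + lam * np.cosh(u)

      u = 2.0                                    # known solution at lam = 0
      for lam in np.linspace(0.0, 5.0, 26):      # ramp the parameter gradually
          for _ in range(20):                    # Newton corrector at fixed lam
              step = f(u, lam) / dfdu(u, lam)
              u -= step
              if abs(step) < 1e-12:
                  break
      print(u, f(u, 5.0))                        # converged solution at the full parameter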

  10. Linear Programming Problems for Generalized Uncertainty

    ERIC Educational Resources Information Center

    Thipwiwatpotjana, Phantipa

    2010-01-01

    Uncertainty occurs when more than one realization can represent a piece of information. This dissertation concerns only discrete realizations of an uncertainty. Different interpretations of an uncertainty, and their relationships, are addressed when the uncertainty is not a probability of each realization. A well-known model that can handle…

  12. Uncertainties in climate data sets

    NASA Technical Reports Server (NTRS)

    Mcguirk, James P.

    1992-01-01

    Climate diagnostics are constructed from either analyzed fields or observational data sets. Those that have been commonly used are normally considered ground truth. However, in most of these collections, errors and uncertainties exist which are generally ignored because of the consistency of their usage over time. Examples of uncertainties and errors are described in NMC and ECMWF analyses and in satellite observational data sets (OLR, TOVS, and SMMR). It is suggested that these errors can be large, systematic, and not negligible in climate analysis.

  13. Uncertainties in offsite consequence analysis

    SciTech Connect

    Young, M.L.; Harper, F.T.; Lui, C.H.

    1996-03-01

    The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the U.S. Nuclear Regulatory Commission and the European Commission began co-sponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables using a formal expert judgment elicitation and evaluation process. This paper focuses on the methods used in and results of this on-going joint effort.

  14. Awe, uncertainty, and agency detection.

    PubMed

    Valdesolo, Piercarlo; Graham, Jesse

    2014-01-01

    Across five studies, we found that awe increases both supernatural belief (Studies 1, 2, and 5) and intentional-pattern perception (Studies 3 and 4)-two phenomena that have been linked to agency detection, or the tendency to interpret events as the consequence of intentional and purpose-driven agents. Effects were both directly and conceptually replicated, and mediational analyses revealed that these effects were driven by the influence of awe on tolerance for uncertainty. Experiences of awe decreased tolerance for uncertainty, which, in turn, increased the tendency to believe in nonhuman agents and to perceive human agency in random events.

  15. Statistics, Uncertainty, and Transmitted Variation

    SciTech Connect

    Wendelberger, Joanne Roth

    2014-11-05

    The field of Statistics provides methods for modeling and understanding data and making decisions in the presence of uncertainty. When examining response functions, variation present in the input variables will be transmitted via the response function to the output variables. This phenomenon can potentially have significant impacts on the uncertainty associated with results from subsequent analysis. This presentation will examine the concept of transmitted variation, its impact on designed experiments, and a method for identifying and estimating sources of transmitted variation in certain settings.
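    A minimal illustration of transmitted variation (an invented example, not from the presentation): variance in the inputs of a response function y = f(x1, x2) is transmitted to the output, and a first-order (delta-method) approximation often captures most of it. The sketch compares the Monte Carlo transmitted variance with the linearized estimate.

      import numpy as np

      def f(x1, x2):
          return x1 * np.exp(0.5 * x2)           # toy response function

      mu1, s1 = 10.0, 0.5                        # assumed input means and std devs
      mu2, s2 = 1.0, 0.1

      rng = np.random.default_rng(0)
      x1 = rng.normal(mu1, s1, 200000)
      x2 = rng.normal(mu2, s2, 200000)
      mc_var = np.var(f(x1, x2))                 # transmitted variation by Monte Carlo

      d1 = np.exp(0.5 * mu2)                     # df/dx1 at the input means
      d2 = 0.5 * mu1 * np.exp(0.5 * mu2)         # df/dx2 at the input means
      delta_var = d1**2 * s1**2 + d2**2 * s2**2  # first-order (delta-method) estimate
      print(mc_var, delta_var)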

  16. Adaptive second-order sliding mode control with uncertainty compensation

    NASA Astrophysics Data System (ADS)

    Bartolini, G.; Levant, A.; Pisano, A.; Usai, E.

    2016-09-01

    This paper endows the second-order sliding mode control (2-SMC) approach with additional capabilities of learning and control adaptation. We present a 2-SMC scheme that estimates and compensates for the uncertainties affecting the system dynamics. It also adjusts the discontinuous control effort online, so that it can be reduced to arbitrarily small values. The proposed scheme is particularly useful when the available information regarding the uncertainties is conservative, and the classical `fixed-gain' SMC would inevitably lead to largely oversized discontinuous control effort. Benefits from the viewpoint of chattering reduction are obtained, as confirmed by computer simulations.

  17. Probabilistic simulation of uncertainties in composite uniaxial strengths

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Stock, T. A.

    1990-01-01

    Probabilistic composite micromechanics methods are developed that simulate uncertainties in unidirectional fiber composite strengths. These methods take the form of computational procedures using composite mechanics with Monte Carlo simulation. The variables for which uncertainties are accounted include constituent strengths and their respective scatter. A graphite/epoxy unidirectional composite (ply) is studied to illustrate the procedure and its effectiveness in formally estimating the probable scatter in the composite uniaxial strengths. The results show that the ply longitudinal tensile and compressive, transverse compressive, and intralaminar shear strengths are not sensitive to single-fiber anomalies (breaks, interfacial disbonds, matrix microcracks); however, the ply transverse tensile strength is.
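    A bare-bones version of the Monte Carlo procedure described above might look like the sketch below. It is illustrative only: the rule-of-mixtures relation, volume fraction, and strength distributions are assumed stand-ins for the actual composite micromechanics used in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100000
      Vf = 0.6                                   # assumed fiber volume fraction
      Sf = rng.normal(3500.0, 300.0, n)          # fiber strength, MPa (illustrative scatter)
      Sm = rng.normal(80.0, 10.0, n)             # matrix strength, MPa (illustrative scatter)

      S_L = Sf * Vf + Sm * (1.0 - Vf)            # simplified rule-of-mixtures longitudinal strength
      print(S_L.mean(), S_L.std())
      print(np.percentile(S_L, [5, 50, 95]))     # probable scatter bounds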

  18. Uncertainty quantification in lattice QCD calculations for nuclear physics

    SciTech Connect

    Beane, Silas R.; Detmold, William; Orginos, Kostas; Savage, Martin J.

    2015-02-05

    The numerical technique of Lattice QCD holds the promise of connecting the nuclear forces, nuclei, the spectrum and structure of hadrons, and the properties of matter under extreme conditions with the underlying theory of the strong interactions, quantum chromodynamics. A distinguishing, and thus far unique, feature of this formulation is that all of the associated uncertainties, both statistical and systematic, can, in principle, be systematically reduced to any desired precision with sufficient computational and human resources. As a result, we review the sources of uncertainty inherent in Lattice QCD calculations for nuclear physics, and discuss how each is quantified in current efforts.

  19. Systemic change increases model projection uncertainty

    NASA Astrophysics Data System (ADS)

    Verstegen, Judith; Karssenberg, Derek; van der Hilst, Floor; Faaij, André

    2014-05-01

    …the neighbourhood doubled, while the influence of slope and potential yield decreased by 75% and 25%, respectively. Allowing these systemic changes to occur in our CA in the future (up to 2022) resulted in an increase in model projection uncertainty by a factor of two compared to the assumption of a stationary system. This means that the assumption of a constant model structure is not adequate and largely underestimates uncertainty in the projection. References: Verstegen, J.A., Karssenberg, D., van der Hilst, F., Faaij, A.P.C., 2014. Identifying a land use change cellular automaton by Bayesian data assimilation. Environmental Modelling & Software 53, 121-136. Verstegen, J.A., Karssenberg, D., van der Hilst, F., Faaij, A.P.C., 2012. Spatio-Temporal Uncertainty in Spatial Decision Support Systems: a Case Study of Changing Land Availability for Bioenergy Crops in Mozambique. Computers, Environment and Urban Systems 36, 30-42. Wald, A., Wolfowitz, J., 1940. On a test whether two samples are from the same population. The Annals of Mathematical Statistics 11, 147-162.

  20. SUNPLIN: Simulation with Uncertainty for Phylogenetic Investigations

    PubMed Central

    2013-01-01

    Background Phylogenetic comparative analyses usually rely on a single consensus phylogenetic tree in order to study evolutionary processes. However, most phylogenetic trees are incomplete with regard to species sampling, which may critically compromise analyses. Some approaches have been proposed to integrate non-molecular phylogenetic information into incomplete molecular phylogenies. An expanded tree approach consists of adding missing species to random locations within their clade. The information contained in the topology of the resulting expanded trees can be captured by the pairwise phylogenetic distance between species and stored in a matrix for further statistical analysis. Thus, the random expansion and processing of multiple phylogenetic trees can be used to estimate the phylogenetic uncertainty through a simulation procedure. Because of the computational burden required, unless this procedure is efficiently implemented, the analyses are of limited applicability. Results In this paper, we present efficient algorithms and implementations for randomly expanding and processing phylogenetic trees so that simulations involved in comparative phylogenetic analysis with uncertainty can be conducted in a reasonable time. We propose algorithms for both randomly expanding trees and calculating distance matrices. We made available the source code, which was written in the C++ language. The code may be used as a standalone program or as a shared object in the R system. The software can also be used as a web service through the link: http://purl.oclc.org/NET/sunplin/. Conclusion We compare our implementations to similar solutions and show that significant performance gains can be obtained. Our results open up the possibility of accounting for phylogenetic uncertainty in evolutionary and ecological analyses of large datasets. PMID:24229408

  1. Sonic Boom Pressure Signature Uncertainty Calculation and Propagation to Ground Noise

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Bretl, Katherine N.; Walker, Eric L.; Pinier, Jeremy T.

    2015-01-01

    The objective of this study was to outline an approach for the quantification of uncertainty in sonic boom measurements and to investigate the effect of various near-field uncertainty representation approaches on ground noise predictions. These approaches included a symmetric versus asymmetric uncertainty band representation and a dispersion technique based on a partial-sum Fourier series that allows for the inclusion of random error sources in the uncertainty. The near-field uncertainty was propagated to the ground level, along with additional uncertainty in the propagation modeling. Estimates of perceived loudness were obtained for the various types of uncertainty representation in the near field. Analyses were performed on three configurations of interest to the sonic boom community: the SEEB-ALR, the 69° Delta Wing, and the LM 1021-01. Results showed that representation of the near-field uncertainty plays a key role in ground noise predictions. Using a Fourier-series-based dispersion approach can double the amount of uncertainty in the ground noise compared to a pure bias representation. Compared to previous computational fluid dynamics results, uncertainty in ground noise predictions was greater when considering the near-field experimental uncertainty.

  2. I Am Sure There May Be a Planet There: Student articulation of uncertainty in argumentation tasks

    NASA Astrophysics Data System (ADS)

    Buck, Zoë E.; Lee, Hee-Sun; Flores, Joanna

    2014-09-01

    We investigated how students articulate uncertainty when they are engaged in structured scientific argumentation tasks where they generate, examine, and interpret data to determine the existence of exoplanets. In this study, 302 high school students completed 4 structured scientific arguments that followed a series of computer-model-based curriculum module activities simulating the radial velocity and/or the transit method. Structured scientific argumentation tasks involved claim, explanation, uncertainty rating, and uncertainty rationale. We explored (1) how students are articulating uncertainty within the various elements of the task and (2) the relationship between the way the task is presented and the way students are articulating uncertainty. We found that (1) while the majority of students did not express uncertainty in either explanation or uncertainty rationale, students who did express uncertainty in their explanations did so scientifically without being prompted explicitly, (2) students' uncertainty ratings and rationales revealed a mix of their personal confidence and uncertainty related to science, and (3) if a task presented noisy data, students were less likely to express uncertainty in their explanations.

  3. Uncertainty assessment in watershed-scale water quality modeling and management: 1. Framework and application of generalized likelihood uncertainty estimation (GLUE) approach

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Keller, Arturo A.

    2007-08-01

    Watershed-scale water quality models involve substantial uncertainty in model output because of sparse water quality observations and other sources of uncertainty. Assessing the uncertainty is very important for those who use the models to support management decision making. Systematic uncertainty analysis for these models has rarely been done and remains a major challenge. This study aimed (1) to develop a framework to characterize all important sources of uncertainty and their interactions in management-oriented watershed modeling, (2) to apply the generalized likelihood uncertainty estimation (GLUE) approach for quantifying simulation uncertainty for complex watershed models, and (3) to investigate the influence of subjective choices (especially the likelihood measure) in a GLUE analysis, as well as the availability of observational data, on the outcome of the uncertainty analysis. A two-stage framework was first established as the basis for uncertainty assessment and probabilistic decision-making. A watershed model (watershed analysis risk management framework (WARMF)) was implemented using data from the Santa Clara River Watershed in southern California. A typical catchment was constructed on which a series of experiments was conducted. The results show that GLUE can be implemented with affordable computational cost, yielding insights into the model behavior. However, in complex watershed water quality modeling, the uncertainty results highly depend on the subjective choices made by the modeler as well as the availability of observational data. The importance of considering management concerns in the uncertainty estimation was also demonstrated. Overall, this study establishes guidance for uncertainty assessment in management-oriented watershed modeling. The study results have suggested future efforts we could make in a GLUE-based uncertainty analysis, which has led to the development of a new method, as will be introduced in a companion paper. Eventually, the
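    The GLUE recipe summarized above is simple to prototype. The sketch below is a generic illustration (not WARMF and not the Santa Clara application): Monte Carlo parameter samples are scored with an informal likelihood measure, samples below a subjective behavioral threshold are discarded, and the surviving simulations are likelihood-weighted to produce prediction bounds. All models and numbers are invented.

      import numpy as np

      def watershed_model(params, t):
          a, b = params
          return a * np.exp(-b * t)                  # toy stand-in for the watershed simulator

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 10.0, 25)
      obs = 5.0 * np.exp(-0.3 * t) + rng.normal(0.0, 0.2, t.size)   # synthetic "observations"

      n = 5000
      params = np.column_stack([rng.uniform(1.0, 10.0, n),          # prior range for a
                                rng.uniform(0.05, 1.0, n)])         # prior range for b
      sims = np.array([watershed_model(p, t) for p in params])

      # informal likelihood measure: Nash-Sutcliffe efficiency (one common GLUE choice)
      nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
      behavioral = nse > 0.7                                        # subjective threshold
      w = nse[behavioral] - 0.7
      w /= w.sum()                                                  # likelihood weights

      # likelihood-weighted 5-95% prediction bounds at each time step
      lo, hi = [], []
      for j in range(t.size):
          vals = sims[behavioral, j]
          order = np.argsort(vals)
          cdf = np.cumsum(w[order])
          lo.append(vals[order][np.searchsorted(cdf, 0.05)])
          hi.append(vals[order][np.searchsorted(cdf, 0.95)])
      print(lo[0], hi[0])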

  4. Structural-acoustic modeling of automotive vehicles in presence of uncertainties and experimental identification and validation.

    PubMed

    Durand, Jean-François; Soize, Christian; Gagliardini, Laurent

    2008-09-01

    The design of cars is mainly based on the use of computational models to analyze structural vibrations and internal acoustic levels. Considering the very high complexity of such structural-acoustic systems, and in order to improve the robustness of such computational structural-acoustic models, both model uncertainties and data uncertainties must be taken into account. In this context, a probabilistic approach of uncertainties is implemented in an adapted computational structural-acoustic model. The two main problems are the experimental identification of the parameters controlling the uncertainty levels and the experimental validation. Relevant experiments have especially been developed for this research in order to constitute an experimental database devoted to structural vibrations and internal acoustic pressures. This database is used to perform the experimental identification of the probability model parameters and to validate the stochastic computational model.

  5. UncertWeb: chaining web services accounting for uncertainty

    NASA Astrophysics Data System (ADS)

    Cornford, Dan; Jones, Richard; Bastin, Lucy; Williams, Matthew; Pebesma, Edzer; Nativi, Stefano

    2010-05-01

    The development of interoperable services that permit access to data and processes, typically using web service based standards opens up the possibility for increasingly complex chains of data and processes, which might be discovered and composed in increasingly automatic ways. This concept, sometimes referred to as the "Model Web", offers the promise of integrated (Earth) system models, with pluggable web service based components which can be discovered, composed and evaluated dynamically. A significant issue with such service chains, indeed in any composite model composed of coupled components, is that in all interesting (non-linear) cases the effect of uncertainties on inputs, or components within the chain will have complex, potentially unexpected effects on the outputs. Within the FP7 UncertWeb project we will be developing a mechanism and an accompanying set of tools to enable rigorous uncertainty management in web based service chains involving both data and processes. The project will exploit and extend the UncertML candidate standard to flexibly propagate uncertainty through service chains, including looking at mechanisms to develop uncertainty enabled profiles of existing Open Geospatial Consortium services. To facilitate the use of such services we will develop tools to address the definition of the input uncertainties (elicitation), manage the uncertainty propagation (emulation), undertake uncertainty and sensitivity analysis and visualise the output uncertainty. In this talk we will outline the challenges of the UncertWeb project, illustrating this with a prototype service chain we have created for correcting station level pressure to sea-level pressure, which accounts for the various uncertainties involved. In particular we will discuss some of the challenges of chaining Open Geospatial Consortium services using the Business Process Execution Language. We will also address the issue of computational cost and communication bandwidth requirements for

  6. Adaptive Strategies for Materials Design using Uncertainties

    PubMed Central

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James; Hogden, John; Lookman, Turab

    2016-01-01

    We compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young’s (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don’t. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties. PMID:26792532
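    A common example of a selector that uses prediction uncertainty is expected improvement. The sketch below is illustrative (it is not the authors' code and the numbers are invented): given a regressor's mean and standard deviation for a handful of unexplored compounds, it picks the candidate with the largest expected improvement over the best value measured so far, which generally differs from the uncertainty-blind choice of the largest predicted mean.

      import numpy as np
      from scipy.stats import norm

      def expected_improvement(mu, sigma, best_so_far):
          sigma = np.maximum(sigma, 1e-12)       # guard against zero predicted uncertainty
          z = (mu - best_so_far) / sigma
          return (mu - best_so_far) * norm.cdf(z) + sigma * norm.pdf(z)

      # hypothetical predictions for five unexplored compounds (all numbers invented)
      mu = np.array([240.0, 250.0, 235.0, 255.0, 245.0])   # predicted modulus, GPa
      sigma = np.array([5.0, 20.0, 2.0, 1.0, 15.0])        # prediction uncertainty
      best = 252.0                                         # best value measured so far

      ei = expected_improvement(mu, sigma, best)
      print(ei, int(np.argmax(ei)))   # picks the uncertain candidate (index 1); a
                                      # mean-only selector would pick index 3 instead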

  7. Uncertainty in Vs30-based site response

    USGS Publications Warehouse

    Thompson, Eric; Wald, David J.

    2016-01-01

    Methods that account for site response range in complexity from simple linear categorical adjustment factors to sophisticated nonlinear constitutive models. Seismic‐hazard analysis usually relies on ground‐motion prediction equations (GMPEs); within this framework site response is modeled statistically with simplified site parameters that include the time‐averaged shear‐wave velocity to 30 m (VS30) and basin depth parameters. Because VS30 is not known in most locations, it must be interpolated or inferred through secondary information such as geology or topography. In this article, we analyze a subset of stations for which VS30 has been measured to address effects of VS30 proxies on the uncertainty in the ground motions as modeled by GMPEs. The stations we analyze also include multiple recordings, which allow us to compute the repeatable site effects (or empirical amplification factors [EAFs]) from the ground motions. Although all methods exhibit similar bias, the proxy methods only reduce the ground‐motion standard deviations at long periods when compared to GMPEs without a site term, whereas measured VS30 values reduce the standard deviations at all periods. The standard deviations of the ground motions are much lower when the EAFs are used, indicating that future refinements of the site term in GMPEs have the potential to substantially reduce the overall uncertainty in the prediction of ground motions by GMPEs.

  8. Adaptive strategies for materials design using uncertainties

    DOE PAGES

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James; ...

    2016-01-21

    Here, we compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young’s (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don’t. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.

  9. Probabilistic simulation of uncertainties in thermal structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Shiao, Michael

    1990-01-01

    Development of probabilistic structural analysis methods for hot structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) blade temperature, pressure, and torque of the Space Shuttle Main Engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; (3) evaluation of the failure probability; (4) reliability and risk-cost assessment, and (5) an outline of an emerging approach for eventual hot structures certification. Collectively, the results demonstrate that the structural durability/reliability of hot structural components can be effectively evaluated in a formal probabilistic framework. In addition, the approach can be readily extended to computationally simulate certification of hot structures for aerospace environments.
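
    The second and third program elements amount to propagating assumed distributions of primitive variables through a response model and reading off a cumulative distribution function and failure probability. A minimal Monte Carlo sketch of that idea follows; the variables, distributions and temperature response used here are invented placeholders, not the SSME/HPFT models.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Assumed (illustrative) distributions of primitive variables:
# coolant temperature [K], wall thickness [mm], thermal conductivity [W/m/K].
T_cool = rng.normal(900.0, 25.0, n)
thick = rng.normal(2.0, 0.05, n)
k_th = rng.lognormal(mean=np.log(20.0), sigma=0.05, size=n)

# Placeholder response model for a peak metal temperature (not the NASA model).
T_peak = T_cool + 150.0 * (2.0 / thick) * (20.0 / k_th)

# Empirical CDF of the response and a failure probability against an assumed limit.
limit = 1120.0
p_fail = np.mean(T_peak > limit)
q05, q50, q95 = np.quantile(T_peak, [0.05, 0.50, 0.95])
print(f"P[T_peak > {limit} K] = {p_fail:.1e}; 5/50/95% quantiles = {q05:.0f}/{q50:.0f}/{q95:.0f} K")
```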

  10. Adaptive strategies for materials design using uncertainties

    SciTech Connect

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James; Hogden, John; Lookman, Turab

    2016-01-21

    Here, we compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young’s (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don’t. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.

  11. Uncertainty quantification of limit-cycle oscillations

    SciTech Connect

    Beran, Philip S. (E-mail: pettitcl@usna.edu); Millman, Daniel R. (E-mail: daniel.millman@edwards.af.mil)

    2006-09-01

    Different computational methodologies have been developed to quantify the uncertain response of a relatively simple aeroelastic system in limit-cycle oscillation, subject to parametric variability. The aeroelastic system is that of a rigid airfoil, supported by pitch and plunge structural coupling, with nonlinearities in the component in pitch. The nonlinearities are adjusted to permit the formation of either a subcritical or a supercritical branch of limit-cycle oscillations. Uncertainties are specified in the cubic coefficient of the torsional spring and in the initial pitch angle of the airfoil. Stochastic projections of the time-domain and cyclic equations governing system response are carried out, leading to both intrusive and non-intrusive computational formulations. Non-intrusive formulations are examined using stochastic projections derived from Wiener expansions involving Haar wavelet and B-spline bases, while Wiener-Hermite expansions of the cyclic equations are employed intrusively and non-intrusively. Application of the B-spline stochastic projection is extended to the treatment of aerodynamic nonlinearities, as modeled through the discrete Euler equations. The methodologies are compared in terms of computational cost, convergence properties, ease of implementation, and potential for application to complex aeroelastic systems.
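
    The non-intrusive stochastic projections mentioned above can be illustrated with a one-variable Wiener-Hermite (polynomial chaos) expansion fitted by regression; the cubic-stiffness parameterization and the closed-form stand-in for the limit-cycle amplitude below are assumptions for illustration only.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(2)
P, N = 6, 2000                                   # chaos order, sample size

# Uncertain cubic stiffness: k3 = 50 * (1 + 0.1 * xi), with xi ~ N(0, 1).
xi = rng.standard_normal(N)
k3 = 50.0 * (1.0 + 0.1 * xi)

# Stand-in for the deterministic aeroelastic solver: LCO amplitude vs. k3.
amplitude = 1.0 / np.sqrt(k3)

# Non-intrusive step: regress the sampled response on probabilists' Hermite polynomials.
Psi = hermevander(xi, P)                         # N x (P + 1) design matrix
coef, *_ = np.linalg.lstsq(Psi, amplitude, rcond=None)

# Moments follow directly from the coefficients, since <He_n^2> = n! for N(0,1) germs.
norms = np.array([factorial(n) for n in range(P + 1)], dtype=float)
mean = coef[0]
std = np.sqrt(np.sum(coef[1:] ** 2 * norms[1:]))
print(f"PCE estimate of LCO amplitude: mean = {mean:.4f}, std = {std:.4f}")
```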

  12. Uncertainty Can Increase Explanatory Credibility

    DTIC Science & Technology

    2013-08-01

    metacognitive cue to infer their conversational partner’s depth of processing. Keywords: explanations, confidence, uncertainty, collaborative reasoning...scope, i.e., those that account for only observed phenomena (Khemlani, Sussman, & Oppenheimer, 2011). These preferences show that properties intrinsic...Fischhoff, & Phillips, 1982; Lindley, 1982; McClelland & Bolger, 1994). Much of the research on subjective confidence addresses how individuals

  13. Exploring Uncertainty with Projectile Launchers

    ERIC Educational Resources Information Center

    Orzel, Chad; Reich, Gary; Marr, Jonathan

    2012-01-01

    The proper choice of a measurement technique that minimizes systematic and random uncertainty is an essential part of experimental physics. These issues are difficult to teach in the introductory laboratory, though. Because most experiments involve only a single measurement technique, students are often unable to make a clear distinction between…

  14. Estimating uncertainties in watershed studies

    Treesearch

    John Campbell; Ruth Yanai; Mark. Green

    2011-01-01

    Small watersheds have been used widely to quantify chemical fluxes and cycling in terrestrial ecosystems for about the past half century. The small watershed approach has been valuable in characterizing hydrologic and nutrient budgets, for instance, in estimating the net gain or loss of solutes in response to disturbance. However, the uncertainty in these ecosystem...

  15. Spatial uncertainty and ecological models

    SciTech Connect

    Jager, Yetta; King, Anthony Wayne

    2004-07-01

    Applied ecological models that are used to understand and manage natural systems often rely on spatial data as input. Spatial uncertainty in these data can propagate into model predictions. Uncertainty analysis, sensitivity analysis, error analysis, error budget analysis, spatial decision analysis, and hypothesis testing using neutral models are all techniques designed to explore the relationship between variation in model inputs and variation in model predictions. Although similar methods can be used to answer them, these approaches address different questions. These approaches differ in (a) whether the focus is forward or backward (forward to evaluate the magnitude of variation in model predictions propagated or backward to rank input parameters by their influence); (b) whether the question involves model robustness to large variations in spatial pattern or to small deviations from a reference map; and (c) whether processes that generate input uncertainty (for example, cartographic error) are of interest. In this commentary, we propose a taxonomy of approaches, all of which clarify the relationship between spatial uncertainty and the predictions of ecological models. We describe existing techniques and indicate a few areas where research is needed.

  16. Saccade Adaptation and Visual Uncertainty

    PubMed Central

    Souto, David; Gegenfurtner, Karl R.; Schütz, Alexander C.

    2016-01-01

    Visual uncertainty may affect saccade adaptation in two complementary ways. First, an ideal adaptor should take into account the reliability of visual information for determining the amount of correction, predicting that increasing visual uncertainty should decrease adaptation rates. We tested this by comparing observers' direction discrimination and adaptation rates in an intra-saccadic-step paradigm. Second, clearly visible target steps may generate a slower adaptation rate since the error can be attributed to an external cause, instead of an internal change in the visuo-motor mapping that needs to be compensated. We tested this prediction by measuring saccade adaptation to different step sizes. Most remarkably, we found little correlation between estimates of visual uncertainty and adaptation rates and no slower adaptation rates with more visible step sizes. Additionally, we show that for low contrast targets backward steps are perceived as stationary after the saccade, but that adaptation rates are independent of contrast. We suggest that the saccadic system uses different position signals for adapting dysmetric saccades and for generating a trans-saccadic stable visual percept, explaining that saccade adaptation is found to be independent of visual uncertainty. PMID:27252635

  18. Quantum coherence versus quantum uncertainty

    NASA Astrophysics Data System (ADS)

    Luo, Shunlong; Sun, Yuan

    2017-08-01

    The notion of measurement is of both foundational and instrumental significance in quantum mechanics, and coherence destroyed by measurements (decoherence) lies at the very heart of quantum to classical transition. Qualitative aspects of this spirit have been widely recognized and analyzed ever since the inception of quantum theory. However, axiomatic and quantitative investigations of coherence are attracting great interest only recently with several figures of merit for coherence introduced [Baumgratz, Cramer, and Plenio, Phys. Rev. Lett. 113, 140401 (2014), 10.1103/PhysRevLett.113.140401]. While these resource theoretic approaches have many appealing and intuitive features, they rely crucially on various notions of incoherent operations which are sophisticated, subtle, and not uniquely defined, as have been critically assessed [Chitambar and Gour, Phys. Rev. Lett. 117, 030401 (2016), 10.1103/PhysRevLett.117.030401]. In this paper, we elaborate on the idea that coherence and quantum uncertainty are dual viewpoints of the same quantum substrate, and address coherence quantification by identifying coherence of a state (with respect to a measurement) with quantum uncertainty of a measurement (with respect to a state). Consequently, coherence measures may be set into correspondence with measures of quantum uncertainty. In particular, we take average quantum Fisher information as a measure of quantum uncertainty, and introduce the corresponding measure of coherence, which is demonstrated to exhibit desirable properties. Implications for interpreting quantum purity as maximal coherence, and quantum discord as minimal coherence, are illustrated.

  19. The face of uncertainty eats.

    PubMed

    Corwin, Rebecca L W

    2011-09-01

    The idea that foods rich in fat and sugar may be addictive has generated much interest, as well as controversy, among both scientific and lay communities. Recent research indicates that fatty and sugary food in-and-of itself is not addictive. Rather, the food and the context in which it is consumed interact to produce an addiction-like state. One of the contexts that appears to be important is the intermittent opportunity to consume foods rich in fat and sugar in environments where food is plentiful. Animal research indicates that, under these conditions, intake of the fatty sugary food escalates across time and binge-type behavior develops. However, the mechanisms that account for the powerful effect of intermittency on ingestive behavior have only begun to be elucidated. In this review, it is proposed that intermittency stimulates appetitive behavior that is associated with uncertainty regarding what, when, and how much of the highly palatable food to consume. Uncertainty may stimulate consumption of optional fatty and sugary treats due to differential firing of midbrain dopamine neurons, activation of the stress axis, and involvement of orexin signaling. In short, uncertainty may produce an aversive state that bingeing on palatable food can alleviate, however temporarily. "Food addiction" may not be "addiction" to food at all; it may be a response to uncertainty within environments of food abundance.

  20. Structural Damage Assessment under Uncertainty

    NASA Astrophysics Data System (ADS)

    Lopez Martinez, Israel

    Structural damage assessment has applications in the majority of engineering structures and mechanical systems ranging from aerospace vehicles to manufacturing equipment. The primary goals of any structural damage assessment and health monitoring system are to ascertain the condition of a structure and to provide an evaluation of changes as a function of time, as well as providing an early warning of an unsafe condition. There are many structural health monitoring and assessment techniques developed for research using numerical simulations and scaled structural experiments. However, the transition from research to real-world structures has been rather slow. One major reason for this slow progress is the existence of uncertainty in every step of the damage assessment process. This dissertation research involved the experimental and numerical investigation of uncertainty in vibration-based structural health monitoring and development of robust detection and localization methods. The basic premise of vibration-based structural health monitoring is that changes in structural characteristics, such as stiffness, mass and damping, will affect the global vibration response of the structure. The diagnostic performance of a vibration-based monitoring system is affected by uncertainty sources such as measurement errors, environmental disturbances and parametric modeling uncertainties. To address diagnostic errors due to irreducible uncertainty, a pattern recognition framework for damage detection has been developed to be used for continuous monitoring of structures. The robust damage detection approach developed is based on the ensemble of dimensional reduction algorithms for improved damage-sensitive feature extraction. For damage localization, the determination of an experimental structural model was performed based on output-only modal analysis. An experimental model correlation technique is developed in which the discrepancies between the undamaged and damaged modal data are

  1. Evaluating conflation methods using uncertainty modeling

    NASA Astrophysics Data System (ADS)

    Doucette, Peter; Dolloff, John; Canavosio-Zuzelski, Roberto; Lenihan, Michael; Motsko, Dennis

    2013-05-01

    The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline, or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features. The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for conflation methods. Performance results are compiled for DCGIS street centerline features.
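
    A toy version of the idea, assuming point features, a simple Gaussian positional-error model and a nearest-neighbour matcher standing in for a conflation tool; none of these choices is taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Reference layer: 200 point features (think street-centerline nodes), in metres.
truth = rng.uniform(0.0, 1000.0, size=(200, 2))

# Simulated second layer: same features with a bias plus Gaussian positional noise,
# presented to the matcher in an unknown order.
sim = truth + np.array([2.0, -1.5]) + rng.normal(0.0, 3.0, size=truth.shape)
perm = rng.permutation(len(sim))
sim_shuffled = sim[perm]
correct_idx = np.argsort(perm)                   # ground-truth correspondence

# Toy "conflation method": nearest-neighbour matching within a tolerance.
tol = 10.0
d = np.linalg.norm(truth[:, None, :] - sim_shuffled[None, :, :], axis=2)
match = d.argmin(axis=1)
ok = (match == correct_idx) & (d[np.arange(len(truth)), match] < tol)

print(f"matching rate against the simulated truth: {ok.mean():.1%}")
```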

  2. TLE uncertainty estimation using robust weighted differencing

    NASA Astrophysics Data System (ADS)

    Geul, Jacco; Mooij, Erwin; Noomen, Ron

    2017-05-01

    Accurate knowledge of satellite orbit errors is essential for many types of analyses. Unfortunately, for two-line elements (TLEs) this is not available. This paper presents a weighted differencing method using robust least-squares regression for estimating many important error characteristics. The method is applied to both classic and enhanced TLEs, compared to previous implementations, and validated using Global Positioning System (GPS) solutions for the GOCE satellite in Low-Earth Orbit (LEO), prior to its re-entry. The method is found to be more accurate than previous TLE differencing efforts in estimating initial uncertainty, as well as error growth. The method also proves more reliable and requires no data filtering (such as outlier removal). Sensitivity analysis shows a strong relationship between argument of latitude and covariance (standard deviations and correlations), which the method is able to approximate. Overall, the method proves accurate, computationally fast, and robust, and is applicable to any object in the satellite catalogue (SATCAT).
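
    The robust fitting step can be sketched with iteratively reweighted least squares using Huber weights on synthetic TLE-difference residuals; the residual series, growth model and tuning constant below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic along-track differences [km] vs. propagation time [days]: linear error
# growth, Gaussian noise, and a few outliers standing in for bad TLE comparisons.
t = np.linspace(0.0, 7.0, 200)
resid = 0.8 * t + rng.normal(0.0, 0.3, t.size)
resid[rng.choice(t.size, 10, replace=False)] += rng.normal(0.0, 5.0, 10)

def huber_fit(t, y, k=1.345, iters=20):
    """Robust linear fit y ~ a + b*t via iteratively reweighted least squares."""
    A = np.column_stack([np.ones_like(t), t])
    w = np.ones_like(y)
    for _ in range(iters):
        beta, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
        r = y - A @ beta
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))   # robust scale (MAD)
        u = np.abs(r) / max(scale, 1e-12)
        w = np.sqrt(np.where(u <= k, 1.0, k / u))              # sqrt of Huber weight
    return beta

a, b = huber_fit(t, resid)
print(f"initial offset ~ {a:.2f} km, error growth ~ {b:.2f} km/day")
```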

  3. Uncertainty propagation using Wiener-Haar expansions

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Najm, H. N.; Ghanem, R. G.

    2004-06-01

    An uncertainty quantification scheme is constructed based on generalized Polynomial Chaos (PC) representations. Two such representations are considered, based on the orthogonal projection of uncertain data and solution variables using either a Haar or a Legendre basis. Governing equations for the unknown coefficients in the resulting representations are derived using a Galerkin procedure and then integrated in order to determine the behavior of the stochastic process. The schemes are applied to a model problem involving a simplified dynamical system and to the classical problem of Rayleigh-Bénard instability. For situations involving random parameters close to a critical point, the computational implementations show that the Wiener-Haar (WHa) representation provides more robust predictions than those based on a Wiener-Legendre (WLe) decomposition. However, when the solution depends smoothly on the random data, the WLe scheme exhibits superior convergence. Finally, suggestions regarding future extensions are drawn from these experiences.

  4. Quantifying and reducing uncertainties in cancer therapy

    NASA Astrophysics Data System (ADS)

    Barrett, Harrison H.; Alberts, David S.; Woolfenden, James M.; Liu, Zhonglin; Caucci, Luca; Hoppin, John W.

    2015-03-01

    There are two basic sources of uncertainty in cancer chemotherapy: how much of the therapeutic agent reaches the cancer cells, and how effective it is in reducing or controlling the tumor when it gets there. There is also a concern about adverse effects of the therapy drug. Similarly in external-beam radiation therapy or radionuclide therapy, there are two sources of uncertainty: delivery and efficacy of the radiation absorbed dose, and again there is a concern about radiation damage to normal tissues. The therapy operating characteristic (TOC) curve, developed in the context of radiation therapy, is a plot of the probability of tumor control vs. the probability of normal-tissue complications as the overall radiation dose level is varied, e.g. by varying the beam current in external-beam radiotherapy or the total injected activity in radionuclide therapy. The TOC can be applied to chemotherapy with the administered drug dosage as the variable. The area under a TOC curve (AUTOC) can be used as a figure of merit for therapeutic efficacy, analogous to the area under an ROC curve (AUROC), which is a figure of merit for diagnostic efficacy. In radiation therapy AUTOC can be computed for a single patient by using image data along with radiobiological models for tumor response and adverse side effects. In this paper we discuss the potential of using mathematical models of drug delivery and tumor response with imaging data to estimate AUTOC for chemotherapy, again for a single patient. This approach provides a basis for truly personalized therapy and for rigorously assessing and optimizing the therapy regimen for the particular patient. A key role is played by Emission Computed Tomography (PET or SPECT) of radiolabeled chemotherapy drugs.
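
    A minimal numerical illustration of the TOC/AUTOC idea, assuming simple logistic dose-response curves for tumour control and normal-tissue complications; the curve parameters are placeholders rather than patient-specific models.

```python
import numpy as np

def logistic(x, x50, gamma):
    """Sigmoid dose-response curve with 50% point x50 and slope parameter gamma."""
    return 1.0 / (1.0 + np.exp(-gamma * (x - x50)))

# Sweep the overall dose level (beam current, injected activity, or drug dosage scale).
dose = np.linspace(0.0, 120.0, 500)
tcp = logistic(dose, x50=60.0, gamma=0.12)    # probability of tumour control
ntcp = logistic(dose, x50=85.0, gamma=0.10)   # probability of normal-tissue complications

# The TOC curve plots TCP against NTCP as the dose varies; AUTOC is the area under it
# (computed here by the trapezoidal rule over the swept dose range).
autoc = np.sum(0.5 * (tcp[1:] + tcp[:-1]) * np.diff(ntcp))
print(f"AUTOC = {autoc:.3f} (1.0 would mean control achievable without complications)")
```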

  5. Stereo-particle image velocimetry uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sayantan; Charonko, John J.; Vlachos, Pavlos P.

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from 2014 PIV challenge. Thorough sensitivity analysis was performed to assess the relative impact of the various parameters to the overall uncertainty. The results suggest that in absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric
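
    The combination step described above is, in essence, first-order uncertainty propagation. Written generically for a reconstructed component w = f(u1, u2, θ1, θ2) with uncorrelated inputs (the paper's specific mapping-function terms are not reproduced here):

```latex
% Generic first-order propagation into a reconstructed component w = f(u_1, u_2, \theta_1, \theta_2),
% assuming uncorrelated inputs; the paper's specific mapping-function terms are not shown.
\begin{equation}
  \sigma_w^2 \approx
    \left(\frac{\partial f}{\partial u_1}\right)^{2}\sigma_{u_1}^2
  + \left(\frac{\partial f}{\partial u_2}\right)^{2}\sigma_{u_2}^2
  + \left(\frac{\partial f}{\partial \theta_1}\right)^{2}\sigma_{\theta_1}^2
  + \left(\frac{\partial f}{\partial \theta_2}\right)^{2}\sigma_{\theta_2}^2
\end{equation}
```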

  6. Groundwater management under sustainable yield uncertainty

    NASA Astrophysics Data System (ADS)

    Delottier, Hugo; Pryet, Alexandre; Dupuy, Alain

    2015-04-01

    The definition of the sustainable yield (SY) of a groundwater system consists in adjusting pumping rates so as to avoid groundwater depletion and preserve environmental flows. Once stakeholders have defined which impacts can be considered as "acceptable" for both environmental and societal aspects, hydrogeologists use groundwater models to estimate the SY. Yet, these models are based on a simplification of actual groundwater systems, whose hydraulic properties are largely unknown. As a result, the estimated SY is subject to "predictive" uncertainty. We illustrate the issue with a synthetic homogeneous aquifer system in interaction with a stream for steady state and transient conditions. Simulations are conducted with the USGS MODFLOW finite difference model with the river package. A synthetic dataset is first generated with the numerical model and is further considered as the "observed" state. In a second step, we conduct the calibration operation as hydrogeologists do when dealing with real-world, unknown groundwater systems. The RMSE between simulated hydraulic heads and the synthetic "observed" values is used as the objective function. But instead of simply "calibrating" model parameters, we explore the value of the objective function in the parameter space (hydraulic conductivity, storage coefficient and total recharge). We highlight the occurrence of an ellipsoidal "null space", where distinct parameter sets lead to equally low values for the objective function. The optimum of the objective function is not unique, which leads to a range of possible values for the SY. With a large confidence interval for the SY, the use of modeling results for decision-making is challenging. We argue that prior to modeling operations, efforts must be invested so as to narrow the intervals of likely parameter values. Parameter space exploration is effective for estimating SY uncertainty, but not efficient because of its computational burden and is therefore inapplicable for real world
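
    A toy sketch of the parameter-space exploration described above: evaluate an RMSE objective over a grid of (K, S, R), collect the near-optimal "null space", and report the spread of the corresponding SY estimates. The head and SY functions below are invented stand-ins for the MODFLOW model.

```python
import numpy as np

def heads(K, S, R):
    """Toy head response at three observation points, standing in for the MODFLOW run."""
    return np.array([10.0 * R / K, 8.0 * R / K + 0.5 * S, 6.0 * R / K + S])

def sustainable_yield(K, S, R):
    """Toy SY estimate: a recharge fraction penalized by stream-depletion sensitivity."""
    return 0.6 * R - 0.1 * R * S / K

# Synthetic "observed" heads produced by a hidden true parameter set plus noise.
obs = heads(K=5.0, S=0.1, R=2.0) + np.random.default_rng(5).normal(0.0, 0.05, 3)

# Grid exploration of the objective function in (K, S, R) space.
records = []
for K in np.linspace(2.0, 10.0, 40):
    for S in np.linspace(0.02, 0.3, 40):
        for R in np.linspace(1.0, 4.0, 40):
            rmse = np.sqrt(np.mean((heads(K, S, R) - obs) ** 2))
            records.append((rmse, sustainable_yield(K, S, R)))
records = np.array(records)

# "Null space": parameter sets that fit the observations nearly as well as the best one.
near_optimal = records[records[:, 0] < records[:, 0].min() + 0.05]
print(f"SY spread over near-optimal parameter sets: "
      f"{near_optimal[:, 1].min():.2f} to {near_optimal[:, 1].max():.2f}")
```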

  7. LDRD Final Report: Capabilities for Uncertainty in Predictive Science.

    SciTech Connect

    Phipps, Eric Todd; Eldred, Michael S; Salinger, Andrew G.; Webster, Clayton G.

    2008-10-01

    Predictive simulation of systems comprised of numerous interconnected, tightly coupled components promises to help solve many problems of scientific and national interest. However, predictive simulation of such systems is extremely challenging due to the coupling of a diverse set of physical and biological length and time scales. This report investigates uncertainty quantification methods for such systems that attempt to exploit their structure to gain computational efficiency. The traditional layering of uncertainty quantification around nonlinear solution processes is inverted to allow heterogeneous uncertainty quantification methods to be applied to each component in a coupled system. Moreover, this approach allows stochastic dimension reduction techniques to be applied at each coupling interface. The mathematical feasibility of these ideas is investigated in this report, and mathematical formulations for the resulting stochastically coupled nonlinear systems are developed.

  8. Uncertainty Quantification For Gas-Surface Interaction In Plasmatron Facility

    NASA Astrophysics Data System (ADS)

    Villedieu, N.; Panerai, F.; Chazot, O.; Magin, T. E.

    2011-08-01

    To design Thermal Protection Systems for atmospheric re-entry, it is crucial to take into account the catalytic properties of the material. At the von Karman Institute, these properties are determined by combining experiments performed in the Plasmatron facility with boundary layer code simulations. During this process, many uncertainties are involved, in both the experimental data and the physical model. The aim of this article is to develop an uncertainty quantification methodology to compute the error bars on the rebuilt enthalpy and on the effective catalytic recombination coefficient due to the uncertainties in the experimental data. The purpose is also to understand which uncertainties have the largest impact on the error. We have coupled the open source software DAKOTA from Sandia National Laboratories with the VKI boundary layer code.

  9. A new look at the theory uncertainty of ϵK

    SciTech Connect

    Ligeti, Z.; Sala, F.

    2016-09-01

    The observable ϵK is sensitive to flavor violation at some of the highest scales. While its experimental uncertainty is at the half percent level, the theoretical one is in the ballpark of 15%. We explore the nontrivial dependence of the theory prediction and uncertainty on various conventions, such as the phase of the kaon fields. In particular, we show how such a rephasing makes the short-distance contribution of the box diagram with two charm quarks, ηcc, purely real. Our results allow a slight reduction of the total theoretical uncertainty of ϵK, while increasing the relative impact of the imaginary part of the long-distance contribution, underlining the need to compute it reliably. We also give updated bounds on the new physics operators that contribute to ϵK.

  10. A new look at the theory uncertainty of ϵK

    DOE PAGES

    Ligeti, Z.; Sala, F.

    2016-09-01

    The observable ϵK is sensitive to flavor violation at some of the highest scales. While its experimental uncertainty is at the half percent level, the theoretical one is in the ballpark of 15%. We explore the nontrivial dependence of the theory prediction and uncertainty on various conventions, such as the phase of the kaon fields. In particular, we show how such a rephasing makes the short-distance contribution of the box diagram with two charm quarks, ηcc, purely real. Our results allow a slight reduction of the total theoretical uncertainty of ϵK, while increasing the relative impact of the imaginary part of the long-distance contribution, underlining the need to compute it reliably. We also give updated bounds on the new physics operators that contribute to ϵK.

  11. Statistical assessment of predictive modelling uncertainty: a geophysical case study

    NASA Astrophysics Data System (ADS)

    Barzaghi, Riccardo; Marotta, Anna Maria; Splendore, Raffaele; De Gaetani, Carlo; Borghi, Alessandra

    2014-04-01

    When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. This paper proposes a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto- and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical, spherical, thin-sheet finite element model of the Mediterranean region. Using a χ2 analysis, the model's estimated horizontal velocities are compared with the velocities estimated from permanent GPS stations while taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values and might help sharpen the identification of the best-fitting geophysical models.
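
    The comparison statistic implied above has the familiar generalized chi-square form, with the fitted model covariance added to the GPS covariance (notation assumed from the description, not copied from the paper):

```latex
% Generalized chi-square comparison of model and GPS horizontal velocities, with the
% fitted model covariance C_mod included alongside the GPS covariance C_GPS
% (notation assumed; not copied from the paper).
\begin{equation}
  \chi^2 = \left(\mathbf{v}_{\mathrm{mod}}-\mathbf{v}_{\mathrm{GPS}}\right)^{\mathsf T}
           \left(\mathbf{C}_{\mathrm{mod}}+\mathbf{C}_{\mathrm{GPS}}\right)^{-1}
           \left(\mathbf{v}_{\mathrm{mod}}-\mathbf{v}_{\mathrm{GPS}}\right)
\end{equation}
```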

  12. Measuring the uncertainties of discharge measurements: interlaboratory experiments in hydrometry

    NASA Astrophysics Data System (ADS)

    Le Coz, Jérôme; Blanquart, Bertrand; Pobanz, Karine; Dramais, Guillaume; Pierrefeu, Gilles; Hauet, Alexandre; Despax, Aurélien

    2015-04-01

    Quantifying the uncertainty of streamflow data is key for hydrological sciences. The conventional uncertainty analysis based on error propagation techniques is restricted by the absence of traceable discharge standards and by the weight of difficult-to-predict errors related to the operator, procedure and measurement environment. Field interlaboratory experiments recently emerged as an efficient, standardized method to 'measure' the uncertainties of a given streamgauging technique in given measurement conditions. Both uncertainty approaches are compatible and should be developed jointly in the field of hydrometry. In recent years, several interlaboratory experiments have been reported by different hydrological services. They involved different streamgauging techniques, including acoustic profilers (ADCP), current-meters and handheld radars (SVR). Uncertainty analysis was not always their primary goal: most often, testing the proficiency and homogeneity of instruments, makes and models, procedures and operators was the original motivation. When interlaboratory experiments are processed for uncertainty analysis, once outliers have been discarded, all participants are assumed to be equally skilled and to apply the same streamgauging technique in equivalent conditions. A universal requirement is that all participants simultaneously measure the same discharge, which shall be kept constant within negligible variations. To the best of our knowledge, we were the first to apply the interlaboratory method for computing the uncertainties of streamgauging techniques, according to the authoritative international documents (ISO standards). Several specific issues arise due to the measurement conditions in outdoor canals and rivers. The main limitation is that the best available river discharge references are usually too uncertain to quantify the bias of the streamgauging technique, i.e. the systematic errors that are common to all participants in the experiment. A reference or a

  13. Global hydrology modelling and uncertainty: running multiple ensembles with a campus grid.

    PubMed

    Gosling, Simon N; Bretherton, Dan; Haines, Keith; Arnell, Nigel W

    2010-09-13

    Uncertainties associated with the representation of various physical processes in global climate models (GCMs) mean that, when projections from GCMs are used in climate change impact studies, the uncertainty propagates through to the impact estimates. A complete treatment of this 'climate model structural uncertainty' is necessary so that decision-makers are presented with an uncertainty range around the impact estimates. This uncertainty is often underexplored owing to the human and computer processing time required to perform the numerous simulations. Here, we present a 189-member ensemble of global river runoff and water resource stress simulations that adequately address this uncertainty. Following several adaptations and modifications, the ensemble creation time has been reduced from 750 h on a typical single-processor personal computer to 9 h of high-throughput computing on the University of Reading Campus Grid. Here, we outline the changes that had to be made to the hydrological impacts model and to the Campus Grid, and present the main results. We show that, although there is considerable uncertainty in both the magnitude and the sign of regional runoff changes across different GCMs with climate change, there is much less uncertainty in runoff changes for regions that experience large runoff increases (e.g. the high northern latitudes and Central Asia) and large runoff decreases (e.g. the Mediterranean). Furthermore, there is consensus that the percentage of the global population at risk to water resource stress will increase with climate change.

  14. An Uncertainty Quantification Framework for Prognostics and Condition-Based Monitoring

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Goebel, Kai

    2014-01-01

    This paper presents a computational framework for uncertainty quantification in prognostics in the context of condition-based monitoring of aerospace systems. The different sources of uncertainty and the various uncertainty quantification activities in condition-based prognostics are outlined in detail, and it is demonstrated that the Bayesian subjective approach is suitable for interpreting uncertainty in online monitoring. A state-space model-based framework for prognostics, that can rigorously account for the various sources of uncertainty, is presented. Prognostics consists of two important steps. First, the state of the system is estimated using Bayesian tracking, and then, the future states of the system are predicted until failure, thereby computing the remaining useful life of the system. The proposed framework is illustrated using the power system of a planetary rover test-bed, which is being developed and studied at NASA Ames Research Center.
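
    A minimal sketch of the two steps (Bayesian tracking, then propagation to a failure threshold to obtain a remaining-useful-life distribution), using a scalar degradation state and a bootstrap particle filter; the model, noise levels and measurements are illustrative, not the rover test-bed model.

```python
import numpy as np

rng = np.random.default_rng(6)
n_particles, dt, threshold = 5000, 1.0, 1.0

# Scalar degradation model: the state x grows with an uncertain rate; z is a noisy reading.
rates = rng.normal(0.02, 0.005, n_particles)           # per-particle model parameter
x = np.zeros(n_particles)                              # particle states

# Step 1: Bayesian tracking over a few observed cycles (bootstrap particle filter).
for z in [0.02, 0.05, 0.07, 0.10]:                     # illustrative measurements
    x = x + rates * dt + rng.normal(0.0, 0.002, n_particles)   # process model
    w = np.exp(-0.5 * ((z - x) / 0.01) ** 2) + 1e-12           # Gaussian likelihood
    idx = rng.choice(n_particles, n_particles, p=w / w.sum())  # resample
    x, rates = x[idx], rates[idx]

# Step 2: propagate each particle until it crosses the failure threshold.
rul = (threshold - x) / np.maximum(rates, 1e-6)        # remaining cycles per particle
print(f"RUL median = {np.median(rul):.0f} cycles, "
      f"90% interval = [{np.percentile(rul, 5):.0f}, {np.percentile(rul, 95):.0f}]")
```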

  15. Multifidelity, Multidisciplinary Design Under Uncertainty with Non-Intrusive Polynomial Chaos

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Gumbert, Clyde

    2017-01-01

    The primary objective of this work is to develop an approach for multifidelity uncertainty quantification and to lay the framework for future design under uncertainty efforts. In this study, multifidelity is used to describe both the fidelity of the modeling of the physical systems and the difference in the uncertainty in each of the models. For computational efficiency, a multifidelity surrogate modeling approach based on non-intrusive polynomial chaos using the point-collocation technique is developed for the treatment of both multifidelity modeling and multifidelity uncertainty modeling. Two stochastic model problems are used to demonstrate the developed methodologies: a transonic airfoil model and a multidisciplinary aircraft analysis model. The results of both showed that the multifidelity modeling approach was able to reproduce the output uncertainty predicted by the high-fidelity model at a significantly reduced computational cost.
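
    One common way to realize such a multifidelity surrogate is an additive correction: a chaos expansion of the low-fidelity model built from many cheap samples, plus an expansion of the high-minus-low discrepancy built from a few expensive samples. The sketch below assumes this additive form and toy models; it does not reproduce the paper's point-collocation details.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(7)
P = 4                                     # chaos order

def lofi(xi):                             # cheap model (illustrative)
    return np.sin(xi) + 0.1 * xi

def hifi(xi):                             # expensive model (illustrative)
    return lofi(xi) + 0.05 * xi ** 2

xi_lo = rng.standard_normal(500)          # many low-fidelity sample points
xi_hi = rng.standard_normal(25)           # few high-fidelity sample points

# Least-squares chaos coefficients for the low-fidelity model and for the discrepancy.
c_lo, *_ = np.linalg.lstsq(hermevander(xi_lo, P), lofi(xi_lo), rcond=None)
c_corr, *_ = np.linalg.lstsq(hermevander(xi_hi, P), hifi(xi_hi) - lofi(xi_hi), rcond=None)
c_mf = c_lo + c_corr                      # multifidelity surrogate coefficients

# Check the surrogate statistics against brute-force high-fidelity sampling.
xi_test = rng.standard_normal(200_000)
y_mf = hermevander(xi_test, P) @ c_mf
print(f"multifidelity surrogate mean/std: {y_mf.mean():.4f} / {y_mf.std():.4f}")
print(f"direct high-fidelity mean/std: {hifi(xi_test).mean():.4f} / {hifi(xi_test).std():.4f}")
```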

  16. Laser triangulation: fundamental uncertainty in distance measurement.

    PubMed

    Dorsch, R G; Häusler, G; Herrmann, J M

    1994-03-01

    We discuss the uncertainty limit in distance sensing by laser triangulation. The uncertainty in distance measurement of laser triangulation sensors and other coherent sensors is limited by speckle noise. Speckle arises because of the coherent illumination in combination with rough surfaces. A minimum limit on the distance uncertainty is derived through speckle statistics. This uncertainty is a function of wavelength, observation aperture, and speckle contrast in the spot image. Surprisingly, it is the same distance uncertainty that we obtained from a single-photon experiment and from Heisenberg's uncertainty principle. Experiments confirm the theory. An uncertainty principle connecting lateral resolution and distance uncertainty is introduced. Design criteria for a sensor with minimum distance uncertainty are determined: small temporal coherence, small spatial coherence, a large observation aperture.

  17. Visualizing Flow of Uncertainty through Analytical Processes.

    PubMed

    Wu, Yingcai; Yuan, Guo-Xun; Ma, Kwan-Liu

    2012-12-01

    Uncertainty can arise in any stage of a visual analytics process, especially in data-intensive applications with a sequence of data transformations. Additionally, throughout the process of multidimensional, multivariate data analysis, uncertainty due to data transformation and integration may split, merge, increase, or decrease. This dynamic characteristic along with other features of uncertainty pose a great challenge to effective uncertainty-aware visualization. This paper presents a new framework for modeling uncertainty and characterizing the evolution of the uncertainty information through analytical processes. Based on the framework, we have designed a visual metaphor called uncertainty flow to visually and intuitively summarize how uncertainty information propagates over the whole analysis pipeline. Our system allows analysts to interact with and analyze the uncertainty information at different levels of detail. Three experiments were conducted to demonstrate the effectiveness and intuitiveness of our design.

  18. Decomposing river model output uncertainty in its contributing sources: a Belgian case study

    NASA Astrophysics Data System (ADS)

    Meert, Pieter; Nossent, Jiri; Pereira, Fernando; Willems, Patrick

    2017-04-01

    The use of hydrologic and hydrodynamic models in flood forecasting systems is inevitably connected with the existence of errors and uncertainties. These uncertainties are often significant and should be recognized in any modelling application. Assessing the degree of uncertainty, its impact on model results, and its decomposition into major contributing sources is therefore critically important. This provides decision makers with additional information on model accuracy, on the relative importance of individual uncertainty sources and, hence, also on efficient approaches to reduce the total model output uncertainty. In this paper, we demonstrate a data-based approach for uncertainty quantification and decomposition of river model outputs on a case study of the Belgian river Dender. Model input and model parameter uncertainties are assessed by comparing model results for uncertain inputs and parameters with historical observations or are based on expert elicitation. These uncertainties are subsequently propagated through the model, based on a Latin Hypercube Sampling procedure, to quantify the impact at locations of interest. The total model output uncertainty is separated into bias and variance components of all quantifiable error sources and a rest-term uncertainty component that lumps the remaining unquantifiable and secondary error sources. Performing a detailed uncertainty analysis within a tolerable timeframe is virtually impossible with a detailed full hydrodynamic river model. Instead, a grey-box model, based on a concatenation of reservoir elements, is used. This grey-box model succeeds in accurately reproducing the detailed model results, with an important reduction of the computational burden. The uncertainty decomposition results show that the relative importance of each uncertainty source is far from constant. The bias and variance contributions were found to vary over the range of modelled water levels and discharges, but they also differ spatially and for each
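
    The Latin Hypercube propagation step can be sketched as follows, with a toy grey-box (reservoir-type) response standing in for the Dender model; the factor names, distributions and response function are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm, qmc

# Latin Hypercube design over three uncertain factors (illustrative distributions):
# a rainfall multiplier, a reservoir constant [h], and a rating-curve exponent.
sampler = qmc.LatinHypercube(d=3, seed=8)
u = sampler.random(n=1000)
rain_mult = norm.ppf(u[:, 0], loc=1.0, scale=0.15)
k_res = 5.0 + 15.0 * u[:, 1]
exponent = 1.3 + 0.4 * u[:, 2]

def grey_box_peak(rain_mult, k_res, exponent, rain_peak=20.0):
    """Toy reservoir-type surrogate for the peak discharge [m3/s]."""
    inflow = rain_mult * rain_peak
    return 50.0 * (inflow / k_res) ** exponent

q_peak = grey_box_peak(rain_mult, k_res, exponent)
print(f"peak discharge: mean = {q_peak.mean():.1f} m3/s, "
      f"90% interval = [{np.percentile(q_peak, 5):.1f}, {np.percentile(q_peak, 95):.1f}]")
```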

  19. Assessment of Laminar, Convective Aeroheating Prediction Uncertainties for Mars Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Prabhu, Dinesh K.

    2011-01-01

    An assessment of computational uncertainties is presented for numerical methods used by NASA to predict laminar, convective aeroheating environments for Mars entry vehicles. A survey was conducted of existing experimental heat-transfer and shock-shape data for high enthalpy, reacting-gas CO2 flows and five relevant test series were selected for comparison to predictions. Solutions were generated at the experimental test conditions using NASA state-of-the-art computational tools and compared to these data. The comparisons were evaluated to establish predictive uncertainties as a function of total enthalpy and to provide guidance for future experimental testing requirements to help lower these uncertainties.

  20. Assessment of Laminar, Convective Aeroheating Prediction Uncertainties for Mars-Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Prabhu, Dinesh K.

    2013-01-01

    An assessment of computational uncertainties is presented for numerical methods used by NASA to predict laminar, convective aeroheating environments for Mars-entry vehicles. A survey was conducted of existing experimental heat transfer and shock-shape data for high-enthalpy reacting-gas CO2 flows, and five relevant test series were selected for comparison with predictions. Solutions were generated at the experimental test conditions using NASA state-of-the-art computational tools and compared with these data. The comparisons were evaluated to establish predictive uncertainties as a function of total enthalpy and to provide guidance for future experimental testing requirements to help lower these uncertainties.

  1. Uncertainty quantification in volumetric Particle Image Velocimetry

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos

    2016-11-01

    Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV. However, no framework exists for three-dimensional volumetric PIV. In volumetric PIV the measurement uncertainty is a function of reconstructed three-dimensional particle location that in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position which is a function of particle detection and mapping function uncertainty. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty that is estimated as an extension of the 2D PIV uncertainty framework. Finally the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases the variation of estimated uncertainty with the elemental volumetric PIV error sources are also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.

  2. Probabilistic forecasts based on radar rainfall uncertainty

    NASA Astrophysics Data System (ADS)

    Liguori, S.; Rico-Ramirez, M. A.

    2012-04-01

    The potential advantages resulting from integrating weather radar rainfall estimates in hydro-meteorological forecasting systems are limited by the inherent uncertainty affecting radar rainfall measurements, which is due to various sources of error [1-3]. The improvement of quality control and correction techniques is recognized to play a role in the future improvement of radar-based flow predictions. However, the knowledge of the uncertainty affecting radar rainfall data can also be effectively used to build a hydro-meteorological forecasting system in a probabilistic framework. This work discusses the results of the implementation of a novel probabilistic forecasting system developed to improve ensemble predictions over a small urban area located in the North of England. An ensemble of radar rainfall fields can be determined as the sum of a deterministic component and a perturbation field, the latter being informed by the knowledge of the spatial-temporal characteristics of the radar error assessed with reference to rain-gauge measurements. This approach is similar to the REAL system [4] developed for use in the Southern Alps. The radar uncertainty estimate can then be propagated with a nowcasting model, used to extrapolate an ensemble of radar rainfall forecasts, which can ultimately drive hydrological ensemble predictions. A radar ensemble generator has been calibrated using radar rainfall data made available from the UK Met Office after applying post-processing and correction algorithms [5-6]. One-hour rainfall accumulations from 235 rain gauges recorded for the year 2007 have provided the reference to determine the radar error. Statistics describing the spatial characteristics of the error (i.e. mean and covariance) have been computed off-line at gauge locations, along with the parameters describing the error temporal correlation. A system has then been set up to impose the space-time error properties on stochastic perturbations, generated in real-time at
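
    A sketch of the ensemble-generation step: add spatially correlated Gaussian perturbations, with a prescribed error mean and covariance, to a deterministic rainfall field. The error statistics below are invented placeholders, not the values calibrated against the UK rain-gauge data.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy 1-D "field" of 50 radar pixels with a deterministic rainfall estimate [mm/h].
n_pix, n_members = 50, 20
det_field = 2.0 + np.sin(np.linspace(0.0, 3.0 * np.pi, n_pix))

# Prescribed radar-error statistics (placeholders): mean bias and exponential covariance.
bias = -0.1 * np.ones(n_pix)
dist = np.abs(np.subtract.outer(np.arange(n_pix), np.arange(n_pix)))
cov = 0.25 * np.exp(-dist / 10.0)          # variance 0.25, correlation length 10 pixels

# Ensemble members: deterministic field plus correlated Gaussian perturbations.
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_pix))
perturb = bias + (L @ rng.standard_normal((n_pix, n_members))).T
ensemble = np.clip(det_field + perturb, 0.0, None)     # rainfall cannot be negative

print(f"mean ensemble spread across pixels: {ensemble.std(axis=0).mean():.3f} mm/h")
```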

  3. Estimation of uncertainty for fatigue growth rate at cryogenic temperatures

    NASA Astrophysics Data System (ADS)

    Nyilas, Arman; Weiss, Klaus P.; Urbach, Elisabeth; Marcinek, Dawid J.

    2014-01-01

    Fatigue crack growth rate (FCGR) measurement data for high-strength austenitic alloys in cryogenic environments generally suffer from a high degree of data scatter, in particular in the ΔK regime below 25 MPa√m. Using standard mathematical smoothing techniques ultimately forces a linear relationship in the stage II regime (crack propagation rate versus ΔK) in a double-log field, known as the Paris law. However, the bandwidth of uncertainty then relies somewhat arbitrarily upon the researcher's interpretation. The present paper deals with applying the uncertainty concept to FCGR data as given by the GUM (Guide to the Expression of Uncertainty in Measurement), which since 1993 has been the recommended procedure to avoid subjective estimation of error bands. Within this context, in the absence of a true value, the best estimate is evaluated by a statistical method that uses the crack propagation law as the mathematical measurement model equation and identifies all input parameters. Each parameter entering the measurement was processed using the Gaussian distribution law, with partial differentiation of the model terms to estimate the sensitivity coefficients. The combined standard uncertainty, determined for each term together with its computed sensitivity coefficient, finally yields the measurement uncertainty of the FCGR test result. The described uncertainty procedure has been applied within the framework of ITER to a recent FCGR measurement for high-strength, high-toughness Type 316LN material tested at 7 K using a standard ASTM proportional compact tension specimen. The determined values of the Paris law constants, such as C0 and the exponent m, as best estimates along with their uncertainty values, may serve as a realistic basis for the life expectancy of cyclically loaded members.
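
    In GUM terms, the combined standard uncertainty of the result follows from the measurement model by first-order propagation with sensitivity coefficients; written here for the Paris-law form, with an indicative (assumed) set of inputs:

```latex
% GUM first-order propagation applied to a Paris-law measurement model
% da/dN = C_0 (\Delta K)^m, with x_i the input quantities (loads, crack length,
% specimen geometry, ...) entering the evaluation; input list indicative only.
\begin{align}
  y &= f(x_1,\dots,x_n) = C_0\,(\Delta K)^{m},\\
  u_c^2(y) &= \sum_{i=1}^{n}\left(\frac{\partial f}{\partial x_i}\right)^{2} u^2(x_i)
  \qquad\text{(uncorrelated inputs assumed)}.
\end{align}
```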

  4. 'spup' - an R package for uncertainty propagation in spatial environmental modelling

    NASA Astrophysics Data System (ADS)

    Sawicka, Kasia; Heuvelink, Gerard

    2016-04-01

    Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability, including for case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation, the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. Selected static and interactive visualization methods that are understandable by non-experts with limited background in

  5. 'spup' - an R package for uncertainty propagation analysis in spatial environmental modelling

    NASA Astrophysics Data System (ADS)

    Sawicka, Kasia; Heuvelink, Gerard

    2017-04-01

    Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability and the ability to deal with case studies involving spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation, the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. Selected visualization methods that are understandable by non-experts with limited background in

  6. NASA Team 2 Sea Ice Concentration Algorithm Retrieval Uncertainty

    NASA Technical Reports Server (NTRS)

    Brucker, Ludovic; Cavalieri, Donald J.; Markus, Thorsten; Ivanoff, Alvaro

    2014-01-01

    Satellite microwave radiometers are widely used to estimate sea ice cover properties (concentration, extent, and area) through the use of sea ice concentration (IC) algorithms. Rare are the algorithms providing associated IC uncertainty estimates. Algorithm uncertainty estimates are needed to assess accurately global and regional trends in IC (and thus extent and area), and to improve sea ice predictions on seasonal to interannual timescales using data assimilation approaches. This paper presents a method to provide relative IC uncertainty estimates using the enhanced NASA Team (NT2) IC algorithm. The proposed approach takes advantage of the NT2 calculations and solely relies on the brightness temperatures (TBs) used as input. NT2 IC and its associated relative uncertainty are obtained for both the Northern and Southern Hemispheres using the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) TB. NT2 IC relative uncertainties estimated on a footprint-by-footprint swath-by-swath basis were averaged daily over each 12.5-km grid cell of the polar stereographic grid. For both hemispheres and throughout the year, the NT2 relative uncertainty is less than 5%. In the Southern Hemisphere, it is low in the interior ice pack, and it increases in the marginal ice zone up to 5%. In the Northern Hemisphere, areas with high uncertainties are also found in the high IC area of the Central Arctic. Retrieval uncertainties are greater in areas corresponding to NT2 ice types associated with deep snow and new ice. Seasonal variations in uncertainty show larger values in summer as a result of melt conditions and greater atmospheric contributions. Our analysis also includes an evaluation of the NT2 algorithm sensitivity to AMSR-E sensor noise. There is a 60% probability that the IC does not change (to within the computed retrieval precision of 1%) due to sensor noise, and the cumulated probability shows that there is a 90% chance that the IC varies by less than

  7. An Uncertainty-Aware Approach for Exploratory Microblog Retrieval.

    PubMed

    Liu, Mengchen; Liu, Shixia; Zhu, Xizhou; Liao, Qinying; Wei, Furu; Pan, Shimei

    2016-01-01

    Although there has been a great deal of interest in analyzing customer opinions and breaking news in microblogs, progress has been hampered by the lack of an effective mechanism to discover and retrieve data of interest from microblogs. To address this problem, we have developed an uncertainty-aware visual analytics approach to retrieve salient posts, users, and hashtags. We extend an existing ranking technique to compute a multifaceted retrieval result: the mutual reinforcement rank of a graph node, the uncertainty of each rank, and the propagation of uncertainty among different graph nodes. To illustrate the three facets, we have also designed a composite visualization with three visual components: a graph visualization, an uncertainty glyph, and a flow map. The graph visualization with glyphs, the flow map, and the uncertainty analysis together enable analysts to effectively find the most uncertain results and interactively refine them. We have applied our approach to several Twitter datasets. Qualitative evaluation and two real-world case studies demonstrate the promise of our approach for retrieving high-quality microblog data.

  8. Measures, Uncertainties, and Significance Test in Operational ROC A