Science.gov

Sample records for avt-147 computational uncertainty

  1. Technical Evaluation Report for Symposium AVT-147: Computational Uncertainty in Military Vehicle Design

    NASA Technical Reports Server (NTRS)

    Radespiel, Rolf; Hemsch, Michael J.

    2007-01-01

    The complexity of modern military systems, together with the cost and difficulty of experimentally verifying system and subsystem designs, makes high-fidelity simulation an attractive alternative for design and development. The predictive ability of such simulations, including computational fluid dynamics (CFD) and computational structural mechanics (CSM), has matured significantly. However, for numerical simulations to be used with confidence in design and development, quantitative measures of uncertainty must be available. The AVT-147 Symposium was established to compile state-of-the-art methods of assessing computational uncertainty, to identify future research and development needs associated with these methods, and to present examples of how these needs are being addressed and how the methods are being applied. Papers were solicited that address uncertainty estimation associated with high-fidelity, physics-based simulations. The solicitation included papers that identify sources of error and uncertainty in numerical simulation from either the industry perspective or from the disciplinary or cross-disciplinary research perspective. Examples of the industry perspective were to include how computational uncertainty methods are used to reduce system risk at various stages of design or development.

  2. Uncertainty in Computational Aerodynamics

    NASA Technical Reports Server (NTRS)

    Luckring, J. M.; Hemsch, M. J.; Morrison, J. H.

    2003-01-01

    An approach is presented to treat computational aerodynamics as a process, subject to the fundamental quality assurance principles of process control and process improvement. We consider several aspects affecting uncertainty for the computational aerodynamic process and present a set of stages to determine the level of management required to meet risk assumptions desired by the customer of the predictions.

  3. Credible Computations: Standard and Uncertainty

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B.; VanDalsem, William (Technical Monitor)

    1995-01-01

    The discipline of computational fluid dynamics (CFD) is at a crossroad. Most of the significant advances related to computational methods have taken place. The emphasis is now shifting from methods to results. Significant efforts are being made to apply CFD to design problems. The value of CFD results in design depends on the credibility of the computed results for the intended use. The process of establishing credibility requires a standard so that there is consistency and uniformity in this process and in the interpretation of its outcome. The key element for establishing credibility is the quantification of uncertainty. This paper presents salient features of a proposed standard and a procedure for determining the uncertainty. A customer of CFD products - computer codes and computed results - expects the following: a computer code, in terms of its logic, numerics, and fluid dynamics, and the results generated by this code are in compliance with specified requirements. This expectation is fulfilled by verification and validation of these requirements. The verification process assesses whether the problem is solved correctly, and the validation process determines whether the right problem is solved. Standards for these processes are recommended. There is always some uncertainty, even if one uses validated models and verified computed results. The value of this uncertainty is important in the design process. This value is obtained by conducting a sensitivity-uncertainty analysis. Sensitivity analysis is generally defined as the procedure for determining the sensitivities of output parameters to input parameters. This analysis is a necessary step in the uncertainty analysis, and its results highlight which computed quantities and integrated quantities need to be determined accurately and which quantities do not require such attention. Uncertainty analysis is generally defined as the analysis of the effect of the uncertainties

  4. Uncertainty versus computer response time

    SciTech Connect

    Rowe, W.D.

    1994-12-31

    Interactive on-line presentation of risk analysis results with immediate "what if" capability is now possible with available microcomputer technology. This can provide an effective means of presenting the risk results, the decision possibilities, and the underlying assumptions to decision makers, stakeholders, and the public. However, the limited computational power of microcomputers requires a trade-off between the precision of the analysis and the computing and display response time. Fortunately, the uncertainties in the risk analysis are usually so large that extreme precision is often unwarranted. Therefore, risk analyses used for this purpose must include trade-offs between precision and processing time, and the uncertainties introduced must be put into perspective.

  5. Numerical uncertainty in computational engineering and physics

    SciTech Connect

    Hemez, Francois M

    2009-01-01

    Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately, assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes as a distant third to test-analysis comparison and model calibration. This publication aims to raise awareness that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty, including experimental variability, parametric uncertainty and modeling assumptions. The concepts of consistency, convergence and truncation error are reviewed to explain the relationship between the exact solution of the continuous equations, the solution of the modified equations and discrete solutions computed by a code. The current state of the practice of code and solution verification activities is discussed. An example in the discipline of hydrodynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds of solution uncertainty in cases where the exact solution of the continuous equations, or of its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.
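
    The bounding idea sketched in this abstract is commonly realized through Richardson extrapolation and a grid-convergence-style index. Below is a minimal, self-contained illustration on a synthetic second-order result; the solver output, refinement ratio and safety factor are hypothetical stand-ins, not values from the paper:

```python
import math

# Synthetic "code output" on three systematically refined grids:
# f(h) = f_exact + C * h**2  (a hypothetical second-order scheme)
f_exact, C = 1.0, 0.5
r = 2.0                      # grid refinement ratio
h = [0.1, 0.2, 0.4]          # fine, medium, coarse spacings
f1, f2, f3 = (f_exact + C * hi**2 for hi in h)  # f1 = finest-grid result

# Observed order of accuracy from the three grid levels
p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)

# Richardson extrapolation: estimate of the zero-spacing solution
f_ext = f1 + (f1 - f2) / (r**p - 1.0)

# Grid Convergence Index: a conservative numerical-uncertainty band
# around the fine-grid result (Fs = 1.25 is the customary safety factor)
Fs = 1.25
gci_fine = Fs * abs((f1 - f2) / f1) / (r**p - 1.0)

print(f"observed order p = {p:.3f}")
print(f"extrapolated solution = {f_ext:.6f}")
print(f"GCI (fine grid) = {gci_fine:.2e}")
```

    In cases where the exact solution is unknown, the GCI band plays exactly the role of the numerical-uncertainty contribution to the overall "budget" the author describes.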

  6. Uncertainty and error in computational simulations

    SciTech Connect

    Oberkampf, W.L.; Diegert, K.V.; Alvin, K.F.; Rutherford, B.M.

    1997-10-01

    The present paper addresses the question: "What are the general classes of uncertainty and error sources in complex, computational simulations?" This is the first step of a two-step process to develop a general methodology for quantitatively estimating the global modeling and simulation uncertainty in computational modeling and simulation. The second step is to develop a general mathematical procedure for representing, combining and propagating all of the individual sources through the simulation. The authors develop a comprehensive view of the general phases of modeling and simulation. The phases proposed are: conceptual modeling of the physical system, mathematical modeling of the system, discretization of the mathematical model, computer programming of the discrete model, numerical solution of the model, and interpretation of the results. This new view is built upon combining phases recognized in the disciplines of operations research and numerical solution methods for partial differential equations. The characteristics and activities of each of these phases are discussed in general, with examples given for the fields of computational fluid dynamics and heat transfer. They argue that a clear distinction should be made between uncertainty and error that can arise in each of these phases. The present definitions for uncertainty and error are inadequate and, therefore, they propose comprehensive definitions for these terms. Specific classes of uncertainty and error sources are then defined that can occur in each phase of modeling and simulation. The numerical sources of error considered apply regardless of whether the discretization procedure is based on finite elements, finite volumes, or finite differences. To better explain the broad types of sources of uncertainty and error, and the utility of their categorization, they discuss a coupled-physics example simulation.

  7. Some Aspects of uncertainty in computational fluid dynamics results

    NASA Technical Reports Server (NTRS)

    Mehta, U. B.

    1991-01-01

    Uncertainties are inherent in computational fluid dynamics (CFD). These uncertainties need to be systematically addressed and managed. Sources of these uncertainties are discussed. Some recommendations are made for quantification of CFD uncertainties. A practical method of uncertainty analysis is based on sensitivity analysis. When CFD is used to design fluid dynamic systems, sensitivity-uncertainty analysis is essential.
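
    The sensitivity-based uncertainty analysis recommended here can be sketched as first-order propagation with finite-difference sensitivities. The model and the input uncertainties below are hypothetical stand-ins for a CFD output and its inputs:

```python
import math

def model(x):
    """Stand-in for a CFD output, e.g. a force coefficient (hypothetical)."""
    a, b = x
    return a * a + a * b

def sensitivity_uncertainty(f, x, u, h=1e-5):
    """First-order propagation: u_y^2 = sum_i (df/dx_i * u_i)^2,
    with each sensitivity df/dx_i from a central finite difference."""
    u_y_sq = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2.0 * h)   # sensitivity of output to x_i
        u_y_sq += (dfdx * u[i]) ** 2
    return math.sqrt(u_y_sq)

x = [2.0, 3.0]    # nominal input levels
u = [0.1, 0.2]    # standard uncertainties of the inputs
u_y = sensitivity_uncertainty(model, x, u)
print(f"output uncertainty u_y = {u_y:.4f}")  # analytic value: sqrt(0.65)
```

    The sensitivities also show directly which inputs dominate the output uncertainty, which is the practical payoff of the sensitivity-uncertainty approach.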

  8. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321
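
    The core idea, numerical computation recast as probabilistic inference, can be sketched with Bayesian quadrature: a Gaussian-process prior on the integrand turns a quadrature rule into a posterior distribution over the integral. The kernel, length-scale and grid-based kernel integrals below are illustrative choices, not the authors' algorithms:

```python
import numpy as np

# Bayesian quadrature sketch: place a Gaussian-process prior on the
# integrand f, condition on a few evaluations, and read off a posterior
# *distribution* over the integral I = int_0^1 f(x) dx.
def k(a, b, ell=0.3):
    """Squared-exponential kernel (an illustrative prior choice)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

f = lambda x: np.sin(3.0 * x)          # integrand (stand-in for any f)
xn = np.linspace(0.0, 1.0, 8)          # evaluation nodes
y = f(xn)

# Kernel mean embeddings z_i = int k(x, x_i) dx and the double integral
# of k, computed here on a fine grid for clarity (closed forms exist).
g = np.linspace(0.0, 1.0, 2001)
w = np.full(g.size, 1.0 / (g.size - 1))
w[0] *= 0.5
w[-1] *= 0.5                           # trapezoid weights
z = w @ k(g, xn)                       # shape (8,)
kk = w @ k(g, g) @ w                   # double integral of the kernel

K = k(xn, xn) + 1e-8 * np.eye(xn.size)  # jitter for numerical stability
post_mean = z @ np.linalg.solve(K, y)   # posterior mean of the integral
post_var = kk - z @ np.linalg.solve(K, z)  # posterior variance

print(f"integral estimate = {post_mean:.5f} +/- {np.sqrt(max(post_var, 0)):.1e}")
print(f"true value        = {(1 - np.cos(3.0)) / 3.0:.5f}")
```

    The posterior variance is the method's own statement of numerical uncertainty: it shrinks as evaluation nodes are added, which is exactly the behaviour the review advocates exploiting.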

  9. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    SciTech Connect

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  10. Applying uncertainty quantification to multiphase flow computational fluid dynamics

    SciTech Connect

    Gel, A; Garg, R; Tong, C; Shahnam, M; Guenther, C

    2013-07-01

    Multiphase computational fluid dynamics plays a major role in the design and optimization of fossil-fuel-based reactors. There is a growing interest in accounting for the influence of uncertainties associated with physical systems to increase the reliability of computational-simulation-based engineering analysis. The U.S. Department of Energy's National Energy Technology Laboratory (NETL) has recently undertaken an initiative to characterize uncertainties associated with computer simulation of reacting multiphase flows encountered in energy-producing systems such as a coal gasifier. The current work presents preliminary results in applying non-intrusive parametric uncertainty quantification and propagation techniques with NETL's open-source multiphase computational fluid dynamics software MFIX. For this purpose, an open-source uncertainty quantification toolkit, PSUADE, developed at Lawrence Livermore National Laboratory (LLNL), has been interfaced with the MFIX software. In this study, the sources of uncertainty associated with numerical approximation and model form have been neglected, and only model input parametric uncertainty with forward propagation has been investigated, by constructing a surrogate model based on a data-fitted response surface for a multiphase flow demonstration problem. Monte Carlo simulation was employed for forward propagation of the aleatory input uncertainties. Several insights gained from these simulations are presented, such as how inadequate characterization of uncertainties can affect the reliability of the prediction results. Also, a global sensitivity study using Sobol' indices was performed to better understand the contribution of input parameters to the variability observed in the response variable.
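
    The non-intrusive workflow described, Monte Carlo forward propagation plus a Sobol' sensitivity study, can be sketched with a cheap analytic stand-in for the simulation code (the real study wraps MFIX runs via PSUADE; nothing below uses either tool):

```python
import random
import statistics

# Monte Carlo forward propagation of input uncertainty plus first-order
# Sobol' indices via the Saltelli estimator. The "model" is a cheap
# analytic stand-in for an expensive CFD code.
def model(x1, x2):
    return x1 + 2.0 * x2            # analytic indices: S1 = 0.2, S2 = 0.8

rng = random.Random(0)
N, d = 40000, 2
A = [[rng.random() for _ in range(d)] for _ in range(N)]
B = [[rng.random() for _ in range(d)] for _ in range(N)]

fA = [model(*row) for row in A]
fB = [model(*row) for row in B]
V = statistics.variance(fA + fB)    # total output variance

S = []
for i in range(d):
    # Evaluate on A with column i swapped in from B
    fABi = [model(*(b[j] if j == i else a[j] for j in range(d)))
            for a, b in zip(A, B)]
    S.append(statistics.mean(fb * (fab - fa)
                             for fa, fb, fab in zip(fA, fB, fABi)) / V)

print(f"mean output = {statistics.mean(fA):.3f}")
print(f"first-order Sobol' indices: S1 = {S[0]:.3f}, S2 = {S[1]:.3f}")
```

    In the non-intrusive setting, each `model(...)` call would be a full simulation run, which is why the study builds a data-fitted response surface before sampling.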

  11. Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models

    EPA Science Inventory

    The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...

  12. Uncertainty and Intelligence in Computational Stochastic Mechanics

    NASA Technical Reports Server (NTRS)

    Ayyub, Bilal M.

    1996-01-01

    Classical structural reliability assessment techniques are based on precise and crisp (sharp) definitions of failure and non-failure (survival) of a structure in meeting a set of strength, function and serviceability criteria. These definitions are provided in the form of performance functions and limit state equations. Thus, the criteria provide a dichotomous definition of what real physical situations represent, in the form of abrupt change from structural survival to failure. However, based on observing the failure and survival of real structures according to the serviceability and strength criteria, the transition from a survival state to a failure state and from serviceability criteria to strength criteria are continuous and gradual rather than crisp and abrupt. That is, an entire spectrum of damage or failure levels (grades) is observed during the transition to total collapse. In the process, serviceability criteria are gradually violated with monotonically increasing level of violation, and progressively lead into the strength criteria violation. Classical structural reliability methods correctly and adequately include the ambiguity sources of uncertainty (physical randomness, statistical and modeling uncertainty) by varying amounts. However, they are unable to adequately incorporate the presence of a damage spectrum, and do not consider in their mathematical framework any sources of uncertainty of the vagueness type. Vagueness can be attributed to sources of fuzziness, unclearness, indistinctiveness, sharplessness and grayness; whereas ambiguity can be attributed to nonspecificity, one-to-many relations, variety, generality, diversity and divergence. Using the nomenclature of structural reliability, vagueness and ambiguity can be accounted for in the form of realistic delineation of structural damage based on subjective judgment of engineers. For situations that require decisions under uncertainty with cost/benefit objectives, the risk of failure should

  13. MOUSE (MODULAR ORIENTED UNCERTAINTY SYSTEM): A COMPUTERIZED UNCERTAINTY ANALYSIS SYSTEM (FOR MICRO- COMPUTERS)

    EPA Science Inventory

    Environmental engineering calculations involving uncertainties, either in the model itself or in the data, are far beyond the capabilities of conventional analysis for any but the simplest of models. There exist a number of general-purpose computer simulation languages, using Mon...

  14. Propagation of Computational Uncertainty Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2007-01-01

    This paper describes the use of formally designed experiments to aid in the error analysis of a computational experiment. A method is described by which the underlying code is approximated with relatively low-order polynomial graduating functions represented by truncated Taylor series approximations to the true underlying response function. A resource-minimal approach is outlined by which such graduating functions can be estimated from a minimum number of case runs of the underlying computational code. Certain practical considerations are discussed, including ways and means of coping with high-order response functions. The distributional properties of prediction residuals are presented and discussed. A practical method is presented for quantifying that component of the prediction uncertainty of a computational code that can be attributed to imperfect knowledge of independent variable levels. This method is illustrated with a recent assessment of uncertainty in computational estimates of Space Shuttle thermal and structural reentry loads attributable to ice and foam debris impact on ascent.
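
    The graduating-function idea can be sketched as follows: fit a low-order polynomial through a handful of case runs, then propagate imperfect knowledge of the input level through the cheap surrogate instead of the code. The 'expensive code', sample points and input distribution below are hypothetical:

```python
import random

def expensive_code(x):
    """Stand-in for a costly simulation run (hypothetical)."""
    return x * x + x

# Fit a quadratic graduating function through three case runs
# (Newton's divided-difference form, so no linear-algebra library needed).
x0, x1, x2 = 0.0, 0.5, 1.0
y0, y1, y2 = (expensive_code(x) for x in (x0, x1, x2))
c0 = y0
c1 = (y1 - y0) / (x1 - x0)
c2 = ((y2 - y1) / (x2 - x1) - c1) / (x2 - x0)

def surrogate(x):
    """Quadratic graduating function through the three case runs."""
    return c0 + c1 * (x - x0) + c2 * (x - x0) * (x - x1)

# Propagate uncertainty in the independent-variable level through the
# surrogate: the input is only known to be Normal(0.5, 0.05).
rng = random.Random(1)
samples = [surrogate(rng.gauss(0.5, 0.05)) for _ in range(50000)]
mean = sum(samples) / len(samples)
print(f"predicted mean output = {mean:.4f}")  # analytic: 0.7525
```

    Because the surrogate is cheap, the propagation costs essentially nothing beyond the three case runs, which is the resource-minimal point of the paper.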

  15. Uncertainty analysis for computer model projections of hurricane losses.

    PubMed

    Iman, Ronald L; Johnson, Mark E; Watson, Charles C

    2005-10-01

    Projecting losses associated with hurricanes is a complex and difficult undertaking that is fraught with uncertainties. Hurricane Charley, which struck southwest Florida on August 13, 2004, illustrates the uncertainty of forecasting damages from these storms. Due to shifts in the track and the rapid intensification of the storm, real-time estimates grew from 2 to 3 billion dollars in losses late on August 12 to a peak of 50 billion dollars for a brief time as the storm appeared to be headed for the Tampa Bay area. The storm hit the resort areas of Charlotte Harbor near Punta Gorda and then went on to Orlando in the central part of the state, with early poststorm estimates converging on a damage estimate in the 28 to 31 billion dollar range. Comparable damage to central Florida had not been seen since Hurricane Donna in 1960. The Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) has recognized the role of computer models in projecting losses from hurricanes. The FCHLPM established a professional team to perform onsite (confidential) audits of computer models developed by several different companies in the United States that seek to have their models approved for use in insurance rate filings in Florida. The team's members represent the fields of actuarial science, computer science, meteorology, statistics, and wind and structural engineering. An important part of the auditing process requires uncertainty and sensitivity analyses to be performed with the applicant's proprietary model. To inform future analyses of this kind, an uncertainty and sensitivity analysis has been completed for loss projections arising from use of a Holland B parameter hurricane wind field model. Uncertainty analysis quantifies the expected percentage reduction in the uncertainty of wind speed and loss that is attributable to each of the input variables.
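
    For context, the B parameter enters through the Holland (1980) wind profile. A minimal sketch of its cyclostrophic form (ignoring the Coriolis term; all parameter values are illustrative, not from the audited models) shows how strongly the peak wind, and hence projected loss, depends on the uncertain B:

```python
import math

def holland_wind(r, B=1.5, dP=5000.0, Rmax=30e3, rho=1.15):
    """Wind speed (m/s) at radius r (m) from the storm centre, using the
    cyclostrophic form of the Holland (1980) profile:
        V(r) = sqrt( (B*dP/rho) * (Rmax/r)**B * exp(-(Rmax/r)**B) )
    B: shape parameter, dP: central pressure deficit (Pa),
    Rmax: radius of maximum winds (m), rho: air density (kg/m^3)."""
    a = (Rmax / r) ** B
    return math.sqrt(B * dP / rho * a * math.exp(-a))

# Peak wind occurs at r = Rmax, where V = sqrt(B*dP/(rho*e)), so the
# peak scales with sqrt(B): a direct handle for a sensitivity study.
for B in (1.0, 1.5, 2.0):
    print(f"B = {B:.1f}: peak wind = {holland_wind(30e3, B=B):.1f} m/s")
```

    Because structural damage grows steeply with wind speed, uncertainty in B alone can move loss projections substantially, which is why the analysis singles it out.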

  16. Elicitation of natural language representations of uncertainty using computer technology

    SciTech Connect

    Tonn, B.; Goeltz, R.; Travis, C. (Tennessee Univ., Knoxville, TN)

    1989-01-01

    Knowledge elicitation is an important aspect of risk analysis. Knowledge about risks must be accurately elicited from experts for use in risk assessments. Knowledge and perceptions of risks must also be accurately elicited from the public in order to intelligently perform policy analysis and develop and implement programs. Oak Ridge National Laboratory is developing computer technology to effectively and efficiently elicit knowledge from experts and the public. This paper discusses software developed to elicit natural language representations of uncertainty. The software is written in Common Lisp and runs on VAX computer systems and Symbolics Lisp machines. The software has three goals: to determine preferences for using natural language terms to represent uncertainty; to determine likelihood rankings of the terms; and to determine how likelihood estimates are combined to form new terms. The first two goals relate to providing useful results for those interested in risk communication. The third relates to providing cognitive data to further our understanding of people's decision making under uncertainty. The software is used to elicit natural language terms used to express the likelihood of various agents causing cancer in humans and cancer resulting in various maladies, and the likelihood of everyday events. 6 refs., 4 figs., 4 tabs.

  17. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of a numerical simulation by estimating the numerical approximation error, computational-model-induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that its reliability can be improved.

  18. Uncertainty of microwave radiative transfer computations in rain

    NASA Astrophysics Data System (ADS)

    Hong, Sung Wook

    Currently, the effect of vertical resolution on the brightness temperature (BT) has not been examined in depth. The uncertainty of the freezing level (FL) retrieved using two different satellites' data is large. Various radiative transfer (RT) codes yield different BTs in strong scattering conditions. The purposes of this research were: (1) to understand, numerically and analytically, the uncertainty in the BT contributed by the vertical resolution; (2) to reduce the uncertainty of the FL retrieval using new thermodynamic observations; and (3) to investigate the characteristics of four different RT codes. Firstly, a plane-parallel RT model (RTM) of n layers in light rainfall was used for the analytical and computational derivation of the vertical resolution effect on the BT. Secondly, a new temperature profile based on observations was incorporated into the Texas A&M University (TAMU) algorithm. The Precipitation Radar (PR) and Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) data were utilized for the improved FL retrieval. Thirdly, the TAMU, Eddington approximation (EDD), Discrete Ordinate, and backward Monte Carlo codes were compared under various view angles, rain rates, FLs, frequencies, and surface properties. The uncertainty of the BT decreased as the number of layers increased. The uncertainty was due to the optical thickness rather than to relative humidity, pressure distribution, water vapor, or temperature profile. The mean TMI FL showed good agreement with the mean bright band height. A new temperature profile reduced the uncertainty of the TMI FL by about 10%. The differences among the BTs of the four RT codes were within 1 K at the current sensor view angle over the entire dynamic rain rate range at 10-37 GHz. The differences between the TAMU and EDD solutions were less than 0.5 K for a specular surface. In conclusion, this research suggests that the vertical resolution should be considered as a parameter in the forward model

  19. Statistical models and computation to evaluate measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio

    2014-08-01

    In the course of the twenty years since the publication of the Guide to the Expression of Uncertainty in Measurement (GUM), recognition has been steadily growing of the value that statistical models and statistical computing bring to the evaluation of measurement uncertainty, and of how they enable its probabilistic interpretation. These models and computational methods can address all the problems originally discussed and illustrated in the GUM, and enable addressing other, more challenging problems that measurement science is facing today and is expected to face in the years ahead. The problems that lie beyond the reach of the techniques in the GUM include (i) characterizing the uncertainty associated with the assignment of value to measurands of greater complexity than, or altogether different in nature from, the scalar or vectorial measurands entertained in the GUM: for example, sequences of nucleotides in DNA, calibration functions and optical and other spectra, spatial distribution of radioactivity over a geographical region, shape of polymeric scaffolds for bioengineering applications, etc; (ii) incorporating relevant information about the measurand that predates or is otherwise external to the measurement experiment; (iii) combining results from measurements of the same measurand that are mutually independent, obtained by different methods or produced by different laboratories. This review of several of these statistical models and computational methods illustrates some of the advances that they have enabled, and in the process invites a reflection on the interesting historical fact that these very same models and methods, by and large, were already available twenty years ago, when the GUM was first published—but then the dialogue between metrologists, statisticians and mathematicians was still in bud. It is in full bloom today, much to the benefit of all.
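
    Problem (iii), combining mutually independent results for the same measurand, has a classical closed-form answer, the inverse-variance weighted mean, which can be sketched directly (the laboratory values below are hypothetical):

```python
import math

# Independent results for the same measurand from three laboratories:
# value x_i with standard uncertainty u_i (hypothetical numbers).
results = [(10.10, 0.20), (9.80, 0.10), (10.05, 0.15)]

# Inverse-variance weighting: more certain results count for more.
weights = [1.0 / u**2 for _, u in results]
x_bar = sum(w * x for (x, _), w in zip(results, weights)) / sum(weights)
u_bar = 1.0 / math.sqrt(sum(weights))  # uncertainty of the combined value

print(f"combined value = {x_bar:.4f} +/- {u_bar:.4f}")
```

    Note that the combined uncertainty is smaller than any single laboratory's; richer statistical models (e.g. random-effects models) relax the assumption that the stated uncertainties fully explain the scatter between laboratories.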

  20. A High Performance Bayesian Computing Framework for Spatiotemporal Uncertainty Modeling

    NASA Astrophysics Data System (ADS)

    Cao, G.

    2015-12-01

    All types of spatiotemporal measurements are subject to uncertainty. As spatiotemporal data become increasingly involved in scientific research and decision making, it is important to appropriately model the impact of uncertainty. Quantitatively modeling spatiotemporal uncertainty, however, is a challenging problem given the complex dependence structures and data heterogeneities. State-space models provide a unifying and intuitive framework for dynamic systems modeling. In this paper, we aim to extend conventional state-space models for uncertainty modeling in space-time contexts while accounting for spatiotemporal effects and data heterogeneities. Gaussian Markov Random Field (GMRF) models, also known as conditional autoregressive models, are arguably the most commonly used methods for modeling spatially dependent data. GMRF models basically assume that a geo-referenced variable depends primarily on its neighborhood (the Markov property), with the spatial dependence structure described via a precision matrix. Recent studies have shown that GMRFs are efficient approximations to the commonly used Gaussian fields (e.g., kriging), and compared with Gaussian fields, GMRFs enjoy a series of appealing features, such as fast computation and easy accommodation of heterogeneities in spatial data (e.g., point and areal). This paper represents each spatial dataset as a GMRF and integrates them into a state-space form to statistically model the temporal dynamics. Different types of spatial measurements (e.g., categorical, count or continuous) can be accounted for through corresponding link functions. A fast alternative to the MCMC framework, the Integrated Nested Laplace Approximation (INLA), was adopted for model inference. Preliminary case studies will be conducted to showcase the advantages of the described framework. In the first case, we apply the proposed method to model the water table elevation of the Ogallala aquifer over the past decades. In the second case, we analyze the
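
    The Markov property that makes GMRFs computationally attractive can be illustrated with a one-dimensional first-order random walk: the precision matrix is tridiagonal, and an interior node's conditional mean is simply the average of its neighbours. This is a toy sketch, not the paper's space-time model:

```python
import numpy as np

# Precision matrix of a 1-D first-order random-walk GMRF on n nodes:
# tridiagonal, so each node interacts only with its neighbours.
n = 50
Q = np.zeros((n, n))
for i in range(n - 1):
    Q[i, i] += 1.0; Q[i + 1, i + 1] += 1.0
    Q[i, i + 1] -= 1.0; Q[i + 1, i] -= 1.0

# The Markov property, read directly off Q: for an interior node i,
# E[x_i | x_rest] = -(1/Q_ii) * sum_{j != i} Q_ij x_j
x = np.sin(np.linspace(0.0, 3.0, n))   # arbitrary field values
i = 25
cond_mean = -(Q[i, :i] @ x[:i] + Q[i, i + 1:] @ x[i + 1:]) / Q[i, i]
print(cond_mean, 0.5 * (x[i - 1] + x[i + 1]))  # same number

# Sampling: add a small diagonal to make Q proper, factor once, then
# one triangular solve per draw (cheap thanks to the sparsity of Q).
Qp = Q + 1e-3 * np.eye(n)
L = np.linalg.cholesky(Qp)                  # Qp = L @ L.T
z = np.random.default_rng(0).standard_normal(n)
sample = np.linalg.solve(L.T, z)            # draw from N(0, Qp^{-1})
```

    The same sparse-precision trick is what lets GMRFs approximate dense Gaussian fields at a fraction of the cost, and what INLA exploits for fast inference.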

  1. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...
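
    The polynomial chaos expansion named here can be sketched in one dimension: expand the output in probabilists' Hermite polynomials of a standard normal input, compute coefficients by Gauss-Hermite quadrature, and read the mean and variance off the coefficients. The output function and truncation order below are illustrative choices, not the SRSM implementation:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Output of interest: Y = exp(X), X ~ N(0,1). Its Hermite expansion is
# known in closed form, which makes this sketch easy to check.
f = np.exp
P = 6                               # truncation order of the expansion

# Gauss-Hermite nodes/weights for the weight exp(-x^2/2)
xq, wq = He.hermegauss(30)
wq = wq / math.sqrt(2.0 * math.pi)  # normalise to the N(0,1) density

# Coefficients c_k = E[Y He_k(X)] / k!  (He_k orthogonal, E[He_k^2] = k!)
c = []
for k in range(P + 1):
    Hk = He.hermeval(xq, [0.0] * k + [1.0])   # evaluate He_k at the nodes
    c.append(float(wq @ (f(xq) * Hk)) / math.factorial(k))

mean = c[0]                                           # = E[Y] = exp(1/2)
var = sum(c[k] ** 2 * math.factorial(k) for k in range(1, P + 1))
print(f"PCE mean = {mean:.5f}  (exact {math.exp(0.5):.5f})")
print(f"PCE var  = {var:.5f}  (exact {math.exp(1) * (math.exp(1) - 1):.5f})")
```

    Once the coefficients are in hand, every statistic of the output is cheap to evaluate, which is the efficiency argument behind response-surface methods like the SRSM.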

  2. Computations of uncertainty mediate acute stress responses in humans.

    PubMed

    de Berker, Archy O; Rutledge, Robb B; Mathys, Christoph; Marshall, Louise; Cross, Gemma F; Dolan, Raymond J; Bestmann, Sven

    2016-03-29

    The effects of stress are frequently studied, yet its proximal causes remain unclear. Here we demonstrate that subjective estimates of uncertainty predict the dynamics of subjective and physiological stress responses. Subjects learned a probabilistic mapping between visual stimuli and electric shocks. Salivary cortisol confirmed that our stressor elicited changes in endocrine activity. Using a hierarchical Bayesian learning model, we quantified the relationship between the different forms of subjective task uncertainty and acute stress responses. Subjective stress, pupil diameter and skin conductance all tracked the evolution of irreducible uncertainty. We observed a coupling between emotional and somatic state, with subjective and physiological tuning to uncertainty tightly correlated. Furthermore, the uncertainty tuning of subjective and physiological stress predicted individual task performance, consistent with an adaptive role for stress in learning under uncertain threat. Our finding that stress responses are tuned to environmental uncertainty provides new insight into their generation and likely adaptive function.
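
    The different forms of uncertainty in such a task can be illustrated with a conjugate Bernoulli-Beta toy model: irreducible uncertainty is the entropy of the outcome itself, while estimation uncertainty (the learner's uncertainty about the shock probability) shrinks with experience. This is an illustrative sketch, not the paper's hierarchical Bayesian model:

```python
import math

def uncertainties(shocks, trials, a0=1.0, b0=1.0):
    """Beta(a0, b0) prior on the shock probability p, updated with data.
    Returns (irreducible, estimation) uncertainty."""
    a, b = a0 + shocks, b0 + trials - shocks
    p = a / (a + b)                 # posterior-mean shock probability
    # Irreducible uncertainty: entropy of the Bernoulli outcome (bits);
    # maximal at p = 0.5, and no amount of learning removes it.
    irreducible = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    # Estimation uncertainty: posterior variance of p; falls with data.
    estimation = a * b / ((a + b) ** 2 * (a + b + 1))
    return irreducible, estimation

early = uncertainties(shocks=3, trials=6)
late = uncertainties(shocks=50, trials=100)
print(f"early: irreducible = {early[0]:.3f} bits, estimation = {early[1]:.4f}")
print(f"late:  irreducible = {late[0]:.3f} bits, estimation = {late[1]:.4f}")
```

    Under a 50% shock contingency the irreducible term stays maximal however long the subject learns, matching the finding that stress responses track uncertainty that cannot be learned away.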

  3. Uncertainty

    USGS Publications Warehouse

    Hunt, Randall J.

    2012-01-01

    Management decisions will often be directly informed by model predictions. However, we now know there can be no expectation of a single ‘true’ model; thus, model results are uncertain. Understandable reporting of underlying uncertainty provides necessary context to decision-makers, as model results are used for management decisions. This, in turn, forms a mechanism by which groundwater models inform a risk-management framework because uncertainty around a prediction provides the basis for estimating the probability or likelihood of some event occurring. Given that the consequences of management decisions vary, it follows that the extent of and resources devoted to an uncertainty analysis may depend on the consequences. For events with low impact, a qualitative, limited uncertainty analysis may be sufficient for informing a decision. For events with a high impact, on the other hand, the risks might be better assessed and associated decisions made using a more robust and comprehensive uncertainty analysis. The purpose of this chapter is to provide guidance on uncertainty analysis through discussion of concepts and approaches, which can vary from heuristic (i.e. the modeller’s assessment of prediction uncertainty based on trial and error and experience) to a comprehensive, sophisticated, statistics-based uncertainty analysis. Most of the material presented here is taken from Doherty et al. (2010) if not otherwise cited. Although the treatment here is necessarily brief, the reader can find citations for the source material and additional references within this chapter.

  6. Uncertainty of mantle geophysical properties computed from phase equilibrium models

    NASA Astrophysics Data System (ADS)

    Connolly, J. A. D.; Khan, A.

    2016-05-01

    Phase equilibrium models are used routinely to predict geophysically relevant mantle properties. A limitation of this approach is that nonlinearity of the phase equilibrium problem precludes direct assessment of the resultant uncertainties. To overcome this obstacle, we stochastically assess uncertainties along self-consistent mantle adiabats for pyrolitic and basaltic bulk compositions to 2000 km depth. The dominant components of the uncertainty are the identity, composition and elastic properties of the minerals. For P wave speed and density, the latter components vary little, whereas the first is confined to the upper mantle. Consequently, P wave speeds, densities, and adiabatic temperatures and pressures predicted by phase equilibrium models are more uncertain in the upper mantle than in the lower mantle. In contrast, uncertainties in S wave speeds are dominated by the uncertainty in shear moduli and are approximately constant throughout the model depth range.

  7. Latin hypercube sampling as a tool in uncertainty analysis of computer models

    SciTech Connect

    McKay, M.D.

    1992-09-01

    This paper addresses several aspects of the analysis of uncertainty in the output of computer models arising from uncertainty in inputs (parameters). Uncertainty of this type, which is separate and distinct from the randomness of a stochastic model, most often arises when input values are guesstimates, or when they are estimated from data, or when the input parameters do not actually correspond to observable quantities, e.g., in lumped-parameter models. Uncertainty in the output is quantified in its probability distribution, which results from treating the inputs as random variables. The assessment of which inputs are important with respect to uncertainty is done relative to the probability distribution of the output.
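A minimal, stdlib-only sketch of the Latin hypercube construction the abstract describes (one point per stratum in each dimension, strata paired at random); the toy model at the end is hypothetical and not from the paper:

```python
import random

def latin_hypercube(n_samples, n_dims, rng=random.Random(0)):
    """Draw a Latin hypercube sample on the unit cube [0, 1)^d.

    Each dimension is split into n_samples equal strata; exactly one
    point falls in each stratum, and the strata are paired at random
    across dimensions.
    """
    sample = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        # One uniform draw inside each stratum, then shuffle the strata.
        points = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(points)
        for i in range(n_samples):
            sample[i][d] = points[i]
    return sample

# Propagate input uncertainty through a toy model y = x1 * x2 by
# evaluating it at each sample point:
lhs = latin_hypercube(100, 2)
outputs = [x1 * x2 for x1, x2 in lhs]
```

The resulting `outputs` list is an empirical approximation of the output probability distribution discussed in the abstract.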

  8. Uncertainty Modeling of Pollutant Transport in Atmosphere and Aquatic Route Using Soft Computing

    SciTech Connect

    Datta, D.

    2010-10-26

    Hazardous radionuclides are released as pollutants into the atmospheric and aquatic environment (ATAQE) during the normal operation of nuclear power plants. Atmospheric and aquatic dispersion models are routinely used to assess the impact on the ATAQE of radionuclide releases from any nuclear facility or of hazardous chemicals from any chemical plant. The effect of exposure to the hazardous nuclides or chemicals is measured in terms of risk. Uncertainty modeling is an integral part of the risk assessment. The paper focuses on the uncertainty modeling of pollutant transport in the atmospheric and aquatic environment using soft computing. Soft computing is adopted owing to the lack of information on the parameters of the corresponding models. Soft computing in this domain basically uses fuzzy set theory to explore the uncertainty of the model parameters; this type of uncertainty is called epistemic uncertainty. Each uncertain input parameter of the model is described by a triangular membership function.
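The triangular-membership-function approach can be illustrated with alpha-cut interval propagation; the model and numbers below are hypothetical, and evaluating the model only at interval endpoints is valid only when the model is monotone increasing in each input:

```python
def tri_alpha_cut(low, mode, high, alpha):
    """Interval (alpha-cut) of a triangular fuzzy number at level alpha
    (alpha = 1 collapses to the mode, alpha = 0 gives the full support)."""
    return (low + alpha * (mode - low), high - alpha * (high - mode))

def propagate(model, params, alphas):
    """Propagate triangular fuzzy inputs through a monotone model.

    params: list of (low, mode, high) triples, one per input.
    Returns, for each alpha, the output interval obtained by evaluating
    the model at the lower and upper endpoints of each input's cut.
    """
    out = {}
    for a in alphas:
        cuts = [tri_alpha_cut(lo, m, hi, a) for lo, m, hi in params]
        lo = model(*(c[0] for c in cuts))
        hi = model(*(c[1] for c in cuts))
        out[a] = (lo, hi)
    return out

# Toy dose-like model (hypothetical): response = source_term * transfer
result = propagate(lambda q, chi: q * chi,
                   [(0.8, 1.0, 1.3), (2.0, 3.0, 4.5)],
                   [0.0, 0.5, 1.0])
```

The family of intervals over alpha levels is the fuzzy (epistemic) description of the output.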

  9. Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis; Ilie, Marcel; Schallhorn, Paul

    2014-01-01

    Spacecraft components may be damaged due to airflow produced by Environmental Control Systems (ECS). There are uncertainties and errors associated with using Computational Fluid Dynamics (CFD) to predict the flow field around a spacecraft from the ECS System. This paper describes an approach to estimate the uncertainty in using CFD to predict the airflow speeds around an encapsulated spacecraft.

  10. Binding in light nuclei: Statistical NN uncertainties vs Computational accuracy

    NASA Astrophysics Data System (ADS)

    Navarro Pérez, R.; Nogga, A.; Amaro, J. E.; Ruiz Arriola, E.

    2016-08-01

    We analyse the impact of the statistical uncertainties of the nucleon-nucleon interaction, based on the Granada-2013 np-pp database, on the binding energies of the triton and the alpha particle using a bootstrap method, solving the Faddeev equations for 3H and the Yakubovsky equations for 4He, respectively. We check that in practice about 30 samples prove enough for a reliable error estimate. An extrapolation of the well-fulfilled Tjon-line correlation predicts the experimental binding of the alpha particle within uncertainties. Presented by RNP at the Workshop for Young Scientists with Research Interests Focused on Physics at FAIR, 14-19 February 2016, Garmisch-Partenkirchen (Germany).

  11. Computational methods estimating uncertainties for profile reconstruction in scatterometry

    NASA Astrophysics Data System (ADS)

    Gross, H.; Rathsfeld, A.; Scholze, F.; Model, R.; Bär, M.

    2008-04-01

    The solution of the inverse problem in scatterometry, i.e. the determination of periodic surface structures from light diffraction patterns, is incomplete without knowledge of the uncertainties associated with the reconstructed surface parameters. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry, as a non-imaging indirect optical method, is applied to periodic line-space structures in order to determine geometric parameters like side-wall angles, heights, top and bottom widths and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to the efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation and end up minimizing the deviation of the measured efficiency or phase shift values from the simulated ones. The reconstruction properties and the convergence of the algorithm, however, are controlled by the local conditioning of the non-linear mapping and the uncertainties of the measured efficiencies or phase shifts. In particular, the uncertainties of the reconstructed geometric parameters essentially depend on the uncertainties of the input data and can be estimated by various methods. We compare the results obtained from a Monte Carlo procedure to the estimates obtained from the approximate covariance matrix of the profile parameters close to the optimal solution and apply them to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
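The covariance-matrix estimate mentioned above, cov ≈ sigma^2 (J^T J)^(-1) near the least-squares optimum, can be sketched for a two-parameter problem; the Jacobian entries here are made up (in practice they come from the finite element forward model):

```python
def param_covariance(jacobian, sigma):
    """Approximate covariance of two reconstructed parameters near the
    least-squares optimum: cov ~ sigma^2 * (J^T J)^(-1).

    jacobian: rows [dE_i/dp1, dE_i/dp2], derivatives of the simulated
    efficiencies with respect to the profile parameters.
    sigma: common standard uncertainty of the measured efficiencies.
    """
    a = sum(r[0] * r[0] for r in jacobian)
    b = sum(r[0] * r[1] for r in jacobian)
    c = sum(r[1] * r[1] for r in jacobian)
    det = a * c - b * b
    s2 = sigma ** 2
    return [[s2 * c / det, -s2 * b / det],
            [-s2 * b / det, s2 * a / det]]

# Hypothetical Jacobian for two parameters (e.g. height, side-wall angle):
cov = param_covariance([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], 0.1)
# Parameter standard uncertainties are the square roots of the diagonal.
u_height, u_angle = cov[0][0] ** 0.5, cov[1][1] ** 0.5
```

A Monte Carlo procedure, by contrast, would repeat the whole reconstruction for many perturbed data sets and take the sample covariance of the results.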

  12. Computer simulations in room acoustics: concepts and uncertainties.

    PubMed

    Vorländer, Michael

    2013-03-01

    Geometrical acoustics is used as the standard model for room acoustic design and consulting. Research on room acoustic simulation focuses on more accurate modeling of propagation effects such as diffraction and other wave effects in rooms, and on scattering. Much progress has been made in this field, so that wave models (for example, the boundary element method and finite differences in the time domain) can now also be used for higher frequencies. The concepts and implementations of room simulation methods are briefly reviewed. Simulations in architectural acoustics are indeed powerful tools, but their reliability depends on the skills of the operator, who has to create an adequate polygon model and choose the correct input data for boundary conditions such as absorption and scattering. Very little is known about the uncertainty of these input data. With the theory of propagation of uncertainties it can be shown that predicting reverberation times with an accuracy better than the just noticeable difference requires input data of a quality that is not available from reverberation room measurements.
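The error-propagation argument can be made concrete with the Sabine formula, T60 = 0.161 V / sum(S_i alpha_i); a first-order sketch with a hypothetical room (the geometry and uncertainty values are illustrative, not from the paper):

```python
def t60_sabine(volume, surfaces):
    """Sabine reverberation time, T60 = 0.161 V / A, with
    A = sum(S_i * alpha_i); surfaces = [(area_m2, alpha), ...]."""
    absorption = sum(s * a for s, a in surfaces)
    return 0.161 * volume / absorption

def t60_uncertainty(volume, surfaces, alpha_sigmas):
    """First-order propagation of absorption-coefficient uncertainty,
    using |dT/dalpha_i| = 0.161 * V * S_i / A^2."""
    absorption = sum(s * a for s, a in surfaces)
    var = sum(((0.161 * volume * s / absorption ** 2) * sig) ** 2
              for (s, _), sig in zip(surfaces, alpha_sigmas))
    return var ** 0.5

# Hypothetical 200 m^3 room, two surface groups with uncertain alphas:
t = t60_sabine(200.0, [(100.0, 0.2), (50.0, 0.5)])
u = t60_uncertainty(200.0, [(100.0, 0.2), (50.0, 0.5)], [0.02, 0.05])
```

Comparing `u` against the just noticeable difference (roughly 5 % of T60) shows how quickly modest absorption uncertainties dominate the prediction.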

  13. Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.

    2013-01-01

    Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This proposal describes an approach to validate the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft. The research described here is absolutely cutting edge. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. The proposed research project includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around

  14. Establishing performance requirements of computer based systems subject to uncertainty

    SciTech Connect

    Robinson, D.

    1997-02-01

    An organized systems design approach is dictated by the increasing complexity of computer based systems. Computer based systems are unique in many respects but share many of the same problems that have plagued design engineers for decades. The design of complex systems is difficult at best, but as a design becomes intensively dependent on the computer processing of external and internal information, the design process quickly borders on chaos. This situation is exacerbated by the requirement that these systems operate with a minimal quantity of information, generally corrupted by noise, regarding the current state of the system. Establishing performance requirements for such systems is particularly difficult. This paper briefly sketches a general systems design approach with emphasis on the design of computer based decision processing systems subject to parameter and environmental variation. The approach will be demonstrated with application to an on-board diagnostic (OBD) system for automotive emissions systems now mandated by the state of California and the Federal Clean Air Act. The emphasis is on an approach for establishing probabilistically based performance requirements for computer based systems.

  15. A general program to compute the multivariable stability margin for systems with parametric uncertainty

    NASA Technical Reports Server (NTRS)

    Sanchez Pena, Ricardo S.; Sideris, Athanasios

    1988-01-01

    A computer program implementing an algorithm for computing the multivariable stability margin to check the robust stability of feedback systems with real parametric uncertainty is proposed. The authors present important aspects of the program in some detail. An example is presented using a lateral-directional control system.

  16. Dissertation Defense Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward

    2014-01-01

    Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics

  17. Dissertation Defense: Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward

    2014-01-01

    Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions

  18. Effect of Random Geometric Uncertainty on the Computational Design of a 3-D Flexible Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, C. R.; Newman, P. A.; Hou, G. J.-W.

    2002-01-01

    The effect of geometric uncertainty due to statistically independent, random, normally distributed shape parameters is demonstrated in the computational design of a 3-D flexible wing. A first-order second-moment statistical approximation method is used to propagate the assumed input uncertainty through coupled Euler CFD aerodynamic / finite element structural codes for both analysis and sensitivity analysis. First-order sensitivity derivatives obtained by automatic differentiation are used in the input uncertainty propagation. These propagated uncertainties are then used to perform a robust design of a simple 3-D flexible wing at supercritical flow conditions. The effect of the random input uncertainties is shown by comparison with conventional deterministic design results. Sample results are shown for wing planform, airfoil section, and structural sizing variables.
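A first-order second-moment propagation of the kind used here can be sketched as follows; the response function and input standard deviations are hypothetical stand-ins for the coupled CFD/structural codes, and the gradients are taken by finite differences rather than automatic differentiation:

```python
def central_diff(f, x, i, h=1e-6):
    """Central finite-difference derivative of f with respect to x[i]."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2.0 * h)

def fosm_std(f, x0, sigmas):
    """First-order second-moment propagation for statistically
    independent, normally distributed inputs:
    Var[f] ~ sum((df/dx_i)^2 * sigma_i^2)."""
    grads = [central_diff(f, x0, i) for i in range(len(x0))]
    return sum(g * g * s * s for g, s in zip(grads, sigmas)) ** 0.5

# Hypothetical response standing in for the coupled aero/structural codes:
f = lambda x: x[0] ** 2 + 3.0 * x[1]
std = fosm_std(f, [1.0, 2.0], [0.1, 0.2])
```

In the robust-design setting, `std` would feed back into the objective or constraints so the optimizer accounts for the shape-parameter scatter.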

  19. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article deals with matters concerned with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method shall feature the minimal degree of subjectivism and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.

  20. Computer-assisted uncertainty assessment of k0-NAA measurement results

    NASA Astrophysics Data System (ADS)

    Bučar, T.; Smodiš, B.

    2008-10-01

    In quantifying measurement uncertainty of measurement results obtained by the k0-based neutron activation analysis ( k0-NAA), a number of parameters should be considered and appropriately combined in deriving the final budget. To facilitate this process, a program ERON (ERror propagatiON) was developed, which computes uncertainty propagation factors from the relevant formulae and calculates the combined uncertainty. The program calculates uncertainty of the final result—mass fraction of an element in the measured sample—taking into account the relevant neutron flux parameters such as α and f, including their uncertainties. Nuclear parameters and their uncertainties are taken from the IUPAC database (V.P. Kolotov and F. De Corte, Compilation of k0 and related data for NAA). Furthermore, the program allows for uncertainty calculations of the measured parameters needed in k0-NAA: α (determined with either the Cd-ratio or the Cd-covered multi-monitor method), f (using the Cd-ratio or the bare method), Q0 (using the Cd-ratio or internal comparator method) and k0 (using the Cd-ratio, internal comparator or the Cd subtraction method). The results of calculations can be printed or exported to text or MS Excel format for further analysis. Special care was taken to make the calculation engine portable by having possibility of its incorporation into other applications (e.g., DLL and WWW server). Theoretical basis and the program are described in detail, and typical results obtained under real measurement conditions are presented.
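The combination ERON performs follows the standard GUM quadrature rule for independent contributions; a generic sketch (the propagation factors and standard uncertainties below are illustrative, not actual k0-NAA values):

```python
def combined_uncertainty(components):
    """GUM-style combination of independent contributions:
    u_c = sqrt(sum((c_i * u_i)**2)), where c_i is the sensitivity
    (propagation) factor and u_i the standard uncertainty."""
    return sum((c * u) ** 2 for c, u in components) ** 0.5

# Illustrative budget: peak area, k0 factor, flux parameters f and alpha
# (all relative uncertainties in percent; factors and values made up).
u_c = combined_uncertainty([(1.0, 0.8), (1.0, 1.2), (0.6, 2.0), (0.3, 1.5)])
```

The program's contribution is computing the factors c_i from the k0-NAA formulae; the final combination itself is this simple root-sum-of-squares.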

  1. Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.

    2013-01-01

    There have been few discussions on using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis from computational data only. The method uses current CFD uncertainty techniques coupled with the Student-t distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied by a specified tolerance or bias error, and the differences in the results are used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations, and conclusions are drawn about the feasibility of using CFD without experimental data. The results provide a tactic to analytically estimate the uncertainty in a CFD model when experimental data are unavailable.

  2. Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Groves, Curtis; Ilie, Marcel; Schallhorn, Paul

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. Errors in CFD can be approximated via Richardson's extrapolation, a method based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or other uncertainty methods to approximate errors.
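Richardson's extrapolation from three systematically refined grids can be sketched as follows; this assumes structured grids with a constant refinement ratio, which is exactly the assumption the interpolation step studied in the paper would relax for unstructured grids:

```python
import math

def richardson(f_fine, f_med, f_coarse, r):
    """Observed order of accuracy and extrapolated estimate from three
    solutions on grids refined by a constant ratio r:
    p = ln((f3 - f2) / (f2 - f1)) / ln(r),
    f_exact ~ f1 + (f1 - f2) / (r^p - 1)."""
    p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_med) / (r ** p - 1.0)
    return p, f_exact

# Manufactured check: f(h) = 1 + h^2 sampled at h = 1, 2, 4 recovers
# second-order behavior and the exact limit 1.
p, f_exact = richardson(2.0, 5.0, 17.0, 2.0)
```

The difference between `f_fine` and `f_exact` gives the discretization error estimate used in grid-convergence-index style uncertainty bands.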

  3. Bayesian uncertainty quantification and propagation in molecular dynamics simulations: A high performance computing framework

    NASA Astrophysics Data System (ADS)

    Angelikopoulos, Panagiotis; Papadimitriou, Costas; Koumoutsakos, Petros

    2012-10-01

    We present a Bayesian probabilistic framework for quantifying and propagating the uncertainties in the parameters of force fields employed in molecular dynamics (MD) simulations. We propose a highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters. Efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures. Furthermore, adaptive surrogate models are proposed in order to reduce the computational cost associated with the large number of MD model runs. The effectiveness and computational efficiency of the proposed Bayesian framework is demonstrated in MD simulations of liquid and gaseous argon.
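The transitional MCMC scheme itself is considerably more involved; as a stand-in, a plain Metropolis sampler shows the basic idea of populating a posterior over a force-field parameter (the observations, noise level, and likelihood below are hypothetical, and no MD code is run):

```python
import math
import random

def metropolis(log_post, x0, n_steps, step, seed=1):
    """Plain Metropolis sampler with a Gaussian random-walk proposal."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        cand = x + rng.gauss(0.0, step)
        lc = log_post(cand)
        # Accept with probability min(1, exp(lc - lp)).
        if rng.random() < math.exp(min(0.0, lc - lp)):
            x, lp = cand, lc
        chain.append(x)
    return chain

# Hypothetical posterior for one force-field parameter eps:
# flat prior, Gaussian likelihood around five noisy observations.
data = [1.1, 0.9, 1.2, 1.0, 0.8]
def log_post(eps):
    return -sum((d - eps) ** 2 for d in data) / (2.0 * 0.2 ** 2)

samples = metropolis(log_post, 0.0, 5000, 0.2)
```

In the paper's framework, each `log_post` evaluation would be an MD run, which is what motivates the parallel scheduling and surrogate models.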

  4. A Novel Method for the Evaluation of Uncertainty in Dose-Volume Histogram Computation

    SciTech Connect

    Cutanda Henriquez, Francisco; Vargas Castrillon, Silvia

    2008-03-15

    Purpose: Dose-volume histograms (DVHs) are a useful tool in state-of-the-art radiotherapy treatment planning, and it is essential to recognize their limitations. Even after a specific dose-calculation model is optimized, dose distributions computed by using treatment-planning systems are affected by several sources of uncertainty, such as algorithm limitations, measurement uncertainty in the data used to model the beam, and residual differences between measured and computed dose. This report presents a novel method to take them into account. Methods and Materials: To take into account the effect of associated uncertainties, a probabilistic approach using a new kind of histogram, a dose-expected volume histogram, is introduced. The expected value of the volume in the region of interest receiving an absorbed dose equal to or greater than a certain value is found by using the probability distribution of the dose at each point. A rectangular probability distribution is assumed for this point dose, and a formulation that accounts for uncertainties associated with point dose is presented for practical computations. Results: This method is applied to a set of DVHs for different regions of interest, including 6 brain patients, 8 lung patients, 8 pelvis patients, and 6 prostate patients planned for intensity-modulated radiation therapy. Conclusions: Results show a greater effect on planning target volume coverage than in organs at risk. In cases of steep DVH gradients, such as planning target volumes, this new method shows the largest differences with the corresponding DVH; thus, the effect of the uncertainty is larger.
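The dose-expected volume idea (summing, over points of interest, the probability that the true dose exceeds the threshold under a rectangular distribution around the computed point dose) can be sketched directly; the doses and half-width below are illustrative:

```python
def prob_ge(dose, threshold, delta):
    """P(true dose >= threshold) when the computed point dose carries a
    rectangular (uniform) uncertainty of half-width delta."""
    if threshold <= dose - delta:
        return 1.0
    if threshold >= dose + delta:
        return 0.0
    return (dose + delta - threshold) / (2.0 * delta)

def expected_volume_ge(doses, threshold, delta):
    """Expected fractional volume receiving at least `threshold`: the
    ordinate of a dose-expected volume histogram."""
    return sum(prob_ge(d, threshold, delta) for d in doses) / len(doses)

# Three illustrative point doses (Gy) with a +/- 5 Gy half-width:
v = expected_volume_ge([50.0, 60.0, 70.0], 60.0, 5.0)
```

Sweeping `threshold` over the dose range traces out the full dose-expected volume histogram, which differs most from the ordinary DVH where the DVH gradient is steep.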

  5. Prediction and Uncertainty in Computational Modeling of Complex Phenomena: A Whitepaper

    SciTech Connect

    Trucano, T.G.

    1999-01-20

    This report summarizes some challenges associated with the use of computational science to predict the behavior of complex phenomena. As such, the document is a compendium of ideas that have been generated by various staff at Sandia. The report emphasizes key components of the use of computational science to predict complex phenomena, including computational complexity and correctness of implementations, the nature of the comparison with data, the importance of uncertainty quantification in comprehending what the prediction is telling us, and the role of risk in making and using computational predictions. Both broad and more narrowly focused technical recommendations for research are given. Several computational problems are summarized that help to illustrate the issues we have emphasized. The tone of the report is informal, with virtually no mathematics. However, we have attempted to provide a useful bibliography that would assist the interested reader in pursuing the content of this report in greater depth.

  6. PUQ: A code for non-intrusive uncertainty propagation in computer simulations

    NASA Astrophysics Data System (ADS)

    Hunt, Martin; Haley, Benjamin; McLennan, Michael; Koslowski, Marisol; Murthy, Jayathi; Strachan, Alejandro

    2015-09-01

    We present a software package for the non-intrusive propagation of uncertainties in input parameters through computer simulation codes or mathematical models and associated analysis; we demonstrate its use to drive micromechanical simulations using a phase field approach to dislocation dynamics. The PRISM uncertainty quantification framework (PUQ) offers several methods to sample the distribution of input variables and to obtain surrogate models (or response functions) that relate the uncertain inputs with the quantities of interest (QoIs); the surrogate models are ultimately used to propagate uncertainties. PUQ requires minimal changes in the simulation code, just those required to annotate the QoI(s) for its analysis. Collocation methods include Monte Carlo, Latin Hypercube and Smolyak sparse grids and surrogate models can be obtained in terms of radial basis functions and via generalized polynomial chaos. PUQ uses the method of elementary effects for sensitivity analysis in Smolyak runs. The code is available for download and also available for cloud computing in nanoHUB. PUQ orchestrates runs of the nanoPLASTICITY tool at nanoHUB where users can propagate uncertainties in dislocation dynamics simulations using simply a web browser, without downloading or installing any software.

  7. Numerical Uncertainty Analysis for Computational Fluid Dynamics using Student T Distribution -- Application of CFD Uncertainty Analysis Compared to Exact Analytical Solution

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Shallhorn, Paul A.

    2014-01-01

    Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student-t distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-t distribution can encompass the exact solution.
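The core statistical step — wrapping a small sample of repeated CFD results in a Student-t uncertainty interval — can be sketched as below. The run values are hypothetical, not the paper's case.

```python
import numpy as np
from scipy import stats

# Hypothetical drag results from N CFD runs with perturbed inputs
runs = np.array([1.02, 0.98, 1.05, 0.97, 1.01, 1.03])
n = len(runs)
mean, s = runs.mean(), runs.std(ddof=1)

# 95% uncertainty interval using the Student-t distribution (n-1 dof)
t95 = stats.t.ppf(0.975, df=n - 1)
half_width = t95 * s / np.sqrt(n)
print(f"{mean:.3f} +/- {half_width:.3f}")
```

The t distribution is used instead of the normal because with only a handful of runs the sample standard deviation is itself uncertain.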

  8. Towards Optimal Allocation of Computer Resources: Trade-offs between Uncertainty, Discretization and Model Reduction

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Leube, P.; Zinkhahn, M.; de Barros, F.; Rajagopal, R.

    2012-12-01

    In recent years, there has been an increase in the computational complexity of hydro(geo)logical models. This has been driven by new problems addressing large-scale relationships like global warming, reactive transport on the catchment scale or CO2 sequestration. Computational model complexity becomes even more drastic when facing the ubiquitous need for uncertainty quantification and risk assessment in the environmental sciences. Computational complexity can be broken down into contributions from spatial, temporal and stochastic resolution, e.g., spatial grid resolution, time step size and the number of repeated simulations dedicated to quantifying uncertainty. Controlling these resolutions allows keeping the computational cost at a tractable level whilst guaranteeing accurate and robust predictions. Having this possibility at hand triggers our overall driving question: What is the optimal resolution for independent variables (i.e. time and space) to achieve reliable prediction in the presence of uncertainty? Can we determine an overall optimum combination of the number of realizations, spatial and temporal resolutions, needed for overall statistical/physical convergence of model predictions? If so, how can we find it? In other words, how can we optimally allocate available computational resources in order to achieve the highest accuracy associated with a given prediction goal? In this work, we present an approach that allows one to determine the compromise among different model dimensions (space, time, probability) when allocating computational resources. The overall goal is to maximize the prediction accuracy given limited computational resources. Our analysis is based on the idea of jointly considering the discretization errors and computational costs of all individual model dimensions. This yields a cost-to-error surface which serves to aid modelers in finding an optimal allocation of the computational resources. As a pragmatic way to proceed, we propose running small
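The cost-to-error trade-off described above can be illustrated with a toy grid search over the three resolutions. The error model, its constants and exponents, and the budget are assumptions chosen for illustration, not the authors' formulation.

```python
import numpy as np

# Toy error model: spatial error ~ h^2, temporal error ~ dt,
# statistical (Monte Carlo) error ~ 1/sqrt(R); constants are illustrative.
def error(h, dt, R):
    return 2.0 * h**2 + 1.0 * dt + 4.0 / np.sqrt(R)

def cost(h, dt, R):
    # cost ~ realizations x time steps x grid cells (1D grid for simplicity)
    return R * (1.0 / dt) * (1.0 / h)

budget = 1e6
best = None
for h in np.geomspace(0.01, 0.5, 25):        # candidate grid spacings
    for dt in np.geomspace(0.001, 0.1, 25):  # candidate time steps
        R = int(budget * dt * h)             # spend the rest on realizations
        if R < 1:
            continue
        e = error(h, dt, R)
        if best is None or e < best[0]:
            best = (e, h, dt, R)

e, h, dt, R = best
print(f"err={e:.3f} at h={h:.3g}, dt={dt:.3g}, R={R}")
```

Refining any one dimension past its sweet spot starves the other two of budget, which is exactly the compromise the cost-to-error surface exposes.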

  9. Flood risk assessment at the regional scale: Computational challenges and the monster of uncertainty

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Papalexiou, Simon-Michael; Markonis, Yiannis; Koukouvinos, Antonis; Vasiliades, Lampros; Papaioannou, George; Loukas, Athanasios

    2016-04-01

    We present a methodological framework for flood risk assessment at the regional scale, developed within the implementation of the EU Directive 2007/60 in Greece. This comprises three phases: (a) statistical analysis of extreme rainfall data, resulting in spatially-distributed parameters of intensity-duration-frequency (IDF) relationships and their confidence intervals, (b) hydrological simulations, using event-based semi-distributed rainfall-runoff approaches, and (c) hydraulic simulations, employing the propagation of flood hydrographs across the river network and the mapping of inundated areas. The flood risk assessment procedure is employed over the River Basin District of Thessaly, Greece, which requires schematization and modelling of hundreds of sub-catchments, each one examined for several risk scenarios. This is a challenging task, involving multiple computational issues to handle, such as the organization, control and processing of huge amounts of hydrometeorological and geographical data, the configuration of model inputs and outputs, and the co-operation of several software tools. In this context, we have developed supporting applications allowing massive data processing and effective model coupling, thus drastically reducing the need for manual interventions and, consequently, the time of the study. Within flood risk computations we also account for three major sources of uncertainty, in an attempt to provide upper and lower confidence bounds of flood maps, i.e. (a) statistical uncertainty of IDF curves, (b) structural uncertainty of hydrological models, due to varying antecedent soil moisture conditions, and (c) parameter uncertainty of hydraulic models, with emphasis on roughness coefficients. Our investigations indicate that the combined effect of the above uncertainties (which are certainly not the only ones) results in extremely large bounds of potential inundation, thus raising many questions about the interpretation and usefulness of current flood

  10. Towards optimal allocation of computer resources: trade-offs between uncertainty, discretization and model reduction

    NASA Astrophysics Data System (ADS)

    Zinkhahn, M.; Leube, P.; Nowak, W.; de Barros, F. P. J.

    2012-04-01

    In recent years, there has been an increase in the computational complexity of hydro(geo)logical models. This has been driven by new problems addressing large-scale relationships like global warming, reactive transport on the catchment scale or CO2 sequestration. Computational model complexity becomes even more drastic when facing the ubiquitous need for uncertainty quantification and risk assessment in the environmental sciences. Complexity can be broken down into contributions from spatial, temporal and stochastic resolution, e.g., spatial grid resolution, time step size and the number of repeated simulations dedicated to quantifying uncertainty. Controlling these resolutions allows keeping the computational cost at a tractable level whilst guaranteeing accurate and robust predictions. Having this possibility at hand triggers our overall driving question: What is the optimal resolution for independent variables (i.e. time and space) to achieve reliable prediction in the presence of uncertainty? Can we determine an overall optimum combination of the number of realizations, spatial and temporal resolutions, needed for overall statistical/physical convergence of model predictions? If so, how can we find it? In other words, how can we optimally allocate available computational resources in order to achieve the highest accuracy associated with a given prediction goal? In this work, we present an approach that allows one to determine the compromise among different model dimensions (space, time, probability) when allocating computational resources. The overall goal is to maximize the prediction accuracy given limited computational resources. Our analysis is based on the idea of jointly considering the discretization errors and computational costs of all individual model dimensions. This yields a cost-to-error surface which serves to aid modelers in finding an optimal allocation of the computational resources. As a pragmatic way to proceed, we propose running small cost

  11. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.

    SciTech Connect

    Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)

    2006-10-01

    Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
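A minimal sampling-based sketch of propagating an evidence-theory input through a model: sample within each focal element and accumulate its basic probability assignment into belief (element certainly implies the event) or plausibility (element possibly implies the event). The focal elements, masses, model, and event below are illustrative; the paper's strategy is considerably more sophisticated than this naive loop.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    return x**2

# Evidence-theory input: focal elements (intervals) with basic probability
# assignments (BPA); values are invented for illustration.
focal = [((0.0, 1.0), 0.5), ((0.5, 1.5), 0.3), ((1.0, 2.0), 0.2)]

target = (0.0, 1.0)   # event: model output falls in [0, 1]
belief = plausibility = 0.0
for (lo, hi), m in focal:
    y = model(rng.uniform(lo, hi, size=2000))  # sample within the focal element
    inside = (y >= target[0]) & (y <= target[1])
    if inside.all():      # element certainly implies the event -> belief
        belief += m
    if inside.any():      # element possibly implies the event -> plausibility
        plausibility += m
print(belief, plausibility)
```

The gap between belief and plausibility is the hallmark of evidence theory: a single probability need not be committed when the inputs are only known to intervals.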

  12. Uncertainty Management in Seismic Vulnerability Assessment Using Granular Computing Based on Covering of Universe

    NASA Astrophysics Data System (ADS)

    Khamespanah, F.; Delavar, M. R.; Zare, M.

    2013-05-01

    Earthquake is an abrupt displacement of the earth's crust caused by the discharge of strain collected along faults or by volcanic eruptions. Earthquake as a recurring natural cataclysm has always been a matter of concern in Tehran, the capital of Iran, a city lying on a number of known and unknown faults. Earthquakes can cause severe physical, psychological and financial damages. Consequently, some procedures should be developed to assist in modelling the potential casualties and their spatial uncertainty. One of these procedures is the production of seismic vulnerability maps to take preventive measures to mitigate corporeal and financial losses of future earthquakes. Since vulnerability assessment is a multi-criteria decision making problem depending on some parameters and expert judgments, it is undoubtedly characterized by intrinsic uncertainties. In this study, it is attempted to use the Granular computing (GrC) model based on covering of universe to handle the spatial uncertainty. The granular computing model concentrates on a general theory and methodology for problem solving as well as information processing by assuming multiple levels of granularity. Basic elements in granular computing are subsets, classes, and clusters of a universe called elements. In this research GrC is used for extracting classification rules based on seismic vulnerability with minimum entropy to handle uncertainty related to earthquake data. Tehran was selected as the study area. In our previous research, a granular computing model based on a partition model of the universe was employed. That model has some limitations in defining similarity between elements of the universe and defining granules. In that model, similarity between elements is defined based on an equivalence relation, according to which two objects are similar based on some attributes provided that, for each attribute, the values of these objects are equal. In this research a general relation for defining similarity between

  13. Anthropometric approaches and their uncertainties to assigning computational phantoms to individual patients in pediatric dosimetry studies

    NASA Astrophysics Data System (ADS)

    Whalen, Scott; Lee, Choonsik; Williams, Jonathan L.; Bolch, Wesley E.

    2008-01-01

    Current efforts to reconstruct organ doses in children undergoing diagnostic imaging or therapeutic interventions using ionizing radiation typically rely upon the use of reference anthropomorphic computational phantoms coupled to Monte Carlo radiation transport codes. These phantoms are generally matched to individual patients based upon nearest age or sometimes total body mass. In this study, we explore alternative methods of phantom-to-patient matching with the goal of identifying those methods which yield the lowest residual errors in internal organ volumes. Various thoracic and abdominal organs were segmented and organ volumes obtained from chest-abdominal-pelvic (CAP) computed tomography (CT) image sets from 38 pediatric patients ranging in age from 2 months to 15 years. The organs segmented included the skeleton, heart, kidneys, liver, lungs and spleen. For each organ, least-squared regression lines, 95th percentile confidence intervals and 95th percentile prediction intervals were established as a function of patient age, trunk volume, estimated trunk mass, trunk height, and three estimates of the ventral body cavity volume based on trunk height alone, or in combination with circumferential, width and/or breadth measurements in the mid-chest of the patient. When matching phantom to patient based upon age, residual uncertainties in organ volumes ranged from 53% (lungs) to 33% (kidneys), and when trunk mass was used (surrogate for total body mass as we did not have images of patient head, arms or legs), these uncertainties ranged from 56% (spleen) to 32% (liver). When trunk height is used as the matching parameter, residual uncertainties in organ volumes were reduced to between 21 and 29% for all organs except the spleen (40%). In the case of the lungs and skeleton, the two-fold reduction in organ volume uncertainties was seen in moving from patient age to trunk height—a parameter easily measured in the clinic. When ventral body cavity volumes were used
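The matching approach rests on ordinary least-squares regression with 95% prediction intervals around the fitted line. A generic sketch, using invented trunk-height/organ-volume data rather than the study's measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical data: trunk height (cm) vs organ volume (cm^3)
x = np.array([30., 35., 40., 45., 50., 55., 60., 65.])
y = np.array([210., 300., 380., 500., 590., 720., 830., 980.])
n = len(x)

# Least-squares regression line
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
s = np.sqrt(np.sum(resid**2) / (n - 2))   # residual standard error

# 95% prediction interval for a new patient with trunk height x0
x0 = 48.0
se = s * np.sqrt(1 + 1/n + (x0 - x.mean())**2 / np.sum((x - x.mean())**2))
t = stats.t.ppf(0.975, df=n - 2)
yhat = slope * x0 + intercept
print(f"predicted {yhat:.0f} cm^3, 95% PI half-width {t * se:.0f} cm^3")
```

The relative half-width of this prediction interval is the "residual uncertainty" the abstract compares across matching parameters (age, trunk mass, trunk height, cavity volume).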

  14. A computational model of limb impedance control based on principles of internal model uncertainty.

    PubMed

    Mitrovic, Djordje; Klanke, Stefan; Osu, Rieko; Kawato, Mitsuo; Vijayakumar, Sethu

    2010-10-26

    Efficient human motor control is characterized by an extensive use of joint impedance modulation, which is achieved by co-contracting antagonistic muscles in a way that is beneficial to the specific task. While there is much experimental evidence available that the nervous system employs such strategies, no generally valid computational model of impedance control derived from first principles has been proposed so far. Here we develop a new impedance control model for antagonistic limb systems which is based on a minimization of uncertainties in the internal model predictions. In contrast to previously proposed models, our framework predicts a wide range of impedance control patterns, during stationary and adaptive tasks. This indicates that many well-known impedance control phenomena naturally emerge from the first principles of a stochastic optimization process that minimizes internal model prediction uncertainties, along with energy and accuracy demands. The insights from this computational model could be used to interpret existing experimental impedance control data from the viewpoint of optimality or could even govern the design of future experiments based on principles of internal model uncertainty.

  15. Efficient Parameter Estimation and Uncertainty Quantification for Computationally Expensive Simulations with Subsurface and Hydrology Applications

    NASA Astrophysics Data System (ADS)

    Shoemaker, C. A.; Singh, A.; Wang, Y.; Woodbury, J.

    2011-12-01

    Solving inverse problems for nonlinear simulation models with a nonlinear objective is usually a global optimization problem. This talk will discuss algorithms that employ response surfaces as a surrogate for an expensive simulation model or parallel computing to significantly reduce the computational time required to solve continuous global optimization problems and uncertainty analysis of simulation models that require a substantial amount of CPU time for each simulation. In order to reduce the number of simulations required, we are interested in utilizing information from all previous simulations done as part of an optimization search by building a (radial basis function) multivariate response surface that interpolates these earlier simulations. We will present examples of the application of these methods to significant environmental problems described by computationally intensive simulation models used worldwide including a large groundwater aquifer and a watershed model SWAT, which is used to describe potential pollution of NYC's drinking water. The models use site-specific data and the new algorithms are compared to well-known methods like PEST, SQP, and genetic algorithms. We will also describe an uncertainty analysis method SOARS that uses derivative-free optimization to help construct a response surface of the likelihood function to which Markov Chain Monte Carlo is applied. This approach has been shown to reduce CPU requirements to less than 1/10 of what is required by conventional MCMC uncertainty analysis. The computational methods described here are general and can be applied to a wide range of scientific and engineering problems described by nonlinear simulation models including those in the geosciences. Contact the senior author about open source software.
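The response-surface idea — interpolate all previous expensive runs, then let the cheap surrogate propose the next evaluation point — can be sketched with SciPy's RBF interpolator. The objective function and the candidate-point loop below are illustrative stand-ins, not the authors' algorithms (which use more careful sampling strategies than uniform random candidates).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

def expensive(x):
    # Stand-in for a costly simulation-based objective (2D Rosenbrock-like)
    return (1 - x[:, 0])**2 + 5 * (x[:, 1] - x[:, 0]**2)**2

# Initial design: a handful of "expensive" evaluations
X = rng.uniform(-2, 2, size=(15, 2))
y = expensive(X)

for _ in range(20):
    surrogate = RBFInterpolator(X, y)         # interpolate all runs so far
    cand = rng.uniform(-2, 2, size=(500, 2))  # cheap candidate points
    best = cand[np.argmin(surrogate(cand))]   # pick the surrogate minimizer
    X = np.vstack([X, best])                  # one new expensive evaluation
    y = np.append(y, expensive(best[None, :]))

print("best found:", X[np.argmin(y)], y.min())
```

Each iteration costs one expensive evaluation instead of the hundreds a direct search would need, which is the source of the CPU savings the abstract claims.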

  16. Advanced computational tools for optimization and uncertainty quantification of carbon capture processes

    SciTech Connect

    Miller, David C.; Ng, Brenda; Eslick, John

    2014-01-01

    Advanced multi-scale modeling and simulation has the potential to dramatically reduce development time, resulting in considerable cost savings. The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and universities that is developing, demonstrating, and deploying a suite of multi-scale modeling and simulation tools. One significant computational tool is FOQUS, a Framework for Optimization and Quantification of Uncertainty and Sensitivity, which enables basic data submodels, including thermodynamics and kinetics, to be used within detailed process models to rapidly synthesize and optimize a process and determine the level of uncertainty associated with the resulting process. The overall approach of CCSI is described with a more detailed discussion of FOQUS and its application to carbon capture systems.

  17. Universal Quantum Computing:. Third Gen Prototyping Utilizing Relativistic `Trivector' R-Qubit Modeling Surmounting Uncertainty

    NASA Astrophysics Data System (ADS)

    Amoroso, Richard L.; Kauffman, Louis H.; Giandinoto, Salvatore

    2013-09-01

    We postulate that bulk universal quantum computing (QC) cannot be achieved without surmounting the quantum uncertainty principle, an inherent barrier by empirical definition in the regime described by the Copenhagen interpretation of quantum theory - the last remaining hurdle to bulk QC. To surmount uncertainty with probability 1, we redefine the basis for the qubit utilizing a unique form of M-Theoretic Calabi-Yau mirror symmetry cast in an LSXD Dirac covariant polarized vacuum with an inherent `Feynman synchronization backbone'. This also incorporates a relativistic qubit (r-qubit) providing additional degrees of freedom beyond the traditional Bloch 2-sphere qubit, bringing the r-qubit into correspondence with our version of Relativistic Topological Quantum Field Theory (RTQFT). We present a 3rd generation prototype design for simplifying bulk QC implementation.

  18. Uncertainty quantification based on pillars of experiment, theory, and computation. Part I: Data analysis

    NASA Astrophysics Data System (ADS)

    Elishakoff, I.; Sarlin, N.

    2016-06-01

    In this paper we provide a general methodology of analysis and design of systems involving uncertainties. Available experimental data is enclosed by some geometric figures (triangle, rectangle, ellipse, parallelogram, super ellipse) of minimum area. Then these areas are inflated resorting to the Chebyshev inequality in order to take into account the forecasted data. The next step consists in evaluating the response of the system when uncertainties are confined to one of the above five suitably inflated geometric figures. This step involves a combined theoretical and computational analysis. We evaluate the maximum response of the system subjected to variation of uncertain parameters in each hypothesized region. The results of triangular, interval, ellipsoidal, parallelogram, and super ellipsoidal calculi are compared with the view of identifying the region that leads to the minimum of the maximum response. That response is identified as a result of the suggested predictive inference. The methodology thus synthesizes the probabilistic notion with each of the five calculi. Using the term "pillar" in the title was inspired by the News Release (2013) on according the Honda Prize to J. Tinsley Oden, stating, among other things, that "Dr. Oden refers to computational science as the "third pillar" of scientific inquiry, standing beside theoretical and experimental science. Computational science serves as a new paradigm for acquiring knowledge and informing decisions important to humankind". Analysis of systems with uncertainties necessitates employment of all three pillars. The analysis is based on the assumption that the five shapes are each different conservative estimates of the true bounding region. The smallest of the maximal displacements in x and y directions (for a 2D system) therefore provides the closest estimate of the true displacements based on the above assumption.
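The Chebyshev inflation step can be sketched for an axis-aligned rectangle: Chebyshev's inequality P(|X - mu| >= k*sigma) <= 1/k^2 gives a distribution-free box that covers forecasted data with chosen probability, and the maximum of a monotone response over the box is attained at a corner. The data, response function, and coverage target below are illustrative; the paper treats five figure types fitted with minimum area.

```python
import numpy as np

rng = np.random.default_rng(3)

# Experimental samples of two uncertain parameters (invented data)
data = rng.normal([2.0, 5.0], [0.1, 0.3], size=(50, 2))
mean, std = data.mean(axis=0), data.std(axis=0, ddof=1)

# Chebyshev: P(|X - mu| >= k*sigma) <= 1/k^2; choose k for >= 95% coverage
k = np.sqrt(1 / 0.05)            # 1/k^2 = 0.05  ->  k ~ 4.47
lo, hi = mean - k * std, mean + k * std

# Max response over the inflated rectangle: for a monotone response the
# extreme lies at a corner, so it suffices to check all four corners
def response(a, b):
    return 3 * a + b**2

corners = [(a, b) for a in (lo[0], hi[0]) for b in (lo[1], hi[1])]
worst = max(response(a, b) for a, b in corners)
print(f"a:[{lo[0]:.2f},{hi[0]:.2f}] b:[{lo[1]:.2f},{hi[1]:.2f}] max={worst:.2f}")
```

Because Chebyshev makes no distributional assumption, the box is conservative; comparing such conservative regions is precisely how the paper selects the figure with the smallest maximum response.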

  19. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations

    NASA Technical Reports Server (NTRS)

    Navard, Sharon E.

    1989-01-01

    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.
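A minimal sketch of the two statistical ingredients mentioned: Shewhart X-bar/R control limits for process control, and the four-to-one ratio check of instrument tolerance against calibration-standard uncertainty. All readings and limits below are invented for illustration.

```python
import numpy as np

# Hypothetical force-calibration readings (kN), five per subgroup
subgroups = np.array([
    [10.02, 10.01,  9.99, 10.00, 10.03],
    [ 9.98, 10.00, 10.02, 10.01,  9.99],
    [10.01, 10.03, 10.00,  9.98, 10.02],
    [10.00,  9.99, 10.01, 10.02, 10.00],
])
xbar = subgroups.mean(axis=1)                           # subgroup means
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)  # subgroup ranges

# Shewhart chart constants for subgroup size n = 5 (standard SPC tables)
A2, D3, D4 = 0.577, 0.0, 2.114
xbb, rbar = xbar.mean(), ranges.mean()
ucl, lcl = xbb + A2 * rbar, xbb - A2 * rbar
print(f"X-bar limits: [{lcl:.3f}, {ucl:.3f}], R-chart UCL: {D4 * rbar:.3f}")

# Four-to-one check: instrument tolerance vs calibration-standard uncertainty
tolerance, std_uncertainty = 0.10, 0.02   # invented values (kN)
print("meets 4:1 ratio:", tolerance / std_uncertainty >= 4.0)
```

When the 4:1 ratio fails, the abstract's remedy applies: compute the calibration uncertainty explicitly from its error sources rather than relying on the ratio rule.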

  20. Uncertainty studies of real anode surface area in computational analysis for molten salt electrorefining

    NASA Astrophysics Data System (ADS)

    Choi, Sungyeol; Park, Jaeyeong; Hoover, Robert O.; Phongikaroon, Supathorn; Simpson, Michael F.; Kim, Kwang-Rag; Hwang, Il Soon

    2011-09-01

    This study examines how much the cell potential changes with five differently assumed real anode surface area cases. Determining the real anode surface area is a significant issue to be resolved for precisely modeling molten salt electrorefining. Based on a three-dimensional electrorefining model, calculated cell potentials are compared with an experimental cell potential variation over 80 h of operation of the Mark-IV electrorefiner with driver fuel from the Experimental Breeder Reactor II. We succeeded in achieving good agreement with the overall trend of the experimental data with appropriate selection of a mode for the real anode surface area, but there are still local inconsistencies between theoretical calculation and experimental observation. In addition, the results were validated and compared with two-dimensional results to identify possible uncertainty factors that had to be further considered in a computational electrorefining analysis. These uncertainty factors include material properties, heterogeneous material distribution, surface roughness, and current efficiency. Zirconium's abundance and complex behavior have more impact on uncertainty towards the latter period of electrorefining for a given batch of fuel. The benchmark results found that anode materials would be dissolved from both axial and radial directions, at least for low burn-up metallic fuels, after the active liquid sodium bonding was dissolved.

  1. Uncertainty Studies of Real Anode Surface Area in Computational Analysis for Molten Salt Electrorefining

    SciTech Connect

    Sungyeol Choi; Jaeyeong Park; Robert O. Hoover; Supathorn Phongikaroon; Michael F. Simpson; Kwang-Rag Kim; Il Soon Hwang

    2011-09-01

    This study examines how much the cell potential changes with five differently assumed real anode surface area cases. Determining the real anode surface area is a significant issue to be resolved for precisely modeling molten salt electrorefining. Based on a three-dimensional electrorefining model, calculated cell potentials are compared with an experimental cell potential variation over 80 hours of operation of the Mark-IV electrorefiner with driver fuel from the Experimental Breeder Reactor II. We succeeded in achieving good agreement with the overall trend of the experimental data with appropriate selection of a mode for the real anode surface area, but there are still local inconsistencies between theoretical calculation and experimental observation. In addition, the results were validated and compared with two-dimensional results to identify possible uncertainty factors that had to be further considered in a computational electrorefining analysis. These uncertainty factors include material properties, heterogeneous material distribution, surface roughness, and current efficiency. Zirconium's abundance and complex behavior have more impact on uncertainty towards the latter period of electrorefining for a given batch of fuel. The benchmark results found that anode materials would be dissolved from both axial and radial directions, at least for low burn-up metallic fuels, after the active liquid sodium bonding was dissolved.

  2. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    SciTech Connect

    Hadjidoukas, P.E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  3. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.
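The TMCMC idea — anneal from prior to posterior by tempering the likelihood, then reweight, resample, and perturb the sample population at each stage — can be sketched on a toy scalar inverse problem. Real implementations (including Π4U) choose each tempering step adaptively from the weight statistics and run full MCMC chains; the fixed schedule and single Metropolis step here are simplifications for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy inverse problem: infer theta from noisy observations (true theta = 2.0)
data = rng.normal(2.0, 0.5, size=20)
sigma = 0.5

def log_like(theta):
    # vectorized Gaussian log-likelihood for an array of theta values
    return -0.5 * np.sum((data - theta[:, None])**2, axis=1) / sigma**2

N = 2000
theta = rng.uniform(-5, 5, size=N)            # samples from the prior (p = 0)
schedule = [0.0, 0.01, 0.05, 0.2, 0.5, 1.0]   # fixed annealing exponents
for p_old, p_new in zip(schedule[:-1], schedule[1:]):
    ll = log_like(theta)
    w = np.exp((p_new - p_old) * (ll - ll.max()))   # importance weights
    idx = rng.choice(N, size=N, p=w / w.sum())      # resample by weight
    theta = theta[idx]
    # one Metropolis step per sample at the new tempered posterior
    prop = theta + rng.normal(0, 0.2, size=N)
    accept = (np.log(rng.uniform(size=N))
              < p_new * (log_like(prop) - log_like(theta)))
    theta = np.where(accept, prop, theta)

print(f"posterior mean ~ {theta.mean():.2f}, std ~ {theta.std():.3f}")
```

Because every sample's likelihood evaluation is independent, each stage is trivially parallel, which is what makes the method a good fit for the massively parallel architectures the abstract targets.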

  4. "I Guess My Question Is": What Is the Co-Occurrence of Uncertainty and Learning in Computer-Mediated Discourse?

    ERIC Educational Resources Information Center

    Jordan, Michelle E.; Cheng, An-Chih Janne; Schallert, Diane; Song, Kwangok; Lee, SoonAh; Park, Yangjoo

    2014-01-01

    The purpose of this study was to contribute to a better understanding of learning in computer-supported collaborative learning (CSCL) environments by investigating the co-occurrence of uncertainty expressions and expressions of learning in a graduate course in which students collaborated in classroom computer-mediated discussions. Results showed…

  5. Analysis of the CONRAD computational problems expressing only stochastic uncertainties: neutrons and protons.

    PubMed

    Gualdrini, G; Tanner, R J; Agosteo, S; Pola, A; Bedogni, R; Ferrari, P; Lacoste, V; Bordy, J-M; Chartier, J-L; de Carlan, L; Gomez Ros, J-M; Grosswendt, B; Kodeli, I; Price, R A; Rollet, S; Schultz, F; Siebert, B; Terrissol, M; Zankl, M

    2008-01-01

    Within the scope of CONRAD (A Coordinated Action for Radiation Dosimetry), Work Package 4 on Computational Dosimetry collaborated with the other research actions on internal dosimetry, complex mixed radiation fields at workplaces, and medical staff dosimetry. Besides these collaborative actions, WP4 promoted an international comparison on eight problems with their associated experimental data. A first set of three problems, the results of which are herewith summarised, dealt only with the expression of the stochastic uncertainties of the results: the analysis of the response function of a proton recoil telescope detector, the study of a Bonner sphere neutron spectrometer and the analysis of the neutron spectrum and the dosimetric quantity Hp(10) in a thermal neutron facility operated by IRSN Cadarache (the SIGMA facility). A second paper will summarise the results of the other five problems, which dealt with the full uncertainty budget estimate. A third paper will present the results of a comparison on in vivo measurements of the (241)Am bone-seeker nuclide distributed in the knee. All the detailed papers will be presented in the WP4 Final Workshop Proceedings.
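For problems restricted to stochastic uncertainty, the quantity to report is the statistical standard error of the Monte Carlo tally itself. A generic sketch, with toy per-history scores rather than the CONRAD problems:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy Monte Carlo tally: most histories score zero, a few deposit
# energy in the detector region (values are invented for illustration)
N = 100_000
scores = np.where(rng.uniform(size=N) < 0.02,
                  rng.exponential(1.0, size=N), 0.0)

mean = scores.mean()
# relative statistical (stochastic) uncertainty of the tally mean
rel_err = scores.std(ddof=1) / (mean * np.sqrt(N))
print(f"tally = {mean:.5f} +/- {100 * rel_err:.1f}%")
```

The relative error shrinks like 1/sqrt(N), so quoting it alongside the tally lets independent codes be compared on purely stochastic grounds.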

  6. Experimental evaluation of the uncertainty associated with the result of feature-of-size measurements through computed tomography

    NASA Astrophysics Data System (ADS)

    Fernandes, T. L.; Donatelli, G. D.; Baldo, C. R.

    2016-07-01

Computed tomography for dimensional metrology has been part of the quality control loop for about a decade. Owing to the complex system of measurement-error causes, measurement uncertainty has generally not been reported consistently. The ISO 15530-3 experimental approach, which makes use of calibrated parts, has been tested for estimating the uncertainty of CT-based measurements of features of size of a test object made of POM. Particular attention is given to the design of the experiment and to the measurement uncertainty components. The most significant experimental findings are outlined and discussed in this paper.

  7. Overcoming computational uncertainties to reveal chemical sensitivity in single molecule conduction calculations

    NASA Astrophysics Data System (ADS)

    Solomon, Gemma C.; Reimers, Jeffrey R.; Hush, Noel S.

    2005-06-01

In the calculation of conduction through single molecules, approximations about the geometry and electronic structure of the system are usually made in order to simplify the problem. Previously [G. C. Solomon, J. R. Reimers, and N. S. Hush, J. Chem. Phys. 121, 6615 (2004)], we have shown that, in calculations employing cluster models for the electrodes, proper treatment of the open-shell nature of the clusters is the most important computational feature required to make the results sensitive to variations in the structural and chemical features of the system. Here, we expand this and establish a general hierarchy of requirements involving treatment of geometrical approximations. These approximations are categorized into two classes: those associated with finite-dimensional methods for representing the semi-infinite electrodes, and those associated with the chemisorption topology. We show that ca. 100 unique atoms are required in order to properly characterize each electrode: using fewer atoms leads to nonsystematic variations in conductivity that can overwhelm the subtler changes. The choice of binding site is shown to be the next most important feature, while some effects that are difficult to control experimentally concerning the orientations at each binding site are actually shown to be insignificant. Verification of this result provides a general test for the precision of computational procedures for molecular conductivity. Predictions concerning the dependence of conduction on substituent and other effects on the central molecule are found to be meaningful only when they exceed the uncertainties of the effects associated with binding-site variation.

  8. Overcoming computational uncertainties to reveal chemical sensitivity in single molecule conduction calculations.

    PubMed

    Solomon, Gemma C; Reimers, Jeffrey R; Hush, Noel S

    2005-06-01

In the calculation of conduction through single molecules, approximations about the geometry and electronic structure of the system are usually made in order to simplify the problem. Previously [G. C. Solomon, J. R. Reimers, and N. S. Hush, J. Chem. Phys. 121, 6615 (2004)], we have shown that, in calculations employing cluster models for the electrodes, proper treatment of the open-shell nature of the clusters is the most important computational feature required to make the results sensitive to variations in the structural and chemical features of the system. Here, we expand this and establish a general hierarchy of requirements involving treatment of geometrical approximations. These approximations are categorized into two classes: those associated with finite-dimensional methods for representing the semi-infinite electrodes, and those associated with the chemisorption topology. We show that ca. 100 unique atoms are required in order to properly characterize each electrode: using fewer atoms leads to nonsystematic variations in conductivity that can overwhelm the subtler changes. The choice of binding site is shown to be the next most important feature, while some effects that are difficult to control experimentally concerning the orientations at each binding site are actually shown to be insignificant. Verification of this result provides a general test for the precision of computational procedures for molecular conductivity. Predictions concerning the dependence of conduction on substituent and other effects on the central molecule are found to be meaningful only when they exceed the uncertainties of the effects associated with binding-site variation.

  9. Fast computation of statistical uncertainty for spatiotemporal distributions estimated directly from dynamic cone beam SPECT projections

    SciTech Connect

    Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.

    2001-04-09

The estimation of time-activity curves and kinetic model parameters directly from projection data is potentially useful for clinical dynamic single photon emission computed tomography (SPECT) studies, particularly in those clinics that have only single-detector systems and thus are not able to perform rapid tomographic acquisitions. Because the radiopharmaceutical distribution changes while the SPECT gantry rotates, projections at different angles come from different tracer distributions. A dynamic image sequence reconstructed from the inconsistent projections acquired by a slowly rotating gantry can contain artifacts that lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying regions of interest on the images. If cone beam collimators are used and the focal point of the collimators always remains in a particular transaxial plane, additional artifacts can arise in other planes reconstructed using insufficient projection samples [1]. If the projection samples truncate the patient's body, this can result in additional image artifacts. To overcome these sources of bias in conventional image-based dynamic data analysis, we and others have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view [2-8]. In our previous work we developed a computationally efficient method for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions from dynamic SPECT projection data [5], which extended Formiconi's least squares algorithm for reconstructing temporally static distributions [9]. In addition, we studied the biases that result from modeling various orders of temporal continuity and using various time samplings [5]. In the present work, we address computational issues associated with evaluating the statistical uncertainty of

  10. A Bootstrap Approach to Computing Uncertainty in Inferred Oil and Gas Reserve Estimates

    SciTech Connect

Attanasi, Emil D.; Coburn, Timothy C.

    2004-03-15

    This study develops confidence intervals for estimates of inferred oil and gas reserves based on bootstrap procedures. Inferred reserves are expected additions to proved reserves in previously discovered conventional oil and gas fields. Estimates of inferred reserves accounted for 65% of the total oil and 34% of the total gas assessed in the U.S. Geological Survey's 1995 National Assessment of oil and gas in US onshore and State offshore areas. When the same computational methods used in the 1995 Assessment are applied to more recent data, the 80-year (from 1997 through 2076) inferred reserve estimates for pre-1997 discoveries located in the lower 48 onshore and state offshore areas amounted to a total of 39.7 billion barrels of oil (BBO) and 293 trillion cubic feet (TCF) of gas. The 90% confidence interval about the oil estimate derived from the bootstrap approach is 22.4 BBO to 69.5 BBO. The comparable 90% confidence interval for the inferred gas reserve estimate is 217 TCF to 413 TCF. The 90% confidence interval describes the uncertainty that should be attached to the estimates. It also provides a basis for developing scenarios to explore the implications for energy policy analysis.
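The percentile-bootstrap procedure behind such confidence intervals can be sketched in a few lines. This is a generic illustration with synthetic per-field values, not the USGS data or the exact resampling design of the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-field inferred-reserve additions (BBO); illustrative
# stand-ins only, not the assessment data used in the study.
field_estimates = rng.lognormal(mean=-2.0, sigma=1.0, size=200)

def bootstrap_ci(data, stat=np.sum, n_boot=5000, alpha=0.10, rng=None):
    """Percentile-bootstrap confidence interval for an aggregate statistic."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(data)
    boots = np.array([stat(rng.choice(data, size=n, replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

lo, hi = bootstrap_ci(field_estimates, rng=rng)
print(f"point estimate: {field_estimates.sum():.1f} BBO")
print(f"90% CI: [{lo:.1f}, {hi:.1f}] BBO")
```

The interval widens as the underlying per-field distribution grows more skewed, which is why the study's oil interval is markedly asymmetric about its point estimate.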

  11. Expressing Uncertainty in Computer-Mediated Discourse: Language as a Marker of Intellectual Work

    ERIC Educational Resources Information Center

    Jordan, Michelle E.; Schallert, Diane L.; Park, Yangjoo; Lee, SoonAh; Chiang, Yueh-hui Vanessa; Cheng, An-Chih Janne; Song, Kwangok; Chu, Hsiang-Ning Rebecca; Kim, Taehee; Lee, Haekyung

    2012-01-01

    Learning and dialogue may naturally engender feelings and expressions of uncertainty for a variety of reasons and purposes. Yet, little research has examined how patterns of linguistic uncertainty are enacted and changed over time as students reciprocally influence one another and the dialogical system they are creating. This study describes the…

  12. Probability distributions of molecular observables computed from Markov models. II. Uncertainties in observables and their time-evolution

    NASA Astrophysics Data System (ADS)

    Chodera, John D.; Noé, Frank

    2010-09-01

    Discrete-state Markov (or master equation) models provide a useful simplified representation for characterizing the long-time statistical evolution of biomolecules in a manner that allows direct comparison with experiments as well as the elucidation of mechanistic pathways for an inherently stochastic process. A vital part of meaningful comparison with experiment is the characterization of the statistical uncertainty in the predicted experimental measurement, which may take the form of an equilibrium measurement of some spectroscopic signal, the time-evolution of this signal following a perturbation, or the observation of some statistic (such as the correlation function) of the equilibrium dynamics of a single molecule. Without meaningful error bars (which arise from both approximation and statistical error), there is no way to determine whether the deviations between model and experiment are statistically meaningful. Previous work has demonstrated that a Bayesian method that enforces microscopic reversibility can be used to characterize the statistical component of correlated uncertainties in state-to-state transition probabilities (and functions thereof) for a model inferred from molecular simulation data. Here, we extend this approach to include the uncertainty in observables that are functions of molecular conformation (such as surrogate spectroscopic signals) characterizing each state, permitting the full statistical uncertainty in computed spectroscopic experiments to be assessed. We test the approach in a simple model system to demonstrate that the computed uncertainties provide a useful indicator of statistical variation, and then apply it to the computation of the fluorescence autocorrelation function measured for a dye-labeled peptide previously studied by both experiment and simulation.
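The core idea of propagating statistical uncertainty from transition counts to a computed observable can be sketched as follows. This simplified version samples each row of the transition matrix from an independent Dirichlet posterior and omits the reversibility constraint the paper enforces; all counts and per-state signal values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed transition counts between 3 conformational states.
counts = np.array([[90,  8,  2],
                   [10, 80, 10],
                   [ 3,  7, 90]])

# Hypothetical per-state observable (e.g., a surrogate spectroscopic signal).
signal = np.array([0.1, 0.5, 0.9])

def stationary(T):
    """Stationary distribution of a row-stochastic matrix (left eigenvector)."""
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

# Sample transition matrices row-wise from Dirichlet posteriors and
# propagate each sample to the equilibrium expectation of the signal.
samples = np.array([
    stationary(np.array([rng.dirichlet(row + 1) for row in counts])) @ signal
    for _ in range(2000)
])

print(f"equilibrium signal: {samples.mean():.3f} +/- {samples.std():.3f}")
```

The spread of `samples` is the statistical error bar on the predicted measurement; the paper's Bayesian scheme additionally captures correlations induced by microscopic reversibility.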

  13. A methodology for computing uncertainty bounds of multivariable systems based on sector stability theory concepts

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1992-01-01

    The application of a sector-based stability theory approach to the formulation of useful uncertainty descriptions for linear, time-invariant, multivariable systems is explored. A review of basic sector properties and sector-based approach are presented first. The sector-based approach is then applied to several general forms of parameter uncertainty to investigate its advantages and limitations. The results indicate that the sector uncertainty bound can be used effectively to evaluate the impact of parameter uncertainties on the frequency response of the design model. Inherent conservatism is a potential limitation of the sector-based approach, especially for highly dependent uncertain parameters. In addition, the representation of the system dynamics can affect the amount of conservatism reflected in the sector bound. Careful application of the model can help to reduce this conservatism, however, and the solution approach has some degrees of freedom that may be further exploited to reduce the conservatism.

  14. Computing continuous record of discharge with quantified uncertainty using index velocity observations: A probabilistic machine learning approach

    NASA Astrophysics Data System (ADS)

    Farahmand, Touraj; Hamilton, Stuart

    2016-04-01

Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs). In general, the index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate and recommended when more than one specific discharge can be measured for a specific stage, such as under backwater and unsteady flow conditions caused by (but not limited to) the following: stream confluences, streams flowing into lakes or reservoirs, tide-affected streams, regulated streamflows (dams or control structures), or streams affected by meteorological forcing, such as strong prevailing winds. In existing index velocity modeling techniques, two models (ratings) are required: an index velocity model and a stage-area model. The outputs of these models, mean channel velocity (Vm) and cross-sectional area (A), are then multiplied together to compute discharge. Mean channel velocity (Vm) can generally be determined by a multivariate parametric regression model, such as linear regression in the simplest case. The main challenges in existing index velocity modeling techniques are: 1) preprocessing and QA/QC of continuous index velocity data and synchronizing them with discharge measurements; 2) the nonlinear relationship between mean velocity and index velocity, which is not uncommon at monitoring locations; 3) model exploration and analysis to find the optimal regression model predictor(s) and model type (linear vs. nonlinear and, if nonlinear, the number of parameters); 4) model changes caused by dynamical changes in the environment (geomorphic, biological) over time; 5) deployment of the final model into the Data Management Systems (DMS) for real-time discharge calculation; and 6) objective estimation of uncertainty caused by field measurement errors, structural uncertainty, parameter uncertainty, and continuous sensor data
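A minimal sketch of the two-rating workflow (index-velocity rating times a stage-area rating) with a residual-based uncertainty estimate is shown below. It uses synthetic data and a plain linear rating rather than the probabilistic machine-learning approach of the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired observations: index velocity from an ADVM and mean
# channel velocity from discharge measurements (m/s); illustrative only.
v_index = np.linspace(0.2, 1.5, 30)
v_mean_obs = 0.05 + 0.9 * v_index + rng.normal(0, 0.03, size=v_index.size)

# Fit the index-velocity rating: Vm = b0 + b1 * Vindex.
b1, b0 = np.polyfit(v_index, v_mean_obs, 1)
residual_std = np.std(v_mean_obs - (b0 + b1 * v_index), ddof=2)

def discharge(v_idx, area):
    """Continuous discharge Q = Vm * A, with A from a stage-area rating.
    Uncertainty here reflects rating scatter only (area error ignored)."""
    vm = b0 + b1 * v_idx
    return vm * area, residual_std * area

q, sigma = discharge(v_idx=0.8, area=120.0)  # area in m^2, hypothetical
print(f"Q = {q:.1f} +/- {sigma:.1f} m^3/s")
```

A fuller treatment would propagate stage-area error and parameter uncertainty as well, which is precisely challenge 6) in the list above.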

  15. Computation of the intervals of uncertainties about the parameters found for identification

    NASA Technical Reports Server (NTRS)

    Mereau, P.; Raymond, J.

    1982-01-01

A modeling method to calculate the intervals of uncertainty for parameters found by identification is described. The region of confidence and the general approach to the calculation of these intervals are discussed. The general subprograms for the determination of dimensions are described, along with their organizational charts, the tests carried out, and listings of the different subprograms.

  16. PABS: A Computer Program to Normalize Emission Probabilities and Calculate Realistic Uncertainties

    SciTech Connect

    Caron, D. S.; Browne, E.; Norman, E. B.

    2009-08-21

    The program PABS normalizes relative particle emission probabilities to an absolute scale and calculates the relevant uncertainties on this scale. The program is written in Java using the JDK 1.6 library. For additional information about system requirements, the code itself, and compiling from source, see the README file distributed with this program. The mathematical procedures used are given below.
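The normalization and first-order uncertainty propagation that a tool like PABS performs can be sketched as follows. The intensities below are hypothetical, and the derivative-based propagation shown is a standard textbook treatment rather than PABS's exact algorithm:

```python
import numpy as np

# Hypothetical relative emission intensities and 1-sigma uncertainties
# (arbitrary units); illustrative stand-ins only.
rel = np.array([100.0, 25.0, 5.0])
sig = np.array([2.0, 1.0, 0.4])

def normalize(rel, sig):
    """Normalize relative emission probabilities to an absolute scale and
    propagate uncertainties to first order, treating inputs as independent."""
    S = rel.sum()
    p = rel / S
    var = np.empty_like(rel)
    for i in range(rel.size):
        # dP_i/dp_j = -rel_i/S^2 for j != i, and (S - rel_i)/S^2 for j == i.
        d = np.full(rel.size, -rel[i] / S**2)
        d[i] = (S - rel[i]) / S**2
        var[i] = np.sum((d * sig) ** 2)
    return p, np.sqrt(var)

p, p_sig = normalize(rel, sig)
for pi, si in zip(p, p_sig):
    print(f"{pi:.4f} +/- {si:.4f}")
```

Note that the normalized probabilities are anticorrelated through the shared denominator, which is why the "realistic uncertainties" of the title are smaller than naive relative-error arithmetic would suggest.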

  17. Uncertainty Evaluation of Computational Model Used to Support the Integrated Powerhead Demonstration Project

    NASA Technical Reports Server (NTRS)

    Steele, W. G.; Molder, K. J.; Hudson, S. T.; Vadasy, K. V.; Rieder, P. T.; Giel, T.

    2005-01-01

NASA and the U.S. Air Force are working on a joint project to develop a new hydrogen-fueled, full-flow, staged combustion rocket engine. The initial testing and modeling work for the Integrated Powerhead Demonstrator (IPD) project is being performed by NASA Marshall and Stennis Space Centers. A key factor in the testing of this engine is the ability to predict and measure the transient fluid flow during engine start and shutdown phases of operation. A model built by NASA Marshall in the ROCket Engine Transient Simulation (ROCETS) program is used to predict transient engine fluid flows. The model is initially calibrated to data from previous tests on the Stennis E1 test stand. The model is then used to predict the next run. Data from this run can then be used to recalibrate the model, providing a tool to guide the test program in incremental steps to reduce the risk to the prototype engine. In this paper, we define this type of model as a calibrated model. This paper proposes a method to estimate the uncertainty of a model calibrated to a set of experimental test data. The method is similar to that used in the calibration of experiment instrumentation. For the IPD example used in this paper, the model uncertainty is determined for both LOX and LH flow rates using previous data. The successful use of this model is then demonstrated to predict another similar test run within the uncertainty bounds. The paper summarizes the uncertainty methodology when a model is continually recalibrated with new test data. The methodology is general and can be applied to other calibrated models.

  18. Real-time, mixed-mode computing architecture for waveform-resolved lidar systems with total propagated uncertainty

    NASA Astrophysics Data System (ADS)

    Ortman, Robert L.; Carr, Domenic A.; James, Ryan; Long, Daniel; O'Shaughnessy, Matthew R.; Valenta, Christopher R.; Tuell, Grady H.

    2016-05-01

    We have developed a prototype real-time computer for a bathymetric lidar capable of producing point clouds attributed with total propagated uncertainty (TPU). This real-time computer employs a "mixed-mode" architecture comprised of an FPGA, CPU, and GPU. Noise reduction and ranging are performed in the digitizer's user-programmable FPGA, and coordinates and TPU are calculated on the GPU. A Keysight M9703A digitizer with user-programmable Xilinx Virtex 6 FPGAs digitizes as many as eight channels of lidar data, performs ranging, and delivers the data to the CPU via PCIe. The floating-point-intensive coordinate and TPU calculations are performed on an NVIDIA Tesla K20 GPU. Raw data and computed products are written to an SSD RAID, and an attributed point cloud is displayed to the user. This prototype computer has been tested using 7m-deep waveforms measured at a water tank on the Georgia Tech campus, and with simulated waveforms to a depth of 20m. Preliminary results show the system can compute, store, and display about 20 million points per second.

  19. Mathematical and Computational Tools for Predictive Simulation of Complex Coupled Systems under Uncertainty

    SciTech Connect

    Ghanem, Roger

    2013-03-25

Methods and algorithms are developed to enable the accurate analysis of problems that exhibit interacting physical processes with uncertainties. These uncertainties can pertain either to each of the physical processes or to the manner in which they depend on each other. These problems are cast within a polynomial chaos framework and their solution then involves either solving a large system of algebraic equations or a high dimensional numerical quadrature. In both cases, the curse of dimensionality is manifested. Procedures are developed for the efficient evaluation of the resulting linear equations that take advantage of the block sparse structure of these equations, resulting in a block recursive Schur complement construction. In addition, embedded quadratures are constructed that permit the evaluation of very high-dimensional integrals using low-dimensional quadratures adapted to particular quantities of interest. The low-dimensional integration is carried out in a transformed measure space in which the quantity of interest is low-dimensional. Finally, a procedure is also developed to discover a low-dimensional manifold, embedded in the initial high-dimensional one, in which scalar quantities of interest exist. This approach permits the functional expression of the reduced space in terms of the original space, thus permitting cross-scale sensitivity analysis.
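For readers unfamiliar with polynomial chaos, a toy one-dimensional projection illustrates the basic machinery; the coupled, high-dimensional setting of the abstract is far more involved. Here f(X) = exp(X) with X ~ N(0,1) is projected onto probabilists' Hermite polynomials by Gauss quadrature:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Gauss quadrature for the weight exp(-x^2/2); normalize to the N(0,1) measure.
nodes, weights = hermegauss(20)
weights = weights / np.sqrt(2 * np.pi)

order = 6
coeffs = []
for k in range(order + 1):
    basis = np.zeros(k + 1)
    basis[k] = 1.0                       # selects He_k
    Hk = hermeval(nodes, basis)
    # c_k = E[f(X) He_k(X)] / E[He_k^2], with E[He_k^2] = k!
    coeffs.append(np.sum(weights * np.exp(nodes) * Hk) / math.factorial(k))

mean = coeffs[0]                         # exact mean is exp(1/2) ~= 1.6487
var = sum(math.factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
print(f"PCE mean: {mean:.4f}, variance: {var:.4f}")
```

Once the coefficients are known, moments of the output come for free from orthogonality; the curse of dimensionality enters when X is a vector and the basis is a tensor product, which is what the embedded quadratures of the abstract are designed to tame.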

  20. Development of a Computational Framework for Stochastic Co-optimization of Water and Energy Resource Allocations under Climatic Uncertainty

    NASA Astrophysics Data System (ADS)

    Xuan, Y.; Mahinthakumar, K.; Arumugam, S.; DeCarolis, J.

    2015-12-01

    Owing to the lack of a consistent approach to assimilate probabilistic forecasts for water and energy systems, utilization of climate forecasts for conjunctive management of these two systems is very limited. Prognostic management of these two systems presents a stochastic co-optimization problem that seeks to determine reservoir releases and power allocation strategies while minimizing the expected operational costs subject to probabilistic climate forecast constraints. To address these issues, we propose a high performance computing (HPC) enabled computational framework for stochastic co-optimization of water and energy resource allocations under climate uncertainty. The computational framework embodies a new paradigm shift in which attributes of climate (e.g., precipitation, temperature) and its forecasted probability distribution are employed conjointly to inform seasonal water availability and electricity demand. The HPC enabled cyberinfrastructure framework is developed to perform detailed stochastic analyses, and to better quantify and reduce the uncertainties associated with water and power systems management by utilizing improved hydro-climatic forecasts. In this presentation, our stochastic multi-objective solver extended from Optimus (Optimization Methods for Universal Simulators), is introduced. The solver uses parallel cooperative multi-swarm method to solve for efficient solution of large-scale simulation-optimization problems on parallel supercomputers. The cyberinfrastructure harnesses HPC resources to perform intensive computations using ensemble forecast models of streamflow and power demand. The stochastic multi-objective particle swarm optimizer we developed is used to co-optimize water and power system models under constraints over a large number of ensembles. The framework sheds light on the application of climate forecasts and cyber-innovation framework to improve management and promote the sustainability of water and energy systems.

  1. Computing quality scores and uncertainty for approximate pattern matching in geospatial semantic graphs

    SciTech Connect

    Stracuzzi, David John; Brost, Randolph C.; Phillips, Cynthia A.; Robinson, David G.; Wilson, Alyson G.; Woodbridge, Diane M. -K.

    2015-09-26

    Geospatial semantic graphs provide a robust foundation for representing and analyzing remote sensor data. In particular, they support a variety of pattern search operations that capture the spatial and temporal relationships among the objects and events in the data. However, in the presence of large data corpora, even a carefully constructed search query may return a large number of unintended matches. This work considers the problem of calculating a quality score for each match to the query, given that the underlying data are uncertain. As a result, we present a preliminary evaluation of three methods for determining both match quality scores and associated uncertainty bounds, illustrated in the context of an example based on overhead imagery data.

  2. Computing quality scores and uncertainty for approximate pattern matching in geospatial semantic graphs

    DOE PAGES

    Stracuzzi, David John; Brost, Randolph C.; Phillips, Cynthia A.; Robinson, David G.; Wilson, Alyson G.; Woodbridge, Diane M. -K.

    2015-09-26

Geospatial semantic graphs provide a robust foundation for representing and analyzing remote sensor data. In particular, they support a variety of pattern search operations that capture the spatial and temporal relationships among the objects and events in the data. However, in the presence of large data corpora, even a carefully constructed search query may return a large number of unintended matches. This work considers the problem of calculating a quality score for each match to the query, given that the underlying data are uncertain. As a result, we present a preliminary evaluation of three methods for determining both match quality scores and associated uncertainty bounds, illustrated in the context of an example based on overhead imagery data.

  3. Alleviating Search Uncertainty through Concept Associations: Automatic Indexing, Co-Occurrence Analysis, and Parallel Computing.

    ERIC Educational Resources Information Center

    Chen, Hsinchun; Martinez, Joanne; Kirchhoff, Amy; Ng, Tobun D.; Schatz, Bruce R.

    1998-01-01

    Grounded on object filtering, automatic indexing, and co-occurrence analysis, an experiment was performed using a parallel supercomputer to analyze over 400,000 abstracts in an INSPEC computer engineering collection. A user evaluation revealed that system-generated thesauri were better than the human-generated INSPEC subject thesaurus in concept…

  4. Can megavoltage computed tomography reduce proton range uncertainties in treatment plans for patients with large metal implants?

    NASA Astrophysics Data System (ADS)

    Newhauser, Wayne D.; Giebeler, Annelise; Langen, Katja M.; Mirkovic, Dragan; Mohan, Radhe

    2008-05-01

    Treatment planning calculations for proton therapy require an accurate knowledge of radiological path length, or range, to the distal edge of the target volume. In most cases, the range may be calculated with sufficient accuracy using kilovoltage (kV) computed tomography (CT) images. However, metal implants such as hip prostheses can cause severe streak artifacts that lead to large uncertainties in proton range. The purposes of this study were to quantify streak-related range errors and to determine if they could be avoided by using artifact-free megavoltage (MV) CT images in treatment planning. Proton treatment plans were prepared for a rigid, heterogeneous phantom and for a prostate cancer patient with a metal hip prosthesis using corrected and uncorrected kVCT images alone, uncorrected MVCT images and a combination of registered MVCT and kVCT images (the hybrid approach). Streak-induced range errors of 5-12 mm were present in the uncorrected kVCT-based patient plan. Correcting the streaks by manually assigning estimated true Hounsfield units improved the range accuracy. In a rigid heterogeneous phantom, the implant-related range uncertainty was estimated at <3 mm for both the corrected kVCT-based plan and the uncorrected MVCT-based plan. The hybrid planning approach yielded the best overall result. In this approach, the kVCT images provided good delineation of soft tissues due to high-contrast resolution, and the streak-free MVCT images provided smaller range uncertainties because they did not require artifact correction.

  5. Reducing annotation cost and uncertainty in computer-aided diagnosis through selective iterative classification

    NASA Astrophysics Data System (ADS)

    Riely, Amelia; Sablan, Kyle; Xiaotao, Thomas; Furst, Jacob; Raicu, Daniela

    2015-03-01

Medical imaging technology has always provided radiologists with the opportunity to view and keep records of anatomy of the patient. With the development of machine learning and intelligent computing, these images can be used to create Computer-Aided Diagnosis (CAD) systems, which can assist radiologists in analyzing image data in various ways to provide better health care to patients. This paper looks at increasing accuracy and reducing cost in creating CAD systems, specifically in predicting the malignancy of lung nodules in the Lung Image Database Consortium (LIDC). Much of the cost in creating an accurate CAD system stems from the need for multiple radiologist diagnoses or annotations of each image, since there is rarely a ground-truth diagnosis and even different radiologists' diagnoses of the same nodule often disagree. To resolve this issue, this paper outlines a method of selective iterative classification that predicts lung nodule malignancy by using multiple radiologist diagnoses only for cases that can benefit from them. Our method achieved 81% accuracy at only 46% of the cost of the method that indiscriminately used all annotations, which achieved a lower accuracy of 70% at greater cost.

  6. Numerical study of premixed HCCI engine combustion and its sensitivity to computational mesh and model uncertainties

    NASA Astrophysics Data System (ADS)

    Kong, Song-Charng; Reitz, Rolf D.

    2003-06-01

This study used a numerical model to investigate the combustion process in a premixed iso-octane homogeneous charge compression ignition (HCCI) engine. The engine was a supercharged Cummins C engine operated under HCCI conditions. The CHEMKIN code was implemented into an updated KIVA-3V code so that the combustion could be modelled using detailed chemistry in the context of engine CFD simulations. The model was able to accurately simulate the ignition timing and combustion phasing for various engine conditions. The unburned hydrocarbon emissions were also well predicted while the carbon monoxide emissions were underpredicted. Model results showed that the majority of unburned hydrocarbon is located in the piston-ring crevice region and the carbon monoxide resides in the vicinity of the cylinder walls. A sensitivity study of the computational grid resolution indicated that the combustion predictions were relatively insensitive to the grid density. However, the piston-ring crevice region needed to be simulated with high resolution to obtain accurate emissions predictions. The model results also indicated that HCCI combustion and emissions are very sensitive to the initial mixture temperature. The computations also show that the carbon monoxide emissions prediction can be significantly improved by modifying a key oxidation reaction rate constant.

  7. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a
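The weighted least-squares objective that such a code minimizes can be sketched in a few lines. This is an illustrative sketch with a toy one-parameter "process model"; the function names and all numbers are invented, not UCODE_2005 code:

```python
import numpy as np

def weighted_sse(params, simulate, observations, weights):
    """Weighted least-squares objective: sum_i w_i * (obs_i - sim_i(params))^2."""
    residuals = observations - simulate(params)
    return float(np.sum(weights * residuals ** 2))

# Toy "process model": three simulated heads, linear in a single parameter.
simulate = lambda p: np.array([1.0, 2.0, 3.0]) * p[0]
obs = np.array([1.1, 1.9, 3.2])
w = np.ones(3)

objective = weighted_sse([1.0], simulate, obs, w)
```

Nonlinear regression then searches for the parameter values that minimize this objective.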

  8. Computationally Efficient Algorithms for Parameter Estimation and Uncertainty Propagation in Numerical Models of Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Townley, Lloyd R.; Wilson, John L.

    1985-12-01

    Finite difference and finite element methods are frequently used to study aquifer flow; however, additional analysis is required when model parameters, and hence predicted heads, are uncertain. Computational algorithms are presented for steady and transient models in which aquifer storage coefficients, transmissivities, distributed inputs, and boundary values may all be simultaneously uncertain. Innovative aspects of these algorithms include a new form of generalized boundary condition; a concise discrete derivation of the adjoint problem for transient models with variable time steps; an efficient technique for calculating the approximate second derivative during line searches in weighted least squares estimation; and a new efficient first-order second-moment algorithm for calculating the covariance of predicted heads due to a large number of uncertain parameter values. The techniques are presented in matrix form, and their efficiency depends on the structure of the sparse matrices that occur repeatedly throughout the calculations. Details of the matrix structures are provided for a two-dimensional linear triangular finite element model.
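The first-order second-moment idea mentioned above, propagating parameter covariance to predicted heads through the sensitivity (Jacobian) matrix, can be sketched as follows; the Jacobian and covariance values below are invented for illustration:

```python
import numpy as np

# First-order second-moment (FOSM) propagation: for heads h(p) ~ h(p0) + J (p - p0),
# the covariance of the predicted heads is Cov_h = J @ Cov_p @ J.T.
J = np.array([[1.0, 0.5],
              [0.2, 1.0]])        # sensitivity of heads w.r.t. parameters
Cov_p = np.diag([0.04, 0.09])     # parameter covariance (uncorrelated here)

Cov_h = J @ Cov_p @ J.T
head_variances = np.diag(Cov_h)   # variances of the predicted heads
```

The efficiency gains described in the abstract come from exploiting sparsity in `J` and in the underlying finite element matrices; the dense toy arrays here only show the algebra.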

  9. Uncertainty in Aspiration Efficiency Estimates from Torso Simplifications in Computational Fluid Dynamics Simulations

    PubMed Central

    Anthony, T. Renée

    2013-01-01

    Computational fluid dynamics (CFD) has been used to report particle inhalability in low velocity freestreams, where realistic faces but simplified, truncated, and cylindrical human torsos were used. When compared to wind tunnel velocity studies, the truncated models were found to underestimate the air’s upward velocity near the humans, raising questions about aspiration estimation. This work compares aspiration efficiencies for particles ranging from 7 to 116 µm using three torso geometries: (i) a simplified truncated cylinder, (ii) a non-truncated cylinder, and (iii) an anthropometrically realistic humanoid body. The primary aim of this work is to (i) quantify the errors introduced by using a simplified geometry and (ii) determine the required level of detail to adequately represent a human form in CFD studies of aspiration efficiency. Fluid simulations used the standard k-epsilon turbulence models, with freestream velocities at 0.1, 0.2, and 0.4 m s−1 and breathing velocities at 1.81 and 12.11 m s−1 to represent at-rest and heavy breathing rates, respectively. Laminar particle trajectory simulations were used to determine the upstream area, also known as the critical area, where particles would be inhaled. These areas were used to compute aspiration efficiencies for facing the wind. Significant differences were found in both vertical velocity estimates and the location of the critical area between the three models. However, differences in aspiration efficiencies between the three forms were <8.8% over all particle sizes, indicating that there is little difference in aspiration efficiency between torso models. PMID:23006817
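The critical-area approach used here to obtain aspiration efficiency amounts to a flux-conservation calculation: particles entering through the upstream critical area at the freestream velocity are the ones inhaled through the mouth at the breathing velocity. The relation and all numbers below are illustrative assumptions, not values from the study:

```python
# Aspiration efficiency from the upstream critical area, by conservation of
# particle flux. All areas (m^2) and velocities (m/s) are hypothetical.
def aspiration_efficiency(critical_area, freestream_v, mouth_area, breathing_v):
    return (critical_area * freestream_v) / (mouth_area * breathing_v)

eff = aspiration_efficiency(critical_area=3e-4, freestream_v=0.2,
                            mouth_area=1.2e-4, breathing_v=1.81)
```

Differences between torso models then show up as differences in the computed critical area, which is why the <8.8% spread in efficiency is the key comparison.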

  10. Uncertainties in radiative transfer computations: consequences on the ocean color products

    NASA Astrophysics Data System (ADS)

    Dilligeard, Eric; Zagolski, Francis; Fischer, Juergen; Santer, Richard P.

    2003-05-01

    Operational MERIS (MEdium Resolution Imaging Spectrometer) level-2 processing uses auxiliary data generated by two radiative transfer tools. These two codes simulate upwelling radiances within a coupled 'Atmosphere-Ocean' system, using different approaches based on the matrix-operator method (MOMO) and the successive orders (SO) technique. An inter-validation of these two radiative transfer codes was performed in order to implement them in the MERIS level-2 processing. MOMO and SO simulations were then conducted on a set of representative test cases. For all test cases, good agreement was observed: the scattering processes are retrieved to within a few tenths of a percent. Nevertheless, some substantial discrepancies occur if polarization is not taken into account, mainly in the Rayleigh scattering computations. A preliminary study indicates that the impact of this code inaccuracy on the retrieval of water-leaving radiances (a level-2 MERIS product) is large, up to 50% in relative difference. Applying the OC2 algorithm, the effect on the retrieved chlorophyll concentration is less than 10%.

  11. Estimation of internal exposure to uranium with uncertainty from urinalysis data using the InDEP computer code.

    PubMed

    Anderson, Jeri L; Apostoaei, A Iulian; Thomas, Brian A

    2013-01-01

    The National Institute for Occupational Safety and Health (NIOSH) is currently studying mortality in a cohort of 6409 workers at a former uranium processing facility. As part of this study, over 220 000 urine samples were used to reconstruct organ doses due to internal exposure to uranium. Most of the available computational programs designed for analysis of bioassay data handle a single case at a time, and thus require a significant outlay of time and resources for the exposure assessment of a large cohort. NIOSH is currently supporting the development of a computer program, InDEP (Internal Dose Evaluation Program), to facilitate internal radiation exposure assessment as part of epidemiological studies of both uranium- and plutonium-exposed cohorts. A novel feature of InDEP is its batch processing capability which allows for the evaluation of multiple study subjects simultaneously. InDEP analyses bioassay data and derives intakes and organ doses with uncertainty estimates using least-squares regression techniques or using the Bayes' Theorem as applied to internal dosimetry (Bayesian method). This paper describes the application of the current version of InDEP to formulate assumptions about the characteristics of exposure at the study facility that were used in a detailed retrospective intake and organ dose assessment of the cohort.

  12. The Personality Trait of Intolerance to Uncertainty Affects Behavior in a Novel Computer-Based Conditioned Place Preference Task.

    PubMed

    Radell, Milen L; Myers, Catherine E; Beck, Kevin D; Moustafa, Ahmed A; Allen, Michael Todd

    2016-01-01

    Recent work has found that personality factors that confer vulnerability to addiction can also affect learning and economic decision making. One personality trait which has been implicated in vulnerability to addiction is intolerance to uncertainty (IU), i.e., a preference for familiar over unknown (possibly better) options. In animals, the motivation to obtain drugs is often assessed through conditioned place preference (CPP), which compares preference for contexts where drug reward was previously received. It is an open question whether participants with high IU also show heightened preference for previously rewarded contexts. To address this question, we developed a novel computer-based CPP task for humans in which participants guide an avatar through a paradigm in which one room contains frequent reward (i.e., rich) and one contains less frequent reward (i.e., poor). Following exposure to both contexts, subjects are assessed for preference to enter the previously rich and previously poor room. Individuals with low IU showed little bias to enter the previously rich room first, and instead entered both rooms at about the same rate which may indicate a foraging behavior. By contrast, those with high IU showed a strong bias to enter the previously rich room first. This suggests an increased tendency to chase reward in the intolerant group, consistent with previously observed behavior in opioid-addicted individuals. Thus, the personality factor of high IU may produce a pre-existing cognitive bias that provides a mechanism to promote decision-making processes that increase vulnerability to addiction.

  13. Deterministic uncertainty analysis

    SciTech Connect

    Worley, B.A.

    1987-01-01

    Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig.

  14. The Personality Trait of Intolerance to Uncertainty Affects Behavior in a Novel Computer-Based Conditioned Place Preference Task

    PubMed Central

    Radell, Milen L.; Myers, Catherine E.; Beck, Kevin D.; Moustafa, Ahmed A.; Allen, Michael Todd

    2016-01-01

    Recent work has found that personality factors that confer vulnerability to addiction can also affect learning and economic decision making. One personality trait which has been implicated in vulnerability to addiction is intolerance to uncertainty (IU), i.e., a preference for familiar over unknown (possibly better) options. In animals, the motivation to obtain drugs is often assessed through conditioned place preference (CPP), which compares preference for contexts where drug reward was previously received. It is an open question whether participants with high IU also show heightened preference for previously rewarded contexts. To address this question, we developed a novel computer-based CPP task for humans in which participants guide an avatar through a paradigm in which one room contains frequent reward (i.e., rich) and one contains less frequent reward (i.e., poor). Following exposure to both contexts, subjects are assessed for preference to enter the previously rich and previously poor room. Individuals with low IU showed little bias to enter the previously rich room first, and instead entered both rooms at about the same rate which may indicate a foraging behavior. By contrast, those with high IU showed a strong bias to enter the previously rich room first. This suggests an increased tendency to chase reward in the intolerant group, consistent with previously observed behavior in opioid-addicted individuals. Thus, the personality factor of high IU may produce a pre-existing cognitive bias that provides a mechanism to promote decision-making processes that increase vulnerability to addiction. PMID:27555829

  16. Direct Aerosol Forcing Uncertainty

    DOE Data Explorer

    Mccomiskey, Allison

    2008-01-15

    Understanding sources of uncertainty in aerosol direct radiative forcing (DRF), the difference in a given radiative flux component with and without aerosol, is essential to quantifying changes in Earth's radiation budget. We examine the uncertainty in DRF due to measurement uncertainty in the quantities on which it depends: aerosol optical depth, single scattering albedo, asymmetry parameter, solar geometry, and surface albedo. Direct radiative forcing at the top of the atmosphere and at the surface, as well as sensitivities, the changes in DRF in response to unit changes in individual aerosol or surface properties, are calculated at three locations representing distinct aerosol types and radiative environments. The uncertainty in DRF associated with a given property is computed as the product of the sensitivity and the typical measurement uncertainty in the respective aerosol or surface property. Sensitivity and uncertainty values permit estimation of the total uncertainty in calculated DRF and identification of the properties that most limit accuracy in estimating forcing. Total uncertainties in modeled local diurnally averaged forcing range from 0.2 to 1.3 W m-2 (42 to 20%) depending on location (from tropical to polar sites), solar zenith angle, surface reflectance, aerosol type, and aerosol optical depth. The largest contributor to total uncertainty in DRF is usually single scattering albedo; however, decreasing measurement uncertainties for any property would increase accuracy in DRF. Comparison of two radiative transfer models suggests the contribution of modeling error is small compared to the total uncertainty, although it is comparable to the uncertainty arising from some individual properties.
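The combination described above, each property's contribution being sensitivity times measurement uncertainty, with contributions combined in quadrature, can be sketched as follows; sensitivities and uncertainties are invented for illustration:

```python
import numpy as np

# Per-property DRF uncertainty = sensitivity x measurement uncertainty;
# total uncertainty = root-sum-square of the contributions (illustrative values).
sensitivities = {"aod": 20.0, "ssa": 30.0, "asym": 10.0, "albedo": 5.0}  # W m^-2 per unit
meas_unc      = {"aod": 0.01, "ssa": 0.03, "asym": 0.02, "albedo": 0.02}

contributions = {k: sensitivities[k] * meas_unc[k] for k in sensitivities}
total = np.sqrt(sum(c ** 2 for c in contributions.values()))  # W m^-2
```

Ranking `contributions` identifies the property that most limits accuracy; with these toy numbers it is single scattering albedo, consistent with the abstract's finding.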

  17. Uncertainty and cognitive control.

    PubMed

    Mushtaq, Faisal; Bland, Amy R; Schaefer, Alexandre

    2011-01-01

    A growing trend of neuroimaging, behavioral, and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we review evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) There is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) There is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) The perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the "need for control"; (4) Potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.

  18. THE USE OF COMPUTER MODELING PACKAGES TO ILLUSTRATE UNCERTAINTY IN RISK ASSESSMENTS: AN EASE OF USE AND INTERPRETATION COMPARISON

    EPA Science Inventory

    Consistent improvements in processor speed and computer access have substantially increased the use of computer modeling by experts and non-experts alike. Several new computer modeling packages operating under graphical operating systems (i.e. Microsoft Windows or Macintosh) m...

  19. Uncertainty analysis

    SciTech Connect

    Thomas, R.E.

    1982-03-01

    An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
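The claim that a complete adjoint sensitivity analysis of a linear model Ax = b requires only standard solves can be illustrated concretely: for a scalar response R = c.x, one solve of the adjoint system A^T lam = c yields the sensitivity of R to every entry of b at once. The matrices below are toy values; this is a sketch of the technique, not the report's code:

```python
import numpy as np

# Adjoint sensitivity for A x = b with scalar response R = c . x:
# solving A.T lam = c once gives dR/db_i = lam_i for all i.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 0.0])           # response: first component of x

lam = np.linalg.solve(A.T, c)      # single adjoint solve
x = np.linalg.solve(A, b)

# Cross-check against a direct finite-difference perturbation of b[1].
eps = 1e-6
b_pert = b.copy(); b_pert[1] += eps
x_pert = np.linalg.solve(A, b_pert)
direct = (c @ x_pert - c @ x) / eps
```

One adjoint solve replaces one forward solve per input parameter, which is the efficiency argument made in the abstract.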

  20. Mini-max feedback control as a computational theory of sensorimotor control in the presence of structural uncertainty.

    PubMed

    Ueyama, Yuki

    2014-01-01

    We propose a mini-max feedback control (MMFC) model as a robust approach to human motor control under conditions of uncertain dynamics, such as structural uncertainty. The MMFC model is an expansion of the optimal feedback control (OFC) model. According to this scheme, motor commands are generated to minimize the maximal cost, based on an assumption of worst-case uncertainty, characterized by familiarity with novel dynamics. We simulated linear dynamic systems with different types of force fields (stable and unstable dynamics) and compared the performance of MMFC to that of OFC. MMFC delivered better performance than OFC in terms of stability and the achievement of tasks. Moreover, the gain in positional feedback with the MMFC model in the unstable dynamics was tuned to the direction of instability. It is assumed that the shape modulations of the gain in positional feedback in unstable dynamics played the same role as that played by end-point stiffness observed in human studies. Accordingly, we suggest that MMFC is a plausible model that predicts motor behavior under conditions of uncertain dynamics.

  1. Mini-max feedback control as a computational theory of sensorimotor control in the presence of structural uncertainty

    PubMed Central

    Ueyama, Yuki

    2014-01-01

    We propose a mini-max feedback control (MMFC) model as a robust approach to human motor control under conditions of uncertain dynamics, such as structural uncertainty. The MMFC model is an expansion of the optimal feedback control (OFC) model. According to this scheme, motor commands are generated to minimize the maximal cost, based on an assumption of worst-case uncertainty, characterized by familiarity with novel dynamics. We simulated linear dynamic systems with different types of force fields (stable and unstable dynamics) and compared the performance of MMFC to that of OFC. MMFC delivered better performance than OFC in terms of stability and the achievement of tasks. Moreover, the gain in positional feedback with the MMFC model in the unstable dynamics was tuned to the direction of instability. It is assumed that the shape modulations of the gain in positional feedback in unstable dynamics played the same role as that played by end-point stiffness observed in human studies. Accordingly, we suggest that MMFC is a plausible model that predicts motor behavior under conditions of uncertain dynamics. PMID:25309415

  2. Measurement uncertainty.

    PubMed

    Bartley, David; Lidén, Göran

    2008-08-01

    The reporting of measurement uncertainty has recently undergone a major harmonization whereby characteristics of a measurement method obtained during its establishment and application are combined componentwise. For example, the sometimes-pesky systematic error is included. A bias component of uncertainty can often be easily established as the uncertainty in the bias. However, beyond simply arriving at a value for uncertainty, meaning can sometimes be given to this uncertainty in terms of prediction confidence in uncertainty-based intervals covering what is to be measured. To this end, a link between the concepts of accuracy and uncertainty is established through a simple yet accurate approximation to a random variable known as the non-central Student's t-distribution. "Without a measureless and perpetual uncertainty, the drama of human life would be destroyed." (Winston Churchill)
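The componentwise combination of a bias component with a random component, as described above, can be sketched in its simplest form. The numbers are illustrative, and a coverage factor of k = 2 (roughly 95% coverage for normal errors) is assumed:

```python
import math

# Componentwise combination of uncertainty: the bias component enters
# in quadrature alongside the random (repeatability) component.
u_bias = 0.03      # uncertainty in the (corrected) bias
u_random = 0.04    # random component from repeated measurements
u_combined = math.sqrt(u_bias ** 2 + u_random ** 2)
U_expanded = 2.0 * u_combined   # expanded uncertainty, coverage factor k = 2
```

The non-central-t refinement mentioned in the abstract replaces the fixed factor of 2 with a value that accounts for how well each component is known.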

  3. Exploring the Impact of Nuclear Data Uncertainties in Ultra-high Resolution Gamma Spectroscopy for Isotopic Analysis Using Approximate Bayesian Computation

    SciTech Connect

    Burr, T.; Hoover, A.; Croft, S.; Rabin, M.

    2015-01-15

    High purity germanium (HPGe) currently provides the highest readily available resolution gamma detection for a broad range of radiation measurements, but microcalorimetry is a developing option that has considerably higher resolution even than HPGe. Superior microcalorimetry resolution offers the potential to better distinguish closely spaced X-rays and gamma-rays, a common challenge for the low energy spectral region near 100 keV from special nuclear materials, and the higher signal-to-background ratio also confers an advantage in detection limit. As microcalorimetry continues to develop, it is timely to assess the impact of uncertainties in detector and item response functions and in basic nuclear data, such as branching ratios and half-lives, used to interpret spectra in terms of the contributory radioactive isotopes. We illustrate that a new inference option known as approximate Bayesian computation (ABC) is effective and convenient both for isotopic inference and for uncertainty quantification for microcalorimetry. The ABC approach opens a pathway to new and more powerful implementations for practical applications than currently available.
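A minimal rejection-ABC loop of the kind referred to here might look as follows. The two-channel forward model, the tolerance, and all numbers are invented for illustration and have nothing to do with actual nuclear data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rejection ABC sketch: infer an isotope fraction from a two-channel count
# spectrum. Draw parameters from the prior, simulate a spectrum, and keep
# draws whose simulated spectrum lies within a tolerance of the observation.
def simulate(frac, n=1000):
    p = np.array([0.3 + 0.4 * frac, 0.7 - 0.4 * frac])  # toy channel probabilities
    return rng.multinomial(n, p)

observed = simulate(0.5)                 # pretend "measured" spectrum
prior = rng.uniform(0.0, 1.0, size=20000)
accepted = [f for f in prior
            if np.abs(simulate(f) - observed).sum() <= 40]  # tolerance
posterior_mean = float(np.mean(accepted))
```

The accepted sample approximates the posterior, so uncertainty quantification comes for free as its spread; tightening the tolerance trades acceptance rate for accuracy.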

  4. Measurement Uncertainty

    NASA Astrophysics Data System (ADS)

    Koch, Michael

    Measurement uncertainty is one of the key issues in quality assurance. It became increasingly important for analytical chemistry laboratories with accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for deciding whether a measurement result is fit for purpose. It also helps in deciding whether a specification limit is exceeded or not. Estimating measurement uncertainty is often not trivial; several strategies have been developed for this purpose and are briefly described in this chapter. In addition, the different possibilities for taking uncertainty into account in compliance assessment are explained.
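Taking uncertainty into account in compliance assessment, as mentioned above, often reduces to an interval check against the specification limit: only results whose entire uncertainty interval lies on one side of the limit support a firm decision. A sketch with invented numbers follows; real decision rules are set by the applicable standard:

```python
# Compliance decision with expanded uncertainty U (illustrative sketch):
# the result proves exceedance only if result - U is still above the limit,
# and proves compliance only if result + U is still below it.
def assess(result, expanded_u, limit):
    if result - expanded_u > limit:
        return "non-compliant"
    if result + expanded_u < limit:
        return "compliant"
    return "inconclusive"

decision = assess(result=52.0, expanded_u=4.0, limit=50.0)
```

Here the result exceeds the limit numerically, but the uncertainty interval straddles the limit, so no firm decision is possible.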

  5. An alternative approach to deal with geometric uncertainties in computer analysis of two-dimensional electrophoresis gels.

    PubMed

    Kriegel, K; Seefeldt, I; Hoffmann, F; Schultz, C; Wenk, C; Regitz-Zagrosek, V; Oswald, H; Fleck, E

    2000-07-01

    With the growing importance of proteomics in the biomedical and pharmaceutical sciences, a need has emerged for computing tools capable of digitally visualizing and analyzing protein spot patterns within two-dimensional electrophoresis (2-DE) gels. Matching programs need to meet requirements such as interlaboratory comparison and the comparison of samples from different origins. For such research purposes, we have developed the CAROL system, which implements new algorithms for spot detection and matching that enable researchers to take a different approach to protein spot identification and comparison. The present short communication discusses how the system deals with the uncertain geometric spot information that arises from streaks and complex spot regions, and how this can be handled in the matching procedure.

  6. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    SciTech Connect

    Kersaudy, Pierric; Sudret, Bruno; Varsier, Nadège; Picon, Odile; Wiart, Joe

    2015-04-01

    In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can request several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performances of the LARS-Kriging-PC are compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performances than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.

  7. Conundrums with uncertainty factors.

    PubMed

    Cooke, Roger

    2010-03-01

    The practice of uncertainty factors as applied to noncancer endpoints in the IRIS database harkens back to traditional safety factors. In the era before risk quantification, these were used to build in a "margin of safety." As risk quantification takes hold, the safety factor methods yield to quantitative risk calculations to guarantee safety. Many authors believe that uncertainty factors can be given a probabilistic interpretation as ratios of response rates, and that the reference values computed according to the IRIS methodology can thus be converted to random variables whose distributions can be computed with Monte Carlo methods, based on the distributions of the uncertainty factors. Recent proposals from the National Research Council echo this view. Based on probabilistic arguments, several authors claim that the current practice of uncertainty factors is overprotective. When interpreted probabilistically, uncertainty factors entail very strong assumptions on the underlying response rates. For example, the factor for extrapolating from animal to human is the same whether the dosage is chronic or subchronic. Together with independence assumptions, these assumptions entail that the covariance matrix of the logged response rates is singular. In other words, the accumulated assumptions entail a log-linear dependence between the response rates. This in turn means that any uncertainty analysis based on these assumptions is ill-conditioned; it effectively computes uncertainty conditional on a set of zero probability. The practice of uncertainty factors is due for a thorough review. Two directions are briefly sketched, one based on standard regression models, and one based on nonparametric continuous Bayesian belief nets. PMID:20030767
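The probabilistic reading of uncertainty factors discussed here can be sketched with a small Monte Carlo calculation: each factor becomes a random ratio of response rates, and the product of the factors is sampled rather than multiplied as fixed tens. The lognormal distributions and their parameters below are illustrative assumptions, not IRIS values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Treat two uncertainty factors as lognormal ratios of response rates and
# combine them by Monte Carlo instead of multiplying fixed factors of 10.
n = 100_000
uf_animal_to_human   = rng.lognormal(mean=np.log(3.0), sigma=0.5, size=n)
uf_human_variability = rng.lognormal(mean=np.log(3.0), sigma=0.5, size=n)

combined = uf_animal_to_human * uf_human_variability
median_uf = np.median(combined)          # ~9 for these toy parameters
upper_95 = np.percentile(combined, 95)   # well below the deterministic 10 x 10
```

That the 95th percentile of the sampled product falls well below the deterministic 100 is the kind of result behind the claim that fixed factors are overprotective, and it rests entirely on the distributional and independence assumptions the abstract criticizes.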

  8. PIV uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Sciacchitano, Andrea; Wieneke, Bernhard

    2016-08-01

    This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It is shown that the uncertainty of vorticity and velocity divergence requires the knowledge of the spatial correlation between the error of the x and y particle image displacement, which depends upon the measurement spatial resolution. The uncertainty of statistical quantities is often dominated by the random uncertainty due to the finite sample size and decreases with the square root of the effective number of independent samples. Monte Carlo simulations are conducted to assess the accuracy of the uncertainty propagation formulae. Furthermore, three experimental assessments are carried out. In the first experiment, a turntable is used to simulate a rigid rotation flow field. The estimated uncertainty of the vorticity is compared with the actual vorticity error root-mean-square, with differences between the two quantities within 5-10% for different interrogation window sizes and overlap factors. A turbulent jet flow is investigated in the second experimental assessment. The reference velocity, which is used to compute the reference value of the instantaneous flow properties of interest, is obtained with an auxiliary PIV system, which features a higher dynamic range than the measurement system. Finally, the uncertainty quantification of statistical quantities is assessed via PIV measurements in a cavity flow. The comparison between estimated uncertainty and actual error demonstrates the accuracy of the proposed uncertainty propagation methodology.
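The statement that the random uncertainty of statistical quantities decreases with the square root of the effective number of independent samples can be sketched as follows; the AR(1)-style correction for correlated snapshots is an illustrative assumption, not the paper's formula:

```python
import numpy as np

# Random uncertainty of a time-averaged PIV velocity: scales as 1/sqrt(N_eff),
# where N_eff < N when consecutive snapshots are correlated (AR(1)-like model
# with snapshot-to-snapshot correlation rho, used here for illustration).
def mean_uncertainty(std, n_samples, rho=0.0):
    n_eff = n_samples * (1.0 - rho) / (1.0 + rho)
    return std / np.sqrt(n_eff)

u_uncorrelated = mean_uncertainty(std=0.1, n_samples=1000)
u_correlated = mean_uncertainty(std=0.1, n_samples=1000, rho=0.5)
```

Correlation between samples shrinks the effective sample size, so the correlated estimate is larger; instantaneous quantities such as vorticity additionally need the spatial error correlation discussed in the abstract.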

  9. Quantification of Emission Factor Uncertainty

    EPA Science Inventory

    Emissions factors are important for estimating and characterizing emissions from sources of air pollution. There is no quantitative indication of uncertainty for these emission factors, most factors do not have an adequate data set to compute uncertainty, and it is very difficult...

  10. Visualization of Uncertainty

    NASA Astrophysics Data System (ADS)

    Jones, P. W.; Strelitz, R. A.

    2012-12-01

    The output of a simulation is best comprehended through the agency and methods of visualization, but a vital component of good science is knowledge of uncertainty. While great strides have been made in the quantification of uncertainty, especially in simulation, there is still a notable gap: there is no widely accepted means of simultaneously viewing the data and the associated uncertainty in one pane. Visualization saturates the screen, using the full range of color, shadow, opacity and tricks of perspective to display even a single variable; there is no room left in the visualization expert's repertoire for uncertainty. We present a method of visualizing uncertainty without sacrificing the clarity and power of the underlying visualization, one that works as well in 3-D and time-varying visualizations as it does in 2-D. At its heart, it relies on a principal tenet of continuum mechanics, replacing the notion of value at a point with a more diffuse notion of density as a measure of content in a region. First, the uncertainties calculated or tabulated at each point are transformed into a piecewise continuous field of uncertainty density. We next compute a weighted Voronoi tessellation of a user-specified number N of convex polygonal/polyhedral cells such that each cell contains the same amount of uncertainty as measured by this density; the problem thus devolves into a minimization. Computing such a spatial decomposition is O(N*N), and it can be computed iteratively, making updates over time both easy and fast. The polygonal mesh does not interfere with the visualization of the data and can easily be toggled on or off. In this representation, a small cell implies a great concentration of uncertainty, and conversely. The content-weighted polygons are identical to the cartograms familiar to the information-visualization community for depicting quantities such as voting results per state. 
Furthermore, one can dispense with the mesh or edges entirely, replacing them with symbols or glyphs.
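
    The equal-content decomposition described above can be sketched with density-weighted sampling plus a Lloyd-style iteration. This is a minimal illustration, not the authors' implementation; the function name and parameters are invented for the example:

```python
import numpy as np

def equal_content_cells(density, n_cells=16, n_samples=20000, iters=50, seed=0):
    """Approximate a tessellation whose cells hold roughly equal uncertainty content.

    density: 2-D array of nonnegative uncertainty densities on a grid.
    Sketch: sample pixels with probability proportional to density, then run
    Lloyd-style k-means on the samples; high-density regions attract more
    sites, so each cell ends up with roughly the same integrated uncertainty.
    Returns the cell sites (generators) and the per-pixel cell labels.
    """
    rng = np.random.default_rng(seed)
    h, w = density.shape
    p = density.ravel() / density.sum()
    idx = rng.choice(h * w, size=n_samples, p=p)
    pts = np.column_stack([idx // w, idx % w]).astype(float)
    sites = pts[rng.choice(n_samples, n_cells, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest site, then recentre the sites
        d2 = ((pts[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
        lab = d2.argmin(1)
        for k in range(n_cells):
            sel = pts[lab == k]
            if len(sel):
                sites[k] = sel.mean(0)
    # label every pixel by its nearest site (the Voronoi cell it falls in)
    yy, xx = np.mgrid[0:h, 0:w]
    grid = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
    d2 = ((grid[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
    return sites, d2.argmin(1).reshape(h, w)
```

    Because high-density regions attract more sites, small cells mark concentrations of uncertainty, which matches the cartogram-like reading described in the abstract.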

  11. Teaching Uncertainties

    ERIC Educational Resources Information Center

    Duerdoth, Ian

    2009-01-01

    The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…

  12. Calibration Under Uncertainty.

    SciTech Connect

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
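
    A toy illustration of the CUU idea, assuming a hypothetical one-parameter linear model: the likelihood variance carries both a data-error and a model-error term, so the calibrated parameter comes with a posterior distribution rather than a single best-fit value. This is a sketch of the general Bayesian approach, not the authors' method:

```python
import numpy as np

def posterior_over_a(x, y_obs, a_grid, sigma_data=0.1, sigma_model=0.2):
    # Gaussian likelihood whose variance combines data error AND model error,
    # so the model is not treated as the exact "true" representation of reality
    var = sigma_data**2 + sigma_model**2
    loglik = np.array([-0.5 * np.sum((y_obs - a * x) ** 2) / var for a in a_grid])
    post = np.exp(loglik - loglik.max())   # flat prior over the grid
    return post / post.sum()

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)
y_obs = 2.0 * x + rng.normal(0.0, 0.1, x.size)   # synthetic experiment, true a = 2
a_grid = np.linspace(1.0, 3.0, 201)
post = posterior_over_a(x, y_obs, a_grid)
a_mean = float(np.sum(a_grid * post))            # posterior mean of the parameter
```

    Inflating `sigma_model` widens the posterior: acknowledging model error directly weakens the confidence one may place in the calibrated parameter.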

  13. Quantification of uncertainty in geochemical reactions

    NASA Astrophysics Data System (ADS)

    Srinivasan, Gowri; Tartakovsky, Daniel M.; Robinson, Bruce A.; Aceves, Alejandro B.

    2007-12-01

    Predictions of reactive transport in the subsurface are routinely compromised by both model (structural) and parametric uncertainties. We present a set of computational tools for quantifying these two types of uncertainties. The model uncertainty is resolved at the molecular scale where epistemic uncertainty incorporates aleatory uncertainty. The parametric uncertainty is resolved at both molecular and continuum (Darcy) scales. We use the proposed approach to quantify uncertainty in modeling the sorption of neptunium through a competitive ion exchange. This radionuclide is of major concern for various high-level waste storage projects because of its relatively long half-life and its high-solubility and low-sorption properties. We demonstrate how parametric and model uncertainties affect one's ability to estimate the distribution coefficient. The uncertainty quantification tools yield complete probabilistic descriptions of key parameters affecting the fate and migration of neptunium in the subsurface rather than the lower statistical moments. This is important, since these distributions are highly skewed.

  14. Uncertainty-induced quantum nonlocality

    NASA Astrophysics Data System (ADS)

    Wu, Shao-xiong; Zhang, Jun; Yu, Chang-shui; Song, He-shan

    2014-01-01

    Based on the skew information, we present a quantity, uncertainty-induced quantum nonlocality (UIN), to measure the quantum correlation. It can be considered as an updated version of the original measurement-induced nonlocality (MIN), preserving the good computability but eliminating the non-contractivity problem. For 2×d-dimensional states, it is shown that UIN can be given by a closed form. In addition, we also investigate the maximal uncertainty-induced nonlocality.

  15. Uncertainty in Air Quality Modeling.

    NASA Astrophysics Data System (ADS)

    Fox, Douglas G.

    1984-01-01

    Under the direction of the AMS Steering Committee for the EPA Cooperative Agreement on Air Quality Modeling, a small group of scientists convened to consider the question of uncertainty in air quality modeling. Because the group was particularly concerned with the regulatory use of models, its discussion focused on modeling tall stack, point source emissions. The group agreed that air quality model results should be viewed as containing both reducible error and inherent uncertainty. Reducible error results from improper or inadequate meteorological and air quality data inputs, and from inadequacies in the models. Inherent uncertainty results from the basic stochastic nature of the turbulent atmospheric motions that are responsible for transport and diffusion of released materials. Modelers should acknowledge that all their predictions to date contain some associated uncertainty and strive also to quantify uncertainty. How can the uncertainty be quantified? There was no consensus from the group as to precisely how uncertainty should be calculated. One subgroup, which addressed statistical procedures, suggested that uncertainty information could be obtained from comparisons of observations and predictions. Following recommendations from a previous AMS workshop on performance evaluation (Fox, 1981), the subgroup suggested construction of probability distribution functions from the differences between observations and predictions. Further, they recommended that relatively new computer-intensive statistical procedures be considered to improve the quality of uncertainty estimates for the extreme value statistics of interest in regulatory applications. A second subgroup, which addressed the basic nature of uncertainty in a stochastic system, also recommended that uncertainty be quantified by consideration of the differences between observations and predictions. 
They suggested that the average of the difference squared was appropriate to isolate the inherent uncertainty that

  16. Uncertainty analysis of minimum vessel liquid inventory during a small-break LOCA in a B&W Plant: An application of the CSAU methodology using the RELAP5/MOD3 computer code

    SciTech Connect

    Ortiz, M G; Ghan, L S

    1992-12-01

    The Nuclear Regulatory Commission (NRC) revised the emergency core cooling system licensing rule to allow the use of best-estimate computer codes, provided the uncertainties of the calculations are quantified and used in the licensing and regulation process. The NRC developed a generic methodology called Code Scaling, Applicability, and Uncertainty (CSAU) to evaluate best-estimate code uncertainties. The objective of this work was to adapt and demonstrate the CSAU methodology for a small-break loss-of-coolant accident (SBLOCA) in a Pressurized Water Reactor of Babcock & Wilcox Company lowered-loop design, using RELAP5/MOD3 as the simulation tool. The CSAU methodology was successfully demonstrated for the new set of variants defined in this project (scenario, plant design, code). However, the robustness of the reactor design to this SBLOCA scenario limits the applicability of the specific results to other plants or scenarios. Several aspects of the code were not exercised because the conditions of the transient never became severe enough. The plant operator proved to be a determining factor in the course of the transient scenario, and steps were taken to include the operator in the model, simulation, and analyses.

  17. Ozone Uncertainties Study Algorithm (OUSA)

    NASA Technical Reports Server (NTRS)

    Bahethi, O. P.

    1982-01-01

    An algorithm to carry out sensitivity, uncertainty, and overall imprecision studies for a set of input parameters to a one-dimensional steady-state ozone photochemistry model is described. This algorithm can be used to evaluate steady-state perturbations due to point-source or distributed ejection of H2O, ClX, and NOx, as well as variation of the incident solar flux. The algorithm is operational on the IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).

  18. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  19. Uncertainty of empirical correlation equations

    NASA Astrophysics Data System (ADS)

    Feistel, R.; Lovell-Smith, J. W.; Saunders, P.; Seitz, S.

    2016-08-01

    The International Association for the Properties of Water and Steam (IAPWS) has published a set of empirical reference equations of state, forming the basis of the 2010 Thermodynamic Equation of Seawater (TEOS-10), from which all thermodynamic properties of seawater, ice, and humid air can be derived in a thermodynamically consistent manner. For each of the equations of state, the parameters have been found by simultaneously fitting equations for a range of different derived quantities using large sets of measurements of these quantities. In some cases, uncertainties in these fitted equations have been assigned based on the uncertainties of the measurement results. However, because uncertainties in the parameter values have not been determined, it is not possible to estimate the uncertainty in many of the useful quantities that can be calculated using the parameters. In this paper we demonstrate how the method of generalised least squares (GLS), in which the covariance of the input data is propagated into the values calculated by the fitted equation, and in particular into the covariance matrix of the fitted parameters, can be applied to one of the TEOS-10 equations of state, namely IAPWS-95 for fluid pure water. Using the calculated parameter covariance matrix, we provide some preliminary estimates of the uncertainties in derived quantities, namely the second and third virial coefficients for water. We recommend further investigation of the GLS method for use as a standard method for calculating and propagating the uncertainties of values computed from empirical equations.
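
    The GLS mechanics can be illustrated on a small synthetic fit (a quadratic stand-in, not IAPWS-95): the covariance of the input data is propagated into the covariance matrix of the fitted parameters, which is then propagated into a derived quantity via its gradient. All numbers here are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
X = np.column_stack([np.ones_like(x), x, x**2])       # design matrix for y = b0 + b1 x + b2 x^2
# tridiagonal data covariance: correlated neighbouring measurements
S = 0.05**2 * (np.eye(30)
               + 0.5 * np.diag(np.ones(29), 1)
               + 0.5 * np.diag(np.ones(29), -1))
b_true = np.array([1.0, -2.0, 0.5])
y = X @ b_true + rng.multivariate_normal(np.zeros(30), S)

Sinv = np.linalg.inv(S)
cov_b = np.linalg.inv(X.T @ Sinv @ X)                 # parameter covariance matrix
b_hat = cov_b @ X.T @ Sinv @ y                        # GLS parameter estimate

# derived quantity: the fitted curve evaluated at x = 2 (an extrapolation)
grad = np.array([1.0, 2.0, 4.0])                      # gradient dg/db
g = float(grad @ b_hat)
var_g = float(grad @ cov_b @ grad)                    # propagated variance of g
```

    The key point is the middle step: without `cov_b`, the uncertainty of any quantity computed from the fitted parameters (such as the virial coefficients in the abstract) cannot be estimated at all.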

  20. Uncertainty and Engagement with Learning Games

    ERIC Educational Resources Information Center

    Howard-Jones, Paul A.; Demetriou, Skevi

    2009-01-01

    Uncertainty may be an important component of the motivation provided by learning games, especially when associated with gaming rather than learning. Three studies are reported that explore the influence of gaming uncertainty on engagement with computer-based learning games. In the first study, children (10-11 years) played a simple maths quiz.…

  1. Communication and Uncertainty Management.

    ERIC Educational Resources Information Center

    Brashers, Dale E.

    2001-01-01

    Suggests the fundamental challenge for refining theories of communication and uncertainty is to abandon the assumption that uncertainty will produce anxiety. Outlines and extends a theory of uncertainty management and reviews current theory and research. Concludes that people want to reduce uncertainty because it is threatening, but uncertainty…

  2. Dyke leakage localization and hydraulic permeability estimation through self-potential and hydro-acoustic measurements: Self-potential 'abacus' diagram for hydraulic permeability estimation and uncertainty computation

    NASA Astrophysics Data System (ADS)

    Bolève, A.; Vandemeulebrouck, J.; Grangeon, J.

    2012-11-01

    In the present study, we propose the combination of two geophysical techniques, which we have applied to a dyke located in southeastern France that has a visible downstream flood area: the self-potential (SP) and hydro-acoustic methods. These methods are sensitive to two different types of signals: electric signals and water-soil pressure disturbances, respectively. The advantages of the SP technique lie in the high rate of data acquisition, which allows assessment of long dykes, and direct diagnosis in terms of leakage area delimitation and quantification. Coupled with punctual hydro-acoustic cartography, a leakage position can be precisely located, therefore allowing specific remediation decisions with regard to the results of the geophysical investigation. Here, the precise localization of leakage from an earth dyke has been identified using SP and hydro-acoustic signals, with the permeability of the preferential fluid flow area estimated by forward SP modeling. Moreover, we propose a general 'abacus' diagram for the estimation of hydraulic permeability of dyke leakage according to the magnitude of over water SP anomalies and the associated uncertainty.

  3. Picturing Data With Uncertainty

    NASA Technical Reports Server (NTRS)

    Kao, David; Love, Alison; Dungan, Jennifer L.; Pang, Alex

    2004-01-01

    NASA is in the business of creating maps for scientific purposes to represent important biophysical or geophysical quantities over space and time. For example, maps of surface temperature over the globe tell scientists where and when the Earth is heating up; regional maps of the greenness of vegetation tell scientists where and when plants are photosynthesizing. There is always uncertainty associated with each value in any such map due to various factors. When uncertainty is fully modeled, instead of a single value at each map location, there is a distribution expressing a set of possible outcomes at each location. We consider such distribution data as multi-valued data since it consists of a collection of values about a single variable. Thus, multi-valued data represent both the map and its uncertainty. We have been working on ways to visualize spatial multi-valued data sets effectively for fields with regularly spaced units or grid cells such as those in NASA's Earth science applications. A new way to display distributions at multiple grid locations is to project the distributions from an individual row, column or other user-selectable straight transect from the 2D domain. First, at each grid cell in a given slice (row, column or transect), we compute a smooth density estimate from the underlying data. Such a density estimate for the probability density function (PDF) is generally more useful than a histogram, which is a classic density estimate. Then, the collection of PDFs along a given slice are presented vertically above the slice and form a wall. To minimize occlusion of intersecting slices, the corresponding walls are positioned at the far edges of the boundary. The PDF wall depicts the shapes of the distributions very clearly since peaks represent the modes (or bumps) in the PDFs. We've defined roughness as the number of peaks in the distribution. Roughness is another useful summary for multimodal distributions. The uncertainty of the multi
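
    A minimal sketch of the PDF-wall ingredients, assuming a Gaussian kernel with fixed bandwidth (the abstract does not specify its estimator): one smooth density estimate per grid cell of a slice, plus the peak-counting "roughness" summary:

```python
import numpy as np

def pdf_wall(samples_along_slice, grid, bandwidth=0.3):
    """Smooth density estimates, one PDF per grid cell of a slice.

    samples_along_slice: (n_cells, n_outcomes) multi-valued data.
    Returns an (n_cells, len(grid)) array of PDF values; stacked vertically
    above the slice, these rows form the "wall". Gaussian kernel, fixed bandwidth.
    """
    grid = np.asarray(grid)
    s = np.asarray(samples_along_slice)[:, :, None]      # cells x outcomes x 1
    z = (grid[None, None, :] - s) / bandwidth
    k = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return k.mean(axis=1) / bandwidth                    # average the kernels

def roughness(pdf_row):
    """Number of interior peaks (modes) in one estimated PDF."""
    p = np.asarray(pdf_row)
    return int(np.sum((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])))
```

    A bimodal distribution at some grid cell shows up as a two-peaked profile in that cell's wall segment, and its roughness value flags it as multimodal.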

  4. Uncertainty as knowledge

    PubMed Central

    Lewandowsky, Stephan; Ballard, Timothy; Pancost, Richard D.

    2015-01-01

    This issue of Philosophical Transactions examines the relationship between scientific uncertainty about climate change and knowledge. Uncertainty is an inherent feature of the climate system. Considerable effort has therefore been devoted to understanding how to effectively respond to a changing, yet uncertain climate. Politicians and the public often appeal to uncertainty as an argument to delay mitigative action. We argue that the appropriate response to uncertainty is exactly the opposite: uncertainty provides an impetus to be concerned about climate change, because greater uncertainty increases the risks associated with climate change. We therefore suggest that uncertainty can be a source of actionable knowledge. We survey the papers in this issue, which address the relationship between uncertainty and knowledge from physical, economic and social perspectives. We also summarize the pervasive psychological effects of uncertainty, some of which may militate against a meaningful response to climate change, and we provide pointers to how those difficulties may be ameliorated. PMID:26460108

  5. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    SciTech Connect

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis; Ginzburg, Lev; Ferson, Scott; Hajagos, Janos

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
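
    The flavor of interval statistics is easy to illustrate for statistics that are monotone in every observation, such as the mean and the percentiles, which can be computed endpoint-wise (variance and other statistics require more care, as the report discusses). A small sketch with invented data:

```python
import numpy as np

# Each measurement is an interval [lo, hi] rather than a point estimate.
# For statistics monotone in every observation, the statistic of an
# interval data set is itself an interval, obtained endpoint-wise.
def interval_mean(los, his):
    return float(np.mean(los)), float(np.mean(his))

def interval_median(los, his):
    return float(np.median(los)), float(np.median(his))

los = np.array([1.0, 2.0, 3.0, 4.0])   # lower endpoints (invented)
his = np.array([1.5, 2.5, 3.8, 4.2])   # upper endpoints (invented)
m = interval_mean(los, his)            # approximately (2.5, 3.0)
```

    The width of the resulting interval shows directly how measurement imprecision, not just sample size, limits what can be concluded from the data.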

  6. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.

  7. Credible Software and Simulation Uncertainty

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B.; Nixon, David (Technical Monitor)

    1998-01-01

    The utility of software primarily depends on its reliability and performance, whereas its significance depends solely on its credibility for intended use. The credibility of simulations confirms the credibility of software. The level of veracity and the level of validity of simulations determine the degree of credibility of simulations. The process of assessing this credibility in fields such as computational mechanics (CM) differs from that followed by the Defense Modeling and Simulation Office in operations research. Verification and validation (V&V) of CM simulations is not the same as V&V of CM software. Uncertainty is the measure of simulation credibility. Designers who use software are concerned with management of simulation uncertainty. Terminology and concepts are presented with a few examples from computational fluid dynamics.

  8. Estimating uncertainties in complex joint inverse problems

    NASA Astrophysics Data System (ADS)

    Afonso, Juan Carlos

    2016-04-01

    Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should be always conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related

  9. Fission Spectrum Related Uncertainties

    SciTech Connect

    G. Aliberti; I. Kodeli; G. Palmiotti; M. Salvatores

    2007-10-01

    The paper presents a preliminary uncertainty analysis related to potential uncertainties on the fission spectrum data. Consistent results are shown for a reference fast reactor design configuration and for experimental thermal configurations. However the results obtained indicate the need for further analysis, in particular in terms of fission spectrum uncertainty data assessment.

  10. Uncertainty Propagation for Terrestrial Mobile Laser Scanner

    NASA Astrophysics Data System (ADS)

    Mezian, C.; Vallet, Bruno; Soheilian, Bahman; Paparoditis, Nicolas

    2016-06-01

    Laser scanners are used more and more in mobile mapping systems. They provide 3D point clouds that are used for object reconstruction and registration of the system. For both of these applications, uncertainty analysis of the 3D points is of great interest but rarely investigated in the literature. In this paper we present a complete pipeline that takes into account all the sources of uncertainty and allows computation of a covariance matrix per 3D point. The sources of uncertainty are the laser scanner, the calibration of the scanner in relation to the vehicle, and the direct georeferencing system. We assume that all the uncertainties are Gaussian. The variances of the laser scanner measurements (two angles and one distance) are usually specified by the manufacturer. This is also the case for integrated direct georeferencing devices. Residuals of the calibration process were used to estimate the covariance matrix of the 6D transformation between the laser scanner and the vehicle system. Knowing the variances of all sources of uncertainty, we applied uncertainty propagation to compute the variance-covariance matrix of every obtained 3D point. Such an uncertainty analysis enables estimation of the impact of different laser scanners and georeferencing devices on the quality of the obtained 3D points. The obtained uncertainty values are illustrated using error ellipsoids on different datasets.
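
    The propagation step can be sketched for the scanner's raw measurements alone, assuming a spherical parameterisation (an assumption for illustration, not taken from the paper); the full pipeline would chain in the calibration and georeferencing covariances the same way:

```python
import numpy as np

def point_covariance(r, theta, phi, var_r, var_theta, var_phi):
    """First-order Gaussian propagation of (range, two angles) variances
    into a 3x3 covariance per 3D point, via the measurement Jacobian."""
    # x = r cos(phi) cos(theta), y = r cos(phi) sin(theta), z = r sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    J = np.array([
        [cp * ct, -r * cp * st, -r * sp * ct],
        [cp * st,  r * cp * ct, -r * sp * st],
        [sp,       0.0,          r * cp],
    ])  # Jacobian of (x, y, z) w.r.t. (r, theta, phi)
    S = np.diag([var_r, var_theta, var_phi])
    return J @ S @ J.T  # covariance of the 3D point (error ellipsoid)
```

    The eigenvectors and eigenvalues of the returned matrix give the axes of the error ellipsoids mentioned in the abstract; note how the angular variances are scaled by r squared, so distant points are laterally more uncertain.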

  11. Numerical Uncertainty Quantification for Radiation Analysis Tools

    NASA Technical Reports Server (NTRS)

    Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha

    2007-01-01

    Recently a new emphasis has been placed on engineering applications of space radiation analyses and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing, as the number of rays increase the associated uncertainty decreases, but the computational expense increases. Thus, a cost benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is what is the number of thicknesses that is needed to get an accurate result. So convergence testing is performed to quantify the uncertainty associated with interpolating over different shield thickness spatial grids.

  12. Remediation of heterogeneous aquifers subject to uncertainty.

    PubMed

    Ricciardi, K L

    2009-01-01

    Optimal cost pump-and-treat ground water remediation designs for containment of a contaminated aquifer are often developed using deterministic ground water models to predict ground water flow. Uncertainty in the hydraulic conductivity fields used in these models results in remediation designs that are unreliable. The degree to which uncertainty contributes to the reliability of remediation designs, as measured by the characterization of the uncertainty, is shown to differ depending upon the geologic environments of the models. This conclusion is drawn from the optimal design costs for multiple deterministic models generated to represent the uncertainty of four distinct models with different geologic environments. A multi-scenario approach that includes uncertainty in the remediation design, called the deterministic method for optimization subject to uncertainty (DMOU), is applied to these distinct models. It is found that the DMOU is a method for determining a remediation design subject to uncertainty that requires minimal postprocessing effort. Preprocessing, however, is required for the application of the DMOU to unique problems. In the ground water remediation design problems, the orientation of geologic facies with respect to the orientation of flow patterns, pumping well locations, and constraint locations are shown to affect the preprocessing, the solutions to the DMOU problems, and the computational efficiency of the DMOU approach. The results of the DMOU are compared to the results of a statistical analysis of the effects of the uncertainty on remediation designs. This comparison validates the efficacy of the DMOU and illustrates the computational advantages of using the DMOU over statistical measures.

  13. MOUSE (MODULAR ORIENTED UNCERTAINTY SYSTEM): A COMPUTERIZED UNCERTAINTY ANALYSIS SYSTEM. OPERATIONAL MANUAL.

    EPA Science Inventory

    MOUSE (Modular Oriented Uncertainty SystEm) deals with the problem of uncertainties in models that consist of one or more algebraic equations. It was especially designed for use by those with little or no knowledge of computer languages or programming. It is compact (and thus can...

  14. Analysis of the Uncertainty in the Computation of Receiver Functions and Improvement in the Estimation of Receiver, PP and SS functions

    NASA Astrophysics Data System (ADS)

    Huang, X.; Gurrola, H.

    2013-12-01

    methods. All of these methods performed well in terms of stdev, but we chose ARU for its high-quality data and low signal-to-noise ratios (the average S/N ratio for these data was 4%). With real data, we tend to assume that the method with the lowest stdev is the best. But stdev does not account for a systematic bias toward incorrect values. In this case the LSD once again had the lowest stdev in computed amplitudes of Pds phases, but it also had the smallest values. The FID, FWLD and MID tended to produce the largest amplitudes, while the LSD and TID tended toward the lower amplitudes. Considering that in the synthetics all these methods showed bias toward low amplitude, we believe that with real data the methods producing the largest amplitudes will be closest to the 'true values', and that this is a better measure of the better method than a small stdev in amplitude estimates. We will also present results from applying the TID and FID methods to the production of PP and SS precursor functions. When applied to these data, it is possible to moveout-correct the cross-correlation functions before extracting the signal from each PdP (or SdS) phase. As a result, a much cleaner Earth function is produced and the frequency content is significantly improved.

  15. Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.

    2002-01-01

    Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
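
    Monte Carlo simulation, the second of the three methods, is straightforward to sketch with a hypothetical analytic weight model standing in for the aircraft analysis code; the model, variable names, and numbers are invented for the example:

```python
import numpy as np

def weight_model(wing_area, aspect_ratio):
    # stand-in for the aircraft sizing code: a simple linear weight estimate
    return 1000.0 + 12.0 * wing_area + 35.0 * aspect_ratio

rng = np.random.default_rng(42)
n = 10000
area = rng.normal(150.0, 3.0, n)   # uncertain configuration variable 1
ar = rng.normal(8.0, 0.2, n)       # uncertain configuration variable 2
w = weight_model(area, ar)         # one weight per sampled configuration
w_mean, w_std = float(w.mean()), float(w.std())
# analytic check for the linear model: std = sqrt((12*3)^2 + (35*0.2)^2) ~ 36.7
```

    A real design code is simply substituted for `weight_model`; the sampled output distribution then shows how conservative the design must be to satisfy constraints at a required probability level.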

  16. Interpolation Method Needed for Numerical Uncertainty

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. The errors in CFD can be approximated via Richardson's extrapolation, a method based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or another uncertainty method to approximate errors.
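
    For reference, the standard Richardson-extrapolation formulas on three systematically refined grids look like this (a generic sketch of the grid-convergence calculation, not the interpolation study itself):

```python
import math

def richardson(f1, f2, f3, r=2.0):
    """Richardson extrapolation from three grid levels.

    f1, f2, f3: a computed quantity on fine, medium, and coarse grids,
    refined by a constant ratio r. Returns the observed order of
    convergence p, the extrapolated value, and the fine-grid error estimate.
    """
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)  # observed order
    f_exact = f1 + (f1 - f2) / (r**p - 1)                   # extrapolated value
    err_f1 = abs(f1 - f_exact)                              # fine-grid error estimate
    return p, f_exact, err_f1
```

    The denominators show why interpolation matters: the three values must correspond to the same location and quantity on every grid, so any interpolation error between grids feeds directly into the estimated order p and the error estimate.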

  17. [Ethics, empiricism and uncertainty].

    PubMed

    Porz, R; Zimmermann, H; Exadaktylos, A K

    2011-01-01

    Accidents can lead to difficult boundary situations, and such situations often arise in emergency units. The medical team thus often, and inevitably, faces professional uncertainty in its decision-making. It is essential to communicate these uncertainties within the medical team instead of downplaying or overriding existential hurdles in decision-making. Acknowledging uncertainties might lead to alert and prudent decisions. Thus uncertainty can have ethical value in the provision or withdrawal of treatment. It need not be couched in evidence-based arguments, especially as some singular situations of individual tragedy cannot be grasped in terms of evidence-based medicine.

  18. Uncertainty in hydrological signatures

    NASA Astrophysics Data System (ADS)

    McMillan, Hilary; Westerberg, Ida

    2015-04-01

    Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing responses of different catchments to understand catchment organisation and similarity, and for many other modelling and water-management applications. Such information types derived as an index value from observed data are known as hydrological signatures, and can include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), the flow variability, flow duration curve, and runoff ratio. Because the hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitude and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures. The study was made for two data rich catchments, the 50 km2 Mahurangi catchment in New Zealand and the 135 km2 Brue catchment in the UK. For rainfall data the uncertainty sources included point measurement uncertainty, the number of gauges used in calculation of the catchment spatial average, and uncertainties relating to lack of quality control. For flow data the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve. The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty
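
    The Monte Carlo approach to signature uncertainty can be sketched for one simple signature, the runoff ratio. The data values and uncertainty magnitudes below are illustrative assumptions, not the Mahurangi or Brue estimates.

```python
import random
import statistics

random.seed(1)
# Observed annual totals (mm) with assumed relative standard uncertainties:
precip_obs, precip_rel_u = 1600.0, 0.05   # rainfall: gauge and spatial-averaging error
flow_obs, flow_rel_u = 900.0, 0.08        # discharge: rating-curve error

# Perturb the data within their uncertainty and recompute the signature each time.
ratios = sorted(
    random.gauss(flow_obs, flow_rel_u * flow_obs)
    / random.gauss(precip_obs, precip_rel_u * precip_obs)
    for _ in range(10000)
)
lo = ratios[int(0.025 * len(ratios))]
hi = ratios[int(0.975 * len(ratios))]
med = statistics.median(ratios)
print(f"runoff ratio = {med:.3f} (95% interval {lo:.3f}-{hi:.3f})")
```

    The same perturb-and-recompute loop extends to any signature that is a deterministic function of the rainfall and flow records.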

  19. Impact of uncertainty on modeling and testing

    NASA Technical Reports Server (NTRS)

    Coleman, Hugh W.; Brown, Kendall K.

    1995-01-01

    A thorough understanding of the uncertainties associated with the modeling and testing of the Space Shuttle Main Engine (SSME) Engine will greatly aid decisions concerning hardware performance and future development efforts. This report will describe the determination of the uncertainties in the modeling and testing of the Space Shuttle Main Engine test program at the Technology Test Bed facility at Marshall Space Flight Center. Section 2 will present a summary of the uncertainty analysis methodology used and discuss the specific applications to the TTB SSME test program. Section 3 will discuss the application of the uncertainty analysis to the test program and the results obtained. Section 4 presents the results of the analysis of the SSME modeling effort from an uncertainty analysis point of view. The appendices at the end of the report contain a significant amount of information relative to the analysis, including discussions of venturi flowmeter data reduction and uncertainty propagation, bias uncertainty documentations, technical papers published, the computer code generated to determine the venturi uncertainties, and the venturi data and results used in the analysis.

  20. Uncertainty Quantification Techniques of SCALE/TSUNAMI

    SciTech Connect

    Rearden, Bradley T; Mueller, Don

    2011-01-01

    The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as k{sub eff}, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to select an
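
    The propagation step, sensitivity coefficients applied to a cross-section covariance matrix, follows the standard "sandwich rule". The three-parameter example below is a toy illustration with invented numbers; real TSUNAMI sensitivity vectors span thousands of energy-group, nuclide, and reaction entries.

```python
# Relative variance of a computed response k: var(k)/k^2 = S^T C S,
# with S the relative sensitivity vector (dk/k per dx/x) and C the
# relative covariance matrix of the cross-section data.
S = [0.30, -0.05, 0.12]
C = [
    [4.0e-4, 1.0e-4, 0.0],
    [1.0e-4, 9.0e-4, 0.0],
    [0.0,    0.0,    2.5e-4],
]

var_rel = sum(S[i] * C[i][j] * S[j] for i in range(3) for j in range(3))
print(f"relative uncertainty in the response: {100 * var_rel ** 0.5:.2f}%")
```

    The off-diagonal covariance terms matter: here the anti-correlated pair of sensitivities reduces the total variance slightly.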

  1. Predictive uncertainty in auditory sequence processing.

    PubMed

    Hansen, Niels Chr; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music. PMID:25295018
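
    The entropy measure at the core of this study is straightforward to compute from a model's predictive distribution. The two next-note distributions below are hypothetical, chosen only to contrast low- and high-uncertainty contexts.

```python
import math

def shannon_entropy(dist):
    """Shannon entropy (bits) of a predictive distribution over next events."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Hypothetical next-note distributions over a four-tone alphabet:
low_uncertainty = [0.85, 0.05, 0.05, 0.05]    # one continuation strongly expected
high_uncertainty = [0.25, 0.25, 0.25, 0.25]   # all continuations equally likely

print(f"{shannon_entropy(low_uncertainty):.2f} bits")   # well below the 2-bit maximum
print(f"{shannon_entropy(high_uncertainty):.2f} bits")  # log2(4) = 2 bits exactly
```

    In the study, the distributions come from a variable-order Markov model of melodic continuation rather than being fixed by hand.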

  2. Predictive uncertainty in auditory sequence processing.

    PubMed

    Hansen, Niels Chr; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.

  3. Economic uncertainty and econophysics

    NASA Astrophysics Data System (ADS)

    Schinckus, Christophe

    2009-10-01

    The objective of this paper is to provide a methodological link between econophysics and economics. I will study a key notion of both fields: uncertainty and the ways of thinking about it developed by the two disciplines. After having presented the main economic theories of uncertainty (provided by Knight, Keynes and Hayek), I show how this notion is paradoxically excluded from the economic field. In economics, uncertainty is totally reduced by an a priori Gaussian framework-in contrast to econophysics, which does not use a priori models because it works directly on data. Uncertainty is then not shaped by a specific model, and is partially and temporally reduced as models improve. This way of thinking about uncertainty has echoes in the economic literature. By presenting econophysics as a Knightian method, and a complementary approach to a Hayekian framework, this paper shows that econophysics can be methodologically justified from an economic point of view.

  4. Physical Uncertainty Bounds (PUB)

    SciTech Connect

    Vaughan, Diane Elizabeth; Preston, Dean L.

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  5. Intolerance of Uncertainty

    PubMed Central

    Beier, Meghan L.

    2015-01-01

    Multiple sclerosis (MS) is a chronic and progressive neurologic condition that, by its nature, carries uncertainty as a hallmark characteristic. Although all patients face uncertainty, there is variability in how individuals cope with its presence. In other populations, the concept of “intolerance of uncertainty” has been conceptualized to explain this variability such that individuals who have difficulty tolerating the possibility of future occurrences may engage in thoughts or behaviors by which they attempt to exert control over that possibility or lessen the uncertainty but may, as a result, experience worse outcomes, particularly in terms of psychological well-being. This topical review introduces MS-focused researchers, clinicians, and patients to intolerance of uncertainty, integrates the concept with what is already understood about coping with MS, and suggests future steps for conceptual, assessment, and treatment-focused research that may benefit from integrating intolerance of uncertainty as a central feature. PMID:26300700

  6. Evaluation of uncertainty in large-scale fusion metrology

    NASA Astrophysics Data System (ADS)

    Zhang, Fumin; Qu, Xinghua; Wu, Hongyan; Ye, Shenghua

    2008-12-01

    Uncertainty expression for conventional-scale measurement is well established; however, owing to the variety of error sources, it is still hard to obtain the uncertainty of large-scale instruments by common methods. In this paper, the uncertainty is evaluated by Monte Carlo simulation. The point clouds created by this method are shown through computer visualization, and point-by-point analysis is made. Thus, in fusion measurement, apart from the uncertainty of every instrument being expressed directly, the contribution each error source makes to the whole uncertainty becomes easy to calculate. Finally, the application of this method to measuring a tunnel component is given.

  7. Quantum computing and probability.

    PubMed

    Ferry, David K

    2009-11-25

    Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.

  8. Uncertainty propagation in an ecosystem nutrient budget.

    PubMed

    Lehrter, John C; Cebrian, Just

    2010-03-01

    New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated uncertainty for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of freedom. New aspects include the combined use of Monte Carlo simulations with classical error propagation methods, uncertainty analyses for GIS computations, and uncertainty propagation involving literature and subjective estimates of terms used in the budget calculations. The methods employed are broadly applicable to the mathematical operations employed in ecological studies involving step-by-step calculations, scaling procedures, and calculations of variables from direct measurements and/or literature estimates. Propagation of the standard error and the degrees of freedom allowed for calculation of the uncertainty intervals around every term in the budget. For scientists and environmental managers, the methods developed herein provide a relatively simple framework to propagate and assess the contributions of uncertainty in directly measured and literature estimated variables to calculated variables. Application of these methods to environmental data used in scientific reporting and environmental management will improve the interpretation of data and simplify the estimation of risk associated with decisions based on ecological studies.
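
    The classical propagation rules underlying such a budget can be sketched as below; the fluxes and uncertainties are invented for illustration and are not the embayment data.

```python
import math

# Sums and differences of independent terms: variances add.
inflow, se_in = 120.0, 8.0     # hypothetical nutrient input (kg N / yr)
outflow, se_out = 90.0, 6.0    # hypothetical nutrient export
net = inflow - outflow
se_net = math.sqrt(se_in**2 + se_out**2)
print(f"net retention = {net:.1f} +/- {se_net:.1f}")

# Products and quotients: relative variances add. Here a measured areal
# rate is scaled by a GIS-derived area that is itself uncertain.
rate, rel_rate = 2.5, 0.10
area, rel_area = 40.0, 0.05
total = rate * area
rel_total = math.sqrt(rel_rate**2 + rel_area**2)
print(f"total flux = {total:.0f} +/- {total * rel_total:.0f}")
```

    Chaining these two rules step by step through a budget, together with the degrees of freedom, is what yields an uncertainty interval around every derived term.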

  9. Information-theoretic approach to uncertainty importance

    SciTech Connect

    Park, C.K.; Bari, R.A.

    1985-01-01

    A method is presented for importance analysis in probabilistic risk assessments (PRA) for which the results of interest are characterized by full uncertainty distributions and not just point estimates. The method is based on information theory in which entropy is a measure of uncertainty of a probability density function. We define the relative uncertainty importance between two events as the ratio of the two exponents of the entropies. For the log-normal and log-uniform distributions the importance measure is comprised of the median (central tendency) and of the logarithm of the error factor (uncertainty). Thus, if accident sequences are ranked this way, and the error factors are not all equal, then a different rank order would result than if the sequences were ranked by the central tendency measure alone. As an illustration, the relative importance of internal events and in-plant fires was computed on the basis of existing PRA results.
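
    For a lognormal, the exponent of the differential entropy factors into the median (central tendency) times a term in the log of the spread, which is what lets the importance measure combine the two. A sketch, with invented sequence frequencies and the common convention that the error factor is the ratio of the 95th percentile to the median:

```python
import math

def exp_entropy_lognormal(median, error_factor):
    """exp(differential entropy) of a lognormal given its median and
    error factor EF = 95th percentile / median (so sigma = ln(EF)/1.645)."""
    sigma = math.log(error_factor) / 1.645
    h = math.log(median) + 0.5 * math.log(2 * math.pi * math.e * sigma**2)
    return math.exp(h)

# Hypothetical accident-sequence frequency distributions (per reactor-year):
internal = exp_entropy_lognormal(median=1.0e-5, error_factor=3.0)
fire = exp_entropy_lognormal(median=4.0e-6, error_factor=10.0)

# Relative uncertainty importance: ratio of the entropy exponents.
print(f"importance(internal : fire) = {internal / fire:.2f}")
```

    Note that the sequence with the smaller median can still rival the other in importance when its error factor is much larger, which is exactly the re-ranking effect described in the abstract.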

  10. Optimal Universal Uncertainty Relations

    PubMed Central

    Li, Tao; Xiao, Yunlong; Ma, Teng; Fei, Shao-Ming; Jing, Naihuan; Li-Jost, Xianqing; Wang, Zhi-Xi

    2016-01-01

    We study universal uncertainty relations and present a method called joint probability distribution diagram to improve the majorization bounds constructed independently in [Phys. Rev. Lett. 111, 230401 (2013)] and [J. Phys. A. 46, 272002 (2013)]. The results give rise to state independent uncertainty relations satisfied by any nonnegative Schur-concave functions. On the other hand, a remarkable recent result of entropic uncertainty relation is the direct-sum majorization relation. In this paper, we illustrate our bounds by showing how they provide a complement to that in [Phys. Rev. A. 89, 052115 (2014)]. PMID:27775010

  11. Predictive uncertainty in auditory sequence processing

    PubMed Central

    Hansen, Niels Chr.; Pearce, Marcus T.

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music. PMID:25295018

  12. Communicating scientific uncertainty

    PubMed Central

    Fischhoff, Baruch; Davis, Alex L.

    2014-01-01

    All science has uncertainty. Unless that uncertainty is communicated effectively, decision makers may put too much or too little faith in it. The information that needs to be communicated depends on the decisions that people face. Are they (i) looking for a signal (e.g., whether to evacuate before a hurricane), (ii) choosing among fixed options (e.g., which medical treatment is best), or (iii) learning to create options (e.g., how to regulate nanotechnology)? We examine these three classes of decisions in terms of how to characterize, assess, and convey the uncertainties relevant to each. We then offer a protocol for summarizing the many possible sources of uncertainty in standard terms, designed to impose a minimal burden on scientists, while gradually educating those whose decisions depend on their work. Its goals are better decisions, better science, and better support for science. PMID:25225390

  13. Communicating scientific uncertainty.

    PubMed

    Fischhoff, Baruch; Davis, Alex L

    2014-09-16

    All science has uncertainty. Unless that uncertainty is communicated effectively, decision makers may put too much or too little faith in it. The information that needs to be communicated depends on the decisions that people face. Are they (i) looking for a signal (e.g., whether to evacuate before a hurricane), (ii) choosing among fixed options (e.g., which medical treatment is best), or (iii) learning to create options (e.g., how to regulate nanotechnology)? We examine these three classes of decisions in terms of how to characterize, assess, and convey the uncertainties relevant to each. We then offer a protocol for summarizing the many possible sources of uncertainty in standard terms, designed to impose a minimal burden on scientists, while gradually educating those whose decisions depend on their work. Its goals are better decisions, better science, and better support for science.

  14. Evaluating prediction uncertainty

    SciTech Connect

    McKay, M.D.

    1995-03-01

    The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented.
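
    The sampling-and-variance-ratio machinery can be sketched as follows. The two-input model, bin count, and sample size are illustrative choices, and the binned estimator is a simplified stand-in for the replicated Latin hypercube construction used in the report.

```python
import random
import statistics

random.seed(2)

def lhs_uniform(n):
    """One Latin hypercube sample of size n on [0, 1]: one point per stratum,
    in shuffled order so inputs can be paired independently."""
    pts = [(i + random.random()) / n for i in range(n)]
    random.shuffle(pts)
    return pts

def model(x1, x2):
    return 4.0 * x1 + (x2 - 0.5)   # x1 dominates the output variance

n = 2000
x1, x2 = lhs_uniform(n), lhs_uniform(n)
y = [model(a, b) for a, b in zip(x1, x2)]
var_y = statistics.variance(y)

def importance(x, bins=20):
    """Variance-ratio importance indicator: Var(E[y | x]) / Var(y),
    with the conditional means estimated by binning x."""
    groups = [[] for _ in range(bins)]
    for xi, yi in zip(x, y):
        groups[min(int(xi * bins), bins - 1)].append(yi)
    means = [statistics.mean(g) for g in groups if g]
    return statistics.variance(means) / var_y

imp1, imp2 = importance(x1), importance(x2)
print(f"importance of x1: {imp1:.2f}, of x2: {imp2:.2f}")
```

    No linearity assumption enters the indicator itself: the binned conditional means capture monotone and non-monotone dependence alike, which is why such indicators remain effective where linearity-based methods fail.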

  15. Uncertainty relations and precession of perihelion

    NASA Astrophysics Data System (ADS)

    Scardigli, Fabio; Casadio, Roberto

    2016-03-01

    We compute the corrections to the Schwarzschild metric necessary to reproduce the Hawking temperature derived from a Generalized Uncertainty Principle (GUP), so that the GUP deformation parameter is directly linked to the deformation of the metric. Using this modified Schwarzschild metric, we compute corrections to the standard General Relativistic predictions for the perihelion precession for planets in the solar system, and for binary pulsars. This analysis allows us to set bounds for the GUP deformation parameter from well-known astronomical measurements.

  16. Dasymetric Modeling and Uncertainty

    PubMed Central

    Nagle, Nicholas N.; Buttenfield, Barbara P.; Leyk, Stefan; Spielman, Seth

    2014-01-01

    Dasymetric models increase the spatial resolution of population data by incorporating related ancillary data layers. The role of uncertainty in dasymetric modeling has not yet been fully addressed. Uncertainty is usually present because most population data are themselves uncertain, and/or the geographic processes that connect population and the ancillary data layers are not precisely known. A new dasymetric methodology - the Penalized Maximum Entropy Dasymetric Model (P-MEDM) - is presented that enables these sources of uncertainty to be represented and modeled. The P-MEDM propagates uncertainty through the model and yields fine-resolution population estimates with associated measures of uncertainty. This methodology contains a number of other benefits of theoretical and practical interest. In dasymetric modeling, researchers often struggle with identifying a relationship between population and ancillary data layers. The P-MEDM simplifies this step by unifying how ancillary data are included. The P-MEDM also allows a rich array of data to be included, with disparate spatial resolutions, attribute resolutions, and uncertainties. While the P-MEDM does not necessarily produce more precise estimates than do existing approaches, it does help to unify how data enter the dasymetric model, it increases the types of data that may be used, and it allows geographers to characterize the quality of their dasymetric estimates. We present an application of the P-MEDM that includes household-level survey data combined with higher spatial resolution data such as from census tracts, block groups, and land cover classifications. PMID:25067846

  17. Uncertainty in QSAR predictions.

    PubMed

    Sahlin, Ullrika

    2013-03-01

    It is relevant to consider uncertainty in individual predictions when quantitative structure-activity (or property) relationships (QSARs) are used to support decisions of high societal concern. Successful communication of uncertainty in the integration of QSARs in chemical safety assessment under the EU Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) system can be facilitated by a common understanding of how to define, characterise, assess and evaluate uncertainty in QSAR predictions. A QSAR prediction is, compared to experimental estimates, subject to added uncertainty that comes from the use of a model instead of empirically-based estimates. A framework is provided to aid the distinction between different types of uncertainty in a QSAR prediction: quantitative, i.e. for regressions related to the error in a prediction and characterised by a predictive distribution; and qualitative, by expressing our confidence in the model for predicting a particular compound based on a quantitative measure of predictive reliability. It is possible to assess a quantitative (i.e. probabilistic) predictive distribution, given the supervised learning algorithm, the underlying QSAR data, a probability model for uncertainty and a statistical principle for inference. The integration of QSARs into risk assessment may be facilitated by the inclusion of the assessment of predictive error and predictive reliability into the "unambiguous algorithm", as outlined in the second OECD principle.

  18. Uncertainty Analysis for a Jet Flap Airfoil

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Cruz, Josue

    2006-01-01

    An analysis of variance (ANOVA) study was performed to quantify the potential uncertainties of lift and pitching moment coefficient calculations from a computational fluid dynamics code, relative to an experiment, for a jet flap airfoil configuration. Uncertainties due to a number of factors including grid density, angle of attack and jet flap blowing coefficient were examined. The ANOVA software produced a numerical model of the input coefficient data, as functions of the selected factors, to a user-specified order (linear, 2-factor interaction, quadratic, or cubic). Residuals between the model and actual data were also produced at each of the input conditions, and uncertainty confidence intervals (in the form of Least Significant Differences or LSD) for experimental, computational, and combined experimental / computational data sets were computed. The LSD bars indicate the smallest resolvable differences in the functional values (lift or pitching moment coefficient) attributable solely to changes in independent variable, given just the input data points from selected data sets. The software also provided a collection of diagnostics which evaluate the suitability of the input data set for use within the ANOVA process, and which examine the behavior of the resultant data, possibly suggesting transformations which should be applied to the data to reduce the LSD. The results illustrate some of the key features of, and results from, the uncertainty analysis studies, including the use of both numerical (continuous) and categorical (discrete) factors, the effects of the number and range of the input data points, and the effects of the number of factors considered simultaneously.

  19. Remaining Useful Life Estimation in Prognosis: An Uncertainty Propagation Problem

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Goebel, Kai

    2013-01-01

    The estimation of remaining useful life is significant in the context of prognostics and health monitoring, and the prediction of remaining useful life is essential for online operations and decision-making. However, it is challenging to accurately predict the remaining useful life in practical aerospace applications due to the presence of various uncertainties that affect prognostic calculations, and in turn, render the remaining useful life prediction uncertain. It is challenging to identify and characterize the various sources of uncertainty in prognosis, understand how each of these sources of uncertainty affect the uncertainty in the remaining useful life prediction, and thereby compute the overall uncertainty in the remaining useful life prediction. In order to achieve these goals, this paper proposes that the task of estimating the remaining useful life must be approached as an uncertainty propagation problem. In this context, uncertainty propagation methods which are available in the literature are reviewed, and their applicability to prognostics and health monitoring are discussed.

  20. Uncertainty estimates for theoretical atomic and molecular data

    NASA Astrophysics Data System (ADS)

    Chung, H.-K.; Braams, B. J.; Bartschat, K.; Császár, A. G.; Drake, G. W. F.; Kirchner, T.; Kokoouline, V.; Tennyson, J.

    2016-09-01

    Sources of uncertainty are reviewed for calculated atomic and molecular data that are important for plasma modeling: atomic and molecular structures and cross sections for electron-atom, electron-molecule, and heavy particle collisions. We concentrate on model uncertainties due to approximations to the fundamental many-body quantum mechanical equations and we aim to provide guidelines to estimate uncertainties as a routine part of computations of data for structure and scattering.

  1. Estimating the measurement uncertainty in forensic blood alcohol analysis.

    PubMed

    Gullberg, Rod G

    2012-04-01

    For many reasons, forensic toxicologists are being asked to determine and report their measurement uncertainty in blood alcohol analysis. While understood conceptually, the elements and computations involved in determining measurement uncertainty are generally foreign to most forensic toxicologists. Several established and well-documented methods are available to determine and report the uncertainty in blood alcohol measurement. A straightforward bottom-up approach is presented that includes: (1) specifying the measurand, (2) identifying the major components of uncertainty, (3) quantifying the components, (4) statistically combining the components and (5) reporting the results. A hypothetical example is presented that employs reasonable estimates for forensic blood alcohol analysis assuming headspace gas chromatography. These computations are easily employed in spreadsheet programs as well. Determining and reporting measurement uncertainty is an important element in establishing fitness-for-purpose. Indeed, the demand for such computations and information from the forensic toxicologist will continue to increase.
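    The five bottom-up steps listed in this abstract can be sketched in a few lines of Python. The component values and the coverage factor below are hypothetical, chosen only to illustrate the root-sum-square combination and reporting, not taken from the paper:

```python
import math

# Hypothetical relative standard uncertainty components for a headspace
# GC blood alcohol measurement (illustrative values only).
components = {
    "calibrator": 0.010,     # reference standard uncertainty
    "repeatability": 0.015,  # analytical precision
    "sampling": 0.008,       # specimen handling / dilution
    "bias": 0.005,           # method bias correction
}

measured_bac = 0.082  # g/100 mL, hypothetical result

# Combine independent components by root-sum-of-squares.
u_combined_rel = math.sqrt(sum(u**2 for u in components.values()))

# Expanded uncertainty with coverage factor k = 2 (~95% confidence).
k = 2
U = k * u_combined_rel * measured_bac

print(f"Result: {measured_bac:.3f} +/- {U:.3f} g/100 mL (k = {k})")
```

    As the abstract notes, the same computation is easily carried out in a spreadsheet; the code simply makes each step of the combination explicit.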

  2. Classification images with uncertainty

    PubMed Central

    Tjan, Bosco S.; Nandy, Anirvan S.

    2009-01-01

    Classification image and other similar noise-driven linear methods have found increasingly wider applications in revealing psychophysical receptive field structures or perceptual templates. These techniques are relatively easy to deploy, and the results are simple to interpret. However, being a linear technique, the utility of the classification-image method is believed to be limited. Uncertainty about the target stimuli on the part of an observer will result in a classification image that is the superposition of all possible templates for all the possible signals. In the context of a well-established uncertainty model, which pools the outputs of a large set of linear frontends with a max operator, we show analytically, in simulations, and with human experiments that the effect of intrinsic uncertainty can be limited or even eliminated by presenting a signal at a relatively high contrast in a classification-image experiment. We further argue that the subimages from different stimulus-response categories should not be combined, as is conventionally done. We show that when the signal contrast is high, the subimages from the error trials contain a clear high-contrast image that is negatively correlated with the perceptual template associated with the presented signal, relatively unaffected by uncertainty. The subimages also contain a “haze” that is of a much lower contrast and is positively correlated with the superposition of all the templates associated with the erroneous response. In the case of spatial uncertainty, we show that the spatial extent of the uncertainty can be estimated from the classification subimages. We link intrinsic uncertainty to invariance and suggest that this signal-clamped classification-image method will find general applications in uncovering the underlying representations of high-level neural and psychophysical mechanisms. PMID:16889477

  3. Classification images with uncertainty.

    PubMed

    Tjan, Bosco S; Nandy, Anirvan S

    2006-04-04

    Classification image and other similar noise-driven linear methods have found increasingly wider applications in revealing psychophysical receptive field structures or perceptual templates. These techniques are relatively easy to deploy, and the results are simple to interpret. However, being a linear technique, the utility of the classification-image method is believed to be limited. Uncertainty about the target stimuli on the part of an observer will result in a classification image that is the superposition of all possible templates for all the possible signals. In the context of a well-established uncertainty model, which pools the outputs of a large set of linear frontends with a max operator, we show analytically, in simulations, and with human experiments that the effect of intrinsic uncertainty can be limited or even eliminated by presenting a signal at a relatively high contrast in a classification-image experiment. We further argue that the subimages from different stimulus-response categories should not be combined, as is conventionally done. We show that when the signal contrast is high, the subimages from the error trials contain a clear high-contrast image that is negatively correlated with the perceptual template associated with the presented signal, relatively unaffected by uncertainty. The subimages also contain a "haze" that is of a much lower contrast and is positively correlated with the superposition of all the templates associated with the erroneous response. In the case of spatial uncertainty, we show that the spatial extent of the uncertainty can be estimated from the classification subimages. We link intrinsic uncertainty to invariance and suggest that this signal-clamped classification-image method will find general applications in uncovering the underlying representations of high-level neural and psychophysical mechanisms.

  4. Quantifying uncertainty in Bayesian calibrated animal-to-human PBPK models with informative prior distributions

    EPA Science Inventory

    Understanding and quantifying the uncertainty of model parameters and predictions has gained more interest in recent years with the increased use of computational models in chemical risk assessment. Fully characterizing the uncertainty in risk metrics derived from linked quantita...

  5. Network planning under uncertainties

    NASA Astrophysics Data System (ADS)

    Ho, Kwok Shing; Cheung, Kwok Wai

    2008-11-01

    One of the main focuses of network planning is the optimization of the network resources required to build a network under a given traffic demand projection. Traditionally, the inputs to this type of network planning problem are treated as deterministic. In reality, the varying traffic requirements and fluctuations in network resources can cause uncertainties in the decision models. The failure to include the uncertainties in the network design process can severely affect the feasibility and economics of the network. Therefore, it is essential to find a solution that can be insensitive to the uncertain conditions during the network planning process. As early as the 1960s, network planning problems with varying traffic requirements over time were studied. This kind of problem is still actively researched, especially for VPN network design. Another kind of network planning problem under uncertainties that has been studied actively in the past decade addresses the fluctuations in network resources. One such hotly pursued research topic is survivable network planning. It considers the design of a network under uncertainties brought by the fluctuations in topology to meet the requirement that the network remains intact up to a certain number of faults occurring anywhere in the network. Recently, the authors proposed a new planning methodology called Generalized Survivable Network that tackles the network design problem under both varying traffic requirements and fluctuations of topology. Although all the above network planning problems handle various kinds of uncertainties, it is hard to find a generic framework under more general uncertainty conditions that allows a more systematic way to solve the problems. With a unified framework, the seemingly diverse models and algorithms can be intimately related and possibly more insights and improvements can be brought out for solving the problem. This motivates us to seek a

  6. Interpreting uncertainty terms.

    PubMed

    Holtgraves, Thomas

    2014-08-01

    Uncertainty terms (e.g., some, possible, good, etc.) are words that do not have a fixed referent and hence are relatively ambiguous. A model is proposed that specifies how, from the hearer's perspective, recognition of facework as a potential motive for the use of an uncertainty term results in a calibration of the intended meaning of that term. Four experiments are reported that examine the impact of face threat, and the variables that affect it (e.g., power), on the manner in which a variety of uncertainty terms (probability terms, quantifiers, frequency terms, etc.) are interpreted. Overall, the results demonstrate that increased face threat in a situation will result in a more negative interpretation of an utterance containing an uncertainty term. That the interpretation of so many different types of uncertainty terms is affected in the same way suggests the operation of a fundamental principle of language use, one with important implications for the communication of risk, subjective experience, and so on.

  7. Measurement uncertainty relations

    SciTech Connect

    Busch, Paul; Lahti, Pekka; Werner, Reinhard F.

    2014-04-15

    Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.

  8. Serenity in political uncertainty.

    PubMed

    Doumit, Rita; Afifi, Rema A; Devon, Holli A

    2015-01-01

    College students are often faced with academic and personal stressors that threaten their well-being. Added to that may be political and environmental stressors such as acts of violence on the streets, interruptions in schooling, car bombings, targeted religious intimidations, financial hardship, and uncertainty of obtaining a job after graduation. Research on how college students adapt to the latter stressors is limited. The aims of this study were (1) to investigate the associations between stress, uncertainty, resilience, social support, withdrawal coping, and well-being for Lebanese youth during their first year of college and (2) to determine whether these variables predicted well-being. A sample of 293 first-year students enrolled in a private university in Lebanon completed a self-reported questionnaire in the classroom setting. The mean age of sample participants was 18.1 years, with nearly an equal percentage of males and females (53.2% vs 46.8%), who lived with their family (92.5%), and whose family reported high income levels (68.4%). Multiple regression analyses revealed that best determinants of well-being are resilience, uncertainty, social support, and gender that accounted for 54.1% of the variance. Despite living in an environment of frequent violence and political uncertainty, Lebanese youth in this study have a strong sense of well-being and are able to go on with their lives. This research adds to our understanding on how adolescents can adapt to stressors of frequent violence and political uncertainty. Further research is recommended to understand the mechanisms through which young people cope with political uncertainty and violence. PMID:25658930

  9. Serenity in political uncertainty.

    PubMed

    Doumit, Rita; Afifi, Rema A; Devon, Holli A

    2015-01-01

    College students are often faced with academic and personal stressors that threaten their well-being. Added to that may be political and environmental stressors such as acts of violence on the streets, interruptions in schooling, car bombings, targeted religious intimidations, financial hardship, and uncertainty of obtaining a job after graduation. Research on how college students adapt to the latter stressors is limited. The aims of this study were (1) to investigate the associations between stress, uncertainty, resilience, social support, withdrawal coping, and well-being for Lebanese youth during their first year of college and (2) to determine whether these variables predicted well-being. A sample of 293 first-year students enrolled in a private university in Lebanon completed a self-reported questionnaire in the classroom setting. The mean age of sample participants was 18.1 years, with nearly an equal percentage of males and females (53.2% vs 46.8%), who lived with their family (92.5%), and whose family reported high income levels (68.4%). Multiple regression analyses revealed that best determinants of well-being are resilience, uncertainty, social support, and gender that accounted for 54.1% of the variance. Despite living in an environment of frequent violence and political uncertainty, Lebanese youth in this study have a strong sense of well-being and are able to go on with their lives. This research adds to our understanding on how adolescents can adapt to stressors of frequent violence and political uncertainty. Further research is recommended to understand the mechanisms through which young people cope with political uncertainty and violence.

  10. Uncertainty and calibration analysis

    SciTech Connect

    Coutts, D.A.

    1991-03-01

    All measurements contain some deviation from the true value which is being measured. In the common vernacular this deviation between the true value and the measured value is called an inaccuracy, an error, or a mistake. Since all measurements contain errors, it is necessary to accept that there is a limit to how accurate a measurement can be. The uncertainty interval, combined with the confidence level, is one measure of the accuracy for a measurement or value. Without a statement of uncertainty (or a similar parameter) it is not possible to evaluate whether the accuracy of the measurement, or data, is appropriate. The preparation of technical reports, calibration evaluations, and design calculations should consider the accuracy of the measurements and data being used. There are many methods to accomplish this. This report provides a consistent method for the handling of measurement tolerances, calibration evaluations, and uncertainty calculations. The SRS Quality Assurance (QA) Program requires that the uncertainty of technical data and instrument calibrations be acknowledged and estimated. The QA Program makes some specific technical requirements related to the subject but does not provide a philosophy or method on how uncertainty should be estimated. This report was prepared to provide a technical basis to support the calculation of uncertainties and the calibration of measurement and test equipment for any activity within the Experimental Thermal-Hydraulics (ETH) Group. The methods proposed in this report provide a graded approach for estimating the uncertainty of measurements, data, and calibrations. The method is based on the national consensus standard, ANSI/ASME PTC 19.1.
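    A minimal PTC 19.1-style combination of a systematic (bias) limit with random (precision) uncertainty can be sketched as follows. The readings, the bias limit, and the simplified additive and root-sum-square formulas are illustrative assumptions, not the report's own worked example:

```python
import math
import statistics

# Sketch: combine a systematic bias limit with a random precision
# uncertainty, in the spirit of ANSI/ASME PTC 19.1 (values hypothetical).
readings = [10.02, 9.98, 10.05, 10.01, 9.97, 10.03]  # calibration readings
B = 0.04          # estimated bias limit of the reference standard
t95 = 2.571       # Student t for N - 1 = 5 degrees of freedom, 95%

S = statistics.stdev(readings)
N = len(readings)
precision = t95 * S / math.sqrt(N)       # random uncertainty of the mean

U_rss = math.sqrt(B**2 + precision**2)   # root-sum-square combination
U_add = B + precision                    # additive (more conservative)

print(f"U_RSS = {U_rss:.4f}, U_ADD = {U_add:.4f}")
```

    The two combinations bracket the graded approach the report describes: the additive form is the conservative bound, the root-sum-square form the usual 95% estimate.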

  11. Weighted Uncertainty Relations

    PubMed Central

    Xiao, Yunlong; Jing, Naihuan; Li-Jost, Xianqing; Fei, Shao-Ming

    2016-01-01

    Recently, Maccone and Pati have given two stronger uncertainty relations based on the sum of variances and one of them is nontrivial when the quantum state is not an eigenstate of the sum of the observables. We derive a family of weighted uncertainty relations to provide an optimal lower bound for all situations and remove the restriction on the quantum state. Generalization to multi-observable cases is also given and an optimal lower bound for the weighted sum of the variances is obtained in general quantum situation. PMID:26984295

  12. The legacy of uncertainty

    NASA Technical Reports Server (NTRS)

    Brown, Laurie M.

    1993-01-01

    An historical account is given of the circumstances whereby the uncertainty relations were introduced into physics by Heisenberg. The criticisms of QED on measurement-theoretical grounds by Landau and Peierls are then discussed, as well as the response to them by Bohr and Rosenfeld. Finally, some examples are given of how the new freedom to advance radical proposals, in part the result of the revolution brought about by 'uncertainty,' was implemented in dealing with the new phenomena encountered in elementary particle physics in the 1930's.

  13. Quantification and Propagation of Nuclear Data Uncertainties

    NASA Astrophysics Data System (ADS)

    Rising, Michael E.

    The use of several uncertainty quantification and propagation methodologies is investigated in the context of the prompt fission neutron spectrum (PFNS) uncertainties and its impact on critical reactor assemblies. First, the first-order, linear Kalman filter is used as a nuclear data evaluation and uncertainty quantification tool combining available PFNS experimental data and a modified version of the Los Alamos (LA) model. The experimental covariance matrices, not generally given in the EXFOR database, are computed using the GMA methodology used by the IAEA to establish more appropriate correlations within each experiment. Then, using systematics relating the LA model parameters across a suite of isotopes, the PFNS for both the uranium and plutonium actinides are evaluated leading to a new evaluation including cross-isotope correlations. Next, an alternative evaluation approach, the unified Monte Carlo (UMC) method, is studied for the evaluation of the PFNS for the n(0.5 MeV)+Pu-239 fission reaction and compared to the Kalman filter. The UMC approach to nuclear data evaluation is implemented in a variety of ways to test convergence toward the Kalman filter results and to determine the nonlinearities present in the LA model. Ultimately, the UMC approach is shown to be comparable to the Kalman filter for a realistic data evaluation of the PFNS and is capable of capturing the nonlinearities present in the LA model. Next, the impact that the PFNS uncertainties have on important critical assemblies is investigated. Using the PFNS covariance matrices in the ENDF/B-VII.1 nuclear data library, the uncertainties of the effective multiplication factor, leakage, and spectral indices of the Lady Godiva and Jezebel critical assemblies are quantified. Using principal component analysis on the PFNS covariance matrices results in needing only 2-3 principal components to retain the PFNS uncertainties. Then, using the polynomial chaos expansion (PCE) on the uncertain output
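    The principal-component reduction mentioned at the end of this abstract can be sketched numerically. The covariance matrix below is synthetic (a low-rank construction), not actual ENDF/B-VII.1 PFNS data:

```python
import numpy as np

# Sketch: principal component analysis of a covariance matrix, retaining
# only the leading components, as described for the PFNS covariances.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 3))          # three dominant directions
cov = A @ A.T + 1e-6 * np.eye(30)     # low-rank covariance plus jitter

eigvals, eigvecs = np.linalg.eigh(cov)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # descending order

frac = np.cumsum(eigvals) / eigvals.sum()
n_keep = int(np.searchsorted(frac, 0.99) + 1)
print(f"{n_keep} components retain 99% of the variance")
```

    Because the synthetic matrix is nearly rank-3, only a few components are needed, mirroring the abstract's observation that 2-3 principal components retain the PFNS uncertainties.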

  14. Asymptotic entropic uncertainty relations

    NASA Astrophysics Data System (ADS)

    Adamczak, Radosław; Latała, Rafał; Puchała, Zbigniew; Życzkowski, Karol

    2016-03-01

    We analyze entropic uncertainty relations for two orthogonal measurements on a N-dimensional Hilbert space, performed in two generic bases. It is assumed that the unitary matrix U relating both bases is distributed according to the Haar measure on the unitary group. We provide lower bounds on the average Shannon entropy of probability distributions related to both measurements. The bounds are stronger than those obtained with use of the entropic uncertainty relation by Maassen and Uffink, and they are optimal up to additive constants. We also analyze the case of a large number of measurements and obtain strong entropic uncertainty relations, which hold with high probability with respect to the random choice of bases. The lower bounds we obtain are optimal up to additive constants and allow us to prove a conjecture by Wehner and Winter on the asymptotic behavior of constants in entropic uncertainty relations as the dimension tends to infinity. As a tool we develop estimates on the maximum operator norm of a submatrix of a fixed size of a random unitary matrix distributed according to the Haar measure, which are of independent interest.
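    The Maassen-Uffink bound that this paper strengthens can be checked numerically for a single Haar-random basis. The state, dimension, and construction below are illustrative, not the paper's setup:

```python
import numpy as np

# Numerical check of the Maassen-Uffink entropic uncertainty relation
# H(p) + H(q) >= -2 log c, with c the largest overlap between the two bases.
rng = np.random.default_rng(2)
N = 8

# Haar-random unitary via QR of a complex Gaussian matrix, with the
# standard phase correction on the diagonal of R.
Z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Q, R = np.linalg.qr(Z)
U = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))

psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)

def shannon(p):
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

H1 = shannon(np.abs(psi) ** 2)               # computational basis
H2 = shannon(np.abs(U.conj().T @ psi) ** 2)  # rotated (second) basis
bound = -2 * np.log(np.max(np.abs(U)))       # Maassen-Uffink lower bound
print(f"H1 + H2 = {H1 + H2:.3f} >= {bound:.3f}")
```

    Averaging such runs over many Haar-random unitaries is essentially the setting of the paper, which derives stronger-than-Maassen-Uffink lower bounds on the average entropy.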

  15. Uncertainties in repository modeling

    SciTech Connect

    Wilson, J.R.

    1996-12-31

    The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend the regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.

  16. Reciprocity and uncertainty.

    PubMed

    Bereby-Meyer, Yoella

    2012-02-01

    Guala points to a discrepancy between strong negative reciprocity observed in the lab and the way cooperation is sustained "in the wild." This commentary suggests that in lab experiments, strong negative reciprocity is limited when uncertainty exists regarding the players' actions and the intentions. Thus, costly punishment is indeed a limited mechanism for sustaining cooperation in an uncertain environment.

  17. Reciprocity and uncertainty.

    PubMed

    Bereby-Meyer, Yoella

    2012-02-01

    Guala points to a discrepancy between strong negative reciprocity observed in the lab and the way cooperation is sustained "in the wild." This commentary suggests that in lab experiments, strong negative reciprocity is limited when uncertainty exists regarding the players' actions and the intentions. Thus, costly punishment is indeed a limited mechanism for sustaining cooperation in an uncertain environment. PMID:22289307

  18. An uncertainty inventory demonstration - a primary step in uncertainty quantification

    SciTech Connect

    Langenbrunner, James R.; Booker, Jane M; Hemez, Francois M; Salazar, Issac F; Ross, Timothy J

    2009-01-01

    Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore, PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with the words 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step in the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.

  19. Comment on "A procedure for the estimation of the numerical uncertainty of CFD calculations based on grid refinement studies" (L. Eça and M. Hoekstra, Journal of Computational Physics 262 (2014) 104-130)

    NASA Astrophysics Data System (ADS)

    Xing, Tao; Stern, Frederick

    2015-11-01

    Eça and Hoekstra [1] proposed a procedure for the estimation of the numerical uncertainty of CFD calculations based on the least squares root (LSR) method. We believe that the LSR method has potential value for providing an extended Richardson-extrapolation solution verification procedure for mixed monotonic and oscillatory or only oscillatory convergent solutions (based on the usual systematic grid-triplet convergence condition R). Current Richardson-extrapolation solution verification procedures [2-7] are restricted to monotonic convergent solutions 0 < R < 1. Procedures for oscillatory convergence simply either use uncertainty estimate based on average maximum minus minimum solutions [8,9] or arbitrarily large factors of safety (FS) [2]. However, in our opinion several issues preclude the usefulness of the presented LSR method: five criticisms follow. The solution verification literature needs technical discussion in order to put the LSR method in context. The LSR method has many options making it very difficult to follow. Fig. 1 provides a block diagram, which summarizes the LSR procedure and options, including some of which we are in disagreement. Compared to the grid-triplet and three-step procedure followed by most solution verification methods (convergence condition followed by error and uncertainty estimates), the LSR method follows a four-grid (minimum) and four-step procedure (error estimate, data range parameter Δϕ, FS, and uncertainty estimate).
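    The standard grid-triplet Richardson-extrapolation procedure that this comment contrasts with the LSR method can be sketched as follows. The solution values, refinement ratio, and factor of safety are hypothetical, chosen to give a monotonically convergent triplet:

```python
import math

# Sketch of three-grid Richardson-extrapolation solution verification
# (the procedure the LSR method extends); values are hypothetical.
phi1, phi2, phi3 = 0.9713, 0.9702, 0.9660  # fine, medium, coarse solutions
r = 2.0                                    # constant grid refinement ratio

R = (phi1 - phi2) / (phi2 - phi3)          # convergence condition
assert 0 < R < 1, "monotonic convergence required for this procedure"

p = math.log((phi3 - phi2) / (phi2 - phi1)) / math.log(r)  # observed order
delta_re = (phi1 - phi2) / (r**p - 1)      # Richardson error estimate
Fs = 1.25                                  # factor of safety
gci_fine = Fs * abs(delta_re)              # uncertainty estimate (GCI-style)

print(f"R = {R:.3f}, p = {p:.2f}, U = {gci_fine:.2e}")
```

    The comment's point is visible in the assertion: this three-step procedure only applies when 0 < R < 1, which is why oscillatory or mixed convergence needs either the LSR least-squares fit or larger factors of safety.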

  20. Strategy under uncertainty.

    PubMed

    Courtney, H; Kirkland, J; Viguerie, P

    1997-01-01

    At the heart of the traditional approach to strategy lies the assumption that by applying a set of powerful analytic tools, executives can predict the future of any business accurately enough to allow them to choose a clear strategic direction. But what happens when the environment is so uncertain that no amount of analysis will allow us to predict the future? What makes for a good strategy in highly uncertain business environments? The authors, consultants at McKinsey & Company, argue that uncertainty requires a new way of thinking about strategy. All too often, they say, executives take a binary view: either they underestimate uncertainty to come up with the forecasts required by their companies' planning or capital-budgeting processes, or they overestimate it, abandon all analysis, and go with their gut instinct. The authors outline a new approach that begins by making a crucial distinction among four discrete levels of uncertainty that any company might face. They then explain how a set of generic strategies--shaping the market, adapting to it, or reserving the right to play at a later time--can be used in each of the four levels. And they illustrate how these strategies can be implemented through a combination of three basic types of actions: big bets, options, and no-regrets moves. The framework can help managers determine which analytic tools can inform decision making under uncertainty--and which cannot. At a broader level, it offers executives a discipline for thinking rigorously and systematically about uncertainty and its implications for strategy.

  1. Multi-scenario modelling of uncertainty in stochastic chemical systems

    SciTech Connect

    Evans, R. David; Ricardez-Sandoval, Luis A.

    2014-09-15

    Uncertainty analysis has not been well studied at the molecular scale, despite extensive knowledge of uncertainty in macroscale systems. The ability to predict the effect of uncertainty allows for robust control of small-scale systems such as nanoreactors, surface reactions, and gene toggle switches. However, it is difficult to model uncertainty in such chemical systems as they are stochastic in nature and require a large computational cost. To address this issue, a new model of uncertainty propagation in stochastic chemical systems, based on the Chemical Master Equation, is proposed in the present study. The uncertain solution is approximated by a composite state composed of the averaged effect of samples from the uncertain parameter distributions. This model is then used to study the effect of uncertainty on an isomerization system and a two-gene regulation network called a repressilator. The results of this model show that uncertainty in stochastic systems is dependent on both the uncertain distribution and the system under investigation. -- Highlights:
    • A method to model uncertainty in stochastic systems was developed.
    • The method is based on the Chemical Master Equation.
    • Uncertainty in an isomerization reaction and a gene regulation network was modelled.
    • Effects were significant and dependent on the uncertain input and reaction system.
    • The model was computationally more efficient than Kinetic Monte Carlo.
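    The composite-state idea can be illustrated with a toy isomerization. The sketch below samples an uncertain rate constant and averages stochastic-simulation (Gillespie SSA) trajectories for each sample; the reaction, rate distribution, and sample counts are all illustrative assumptions, not the paper's method in full:

```python
import random

# Sketch: propagate parametric uncertainty through a stochastic
# isomerization A -> B by sampling the uncertain rate constant and
# averaging SSA trajectories for each sample (values illustrative).
random.seed(0)

def ssa_isomerization(k, n_a0=100, t_end=1.0):
    """Gillespie simulation of A -> B with rate constant k."""
    t, n_a = 0.0, n_a0
    while n_a > 0:
        a = k * n_a                   # propensity of the single reaction
        t += random.expovariate(a)    # time to the next event
        if t > t_end:
            break                     # event falls past the time horizon
        n_a -= 1
    return n_a

k_samples = [random.gauss(2.0, 0.2) for _ in range(50)]  # uncertain rate
composite = [
    sum(ssa_isomerization(k) for _ in range(20)) / 20 for k in k_samples
]
mean_a = sum(composite) / len(composite)
print(f"mean remaining A over uncertain k: {mean_a:.1f}")
```

    Each entry of `composite` is the averaged effect of one parameter sample, and the spread across entries reflects the parametric uncertainty on top of the intrinsic stochastic noise.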

  2. Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are composed of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.

  3. A generalized a priori dose uncertainty model of IMRT delivery.

    PubMed

    Jin, Hosang; Palta, Jatinder; Suh, Tae-Suk; Kim, Siyong

    2008-03-01

    Multileaf collimator-based intensity modulated radiation therapy (IMRT) is complex because each intensity modulated field consists of hundreds of subfields, each of which is associated with an intricate interplay of uncertainties. In this study, the authors have revised the previously introduced uncertainty model to provide an a priori accurate prediction of dose uncertainty during treatment planning in IMRT. In the previous model, the dose uncertainties were categorized into space-oriented dose uncertainty (SOU) and nonspace-oriented dose uncertainty (NOU). The revised model further divided the uncertainty sources into planning and delivery. SOU and NOU associated with a planning system were defined as inherent dose uncertainty. A convolution method with seven degrees of freedom was also newly applied to generalize the model for practical clinical cases. The model parameters were quantified through a set of measurements, accumulated routine quality assurance (QA) data, and peer-reviewed publications. The predicted uncertainty maps were compared with dose difference distributions between computations and 108 simple open-field measurements using a two-dimensional diode array detector to verify the validity of the model parameters and robustness of the generalized model. To examine the applicability of the model to overall dose uncertainty prediction in IMRT, a retrospective analysis of QA measurements using the diode array detector for 32 clinical IM fields was also performed. A scatter diagram and a correlation coefficient were employed to investigate a correlation of the predicted dose uncertainty distribution with the dose discrepancy distribution between calculation and delivery. In addition, a gamma test was performed to correlate failed regions in dose verification with the dose uncertainty map. The quantified model parameters well correlated the predicted dose uncertainty with the probable dose difference between calculations and measurements. 

  4. Estimating uncertainty of inference for validation

    SciTech Connect

    Booker, Jane M; Langenbrunner, James R; Hemez, Francois M; Ross, Timothy J

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code are an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  5. Asymmetric Uncertainty Expression for High Gradient Aerodynamics

    NASA Technical Reports Server (NTRS)

    Pinier, Jeremy T.

    2012-01-01

    When the physics of the flow around an aircraft changes very abruptly either in time or space (e.g., flow separation/reattachment, boundary layer transition, unsteadiness, shocks, etc.), the measurements that are performed in a simulated environment like a wind tunnel test or a computational simulation will most likely incorrectly predict the exact location of where (or when) the change in physics happens. There are many reasons for this, including the error introduced by simulating a real system at a smaller scale and at non-ideal conditions, or the error due to turbulence models in a computational simulation. The uncertainty analysis principles that have been developed and are being implemented today do not fully account for uncertainty in the knowledge of the location of abrupt physics changes or sharp gradients, leading to a potentially underestimated uncertainty in those areas. To address this problem, this paper proposes a new asymmetric aerodynamic uncertainty expression containing an extra term to account for a phase uncertainty, whose magnitude is emphasized in high-gradient aerodynamic regions. Additionally, based on previous work, a method for dispersing aerodynamic data within asymmetric uncertainty bounds in a more realistic way has been developed for use within Monte Carlo-type analyses.
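The abstract does not give the dispersion method itself, but the idea of drawing Monte Carlo samples within asymmetric bounds can be sketched with a two-piece normal whose k-sigma half-widths match the lower and upper uncertainties. All numbers below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_asymmetric(nominal, u_minus, u_plus, n, k=2.0):
    """Draw n samples from a two-piece normal whose k-sigma half-widths
    match the asymmetric uncertainty bounds [-u_minus, +u_plus]."""
    z = rng.standard_normal(n)                        # symmetric draws
    scale = np.where(z < 0, u_minus / k, u_plus / k)  # different spread per side
    return nominal + z * scale

# Illustrative case: larger uncertainty above the nominal than below it.
samples = sample_asymmetric(nominal=1.0, u_minus=0.1, u_plus=0.3, n=100_000)
print(samples.mean())  # skewed above the nominal, as intended
```

Because the upper half-width is larger, the sample mean sits above the nominal value, which is the asymmetry a symmetric dispersion would miss.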

  6. Uncertainty Analysis of Air Radiation for Lunar Return Shock Layers

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Johnston, Christopher O.

    2008-01-01

    By leveraging a new uncertainty markup technique, two risk analysis methods are used to compute the uncertainty of lunar-return shock layer radiation predicted by the High temperature Aerothermodynamic Radiation Algorithm (HARA). The effects of epistemic uncertainty, or uncertainty due to a lack of knowledge, are considered for the following modeling parameters: atomic line oscillator strengths, atomic line Stark broadening widths, atomic photoionization cross sections, negative ion photodetachment cross sections, molecular band oscillator strengths, and electron impact excitation rates. First, a simplified shock layer problem consisting of two constant-property equilibrium layers is considered. The results of this simplified problem show that the atomic nitrogen oscillator strengths and Stark broadening widths in both the vacuum ultraviolet and infrared spectral regions, along with the negative ion continuum, are the dominant uncertainty contributors. Next, three variable-property stagnation-line shock layer cases are analyzed: a typical lunar return case and two Fire II cases. For the near-equilibrium lunar return and Fire II 1643-second cases, the resulting uncertainties are very similar to the simplified case. Conversely, the relatively nonequilibrium 1636-second case shows significantly larger influence from electron impact excitation rates of both atoms and molecules. For all cases, the total uncertainty in radiative heat flux to the wall due to epistemic uncertainty in modeling parameters is 30% as opposed to the erroneously small uncertainty levels (plus or minus 6%) found when treating model parameter uncertainties as aleatory (due to chance) instead of epistemic (due to lack of knowledge).
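The aleatory-versus-epistemic distinction in the final sentence can be illustrated with a toy calculation: treating the same parameter uncertainties as random draws (aleatory) lets errors partially cancel, while treating them as intervals (epistemic) adds the bounds with no cancellation. The half-widths below are invented for illustration, not the HARA parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
half_widths = np.array([0.10, 0.08, 0.05, 0.04, 0.02, 0.01])  # hypothetical

# Aleatory treatment: parameters vary at random, so errors partially cancel.
draws = rng.uniform(-half_widths, half_widths, size=(200_000, half_widths.size))
aleatory_95 = np.quantile(np.abs(draws.sum(axis=1)), 0.95)

# Epistemic treatment: each parameter may sit anywhere in its interval,
# so the one-sided bounds add with no cancellation.
epistemic = half_widths.sum()

print(aleatory_95, epistemic)  # the interval result is markedly wider
```

This is the mechanism behind the "erroneously small" aleatory result quoted in the abstract: random sampling of what is really a lack-of-knowledge uncertainty understates the achievable error.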

  7. Majorization entropic uncertainty relations

    NASA Astrophysics Data System (ADS)

    Puchała, Zbigniew; Rudnicki, Łukasz; Życzkowski, Karol

    2013-07-01

    Entropic uncertainty relations in a finite-dimensional Hilbert space are investigated. Making use of the majorization technique we derive explicit lower bounds for the sum of Rényi entropies describing probability distributions associated with a given pure state expanded in eigenbases of two observables. Obtained bounds are expressed in terms of the largest singular values of submatrices of the unitary rotation matrix. Numerical simulations show that for a generic unitary matrix of size N = 5, our bound is stronger than the well-known result of Maassen and Uffink (MU) with a probability larger than 98%. We also show that the bounds investigated are invariant under the dephasing and permutation operations. Finally, we derive a classical analogue of the MU uncertainty relation, which is formulated for stochastic transition matrices. Dedicated to Iwo Białynicki-Birula on the occasion of his 80th birthday.
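For context, the Maassen-Uffink (MU) bound referenced above is straightforward to evaluate numerically: for eigenbases related by a unitary U, every pure state satisfies H(p) + H(q) >= -2 ln max_ij |U_ij|. A quick check with a generic random unitary of size N = 5 (a generic construction for illustration, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5

# Random unitary relating the two eigenbases, via QR of a complex Gaussian.
Z = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
U, _ = np.linalg.qr(Z)

# Random pure state, expanded in both bases.
psi = rng.standard_normal(N) + 1j * rng.standard_normal(N)
psi /= np.linalg.norm(psi)
p = np.abs(psi) ** 2               # probabilities in the first eigenbasis
q = np.abs(U.conj().T @ psi) ** 2  # probabilities in the second eigenbasis

def shannon(prob):
    """Shannon entropy (natural log), ignoring numerically zero weights."""
    prob = prob[prob > 1e-15]
    return -np.sum(prob * np.log(prob))

mu_bound = -2.0 * np.log(np.max(np.abs(U)))  # Maassen-Uffink lower bound
print(shannon(p) + shannon(q), ">=", mu_bound)
```

The majorization bounds of the paper are claimed to beat this MU value for most random unitaries of this size; the snippet only evaluates the classic bound they are compared against.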

  8. Uncertainties in transpiration estimates.

    PubMed

    Coenders-Gerrits, A M J; van der Ent, R J; Bogaard, T A; Wang-Erlandsson, L; Hrachowitz, M; Savenije, H H G

    2014-02-13

    Arising from S. Jasechko et al. Nature 496, 347-350 (2013); doi:10.1038/nature11983. How best to assess the respective importance of plant transpiration over evaporation from open waters, soils and short-term storage such as tree canopies and understories (interception) has long been debated. On the basis of data from lake catchments, Jasechko et al. conclude that transpiration accounts for 80-90% of total land evaporation globally (Fig. 1a). However, another choice of input data, together with more conservative accounting of the related uncertainties, reduces and widens the transpiration ratio estimation to 35-80%. Hence, climate models do not necessarily conflict with observations, but more measurements on the catchment scale are needed to reduce the uncertainty range. There is a Reply to this Brief Communications Arising by Jasechko, S. et al. Nature 506, http://dx.doi.org/10.1038/nature12926 (2014).

  9. Radar stage uncertainty

    USGS Publications Warehouse

    Fulford, J.M.; Davies, W.J.

    2005-01-01

    The U.S. Geological Survey is investigating the performance of radars used for stage (or water-level) measurement. This paper presents a comparison of estimated uncertainties and data for radar water-level measurements with float, bubbler, and wire weight water-level measurements. The radar sensor was also temperature-tested in a laboratory. The uncertainty estimates indicate that radar measurements are more accurate than uncorrected pressure sensors at higher water stages, but are less accurate than pressure sensors at low stages. Field data at two sites indicate that radar sensors may have a small negative bias. Comparison of field radar measurements with wire weight measurements found that the radar tends to measure slightly lower values as stage increases. Copyright ASCE 2005.

  10. Uncertainties in climate stabilization

    SciTech Connect

    Wigley, T. M.; Clarke, Leon E.; Edmonds, James A.; Jacoby, H. D.; Paltsev, S.; Pitcher, Hugh M.; Reilly, J. M.; Richels, Richard G.; Sarofim, M. C.; Smith, Steven J.

    2009-11-01

    We explore the atmospheric composition, temperature and sea level implications of new reference and cost-optimized stabilization emissions scenarios produced using three different Integrated Assessment (IA) models for U.S. Climate Change Science Program (CCSP) Synthesis and Assessment Product 2.1a. We also consider an extension of one of these sets of scenarios out to 2300. Stabilization is defined in terms of radiative forcing targets for the sum of gases potentially controlled under the Kyoto Protocol. For the most stringent stabilization case (“Level 1” with CO2 concentration stabilizing at about 450 ppm), peak CO2 emissions occur close to today, implying a need for immediate CO2 emissions abatement if we wish to stabilize at this level. In the extended reference case, CO2 stabilizes at 1000 ppm in 2200 – but even to achieve this target requires large and rapid CO2 emissions reductions over the 22nd century. Future temperature changes for the Level 1 stabilization case show considerable uncertainty even when a common set of climate model parameters is used (a result of different assumptions for non-Kyoto gases). Uncertainties are about a factor of three when climate sensitivity uncertainties are accounted for. We estimate the probability that warming from pre-industrial times will be less than 2 °C to be about 50%. For one of the IA models, warming in the Level 1 case is greater out to 2050 than in the reference case, due to the effect of decreasing SO2 emissions that occur as a side effect of the policy-driven reduction in CO2 emissions. Sea level rise uncertainties for the Level 1 case are very large, with increases ranging from 12 to 100 cm over 2000 to 2300.

  11. Assessment of Radiative Heating Uncertainty for Hyperbolic Earth Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Mazaheri, Alireza; Gnoffo, Peter A.; Kleb, W. L.; Sutton, Kenneth; Prabhu, Dinesh K.; Brandis, Aaron M.; Bose, Deepak

    2011-01-01

    This paper investigates the shock-layer radiative heating uncertainty for hyperbolic Earth entry, with the main focus being a Mars return. In Part I of this work, a baseline simulation approach involving the LAURA Navier-Stokes code with coupled ablation and radiation is presented, with the HARA radiation code being used for the radiation predictions. Flight cases representative of peak-heating Mars or asteroid return are defined and the strong influence of coupled ablation and radiation on their aerothermodynamic environments is shown. Structural uncertainties inherent in the baseline simulations are identified, with turbulence modeling, precursor absorption, grid convergence, and radiation transport uncertainties combining for a +34% and -24% structural uncertainty on the radiative heating. A parametric uncertainty analysis, which assumes interval uncertainties, is presented. This analysis accounts for uncertainties in the radiation models as well as heat of formation uncertainties in the flow field model. Discussions and references are provided to support the uncertainty range chosen for each parameter. A parametric uncertainty of +47.3% and -28.3% is computed for the stagnation-point radiative heating for the 15 km/s Mars-return case. A breakdown of the largest individual uncertainty contributors is presented, which includes the C3 Swings cross-section, photoionization edge shift, and Opacity Project atomic lines. Combining the structural and parametric uncertainty components results in a total uncertainty of +81.3% and -52.3% for the Mars-return case. In Part II, the computational technique and uncertainty analysis presented in Part I are applied to 1960s-era shock-tube and constricted-arc experimental cases. It is shown that experiments contain shock layer temperatures and radiative flux values relevant to the Mars-return cases of present interest. Comparisons between the predictions and measurements, accounting for the uncertainty in both, are made for a range
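As a side note, the quoted totals are consistent with adding the one-sided structural and parametric components directly rather than combining them in quadrature, which can be verified from the numbers given in the abstract:

```python
# One-sided uncertainty components (percent), taken from the abstract.
structural = (+34.0, -24.0)
parametric = (+47.3, -28.3)

# Interval uncertainties combine by direct addition of each side,
# with no quadrature-style cancellation.
total_up = structural[0] + parametric[0]    # +81.3
total_down = structural[1] + parametric[1]  # -52.3
print(total_up, total_down)
```

Direct addition is the conservative choice appropriate to interval (epistemic) uncertainties, where the components cannot be assumed independent random errors.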

  12. Uncertainty quantified trait predictions

    NASA Astrophysics Data System (ADS)

    Fazayeli, Farideh; Kattge, Jens; Banerjee, Arindam; Schrodt, Franziska; Reich, Peter

    2015-04-01

    Functional traits of organisms are key to understanding and predicting biodiversity and ecological change, which motivates continuous collection of traits and their integration into global databases. Such composite trait matrices are inherently sparse, severely limiting their usefulness for further analyses. On the other hand, traits are characterized by the phylogenetic trait signal, trait-trait correlations and environmental constraints, all of which provide information that could be used to statistically fill gaps. We propose the application of probabilistic models which, for the first time, utilize all three characteristics to fill gaps in trait databases and predict trait values at larger spatial scales. For this purpose we introduce BHPMF, a hierarchical Bayesian extension of Probabilistic Matrix Factorization (PMF). PMF is a machine learning technique which exploits the correlation structure of sparse matrices to impute missing entries. BHPMF additionally utilizes the taxonomic hierarchy for trait prediction. Implemented in the context of a Gibbs sampler MCMC approach, BHPMF provides uncertainty estimates for each trait prediction. We present comprehensive experimental results on the problem of plant trait prediction using the largest database of plant traits, where BHPMF shows strong empirical performance in uncertainty quantified trait prediction, outperforming the state-of-the-art based on point estimates. Further, we show that BHPMF is more accurate when it is confident, whereas the error is high when the uncertainty is high.
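The abstract includes no code, but plain matrix factorization, the starting point that BHPMF extends, can be sketched in a few lines: fit a low-rank model to the observed entries only, then read imputed values off the reconstruction. The trait matrix below is synthetic (rank one by construction), and this sketch omits the Bayesian hierarchy and uncertainty estimates that are BHPMF's actual contribution:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic sparse "trait matrix": rows = species, cols = traits, NaN = gap.
X = np.array([[1.0, 2.0, np.nan],
              [2.0, np.nan, 6.0],
              [3.0, 6.0, 9.0],
              [np.nan, 8.0, 12.0]])
mask = ~np.isnan(X)
rank, lam, lr = 1, 0.01, 0.01

U = 0.1 * rng.standard_normal((X.shape[0], rank))  # species factors
V = 0.1 * rng.standard_normal((X.shape[1], rank))  # trait factors

for _ in range(5000):                       # gradient descent on observed cells
    R = np.where(mask, U @ V.T - X, 0.0)    # residuals on known entries only
    U, V = U - lr * (R @ V + lam * U), V - lr * (R.T @ U + lam * V)

X_filled = np.where(mask, X, U @ V.T)       # impute gaps from the low-rank fit
print(X_filled)
```

Because the synthetic matrix is exactly rank one, the gaps are recovered almost exactly; real trait matrices need the regularization, hierarchy, and posterior sampling the paper describes.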

  13. ENHANCED UNCERTAINTY ANALYSIS FOR SRS COMPOSITE ANALYSIS

    SciTech Connect

    Smith, F.; Phifer, M.

    2011-06-30

    The Composite Analysis (CA) performed for the Savannah River Site (SRS) in 2009 (SRS CA 2009) included a simplified uncertainty analysis. The uncertainty analysis in the CA (Smith et al. 2009b) was limited to considering at most five sources in a separate uncertainty calculation performed for each POA. To perform the uncertainty calculations in a reasonable amount of time, the analysis was limited to 400 realizations and 2,000 years of simulated transport time, and the time steps used for the uncertainty analysis were increased from those used in the CA base case analysis. As part of the CA maintenance plan, the Savannah River National Laboratory (SRNL) committed to improving the CA uncertainty/sensitivity analysis. The previous uncertainty analysis was constrained by the standard GoldSim licensing, which limits the user to running at most four Monte Carlo uncertainty calculations (also called realizations) simultaneously. Some of the limitations on the number of realizations that could be practically run and on the simulation time steps were removed by building a cluster of three HP ProLiant Windows servers with a total of 36 64-bit processors and by licensing the GoldSim DP-Plus distributed processing software. This allowed running as many as 35 realizations simultaneously (one processor is reserved as a master process that controls running the realizations). These enhancements to SRNL computing capabilities made it feasible to run the uncertainty analysis with 1,000 realizations, the time steps employed in the base case CA calculations, more sources, and radionuclide transport simulated for 10,000 years. In addition, an importance screening analysis was performed to identify the class of stochastic variables that have the most significant impact on model uncertainty. This analysis ran the uncertainty model separately, testing the response to variations in the following five sets of model parameters: (a) Kd values (72 parameters for the 36 CA elements in
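The parallelization pattern described, many independent Monte Carlo realizations farmed out to worker processes under a master process, is the same whatever the simulator. A minimal sketch using Python's standard library; the dose model and the sampled Kd distribution below are placeholders, not the SRS/GoldSim model:

```python
import numpy as np
from multiprocessing import Pool

def run_realization(seed):
    """One hypothetical transport realization: sample a Kd value and return
    a made-up peak dose that scales inversely with sorption."""
    rng = np.random.default_rng(seed)
    kd = rng.lognormal(mean=2.0, sigma=0.5)  # placeholder sampled parameter
    return 100.0 / kd                        # placeholder dose model

if __name__ == "__main__":
    # Master process distributes realizations to worker processes,
    # analogous to the DP-Plus master/slave arrangement described above.
    with Pool(processes=2) as pool:
        doses = pool.map(run_realization, range(200))
    print(np.percentile(doses, 95))          # e.g., a 95th-percentile dose
```

Seeding each realization with its index keeps the ensemble reproducible regardless of how the work is divided among processors.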

  14. Nonlinear dynamics and numerical uncertainties in CFD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1996-01-01

    The application of nonlinear dynamics to improve the understanding of numerical uncertainties in computational fluid dynamics (CFD) is reviewed. Elementary examples in the use of dynamics to explain the nonlinear phenomena and spurious behavior that occur in numerics are given. The role of dynamics in the understanding of long time behavior of numerical integrations and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in CFD is explained. The study is complemented with spurious behavior observed in CFD computations.
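A classic elementary example of the spurious behavior discussed here: explicit Euler applied to the logistic ODE u' = u(1 - u) is conjugate to the logistic map with parameter r = 1 + dt, so a time step beyond the linear stability limit (dt = 2 for this problem) produces a stable but entirely spurious period-2 "steady" oscillation rather than an honest blow-up. A sketch of that effect (illustrative, not taken from the paper):

```python
import numpy as np

def euler_tail(dt, u0=0.3, n=2000):
    """March u' = u(1 - u) with explicit Euler; return the last four iterates."""
    u = u0
    for _ in range(n):
        u = u + dt * u * (1.0 - u)
    tail = []
    for _ in range(4):
        u = u + dt * u * (1.0 - u)
        tail.append(u)
    return np.array(tail)

good = euler_tail(dt=0.5)  # converges to the true steady state u = 1
bad = euler_tail(dt=2.2)   # beyond the stability limit: spurious period-2 orbit
print(good, bad)
```

The dangerous part is that the dt = 2.2 run neither diverges nor converges: it settles onto a bounded, repeatable oscillation that looks like dynamics but is purely a numerical artifact.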

  15. Strain Gauge Balance Uncertainty Analysis at NASA Langley: A Technical Review

    NASA Technical Reports Server (NTRS)

    Tripp, John S.

    1999-01-01

    This paper describes a method to determine the uncertainties of measured forces and moments from multi-component force balances used in wind tunnel tests. A multivariate regression technique is first employed to estimate the uncertainties of the six balance sensitivities and 156 interaction coefficients derived from established balance calibration procedures. These uncertainties are then employed to calculate the uncertainties of force-moment values computed from observed balance output readings obtained during tests. Confidence and prediction intervals are obtained for each computed force and moment as functions of the actual measurands. Techniques are discussed for separate estimation of balance bias and precision uncertainties.
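A single-component analogue of the calibration-uncertainty propagation described above can be sketched with ordinary least squares: fit the sensitivity, form the coefficient covariance, and propagate it to the uncertainty of a computed force at a new reading. The full method uses multivariate regression over six sensitivities and 156 interaction coefficients; the numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical single-axis calibration: applied load vs. bridge output.
load = np.linspace(0.0, 100.0, 25)                  # applied calibration loads
output = 0.02 * load + rng.normal(0.0, 0.01, 25)    # readings with noise

A = np.column_stack([np.ones_like(load), load])     # design matrix [1, load]
coef, res, *_ = np.linalg.lstsq(A, output, rcond=None)
dof = len(load) - 2
s2 = res[0] / dof                                   # residual variance estimate
cov = s2 * np.linalg.inv(A.T @ A)                   # coefficient covariance

# Standard uncertainty of the fitted response at a new load point.
x0 = np.array([1.0, 42.0])
u_fit = np.sqrt(x0 @ cov @ x0)
print(coef[1], u_fit)  # estimated sensitivity and its effect at load 42
```

Multiplying u_fit by a t-quantile for the residual degrees of freedom gives the confidence interval; widening it by the residual variance itself gives the prediction interval for a single future reading.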

  16. Collective uncertainty entanglement test.

    PubMed

    Rudnicki, Łukasz; Horodecki, Paweł; Zyczkowski, Karol

    2011-10-01

    For a given pure state of a composite quantum system we analyze the product of its projections onto a set of locally orthogonal separable pure states. We derive a bound for this product analogous to the entropic uncertainty relations. For bipartite systems the bound is saturated for maximally entangled states and it allows us to construct a family of entanglement measures, which we shall call collectibility. As these quantities are experimentally accessible, the approach advocated contributes to the task of experimental quantification of quantum entanglement, while for a three-qubit system it is capable of identifying genuine three-party entanglement.

  17. Collective Uncertainty Entanglement Test

    NASA Astrophysics Data System (ADS)

    Rudnicki, Łukasz; Horodecki, Paweł; Życzkowski, Karol

    2011-10-01

    For a given pure state of a composite quantum system we analyze the product of its projections onto a set of locally orthogonal separable pure states. We derive a bound for this product analogous to the entropic uncertainty relations. For bipartite systems the bound is saturated for maximally entangled states and it allows us to construct a family of entanglement measures, which we shall call collectibility. As these quantities are experimentally accessible, the approach advocated contributes to the task of experimental quantification of quantum entanglement, while for a three-qubit system it is capable of identifying genuine three-party entanglement.

  18. Schwarzschild mass uncertainty

    NASA Astrophysics Data System (ADS)

    Davidson, Aharon; Yellin, Ben

    2014-02-01

    Applying Dirac's procedure to -dependent constrained systems, we derive a reduced total Hamiltonian, resembling an upside down harmonic oscillator, which generates the Schwarzschild solution in the mini super-spacetime. Associated with the now -dependent Schrödinger equation is a tower of localized Guth-Pi-Barton wave packets, orthonormal and non-singular, admitting equally spaced average-`energy' levels. Our approach is characterized by a universal quantum mechanical uncertainty structure which enters the game already at the flat spacetime level, and accompanies the massive Schwarzschild sector for any arbitrary mean mass. The average black hole horizon surface area is linearly quantized.

  19. Fundamental "Uncertainty" in Science

    NASA Astrophysics Data System (ADS)

    Reichl, Linda E.

    The conference on "Uncertainty and Surprise" was concerned with our fundamental inability to predict future events. How can we restructure organizations to effectively function in an uncertain environment? One concern is that many large complex organizations are built on mechanical models, but mechanical models cannot always respond well to "surprises." An underlying assumption about mechanical models is that, if we give them enough information about the world, they will know the future accurately enough that there will be few or no surprises. The assumption is that the future is basically predictable and deterministic.

  20. Satellite altitude determination uncertainties

    NASA Technical Reports Server (NTRS)

    Siry, J. W.

    1972-01-01

    Satellite altitude determination uncertainties will be discussed from the standpoint of the GEOS-C satellite, and from the longer-range viewpoint afforded by the Geopause concept. Data are focused on methods for short-arc tracking which are essentially geometric in nature. One method uses combinations of lasers and collocated cameras. The other relies only on lasers, using three or more to obtain the position fix. Two typical locales are examined: the Caribbean area, and a region associated with tracking sites at Goddard, Bermuda, and Canada which encompasses a portion of the Gulf Stream in which meanders develop.

  1. Measurement uncertainty analysis techniques applied to PV performance measurements

    SciTech Connect

    Wells, C.

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. Valid data are defined here as data having known and documented paths of origin (including theory), measurements, traceability to measurement standards, computations, and uncertainty analysis of results.
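The interval described above is conventionally computed by root-sum-squaring the component standard uncertainties and applying a coverage factor. A minimal sketch with an invented PV measurement uncertainty budget (the component names and values are illustrative only):

```python
import math

# Hypothetical uncertainty budget for a PV module power measurement (percent).
components = {
    "irradiance sensor calibration": 0.8,
    "temperature correction": 0.4,
    "data acquisition": 0.2,
    "spectral mismatch": 0.5,
}

# Combined standard uncertainty: root-sum-square of independent components.
u_combined = math.sqrt(sum(u**2 for u in components.values()))
# Expanded uncertainty with coverage factor k = 2 (~95% coverage interval).
U_expanded = 2.0 * u_combined

print(f"u_c = {u_combined:.2f}%, U(k=2) = {U_expanded:.2f}%")
```

The measured power P is then reported as P plus or minus U_expanded percent, the interval within which the true value is believed to lie at the stated coverage.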

  2. Identification of severe accident uncertainties

    SciTech Connect

    Rivard, J.B.; Behr, V.L.; Easterling, R.G.; Griesmeyer, J.M.; Haskin, F.E.; Hatch, S.W.; Kolaczkowski, A.M.; Lipinski, R.J.; Sherman, M.P.; Taig, A.R.

    1984-09-01

    Understanding of severe accidents in light-water reactors is currently beset with uncertainty. Because the uncertainties that are present limit the capability to analyze the progression and possible consequences of such accidents, they restrict the technical basis for regulatory actions by the US Nuclear Regulatory Commission (NRC). It is thus necessary to attempt to identify the sources and quantify the influence of these uncertainties. As a part of ongoing NRC severe-accident programs at Sandia National Laboratories, a working group was formed to pool relevant knowledge and experience in assessing the uncertainties attending present (1983) knowledge of severe accidents. This initial report of the Severe Accident Uncertainty Analysis (SAUNA) working group has as its main goal the identification of a consolidated list of uncertainties that affect in-plant processes and systems. Many uncertainties have been identified. A set of key uncertainties summarizes many of the identified uncertainties. Quantification of the influence of these uncertainties, a necessary second step, is not attempted in the present report, although attempts are made qualitatively to demonstrate the relevance of the identified uncertainties.

  3. Investment, regulation, and uncertainty

    PubMed Central

    Smyth, Stuart J; McDonald, Jillian; Falck-Zepeda, Jose

    2014-01-01

    As with any technological innovation, time refines the technology, improving upon the original version of the innovative product. The initial GM crops had single traits for either herbicide tolerance or insect resistance. Current varieties have both of these traits stacked together and in many cases other abiotic and biotic traits have also been stacked. This innovation requires investment. While this is relatively straightforward, certain conditions need to exist such that investments can be facilitated. The principal requirement for investment is that regulatory frameworks render consistent and timely decisions. If the certainty of regulatory outcomes weakens, the potential for changes in investment patterns increases. This article provides a summary background to the leading plant breeding technologies that are either currently being used to develop new crop varieties or are in the pipeline to be applied to plant breeding within the next few years. Challenges for existing regulatory systems are highlighted. Utilizing an option value approach from the investment literature, an assessment of uncertainty regarding the regulatory approval for these varying techniques is undertaken. This research highlights which technology development options have the greatest degree of uncertainty and hence, which ones might be expected to see an investment decline. PMID:24499745

  4. The maintenance of uncertainty

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    Introduction Preliminaries State-space dynamics Linearized dynamics of infinitesimal uncertainties Instantaneous infinitesimal dynamics Finite-time evolution of infinitesimal uncertainties Lyapunov exponents and predictability The Baker's apprentice map Infinitesimals and predictability Dimensions The Grassberger-Procaccia algorithm Towards a better estimate from Takens' estimators Space-time-separation diagrams Intrinsic limits to the analysis of geometry Takens' theorem The method of delays Noise Prediction, prophecy, and pontification Introduction Simulations, models and physics Ground rules Data-based models: dynamic reconstructions Analogue prediction Local prediction Global prediction Accountable forecasts of chaotic systems Evaluating ensemble forecasts The annulus Prophecies Aids for more reliable nonlinear analysis Significant results: surrogate data, synthetic data and self-deception Surrogate data and the bootstrap Surrogate predictors: Is my model any good? Hints for the evaluation of new techniques Avoiding simple straw men Feasibility tests for the identification of chaos On detecting "tiny" data sets Building models consistent with the observations Cost functions ι-shadowing: Is my model any good? (reprise) Casting infinitely long shadows (out-of-sample) Distinguishing model error and system sensitivity Forecast error and model sensitivity Accountability Residual predictability Deterministic or stochastic dynamics? Using ensembles to distinguish the expectation from the expected Numerical Weather Prediction Probabilistic prediction with a deterministic model The analysis Constructing and interpreting ensembles The outlook(s) for today Conclusion Summary
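One of the listed topics, Lyapunov exponents and predictability, has a compact numerical illustration: the exponent of the logistic map at r = 4, whose analytic value is ln 2, can be estimated by averaging the log of the local stretching factor along an orbit (a standard textbook example, not drawn from this work):

```python
import math

def lyapunov_logistic(r, x0=0.2, n=100_000, burn=1000):
    """Estimate the Lyapunov exponent of x -> r x (1 - x) by averaging
    log |f'(x)| = log |r (1 - 2x)| along a long orbit."""
    x = x0
    for _ in range(burn):              # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

lam = lyapunov_logistic(4.0)           # analytic value is ln 2 ~ 0.693
print(lam)
```

A positive exponent quantifies the predictability horizon: infinitesimal uncertainties grow like exp(lam * t), which is the mechanism behind several of the chapter topics above.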

  5. Uncertainty in adaptive capacity

    NASA Astrophysics Data System (ADS)

    Adger, W. Neil; Vincent, Katharine

    2005-03-01

    The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. To cite this article: W.N. Adger, K. Vincent, C. R. Geoscience 337 (2005).

  6. Antarctic Photochemistry: Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Stewart, Richard W.; McConnell, Joseph R.

    1999-01-01

    Understanding the photochemistry of the Antarctic region is important for several reasons. Analysis of ice cores provides historical information on several species such as hydrogen peroxide and sulfur-bearing compounds. The former can potentially provide information on the history of oxidants in the troposphere and the latter may shed light on DMS-climate relationships. Extracting such information requires that we be able to model the photochemistry of the Antarctic troposphere and relate atmospheric concentrations to deposition rates and sequestration in the polar ice. This paper deals with one aspect of the uncertainty inherent in photochemical models of the high latitude troposphere: that arising from imprecision in the kinetic data used in the calculations. Such uncertainties in Antarctic models tend to be larger than those in models of mid to low latitude clean air. One reason is the lower temperatures which result in increased imprecision in kinetic data, assumed to be best characterized at 298K. Another is the inclusion of a DMS oxidation scheme in the present model. Many of the rates in this scheme are less precisely known than are rates in the standard chemistry used in many stratospheric and tropospheric models.
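The temperature dependence of kinetic imprecision mentioned here is commonly parameterized (for example, in the NASA/JPL chemical kinetics evaluations) by an uncertainty factor that grows as the temperature moves away from 298 K, where rates are best characterized. A sketch with an invented rate-constant entry (f298 and g below are illustrative values, not evaluation data):

```python
import math

def rate_uncertainty_factor(f298, g, T):
    """Temperature-dependent uncertainty factor in the style of the NASA/JPL
    kinetics evaluations: smallest at 298 K, growing away from it."""
    return f298 * math.exp(abs(g * (1.0 / T - 1.0 / 298.0)))

# Hypothetical entry: 20% uncertainty at 298 K, temperature coefficient g = 100 K.
for T in (298.0, 240.0, 200.0):  # down toward Antarctic boundary-layer values
    print(T, round(rate_uncertainty_factor(1.2, 100.0, T), 3))
```

This is the quantitative sense in which the abstract's "lower temperatures ... result in increased imprecision in kinetic data": the same rate constant carries a wider uncertainty band at 200 K than at 298 K.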

  7. Uncertainty in Wildfire Behavior

    NASA Astrophysics Data System (ADS)

    Finney, M.; Cohen, J. D.

    2013-12-01

    The challenge of predicting or modeling fire behavior is well recognized by scientists and managers who attempt predictions of fire spread rate or growth. At the scale of the spreading fire, the uncertainty in winds, moisture, fuel structure, and fire location makes accurate predictions difficult, and the non-linear response of fire spread to these conditions means that average behavior is poorly represented by average environmental parameters. Even more difficult are estimations of threshold behaviors (e.g. spread/no-spread, crown fire initiation, ember generation and spotting) because the fire responds as a step function to small changes in one or more environmental variables, translating to dynamical feedbacks and unpredictability. Recent research shows that ignition of fuel particles, itself a threshold phenomenon, depends on flame contact, which is not steady or uniform. Recent studies of flame structure in both spreading and stationary fires reveal that much of the non-steadiness of the flames as they contact fuel particles results from buoyant instabilities that produce quasi-periodic flame structures. With fuel particle ignition produced by time-varying heating and short-range flame contact, future improvements in fire behavior modeling will likely require statistical approaches to deal with the uncertainty at all scales, including the level of heat transfer, the fuel arrangement, and weather.
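The threshold (step-function) response described above is exactly why a statistical treatment helps: with uncertain inputs, the useful output is a spread probability rather than a deterministic spread/no-spread call. A toy sketch with an invented threshold model and invented forecast uncertainties:

```python
import numpy as np

rng = np.random.default_rng(5)

def spreads(wind, moisture):
    """Hypothetical threshold response: fire spreads only when a wind-driven
    index exceeds a critical value (a step function, as in the abstract)."""
    return wind * (1.0 - moisture) > 4.0

# Hypothetical forecast: wind 5 +/- 1.5 m/s, fuel moisture 0.20 +/- 0.05.
wind = rng.normal(5.0, 1.5, 100_000)
moist = np.clip(rng.normal(0.20, 0.05, 100_000), 0.0, 1.0)

p_spread = spreads(wind, moist).mean()
print(p_spread)  # a probability of spread, not a yes/no answer
```

Evaluating the step function at the mean inputs would return a single yes/no answer; sampling the input uncertainty instead exposes how close the situation sits to the threshold.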

  8. Probabilistic Mass Growth Uncertainties

    NASA Technical Reports Server (NTRS)

    Plumer, Eric; Elliott, Darren

    2013-01-01

    Mass has been widely used as a variable input parameter for Cost Estimating Relationships (CER) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, hence adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of an ongoing NASA effort to publish mass growth datasheets for adjusting single-point Technical Baseline Estimates (TBE) of masses of space instruments as well as spacecraft, for both Earth-orbiting and deep space missions at various stages of a project's lifecycle. This paper also discusses the long-term strategy of NASA Headquarters in publishing similar results, using a variety of cost-driving metrics, on an annual basis. This paper provides quantitative results that show decreasing mass growth uncertainties as mass estimate maturity increases. The analysis is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.

  9. LCA data quality: sensitivity and uncertainty analysis.

    PubMed

    Guo, M; Murphy, R J

    2012-10-01

    Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes, which could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of the robustness of LCAs, especially comparative assessments. This study also presents an approach to integrate statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this has enabled us to assign confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. The uncertainty analysis, combined with the sensitivity analysis carried out in this study, has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions. PMID:22854094

  10. New Programming Environments for Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Poeter, E. P.; Banta, E. R.; Christensen, S.; Cooley, R. L.; Ely, D. M.; Babendreier, J.; Leavesley, G.; Tonkin, M.; Julich, R.

    2005-12-01

    We live in a world of faster computers, better GUIs and visualization technology, increasing international cooperation made possible by new digital infrastructure, new agreements between US federal agencies (such as ISCMEM), new European Union programs (such as Harmoniqua), and greater collaboration between US university scientists through CUAHSI. These changes provide new resources for tackling the difficult job of quantifying how well our models perform. This talk introduces new programming environments that take advantage of these developments and will change the paradigm of how we develop methods for uncertainty evaluation. For example, the programming environments provided by the COSU API, the JUPITER API, and the Sensitivity/Optimization Toolbox provide enormous opportunities for faster and more meaningful evaluation of uncertainties. Instead of waiting years for ideas and theories to be compared in the complex circumstances of interest to resource managers, these new programming environments will expedite the process. In the new paradigm, unproductive ideas and theories will be revealed more quickly, and productive ideas and theories will more quickly be used to address our increasingly difficult water resources problems. As examples, two ideas in JUPITER API applications are presented: uncertainty correction factors that account for system complexities not represented in models, and PPR and OPR statistics used to identify new data needed to reduce prediction uncertainty.

  11. On the worst case uncertainty and its evaluation

    NASA Astrophysics Data System (ADS)

    Fabbiano, L.; Giaquinto, N.; Savino, M.; Vacca, G.

    2016-02-01

    The paper is a review of the worst case uncertainty (WCU) concept, which is neglected in the Guide to the Expression of Uncertainty in Measurement (GUM) but necessary for a correct uncertainty assessment in a number of practical cases involving distributions with compact support. First, it is highlighted that knowledge of the WCU is necessary to choose a sensible coverage factor, associated with a sensible coverage probability: the Maximum Acceptable Coverage Factor (MACF) is introduced as a convenient index to guide this choice. Second, propagation rules for the worst-case uncertainty are provided in matrix and scalar form. It is highlighted that when WCU propagation cannot be computed, the Monte Carlo approach is the only way to obtain a correct expanded uncertainty assessment, in contrast to what can be inferred from the GUM. Third, examples of applications of the formulae to ordinary instruments and measurements are given. An example taken from the GUM is also discussed, underlining some inconsistencies in it.
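    The contrast between worst-case and GUM-style propagation can be sketched for the scalar, first-order case. This is an illustrative reconstruction, not the paper's formulae; the helper names `propagate_wcu` and `propagate_gum` and the example numbers are assumptions.

```python
import math

def propagate_wcu(sensitivities, bounds):
    """Worst-case uncertainty: absolute sensitivities times bounds, summed."""
    return sum(abs(s) * u for s, u in zip(sensitivities, bounds))

def propagate_gum(sensitivities, std_uncertainties):
    """GUM-style combined standard uncertainty: root-sum-square of contributions."""
    return math.sqrt(sum((s * u) ** 2 for s, u in zip(sensitivities, std_uncertainties)))

# Example: V = I * R with I = 2 A, R = 10 ohm
I, R = 2.0, 10.0
sens = [R, I]            # dV/dI, dV/dR
bounds = [0.01, 0.05]    # input uncertainty bounds on I (A) and R (ohm)
wcu = propagate_wcu(sens, bounds)   # 0.1 + 0.1 = 0.2 V
gum = propagate_gum(sens, bounds)   # ~0.141 V: quadrature is smaller
```

    Because the worst-case rule sums absolute contributions while the GUM rule combines them in quadrature, the WCU is always at least as large as the combined standard uncertainty for the same input bounds.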

  12. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  13. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
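    The effect the paper corrects for can be illustrated with a batch least-squares sketch (not the paper's recursive formulation): residuals generated by an AR(1) process inflate the true parameter scatter relative to the white-residual covariance, and a Bartlett-style autocorrelation factor yields a more honest uncertainty. The model, noise settings, and lag cutoff below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
x = rng.uniform(-1.0, 1.0, N)
X = np.column_stack([np.ones(N), x])

# AR(1) colored residuals, as routinely encountered in parameter estimation
phi = 0.8
e = np.zeros(N)
w = rng.normal(0.0, 0.1, N)
for k in range(1, N):
    e[k] = phi * e[k - 1] + w[k]
y = X @ np.array([1.0, 0.5]) + e

theta, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ theta
s2 = r @ r / (N - 2)
cov_white = s2 * np.linalg.inv(X.T @ X)      # assumes white residuals

# inflate the covariance using the residual autocorrelation (Bartlett-style)
max_lag = 50
rho = np.array([np.corrcoef(r[:-k], r[k:])[0, 1] for k in range(1, max_lag + 1)])
inflation = 1 + 2 * np.sum((1 - np.arange(1, max_lag + 1) / N) * rho)
cov_corr = cov_white * inflation
```

    With colored residuals the corrected standard errors are several times larger than the white-residual ones, which is the kind of mismatch the paper's recursive correction removes in real time.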

  14. Earthquake Loss Estimation Uncertainties

    NASA Astrophysics Data System (ADS)

    Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Aleksander

    2013-04-01

    The paper addresses the reliability of loss assessment following strong earthquakes by worldwide systems operating in emergency mode. Timely and correct action just after an event can result in significant benefits in saving lives. In this case the information about possible damage and the expected number of casualties is critical for taking decisions about search and rescue operations and offering humanitarian assistance. Such rough information may be provided, first of all, by global systems in emergency mode. The experience of earthquake disasters in different earthquake-prone countries shows that the officials who are in charge of emergency response at national and international levels often lack prompt and reliable information on the disaster scope. Uncertainties in the parameters used in the estimation process are numerous and large: knowledge about the physical phenomena and uncertainties in the parameters used to describe them; the global adequacy of modeling techniques to the actual physical phenomena; the actual distribution of the population at risk at the very time of the shaking (with respect to the immediate threat: buildings or the like); knowledge about the source of shaking, etc. One need not be a specialist to understand, for example, that the way a given building responds to a given shaking obeys mechanical laws which are poorly known (if not out of the reach of engineers for a large portion of the building stock); while a carefully engineered modern building is approximately predictable, this is far from the case for older buildings, which make up the bulk of inhabited buildings. The way the population inside the buildings at the time of shaking is affected by the physical damage caused to the buildings is far from precisely known.
The paper analyzes the influence of uncertainties in the determination of strong event parameters by alert seismological surveys and of the simulation models used at all stages, from estimating shaking intensity

  15. Uncertainty relation in Schwarzschild spacetime

    NASA Astrophysics Data System (ADS)

    Feng, Jun; Zhang, Yao-Zhong; Gould, Mark D.; Fan, Heng

    2015-04-01

    We explore the entropic uncertainty relation in the curved background outside a Schwarzschild black hole, and find that Hawking radiation introduces a nontrivial modification of the uncertainty bound for a particular observer, which could therefore be witnessed experimentally by a proper uncertainty game. We first investigate an uncertainty game between a freely falling observer and his static partner holding a quantum memory initially entangled with the quantum system to be measured. Due to the information loss from Hawking decoherence, we find an inevitable increase of the uncertainty on the outcome of measurements in the view of the static observer, which depends on the mass of the black hole, the distance of the observer from the event horizon, and the mode frequency of the quantum memory. To illustrate the generality of this paradigm, we relate the entropic uncertainty bound to other uncertainty probes, e.g., time-energy uncertainty. In an alternative game between two static players, we show that the quantum information of a qubit can be transferred to the quantum memory through a bath of fluctuating quantum fields outside the black hole. For a particular choice of initial state, we show that Hawking decoherence cannot counteract entanglement generation after the dynamical evolution of the system, which triggers an effectively reduced uncertainty bound that violates the intrinsic limit -log2 c. Numerical estimation for a proper choice of initial state shows that our result is comparable with possible real experiments. Finally, a discussion of the black hole firewall paradox in the context of the entropic uncertainty relation is given.

  16. Comprehensive Approach to Verification and Validation of CFD Simulations Applied to Backward Facing Step - Application of CFD Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; LLie, Marcel; Shallhorn, Paul A.

    2012-01-01

    There are inherent uncertainties and errors associated with using Computational Fluid Dynamics (CFD) to predict the flow field, and there is no standard method for evaluating uncertainty in the CFD community. This paper describes an approach to validate the uncertainty in using CFD. The method uses state-of-the-art uncertainty analysis, applying different turbulence models, and draws conclusions on which models provide the least uncertainty and which models most accurately predict the flow over a backward facing step.

  17. A genetic uncertainty problem.

    PubMed

    Tautz, D

    2000-11-01

    The existence of genes that, when knocked out, result in no obvious phenotype has puzzled biologists for many years. The phenomenon is often ascribed to redundancy in regulatory networks, caused by duplicated genes. However, a recent systematic analysis of data from the yeast genome projects does not support a link between gene duplications and redundancies. An alternative explanation suggests that genes might also evolve by very weak selection, which would mean that their true function cannot be studied in normal laboratory experiments. This problem is comparable to Heisenberg's uncertainty relationship in physics. It is possible to formulate an analogous relationship for biology, which, at its extreme, predicts that the understanding of the full function of a gene might require experiments on an evolutionary scale, involving the entire effective population size of a given species.

  18. Uncertainty as Certainty

    NASA Astrophysics Data System (ADS)

    Petzinger, Tom

    I am trying to make money in the biotech industry from complexity science. And I am doing it with inspiration that I picked up on the edge of Appalachia spending time with June Holley and ACEnet when I was a Wall Street Journal reporter. I took some of those ideas to Pittsburgh, in biotechnology, in a completely private setting with an economic development focus, but also with a mission to return profit to private capital. And we are doing that. I submit as a hypothesis, something we are figuring out in the post-industrial era, that business evolves. It is not the definition of business, but business critically involves the design of systems in which uncertainty is treated as a certainty. That is what I have seen and what I have tried to put into practice.

  19. Living with uncertainty

    SciTech Connect

    Rau, N.; Fong, C.C.; Grigg, C.H.; Silverstein, B.

    1994-11-01

    In the electric utility industry, only one thing can be guaranteed with absolute certainty: one lives and works with many unknowns. Thus, the industry has embraced probability methods to varying degrees over the last 25 years. These techniques aid decision makers in planning, operations, and maintenance by quantifying uncertainty. Examples include power system reliability, production costing simulation, and assessment of environmental factors. A series of brainstorming sessions was conducted by the Application of Probability Methods (APM) Subcommittee of the IEEE Power Engineering Society to identify research and development needs and to ask the question, "Where should we go from here?" The subcommittee examined areas of need in data development, applications, and methods for decision making. The purpose of this article is to share the findings of the APM members with a broader audience and to invite comments and participation.

  20. Generalized uncertainty relations

    NASA Astrophysics Data System (ADS)

    Akten, Burcu Elif

    1999-12-01

    The Heisenberg uncertainty relation has been put into a stronger form by Schrödinger and Robertson. This inequality is also canonically invariant. We ask if there are other independent inequalities for higher orders. The aim is to find a systematic way of writing these inequalities. After an overview of the Heisenberg and Schrödinger-Robertson inequalities and their minimal states in Chapter 1, we start by constructing the higher order invariants in Chapter 2. We construct some of the simpler invariants by direct calculation, which suggests a schematic way of representing all invariants. Diagrams describing invariants help us see their structure and their symmetries immediately, and various simplifications in their calculations are obtained as a result. With these new tools, a more systematic approach to constructing and classifying invariants using group theory is introduced next. In Chapter 4, various methods of obtaining higher order inequalities are discussed and compared. First, the original approach of the HUR is applied to the next order and a new inequality is obtained by working in a specific frame where the expectation value tensor is in its simplest form. However, this method cannot be used for higher orders, as the significant simplifications of a specific frame are no longer available. The second method consists of working with a state vector written as a sum of the eigenvectors of the operator (qp)s and has a Gaussian distribution about the state which makes s=0. Finally, we try to obtain a general inequality for a whole class of invariants by writing the state vector as a sum of harmonic oscillator eigenstates. In Chapter 4, realistic measurements of the canonical variables are discussed in relation to the uncertainty relations. Finally, in Chapter 5, squeezed state generation by an optical parametric oscillator is described as an explicit demonstration of the HUR for the electromagnetic field. A similar approach is developed for testing higher order

  1. Sensitivity and Uncertainty Analysis Shell

    1999-04-20

    SUNS (Sensitivity and Uncertainty Analysis Shell) is a 32-bit application that runs under Windows 95/98 and Windows NT. It is designed to aid in statistical analyses for a broad range of applications. The class of problems for which SUNS is suitable is generally defined by two requirements: 1. A computer code is developed or acquired that models some processes for which input is uncertain and the user is interested in statistical analysis of the output of that code. 2. The statistical analysis of interest can be accomplished using Monte Carlo analysis. The implementation then requires that the user identify which input to the process model is to be manipulated for statistical analysis. With this information, the changes required to loosely couple SUNS with the process model can be completed. SUNS is then used to generate the required statistical sample and the user-supplied process model analyses the sample. The SUNS post processor displays statistical results from any existing file that contains sampled input and output values.
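    A loose coupling of this kind can be sketched in a few lines. The `process_model` function, the input distributions, and all numbers below are hypothetical stand-ins for the user-supplied code and its uncertain inputs, not part of SUNS itself.

```python
import math
import random
import statistics

def process_model(params):
    """Stand-in for the user-supplied process code; any black box works."""
    return 100.0 * math.exp(-params["k"] * params["t"])

def generate_sample(n, seed=1):
    """The 'shell' side: draw the uncertain inputs the user flagged."""
    rng = random.Random(seed)
    return [{"k": rng.gauss(0.1, 0.02), "t": rng.uniform(8.0, 12.0)}
            for _ in range(n)]

# shell generates the sample, the process model analyses it,
# and a post-processing step summarises the sampled outputs
sample = generate_sample(5000)
outputs = [process_model(p) for p in sample]
mean_out = statistics.mean(outputs)
p95 = sorted(outputs)[int(0.95 * len(outputs)) - 1]
```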

  2. Thermodynamics of Black Holes and the Symmetric Generalized Uncertainty Principle

    NASA Astrophysics Data System (ADS)

    Dutta, Abhijit; Gangopadhyay, Sunandan

    2016-06-01

    In this paper, we have investigated the thermodynamics of Schwarzschild and Reissner-Nordström black holes using the symmetric generalised uncertainty principle, which contains correction terms involving momentum and position uncertainty. The mass-temperature relationship and the heat capacity for these black holes have been computed, from which the critical and remnant masses have been obtained. The entropy is found to satisfy the area law up to leading order logarithmic corrections and corrections of the form A² (which is a new finding in this paper) from the symmetric generalised uncertainty principle.

  3. Uncertainty-like relations of the relative entropy of coherence

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Li, Fei; Chen, Jun; Xing, Wei

    2016-08-01

    Quantum coherence is an important physical resource in quantum computation and quantum information processing. In this paper, we first obtain an uncertainty-like expression relating the two coherences contained in the corresponding local parts of a bipartite quantum system. This uncertainty-like inequality shows that the larger the coherence of one subsystem, the less coherence is contained in the other subsystem. Further, we discuss in detail the uncertainty-like relation among three single-partite quantum systems. We show that the coherence contained in a pure tripartite quantum system is greater than the sum of the coherences of all local subsystems.

  4. Parameter uncertainty for ASP models

    SciTech Connect

    Knudsen, J.K.; Smith, C.L.

    1995-10-01

    The steps involved in incorporating parameter uncertainty into the Nuclear Regulatory Commission (NRC) accident sequence precursor (ASP) models are covered in this paper. Three different uncertainty distributions (i.e., lognormal, beta, gamma) were evaluated to determine the most appropriate distribution. From the evaluation, it was determined that the lognormal distribution will be used for the ASP model uncertainty parameters. Selection of the uncertainty parameters for the basic events is also discussed. This paper covers the process of determining uncertainty parameters for the supercomponent basic events (i.e., basic events that are comprised of more than one component, which can have more than one failure mode) that are utilized in the ASP models. Once this is completed, the ASP model is ready to be utilized to propagate parameter uncertainty for event assessments.
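    A minimal sketch of the lognormal convention common in probabilistic risk assessment (a median plus a 95th/50th percentile error factor), applied to two illustrative basic events combined through an OR gate. The `lognormal_samples` helper, the event probabilities, and the error factors are assumptions for illustration, not values from the ASP models.

```python
import math
import random

def lognormal_samples(median, error_factor, n, seed=7):
    """PRA-style lognormal spec: median plus a 95th/50th percentile error factor."""
    rng = random.Random(seed)
    sigma = math.log(error_factor) / 1.645   # EF = exp(1.645 * sigma)
    mu = math.log(median)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

# two illustrative basic-event probabilities combined through an OR gate
pA = lognormal_samples(1e-3, 3.0, 10000, seed=1)
pB = lognormal_samples(5e-4, 10.0, 10000, seed=2)
top = [a + b - a * b for a, b in zip(pA, pB)]  # P(A or B), independence assumed
mean_top = sum(top) / len(top)
median_pA = sorted(pA)[len(pA) // 2]
```

    Propagating the sampled parameters through the gate rather than plugging in point values is what makes the top-event estimate carry an uncertainty distribution of its own.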

  5. Impact of discharge data uncertainty on nutrient load uncertainty

    NASA Astrophysics Data System (ADS)

    Westerberg, Ida; Gustavsson, Hanna; Sonesten, Lars

    2016-04-01

    Uncertainty in the rating-curve model of the stage-discharge relationship leads to uncertainty in discharge time series. These uncertainties in turn affect many other analyses based on discharge data, such as nutrient load estimations. It is important to understand how large the impact of discharge data uncertainty is on such analyses, since they are often used as the basis to take important environmental management decisions. In the Baltic Sea basin, nutrient load estimates from river mouths are a central information basis for managing and reducing eutrophication in the Baltic Sea. In this study we investigated rating curve uncertainty and its propagation to discharge data uncertainty and thereafter to uncertainty in the load of phosphorous and nitrogen for twelve Swedish river mouths. We estimated rating curve uncertainty using the Voting Point method, which accounts for random and epistemic errors in the stage-discharge relation and allows drawing multiple rating-curve realisations consistent with the total uncertainty. We sampled 40,000 rating curves, and for each sampled curve we calculated a discharge time series from 15-minute water level data for the period 2005-2014. Each discharge time series was then aggregated to daily scale and used to calculate the load of phosphorous and nitrogen from linearly interpolated monthly water samples, following the currently used methodology for load estimation. Finally the yearly load estimates were calculated and we thus obtained distributions with 40,000 load realisations per year - one for each rating curve. We analysed how the rating curve uncertainty propagated to the discharge time series at different temporal resolutions, and its impact on the yearly load estimates. Two shorter periods of daily water quality sampling around the spring flood peak allowed a comparison of load uncertainty magnitudes resulting from discharge data with those resulting from the monthly water quality sampling.
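    The propagation chain described (rating-curve realisations to discharge series to a distribution of annual loads) can be sketched as follows. The power-law rating curve, the parameter distributions, and the simplified unit bookkeeping are illustrative stand-ins for the Voting Point method and the Swedish data, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(42)
h = rng.uniform(0.5, 2.0, 365)        # daily stage (m), illustrative
conc = rng.uniform(0.02, 0.08, 365)   # daily nutrient concentration, illustrative

n_curves = 2000
a = rng.normal(5.0, 0.5, n_curves)    # rating-curve realisations Q = a * h**b
b = rng.normal(1.6, 0.1, n_curves)

loads = np.empty(n_curves)
for i in range(n_curves):
    q = a[i] * h ** b[i]              # one discharge time series per sampled curve
    loads[i] = np.sum(conc * q)       # yearly load for this realisation
lo, hi = np.percentile(loads, [2.5, 97.5])  # load uncertainty interval
```

    Each sampled curve yields one load realisation, so the spread of `loads` directly expresses how rating-curve uncertainty propagates into the annual load estimate.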

  6. Optimizing Advanced Power System Designs Under Uncertainty

    SciTech Connect

    Rubin, E.S.; Diwekar; Frey, H.C.

    1996-12-31

    This paper describes recent developments in ongoing research to develop and demonstrate advanced computer-based methods for dealing with uncertainties that are critical to the design of advanced coal-based power systems. Recent developments include new deterministic and stochastic methods for simulation, optimization, and synthesis of advanced process designs. Results are presented illustrating the use of these new modeling tools for the design and analysis of several advanced systems of current interest to the U.S. Department of Energy, including the technologies of integrated gasification combined cycle (IGCC), advanced pressurized fluidized-bed combustion (PFBC), and the externally fired combined cycle (EFCC) process. The new methods developed in this research can be applied generally to any chemical or energy conversion process to reduce the technological risks associated with uncertainties in process performance and cost.

  7. Dealing with Uncertainties in Initial Orbit Determination

    NASA Technical Reports Server (NTRS)

    Armellin, Roberto; Di Lizia, Pierluigi; Zanetti, Renato

    2015-01-01

    A method to deal with uncertainties in initial orbit determination (IOD) is presented. It is based on the use of Taylor differential algebra (DA) to nonlinearly map the observation uncertainties from the observation space to the state space. When a minimum set of observations is available, DA is used to expand the solution of the IOD problem in Taylor series with respect to measurement errors. When more observations are available, high order inversion tools are exploited to obtain full state pseudo-observations at a common epoch. The mean and covariance of these pseudo-observations are nonlinearly computed by evaluating the expectation of high order Taylor polynomials. Finally, a linear scheme is employed to update the current knowledge of the orbit. Angles-only observations are considered, and simplified Keplerian dynamics are adopted to ease the explanation. Three test cases of orbit determination of artificial satellites in different orbital regimes are presented to discuss the features and performance of the proposed methodology.

  8. Adaptive Strategies for Materials Design using Uncertainties.

    PubMed

    Balachandran, Prasanna V; Xue, Dezhen; Theiler, James; Hogden, John; Lookman, Turab

    2016-01-21

    We compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young's (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don't. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.
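    The iterative loop described (regressor, then an uncertainty-aware selector, then a new "measurement") can be sketched with a bootstrap ensemble standing in for the machine-learning regressor and an upper-confidence-bound rule standing in for the selector. The hidden property `f`, the candidate pool, and all settings are illustrative assumptions, not the M2AX data or the paper's regressors.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: -(x - 0.7) ** 2          # hidden property to maximise (illustrative)
X_pool = np.linspace(0.0, 1.0, 101)    # candidate materials, as a 1-D design knob

def ensemble_predict(x_train, y_train, x_query, n_models=30):
    """Bootstrap ensemble of quadratic fits -> prediction mean and spread."""
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(x_train), len(x_train))
        coef = np.polyfit(x_train[idx], y_train[idx], 2)
        preds.append(np.polyval(coef, x_query))
    preds = np.asarray(preds)
    return preds.mean(axis=0), preds.std(axis=0)

x_train = np.array([0.0, 0.1, 0.5, 1.0])   # initially "measured" materials
y_train = f(x_train)
for _ in range(5):                          # adaptive design loop
    mu, sigma = ensemble_predict(x_train, y_train, X_pool)
    x_next = X_pool[np.argmax(mu + 1.0 * sigma)]   # UCB-style selector
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, f(x_next))
best = x_train[np.argmax(y_train)]
```

    Dropping the `sigma` term turns the selector into the purely greedy exploitation rule that the paper finds inferior to uncertainty-aware selection.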

  9. Uncertainty analysis of thermoreflectance measurements

    NASA Astrophysics Data System (ADS)

    Yang, Jia; Ziade, Elbara; Schmidt, Aaron J.

    2016-01-01

    We derive a generally applicable formula to calculate the precision of multi-parameter measurements that apply least squares algorithms. This formula, which accounts for experimental noise and uncertainty in the controlled model parameters, is then used to analyze the uncertainty of thermal property measurements with pump-probe thermoreflectance techniques. We compare the uncertainty of time domain thermoreflectance and frequency domain thermoreflectance (FDTR) when measuring bulk materials and thin films, considering simultaneous measurements of various combinations of thermal properties, including thermal conductivity, heat capacity, and thermal boundary conductance. We validate the uncertainty analysis using Monte Carlo simulations on data from FDTR measurements of an 80 nm gold film on fused silica.
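    The validation strategy, checking an analytic least-squares precision formula against Monte Carlo scatter, can be illustrated on a plain linear model standing in for the thermoreflectance model; the noise level and design below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, noise = 200, 0.05
x = np.linspace(0.0, 1.0, N)
X = np.column_stack([np.ones(N), x])
true_theta = np.array([0.3, 1.2])

# analytic precision of the least-squares fit for a known noise level
cov_analytic = noise ** 2 * np.linalg.inv(X.T @ X)
analytic_std = np.sqrt(np.diag(cov_analytic))

# Monte Carlo check: refit many synthetic noisy data sets, watch the scatter
estimates = []
for _ in range(3000):
    y = X @ true_theta + rng.normal(0.0, noise, N)
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    estimates.append(theta)
mc_std = np.std(estimates, axis=0)
```

    Agreement between `mc_std` and `analytic_std` is the same kind of evidence the authors use to validate their formula against Monte Carlo simulations of FDTR data.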

  10. Uncertainty analysis of thermoreflectance measurements.

    PubMed

    Yang, Jia; Ziade, Elbara; Schmidt, Aaron J

    2016-01-01

    We derive a generally applicable formula to calculate the precision of multi-parameter measurements that apply least squares algorithms. This formula, which accounts for experimental noise and uncertainty in the controlled model parameters, is then used to analyze the uncertainty of thermal property measurements with pump-probe thermoreflectance techniques. We compare the uncertainty of time domain thermoreflectance and frequency domain thermoreflectance (FDTR) when measuring bulk materials and thin films, considering simultaneous measurements of various combinations of thermal properties, including thermal conductivity, heat capacity, and thermal boundary conductance. We validate the uncertainty analysis using Monte Carlo simulations on data from FDTR measurements of an 80 nm gold film on fused silica.

  11. Simplified propagation of standard uncertainties

    SciTech Connect

    Shull, A.H.

    1997-06-09

    An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine if the standard is adequate for its intended use or can one calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined or determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. A system of simplifying the calculations by dividing into subgroups of absolute and relative uncertainties is utilized. These methods also incorporate the International Standards Organization (ISO) concepts for combining systematic and random uncertainties as published in their Guide to the Expression of Measurement Uncertainty. Details of the simplified methods and examples of their use are included in the paper.
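    The grouping into relative and absolute uncertainty subgroups can be sketched as follows, assuming uncorrelated inputs and quadrature combination: relative uncertainties combine for products and quotients, absolute uncertainties for sums and differences. The concentration example and helper names are illustrative, not the paper's worked cases.

```python
import math

def combine_relative(rel_uncertainties):
    """Products/quotients: relative standard uncertainties add in quadrature."""
    return math.sqrt(sum(r ** 2 for r in rel_uncertainties))

def combine_absolute(abs_uncertainties):
    """Sums/differences: absolute standard uncertainties add in quadrature."""
    return math.sqrt(sum(a ** 2 for a in abs_uncertainties))

# Hypothetical prepared standard: concentration c = m / V from a weighed
# solute mass m and a solvent volume V, each with its own uncertainty.
m, u_m = 100.0, 0.2     # mg
V, u_V = 1.000, 0.002   # l
c = m / V
rel_u_c = combine_relative([u_m / m, u_V / V])  # relative uncertainty of c
u_c = c * rel_u_c                               # absolute uncertainty, mg/l
```

    The shortcut avoids partial derivatives entirely: each input contributes its relative (or absolute) uncertainty directly, and a spreadsheet cell per input suffices.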

  12. Assessing hydrologic prediction uncertainty resulting from soft land cover classification

    NASA Astrophysics Data System (ADS)

    Loosvelt, Lien; De Baets, Bernard; Pauwels, Valentijn R. N.; Verhoest, Niko E. C.

    2014-09-01

    For predictions in ungauged basins (PUB), environmental data is generally not available and needs to be inferred by indirect means. Existing technologies such as remote sensing are valuable tools for estimating the lacking data, as these technologies become more widely available and have a high areal coverage. However, indirect estimates of the environmental characteristics are prone to uncertainty. Hence, an improved understanding of the quality of the estimates and the development of methods for dealing with their associated uncertainty are essential to evolve towards accurate PUB. In this study, the impact of the uncertainty associated with the classification of land cover based on multi-temporal SPOT imagery, resulting from the use of the Random Forests classifier, on the predictions of the hydrologic model TOPLATS is investigated through a Monte Carlo simulation. The results show that the predictions of evapotranspiration, runoff and baseflow are hardly affected by the classification uncertainty when area-averaged predictions are intended, implying that uncertainty propagation is only advisable in case a spatial distribution of the predictions is relevant for decision making or is coupled to other spatially distributed models. Based on the resulting uncertainty map, guidelines for additional data collection are formulated in order to reduce the uncertainty for future model applications. Because a Monte Carlo-based uncertainty analysis is computationally very demanding, especially when complex models are involved, we developed a fast indicative uncertainty assessment method that allows for generating proxies of the Monte Carlo-based result in terms of the mean prediction and its associated uncertainty based on a single model evaluation. These proxies are shown to perform well and provide a good indication of the impact of classification uncertainty on the prediction result.

  13. Optimizing Integrated Terminal Airspace Operations Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Bosson, Christabelle; Xue, Min; Zelinski, Shannon

    2014-01-01

    In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.
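    The sample-average-approximation machinery with statistical bounds can be sketched on a deliberately tiny stand-in problem. The single-aircraft slot-selection model, exponential delay distribution, and penalty below are invented for illustration and are not the paper's job-shop formulation; what carries over is the SAA pattern: a batch-averaged optimal-value estimate as a statistical lower bound and a fresh-sample evaluation of a candidate solution as an upper bound:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: pick a scheduled takeoff slot t before the random taxi-out
# delay D is known; cost = slot delay + penalty for missing the slot.
slots = np.arange(0, 15)          # candidate slots (minutes after ready time)
delay_penalty = 3.0               # per-minute penalty, illustrative

def cost(t, d):
    return t + delay_penalty * np.maximum(d - t, 0.0)

def saa_solve(n):
    """Solve one sample average approximation problem with n scenarios."""
    d = rng.exponential(4.0, size=n)           # sampled taxi-out delays
    avg = np.array([cost(t, d).mean() for t in slots])
    return slots[avg.argmin()], avg.min()

# Statistical lower bound: average of optimal SAA values over M batches
M, n = 20, 200
lower_bound = np.mean([saa_solve(n)[1] for _ in range(M)])

# Upper bound: fix one candidate solution, evaluate on a fresh large sample
t_hat, _ = saa_solve(n)
d_eval = rng.exponential(4.0, size=20000)
upper_bound = cost(t_hat, d_eval).mean()
gap_estimate = upper_bound - lower_bound
```

The gap estimate shrinks as the scenario count n grows, which mirrors the paper's statistical analysis of sample size.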

  14. Are models, uncertainty, and dispute resolution compatible?

    NASA Astrophysics Data System (ADS)

    Anderson, J. D.; Wilson, J. L.

    2013-12-01

    Models and their uncertainty often move from an objective use in planning and decision making into the regulatory environment, then sometimes on to dispute resolution through litigation or other legal forums. Through this last transition whatever objectivity the models and uncertainty assessment may have once possessed becomes biased (or more biased) as each party chooses to exaggerate either the goodness of a model, or its worthlessness, depending on which view is in its best interest. If worthlessness is desired, then what was uncertain becomes unknown, or even unknowable. If goodness is desired, then precision and accuracy are often exaggerated and uncertainty, if it is explicitly recognized, encompasses only some parameters or conceptual issues, ignores others, and may minimize the uncertainty that it accounts for. In dispute resolution, how well is the adversarial process able to deal with these biases? The challenge is that they are often cloaked in computer graphics and animations that appear to lend realism to what could be mostly fancy, or even a manufactured outcome. While junk science can be challenged through appropriate motions in federal court, and in most state courts, it is not unusual for biased or even incorrect modeling results, or conclusions based on incorrect results, to be permitted to be presented at trial. Courts allow opinions that are based on a "reasonable degree of scientific certainty," but when that 'certainty' is grossly exaggerated by an expert, one way or the other, how well do the courts determine that someone has stepped over the line? Trials are based on the adversary system of justice, so opposing and often irreconcilable views are commonly allowed, leaving it to the judge or jury to sort out the truth. Can advances in scientific theory and engineering practice, related to both modeling and uncertainty, help address this situation and better ensure that juries and judges see more objective modeling results, or at least see

  15. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    SciTech Connect

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-09-28

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
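    The localized straight-line fit near Isc and its regression uncertainty can be sketched with ordinary least squares (the paper itself uses objective Bayesian linear regression; the synthetic I-V points, slope, and noise level below are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic I-V points near short circuit (V ~ 0); values are illustrative.
v = np.linspace(0.0, 0.08, 10)            # volts
i_true = 5.0 - 2.0 * v                     # locally linear I-V, slope -2 A/V
i_meas = i_true + rng.normal(0.0, 0.002, v.size)

# Straight-line fit I = a + b*V; Isc is the intercept a (current at V = 0).
X = np.column_stack([np.ones_like(v), v])
coef, *_ = np.linalg.lstsq(X, i_meas, rcond=None)
a, b = coef

# Standard uncertainty of the intercept from the residual variance
# (classical least-squares treatment, in the spirit of the GUM)
dof = v.size - 2
s2 = np.sum((i_meas - X @ coef) ** 2) / dof
cov = s2 * np.linalg.inv(X.T @ X)
isc, u_isc = a, np.sqrt(cov[0, 0])
```

As the abstract cautions, widening the data window shrinks this fit uncertainty but can leave model discrepancy (curvature of the true I-V curve) unaccounted for; the window choice is where the paper's evidence-based method comes in.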

  16. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    SciTech Connect

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  17. Controllable set analysis for planetary landing under model uncertainties

    NASA Astrophysics Data System (ADS)

    Long, Jiateng; Gao, Ai; Cui, Pingyuan

    2015-07-01

    Controllable set analysis is a beneficial method in planetary landing mission design: feasible entry states can be selected so as to achieve landing accuracy and satisfy entry path constraints. In view of the severe impact of model uncertainties on planetary landing safety and accuracy, the purpose of this paper is to investigate the controllable set under uncertainties between the on-board model and the real situation. Controllable set analysis under model uncertainties is composed of controllable union set (CUS) analysis and controllable intersection set (CIS) analysis. Definitions of CUS and CIS are given, and a computational method for them based on the Gauss pseudospectral method is presented. Their application to entry-state distribution analysis under uncertainty and to the robustness of nominal entry-state selection is illustrated for Mars entry scenarios with ballistic-coefficient, lift-to-drag-ratio and atmospheric uncertainties. With analysis of CUS and CIS, the robustness of entry-state selection and entry trajectory to model uncertainties can be guaranteed, thus enhancing safety, reliability and accuracy during planetary entry and landing under model uncertainties.

  18. Spatial Uncertainty Model for Visual Features Using a Kinect™ Sensor

    PubMed Central

    Park, Jae-Han; Shin, Yong-Deuk; Bae, Ji-Hun; Baeg, Moon-Hong

    2012-01-01

    This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications. PMID:23012509
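    The core propagation step, mapping a disparity-space covariance through the Jacobian of the (u, v, d) to (x, y, z) transform, can be sketched as follows. The focal length, principal point, baseline, and input variances are illustrative values, not calibrated Kinect parameters:

```python
import numpy as np

# Illustrative pinhole/disparity parameters (not calibrated Kinect values)
fx = fy = 580.0        # focal lengths in pixels
cx, cy = 320.0, 240.0  # principal point in pixels
fb = 580.0 * 0.075     # focal length times baseline

def cartesian_cov(u, v, d, cov_uvd):
    """Propagate disparity-space covariance to Cartesian space via the
    Jacobian of the mapping (u, v, d) -> (x, y, z)."""
    z = fb / d
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Jacobian of (x, y, z) with respect to (u, v, d)
    J = np.array([
        [z / fx, 0.0,    -x / d],
        [0.0,    z / fy, -y / d],
        [0.0,    0.0,    -z / d],
    ])
    return (x, y, z), J @ cov_uvd @ J.T

cov_uvd = np.diag([0.25, 0.25, 0.5])   # assumed pixel/disparity variances
(x, y, z), cov_xyz = cartesian_cov(400.0, 300.0, 50.0, cov_uvd)
```

Because z is inversely proportional to disparity, the propagated covariance grows rapidly with range, which is why the measured uncertainty ellipsoids of such sensors elongate along the viewing direction.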

  19. Quantitative Assessment of Parametric Uncertainty in Northern Hemisphere PAH Concentrations.

    PubMed

    Thackray, Colin P; Friedman, Carey L; Zhang, Yanxu; Selin, Noelle E

    2015-08-01

    We quantitatively examine the relative importance of uncertainty in emissions and physicochemical properties (including reaction rate constants) to Northern Hemisphere (NH) and Arctic polycyclic aromatic hydrocarbon (PAH) concentrations, using a computationally efficient numerical uncertainty technique applied to the global-scale chemical transport model GEOS-Chem. Using polynomial chaos (PC) methods, we propagate uncertainties in physicochemical properties and emissions for the PAHs benzo[a]pyrene, pyrene and phenanthrene to simulated spatially resolved concentration uncertainties. We find that the leading contributors to parametric uncertainty in simulated concentrations are the black carbon-air partition coefficient and oxidation rate constant for benzo[a]pyrene, and the oxidation rate constants for phenanthrene and pyrene. NH geometric average concentrations are more sensitive to uncertainty in the atmospheric lifetime than to emissions rate. We use the PC expansions and measurement data to constrain parameter uncertainty distributions to observations. This narrows a priori parameter uncertainty distributions for phenanthrene and pyrene, and leads to higher values for OH oxidation rate constants and lower values for European PHE emission rates.
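    A minimal non-intrusive polynomial chaos sketch, assuming a single standard-normal uncertain parameter and a toy exponential response (GEOS-Chem and the actual PAH parameters are not involved): fit probabilists' Hermite polynomials by regression and read the mean and variance directly off the coefficients.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

rng = np.random.default_rng(3)

# Toy response: a "concentration" depending exponentially on one
# standard-normal parameter (e.g. a log oxidation-rate constant).
def model(xi):
    return np.exp(0.3 * xi)

# Non-intrusive PCE by least-squares regression on probabilists'
# Hermite polynomials He_k(xi)
order = 6
xi = rng.standard_normal(2000)
Psi = np.column_stack(
    [He.hermeval(xi, np.eye(order + 1)[k]) for k in range(order + 1)]
)
c, *_ = np.linalg.lstsq(Psi, model(xi), rcond=None)

# Moments from PCE coefficients: E[f] = c0, Var[f] = sum_{k>0} c_k^2 * k!
pce_mean = c[0]
pce_var = sum(c[k] ** 2 * factorial(k) for k in range(1, order + 1))
```

The expansion is cheap to evaluate, which is what makes it practical to propagate many parameter uncertainties through an expensive transport model, and to invert the exercise by constraining parameter distributions against observations.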

  20. Analysis of Infiltration Uncertainty

    SciTech Connect

    R. McCurley

    2003-10-27

    The primary objectives of this uncertainty analysis are: (1) to develop and justify a set of uncertain parameters along with associated distributions; and (2) to use the developed uncertain parameter distributions and the results from selected analog site calculations done in ''Simulation of Net Infiltration for Modern and Potential Future Climates'' (USGS 2001 [160355]) to obtain the net infiltration weighting factors for the glacial transition climate. These weighting factors are applied to unsaturated zone (UZ) flow fields in Total System Performance Assessment (TSPA), as outlined in the ''Total System Performance Assessment-License Application Methods and Approach'' (BSC 2002 [160146], Section 3.1) as a method for the treatment of uncertainty. This report is a scientific analysis because no new mathematical or physical models are developed herein; it is based on the use of the models developed in or for ''Simulation of Net Infiltration for Modern and Potential Future Climates'' (USGS 2001 [160355]). Any use of the term model refers to those developed in the infiltration numerical model report. TSPA License Application (LA) has included three distinct climate regimes in the comprehensive repository performance analysis for Yucca Mountain: present-day, monsoon, and glacial transition. Each climate regime was characterized using three infiltration-rate maps, including a lower- and upper-bound and a mean value (equal to the average of the two boundary values). For each of these maps, which were obtained based on analog site climate data, a spatially averaged value was also calculated by the USGS. For a more detailed discussion of these infiltration-rate maps, see ''Simulation of Net Infiltration for Modern and Potential Future Climates'' (USGS 2001 [160355]). For this Scientific Analysis Report, spatially averaged values were calculated for the lower-bound, mean, and upper-bound climate analogs only for the glacial transition climate regime, within the

  1. SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data

    SciTech Connect

    Williams, Mark L; Rearden, Bradley T

    2008-01-01

    Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.

  2. Pandemic influenza: certain uncertainties

    PubMed Central

    Morens, David M.; Taubenberger, Jeffery K.

    2011-01-01

    SUMMARY For at least five centuries, major epidemics and pandemics of influenza have occurred unexpectedly and at irregular intervals. Despite the modern notion that pandemic influenza is a distinct phenomenon obeying such constant (if incompletely understood) rules as dramatic genetic change, cyclicity, “wave” patterning, virus replacement, and predictable epidemic behavior, much evidence suggests the opposite. Although there is much that we know about pandemic influenza, there appears to be much more that we do not know. Pandemics arise as a result of various genetic mechanisms, have no predictable patterns of mortality among different age groups, and vary greatly in how and when they arise and recur. Some are followed by new pandemics, whereas others fade gradually or abruptly into long-term endemicity. Human influenza pandemics have been caused by viruses that evolved singly or in co-circulation with other pandemic virus descendants and often have involved significant transmission between, or establishment of, viral reservoirs within other animal hosts. In recent decades, pandemic influenza has continued to produce numerous unanticipated events that expose fundamental gaps in scientific knowledge. Influenza pandemics appear to be not a single phenomenon but a heterogeneous collection of viral evolutionary events whose similarities are overshadowed by important differences, the determinants of which remain poorly understood. These uncertainties make it difficult to predict influenza pandemics and, therefore, to adequately plan to prevent them. PMID:21706672

  3. Mama Software Features: Uncertainty Testing

    SciTech Connect

    Ruggiero, Christy E.; Porter, Reid B.

    2014-05-30

    This document reviews how the uncertainty in the calculations is being determined with test image data. The results of this testing give an ‘initial uncertainty’ number that can be used to estimate the ‘back end’ uncertainty in digital image quantification. Statisticians are refining these numbers as part of a UQ effort.

  4. Housing Uncertainty and Childhood Impatience

    ERIC Educational Resources Information Center

    Anil, Bulent; Jordan, Jeffrey L.; Zahirovic-Herbert, Velma

    2011-01-01

    The study demonstrates a direct link between housing uncertainty and children's time preferences, or patience. We show that students who face housing uncertainties through mortgage foreclosures and eviction learn impatient behavior and are therefore at greater risk of making poor intertemporal choices such as dropping out of school. We find that…

  5. Uncertainty in Integrated Assessment Scenarios

    SciTech Connect

    Mort Webster

    2005-10-17

    The determination of climate policy is a decision under uncertainty. The uncertainty in future climate change impacts is large, as is the uncertainty in the costs of potential policies. Rational and economically efficient policy choices will therefore seek to balance the expected marginal costs with the expected marginal benefits. This approach requires that the risks of future climate change be assessed. The decision process need not be formal or quantitative for descriptions of the risks to be useful. Whatever the decision procedure, a useful starting point is to have as accurate a description of climate risks as possible. Given the goal of describing uncertainty in future climate change, we need to characterize the uncertainty in the main causes of uncertainty in climate impacts. One of the major drivers of uncertainty in future climate change is the uncertainty in future emissions, both of greenhouse gases and other radiatively important species such as sulfur dioxide. In turn, the drivers of uncertainty in emissions are uncertainties in the determinants of the rate of economic growth and in the technologies of production and how those technologies will change over time. This project uses historical experience and observations from a large number of countries to construct statistical descriptions of variability and correlation in labor productivity growth and in the rate of autonomous energy efficiency improvement (AEEI). The observed variability then provides a basis for constructing probability distributions for these drivers. The variance of uncertainty in growth rates can be further modified by expert judgment if it is believed that future variability will differ from the past. But often, expert judgment is more readily applied to projected median or expected paths through time. Analysis of past variance and covariance provides initial assumptions about future uncertainty for quantities that are less intuitive and difficult for experts to estimate, and these variances can be normalized and then applied to mean

  6. Asymmetric metric: An application to dealing with uncertainty

    NASA Astrophysics Data System (ADS)

    Anguelov, R.; Mabula, M.

    2016-10-01

    The uncertainty in mathematical models is often represented via set-valued data, parameters or solutions. We propose a new approach for dealing with such uncertainty, which combines features of validated computing (wrapping the set via a set of computer representable type, e.g., intervals, zonotopes, ellipsoids) and point approximation accompanied by relevant error analysis. More precisely, we consider approximation by a set which is not necessarily an enclosure. The mathematical theory is based on the theory of asymmetric metric spaces, where the metric gives an estimation of the error, while the order induced by the metric provides means for estimating the size of the approximating set.
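    One concrete asymmetric distance consistent with this idea (though not necessarily the metric constructed in the paper) is the directed Hausdorff distance between point sets: it is zero whenever the first set is contained in the second, while the reverse distance may be positive, so it separates "how far the approximation sticks out" from "how much of the target it misses". A minimal sketch on finite 1-D sets, with invented data:

```python
import numpy as np

def one_sided(A, B):
    """Directed Hausdorff distance: sup over a in A of inf over b in B
    of |a - b|. Asymmetric: one_sided(A, B) != one_sided(B, A) in general."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    return max(min(abs(a - b) for b in B) for a in A)

A = [0.0, 1.0]               # approximating set (illustrative)
B = [0.0, 0.5, 1.0, 1.5]     # target set (illustrative)

q_AB = one_sided(A, B)       # A is contained in B, so this is 0
q_BA = one_sided(B, A)       # B extends beyond A, so this is positive
```

The asymmetry is exactly what lets the induced order compare the sizes of approximating sets, as described in the abstract.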

  7. Uncertainty Quantification in Ocean State Estimation using Hessian Information

    NASA Astrophysics Data System (ADS)

    Kalmikov, A.; Heimbach, P.

    2012-12-01

    We present a second derivative-based (Hessian) method for Uncertainty Quantification (UQ) in large-scale Ocean State Estimation. Matrix-free Hessian-times-vector code of the MIT General Circulation Model (MITgcm) is generated by means of algorithmic differentiation (AD). Lanczos-type numerical algebra tools are then applied for extracting leading rank eigenvectors and eigenvalues used in the UQ algorithm. Computational complexity is reduced by tangent linear (forward-mode) differentiation of the adjoint code, which preserves the efficiency of the checkpointing schemes. The inverse and forward uncertainty propagation algorithm is designed for assimilating observation and control variable uncertainties, and for projecting these uncertainties onto oceanographically relevant target quantities of interest. The algorithm evaluates both reduction of a priori-assumed uncertainty as well as prior-independent information gain. The inverse propagation maps prior and data uncertainties onto posterior uncertainties in each component of the high-dimensional control space. The forward propagation of the posteriors provides a measure of uncertainties of the target quantity. The time-resolving analysis of uncertainty propagation in the ocean model reveals transient and steady state uncertainty regimes. The system is applied to quantifying uncertainties in Drake Passage transport in a global barotropic configuration of the MITgcm. The model is constrained by synthetic observations of sea surface height and velocities. The control space consists of two-dimensional maps of initial conditions (velocities and sea surface height), surface boundary conditions (wind stress), and model parameters (bottom drag), amounting to a 10^5-dimensional space of uncertain variables. It is demonstrated how the choice of observations and their geographic coverage determines the reduction in uncertainties of the estimated transport. The system also yields information on how well control parameters are
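    The low-rank Hessian-inverse update at the heart of such systems can be sketched on a synthetic problem. With an identity prior precision and a low-rank data-misfit term, the posterior variance of a target functional follows from the leading eigenpairs alone. The dimensions are illustrative, and MITgcm, algorithmic differentiation, and the Lanczos solver are replaced here by a dense eigendecomposition of an explicitly built Hessian:

```python
import numpy as np

rng = np.random.default_rng(4)

n, k = 200, 10                                     # control dim, retained modes

# Synthetic misfit Hessian: prior precision (identity) + low-rank data term
G = rng.standard_normal((5, n)) / np.sqrt(n)       # toy "observation operator"
H = np.eye(n) + 50.0 * G.T @ G                     # prior + data information

# Leading eigenpairs (a matrix-free Lanczos solver would need only H @ v)
w, V = np.linalg.eigh(H)
w_lead, V_lead = w[-k:], V[:, -k:]

def posterior_var(g):
    """Variance of the target g^T m under the approximate posterior,
    using H^{-1} ~= I - sum_i (1 - 1/lambda_i) v_i v_i^T over the
    leading eigenpairs (exact here, since the data term has rank <= k)."""
    proj = V_lead.T @ g
    return g @ g - proj @ ((1.0 - 1.0 / w_lead) * proj)

g = rng.standard_normal(n)          # target functional (e.g. a transport)
prior_var = g @ g                    # prior covariance is the identity
post_var = posterior_var(g)
```

The reduction from prior_var to post_var is the "reduction of a priori-assumed uncertainty" the abstract refers to; modes with eigenvalues near 1 carry no data information and can be truncated.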

  8. Reformulating the Quantum Uncertainty Relation.

    PubMed

    Li, Jun-Li; Qiao, Cong-Feng

    2015-01-01

    Uncertainty principle is one of the cornerstones of quantum theory. In the literature, there are two types of uncertainty relations, the operator form concerning the variances of physical observables and the entropy form related to entropic quantities. Both these forms are inequalities involving pairwise observables, and are found to be nontrivial to incorporate multiple observables. In this work we introduce a new form of uncertainty relation which may give out complete trade-off relations for variances of observables in pure and mixed quantum systems. Unlike the prevailing uncertainty relations, which are either quantum state dependent or not directly measurable, our bounds for variances of observables are quantum state independent and immune from the "triviality" problem of having zero expectation values. Furthermore, the new uncertainty relation may provide a geometric explanation for the reason why there are limitations on the simultaneous determination of different observables in N-dimensional Hilbert space. PMID:26234197
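    The state-independent flavor of such variance relations can be illustrated numerically for a qubit: for every pure state, the variances of the three Pauli observables sum to exactly 2, a bound that does not depend on the state and cannot collapse to zero. This check is only an illustration of that flavor, not the paper's new relation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Pauli observables
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def variance(op, psi):
    """Var(op) = <op^2> - <op>^2 in the pure state psi."""
    mean = np.real(np.conj(psi) @ op @ psi)
    mean_sq = np.real(np.conj(psi) @ op @ op @ psi)
    return mean_sq - mean ** 2

totals = []
for _ in range(100):
    psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    psi /= np.linalg.norm(psi)                 # random pure qubit state
    totals.append(sum(variance(op, psi) for op in (sx, sy, sz)))
```

Every random state gives the same total, 2: the three variances trade off against each other, which is the kind of complete trade-off relation the abstract describes.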

  9. Reformulating the Quantum Uncertainty Relation

    NASA Astrophysics Data System (ADS)

    Li, Jun-Li; Qiao, Cong-Feng

    2015-08-01

    Uncertainty principle is one of the cornerstones of quantum theory. In the literature, there are two types of uncertainty relations, the operator form concerning the variances of physical observables and the entropy form related to entropic quantities. Both these forms are inequalities involving pairwise observables, and are found to be nontrivial to incorporate multiple observables. In this work we introduce a new form of uncertainty relation which may give out complete trade-off relations for variances of observables in pure and mixed quantum systems. Unlike the prevailing uncertainty relations, which are either quantum state dependent or not directly measurable, our bounds for variances of observables are quantum state independent and immune from the “triviality” problem of having zero expectation values. Furthermore, the new uncertainty relation may provide a geometric explanation for the reason why there are limitations on the simultaneous determination of different observables in N-dimensional Hilbert space.

  10. Reformulating the Quantum Uncertainty Relation.

    PubMed

    Li, Jun-Li; Qiao, Cong-Feng

    2015-08-03

    Uncertainty principle is one of the cornerstones of quantum theory. In the literature, there are two types of uncertainty relations, the operator form concerning the variances of physical observables and the entropy form related to entropic quantities. Both these forms are inequalities involving pairwise observables, and are found to be nontrivial to incorporate multiple observables. In this work we introduce a new form of uncertainty relation which may give out complete trade-off relations for variances of observables in pure and mixed quantum systems. Unlike the prevailing uncertainty relations, which are either quantum state dependent or not directly measurable, our bounds for variances of observables are quantum state independent and immune from the "triviality" problem of having zero expectation values. Furthermore, the new uncertainty relation may provide a geometric explanation for the reason why there are limitations on the simultaneous determination of different observables in N-dimensional Hilbert space.

  11. Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.

    NASA Technical Reports Server (NTRS)

    Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.

    2013-01-01

    We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic

  12. UNCERTAINTIES IN ATOMIC DATA AND THEIR PROPAGATION THROUGH SPECTRAL MODELS. I

    SciTech Connect

    Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.

    2013-06-10

    We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II].

  13. SENSIT: a cross-section and design sensitivity and uncertainty analysis code. [In FORTRAN for CDC-7600, IBM 360]

    SciTech Connect

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
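    The first-order propagation such codes perform is the "sandwich rule": the relative variance of an integral response is S-transpose C S, where S holds the relative sensitivity coefficients and C is the relative covariance matrix of the cross sections. A three-group sketch with invented numbers (not an actual nuclear data evaluation):

```python
import numpy as np

# Relative sensitivity coefficients dR/R per dx/x for three energy groups
# (illustrative values)
S = np.array([0.30, -0.15, 0.05])

# Relative covariance matrix: 5%, 8%, 10% standard deviations with an
# assumed correlation structure
sd = np.array([0.05, 0.08, 0.10])
corr = np.array([[1.0, 0.6, 0.2],
                 [0.6, 1.0, 0.5],
                 [0.2, 0.5, 1.0]])
C = np.outer(sd, sd) * corr

var_R = S @ C @ S              # relative variance of the response
std_R = np.sqrt(var_R)         # relative standard deviation, ~1.2% here
```

The same quadratic form generalizes to the SED and response-function covariances the abstract lists; only the contents of S and C change.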

  14. Development of a Prototype Model-Form Uncertainty Knowledge Base

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.

    2016-01-01

    Uncertainties are generally classified as either aleatory or epistemic. Aleatory uncertainties are those attributed to random variation, either naturally or through manufacturing processes. Epistemic uncertainties are generally attributed to a lack of knowledge. One type of epistemic uncertainty is called model-form uncertainty. The term model-form means that among the choices to be made during a design process within an analysis, there are different forms of the analysis process, which each give different results for the same configuration at the same flight conditions. Examples of model-form uncertainties include the grid density, grid type, and solver type used within a computational fluid dynamics code, or the choice of the number and type of model elements within a structures analysis. The objectives of this work are to identify and quantify a representative set of model-form uncertainties and to make this information available to designers through an interactive knowledge base (KB). The KB can then be used during probabilistic design sessions, so as to enable the possible reduction of uncertainties in the design process through resource investment. An extensive literature search has been conducted to identify and quantify typical model-form uncertainties present within aerospace design. An initial attempt has been made to assemble the results of this literature search into a searchable KB, usable in real time during probabilistic design sessions. A concept of operations and the basic structure of a model-form uncertainty KB are described. Key operations within the KB are illustrated. Current limitations in the KB, and possible workarounds are explained.
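    The knowledge-base concept can be sketched as a small faceted lookup: entries record a discipline, a model-form choice, an affected quantity, and a quantified uncertainty range, and a designer queries by facet during a probabilistic design session. The schema, entries, and numbers below are invented for illustration and are not the actual KB contents:

```python
# Minimal sketch of a searchable model-form uncertainty KB (hypothetical data)
MODEL_FORM_KB = [
    {"discipline": "CFD", "choice": "grid density",
     "quantity": "drag coefficient", "uncertainty_pct": (1.0, 5.0)},
    {"discipline": "CFD", "choice": "turbulence model",
     "quantity": "drag coefficient", "uncertainty_pct": (2.0, 10.0)},
    {"discipline": "structures", "choice": "element type",
     "quantity": "tip deflection", "uncertainty_pct": (0.5, 3.0)},
]

def query(discipline=None, quantity=None):
    """Return KB entries matching the given facets (None matches anything)."""
    return [e for e in MODEL_FORM_KB
            if (discipline is None or e["discipline"] == discipline)
            and (quantity is None or e["quantity"] == quantity)]

hits = query(discipline="CFD", quantity="drag coefficient")
worst_case = max(e["uncertainty_pct"][1] for e in hits)   # bound the risk
```

Ranking the matching entries by their upper bounds is one way to decide where resource investment (finer grids, better models) buys the largest uncertainty reduction.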

  15. TSUNAMI Primer: A Primer for Sensitivity/Uncertainty Calculations with SCALE

    SciTech Connect

    Rearden, Bradley T; Mueller, Don; Bowman, Stephen M; Busch, Robert D.; Emerson, Scott

    2009-01-01

    This primer presents examples in the application of the SCALE/TSUNAMI tools to generate k_eff sensitivity data for one- and three-dimensional models using TSUNAMI-1D and -3D and to examine uncertainties in the computed k_eff values due to uncertainties in the cross-section data used in their calculation. The proper use of unit cell data and need for confirming the appropriate selection of input parameters through direct perturbations are described. The uses of sensitivity and uncertainty data to identify and rank potential sources of computational bias in an application system and TSUNAMI tools for assessment of system similarity using sensitivity and uncertainty criteria are demonstrated. Uses of these criteria in trending analyses to assess computational biases, bias uncertainties, and gap analyses are also described. Additionally, an application of the data adjustment tool TSURFER is provided, including identification of specific details of sources of computational bias.

  16. An Iterative Uncertainty Assessment Technique for Environmental Modeling

    SciTech Connect

    Engel, David W.; Liebetrau, Albert M.; Jarman, Kenneth D.; Ferryman, Thomas A.; Scheibe, Timothy D.; Didier, Brett T.

    2004-06-28

    The reliability of and confidence in predictions from model simulations are crucial; these predictions can significantly affect risk assessment decisions. For example, the fate of contaminants at the U.S. Department of Energy's Hanford Site has critical impacts on long-term waste management strategies. In the uncertainty estimation efforts for the Hanford Site-Wide Groundwater Modeling program, computational issues severely constrain both the number of uncertain parameters that can be considered and the degree of realism that can be included in the models. Substantial improvements in the overall efficiency of uncertainty analysis are needed to fully explore and quantify significant sources of uncertainty. We have combined state-of-the-art statistical and mathematical techniques in a unique iterative, limited sampling approach to efficiently quantify both local and global prediction uncertainties resulting from model input uncertainties. The approach is designed for application to widely diverse problems across multiple scientific domains. Results are presented for both an analytical model where the response surface is "known" and a simplified contaminant fate transport and groundwater flow model. The results show that our iterative method for approximating a response surface (for subsequent calculation of uncertainty estimates) of specified precision requires less computing time than traditional approaches based upon noniterative sampling methods.
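
    The iterative, limited-sampling idea can be sketched in one dimension: run the expensive model at a small design, fit a response surface, and add samples only until the surrogate's predictions stabilize. The model function, design sizes, and tolerance below are all invented stand-ins, not the paper's method in detail:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_model(x):
        # Stand-in for an expensive simulation (e.g. a flow/transport code)
        return np.exp(-0.5 * x) * np.sin(2.0 * x) + 2.0

    # Iterative, limited-sampling response-surface approximation: start from
    # a small design, refit a cubic surrogate after each added sample, and
    # stop once predictions on a reference grid stop changing.
    grid = np.linspace(0.0, 3.0, 200)
    x = list(np.linspace(0.0, 3.0, 4))        # initial design (4 model runs)
    prev = None
    for _ in range(10):
        y = [expensive_model(xi) for xi in x]
        coef = np.polyfit(x, y, deg=3)
        pred = np.polyval(coef, grid)
        if prev is not None and np.max(np.abs(pred - prev)) < 1e-3:
            break                             # surrogate has stabilized
        prev = pred
        x.append(rng.uniform(0.0, 3.0))       # enrich the design by one run
    n_runs = len(x)
    ```

    Uncertainty estimates are then computed from the cheap surrogate instead of the model itself, which is where the computing-time savings over noniterative sampling come from.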

  17. Uncertainty analysis for Probable Maximum Precipitation estimates

    NASA Astrophysics Data System (ADS)

    Micovic, Zoran; Schaefer, Melvin G.; Taylor, George H.

    2015-02-01

    An analysis of uncertainty associated with Probable Maximum Precipitation (PMP) estimates is presented. The focus of the study is firmly on PMP estimates derived through meteorological analyses and not on statistically derived PMPs. Theoretical PMP cannot be computed directly and operational PMP estimates are developed through a stepwise procedure using a significant degree of subjective professional judgment. This paper presents a methodology for portraying the uncertain nature of PMP estimation by analyzing individual steps within the PMP derivation procedure whereby for each parameter requiring judgment, a set of possible values is specified and accompanied by expected probabilities. The resulting range of possible PMP values can be compared with the previously derived operational single-value PMP, providing measures of the conservatism and variability of the original estimate. To our knowledge, this is the first uncertainty analysis conducted for a PMP derived through meteorological analyses. The methodology was tested on the La Joie Dam watershed in British Columbia. The results indicate that the commonly used single-value PMP estimate could be more than 40% higher when possible changes in various meteorological variables used to derive the PMP are considered. The findings of this study imply that PMP estimates should always be characterized as a range of values recognizing the significant uncertainties involved in PMP estimation. In fact, we do not know at this time whether precipitation is actually upper-bounded, and if precipitation is upper-bounded, how closely PMP estimates approach the theoretical limit.
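
    The stepwise procedure described (a set of possible values with expected probabilities for each judgment step) amounts to enumerating scenario combinations. A sketch with made-up adjustment factors and probabilities, assuming the PMP is a base depth scaled by independent multiplicative factors:

    ```python
    import itertools

    # Each judgment step in the PMP derivation gets a small set of possible
    # values with expert-assigned probabilities (all values here are made up).
    # PMP = base storm depth * product of adjustment factors.
    base_depth_mm = 400.0
    steps = {
        "moisture_maximization": [(1.10, 0.3), (1.20, 0.5), (1.30, 0.2)],
        "storm_transposition":   [(0.95, 0.4), (1.00, 0.4), (1.10, 0.2)],
        "orographic_adjustment": [(1.00, 0.5), (1.15, 0.5)],
    }
    outcomes = []   # (pmp_value, probability) over all factor combinations
    for combo in itertools.product(*steps.values()):
        factor, prob = 1.0, 1.0
        for value, p in combo:
            factor *= value
            prob *= p
        outcomes.append((base_depth_mm * factor, prob))

    total_p = sum(p for _, p in outcomes)            # sanity check: sums to 1
    expected_pmp = sum(v * p for v, p in outcomes)   # probability-weighted mean
    pmp_max = max(v for v, _ in outcomes)            # most conservative combination
    ```

    Comparing `pmp_max` (or a high quantile of `outcomes`) with the single-value operational PMP is how the range-versus-point comparison in the study would be made.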

  18. Exploration of Uncertainty in Glacier Modelling

    NASA Technical Reports Server (NTRS)

    Thompson, David E.

    1999-01-01

    There are procedures and methods for verification of coding algebra and for validations of models and calculations that are in use in the aerospace computational fluid dynamics (CFD) community. These methods would be efficacious if used by the glacier dynamics modelling community. This paper is a presentation of some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modelling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modelling community, and establishes a context for these within overall solution quality assessment. Finally, an information architecture and interactive interface is introduced and advocated. This Integrated Cryospheric Exploration (ICE) Environment is proposed for exploring and managing sources of uncertainty in glacier modelling codes and methods, and for supporting scientific numerical exploration and verification. The details and functionality of this Environment are described based on modifications of a system already developed for CFD modelling and analysis.

  19. Improvement of Statistical Decisions under Parametric Uncertainty

    NASA Astrophysics Data System (ADS)

    Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Rozevskis, Uldis

    2011-10-01

    A large number of problems in production planning and scheduling, location, transportation, finance, and engineering design require that decisions be made in the presence of uncertainty. Decision-making under uncertainty is a central problem in statistical inference and has been formally studied in virtually all approaches to inference. The aim of the present paper is to show how the invariant embedding technique, the idea of which belongs to the authors, may be employed to find improved statistical decisions under parametric uncertainty. This technique represents a simple and computationally attractive statistical method based on the constructive use of the invariance principle in mathematical statistics. Unlike the Bayesian approach, the invariant embedding technique is independent of the choice of priors. It allows one to eliminate unknown parameters from the problem and to find the best invariant decision rule, which has smaller risk than any of the well-known decision rules. Application examples are given to illustrate the proposed technique.

  20. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2013-01-01

    This paper presents extended forward sensitivity analysis as a method to support uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method for quantifying numerical errors. By integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time-step and spatial-step sensitivity information reflects global numerical errors. The discretization errors can then be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of the time and space steps compared with other physical parameters of interest, the simulation can be run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results. The time- and space-step forward sensitivity method can also replace the traditional time-step and grid convergence study at much less computational cost. Two well-defined benchmark problems with manufactured solutions are used to demonstrate the method.
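
    The core idea of treating the time step as a sensitivity parameter can be illustrated with a finite-difference stand-in for the forward sensitivity equations (the paper derives the sensitivities analytically; the ODE and step sizes here are illustrative):

    ```python
    import math

    # Treat the time step h as a sensitivity parameter: solve dy/dt = -y
    # with explicit Euler at step h and at h*(1+eps), and form dY/dh by
    # finite differences. The magnitude of dY/dh tracks the global
    # discretization error relative to the exact solution exp(-T).
    def euler(h, T=1.0):
        y, t = 1.0, 0.0
        while t < T - 1e-12:
            y += h * (-y)
            t += h
        return y

    h, eps = 0.01, 1e-4
    dY_dh = (euler(h * (1 + eps)) - euler(h)) / (h * eps)  # time-step sensitivity
    discr_error = abs(euler(h) - math.exp(-1.0))           # actual global error
    ```

    A nonzero `dY_dh` flags that the solution is still sensitive to the discretization; once it is small compared with the physical-parameter sensitivities, the step sizes are adequate, replacing a separate convergence study.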

  1. Fluid flow dynamics under location uncertainty

    NASA Astrophysics Data System (ADS)

    Mémin, Etienne

    2014-03-01

    We present a derivation of a stochastic model of the Navier-Stokes equations that relies on a decomposition of the velocity fields into a differentiable drift component and a time-uncorrelated random uncertainty term. This type of decomposition is reminiscent in spirit of the classical Reynolds decomposition. However, the random velocity fluctuations considered here are not differentiable with respect to time, and they must be handled through stochastic calculus. The dynamics associated with the differentiable drift component is derived from a stochastic version of the Reynolds transport theorem. It includes in its general form an uncertainty-dependent "subgrid" bulk formula that cannot be immediately related to the usual Boussinesq eddy-viscosity assumption constructed by analogy with thermal molecular agitation. This formulation, emerging from uncertainties in the location of fluid parcels, explains from another viewpoint some subgrid eddy diffusion models currently used in computational fluid dynamics or in the geophysical sciences, and paves the way for new large-scale flow modelling. We finally describe an application of our formalism to the derivation of stochastic versions of the shallow water equations and to the definition of reduced-order dynamical systems.

  2. Quantifying and Qualifying USGS ShakeMap Uncertainty

    USGS Publications Warehouse

    Wald, David J.; Lin, Kuo-Wan; Quitoriano, Vincent

    2008-01-01

    We describe algorithms for quantifying and qualifying uncertainties associated with USGS ShakeMap ground motions. The uncertainty values computed consist of latitude/longitude grid-based multiplicative factors that scale the standard deviation associated with the ground motion prediction equation (GMPE) used within the ShakeMap algorithm for estimating ground motions. The resulting grid-based 'uncertainty map' is essential for evaluation of losses derived using ShakeMaps as the hazard input. For ShakeMap, ground motion uncertainty at any point is dominated by two main factors: (i) the influence of any proximal ground motion observations, and (ii) the uncertainty of estimating ground motions from the GMPE, most notably, elevated uncertainty due to initial, unconstrained source rupture geometry. The uncertainty is highest for larger magnitude earthquakes when source finiteness is not yet constrained and, hence, the distance to rupture is also uncertain. In addition to a spatially dependent, quantitative assessment, many users may prefer a simple, qualitative grading for the entire ShakeMap. We developed a grading scale that allows one to quickly gauge the appropriate level of confidence when using rapidly produced ShakeMaps as part of the post-earthquake decision-making process or for qualitative assessments of archived or historical earthquake ShakeMaps. We describe an uncertainty letter grading ('A' through 'F', for high to poor quality, respectively) based on the uncertainty map. A middle-range ('C') grade corresponds to a ShakeMap for a moderate-magnitude earthquake suitably represented with a point-source location. Lower grades 'D' and 'F' are assigned for larger events (M>6) where finite-source dimensions are not yet constrained. The addition of ground motion observations (or observed macroseismic intensities) reduces uncertainties over data-constrained portions of the map. Higher grades ('A' and 'B') correspond to ShakeMaps with constrained fault dimensions.
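
    The letter-grading idea reduces to mapping a map-wide uncertainty summary onto bins. The thresholds below are hypothetical, chosen only to illustrate the structure; the actual USGS grade boundaries differ:

    ```python
    # Hypothetical mapping from a map-mean uncertainty scale factor to a
    # letter grade, in the spirit of the ShakeMap grading scheme described
    # above. Threshold values are illustrative, not the USGS definitions.
    def shakemap_grade(mean_uncertainty_factor):
        thresholds = [(1.1, "A"), (1.3, "B"), (1.6, "C"), (2.0, "D")]
        for limit, grade in thresholds:
            if mean_uncertainty_factor <= limit:
                return grade
        return "F"
    ```

    For instance, a well-constrained map (factor near 1) grades 'A', while a large event with unconstrained rupture geometry pushes the factor, and thus the grade, toward 'F'.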

  3. Uncertainties of Mayak urine data

    SciTech Connect

    Miller, Guthrie; Vostrotin, Vadim; Vvdensky, Vladimir

    2008-01-01

    For internal dose calculations for the Mayak worker epidemiological study, quantitative estimates of uncertainty of the urine measurements are necessary. Some of the data consist of measurements of 24h urine excretion on successive days (e.g. 3 or 4 days). In a recent publication, dose calculations were done where the uncertainty of the urine measurements was estimated starting from the statistical standard deviation of these replicate measurements. This approach is straightforward and accurate when the number of replicate measurements is large; however, a Monte Carlo study showed it to be problematic for the actual number of replicate measurements (median from 3 to 4). Also, it is sometimes important to characterize the uncertainty of a single urine measurement. Therefore this alternate method has been developed. A method of parameterizing the uncertainty of Mayak urine bioassay measurements is described. The Poisson lognormal model is assumed and data from 63 cases (1099 urine measurements in all) are used to empirically determine the lognormal normalization uncertainty, given the measurement uncertainties obtained from count quantities. The natural logarithm of the geometric standard deviation of the normalization uncertainty is found to be in the range 0.31 to 0.35 including a measurement component estimated to be 0.2.
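
    Under a Poisson lognormal model, the counting and normalization components combine in log space. A sketch of that combination, taking the normalization ln-GSD near the reported range and an illustrative count quantity (the count value is an assumption, not from the paper):

    ```python
    import math

    # Combine the Poisson-derived counting uncertainty of a single urine
    # measurement with the empirically fitted lognormal normalization
    # uncertainty, adding in quadrature in log space.
    ln_gsd_norm = 0.33                       # fitted normalization component
    counts = 400                             # illustrative net counts
    rel_count_unc = 1.0 / math.sqrt(counts)  # Poisson relative uncertainty (0.05)
    ln_gsd_total = math.sqrt(ln_gsd_norm**2 + rel_count_unc**2)
    gsd_total = math.exp(ln_gsd_total)       # overall geometric standard deviation
    ```

    Because the normalization term (~0.33) dominates a typical counting term, the total GSD is driven by inter-sample normalization rather than counting statistics, which is the paper's central point.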

  4. Shock Layer Radiation Modeling and Uncertainty for Mars Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Brandis, Aaron M.; Sutton, Kenneth

    2012-01-01

    A model for simulating nonequilibrium radiation from Mars entry shock layers is presented. A new chemical kinetic rate model is developed that provides good agreement with recent EAST and X2 shock tube radiation measurements. This model includes a CO dissociation rate that is a factor of 13 larger than the rate used widely in previous models. Uncertainties in the proposed rates are assessed along with uncertainties in translational-vibrational relaxation modeling parameters. The stagnation point radiative flux uncertainty due to these flowfield modeling parameter uncertainties is computed to vary from 50 to 200% for a range of free-stream conditions, with densities ranging from 5e-5 to 5e-4 kg/m3 and velocities ranging from 6.3 to 7.7 km/s. These conditions cover the range of anticipated peak radiative heating conditions for proposed hypersonic inflatable aerodynamic decelerators (HIADs). Modeling parameters for the radiative spectrum are compiled along with a non-Boltzmann rate model for the dominant radiating molecules, CO, CN, and C2. A method for treating non-local absorption in the non-Boltzmann model is developed, which is shown to result in up to a 50% increase in the radiative flux through absorption by the CO 4th Positive band. The sensitivity of the radiative flux to the radiation modeling parameters is presented and the uncertainty for each parameter is assessed. The stagnation point radiative flux uncertainty due to these radiation modeling parameter uncertainties is computed to vary from 18 to 167% for the considered range of free-stream conditions. The total radiative flux uncertainty is computed as the root sum square of the flowfield and radiation parametric uncertainties, which results in total uncertainties ranging from 50 to 260%. The main contributors to these significant uncertainties are the CO dissociation rate and the CO heavy-particle excitation rates. Applying the baseline flowfield and radiation models developed in this work, the
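
    The root-sum-square combination quoted above can be checked directly from the bounding percentages in the abstract:

    ```python
    import math

    # Root-sum-square combination of flowfield and radiation parametric
    # uncertainties, using the bounding percentages quoted in the abstract.
    flowfield_pct = (50.0, 200.0)    # flowfield modeling parameter uncertainty
    radiation_pct = (18.0, 167.0)    # radiation modeling parameter uncertainty
    total_low = math.hypot(flowfield_pct[0], radiation_pct[0])    # ~53%
    total_high = math.hypot(flowfield_pct[1], radiation_pct[1])   # ~261%
    ```

    This reproduces the stated total range of roughly 50 to 260%, confirming that the two parametric families were treated as independent.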

  5. Subspace-based Inverse Uncertainty Quantification for Nuclear Data Assessment

    SciTech Connect

    Khuwaileh, B.A.; Abdel-Khalik, H.S.

    2015-01-15

    Safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. An inverse problem can be defined and solved to assess the sources of uncertainty, and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. In this work a subspace-based algorithm for inverse sensitivity/uncertainty quantification (IS/UQ) has been developed to enable analysts to account for all sources of nuclear data uncertainties in support of target accuracy assessment-type analysis. An approximate analytical solution of the optimization problem is used to guide the search for the dominant uncertainty subspace. By limiting the search to a subspace, the degrees of freedom available for the optimization search are significantly reduced. A quarter PWR fuel assembly is modeled and the accuracy of the multiplication factor and the fission reaction rate are used as reactor attributes whose uncertainties are to be reduced. Numerical experiments are used to demonstrate the computational efficiency of the proposed algorithm. Our ongoing work is focusing on extending the proposed algorithm to account for various forms of feedback, e.g., thermal-hydraulics and depletion effects.

  6. Measurement uncertainty of lactase-containing tablets analyzed with FTIR.

    PubMed

    Paakkunainen, Maaret; Kohonen, Jarno; Reinikainen, Satu-Pia

    2014-01-01

    Uncertainty is one of the most critical aspects in determination of measurement reliability. In order to ensure accurate measurements, results need to be traceable and uncertainty measurable. In this study, homogeneity of FTIR samples is determined with a combination of variographic and multivariate approaches. An approach for estimation of uncertainty within an individual sample, as well as within repeated samples, is introduced. FTIR samples containing two commercial pharmaceutical lactase products (LactaNON and Lactrase) are applied as an example of the procedure. The results showed that the approach is suitable for the purpose. The sample pellets were quite homogeneous, since the total uncertainty of each pellet varied between 1.5% and 2.5%. The heterogeneity within a tablet strip was found to be dominant, as 15-20 tablets have to be analyzed in order to achieve an expanded uncertainty level below 5.0%. Uncertainty arising from the FTIR instrument was <1.0%. The uncertainty estimates are computed directly from FTIR spectra without any concentration information of the analyte.
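
    The 15-20 tablet figure follows from standard expanded-uncertainty arithmetic: averaging over n tablets shrinks the dominant between-tablet term by sqrt(n). The component magnitudes below are illustrative choices consistent with the abstract's ranking (heterogeneity dominant, instrument <1%), not the paper's fitted values:

    ```python
    import math

    # How many tablets must be pooled so the expanded uncertainty (coverage
    # factor k = 2) of the mean stays below 5%? Component sizes are
    # illustrative, with between-tablet heterogeneity dominant.
    u_between = 10.0    # % relative, per tablet (dominant heterogeneity)
    u_pellet = 2.0      # % within-pellet homogeneity
    u_instrument = 1.0  # % FTIR instrument
    k = 2.0
    n = 1
    while True:
        u_mean = math.sqrt((u_between**2 + u_pellet**2 + u_instrument**2) / n)
        if k * u_mean < 5.0:
            break
        n += 1
    ```

    With these assumed components the loop stops at n = 17, inside the 15-20 range reported for the tablet strips.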

  7. Uncertainty in Regional Air Quality Modeling

    NASA Astrophysics Data System (ADS)

    Digar, Antara

    Effective pollution mitigation is the key to successful air quality management. Although states invest millions of dollars to predict future air quality, the regulatory modeling and analysis process to inform pollution control strategy remains uncertain. Traditionally, deterministic ‘bright-line’ tests are applied to evaluate the sufficiency of a control strategy to attain an air quality standard. A critical part of regulatory attainment demonstration is the prediction of future pollutant levels using photochemical air quality models. However, because models are uncertain, they yield a false sense of precision that pollutant response to emission controls is perfectly known and may eventually mislead the selection of control policies. These uncertainties in turn affect the health impact assessment of air pollution control strategies. This thesis explores beyond the conventional practice of deterministic attainment demonstration and presents novel approaches to yield probabilistic representations of pollutant response to emission controls by accounting for uncertainties in regional air quality planning. Computationally efficient methods are developed and validated to characterize uncertainty in the prediction of secondary pollutant (ozone and particulate matter) sensitivities to precursor emissions in the presence of uncertainties in model assumptions and input parameters. We also introduce impact factors that enable identification of model inputs and scenarios that strongly influence pollutant concentrations and sensitivity to precursor emissions. We demonstrate how these probabilistic approaches could be applied to determine the likelihood that any control measure will yield regulatory attainment, or could be extended to evaluate probabilistic health benefits of emission controls, considering uncertainties in both air quality models and epidemiological concentration-response relationships. Finally, ground-level observations for pollutant (ozone) and precursor

  8. Uncertainties in modelling of tidal flows off Singapore Island

    NASA Astrophysics Data System (ADS)

    Riddle, A. M.

    1996-09-01

    A hydrodynamic model has been constructed for predicting the tidal currents off the south west coast of Singapore Island and has been verified against survey data from the area. A study examining the sensitivity of the model to changes in the input parameters has been undertaken and a statistical experimental design technique has been employed to determine the minimum number of computer runs required to quantify the sensitivity of the model. The uncertainty in predicting the time of the turn of tide is estimated from the statistical results as is the uncertainty in the predicted movement of drogues in both simple and complex tidal flows. The method is extended to estimate the uncertainties in the spread and concentration of a dye patch using a particle tracking random walk model and allowing for the uncertainties in the wind and in horizontal and vertical mixing rates as determined from a series of experimental studies of dye patch spread.

  9. Analysis and Reduction of Complex Networks Under Uncertainty

    SciTech Connect

    Knio, Omar M

    2014-04-09

    This is a collaborative proposal that aims at developing new methods for the analysis and reduction of complex multiscale networks under uncertainty. The approach is based on combining methods of computational singular perturbation (CSP) and probabilistic uncertainty quantification. In deterministic settings, CSP yields asymptotic approximations of reduced-dimensionality “slow manifolds” on which a multiscale dynamical system evolves. Introducing uncertainty raises fundamentally new issues, particularly concerning its impact on the topology of slow manifolds, and means to represent and quantify associated variability. To address these challenges, this project uses polynomial chaos (PC) methods to reformulate uncertain network models, and to analyze them using CSP in probabilistic terms. Specific objectives include (1) developing effective algorithms that can be used to illuminate fundamental and unexplored connections among model reduction, multiscale behavior, and uncertainty, and (2) demonstrating the performance of these algorithms through applications to model problems.

  10. Numerical Simulation and Quantitative Uncertainty Assessment of Microchannel Flow

    NASA Astrophysics Data System (ADS)

    Debusschere, Bert; Najm, Habib; Knio, Omar; Matta, Alain; Ghanem, Roger; Le Maitre, Olivier

    2002-11-01

    This study investigates the effect of uncertainty in physical model parameters on computed electrokinetic flow of proteins in a microchannel with a potassium phosphate buffer. The coupled momentum, species transport, and electrostatic field equations give a detailed representation of electroosmotic and pressure-driven flow, including sample dispersion mechanisms. The chemistry model accounts for pH-dependent protein labeling reactions as well as detailed buffer electrochemistry in a mixed finite-rate/equilibrium formulation. To quantify uncertainty, the governing equations are reformulated using a pseudo-spectral stochastic methodology, which uses polynomial chaos expansions to describe uncertain/stochastic model parameters, boundary conditions, and flow quantities. Integration of the resulting equations for the spectral mode strengths gives the evolution of all stochastic modes for all variables. Results show the spatiotemporal evolution of uncertainties in predicted quantities and highlight the dominant parameters contributing to these uncertainties during various flow phases. This work is supported by DARPA.
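
    The polynomial chaos machinery used here can be sketched non-intrusively for a single uncertain parameter: expand the output in probabilists' Hermite polynomials of a standard normal germ and obtain coefficients by Gauss-Hermite quadrature. The scalar "model", parameter values, and expansion order are illustrative stand-ins, not the coupled microchannel solver:

    ```python
    import math
    import numpy as np

    # Non-intrusive polynomial chaos for one uncertain parameter p = mu + sigma*xi,
    # xi ~ N(0,1). Coefficients c_n = E[f * He_n] / n! from Gauss-Hermite quadrature.
    def model(p):                 # cheap stand-in for the flow solver output
        return 1.0 / (1.0 + p**2)

    mu, sigma = 2.0, 0.2
    nodes, weights = np.polynomial.hermite_e.hermegauss(8)
    weights = weights / math.sqrt(2.0 * math.pi)   # normalize to the N(0,1) measure

    def He(n, x):                 # probabilists' Hermite polynomial He_n
        return np.polynomial.hermite_e.hermeval(x, [0.0] * n + [1.0])

    coeffs = [np.sum(weights * model(mu + sigma * nodes) * He(n, nodes))
              / math.factorial(n)                  # since E[He_n^2] = n!
              for n in range(4)]
    mean_pc = coeffs[0]                            # mean of the output
    var_pc = sum(math.factorial(n) * coeffs[n]**2 for n in range(1, 4))
    ```

    The higher-mode strengths `coeffs[1:]` play the role of the "spectral mode strengths" evolved in the study, and their relative sizes identify the dominant uncertainty contributions.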

  11. Generalized uncertainty principle and black hole thermodynamics

    NASA Astrophysics Data System (ADS)

    Gangopadhyay, Sunandan; Dutta, Abhijit; Saha, Anirban

    2014-02-01

    We study the Schwarzschild and Reissner-Nordström black hole thermodynamics using the simplest form of the generalized uncertainty principle (GUP) proposed in the literature. The expressions for the mass-temperature relation, heat capacity and entropy are obtained in both cases, from which the critical and remnant masses are computed. Our results are exact and reveal that these masses are identical and larger than the so-called singular mass for which the thermodynamic quantities become ill-defined. The expression for the entropy reveals the well-known area theorem in terms of the horizon area in both cases up to leading-order corrections from the GUP. The area theorem written in terms of a new variable, which can be interpreted as the reduced horizon area, arises only when the computation is carried out to the next-higher-order correction from the GUP.
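
    For orientation, the simplest GUP form typically meant by such analyses modifies the Heisenberg relation with a single deformation parameter β (conventions for β vary between papers):

    ```latex
    \Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}\left[\,1 + \beta\,(\Delta p)^{2}\right]
    ```

    Saturating the inequality and solving for Δp yields real roots only when Δx ≥ ħ√β, so a minimal length scale appears; it is this scale that gives rise to the critical and remnant masses discussed in the abstract.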

  12. Realising the Uncertainty Enabled Model Web

    NASA Astrophysics Data System (ADS)

    Cornford, D.; Bastin, L.; Pebesma, E. J.; Williams, M.; Stasch, C.; Jones, R.; Gerharz, L.

    2012-12-01

    conversion between uncertainty types, and between the spatial / temporal support of service inputs / outputs. Finally we describe the tools being generated within the UncertWeb project, considering three main aspects: i) Elicitation of uncertainties on model inputs. We are developing tools to enable domain experts to provide judgements about input uncertainties from UncertWeb model components (e.g. parameters in meteorological models) which allow panels of experts to engage in the process and reach a consensus view on the current knowledge / beliefs about that parameter or variable. We are developing systems for continuous and categorical variables as well as stationary spatial fields. ii) Visualisation of the resulting uncertain outputs from the end of the workflow, but also at intermediate steps. At this point we have prototype implementations driven by the requirements from the use cases that motivate UncertWeb. iii) Sensitivity and uncertainty analysis on model outputs. Here we show the design of the overall system we are developing, including the deployment of an emulator framework to allow computationally efficient approaches. We conclude with a summary of the open issues and remaining challenges we are facing in UncertWeb, and provide a brief overview of how we plan to tackle these.

  13. Uncertainty quantification of squeal instability via surrogate modelling

    NASA Astrophysics Data System (ADS)

    Nobari, Amir; Ouyang, Huajiang; Bannister, Paul

    2015-08-01

    One of the major issues that car manufacturers are facing is the noise and vibration of brake systems. Of the different sorts of noise and vibration, which a brake system may generate, squeal as an irritating high-frequency noise costs the manufacturers significantly. Despite considerable research that has been conducted on brake squeal, the root cause of squeal is still not fully understood. The most common assumption, however, is mode-coupling. Complex eigenvalue analysis is the most widely used approach to the analysis of brake squeal problems. One of the major drawbacks of this technique, nevertheless, is that the effects of variability and uncertainty are not included in the results. Apparently, uncertainty and variability are two inseparable parts of any brake system. Uncertainty is mainly caused by friction, contact, wear and thermal effects while variability mostly stems from the manufacturing process, material properties and component geometries. Evaluating the effects of uncertainty and variability in the complex eigenvalue analysis improves the predictability of noise propensity and helps produce a more robust design. The biggest hurdle in the uncertainty analysis of brake systems is the computational cost and time. Most uncertainty analysis techniques rely on the results of many deterministic analyses. A full finite element model of a brake system typically consists of millions of degrees-of-freedom and many load cases. Running time of such models is so long that automotive industry is reluctant to do many deterministic analyses. This paper, instead, proposes an efficient method of uncertainty propagation via surrogate modelling. A surrogate model of a brake system is constructed in order to reproduce the outputs of the large-scale finite element model and overcome the issue of computational workloads. The probability distribution of the real part of an unstable mode can then be obtained by using the surrogate model with a massive saving of
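
    The surrogate-modelling strategy can be sketched end to end: a handful of "expensive" complex-eigenvalue runs train a cheap surrogate, and Monte Carlo sampling then runs on the surrogate alone. The stand-in solver, friction distribution, and polynomial surrogate below are assumptions for illustration, not the paper's finite element model:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in for the full FE complex eigenvalue analysis: returns the real
    # part of the least stable mode as a function of friction coefficient.
    def fe_model(mu_friction):
        return 120.0 * (mu_friction - 0.45) + 200.0 * (mu_friction - 0.45) ** 2

    # Train a quadratic surrogate on only 7 deterministic "runs".
    train_mu = np.linspace(0.3, 0.6, 7)
    coef = np.polyfit(train_mu, fe_model(train_mu), deg=2)

    # Propagate uncertainty via the surrogate: friction ~ N(0.45, 0.05).
    samples = rng.normal(0.45, 0.05, 100_000)
    re_parts = np.polyval(coef, samples)
    p_unstable = np.mean(re_parts > 0.0)   # probability of a squeal-prone mode
    ```

    The 100,000 surrogate evaluations replace 100,000 full FE solves, which is the "massive saving" the paper targets; the output is the probability distribution of the unstable mode's real part.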

  14. Acoustic field and array response uncertainties in stratified ocean media.

    PubMed

    Hayward, Thomas J; Dhakal, Sagar

    2012-07-01

    The change-of-variables theorem of probability theory is applied to compute acoustic field and array beam power probability density functions (pdfs) in uncertain ocean environments represented by stratified, attenuating ocean waveguide models. Computational studies for one and two-layer waveguides investigate the functional properties of the acoustic field and array beam power pdfs. For the studies, the acoustic parameter uncertainties are represented by parametric pdfs. The field and beam response pdfs are computed directly from the parameter pdfs using the normal-mode representation and the change-of-variables theorem. For two-dimensional acoustic parameter uncertainties of sound speed and attenuation, the field and beam power pdfs exhibit irregular functional behavior and singularities associated with stationary points of the mapping, defined by acoustic propagation, from the parameter space to the field or beam power space. Implications for the assessment of orthogonal polynomial expansion and other methods for computing acoustic field pdfs are discussed.
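
    The change-of-variables computation at the heart of this approach is direct for a monotonic parameter-to-output map: p_Y(y) = p_X(g⁻¹(y)) / |g'(g⁻¹(y))|. A toy example with a hypothetical smooth response in place of the normal-mode propagation model:

    ```python
    import math

    # Change of variables for a monotonic map Y = g(X). Toy setup: sound
    # speed X ~ Uniform(1490, 1510) m/s and a hypothetical smooth response
    # g(x) = (x - 1480)^2 standing in for acoustic propagation.
    a, b = 1490.0, 1510.0
    p_x = 1.0 / (b - a)                 # uniform parameter density

    def g(x):
        return (x - 1480.0) ** 2

    def p_y(y):
        x = 1480.0 + math.sqrt(y)       # inverse of g on the branch covering [a, b]
        if a <= x <= b:
            return p_x / abs(2.0 * (x - 1480.0))   # p_X / |g'|
        return 0.0

    # Sanity check: the transformed density integrates to 1 over [g(a), g(b)].
    n = 10_000
    lo, hi = g(a), g(b)
    h = (hi - lo) / n
    total = sum(p_y(lo + (i + 0.5) * h) for i in range(n)) * h
    ```

    The singularities discussed in the abstract arise exactly where g'(x) = 0 (stationary points of the parameter-to-field map), since the 1/|g'| factor then blows up.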

  15. Visualizing uncertainty about the future.

    PubMed

    Spiegelhalter, David; Pearson, Mike; Short, Ian

    2011-09-01

    We are all faced with uncertainty about the future, but we can get the measure of some uncertainties in terms of probabilities. Probabilities are notoriously difficult to communicate effectively to lay audiences, and in this review we examine current practice for communicating uncertainties visually, using examples drawn from sport, weather, climate, health, economics, and politics. Despite the burgeoning interest in infographics, there is limited experimental evidence on how different types of visualizations are processed and understood, although the effectiveness of some graphics clearly depends on the relative numeracy of an audience. Fortunately, it is increasingly easy to present data in the form of interactive visualizations and in multiple types of representation that can be adjusted to user needs and capabilities. Nonetheless, communicating deeper uncertainties resulting from incomplete or disputed knowledge--or from essential indeterminacy about the future--remains a challenge.

  16. Uncertainty analysis of thermoreflectance measurements.

    PubMed

    Yang, Jia; Ziade, Elbara; Schmidt, Aaron J

    2016-01-01

    We derive a generally applicable formula to calculate the precision of multi-parameter measurements that apply least squares algorithms. This formula, which accounts for experimental noise and uncertainty in the controlled model parameters, is then used to analyze the uncertainty of thermal property measurements with pump-probe thermoreflectance techniques. We compare the uncertainty of time domain thermoreflectance and frequency domain thermoreflectance (FDTR) when measuring bulk materials and thin films, considering simultaneous measurements of various combinations of thermal properties, including thermal conductivity, heat capacity, and thermal boundary conductance. We validate the uncertainty analysis using Monte Carlo simulations on data from FDTR measurements of an 80 nm gold film on fused silica. PMID:26827342
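
    The linearized precision formula for least-squares fits, cov(θ) ≈ σ²(JᵀJ)⁻¹, can be validated against Monte Carlo over noise realizations, in the spirit of the paper's check. The model below is a simple exponential decay with invented parameters, not the thermoreflectance model:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Linearized precision of a least-squares fit: cov(theta) = s^2 (J^T J)^-1,
    # verified by Monte Carlo. Model: y = a * exp(-k * t), fit for (a, k).
    t = np.linspace(0.0, 2.0, 50)
    true_a, true_k = 1.0, 1.3
    sigma = 0.01                        # measurement noise level

    # Jacobian of the model w.r.t. (a, k), evaluated at the true parameters.
    J = np.column_stack([np.exp(-true_k * t),
                         -true_a * t * np.exp(-true_k * t)])
    cov_lin = sigma**2 * np.linalg.inv(J.T @ J)

    # Monte Carlo check: for the linearized problem, each refit reduces to a
    # linear least-squares solve of J @ delta = noise.
    ests = []
    for _ in range(2000):
        noise = rng.normal(0.0, sigma, t.size)
        delta = np.linalg.lstsq(J, noise, rcond=None)[0]
        ests.append(delta)
    cov_mc = np.cov(np.array(ests).T)
    ```

    Agreement of `cov_mc` with `cov_lin` confirms the analytical formula; in the paper the same comparison is made for FDTR data on an 80 nm gold film.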

  17. Estimations of uncertainties of frequencies

    NASA Astrophysics Data System (ADS)

    Eyer, Laurent; Nicoletti, Jean-Marc; Morgenthaler, Stephan

    2015-08-01

    Diverse variable phenomena in the Universe are periodic. Astonishingly, many of the periodic signals present in stars have timescales coinciding with human ones (from minutes to years). The periods of signals often have to be deduced from time series which are irregularly sampled and sparse; furthermore, correlations between the brightness measurements and their estimated uncertainties are common. The uncertainty of the frequency estimation is reviewed. We explore the astronomical and statistical literature for both regular and irregular samplings. The frequency uncertainty depends on the signal-to-noise ratio, the frequency, and the observational timespan. The shape of the light curve should also intervene, since sharp features such as exoplanet transits, stellar eclipses, and the rising branches of pulsating stars give stringent constraints. We propose several procedures (parametric and nonparametric) to estimate the uncertainty on the frequency, which are subsequently tested against simulated data to assess their performances.

  19. Climate Projections and Uncertainty Communication.

    PubMed

    Joslyn, Susan L; LeClerc, Jared E

    2016-01-01

    Lingering skepticism about climate change might be due in part to the way climate projections are perceived by members of the public. Variability between scientists' estimates might give the impression that scientists disagree about the fact of climate change rather than about details concerning the extent or timing. Providing uncertainty estimates might clarify that the variability is due in part to quantifiable uncertainty inherent in the prediction process, thereby increasing people's trust in climate projections. This hypothesis was tested in two experiments. Results suggest that including uncertainty estimates along with climate projections leads to an increase in participants' trust in the information. Analyses explored the roles of time, place, demographic differences (e.g., age, gender, education level, political party affiliation), and initial belief in climate change. Implications are discussed in terms of the potential benefit of adding uncertainty estimates to public climate projections. PMID:26695995

  20. Uncertainty quantification in reacting flow modeling.

    SciTech Connect

    Le Maître, Olivier P.; Reagan, Matthew T.; Knio, Omar M.; Ghanem, Roger Georges; Najm, Habib N.

    2003-10-01

    Uncertainty quantification (UQ) in the computational modeling of physical systems is important for scientific investigation, engineering design, and model validation. In this work we develop techniques for UQ based on spectral and pseudo-spectral polynomial chaos (PC) expansions, and we apply these constructions in computations of reacting flow. We develop and compare both intrusive and non-intrusive spectral PC techniques. In the intrusive construction, the deterministic model equations are reformulated using Galerkin projection into a set of equations for the time evolution of the field variable PC expansion mode strengths. The mode strengths relate specific parametric uncertainties to their effects on model outputs. The non-intrusive construction uses sampling of many realizations of the original deterministic model, and projects the resulting statistics onto the PC modes, arriving at the PC expansions of the model outputs. We investigate and discuss the strengths and weaknesses of each approach, and identify their utility under different conditions. We also outline areas where ongoing and future research are needed to address challenges with both approaches.
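The non-intrusive construction can be sketched on a scalar example. Below, the model output is u(xi) = exp(xi) with xi standard normal; its polynomial chaos coefficients are recovered by quadrature projection onto probabilists' Hermite polynomials (a minimal one-variable sketch, not the reacting-flow setup of the paper):

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi, e

P = 8                        # PC truncation order
x, w = He.hermegauss(30)     # Gauss-Hermite(e) rule, weight exp(-x^2/2)
w = w / sqrt(2.0 * pi)       # normalize so the weights integrate the N(0,1) density

f = np.exp(x)                # "model evaluations" at the quadrature nodes

# Non-intrusive projection: a_n = E[f(xi) He_n(xi)] / n!
coeff = []
for n in range(P + 1):
    Hn = He.hermeval(x, [0]*n + [1])           # He_n evaluated at the nodes
    coeff.append(float(np.sum(w * f * Hn)) / factorial(n))

mean = coeff[0]                                 # first mode = mean
var = sum(c*c*factorial(n) for n, c in enumerate(coeff) if n > 0)
print(mean, var)  # compare with exact e**0.5 and e*(e-1)
```

The intrusive alternative would instead Galerkin-project the governing equations themselves, evolving the mode strengths `coeff` directly rather than sampling the deterministic model.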

  1. [DNA computing].

    PubMed

    Błasiak, Janusz; Krasiński, Tadeusz; Popławski, Tomasz; Sakowski, Sebastian

    2011-01-01

    Biocomputers can be an alternative for traditional "silicon-based" computers, which continuous development may be limited due to further miniaturization (imposed by the Heisenberg Uncertainty Principle) and increasing the amount of information between the central processing unit and the main memory (von Neuman bottleneck). The idea of DNA computing came true for the first time in 1994, when Adleman solved the Hamiltonian Path Problem using short DNA oligomers and DNA ligase. In the early 2000s a series of biocomputer models was presented with a seminal work of Shapiro and his colleguas who presented molecular 2 state finite automaton, in which the restriction enzyme, FokI, constituted hardware and short DNA oligomers were software as well as input/output signals. DNA molecules provided also energy for this machine. DNA computing can be exploited in many applications, from study on the gene expression pattern to diagnosis and therapy of cancer. The idea of DNA computing is still in progress in research both in vitro and in vivo and at least promising results of these research allow to have a hope for a breakthrough in the computer science. PMID:21735816

  2. A method for approximating acoustic-field-amplitude uncertainty caused by environmental uncertainties.

    PubMed

    James, Kevin R; Dowling, David R

    2008-09-01

    In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations and up to 10^N when N>1.

  3. Dynamical Realism and Uncertainty Propagation

    NASA Astrophysics Data System (ADS)

    Park, Inkwan

    In recent years, Space Situational Awareness (SSA) has become increasingly important as the number of tracked Resident Space Objects (RSOs) continues to grow. One of the most significant technical discussions in SSA is how to propagate state uncertainty in a way consistent with the highly nonlinear dynamical environment. To keep pace with this situation, various methods have been proposed to propagate uncertainty accurately by capturing the nonlinearity of the dynamical system. We note that all of these methods focus on describing the dynamical system as precisely as possible from a mathematical perspective. This study proposes a new perspective based on understanding the dynamics of the evolution of uncertainty itself. We expect that deeper insight into the dynamical system will open the possibility of developing a new method for accurate uncertainty propagation. These considerations lead naturally to the goals of the study. First, we investigate the most dominant factors in the evolution of uncertainty in order to describe the dynamical system more rigorously. Second, building on that investigation, we aim to develop a new method that propagates orbit uncertainty efficiently while maintaining accuracy. We eliminate the short-period variations from the dynamical system, yielding what we call a simplified dynamical system (SDS), to investigate the most dominant factors. To achieve this, the Lie transformation method is introduced, since this transformation can define the solutions for each variation separately. From the first investigation, we conclude that the secular variations, including the long-period variations, are dominant for the propagation of uncertainty, i.e., short-period variations are negligible. We then develop the new method by combining the SDS with a higher-order nonlinear expansion method, the state transition tensors (STTs). The new method retains the advantages of the SDS and the STTs and propagates

  4. Wildfire Decision Making Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Thompson, M.

    2013-12-01

    Decisions relating to wildfire management are subject to multiple sources of uncertainty, and are made by a broad range of individuals, across a multitude of environmental and socioeconomic contexts. In this presentation I will review progress towards identification and characterization of uncertainties and how this information can support wildfire decision-making. First, I will review a typology of uncertainties common to wildfire management, highlighting some of the more salient sources of uncertainty and how they present challenges to assessing wildfire risk. This discussion will cover the expanding role of burn probability modeling, approaches for characterizing fire effects, and the role of multi-criteria decision analysis, and will provide illustrative examples of integrated wildfire risk assessment across a variety of planning scales. Second, I will describe a related uncertainty typology that focuses on the human dimensions of wildfire management, specifically addressing how social, psychological, and institutional factors may impair cost-effective risk mitigation. This discussion will encompass decision processes before, during, and after fire events, with a specific focus on active management of complex wildfire incidents. An improved ability to characterize uncertainties faced in wildfire management could lead to improved delivery of decision support, targeted communication strategies, and ultimately to improved wildfire management outcomes.

  5. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  6. Measurement uncertainty analysis techniques applied to PV performance measurements

    SciTech Connect

    Wells, C

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis, but uncertainty analysis, a more recent development, gives greater insight into measurement processes and into test, experiment, and calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value, or an experiment's final result, within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. Valid data are defined here as data having known and documented paths of origin (including theory), measurements, traceability to measurement standards, computations, and uncertainty analysis of results.
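A minimal pre-test budget of the kind described combines systematic and random standard uncertainties in root-sum-square fashion and applies a coverage factor. The numbers below are hypothetical, purely to show the arithmetic:

```python
import math

# hypothetical PV module power measurement budget
bias = 0.8            # systematic standard uncertainty, W
s = 1.5               # sample standard deviation of repeated readings, W
n = 10                # number of repeated readings
random_u = s / math.sqrt(n)   # standard uncertainty of the mean

combined = math.sqrt(bias**2 + random_u**2)   # RSS combination
U95 = 2.0 * combined                          # expanded uncertainty, k = 2

print(round(combined, 3), round(U95, 3))
```

Running the budget before the test (the "pre-test analysis" of the abstract) shows immediately whether more repeats help: here the bias term dominates, so increasing `n` beyond 10 buys little.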

  7. A flexible numerical approach for quantification of epistemic uncertainty

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoxiao; Park, Eun-Jae; Xiu, Dongbin

    2013-05-01

    In the field of uncertainty quantification (UQ), epistemic uncertainty often refers to the kind of uncertainty whose complete probabilistic description is not available, largely due to our lack of knowledge about the uncertainty. Quantification of the impacts of epistemic uncertainty is naturally difficult, because most of the existing stochastic tools rely on the specification of probability distributions and thus do not readily apply to epistemic uncertainty. Few studies and methods exist to deal with epistemic uncertainty. A recent work can be found in [J. Jakeman, M. Eldred, D. Xiu, Numerical approach for quantification of epistemic uncertainty, J. Comput. Phys. 229 (2010) 4648-4663], where a framework for numerical treatment of epistemic uncertainty was proposed. The method is based on solving an encapsulation problem, without using any probability information, in a hypercube that encapsulates the unknown epistemic probability space. If more probabilistic information about the epistemic variables is known a posteriori, the solution statistics can then be evaluated at post-process steps. In this paper, we present a new method, similar to that of Jakeman et al. but significantly extending its capabilities. Most notably, the new method (1) does not require the encapsulation problem to be in a bounded domain such as a hypercube; (2) does not require the solution of the encapsulation problem to converge point-wise. In the current formulation, the encapsulation problem could reside in an unbounded domain, and more importantly, its numerical approximation could be sought in the Lp norm. These features thus make the new approach more flexible and amenable to practical implementation. Both the mathematical framework and numerical analysis are presented to demonstrate the effectiveness of the new approach.
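The encapsulation idea can be shown in one dimension. Below, a toy ODE solution is built over an encapsulating interval with no probability assumed; only in post-processing, once a distribution is (hypothetically) learned, are statistics evaluated. The model, interval, and a posteriori law are this author's illustrative choices, not those of the paper:

```python
import numpy as np

def model(alpha):
    # encapsulation "solution": u(t=1) for du/dt = -alpha*u, u(0) = 1
    return np.exp(-alpha)

# epistemic variable alpha: only known to lie somewhere in [0.5, 2.0].
# Solve the encapsulation problem over that interval, probability-free.
nodes = np.linspace(0.5, 2.0, 151)        # spacing h = 0.01
u = model(nodes)

# A posteriori: suppose alpha is later learned to be uniform on [0.8, 1.2].
# Statistics are then evaluated purely in post-processing.
m = (nodes > 0.795) & (nodes < 1.205)     # nodes covering [0.8, 1.2]
h = 0.01
integral = h * (0.5*u[m][0] + u[m][1:-1].sum() + 0.5*u[m][-1])  # trapezoid rule
mean_u = float(integral / 0.4)            # mean of u under the uniform law
print(mean_u)
```

No new model solves are needed when the distribution changes; only the cheap post-processing step is repeated, which is the practical appeal of the framework.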

  8. A flexible numerical approach for quantification of epistemic uncertainty

    SciTech Connect

    Chen, Xiaoxiao; Park, Eun-Jae; Xiu, Dongbin

    2013-05-01

    In the field of uncertainty quantification (UQ), epistemic uncertainty often refers to the kind of uncertainty whose complete probabilistic description is not available, largely due to our lack of knowledge about the uncertainty. Quantification of the impacts of epistemic uncertainty is naturally difficult, because most of the existing stochastic tools rely on the specification of probability distributions and thus do not readily apply to epistemic uncertainty. Few studies and methods exist to deal with epistemic uncertainty. A recent work can be found in [J. Jakeman, M. Eldred, D. Xiu, Numerical approach for quantification of epistemic uncertainty, J. Comput. Phys. 229 (2010) 4648–4663], where a framework for numerical treatment of epistemic uncertainty was proposed. The method is based on solving an encapsulation problem, without using any probability information, in a hypercube that encapsulates the unknown epistemic probability space. If more probabilistic information about the epistemic variables is known a posteriori, the solution statistics can then be evaluated at post-process steps. In this paper, we present a new method, similar to that of Jakeman et al. but significantly extending its capabilities. Most notably, the new method (1) does not require the encapsulation problem to be in a bounded domain such as a hypercube; (2) does not require the solution of the encapsulation problem to converge point-wise. In the current formulation, the encapsulation problem could reside in an unbounded domain, and more importantly, its numerical approximation could be sought in the Lp norm. These features thus make the new approach more flexible and amenable to practical implementation. Both the mathematical framework and numerical analysis are presented to demonstrate the effectiveness of the new approach.

  9. Principals' Sense of Uncertainty and Organizational Learning Mechanisms

    ERIC Educational Resources Information Center

    Schechter, Chen; Asher, Neomi

    2012-01-01

    Purpose: The purpose of the present study is to examine the effect of principals' sense of uncertainty on organizational learning mechanisms (OLMs) in schools. Design/methodology/approach: Data were collected from 130 school principals (90 women and 40 men) from both Tel-Aviv and Central districts in Israel. After computing the correlation between…

  10. Propagation of Uncertainty in System Parameters of a LWR Model by Sampling MCNPX Calculations - Burnup Analysis

    NASA Astrophysics Data System (ADS)

    Campolina, Daniel de A. M.; Lima, Claubia P. B.; Veloso, Maria Auxiliadora F.

    2014-06-01

    For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for best-estimate calculations, which have been replacing conservative model calculations as computational power increases. The propagation of uncertainty through a Monte Carlo code by sampling the input parameters is a recent development because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in the input parameters of the reactor included geometry dimensions and densities. The results demonstrate the capability of the sampling-based method for burnup calculations when the sample size is optimized and many parameter uncertainties are investigated together in the same input.
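The sample-size optimization mentioned above can be sketched directly from the first-order two-sided Wilks criterion: choose the smallest n such that the interval spanned by the sample minimum and maximum covers at least a fraction gamma of the population with confidence beta:

```python
def wilks_two_sided(gamma=0.95, beta=0.95):
    """Smallest n with P(coverage of [min, max] >= gamma) >= beta,
    using the first-order two-sided Wilks formula."""
    n = 2
    while True:
        conf = 1.0 - n * gamma**(n - 1) + (n - 1) * gamma**n
        if conf >= beta:
            return n
        n += 1

print(wilks_two_sided())  # 93 code runs for a two-sided 95%/95% tolerance interval
```

The appeal for expensive MCNPX burnup calculations is that 93 runs suffice regardless of how many input parameters are sampled simultaneously.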

  11. Structural model uncertainty in stochastic simulation

    SciTech Connect

    McKay, M.D.; Morrison, J.D.

    1997-09-01

    Prediction uncertainty in stochastic simulation models can be described by a hierarchy of components: stochastic variability at the lowest level, input and parameter uncertainty at a higher level, and structural model uncertainty at the top. It is argued that a usual paradigm for analysis of input uncertainty is not suitable for application to structural model uncertainty. An approach more likely to produce an acceptable methodology for analyzing structural model uncertainty is one that uses characteristics specific to the particular family of models.

  12. A comparative study of new non-linear uncertainty propagation methods for space surveillance

    NASA Astrophysics Data System (ADS)

    Horwood, Joshua T.; Aristoff, Jeffrey M.; Singh, Navraj; Poore, Aubrey B.

    2014-06-01

    We propose a unified testing framework for assessing uncertainty realism during non-linear uncertainty propagation under the perturbed two-body problem of celestial mechanics, with an accompanying suite of metrics and benchmark test cases on which to validate different methods. We subsequently apply the testing framework to different combinations of uncertainty propagation techniques and coordinate systems for representing the uncertainty. In particular, we recommend the use of a newly-derived system of orbital element coordinates that mitigate the non-linearities in uncertainty propagation and the recently-developed Gauss von Mises filter which, when used in tandem, provide uncertainty realism over much longer periods of time compared to Gaussian representations of uncertainty in Cartesian spaces, at roughly the same computational cost.

  13. Probabilistic uncertainty analysis of an FRF of a structure using a Gaussian process emulator

    NASA Astrophysics Data System (ADS)

    Fricker, Thomas E.; Oakley, Jeremy E.; Sims, Neil D.; Worden, Keith

    2011-11-01

    This paper introduces methods for probabilistic uncertainty analysis of a frequency response function (FRF) of a structure obtained via a finite element (FE) model. The methods are applicable to computationally expensive FE models, making use of a Bayesian metamodel known as an emulator. The emulator produces fast predictions of the FE model output, but also accounts for the additional uncertainty induced by only having a limited number of model evaluations. Two approaches to the probabilistic uncertainty analysis of FRFs are developed. The first considers the uncertainty in the response at discrete frequencies, giving pointwise uncertainty intervals. The second considers the uncertainty in an entire FRF across a frequency range, giving an uncertainty envelope function. The methods are demonstrated and compared to alternative approaches in a practical case study.
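The core of such an emulator is Gaussian process regression conditioned on a limited set of model runs, returning a mean prediction plus the extra "code uncertainty" from having few evaluations. The sketch below is a generic one-dimensional GP with an RBF kernel and hypothetical training responses, not the paper's FE/FRF setup:

```python
import numpy as np

def rbf(a, b, ell=0.3, sf=1.0):
    # squared-exponential covariance between point sets a and b
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell)**2)

# a few expensive "model" runs of a response curve (hypothetical)
xtr = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
ytr = np.sin(2 * np.pi * xtr)

xs = np.linspace(0.0, 1.0, 101)
K = rbf(xtr, xtr) + 1e-8 * np.eye(xtr.size)    # jitter for conditioning
Ks = rbf(xs, xtr)
alpha = np.linalg.solve(K, ytr)

mean = Ks @ alpha                               # emulator prediction
cov = rbf(xs, xs) - Ks @ np.linalg.solve(K, Ks.T)
sd = np.sqrt(np.clip(np.diag(cov), 0.0, None))  # pointwise emulator uncertainty

print(float(mean[50]), float(sd[50]))           # largest sd between training runs
```

The pointwise standard deviation `sd` plays the role of the discrete-frequency uncertainty intervals in the first approach of the paper; an envelope over a frequency range corresponds to reasoning about the full posterior process rather than its marginals.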

  14. Position-momentum uncertainty relations based on moments of arbitrary order

    SciTech Connect

    Zozor, Steeve; Portesi, Mariela; Sanchez-Moreno, Pablo; Dehesa, Jesus S.

    2011-05-15

    The position-momentum uncertainty-like inequality based on moments of arbitrary order for d-dimensional quantum systems, which is a generalization of the celebrated Heisenberg formulation of the uncertainty principle, is improved here by use of the Renyi-entropy-based uncertainty relation. The accuracy of the resulting lower bound is physico-computationally analyzed for the two main prototypes in d-dimensional physics: the hydrogenic and oscillator-like systems.

  15. A discussion on the Heisenberg uncertainty principle from the perspective of special relativity

    NASA Astrophysics Data System (ADS)

    Nanni, Luca

    2016-09-01

    In this note, we consider the implications of the Heisenberg uncertainty principle (HUP) when computing uncertainties that affect the main dynamical quantities, from the perspective of special relativity. Using the well-known formula for propagating statistical errors, we prove that the uncertainty relations between the moduli of conjugate observables are not relativistically invariant. The new relationships show that, in experiments involving relativistic particles, limitations of the precision of a quantity obtained by indirect calculations may affect the final result.
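The well-known propagation formula the note relies on is the standard first-order one for a function f of independently measured quantities x_1, ..., x_n:

```latex
\delta f \;=\; \sqrt{\sum_{i=1}^{n}\left(\frac{\partial f}{\partial x_i}\right)^{2}(\delta x_i)^{2}}
```

Applying it to relativistic expressions such as p = \gamma m v introduces velocity-dependent partial derivatives, which is why the resulting uncertainty relations between moduli of conjugate observables fail to be frame invariant.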

  16. On the formulation of a minimal uncertainty model for robust control with structured uncertainty

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert

    1991-01-01

    In the design and analysis of robust control systems for uncertain plants, representing the system transfer matrix in the form of what has come to be termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents a transfer function matrix M(s) of the nominal closed loop system, and the delta represents an uncertainty matrix acting on M(s). The nominal closed loop system M(s) results from closing the feedback control system, K(s), around a nominal plant interconnection structure P(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or unstructured uncertainty from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, but for real parameter variations delta is a diagonal matrix of real elements. Conceptually, the M-delta structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the currently available literature addresses computational methods for obtaining this structure, and none of this literature addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty, where the term minimal refers to the dimension of the delta matrix. Since having a minimally dimensioned delta matrix would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta would be useful. Hence, a method of obtaining the interconnection system P(s) is required. A generalized procedure for obtaining a minimal P-delta structure for systems with real parameter variations is presented. Using this model, the minimal M-delta model can then be easily obtained by closing the feedback loop.
The procedure involves representing the system in a cascade-form state-space realization, determining the minimal uncertainty matrix
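In the standard notation (assumed here, since the abstract does not spell it out), the nominal closed loop M(s) is the lower linear fractional transformation of P(s) and K(s), and the perturbed closed loop closes the block-diagonal Delta around M:

```latex
M(s) = F_{\ell}\big(P(s),K(s)\big), \qquad
F_{u}\big(M(s),\Delta\big)
  = M_{22} + M_{21}\,\Delta\,\bigl(I - M_{11}\Delta\bigr)^{-1} M_{12},
\qquad
\Delta = \operatorname{diag}\big(\delta_1 I_{r_1},\dots,\delta_k I_{r_k}\big)
```

Minimality then means minimizing the repetition counts r_1, ..., r_k, since the cost of structured singular value computations grows with the dimension of Delta.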

  17. RUMINATIONS ON NDA MEASUREMENT UNCERTAINTY COMPARED TO DA UNCERTAINTY

    SciTech Connect

    Salaymeh, S.; Ashley, W.; Jeffcoat, R.

    2010-06-17

    It is difficult to overestimate the importance of the physical measurements performed with nondestructive assay instruments throughout the nuclear fuel cycle. They underpin decision making in many areas and support criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete, or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and of how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail for the subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for nonroutine situations, because data interpretation often involves complex algorithms and logic combined in highly intertwined ways, and the methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to shared understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice and embracing the continuous-improvement philosophy, a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in nondestructive assay of nuclear materials. This paper outlines the needs analysis, objectives, and ongoing development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community, we hope this article will encourage dialog and sharing of best practice, and furthermore motivate developers to revisit the treatment of measurement uncertainty.

  18. Uncertainty quantification in coronary blood flow simulations: Impact of geometry, boundary conditions and blood viscosity.

    PubMed

    Sankaran, Sethuraman; Kim, Hyun Jin; Choi, Gilwoo; Taylor, Charles A

    2016-08-16

    Computational fluid dynamic methods are currently being used clinically to simulate blood flow and pressure and to predict the functional significance of atherosclerotic lesions in patient-specific models of the coronary arteries extracted from noninvasive coronary computed tomography angiography (cCTA) data. One such technology, FFRCT, or noninvasive fractional flow reserve derived from CT data, has demonstrated high diagnostic accuracy compared to invasively measured fractional flow reserve (FFR) obtained with a pressure wire inserted in the coronary arteries during diagnostic cardiac catheterization. However, uncertainties in both modeling and measurement produce differences between these predicted and measured hemodynamic indices. Uncertainty in modeling can manifest in two forms: anatomic uncertainty, resulting in error in the reconstructed 3D model, and physiologic uncertainty, resulting in errors in boundary conditions or blood viscosity. We present a data-driven framework for modeling these uncertainties and study their impact on blood flow simulations. The incompressible Navier-Stokes equations are used to model blood flow, and an adaptive stochastic collocation method is used to model uncertainty propagation in the Navier-Stokes equations. We perform uncertainty quantification in two geometries, an idealized stenosis model and a patient-specific model. We show that uncertainty in minimum lumen diameter (MLD) has the largest impact on hemodynamic simulations, followed by boundary resistance, viscosity, and lesion length. We show that near the diagnostic cutoff (FFRCT = 0.8), the uncertainty due to the latter three variables is lower than measurement uncertainty, while the uncertainty due to MLD is only slightly higher than measurement uncertainty. We also show that uncertainties are not additive, but only slightly higher than the highest single-parameter uncertainty. 
The method presented here can be used to output interval estimates of hemodynamic indices

  19. Entropic uncertainty and measurement reversibility

    NASA Astrophysics Data System (ADS)

    Berta, Mario; Wehner, Stephanie; Wilde, Mark M.

    2016-07-01

    The entropic uncertainty relation with quantum side information (EUR-QSI) from (Berta et al 2010 Nat. Phys. 6 659) is a unifying principle relating two distinctive features of quantum mechanics: quantum uncertainty due to measurement incompatibility, and entanglement. In these relations, quantum uncertainty takes the form of preparation uncertainty where one of two incompatible measurements is applied. In particular, the ‘uncertainty witness’ lower bound in the EUR-QSI is not a function of a post-measurement state. An insightful proof of the EUR-QSI from (Coles et al 2012 Phys. Rev. Lett. 108 210405) makes use of a fundamental mathematical consequence of the postulates of quantum mechanics known as the non-increase of quantum relative entropy under quantum channels. Here, we exploit this perspective to establish a tightening of the EUR-QSI which adds a new state-dependent term in the lower bound, related to how well one can reverse the action of a quantum measurement. As such, this new term is a direct function of the post-measurement state and can be thought of as quantifying how much disturbance a given measurement causes. Our result thus quantitatively unifies this feature of quantum mechanics with the others mentioned above. We have experimentally tested our theoretical predictions on the IBM quantum experience and find reasonable agreement between our predictions and experimental outcomes.
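For reference, one common form of the EUR-QSI from Berta et al (2010), stated here from memory rather than from the paper itself, bounds the conditional entropies of two incompatible measurements X and Z on system A given quantum memory B:

```latex
H(X|B) + H(Z|B) \;\ge\; \log_2\frac{1}{c} + H(A|B),
\qquad
c = \max_{x,z}\,\bigl|\langle \psi_x \vert \phi_z \rangle\bigr|^{2}
```

The incompatibility term log(1/c) depends only on the chosen measurement bases; the present work adds a further state-dependent term to the right-hand side quantifying how well the measurement can be reversed.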

  20. Communicating Uncertainties on Climate Change

    NASA Astrophysics Data System (ADS)

    Planton, S.

    2009-09-01

    The term "uncertainty" is confusing in common language, since in one of its most usual senses it refers to what cannot be known in advance or what is subject to doubt. Its definition in mathematics is unambiguous but not widely shared. It is thus difficult to communicate this notion through the media to a wide public. From its scientific basis to impact assessment, the climate change issue is subject to a large number of sources of uncertainty. In this case, the definition of the term is close to its mathematical sense, but the diversity of disciplines involved in the analysis process implies a great diversity of approaches to the notion. Faced with this diversity of approaches, communicating uncertainties on climate change is thus a great challenge. It is further complicated by the diversity of the targets of communication on climate change, from stakeholders and policy makers to the wide public. We will present the process chosen by the IPCC to communicate uncertainties in its assessment reports, taking the example of the guidance note to lead authors of the fourth assessment report. Concerning the communication of uncertainties to a wide public, we will give some examples illustrating how to avoid the above-mentioned ambiguity in this kind of communication.

  1. Uncertainty Estimation Improves Energy Measurement and Verification Procedures

    SciTech Connect

    Walter, Travis; Price, Phillip N.; Sohn, Michael D.

    2014-05-14

    Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy used to how much energy would have been used in absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties with deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use using short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates which can provide actionable decision making information for investing in energy conservation measures.
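
    As a toy illustration of the cross-validation idea for baseline uncertainty (the data, model form, and noise level below are invented, not the paper's 17 buildings), out-of-fold residuals give an honest estimate of baseline prediction error:

```python
import random
import statistics

random.seed(0)
# Synthetic meter data: energy use driven by outdoor temperature plus noise
# (an illustrative stand-in for real short-interval meter data).
temps = [random.uniform(0, 30) for _ in range(200)]
energy = [50 + 2.0 * t + random.gauss(0, 5) for t in temps]

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def cv_rmse(x, y, k=5):
    """k-fold cross-validation RMSE: an estimate of baseline prediction uncertainty."""
    idx = list(range(len(x)))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    sq_errs = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        a, b = fit_line([x[i] for i in train], [y[i] for i in train])
        sq_errs += [(y[i] - (a + b * x[i])) ** 2 for i in fold]
    return statistics.fmean(sq_errs) ** 0.5

print(round(cv_rmse(temps, energy), 1))  # roughly recovers the injected noise level (~5)
```

    Savings smaller than a few multiples of this RMSE would be hard to distinguish from baseline-model error, which is exactly the decision-making information the abstract argues for.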

  2. GCR environmental models II: Uncertainty propagation methods for GCR environments

    NASA Astrophysics Data System (ADS)

    Slaba, Tony C.; Blattnig, Steve R.

    2014-04-01

    In order to assess the astronaut exposure received within vehicles or habitats, accurate models of the ambient galactic cosmic ray (GCR) environment are required. Many models have been developed and compared to measurements, with uncertainty estimates often stated to be within 15%. However, intercode comparisons can lead to differences in effective dose exceeding 50%. This is the second of three papers focused on resolving this discrepancy. The first paper showed that GCR heavy ions with boundary energies below 500 MeV/n induce less than 5% of the total effective dose behind shielding. Yet, due to limitations on available data, model development and validation are heavily influenced by comparisons to measurements taken below 500 MeV/n. In the current work, the focus is on developing an efficient method for propagating uncertainties in the ambient GCR environment to effective dose values behind shielding. A simple approach utilizing sensitivity results from the first paper is described and shown to be equivalent to a computationally expensive Monte Carlo uncertainty propagation. The simple approach allows a full uncertainty propagation to be performed once GCR uncertainty distributions are established. This rapid analysis capability may be integrated into broader probabilistic radiation shielding analysis and also allows error bars (representing boundary condition uncertainty) to be placed around point estimates of effective dose.
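
    The equivalence claimed between sensitivity-based propagation and Monte Carlo propagation holds exactly for a linear response, which a toy example can make concrete (weights, fluxes, and the 15% uncertainty below are illustrative, not the paper's transport model):

```python
import math
import random
import statistics

random.seed(1)

weights = [0.5, 1.5, 3.0]  # hypothetical per-component dose weights

def effective_dose(phi):
    """Toy response: dose as a weighted sum of GCR flux components."""
    return sum(w * f for w, f in zip(weights, phi))

phi0 = [10.0, 4.0, 1.0]   # nominal environment
rel_sigma = 0.15          # 15% (1-sigma) uncertainty on each component

# Sensitivity-based propagation: for a linear response dR/dphi_i = weights[i],
# so sigma_R^2 = sum_i (weights[i] * phi0_i * rel_sigma)^2.
sigma_lin = math.sqrt(sum((w * p * rel_sigma) ** 2
                          for w, p in zip(weights, phi0)))

# Computationally expensive Monte Carlo propagation, for comparison.
samples = [effective_dose([p * (1 + random.gauss(0, rel_sigma)) for p in phi0])
           for _ in range(20000)]
sigma_mc = statistics.stdev(samples)

assert abs(sigma_lin - sigma_mc) / sigma_lin < 0.05  # the two approaches agree
```

    The sensitivity route costs a handful of evaluations instead of thousands, which is the rapid-analysis capability the abstract describes.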

  3. Planning for robust reserve networks using uncertainty analysis

    USGS Publications Warehouse

    Moilanen, A.; Runge, M.C.; Elith, J.; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.

    2006-01-01

    Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence-absence in sites, or on species-specific distributions of model-predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence-absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there were no uncertainty in the data or models. Given two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer programming and stochastic global search.
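
    A heavily simplified flavor of this idea (not the paper's info-gap formulation; the site data and the pessimistic-bound rule are invented) is a stepwise heuristic scored on worst-case rather than nominal coverage:

```python
# Greedy selection of reserve sites maximizing worst-case expected species
# coverage, with occurrence probabilities known only to within +/- eps.
probs = {  # site -> per-species occurrence probabilities (hypothetical)
    "A": [0.9, 0.1, 0.0],
    "B": [0.5, 0.5, 0.5],
    "C": [0.0, 0.2, 0.9],
}
eps = 0.2  # horizon of uncertainty on each probability

def worst_case_coverage(sites):
    """Expected number of species covered when every probability sits at its
    pessimistic bound."""
    cov = 0.0
    for s in range(3):
        p_miss = 1.0
        for site in sites:
            p = max(0.0, probs[site][s] - eps)  # pessimistic occurrence
            p_miss *= 1.0 - p
        cov += 1.0 - p_miss
    return cov

# Stepwise heuristic: repeatedly add the site that most improves the
# worst-case score.
chosen = []
while len(chosen) < 2:
    best = max((s for s in probs if s not in chosen),
               key=lambda s: worst_case_coverage(chosen + [s]))
    chosen.append(best)
print(chosen)
```

    Note how the robust score favors site "B" first: its moderate probabilities degrade least under the pessimistic bound, even though "A" and "C" look better for individual species nominally.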

  4. Evaluating Uncertainty Estimates Produced by Dose Assessment Models

    NASA Astrophysics Data System (ADS)

    Meyer, P. D.; Orr, S.

    2001-05-01

    Assessments of the dose and/or risk from contaminated sites and waste disposal facilities may rely on the use of relatively simplified models of subsurface flow and transport. Common simplifications include steady-state, one-dimensional flow; homogeneous and isotropic transport medium properties; and unit hydraulic gradient in the unsaturated zone. Because of their relative computational speed, such simplified models are particularly attractive when the impact of uncertainty in flow and transport needs to be evaluated. Simplifications in the representation of flow and transport have the potential to result in an unrepresentative estimate of uncertainty in dose/risk. 'Unrepresentative' is used here to describe an estimate of uncertainty that significantly misrepresents the actual uncertainty. Such misrepresentation may have important consequences for decisions based on the dose/risk assessments. The significance of this concern is evaluated here by comparing test case results from uncertainty assessments conducted using a simplified modeling approach and a more complex/realistic modeling approach. The test case follows the U.S. Nuclear Regulatory Commission's framework for site decommissioning analyses. Subsurface properties are derived from data obtained in the Las Cruces Trench experiments with source term data reflecting an actual decommissioning case. Comparisons between the two approaches include the probability distribution of peak dose, the relative importance of parameters, and the value of site-specific data in reducing uncertainty.

  5. Quantifying Uncertainties in Rainfall Maps from Cellular Communication Networks

    NASA Astrophysics Data System (ADS)

    Uijlenhoet, R.; Rios Gaona, M. F.; Overeem, A.; Leijnse, H.

    2014-12-01

    The core idea behind rainfall retrievals from commercial microwave link networks is to measure the decrease in power due to attenuation of the electromagnetic signal by raindrops along the link path. Accurate rainfall measurements are of vital importance in hydrological applications, for instance, flash-flood early-warning systems, agriculture, and climate modeling. Hence, such an alternative technique fulfills the need for measurements with higher resolution in time and space, especially in places where standard rain-gauge networks are scarce or poorly maintained. Rainfall estimation via commercial microwave link networks, at country-wide scales, has recently been demonstrated. Despite their potential applicability in rainfall estimation at higher spatiotemporal resolutions, the uncertainties present in link-based rainfall maps are not yet fully comprehended. Here we attempt to quantify the inherent sources of uncertainty present in interpolated maps computed from commercial microwave link rainfall retrievals. In order to disentangle these sources of uncertainty we identified four main sources of error: 1) microwave link measurements, 2) availability of microwave link measurements, 3) spatial distribution of the network, and 4) interpolation methodology. We computed more than 1000 rainfall fields, for The Netherlands, from real and simulated microwave link data. These rainfall fields were compared to quality-controlled gauge-adjusted radar rainfall maps taken as ground truth. Thus we were able to quantify the contribution of errors in microwave link measurements to the overall uncertainty. The actual performance of the commercial microwave link network is affected by the intermittent availability of the links, not only in time but also in space. We simulated a fully-operational network in time and space, and thus we quantified the role of the availability of microwave link measurements to the overall uncertainty. 
This research showed that the largest source of
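
    The physical basis of such retrievals is the approximate power law between specific attenuation and rain rate, k = a·R^b, inverted link by link. A sketch with purely illustrative coefficients (real values depend on frequency and polarization):

```python
# Link-based rain retrieval sketch. Specific attenuation k (dB/km) relates to
# rain rate R (mm/h) through a power law k = a * R**b; the coefficients below
# are hypothetical placeholders, not values from the paper.
a, b = 0.33, 0.79

def rain_rate(p_tx_dbm, p_rx_dbm, length_km, baseline_db=0.0):
    """Invert path-integrated attenuation to a path-average rain rate (mm/h).

    baseline_db is the dry-weather loss subtracted before attributing the
    remaining attenuation to rain.
    """
    atten_db = (p_tx_dbm - p_rx_dbm) - baseline_db  # rain-induced loss
    k = max(atten_db, 0.0) / length_km              # specific attenuation
    return (k / a) ** (1.0 / b)

# Example: 16 dB total loss on a 3 km link with 10 dB dry-weather baseline.
print(round(rain_rate(20.0, 4.0, 3.0, baseline_db=10.0), 1))
```

    Errors in the baseline, wet-antenna effects, and quantization of the received power all feed directly into the retrieved R, which is why the measurement-error source above dominates the uncertainty budget.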

  6. Users manual for the FORSS sensitivity and uncertainty analysis code system

    SciTech Connect

    Lucius, J.L.; Weisbin, C.R.; Marable, J.H.; Drischler, J.D.; Wright, R.Q.; White, J.E.

    1981-01-01

    FORSS is a code system used to study relationships between nuclear reaction cross sections, integral experiments, reactor performance parameter predictions and associated uncertainties. This report describes the computing environment and the modules currently used to implement FORSS Sensitivity and Uncertainty Methodology.

  7. One-Dimensional, Multigroup Cross Section and Design Sensitivity and Uncertainty Analysis Code System - Generalized Perturbation Theory.

    1981-02-02

    Version: 00 SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections (of standard multigroup cross-section sets) and for secondary energy distributions (SEDs) of multigroup scattering matrices.
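
    The propagation such codes perform is commonly summarized by the "sandwich rule", var(R) = SᵀCS, where S is the sensitivity profile of the response to each group cross section and C their covariance. A toy three-group example (all numbers invented):

```python
# Sandwich rule: var(R) = S^T C S, with S the (relative) sensitivity profile
# and C the relative covariance matrix of the group cross sections.
S = [0.8, 0.3, 0.1]           # hypothetical sensitivities per energy group
C = [[0.010, 0.002, 0.000],   # hypothetical relative covariance matrix
     [0.002, 0.020, 0.004],
     [0.000, 0.004, 0.050]]

var_R = sum(S[i] * C[i][j] * S[j]
            for i in range(3) for j in range(3))
rel_uncertainty = var_R ** 0.5
print(f"relative uncertainty in response: {rel_uncertainty:.3f}")
```

    The off-diagonal covariance terms matter: ignoring them here would understate the response uncertainty.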

  8. Sub-Heisenberg phase uncertainties

    NASA Astrophysics Data System (ADS)

    Pezzé, Luca

    2013-12-01

    Phase shift estimation with uncertainty below the Heisenberg limit, Δϕ_HL ∝ 1/N̄_T, where N̄_T is the total average number of particles employed, is a mirage of linear quantum interferometry. Recently, Rivas and Luis [New J. Phys. 14, 093052 (2012)] proposed a scheme to achieve a phase uncertainty Δϕ ∝ 1/N̄_T^k, with k an arbitrary exponent. This sparked an intense debate in the literature which, ultimately, does not exclude the possibility of overcoming Δϕ_HL at specific phase values. Our numerical analysis of the Rivas and Luis proposal shows that sub-Heisenberg uncertainties are obtained only when the estimator is strongly biased. No violation of the Heisenberg limit is found after bias correction or when using a bias-free Bayesian analysis.

  9. Uncertainty and Sensitivity Analyses Plan

    SciTech Connect

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.

  10. Fertility behaviour under income uncertainty.

    PubMed

    Ranjan, P

    1999-03-01

    A two-period stochastic model of fertility behavior was developed in order to provide an explanation for the staggering decrease in birth rates in former Soviet Republics and Eastern European countries. A link between income uncertainty and fertility behavior was proposed. The increase in uncertainty about future income could lead people to postpone their childbearing decision. This is attributable to the irreversibility of the childbearing decision and the ease with which it may be postponed. A threshold effect is the result, so that individuals above the threshold level of income tend to have a stronger desire to have a child immediately, and those below the threshold tend to wait until the income uncertainty is past. This behavioral pattern could account for the recent decline in birth rates that has accompanied a decreasing per capita income level in most of the former Soviet Republics and the East European countries.

  11. Uncertainty formulations for multislit interferometry

    NASA Astrophysics Data System (ADS)

    Biniok, Johannes C. G.

    2014-12-01

    In the context of (far-field) multislit interferometry we investigate the utility of two formulations of uncertainty in accounting for the complementarity of spatial localization and fringe width. We begin with a characterization of the relevant observables and general considerations regarding the suitability of different types of measures. The detailed analysis shows that both of the discussed uncertainty formulations yield qualitatively similar results, confirming that they correctly capture the relevant tradeoff. One approach, based on an idea of Aharonov and co-workers, is intuitively appealing and relies on a modification of the Heisenberg uncertainty relation. The other approach, developed by Uffink and Hilgevoord for single- and double-slit experiments, is readily applied to multislits. However, we find that one of the underlying concepts requires generalization and that the choice of parameters requires more careful consideration than previously recognized.

  12. Uncertainty Reduction using Bayesian Inference and Sensitivity Analysis: A Sequential Approach to the NASA Langley Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar

    2016-01-01

    This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all four variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained and demonstrated on the NASA Langley Uncertainty Quantification Challenge problem.
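
    The sequential idea can be caricatured in a few lines: rank epistemic variables by how much pinning each one reduces the output variance, refine only the top-ranked one, then re-rank and repeat. The model, intervals, and "refinement" rule below are toy stand-ins, not the GTM problem:

```python
import random
import statistics

random.seed(2)

def model(x):
    """Toy system response with four epistemic parameters (hypothetical)."""
    return 3 * x[0] + x[1] ** 2 + 0.5 * x[2] + 0.1 * x[3]

intervals = [[-1.0, 1.0] for _ in range(4)]  # epistemic intervals, sampled uniformly

def output_variance(fixed=None):
    """Monte Carlo output variance, optionally pinning one variable to its midpoint."""
    ys = []
    for _ in range(4000):
        x = [random.uniform(a, b) for a, b in intervals]
        if fixed is not None:
            x[fixed] = sum(intervals[fixed]) / 2
        ys.append(model(x))
    return statistics.variance(ys)

refined = []
for _ in range(2):  # refine one variable at a time, re-ranking in between
    base = output_variance()
    drops = [base - output_variance(fixed=i) for i in range(4)]
    top = drops.index(max(drops))
    a, b = intervals[top]
    mid, quarter = (a + b) / 2, (b - a) / 4
    intervals[top] = [mid - quarter, mid + quarter]  # "refinement" halves the width
    refined.append(top)
print(refined)
```

    Re-ranking after each refinement is the point of the sequential approach: once a dominant variable is narrowed, the next most important one may change.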

  13. Assessment of uncertainty in parameter evaluation and prediction.

    PubMed

    Meinrath, G; Ekberg, C; Landgren, A; Liljenzin, J O

    2000-02-01

    As in all experimental science, chemical data are affected by the limited precision of the measurement process. Quality control and traceability of experimental data require suitable approaches to properly express the degree of uncertainty. Noise and bias are nuisance effects reducing the information extractable from experimental data. However, because of the complexity of numerical data evaluation in many chemical fields, often only mean values from data analysis, e.g. multi-parametric curve fitting, are reported. Relevant information on the interpretation limits, e.g. standard deviations or confidence limits, is either omitted or estimated. Modern techniques for handling uncertainty in both parameter evaluation and prediction rely strongly on the calculation power of computers. Advantageously, computer-intensive methods like Monte Carlo resampling and Latin Hypercube sampling do not require sophisticated and often unavailable mathematical treatment. The statistical concepts are introduced. Applications of some computer-intensive statistical techniques to chemical problems are demonstrated. PMID:18967855
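
    A quick illustration of why Latin Hypercube sampling is attractive: for the same sample size, its stratification typically tightens the estimate of an output mean relative to plain Monte Carlo. A toy two-parameter model, standard library only:

```python
import random
import statistics

random.seed(3)

def latin_hypercube(n, dim):
    """n points in [0,1]^dim with exactly one sample per equal-probability stratum
    in each coordinate, randomly paired across coordinates."""
    cols = []
    for _ in range(dim):
        strata = [(i + random.random()) / n for i in range(n)]
        random.shuffle(strata)
        cols.append(strata)
    return list(zip(*cols))

def f(u):
    """Toy model of two uncertain parameters."""
    return u[0] + u[1]

n = 50
lhs_means = [statistics.fmean(f(x) for x in latin_hypercube(n, 2))
             for _ in range(200)]
mc_means = [statistics.fmean(f((random.random(), random.random()))
                             for _ in range(n))
            for _ in range(200)]

# Stratification reduces the spread of the estimated mean.
assert statistics.stdev(lhs_means) < statistics.stdev(mc_means)
```

    The advantage is largest for nearly additive models like this one; for strongly interacting models the gain shrinks but LHS is rarely worse than plain sampling.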

  14. Fuzzy techniques for subjective workload-score modeling under uncertainties.

    PubMed

    Kumar, Mohit; Arndt, Dagmar; Kreuzfeld, Steffi; Thurow, Kerstin; Stoll, Norbert; Stoll, Regina

    2008-12-01

    This paper deals with the development of a computer model to estimate the subjective workload score of individuals by evaluating their heart-rate (HR) signals. The identification of a model to estimate the subjective workload score of individuals under different workload situations is too ambitious a task because different individuals (due to different body conditions, emotional states, age, gender, etc.) show different physiological responses (assessed by evaluating the HR signal) under different workload situations. This is equivalent to saying that the mathematical mappings between physiological parameters and the workload score are uncertain. Our approach to deal with the uncertainties in a workload-modeling problem consists of the following steps: 1) the uncertainties arising due to individual variations in identifying a common model valid for all individuals are filtered out using a fuzzy filter; 2) stochastic modeling of the uncertainties (provided by the fuzzy filter) uses finite-mixture models and utilizes this information regarding uncertainties for identifying the structure and initial parameters of a workload model; and 3) finally, the workload model parameters for an individual are identified in an online scenario using machine learning algorithms. The contribution of this paper is to propose, with a mathematical analysis, a fuzzy-based modeling technique that first filters out the uncertainties from the modeling problem, analyzes the uncertainties statistically using finite-mixture modeling, and, finally, utilizes the information about uncertainties for adapting the workload model to an individual's physiological conditions. The approach of this paper, demonstrated with the real-world medical data of 11 subjects, provides a fuzzy-based tool useful for modeling in the presence of uncertainties.

  15. Survey and Evaluate Uncertainty Quantification Methodologies

    SciTech Connect

    Lin, Guang; Engel, David W.; Eslinger, Paul W.

    2012-02-01

    The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that will develop and deploy state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models with uncertainty quantification, optimization, risk analysis and decision making capabilities. The CCSI Toolset will incorporate commercial and open-source software currently in use by industry and will also develop new software tools as necessary to fill technology gaps identified during execution of the project. The CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. The goal of CCSI is to deliver a toolset that can simulate the scale-up of a broad set of new carbon capture technologies from laboratory scale to full commercial scale. To provide a framework around which the toolset can be developed and demonstrated, we will focus on three Industrial Challenge Problems (ICPs) related to carbon capture technologies relevant to U.S. pulverized coal (PC) power plants. Post combustion capture by solid sorbents is the technology focus of the initial ICP (referred to as ICP A). 
The goal of the uncertainty quantification (UQ) task (Task 6) is to provide a set of capabilities to the user community for the quantification of uncertainties associated with the carbon

  16. Uncertainty quantification for ice sheet inverse problems

    NASA Astrophysics Data System (ADS)

    Petra, N.; Ghattas, O.; Stadler, G.; Zhu, H.

    2011-12-01

    Modeling the dynamics of polar ice sheets is critical for projections of future sea level rise. Yet, there remain large uncertainties in the basal boundary conditions and in the non-Newtonian constitutive relations employed within ice sheet models. In this presentation, we consider the problem of estimating uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem (i.e., the posterior probability density) is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal slipperiness field and the Glen's law exponent field). However, under the assumption of Gaussian noise and prior probability densities, and after linearizing the parameter-to-observable map, the posterior density becomes Gaussian, and can therefore be characterized by its mean and covariance. The mean is given by the solution of a nonlinear least squares optimization problem, which is equivalent to a deterministic inverse problem with appropriate interpretation and weighting of the data misfit and regularization terms. To obtain this mean, we solve a deterministic ice sheet inverse problem; here, we infer parameters arising from discretizations of basal slipperiness and rheological exponent fields. For this purpose, we minimize a regularized misfit functional between observed and modeled surface flow velocities. The resulting least squares minimization problem is solved using an adjoint-based inexact Newton method, which uses first and second derivative information. The posterior covariance matrix is given (in the linear-Gaussian case) by the inverse of the Hessian of the least squares cost functional of the deterministic inverse problem. Direct computation of the Hessian matrix is prohibitive, since it would
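
    The linear-Gaussian structure described here can be demonstrated on a tiny least-squares analogue: the posterior mean solves the normal equations and the posterior covariance is the inverse of the misfit Hessian. All numbers below are invented; the real problem is a 3D Stokes inversion where this inverse is never formed explicitly:

```python
import random

random.seed(4)

# Tiny linear-Gaussian analogue: d = G m + noise, two parameters, ten data.
G = [[1.0, t / 10.0] for t in range(10)]   # hypothetical forward operator
m_true = [0.5, 2.0]
sigma = 0.1                                 # observational noise level
d = [sum(g * m for g, m in zip(row, m_true)) + random.gauss(0, sigma)
     for row in G]

# Hessian of the least-squares functional: H = G^T G / sigma^2 (no prior term
# here, for brevity); right-hand side b = G^T d / sigma^2.
H = [[sum(G[k][i] * G[k][j] for k in range(10)) / sigma ** 2
      for j in range(2)] for i in range(2)]
b = [sum(G[k][i] * d[k] for k in range(10)) / sigma ** 2 for i in range(2)]

# Solve H m = b for the posterior mean (2x2, via the explicit inverse); the
# posterior covariance in the linear-Gaussian case is exactly H^{-1}.
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
m_map = [(b[0] * H[1][1] - b[1] * H[0][1]) / det,
         (b[1] * H[0][0] - b[0] * H[1][0]) / det]
cov = [[H[1][1] / det, -H[0][1] / det],
       [-H[1][0] / det, H[0][0] / det]]
post_sd = [cov[0][0] ** 0.5, cov[1][1] ** 0.5]
```

    In the ice sheet setting the same relations hold, but the Hessian is only available through matrix-vector products (adjoint solves), which is why low-rank approximations replace the explicit inverse used above.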

  17. Numerical Continuation Methods for Intrusive Uncertainty Quantification Studies

    SciTech Connect

    Safta, Cosmin; Najm, Habib N.; Phipps, Eric Todd

    2014-09-01

    Rigorous modeling of engineering systems relies on efficient propagation of uncertainty from input parameters to model outputs. In recent years, there has been substantial development of probabilistic polynomial chaos (PC) Uncertainty Quantification (UQ) methods, enabling studies in expensive computational models. One approach, termed "intrusive", involving reformulation of the governing equations, has been found to have superior computational performance compared to non-intrusive sampling-based methods in relevant large-scale problems, particularly in the context of emerging architectures. However, the utility of intrusive methods has been severely limited due to detrimental numerical instabilities associated with strong nonlinear physics. Previous methods for stabilizing these constructions tend to add unacceptably high computational costs, particularly in problems with many uncertain parameters. In order to address these challenges, we propose to adapt and improve numerical continuation methods for the robust time integration of intrusive PC system dynamics. We propose adaptive methods, starting with a small uncertainty for which the model has stable behavior and gradually moving to larger uncertainty where the instabilities are rampant, in a manner that provides a suitable solution.

  18. Uncertainty quantification in lattice QCD calculations for nuclear physics

    SciTech Connect

    Beane, Silas R.; Detmold, William; Orginos, Kostas; Savage, Martin J.

    2015-02-05

    The numerical technique of Lattice QCD holds the promise of connecting the nuclear forces, nuclei, the spectrum and structure of hadrons, and the properties of matter under extreme conditions with the underlying theory of the strong interactions, quantum chromodynamics. A distinguishing, and thus far unique, feature of this formulation is that all of the associated uncertainties, both statistical and systematic, can, in principle, be systematically reduced to any desired precision with sufficient computational and human resources. Here we review the sources of uncertainty inherent in Lattice QCD calculations for nuclear physics and discuss how each is quantified in current efforts.

  19. Adaptive second-order sliding mode control with uncertainty compensation

    NASA Astrophysics Data System (ADS)

    Bartolini, G.; Levant, A.; Pisano, A.; Usai, E.

    2016-09-01

    This paper endows the second-order sliding mode control (2-SMC) approach with additional capabilities of learning and control adaptation. We present a 2-SMC scheme that estimates and compensates for the uncertainties affecting the system dynamics. It also adjusts the discontinuous control effort online, so that it can be reduced to arbitrarily small values. The proposed scheme is particularly useful when the available information regarding the uncertainties is conservative, and the classical 'fixed-gain' SMC would inevitably lead to largely oversized discontinuous control effort. Benefits from the viewpoint of chattering reduction are obtained, as confirmed by computer simulations.

  20. Uncertainties in offsite consequence analysis

    SciTech Connect

    Young, M.L.; Harper, F.T.; Lui, C.H.

    1996-03-01

    The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the U.S. Nuclear Regulatory Commission and the European Commission began co-sponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables using a formal expert judgment elicitation and evaluation process. This paper focuses on the methods used in and results of this on-going joint effort.

  1. Statistics, Uncertainty, and Transmitted Variation

    SciTech Connect

    Wendelberger, Joanne Roth

    2014-11-05

    The field of Statistics provides methods for modeling and understanding data and making decisions in the presence of uncertainty. When examining response functions, variation present in the input variables is transmitted via the response function to the output variables. This phenomenon can have significant impacts on the uncertainty associated with results from subsequent analysis. This presentation will examine the concept of transmitted variation, its impact on designed experiments, and a method for identifying and estimating sources of transmitted variation in certain settings.
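
    To first order, transmitted variation follows the delta method, sd(y) ≈ |f′(x)|·sd(x), which a Monte Carlo check confirms for a smooth response and modest input spread (toy response function, invented numbers):

```python
import math
import random
import statistics

random.seed(5)

def response(x):
    """Toy response function through which input variation is transmitted."""
    return math.exp(0.5 * x)

x_mean, x_sd = 1.0, 0.05

# First-order (delta method) estimate of the transmitted variation:
# sd(y) ~ |f'(x_mean)| * sd(x), with f'(x) = 0.5 * exp(0.5 * x).
deriv = 0.5 * math.exp(0.5 * x_mean)
sd_delta = abs(deriv) * x_sd

# Monte Carlo check of the transmitted standard deviation.
ys = [response(random.gauss(x_mean, x_sd)) for _ in range(20000)]
sd_mc = statistics.stdev(ys)

assert abs(sd_delta - sd_mc) / sd_mc < 0.1
```

    For strongly curved responses or large input spread the first-order estimate degrades, and sampling-based propagation becomes the safer route.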

  2. Awe, uncertainty, and agency detection.

    PubMed

    Valdesolo, Piercarlo; Graham, Jesse

    2014-01-01

    Across five studies, we found that awe increases both supernatural belief (Studies 1, 2, and 5) and intentional-pattern perception (Studies 3 and 4), two phenomena that have been linked to agency detection, or the tendency to interpret events as the consequence of intentional and purpose-driven agents. Effects were both directly and conceptually replicated, and mediational analyses revealed that these effects were driven by the influence of awe on tolerance for uncertainty. Experiences of awe decreased tolerance for uncertainty, which, in turn, increased the tendency to believe in nonhuman agents and to perceive human agency in random events. PMID:24247728

  3. Linear Programming Problems for Generalized Uncertainty

    ERIC Educational Resources Information Center

    Thipwiwatpotjana, Phantipa

    2010-01-01

    Uncertainty occurs when there is more than one realization that can represent a piece of information. This dissertation concerns only discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well known model that can handle…

  4. Systemic change increases model projection uncertainty

    NASA Astrophysics Data System (ADS)

    Verstegen, Judith; Karssenberg, Derek; van der Hilst, Floor; Faaij, André

    2014-05-01

    the neighbourhood doubled, while the influence of slope and potential yield decreased by 75% and 25% respectively. Allowing these systemic changes to occur in our CA in the future (up to 2022) resulted in an increase in model projection uncertainty by a factor of two compared to the assumption of a stationary system. This means that the assumption of a constant model structure is not adequate and largely underestimates uncertainty in the projection. References Verstegen, J.A., Karssenberg, D., van der Hilst, F., Faaij, A.P.C., 2014. Identifying a land use change cellular automaton by Bayesian data assimilation. Environmental Modelling & Software 53, 121-136. Verstegen, J.A., Karssenberg, D., van der Hilst, F., Faaij, A.P.C., 2012. Spatio-Temporal Uncertainty in Spatial Decision Support Systems: a Case Study of Changing Land Availability for Bioenergy Crops in Mozambique. Computers, Environment and Urban Systems 36, 30-42. Wald, A., Wolfowitz, J., 1940. On a test whether two samples are from the same population. The Annals of Mathematical Statistics 11, 147-162.

  5. Sonic Boom Pressure Signature Uncertainty Calculation and Propagation to Ground Noise

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Bretl, Katherine N.; Walker, Eric L.; Pinier, Jeremy T.

    2015-01-01

    The objective of this study was to outline an approach for the quantification of uncertainty in sonic boom measurements and to investigate the effect of various near-field uncertainty representation approaches on ground noise predictions. These approaches included a symmetric versus asymmetric uncertainty band representation and a dispersion technique based on a partial sum Fourier series that allows for the inclusion of random error sources in the uncertainty. The near-field uncertainty was propagated to the ground level, along with additional uncertainty in the propagation modeling. Estimates of perceived loudness were obtained for the various types of uncertainty representation in the near-field. Analyses were performed on three configurations of interest to the sonic boom community: the SEEB-ALR, the 69° delta wing, and the LM 1021-01. Results showed that representation of the near-field uncertainty plays a key role in ground noise predictions. Using a Fourier series based dispersion approach can double the amount of uncertainty in the ground noise compared to a pure bias representation. Compared to previous computational fluid dynamics results, uncertainty in ground noise predictions was greater when considering the near-field experimental uncertainty.

  6. Adaptive Strategies for Materials Design using Uncertainties

    NASA Astrophysics Data System (ADS)

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James; Hogden, John; Lookman, Turab

    2016-01-01

    We compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young’s (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don’t. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.
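The iterative loop described above (regressor plus uncertainty-aware selector) can be sketched as follows. For illustration only, a bootstrap ensemble of linear fits supplies prediction uncertainty and a simple upper-confidence-bound rule acts as the selector; the study itself compares other regressors and selectors, so these choices are assumptions.

```python
import numpy as np

def fit_ensemble(X, y, n_models=30, rng=None):
    """Bootstrap ensemble of 1-D linear models; the spread across
    the ensemble serves as the prediction uncertainty."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(X)
    A = np.column_stack([X, np.ones(n)])
    coefs = []
    for _ in range(n_models):
        idx = rng.integers(0, n, n)          # resample with replacement
        c, *_ = np.linalg.lstsq(A[idx], y[idx], rcond=None)
        coefs.append(c)
    return np.array(coefs)

def select_next(coefs, X_pool):
    """Pick the candidate maximizing mean + 2*std, i.e. a selector
    that exploits the prediction uncertainty (UCB-style)."""
    A = np.column_stack([X_pool, np.ones(len(X_pool))])
    preds = A @ coefs.T                      # (n_pool, n_models)
    mu, sigma = preds.mean(axis=1), preds.std(axis=1)
    return int(np.argmax(mu + 2.0 * sigma))
```

Each loop iteration would fit the ensemble on the data so far, call `select_next` on the unmeasured pool, compute (or measure) the chosen material, and append it to the training set.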

  7. Adaptive Strategies for Materials Design using Uncertainties

    PubMed Central

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James; Hogden, John; Lookman, Turab

    2016-01-01

    We compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young’s (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don’t. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties. PMID:26792532

  8. Adaptive strategies for materials design using uncertainties

    DOE PAGES

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James; Hogden, John; Lookman, Turab

    2016-01-21

    Here, we compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young’s (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don’t. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.

  9. Uncertainty in Vs30-based site response

    USGS Publications Warehouse

    Thompson, Eric; Wald, David J.

    2016-01-01

    Methods that account for site response range in complexity from simple linear categorical adjustment factors to sophisticated nonlinear constitutive models. Seismic‐hazard analysis usually relies on ground‐motion prediction equations (GMPEs); within this framework site response is modeled statistically with simplified site parameters that include the time‐averaged shear‐wave velocity to 30 m (VS30) and basin depth parameters. Because VS30 is not known in most locations, it must be interpolated or inferred through secondary information such as geology or topography. In this article, we analyze a subset of stations for which VS30 has been measured to address the effects of VS30 proxies on the uncertainty in the ground motions as modeled by GMPEs. The stations we analyze also include multiple recordings, which allow us to compute the repeatable site effects (or empirical amplification factors [EAFs]) from the ground motions. Although all methods exhibit similar bias, the proxy methods only reduce the ground‐motion standard deviations at long periods when compared to GMPEs without a site term, whereas measured VS30 values reduce the standard deviations at all periods. The standard deviation of the ground motions is much lower when the EAFs are used, indicating that future refinements of the site term in GMPEs have the potential to substantially reduce the overall uncertainty in the prediction of ground motions by GMPEs.
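
In the simplest formulation (a simplification of the article's analysis, with all names assumed), the repeatable site effect at a station is the mean GMPE residual over its recordings, and removing it shrinks the residual standard deviation:

```python
import numpy as np

def empirical_amplification_factors(station_ids, residuals):
    """Estimate repeatable site effects (EAFs) as the mean GMPE
    residual per station, given multiple recordings per station.
    Returns the EAF dict plus the residual std before and after
    removing the site term."""
    station_ids = np.asarray(station_ids)
    residuals = np.asarray(residuals, float)
    eaf = {s: residuals[station_ids == s].mean()
           for s in np.unique(station_ids)}
    corrected = residuals - np.array([eaf[s] for s in station_ids])
    return eaf, residuals.std(), corrected.std()
```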

  10. Quantification and propagation of disciplinary uncertainty via Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Mantis, George Constantine

    2002-08-01

    Several needs exist in the military, commercial, and civil sectors for new hypersonic systems. These needs remain unfulfilled, due in part to the uncertainty encountered in designing these systems. This uncertainty takes a number of forms, including disciplinary uncertainty, that which is inherent in the analytical tools utilized during the design process. Yet, few efforts to date empower the designer with the means to account for this uncertainty within the disciplinary analyses. In the current state-of-the-art in design, the effects of this unquantifiable uncertainty significantly increase the risks associated with new design efforts. Typically, the risk proves too great to allow a given design to proceed beyond the conceptual stage. To that end, the research encompasses the formulation and validation of a new design method, a systematic process for probabilistically assessing the impact of disciplinary uncertainty. The method implements Bayesian Statistics theory to quantify this source of uncertainty, and propagate its effects to the vehicle system level. Comparison of analytical and physical data for existing systems, modeled a priori in the given analysis tools, leads to quantification of uncertainty in those tools' calculation of discipline-level metrics. Then, after exploration of the new vehicle's design space, the quantified uncertainty is propagated probabilistically through the design space. This ultimately results in the assessment of the impact of disciplinary uncertainty on the confidence in the design solution: the final shape and variability of the probability functions defining the vehicle's system-level metrics. Although motivated by the hypersonic regime, the proposed treatment of uncertainty applies to any class of aerospace vehicle, just as the problem itself affects the design process of any vehicle. A number of computer programs comprise the environment constructed for the implementation of this work. Application to a single

  11. Visualization tools for uncertainty and sensitivity analyses on thermal-hydraulic transients

    NASA Astrophysics Data System (ADS)

    Popelin, Anne-Laure; Iooss, Bertrand

    2014-06-01

    In nuclear engineering studies, uncertainty and sensitivity analyses of simulation computer codes can be complicated by the complexity of the input and/or output variables. If these variables represent a transient or a spatial phenomenon, the difficulty is to provide tools adapted to their functional nature. In this paper, we describe useful visualization tools in the context of uncertainty analysis of model transient outputs. Our application involves thermal-hydraulic computations for safety studies of nuclear pressurized water reactors.
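
A basic building block for such visualizations is a pointwise quantile envelope of the ensemble of transients, drawn as shaded bands around the median curve (a minimal sketch; the paper's tools are richer, e.g. functional boxplots):

```python
import numpy as np

def quantile_bands(ensemble, probs=(0.05, 0.5, 0.95)):
    """Pointwise quantile envelopes of an ensemble of transients.

    ensemble : (n_runs, n_times) array of simulated transient outputs.
    Returns (len(probs), n_times), suitable for shading uncertainty
    bands around the median curve.
    """
    return np.quantile(np.asarray(ensemble, float), probs, axis=0)
```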

  12. Entropic uncertainty from effective anticommutators

    NASA Astrophysics Data System (ADS)

    Kaniewski, Jedrzej; Tomamichel, Marco; Wehner, Stephanie

    2014-07-01

    We investigate entropic uncertainty relations for two or more binary measurements, for example, spin-1/2 or polarization measurements. We argue that the effective anticommutators of these measurements, i.e., the anticommutators evaluated on the state prior to measuring, are an expedient measure of measurement incompatibility. Based on the knowledge of pairwise effective anticommutators we derive a class of entropic uncertainty relations in terms of conditional Rényi entropies. Our uncertainty relations are formulated in terms of effective measures of incompatibility, which can be certified in a device-independent fashion. Consequently, we discuss potential applications of our findings to device-independent quantum cryptography. Moreover, to investigate the tightness of our analysis we consider the simplest (and very well studied) scenario of two measurements on a qubit. We find that our results outperform the celebrated bound due to Maassen and Uffink [Phys. Rev. Lett. 60, 1103 (1988), 10.1103/PhysRevLett.60.1103] and provide an analytical expression for the minimum uncertainty which also outperforms some recent bounds based on majorization.
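
The central quantity is easy to compute numerically: the effective anticommutator is the anticommutator evaluated on the pre-measurement state, tr(ρ{A, B}). A minimal qubit sketch (matrix names and the mixed-state example are illustrative, not taken from the paper):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

def effective_anticommutator(rho, A, B):
    """Effective anticommutator tr(rho {A, B}) of two binary
    observables, evaluated on the state prior to measurement."""
    return np.trace(rho @ (A @ B + B @ A)).real

# For the maximally mixed qubit state, X and Z anticommute exactly,
# so the effective anticommutator vanishes: maximal incompatibility.
rho_mixed = np.eye(2) / 2
```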

  13. Quantification of entanglement via uncertainties

    SciTech Connect

    Klyachko, Alexander A.; Oeztop, Baris; Shumovsky, Alexander S.

    2007-03-15

    We show that entanglement of pure multiparty states can be quantified by means of quantum uncertainties of certain basic observables through the use of a measure that was initially proposed by Klyachko et al. [Appl. Phys. Lett. 88, 124102 (2006)] for bipartite systems.

  14. Uncertainties in radiation flow experiments

    NASA Astrophysics Data System (ADS)

    Fryer, C. L.; Dodd, E.; Even, W.; Fontes, C. J.; Greeff, C.; Hungerford, A.; Kline, J.; Mussack, K.; Tregillis, I.; Workman, J. B.; Benstead, J.; Guymer, T. M.; Moore, A. S.; Morton, J.

    2016-03-01

    Although the fundamental physics behind radiation and matter flow is understood, many uncertainties remain in the exact behavior of macroscopic fluids in systems ranging from pure turbulence to coupled radiation hydrodynamics. Laboratory experiments play an important role in studying this physics to allow scientists to test their macroscopic models of these phenomena. However, because the fundamental physics is well understood, precision experiments are required to validate existing codes already tested by a suite of analytic, manufactured and convergence solutions. To conduct such high-precision experiments requires a detailed understanding of the experimental errors and the nature of their uncertainties on the observed diagnostics. In this paper, we study the uncertainties plaguing many radiation-flow experiments, focusing on those using a hohlraum (dynamic or laser-driven) source and a foam-density target. This study focuses on the effect these uncertainties have on the breakout time of the radiation front. We find that, even if the errors in the initial conditions and numerical methods are Gaussian, the errors in the breakout time are asymmetric, leading to a systematic bias in the observed data. We must understand these systematics to produce the high-precision experimental results needed to study this physics.
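
The asymmetry mechanism can be seen in a toy Monte Carlo: Gaussian errors in an input parameter, pushed through a nonlinear response, give a skewed output distribution whose mean is displaced from its median. The model below (t = 1/drive²) is purely illustrative, not the experiments' physics.

```python
import numpy as np

def breakout_times(n=100_000, seed=0):
    """Toy Monte Carlo: Gaussian uncertainty in a drive parameter
    passes through a nonlinear flow model, producing a skewed
    breakout-time distribution."""
    rng = np.random.default_rng(seed)
    drive = rng.normal(1.0, 0.05, n)   # Gaussian input errors
    return 1.0 / drive**2              # nonlinear response (illustrative)

t = breakout_times()
# The convex response skews the output: the mean exceeds the median,
# i.e. a systematic bias appears even though the inputs are symmetric.
```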

  15. Saccade Adaptation and Visual Uncertainty

    PubMed Central

    Souto, David; Gegenfurtner, Karl R.; Schütz, Alexander C.

    2016-01-01

    Visual uncertainty may affect saccade adaptation in two complementary ways. First, an ideal adaptor should take into account the reliability of visual information for determining the amount of correction, predicting that increasing visual uncertainty should decrease adaptation rates. We tested this by comparing observers' direction discrimination and adaptation rates in an intra-saccadic-step paradigm. Second, clearly visible target steps may generate a slower adaptation rate since the error can be attributed to an external cause, instead of an internal change in the visuo-motor mapping that needs to be compensated. We tested this prediction by measuring saccade adaptation to different step sizes. Most remarkably, we found little correlation between estimates of visual uncertainty and adaptation rates and no slower adaptation rates with more visible step sizes. Additionally, we show that for low contrast targets backward steps are perceived as stationary after the saccade, but that adaptation rates are independent of contrast. We suggest that the saccadic system uses different position signals for adapting dysmetric saccades and for generating a trans-saccadic stable visual percept, explaining that saccade adaptation is found to be independent of visual uncertainty. PMID:27252635

  16. Saccade Adaptation and Visual Uncertainty.

    PubMed

    Souto, David; Gegenfurtner, Karl R; Schütz, Alexander C

    2016-01-01

    Visual uncertainty may affect saccade adaptation in two complementary ways. First, an ideal adaptor should take into account the reliability of visual information for determining the amount of correction, predicting that increasing visual uncertainty should decrease adaptation rates. We tested this by comparing observers' direction discrimination and adaptation rates in an intra-saccadic-step paradigm. Second, clearly visible target steps may generate a slower adaptation rate since the error can be attributed to an external cause, instead of an internal change in the visuo-motor mapping that needs to be compensated. We tested this prediction by measuring saccade adaptation to different step sizes. Most remarkably, we found little correlation between estimates of visual uncertainty and adaptation rates and no slower adaptation rates with more visible step sizes. Additionally, we show that for low contrast targets backward steps are perceived as stationary after the saccade, but that adaptation rates are independent of contrast. We suggest that the saccadic system uses different position signals for adapting dysmetric saccades and for generating a trans-saccadic stable visual percept, explaining that saccade adaptation is found to be independent of visual uncertainty.

  17. Exploring Uncertainty with Projectile Launchers

    ERIC Educational Resources Information Center

    Orzel, Chad; Reich, Gary; Marr, Jonathan

    2012-01-01

    The proper choice of a measurement technique that minimizes systematic and random uncertainty is an essential part of experimental physics. These issues are difficult to teach in the introductory laboratory, though. Because most experiments involve only a single measurement technique, students are often unable to make a clear distinction between…

  18. Impact of orifice metering uncertainties

    SciTech Connect

    Stuart, J.W.

    1990-12-01

    A recent utility study attributed 38% of the company's unaccounted-for (UAF) gas to orifice metering uncertainty bias caused by straightening vanes. How this was determined and how it applies to the company's orifice meters is described. Almost all (97%) of the company's UAF gas was found to be attributable to identifiable accounting procedures, measurement problems, theft and leakage.

  19. Evaluating conflation methods using uncertainty modeling

    NASA Astrophysics Data System (ADS)

    Doucette, Peter; Dolloff, John; Canavosio-Zuzelski, Roberto; Lenihan, Michael; Motsko, Dennis

    2013-05-01

    The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline, or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features. The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for conflation methods. Performance results are compiled for DCGIS street centerline features.

  20. Quantifying and reducing uncertainties in cancer therapy

    NASA Astrophysics Data System (ADS)

    Barrett, Harrison H.; Alberts, David S.; Woolfenden, James M.; Liu, Zhonglin; Caucci, Luca; Hoppin, John W.

    2015-03-01

    There are two basic sources of uncertainty in cancer chemotherapy: how much of the therapeutic agent reaches the cancer cells, and how effective it is in reducing or controlling the tumor when it gets there. There is also a concern about adverse effects of the therapy drug. Similarly in external-beam radiation therapy or radionuclide therapy, there are two sources of uncertainty: delivery and efficacy of the radiation absorbed dose, and again there is a concern about radiation damage to normal tissues. The therapy operating characteristic (TOC) curve, developed in the context of radiation therapy, is a plot of the probability of tumor control vs. the probability of normal-tissue complications as the overall radiation dose level is varied, e.g. by varying the beam current in external-beam radiotherapy or the total injected activity in radionuclide therapy. The TOC can be applied to chemotherapy with the administered drug dosage as the variable. The area under a TOC curve (AUTOC) can be used as a figure of merit for therapeutic efficacy, analogous to the area under an ROC curve (AUROC), which is a figure of merit for diagnostic efficacy. In radiation therapy AUTOC can be computed for a single patient by using image data along with radiobiological models for tumor response and adverse side effects. In this paper we discuss the potential of using mathematical models of drug delivery and tumor response with imaging data to estimate AUTOC for chemotherapy, again for a single patient. This approach provides a basis for truly personalized therapy and for rigorously assessing and optimizing the therapy regimen for the particular patient. A key role is played by Emission Computed Tomography (PET or SPECT) of radiolabeled chemotherapy drugs.
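
The AUTOC figure of merit described above is just the area under the tumor-control vs. normal-tissue-complication curve traced out as dose varies. A sketch with assumed logistic dose-response shapes (not patient data):

```python
import numpy as np

def autoc(p_tumor_control, p_complication):
    """Area under the therapy operating characteristic (TOC) curve:
    tumor-control probability integrated against normal-tissue
    complication probability, both evaluated at the same dose levels."""
    order = np.argsort(p_complication)
    x = np.asarray(p_complication, float)[order]
    y = np.asarray(p_tumor_control, float)[order]
    # Trapezoidal integration along the complication axis.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Illustrative logistic dose-response curves (assumed, for demonstration):
dose = np.linspace(0, 100, 201)
p_tc = 1 / (1 + np.exp(-(dose - 50) / 5))    # tumor control
p_ntcp = 1 / (1 + np.exp(-(dose - 70) / 5))  # normal-tissue complication
```

Because tumor control here saturates well before complications rise, the TOC hugs the top of the unit square and AUTOC approaches 1, analogous to a near-perfect ROC curve.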

  1. Quantifying and Reducing Uncertainties in Cancer Therapy

    PubMed Central

    Barrett, Harrison H.; Alberts, David S.; Woolfenden, James M.; Liu, Zhonglin; Caucci, Luca; Hoppin, John W.

    2015-01-01

    There are two basic sources of uncertainty in cancer chemotherapy: how much of the therapeutic agent reaches the cancer cells, and how effective it is in reducing or controlling the tumor when it gets there. There is also a concern about adverse effects of the therapy drug. Similarly in external-beam radiation therapy or radionuclide therapy, there are two sources of uncertainty: delivery and efficacy of the radiation absorbed dose, and again there is a concern about radiation damage to normal tissues. The therapy operating characteristic (TOC) curve, developed in the context of radiation therapy, is a plot of the probability of tumor control vs. the probability of normal-tissue complications as the overall radiation dose level is varied, e.g. by varying the beam current in external-beam radiotherapy or the total injected activity in radionuclide therapy. The TOC can be applied to chemotherapy with the administered drug dosage as the variable. The area under a TOC curve (AUTOC) can be used as a figure of merit for therapeutic efficacy, analogous to the area under an ROC curve (AUROC), which is a figure of merit for diagnostic efficacy. In radiation therapy AUTOC can be computed for a single patient by using image data along with radiobiological models for tumor response and adverse side effects. In this paper we discuss the potential of using mathematical models of drug delivery and tumor response with imaging data to estimate AUTOC for chemotherapy, again for a single patient. This approach provides a basis for truly personalized therapy and for rigorously assessing and optimizing the therapy regimen for the particular patient. A key role is played by Emission Computed Tomography (PET or SPECT) of radiolabeled chemotherapy drugs. PMID:26166931

  2. Groundwater management under sustainable yield uncertainty

    NASA Astrophysics Data System (ADS)

    Delottier, Hugo; Pryet, Alexandre; Dupuy, Alain

    2015-04-01

    The definition of the sustainable yield (SY) of a groundwater system consists in adjusting pumping rates so as to avoid groundwater depletion and preserve environmental flows. Once stakeholders have defined which impacts can be considered as "acceptable" for both environmental and societal aspects, hydrogeologists use groundwater models to estimate the SY. Yet, these models are based on a simplification of actual groundwater systems, whose hydraulic properties are largely unknown. As a result, the estimated SY is subject to "predictive" uncertainty. We illustrate the issue with a synthetic homogeneous aquifer system in interaction with a stream for steady state and transient conditions. Simulations are conducted with the USGS MODFLOW finite difference model with the river-package. A synthetic dataset is first generated with the numerical model that will further be considered as the "observed" state. In a second step, we conduct the calibration operation as hydrogeologists dealing with real-world, unknown groundwater systems. The RMSE between simulated hydraulic heads and the synthetic "observed" values is used as objective function. But instead of simply "calibrating" model parameters, we explore the value of the objective function in the parameter space (hydraulic conductivity, storage coefficient and total recharge). We highlight the occurrence of an ellipsoidal "null space", where distinct parameter sets lead to equally low values for the objective function. The optimum of the objective function is not unique, which leads to a range of possible values for the SY. With a large confidence interval for the SY, the use of modeling results for decision-making is challenging. We argue that prior to modeling operations, efforts must be invested so as to narrow the intervals of likely parameter values. Parameter space exploration is effective to estimate SY uncertainty, but not efficient because of its computational burden and is therefore inapplicable for real world
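
The "null space" phenomenon is easy to reproduce with a toy head model (an assumed 1-D analytic expression, not the MODFLOW setup): when heads depend only on the recharge-to-conductivity ratio R/K, every parameter pair on a line of constant R/K produces an identical RMSE, so calibration cannot pin down K and R individually.

```python
import numpy as np

def heads(K, R, x, L=1000.0):
    """Illustrative 1-D aquifer heads under recharge R and hydraulic
    conductivity K. Heads depend only on the ratio R/K, which is what
    creates the calibration 'null space'."""
    return (R / K) * (L * x - 0.5 * x**2)

x = np.linspace(0.0, 1000.0, 50)
h_obs = heads(1e-4, 3e-9, x)   # synthetic "observed" state

def rmse(K, R):
    """Objective function: misfit to the synthetic observations."""
    return float(np.sqrt(np.mean((heads(K, R, x) - h_obs) ** 2)))
```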

  3. LDRD Final Report: Capabilities for Uncertainty in Predictive Science.

    SciTech Connect

    Phipps, Eric Todd; Eldred, Michael S; Salinger, Andrew G.; Webster, Clayton G.

    2008-10-01

    Predictive simulation of systems comprised of numerous interconnected, tightly coupled components promises to help solve many problems of scientific and national interest. However, predictive simulation of such systems is extremely challenging due to the coupling of a diverse set of physical and biological length and time scales. This report investigates uncertainty quantification methods for such systems that attempt to exploit their structure to gain computational efficiency. The traditional layering of uncertainty quantification around nonlinear solution processes is inverted to allow for heterogeneous uncertainty quantification methods to be applied to each component in a coupled system. Moreover, this approach allows stochastic dimension reduction techniques to be applied at each coupling interface. The mathematical feasibility of these ideas is investigated in this report, and mathematical formulations for the resulting stochastically coupled nonlinear systems are developed.

  4. A new look at the theory uncertainty of ɛ_K

    NASA Astrophysics Data System (ADS)

    Ligeti, Zoltan; Sala, Filippo

    2016-09-01

    The observable ɛ_K is sensitive to flavor violation at some of the highest scales. While its experimental uncertainty is at the half percent level, the theoretical one is in the ballpark of 15%. We explore the nontrivial dependence of the theory prediction and uncertainty on various conventions, like the phase of the kaon fields. In particular, we show how such a rephasing allows us to make the short-distance contribution of the box diagram with two charm quarks, η_cc, purely real. Our results allow us to slightly reduce the total theoretical uncertainty of ɛ_K, while increasing the relative impact of the imaginary part of the long-distance contribution, underlining the need to compute it reliably. We also give updated bounds on the new physics operators that contribute to ɛ_K.

  5. Measuring the uncertainties of discharge measurements: interlaboratory experiments in hydrometry

    NASA Astrophysics Data System (ADS)

    Le Coz, Jérôme; Blanquart, Bertrand; Pobanz, Karine; Dramais, Guillaume; Pierrefeu, Gilles; Hauet, Alexandre; Despax, Aurélien

    2015-04-01

    Quantifying the uncertainty of streamflow data is key for hydrological sciences. The conventional uncertainty analysis based on error propagation techniques is restricted by the absence of traceable discharge standards and by the weight of difficult-to-predict errors related to the operator, procedure and measurement environment. Field interlaboratory experiments have recently emerged as an efficient, standardized method to 'measure' the uncertainties of a given streamgauging technique in given measurement conditions. Both uncertainty approaches are compatible and should be developed jointly in the field of hydrometry. In recent years, several interlaboratory experiments have been reported by different hydrological services. They involved different streamgauging techniques, including acoustic profilers (ADCP), current-meters and handheld radars (SVR). Uncertainty analysis was not always their primary goal: most often, testing the proficiency and homogeneity of instruments, makes and models, procedures and operators was the original motivation. When interlaboratory experiments are processed for uncertainty analysis, once outliers have been discarded, all participants are assumed to be equally skilled and to apply the same streamgauging technique in equivalent conditions. A universal requirement is that all participants simultaneously measure the same discharge, which shall be kept constant within negligible variations. To the best of our knowledge, we were the first to apply the interlaboratory method for computing the uncertainties of streamgauging techniques, according to the authoritative international documents (ISO standards). Several specific issues arise due to the measurement conditions in outdoor canals and rivers. The main limitation is that the best available river discharge references are usually too uncertain to quantify the bias of the streamgauging technique, i.e. the systematic errors that are common to all participants in the experiment. A reference or a
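
The core computation of such an interlaboratory analysis is a one-way variance decomposition in the spirit of ISO 5725: a within-participant (repeatability) component, a between-participant component, and their combination, the reproducibility standard deviation. A sketch under the assumption of a balanced design (equal repeats per participant):

```python
import numpy as np

def interlab_uncertainty(measurements):
    """ISO 5725-style decomposition of an interlaboratory experiment.

    measurements : (n_teams, n_repeats) simultaneous discharge gaugings.
    Returns (s_r, s_L, s_R): repeatability, between-participant, and
    reproducibility standard deviations.
    """
    m = np.asarray(measurements, float)
    p, n = m.shape
    team_means = m.mean(axis=1)
    s_r2 = np.mean(m.var(axis=1, ddof=1))                 # within-team
    s_L2 = max(team_means.var(ddof=1) - s_r2 / n, 0.0)    # between-team
    s_R = np.sqrt(s_r2 + s_L2)                            # reproducibility
    return np.sqrt(s_r2), np.sqrt(s_L2), s_R
```

Note the limitation stated in the abstract: this quantifies the spread among participants but not any bias common to all of them, which would require an independent discharge reference.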

  6. Global hydrology modelling and uncertainty: running multiple ensembles with a campus grid.

    PubMed

    Gosling, Simon N; Bretherton, Dan; Haines, Keith; Arnell, Nigel W

    2010-09-13

    Uncertainties associated with the representation of various physical processes in global climate models (GCMs) mean that, when projections from GCMs are used in climate change impact studies, the uncertainty propagates through to the impact estimates. A complete treatment of this 'climate model structural uncertainty' is necessary so that decision-makers are presented with an uncertainty range around the impact estimates. This uncertainty is often underexplored owing to the human and computer processing time required to perform the numerous simulations. Here, we present a 189-member ensemble of global river runoff and water resource stress simulations that adequately address this uncertainty. Following several adaptations and modifications, the ensemble creation time has been reduced from 750 h on a typical single-processor personal computer to 9 h of high-throughput computing on the University of Reading Campus Grid. Here, we outline the changes that had to be made to the hydrological impacts model and to the Campus Grid, and present the main results. We show that, although there is considerable uncertainty in both the magnitude and the sign of regional runoff changes across different GCMs with climate change, there is much less uncertainty in runoff changes for regions that experience large runoff increases (e.g. the high northern latitudes and Central Asia) and large runoff decreases (e.g. the Mediterranean). Furthermore, there is consensus that the percentage of the global population at risk to water resource stress will increase with climate change.

  7. Uncertainty and Sensitivity Assessments of GPS and GIS Integrated Applications for Transportation

    PubMed Central

    Hong, Sungchul; Vonderohe, Alan P.

    2014-01-01

    Uncertainty and sensitivity analysis methods are introduced, concerning the quality of spatial data as well as that of output information from Global Positioning System (GPS) and Geographic Information System (GIS) integrated applications for transportation. In the methods, an error model and an error propagation method form a basis for formulating characterization and propagation of uncertainties. They are developed in two distinct approaches: analytical and simulation. Thus, an initial evaluation is performed to compare and examine uncertainty estimations from the analytical and simulation approaches. The evaluation results show that estimated ranges of output information from the analytical and simulation approaches are compatible, but the simulation approach rather than the analytical approach is preferred for uncertainty and sensitivity analyses, due to its flexibility and capability to realize positional errors in both input data. Therefore, in a case study, uncertainty and sensitivity analyses based upon the simulation approach is conducted on a winter maintenance application. The sensitivity analysis is used to determine optimum input data qualities, and the uncertainty analysis is then applied to estimate overall qualities of output information from the application. The analysis results show that output information from the non-distance-based computation model is not sensitive to positional uncertainties in input data. However, for the distance-based computational model, output information has a different magnitude of uncertainties, depending on position uncertainties in input data. PMID:24518894

  8. Uncertainty and sensitivity assessments of GPS and GIS integrated applications for transportation.

    PubMed

    Hong, Sungchul; Vonderohe, Alan P

    2014-02-10

    Uncertainty and sensitivity analysis methods are introduced, concerning the quality of spatial data as well as that of output information from Global Positioning System (GPS) and Geographic Information System (GIS) integrated applications for transportation. In the methods, an error model and an error propagation method form a basis for formulating characterization and propagation of uncertainties. They are developed in two distinct approaches: analytical and simulation. Thus, an initial evaluation is performed to compare and examine uncertainty estimations from the analytical and simulation approaches. The evaluation results show that estimated ranges of output information from the analytical and simulation approaches are compatible, but the simulation approach rather than the analytical approach is preferred for uncertainty and sensitivity analyses, due to its flexibility and capability to realize positional errors in both input data. Therefore, in a case study, uncertainty and sensitivity analyses based upon the simulation approach is conducted on a winter maintenance application. The sensitivity analysis is used to determine optimum input data qualities, and the uncertainty analysis is then applied to estimate overall qualities of output information from the application. The analysis results show that output information from the non-distance-based computation model is not sensitive to positional uncertainties in input data. However, for the distance-based computational model, output information has a different magnitude of uncertainties, depending on position uncertainties in input data.
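
For a distance-based computation, the two approaches compared above can be sketched side by side: Monte Carlo simulation of positional errors versus first-order analytical propagation. The model below (independent isotropic Gaussian errors on both endpoints) is an assumption for illustration, not the paper's error model.

```python
import math
import numpy as np

def distance_uncertainty_mc(p1, p2, sigma, n=200_000, seed=0):
    """Monte Carlo propagation: perturb both endpoints with independent
    Gaussian positional errors (std sigma per coordinate) and observe
    the resulting distance distribution."""
    rng = np.random.default_rng(seed)
    a = np.asarray(p1, float) + rng.normal(0, sigma, (n, 2))
    b = np.asarray(p2, float) + rng.normal(0, sigma, (n, 2))
    d = np.linalg.norm(a - b, axis=1)
    return d.mean(), d.std()

def distance_uncertainty_analytical(sigma):
    """First-order (linearized) propagation: for two points each with
    isotropic positional error sigma, the distance std is sigma*sqrt(2)."""
    return sigma * math.sqrt(2.0)
```

For well-separated points the two estimates agree closely; the simulation approach remains usable when the error model or the computation is too complex to linearize, which is why the paper prefers it.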

  9. Using Cross-Section Uncertainty Data to Estimate Biases

    SciTech Connect

    Mueller, Don; Rearden, Bradley T

    2008-01-01

    Ideally, computational method validation is performed by modeling critical experiments that are very similar, neutronically, to the model used in the safety analysis. Similar, in this context, means that the neutron multiplication factors (k{sub eff}) of the safety analysis model and critical experiment model are affected in the same way to the same degree by variations (or errors) in the same nuclear data. Where similarity is demonstrated, the computational bias calculated using the critical experiment model results is 'applicable' to the safety analysis model. Unfortunately, criticality safety analysts occasionally find that the safety analysis models include some feature or material for which adequately similar well-defined critical experiments do not exist to support validation. For example, the analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to assign an additional administrative margin to compensate for the validation weakness or to conclude that the impact on the calculated bias and bias uncertainty is negligible. Due to advances in computer programs and the evolution of cross-section uncertainty data, analysts can use the sensitivity and uncertainty analyses tools implemented in the SCALE TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides that are under-represented or not present in the critical experiments. This paper discusses the method, computer codes, and data used to estimate the potential contribution toward the computational bias of individual nuclides. The results from application of the method to fission products in a burnup credit model are presented.
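The sensitivity-and-uncertainty machinery behind TSUNAMI-style analysis rests on the "sandwich rule", which folds a nuclear-data covariance matrix through k-eff sensitivity coefficients. A minimal sketch with invented sensitivities and covariances (not SCALE output):

```python
import math

# Sensitivity coefficients S_i = (dk/k)/(dsigma_i/sigma_i) for a few
# nuclide-reaction pairs (illustrative values, not from any evaluation).
S = [0.25, -0.10, 0.05]

# Relative covariance matrix of the corresponding nuclear data
# (diagonal = relative variances; off-diagonals = covariances).
C = [
    [0.0004, 0.0001, 0.0],
    [0.0001, 0.0009, 0.0],
    [0.0,    0.0,    0.0016],
]

def sandwich_uncertainty(S, C):
    # "Sandwich rule": relative var(k) = S * C * S^T. This is how
    # data covariances are folded through sensitivities to estimate
    # the data-induced uncertainty in k-eff.
    var = 0.0
    for i, si in enumerate(S):
        for j, sj in enumerate(S):
            var += si * C[i][j] * sj
    return math.sqrt(var)

print(sandwich_uncertainty(S, C))  # relative standard uncertainty in k-eff
```

Nuclides that are under-represented in the benchmark suite show up as sensitivity entries whose data uncertainty contributes to the application model but is untested by the experiments.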

  10. Comparison of k0-NAA measurement results with calculated uncertainties for reference samples

    NASA Astrophysics Data System (ADS)

    Smodiš, B.; Bučar, T.

    2010-10-01

    Standard samples of well-defined geometry containing accurately known amounts of Co, Fe, Gd, Mo, Nd, Sb, Se, W, Zn and Zr were prepared and assayed using k0-based neutron activation analysis (k0-NAA). Measurement results for six independent determinations of each standard spiked sample were evaluated and compared to uncertainties calculated using the computer program ERON, which computes uncertainty propagation factors from the relevant formulae and calculates the overall uncertainty following the internationally recommended approach. The calculated relative expanded uncertainties U (k=2), which ranged from 6 to 11% for particular nuclides/gamma-lines, agreed well with the measurement results, thus confirming the correctness of the applied approach. One important measure for further reducing uncertainties in k0-NAA measurements is to review and re-determine more accurately the specific nuclear constants involved in the relevant calculations.
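For a product/quotient measurement model with independent inputs, the propagation step performed by a program such as ERON reduces to adding relative standard uncertainties in quadrature and applying a coverage factor. A sketch with placeholder input uncertainties, not the paper's actual budget:

```python
import math

def combined_relative_uncertainty(rel_uncertainties):
    # For a measurement model that is a product/quotient of independent
    # factors, relative standard uncertainties combine in quadrature.
    return math.sqrt(sum(u * u for u in rel_uncertainties))

# Illustrative relative standard uncertainties (as fractions) of a few
# input quantities -- placeholders, not the ERON input budget.
u_inputs = [0.02, 0.015, 0.025, 0.01]

u_c = combined_relative_uncertainty(u_inputs)
U = 2.0 * u_c  # expanded uncertainty with coverage factor k = 2
print(f"u_c = {u_c:.4f}, U(k=2) = {U:.4f}")
```

The expanded uncertainty U(k=2) corresponds to a coverage probability of roughly 95% under the usual normality assumption, matching the U(k=2) values quoted in the abstract.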

  11. An Uncertainty Quantification Framework for Prognostics and Condition-Based Monitoring

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Goebel, Kai

    2014-01-01

    This paper presents a computational framework for uncertainty quantification in prognostics in the context of condition-based monitoring of aerospace systems. The different sources of uncertainty and the various uncertainty quantification activities in condition-based prognostics are outlined in detail, and it is demonstrated that the Bayesian subjective approach is suitable for interpreting uncertainty in online monitoring. A state-space model-based framework for prognostics that can rigorously account for the various sources of uncertainty is presented. Prognostics consists of two important steps. First, the state of the system is estimated using Bayesian tracking, and then the future states of the system are predicted until failure, thereby computing the remaining useful life of the system. The proposed framework is illustrated using the power system of a planetary rover test-bed, which is being developed and studied at NASA Ames Research Center.
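The second step, forward prediction of sampled states until failure, can be sketched generically. The degradation model and all numbers below are invented stand-ins, not the rover power-system model:

```python
import random
import statistics

def predict_rul(x0_samples, degrade, threshold, dt=1.0, max_steps=1000, seed=2):
    # Propagate each posterior state sample through a stochastic
    # degradation model until it crosses the failure threshold; the
    # spread of the crossing times quantifies the remaining-useful-life
    # (RUL) uncertainty.
    rng = random.Random(seed)
    ruls = []
    for x in x0_samples:
        t = 0.0
        while x > threshold and t < max_steps * dt:
            x = degrade(x, rng)
            t += dt
        ruls.append(t)
    return ruls

# Toy battery-like degradation: linear decay plus usage noise
# (illustrative only; not the model used in the paper).
def degrade(x, rng):
    return x - 0.01 + rng.gauss(0.0, 0.002)

rng0 = random.Random(0)
x0 = [rng0.gauss(1.0, 0.02) for _ in range(200)]  # posterior state samples
ruls = predict_rul(x0, degrade, threshold=0.3)
print(statistics.mean(ruls), statistics.stdev(ruls))
```

In a full framework the `x0` samples would come from the Bayesian tracking step (e.g. a particle filter), so that state-estimation uncertainty flows directly into the RUL distribution.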

  12. Assessment of Laminar, Convective Aeroheating Prediction Uncertainties for Mars-Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Prabhu, Dinesh K.

    2013-01-01

    An assessment of computational uncertainties is presented for numerical methods used by NASA to predict laminar, convective aeroheating environments for Mars-entry vehicles. A survey was conducted of existing experimental heat transfer and shock-shape data for high-enthalpy reacting-gas CO2 flows, and five relevant test series were selected for comparison with predictions. Solutions were generated at the experimental test conditions using NASA state-of-the-art computational tools and compared with these data. The comparisons were evaluated to establish predictive uncertainties as a function of total enthalpy and to provide guidance for future experimental testing requirements to help lower these uncertainties.

  13. Assessment of Laminar, Convective Aeroheating Prediction Uncertainties for Mars Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Prabhu, Dinesh K.

    2011-01-01

    An assessment of computational uncertainties is presented for numerical methods used by NASA to predict laminar, convective aeroheating environments for Mars entry vehicles. A survey was conducted of existing experimental heat-transfer and shock-shape data for high enthalpy, reacting-gas CO2 flows and five relevant test series were selected for comparison to predictions. Solutions were generated at the experimental test conditions using NASA state-of-the-art computational tools and compared to these data. The comparisons were evaluated to establish predictive uncertainties as a function of total enthalpy and to provide guidance for future experimental testing requirements to help lower these uncertainties.

  14. Visualizing Flow of Uncertainty through Analytical Processes.

    PubMed

    Wu, Yingcai; Yuan, Guo-Xun; Ma, Kwan-Liu

    2012-12-01

    Uncertainty can arise in any stage of a visual analytics process, especially in data-intensive applications with a sequence of data transformations. Additionally, throughout the process of multidimensional, multivariate data analysis, uncertainty due to data transformation and integration may split, merge, increase, or decrease. This dynamic characteristic along with other features of uncertainty pose a great challenge to effective uncertainty-aware visualization. This paper presents a new framework for modeling uncertainty and characterizing the evolution of the uncertainty information through analytical processes. Based on the framework, we have designed a visual metaphor called uncertainty flow to visually and intuitively summarize how uncertainty information propagates over the whole analysis pipeline. Our system allows analysts to interact with and analyze the uncertainty information at different levels of detail. Three experiments were conducted to demonstrate the effectiveness and intuitiveness of our design.

  15. An introductory guide to uncertainty analysis in environmental and health risk assessment. Environmental Restoration Program

    SciTech Connect

    Hammonds, J.S.; Hoffman, F.O.; Bartell, S.M.

    1994-12-01

    This report presents guidelines for evaluating uncertainty in mathematical equations and computer models applied to assess human health and environmental risk. Uncertainty analyses involve the propagation of uncertainty in model parameters and model structure to obtain confidence statements for the estimate of risk and to identify the model components of dominant importance. Uncertainty analyses are required when there is no a priori knowledge about uncertainty in the risk estimate and when there is a chance that failure to assess uncertainty may lead to the selection of wrong options for risk reduction. Uncertainty analyses are most effective when conducted in an iterative mode. When the uncertainty in the risk estimate is intolerable for decision-making, additional data are acquired for the dominant model components that contribute most to uncertainty. This process is repeated until the level of residual uncertainty can be tolerated. Analytical and numerical methods for error propagation are presented, along with methods for identifying the most important contributors to uncertainty. Monte Carlo simulation with either Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) is proposed as the most robust method for propagating uncertainty through either simple or complex models. A distinction is made between simulating a stochastically varying assessment endpoint (i.e., the distribution of individual risks in an exposed population) and quantifying uncertainty due to lack of knowledge about a fixed but unknown quantity (e.g., a specific individual, the maximally exposed individual, or the mean, median, or 95th percentile of the distribution of exposed individuals). Emphasis is placed on the need for subjective judgement to quantify uncertainty when relevant data are absent or incomplete.
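The report's recommended Monte Carlo propagation with SRS or LHS can be sketched in a few lines; the toy risk model below is illustrative only:

```python
import math
import random
import statistics

def srs(n, rng):
    # Simple Random Sampling of a Uniform(0,1) input parameter.
    return [rng.random() for _ in range(n)]

def lhs(n, rng):
    # Latin Hypercube Sampling: one draw inside each of n equal strata,
    # then shuffle the stratum order.
    pts = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(pts)
    return pts

def propagate(model, samples):
    # Push each sampled parameter through the model and summarize.
    out = [model(u) for u in samples]
    return statistics.mean(out), statistics.stdev(out)

model = lambda u: math.exp(u)  # toy risk model; true mean is e - 1
rng = random.Random(42)
print(propagate(model, srs(1000, rng)))
print(propagate(model, lhs(1000, rng)))  # same mean, less sampling noise
```

Both samplers converge to the same answer; LHS simply stratifies the input space so the mean estimate stabilizes with far fewer model runs, which is why it is preferred for expensive models.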

  16. Estimation of uncertainty for fatigue growth rate at cryogenic temperatures

    NASA Astrophysics Data System (ADS)

    Nyilas, Arman; Weiss, Klaus P.; Urbach, Elisabeth; Marcinek, Dawid J.

    2014-01-01

    Fatigue crack growth rate (FCGR) measurement data for high-strength austenitic alloys in cryogenic environments generally suffer from a high degree of scatter, in particular in the ΔK regime below 25 MPa√m. Standard mathematical smoothing techniques ultimately force a linear relationship in the stage II regime (crack propagation rate versus ΔK) in a double-log field, known as the Paris law. However, the bandwidth of uncertainty then rests somewhat arbitrarily on the researcher's interpretation. The present paper applies to FCGR data the uncertainty concept given by the GUM (Guide to the Expression of Uncertainty in Measurement), which since 1993 has been the recommended procedure for avoiding subjective estimation of error bands. Within this context, in the absence of a true value, the best estimate is evaluated by a statistical method that uses the crack propagation law as the mathematical measurement model equation and identifies all input parameters. Each parameter entering the measurement was processed using the Gaussian distribution law, with partial differentiation of the terms providing the sensitivity coefficients. The combined standard uncertainty, determined from each term together with its computed sensitivity coefficient, finally yields the measurement uncertainty of the FCGR test result. The described uncertainty procedure has been applied within the framework of ITER to a recent FCGR measurement of high-strength, high-toughness Type 316LN material tested at 7 K using a standard ASTM proportional compact tension specimen. The determined best-estimate values of the Paris law constants, C0 and the exponent m, along with their uncertainties, may serve as a realistic basis for the life expectancy of cyclically loaded members.
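The GUM procedure described above, sensitivity coefficients from partial derivatives combined in quadrature, can be sketched for the Paris law itself. All numerical values are illustrative, not the ITER 316LN results:

```python
import math

def paris_rate(C, m, dK):
    # Paris law, stage II: da/dN = C * (dK)^m
    return C * dK ** m

def gum_uncertainty(C, u_C, m, u_m, dK, u_dK):
    # GUM combined standard uncertainty: sensitivity coefficients
    # c_i = dy/dx_i, inputs assumed uncorrelated.
    y = paris_rate(C, m, dK)
    c_C = dK ** m                    # dy/dC
    c_m = y * math.log(dK)           # dy/dm
    c_dK = C * m * dK ** (m - 1)     # dy/d(dK)
    return math.sqrt((c_C * u_C) ** 2 + (c_m * u_m) ** 2 + (c_dK * u_dK) ** 2)

# Illustrative values only (units: mm/cycle and MPa*sqrt(m)).
C, m, dK = 1.0e-11, 3.0, 30.0
u = gum_uncertainty(C, 0.1e-11, m, 0.05, dK, 0.5)
print(paris_rate(C, m, dK), u)
```

Note that the sensitivity to the exponent `m` enters through `log(dK)`, so the combined uncertainty of the predicted growth rate grows quickly with ΔK even when `u_m` itself is small.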

  17. 'spup' - an R package for uncertainty propagation in spatial environmental modelling

    NASA Astrophysics Data System (ADS)

    Sawicka, Kasia; Heuvelink, Gerard

    2016-04-01

    Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability, including case studies with spatial models and spatial model inputs. Given the growing popularity and applicability of the open-source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining uncertainty propagation from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally.
Selected static and interactive visualization methods that are understandable by non-experts with limited background in
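The core of such Monte Carlo propagation with cross-correlated uncertain inputs, which 'spup' wraps in R with much richer sampling and visualization support, can be sketched in a few lines. The model and correlation below are invented:

```python
import math
import random
import statistics

def correlated_normals(n, rho, rng):
    # Sample pairs (a, b) of standard normals with correlation rho,
    # using the 2x2 Cholesky factor written out by hand.
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        out.append((z1, rho * z1 + math.sqrt(1 - rho * rho) * z2))
    return out

def propagate(model, samples):
    y = [model(a, b) for a, b in samples]
    return statistics.mean(y), statistics.stdev(y)

# Toy "environmental model" of two cross-correlated uncertain inputs;
# this stands in for spup's MC machinery, not its actual API.
model = lambda a, b: a + b
rng = random.Random(7)
print(propagate(model, correlated_normals(5000, 0.8, rng)))
# positive cross-correlation inflates the output sd above sqrt(2)
```

Ignoring the cross-correlation here would understate the output standard deviation by about a quarter, which is precisely the kind of error that dedicated uncertainty-model specification is meant to prevent.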

  18. Integrated Arrival and Departure Schedule Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Xue, Min; Zelinski, Shannon

    2014-01-01

    In terminal airspace, integrating arrivals and departures with shared waypoints provides the potential of improving operational efficiency by allowing direct routes when possible. Incorporating stochastic evaluation as a post-analysis process of deterministic optimization, and imposing a safety buffer in deterministic optimization, are two ways to learn and alleviate the impact of uncertainty and to avoid unexpected outcomes. This work presents a third and direct way to take uncertainty into consideration during the optimization. The impact of uncertainty was incorporated into cost evaluations when searching for the optimal solutions. The controller intervention count was computed using a heuristic model and served as another stochastic cost besides total delay. Costs under uncertainty were evaluated using Monte Carlo simulations. The Pareto fronts that contain a set of solutions were identified, and the trade-off between delays and controller intervention count was shown. Solutions that shared similar delays but had different intervention counts were investigated. The results showed that optimization under uncertainty could identify compromise solutions on Pareto fronts, which is better than deterministic optimization with extra safety buffers. It helps decision-makers reduce controller intervention while achieving low delays.
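Identifying the Pareto front over the two stochastic costs, delay and intervention count, comes down to a non-dominance filter. A minimal sketch with made-up Monte-Carlo-averaged costs:

```python
def pareto_front(solutions):
    # Keep solutions not dominated in (delay, interventions): t dominates
    # s if t is no worse in both costs and differs from s somewhere.
    front = []
    for s in solutions:
        dominated = any(
            t[0] <= s[0] and t[1] <= s[1] and t != s
            for t in solutions
        )
        if not dominated:
            front.append(s)
    return sorted(front)

# (total delay, expected controller interventions) per candidate
# schedule -- invented Monte-Carlo-averaged costs for illustration.
costs = [(10, 5), (12, 3), (9, 9), (15, 2), (11, 4), (12, 6)]
print(pareto_front(costs))
# [(9, 9), (10, 5), (11, 4), (12, 3), (15, 2)]
```

Each point on the front is a compromise: moving along it trades delay against intervention count, which is the choice the paper leaves to the decision-maker.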

  19. Uncertainty Quantification for Monitoring of Civil Structures from Vibration Measurements

    NASA Astrophysics Data System (ADS)

    Döhler, Michael; Mevel, Laurent

    2014-05-01

    Health Monitoring of civil structures can be performed by detecting changes in the modal parameters of a structure, or more directly in the measured vibration signals. For a continuous monitoring the excitation of a structure is usually ambient, thus unknown and assumed to be noise. Hence, all estimates from the vibration measurements are realizations of random variables with inherent uncertainty due to (unknown) process and measurement noise and finite data length. In this talk, a strategy for quantifying the uncertainties of modal parameter estimates from a subspace-based system identification approach is presented and the importance of uncertainty quantification in monitoring approaches is shown. Furthermore, a damage detection method is presented, which is based on the direct comparison of the measured vibration signals without estimating modal parameters, while taking the statistical uncertainty in the signals correctly into account. The usefulness of both strategies is illustrated on data from a progressive damage action on a prestressed concrete bridge. References E. Carden and P. Fanning. Vibration based condition monitoring: a review. Structural Health Monitoring, 3(4):355-377, 2004. M. Döhler and L. Mevel. Efficient multi-order uncertainty computation for stochastic subspace identification. Mechanical Systems and Signal Processing, 38(2):346-366, 2013. M. Döhler, L. Mevel, and F. Hille. Subspace-based damage detection under changes in the ambient excitation statistics. Mechanical Systems and Signal Processing, 45(1):207-224, 2014.

  20. An Uncertainty-Aware Approach for Exploratory Microblog Retrieval.

    PubMed

    Liu, Mengchen; Liu, Shixia; Zhu, Xizhou; Liao, Qinying; Wei, Furu; Pan, Shimei

    2016-01-01

    Although there has been a great deal of interest in analyzing customer opinions and breaking news in microblogs, progress has been hampered by the lack of an effective mechanism to discover and retrieve data of interest from microblogs. To address this problem, we have developed an uncertainty-aware visual analytics approach to retrieve salient posts, users, and hashtags. We extend an existing ranking technique to compute a multifaceted retrieval result: the mutual reinforcement rank of a graph node, the uncertainty of each rank, and the propagation of uncertainty among different graph nodes. To illustrate the three facets, we have also designed a composite visualization with three visual components: a graph visualization, an uncertainty glyph, and a flow map. The graph visualization with glyphs, the flow map, and the uncertainty analysis together enable analysts to effectively find the most uncertain results and interactively refine them. We have applied our approach to several Twitter datasets. Qualitative evaluation and two real-world case studies demonstrate the promise of our approach for retrieving high-quality microblog data. PMID:26529705

  1. VARBOOT: A spatial bootstrap program for semivariogram uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Pardo-Igúzquiza, Eulogio; Olea, Ricardo A.

    2012-04-01

    In applied geostatistics, the semivariogram is commonly estimated from experimental data, producing an empirical semivariogram for a specified number of discrete lags. In a second stage, a model defined by a few parameters is fitted to the empirical semivariogram. As the experimental data are usually few and sparsely located, there is considerable uncertainty about the calculated semivariogram values (uncertainty of the empirical semivariogram) and about the parameters of any model fitted to them (uncertainty of the estimated model parameters). In this paper, the uncertainty in the modeling of the empirical semivariogram is numerically assessed by the generalized bootstrap, an extension of the classic bootstrap procedure modified for spatially correlated data. A computer program for the assessment of these uncertainties is described and provided. In particular, for the empirical semivariogram the program provides the standard errors, the bootstrap percentile confidence intervals, the complete variance-covariance matrix, and the standard deviation and correlation matrices. A public-domain, natural dataset is used to illustrate the performance of the program. A promising result is that, for any distance, the median of the bootstrap distribution for the empirical semivariogram approximates the underlying semivariogram more closely than the estimate derived from the empirical sample.
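The percentile-bootstrap confidence intervals the program reports can be illustrated for a single lag of the empirical (Matheron) semivariogram. This simplified sketch uses classic i.i.d. resampling of pairs and omits the decorrelation step that makes the generalized bootstrap valid for spatially correlated data:

```python
import random
import statistics

def semivariance(pairs):
    # Empirical (Matheron) semivariance for point pairs one lag apart:
    # gamma = mean of squared value differences, divided by 2.
    return sum((a - b) ** 2 for a, b in pairs) / (2.0 * len(pairs))

def bootstrap_ci(pairs, n_boot=2000, alpha=0.05, seed=3):
    # Classic percentile bootstrap: resample pairs with replacement,
    # recompute the statistic, and read off the empirical quantiles.
    rng = random.Random(seed)
    stats = sorted(
        semivariance([rng.choice(pairs) for _ in pairs])
        for _ in range(n_boot)
    )
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

rng = random.Random(0)
pairs = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(200)]
gamma = semivariance(pairs)  # near 1.0 for independent unit-variance data
print(gamma, bootstrap_ci(pairs))
```

The generalized bootstrap of the paper first transforms the data to remove spatial correlation, resamples, and transforms back; the percentile-interval bookkeeping afterwards is the same as here.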

  2. Uncertainties in stellar ages provided by grid techniques

    NASA Astrophysics Data System (ADS)

    Prada Moroni, P. G.; Valle, G.; Dell'Omodarme, M.; Degl'Innocenti, S.

    2016-09-01

    The determination of the age of single stars by means of grid-based techniques is a well-established method. We discuss the impact on these estimates of the uncertainties in several ingredients routinely adopted in stellar computations. The systematic bias on age determination caused by varying the assumed initial helium abundance, the mixing-length and convective core overshooting parameters, and the microscopic diffusion is quantified and compared with the statistical error owing to the current uncertainty in the observations. The typical uncertainty in the observations accounts for a 1σ statistical relative error in age determination ranging on average from about -35% to +42%, depending on the mass. However, the relative error in age strongly depends on the evolutionary phase and can be higher than 120% for stars near the zero-age main sequence, while it is typically about 20% or lower in the advanced main-sequence phase. A variation of ±1 in the helium-to-metal enrichment ratio induces a quite modest systematic bias on age estimates. The maximum bias due to the presence of convective core overshooting is -7% for β = 0.2 and -13% for β = 0.4. The main sources of bias are the uncertainty in the mixing-length value and the neglect of microscopic diffusion, which each account for a bias comparable to the random error uncertainty.

  3. An Uncertainty-Aware Approach for Exploratory Microblog Retrieval.

    PubMed

    Liu, Mengchen; Liu, Shixia; Zhu, Xizhou; Liao, Qinying; Wei, Furu; Pan, Shimei

    2016-01-01

    Although there has been a great deal of interest in analyzing customer opinions and breaking news in microblogs, progress has been hampered by the lack of an effective mechanism to discover and retrieve data of interest from microblogs. To address this problem, we have developed an uncertainty-aware visual analytics approach to retrieve salient posts, users, and hashtags. We extend an existing ranking technique to compute a multifaceted retrieval result: the mutual reinforcement rank of a graph node, the uncertainty of each rank, and the propagation of uncertainty among different graph nodes. To illustrate the three facets, we have also designed a composite visualization with three visual components: a graph visualization, an uncertainty glyph, and a flow map. The graph visualization with glyphs, the flow map, and the uncertainty analysis together enable analysts to effectively find the most uncertain results and interactively refine them. We have applied our approach to several Twitter datasets. Qualitative evaluation and two real-world case studies demonstrate the promise of our approach for retrieving high-quality microblog data.

  4. Simple fuzzy logic estimation of flow forecast uncertainty

    NASA Astrophysics Data System (ADS)

    Danhelka, Jan

    2010-05-01

    Fuzzy logic is recognized as a useful tool to support decision making under uncertainty, and methods based on it have been developed for reservoir operation and real-time flood management. Maskey (2004) describes a method of model uncertainty assessment based on qualitative expert judgement and its representation in fuzzy space. It is based on categorical judgement of the quality and importance of selected model parameters (processes). The method was modified to reflect the varying uncertainty of a single model realization (forecast) with respect to the input quantitative precipitation forecast (QPF). Two model uncertainty parameters were distinguished: 1) the QPF, and 2) model uncertainty due to concept and parameters. The approach was tested and applied to the Černá river basin (127 km2) in southern Bohemia for the period from January 2008. The Aqualog forecasting system (with SAC-SMA implemented) is used for real-time forecasting within the basin. It provides a deterministic QPF-based (NWP ALADIN) forecast with a 48 h lead time. The aim of the study was to estimate the uncertainty of the forecast using a simple fuzzy procedure. QPF uncertainty dominates the total uncertainty of hydrological forecasts under the conditions of the Czech Republic; therefore, an evaluation of QPF performance was done for the basin. Based on detected quantiles of relative difference, a fuzzy expression of QPF exceedance probability was constructed to represent the quality of the QPF parameter. We further assumed that the importance of the QPF parameter is proportional to its quality. Model uncertainty was qualitatively estimated to be moderate in both quality and importance. Then the fuzzy sum of both parameters was computed. The output is then fitted to the deterministic flow forecast using the highest forecasted flow and its known reference in fuzzy space (determined according to the QPF performance evaluation). The case study provided promising results in terms of the Brier skill score (0.24) as well as in comparison of the forecasted to the expected distribution

  5. NASA Team 2 Sea Ice Concentration Algorithm Retrieval Uncertainty

    NASA Technical Reports Server (NTRS)

    Brucker, Ludovic; Cavalieri, Donald J.; Markus, Thorsten; Ivanoff, Alvaro

    2014-01-01

    Satellite microwave radiometers are widely used to estimate sea ice cover properties (concentration, extent, and area) through the use of sea ice concentration (IC) algorithms. Rare are the algorithms providing associated IC uncertainty estimates. Algorithm uncertainty estimates are needed to assess accurately global and regional trends in IC (and thus extent and area), and to improve sea ice predictions on seasonal to interannual timescales using data assimilation approaches. This paper presents a method to provide relative IC uncertainty estimates using the enhanced NASA Team (NT2) IC algorithm. The proposed approach takes advantage of the NT2 calculations and solely relies on the brightness temperatures (TBs) used as input. NT2 IC and its associated relative uncertainty are obtained for both the Northern and Southern Hemispheres using the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) TB. NT2 IC relative uncertainties estimated on a footprint-by-footprint swath-by-swath basis were averaged daily over each 12.5-km grid cell of the polar stereographic grid. For both hemispheres and throughout the year, the NT2 relative uncertainty is less than 5%. In the Southern Hemisphere, it is low in the interior ice pack, and it increases in the marginal ice zone up to 5%. In the Northern Hemisphere, areas with high uncertainties are also found in the high IC area of the Central Arctic. Retrieval uncertainties are greater in areas corresponding to NT2 ice types associated with deep snow and new ice. Seasonal variations in uncertainty show larger values in summer as a result of melt conditions and greater atmospheric contributions. Our analysis also includes an evaluation of the NT2 algorithm sensitivity to AMSR-E sensor noise. 
There is a 60% probability that the IC does not change (to within the computed retrieval precision of 1%) due to sensor noise, and the cumulated probability shows that there is a 90% chance that the IC varies by less than

  6. Adjoint-Based Uncertainty Quantification with MCNP

    SciTech Connect

    Seifried, Jeffrey E.

    2011-09-01

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.

  7. Uncertainty Quantification in State Estimation using the Probabilistic Collocation Method

    SciTech Connect

    Lin, Guang; Zhou, Ning; Ferryman, Thomas A.; Tuffner, Francis K.

    2011-03-23

    In this study, a new efficient uncertainty quantification technique, the probabilistic collocation method (PCM) on sparse grid points, is employed to enable the evaluation of uncertainty in state estimation. The PCM allows us to use just a small number of ensembles to quantify the uncertainty in estimating the state variables of power systems. With sparse grid points, the PCM approach can handle a large number of uncertain parameters in power systems at relatively low computational cost compared with classic Monte Carlo (MC) simulations. The algorithm and procedure are outlined, and we demonstrate the capability of the PCM-on-sparse-grid-points approach and illustrate its application to uncertainty quantification in state estimation of the IEEE 14-bus model as an example. MC simulations have also been conducted to verify the accuracy of the PCM approach. Comparing the results obtained from MC simulations with the PCM results for the mean and standard deviation of the uncertain parameters makes it evident that the PCM approach is computationally more efficient than MC simulations.
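The contrast drawn above, a handful of collocation points versus many Monte Carlo runs, can be sketched with a one-dimensional Gauss-Hermite rule standing in for the sparse grid; the "power system response" is a toy polynomial, not a state estimator:

```python
import math
import random
import statistics

# 3-point Gauss-Hermite rule for a standard normal input
# (probabilists' nodes/weights; exact for polynomials up to degree 5).
NODES = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
WEIGHTS = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def collocation_moments(model):
    # Collocation-style estimate: evaluate the model only at the few
    # collocation points and combine with the quadrature weights.
    mean = sum(w * model(x) for w, x in zip(WEIGHTS, NODES))
    second = sum(w * model(x) ** 2 for w, x in zip(WEIGHTS, NODES))
    return mean, second - mean ** 2

def mc_moments(model, n=50000, seed=5):
    rng = random.Random(seed)
    y = [model(rng.gauss(0, 1)) for _ in range(n)]
    return statistics.mean(y), statistics.pvariance(y)

model = lambda x: x * x + 0.1 * x  # toy response to one uncertain parameter
print(collocation_moments(model))  # 3 model runs
print(mc_moments(model))           # 50000 model runs, similar answer
```

For multiple uncertain parameters, tensoring such rules explodes combinatorially; sparse grids keep only a subset of the tensor points, which is what makes the PCM tractable for many parameters.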

  8. An Uncertainty Quantification System for Tabular Equations of State

    NASA Astrophysics Data System (ADS)

    Carpenter, John; Robinson, Allen; Debusschere, Bert; Mattsson, Ann; Drake, Richard; Rider, William

    2013-06-01

    Providing analysts with information regarding the accuracy of computational models is key for enabling predictive design and engineering. Uncertainty in material models can make significant contributions to the overall uncertainty in calculations. As a first step toward tackling this large problem, we present an uncertainty quantification system for tabular equations of state (EOS). First a posterior distribution of EOS model parameters is inferred using Bayes rule and a set of experimental and computational data. EOS tables are generated for parameter states sampled from the posterior distribution. A new unstructured triangular table format allows for capturing multi-phase model behavior. A principal component analysis then reduces this set of tables to a mean table and most significant perturbations. This final set of tables is provided to hydrocodes for performing simulations using standard non-intrusive uncertainty propagation methods. A multi-phase aluminum model is used to demonstrate the system. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
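The reduction of an ensemble of sampled tables to a mean table plus its most significant perturbations can be sketched with matrix-free power iteration on the ensemble covariance; the "tables" below are short invented vectors, not real EOS data:

```python
import random

def mean_table(tables):
    n = len(tables)
    return [sum(t[i] for t in tables) / n for i in range(len(tables[0]))]

def first_principal_component(tables, iters=200, seed=4):
    # Power iteration on the ensemble covariance: returns the mean table
    # and the dominant perturbation direction of the samples about it.
    mu = mean_table(tables)
    devs = [[t[i] - mu[i] for i in range(len(mu))] for t in tables]
    rng = random.Random(seed)
    v = [rng.gauss(0, 1) for _ in mu]
    for _ in range(iters):
        # w = (devs^T devs) v, applied matrix-free row by row
        proj = [sum(d[i] * v[i] for i in range(len(v))) for d in devs]
        w = [sum(p * d[i] for p, d in zip(proj, devs)) for i in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mu, v

# Toy "EOS tables": the ensemble varies along a single known direction,
# so the recovered component should align with it (up to sign).
base = [1.0, 2.0, 3.0, 4.0]
direction = [1.0, 0.5, 0.0, 0.0]
rng = random.Random(0)
tables = []
for _ in range(50):
    c = rng.gauss(0, 1)
    tables.append([b + c * d for b, d in zip(base, direction)])

mu, v = first_principal_component(tables)
print(mu)  # close to the base table
print(v)   # proportional to `direction`, up to sign
```

A full PCA would repeat this with deflation to extract further components; handing hydrocodes the mean table plus the leading components is what allows standard non-intrusive propagation downstream.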

  9. Automated Generation of Tabular Equations of State with Uncertainty Information

    NASA Astrophysics Data System (ADS)

    Carpenter, John H.; Robinson, Allen C.; Debusschere, Bert J.; Mattsson, Ann E.

    2015-06-01

    As computational science pushes toward higher fidelity prediction, understanding the uncertainty associated with closure models, such as the equation of state (EOS), has become a key focus. Traditional EOS development often involves a fair amount of art, where expert modelers may appear as magicians, providing what is felt to be the closest possible representation of the truth. Automation of the development process gives a means by which one may demystify the art of EOS, while simultaneously obtaining uncertainty information in a manner that is both quantifiable and reproducible. We describe our progress on the implementation of such a system to provide tabular EOS tables with uncertainty information to hydrocodes. Key challenges include encoding the artistic expert opinion into an algorithmic form and preserving the analytic models and uncertainty information in a manner that is both accurate and computationally efficient. Results are demonstrated on a multi-phase aluminum model. *Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  10. The effect of model uncertainty on cooperation in sensorimotor interactions

    PubMed Central

    Grau-Moya, J.; Hez, E.; Pezzulo, G.; Braun, D. A.

    2013-01-01

    Decision-makers have been shown to rely on probabilistic models for perception and action. However, these models can be incorrect or partially wrong, in which case the decision-maker has to cope with model uncertainty. Model uncertainty has recently also been shown to be an important determinant of sensorimotor behaviour in humans that can lead to risk-sensitive deviations from Bayes optimal behaviour towards worst-case or best-case outcomes. Here, we investigate the effect of model uncertainty on cooperation in sensorimotor interactions similar to the stag-hunt game, where players develop models about the other player and decide between a pay-off-dominant cooperative solution and a risk-dominant, non-cooperative solution. In simulations, we show that players who allow for optimistic deviations from their opponent model are much more likely to converge to cooperative outcomes. We also implemented this agent model in a virtual reality environment, and let human subjects play against a virtual player. In this game, subjects' pay-offs were experienced as forces opposing their movements. During the experiment, we manipulated the risk sensitivity of the computer player and observed human responses. We found not only that humans adaptively changed their level of cooperation depending on the risk sensitivity of the computer player but also that their initial play exhibited characteristic risk-sensitive biases. Our results suggest that model uncertainty is an important determinant of cooperation in two-player sensorimotor interactions. PMID:23945266

  11. Uncertainty of climate change impact on groundwater reserves - Application to a chalk aquifer

    NASA Astrophysics Data System (ADS)

    Goderniaux, Pascal; Brouyère, Serge; Wildemeersch, Samuel; Therrien, René; Dassargues, Alain

    2015-09-01

    Recent studies have evaluated the impact of climate change on groundwater resources for different geographical and climatic contexts. However, most studies have either not estimated the uncertainty around projected impacts or have limited the analysis to the uncertainty related to climate models. In this study, the uncertainties around impact projections from several sources (climate models, natural variability of the weather, hydrological model calibration) are calculated and compared for the Geer catchment (465 km2) in Belgium. We use a surface-subsurface integrated model implemented using the finite element code HydroGeoSphere, coupled with climate change scenarios (2010-2085) and the UCODE_2005 inverse model, to assess the uncertainty related to the calibration of the hydrological model. This integrated model provides a more realistic representation of the water exchanges between surface and subsurface domains and places stronger constraints on the calibration through the use of both surface and subsurface observation data. Sensitivity and uncertainty analyses were performed on predictions. The linear uncertainty analysis is approximate for this nonlinear system, but it provides some measure of uncertainty for computationally demanding models. Results show that, for the Geer catchment, the most important uncertainty is related to calibration of the hydrological model. The total uncertainty associated with the prediction of groundwater levels remains large. By the end of the century, however, the uncertainty becomes smaller than the predicted decline in groundwater levels.

  12. ACCOUNTING FOR CALIBRATION UNCERTAINTIES IN X-RAY ANALYSIS: EFFECTIVE AREAS IN SPECTRAL FITTING

    SciTech Connect

    Lee, Hyunsook; Kashyap, Vinay L.; Drake, Jeremy J.; Ratzlaff, Pete; Siemiginowska, Aneta E-mail: vkashyap@cfa.harvard.edu E-mail: rpete@head.cfa.harvard.edu

    2011-04-20

    While considerable advances have been made to account for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have generally been ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here, we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with a Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.

  13. Extended uncertainty from first principles

    NASA Astrophysics Data System (ADS)

    Costa Filho, Raimundo N.; Braga, João P. M.; Lira, Jorge H. S.; Andrade, José S.

    2016-04-01

    A translation operator acting in a space with a diagonal metric is introduced to describe the motion of a particle in a quantum system. We show that the momentum operator and, as a consequence, the uncertainty relation now depend on the metric. It is also shown that, for any metric expanded up to second order, this formalism naturally leads to an extended uncertainty principle (EUP) with a minimum momentum dispersion. The Ehrenfest theorem is modified to include an additional term related to a tidal force arising from the space curvature introduced by the metric. For one-dimensional systems, we show how to map a harmonic potential to an effective potential in Euclidean space using different metrics.

  14. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
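A minimal sketch of the Monte Carlo approach described above, combining a measurement uncertainty component with a rarely occurring human-error term. All numbers (the residual risk, the error magnitude, the pH scenario) are invented for illustration and are not the paper's expert-judgment values:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical pH measurement: instrumental/measurement scatter plus a
# residual human-error term that occurs with small probability and, when
# it occurs, shifts the result by a random bias (numbers illustrative).
measurement = rng.normal(loc=7.00, scale=0.02, size=n)   # u(pH) = 0.02
error_occurs = rng.random(n) < 0.01                      # residual risk: 1%
human_error = np.where(error_occurs, rng.normal(0.0, 0.10, size=n), 0.0)

result = measurement + human_error
u_total = result.std(ddof=1)

# Back out the human-error contribution to the uncertainty budget.
u_human = np.sqrt(max(u_total**2 - 0.02**2, 0.0))
print(f"combined u = {u_total:.4f}, human-error contribution = {u_human:.4f}")
```

With these numbers the human-error term inflates the budget measurably but does not dominate it, matching the qualitative finding reported above.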

  15. Individual differences in causal uncertainty.

    PubMed

    Weary, G; Edwards, J A

    1994-08-01

    This article presents a scale that measures chronic individual differences in people's uncertainty about their ability to understand and detect cause-and-effect relationships in the social world: the Causal Uncertainty Scale (CUS). The results of Study 1 indicated that the scale has good internal and adequate test-retest reliability. Additionally, the results of a factor analysis suggested that the scale appears to be tapping a single construct. Study 2 examined the convergent and discriminant validity of the scale, and Studies 3 and 4 examined the predictive and incremental validity of the scale. The importance of the CUS to work on depressives' social information processing and for basic research and theory on human social judgment processes is discussed.

  16. Aspects of complementarity and uncertainty

    NASA Astrophysics Data System (ADS)

    Vathsan, Radhika; Qureshi, Tabish

    2016-08-01

    The two-slit experiment with quantum particles provides many insights into the behavior of quantum mechanics, including Bohr’s complementarity principle. Here, we analyze Einstein’s recoiling slit version of the experiment and show how the inevitable entanglement between the particle and the recoiling slit as a which-way detector is responsible for complementarity. We derive the Englert-Greenberger-Yasin duality from this entanglement, which can also be thought of as a consequence of sum-uncertainty relations between certain complementary observables of the recoiling slit. Thus, entanglement is an integral part of the which-way detection process, and so is uncertainty, though in a completely different way from that envisaged by Bohr and Einstein.

  17. Quantifying uncertainty from material inhomogeneity.

    SciTech Connect

    Battaile, Corbett Chandler; Emery, John M.; Brewer, Luke N.; Boyce, Brad Lee

    2009-09-01

    Most engineering materials are inherently inhomogeneous in their processing, internal structure, properties, and performance. Their properties are therefore statistical rather than deterministic. These inhomogeneities manifest across multiple length and time scales, leading to variabilities, i.e. statistical distributions, that are necessary to accurately describe each stage in the process-structure-properties hierarchy, and are ultimately the primary source of uncertainty in performance of the material and component. When localized events are responsible for component failure, or when component dimensions are on the order of microstructural features, this uncertainty is particularly important. For ultra-high reliability applications, the uncertainty is compounded by a lack of data describing the extremely rare events. Hands-on testing alone cannot supply sufficient data for this purpose. To date, there is no robust or coherent method to quantify this uncertainty so that it can be used in a predictive manner at the component length scale. The research presented in this report begins to address this lack of capability through a systematic study of the effects of microstructure on the strain concentration at a hole. To achieve the strain concentration, small circular holes (approximately 100 µm in diameter) were machined into brass tensile specimens using a femto-second laser. The brass was annealed at 450 °C, 600 °C, and 800 °C to produce three hole-to-grain size ratios of approximately 7, 1, and 1/7. Electron backscatter diffraction experiments were used to guide the construction of digital microstructures for finite element simulations of uniaxial tension. Digital image correlation experiments were used to qualitatively validate the numerical simulations. The simulations were performed iteratively to generate statistics describing the distribution of plastic strain at the hole in varying microstructural environments.
In both the experiments and simulations, the

  18. Age models and their uncertainties

    NASA Astrophysics Data System (ADS)

    Marwan, N.; Rehfeld, K.; Goswami, B.; Breitenbach, S. F. M.; Kurths, J.

    2012-04-01

    The usefulness of a proxy record is largely dictated by the accuracy and precision of its age model, i.e., its depth-age relationship. Only if age model uncertainties are minimized can correlations or lead-lag relations be reliably studied. Moreover, due to different dating strategies (14C, U-series, OSL dating, or counting of varves), dating errors or diverging age models lead to difficulties in comparing different palaeo proxy records. Uncertainties in the age model are even more important if exact dating is necessary in order to calculate, e.g., data series of flux or rates (like dust flux records or pollen deposition rates). Several statistical approaches exist to handle the dating uncertainties themselves and to estimate the age-depth relationship. Nevertheless, linear interpolation is still the most commonly used method for age modeling. The uncertainties of a certain event at a given time due to dating errors are often neglected entirely. Here we demonstrate the importance of considering dating errors and the implications for the interpretation of variations in palaeo-climate proxy records from stalagmites (U-series dated). We present a simple approach for estimating age models and their confidence levels based on Monte Carlo methods and non-linear interpolation. This novel algorithm also allows for removing age reversals. Our approach delivers a time series of a proxy record with a value range for each dated depth and, if desired, on an equidistant time axis. The algorithm is implemented in interactive scripts for use with MATLAB®, Octave, and FreeMat.
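The Monte Carlo age-model idea can be sketched as follows. The depths, ages, and dating errors are invented, and monotone cubic (PCHIP) interpolation stands in for the paper's non-linear interpolation scheme; realizations with age reversals are simply discarded, one possible reading of the reversal-removal step:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

rng = np.random.default_rng(1)

# Hypothetical U-series dates (ka BP) with 1-sigma errors at known depths (mm).
depth = np.array([10.0, 120.0, 340.0, 560.0, 800.0])
age = np.array([1.2, 3.4, 6.1, 8.0, 11.5])
sigma = np.array([0.10, 0.15, 0.20, 0.20, 0.30])

grid = np.linspace(depth[0], depth[-1], 200)   # equidistant depth axis
realizations = []
while len(realizations) < 2000:
    draw = rng.normal(age, sigma)              # perturb dates by their errors
    if np.all(np.diff(draw) > 0):              # discard age reversals
        # monotone (shape-preserving) interpolation between dated depths
        realizations.append(PchipInterpolator(depth, draw)(grid))

ensemble = np.array(realizations)
median = np.median(ensemble, axis=0)
lo, hi = np.percentile(ensemble, [2.5, 97.5], axis=0)   # 95% envelope
```

Each grid depth then carries an age value range (here `lo` to `hi`) rather than a single interpolated age.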

  19. Blade tip timing (BTT) uncertainties

    NASA Astrophysics Data System (ADS)

    Russhard, Pete

    2016-06-01

    Blade Tip Timing (BTT) is an alternative technique for characterising blade vibration in which non-contact timing probes (e.g. capacitance or optical probes), typically mounted on the engine casing (figure 1), are used to measure the time at which a blade passes each probe. This time is compared with the time at which the blade would have passed the probe if it had been undergoing no vibration. For a number of years the aerospace industry has been sponsoring research into Blade Tip Timing technologies that have been developed as tools to obtain rotor blade tip deflections. These have been successful in demonstrating the potential of the technology, but have rarely produced quantitative data with a demonstrated, traceable value of measurement uncertainty. BTT technologies have been developed under a cloak of secrecy by the gas turbine OEMs due to the competitive advantage offered if the technique could be shown to work. BTT measurements are sensitive to many variables, and there is a need to quantify the measurement uncertainty of the complete technology and to define a set of guidelines as to how BTT should be applied to different vehicles. The data shown in figure 2 came from a US government sponsored program that brought together four different tip timing systems on a gas turbine engine test. Comparisons showed that they were just capable of obtaining measurements within a +/-25% uncertainty band when compared to strain gauges, even when using the same input data sets.

  20. Quantifying Uncertainty in Epidemiological Models

    SciTech Connect

    Ramanathan, Arvind; Jha, Sumit Kumar

    2012-01-01

    Modern epidemiology has made use of a number of mathematical models, including ordinary differential equation (ODE) based models and agent based models (ABMs) to describe the dynamics of how a disease may spread within a population and enable the rational design of strategies for intervention that effectively contain the spread of the disease. Although such predictions are of fundamental importance in preventing the next global pandemic, there is a significant gap in the trust that can be placed in outcomes/predictions based solely on such models. Hence, there is a need to develop approaches such that mathematical models can be calibrated against historical data. In addition, there is a need to develop rigorous uncertainty quantification approaches that can provide insights into when a model will fail and characterize the confidence in the (possibly multiple) model outcomes/predictions, when such retrospective analysis cannot be performed. In this paper, we outline an approach to develop uncertainty quantification approaches for epidemiological models using formal methods and model checking. By specifying the outcomes expected from a model in a suitable spatio-temporal logic, we use probabilistic model checking methods to quantify the probability with which the epidemiological model satisfies the specification. We argue that statistical model checking methods can solve the uncertainty quantification problem for complex epidemiological models.
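At its core, statistical model checking estimates, by repeated stochastic simulation, the probability that a model satisfies a stated property. A toy sketch with a stochastic SIR model and an invented property (peak infections exceeding 10% of the population); the model, property, and parameters are all illustrative stand-ins, not the paper's:

```python
import random

def sir_outbreak(beta, gamma, n=1000, i0=5, rng=None):
    """Stochastic SIR (embedded jump chain); returns the peak infected count."""
    s, i, peak = n - i0, i0, i0
    while i > 0:
        rate_inf = beta * s * i / n          # infection event rate
        rate_rec = gamma * i                 # recovery event rate
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            s, i = s - 1, i + 1
        else:
            i -= 1
        peak = max(peak, i)
        if s == 0:
            break
    return peak

# Statistical model checking: estimate P(peak infected > 10% of population)
# from repeated stochastic simulations.
rng = random.Random(7)
runs = 500
hits = sum(sir_outbreak(0.3, 0.1, rng=rng) > 100 for _ in range(runs))
p_hat = hits / runs
print(f"estimated P(property) = {p_hat:.2f}")
```

A full model checker would express the property in a temporal logic and add a stopping rule (e.g. a confidence bound on `p_hat`), but the estimate itself is this simulation count.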

  1. Uncertainty propagation in nuclear forensics.

    PubMed

    Pommé, S; Jerome, S M; Venchiarutti, C

    2014-07-01

    Uncertainty propagation formulae are presented for age dating in support of nuclear forensics. The age of radioactive material in this context refers to the time elapsed since a particular radionuclide was chemically separated from its decay product(s). The decay of the parent radionuclide and ingrowth of the daughter nuclide are governed by statistical decay laws. Mathematical equations allow calculation of the age of specific nuclear material through the atom ratio between parent and daughter nuclides, or through the activity ratio provided that the daughter nuclide is also unstable. The derivation of the uncertainty formulae of the age may present some difficulty to the user community and so the exact solutions, some approximations, a graphical representation and their interpretation are presented in this work. Typical nuclides of interest are actinides in the context of non-proliferation commitments. The uncertainty analysis is applied to a set of important parent-daughter pairs and the need for more precise half-life data is examined.
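For the simplest case of a stable daughter nuclide, the age follows from the atom ratio as t = ln(1 + N_d/N_p)/λ, and first-order propagation gives its uncertainty from the uncertainties in the ratio and the decay constant. A sketch with illustrative numbers only (the half-life and ratio are not tied to a specific pair from the paper):

```python
import math

# Stable-daughter simplification: N_d/N_p = exp(lambda*t) - 1, so
# t = ln(1 + R) / lambda.  First-order (GUM-style) propagation of the
# uncertainties in the measured atom ratio R and the decay constant.
def age_and_uncertainty(R, u_R, lam, u_lam):
    t = math.log1p(R) / lam
    dt_dR = 1.0 / (lam * (1.0 + R))      # sensitivity to the atom ratio
    dt_dlam = -t / lam                   # sensitivity to the decay constant
    u_t = math.hypot(dt_dR * u_R, dt_dlam * u_lam)
    return t, u_t

# Illustrative numbers: a 24110-year half-life and a small daughter ingrowth.
lam = math.log(2) / 24110.0              # decay constant per year
t, u_t = age_and_uncertainty(R=1.5e-3, u_R=1.0e-5, lam=lam, u_lam=lam * 1e-3)
print(f"age = {t:.1f} +/- {u_t:.1f} years")
```

The sensitivity terms make the point of the abstract directly: for young material the ratio uncertainty dominates, while the half-life term grows in relative importance with age, hence the call for more precise half-life data.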

  2. Characterizing Epistemic Uncertainty for Launch Vehicle Designs

    NASA Technical Reports Server (NTRS)

    Novack, Steven D.; Rogers, Jim; Al Hassan, Mohammad; Hark, Frank

    2016-01-01

    NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk, and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design would be greater than that of a design of well understood heritage equipment. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counterintuitive results and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods, such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty, are rendered obsolete, since sensitivity to changes in uncertainty is not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis for the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper describes how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.

  3. Characterizing Epistemic Uncertainty for Launch Vehicle Designs

    NASA Technical Reports Server (NTRS)

    Novack, Steven D.; Rogers, Jim; Hark, Frank; Al Hassan, Mohammad

    2016-01-01

    NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design would be greater than that of a design of well understood heritage equipment. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counterintuitive results and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty are rendered obsolete since sensitivity to changes in uncertainty is not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis for the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper shows how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.

  4. Measuring uncertainty by extracting fuzzy rules using rough sets

    NASA Technical Reports Server (NTRS)

    Worm, Jeffrey A.

    1991-01-01

    Despite the advancements in the computer industry in the past 30 years, there is still one major deficiency: computers are not designed to handle terms where uncertainty is present. To deal with uncertainty, techniques other than classical logic must be developed. The methods of statistical analysis, the Dempster-Shafer theory, rough set theory, and fuzzy set theory are examined as approaches to this problem. The fundamentals of these theories are combined to possibly provide the optimal solution. By incorporating principles from these theories, a decision making process may be simulated by extracting two sets of fuzzy rules: certain rules and possible rules. From these rules, a corresponding measure of how strongly these rules are believed is constructed. From this, the degree to which a fuzzy diagnosis is definable in terms of a set of fuzzy attributes is studied.

  5. Parameter uncertainty in biochemical models described by ordinary differential equations.

    PubMed

    Vanlier, J; Tiemann, C A; Hilbers, P A J; van Riel, N A W

    2013-12-01

    Improved mechanistic understanding of biochemical networks is one of the driving ambitions of Systems Biology. Computational modeling allows the integration of various sources of experimental data in order to put this conceptual understanding to the test in a quantitative manner. The aim of computational modeling is to obtain both predictive and explanatory models for complex phenomena, thereby providing useful approximations of reality with varying levels of detail. As the complexity required to describe different systems increases, so does the need for determining how well such predictions can be made. Despite efforts to make tools for uncertainty analysis available to the field, these methods have not yet found widespread use in Systems Biology. Additionally, the suitability of the different methods strongly depends on the problem and system under investigation. This review provides an introduction to some of the techniques available and gives an overview of the state-of-the-art methods for parameter uncertainty analysis.

  6. Concepts and Practice of Verification, Validation, and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Oberkampf, W. L.

    2014-12-01

    Verification and validation (V&V) are the primary means to assess the numerical and physics modeling accuracy, respectively, in computational simulation. Code verification assesses the reliability of the software coding and the numerical algorithms used in obtaining a solution, while solution verification addresses numerical error estimation of the computational solution of a mathematical model for a specified set of initial and boundary conditions. Validation assesses the accuracy of the mathematical model as compared to experimentally measured response quantities of the system being modeled. As these experimental data are typically available only for simplified subsystems or components of the system, model validation commonly provides limited ability to assess model accuracy directly. Uncertainty quantification (UQ), specifically in regard to predictive capability of a mathematical model, attempts to characterize and estimate the total uncertainty for conditions where no experimental data are available. Specific sources of uncertainty that can impact the total predictive uncertainty are: the assumptions and approximations in the formulation of the mathematical model, the error incurred in the numerical solution of the discretized model, the information available for stochastic input data for the system, and the extrapolation of the mathematical model to conditions where no experimental data are available. This presentation will briefly discuss the principles and practices of VVUQ from the perspective of computational modeling and simulation, as well as the difficult issue of estimating predictive capability. Contrasts will be drawn between weak and strong code verification testing, and model validation as opposed to model calibration. Closing remarks will address what needs to be done to improve the value of information generated by computational simulation for improved decision-making.

  7. Two new kinds of uncertainty relations

    NASA Technical Reports Server (NTRS)

    Uffink, Jos

    1994-01-01

    We review a statistical-geometrical and a generalized entropic approach to the uncertainty principle. Both approaches provide a strengthening and generalization of the standard Heisenberg uncertainty relations, but in different directions.

  8. Quantifying uncertainties in the microvascular transport of nanoparticles

    PubMed Central

    Lee, Tae-Rin; Greene, M. Steven; Jiang, Zhen; Kopacz, Adrian M.; Decuzzi, Paolo; Chen, Wei; Liu, Wing Kam

    2014-01-01

    The character of nanoparticle dispersion in the microvasculature is a driving factor in nanoparticle-based therapeutics and bio-sensing. It is difficult, with current experimental and engineering capability, to understand dispersion of nanoparticles because the human vascular system is more complex than that of mouse models and because nanoparticle dispersion is so sensitive to in vivo environments. Furthermore, uncertainty cannot be ignored due to the high variation of location-specific vessel characteristics as well as variation across patients. In this paper, a computational method that considers uncertainty is developed to predict nanoparticle dispersion and transport characteristics in the microvasculature with a three step process. First, a computer simulation method is developed to predict blood flow and the dispersion of nanoparticles in the microvessels. Second, experiments for nanoparticle dispersion coefficients are combined with results from the computer model to suggest the true values of its unknown and unmeasurable parameters (red blood cell deformability and red blood cell interaction) using the Bayesian statistical framework. Third, quantitative predictions for nanoparticle transport in the tumor microvasculature are made that consider uncertainty in the vessel diameter, flow velocity, and hematocrit. Our results show that nanoparticle transport is highly sensitive to the microvasculature. PMID:23872851

  9. Quantifying Snow Volume Uncertainty from Repeat Terrestrial Laser Scanning Observations

    NASA Astrophysics Data System (ADS)

    Gadomski, P. J.; Hartzell, P. J.; Finnegan, D. C.; Glennie, C. L.; Deems, J. S.

    2014-12-01

    Terrestrial laser scanning (TLS) systems are capable of providing rapid, high density, 3D topographic measurements of snow surfaces from increasing standoff distances. By differencing snow-covered and snow-free surface measurements within a common scene, snow depths and volumes can be estimated. These data can support operational water management decision-making when combined with measured or modeled snow densities to estimate basin water content, evaluate in-situ data, or drive operational hydrologic models. In addition, change maps from differential TLS scans can also be used to support avalanche control operations to quantify loading patterns for both pre-control planning and post-control assessment. However, while methods for computing volume from TLS point cloud data are well documented, a rigorous quantification of the volumetric uncertainty has yet to be presented. Using repeat TLS data collected at the Arapahoe Basin Ski Area in Summit County, Colorado, we demonstrate the propagation of TLS point measurement and cloud registration uncertainties into 3D covariance matrices at the point level. The point covariances are then propagated through a volume computation to arrive at a single volume uncertainty value. Results from two volume computation methods are compared, and the influence of data voids produced by occlusions is examined.
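The propagation of per-point covariances into a single volume uncertainty can be sketched for the simplified case of gridded depths, where volume is linear in the depths and standard linear propagation applies. The grid, depth values, and correlation structure below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical gridded snow depths (m) on 1 m cells, each carrying a depth
# standard deviation from range/registration error propagation.
cell_area = 1.0                           # m^2 per grid cell
depth = rng.uniform(0.5, 2.0, size=400)
sigma = np.full(400, 0.03)                # 3 cm per-cell depth uncertainty

# Volume is linear in the depths, V = a * sum(d_i), so its variance follows
# from the depth covariance matrix C: u_V^2 = g C g^T with g = a * ones.
volume = cell_area * depth.sum()
g = np.full(400, cell_area)

# Uncorrelated case: variances simply add.
u_uncorr = cell_area * np.sqrt(np.sum(sigma**2))

# Partially correlated case: a registration error shifts every cell together,
# modeled here as 50% common variance plus 50% independent variance.
C = 0.5 * np.outer(sigma, sigma) + 0.5 * np.diag(sigma**2)
u_corr = np.sqrt(g @ C @ g)

print(f"V = {volume:.1f} m^3, u = {u_uncorr:.2f} (uncorrelated) "
      f"vs {u_corr:.2f} m^3 (50% correlated)")
```

The contrast is the point: correlated registration error barely averages out over cells, so ignoring the off-diagonal covariance terms can understate the volume uncertainty by an order of magnitude.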

  10. Probabilistic evaluation of uncertainties and risks in aerospace components

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Shiao, M. C.; Nagpal, V. K.; Chamis, C. C.

    1992-01-01

    This paper summarizes a methodology developed at NASA Lewis Research Center which computationally simulates the structural, material, and load uncertainties associated with Space Shuttle Main Engine (SSME) components. The methodology was applied to evaluate the scatter in static, buckling, dynamic, fatigue, and damage behavior of the SSME turbo pump blade. Also calculated are the probability densities of typical critical blade responses, such as effective stress, natural frequency, damage initiation, most probable damage path, etc. Risk assessments were performed for different failure modes, and the effect of material degradation on the fatigue and damage behaviors of a blade were calculated using a multi-factor interaction equation. Failure probabilities for different fatigue cycles were computed, and the uncertainties associated with damage initiation and damage propagation due to different load cycles were quantified. Evaluations on the effects of mistuned blades on a rotor were made; uncertainties in the excitation frequency were found to significantly amplify the blade responses of a mistuned rotor. The effects of the number of blades on a rotor were studied. The autocorrelation function of displacements and the probability density function of the first passage time for deterministic and random barriers for structures subjected to random processes also were computed. A brief discussion was included on the future direction of probabilistic structural analysis.

  11. Uncertainty quantification in capacitive RF MEMS switches

    NASA Astrophysics Data System (ADS)

    Pax, Benjamin J.

    propagation of uncertainty are performed using this surrogate model. The first step in the analysis is Bayesian calibration of the creep related parameters. A computational model of the frog-leg varactor is created, and the computed creep deflection of the device over 800 hours is used to generate a surrogate model using a polynomial chaos expansion in Hermite polynomials. Parameters related to the creep phenomenon are calibrated using Bayesian calibration with experimental deflection data from the frog-leg device. The calibrated input distributions are subsequently propagated through a surrogate gPC model for the PRISM MEMS switch to produce probability density functions of the maximum membrane deflection of the membrane over several thousand hours. The assumptions related to the Bayesian calibration and forward propagation are analyzed to determine the sensitivity to these assumptions of the calibrated input distributions and propagated output distributions of the PRISM device. The work is an early step in understanding the role of geometric variability, model uncertainty, numerical errors and experimental uncertainties in the long-term performance of RF-MEMS.
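A one-dimensional sketch of the surrogate step used above: fit a Hermite (probabilists') polynomial chaos expansion to samples of a model by regression, then propagate input samples through the cheap surrogate instead of the model. The model function, degree, and sample counts are hypothetical stand-ins, not the PRISM or frog-leg models:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(5)

# Hypothetical "expensive" model of one input already transformed to a
# standard normal variable (e.g. a creep-related parameter).
def model(xi):
    return 1.0 + 0.3 * xi + 0.05 * xi**2

# Fit a degree-3 Hermite chaos surrogate by least-squares regression on a
# small training set of model evaluations.
deg = 3
xi_train = rng.normal(size=50)
A = He.hermevander(xi_train, deg)                 # basis matrix He_k(xi)
coef, *_ = np.linalg.lstsq(A, model(xi_train), rcond=None)

# Non-intrusive forward propagation: sample the input distribution and
# evaluate the cheap surrogate in place of the model.
xi = rng.normal(size=100_000)
y = He.hermeval(xi, coef)
print(f"surrogate mean = {y.mean():.3f}, std = {y.std():.3f}")
```

For Hermite chaos with standard normal inputs, the constant coefficient is the output mean, which is one reason this basis pairs naturally with Gaussian-distributed calibrated parameters.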

  12. Risk Analysis and Uncertainty: Implications for Counselling

    ERIC Educational Resources Information Center

    Hassenzahl, David

    2004-01-01

    Over the past two decades, the risk analysis community has made substantial advances in understanding and describing uncertainty. Uncertainty is ubiquitous, complex, both quantitative and qualitative in nature, and often irreducible. Uncertainty thus creates a challenge when using risk analysis to evaluate the rationality of group and individual…

  13. Regarding Uncertainty in Teachers and Teaching

    ERIC Educational Resources Information Center

    Helsing, Deborah

    2007-01-01

    The literature on teacher uncertainty suggests that it is a significant and perhaps inherent feature of teaching. Yet there is disagreement about the effects of these uncertainties on teachers as well as about the ways that teachers should regard them. Recognition of uncertainties can be viewed alternatively as a liability or an asset to effective…

  14. Numerical approach for quantification of epistemic uncertainty

    NASA Astrophysics Data System (ADS)

    Jakeman, John; Eldred, Michael; Xiu, Dongbin

    2010-06-01

    In the field of uncertainty quantification, uncertainty in the governing equations may assume two forms: aleatory uncertainty and epistemic uncertainty. Aleatory uncertainty can be characterised by known probability distributions whilst epistemic uncertainty arises from a lack of knowledge of probabilistic information. While extensive research efforts have been devoted to the numerical treatment of aleatory uncertainty, little attention has been given to the quantification of epistemic uncertainty. In this paper, we propose a numerical framework for quantification of epistemic uncertainty. The proposed methodology does not require any probabilistic information on uncertain input parameters. The method only necessitates an estimate of the range of the uncertain variables that encapsulates the true range of the input variables with overwhelming probability. To quantify the epistemic uncertainty, we solve an encapsulation problem, which is a solution to the original governing equations defined on the estimated range of the input variables. We discuss solution strategies for solving the encapsulation problem and the sufficient conditions under which the numerical solution can serve as a good estimator for capturing the effects of the epistemic uncertainty. In the case where probability distributions of the epistemic variables become known a posteriori, we can use the information to post-process the solution and evaluate solution statistics. Convergence results are also established for such cases, along with strategies for dealing with mixed aleatory and epistemic uncertainty. Several numerical examples are presented to demonstrate the procedure and properties of the proposed methodology.
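A minimal sketch of the encapsulation idea: sample the estimated input box uniformly and report the induced output range, with no probability structure assumed. The two-parameter model and ranges below are hypothetical, not from the paper:

```python
import numpy as np

def encapsulation_bounds(model, ranges, n=10_000, seed=0):
    """Sample the encapsulating box of epistemic inputs uniformly and
    return the observed output range; no input distributions assumed."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(ranges, dtype=float).T
    x = lo + (hi - lo) * rng.random((n, len(ranges)))
    y = np.array([model(xi) for xi in x])
    return y.min(), y.max(), x, y

# Hypothetical model with two epistemic parameters on an estimated range.
model = lambda p: np.exp(-p[0]) * np.sin(p[1])
ymin, ymax, x, y = encapsulation_bounds(model, [(0.5, 1.5), (0.0, np.pi)])

# If input distributions become known a posteriori, the same samples can be
# post-processed (e.g. importance-weighted) to evaluate solution statistics.
```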

  15. Errors and Uncertainty in Physics Measurement.

    ERIC Educational Resources Information Center

    Blasiak, Wladyslaw

    1983-01-01

    Classifies errors as either systematic or blunder and uncertainties as either systematic or random. Discusses use of error/uncertainty analysis in direct/indirect measurement, describing the process of planning experiments to ensure lowest possible uncertainty. Also considers appropriate level of error analysis for high school physics students'…

  16. Quantum mechanics and the generalized uncertainty principle

    SciTech Connect

    Bang, Jang Young; Berger, Micheal S.

    2006-12-15

    The generalized uncertainty principle has been described as a general consequence of incorporating a minimal length from a theory of quantum gravity. We consider a simple quantum mechanical model where the operator corresponding to position has discrete eigenvalues and show how the generalized uncertainty principle results for minimum uncertainty wave packets.
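A commonly quoted phenomenological form of the relation, with beta the deformation parameter set by the minimal length (our notation, for orientation only; not quoted from the paper):

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^{2}\right),
\qquad \Delta x_{\min} = \hbar\sqrt{\beta}
```

Minimum-uncertainty wave packets saturate the inequality; no state can be localized more sharply than the minimal length.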

  17. Extension of sensitivity and uncertainty analysis for long term dose assessment of high level nuclear waste disposal sites to uncertainties in the human behaviour.

    PubMed

    Albrecht, Achim; Miquel, Stéphan

    2010-01-01

    Biosphere dose conversion factors are computed for the French high-level geological waste disposal concept to illustrate the combined probabilistic and deterministic approach. Both (135)Cs and (79)Se are used as examples. Probabilistic analyses of the system considering all parameters, as well as physical and societal parameters independently, allow quantification of their mutual impact on overall uncertainty. As physical parameter uncertainties decrease, for example with the availability of further experimental and field data, the societal uncertainties, which are less easily constrained, particularly for the long term, become increasingly significant. One also has to distinguish uncertainties impacting the low dose portion of a distribution from those impacting the high dose range, the latter logically having a greater impact in an assessment situation. The use of cumulative probability curves allows us to quantify probability variations as a function of the dose estimate, with the slope of the curve indicative of the uncertainties of different radionuclides. In the case of (135)Cs, with better constrained physical parameters, the uncertainty in human behaviour is more significant, even in the high dose range, where it increases the probability of higher doses. For both radionuclides, uncertainties impact more strongly in the intermediate than in the high dose range. In an assessment context, the focus will be on probabilities of higher dose values. The probabilistic approach can furthermore be used to construct critical groups based on a predefined probability level and to ensure that critical groups cover the expected range of uncertainty.
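The cumulative-probability-curve reading of such results can be sketched as follows; the lognormal dose sample and the reference level are hypothetical, not values from the assessment:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical dose samples (Sv/y) from a probabilistic biosphere run;
# dose distributions are often roughly lognormal.
dose = rng.lognormal(mean=-18.0, sigma=1.2, size=10_000)

# Cumulative probability curve: P(dose <= d) versus d; its local slope
# indicates how strongly the uncertainties act in each dose range.
d_sorted = np.sort(dose)
cum_prob = np.arange(1, dose.size + 1) / dose.size

# Assessment focus is on the high-dose range: probability of exceeding
# a (hypothetical) reference dose level.
ref = np.exp(-16.5)
p_exceed = np.mean(dose > ref)
```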

  18. Systematic and random uncertainties of HOAPS-3.2 evaporation

    NASA Astrophysics Data System (ADS)

    Kinzel, Julian; Fennig, Karsten; Schröder, Marc; Andersson, Axel; Bumke, Karl; Dietzsch, Felix

    2015-04-01

    The German Research Foundation (DFG) funds the research programme 'FOR1740 - Atlantic freshwater cycle', which aims at analysing and better understanding the freshwater budget of the Atlantic Ocean and the role of freshwater fluxes (evaporation minus precipitation) in the context of oceanic surface salinity variability. It is well known that these freshwater fluxes play an essential role in the global hydrological cycle and thus act as a key boundary condition for coupled ocean-atmosphere general circulation models. However, it remains unclear how uncertain evaporation (E) and precipitation (P) are. Once quantified, freshwater flux fields and their underlying total uncertainty (systematic plus random) may be assimilated into ocean models to compute ocean transports and run-off estimates, which in turn serve as a stringent test on the quality of the input data. The Hamburg Ocean Atmosphere Parameters and Fluxes from Satellite Data (HOAPS) (Andersson et al. (2010), Fennig et al. (2012)) is an entirely satellite-based climatology, based on microwave radiometers, overcoming the lack of oceanic in-situ records. Its most current version, HOAPS-3.2, comprises 21 years (1987-2008) of pixel-level resolution data of numerous geophysical parameters over the global ice-free oceans. Amongst others, these include wind speed (u), near-surface specific humidity (q), and sea surface temperature (SST). Their uncertainties essentially contribute to the uncertainty in latent heat flux (LHF) and consequently to that of evaporation (E). Here, we will present HOAPS-3.2 pixel-level total uncertainty estimates of evaporation, based on a full error propagation of uncertainties in u, q, and SST. Both systematic and random uncertainty components are derived on the basis of collocated match-ups of satellite pixels, selected buoys, and ship records. The in-situ data are restricted to 1995-2008 and are provided by the Seewetteramt Hamburg as well as ICOADS Version 2.5 (Woodruff et al.)…

  19. Calculating Measurement Uncertainties for Mass Spectrometry Data

    NASA Astrophysics Data System (ADS)

    Essex, R. M.; Goldberg, S. A.

    2006-12-01

    A complete and transparent characterization of measurement uncertainty is fundamentally important to the interpretation of analytical results. We have observed that the calculation and reporting of uncertainty estimates for isotopic measurements from a variety of analytical facilities are inconsistent, making it difficult to compare and evaluate data. Therefore, we recommend an approach to uncertainty estimation that has been adopted by both US national metrology facilities and is becoming widely accepted within the analytical community. This approach is outlined in the ISO "Guide to the Expression of Uncertainty in Measurement" (GUM). The GUM approach to uncertainty estimation includes four major steps: 1) Specify the measurand; 2) Identify uncertainty sources; 3) Quantify components by determining the standard uncertainty (u) for each component; and 4) Calculate combined standard uncertainty (u_c) by using established propagation laws to combine the various components. To obtain a desired confidence level, the combined standard uncertainty is multiplied by a coverage factor (k) to yield an expanded uncertainty (U). To be consistent with the GUM principles, it is also necessary to create an uncertainty budget, which is a listing of all the components comprising the uncertainty and their relative contribution to the combined standard uncertainty. In mass spectrometry, Step 1 is normally the determination of an isotopic ratio for a particular element. Step 2 requires the identification of the many potential sources of measurement variability and bias including: gain, baseline, cup efficiency, Schottky noise, counting statistics, CRM uncertainties, yield calibrations, linearity calibrations, run conditions, and filament geometry. Then an equation expressing the relationship of all of the components to the measurement value must be written. To complete Step 3, these potential sources of uncertainty must be characterized (Type A or Type B) and quantified. This information…
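For uncorrelated inputs, Steps 3-4 above reduce to the GUM law of propagation of uncertainty. A sketch with a hypothetical two-input isotope ratio (all numbers invented for illustration):

```python
import math

def combined_standard_uncertainty(sens_coeffs, std_uncertainties):
    """GUM law of propagation for uncorrelated inputs:
    u_c = sqrt(sum_i (c_i * u_i)**2)."""
    return math.sqrt(sum((c * u) ** 2 for c, u in zip(sens_coeffs, std_uncertainties)))

# Hypothetical measurand: isotope ratio R = Ia / Ib from two ion intensities.
Ia, Ib = 1.000, 0.100          # measured intensities (arbitrary units)
u_Ia, u_Ib = 0.002, 0.0005     # standard uncertainties (Type A/B combined)
R = Ia / Ib
c = [1 / Ib, -Ia / Ib**2]      # sensitivity coefficients dR/dIa, dR/dIb
u_c = combined_standard_uncertainty(c, [u_Ia, u_Ib])
U = 2 * u_c                    # expanded uncertainty, coverage factor k = 2
```

An uncertainty budget is then just the list of the individual (c_i * u_i)^2 terms expressed relative to u_c^2.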

  20. Confronting uncertainty in flood damage predictions

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno

    2015-04-01

    Reliable flood damage models are a prerequisite for practically useful model results. Oftentimes, traditional uni-variate damage models, such as depth-damage curves, fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks. For model evaluation we use empirical damage data available from computer-aided telephone interviews compiled after the floods of 2002, 2005 and 2006 in the Elbe and Danube catchments in Germany. We carry out a split sample test by sub-setting the damage records. One sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of the individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), as well as reliability, which is represented by the proportion of observations that fall within the 5%-95% predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty, which is crucial to assess the reliability of model predictions and improves the usefulness of model results.
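The ensemble-spread predictive interval used in such evaluations can be illustrated with bootstrap-aggregated (bagged) regressors. For brevity, this sketch bags linear fits on synthetic depth-damage data instead of the decision trees and survey records used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for damage records: water depth (m) -> relative damage.
depth = rng.uniform(0, 5, 300)
damage = np.clip(0.15 * depth + 0.05 * rng.standard_normal(300), 0, 1)

# Bootstrap-aggregated (bagged) linear fits; the spread across ensemble
# members yields a simple 5%-95% predictive interval.
members = []
for _ in range(200):
    idx = rng.integers(0, len(depth), len(depth))      # bootstrap resample
    members.append(np.polyfit(depth[idx], damage[idx], 1))

x_new = 2.0
preds = np.array([np.polyval(c, x_new) for c in members])
lo, hi = np.quantile(preds, [0.05, 0.95])
mean = preds.mean()
```

Reliability can then be scored, as in the abstract, by the fraction of held-out observations falling inside their predictive intervals.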

  1. Measurement uncertainty evaluation of conicity error inspected on CMM

    NASA Astrophysics Data System (ADS)

    Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang

    2016-01-01

    The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence assembly accuracy and working performance. According to the new-generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated using quasi-random sequences and two kinds of affinities are calculated. Then, each antibody clone is generated and self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the Expression of Uncertainty in Measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. The cone part was machined on lathe CK6140 and measured on a Miracle NC 454 Coordinate Measuring Machine (CMM). The experimental results confirm that the proposed method not only searches for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also evaluates measurement uncertainty and gives control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% less than those computed by the NC454 CMM software and the evaluation accuracy improves significantly.
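The adaptive Monte Carlo idea (in the style of GUM Supplement 1) can be sketched as below: keep adding blocks of trials until the uncertainty estimate stabilizes within a numerical tolerance. The two-coordinate "measurand" here is a hypothetical stand-in for the conicity evaluation:

```python
import numpy as np

def adaptive_mcm(measurand, sample_inputs, tol, block=10_000, max_blocks=50, seed=0):
    """Adaptive Monte Carlo: add blocks of trials until the estimate of
    the standard uncertainty changes by less than tol between blocks."""
    rng = np.random.default_rng(seed)
    ys = np.empty(0)
    prev_u = None
    for _ in range(max_blocks):
        ys = np.concatenate([ys, measurand(sample_inputs(rng, block))])
        u = ys.std(ddof=1)
        if prev_u is not None and abs(u - prev_u) < tol:
            break
        prev_u = u
    return ys.mean(), u

# Hypothetical angle measurand combining two uncertain coordinates (mm).
sample = lambda rng, n: (rng.normal(10.0, 0.01, n), rng.normal(5.0, 0.02, n))
f = lambda xy: np.arctan2(xy[1], xy[0])
mean, u = adaptive_mcm(f, sample, tol=1e-5)
```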

  2. Medical Need, Equality, and Uncertainty.

    PubMed

    Horne, L Chad

    2016-10-01

    Many hold that distributing healthcare according to medical need is a requirement of equality. Most egalitarians believe, however, that people ought to be equal on the whole, by some overall measure of well-being or life-prospects; it would be a massive coincidence if distributing healthcare according to medical need turned out to be an effective way of promoting equality overall. I argue that distributing healthcare according to medical need is important for reducing individuals' uncertainty surrounding their future medical needs. In other words, distributing healthcare according to medical need is a natural feature of healthcare insurance; it is about indemnity, not equality. PMID:27196999

  4. Propagating uncertainty from hydrology into human health risk assessment

    NASA Astrophysics Data System (ADS)

    Siirila, E. R.; Maxwell, R. M.

    2013-12-01

    Hydro-geologic modeling and uncertainty assessment of flow and transport parameters can be incorporated into human health risk (both cancer and non-cancer) assessment to better understand the associated uncertainties. This interdisciplinary approach is needed now more than ever, as societal problems concerning water quality are increasingly interdisciplinary as well. For example, uncertainty can originate from environmental conditions such as a lack of information or measurement error, or can manifest as variability, such as differences in physiological and exposure parameters between individuals. To complicate the matter, traditional risk assessment methodologies are independent of time, virtually neglecting any temporal dependence. Here we present not only how uncertainty and variability can be incorporated into a risk assessment, but also how time dependent risk assessment (TDRA) allows for the calculation of risk as a function of time. The development of TDRA and the inclusion of quantitative risk analysis in this research provide a means to inform decision makers faced with water quality issues and challenges. The stochastic nature of this work also provides a means to address the question of uncertainty in management decisions, a component that is frequently difficult to quantify. To illustrate this new formulation and to investigate hydraulic mechanisms for sensitivity, an example of varying environmental concentration signals resulting from rate dependencies in geochemical reactions is used. Cancer risk is computed and compared using environmental concentration ensembles modeled with sorption as 1) a linear equilibrium assumption and 2) first order kinetics. Results show that the upscaling of these small-scale processes controls the distribution, magnitude, and associated uncertainty of cancer risk.

  5. Uncertainty and Sensitivity Analysis in Performance Assessment for the Waste Isolation Pilot Plant

    SciTech Connect

    Helton, J.C.

    1998-12-17

    The Waste Isolation Pilot Plant (WIPP) is under development by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. This development has been supported by a sequence of performance assessments (PAs) carried out by Sandia National Laboratories (SNL) to assess what is known about the WIPP and to provide guidance for future DOE research and development activities. Uncertainty and sensitivity analysis procedures based on Latin hypercube sampling and regression techniques play an important role in these PAs by providing an assessment of the uncertainty in important analysis outcomes and identifying the sources of this uncertainty. Performance assessments for the WIPP are conceptually and computationally interesting due to regulatory requirements to assess and display the effects of both stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, where stochastic uncertainty arises from the possible disruptions that could occur over the 10,000 yr regulatory period associated with the WIPP and subjective uncertainty arises from an inability to unambiguously characterize the many models and associated parameters required in a PA for the WIPP. The interplay between uncertainty analysis, sensitivity analysis, stochastic uncertainty and subjective uncertainty is discussed and illustrated in the context of a recent PA carried out by SNL to support an application by the DOE to the U.S. Environmental Protection Agency for the certification of the WIPP for the disposal of TRU waste.
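The Latin hypercube sampling and regression-based sensitivity machinery mentioned above can be sketched as follows (toy three-parameter model, not a WIPP model):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Stratified sample in [0,1)^d: each of the n strata per dimension
    is hit exactly once."""
    strata = np.array([rng.permutation(n) for _ in range(d)]).T
    return (strata + rng.random((n, d))) / n

rng = np.random.default_rng(1)
X = latin_hypercube(200, 3, rng)          # uncertain inputs scaled to [0,1)
# Hypothetical PA outcome dominated by the first input, plus noise.
y = 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.standard_normal(200)

# Regression-based sensitivity: standardized regression coefficients rank
# each input's contribution to the output uncertainty.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
```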

  6. Performance of Trajectory Models with Wind Uncertainty

    NASA Technical Reports Server (NTRS)

    Lee, Alan G.; Weygandt, Stephen S.; Schwartz, Barry; Murphy, James R.

    2009-01-01

    Typical aircraft trajectory predictors use wind forecasts but do not account for the forecast uncertainty. A method for generating estimates of wind prediction uncertainty is described and its effect on aircraft trajectory prediction uncertainty is investigated. The procedure for estimating the wind prediction uncertainty relies on a time-lagged ensemble of weather model forecasts from the hourly updated Rapid Update Cycle (RUC) weather prediction system. Forecast uncertainty is estimated using measures of the spread amongst various RUC time-lagged ensemble forecasts. This proof-of-concept study illustrates the estimated uncertainty and the actual wind errors, and documents the validity of the assumed ensemble-forecast accuracy relationship. Aircraft trajectory predictions are made using RUC winds with provision for the estimated uncertainty. Results for a set of simulated flights indicate this simple approach effectively translates the wind uncertainty estimate into an aircraft trajectory uncertainty. A key strength of the method is the ability to relate uncertainty to specific weather phenomena (contained in the various ensemble members), allowing identification of regional variations in uncertainty.
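A minimal sketch of the time-lagged-ensemble uncertainty estimate (all wind values invented for illustration):

```python
import numpy as np

# Hypothetical wind forecasts (m/s) valid at the same time but issued at
# six successive hourly update cycles (a time-lagged ensemble, as with RUC).
lagged = np.array([
    [12.1, 11.8, 12.5, 11.6, 12.9, 12.2],   # u component, one value per lag
    [ 3.4,  3.1,  3.9,  2.8,  3.6,  3.3],   # v component
])

ens_mean = lagged.mean(axis=1)
spread = lagged.std(axis=1, ddof=1)   # ensemble spread = uncertainty estimate

# Carried into trajectory prediction: a crude along-track position
# uncertainty (km) accumulated over one hour at this wind uncertainty.
pos_sigma_km = spread * 3600.0 / 1000.0
```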

  7. Entropic uncertainty relation in de Sitter space

    NASA Astrophysics Data System (ADS)

    Jia, Lijuan; Tian, Zehua; Jing, Jiliang

    2015-02-01

    The uncertainty principle restricts our ability to simultaneously predict the measurement outcomes of two incompatible observables of a quantum particle. However, this uncertainty can be reduced and quantified by a new Entropic Uncertainty Relation (EUR). Using the open quantum system approach, we explore how the nature of de Sitter space affects the EUR. When the quantum memory A freely falls in de Sitter space, we demonstrate that the entropic uncertainty increases as a result of a thermal bath at the Gibbons-Hawking temperature. For the static case, we find that the temperature, arising both from the intrinsic thermal nature of de Sitter space and from the Unruh effect associated with the proper acceleration of A, also affects the entropic uncertainty: the higher the temperature, the greater the uncertainty and the more quickly it reaches its maximal value. Finally, the possible mechanism behind this phenomenon is explored.
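The memory-assisted entropic uncertainty relation underlying such analyses is commonly written (our statement of the Berta et al. form, not quoted from the abstract) as:

```latex
S(Q|B) + S(R|B) \;\ge\; \log_2 \frac{1}{c} + S(A|B),
\qquad c = \max_{i,j} \left|\langle \psi_i | \phi_j \rangle\right|^{2}
```

Here S(Q|B) and S(R|B) are conditional von Neumann entropies of the measurement outcomes given the memory B, and c measures the complementarity of the two observables; thermal noise, such as that at the Gibbons-Hawking or Unruh temperature, degrades S(A|B) and thereby raises the bound.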

  8. Uncertainty in liquid chromatographic analysis of pharmaceutical product: influence of various uncertainty sources.

    PubMed

    Leito, Signe; Mölder, Kadi; Künnapas, Allan; Herodes, Koit; Leito, Ivo

    2006-07-14

    An ISO GUM measurement uncertainty estimation procedure was developed for a liquid-chromatographic drug quality control method: assay of simvastatin in a drug formulation. In quantifying the uncertainty components, several practical approaches for including difficult-to-estimate uncertainty sources (such as uncertainty due to peak integration, uncertainty due to nonlinearity of the calibration curve, etc.) are presented. A detailed analysis of the contributions of the various uncertainty sources was carried out. The results were calculated based on different definitions of the measurand, and it was demonstrated that unequivocal definition of the measurand is essential in order to obtain a rigorous uncertainty estimate. Two different calibration methods - single-point (1P) and five-point (5P) - were used, and the obtained uncertainties and uncertainty budgets were compared. Results calculated using 1P and 5P calibrations agree very well. The uncertainty estimate for 1P is only slightly larger than with 5P calibration. PMID:16756985
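The 1P/5P comparison can be sketched with a hypothetical linear HPLC calibration (peak areas and concentrations invented for illustration):

```python
import numpy as np

# Hypothetical HPLC calibration: peak areas for five standards (mg/mL).
conc = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
area = np.array([41.0, 79.5, 121.0, 160.5, 199.0])   # arbitrary units
sample_area = 110.0

# Five-point (5P) calibration: least-squares line, inverted for the sample.
slope, intercept = np.polyfit(conc, area, 1)
c_5p = (sample_area - intercept) / slope

# Single-point (1P) calibration: response factor from one mid-range standard.
c_1p = sample_area * (conc[2] / area[2])
```

With a nearly linear detector response, the two estimates agree closely, consistent with the abstract's finding that the 1P uncertainty is only slightly larger.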

  9. Parameter estimation uncertainty: Comparing apples and apples?

    NASA Astrophysics Data System (ADS)

    Hart, D.; Yoon, H.; McKenna, S. A.

    2012-12-01

    Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests…
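The solution-space/null-space decomposition at the heart of NSMC can be sketched with an SVD of a hypothetical rank-deficient sensitivity (Jacobian) matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sensitivities of 3 observations to 6 parameters: the
# calibration data constrain only a 3-dimensional solution space.
J = rng.standard_normal((3, 6))

U, s, Vt = np.linalg.svd(J)
r = int(np.sum(s > 1e-10 * s[0]))     # numerical rank = solution-space dim
V_sol, V_null = Vt[:r].T, Vt[r:].T    # orthonormal bases of the two spaces

p_cal = rng.standard_normal(6)        # a single calibrated parameter set

# Null-space Monte Carlo: perturb only along the null space, so the fit
# to the calibration data is preserved (to first order).
fields = [p_cal + V_null @ rng.standard_normal(V_null.shape[1])
          for _ in range(100)]
```

Each perturbed field reproduces the calibration targets while exploring the unconstrained directions of parameter space.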

  10. Quantification of uncertainties of the tsunami risk in Cascadia

    NASA Astrophysics Data System (ADS)

    Guillas, S.; Sarri, A.; Day, S. J.; Liu, X.; Dias, F.

    2013-12-01

    We first show new realistic simulations of earthquake-generated tsunamis in Cascadia (Western Canada and USA) using VOLNA. VOLNA is a solver of nonlinear shallow water equations on unstructured meshes that is accelerated on the new GPU system Emerald. Primary outputs from these runs are tsunami inundation maps, accompanied by site-specific wave trains and flow velocity histories. The variations in inputs (here seabed deformations due to earthquakes) are time-varying shapes that are difficult to sample, and they require an integrated statistical and geophysical analysis. Furthermore, the uncertainties in the bathymetry require extensive investigation and optimization of the resolutions at the source and impact. Thus we need to run VOLNA for well-chosen combinations of the inputs and the bathymetry to reflect the various sources of uncertainties, and we interpolate in between using a so-called statistical emulator that keeps track of the additional uncertainties due to the interpolation itself. We present novel adaptive sequential designs that enable such choices of the combinations for our Gaussian Process (GP) based emulator in order to maximize the information from the limited number of runs of VOLNA that can be computed. GPs show strength in approximating the response surface but suffer from the large computational cost associated with fitting. Hence, a careful selection of inputs is necessary to optimize the trade-off between fit quality and computational effort. Finally, we also propose to assess the frequencies and intensities of the earthquakes along the Cascadia subduction zone that have been demonstrated by geological palaeoseismic, palaeogeodetic and tsunami deposit studies. As a result, the hazard assessment aims to reflect the multiple non-linearities and uncertainties for the tsunami risk in Cascadia.
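A bare-bones 1-D Gaussian-process emulator conveys the idea: fit a handful of expensive runs, then interpolate with a quantified interpolation variance. The kernel, hyperparameters, and toy response below are our own choices, not VOLNA outputs:

```python
import numpy as np

def gp_emulate(X, y, Xs, ell=0.5, sf=1.0, noise=1e-6):
    """Plain GP regression with an RBF kernel: posterior mean and variance,
    the variance being the emulator's own interpolation uncertainty."""
    k = lambda A, B: sf**2 * np.exp(-0.5 * ((A[:, None] - B[None, :]) / ell) ** 2)
    K = k(X, X) + noise * np.eye(len(X))       # training covariance + jitter
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = sf**2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

# Hypothetical 1-D stand-in: peak inundation vs slip magnitude, emulated
# from five "expensive" solver runs.
X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = np.sin(3 * X)
mean, var = gp_emulate(X, y, np.array([0.4, 0.9]))
```

Sequential designs then add the next run where the emulator variance (or an information criterion built on it) is largest.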

  11. Characterizing Uncertainty in High-Density Maps from Multiparental Populations

    PubMed Central

    Ahfock, Daniel; Wood, Ian; Stephen, Stuart; Cavanagh, Colin R.

    2014-01-01

    Multiparental populations are of considerable interest in high-density genetic mapping due to their increased levels of polymorphism and recombination relative to biparental populations. However, errors in map construction can have significant impact on QTL discovery in later stages of analysis, and few methods have been developed to quantify the uncertainty attached to the reported order of markers or intermarker distances. Current methods are computationally intensive or limited to assessing uncertainty only for order or distance, but not both simultaneously. We derive the asymptotic joint distribution of maximum composite likelihood estimators for intermarker distances. This approach allows us to construct hypothesis tests and confidence intervals for simultaneously assessing marker-order instability and distance uncertainty. We investigate the effects of marker density, population size, and founder distribution patterns on map confidence in multiparental populations through simulations. Using these data, we provide guidelines on sample sizes necessary to map markers at sub-centimorgan densities with high certainty. We apply these approaches to data from a bread wheat Multiparent Advanced Generation Inter-Cross (MAGIC) population genotyped using the Illumina 9K SNP chip to assess regions of uncertainty and validate them against the recently released pseudomolecule for the wheat chromosome 3B. PMID:25236453

  12. Uncertainty quantification for large-scale ocean circulation predictions.

    SciTech Connect

    Safta, Cosmin; Debusschere, Bert J.; Najm, Habib N.; Sargsyan, Khachik

    2010-09-01

    Uncertainty quantification in climate models is challenged by the sparsity of the available climate data due to the high computational cost of the model runs. Another feature that prevents classical uncertainty analyses from being easily applicable is the bifurcative behavior in the climate data with respect to certain parameters. A typical example is the Meridional Overturning Circulation in the Atlantic Ocean. The maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO{sub 2} forcing. We develop a methodology that performs uncertainty quantification in the presence of limited data that have discontinuous character. Our approach is two-fold. First we detect the discontinuity location with a Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve location in the presence of arbitrarily distributed input parameter values. Furthermore, we develop a spectral approach that relies on Polynomial Chaos (PC) expansions on each side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification and propagation. The methodology is tested on synthetic examples of discontinuous data with adjustable sharpness and structure.
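The side-by-side PC idea can be sketched in 1-D: fit a separate Hermite expansion on each side of a discontinuity (here assumed known) rather than one global expansion. The jump location and response below are invented for illustration:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Hypothetical bifurcative response: a jump at xi = 0.3 in a standard-normal
# germ xi, mimicking the stream-function discontinuity.
f = lambda xi: np.where(xi < 0.3, np.sin(xi), 2.0 + 0.5 * xi)

rng = np.random.default_rng(0)
xi = rng.standard_normal(4000)

# Piecewise PC: a separate probabilists'-Hermite least-squares fit on each
# side of the discontinuity, instead of one global (oscillating) expansion.
left, right = xi[xi < 0.3], xi[xi >= 0.3]
c_left = He.hermefit(left, f(left), deg=6)
c_right = He.hermefit(right, f(right), deg=6)

surrogate = lambda x: np.where(x < 0.3,
                               He.hermeval(x, c_left),
                               He.hermeval(x, c_right))
```

A single global expansion of the same degree would smear the jump and oscillate; the piecewise surrogate reproduces both branches.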

  13. Uncertainty quantification in virtual surgery predictions for single ventricle palliation

    NASA Astrophysics Data System (ADS)

    Schiavazzi, Daniele; Marsden, Alison

    2014-11-01

    Hemodynamic results from numerical simulations of physiology in patients are invariably presented as deterministic quantities without assessment of associated confidence. Recent advances in cardiovascular simulation and Uncertainty Analysis can be leveraged to challenge this paradigm and to quantify the variability of output quantities of interest, of paramount importance to complement clinical decision making. Physiological variability and errors are responsible for the uncertainty typically associated with measurements in the clinic; starting from a characterization of these quantities in probability, we present applications in the context of estimating the distributions of lumped parameters in 0D models of single-ventricle circulation. We also present results in virtual Fontan palliation surgery, where the variability of both local and systemic hemodynamic indicators is inferred from the uncertainty in pre-operative clinical measurements. Efficient numerical algorithms are required to mitigate the computational cost of propagating the uncertainty through multiscale coupled 0D-3D models of pulsatile flow at the cavopulmonary connection. This work constitutes a first step towards systematic application of robust numerical simulations to virtual surgery predictions.

  14. Uncertainty and Sensitivity Analyses of Duct Propagation Models

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Watson, Willie R.; Jones, Michael G.

    2008-01-01

    This paper presents results of uncertainty and sensitivity analyses conducted to assess the relative merits of three duct propagation codes. Results from this study are intended to support identification of a "working envelope" within which to use the various approaches underlying these propagation codes. This investigation considers a segmented liner configuration that models the NASA Langley Grazing Incidence Tube, for which a large set of measured data was available. For the uncertainty analysis, the selected input parameters (source sound pressure level, average Mach number, liner impedance, exit impedance, static pressure and static temperature) are randomly varied over a range of values. Uncertainty limits (95% confidence levels) are computed for the predicted values from each code and are compared with the corresponding 95% confidence intervals in the measured data. Generally, the mean values of the predicted attenuation track the mean values of the measured attenuation quite well, and the predicted confidence intervals tend to be larger in the presence of mean flow. A two-level, six-factor sensitivity study is also conducted in which the six inputs are varied one at a time to assess their effect on the predicted attenuation. As expected, the results demonstrate the liner resistance and reactance to be the most important input parameters. They also indicate that the exit impedance is a significant contributor to uncertainty in the predicted attenuation.
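    The Monte Carlo procedure described (randomly varying the inputs, then reading 95% limits off the predicted outputs) can be sketched as follows; the attenuation model and the input ranges are hypothetical placeholders, not the actual propagation codes or liner parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# Toy stand-in for a duct propagation code: attenuation as a function of
# liner resistance R, reactance X, and average Mach number M (all invented).
def attenuation(R, X, M):
    return 20.0 * R / (R**2 + X**2) * (1.0 - 0.5 * M)

# Randomly vary the inputs over assumed ranges, as in the uncertainty analysis
R = rng.uniform(0.5, 1.5, N)
X = rng.uniform(-1.0, 1.0, N)
M = rng.uniform(0.0, 0.3, N)
dB = attenuation(R, X, M)

lo, hi = np.percentile(dB, [2.5, 97.5])   # 95% uncertainty limits
```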

  15. Synthesis and Control of Flexible Systems with Component-Level Uncertainties

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Lim, Kyong B.

    2009-01-01

    An efficient and computationally robust method for synthesis of component dynamics is developed. The method defines the interface forces/moments as feasible vectors in transformed coordinates to ensure that connectivity requirements of the combined structure are met. The synthesized system is then defined in a transformed set of feasible coordinates. The simplicity of form is exploited to effectively deal with modeling parametric and non-parametric uncertainties at the substructure level. Uncertainty models of reasonable size and complexity are synthesized for the combined structure from those in the substructure models. In particular, we address frequency and damping uncertainties at the component level. The approach first considers the robustness of synthesized flexible systems. It is then extended to deal with non-synthesized dynamic models with component-level uncertainties by projecting uncertainties to the system level. A numerical example is given to demonstrate the feasibility of the proposed approach.

  16. Dosimetric uncertainty in prostate cancer proton radiotherapy

    SciTech Connect

    Lin Liyong; Vargas, Carlos; Hsi Wen; Indelicato, Daniel; Slopsema, Roelf; Li Zuofeng; Yeung, Daniel; Horne, Dave; Palta, Jatinder

    2008-11-15

    Purpose: The authors evaluate the uncertainty in proton therapy dose distribution for prostate cancer due to organ displacement, varying penumbra width of proton beams, and the amount of rectal gas inside the rectum. Methods and Materials: Proton beam treatment plans were generated for ten prostate patients with a minimum dose of 74.1 cobalt gray equivalent (CGE) to the planning target volume (PTV), while 95% of the PTV received 78 CGE. Two lateral or lateral oblique proton beams were used for each plan. The authors investigated the uncertainty in dose to the rectal wall (RW) and the bladder wall (BW) due to organ displacement by comparing the dose-volume histograms (DVH) calculated with the original or shifted contours. The variation between DVHs was also evaluated for patients with and without rectal gas in the rectum, for five patients who had 16 to 47 cc of visible rectal gas in their planning computed tomography (CT) imaging set. The uncertainty due to the varying penumbra width of the delivered protons for different beam setting options on the proton delivery system was also evaluated. Results: For a 5 mm anterior shift, the relative change in the RW volume receiving 70 CGE dose (V{sub 70}) was 37.9% (5.0% absolute change in 13.2% of a mean V{sub 70}). The relative change in the BW volume receiving 70 CGE dose (V{sub 70}) was 20.9% (4.3% absolute change in 20.6% of a mean V{sub 70}) with a 5 mm inferior shift. A 2 mm penumbra difference in beam setting options on the proton delivery system resulted in relative variations of 6.1% (0.8% absolute change) and 4.4% (0.9% absolute change) in V{sub 70} of the RW and BW, respectively. The data show that the organ displacements produce absolute DVH changes that generally shift the entire isodose line while maintaining the same shape. The overall shape of the DVH curve for each organ is determined by the penumbra and the distance of the target in beam's eye view (BEV) from the block edge. The beam setting option

  17. Lithological Uncertainty Expressed by Normalized Compression Distance

    NASA Astrophysics Data System (ADS)

    Jatnieks, J.; Saks, T.; Delina, A.; Popovs, K.

    2012-04-01

    Prediction by partial matching (PPM), used for computing the NCD metric, is highly dependent on context. We assign unique symbols to aggregate lithology types and serialize the borehole logs into text strings, where the string length represents a normalized borehole depth. This encoding ensures that the lithology types as well as the depth and sequence of strata are comparable in a form most native to the universal data compression software that calculates the pairwise NCD dissimilarity matrix. The NCD results can be used for generalization of the Quaternary structure using spatial clustering followed by a Voronoi tessellation using boreholes as generator points. After dissolving cluster membership identifiers of the borehole Voronoi polygons in a GIS environment, regions representing similar lithological structure can be visualized. The exact number of regions and their homogeneity depends on the parameters of the clustering solution. This study is supported by the European Social Fund project No. 2009/0212/1DP/1.1.1.2.0/09/APIA/VIAA/060. Keywords: geological uncertainty, lithological uncertainty, generalization, information distance, normalized compression distance, data compression
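    The NCD itself is simple to compute with any general-purpose compressor. A minimal Python sketch follows, with hypothetical one-letter lithology codes; the paper uses a PPM compressor, and zlib is substituted here only to keep the example self-contained.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance via a general-purpose compressor."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Borehole logs serialized as strings of lithology symbols (invented codes:
# S = sand, C = clay, G = gravel, T = till), length ~ normalized depth.
log_a = b"SSSSCCCCGGGGTTTT" * 8   # one stratigraphic sequence
log_b = b"SSSSCCCCGGGGTTTT" * 8   # identical stratigraphy
log_c = b"TGCSTGCSTGCSTGCS" * 8   # same symbols, different structure
```

    Similar logs compress well together, giving a small NCD; structurally different logs do not, giving a larger one, which is what drives the clustering step.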

  18. Evaluating the uncertainty of input quantities in measurement models

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio; Elster, Clemens

    2014-06-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in

  19. The uncertainty of reference standards--a guide to understanding factors impacting uncertainty, uncertainty calculations, and vendor certifications.

    PubMed

    Gates, Kevin; Chang, Ning; Dilek, Isil; Jian, Huahua; Pogue, Sherri; Sreenivasan, Uma

    2009-10-01

    Certified solution standards are widely used in forensic toxicological, clinical/diagnostic, and environmental testing. Typically, these standards are purchased as ampouled solutions with a certified concentration. Vendors present concentration and uncertainty differently on their Certificates of Analysis. Understanding the factors that impact uncertainty and which factors have been considered in the vendor's assignment of uncertainty are critical to understanding the accuracy of the standard and the impact on testing results. Understanding these variables is also important for laboratories seeking to comply with ISO/IEC 17025 requirements and for those preparing reference solutions from neat materials at the bench. The impact of uncertainty associated with the neat material purity (including residual water, residual solvent, and inorganic content), mass measurement (weighing techniques), and solvent addition (solution density) on the overall uncertainty of the certified concentration is described along with uncertainty calculations.
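    As a hedged illustration of how the purity, mass-measurement and solvent-addition uncertainties described here combine into the uncertainty of a certified concentration (all numbers invented, not from any vendor certificate):

```python
import math

# Certified concentration c = m * P / V: mass m of neat material, mass-fraction
# purity P, and solution volume V (illustrative values only)
m, u_m = 10.00e-3, 0.01e-3     # g, with standard uncertainty of the weighing
P, u_P = 0.998, 0.002          # purity and its uncertainty
V, u_V = 10.00e-3, 0.02e-3     # L, via solvent mass and solution density

c = m * P / V                  # certified concentration, g/L

# For a product/quotient model with uncorrelated inputs, relative standard
# uncertainties combine in quadrature (GUM law of propagation):
u_rel = math.sqrt((u_m / m) ** 2 + (u_P / P) ** 2 + (u_V / V) ** 2)
u_c = c * u_rel                # combined standard uncertainty of c
```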

  20. The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation

    DOE PAGES

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.; Peterson, Joshua L.; Johnson, Seth R.

    2015-12-01

    Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, the typical SDDR calculations do not consider how uncertainties in MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.

  1. The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation

    SciTech Connect

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.; Peterson, Joshua L.; Johnson, Seth R.

    2015-12-01

    Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, the typical SDDR calculations do not consider how uncertainties in MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.

  2. Uncertainty in gridded CO2 emissions estimates

    NASA Astrophysics Data System (ADS)

    Hogue, Susannah; Marland, Eric; Andres, Robert J.; Marland, Gregg; Woodard, Dawn

    2016-05-01

    We are interested in the spatial distribution of fossil-fuel-related emissions of CO2 for both geochemical and geopolitical reasons, but it is important to understand the uncertainty that exists in spatially explicit emissions estimates. Working from one of the widely used gridded data sets of CO2 emissions, we examine the elements of uncertainty, focusing on gridded data for the United States at the scale of 1° latitude by 1° longitude. Uncertainty is introduced in the magnitude of total United States emissions, the magnitude and location of large point sources, the magnitude and distribution of non-point sources, and from the use of proxy data to characterize emissions. For the United States, we develop estimates of the contribution of each component of uncertainty. At 1° resolution, in most grid cells, the largest contribution to uncertainty comes from how well the distribution of the proxy (in this case population density) represents the distribution of emissions. In other grid cells, the magnitude and location of large point sources make the major contribution to uncertainty. Uncertainty in population density can be important where a large gradient in population density occurs near a grid cell boundary. Uncertainty is strongly scale-dependent with uncertainty increasing as grid size decreases. Uncertainty for our data set with 1° grid cells for the United States is typically on the order of ±150%, but this is perhaps not excessive in a data set where emissions per grid cell vary over 8 orders of magnitude.

  3. Induction of models under uncertainty

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter

    1986-01-01

    This paper outlines a procedure for performing induction under uncertainty. This procedure uses a probabilistic representation and uses Bayes' theorem to decide between alternative hypotheses (theories). This procedure is illustrated by a robot with no prior world experience performing induction on data it has gathered about the world. The particular inductive problem is the formation of class descriptions both for the tutored and untutored cases. The resulting class definitions are inherently probabilistic and so do not have any sharply defined membership criterion. This robot example raises some fundamental problems about induction; particularly, it is shown that inductively formed theories are not the best way to make predictions. Another difficulty is the need to provide prior probabilities for the set of possible theories. The main criterion for such priors is a pragmatic one aimed at keeping the theory structure as simple as possible, while still reflecting any structure discovered in the data.
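    The core decision rule, Bayes' theorem applied to alternative hypotheses, can be sketched with two invented class hypotheses; the priors and per-observation likelihoods below are illustrative only, though the prior favouring the simpler theory mirrors the pragmatic criterion the abstract describes.

```python
# Two candidate class hypotheses for an observed feature sequence.
# "p_obs" is P(datum | H) for a single consistent observation (invented).
hypotheses = {
    "H_simple":  {"prior": 0.7, "p_obs": 0.6},
    "H_complex": {"prior": 0.3, "p_obs": 0.9},
}
n_obs = 5  # number of consistent observations gathered

# Bayes' theorem: P(H | D) proportional to P(D | H) * P(H),
# with independent observations so P(D | H) = p_obs ** n_obs
unnorm = {h: v["prior"] * v["p_obs"] ** n_obs for h, v in hypotheses.items()}
total = sum(unnorm.values())
posterior = {h: u / total for h, u in unnorm.items()}
```

    With these numbers the data overwhelm the simplicity-favouring prior after only five observations, illustrating the interplay between prior structure and evidence.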

  4. Measuring the uncertainty of coupling

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaojun; Shang, Pengjian

    2015-06-01

    A new information-theoretic measure, called coupling entropy, is proposed here to detect the causal links in complex systems by taking into account the inner composition alignment of temporal structure. It is a permutation-based asymmetric association measure to infer the uncertainty of coupling between two time series. The coupling entropy is found to be effective in the analysis of Hénon maps, where different noises are added to test its accuracy and sensitivity. The coupling entropy is also applied to analyze the relationship between unemployment rate and CPI change in the U.S., where the CPI change turns out to be the driving variable while the unemployment rate is the responding one.
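    A hedged sketch of the ordinal-pattern machinery underlying such permutation-based measures: the code below computes plain permutation entropy of a single series, not the coupling entropy itself, whose exact pairwise definition is given in the article. The embedding dimension and test series are arbitrary choices.

```python
import math

def permutation_entropy(series, m=3):
    """Shannon entropy of ordinal (permutation) patterns of embedding dim m."""
    counts = {}
    for i in range(len(series) - m + 1):
        # Rank pattern of the window: indices sorted by their values
        pattern = tuple(sorted(range(m), key=lambda k: series[i + k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())

monotone = list(range(50))                        # fully predictable ordering
irregular = [(7 * i) % 13 for i in range(50)]     # deterministic, mixed ordering
```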

  5. Dopamine, uncertainty and TD learning

    PubMed Central

    Niv, Yael; Duff, Michael O; Dayan, Peter

    2005-01-01

    Substantial evidence suggests that the phasic activities of dopaminergic neurons in the primate midbrain represent a temporal difference (TD) error in predictions of future reward, with increases above and decreases below baseline consequent on positive and negative prediction errors, respectively. However, dopamine cells have very low baseline activity, which implies that the representation of these two sorts of error is asymmetric. We explore the implications of this seemingly innocuous asymmetry for the interpretation of dopaminergic firing patterns in experiments with probabilistic rewards which bring about persistent prediction errors. In particular, we show that when averaging the non-stationary prediction errors across trials, a ramping in the activity of the dopamine neurons should be apparent, whose magnitude is dependent on the learning rate. This exact phenomenon was observed in a recent experiment, though being interpreted there in antipodal terms as a within-trial encoding of uncertainty. PMID:15953384
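    The underlying TD(0) update with probabilistic rewards is easy to simulate. This sketch shows the persistent positive and negative prediction errors that the abstract refers to; the asymmetric-coding and ramping analysis itself is in the paper, and the reward probability and learning rate here are arbitrary choices.

```python
import random

rng = random.Random(0)
alpha = 0.05       # learning rate
p_reward = 0.5     # probabilistic reward schedule
V = 0.0            # learned prediction of reward value

deltas = []        # per-trial TD prediction errors at reward time
for trial in range(5000):
    r = 1.0 if rng.random() < p_reward else 0.0
    delta = r - V              # positive on rewarded, negative on unrewarded trials
    V += alpha * delta         # TD(0) update
    deltas.append(delta)
```

    Even after V settles near the expected reward, every trial still produces a nonzero prediction error of one sign or the other; how those errors average across trials under asymmetric neural coding is the paper's central point.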

  6. Uncertainties drive arsenic rule delay

    SciTech Connect

    Pontius, F.W.

    1995-04-01

    The US Environmental Protection Agency (USEPA) is under court order to sign a proposed rule for arsenic by Nov. 30, 1995. The agency recently announced that it will not meet this deadline, citing the need to gather additional information. Development of a National Interim Primary Drinking Water Regulation for arsenic has been delayed several times over the past 10 years because of uncertainties regarding health issues and costs associated with compliance. The early history of development of the arsenic rule has been reviewed. Only recent developments are reviewed here. The current maximum contaminant level (MCL) for arsenic in drinking water is 0.05 mg/L. This MCL was set in 1975, based on the 1962 US Public Health Standards. The current Safe Drinking Water Act (SDWA) requires that the revised arsenic MCL be set as close to the MCL goal (MCLG) as is feasible using best technology, treatment techniques, or other means and taking cost into consideration.

  7. Data Assimilation and Propagation of Uncertainty in Multiscale Cardiovascular Simulation

    NASA Astrophysics Data System (ADS)

    Schiavazzi, Daniele; Marsden, Alison

    2015-11-01

    Cardiovascular modeling is the application of computational tools to predict hemodynamics. State-of-the-art techniques couple a 3D incompressible Navier-Stokes solver with a boundary circulation model and can predict local and peripheral hemodynamics, analyze the post-operative performance of surgical designs and complement clinical data collection, minimizing invasive and risky measurement practices. The ability of these tools to make useful predictions is directly related to their accuracy in representing measured physiologies. Tuning of model parameters is therefore a topic of paramount importance and should include clinical data uncertainty, revealing how this uncertainty will affect the predictions. We propose a fully Bayesian, multi-level approach to data assimilation of uncertain clinical data in multiscale circulation models. To reduce the computational cost, we use a stable, condensed approximation of the 3D model built by linear sparse regression of the pressure/flow rate relationship at the outlets. Finally, we consider the problem of non-intrusively propagating the uncertainty in model parameters to the resulting hemodynamics and compare Monte Carlo simulation with Stochastic Collocation approaches based on Polynomial or Multi-resolution Chaos expansions.

  8. Optimisation of decision making under uncertainty throughout field lifetime: A fractured reservoir example

    NASA Astrophysics Data System (ADS)

    Arnold, Dan; Demyanov, Vasily; Christie, Mike; Bakay, Alexander; Gopa, Konstantin

    2016-10-01

    Assessing the change in uncertainty in reservoir production forecasts over field lifetime is rarely undertaken because of the complexity of joining together the individual workflows. This becomes particularly important in complex fields such as naturally fractured reservoirs. The impact of this problem has been identified in previous studies, and many solutions have been proposed but never implemented on complex reservoir problems due to the computational cost of quantifying uncertainty and optimising the reservoir development, specifically knowing how many and what kind of simulations to run. This paper demonstrates a workflow that propagates uncertainty throughout field lifetime, and into the decision making process, by a combination of a metric-based approach, multi-objective optimisation and Bayesian estimation of uncertainty. The workflow propagates uncertainty estimates from appraisal into initial development optimisation, then updates uncertainty through history matching and finally propagates it into late-life optimisation. The combination of techniques applied, namely the metric approach and multi-objective optimisation, helps evaluate development options under uncertainty. This was achieved with a significantly reduced number of flow simulations, such that the combined workflow is computationally feasible to run for a real-field problem. This workflow is applied to two synthetic naturally fractured reservoir (NFR) case studies in appraisal, field development, history matching and mid-life EOR stages. The first is a simple sector model, while the second is a more complex full-field example based on a real-life analogue. This study infers geological uncertainty from an ensemble of models based on a Brazilian carbonate outcrop, which is propagated through the field lifetime, before and after the start of production, with the inclusion of production data significantly collapsing the spread of P10-P90 in reservoir forecasts. The workflow links uncertainty

  9. Algorithms for propagating uncertainty across heterogeneous domains

    SciTech Connect

    Cho, Heyrim; Yang, Xiu; Venturi, D.; Karniadakis, George E.

    2015-12-30

    We address an important research area in stochastic multi-scale modeling, namely the propagation of uncertainty across heterogeneous domains characterized by partially correlated processes with vastly different correlation lengths. This class of problems arise very often when computing stochastic PDEs and particle models with stochastic/stochastic domain interaction but also with stochastic/deterministic coupling. The domains may be fully embedded, adjacent or partially overlapping. The fundamental open question we address is the construction of proper transmission boundary conditions that preserve global statistical properties of the solution across different subdomains. Often, the codes that model different parts of the domains are black-box and hence a domain decomposition technique is required. No rigorous theory or even effective empirical algorithms have yet been developed for this purpose, although interfaces defined in terms of functionals of random fields (e.g., multi-point cumulants) can overcome the computationally prohibitive problem of preserving sample-path continuity across domains. The key idea of the different methods we propose relies on combining local reduced-order representations of random fields with multi-level domain decomposition. Specifically, we propose two new algorithms: The first one enforces the continuity of the conditional mean and variance of the solution across adjacent subdomains by using Schwarz iterations. The second algorithm is based on PDE-constrained multi-objective optimization, and it allows us to set more general interface conditions. The effectiveness of these new algorithms is demonstrated in numerical examples involving elliptic problems with random diffusion coefficients, stochastically advected scalar fields, and nonlinear advection-reaction problems with random reaction rates.

  10. Optimal Climate Protection Policies Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Weber, M.; Barth, V.; Hasselmann, K.; Hooss, G.

    A cost-benefit analysis for greenhouse warming based on a globally integrated coupled climate-macroeconomic cost model SIAM2 (Structural Integrated Assessment Model) is used to compute optimal paths of global CO2 emissions. The aim of the model is to minimize the net time-integrated sum of climate damage and mitigation costs (or maximize the economic and social welfare). The climate model is represented by a nonlinear impulse-response model (NICCS) calibrated against a coupled ocean-atmosphere general circulation model and a three-dimensional global carbon cycle model. The latest version of the economic module is based on a macroeconomic growth model, which is designed to capture not only the interactions between climate damages and economic development, but also the conflicting goals of individual firms and society (government). The model includes unemployment, limited fossil fuel resources, and endogenous and stochastic exogenous technological development (unpredictable labor or fuel efficiency innovations of random impact amplitude at random points in time). One objective of the project is to examine optimal climate protection policies in the presence of uncertainty. A stochastic model is introduced to simulate the development of technology as well as climate change and climate damages. In response to this (stochastic) prediction, the fiscal policy is adjusted gradually in a series of discrete steps. The stochastic module includes probability-based methods, sensitivity studies and formal scenario analysis.

  11. Monte Carlo uncertainty estimation for an oscillating-vessel viscosity measurement

    SciTech Connect

    K. Horne; H. Ban; R. Fielding; R. Kennedy

    2012-08-01

    This paper discusses the initial design and evaluation of a high temperature viscosity measurement system, with a focus on the uncertainty assessment. Numerical simulation of the viscometer is used to estimate viscosity uncertainties through the Monte Carlo method. The simulation computes the system response for a particular set of inputs (viscosity, moment of inertia, spring constant and hysteretic damping), and the viscosity is calculated using two methods: the Roscoe approximate solution and a numerical-fit method. For numerical fitting, a residual function of the logarithmic decay of oscillation amplitude and oscillation period is developed to replace the residual function of angular oscillation, which is mathematically stiff. The results of this study indicate that the method using computational solution of the equations and fitting for the parameters should be used, since it almost always outperforms the Roscoe approximation in uncertainty. The hysteretic damping and spring stiffness uncertainties translate into viscosity uncertainties almost directly, whereas the moment of inertia and vessel-height uncertainties are magnified approximately two-fold. As the hysteretic damping increases, so does the magnification of its uncertainty; therefore it should be minimized in the system design. The results of this study provide a general guide for the design and application of all oscillating-vessel viscosity measurement systems.
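    The Monte Carlo scheme, sampling each uncertain input and pushing the samples through the system model, can be sketched as below. The viscosity formula here is a made-up placeholder, not the Roscoe solution or the authors' numerical fit; the point is how input relative uncertainties translate, and are possibly magnified, in the output.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50_000

# Hypothetical reduced model: viscosity inferred from moment of inertia I,
# spring constant k and hysteretic damping h (illustrative functional form)
def viscosity(I, k, h):
    return I * np.sqrt(k) * (1.0 + h) ** 2 / 100.0

# Sample the inputs with ~1% relative standard uncertainty each
I = rng.normal(1.0, 0.01, N)
k = rng.normal(4.0, 0.04, N)
h = rng.normal(0.5, 0.005, N)

eta = viscosity(I, k, h)
rel_u = eta.std() / eta.mean()   # Monte Carlo relative uncertainty of viscosity
```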

  12. a Surrogate Hybrid Three Dimensional Model for Fluid-Induced Fracture Propagation Under Conditions of Uncertainty

    NASA Astrophysics Data System (ADS)

    Ezzedine, S. M.; Lomov, I.; Ryerson, F. J.; Glascoe, L. G.

    2011-12-01

    Numerical simulations are increasingly widespread in support of decision-making and policy-making processes in emerging energy technologies such as enhanced geothermal systems and tight-gas extraction. However, numerical models typically have uncertainty associated with their inputs (parametric, conceptual and structural), leading to uncertainty in model outputs. Effective abstraction of model results for decision-making requires proper characterization, propagation, and analysis of that uncertainty. Propagation of uncertainty often relies on complex multiphysics models. For instance, fluid-induced fracturing calls for hydro-mechanical, hydro-thermal-mechanical or hydro-thermal-mechanical-chemical coupling. For the past decade several complex coupled deterministic models have been proposed to address the hydro-fracking problem, with moderate success. Although these models can be used as drivers for uncertainty quantification, they are numerically and computationally cumbersome. In this paper, we present a surrogate model that can handle 1) the hydromechanical coupling with minimum computational cost, 2) the tracking of simultaneous propagation of hundreds of fracture tips, with propagation velocities proportional to the stress intensity factor at each crack tip, and 3) the propagation of uncertainty from inputs to outputs, for example via Monte Carlo simulation. We also present a novel hybrid modeling scheme designed for propagating uncertainty and performing a global sensitivity analysis, while maintaining the quantitative rigor of the analysis by providing confidence intervals on predictions. (Prepared by LLNL under Contract DE-AC52-07NA27344.)

  13. Numerical modeling and uncertainty analysis of light emitting diodes for photometric measurements

    NASA Astrophysics Data System (ADS)

    Khan, Mohammed Z. U.; Abbas, Mohammed; Al-Hadhrami, Luai M.

    2015-06-01

    With the rapid evolution of new, energy-efficient solid-state lighting (SSL) systems, a requirement has risen for new performance metrics and measurement methods to address their unique construction and operating conditions. In this paper, light propagation characteristics in light emitting diodes are analyzed for measurement uncertainty through numerical modeling and simulation. A general 2D EM simulator with PML boundary conditions is formulated to solve Maxwell's equations using finite-difference time domain (FDTD) numerical method to describe the light propagation in LEDs. A practical GaN LED used in SSL systems is simulated for light propagation. The optical properties of dispersive materials are modeled using multi-pole Lorentz-Drude model. The input dipole source for the LED structure is modeled explicitly through a Gaussian pulse line source at a central wavelength of 460 nm corresponding to GaN emission. Finally, the expression for combined standard uncertainty in the light extraction efficiency due to uncertainties in inputs such as emission in the active layer and EM fields is developed using the GUM law of propagation of uncertainties. The uncertainty in GaN LED emission wavelength obtained from Full Width Half Maximum (FWHM) of the emission spectrum is computed to be 16.98 nm. Therefore, the uncertainty analysis model is then used to compute the corresponding uncertainties in the LED output measurements i.e. light extraction efficiency, LED output power and EM fields.
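    The GUM law of propagation used in the final step can be sketched generically: numerically estimated sensitivity coefficients are combined in quadrature with the input standard uncertainties. The model function below is an arbitrary stand-in, not the LED light-extraction-efficiency model.

```python
import math

def combined_uncertainty(f, x, u, rel_step=1e-6):
    """Combined standard uncertainty of y = f(x) for uncorrelated inputs,
    using central finite differences for the sensitivity coefficients."""
    uc2 = 0.0
    for i in range(len(x)):
        h = rel_step * max(abs(x[i]), 1.0)
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        ci = (f(xp) - f(xm)) / (2.0 * h)   # sensitivity coefficient df/dx_i
        uc2 += (ci * u[i]) ** 2            # quadrature sum per GUM
    return math.sqrt(uc2)

# Illustrative multiplicative model y = a * b / c with invented uncertainties
f = lambda v: v[0] * v[1] / v[2]
u_c = combined_uncertainty(f, [2.0, 3.0, 4.0], [0.02, 0.03, 0.04])
```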

  14. Unscented transform-based uncertainty analysis of rotating coil transducers for field mapping

    NASA Astrophysics Data System (ADS)

    Arpaia, P.; De Matteis, E.; Schiano Lo Moriello, R.

    2016-03-01

    The uncertainty of a rotating coil transducer for magnetic field mapping is analyzed. The unscented transform and statistical design of experiments are combined to determine the magnetic field expectation, the standard uncertainty, and the separate contributions of the uncertainty sources. For nonlinear measurement models, the unscented transform-based approach is more error-proof than the linearization underlying the "Guide to the Expression of Uncertainty in Measurement" (GUM), owing to the absence of model approximations and derivative computations. When the GUM assumptions are not met, the deterministic sampling strategy strongly reduces the computational burden with respect to the Monte Carlo-based methods proposed by Supplement 1 of the GUM. Furthermore, the design of experiments and the associated statistical analysis allow the uncertainty sources domain to be explored efficiently, as well as their significance and single contributions to be assessed for an effective setup configuration. A straightforward experimental case study highlights that a one-order-of-magnitude reduction in the relative uncertainty of the coil area produces a decrease in uncertainty of the field mapping transducer by a factor of 25 with respect to the worst condition. Moreover, about 700 trials and the related processing achieve results corresponding to 5 × 10⁶ brute-force Monte Carlo simulations.
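    A minimal one-dimensional unscented transform illustrates the deterministic sampling strategy: a handful of weighted sigma points replace thousands of Monte Carlo draws. The scaling parameter kappa and the quadratic test function are illustrative choices, not the transducer model.

```python
import math

def unscented_mean(f, mu, sigma, kappa=2.0):
    """Propagate a 1-D Gaussian (mu, sigma) through f via the unscented transform."""
    n = 1
    spread = math.sqrt(n + kappa) * sigma
    points = [mu, mu + spread, mu - spread]       # 2n + 1 sigma points
    weights = [kappa / (n + kappa),
               1.0 / (2.0 * (n + kappa)),
               1.0 / (2.0 * (n + kappa))]
    return sum(w * f(p) for w, p in zip(weights, points))

# For a quadratic response the UT mean is exact: E[x^2] = mu^2 + sigma^2
m = unscented_mean(lambda x: x * x, mu=2.0, sigma=1.0)
```

    Here three sigma points recover the exact mean of a quadratic response, which a first-order GUM linearization would miss entirely; this exactness for low-order nonlinearities is what makes the approach attractive.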

  15. A Cascade Approach to Uncertainty Estimation for the Hydrological Simulation of Droughts

    NASA Astrophysics Data System (ADS)

    Smith, Katie; Tanguy, Maliko; Parry, Simon; Prudhomme, Christel

    2016-04-01

    Uncertainty poses a significant challenge in environmental research, and the characterisation and quantification of uncertainty has become a research priority over the past decade. Studies of extreme events are particularly affected by issues of uncertainty. This study focusses on the sources of uncertainty in the modelling of streamflow droughts in the United Kingdom. Droughts are a poorly understood natural hazard with no universally accepted definition. Meteorological, hydrological and agricultural droughts have different meanings and vary both spatially and temporally, yet each is inextricably linked. The work presented here is part of two extensive interdisciplinary projects investigating drought reconstruction and drought forecasting capabilities in the UK. Lumped catchment models are applied to simulate streamflow drought, and uncertainties from five different sources are investigated: climate input data, potential evapotranspiration (PET) method, hydrological model, within-model structure, and model parameterisation. Latin hypercube sampling is applied to develop large parameter ensembles for each model structure, which are run using parallel computing on a high-performance computer cluster. Parameterisations are assessed using multi-objective evaluation criteria which include both general and drought performance metrics. The effect of different climate input data and PET methods on model output is then considered using the accepted model parameterisations. The uncertainty from each of the sources creates a cascade, and when presented as such the relative importance of each aspect of uncertainty can be determined.
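Latin hypercube sampling of the kind used to build such parameter ensembles can be sketched as follows; the parameter ranges below are invented for illustration:

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng=None):
    """Basic Latin hypercube sample on [0, 1]^d: each dimension is split
    into n_samples equal-probability strata and exactly one point is
    drawn per stratum, then strata are shuffled per dimension."""
    rng = np.random.default_rng(rng)
    u = (np.arange(n_samples)[:, None]
         + rng.random((n_samples, n_params))) / n_samples
    for j in range(n_params):
        rng.shuffle(u[:, j])    # decouple strata across dimensions
    return u

# Scale to hypothetical hydrological-model parameter ranges
# (e.g., a runoff coefficient and a storage constant in days).
lows, highs = np.array([0.1, 10.0]), np.array([0.9, 500.0])
ensemble = lows + latin_hypercube(1000, 2, rng=42) * (highs - lows)
print(ensemble.shape)
```

Compared with plain random sampling, this stratification guarantees coverage of each parameter's full range even for modest ensemble sizes.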

  16. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    SciTech Connect

    Yankov, Artem; Collins, Benjamin; Klein, Markus; Jessee, Matthew A.; Zwermann, Winfried; Velkov, Kiril; Pautz, Andreas; Downar, Thomas

    2012-01-01

    For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.

  17. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    DOE PAGES

    Yankov, Artem; Collins, Benjamin; Klein, Markus; Jessee, Matthew A.; Zwermann, Winfried; Velkov, Kiril; Pautz, Andreas; Downar, Thomas

    2012-01-01

    For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.

  18. Unscented transform-based uncertainty analysis of rotating coil transducers for field mapping.

    PubMed

    Arpaia, P; De Matteis, E; Schiano Lo Moriello, R

    2016-03-01

    The uncertainty of a rotating coil transducer for magnetic field mapping is analyzed. The unscented transform and statistical design of experiments are combined to determine the magnetic field expectation, standard uncertainty, and separate contributions of the uncertainty sources. For nonlinear measurement models, the unscented transform-based approach is more error-proof than the linearization underlying the "Guide to the Expression of Uncertainty in Measurement" (GUM), owing to the absence of model approximations and derivative computations. When the GUM assumptions are not met, the deterministic sampling strategy strongly reduces the computational burden with respect to the Monte Carlo-based methods proposed by Supplement 1 of the GUM. Furthermore, the design of experiments and the associated statistical analysis allow the uncertainty sources domain to be explored efficiently, as well as their significance and single contributions to be assessed for an effective setup configuration. A straightforward experimental case study highlights that a one-order-of-magnitude reduction in the relative uncertainty of the coil area produces a decrease in the uncertainty of the field mapping transducer by a factor of 25 with respect to the worst condition. Moreover, about 700 trials and the related processing achieve results corresponding to 5 × 10⁶ brute-force Monte Carlo simulations.

  19. Quantum entropy and uncertainty for two-mode squeezed, coherent and intelligent spin states

    NASA Technical Reports Server (NTRS)

    Aragone, C.; Mundarain, D.

    1993-01-01

    We compute the quantum entropy for monomode and two-mode systems set in squeezed states. Thereafter, the quantum entropy is also calculated for angular momentum algebra when the system is either in a coherent or in an intelligent spin state. These values are compared with the corresponding values of the respective uncertainties. In general, quantum entropies and uncertainties have the same minimum and maximum points. However, for coherent and intelligent spin states, it is found that some minima for the quantum entropy turn out to be uncertainty maxima. We feel that the quantum entropy we use provides the right answer, since it is given in an essentially unique way.

  20. Dakota uncertainty quantification methods applied to the NEK-5000 SAHEX model.

    SciTech Connect

    Weirs, V. Gregory

    2014-03-01

    This report summarizes the results of a NEAMS project focused on the use of uncertainty and sensitivity analysis methods within the NEK-5000 and Dakota software framework for assessing failure probabilities as part of probabilistic risk assessment. NEK-5000 is a software tool under development at Argonne National Laboratory to perform computational fluid dynamics calculations for applications such as thermohydraulics of nuclear reactor cores. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. The goal of this work is to demonstrate the use of uncertainty quantification methods in Dakota with NEK-5000.

  1. Uncertainty quantification in the presence of limited climate model data with discontinuities.

    SciTech Connect

    Safta, Cosmin; Debusschere, Bert J.; Najm, Habib N.; Sargsyan, Khachik

    2009-12-01

    Uncertainty quantification in climate models is challenged by the sparsity of the available climate data due to the high computational cost of the model runs. Another feature that prevents classical uncertainty analyses from being easily applicable is the bifurcative behavior in the climate data with respect to certain parameters. A typical example is the Meridional Overturning Circulation in the Atlantic Ocean. The maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We develop a methodology that performs uncertainty quantification in this context in the presence of limited data.

  2. Evacuation decision-making: process and uncertainty

    SciTech Connect

    Mileti, D.; Sorensen, J.; Bogard, W.

    1985-09-01

    The purpose was to describe the processes of evacuation decision-making, identify and document uncertainties in that process, and discuss implications for federal assumption of liability for precautionary evacuations at nuclear facilities under the Price-Anderson Act. Four major categories of uncertainty are identified, concerning the interpretation of hazard, communication problems, perceived impacts of evacuation decisions, and exogenous influences. Over 40 historical accounts are reviewed and cases of these uncertainties are documented. The major findings are that all levels of government, including federal agencies, experience uncertainties in some evacuation situations. Second, private sector organizations are subject to uncertainties at a variety of decision points. Third, uncertainties documented in the historical record have provided the grounds for liability, although few legal actions have ensued. Finally, it is concluded that if liability for evacuations is assumed by the federal government, the concept of a ''precautionary'' evacuation is not useful in establishing criteria for that assumption. 55 refs., 1 fig., 4 tabs.

  3. Uncertainty in tsunami sediment transport modeling

    USGS Publications Warehouse

    Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.

    2016-01-01

    Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.
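The Ensemble Kalman Filtering inversion mentioned in this abstract builds on a standard stochastic analysis step, which can be sketched generically (toy one-dimensional state, not a tsunami sediment model):

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H, rng=None):
    """One stochastic EnKF analysis step (generic sketch).
    ensemble: (n_members, n_state) prior sample
    obs: scalar observation; obs_var: its error variance
    H: (n_state,) linear observation map."""
    rng = np.random.default_rng(rng)
    n = ensemble.shape[0]
    hx = ensemble @ H                                   # predicted observations
    P_hh = hx.var(ddof=1)                               # obs-space variance
    P_xh = ((ensemble - ensemble.mean(0)).T @ (hx - hx.mean())) / (n - 1)
    K = P_xh / (P_hh + obs_var)                         # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), n)
    return ensemble + np.outer(perturbed - hx, K)

# Prior N(0,1) ensemble, observation 2.0 with unit error variance:
# the posterior mean should sit near 1 with variance near 0.5.
prior = np.random.default_rng(1).normal(0.0, 1.0, (2000, 1))
post = enkf_update(prior, 2.0, 1.0, np.array([1.0]), rng=2)
print(post.mean(), post.var(ddof=1))
```

Perturbing the observation for each member keeps the posterior ensemble spread statistically consistent with the Kalman update.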

  4. Incorporating Forecast Uncertainty in Utility Control Center

    SciTech Connect

    Makarov, Yuri V.; Etingov, Pavel V.; Ma, Jian

    2014-07-09

    Uncertainties in forecasting the output of intermittent resources such as wind and solar generation, as well as system loads, are not adequately reflected in existing industry-grade tools used for transmission system management, generation commitment, dispatch and market operation. There are other sources of uncertainty, such as uninstructed deviations of conventional generators from their dispatch set points, generator forced outages and failures to start up, load drops, losses of major transmission facilities, and frequency variation. These uncertainties can cause deviations from the system balance, which sometimes require inefficient and costly last-minute solutions in the near real-time timeframe. This chapter considers sources of uncertainty and variability, an overall system uncertainty model, a possible plan for transition from deterministic to probabilistic methods in planning and operations, and two examples of uncertainty-based tools for grid operations. This chapter is based on work conducted at the Pacific Northwest National Laboratory (PNNL).

  5. Capturing the uncertainty in adversary attack simulations.

    SciTech Connect

    Darby, John L.; Brooks, Traci N.; Berry, Robert Bruce

    2008-09-01

    This work provides a comprehensive technique to evaluate uncertainty, resulting in a more realistic evaluation of PI, thereby requiring fewer resources to address scenarios and allowing resources to be used across more scenarios. For a given set of adversary resources, two types of uncertainty are associated with PI for a scenario: (1) aleatory (random) uncertainty for detection probabilities and time delays and (2) epistemic (state-of-knowledge) uncertainty for the adversary resources applied during an attack. Adversary resources consist of attributes (such as equipment and training) and knowledge about the security system; to date, most evaluations have assumed an adversary with very high resources, adding to the conservatism in the evaluation of PI. The aleatory uncertainty in PI is addressed by assigning probability distributions to detection probabilities and time delays. A numerical sampling technique is used to evaluate PI, addressing the repeated-variable dependence in the equation for PI.
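The numerical sampling of aleatory uncertainty described above can be illustrated with a toy scenario; the distributions and the interruption rule below are assumptions for demonstration, not the report's model:

```python
import random

def sample_interruption_probability(n_trials, rng=None):
    """Monte Carlo estimate of a probability of interruption (PI) with
    aleatory uncertainty on detection probability and time delays.
    Toy rule: interruption requires detection AND a guard response
    faster than the adversary's remaining task time."""
    rng = random.Random(rng)
    hits = 0
    for _ in range(n_trials):
        p_detect = rng.uniform(0.6, 0.9)    # assumed detection-probability spread
        delay = rng.gauss(120.0, 15.0)      # adversary task time, seconds
        response = rng.gauss(100.0, 20.0)   # guard response time, seconds
        if rng.random() < p_detect and response < delay:
            hits += 1
    return hits / n_trials

print(sample_interruption_probability(100_000, rng=1))
```

Sampling the full scenario per trial, rather than multiplying marginal probabilities, is what handles the repeated-variable dependence noted in the abstract.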

  6. Quantifying Mixed Uncertainties in Cyber Attacker Payoffs

    SciTech Connect

    Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna; Oster, Matthew R.; Saha, Sudip

    2015-04-15

    Representation and propagation of uncertainty in cyber attacker payoffs is a key aspect of security games. Past research has primarily focused on representing the defender’s beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and intervals. Within cyber-settings, continuous probability distributions may still be appropriate for addressing statistical (aleatory) uncertainties where the defender may assume that the attacker’s payoffs differ over time. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker’s payoff generation mechanism. Such epistemic uncertainties are more suitably represented as probability boxes with intervals. In this study, we explore the mathematical treatment of such mixed payoff uncertainties.

  7. Bayesian Uncertainty Analyses Via Deterministic Model

    NASA Astrophysics Data System (ADS)

    Krzysztofowicz, R.

    2001-05-01

    Rational decision-making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of three Bayesian approaches to producing a probability distribution of the predictand via any deterministic model. The Bayesian Processor of Output (BPO) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Processor of Ensemble (BPE) quantifies the total uncertainty in terms of a posterior distribution, conditional on an ensemble of model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution.
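Under a simple normal-normal assumption, the BPO idea of conditioning on a deterministic model's output reduces to a one-line update. This is a minimal sketch of the concept, not the calibrated likelihood machinery of the actual BPO:

```python
def bpo_normal_update(prior_mean, prior_var, model_output, model_err_var):
    """Posterior of the predictand given a deterministic model's output,
    treating that output as a noisy estimate with known error variance
    (toy normal-normal form; the real BPO calibrates the likelihood
    from historical model performance)."""
    k = prior_var / (prior_var + model_err_var)   # weight on the model output
    post_mean = prior_mean + k * (model_output - prior_mean)
    post_var = (1.0 - k) * prior_var
    return post_mean, post_var

# Climatological prior N(10, 16) combined with a model estimate of 14
# whose historical error variance is 4: the posterior is pulled toward 14.
print(bpo_normal_update(10.0, 16.0, 14.0, 4.0))
```

The posterior variance is always smaller than the prior variance, quantifying how much the model output reduced the total uncertainty.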

  8. Visual Semiotics & Uncertainty Visualization: An Empirical Study.

    PubMed

    MacEachren, A M; Roth, R E; O'Brien, J; Li, B; Swingley, D; Gahegan, M

    2012-12-01

    This paper presents two linked empirical studies focused on uncertainty visualization. The experiments are framed from two conceptual perspectives. First, a typology of uncertainty is used to delineate kinds of uncertainty matched with space, time, and attribute components of data. Second, concepts from visual semiotics are applied to characterize the kind of visual signification that is appropriate for representing those different categories of uncertainty. This framework guided the two experiments reported here. The first addresses representation intuitiveness, considering both visual variables and iconicity of representation. The second addresses relative performance of the most intuitive abstract and iconic representations of uncertainty on a map reading task. Combined results suggest initial guidelines for representing uncertainty and discussion focuses on practical applicability of results.

  9. Uncertainty quantification for proton-proton fusion in chiral effective field theory

    NASA Astrophysics Data System (ADS)

    Acharya, B.; Carlsson, B. D.; Ekström, A.; Forssén, C.; Platter, L.

    2016-09-01

    We compute the S-factor of the proton-proton (pp) fusion reaction using chiral effective field theory (χEFT) up to next-to-next-to-leading order (NNLO) and perform a rigorous uncertainty analysis of the results. We quantify the uncertainties due to (i) the computational method used to compute the pp cross section in momentum space, (ii) the statistical uncertainties in the low-energy coupling constants of χEFT, (iii) the systematic uncertainty due to the χEFT cutoff, and (iv) systematic variations in the database used to calibrate the nucleon-nucleon interaction. We also examine the robustness of the polynomial extrapolation procedure, which is commonly used to extract the threshold S-factor and its energy derivatives. By performing a statistical analysis of the polynomial fit of the energy-dependent S-factor at several different energy intervals, we eliminate a systematic uncertainty that can arise from the choice of the fit interval in our calculations. In addition, we explore the statistical correlations between the S-factor and few-nucleon observables such as the binding energies and point-proton radii of ²,³H and ³He as well as the D-state probability and quadrupole moment of ²H, and the β-decay of ³H. We find that, with the state-of-the-art optimization of the nuclear Hamiltonian, the statistical uncertainty in the threshold S-factor cannot be reduced beyond 0.7%.
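The polynomial-extrapolation step examined in this abstract can be sketched with toy data; the quadratic "S-factor" below is invented purely to show how the choice of fit interval shifts the extrapolated threshold value:

```python
import numpy as np

def threshold_s_factor(energies, s_values, e_max, degree=2):
    """Fit S(E) with a polynomial on E <= e_max and extrapolate to E = 0
    (sketch of the threshold-extrapolation procedure; toy data below)."""
    mask = energies <= e_max
    coeffs = np.polyfit(energies[mask], s_values[mask], degree)
    return np.polyval(coeffs, 0.0)

# Toy low-energy S-factor S(E) = 4 + 11 E + 30 E^2 with small noise;
# varying e_max exposes the interval-choice systematic.
rng = np.random.default_rng(0)
E = np.linspace(0.001, 0.1, 50)
S = 4.0 + 11.0 * E + 30.0 * E ** 2 + rng.normal(0.0, 1e-3, E.size)
for e_max in (0.02, 0.05, 0.1):
    print(e_max, threshold_s_factor(E, S, e_max))
```

Repeating the fit over several intervals, as the paper does, turns the interval choice from a hidden systematic into a quantified spread of threshold values.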

  10. Assessing uncertainty in stormwater quality modelling.

    PubMed

    Wijesiri, Buddhi; Egodawatta, Prasanna; McGree, James; Goonetilleke, Ashantha

    2016-10-15

    Designing effective stormwater pollution mitigation strategies is a challenge in urban stormwater management. This is primarily due to the limited reliability of catchment scale stormwater quality modelling tools. As such, assessing the uncertainty associated with the information generated by stormwater quality models is important for informed decision making. Quantitative assessment of build-up and wash-off process uncertainty, which arises from the variability associated with these processes, is a major concern as typical uncertainty assessment approaches do not adequately account for process uncertainty. The research study undertaken found that the variability of build-up and wash-off processes for different particle size ranges leads to process uncertainty. After the variability and resulting process uncertainties are accurately characterised, they can be incorporated into catchment stormwater quality predictions. Accounting for process uncertainty influences the uncertainty limits associated with predicted stormwater quality. The impact of build-up process uncertainty on stormwater quality predictions is greater than that of wash-off process uncertainty. Accordingly, decision making should facilitate the design of mitigation strategies which specifically address variations in the load and composition of pollutants accumulated during dry weather periods. Moreover, the study outcomes found that the influence of process uncertainty is different for stormwater quality predictions corresponding to storm events with different intensity, duration and runoff volume generated. These storm events were also found to be significantly different in terms of the Runoff-Catchment Area ratio. As such, the selection of storm events in the context of designing stormwater pollution mitigation strategies needs to take into consideration not only the storm event characteristics, but also the influence of process uncertainty on stormwater quality predictions.

  11. Neural coding of uncertainty and probability.

    PubMed

    Ma, Wei Ji; Jazayeri, Mehrdad

    2014-01-01

    Organisms must act in the face of sensory, motor, and reward uncertainty stemming from a pandemonium of stochasticity and missing information. In many tasks, organisms can make better decisions if they have at their disposal a representation of the uncertainty associated with task-relevant variables. We formalize this problem using Bayesian decision theory and review recent behavioral and neural evidence that the brain may use knowledge of uncertainty, confidence, and probability.

  12. Neural coding of uncertainty and probability.

    PubMed

    Ma, Wei Ji; Jazayeri, Mehrdad

    2014-01-01

    Organisms must act in the face of sensory, motor, and reward uncertainty stemming from a pandemonium of stochasticity and missing information. In many tasks, organisms can make better decisions if they have at their disposal a representation of the uncertainty associated with task-relevant variables. We formalize this problem using Bayesian decision theory and review recent behavioral and neural evidence that the brain may use knowledge of uncertainty, confidence, and probability. PMID:25032495

  13. Assessing uncertainty in stormwater quality modelling.

    PubMed

    Wijesiri, Buddhi; Egodawatta, Prasanna; McGree, James; Goonetilleke, Ashantha

    2016-10-15

    Designing effective stormwater pollution mitigation strategies is a challenge in urban stormwater management. This is primarily due to the limited reliability of catchment scale stormwater quality modelling tools. As such, assessing the uncertainty associated with the information generated by stormwater quality models is important for informed decision making. Quantitative assessment of build-up and wash-off process uncertainty, which arises from the variability associated with these processes, is a major concern as typical uncertainty assessment approaches do not adequately account for process uncertainty. The research study undertaken found that the variability of build-up and wash-off processes for different particle size ranges leads to process uncertainty. After the variability and resulting process uncertainties are accurately characterised, they can be incorporated into catchment stormwater quality predictions. Accounting for process uncertainty influences the uncertainty limits associated with predicted stormwater quality. The impact of build-up process uncertainty on stormwater quality predictions is greater than that of wash-off process uncertainty. Accordingly, decision making should facilitate the design of mitigation strategies which specifically address variations in the load and composition of pollutants accumulated during dry weather periods. Moreover, the study outcomes found that the influence of process uncertainty is different for stormwater quality predictions corresponding to storm events with different intensity, duration and runoff volume generated. These storm events were also found to be significantly different in terms of the Runoff-Catchment Area ratio. As such, the selection of storm events in the context of designing stormwater pollution mitigation strategies needs to take into consideration not only the storm event characteristics, but also the influence of process uncertainty on stormwater quality predictions. PMID:27423532

  14. Whitepaper on Uncertainty Quantification for MPACT

    SciTech Connect

    Williams, Mark L.

    2015-12-17

    The MPACT code provides the ability to perform high-fidelity deterministic calculations to obtain a wide variety of detailed results for very complex reactor core models. However, MPACT currently does not have the capability to propagate the effects of input data uncertainties to provide uncertainties in the calculated results. This white paper discusses a potential method for MPACT uncertainty quantification (UQ) based on stochastic sampling.

  15. Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III

    2004-01-01

    A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
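The approximate first-order statistical moment method can be sketched as follows; the response function and input statistics are hypothetical, and the derivatives are supplied analytically rather than by a CFD sensitivity code:

```python
def first_order_moments(f, grad_f, means, variances):
    """First-order (linearized) moments of f(x) for independent,
    normally distributed inputs:
        E[f] ~ f(mu),  Var[f] ~ sum_i (df/dx_i)^2 * sigma_i^2.
    Sketch of the moment method described in the abstract."""
    mu_f = f(means)
    var_f = sum(g ** 2 * v for g, v in zip(grad_f(means), variances))
    return mu_f, var_f

# Toy response f(M, alpha) = M^2 * alpha, a hypothetical stand-in for a
# CFD output depending on Mach number and angle of attack.
f = lambda x: x[0] ** 2 * x[1]
grad = lambda x: [2.0 * x[0] * x[1], x[0] ** 2]
print(first_order_moments(f, grad, [0.8, 2.0], [0.01 ** 2, 0.1 ** 2]))
```

A probabilistic constraint of the kind in the abstract can then be imposed as, e.g., mean plus k standard deviations of the constraint function staying within its limit.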

  16. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wu, Keyi; Li, Jinglai

    2016-09-01

    In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithms, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitudes of speedup over the standard Monte Carlo methods.
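The MMC algorithm itself is more involved, but the importance-sampling idea it adapts automatically can be shown on a tail-probability toy problem; the target, proposal, and threshold below are assumptions for illustration, not the paper's algorithm:

```python
import math
import random

def tail_prob_importance(n, shift, rng=None):
    """Estimate P[Y > 4] for Y ~ N(0, 1) by sampling from a shifted
    proposal N(shift, 1) and reweighting by the density ratio - the
    importance-sampling principle that methods like MMC build on."""
    rng = random.Random(rng)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(shift, 1.0)
        if y > 4.0:
            # weight = target density / proposal density
            total += math.exp(-0.5 * y * y + 0.5 * (y - shift) ** 2)
    return total / n

# Plain Monte Carlo would need ~10^7 samples to see this tail at all;
# the shifted proposal resolves it with far fewer.
print(tail_prob_importance(200_000, shift=4.0, rng=0))
```

MMC goes further by iteratively learning the biasing distribution from the samples themselves, so no good proposal needs to be guessed in advance.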

  17. Vibration and stress analysis in the presence of structural uncertainty

    NASA Astrophysics Data System (ADS)

    Langley, R. S.

    2009-08-01

    At medium to high frequencies the dynamic response of a built-up engineering system, such as an automobile, can be sensitive to small random manufacturing imperfections. Ideally the statistics of the system response in the presence of these uncertainties should be computed at the design stage, but in practice this is an extremely difficult task. In this paper a brief review of the methods available for the analysis of systems with uncertainty is presented, and attention is then focused on two particular "non-parametric" methods: statistical energy analysis (SEA), and the hybrid method. The main governing equations are presented, and a number of example applications are considered, ranging from academic benchmark studies to industrial design studies.

  18. Multilevel design optimization and the effect of epistemic uncertainty

    NASA Astrophysics Data System (ADS)

    Nesbit, Benjamin Edward

    This work presents the state of the art in hierarchically decomposed multilevel optimization, which is then expanded with the inclusion of evidence theory for the quantification of epistemic uncertainty. The novel method, Evidence-Based Multilevel Design Optimization, is then used to solve two analytical optimization problems. This method is also used to explore the effect of the belief structure on the final solution. A methodology is presented to reduce the costs of evidence-based optimization through manipulation of the belief structure. In addition, a transport aircraft wing is also solved with multilevel optimization without uncertainty. This complex, real-world optimization problem shows the capability of the decomposed multilevel framework to reduce the costs of solving computationally expensive problems with black-box analyses.
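The belief structures of evidence theory can be represented minimally as (interval, mass) pairs; the focal elements below are hypothetical, and this sketch shows only the belief/plausibility bounds, not the optimization coupling:

```python
def belief_plausibility(structure, lo, hi):
    """Belief and plausibility of the event 'value lies in [lo, hi]'
    under a Dempster-Shafer belief structure given as a list of
    ((a, b), mass) focal elements (minimal sketch).
    Belief sums masses of focal elements fully inside [lo, hi];
    plausibility sums masses of those merely intersecting it."""
    bel = sum(m for (a, b), m in structure if lo <= a and b <= hi)
    pl = sum(m for (a, b), m in structure if a <= hi and b >= lo)
    return bel, pl

# Hypothetical expert-assigned focal elements for a design variable.
structure = [((0.0, 1.0), 0.2), ((0.5, 1.5), 0.5), ((1.0, 2.0), 0.3)]
print(belief_plausibility(structure, 0.0, 1.5))
```

The gap between belief and plausibility is exactly the epistemic uncertainty the abstract seeks to quantify; a probability distribution would collapse it to a single number.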

  19. Dealing with uncertainties in angles-only initial orbit determination

    NASA Astrophysics Data System (ADS)

    Armellin, Roberto; Di Lizia, Pierluigi; Zanetti, Renato

    2016-08-01

    A method to deal with uncertainties in initial orbit determination (IOD) is presented. This is based on the use of Taylor differential algebra (DA) to nonlinearly map uncertainties from the observation space to the state space. When a minimum set of observations is available, DA is used to expand the solution of the IOD problem in Taylor series with respect to measurement errors. When more observations are available, high order inversion tools are exploited to obtain full state pseudo-observations at a common epoch. The mean and covariance of these pseudo-observations are nonlinearly computed by evaluating the expectation of high order Taylor polynomials. Finally, a linear scheme is employed to update the current knowledge of the orbit. Angles-only observations are considered and simplified Keplerian dynamics adopted to ease the explanation. Three test cases of orbit determination of artificial satellites in different orbital regimes are presented to discuss the feature and performances of the proposed methodology.

  20. Uncertainty component evaluation in conventional microbiological qualitative measurements.

    PubMed

    Ka, Charlotte Chan Tak

    2011-01-01

    To couple method performance and QA in microbiological testing, uncertainty profiles have been developed according to the relevant LODs and their confidence intervals. The percentage probability of failure is proposed to express this uncertainty. The analysis variance is divided into four categories: uncertainty originating from the sample, uncertainty originating from variations in procedure, uncertainty originating from the measurement system, and uncertainty originating in repeatability/reproducibility.

  1. Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods

    NASA Astrophysics Data System (ADS)

    Davis, A. D.

    2015-12-01

    The marine ice instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI)---here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost.Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. We use global sensitivity
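A generic random-walk Metropolis sampler of the kind underlying such Bayesian predictions can be sketched as follows; the one-dimensional "friction parameter" posterior is a toy stand-in, not the paper's ice-sheet model or its specialized online MCMC:

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, rng=None):
    """Random-walk Metropolis sampler for a 1-D log-posterior
    (generic sketch of the MCMC machinery the abstract builds on)."""
    rng = random.Random(rng)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)        # propose a local move
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with ratio pi(xp)/pi(x)
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy posterior N(1, 0.5^2) for a basal-friction-like parameter;
# discard the first quarter of the chain as burn-in.
chain = metropolis(lambda x: -0.5 * ((x - 1.0) / 0.5) ** 2, 0.0, 20_000, rng=3)
print(sum(chain[5_000:]) / len(chain[5_000:]))
```

Pushing each retained sample through a forward model of future ice volume would then yield a sample from the posterior predictive distribution of the quantity of interest.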

  2. Uncertainty relations for general unitary operators

    NASA Astrophysics Data System (ADS)

    Bagchi, Shrobona; Pati, Arun Kumar

    2016-10-01

    We derive several uncertainty relations for two arbitrary unitary operators acting on physical states of a Hilbert space. We show that our bounds are tighter in various cases than the ones existing in the current literature. Using the uncertainty relation for the unitary operators, we obtain the tight state-independent lower bound for the uncertainty of two Pauli observables and anticommuting observables in higher dimensions. With regard to the minimum-uncertainty states, we derive the minimum-uncertainty state equation by the analytic method and relate this to the ground-state problem of the Harper Hamiltonian. Furthermore, the higher-dimensional limit of the uncertainty relations and minimum-uncertainty states are explored. From an operational point of view, we show that the uncertainty in the unitary operator is directly related to the visibility of quantum interference in an interferometer where one arm of the interferometer is affected by a unitary operator. This shows a principle of preparation uncertainty, i.e., for any quantum system, the amount of visibility for two general noncommuting unitary operators is nontrivially upper bounded.
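
As a reference point for the abstract's central quantity: the uncertainty of a unitary operator $U$ in a state $|\psi\rangle$ is commonly defined through its variance, which directly encodes the interference visibility mentioned above (standard notation assumed; the paper's tighter bounds are not reproduced here):

```latex
\Delta U^{2} \;=\; \langle\psi|U^{\dagger}U|\psi\rangle \;-\; \bigl|\langle\psi|U|\psi\rangle\bigr|^{2} \;=\; 1 - \bigl|\langle U\rangle\bigr|^{2},
```

so that an interferometer whose one arm applies $U$ shows fringe visibility $V = |\langle U\rangle|$ and hence $\Delta U^{2} = 1 - V^{2}$, which is the operational connection the abstract describes.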

  3. Responding to uncertainty in nursing practice.

    PubMed

    Thompson, C; Dowding, D

    2001-10-01

    Uncertainty is a fact of life for practising clinicians and cannot be avoided. This paper outlines the model of uncertainty presented by Katz (1988, Cambridge University Press, Cambridge, UK. pp. 544-565) and examines the descriptive and normative power of three broad theoretical and strategic approaches to dealing with uncertainty: rationality, bounded rationality and intuition. It concludes that nursing research and development (R&D) must acknowledge uncertainty more fully in its R&D agenda and that good-quality evaluation studies which directly compare intuitive with rational-analytical approaches for given clinical problems should be a dominant feature of future R&D.

  4. Modeling uncertainty: quicksand for water temperature modeling

    USGS Publications Warehouse

    Bartholow, John M.

    2003-01-01

    Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the references, are meant to supplement the presentation given at this conference.

  5. Uncertainty Assessment at BC Cribs at Hanford Using the ASCEM Toolset

    NASA Astrophysics Data System (ADS)

    Freedman, V. L.; Rockhold, M. L.; Chen, X.; Schuchardt, K.; Pau, G.; Vesselinov, V. V.; Porter, E.; Waichler, S.; Freshley, M.; Gorton, I.

    2012-12-01

    Uncertainty assessments in subsurface applications typically neglect uncertainty in the conceptual model, and attribute uncertainty to errors in parameters and inputs. At the BC Cribs site at Hanford in southeastern Washington State, conceptualization of the system is highly uncertain because only sparse information is available for the geologic conceptual model and the physical and chemical properties of the sediments. In this contribution, uncertainty in the conceptual model is explored using the ASCEM (Advanced Simulation Capability for Environmental Management) toolset. The ASCEM toolset includes a high performance flow and reactive transport simulator (Amanzi), as well as a user environment called Akuna. Akuna provides a range of tools to manage environmental and simulator data sets, perform model setup, manage simulation data, and visualize results. Core toolsets beneath the user interface provide algorithms for performing sensitivity analyses, parameter estimation, and uncertainty quantification. In this contribution, the uncertainty in technetium-99 transport through a three-dimensional, heterogeneous vadose-zone system is quantified with Monte Carlo simulation. Results show that significant prediction uncertainty in simulated concentrations can be introduced by conceptual model variation. It is also shown that the ASCEM toolset represents an integrated modeling environment that facilitates model setup, parameter optimization, and uncertainty analyses through high-performance computing.
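
The Monte Carlo propagation step described above can be sketched in a few lines; the transport "model", parameter values, and distribution below are illustrative stand-ins, not the BC Cribs configuration or Amanzi itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for vadose-zone transport: breakthrough of a conservative
# tracer under 1-D advection, with uncertain (lognormal) hydraulic
# conductivity. Amanzi solves the real 3-D reactive-transport problem;
# this sketch only illustrates Monte Carlo propagation.
def breakthrough(K, t=1.0, length=10.0, c0=1.0):
    velocity = K / 0.3                      # porosity 0.3 assumed
    return c0 if velocity * t >= length else 0.0

ln_K_mu, ln_K_sigma = 2.0, 0.8              # assumed parameter uncertainty
K_samples = rng.lognormal(ln_K_mu, ln_K_sigma, size=5000)
conc = np.array([breakthrough(K) for K in K_samples])

# The spread of the ensemble IS the prediction uncertainty.
print("P(breakthrough by t=1):", conc.mean())
```

Repeating the same loop under alternative conceptual models (different `breakthrough` implementations) is the cheapest way to expose the conceptual-model uncertainty the abstract emphasizes.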

  6. Force calibration using errors-in-variables regression and Monte Carlo uncertainty evaluation

    NASA Astrophysics Data System (ADS)

    Bartel, Thomas; Stoudt, Sara; Possolo, Antonio

    2016-06-01

    An errors-in-variables regression method is presented as an alternative to the ordinary least-squares regression computation currently employed for determining the calibration function for force measuring instruments from data acquired during calibration. A Monte Carlo uncertainty evaluation for the errors-in-variables regression is also presented. The corresponding function (which we call measurement function, often called analysis function in gas metrology) necessary for the subsequent use of the calibrated device to measure force, and the associated uncertainty evaluation, are also derived from the calibration results. Comparisons are made, using real force calibration data, between the results from the errors-in-variables and ordinary least-squares analyses, as well as between the Monte Carlo uncertainty assessment and the conventional uncertainty propagation employed at the National Institute of Standards and Technology (NIST). The results show that the errors-in-variables analysis properly accounts for the uncertainty in the applied calibrated forces, and that the Monte Carlo method, owing to its intrinsic ability to model uncertainty contributions accurately, yields a better representation of the calibration uncertainty throughout the transducer’s force range than the methods currently in use. These improvements notwithstanding, the differences between the results produced by the current and by the proposed new methods generally are small because the relative uncertainties of the inputs are small and most contemporary load cells respond approximately linearly to such inputs. For this reason, there will be no compelling need to revise any of the force calibration reports previously issued by NIST.
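
The Monte Carlo part of the evaluation can be sketched as follows; the data, uncertainties, and near-linear response are invented for illustration (NIST's actual calibration functions and errors-in-variables fit are more elaborate).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic calibration data: applied force (kN) and transducer response,
# both carrying uncertainty. Values and the near-linear response are
# assumptions for illustration only.
force = np.array([10., 20., 30., 40., 50.])
resp = 0.002 * force + rng.normal(0, 1e-4, force.size)
u_force, u_resp = 0.01, 1e-4        # assumed standard uncertainties

# Monte Carlo: perturb BOTH variables (the errors-in-variables idea),
# refit, and read calibration uncertainty off the ensemble of fits.
slopes = []
for _ in range(2000):
    f = force + rng.normal(0, u_force, force.size)
    r = resp + rng.normal(0, u_resp, resp.size)
    slopes.append(np.polyfit(f, r, 1)[0])

slopes = np.array(slopes)
print(f"slope = {slopes.mean():.6f} +/- {slopes.std(ddof=1):.2e}")
```

Ordinary least squares would perturb only `r`; perturbing `f` as well is what lets the ensemble account for uncertainty in the applied calibrated forces.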

  7. An Efficient Deterministic Approach to Model-based Prediction Uncertainty Estimation

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the future inputs to the system. Prognostics algorithms must account for this inherent uncertainty. In addition, these algorithms never know the exact state of the system at the desired time of prediction, or the exact model describing the future evolution of the system, accumulating additional uncertainty into the predicted EOL. Prediction algorithms that do not account for these sources of uncertainty misrepresent the EOL and can lead to poor decisions based on their results. In this paper, we explore the impact of uncertainty in the prediction problem. We develop a general model-based prediction algorithm that incorporates these sources of uncertainty, and propose a novel approach to efficiently handle uncertainty in the future input trajectories of a system by using the unscented transformation. Using this approach, we are not only able to reduce the computational load but also to estimate the bounds of uncertainty in a deterministic manner, which can be useful during decision-making. Using a lithium-ion battery as a case study, we perform several simulation-based experiments to explore these issues, and validate the overall approach using experimental data from a battery testbed.
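
The unscented transformation at the heart of the proposed approach can be sketched as follows (a generic textbook form with a simple weighting scheme, not the authors' prognostics implementation).

```python
import numpy as np

# Minimal unscented transform: propagate a Gaussian (mean, cov) through a
# nonlinear function using 2n+1 deterministic sigma points instead of
# random sampling. This is why it is cheap and deterministic compared with
# Monte Carlo.
def unscented_transform(mean, cov, f, kappa=0.0):
    n = mean.size
    L = np.linalg.cholesky((n + kappa) * cov)
    sigma = [mean] + [mean + L[:, i] for i in range(n)] \
                   + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])
    y_mean = w @ y
    y_cov = sum(wi * np.outer(yi - y_mean, yi - y_mean)
                for wi, yi in zip(w, y))
    return y_mean, y_cov

# Sanity check: x ~ N(0, 1) through f(x) = x^2 has mean E[x^2] = 1.
m, c = unscented_transform(np.array([0.0]), np.array([[1.0]]),
                           lambda x: np.array([x[0] ** 2]))
print(m, c)
```

For any linear map the transform reproduces the mean and covariance exactly, which makes a convenient correctness check.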

  8. Uncertainty Quantification via Random Domain Decomposition and Probabilistic Collocation on Sparse Grids

    SciTech Connect

    Lin, Guang; Tartakovsky, Alexandre M.; Tartakovsky, Daniel M.

    2010-09-01

    Due to lack of knowledge or insufficient data, many physical systems are subject to uncertainty. Such uncertainty occurs on a multiplicity of scales. In this study, we conduct the uncertainty analysis of diffusion in random composites with two dominant scales of uncertainty: large-scale uncertainty in the spatial arrangement of materials and small-scale uncertainty in the parameters within each material. We propose a general two-scale framework that combines random domain decomposition (RDD) with the probabilistic collocation method (PCM) on sparse grids to quantify the large and small scales of uncertainty, respectively. Using sparse grid points instead of standard grids based on full tensor products for both the large and small scales of uncertainty can greatly reduce the overall computational cost, especially for random processes with small correlation lengths (large numbers of random dimensions). For the one-dimensional random contact-point and random inclusion problems, analytical solutions and Monte Carlo simulations, respectively, were used to verify the accuracy of the combined RDD-PCM approach. Additionally, we applied the approach to two- and three-dimensional examples to demonstrate that it provides efficient, robust and nonintrusive approximations for the statistics of diffusion in random composites.
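
The collocation ingredient can be illustrated in one random dimension; the response function is an invented toy (the paper's setting is multi-dimensional, with sparse grids and random domain decomposition on top).

```python
import numpy as np

# Probabilistic collocation in its simplest 1-D form: Gauss-Hermite
# quadrature evaluates the model at a few deterministic nodes and
# integrates against the standard normal density, i.e. output statistics
# come from a handful of model runs rather than thousands of Monte Carlo
# samples.
def collocation_mean(model, n_nodes=5):
    # nodes/weights for the probabilists' weight function exp(-x^2 / 2)
    x, w = np.polynomial.hermite_e.hermegauss(n_nodes)
    return (w @ model(x)) / np.sqrt(2.0 * np.pi)

# Toy response with one standard-normal input: E[exp(0.3 Z)] has the
# closed form exp(0.3**2 / 2), so the accuracy can be checked directly.
mean = collocation_mean(lambda z: np.exp(0.3 * z))
print(mean)
```

With 5 nodes the quadrature is exact for polynomials up to degree 9, which is why so few model evaluations suffice for smooth responses.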

  9. Evolution Time and Energy Uncertainty

    ERIC Educational Resources Information Center

    Boykin, Timothy B.; Kharche, Neerav; Klimeck, Gerhard

    2007-01-01

    Often one needs to calculate the evolution time of a state under a Hamiltonian with no explicit time dependence when only numerical methods are available. In cases such as this, the usual application of Fermi's golden rule and first-order perturbation theory is inadequate as well as being computationally inconvenient. Instead, what one needs are…

  10. Solving navigational uncertainty using grid cells on robots.

    PubMed

    Milford, Michael J; Wiles, Janet; Wyeth, Gordon F

    2010-01-01

    To successfully navigate their habitats, many mammals use a combination of two mechanisms, path integration and calibration using landmarks, which together enable them to estimate their location and orientation, or pose. In large natural environments, both these mechanisms are characterized by uncertainty: the path integration process is subject to the accumulation of error, while landmark calibration is limited by perceptual ambiguity. It remains unclear how animals form coherent spatial representations in the presence of such uncertainty. Navigation research using robots has determined that uncertainty can be effectively addressed by maintaining multiple probabilistic estimates of a robot's pose. Here we show how conjunctive grid cells in dorsocaudal medial entorhinal cortex (dMEC) may maintain multiple estimates of pose using a brain-based robot navigation system known as RatSLAM. Based both on rodent spatially-responsive cells and functional engineering principles, the cells at the core of the RatSLAM computational model have similar characteristics to rodent grid cells, which we demonstrate by replicating the seminal Moser experiments. We apply the RatSLAM model to a new experimental paradigm designed to examine the responses of a robot or animal in the presence of perceptual ambiguity. Our computational approach enables us to observe short-term population coding of multiple location hypotheses, a phenomenon which would not be easily observable in rodent recordings. We present behavioral and neural evidence demonstrating that the conjunctive grid cells maintain and propagate multiple estimates of pose, enabling the correct pose estimate to be resolved over time even without uniquely identifying cues. While recent research has focused on the grid-like firing characteristics, accuracy and representational capacity of grid cells, our results identify a possible critical and unique role for conjunctive grid cells in filtering sensory uncertainty. 
We anticipate our
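
The multiple-hypothesis idea that the grid cells are argued to implement is, in robotics terms, a particle filter. A minimal 1-D sketch (corridor, landmarks, and range readings all invented for illustration, not RatSLAM's actual machinery):

```python
import numpy as np

rng = np.random.default_rng(3)

# Weighted pose hypotheses, kept in parallel until an observation
# disambiguates them.
n = 2000
particles = rng.uniform(0.0, 10.0, n)     # pose hypotheses along a corridor

def update(particles, landmark, observed_range, sigma=0.3):
    # weight each hypothesis by the likelihood of the observed range...
    w = np.exp(-0.5 * ((np.abs(particles - landmark) - observed_range)
                       / sigma) ** 2)
    w /= w.sum()
    # ...then resample so the surviving particles follow the posterior
    return particles[rng.choice(particles.size, particles.size, p=w)]

# A range-only reading to a landmark at x=5 is ambiguous: x ~ 4 and x ~ 6
# both fit, so the filter holds two clusters of hypotheses simultaneously.
particles = update(particles, landmark=5.0, observed_range=1.0)

# A second landmark at x=0 (range 4) resolves the ambiguity toward x ~ 4.
particles = update(particles, landmark=0.0, observed_range=4.0)
print("pose estimate:", particles.mean())
```

Between the two updates, the particle set is bimodal: that transient population of competing hypotheses is the analogue of the short-term population coding the abstract reports.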

  12. Uncertainty in surface water flood risk modelling

    NASA Astrophysics Data System (ADS)

    Butler, J. B.; Martin, D. N.; Roberts, E.; Domuah, R.

    2009-04-01

    uniform flow formulae (Manning's Equation) to direct flow over the model domain, sourcing water from the channel or sea so as to provide a detailed representation of river and coastal flood risk. The initial development step was to include spatially-distributed rainfall as a new source term within the model domain. This required optimisation to improve computational efficiency, given the ubiquity of 'wet' cells early on in the simulation. Collaboration with UK water companies has provided detailed drainage information, and from this a simplified representation of the drainage system has been included in the model via the inclusion of sinks and sources of water from the drainage network. This approach has clear advantages relative to a fully coupled method both in terms of reduced input data requirements and computational overhead. Further, given the difficulties associated with obtaining drainage information over large areas, tests were conducted to evaluate uncertainties associated with excluding drainage information and the impact that this has upon flood model predictions. This information can be used, for example, to inform insurance underwriting strategies and loss estimation as well as for emergency response and planning purposes. The Flowroute surface-water flood risk platform enables efficient mapping of areas sensitive to flooding from high-intensity rainfall events due to topography and drainage infrastructure. As such, the technology has widespread potential for use as a risk mapping tool by the UK Environment Agency, European Member States, water authorities, local governments and the insurance industry. Keywords: Surface water flooding, Model Uncertainty, Insurance Underwriting, Flood inundation modelling, Risk mapping.

  13. Final Report. Analysis and Reduction of Complex Networks Under Uncertainty

    SciTech Connect

    Marzouk, Youssef M.; Coles, T.; Spantini, A.; Tosatto, L.

    2013-09-30

    The project was a collaborative effort among MIT, Sandia National Laboratories (local PI Dr. Habib Najm), the University of Southern California (local PI Prof. Roger Ghanem), and The Johns Hopkins University (local PI Prof. Omar Knio, now at Duke University). Our focus was the analysis and reduction of large-scale dynamical systems emerging from networks of interacting components. Such networks underlie myriad natural and engineered systems. Examples important to DOE include chemical models of energy conversion processes, and elements of national infrastructure—e.g., electric power grids. Time scales in chemical systems span orders of magnitude, while infrastructure networks feature both local and long-distance connectivity, with associated clusters of time scales. These systems also blend continuous and discrete behavior; examples include saturation phenomena in surface chemistry and catalysis, and switching in electrical networks. Reducing size and stiffness is essential to tractable and predictive simulation of these systems. Computational singular perturbation (CSP) has been effectively used to identify and decouple dynamics at disparate time scales in chemical systems, allowing reduction of model complexity and stiffness. In realistic settings, however, model reduction must contend with uncertainties, which are often greatest in large-scale systems most in need of reduction. Uncertainty is not limited to parameters; one must also address structural uncertainties—e.g., whether a link is present in a network—and the impact of random perturbations, e.g., fluctuating loads or sources. Research under this project developed new methods for the analysis and reduction of complex multiscale networks under uncertainty, by combining computational singular perturbation (CSP) with probabilistic uncertainty quantification. CSP yields asymptotic approximations of reduced-dimensionality “slow manifolds” on which a multiscale dynamical system evolves. Introducing

  14. Computers and Computer Resources.

    ERIC Educational Resources Information Center

    Bitter, Gary

    1980-01-01

    This resource directory provides brief evaluative descriptions of six popular home computers and lists selected sources of educational software, computer books, and magazines. For a related article on microcomputers in the schools, see p53-58 of this journal issue. (SJL)

  15. Inverse Sensitivity/Uncertainty Methods Development for Nuclear Fuel Cycle Applications

    NASA Astrophysics Data System (ADS)

    Arbanas, G.; Dunn, M. E.; Williams, M. L.

    2014-04-01

    The Standardized Computer Analyses for Licensing Evaluation (SCALE) software package developed at the Oak Ridge National Laboratory includes codes that propagate uncertainties available in the nuclear data libraries to compute uncertainties in nuclear application performance parameters. We report on our recent efforts to extend this capability by developing an inverse sensitivity/uncertainty (IS/U) methodology that identifies the improvements in nuclear data that are needed to compute application responses within prescribed tolerances, while minimizing the cost of such data improvements. We describe our progress to date and present a simple test case for our method. Our methodology is directly applicable to thermal and intermediate neutron energy systems because it addresses the implicit neutron resonance self-shielding effects that are essential to accurate modeling of such systems. This methodology is likely to increase the efficiency of nuclear data efforts.

  16. Hydrological model uncertainty assessment in southern Africa

    NASA Astrophysics Data System (ADS)

    Hughes, D. A.; Kapangaziwiri, E.; Sawunyama, T.

    2010-06-01

    The importance of hydrological uncertainty analysis has been emphasized in recent years and there is an urgent need to incorporate uncertainty estimation into water resources assessment procedures used in the southern Africa region. The region is characterized by a paucity of accurate data and limited human resources, but the need for informed development decisions is critical to social and economic development. One of the main sources of uncertainty is related to the estimation of the parameters of hydrological models. This paper proposes a framework for establishing parameter values, exploring parameter inter-dependencies and setting parameter uncertainty bounds for a monthly time-step rainfall-runoff model (Pitman model) that is widely used in the region. The method is based on well-documented principles of sensitivity and uncertainty analysis, but recognizes the limitations that exist within the region (data scarcity and accuracy, model user attitudes, etc.). Four example applications taken from different climate and physiographic regions of South Africa illustrate that the methods are appropriate for generating behavioural stream flow simulations which include parameter uncertainty. The parameters that dominate the model response and their degree of uncertainty vary between regions. Some of the results suggest that the uncertainty bounds will be too wide for effective water resources decision making. Further work is required to reduce some of the subjectivity in the methods and to investigate other approaches for constraining the uncertainty. The paper recognizes that probability estimates of uncertainty and methods to include input climate data uncertainties need to be incorporated into the framework in the future.
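
Sampling parameters within bounds and reading uncertainty off the resulting ensemble of simulations can be sketched as follows; the two-parameter "model" and its bounds are invented stand-ins, not the Pitman model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy monthly rainfall-runoff relation with two uncertain parameters
# (illustrative only; the Pitman model has many more parameters and a
# monthly water-balance structure).
rain = np.array([120., 80., 10., 0., 5., 60.])     # mm/month, synthetic

def simulate(runoff_coeff, baseflow):
    return runoff_coeff * rain + baseflow

# Sample parameters within assumed physical bounds and collect the
# ensemble of simulated flows; the percentile band is the parameter
# uncertainty bound on the simulation.
flows = np.array([simulate(rng.uniform(0.1, 0.4), rng.uniform(0.0, 5.0))
                  for _ in range(2000)])
lo, hi = np.percentile(flows, [5, 95], axis=0)
print(lo[0], hi[0])     # uncertainty band for the first month
```

Narrowing the sampled parameter ranges with regional knowledge is exactly the constraining step the paper argues is needed to keep such bands useful for decision making.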

  17. Data Fusion: A decision analysis tool that quantifies geological and parametric uncertainty

    SciTech Connect

    Porter, D.W.

    1996-04-01

    Engineering projects such as siting waste facilities and performing remediation are often driven by geological and hydrogeological uncertainties. Geological understanding and hydrogeological parameters such as hydraulic conductivity are needed to achieve reliable engineering design. Information from non-invasive and minimally invasive data sets offers potential for reduction in uncertainty, but a single data type does not usually meet all needs. Data Fusion uses Bayesian statistics to update prior knowledge with information from diverse data sets as the data are acquired. Prior knowledge takes the form of first-principles models (e.g., groundwater flow) and spatial continuity models for heterogeneous properties. The variability of heterogeneous properties is modeled in a form motivated by statistical physics as a Markov random field. A computer reconstruction of targets of interest is produced within a quantified statistical uncertainty. The computed uncertainty provides a rational basis for identifying data gaps and for assessing data worth to optimize data acquisition. Further, the computed uncertainty provides a way to determine the confidence of achieving adequate safety margins in engineering design. Beyond design, Data Fusion provides the basis for real-time computer monitoring of remediation. Working with the DOE Office of Technology Development (OTD), the author has developed and patented a Data Fusion Workstation system that has been used on jobs at the Hanford, Savannah River, Pantex and Fernald DOE sites. Further applications include an army depot at Letterkenny, PA and commercial industrial sites.
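
The Bayesian updating at the core of Data Fusion can be illustrated with the simplest conjugate case, a scalar Gaussian prior fused with successive noisy measurements (the actual system fuses spatial Markov random fields with flow models, not scalars).

```python
# Minimal Bayesian data-fusion step: a Gaussian prior over a property
# (e.g. hydraulic conductivity) updated with noisy measurements.
def fuse(prior_mu, prior_var, meas, meas_var):
    k = prior_var / (prior_var + meas_var)      # gain toward the data
    post_mu = prior_mu + k * (meas - prior_mu)
    post_var = (1.0 - k) * prior_var            # uncertainty shrinks
    return post_mu, post_var

mu, var = 1.0, 4.0          # prior from a first-principles flow model
for m in [2.0, 1.8, 2.2]:   # successive measurements, variance 1.0 each
    mu, var = fuse(mu, var, m, 1.0)
print(mu, var)              # uncertainty reduced as data are acquired
```

The monotone shrinkage of `var` is the quantified-uncertainty signal the abstract describes: when it stops shrinking appreciably, further data acquisition of that type has little worth.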

  18. A new algorithm for importance analysis of the inputs with distribution parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Li, Luyi; Lu, Zhenzhou

    2016-10-01

    Importance analysis is aimed at finding the contributions by the inputs to the uncertainty in a model output. For structural systems involving inputs with distribution parameter uncertainty, the contributions by the inputs to the output uncertainty are governed by both the variability and the parameter uncertainty in their probability distributions. A natural and consistent way to arrive at importance analysis results in such cases would be a three-loop nested Monte Carlo (MC) sampling strategy, in which the parameters are sampled in the outer loop and the inputs are sampled in the inner nested double loop. However, the computational effort of this procedure is often prohibitive for engineering problems. This paper therefore proposes a new, efficient algorithm for importance analysis of the inputs in the presence of parameter uncertainty. By introducing a 'surrogate sampling probability density function (SS-PDF)' and incorporating single-loop MC theory into the computation, the proposed algorithm can reduce the original three-loop nested MC computation to a single-loop one in terms of model evaluations, which requires substantially less computational effort. Methods for choosing a proper SS-PDF are also discussed in the paper. The efficiency and robustness of the proposed algorithm are demonstrated by the results of several examples.
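
The single-loop reweighting idea behind an SS-PDF can be sketched with plain importance sampling (toy model and densities assumed; the paper's estimators for importance measures are more involved): evaluate the model once on samples from a fixed surrogate density, then estimate statistics under any candidate input distribution by reweighting.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(x):                      # stand-in for an expensive structural model
    return x ** 2 + 1.0

q_mu, q_sigma = 0.0, 2.0           # surrogate density, deliberately wide
x = rng.normal(q_mu, q_sigma, 100000)
y = model(x)                       # the ONLY (expensive) model evaluations

# Self-normalized importance sampling: mean of the output under any
# Gaussian input distribution p, without re-running the model.
def mean_under(p_mu, p_sigma):
    log_p = -0.5 * ((x - p_mu) / p_sigma) ** 2 - np.log(p_sigma)
    log_q = -0.5 * ((x - q_mu) / q_sigma) ** 2 - np.log(q_sigma)
    w = np.exp(log_p - log_q)
    return np.sum(w * y) / np.sum(w)

# Distribution-parameter uncertainty: sweep p's parameters at no extra
# model cost.
print(mean_under(0.0, 1.0), mean_under(0.5, 1.0))
```

This is what collapses the three-loop nesting: the outer loop over distribution parameters becomes a loop over cheap weight recomputations.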

  19. Uncertainty reasoning in expert systems

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik

    1993-01-01

    Intelligent control is a very successful way to transform the expert's knowledge of the type 'if the velocity is big and the distance from the object is small, hit the brakes and decelerate as fast as possible' into an actual control. To apply this transformation, one must choose appropriate methods for reasoning with uncertainty, i.e., one must: (1) choose the representation for words like 'small', 'big'; (2) choose operations corresponding to 'and' and 'or'; (3) choose a method that transforms the resulting uncertain control recommendations into a precise control strategy. The wrong choice can drastically affect the quality of the resulting control, so the problem of choosing the right procedure is very important. From a mathematical viewpoint these choice problems correspond to non-linear optimization and are therefore extremely difficult. In this project, a new mathematical formalism (based on group theory) is developed that allows us to solve the problem of optimal choice and thus: (1) explain why the existing choices are really the best (in some situations); (2) explain a rather mysterious fact that fuzzy control (i.e., control based on the experts' knowledge) is often better than the control by these same experts; and (3) give choice recommendations for the cases when traditional choices do not work.

  20. Communicating Storm Surge Forecast Uncertainty

    NASA Astrophysics Data System (ADS)

    Troutman, J. A.; Rhome, J.

    2015-12-01

    When it comes to tropical cyclones, storm surge is often the greatest threat to life and property along the coastal United States. The coastal population density has dramatically increased over the past 20 years, putting more people at risk. Informing emergency managers, decision-makers and the public about the potential for wind-driven storm surge, however, has been extremely difficult. Recently, the Storm Surge Unit at the National Hurricane Center in Miami, Florida has developed a prototype experimental storm surge watch/warning graphic to help communicate this threat more effectively by identifying areas most at risk for life-threatening storm surge. This prototype is the initial step in the transition toward a NWS storm surge watch/warning system and highlights the inundation levels that have a 10% chance of being exceeded. The guidance for this product is the Probabilistic Hurricane Storm Surge (P-Surge) model, which predicts the probability of various storm surge heights by statistically evaluating numerous SLOSH model simulations. Questions remain, however, as to whether exceedance values other than the 10% level may be of equal importance to forecasters. P-Surge data from 2014's Hurricane Arthur are used to ascertain the practicality of incorporating other exceedance data into storm surge forecasts. Extracting forecast uncertainty information by analyzing P-Surge exceedances overlaid with track and wind intensity forecasts proves to be beneficial for forecasters and decision support.

  1. Strong majorization entropic uncertainty relations

    NASA Astrophysics Data System (ADS)

    Rudnicki, Łukasz; Puchała, Zbigniew; Życzkowski, Karol

    2014-05-01

    We analyze entropic uncertainty relations in a finite-dimensional Hilbert space and derive several strong bounds for the sum of two entropies obtained in projective measurements with respect to any two orthogonal bases. We improve the recent bounds by Coles and Piani [P. Coles and M. Piani, Phys. Rev. A 89, 022112 (2014), 10.1103/PhysRevA.89.022112], which are known to be stronger than the well-known result of Maassen and Uffink [H. Maassen and J. B. M. Uffink, Phys. Rev. Lett. 60, 1103 (1988), 10.1103/PhysRevLett.60.1103]. Furthermore, we find a bound based on majorization techniques, which also happens to be stronger than the recent results involving the largest singular values of submatrices of the unitary matrix connecting both bases. The first set of bounds gives better results for unitary matrices close to the Fourier matrix, while the second one provides a significant improvement in the opposite sectors. Some results derived admit generalization to arbitrary mixed states, so that corresponding bounds are increased by the von Neumann entropy of the measured state. The majorization approach is finally extended to the case of several measurements.
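
For orientation, the Maassen-Uffink baseline that these majorization bounds strengthen can be stated compactly (standard notation; the paper's stronger bounds are not reproduced here):

```latex
H(A) + H(B) \;\ge\; -2 \log c, \qquad c \;=\; \max_{i,j}\,\bigl|\langle a_i | b_j \rangle\bigr|,
```

where $H(A)$ and $H(B)$ are the Shannon entropies of the outcome distributions for projective measurements in the two orthonormal bases $\{|a_i\rangle\}$ and $\{|b_j\rangle\}$, and $c$ is the largest overlap between them.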

  2. The Species Delimitation Uncertainty Principle

    PubMed Central

    Adams, Byron J.

    2001-01-01

    If, as Einstein said, "it is the theory which decides what we can observe," then "the species problem" could be solved by simply improving our theoretical definition of what a species is. However, because delimiting species entails predicting the historical fate of evolutionary lineages, species appear to behave according to the Heisenberg Uncertainty Principle, which states that the most philosophically satisfying definitions of species are the least operational, and as species concepts are modified to become more operational they tend to lose their philosophical integrity. Can species be delimited operationally without losing their philosophical rigor? To mitigate the contingent properties of species that tend to make them difficult for us to delimit, I advocate a set of operations that takes into account the prospective nature of delimiting species. Given the fundamental role of species in studies of evolution and biodiversity, I also suggest that species delimitation proceed within the context of explicit hypothesis testing, like other scientific endeavors. The real challenge is not so much the inherent fallibility of predicting the future but rather adequately sampling and interpreting the evidence available to us in the present. PMID:19265874

  3. On the dominant uncertainty source of climate change projections at the local scale

    NASA Astrophysics Data System (ADS)

    Fatichi, Simone; Ivanov, Valeriy; Paschalis, Athanasios; Molnar, Peter; Rimkus, Stefan; Kim, Jongho; Peleg, Nadav; Burlando, Paolo; Caporali, Enrica

    2016-04-01

    Decision makers and stakeholders are usually concerned about climate change projections at local spatial scales and fine temporal resolutions. This contrasts with the reliability of climate models, which is typically higher at the global and regional scales. Therefore, there is a demand for advanced methodologies that offer the capability of transferring predictions of climate models and their relative uncertainty to scales commensurate with practical applications and for higher-order statistics (e.g., a few square kilometres and sub-daily scale). A stochastic downscaling technique that makes use of an hourly weather generator (AWE-GEN) and of a Bayesian methodology to weight realizations from different climate models is used to generate local scale meteorological time series of plausible "futures". We computed factors of change from realizations of 32 climate models used in the Coupled Model Intercomparison Project Phase 5 (CMIP5) and for different emission scenarios (RCP 4.5 and RCP 8.5). Future climate projections for several meteorological variables (precipitation, air temperature, relative humidity, shortwave radiation) are simulated at three locations characterized by remarkably different climates: Zurich (Switzerland), Miami and San Francisco (USA). The methodology is designed to partition three main sources of uncertainty: uncertainty due to climate models (model epistemic uncertainty), anthropogenic forcings (scenario uncertainty), and internal climate variability (stochastic uncertainty). The three types of uncertainty sources are considered as dependent, implicitly accounting for possible co-variances among the sources. For air temperature, the magnitude of the different uncertainty sources is comparable for mid-of-the-century projections, while scenario uncertainty dominates at large lead-times. The dominant source of uncertainty for changes in precipitation mean and extremes is internal climate variability, which accounts for more than 80% of the total

  4. The detectability of brown dwarfs - Predictions and uncertainties

    NASA Technical Reports Server (NTRS)

    Nelson, L. A.; Rappaport, S.; Joss, P. C.

    1993-01-01

    In order to determine the likelihood for the detection of isolated brown dwarfs in ground-based observations as well as in future space-based astronomy missions, and in order to evaluate the significance of any detections that might be made, we must first know the expected surface density of brown dwarfs on the celestial sphere as a function of limiting magnitude, wavelength band, and Galactic latitude. It is the purpose of this paper to provide theoretical estimates of this surface density, as well as the range of uncertainty in these estimates resulting from various theoretical uncertainties. We first present theoretical cooling curves for low-mass stars that we have computed with the latest version of our stellar evolution code. We use our evolutionary results to compute theoretical brown-dwarf luminosity functions for a wide range of assumed initial mass functions and stellar birth rate functions. The luminosity functions, in turn, are utilized to compute theoretical surface density functions for brown dwarfs on the celestial sphere. We find, in particular, that for reasonable theoretical assumptions, the currently available upper bounds on the brown-dwarf surface density are consistent with the possibility that brown dwarfs contribute a substantial fraction of the mass of the Galactic disk.

  5. PIV Uncertainty Methodologies for CFD Code Validation at the MIR Facility

    SciTech Connect

    Piyush Sabharwall; Richard Skifton; Carl Stoots; Eung Soo Kim; Thomas Conder

    2013-12-01

    Currently, computational fluid dynamics (CFD) is widely used in the nuclear thermal hydraulics field for design and safety analyses. To validate CFD codes, high-quality multi-dimensional flow field data are essential. The Matched Index of Refraction (MIR) Flow Facility at Idaho National Laboratory has a unique capability to contribute to the development of validated CFD codes through the use of Particle Image Velocimetry (PIV). The significance of the MIR facility is that it permits non-intrusive velocity measurement techniques, such as PIV, through complex models without requiring probes and other instrumentation that disturb the flow. At the heart of any PIV calculation is the cross-correlation, which is used to estimate the displacement of particles in some small part of the image over the time span between two images; this displacement is indicated by the location of the largest correlation peak. In the MIR facility, uncertainty quantification is a challenging task due to the use of optical measurement techniques. This study is developing a reliable method, and an accompanying computer code, to automatically analyze the uncertainty and sensitivity of the measured data. The main objective is to develop a well-established uncertainty quantification method for the MIR Flow Facility, which involves many complicated uncertainty factors. The uncertainty sources are examined in depth by categorizing them into uncertainties from the MIR flow loop and from the PIV system (including particle motion, image distortion, and data processing). Each uncertainty source is then mathematically modeled or adequately defined. Finally, this study will provide a method and procedure to quantify the experimental uncertainty in the MIR Flow Facility with sample test results.
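
    The cross-correlation step at the heart of PIV can be sketched as follows: estimate the integer-pixel displacement between two interrogation windows via an FFT-based cross-correlation and locate the tallest peak. This is a minimal illustrative sketch; a production PIV code would add windowing, sub-pixel peak fitting, and outlier validation:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement between two interrogation windows,
    estimated from the location of the largest cross-correlation peak."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Circular cross-correlation via the FFT correlation theorem.
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices so displacements can be negative.
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

# Synthetic check: shift a random particle image by (3, 5) pixels.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))
print(piv_displacement(img, shifted))  # → (3, 5)
```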

  6. Generalized Entropic Uncertainty Relations with Tsallis' Entropy

    NASA Technical Reports Server (NTRS)

    Portesi, M.; Plastino, A.

    1996-01-01

    A generalization of the entropic formulation of the Uncertainty Principle of Quantum Mechanics is considered with the introduction of the q-entropies recently proposed by Tsallis. The concomitant generalized measure is illustrated for the case of phase and number operators in quantum optics. Interesting results are obtained when making use of q-entropies as the basis for constructing generalized entropic uncertainty measures.
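
    The Tsallis q-entropy underlying the generalized measure has the closed form S_q = (1 - Σ_i p_i^q)/(q - 1) (with k = 1), and it recovers the Shannon entropy -Σ_i p_i ln p_i in the limit q → 1. A minimal numerical sketch with an illustrative distribution (not the phase/number-operator example of the paper):

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis q-entropy S_q = (1 - sum_i p_i**q) / (q - 1), with k = 1;
    reduces to the Shannon entropy -sum_i p_i ln p_i as q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # 0 * ln 0 is taken as 0
    if q == 1.0:
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

p = [0.5, 0.25, 0.25]
print(tsallis_entropy(p, 2.0))       # 1 - (0.25 + 0.0625 + 0.0625) = 0.625
print(tsallis_entropy(p, 1.0))       # Shannon: 1.5 * ln 2 ≈ 1.0397
print(tsallis_entropy(p, 1.000001))  # approaches the Shannon value
```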

  7. Spiritual uncertainty: exemplars of 2 hospice patients.

    PubMed

    Stephenson, Pamela Shockey

    2014-01-01

    Spirituality is important to persons approaching the end of life. The ambiguous nature of dying and spirituality creates many opportunities for uncertainty. This article presents 2 exemplars from hospice patients about the different ways that spiritual uncertainty affected their dying experience. PMID:24919092

  8. Worry, Intolerance of Uncertainty, and Statistics Anxiety

    ERIC Educational Resources Information Center

    Williams, Amanda S.

    2013-01-01

    Statistics anxiety is a problem for most graduate students. This study investigates the relationship between intolerance of uncertainty, worry, and statistics anxiety. Intolerance of uncertainty was significantly related to worry, and worry was significantly related to three types of statistics anxiety. Six types of statistics anxiety were…

  9. Uncertainty Propagation in an Ecosystem Nutrient Budget.

    EPA Science Inventory

    New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of fr...
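
    Classical uncertainty propagation for a budget built from sums and differences adds the terms' standard errors in quadrature. A minimal sketch with hypothetical budget terms (the names and values are illustrative, not the study's data):

```python
import math

# Hypothetical nutrient budget: net load = inputs - outputs, with each
# term's (estimate, standard error); values are illustrative only.
terms = {
    "river_input": (120.0, 8.0),
    "atmospheric": ( 15.0, 3.0),
    "export":      (-90.0, 6.0),
    "burial":      (-20.0, 4.0),
}
net = sum(v for v, _ in terms.values())
# First-order propagation for a sum of independent terms:
# se(net) = sqrt(sum of se_i**2)
se_net = math.sqrt(sum(se ** 2 for _, se in terms.values()))
print(f"net = {net} +/- {se_net:.2f}")  # → net = 25.0 +/- 11.18
```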

  10. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  11. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  12. The need for model uncertainty analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phosphorous (P) loss models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. All P loss models, however, have an inherent amount of uncertainty associated with them. In this study, we conducted an uncertainty analysis with ...

  13. Assessment of Uncertainty-Infused Scientific Argumentation

    ERIC Educational Resources Information Center

    Lee, Hee-Sun; Liu, Ou Lydia; Pallant, Amy; Roohr, Katrina Crotts; Pryputniewicz, Sarah; Buck, Zoë E.

    2014-01-01

    Though addressing sources of uncertainty is an important part of doing science, it has largely been neglected in assessing students' scientific argumentation. In this study, we initially defined a scientific argumentation construct in four structural elements consisting of claim, justification, uncertainty qualifier, and uncertainty…

  14. The Economic Implications of Carbon Cycle Uncertainty

    SciTech Connect

    Smith, Steven J.; Edmonds, James A.

    2006-10-17

    This paper examines the implications of uncertainty in the carbon cycle for the cost of stabilizing carbon-dioxide concentrations. We find that uncertainty in our understanding of the carbon cycle has significant implications for the costs of a climate stabilization policy, equivalent to a change in concentration target of up to 100 ppmv.

  15. Micro-Pulse Lidar Signals: Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth J.; Campbell, James R.; Starr, David OC. (Technical Monitor)

    2002-01-01

    Micro-pulse lidar (MPL) systems are small, autonomous, eye-safe lidars used for continuous observations of the vertical distribution of cloud and aerosol layers. Since the construction of the first MPL in 1993, procedures have been developed to correct for various instrument effects present in MPL signals. The primary instrument effects include afterpulse, laser-detector cross-talk, and overlap, i.e., poor near-range (less than 6 km) focusing. The accurate correction of both afterpulse and overlap effects is required to study both clouds and aerosols. Furthermore, the outgoing energy of the laser pulses and the statistical uncertainty of the MPL detector must also be correctly determined in order to assess the accuracy of MPL observations. The uncertainties associated with the afterpulse, overlap, pulse energy, detector noise, and all remaining quantities affecting measured MPL signals are determined in this study. The uncertainties are propagated through the entire MPL correction process to give a net uncertainty on the final corrected MPL signal. The results show that in the near range, the overlap uncertainty dominates. At altitudes above the overlap region, the dominant source of uncertainty is the uncertainty in the pulse energy. However, if the laser energy is low, then during mid-day, high solar background levels can significantly reduce the signal-to-noise ratio of the detector. In such a case, the statistical uncertainty of the detector count rate becomes dominant at altitudes above the overlap region.
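
    The propagation described can be sketched for a simplified correction of the form (raw - background - afterpulse) / (overlap × energy): absolute uncertainties add in quadrature under subtraction, relative uncertainties under multiplication and division. Both the functional form and all numbers below are illustrative assumptions, not the authors' exact correction equations:

```python
import math

def corrected_signal(raw, s_raw, bkg, s_bkg, ap, s_ap, ovl, s_ovl, e, s_e):
    """Corrected signal and its propagated 1-sigma uncertainty for the
    simplified model corrected = (raw - bkg - ap) / (ovl * e)."""
    num = raw - bkg - ap
    # Subtraction: absolute uncertainties add in quadrature.
    s_num = math.sqrt(s_raw**2 + s_bkg**2 + s_ap**2)
    cor = num / (ovl * e)
    # Division/multiplication: relative uncertainties add in quadrature.
    s_cor = abs(cor) * math.sqrt((s_num / num)**2
                                 + (s_ovl / ovl)**2
                                 + (s_e / e)**2)
    return cor, s_cor

# Illustrative counts: raw signal, background, afterpulse, overlap, energy.
cor, s_cor = corrected_signal(1000.0, 10.0, 200.0, 5.0, 50.0, 2.0,
                              0.8, 0.04, 1.0, 0.02)
print(f"{cor:.1f} +/- {s_cor:.1f}")  # → 937.5 +/- 52.4
```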

  16. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  17. Identifying uncertainties in Arctic climate change projections

    NASA Astrophysics Data System (ADS)

    Hodson, Daniel L. R.; Keeley, Sarah P. E.; West, Alex; Ridley, Jeff; Hawkins, Ed; Hewitt, Helene T.

    2013-06-01

    Wide-ranging climate changes are expected in the Arctic by the end of the 21st century, but projections of the size of these changes vary widely across current global climate models. This variation represents a large source of uncertainty in our understanding of the evolution of Arctic climate. Here we systematically quantify and assess the model uncertainty in Arctic climate changes in two CO2 doubling experiments: a multimodel ensemble (CMIP3) and an ensemble constructed using a single model (HadCM3) with multiple parameter perturbations (THC-QUMP). These two ensembles allow us to assess the contribution that both structural and parameter variations across models make to the total uncertainty and to begin to attribute sources of uncertainty in projected changes. We find that parameter uncertainty is a major source of uncertainty in certain aspects of Arctic climate, and also that uncertainties in the mean climate state in the 20th century, most notably in the northward Atlantic ocean heat transport and Arctic sea ice volume, are a significant source of uncertainty for projections of future Arctic change. We suggest that better observational constraints on these quantities will lead to significant improvements in the precision of projections of future Arctic climate change.

  18. Disturbance, the uncertainty principle and quantum optics

    NASA Technical Reports Server (NTRS)

    Martens, Hans; Demuynck, Willem M.

    1993-01-01

    It is shown how a disturbance-type uncertainty principle can be derived from an uncertainty principle for joint measurements. To achieve this, we first clarify the meaning of 'inaccuracy' and 'disturbance' in quantum mechanical measurements. The case of photon number and phase is treated as an example, and it is applied to a quantum non-demolition measurement using the optical Kerr effect.

  19. Gamma-Ray Telescope and Uncertainty Principle

    ERIC Educational Resources Information Center

    Shivalingaswamy, T.; Kagali, B. A.

    2012-01-01

    Heisenberg's Uncertainty Principle is one of the important basic principles of quantum mechanics. In most of the books on quantum mechanics, this uncertainty principle is generally illustrated with the help of a gamma ray microscope, wherein neither the image formation criterion nor the lens properties are taken into account. Thus a better…
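
    The textbook gamma-ray-microscope argument the abstract alludes to can be checked numerically: resolving an electron's position to Δx ≈ λ/(2 sin θ) requires a scattered photon whose recoil leaves the momentum uncertain by Δp ≈ 2h sin θ/λ, so Δx·Δp ≈ h, comfortably above the quantum limit ħ/2 = h/(4π). The wavelength and aperture half-angle below are illustrative choices, not values from the paper:

```python
import math

h = 6.62607015e-34           # Planck constant, J s
lam = 1.0e-12                # 1 pm gamma-ray wavelength (illustrative)
theta = math.radians(30.0)   # microscope aperture half-angle (illustrative)

dx = lam / (2.0 * math.sin(theta))       # diffraction-limited position spread
dp = 2.0 * h * math.sin(theta) / lam     # photon-recoil momentum spread
# The sin(theta) factors cancel, so the product is h regardless of aperture.
print(f"dx*dp / h = {dx * dp / h:.6f}")  # → dx*dp / h = 1.000000
```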

  20. Uncertainty--We Do Need It.

    ERIC Educational Resources Information Center

    Cothern, C. Richard; Cothern, Margaret Fogt

    1980-01-01

    The precision of measurements in today's society is discussed and is related to the range of uncertainty or variation of measurement. Numerous examples provide insight into the margin of error in any measurement. The issue of uncertainty is particularly applicable to levels of toxic chemicals in the environment. (SA)