Practical post-calibration uncertainty analysis: Yucca Mountain, Nevada, USA
NASA Astrophysics Data System (ADS)
James, S. C.; Doherty, J.; Eddebbarh, A.
2009-12-01
The values of parameters in a groundwater flow model govern the precision of predictions of future system behavior. Predictive precision, thus, typically depends on an ability to infer values of system properties from historical measurements through calibration. When such data are scarce, or when their information content with respect to parameters that are most relevant to predictions of interest is weak, predictive uncertainty may be high, even if the model is “calibrated.” Recent advances help recognize this condition, quantitatively evaluate predictive uncertainty, and suggest a path toward improved predictive accuracy by identifying sources of predictive uncertainty and by determining what observations will most effectively reduce this uncertainty. We demonstrate linear and nonlinear predictive error/uncertainty analyses as applied to a groundwater flow model of Yucca Mountain, Nevada, the US’s proposed site for disposal of high-level radioactive waste. Both of these types of uncertainty analysis are readily implemented as an adjunct to model calibration with medium to high parameterization density. Linear analysis yields contributions made by each parameter to a prediction’s uncertainty and the worth of different observations, both existing and yet-to-be-gathered, toward reducing this uncertainty. Nonlinear analysis provides more accurate characterization of the uncertainty of model predictions while yielding their (approximate) probability distribution functions. This paper applies the above methods to a prediction of specific discharge and confirms the uncertainty bounds on specific discharge supplied in the Yucca Mountain Project License Application. Furthermore, Monte Carlo simulations confirm that hydrogeologic units thought to be flow barriers have probability distributions skewed toward lower permeabilities.
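The linear analysis described above can be sketched in a few lines: under a first-order approximation, a prediction's variance is sᵀCs for sensitivity vector s and parameter covariance C, and with an uncorrelated (diagonal) C each parameter's contribution separates out. The values below are illustrative, not from the Yucca Mountain model.

```python
import numpy as np

# First-order (linear) predictive uncertainty: for sensitivity vector s
# (d prediction / d parameter) and parameter covariance C, the prediction
# variance is s^T C s. Sensitivities and variances here are made up.
s = np.array([0.8, -0.3, 0.1])   # prediction sensitivities to 3 parameters
C = np.diag([0.04, 0.25, 1.0])   # prior parameter variances (uncorrelated)

pred_var = s @ C @ s             # total prediction variance
contrib = s**2 * np.diag(C)      # per-parameter contributions (diagonal C only)

print(pred_var, contrib)
```

Ranking `contrib` is exactly the "contribution made by each parameter to a prediction's uncertainty" the abstract refers to, in its simplest diagonal-prior form.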
Thomas, R.E.
1982-03-01
An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
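The Latin Hypercube method the evaluation refers to is simple to sketch: split each input's range into n equal strata, draw one point per stratum, then randomly permute each column so the strata are paired at random across inputs. A minimal unit-cube version (the sizes are arbitrary):

```python
import numpy as np

def latin_hypercube(n, k, rng):
    """Return an n-by-k Latin Hypercube Sample on the unit cube."""
    # one uniform draw inside each of the n strata, per input
    u = (rng.random((n, k)) + np.arange(n)[:, None]) / n
    for j in range(k):
        u[:, j] = rng.permutation(u[:, j])  # randomize stratum pairings
    return u

rng = np.random.default_rng(1)
sample = latin_hypercube(10, 2, rng)
# each column visits every stratum exactly once:
print(np.sort(np.floor(sample * 10), axis=0).T)
```

The stratification is what lets LHS estimate output probability density functions with fewer runs than simple random sampling; as the abstract notes, extracting per-input uncertainty information from the result still requires follow-up steps such as rank transformation and stepwise regression.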
Deterministic uncertainty analysis
Worley, B.A.
1987-01-01
Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig.
Antarctic Photochemistry: Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Stewart, Richard W.; McConnell, Joseph R.
1999-01-01
Understanding the photochemistry of the Antarctic region is important for several reasons. Analysis of ice cores provides historical information on several species such as hydrogen peroxide and sulfur-bearing compounds. The former can potentially provide information on the history of oxidants in the troposphere and the latter may shed light on DMS-climate relationships. Extracting such information requires that we be able to model the photochemistry of the Antarctic troposphere and relate atmospheric concentrations to deposition rates and sequestration in the polar ice. This paper deals with one aspect of the uncertainty inherent in photochemical models of the high latitude troposphere: that arising from imprecision in the kinetic data used in the calculations. Such uncertainties in Antarctic models tend to be larger than those in models of mid to low latitude clean air. One reason is the lower temperatures which result in increased imprecision in kinetic data, assumed to be best characterized at 298K. Another is the inclusion of a DMS oxidation scheme in the present model. Many of the rates in this scheme are less precisely known than are rates in the standard chemistry used in many stratospheric and tropospheric models.
Analysis of Infiltration Uncertainty
R. McCurley
2003-10-27
The primary objectives of this uncertainty analysis are: (1) to develop and justify a set of uncertain parameters along with associated distributions; and (2) to use the developed uncertain parameter distributions and the results from selected analog site calculations done in "Simulation of Net Infiltration for Modern and Potential Future Climates" (USGS 2001 [160355]) to obtain the net infiltration weighting factors for the glacial transition climate. These weighting factors are applied to unsaturated zone (UZ) flow fields in Total System Performance Assessment (TSPA), as outlined in the "Total System Performance Assessment-License Application Methods and Approach" (BSC 2002 [160146], Section 3.1) as a method for the treatment of uncertainty. This report is a scientific analysis because no new mathematical or physical models are developed herein; it is based on the use of the models developed in or for "Simulation of Net Infiltration for Modern and Potential Future Climates" (USGS 2001 [160355]). Any use of the term model refers to those developed in the infiltration numerical model report. TSPA License Application (LA) has included three distinct climate regimes in the comprehensive repository performance analysis for Yucca Mountain: present-day, monsoon, and glacial transition. Each climate regime was characterized using three infiltration-rate maps: a lower bound, an upper bound, and a mean value (equal to the average of the two boundary values). For each of these maps, which were obtained based on analog site climate data, a spatially averaged value was also calculated by the USGS. For a more detailed discussion of these infiltration-rate maps, see "Simulation of Net Infiltration for Modern and Potential Future Climates" (USGS 2001 [160355]). For this Scientific Analysis Report, spatially averaged values were calculated for the lower-bound, mean, and upper-bound climate analogs only for the glacial transition climate regime, within the
Uncertainty and calibration analysis
Coutts, D.A.
1991-03-01
All measurements contain some deviation from the true value which is being measured. In the common vernacular, this deviation between the true value and the measured value is called an inaccuracy, an error, or a mistake. Since all measurements contain errors, it is necessary to accept that there is a limit to how accurate a measurement can be. The uncertainty interval, combined with the confidence level, is one measure of the accuracy of a measurement or value. Without a statement of uncertainty (or a similar parameter) it is not possible to evaluate whether the accuracy of the measurement, or data, is appropriate. The preparation of technical reports, calibration evaluations, and design calculations should consider the accuracy of measurements and data being used. There are many methods to accomplish this. This report provides a consistent method for the handling of measurement tolerances, calibration evaluations, and uncertainty calculations. The SRS Quality Assurance (QA) Program requires that the uncertainty of technical data and instrument calibrations be acknowledged and estimated. The QA Program makes some specific technical requirements related to the subject but does not provide a philosophy or method on how uncertainty should be estimated. This report was prepared to provide a technical basis to support the calculation of uncertainties and the calibration of measurement and test equipment for any activity within the Experimental Thermal-Hydraulics (ETH) Group. The methods proposed in this report provide a graded approach for estimating the uncertainty of measurements, data, and calibrations. The method is based on the national consensus standard, ANSI/ASME PTC 19.1.
Uncertainty analysis of thermoreflectance measurements
NASA Astrophysics Data System (ADS)
Yang, Jia; Ziade, Elbara; Schmidt, Aaron J.
2016-01-01
We derive a generally applicable formula to calculate the precision of multi-parameter measurements that apply least squares algorithms. This formula, which accounts for experimental noise and uncertainty in the controlled model parameters, is then used to analyze the uncertainty of thermal property measurements with pump-probe thermoreflectance techniques. We compare the uncertainty of time domain thermoreflectance and frequency domain thermoreflectance (FDTR) when measuring bulk materials and thin films, considering simultaneous measurements of various combinations of thermal properties, including thermal conductivity, heat capacity, and thermal boundary conductance. We validate the uncertainty analysis using Monte Carlo simulations on data from FDTR measurements of an 80 nm gold film on fused silica.
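The paper's precision formula is specific to its thermoreflectance models, but the standard least-squares machinery underneath it can be sketched on a toy linear model: with Jacobian J and residual variance s², the parameter covariance is s²(JᵀJ)⁻¹. The model and noise level below are stand-ins, not the thermoreflectance model.

```python
import numpy as np

# Precision of least-squares parameter estimates for a toy model
# y = a + b*x: with Jacobian J and residual variance s2, the parameter
# covariance is s2 * inv(J^T J); its diagonal gives the variances of a, b.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.05, x.size)   # synthetic noisy data

J = np.column_stack([np.ones_like(x), x])           # d(model)/da, d(model)/db
beta, res, *_ = np.linalg.lstsq(J, y, rcond=None)
s2 = res[0] / (x.size - 2)                          # residual variance
cov = s2 * np.linalg.inv(J.T @ J)                   # parameter covariance

print(beta, np.sqrt(np.diag(cov)))                  # estimates, 1-sigma precisions
```

The paper's contribution is extending this picture to include uncertainty in the controlled model parameters as well as experimental noise; the sketch above covers only the noise term.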
Uncertainties in offsite consequence analysis
Young, M.L.; Harper, F.T.; Lui, C.H.
1996-03-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the U.S. Nuclear Regulatory Commission and the European Commission began co-sponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables using a formal expert judgment elicitation and evaluation process. This paper focuses on the methods used in and results of this on-going joint effort.
Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.
2010-01-01
Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decision-making. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in
ENHANCED UNCERTAINTY ANALYSIS FOR SRS COMPOSITE ANALYSIS
Smith, F.; Phifer, M.
2011-06-30
The Composite Analysis (CA) performed for the Savannah River Site (SRS) in 2009 (SRS CA 2009) included a simplified uncertainty analysis. The uncertainty analysis in the CA (Smith et al. 2009b) was limited to considering at most five sources in a separate uncertainty calculation performed for each POA. To perform the uncertainty calculations in a reasonable amount of time, the analysis was limited to 400 realizations and 2,000 years of simulated transport time, and the time steps used for the uncertainty analysis were increased from what was used in the CA base case analysis. As part of the CA maintenance plan, the Savannah River National Laboratory (SRNL) committed to improving the CA uncertainty/sensitivity analysis. The previous uncertainty analysis was constrained by the standard GoldSim licensing, which limits the user to running at most four Monte Carlo uncertainty calculations (also called realizations) simultaneously. Some of the limitations on the number of realizations that could be practically run and on the simulation time steps were removed by building a cluster of three HP ProLiant Windows servers with a total of 36 64-bit processors and by licensing the GoldSim DP-Plus distributed processing software. This allowed running as many as 35 realizations simultaneously (one processor is reserved as a master process that controls running the realizations). These enhancements to SRNL computing capabilities made it feasible to run the uncertainty analysis with 1,000 realizations, with the time steps employed in the base case CA calculations, with more sources, and with radionuclide transport simulated for 10,000 years. In addition, an importance screening analysis was performed to identify the class of stochastic variables that have the most significant impact on model uncertainty. This analysis ran the uncertainty model separately, testing the response to variations in the following five sets of model parameters: (a) K_d values (72 parameters for the 36 CA elements in
Risk Analysis and Uncertainty: Implications for Counselling
ERIC Educational Resources Information Center
Hassenzahl, David
2004-01-01
Over the past two decades, the risk analysis community has made substantial advances in understanding and describing uncertainty. Uncertainty is ubiquitous, complex, both quantitative and qualitative in nature, and often irreducible. Uncertainty thus creates a challenge when using risk analysis to evaluate the rationality of group and individual…
Sensitivity and Uncertainty Analysis Shell
1999-04-20
SUNS (Sensitivity and Uncertainty Analysis Shell) is a 32-bit application that runs under Windows 95/98 and Windows NT. It is designed to aid in statistical analyses for a broad range of applications. The class of problems for which SUNS is suitable is generally defined by two requirements: 1. A computer code is developed or acquired that models some processes for which input is uncertain and the user is interested in statistical analysis of the output of that code. 2. The statistical analysis of interest can be accomplished using Monte Carlo analysis. The implementation then requires that the user identify which input to the process model is to be manipulated for statistical analysis. With this information, the changes required to loosely couple SUNS with the process model can be completed. SUNS is then used to generate the required statistical sample and the user-supplied process model analyzes the sample. The SUNS post processor displays statistical results from any existing file that contains sampled input and output values.
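The loose-coupling pattern described above (a sampler generates inputs, the external code consumes them, statistics are computed on the collected outputs) can be sketched as follows; `process_model` is a hypothetical stand-in for the user's code, not part of SUNS.

```python
import numpy as np

# Loose coupling of a Monte Carlo sampler to a black-box process model:
# the sampler knows nothing about the model internals, only which inputs
# are uncertain. The model function and distributions here are made up.
def process_model(k, h):
    return k * h**2          # stand-in for the external computer code

rng = np.random.default_rng(3)
n = 2000
k = rng.normal(10.0, 1.0, n)      # uncertain input 1
h = rng.uniform(0.9, 1.1, n)      # uncertain input 2

# run the model once per sampled input set, as SUNS would drive the code
out = np.array([process_model(ki, hi) for ki, hi in zip(k, h)])
print(out.mean(), np.percentile(out, [5, 95]))
```

In a real coupling the loop body would write an input file, invoke the external code, and parse its output; the post-processing step (means, percentiles, histograms) is unchanged.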
Hunt, Randall J.
2012-01-01
Management decisions will often be directly informed by model predictions. However, we now know there can be no expectation of a single ‘true’ model; thus, model results are uncertain. Understandable reporting of underlying uncertainty provides necessary context to decision-makers, as model results are used for management decisions. This, in turn, forms a mechanism by which groundwater models inform a risk-management framework because uncertainty around a prediction provides the basis for estimating the probability or likelihood of some event occurring. Given that the consequences of management decisions vary, it follows that the extent of and resources devoted to an uncertainty analysis may depend on the consequences. For events with low impact, a qualitative, limited uncertainty analysis may be sufficient for informing a decision. For events with a high impact, on the other hand, the risks might be better assessed and associated decisions made using a more robust and comprehensive uncertainty analysis. The purpose of this chapter is to provide guidance on uncertainty analysis through discussion of concepts and approaches, which can vary from heuristic (i.e. the modeller’s assessment of prediction uncertainty based on trial and error and experience) to a comprehensive, sophisticated, statistics-based uncertainty analysis. Most of the material presented here is taken from Doherty et al. (2010) if not otherwise cited. Although the treatment here is necessarily brief, the reader can find citations for the source material and additional references within this chapter.
Uncertainty analysis for Ulysses safety evaluation report
NASA Technical Reports Server (NTRS)
Frank, Michael V.
1991-01-01
As part of the effort to review the Ulysses Final Safety Analysis Report and to understand the risk of plutonium release from the Ulysses spacecraft General Purpose Heat Source-Radioisotope Thermal Generator, the Interagency Nuclear Safety Review Panel (INSRP) performed an integrated, quantitative analysis of the uncertainties of the calculated risk of plutonium release from Ulysses. Using state-of-art probabilistic risk assessment technology, the uncertainty analysis accounted for both variability and uncertainty of the key parameters of the risk analysis. The results show that INSRP had high confidence that risk of fatal cancers from potential plutonium release associated with calculated launch and deployment accident scenarios is low.
Uncertainty quantification and error analysis
Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip
2010-01-01
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
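Propagation of individual measurement uncertainties through a defining functional expression follows the standard first-order rule u_f² = Σ (∂f/∂x_i · u_i)² for independent inputs. A sketch with a made-up expression and made-up 1-sigma uncertainties:

```python
import numpy as np

# First-order propagation of independent measurement uncertainties through
# a defining expression. The expression f = x*y/z and all numbers are
# illustrative, not an actual aerodynamic parameter.
def f(x, y, z):
    return x * y / z

x, y, z = 4.0, 3.0, 2.0
ux, uy, uz = 0.04, 0.03, 0.02        # 1-sigma measurement uncertainties

# partial derivatives evaluated at the measured values
dfdx, dfdy, dfdz = y / z, x / z, -x * y / z**2
uf = np.sqrt((dfdx * ux)**2 + (dfdy * uy)**2 + (dfdz * uz)**2)
print(f(x, y, z), uf)
```

The paper's extensions (correlated precision errors, calibration-standard covariance) add off-diagonal terms to this sum; the independent-input form above is the baseline case.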
The need for model uncertainty analysis
Technology Transfer Automated Retrieval System (TEKTRAN)
Phosphorous (P) loss models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. All P loss models, however, have an inherent amount of uncertainty associated with them. In this study, we conducted an uncertainty analysis with ...
Numerical Uncertainty Quantification for Radiation Analysis Tools
NASA Technical Reports Server (NTRS)
Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha
2007-01-01
Recently a new emphasis has been placed on engineering applications of space radiation analyses and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing: as the number of rays increases the associated uncertainty decreases, but the computational expense increases. Thus, a cost-benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is how many thicknesses are needed to obtain an accurate result. So convergence testing is performed to quantify the uncertainty associated with interpolating over different shield thickness spatial grids.
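The ray-count trade-off can be illustrated with a toy Monte Carlo: the statistical spread of the estimate falls roughly as 1/√N while the cost grows linearly with N. The integrand below is a stand-in, not the actual radiation-transport calculation.

```python
import numpy as np

# Toy illustration of the ray-count cost-benefit trade-off: estimate
# spread shrinks ~1/sqrt(N) as ray count N grows, while cost grows ~N.
rng = np.random.default_rng(5)

def estimate(n_rays):
    thickness = rng.random(n_rays) * 2.0   # toy path lengths, not real geometry
    return np.exp(-thickness).mean()       # toy attenuation average

ns = [100, 1000, 10000]
spreads = [np.std([estimate(n) for _ in range(200)]) for n in ns]
print(spreads)   # each roughly sqrt(10) smaller than the previous
```

Picking N then amounts to finding where the marginal reduction in spread no longer justifies the added compute, which is the cost-benefit analysis the abstract describes.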
Uncertainty Analysis for Photovoltaic Degradation Rates (Poster)
Jordan, D.; Kurtz, S.; Hansen, C.
2014-04-01
Dependable and predictable energy production is key to the long-term success of the PV industry. PV systems show a gradual performance decline over the lifetime of their exposure that depends on many different factors such as module technology, module type, mounting configuration, climate, etc. When degradation rates are determined from continuous data, the statistical uncertainty is easily calculated from the regression coefficients. However, total uncertainty that includes measurement uncertainty and instrumentation drift is far more difficult to determine. A Monte Carlo simulation approach was chosen to investigate a comprehensive uncertainty analysis. The most important factor for degradation rates is to avoid instrumentation that changes over time in the field. For instance, a drifting irradiance sensor can lead to substantially erroneous degradation rates; such drift can be avoided through regular calibration. However, the accuracy of the irradiance sensor has negligible impact on degradation rate uncertainty, emphasizing that precision (relative accuracy) is more important than absolute accuracy.
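The drift effect is easy to reproduce in a toy Monte Carlo: a sensor error that grows with time shifts the fitted degradation rate one-for-one, whereas zero-mean noise only widens its spread. The true rate and drift magnitude below are made up, not the study's values.

```python
import numpy as np

# Toy Monte Carlo of sensor drift biasing a fitted PV degradation rate.
rng = np.random.default_rng(4)
years = np.linspace(0, 10, 121)                  # monthly data, 10 years
true_rate = -0.5                                 # %/year degradation (made up)

def fitted_rate(drift_per_year):
    perf = 100.0 + true_rate * years             # true performance, %
    measured = perf + drift_per_year * years     # drifting-sensor error
    measured += rng.normal(0, 0.2, years.size)   # zero-mean measurement noise
    return np.polyfit(years, measured, 1)[0]     # fitted slope, %/year

rates = [fitted_rate(0.3) for _ in range(500)]   # 0.3 %/yr sensor drift
print(np.mean(rates))    # biased toward -0.2 instead of the true -0.5
```

A constant calibration offset, by contrast, shifts every point equally and leaves the slope untouched, which is why precision over time matters more than absolute accuracy here.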
Approach to uncertainty in risk analysis
Rish, W.R.
1988-08-01
In the Fall of 1985, EPA's Office of Radiation Programs (ORP) initiated a project to develop a formal approach to dealing with uncertainties encountered when estimating and evaluating risks to human health and the environment. Based on a literature review of modeling uncertainty, interviews with ORP technical and management staff, and input from experts on uncertainty analysis, a comprehensive approach was developed. This approach recognizes by design the constraints on budget, time, manpower, expertise, and availability of information often encountered in "real world" modeling. It is based on the observation that in practice risk modeling is usually done to support a decision process. As such, the approach focuses on how to frame a given risk modeling problem, how to use that framing to select an appropriate mixture of uncertainty analysis techniques, and how to integrate the techniques into an uncertainty assessment that effectively communicates important information and insight to decision-makers. The approach is presented in this report. Practical guidance on characterizing and analyzing uncertainties about model form and quantities and on effectively communicating uncertainty analysis results is included. Examples from actual applications are presented.
A modular approach to linear uncertainty analysis.
Weathers, J B; Luck, R; Weathers, J W
2010-01-01
This paper introduces a methodology to simplify the uncertainty analysis of large-scale problems where many outputs and/or inputs are of interest. The modular uncertainty technique presented here can be utilized to analyze the results spanning a wide range of engineering problems with constant sensitivities within parameter uncertainty bounds. The proposed modular approach provides the same results as the traditional propagation of errors methodology with fewer conceptual steps allowing for a relatively straightforward implementation of a comprehensive uncertainty analysis effort. The structure of the modular technique allows easy integration into most experimental/modeling programs or data acquisition systems. The proposed methodology also provides correlation information between all outputs, thus providing information not easily obtained using the traditional uncertainty process based on analyzing one data reduction equation (DRE)/model at a time. Finally, the paper presents a straightforward methodology to obtain the covariance matrix for the input variables using uncorrelated elemental sources of systematic uncertainties along with uncorrelated sources corresponding to random uncertainties.
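The covariance-assembly step the abstract mentions can be sketched for two inputs: an elemental systematic source shared by both produces the off-diagonal covariance, while sources unique to one input (systematic or random) contribute only to its diagonal term. The magnitudes below are made up.

```python
import numpy as np

# Input covariance matrix from uncorrelated elemental uncertainty sources:
# a systematic source shared by inputs 1 and 2 correlates them; unique
# systematic and random sources add only to the diagonal.
b_shared = 0.05          # systematic source common to both inputs
b1, b2 = 0.02, 0.03      # systematic sources unique to each input
r1, r2 = 0.01, 0.04      # random (precision) sources

cov = np.array([
    [b_shared**2 + b1**2 + r1**2, b_shared**2],
    [b_shared**2,                 b_shared**2 + b2**2 + r2**2],
])
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
print(cov)
print(corr)    # correlation induced purely by the shared systematic source
```

Once this matrix is assembled, the modular technique propagates it through each output's sensitivities, which is how it recovers output-to-output correlations that one-DRE-at-a-time analysis misses.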
Laser wavelength meter: analysis of measurement uncertainties
NASA Astrophysics Data System (ADS)
Skrzeczanowski, Wojciech; Zyczkowski, Marek; Dlugaszek, Andrzej
1999-08-01
The principle of operation of a laser-radiation wavelength meter based on a Fabry-Perot interferometer and a linear CCD camera is presented in the paper. A relation from which the laser wavelength can be calculated is derived, and a procedure for determining all component uncertainties of a measurement is described. The influence of these component uncertainties is analyzed, with example uncertainty estimates for four wavelength-meter configurations with different objective focal lengths.
Uncertainty analysis for Ulysses safety evaluation report
Frank, M.V.
1991-01-01
As part of the effort to review the Ulysses Final Safety Analysis Report and to understand the risk of plutonium release from the Ulysses spacecraft General Purpose Heat Source-Radioisotope Thermal Generator (GPHS-RTG), the Interagency Nuclear Safety Review Panel (INSRP) and the author performed an integrated, quantitative analysis of the uncertainties of the calculated risk of plutonium release from Ulysses. Using state-of-art probabilistic risk assessment technology, the uncertainty analysis accounted for both variability and uncertainty of the key parameters of the risk analysis. The results show that INSRP had high confidence that risk of fatal cancers from potential plutonium release associated with calculated launch and deployment accident scenarios is low.
Spatial uncertainty analysis of population models
Jager, Yetta; King, Anthony Wayne; Schumaker, Nathan; Ashwood, Tom L; Jackson, Barbara L
2004-01-01
This paper describes an approach for conducting spatial uncertainty analysis of spatial population models, and illustrates the ecological consequences of spatial uncertainty for landscapes with different properties. Spatial population models typically simulate birth, death, and migration on an input map that describes habitat. Typically, only a single reference map is available, but we can imagine that a collection of other, slightly different, maps could be drawn to represent a particular species' habitat. As a first approximation, our approach assumes that spatial uncertainty (i.e., the variation among values assigned to a location by such a collection of maps) is constrained by characteristics of the reference map, regardless of how the map was produced. Our approach produces lower levels of uncertainty than alternative methods used in landscape ecology because we condition our alternative landscapes on local properties of the reference map. Simulated spatial uncertainty was higher near the borders of patches. Consequently, average uncertainty was highest for reference maps with equal proportions of suitable and unsuitable habitat, and no spatial autocorrelation. We used two population viability models to evaluate the ecological consequences of spatial uncertainty for landscapes with different properties. Spatial uncertainty produced larger variation among predictions of a spatially explicit model than those of a spatially implicit model. Spatially explicit model predictions of final female population size varied most among landscapes with enough clustered habitat to allow persistence. In contrast, predictions of population growth rate varied most among landscapes with only enough clustered habitat to support a small population, i.e., near a spatially mediated extinction threshold. We conclude that spatial uncertainty has the greatest effect on persistence when the amount and arrangement of suitable habitat are such that habitat capacity is near the minimum
Uncertainty Analysis of Composite Structures
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Starnes, James H., Jr.; Peters, Jeanne M.
2000-01-01
A two-phase approach and a computational procedure are presented for predicting the variability in the nonlinear response of composite structures associated with variations in the geometric and material parameters of the structure. In the first phase, hierarchical sensitivity analysis is used to identify the major parameters, which have the most effect on the response quantities of interest. In the second phase, the major parameters are taken to be fuzzy parameters, and a fuzzy set analysis is used to determine the range of variation of the response, associated with preselected variations in the major parameters. The effectiveness of the procedure is demonstrated by means of a numerical example of a cylindrical panel with four T-shaped stiffeners and a circular cutout.
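The second phase described above, fuzzy-set propagation of the major parameters, is commonly implemented with alpha-cuts. The sketch below bounds a response over the alpha-cut box by vertex enumeration, which is exact only for responses monotonic in each parameter; the triangular membership functions and all names are our assumptions, not the paper's procedure:

```python
import itertools

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (lo, peak, hi) at membership alpha."""
    lo, peak, hi = tri
    return (lo + alpha * (peak - lo), hi - alpha * (hi - peak))

def fuzzy_range(response, fuzzy_params, alpha):
    """Bound the response over the alpha-cut box by evaluating all corners.

    Exact for responses monotonic in each parameter; a sketch of the
    second-phase fuzzy-set analysis, not the authors' code.
    """
    cuts = [alpha_cut(p, alpha) for p in fuzzy_params]
    values = [response(*corner) for corner in itertools.product(*cuts)]
    return min(values), max(values)
```

At alpha = 1 every cut collapses to its peak and the range degenerates to the nominal response; at alpha = 0 the full preselected variation of the major parameters is propagated.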
Uncertainty Propagation in Hypersonic Vehicle Aerothermoelastic Analysis
NASA Astrophysics Data System (ADS)
Lamorte, Nicolas Etienne
Hypersonic vehicles face a challenging flight environment. The aerothermoelastic analysis of their components requires numerous simplifying approximations. Identifying and quantifying the effect of uncertainties pushes the limits of the existing deterministic models, and is pursued in this work. An uncertainty quantification framework is used to propagate the effects of identified uncertainties on the stability margins and performance of the different systems considered. First, the aeroelastic stability of a typical section representative of a control surface on a hypersonic vehicle is examined. Variability in the uncoupled natural frequencies of the system is modeled to mimic the effect of aerodynamic heating. Next, the stability of an aerodynamically heated panel representing a component of the skin of a generic hypersonic vehicle is considered. Uncertainty in the location of transition from laminar to turbulent flow and in the heat flux prediction is quantified using CFD. In both cases significant reductions of the stability margins are observed. A loosely coupled airframe-integrated scramjet engine is considered next. The elongated body and cowl of the engine flow path are subject to harsh aerothermodynamic loading, which causes them to deform. Uncertainty associated with deformation prediction is propagated to the engine performance analysis. The cowl deformation is the main contributor to the sensitivity of the propulsion system performance. Finally, a framework for aerothermoelastic stability boundary calculation for hypersonic vehicles using CFD is developed. The use of CFD enables one to consider different turbulence conditions, laminar or turbulent, and different models of the air mixture, in particular a real gas model which accounts for dissociation of molecules at high temperature. The system is found to be sensitive to turbulence modeling as well as the location of the transition from laminar to turbulent flow. Real gas effects play a minor role in the
Identifying sources of uncertainty using covariance analysis
NASA Astrophysics Data System (ADS)
Hyslop, N. P.; White, W. H.
2010-12-01
Atmospheric aerosol monitoring often includes performing multiple analyses on a collected sample. Some common analyses resolve suites of elements or compounds (e.g., spectrometry, chromatography). Concentrations are determined through multi-step processes involving sample collection, physical or chemical analysis, and data reduction. Uncertainties in the individual steps propagate into uncertainty in the calculated concentration. The assumption in most treatments of measurement uncertainty is that errors in the various species concentrations measured in a sample are random and therefore independent of each other. This assumption is often not valid in speciated aerosol data because some errors can be common to multiple species. For example, an error in the sample volume will introduce a common error into all species concentrations determined in the sample, and these errors will correlate with each other. Measurement programs often use paired (collocated) measurements to characterize the random uncertainty in their measurements. Suites of paired measurements provide an opportunity to go beyond the characterization of measurement uncertainties in individual species to examine correlations amongst the measurement uncertainties in multiple species. This additional information can be exploited to distinguish sources of uncertainty that affect all species from those that only affect certain subsets or individual species. Data from the Interagency Monitoring of Protected Visual Environments (IMPROVE) program are used to illustrate these ideas. Nine analytes commonly detected in the IMPROVE network were selected for this analysis. The errors in these analytes can be reasonably modeled as multiplicative, and the natural log of the ratio of concentrations measured on the two samplers provides an approximation of the error. Figure 1 shows the covariation of these log ratios among the different analytes for one site. Covariance is strongest amongst the dust element (Fe, Ca, and
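The log-ratio construction can be reproduced directly: for collocated samplers, ln(a/b) approximates the difference of the two multiplicative errors, and its covariance matrix separates error sources shared by all species (e.g., sample volume) from species-specific ones. A minimal sketch, not the IMPROVE processing code:

```python
import numpy as np

def error_covariance(a, b):
    """Covariance of multiplicative measurement errors across species.

    a, b: (n_samples, n_species) concentrations from two collocated
    samplers. ln(a/b) approximates the multiplicative error difference;
    off-diagonal terms of its covariance reveal error sources common to
    multiple species.
    """
    log_ratio = np.log(np.asarray(a, float) / np.asarray(b, float))
    return np.cov(log_ratio, rowvar=False)
```

Simulating a shared volume error in one sampler produces clearly positive off-diagonal covariance, the signature the abstract describes for the dust elements.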
Leito, Signe; Mölder, Kadi; Künnapas, Allan; Herodes, Koit; Leito, Ivo
2006-07-14
An ISO GUM measurement uncertainty estimation procedure was developed for a liquid-chromatographic drug quality control method: assay of simvastatin in a drug formulation. In the quantification of uncertainty components, several practical approaches for including difficult-to-estimate uncertainty sources (such as uncertainty due to peak integration, uncertainty due to nonlinearity of the calibration curve, etc.) are presented. A detailed analysis of the contributions of the various uncertainty sources was carried out. The results were calculated based on different definitions of the measurand, and it was demonstrated that an unequivocal definition of the measurand is essential in order to obtain a rigorous uncertainty estimate. Two different calibration methods - single-point (1P) and five-point (5P) - were used and the obtained uncertainties and uncertainty budgets were compared. Results calculated using 1P and 5P calibrations agree very well. The uncertainty estimate for 1P calibration is only slightly larger than that for 5P calibration. PMID:16756985
Uncertainty Analysis for a Jet Flap Airfoil
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Cruz, Josue
2006-01-01
An analysis of variance (ANOVA) study was performed to quantify the potential uncertainties of lift and pitching moment coefficient calculations from a computational fluid dynamics code, relative to an experiment, for a jet flap airfoil configuration. Uncertainties due to a number of factors, including grid density, angle of attack, and jet flap blowing coefficient, were examined. The ANOVA software produced a numerical model of the input coefficient data, as functions of the selected factors, to a user-specified order (linear, 2-factor interaction, quadratic, or cubic). Residuals between the model and actual data were also produced at each of the input conditions, and uncertainty confidence intervals (in the form of Least Significant Differences, or LSD) for experimental, computational, and combined experimental/computational data sets were computed. The LSD bars indicate the smallest resolvable differences in the functional values (lift or pitching moment coefficient) attributable solely to changes in an independent variable, given just the input data points from the selected data sets. The software also provided a collection of diagnostics which evaluate the suitability of the input data set for use within the ANOVA process, and which examine the behavior of the resultant data, possibly suggesting transformations which should be applied to the data to reduce the LSD. The results illustrate some of the key features of, and results from, the uncertainty analysis studies, including the use of both numerical (continuous) and categorical (discrete) factors, the effects of the number and range of the input data points, and the effects of the number of factors considered simultaneously.
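The core of the approach, fitting a response model of user-specified order and deriving an LSD-like interval from the residuals, can be sketched for a single numerical factor. The commercial package handles multiple numerical and categorical factors simultaneously; the fixed coverage factor of 2 below is our assumption:

```python
import numpy as np

def fit_response_model(x, y, order=2):
    """Least-squares polynomial response model in one factor.

    A minimal stand-in for the commercial ANOVA package described above:
    fit a model of the chosen order, return its coefficients, the
    residuals at each input condition, and a crude least-significant-
    difference half-width from the residual standard error (coverage
    factor 2 assumed).
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    coeffs = np.polyfit(x, y, order)
    residuals = y - np.polyval(coeffs, x)
    dof = max(len(y) - (order + 1), 1)
    lsd = 2.0 * np.sqrt(np.sum(residuals**2) / dof)
    return coeffs, residuals, lsd
```

When the data are exactly described by the chosen order, the residuals and the LSD collapse to zero; real scatter between experiment and CFD inflates both.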
Uncertainty Analysis of Decomposing Polyurethane Foam
NASA Technical Reports Server (NTRS)
Hobbs, Michael L.; Romero, Vicente J.
2000-01-01
Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2
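The analytical mean value (MV) method amounts to first-order propagation: the response variance is the sum of squared products of partial derivatives and input standard deviations, with the derivatives taken numerically. A generic sketch using central differences (the step-size choice matters, exactly as the numerical-noise discussion above emphasizes); this is an illustration, not the finite element foam model:

```python
import numpy as np

def mean_value_uncertainty(f, x0, sigmas, rel_step=1e-4):
    """First-order (mean value) uncertainty: sigma_f = sqrt(sum (df/dxi * sigma_i)^2).

    Derivatives are computed by central finite differences about x0;
    returns the propagated standard deviation and the gradient.
    """
    x0 = np.asarray(x0, float)
    grads = np.empty_like(x0)
    for i in range(len(x0)):
        h = rel_step * max(abs(x0[i]), 1.0)
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        grads[i] = (f(xp) - f(xm)) / (2.0 * h)
    contributions = grads * np.asarray(sigmas, float)
    return np.sqrt(np.sum(contributions**2)), grads
```

When the response is itself a numerical derivative, as for the decomposition front velocity, each `f` evaluation already carries noise, which is why the study needed fine meshes and small time steps to stabilize the result.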
LCA data quality: sensitivity and uncertainty analysis.
Guo, M; Murphy, R J
2012-10-01
Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes and could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of the robustness of LCAs, especially comparative assessments. This study also presents an approach to integrating statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this enabled us to assign confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions. PMID:22854094
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2013-01-01
This paper presents the extended forward sensitivity analysis as a method to aid uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of the time and space steps compared with the other physical parameters of interest, the simulation can be run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results. The time and space step forward sensitivity analysis method can also replace traditional time step and grid convergence studies with much less computational cost. Two well-defined benchmark problems with manufactured solutions are used to demonstrate the method.
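For a single ODE the forward sensitivity method is easy to state: augment the state with s = dy/dp and integrate the sensitivity equation alongside it. The sketch below does this for dy/dt = -k*y, where s = dy/dk obeys ds/dt = -k*s - y, using explicit Euler; the paper's extension would add the time step itself as another sensitivity parameter, which is not shown here:

```python
import math

def forward_sensitivity_decay(k=1.0, y0=1.0, t_end=1.0, n=1000):
    """Forward sensitivity for dy/dt = -k*y.

    Integrates the sensitivity equation ds/dt = -k*s - y alongside the
    state with explicit Euler; returns (y, dy/dk) at t_end. The exact
    values are y = y0*exp(-k*t) and dy/dk = -t*y0*exp(-k*t).
    """
    dt = t_end / n
    y, s = y0, 0.0
    for _ in range(n):
        # Tuple assignment: both updates use the old (y, s) pair.
        y, s = y + dt * (-k * y), s + dt * (-k * s - y)
    return y, s
```

The computed sensitivity can then be multiplied by an uncertainty in k (or, in the extended method, by the step size) to rank its contribution against other parameters.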
Uncertainty analysis for Probable Maximum Precipitation estimates
NASA Astrophysics Data System (ADS)
Micovic, Zoran; Schaefer, Melvin G.; Taylor, George H.
2015-02-01
An analysis of uncertainty associated with Probable Maximum Precipitation (PMP) estimates is presented. The focus of the study is firmly on PMP estimates derived through meteorological analyses and not on statistically derived PMPs. Theoretical PMP cannot be computed directly and operational PMP estimates are developed through a stepwise procedure using a significant degree of subjective professional judgment. This paper presents a methodology for portraying the uncertain nature of PMP estimation by analyzing individual steps within the PMP derivation procedure whereby for each parameter requiring judgment, a set of possible values is specified and accompanied by expected probabilities. The resulting range of possible PMP values can be compared with the previously derived operational single-value PMP, providing measures of the conservatism and variability of the original estimate. To our knowledge, this is the first uncertainty analysis conducted for a PMP derived through meteorological analyses. The methodology was tested on the La Joie Dam watershed in British Columbia. The results indicate that the commonly used single-value PMP estimate could be more than 40% higher when possible changes in various meteorological variables used to derive the PMP are considered. The findings of this study imply that PMP estimates should always be characterized as a range of values recognizing the significant uncertainties involved in PMP estimation. In fact, we do not know at this time whether precipitation is actually upper-bounded, and if precipitation is upper-bounded, how closely PMP estimates approach the theoretical limit.
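The step-wise enumeration described above can be illustrated by treating each subjective step in the PMP derivation as a small discrete distribution of multipliers and combining them under an independence assumption. All values below are illustrative placeholders, not the La Joie Dam study's numbers:

```python
from itertools import product

def pmp_distribution(steps, base_depth=1.0):
    """Enumerate possible PMP values from per-step alternatives.

    steps: one list per subjective derivation step, each a list of
    (multiplier, probability) pairs. Steps are assumed independent.
    Returns sorted (pmp_value, joint_probability) pairs.
    """
    results = []
    for combo in product(*steps):
        value, prob = base_depth, 1.0
        for mult, p in combo:
            value *= mult
            prob *= p
        results.append((value, prob))
    return sorted(results)

# Hypothetical example: two judgment steps with expert-assigned probabilities.
steps = [
    [(0.95, 0.3), (1.00, 0.5), (1.10, 0.2)],   # storm maximization factor
    [(1.00, 0.6), (1.20, 0.4)],                # moisture adjustment
]
dist = pmp_distribution(steps)
```

Comparing the resulting range against the single-value operational PMP gives exactly the kind of conservatism/variability measure the abstract describes.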
Confronting deep uncertainties in risk analysis.
Cox, Louis Anthony
2012-10-01
How can risk analysts help to improve policy and decision making when the correct probabilistic relation between alternative acts and their probable consequences is unknown? This practical challenge of risk management with model uncertainty arises in problems from preparing for climate change to managing emerging diseases to operating complex and hazardous facilities safely. We review constructive methods for robust and adaptive risk analysis under deep uncertainty. These methods are not yet as familiar to many risk analysts as older statistical and model-based methods, such as the paradigm of identifying a single "best-fitting" model and performing sensitivity analyses for its conclusions. They provide genuine breakthroughs for improving predictions and decisions when the correct model is highly uncertain. We demonstrate their potential by summarizing a variety of practical risk management applications. PMID:22489541
Rahman, A.; Tsai, F.T.-C.; White, C.D.; Willson, C.S.
2008-01-01
This study investigates capture zone uncertainty that relates to the coupled semivariogram uncertainty of hydrogeological and geophysical data. Semivariogram uncertainty is represented by the uncertainty in structural parameters (range, sill, and nugget). We used the beta distribution function to derive the prior distributions of structural parameters. The probability distributions of structural parameters were further updated through the Bayesian approach with the Gaussian likelihood functions. Cokriging of noncollocated pumping test data and electrical resistivity data was conducted to better estimate hydraulic conductivity through autosemivariograms and pseudo-cross-semivariogram. Sensitivities of capture zone variability with respect to the spatial variability of hydraulic conductivity, porosity and aquifer thickness were analyzed using ANOVA. The proposed methodology was applied to the analysis of capture zone uncertainty at the Chicot aquifer in Southwestern Louisiana, where a regional groundwater flow model was developed. MODFLOW-MODPATH was adopted to delineate the capture zone. The ANOVA results showed that both capture zone area and compactness were sensitive to hydraulic conductivity variation. We concluded that the capture zone uncertainty due to the semivariogram uncertainty is much higher than that due to the kriging uncertainty for given semivariograms. In other words, the sole use of conditional variances of kriging may greatly underestimate the flow response uncertainty. Semivariogram uncertainty should also be taken into account in the uncertainty analysis. © 2008 ASCE.
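The Bayesian update of a structural parameter can be sketched on a grid: a scaled beta prior on the parameter's admissible interval multiplied by a Gaussian likelihood in the semivariogram misfit. The `misfit` function and all numbers below are hypothetical stand-ins, not the Chicot aquifer analysis:

```python
import numpy as np

def beta_pdf(u, a, b):
    """Unnormalized Beta(a, b) density on [0, 1]."""
    return u**(a - 1) * (1 - u)**(b - 1)

def update_structural_parameter(lo, hi, a, b, misfit, sigma, n=201):
    """Grid-based Bayesian update of one semivariogram parameter (e.g. range).

    Prior: Beta(a, b) scaled to [lo, hi]. Likelihood: Gaussian in the
    semivariogram fitting error `misfit(value)` with scale sigma.
    Returns the grid and the normalized posterior density.
    """
    grid = np.linspace(lo, hi, n)
    u = np.clip((grid - lo) / (hi - lo), 1e-9, 1 - 1e-9)
    prior = beta_pdf(u, a, b)
    errors = np.array([misfit(g) for g in grid])
    posterior = prior * np.exp(-0.5 * (errors / sigma)**2)
    dx = grid[1] - grid[0]
    return grid, posterior / (posterior.sum() * dx)

# Hypothetical: the semivariogram fit prefers a range of about 800 m.
grid, post = update_structural_parameter(
    200.0, 2000.0, a=2, b=2, misfit=lambda r: abs(r - 800.0), sigma=100.0)
```

Repeating this for sill and nugget (and then sampling the posteriors) propagates structural-parameter uncertainty into the cokriged conductivity fields and, downstream, into the capture zone.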
Pretest uncertainty analysis for chemical rocket engine tests
NASA Technical Reports Server (NTRS)
Davidian, Kenneth J.
1987-01-01
A parametric pretest uncertainty analysis has been performed for a chemical rocket engine test at a unique 1000:1 area ratio altitude test facility. Results from the parametric study provide the error limits required in order to maintain a maximum uncertainty of 1 percent on specific impulse. Equations used in the uncertainty analysis are presented.
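For Isp = F / (mdot * g0), the pretest propagation combines independent relative errors in quadrature, so a 1 percent cap on the specific impulse uncertainty with two equal contributors allows roughly 0.707 percent each. A generic sketch of that combination (the numbers used below are illustrative, not the facility's actual error limits):

```python
import math

def isp_uncertainty(thrust, u_thrust, mdot, u_mdot, g0=9.80665):
    """Specific impulse Isp = F / (mdot * g0) and its relative uncertainty.

    Combines independent relative errors in thrust and mass flow rate in
    quadrature, the standard first-order pretest propagation.
    """
    isp = thrust / (mdot * g0)
    rel_u = math.sqrt((u_thrust / thrust)**2 + (u_mdot / mdot)**2)
    return isp, rel_u
```

Inverting this relation for a target relative uncertainty yields the allowable measurement error limits, which is what the parametric pretest study tabulates.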
Application of uncertainty analysis to cooling tower thermal performance tests
Yost, J.G.; Wheeler, D.E.
1986-01-01
The purpose of this paper is to provide an overview of uncertainty analyses. The following topics are addressed: 1. A review and summary of the basic constituents of an uncertainty analysis, with definitions and discussion of basic terms; 2. A discussion of the benefits and uses of uncertainty analysis; and 3. Example uncertainty analyses, with emphasis on the problems, limitations, and site-specific complications.
Representation of analysis results involving aleatory and epistemic uncertainty.
Johnson, Jay Dean; Helton, Jon Craig; Oberkampf, William Louis; Sallaberry, Cedric J.
2008-08-01
Procedures are described for the representation of results in analyses that involve both aleatory uncertainty and epistemic uncertainty, with aleatory uncertainty deriving from an inherent randomness in the behavior of the system under study and epistemic uncertainty deriving from a lack of knowledge about the appropriate values to use for quantities that are assumed to have fixed but poorly known values in the context of a specific study. Aleatory uncertainty is usually represented with probability and leads to cumulative distribution functions (CDFs) or complementary cumulative distribution functions (CCDFs) for analysis results of interest. Several mathematical structures are available for the representation of epistemic uncertainty, including interval analysis, possibility theory, evidence theory and probability theory. In the presence of epistemic uncertainty, there is not a single CDF or CCDF for a given analysis result. Rather, there is a family of CDFs and a corresponding family of CCDFs that derive from epistemic uncertainty and have an uncertainty structure that derives from the particular uncertainty structure (i.e., interval analysis, possibility theory, evidence theory, probability theory) used to represent epistemic uncertainty. Graphical formats for the representation of epistemic uncertainty in families of CDFs and CCDFs are investigated and presented for the indicated characterizations of epistemic uncertainty.
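When epistemic uncertainty is itself represented probabilistically, the family of CDFs described above is often generated by double-loop (nested) Monte Carlo: the outer loop samples the epistemically uncertain quantities, the inner loop samples the aleatory variability, and each outer draw yields one CDF. A minimal sketch, with an interval assumption on a single epistemic parameter (all distributions here are illustrative):

```python
import random

def family_of_cdfs(n_epistemic=50, n_aleatory=2000, threshold=1.5, seed=1):
    """Double-loop Monte Carlo producing a family of CDF values.

    Outer loop: sample an epistemically uncertain mean mu, treated as
    fixed but unknown on [0, 1]. Inner loop: sample aleatory variability
    N(mu, 1). Each outer draw yields one estimate of P(X <= threshold),
    i.e. one point of one CDF in the family.
    """
    rng = random.Random(seed)
    probs = []
    for _ in range(n_epistemic):
        mu = rng.uniform(0.0, 1.0)   # epistemic: fixed but poorly known
        hits = sum(rng.gauss(mu, 1.0) <= threshold for _ in range(n_aleatory))
        probs.append(hits / n_aleatory)
    return probs
```

The spread of `probs` across outer draws is the epistemic band at that threshold; plotting the full CDFs for each outer draw gives the graphical families the paper investigates.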
Uncertainty Budget Analysis for Dimensional Inspection Processes (U)
Valdez, Lucas M.
2012-07-26
This paper is intended to provide guidance on how to prepare an uncertainty analysis of a dimensional inspection process through the use of an uncertainty budget analysis. The uncertainty analysis follows the same methodology as the ISO GUM standard for calibration and testing. A specific distinction is drawn between how Type A and Type B uncertainty analyses are used in general and in specific processes. Theory and applications are presented both as a generalized approach to estimating measurement uncertainty and as guidance on how to report and present these estimates for dimensional measurements in a dimensional inspection process. The analysis of this uncertainty budget shows that a well-controlled dimensional inspection process produces a conservative process uncertainty, which can be attributed to the assumptions necessary to obtain the best possible results.
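A GUM-style budget combination can be sketched as a root-sum-square over the components: Type A entries are statistically estimated standard uncertainties used as-is, while Type B entries are converted from distribution half-widths (a rectangular distribution, hence division by sqrt(3), is assumed below; the k = 2 coverage factor assumes an approximately normal combined result). The budget values are illustrative, not from the paper:

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-square combination of an uncertainty budget per the GUM.

    components: list of (value, kind) pairs where kind is 'A' for a
    statistically estimated standard uncertainty (used as-is) or 'B' for
    the half-width of an assumed rectangular distribution (divided by
    sqrt(3)). Returns (combined standard uncertainty, expanded
    uncertainty with k = 2).
    """
    total = 0.0
    for value, kind in components:
        u = value if kind == "A" else value / math.sqrt(3.0)
        total += u * u
    uc = math.sqrt(total)
    return uc, 2.0 * uc

# Illustrative length-measurement budget, values in micrometers.
budget = [(0.8, "A"),   # repeatability of repeated readings
          (0.5, "B"),   # thermal expansion bound
          (0.6, "B")]   # scale calibration certificate half-width
uc, expanded = combined_standard_uncertainty(budget)
```

Listing each component with its type, distribution, and divisor in a table, then combining as above, is the standard way such a dimensional inspection budget is reported.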
Kettler, Susanne; Kennedy, Marc; McNamara, Cronan; Oberdörfer, Regina; O'Mahony, Cian; Schnabel, Jürgen; Smith, Benjamin; Sprong, Corinne; Faludi, Roland; Tennant, David
2015-08-01
Uncertainty analysis is an important component of dietary exposure assessments, needed to understand correctly the strengths and limits of their results. Often, standard screening procedures are applied in a first step, which results in conservative estimates. If those screening procedures indicate a potential exceedance of health-based guidance values, more refined models are applied within the tiered approach. However, the sources and types of uncertainties in deterministic and probabilistic models can vary or differ. A key objective of this work has been the mapping of different sources and types of uncertainties to better understand how to best use uncertainty analysis to generate a more realistic picture of dietary exposure. In dietary exposure assessments, uncertainties can be introduced by knowledge gaps about the exposure scenario, the parameters, and the model itself. With this mapping, general and model-independent uncertainties have been identified and described, as well as those which can be introduced and influenced by the specific model during the tiered approach. This analysis identifies general uncertainties common to point estimates (screening or deterministic methods) and probabilistic exposure assessment methods. To provide further clarity, general sources of uncertainty affecting many dietary exposure assessments should be separated from model-specific uncertainties. PMID:25890086
Geoengineering to Avoid Overshoot: An Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Tanaka, K.
2009-04-01
Geoengineering (or climate engineering) using stratospheric sulfur injections (Crutzen, 2006) has been proposed for research in case of an urgent need to stop global warming once other mitigation efforts are exhausted. Although there are a number of concerns over this idea (e.g. Robock, 2008), it is still useful to consider geoengineering as a possible method to limit warming caused by overshoot. Overshoot is a feature of low stabilization scenarios aiming for a stringent target (Rao et al., 2008), in which total radiative forcing temporarily exceeds the target before reaching it. Scenarios achieving a 50% emission reduction by 2050 produce overshoot. Overshoot could cause sustained warming for decades due to the inertia of the climate system. If stratospheric sulfur injections were to be used as a "last resort" to avoid overshoot, what would be the suitable start-year and injection profile of such an intervention? Wigley (2006) examined the climate response to combined mitigation/geoengineering scenarios with the intent to avert overshoot. Wigley's analysis demonstrated the basic potential of such a combined mitigation/geoengineering approach to avoid temperature overshoot; however, it considered only simplistic sulfur injection profiles (all started in 2010) and just one mitigation scenario, and did not examine the sensitivity of the climate response to any underlying uncertainties. This study builds upon Wigley's premise of the combined mitigation/geoengineering approach and brings the associated uncertainty into the analysis. First, this study addresses how much geoengineering intervention would be needed to avoid overshoot when the associated uncertainty is considered. Second, would a geoengineering intervention of such a magnitude, including uncertainty, be permissible in view of all the other side effects? This study begins from the supposition that geoengineering could be employed to cap warming at 2.0°C since preindustrial. A few
Uncertainty Analysis with Site Specific Groundwater Models: Experiences and Observations
Brewer, K.
2003-07-15
Groundwater flow and transport predictions are a major component of remedial action evaluations for contaminated groundwater at the Savannah River Site. Because all groundwater modeling results are subject to uncertainty from various causes, quantification of the level of uncertainty in the modeling predictions is beneficial to project decision makers. Complex site-specific models present formidable challenges for implementing an uncertainty analysis.
Measurement uncertainty analysis techniques applied to PV performance measurements
Wells, C.
1992-10-01
The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates that quality assurance and quality control measures have been accomplished; and defines valid data as data having known and documented paths of origin, including theory; measurements; traceability to measurement standards; computations; and uncertainty analysis of results.
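The interval described above, within which we believe the true value lies, can be computed from repeated readings as the mean plus or minus k times the standard error. A minimal sketch with a fixed coverage factor k = 2 (roughly 95% confidence for large n; a Student-t factor would be used for small samples, which this sketch does not do):

```python
import statistics

def uncertainty_interval(readings, k=2.0):
    """Interval about the mean within which the true value is believed to lie.

    Returns (low, high) = mean +/- k * s / sqrt(n), treating the readings
    as independent and identically distributed; a random-uncertainty-only
    sketch (systematic components would be combined in separately).
    """
    n = len(readings)
    mean = statistics.fmean(readings)
    sem = statistics.stdev(readings) / n**0.5
    return mean - k * sem, mean + k * sem
```

Run before a test with expected scatter, this is a pre-test analysis; run on acquired data, it becomes the post-test check that stated objectives were met.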
Micro-Pulse Lidar Signals: Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Welton, Ellsworth J.; Campbell, James R.; Starr, David O'C. (Technical Monitor)
2002-01-01
Micro-pulse lidar (MPL) systems are small, autonomous, eye-safe lidars used for continuous observations of the vertical distribution of cloud and aerosol layers. Since the construction of the first MPL in 1993, procedures have been developed to correct for various instrument effects present in MPL signals. The primary instrument effects include afterpulse (laser-detector cross-talk) and overlap (poor near-range focusing, below about 6 km). Accurate correction of both afterpulse and overlap effects is required to study both clouds and aerosols. Furthermore, the outgoing energy of the laser pulses and the statistical uncertainty of the MPL detector must also be correctly determined in order to assess the accuracy of MPL observations. The uncertainties associated with the afterpulse, overlap, pulse energy, detector noise, and all remaining quantities affecting measured MPL signals are determined in this study. The uncertainties are propagated through the entire MPL correction process to give a net uncertainty on the final corrected MPL signal. The results show that in the near range, the overlap uncertainty dominates. At altitudes above the overlap region, the dominant source of uncertainty is uncertainty in the pulse energy. However, if the laser energy is low, then during mid-day, high solar background levels can significantly reduce the signal-to-noise ratio of the detector. In such a case, the statistical uncertainty of the detector count rate becomes dominant at altitudes above the overlap region.
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty, and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach over LHS: (1) it performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB), and average deviation amplitude (D). The flood forecasting uncertainty is also substantially reduced with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
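Under GLUE, whatever the sampler (LHS or, as above, ɛ-NSGAII), each parameter set is scored by a likelihood measure, compared against a behavioral threshold, and the survivors are likelihood-weighted. A generic sketch with Nash-Sutcliffe efficiency as the measure; all names are our own and the toy one-parameter model stands in for the Xinanjiang model:

```python
import random

def glue_behavioral(simulate, observed, sampler, n_samples, threshold):
    """GLUE: keep parameter sets whose Nash-Sutcliffe efficiency (NSE)
    exceeds the behavioral threshold, then weight them by likelihood.

    `sampler` draws one parameter set per call (LHS or evolutionary
    draws would plug in here). Returns (theta, weight) pairs with the
    weights normalized to sum to 1.
    """
    mean_obs = sum(observed) / len(observed)
    var_obs = sum((o - mean_obs)**2 for o in observed)
    behavioral = []
    for _ in range(n_samples):
        theta = sampler()
        sim = simulate(theta)
        sse = sum((s - o)**2 for s, o in zip(sim, observed))
        nse = 1.0 - sse / var_obs
        if nse >= threshold:
            behavioral.append((theta, nse))
    total = sum(l for _, l in behavioral) or 1.0
    return [(theta, l / total) for theta, l in behavioral]
```

The likelihood-weighted behavioral sets then give parameter posteriors and forecast uncertainty bands; a sampler that concentrates draws in high-NSE regions reaches a given number of behavioral sets with far fewer model runs, the efficiency gain the paper reports for ɛ-NSGAII.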
Statistical Uncertainty Analysis Applied to Criticality Calculation
Hartini, Entin; Andiwijayakusuma, Dinan; Susmikanti, Mike; Nursinta, A. W.
2010-06-22
In this paper, we present an uncertainty methodology based on a statistical approach for assessing uncertainties in criticality prediction using the Monte Carlo method due to uncertainties in the isotopic composition of the fuel. The methodology has been applied to criticality calculations with MCNP5 with additional stochastic input of the isotopic fuel composition. The stochastic inputs were generated using the Latin hypercube sampling method based on the probability density function of each nuclide composition. The automatic passing of the stochastic input to MCNP and the repeated criticality calculations are made possible by using a Python script to link MCNP with our Latin hypercube sampling code.
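Latin hypercube sampling itself is brief to implement: stratify each dimension into n equal bins, hit each bin exactly once, and shuffle the bin-to-sample pairing independently across dimensions; the resulting uniform samples are then mapped through each nuclide's inverse CDF to impose its probability density. A sketch of the stratification step (not the authors' linking script):

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube sample on [0, 1)^d.

    Each dimension is divided into n_samples equal strata; each stratum
    is sampled exactly once, and strata are randomly paired across
    dimensions. Map each column through an inverse CDF to impose the
    desired marginal distribution.
    """
    rng = rng or random.Random()
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        perm = list(range(n_samples))
        rng.shuffle(perm)
        for i, stratum in enumerate(perm):
            # Uniform draw within the assigned stratum.
            samples[i][d] = (stratum + rng.random()) / n_samples
    return samples
```

Compared with plain random sampling, the stratification guarantees full coverage of each nuclide's distribution even with a modest number of MCNP5 runs.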
Characterization and evaluation of uncertainty in probabilistic risk analysis
Parry, G.W.; Winter, P.W.
1981-01-01
The sources of uncertainty in probabilistic risk analysis are discussed, using the event- and fault-tree methodology as an example. The role of statistics in quantifying these uncertainties is investigated. A class of uncertainties is identified which is, at present, unquantifiable using either classical or Bayesian statistics. It is argued that Bayesian statistics is the more appropriate vehicle for the probabilistic analysis of rare events, and a short review is given with some discussion on the representation of ignorance.
Uncertainty Analysis of Historical Hurricane Data
NASA Technical Reports Server (NTRS)
Green, Lawrence L.
2007-01-01
An analysis of variance (ANOVA) study was conducted for historical hurricane data dating back to 1851 that was obtained from the U. S. Department of Commerce National Oceanic and Atmospheric Administration (NOAA). The data set was chosen because it is a large, publicly available collection of information, exhibiting great variability which has made the forecasting of future states, from current and previous states, difficult. The availability of substantial, high-fidelity validation data, however, made for an excellent uncertainty assessment study. Several factors (independent variables) were identified from the data set, which could potentially influence the track and intensity of the storms. The values of these factors, along with the values of responses of interest (dependent variables) were extracted from the data base, and provided to a commercial software package for processing via the ANOVA technique. The primary goal of the study was to document the ANOVA modeling uncertainty and predictive errors in making predictions about hurricane location and intensity 24 to 120 hours beyond known conditions, as reported by the data set. A secondary goal was to expose the ANOVA technique to a broader community within NASA. The independent factors considered to have an influence on the hurricane track included the current and starting longitudes and latitudes (measured in degrees), and current and starting maximum sustained wind speeds (measured in knots), and the storm starting date, its current duration from its first appearance, and the current year fraction of each reading, all measured in years. The year fraction and starting date were included in order to attempt to account for long duration cyclic behaviors, such as seasonal weather patterns, and years in which the sea or atmosphere were unusually warm or cold. The effect of short duration weather patterns and ocean conditions could not be examined with the current data set. The responses analyzed were the storm
Radiometer Design Analysis Based Upon Measurement Uncertainty
NASA Technical Reports Server (NTRS)
Racette, Paul E.; Lang, Roger H.
2004-01-01
This paper introduces a method for predicting the performance of a radiometer design based on calculating the measurement uncertainty. The variety in radiometer designs and the demand for improved radiometric measurements justify the need for a more general and comprehensive method to assess system performance. Radiometric resolution, or sensitivity, is a figure of merit that has been commonly used to characterize the performance of a radiometer. However when evaluating the performance of a calibration design for a radiometer, the use of radiometric resolution has limited application. These limitations are overcome by considering instead the measurement uncertainty. A method for calculating measurement uncertainty for a generic radiometer design including its calibration algorithm is presented. The result is a generalized technique by which system calibration architectures and design parameters can be studied to optimize instrument performance for given requirements and constraints. Example applications demonstrate the utility of using measurement uncertainty as a figure of merit.
Uncertainty Analysis of Thermal Comfort Parameters
NASA Astrophysics Data System (ADS)
Ribeiro, A. Silva; Alves e Sousa, J.; Cox, Maurice G.; Forbes, Alistair B.; Matias, L. Cordeiro; Martins, L. Lages
2015-08-01
International Standard ISO 7730:2005 defines thermal comfort as that condition of mind that expresses the degree of satisfaction with the thermal environment. Although this definition is inevitably subjective, the Standard gives formulae for two thermal comfort indices, predicted mean vote (PMV) and predicted percentage dissatisfied (PPD). The PMV formula is based on principles of heat balance and experimental data collected in a controlled climate chamber under steady-state conditions. The PPD formula depends only on PMV. Although these formulae are widely recognized and adopted, little has been done to establish the measurement uncertainties associated with their use, bearing in mind that the formulae depend on measured values and on tabulated values given to limited numerical accuracy. Knowledge of these uncertainties is invaluable when values provided by the formulae are used in making decisions in various health and civil engineering situations. This paper examines these formulae, giving a general mechanism for evaluating the uncertainties associated with values of the quantities on which the formulae depend. Further, consideration is given to the propagation of these uncertainties through the formulae to provide uncertainties associated with the values obtained for the indices. Current international guidance on uncertainty evaluation is utilized.
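As a small illustration of propagating uncertainty through these formulae, the sketch below pushes an assumed PMV value and standard uncertainty through the ISO 7730 PPD formula by Monte Carlo. The PMV input values are illustrative, not taken from the Standard.

```python
import math
import random

def ppd(pmv):
    """Predicted percentage dissatisfied as a function of PMV (ISO 7730)."""
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

# Propagate an assumed uncertainty in PMV through the PPD formula by
# Monte Carlo: PMV = 0.5 with standard uncertainty 0.1 (illustrative).
rng = random.Random(42)
draws = [ppd(rng.gauss(0.5, 0.1)) for _ in range(100_000)]
mean = sum(draws) / len(draws)
std = math.sqrt(sum((d - mean) ** 2 for d in draws) / (len(draws) - 1))
print(f"PPD = {mean:.1f} % with standard uncertainty {std:.1f} %")
```

Because PPD is a smooth nonlinear function of PMV, the Monte Carlo mean sits slightly above PPD(0.5) ≈ 10.2 %, and the spread of roughly 2 % shows how strongly a modest PMV uncertainty inflates the dissatisfaction estimate.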
Spatial Uncertainty Analysis of Ecological Models
Jager, H.I.; Ashwood, T.L.; Jackson, B.L.; King, A.W.
2000-09-02
The authors evaluated the sensitivity of a habitat model and a source-sink population model to spatial uncertainty in landscapes with different statistical properties and for hypothetical species with different habitat requirements. Sequential indicator simulation generated alternative landscapes from a source map. Their results showed that spatial uncertainty was highest for landscapes in which suitable habitat was rare and spatially uncorrelated. Although they were able to exert some control over the degree of spatial uncertainty by varying the sampling density drawn from the source map, intrinsic spatial properties (i.e., average frequency and degree of spatial autocorrelation) played a dominant role in determining variation among realized maps. To evaluate the ecological significance of landscape variation, they compared the variation in predictions from a simple habitat model to variation among landscapes for three species types. Spatial uncertainty in predictions of the amount of source habitat depended on both the spatial life-history characteristics of the species and the statistical attributes of the synthetic landscapes. Species differences were greatest when the landscape contained a high proportion of suitable habitat. The predicted amount of source habitat was greater for edge-dependent (interior) species in landscapes with spatially uncorrelated (correlated) suitable habitat. A source-sink model demonstrated that, although variation among landscapes resulted in relatively little variation in overall population growth rate, this spatial uncertainty was sufficient, in some situations, to produce qualitatively different predictions about population viability (i.e., population decline vs. increase).
Controllable set analysis for planetary landing under model uncertainties
NASA Astrophysics Data System (ADS)
Long, Jiateng; Gao, Ai; Cui, Pingyuan
2015-07-01
Controllable set analysis is a beneficial method in planetary landing mission design: it supports feasible entry-state selection in order to achieve landing accuracy and satisfy entry path constraints. In view of the severe impact of model uncertainties on planetary landing safety and accuracy, the purpose of this paper is to investigate the controllable set under uncertainties between the on-board model and the real situation. Controllable set analysis under model uncertainties is composed of controllable union set (CUS) analysis and controllable intersection set (CIS) analysis. Definitions of the CUS and CIS are given, and a computational method for them based on the Gauss pseudospectral method is presented. Their application to the analysis of entry-state distributions under uncertainty, and to the robustness of nominal entry-state selection, is illustrated for Mars entry under ballistic-coefficient, lift-to-drag-ratio, and atmospheric uncertainties. With CUS and CIS analysis, the robustness of the entry-state selection and entry trajectory to model uncertainties can be guaranteed, thus enhancing safety, reliability, and accuracy during planetary entry and landing.
Uncertainty Analysis in Space Radiation Protection
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.
2011-01-01
Space radiation is comprised of high energy and charge (HZE) nuclei, protons, and secondary radiation including neutrons. The uncertainties in estimating the health risks from galactic cosmic rays (GCR) are a major limitation on the length of space missions, the evaluation of potential risk mitigation approaches, and application of the As Low As Reasonably Achievable (ALARA) principle. For long duration space missions, risks may approach radiation exposure limits; therefore the uncertainties in risk projections become a major safety concern, and methodologies used for ground-based work are not deemed sufficient. NASA limits astronaut exposures to a 3% risk of exposure induced death (REID) and protects against uncertainties in risk projections using an assessment of 95% confidence intervals in the projection model. We discuss NASA's approach to space radiation uncertainty assessments and applications for the International Space Station (ISS) program and design studies of future missions to Mars and other destinations. Several features of NASA's approach will be discussed. Radiation quality descriptions are based on the properties of radiation tracks rather than LET, with probability distribution functions (PDF) for uncertainties derived from radiobiology experiments at particle accelerators. The application of age- and gender-specific models for individual astronauts is described. Because more than 90% of astronauts are never-smokers, an alternative risk calculation for never-smokers is used and will be compared to estimates for an average U.S. population. Because the high energies of the GCR limit the benefits of shielding, and because pharmaceutical countermeasures are expected to play only a limited role, uncertainty reduction continues to be the optimal approach to improving radiation safety for space missions.
Uncertainty Modeling for Structural Control Analysis and Synthesis
NASA Technical Reports Server (NTRS)
Campbell, Mark E.; Crawley, Edward F.
1996-01-01
The development of an accurate model of uncertainties for the control of structures that undergo a change in operational environment, based solely on modeling and experimentation in the original environment is studied. The application used throughout this work is the development of an on-orbit uncertainty model based on ground modeling and experimentation. A ground based uncertainty model consisting of mean errors and bounds on critical structural parameters is developed. The uncertainty model is created using multiple data sets to observe all relevant uncertainties in the system. The Discrete Extended Kalman Filter is used as an identification/parameter estimation method for each data set, in addition to providing a covariance matrix which aids in the development of the uncertainty model. Once ground based modal uncertainties have been developed, they are localized to specific degrees of freedom in the form of mass and stiffness uncertainties. Two techniques are presented: a matrix method which develops the mass and stiffness uncertainties in a mathematical manner; and a sensitivity method which assumes a form for the mass and stiffness uncertainties in macroelements and scaling factors. This form allows the derivation of mass and stiffness uncertainties in a more physical manner. The mass and stiffness uncertainties of the ground based system are then mapped onto the on-orbit system, and projected to create an analogous on-orbit uncertainty model in the form of mean errors and bounds on critical parameters. The Middeck Active Control Experiment is introduced as experimental verification for the localization and projection methods developed. In addition, closed loop results from on-orbit operations of the experiment verify the use of the uncertainty model for control analysis and synthesis in space.
Measurement uncertainty analysis techniques applied to PV performance measurements
Wells, C
1992-10-01
The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. Here, valid data are defined as data having known and documented paths of origin (including theory), measurements, traceability to measurement standards, computations, and uncertainty analysis of results.
NASA Technical Reports Server (NTRS)
Stolarski, R. S.; Butler, D. M.; Rundel, R. D.
1977-01-01
A concise stratospheric model was used in a Monte Carlo analysis of the propagation of reaction rate uncertainties through the calculation of an ozone perturbation due to the addition of chlorine. Two thousand Monte Carlo cases were run with 55 reaction rates being varied. Excellent convergence was obtained in the output distributions because the model is sensitive to the uncertainties in only about 10 reactions. For a 1 ppbv chlorine perturbation added to a 1.5 ppbv chlorine background, the resultant 1-sigma uncertainty on the ozone perturbation is a factor of 1.69 on the high side and 1.80 on the low side. The corresponding 2-sigma factors are 2.86 and 3.23. Results are also given for the uncertainties, due to reaction rates, in the ambient concentrations of stratospheric species.
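A Monte Carlo propagation of multiplicative rate uncertainties of this kind can be sketched as below. The surrogate model and the 1-sigma uncertainty factors are invented for illustration and stand in for the actual stratospheric chemistry; the point is how lognormal rate perturbations turn into "factor of X on the high/low side" output uncertainties.

```python
import math
import random

def ozone_perturbation(rates):
    """Toy surrogate (hypothetical): the computed O3 change scales with
    ratios of a few key reaction rates."""
    k_cl, k_loss, k_prod = rates
    return 0.01 * k_cl / math.sqrt(k_loss * k_prod)

rng = random.Random(0)
nominal = (1.0, 1.0, 1.0)
uncertainty_factors = (1.3, 1.5, 1.4)  # assumed 1-sigma multiplicative factors

results = []
for _ in range(2000):  # 2000 Monte Carlo cases, as in the study
    rates = tuple(k * f ** rng.gauss(0.0, 1.0)
                  for k, f in zip(nominal, uncertainty_factors))
    results.append(ozone_perturbation(rates))

# Report multiplicative 1-sigma factors from the output distribution,
# matching the "factor on the high/low side" convention of the abstract.
results.sort()
median = results[len(results) // 2]
hi = results[int(0.841 * len(results))] / median
lo = median / results[int(0.159 * len(results))]
print(f"1-sigma factors: {hi:.2f} high, {lo:.2f} low")
```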
New Programming Environments for Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Hill, M. C.; Poeter, E. P.; Banta, E. R.; Christensen, S.; Cooley, R. L.; Ely, D. M.; Babendreier, J.; Leavesley, G.; Tonkin, M.; Julich, R.
2005-12-01
We live in a world of faster computers, better GUIs and visualization technology, increasing international cooperation made possible by new digital infrastructure, new agreements between US federal agencies (such as ISCMEM), new European Union programs (such as Harmoniqua), and greater collaboration between US university scientists through CUAHSI. These changes provide new resources for tackling the difficult job of quantifying how well our models perform. This talk introduces new programming environments that take advantage of these developments and will change the paradigm of how we develop methods for uncertainty evaluation. For example, the programming environments provided by the COSU API, the JUPITER API, and the Sensitivity/Optimization Toolbox offer enormous opportunities for faster and more meaningful evaluation of uncertainties. Instead of waiting years for ideas and theories to be compared in the complex circumstances of interest to resource managers, these new programming environments will expedite the process. In the new paradigm, unproductive ideas and theories will be revealed more quickly, and productive ideas and theories will more quickly be used to address our increasingly difficult water resources problems. As examples, two ideas implemented in JUPITER API applications are presented: uncertainty correction factors that account for system complexities not represented in models, and the PPR and OPR statistics used to identify new data needed to reduce prediction uncertainty.
Uncertainty Analysis of Air Radiation for Lunar Return Shock Layers
NASA Technical Reports Server (NTRS)
Kleb, Bil; Johnston, Christopher O.
2008-01-01
By leveraging a new uncertainty markup technique, two risk analysis methods are used to compute the uncertainty of lunar-return shock layer radiation predicted by the High temperature Aerothermodynamic Radiation Algorithm (HARA). The effects of epistemic uncertainty, or uncertainty due to a lack of knowledge, are considered for the following modeling parameters: atomic line oscillator strengths, atomic line Stark broadening widths, atomic photoionization cross sections, negative ion photodetachment cross sections, molecular band oscillator strengths, and electron impact excitation rates. First, a simplified shock layer problem consisting of two constant-property equilibrium layers is considered. The results of this simplified problem show that the atomic nitrogen oscillator strengths and Stark broadening widths in both the vacuum ultraviolet and infrared spectral regions, along with the negative ion continuum, are the dominant uncertainty contributors. Next, three variable-property stagnation-line shock layer cases are analyzed: a typical lunar return case and two Fire II cases. For the near-equilibrium lunar return and Fire II 1643-second cases, the resulting uncertainties are very similar to the simplified case. Conversely, the relatively nonequilibrium 1636-second case shows significantly larger influence from electron impact excitation rates of both atoms and molecules. For all cases, the total uncertainty in radiative heat flux to the wall due to epistemic uncertainty in modeling parameters is 30%, as opposed to the erroneously small uncertainty levels (plus or minus 6%) found when treating model parameter uncertainties as aleatory (due to chance) instead of epistemic (due to lack of knowledge).
Adaptive framework for uncertainty analysis in electromagnetic field measurements.
Prieto, Javier; Alonso, Alonso A; de la Rosa, Ramón; Carrera, Albano
2015-04-01
Misinterpretation of uncertainty in the measurement of electromagnetic field (EMF) strength may lead to an underestimation of exposure risk or an overestimation of required measurements. The Guide to the Expression of Uncertainty in Measurement (GUM) has been internationally adopted as a de facto standard for uncertainty assessment. However, analyses under such an approach commonly assume unrealistic static models or neglect relevant prior information, resulting in non-robust uncertainties. This study proposes a principled and systematic framework for uncertainty analysis that fuses information from current measurements and prior knowledge. The framework dynamically adapts to data by exploiting a likelihood function based on kernel mixtures, and it incorporates flexible choices of prior information by applying importance sampling. The validity of the proposed techniques is assessed from measurements performed with a broadband radiation meter and an isotropic field probe. The developed framework significantly outperforms the GUM approach, achieving a reduction of 28% in measurement uncertainty.
Detailed Uncertainty Analysis of the ZEM-3 Measurement System
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
The measurement of Seebeck coefficient and electrical resistivity is critical to the investigation of all thermoelectric systems. It follows that the measurement uncertainty must be well understood in order to report ZT values which are accurate and trustworthy. A detailed uncertainty analysis of the ZEM-3 measurement system has been performed. The uncertainty analysis calculates error in the electrical resistivity measurement as a result of sample geometry tolerance, probe geometry tolerance, statistical error, and multi-meter uncertainty. The uncertainty on the Seebeck coefficient includes probe wire correction factors, statistical error, multi-meter uncertainty, and, most importantly, the cold-finger effect. The cold-finger effect plagues all potentiometric (four-probe) Seebeck measurement systems, as heat parasitically transfers through the thermocouple probes. The effect leads to an asymmetric over-estimation of the Seebeck coefficient. A thermal finite element analysis allows for quantification of the phenomenon and provides an estimate of the uncertainty of the Seebeck coefficient. The thermoelectric power factor was found to have an uncertainty of ±9-14% at high temperature and ±9% near room temperature.
Estimating the measurement uncertainty in forensic blood alcohol analysis.
Gullberg, Rod G
2012-04-01
For many reasons, forensic toxicologists are being asked to determine and report their measurement uncertainty in blood alcohol analysis. While understood conceptually, the elements and computations involved in determining measurement uncertainty are generally foreign to most forensic toxicologists. Several established and well-documented methods are available to determine and report the uncertainty in blood alcohol measurement. A straightforward bottom-up approach is presented that includes: (1) specifying the measurand, (2) identifying the major components of uncertainty, (3) quantifying the components, (4) statistically combining the components and (5) reporting the results. A hypothetical example is presented that employs reasonable estimates for forensic blood alcohol analysis assuming headspace gas chromatography. These computations are easily employed in spreadsheet programs as well. Determining and reporting measurement uncertainty is an important element in establishing fitness-for-purpose. Indeed, the demand for such computations and information from the forensic toxicologist will continue to increase.
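The bottom-up combination in steps (2)-(4) is typically a root-sum-square of independent standard uncertainties followed by expansion with a coverage factor, exactly the kind of computation that fits in a spreadsheet. A minimal sketch, with invented component values rather than real laboratory figures:

```python
import math

# Bottom-up uncertainty budget for a blood alcohol measurement
# (illustrative numbers, not laboratory values).
mean_bac = 0.0842             # g/100 mL, mean of replicate analyses

components = {                # assumed standard uncertainties, g/100 mL
    "calibration standard": 0.0007,
    "method repeatability": 0.0011,
    "instrument bias":      0.0005,
    "sampling/dilution":    0.0004,
}

# Combine independent components in quadrature (root sum of squares) ...
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
# ... then expand with a coverage factor k = 2 for roughly 95% coverage.
k = 2.0
u_expanded = k * u_combined

print(f"{mean_bac:.4f} +/- {u_expanded:.4f} g/100 mL (k = 2)")
```

Reporting the result with its expanded uncertainty and coverage factor is what establishes the fitness-for-purpose the abstract describes.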
An Approach of Uncertainty Evaluation for Thermal-Hydraulic Analysis
Katsunori Ogura; Hisashi Ninokata
2002-07-01
An approach to evaluate uncertainty systematically for thermal-hydraulic analysis programs is demonstrated. The approach is applied to the Peach Bottom Unit 2 Turbine Trip 2 Benchmark and is validated. (authors)
Sensitivity analysis for handling uncertainty in an economic evaluation.
Limwattananon, Supon
2014-05-01
To meet updated international standards, this paper revises the previous Thai guidelines for conducting sensitivity analyses as part of the decision analysis model for health technology assessment. It recommends both deterministic and probabilistic sensitivity analyses to handle uncertainty in the model parameters, which are best represented graphically. Two new methodological issues are introduced: a threshold analysis of medicines' unit prices for fulfilling the National Lists of Essential Medicines' requirements, and the expected value of information for delaying decision-making in contexts where there are high levels of uncertainty. Further research is recommended where parameter uncertainty is significant and where the cost of conducting the research is not prohibitive. PMID:24964700
Uncertainty analysis of knowledge reductions in rough sets.
Wang, Ying; Zhang, Nan
2014-01-01
Uncertainty analysis is a vital issue in intelligent information processing, especially in the age of big data. Rough set theory has attracted much attention in this field since it was proposed. Relative reduction is an important problem in rough set theory, and different relative reductions have been investigated for preserving specific classification abilities in various applications. This paper examines the uncertainty analysis of five different relative reductions in four aspects: the relationships among reducts, boundary-region granularity, rule variance, and uncertainty measures, according to a constructed decision table.
A Stochastic Collocation Algorithm for Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)
2003-01-01
This report describes a stochastic collocation method to adequately handle physically intrinsic uncertainty in the variables of a numerical simulation. Whereas the standard Galerkin approach to polynomial chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method collapses those summations to a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and provides, as a numerical example, the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
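A one-dimensional instance of the collapse described above can be sketched with a Gauss-Hermite quadrature rule: the statistics of a solution quantity are obtained from function evaluations at collocation nodes alone, with no summations over a stochastic basis. The exponential test function is illustrative and is not the Riemann problem of the report.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def collocation_moments(f, n_nodes=12):
    """Mean and variance of f(xi), xi ~ N(0, 1), by stochastic collocation:
    evaluate f only at the Gauss-Hermite nodes and sum with the quadrature
    weights, instead of forming Galerkin summations over a stochastic basis."""
    nodes, weights = hermegauss(n_nodes)       # probabilists' Hermite rule
    weights = weights / np.sqrt(2.0 * np.pi)   # normalize to a unit measure
    vals = f(nodes)
    mean = np.dot(weights, vals)
    var = np.dot(weights, (vals - mean) ** 2)
    return mean, var

# Example: a solution quantity with exponential dependence on the
# uncertain parameter (illustrative stand-in for a simulation output).
mean, var = collocation_moments(np.exp)
print(mean, var)
```

For this smooth test function the 12-node rule reproduces the exact moments (e^0.5 and e^2 - e) to machine-level accuracy, which is the efficiency the collocation approach trades on.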
Environmental engineering calculations involving uncertainties; either in the model itself or in the data, are far beyond the capabilities of conventional analysis for any but the simplest of models. There exist a number of general-purpose computer simulation languages, using Mon...
Uncertainty analysis technique for OMEGA Dante measurements
NASA Astrophysics Data System (ADS)
May, M. J.; Widmann, K.; Sorce, C.; Park, H.-S.; Schneider, M.
2010-10-01
The Dante is an 18 channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums, etc.) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
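The Monte Carlo parameter variation described here can be sketched as follows. The "unfold", channel voltages, responses, and error magnitudes are toy stand-ins for the actual Dante calibration and unfold algorithm; only the structure of the technique (perturb each channel by its Gaussian error function, unfold each test set, take statistics over the resulting fluxes) is the point.

```python
import math
import random

def toy_unfold(voltages, response):
    """Toy stand-in for the Dante unfold algorithm: here the flux is just a
    weighted sum of channel voltages (the real unfold is far more involved)."""
    return sum(v * r for v, r in zip(voltages, response))

rng = random.Random(7)
n_channels = 18
measured = [1.0 + 0.05 * i for i in range(n_channels)]  # invented voltages
response = [0.1] * n_channels                           # invented calibration
calib_sigma = 0.05    # assumed 5% one-sigma calibration error per channel
unfold_sigma = 0.02   # assumed 2% one-sigma unfold-related error
sigma = math.sqrt(calib_sigma ** 2 + unfold_sigma ** 2)  # combined per channel

# Create 1000 test voltage sets from the per-channel Gaussian error
# functions and push each through the unfold to build a flux distribution.
fluxes = []
for _ in range(1000):
    perturbed = [v * (1.0 + rng.gauss(0.0, sigma)) for v in measured]
    fluxes.append(toy_unfold(perturbed, response))

mean = sum(fluxes) / len(fluxes)
std = math.sqrt(sum((f - mean) ** 2 for f in fluxes) / (len(fluxes) - 1))
print(f"flux = {mean:.3f} +/- {std:.3f} (arbitrary units)")
```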
Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are composed of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds on the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets, while a sum-of-squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Among the most prominent features of the methodology are the substantial desensitization of the calculations to the uncertainty model assumed (i.e., the probability distribution describing the uncertainty), as well as the accommodation of changes in such a model with a practically insignificant amount of computational effort.
Little, M.P.; Muirhead, C.R.; Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Harper, F.T.; Hora, S.C.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA late health effects models.
Haskin, F.E.; Harper, F.T.; Goossens, L.H.J.; Kraan, B.C.P.; Grupa, J.B.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA early health effects models.
Analysis of automated highway system risks and uncertainties. Volume 5
Sicherman, A.
1994-10-01
This volume describes a risk analysis performed to help identify important Automated Highway System (AHS) deployment uncertainties and quantify their effect on costs and benefits for a range of AHS deployment scenarios. The analysis identified a suite of key factors affecting vehicle and roadway costs, capacities and market penetrations for alternative AHS deployment scenarios. A systematic protocol was utilized for obtaining expert judgments of key factor uncertainties in the form of subjective probability percentile assessments. Based on these assessments, probability distributions on vehicle and roadway costs, capacity and market penetration were developed for the different scenarios. The cost/benefit risk methodology and analysis provide insights by showing how uncertainties in key factors translate into uncertainties in summary cost/benefit indices.
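Turning expert percentile assessments into probability distributions, as described above, can be sketched as follows. The lognormal shape and the 5th/95th percentile values for a per-vehicle cost factor are invented for illustration; the study's own elicitation protocol and distributional choices may differ.

```python
from math import exp, log
from statistics import NormalDist

# Hypothetical elicited 5th/95th percentile assessments for an AHS
# per-vehicle cost factor (values invented for illustration).
p5, p95 = 2000.0, 8000.0

z95 = NormalDist().inv_cdf(0.95)          # ~1.645
mu = (log(p5) + log(p95)) / 2.0           # log-space mean
sigma = (log(p95) - log(p5)) / (2 * z95)  # log-space std dev

median = exp(mu)
mean = exp(mu + sigma**2 / 2)             # lognormal mean exceeds median
print(median, mean)
```

With `mu` and `sigma` in hand, the fitted lognormal can be sampled directly in a Monte Carlo cost/benefit calculation.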
Uncertainty analysis of geothermal energy economics
NASA Astrophysics Data System (ADS)
Sener, Adil Caner
This dissertation research explores geothermal energy economics by assessing and quantifying the uncertainties associated with the nature of geothermal energy and of energy investments overall. The study introduces a stochastic geothermal cost model and a valuation approach for different geothermal power plant development scenarios. The Monte Carlo simulation technique is employed to obtain probability distributions of geothermal energy development costs and project net present values. In the study, a stochastic cost model with an incorporated dependence structure is defined and compared with a model in which the random variables are treated as independent inputs. One goal of the study is to shed light on the long-standing problem of modeling dependence between random input variables; this dependence is modeled by employing the method of copulas. The study focuses on four main types of geothermal power generation technologies and introduces a stochastic levelized cost model for each technology. Moreover, we also compare the levelized costs of natural gas combined cycle and coal-fired power plants with those of geothermal power plants. The input data used in the model rely on cost data recently reported by government agencies and non-profit organizations, such as the Department of Energy, the National Laboratories, the California Energy Commission, and the Geothermal Energy Association. The second part of the study introduces the stochastic discounted cash flow valuation model for the geothermal technologies analyzed in the first phase. In this phase of the study, the Integrated Planning Model (IPM) software was used to forecast the revenue streams of geothermal assets under different price and regulation scenarios. These results are then combined to create a stochastic revenue forecast of the power plants. The uncertainties in gas prices and environmental regulations are also modeled and their potential impacts assessed.
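A minimal sketch of the copula idea mentioned above, assuming a Gaussian copula, an invented rank correlation, and invented uniform marginals for two cost-model inputs. Correlated standard normals are pushed through the normal CDF to uniforms, then through each marginal's inverse CDF, so the dependence structure is imposed without changing the marginals.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical correlation between drilling cost and resource
# temperature inputs of a levelized-cost model (values invented).
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])

# Gaussian copula: correlated standard normals -> uniforms -> marginals.
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u = np.vectorize(NormalDist().cdf)(z)          # uniform(0,1) marginals

drill_cost = 2000.0 + 3000.0 * u[:, 0]         # uniform(2000, 5000) $/kW
temp_c = 150.0 + 100.0 * u[:, 1]               # uniform(150, 250) C

# The induced dependence survives the marginal transforms.
corr = float(np.corrcoef(drill_cost, temp_c)[0, 1])
print(corr)
```

The sampled Pearson correlation comes out slightly below `rho` (the uniform transform attenuates it); matching a target rank correlation exactly is a standard refinement.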
New challenges on uncertainty propagation assessment of flood risk analysis
NASA Astrophysics Data System (ADS)
Martins, Luciano; Aroca-Jiménez, Estefanía; Bodoque, José M.; Díez-Herrero, Andrés
2016-04-01
Natural hazards such as floods cause considerable damage to human life and to material and functional assets every year around the world. Risk assessment procedures carry a set of uncertainties of two main types: natural, derived from the stochastic character inherent in flood process dynamics; and epistemic, associated with lack of knowledge or with flawed procedures employed in the study of these processes. There is abundant scientific and technical literature on uncertainty estimation in each step of flood risk analysis (e.g., rainfall estimates, hydraulic modelling variables), but very little experience with the propagation of uncertainties along the full flood risk assessment. Epistemic uncertainties are therefore the main goal of this work; in particular, understanding how far uncertainties propagate through the process, from inundation studies to risk analysis, and how much a proper flood risk analysis varies as a result. Methodologies such as Polynomial Chaos Theory (PCT), the Method of Moments, and Monte Carlo simulation are used to evaluate different sources of error, such as data records (precipitation gauges, flow gauges, etc.), hydrologic and hydraulic modelling (inundation estimation), and socio-demographic data (damage estimation), in order to evaluate the uncertainty propagation (UP) in design flood risk estimation, in both numerical and cartographic form. In order to account for the total uncertainty, and to understand which factors contribute most to the final uncertainty, we used Polynomial Chaos Theory (PCT). It represents an interesting way to handle the inclusion of uncertainty in the modelling and simulation process. PCT allows the development of a probabilistic model of the system in a deterministic setting, using random variables and polynomials to handle the effects of uncertainty. Results of applying the method are more robust than those of traditional analysis.
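A minimal non-intrusive polynomial chaos sketch in one standard-normal dimension, with an invented exponential "response" chosen so the exact moments are known; the paper's multi-variable flood application is far richer. The model is sampled, a probabilists' Hermite (He) expansion is fitted by least squares, and the orthogonality relation E[He_j He_k] = k! δ_jk turns the coefficients directly into mean and variance.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

rng = np.random.default_rng(2)

# Toy "hydrologic response": peak discharge as a nonlinear function of a
# standard-normal rainfall factor xi (model invented for illustration).
def model(xi):
    return 50.0 * np.exp(0.3 * xi)

# Non-intrusive PCE: sample xi, evaluate the model, least-squares fit a
# degree-5 expansion y ~ sum_k c_k He_k(xi).
xi = rng.standard_normal(20_000)
y = model(xi)
c = He.hermefit(xi, y, deg=5)

# Orthogonality E[He_j He_k] = k! delta_jk gives the moments directly.
pce_mean = c[0]
pce_var = sum(c[k]**2 * factorial(k) for k in range(1, 6))

# Exact lognormal moments for comparison.
exact_mean = 50.0 * np.exp(0.3**2 / 2)
exact_var = exact_mean**2 * (np.exp(0.3**2) - 1.0)
print(pce_mean, exact_mean, pce_var, exact_var)
```

The squared coefficients, weighted by k!, also decompose the variance by polynomial order, which is the germ of the sensitivity attribution the abstract describes.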
Analysis of uncertainties in turbine metal temperature predictions
NASA Technical Reports Server (NTRS)
Stepka, F. S.
1980-01-01
An analysis was conducted to examine the extent to which various factors influence the accuracy of analytically predicting turbine blade metal temperatures and to determine the uncertainties in these predictions for several accuracies of the influence factors. The advanced turbofan engine gas conditions of 1700 K and 40 atmospheres were considered along with those of a highly instrumented high temperature turbine test rig and a low temperature turbine rig that simulated the engine conditions. The analysis showed that the uncertainty in analytically predicting local blade temperature was as much as 98 K, or 7.6 percent of the metal absolute temperature, with current knowledge of the influence factors. The expected reductions in uncertainties in the influence factors with additional knowledge and tests should reduce the uncertainty in predicting blade metal temperature to 28 K, or 2.1 percent of the metal absolute temperature.
Sensitivity and Uncertainty Analysis of the keff for VHTR fuel
NASA Astrophysics Data System (ADS)
Han, Tae Young; Lee, Hyun Chul; Noh, Jae Man
2014-06-01
For the uncertainty and sensitivity analysis of PMR200, designed as a VHTR at KAERI, MUSAD was implemented based on the deterministic method in connection with the DeCART/CAPP code system. The sensitivity of the multiplication factor was derived using classical perturbation theory, and the sensitivity coefficients for the individual cross sections were obtained by the adjoint method within the framework of the transport equation. The uncertainty of the multiplication factor was then calculated from the product of the covariance matrix and the sensitivities. To verify the implemented code, uncertainty analyses of the GODIVA benchmark and a PMR200 pin-cell problem were carried out, and the results were compared with those of the reference codes TSUNAMI and McCARD. The results are in good agreement, except for the uncertainty due to the scattering cross section, which was calculated using different scattering moments.
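The "product of the covariance matrix and the sensitivities" is the standard sandwich rule, var(k_eff)/k_eff^2 = S^T C S; it can be sketched with invented numbers (three nominal cross-section groups, a made-up relative covariance matrix).

```python
import numpy as np

# Sandwich rule: relative variance of k_eff = S^T C S, where S holds
# sensitivity coefficients (fractional change in k_eff per fractional
# change in each cross section) and C is the relative covariance
# matrix of those cross sections. All numbers are illustrative.
S = np.array([0.30, -0.15, 0.05])          # e.g. capture, fission, scatter
C = np.array([[4.0e-4, 1.0e-5, 0.0],
              [1.0e-5, 2.5e-4, 0.0],
              [0.0,    0.0,    9.0e-4]])   # relative covariances

rel_var = S @ C @ S
rel_unc_pct = 100.0 * np.sqrt(rel_var)
print(f"k_eff uncertainty: {rel_unc_pct:.3f} %")
```

The off-diagonal entries of C are what distinguish this from simple quadrature addition; real nuclear-data covariances couple reactions and energy groups extensively.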
Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
In order to provide a complete description of a material's thermoelectric power factor, an uncertainty interval is required in addition to the measured nominal value. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, thermocouple cold-finger effect, and measurement parameters, in addition to uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.
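A first-order propagation sketch for the power factor PF = S^2/rho, with invented measurement values; the systematic and statistical parts discussed above are assumed already merged into each input's combined standard uncertainty, which is a simplification of the paper's fuller accounting.

```python
from math import sqrt

# Illustrative measured values (not from the paper): Seebeck coefficient
# S and electrical resistivity rho, each with a combined standard
# uncertainty (systematic and statistical parts already merged).
S, u_S = 180e-6, 6e-6        # V/K
rho, u_rho = 1.2e-5, 0.4e-6  # ohm*m

pf = S**2 / rho              # thermoelectric power factor, W/(m*K^2)

# First-order propagation for PF = S^2 / rho with independent inputs;
# the factor 2 reflects the squared dependence on S.
rel_u = sqrt((2 * u_S / S)**2 + (u_rho / rho)**2)
print(pf, rel_u * pf)
```

Because S enters squared, a given relative error in the Seebeck coefficient hurts the power factor twice as much as the same relative error in resistivity.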
Uncertainty analysis in dissolved oxygen modeling in streams.
Hamed, Maged M; El-Beshry, Manar Z
2004-08-01
Uncertainty analysis in surface water quality modeling is an important issue. This paper presents a method based on the first-order reliability method (FORM) to assess the exceedance probability of a target dissolved oxygen concentration in a stream, using a Streeter-Phelps prototype model. Basic uncertainty in the input parameters is considered by representing them as random variables with prescribed probability distributions. Results obtained from FORM analysis compared well with those of the Monte Carlo simulation method. The analysis also presents the stochastic sensitivity of the probabilistic outcome in the form of uncertainty importance factors, and shows how they change with changing simulation time. Furthermore, a parametric sensitivity analysis was conducted to show the effect of selection of different probability distribution functions for the three most important parameters on the design point, exceedance probability, and importance factors.
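A Monte Carlo counterpart to the FORM exceedance calculation can be sketched with the classical Streeter-Phelps deficit equation; all parameter distributions and the dissolved-oxygen target below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Streeter-Phelps deficit at travel time t (days); the parameter
# distributions are illustrative, not those of the paper.
k1 = rng.lognormal(np.log(0.35), 0.2, n)   # deoxygenation rate, 1/d
k2 = rng.lognormal(np.log(0.70), 0.2, n)   # reaeration rate, 1/d
L0 = rng.normal(15.0, 2.0, n)              # ultimate BOD, mg/L
D0, DO_sat, t = 1.0, 9.0, 2.0

D = (k1 * L0 / (k2 - k1)) * (np.exp(-k1 * t) - np.exp(-k2 * t)) \
    + D0 * np.exp(-k2 * t)
DO = DO_sat - D

# Probability that dissolved oxygen drops below a 4 mg/L target;
# FORM approximates this same exceedance probability analytically.
p_fail = float(np.mean(DO < 4.0))
print(p_fail)
```

FORM replaces this brute-force count with a search for the design point (the most probable failing parameter combination), which is also where the importance factors in the abstract come from.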
Analysis and Reduction of Complex Networks Under Uncertainty
Knio, Omar M
2014-04-09
This is a collaborative proposal that aims at developing new methods for the analysis and reduction of complex multiscale networks under uncertainty. The approach is based on combining methods of computational singular perturbation (CSP) and probabilistic uncertainty quantification. In deterministic settings, CSP yields asymptotic approximations of reduced-dimensionality “slow manifolds” on which a multiscale dynamical system evolves. Introducing uncertainty raises fundamentally new issues, particularly concerning its impact on the topology of slow manifolds, and means to represent and quantify associated variability. To address these challenges, this project uses polynomial chaos (PC) methods to reformulate uncertain network models, and to analyze them using CSP in probabilistic terms. Specific objectives include (1) developing effective algorithms that can be used to illuminate fundamental and unexplored connections among model reduction, multiscale behavior, and uncertainty, and (2) demonstrating the performance of these algorithms through applications to model problems.
Uncertainty analysis of power monitoring transit time ultrasonic flow meters
Orosz, A.; Miller, D. W.; Christensen, R. N.; Arndt, S.
2006-07-01
A general uncertainty analysis is applied to chordal, transit time ultrasonic flow meters that are used in nuclear power plant feedwater loops. This investigation focuses on relationships between the major parameters of the flow measurement. For this study, mass flow rate is divided into three components, profile factor, density, and a form of volumetric flow rate. All system parameters are used to calculate values for these three components. Uncertainty is analyzed using a perturbation method. Sensitivity coefficients for major system parameters are shown, and these coefficients are applicable to a range of ultrasonic flow meters used in similar applications. Also shown is the uncertainty to be expected for density along with its relationship to other system uncertainties. One other conclusion is that pipe diameter sensitivity coefficients may be a function of the calibration technique used. (authors)
Estimating and Applying Uncertainties in Probabilistic Tsunami Hazard Analysis (Invited)
NASA Astrophysics Data System (ADS)
Thio, H. K.
2013-12-01
An integral part of a probabilistic analysis is the formal inclusion of uncertainties, both those due to a limited understanding of the physical processes (epistemic) and those due to their natural variability (aleatory). Because of the strong non-linearity of the tsunami inundation process, it is important to understand not only the extent of the uncertainties but also how and where to apply them. We can divide the uncertainties into several stages: the source, ocean propagation, and nearshore/inundation. On the source side, many of the uncertainties are identical to those used in probabilistic seismic hazard analysis (PSHA). However, the details of slip distributions are very significant in tsunami excitation, especially for near-field tsunamis. We will show several ways of including slip variability, both stochastic and non-stochastic, by developing a probabilistic set of source scenarios. The uncertainties in ocean propagation are less significant, since modern algorithms are very successful in modeling open-ocean tsunami propagation. In the nearshore regime and the inundation, however, the situation is much more complex. Here, errors in local elevation models, variability in bottom friction, and the omission of the built environment can lead to significant errors, and details of the implementation of the tsunami algorithms can yield different results. We will discuss the most significant sources of uncertainty and alternative ways to implement them, using examples from the probabilistic tsunami hazard mapping that we are currently carrying out for the state of California and other regions.
Uncertainty analysis of doses from inhalation of depleted uranium.
Puncher, M; Bailey, M R; Harrison, J D
2008-09-01
Measurements of uranium excreted in urine have been widely used to monitor possible exposures to depleted uranium (DU). This paper describes a comprehensive probabilistic uncertainty analysis of doses determined retrospectively from measurements of DU in urine. Parametric uncertainties in the International Commission on Radiological Protection (ICRP) Human Respiratory Tract Model (HRTM) and ICRP systemic model for uranium were considered in the analysis, together with uncertainties in an alternative model for particle removal from the lungs. Probability distributions were assigned to HRTM parameters based on uncertainties documented in ICRP Publication 66 and elsewhere, including the Capstone study of aerosols produced after DU penetrator impacts. Uncertainties in the uranium systemic model were restricted to transfer rates having the greatest effect on urinary excretion, and hence retrospective dose assessments, over the measurement times considered (10-10^4 d). The overall uncertainty on dose (the ratio of the upper and lower quantiles, q0.975/q0.025) was estimated to be about a factor of 50 at 10 days after intake and about a factor of 10 at 10^3-10^4 d. The dose to the lung dominated the committed effective dose, with the lung absorption parameters, particularly the slow dissolution rate, ss, dominating the overall uncertainty. The median dose determined from a measurement of 1 ng DU, collected in urine in a 24-h period, varied from 0.1 microSv at 10 d to about 1 mSv at 10^4 d. Despite the large uncertainties, the upper q0.975 quantile for the assessed dose was below 1 mSv up to 5,000 d. PMID:18695411
Parameter uncertainty analysis of a biokinetic model of caesium.
Li, W B; Klein, W; Blanchardon, E; Puncher, M; Leggett, R W; Oeh, U; Breustedt, B; Noßke, D; Lopez, M A
2015-01-01
Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. Methods of parameter uncertainty analysis were used to assess the uncertainties of model predictions under the assumed model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data for Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as the square root of the ratio between the 97.5th and 2.5th percentiles), of blood clearance, whole-body retention, and urinary excretion of Cs predicted at early times after intake were, respectively: 1.5, 1.0, and 2.5 on the first day; 1.8, 1.1, and 2.4 at Day 10; and 1.8, 2.0, and 1.8 at Day 100. At late times (1000 d) after intake, the UFs increased to 43, 24, and 31, respectively. The transfer rates between kidneys and blood and between muscle and blood, and the rate of transfer from kidneys to urinary bladder content, are the most influential parameters for the blood clearance and the whole-body retention of Cs. For the urinary excretion, the transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content have the greatest impact. The implication and effect of the larger uncertainty of 43 in whole-body retention at later times, say after Day 500, on the estimated equivalent and effective doses will be explored in subsequent work in the framework of EURADOS.
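The uncertainty factor defined in the abstract can be computed from Monte Carlo output as follows; the lognormal sample standing in for model predictions is invented, with its spread chosen only so the arithmetic is visible.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative Monte Carlo sample of predicted whole-body retention at
# a late time after intake (lognormal spread invented for illustration).
retention = rng.lognormal(mean=np.log(0.02), sigma=1.9, size=100_000)

q025, q975 = np.quantile(retention, [0.025, 0.975])

# Uncertainty factor as defined in the paper: the square root of the
# ratio of the 97.5th to the 2.5th percentile.
uf = float(np.sqrt(q975 / q025))
print(round(uf, 1))
```

For a lognormal, UF reduces to exp(1.96 sigma), so a UF of ~40 corresponds to nearly a factor-of-two uncertainty per log-standard-deviation compounded over the tails.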
Propagation of variance uncertainty calculation for an autopsy tissue analysis
Bruckner, L.A.
1994-07-01
When a radiochemical analysis is reported, it is often accompanied by an uncertainty value that simply reflects the natural variation in the observed counts due to radioactive decay, the so-called counting statistics. However, when the assay procedure is complex or when the number of counts is large, there are usually other important contributors to the total measurement uncertainty that need to be considered. An assay value is almost useless unless it is accompanied by a measure of the uncertainty associated with that value. The uncertainty value should reflect all the major sources of variation and bias affecting the assay and should provide a specified level of confidence. An approach to uncertainty calculation that includes the uncertainty due to instrument calibration, values of the standards, and intermediate measurements, as well as counting statistics, is presented and applied to the analysis of an autopsy tissue. This approach, usually called propagation of variance, attempts to clearly distinguish between errors that have systematic (bias) effects and those that have random effects on the assays. The effects of these different types of errors are then propagated to the assay using formal statistical techniques. The result is an uncertainty on the assay that has a defensible level of confidence and that can be traced to individual major contributors. However, since only measurement steps are readily quantified and since all models are approximations, it is emphasized that without empirical verification, a propagation of uncertainty model may be just a fancy model with no connection to reality.
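A generic propagation-of-variance sketch with a toy assay model; the function and every number below are invented, not the paper's model. Each input's standard uncertainty is pushed through the model numerically and the resulting deviations are combined in quadrature, which also shows each contributor's share.

```python
from math import sqrt

# Toy assay model (invented): activity A = (C - B) / (E * m), with
# gross counts C, background counts B, calibration efficiency E,
# and sample mass m.
def assay(C, B, E, m):
    return (C - B) / (E * m)

x0 = dict(C=5200.0, B=200.0, E=0.32, m=1.5)      # nominal values
u = dict(C=sqrt(5200.0), B=sqrt(200.0),          # counting statistics
         E=0.32 * 0.03, m=1.5 * 0.01)            # calibration, weighing

# First-order propagation of variance with numerical sensitivities:
# perturb each input by its standard uncertainty and sum the squares.
a0 = assay(**x0)
var = 0.0
for name, ui in u.items():
    xp = dict(x0)
    xp[name] += ui
    var += (assay(**xp) - a0) ** 2   # approximates (df/dx_i * u_i)^2

u_total = sqrt(var)
rel = u_total / a0
print(a0, u_total, rel)
```

Keeping the per-input squared terms separate (rather than only their sum) is what makes the final uncertainty traceable to individual major contributors, as the abstract emphasizes.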
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models are deterministic representations of the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effects of input data involving lateral (Neumann) boundary conditions, bathymetry, and offshore wave conditions on nearshore numerical models are considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by randomly sampling from the input probability distribution functions and running the model as many times as required until the results converge. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using only prior information for the input data: the variation of the uncertain parameters decreases and the probability of the observed data improves as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques
Uncertainty of quantitative microbiological methods of pharmaceutical analysis.
Gunar, O V; Sakhno, N G
2015-12-30
The total uncertainty of quantitative microbiological methods used in pharmaceutical analysis consists of several components. Analysis of the most important sources of variability in quantitative microbiological methods demonstrated no effect of culture media or plate-count technique on the estimated microbial count, while a highly significant effect of other factors (type of microorganism, pharmaceutical product, and individual reading and interpretation errors) was established. The most appropriate method of statistical analysis for such data was ANOVA, which enabled not only the effects of individual factors to be estimated but also their interactions. Considering all the elements of uncertainty and combining them mathematically, the combined relative uncertainty of the test results was estimated both for the method of quantitative examination of non-sterile pharmaceuticals and for the microbial count technique without any product. These values did not exceed 35%, appropriate for traditional plate-count methods. PMID:26456251
NASA Astrophysics Data System (ADS)
Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.
2014-12-01
Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. In spite of the difficulties that the consideration of modeling uncertainty presents for the decision process, it should not be avoided, or the value of and the science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of a model in the face of increasing complexity would be valuable, because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of model development, so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping points).
Uncertainty Analysis for RELAP5-3D
Aaron J. Pawel; George L. Mesina
2011-08-01
In its current state, RELAP5-3D is a 'best-estimate' code; it is one of our most reliable programs for modeling what occurs within reactor systems in transients from given initial conditions. This code, however, remains an estimator. A statistical analysis has been performed that begins to lay the foundation for a full uncertainty analysis. When the inputs were varied over assumed probability density functions, the output parameters were shown to vary accordingly. Statistical tools such as means, variances, and tolerance intervals then provide a picture of how uncertain the results are, given the uncertainty of the inputs.
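One standard route from varied inputs to a tolerance interval in best-estimate plus uncertainty analyses is Wilks' formula, which fixes the number of code runs needed for a one-sided nonparametric tolerance limit. The sketch below assumes the first-order variant (the largest observed output is the limit); the abstract does not say which variant, if any, the authors used.

```python
from math import log, ceil

# First-order Wilks formula: with n runs, the largest observed output
# bounds the gamma-quantile with confidence beta when 1 - gamma**n >= beta.
def wilks_n(gamma=0.95, beta=0.95):
    n = ceil(log(1.0 - beta) / log(gamma))
    assert 1.0 - gamma**n >= beta  # sanity check on the rounding
    return n

# The classic 95%/95% one-sided tolerance limit.
print(wilks_n())  # -> 59
```

The appeal is that 59 runs suffice regardless of how many uncertain inputs are varied, which is why the approach pairs naturally with sampling inputs from assumed probability density functions.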
Harper, F.T.; Young, M.L.; Miller, L.A.; Hora, S.C.; Lui, C.H.; Goossens, L.H.J.; Cooke, R.M.; Paesler-Sauer, J.; Helton, J.C.
1995-01-01
MACCS and COSYMA, two new probabilistic accident consequence codes whose development was completed in 1990, estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The objective was to develop credible and traceable uncertainty distributions for the input variables of the codes. Formal expert judgment elicitation, with experts developing their distributions independently, was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model along with the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. The resulting distributions closely replicated the aggregated elicited wet deposition distributions. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the second of a three-volume document describing the project and contains two appendices describing the rationales for the dispersion and deposition data, along with short biographies of the 16 experts who participated in the project.
Harper, F.T.; Young, M.L.; Miller, L.A.; Hora, S.C.; Lui, C.H.; Goossens, L.H.J.; Cooke, R.M.; Paesler-Sauer, J.; Helton, J.C.
1995-01-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The ultimate objective of the joint effort was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. Experts developed their distributions independently. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model. The resulting distributions closely replicated the aggregated elicited wet deposition distributions. To validate the distributions generated for the dispersion code input variables, samples were taken from the distributions and propagated through the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the first of a three-volume document describing the project.
Planning for robust reserve networks using uncertainty analysis
Moilanen, A.; Runge, M.C.; Elith, J.; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.
2006-01-01
Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence-absence in sites, or on species-specific distributions of model-predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence-absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there were no uncertainty in the data or models. Given two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer programming and stochastic global search.
Uncertainty and sensitivity analysis for photovoltaic system modeling.
Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk
2013-12-01
We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array (POA) irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of up to 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
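The residual-resampling propagation described above can be sketched as follows. The residual distributions, the 5 kWh/day nominal energy, and the multiplicative error model are all illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical residual samples (as fractions) for two model steps, e.g. the
# POA-irradiance and effective-irradiance models; real residuals would come
# from comparing each model's predictions with measurements.
poa_residuals = rng.normal(0.0, 0.03, size=500)
eff_residuals = rng.normal(0.0, 0.02, size=500)

def propagate(base_energy, n_draws=10_000):
    """Resample the empirical residuals and apply them multiplicatively
    through the model chain to get a distribution of daily energy."""
    r1 = rng.choice(poa_residuals, size=n_draws)
    r2 = rng.choice(eff_residuals, size=n_draws)
    return base_energy * (1.0 + r1) * (1.0 + r2)

energy = propagate(5.0)  # nominal 5 kWh/day, illustrative
rel_sd = energy.std() / energy.mean()
print(f"relative uncertainty in daily energy: {rel_sd:.1%}")
```

With independent multiplicative residuals, the output spread is roughly the root-sum-square of the per-step spreads, which is how a few percent per model step stays a few percent overall.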
Visual Scanning Hartmann Optical Tester (VSHOT) Uncertainty Analysis (Milestone Report)
Gray, A.; Lewandowski, A.; Wendelin, T.
2010-10-01
In 1997, an uncertainty analysis was conducted of the Video Scanning Hartmann Optical Tester (VSHOT). In 2010, we completed a new analysis, based primarily on the geometric optics of the system, which shows sensitivities to various design and operational parameters. We discuss sources of error with measuring devices, instrument calibrations, and operator measurements for a parabolic trough mirror panel test. These help to guide the operator in proper setup, and help end-users to understand the data they are provided. We include both the systematic (bias) and random (precision) errors for VSHOT testing and their contributions to the uncertainty. The contributing factors considered in this study are: target tilt; target face to laser output distance; instrument vertical offset; laser output angle; distance between the tool and the test piece; camera calibration; and laser scanner. These contributing factors were applied to the calculated slope error, focal length, and test article tilt that are generated by the VSHOT data processing. Results show the estimated 2-sigma uncertainty in slope error for a parabolic trough line scan test to be ±0.2 milliradians; the uncertainty in the focal length is ±0.1 mm, and the uncertainty in test article tilt is ±0.04 milliradians.
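Independent error contributions of the kind listed above are conventionally combined in quadrature (root-sum-square) to reach a single 2-sigma figure. A minimal sketch with placeholder 1-sigma values, not the report's audited numbers:

```python
import math

# Illustrative 1-sigma slope-error contributions (milliradians) from the
# factors named in the study; the values here are placeholders, not the
# report's audited numbers.
contributions = {
    "target tilt": 0.04,
    "target-to-laser distance": 0.03,
    "instrument vertical offset": 0.02,
    "laser output angle": 0.05,
    "tool-to-test-piece distance": 0.03,
    "camera calibration": 0.04,
    "laser scanner": 0.02,
}

# Independent error sources combine in quadrature (root-sum-square).
sigma = math.sqrt(sum(v**2 for v in contributions.values()))
print(f"combined 2-sigma slope-error uncertainty: ±{2 * sigma:.2f} mrad")
```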
Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Harrison, J.D.; Harper, F.T.; Hora, S.C.
1998-04-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences from accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA internal dosimetry models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on internal dosimetry, (4) short biographies of the experts, and (5) the aggregated results of their responses.
Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Boardman, J.; Jones, J.A.; Harper, F.T.; Young, M.L.; Hora, S.C.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences from accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA deposited material and external dose models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on deposited material and external doses, (4) short biographies of the experts, and (5) the aggregated results of their responses.
Haskin, F.E.; Harper, F.T.; Goossens, L.H.J.; Kraan, B.C.P.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences from accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA early health effects models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on early health effects, (4) short biographies of the experts, and (5) the aggregated results of their responses.
Uncertainty analysis on photogrammetry-derived national shoreline
NASA Astrophysics Data System (ADS)
Yao, Fang
Photogrammetric shoreline mapping remains the primary method for mapping the national shoreline used by the National Geodetic Survey (NGS) in the National Oceanic and Atmospheric Administration (NOAA). To date, NGS has not conducted a statistical analysis of the photogrammetry-derived shoreline uncertainty. The aim of this thesis is to develop and test a rigorous total propagated uncertainty (TPU) model for shoreline compiled from both tide-coordinated and non-tide-coordinated aerial imagery using photogrammetric methods. Survey imagery collected over a study site in northeast Maine was used to test the TPU model. The TPU model developed in this thesis can easily be extended to other areas and may facilitate estimation of uncertainty in inundation models and marsh migration models.
The effects of uncertainty on the analysis of atmospheric deposition
Bloyd, C.N.; Small, M.J.; Henrion, M.; Rubin, E.S.
1988-01-01
Research efforts on the problem of acid rain are directed at improving current scientific understanding in critical areas, including sources of precursor emissions, the transport and transformation of pollutants in the atmosphere, the deposition of acidic species, and the chemical and biological effects of acid deposition on aquatic systems, materials, forests, crops and human health. The general goal of these research efforts is to characterize the current situation and to develop analytical models which can be used to predict the response of various systems to changes in critical parameters. This paper describes a framework which enables one to characterize uncertainty at each major stage of the modeling process. Following a general presentation of the modeling framework, a description is given of the methods chosen to characterize uncertainty for each major step. An analysis is then performed to illustrate the effects of uncertainty on future lake acidification in the Adirondack Park area of upstate New York.
Uncertainty analysis of a SFR core with sodium plenum
Canuti, E.; Ivanov, E.; Tiberi, V.; Pignet, S.
2012-07-01
The new concepts of Sodium-cooled Fast Reactors (SFRs) have to reach the Generation IV safety objectives. In this regard the Sodium Void Effect (SVE) has to be minimized for future projects of large-size SFRs, as do the uncertainties on it. The Institute of Radiological Protection and Nuclear Safety (IRSN), as technological support to the French public authorities, is in charge of safety assessment of operating and under-construction reactors, as well as future projects. In order to assess the safety of new SFR designs, the IRSN must be able to evaluate core parameters and their uncertainties. In this framework a sensitivity and uncertainty study has been performed to evaluate the impact of nuclear data uncertainty on the sodium void effect for the benchmark model of the large SFR BN-800. The benchmark parameters (effective multiplication factor and sodium void effect) have been evaluated using two codes, the deterministic code ERANOS and the Monte Carlo code SCALE, while the S/U analysis has been performed only with SCALE. The results of these studies point out the most relevant cross-section uncertainties that affect the SVE and where efforts should be directed to improve existing nuclear data accuracies. (authors)
Uncertainty analysis for computer model projections of hurricane losses.
Iman, Ronald L; Johnson, Mark E; Watson, Charles C
2005-10-01
Projecting losses associated with hurricanes is a complex and difficult undertaking that is fraught with uncertainties. Hurricane Charley, which struck southwest Florida on August 13, 2004, illustrates the uncertainty of forecasting damages from these storms. Due to shifts in the track and the rapid intensification of the storm, real-time estimates grew from 2 to 3 billion dollars in losses late on August 12 to a peak of 50 billion dollars for a brief time as the storm appeared to be headed for the Tampa Bay area. The storm hit the resort areas of Charlotte Harbor near Punta Gorda and then went on to Orlando in the central part of the state, with early poststorm estimates converging on a damage estimate in the 28 to 31 billion dollars range. Comparable damage to central Florida had not been seen since Hurricane Donna in 1960. The Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) has recognized the role of computer models in projecting losses from hurricanes. The FCHLPM established a professional team to perform onsite (confidential) audits of computer models developed by several different companies in the United States that seek to have their models approved for use in insurance rate filings in Florida. The team's members represent the fields of actuarial science, computer science, meteorology, statistics, and wind and structural engineering. An important part of the auditing process requires uncertainty and sensitivity analyses to be performed with the applicant's proprietary model. To inform future such analyses, an uncertainty and sensitivity analysis has been completed for loss projections arising from use of a Holland B parameter hurricane wind field model. Uncertainty analysis quantifies the expected percentage reduction in the uncertainty of wind speed and loss that is attributable to each of the input variables.
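For a near-linear model, the expected fractional reduction in output variance attributable to each input can be approximated by the squared correlation between that input and the output. A toy sketch with an invented loss response; the inputs, weights, and noise level are illustrative, not from any audited model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical standardised wind-field inputs: Holland B parameter,
# central pressure deficit, and radius of maximum winds (RMW).
B = rng.normal(size=n)
dp = rng.normal(size=n)
rmw = rng.normal(size=n)

# Toy loss response: the weights are illustrative only.
loss = 2.0 * B + 1.0 * dp + 0.5 * rmw + rng.normal(scale=0.3, size=n)

# Squared correlation approximates the share of output variance that
# would be removed if the input were known exactly (linear-model case).
for name, x in [("Holland B", B), ("pressure deficit", dp), ("RMW", rmw)]:
    r = np.corrcoef(x, loss)[0, 1]
    print(f"{name}: ~{r**2:.0%} of loss variance")
```

With these weights the Holland B term dominates, mirroring the paper's finding that a single wind-field parameter can drive most of the loss uncertainty.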
Uncertainty Analysis of the Grazing Flow Impedance Tube
NASA Technical Reports Server (NTRS)
Brown, Martha C.; Jones, Michael G.; Watson, Willie R.
2012-01-01
This paper outlines a methodology to identify the measurement uncertainty of NASA Langley's Grazing Flow Impedance Tube (GFIT) over its operating range, and to identify the parameters that most significantly contribute to the acoustic impedance prediction. Two acoustic liners are used for this study. The first is a single-layer, perforate-over-honeycomb liner that is nonlinear with respect to sound pressure level. The second consists of a wire-mesh facesheet and a honeycomb core, and is linear with respect to sound pressure level. These liners allow for evaluation of the effects of measurement uncertainty on impedances educed with linear and nonlinear liners. In general, the measurement uncertainty is observed to be larger for the nonlinear liners, with the largest uncertainty occurring near anti-resonance. A sensitivity analysis of the aerodynamic parameters (Mach number, static temperature, and static pressure) used in the impedance eduction process is also conducted using a Monte-Carlo approach. This sensitivity analysis demonstrates that the impedance eduction process is virtually insensitive to each of these parameters.
Treatment of uncertainties in the IPCC: a philosophical analysis
NASA Astrophysics Data System (ADS)
Jebeile, J.; Drouet, I.
2014-12-01
The IPCC produces scientific reports out of findings on climate and climate change. Because the findings are uncertain in many respects, the production of reports requires aggregating assessments of uncertainties of different kinds. This difficult task is currently regulated by the Guidance note for lead authors of the IPCC fifth assessment report on consistent treatment of uncertainties. The note recommends that two metrics, confidence and likelihood, be used for communicating the degree of certainty in findings. Confidence is expressed qualitatively "based on the type, amount, quality, and consistency of evidence […] and the degree of agreement", while likelihood is expressed probabilistically "based on statistical analysis of observations or model results, or expert judgment". Therefore, depending on the evidence evaluated, authors have the choice to present either an assigned level of confidence or a quantified measure of likelihood. But these two kinds of uncertainty assessment express distinct and potentially conflicting methodologies, so the question arises whether the treatment of uncertainties in the IPCC is rationally justified. In order to answer the question, it is worth comparing the IPCC procedures with the formal normative theories of epistemic rationality which have been developed by philosophers. These theories, which include contributions to the philosophy of probability and to Bayesian probabilistic confirmation theory, are relevant for our purpose because they are commonly used to assess the rationality of collective judgment formation based on uncertain knowledge. In this paper we make the comparison and pursue the following objectives: i/ we determine whether the IPCC confidence and likelihood can be compared with the notions of uncertainty targeted by or underlying the formal normative theories of epistemic rationality; ii/ we investigate whether the formal normative theories of epistemic rationality justify
Uncertainty analysis for seismic hazard in Northern and Central Italy
Lombardi, A.M.; Akinci, A.; Malagnini, L.; Mueller, C.S.
2005-01-01
In this study we examine uncertainty and parametric sensitivity of Peak Ground Acceleration (PGA) and 1-Hz Spectral Acceleration (1-Hz SA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years) of Northern and Central Italy. The uncertainty in hazard is estimated using a Monte Carlo approach to randomly sample a logic tree that has three input-variable branch points representing alternative values for b-value, maximum magnitude (Mmax) and attenuation relationships. Uncertainty is expressed in terms of the 95% confidence band and the Coefficient Of Variation (COV). The overall variability of ground motions and their sensitivity to each parameter of the logic tree are investigated. The largest values of the overall 95% confidence band are around 0.15 g for PGA in the Friuli and Northern Apennines regions and around 0.35 g for 1-Hz SA in the Central Apennines. The sensitivity analysis shows that the largest contributor to seismic hazard variability is uncertainty in the choice of ground-motion attenuation relationships, especially in the Friuli region (~0.10 g) for PGA and in the Friuli and Central Apennines regions (~0.15 g) for 1-Hz SA. This is followed by the variability of the b-value: its main contribution is evident in the Friuli and Central Apennines regions for both 1-Hz SA (~0.15 g) and PGA (~0.10 g). We observe that the contribution of Mmax to seismic hazard variability is negligible, at least for the 10%-in-50-years hazard. The overall COV map for PGA shows that the uncertainty in the hazard is larger in the Friuli and Northern Apennines regions, around 20-30%, than in the Central Apennines and Northwestern Italy, around 10-20%. The overall uncertainty is larger for the 1-Hz SA map and reaches 50-60% in the Central Apennines and Western Alps.
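Logic-tree sampling of the kind described, with uncertainty summarized as a 95% confidence band and a COV, can be sketched as follows. The branch values, weights, and the ground-motion model are invented placeholders, not the study's inputs:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Illustrative logic-tree branches (values and weights are placeholders):
b_values = rng.choice([0.9, 1.0, 1.1], size=n, p=[0.3, 0.4, 0.3])
m_max = rng.choice([6.5, 7.0, 7.5], size=n, p=[0.2, 0.6, 0.2])
atten = rng.choice([0.8, 1.0, 1.3], size=n, p=[0.25, 0.5, 0.25])

# Toy ground-motion model: PGA (g) decreasing with b-value, increasing
# with Mmax, and scaled by the attenuation-relation branch.
pga = 0.12 * atten * (m_max / 7.0) ** 2 / b_values

lo, hi = np.percentile(pga, [2.5, 97.5])
cov = pga.std() / pga.mean()
print(f"95% confidence band: {lo:.3f}-{hi:.3f} g, COV = {cov:.0%}")
```

Each Monte Carlo draw picks one branch at each node, so the spread of the resulting hazard values directly reflects the logic-tree (epistemic) uncertainty.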
UNCERTAINTY ANALYSIS FOR THE TECHA RIVER DOSIMETRY SYSTEM
Napier, Bruce A.; Degteva, M. O.; Shagina, N. B.; Anspaugh, L. R.
2013-04-01
Uncertainties in the doses estimated for the members of the Techa River Cohort (TRC) are being estimated with a two-dimensional Monte Carlo approach. In order to provide more accurate and precise estimates of individual dose (and thus more precise estimates of radiation risk) for the members of the TRC, a new dosimetric calculation system, the Techa River Dosimetry System-2009 (TRDS-2009) has been prepared. The deterministic version of the improved dosimetry system TRDS-2009D was basically completed in April 2009. Recent developments in evaluation of dose-response models in light of uncertain dose have highlighted the importance of different types of uncertainties in the development of individual dose estimates. These include uncertain parameters that may be either shared (common to some or all individuals) or unshared (a unique value for each person whose dose is to be estimated) within the dosimetric cohort. The nature of the type of uncertainty may be aleatory (random variability of true values due to stochastic processes) or epistemic (due to lack of complete knowledge about a unique quantity). Finally, there is a need to identify whether the structure of the errors is either related to measurement (the estimate differs from the true value by an error that is stochastically independent of the true value; frequently called classical uncertainty) or related to grouping (the true value varies from the estimate by an error that is random and is independent of the estimate; frequently called Berkson uncertainty). An approach has been developed that identifies the nature of the various input parameters and calculational methods incorporated in the Techa River Dosimetry System (based on the TRDS-2009D implementation), and a stochastic calculation model has been prepared to estimate the uncertainties in the dose estimates. This article reviews the concepts of uncertainty analysis, the equations, and input parameters, and then identifies the authors’ interpretations
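The two-dimensional Monte Carlo structure described above separates shared (epistemic) parameters, sampled in an outer loop, from unshared (aleatory) variability sampled per individual in an inner loop. A minimal sketch; the lognormal parameters and the dose model are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n_outer, n_people = 200, 50

# Outer loop: a shared (epistemic) parameter, e.g. an uncertain transfer
# coefficient common to every cohort member (hypothetical lognormal).
shared = rng.lognormal(mean=0.0, sigma=0.3, size=n_outer)

doses = np.empty((n_outer, n_people))
for i, s in enumerate(shared):
    # Inner loop: unshared (aleatory) variability unique to each person,
    # e.g. individual intake (hypothetical lognormal).
    intake = rng.lognormal(mean=0.0, sigma=0.5, size=n_people)
    doses[i] = s * intake  # toy dose model

# Each outer realisation yields one possible cohort dose distribution;
# the spread across realisations reflects the shared uncertainty.
cohort_means = doses.mean(axis=1)
print(f"cohort mean dose: {cohort_means.mean():.2f} "
      f"(95% interval {np.percentile(cohort_means, 2.5):.2f}-"
      f"{np.percentile(cohort_means, 97.5):.2f})")
```

Keeping the shared draw fixed across the inner loop is the point of the design: errors common to the cohort must not be averaged away as if they were independent per person.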
Vibration and stress analysis in the presence of structural uncertainty
NASA Astrophysics Data System (ADS)
Langley, R. S.
2009-08-01
At medium to high frequencies the dynamic response of a built-up engineering system, such as an automobile, can be sensitive to small random manufacturing imperfections. Ideally the statistics of the system response in the presence of these uncertainties should be computed at the design stage, but in practice this is an extremely difficult task. In this paper a brief review of the methods available for the analysis of systems with uncertainty is presented, and attention is then focused on two particular "non-parametric" methods: statistical energy analysis (SEA), and the hybrid method. The main governing equations are presented, and a number of example applications are considered, ranging from academic benchmark studies to industrial design studies.
Compositional analysis of lignocellulosic feedstocks. 2. Method uncertainties.
Templeton, David W; Scarlata, Christopher J; Sluiter, Justin B; Wolfrum, Edward J
2010-08-25
The most common procedures for characterizing the chemical components of lignocellulosic feedstocks use a two-stage sulfuric acid hydrolysis to fractionate biomass for gravimetric and instrumental analyses. The uncertainty (i.e., dispersion of values from repeated measurement) in the primary data is of general interest to those with technical or financial interests in biomass conversion technology. The composition of a homogenized corn stover feedstock (154 replicate samples in 13 batches, by 7 analysts in 2 laboratories) was measured along with a National Institute of Standards and Technology (NIST) reference sugar cane bagasse, as a control, using this laboratory's suite of laboratory analytical procedures (LAPs). The uncertainty was evaluated by the statistical analysis of these data and is reported as the standard deviation of each component measurement. Censored and uncensored versions of these data sets are reported, as evidence was found for intermittent instrumental and equipment problems. The censored data are believed to represent the "best case" results of these analyses, whereas the uncensored data show how small method changes can strongly affect the uncertainties of these empirical methods. Relative standard deviations (RSD) of 1-3% are reported for glucan, xylan, lignin, extractives, and total component closure with the other minor components showing 4-10% RSD. The standard deviations seen with the corn stover and NIST bagasse materials were similar, which suggests that the uncertainties reported here are due more to the analytical method used than to the specific feedstock type being analyzed.
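Reporting measurement dispersion as a relative standard deviation (RSD), as the study does, is a one-line calculation. A sketch with invented replicate glucan values; the study's actual RSDs for the major components were 1-3%:

```python
import statistics

# Hypothetical replicate glucan measurements (% dry weight) for one batch;
# these numbers are invented for illustration.
glucan = [34.9, 35.3, 35.1, 34.7, 35.4, 35.0, 35.2]

mean = statistics.mean(glucan)
sd = statistics.stdev(glucan)   # sample standard deviation
rsd = 100 * sd / mean           # relative standard deviation, percent
print(f"glucan: {mean:.1f} ± {sd:.2f} (%dw), RSD = {rsd:.1f}%")
```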
A systematic uncertainty analysis for liner impedance eduction technology
NASA Astrophysics Data System (ADS)
Zhou, Lin; Bodén, Hans
2015-11-01
The so-called impedance eduction technology is widely used for obtaining acoustic properties of liners used in aircraft engines. The measurement uncertainties for this technology are still not well understood, even though such understanding is essential for data quality assessment and model validation. A systematic framework based on multivariate analysis is presented in this paper to provide 95 percent confidence interval uncertainty estimates in the process of impedance eduction. The analysis is made using a single-mode straightforward method based on transmission coefficients involving the classic Ingard-Myers boundary condition. The multivariate technique makes it possible to obtain an uncertainty analysis for the possibly correlated real and imaginary parts of the complex quantities. The results show that the errors in impedance results at low frequency mainly depend on the variability of the transmission coefficients, while the mean Mach number accuracy is the most important source of error at high frequencies. The effect of the Mach numbers used in the wave dispersion equation and in the Ingard-Myers boundary condition has been separated for comparison of the outcome of impedance eduction. A local Mach number based on friction velocity is suggested as a way to reduce the inconsistencies found when estimating impedance using upstream and downstream acoustic excitation.
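A multivariate treatment of the correlated real and imaginary parts of an educed impedance can be illustrated with a 95% confidence ellipse based on the chi-square distribution. The mean and covariance below are invented, standing in for repeated eduction results:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical educed impedance samples: correlated real and imaginary
# parts; in practice these would come from repeated eductions.
z = rng.multivariate_normal([2.0, -1.0],
                            [[0.04, 0.015], [0.015, 0.02]], size=1000)

mean = z.mean(axis=0)
cov = np.cov(z.T)

# 95% confidence ellipse: the squared Mahalanobis distance follows a
# chi-square with 2 degrees of freedom; its 95th percentile is ~5.991.
d2 = np.einsum("ij,jk,ik->i", z - mean, np.linalg.inv(cov), z - mean)
inside = (d2 <= 5.991).mean()
print(f"fraction of samples inside the 95% ellipse: {inside:.1%}")
```

The ellipse, unlike two separate intervals for the real and imaginary parts, honours the correlation between them, which is the point of the multivariate approach.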
Uncertainty analysis in WWTP model applications: a critical discussion using an example from design.
Sin, Gürkan; Gernaey, Krist V; Neumann, Marc B; van Loosdrecht, Mark C M; Gujer, Willi
2009-06-01
This study focuses on uncertainty analysis of WWTP models and analyzes the issue of framing and how it affects the interpretation of uncertainty analysis results. As a case study, the prediction uncertainty involved in model-based design of a wastewater treatment plant is studied. The Monte Carlo procedure is used for uncertainty estimation, for which the input uncertainty is quantified through expert elicitation and the sampling is performed using the Latin hypercube method. Three scenarios from engineering practice are selected to examine the issue of framing: (1) uncertainty due to stoichiometric, biokinetic and influent parameters; (2) uncertainty due to hydraulic behaviour of the plant and mass transfer parameters; (3) uncertainty due to the combination of (1) and (2). The results demonstrate that, depending on how the uncertainty analysis is framed, the estimated uncertainty of design performance criteria differs significantly. The implication for practical applications of uncertainty analysis in the wastewater industry is profound: (i) as uncertainty analysis results are specific to the framing used, they must be interpreted within the context of that framing; and (ii) the framing must be crafted according to the particular purpose of the uncertainty analysis/model application. Finally, it needs to be emphasised that while uncertainty analysis is no doubt a powerful tool for model-based design, among other applications, clear guidelines for good uncertainty analysis in wastewater engineering practice are needed.
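The Latin hypercube step can be sketched with a hand-rolled stratified sampler. The three inputs, their ranges, and the performance criterion below are illustrative stand-ins for the elicited parameter distributions, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(5)

def latin_hypercube(n, d):
    """Latin hypercube on [0, 1)^d: each column places exactly one
    sample in each of n equal-width strata, in a random order."""
    strata = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (strata + rng.random((n, d))) / n

n = 1000
u = latin_hypercube(n, 3)

# Hypothetical uncertain inputs mapped from [0, 1): a biokinetic yield,
# an influent load factor, and a mass-transfer factor (invented ranges).
yield_ = 0.5 + 0.2 * u[:, 0]
load = 0.8 + 0.4 * u[:, 1]
kla = 0.7 + 0.6 * u[:, 2]

effluent = 10.0 * load / (yield_ * kla)  # toy design performance criterion
print(f"design criterion: median {np.median(effluent):.1f}, "
      f"95th pct {np.percentile(effluent, 95):.1f}")
```

Because each marginal is stratified, Latin hypercube sampling covers the input ranges more evenly than plain random sampling for the same number of model runs.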
Uncertainty analysis of doses from ingestion of plutonium and americium.
Puncher, M; Harrison, J D
2012-02-01
Uncertainty analyses have been performed on the biokinetic model for americium currently used by the International Commission on Radiological Protection (ICRP), and the model for plutonium recently derived by Leggett, considering acute intakes by ingestion by adult members of the public. The analyses calculated distributions of doses per unit intake. Those parameters having the greatest impact on prospective doses were identified by sensitivity analysis; the most important were the fraction absorbed from the alimentary tract, f(1), and rates of uptake from blood to bone surfaces. Probability distributions were selected based on the observed distribution of plutonium and americium in human subjects where possible; the distributions for f(1) reflected uncertainty on the average value of this parameter for non-specified plutonium and americium compounds ingested by adult members of the public. The calculated distributions of effective doses for ingested (239)Pu and (241)Am were well described by log-normal distributions, with doses varying by around a factor of 3 above and below the central values; the distributions contain the current ICRP Publication 67 dose coefficients for ingestion of (239)Pu and (241)Am by adult members of the public. Uncertainty on f(1) values had the greatest impact on doses, particularly effective dose. It is concluded that: (1) more precise data on f(1) values would have a greater effect in reducing uncertainties on doses from ingested (239)Pu and (241)Am, than reducing uncertainty on other model parameter values and (2) the results support the dose coefficients (Sv Bq(-1) intake) derived by ICRP for ingestion of (239)Pu and (241)Am by adult members of the public.
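The reported shape, roughly log-normal doses varying by about a factor of 3 above and below the central value at the 95% bounds, pins down the lognormal sigma. A sketch; the central dose coefficient below is a placeholder, not an ICRP value:

```python
import numpy as np

rng = np.random.default_rng(6)

# A factor-of-3 spread at the 2.5th/97.5th percentiles of a lognormal
# implies sigma = ln(3) / 1.96. The central value is illustrative only.
central = 2.5e-7            # Sv/Bq, placeholder
sigma = np.log(3) / 1.96

doses = rng.lognormal(mean=np.log(central), sigma=sigma, size=100_000)
lo, med, hi = np.percentile(doses, [2.5, 50, 97.5])
print(f"median {med:.2e} Sv/Bq, 95% interval {lo:.2e}-{hi:.2e} "
      f"(ratio ~{hi / med:.1f} above the median)")
```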
Information gap analysis of flood model uncertainties and regional frequency analysis
NASA Astrophysics Data System (ADS)
Hine, Daniel; Hall, Jim W.
2010-01-01
Flood risk analysis is subject to often severe uncertainties, which can potentially undermine flood management decisions. This paper explores the use of information gap theory to analyze the sensitivity of flood management decisions to uncertainties in flood inundation models and flood frequency analysis. Information gap is a quantified nonprobabilistic theory of robustness. To analyze uncertainties in flood modeling, an energy-bounded information gap model is established and applied first to a simplified uniform channel and then to a more realistic 2-D flood model. Information gap theory is then applied to the estimation of flood discharges using regional frequency analysis. The use of an information gap model is motivated by the notion that hydrologically similar sites are clustered in the space of their L moments. The information gap model is constructed around a parametric statistical flood frequency analysis, resulting in a hybrid model of uncertainty in which natural variability is handled statistically while epistemic uncertainties are represented in the information gap model. The analysis is demonstrated for sites in the Trent catchment, United Kingdom. The analysis is extended to address ungauged catchments, which, because of the attendant uncertainties in flood frequency analysis, are particularly appropriate for information gap analysis. Finally, the information gap model of flood frequency is combined with the treatment of hydraulic model uncertainties in an example of how both sources of uncertainty can be accounted for using information gap theory in a flood risk management decision.
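Info-gap robustness asks how large the horizon of uncertainty can grow before a decision fails its performance requirement. A toy flood-defence sketch with invented numbers (the discharge, capacity model, and heights are hypothetical, not from the Trent analysis):

```python
# Info-gap robustness sketch: how much can the estimated flood discharge
# err before a chosen defence height fails?
def robustness(height, q_est, capacity_per_m=100.0, step=0.001):
    """Largest fractional error alpha in the discharge estimate such
    that the defence still contains the worst case q_est * (1 + alpha)."""
    alpha = 0.0
    while height * capacity_per_m >= q_est * (1 + alpha + step):
        alpha += step
    return alpha

q_est = 500.0  # estimated design discharge (m^3/s), illustrative
for height in (5.5, 6.0, 7.0):
    print(f"height {height} m -> robustness alpha = "
          f"{robustness(height, q_est):.2f}")
```

A more robust option tolerates a larger alpha; info-gap theory ranks decisions by this tolerance rather than by a probability of failure, which is why it suits severely uncertain, ungauged settings.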
Geoengineering to Avoid Overshoot: An Analysis of Uncertainty
NASA Astrophysics Data System (ADS)
Tanaka, Katsumasa; Cho, Cheolhung; Krey, Volker; Patt, Anthony; Rafaj, Peter; Rao-Skirbekk, Shilpa; Wagner, Fabian
2010-05-01
., 2009) is employed to calculate climate responses including associated uncertainty and to estimate geoengineering profiles to cap the warming at 2°C since preindustrial times. The inversion setup for the model ACC2 is used to estimate the uncertain parameters (e.g. climate sensitivity) against associated historical observations (e.g. global-mean surface air temperature). Our preliminary results show that under climate and scenario uncertainties, a geoengineering intervention to avoid an overshoot would be of medium intensity in the latter half of this century (≈ 1 Mt. Pinatubo eruption every 4 years in terms of stratospheric sulfur injections). The start year of geoengineering intervention does not significantly influence the long-term geoengineering profile. However, a geoengineering intervention of medium intensity could bring about substantial environmental side effects such as the destruction of stratospheric ozone. Our results point to the necessity of persistently pursuing mainstream mitigation efforts. 2) Pollution Abatement and Geoengineering The second study examines the potential of geoengineering combined with clean air policy. A drastic air pollution abatement might result in an abrupt warming because it would suddenly remove the tropospheric aerosols which partly offset the background global warming (e.g. Andreae et al., 2005; Raddatz and Tanaka, 2010). This study investigates the magnitude of unrealized warming under a range of policy assumptions and associated uncertainties. Then the profile of geoengineering is estimated to suppress the warming that would accompany clean air policy. This study is the first attempt to explore uncertainty in the warming caused by clean air policy; Kloster et al. (2009), which assesses regional changes in climate and the hydrological cycle, did not include associated uncertainties in its analysis. A variety of policy assumptions will be devised to represent various degrees of air pollution abatement. These
Final Report. Analysis and Reduction of Complex Networks Under Uncertainty
Marzouk, Youssef M.; Coles, T.; Spantini, A.; Tosatto, L.
2013-09-30
The project was a collaborative effort among MIT, Sandia National Laboratories (local PI Dr. Habib Najm), the University of Southern California (local PI Prof. Roger Ghanem), and The Johns Hopkins University (local PI Prof. Omar Knio, now at Duke University). Our focus was the analysis and reduction of large-scale dynamical systems emerging from networks of interacting components. Such networks underlie myriad natural and engineered systems. Examples important to DOE include chemical models of energy conversion processes, and elements of national infrastructure—e.g., electric power grids. Time scales in chemical systems span orders of magnitude, while infrastructure networks feature both local and long-distance connectivity, with associated clusters of time scales. These systems also blend continuous and discrete behavior; examples include saturation phenomena in surface chemistry and catalysis, and switching in electrical networks. Reducing size and stiffness is essential to tractable and predictive simulation of these systems. Computational singular perturbation (CSP) has been effectively used to identify and decouple dynamics at disparate time scales in chemical systems, allowing reduction of model complexity and stiffness. In realistic settings, however, model reduction must contend with uncertainties, which are often greatest in large-scale systems most in need of reduction. Uncertainty is not limited to parameters; one must also address structural uncertainties—e.g., whether a link is present in a network—and the impact of random perturbations, e.g., fluctuating loads or sources. Research under this project developed new methods for the analysis and reduction of complex multiscale networks under uncertainty, by combining computational singular perturbation (CSP) with probabilistic uncertainty quantification. CSP yields asymptotic approximations of reduced-dimensionality “slow manifolds” on which a multiscale dynamical system evolves. Introducing
Stormwater quality modelling in combined sewers: calibration and uncertainty analysis.
Kanso, A; Chebbo, G; Tassin, B
2005-01-01
Estimating the level of uncertainty in urban stormwater quality models is vital for their utilization. This paper presents the results of applying a Markov chain Monte Carlo method based on Bayesian theory to the calibration and uncertainty analysis of a stormwater quality model commonly used in available software. The tested model uses a hydrologic/hydrodynamic scheme to estimate the accumulation, erosion and transport of pollutants on surfaces and in sewers. It was calibrated for four different initial conditions of in-sewer deposits. Calibration results showed large variability in the model's responses as a function of the initial conditions. They demonstrated that the model's predictive capacity is very low. PMID:16206845
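The Bayesian calibration approach described in this abstract can be sketched with a random-walk Metropolis sampler. The exponential-washoff surrogate, its parameter values, and the noise level below are illustrative assumptions, not the sewer model or data from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "water quality model": exponential pollutant washoff, y = M0 * (1 - exp(-k*t)).
# M0 (initial mass) and k (washoff rate) stand in for the calibrated parameters.
def model(theta, t):
    m0, k = theta
    return m0 * (1.0 - np.exp(-k * t))

t_obs = np.linspace(0.5, 6.0, 12)
y_obs = model((10.0, 0.8), t_obs) + rng.normal(0.0, 0.3, t_obs.size)

def log_post(theta):
    m0, k = theta
    if not (0 < m0 < 100 and 0 < k < 10):     # flat priors on a bounded box
        return -np.inf
    resid = y_obs - model(theta, t_obs)
    return -0.5 * np.sum((resid / 0.3) ** 2)  # Gaussian likelihood, known sigma

# Random-walk Metropolis sampling of the posterior
theta = np.array([5.0, 0.5])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.3, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[5000:])                # discard burn-in
print(chain.mean(axis=0), chain.std(axis=0))  # posterior mean and spread
```

The posterior spread, not just the best-fit values, is what carries the uncertainty statement about the model's predictive capacity.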
MOUSE (Modular Oriented Uncertainty SystEm) deals with the problem of uncertainties in models that consist of one or more algebraic equations. It was especially designed for use by those with little or no knowledge of computer languages or programming. It is compact (and thus can...
Sensitivity and uncertainty analysis of a polyurethane foam decomposition model
HOBBS,MICHAEL L.; ROBINSON,DAVID G.
2000-03-14
Sensitivity/uncertainty analyses are not commonly performed on complex, finite-element engineering models because the analyses are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, an analytical sensitivity/uncertainty analysis is used to determine the standard deviation and the primary factors affecting the burn velocity of polyurethane foam exposed to firelike radiative boundary conditions. The complex, finite element model has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state burn velocity calculated as the derivative of the burn front location versus time. The standard deviation of the burn velocity was determined by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation is essentially determined from a second derivative that is extremely sensitive to numerical noise. To minimize the numerical noise, 50-micron elements and approximately 1-msec time steps were required to obtain stable uncertainty results. The primary effect variable was shown to be the emissivity of the foam.
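The analysis described above amounts to first-order propagation, sigma_v^2 ≈ sum_i (dv/dp_i)^2 * sigma_i^2, with the derivatives taken numerically. A minimal sketch follows; the algebraic burn-velocity surrogate and all input values are assumed stand-ins, not the finite-element foam model:

```python
import numpy as np

# First-order (sensitivity-based) uncertainty propagation:
#   sigma_v^2 ≈ sum_i (dv/dp_i)^2 * sigma_i^2,  derivatives via central differences.
# Toy algebraic surrogate for burn velocity; inputs and spreads are assumed.
def burn_velocity(p):
    emissivity, conductivity, density = p
    return 0.12 * emissivity**0.9 / (density * conductivity) ** 0.5

p0    = np.array([0.9, 0.05, 35.0])    # nominal inputs (assumed values)
sigma = np.array([0.05, 0.005, 2.0])   # input standard deviations (assumed)

grads = np.empty_like(p0)
for i in range(p0.size):
    h = 1e-6 * p0[i]                   # small relative step; too small -> numerical noise
    up, dn = p0.copy(), p0.copy()
    up[i] += h
    dn[i] -= h
    grads[i] = (burn_velocity(up) - burn_velocity(dn)) / (2 * h)

var_contrib = (grads * sigma) ** 2     # per-input variance contributions
sigma_v = np.sqrt(var_contrib.sum())
print(sigma_v, var_contrib / var_contrib.sum())  # total sigma and per-input shares
```

The per-input shares are what identify a "primary effect variable" such as the emissivity in the abstract; the step-size sensitivity mirrors the paper's point about numerical noise when the response is itself a derivative.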
Uncertainty analysis of wind-wave predictions in Lake Michigan
NASA Astrophysics Data System (ADS)
Nekouee, Navid; Ataie-Ashtiani, Behzad; Hamidi, Sajad Ahmad
2016-10-01
With all the improvements in wave and hydrodynamic numerical models, the question arises of how the accuracy of the forcing functions and their inputs affects the results. In this paper, a commonly used third-generation numerical wave model, SWAN, is applied to predict waves in Lake Michigan. Wind data are analyzed to determine the frequency of wind variation over Lake Michigan. Wave prediction uncertainty due to local wind effects is compared during a period when the wind had a fairly constant speed and direction over the northern and southern basins. The study shows that despite model calibration for the Lake Michigan area, the model deficiency arises from ignoring small-scale wind effects. The wave predictions also emphasize that small-scale turbulence in the meteorological forcing can increase prediction errors by 38%. Wave frequency and coherence analyses show that both models can predict the wave variation time scale with the same accuracy. An insufficient number of meteorological stations can result in neglected local wind effects and discrepancies in current predictions. The uncertainty of numerical wave models due to input uncertainties and model principles should be taken into account in design risk factors.
Global land cover mapping: a review and uncertainty analysis
Congalton, Russell G.; Gu, Jianyu; Yadav, Kamini; Thenkabail, Prasad S.; Ozdogan, Mutlu
2014-01-01
Given the advances in remotely sensed imagery and associated technologies, several global land cover maps have been produced in recent times, including IGBP DISCover, UMD Land Cover, Global Land Cover 2000 and GlobCover 2009. However, the utility of these maps for specific applications has often been hampered by considerable uncertainties and inconsistencies. A thorough review of these global land cover projects, including an evaluation of the sources of error and uncertainty, is prudent and enlightening. Therefore, this paper describes our work in which we compared, summarized and conducted an uncertainty analysis of the four global land cover mapping projects using an error budget approach. The results showed that the classification scheme and the validation methodology had the highest error contribution and implementation priority. A comparison of the classification schemes showed that there are many inconsistencies between the definitions of the map classes. This is especially true for the mixed-type classes, for which thresholds vary for the attributes/discriminators used in the classification process. Examination of these four global mapping projects provided quite a few important lessons for future global mapping projects, including the need for clear and uniform definitions of the classification scheme and an efficient, practical, and valid design of the accuracy assessment.
The Uncertainty in the Local Seismic Response Analysis
Pasculli, A.; Pugliese, A.; Romeo, R. W.; Sano, T.
2008-07-08
This paper shows the influence exerted on local seismic response analysis by considering dispersion and uncertainty in the seismic input as well as in the dynamic properties of soils. As a first attempt, a 1D numerical model is developed that accounts for both the aleatory nature of the input motion and the stochastic variability of the dynamic properties of soils. The seismic input is introduced in a non-conventional way through a power spectral density, for which an elastic response spectrum, derived for instance from a conventional seismic hazard analysis, is required with an appropriate level of reliability. The uncertainty in the geotechnical properties of soils is instead investigated through a well-known simulation technique (the Monte Carlo method) for the construction of statistical ensembles. The result of a conventional local seismic response analysis, given by a deterministic elastic response spectrum, is replaced in our approach by a set of statistical elastic response spectra, each characterized by an appropriate probability of being reached or exceeded. The analyses have been carried out for a well-documented real case study. Lastly, we anticipate a 2D numerical analysis to also investigate the spatial variability of soil properties.
Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Eguía, Pablo; Collazo, Joaquín
2010-01-01
The objective of this study was to develop a methodology for the determination of the maximum sampling error and confidence intervals of thermal properties obtained from thermogravimetric analysis (TG), including moisture, volatile matter, fixed carbon and ash content. The sampling procedure of the TG analysis was of particular interest and was conducted with care. The results of the present study were compared to those of a prompt analysis, and a correlation between the mean values and maximum sampling errors of the methods were not observed. In general, low and acceptable levels of uncertainty and error were obtained, demonstrating that the properties evaluated by TG analysis were representative of the overall fuel composition. The accurate determination of the thermal properties of biomass with precise confidence intervals is of particular interest in energetic biomass applications. PMID:20717532
Results of a 24-inch Hybrid Motor Performance Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Sims, Joseph D.; Coleman, Hugh W.
1998-01-01
The subscale (11- and 24-inch) hybrid motors at the Marshall Space Flight Center (MSFC) have been used as versatile and cost-effective testbeds for developing new technology. Comparisons between motor configurations, ignition systems, feed systems, fuel formulations, and nozzle materials have been carried out without detailed consideration as to how "good" the motor performance data were. For the 250,000-lb-thrust motor developed by the Hybrid Propulsion Demonstration Program consortium, this shortcoming is particularly risky because motor performance will likely be used as part of a set of downselect criteria to choose between competing ignition and feed systems under development. This analysis directly addresses that shortcoming by applying uncertainty analysis techniques to the experimental determination of the characteristic velocity, theoretical characteristic velocity, and characteristic velocity efficiency for a 24-inch motor firing. With the adoption of fuel-lined headends, flow restriction, and aft mixing chambers, state-of-the-art 24-inch hybrid motors have become very efficient. However, impossibly high combustion efficiencies (some computed as high as 108%) have been measured in some tests with 11-inch motors. This analysis has given new insight into explaining how these efficiencies were measured to be so high, and into which experimental measurements contribute the most to the overall uncertainty.
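A minimal sketch of the propagation involved: for the data-reduction equation c* = Pc·At/mdot with independent inputs, the relative uncertainties combine in quadrature, and the per-term shares show which measurement dominates. All numbers below are illustrative, not values from the 24-inch test:

```python
import math

# Data-reduction equation for characteristic velocity: cstar = Pc * At / mdot.
# For independent inputs, first-order propagation gives
#   (U_cstar/cstar)^2 = (U_Pc/Pc)^2 + (U_At/At)^2 + (U_mdot/mdot)^2.
# All values are assumed for illustration.
Pc, U_Pc     = 3.45e6, 0.02e6   # chamber pressure [Pa] and its uncertainty
At, U_At     = 0.018, 0.0004    # nozzle throat area [m^2]
mdot, U_mdot = 38.0, 1.1        # total propellant mass flow [kg/s]

cstar = Pc * At / mdot
rel = math.sqrt((U_Pc / Pc) ** 2 + (U_At / At) ** 2 + (U_mdot / mdot) ** 2)
shares = [x ** 2 / rel ** 2 for x in (U_Pc / Pc, U_At / At, U_mdot / mdot)]
print(cstar, rel * cstar, shares)   # the mass-flow term dominates here
```

An uncertainty budget like `shares` is exactly the kind of output that explains how an "efficiency" above 100% can fall inside the measurement's uncertainty band.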
Additional challenges for uncertainty analysis in river engineering
NASA Astrophysics Data System (ADS)
Berends, Koen; Warmink, Jord; Hulscher, Suzanne
2016-04-01
the proposed intervention. The implicit assumption underlying such analysis is that both models are commensurable. We hypothesize that they are commensurable only to a certain extent. In an idealised study we have demonstrated that prediction performance loss should be expected with increasingly large engineering works. When accounting for parametric uncertainty of floodplain roughness in model identification, we see uncertainty bounds for predicted effects of interventions increase with increasing intervention scale. Calibration of these types of models therefore seems to have a shelf life, beyond which calibration no longer improves prediction. Therefore a qualification scheme for model use is required that can be linked to model validity. In this study, we characterize model use along three dimensions: extrapolation (using the model with different external drivers), extension (using the model for different output or indicators) and modification (using modified models). Such use of models is expected to have implications for the applicability of surrogate modelling for efficient uncertainty analysis as well, which is recommended for future research. Warmink, J. J.; Straatsma, M. W.; Huthoff, F.; Booij, M. J. & Hulscher, S. J. M. H. 2013. Uncertainty of design water levels due to combined bed form and vegetation roughness in the Dutch river Waal. Journal of Flood Risk Management 6, 302-318. DOI: 10.1111/jfr3.12014
Foundational methods for model verification and uncertainty analysis (Invited)
NASA Astrophysics Data System (ADS)
Jakeman, A. J.; Croke, B. F.; Guillaume, J. H.; Jakeman, J. D.; Shin, M.
2013-12-01
Before embarking on formal methods of uncertainty analysis that may entail unnecessarily restrictive assumptions and sophisticated treatment, prudence dictates exploring one's data, model candidates and applicable objective functions with a mixture of methods as a first step. It seems that there are several foundational methods that warrant more attention in practice and that there is scope for the development of new ones. Ensuing results from a selection of foundational methods may well inform the choice of formal methods and assumptions, or suffice in themselves as an effective appreciation of uncertainty. Through the case of four lumped rainfall-runoff models of varying complexity from several watersheds we illustrate that there are valuable methods, many of them already in open source software, others we have recently developed, which can be invoked to yield valuable insights into model veracity and uncertainty. We show results of using methods of global sensitivity analysis that help: determine whether insensitive parameters impact on predictions and therefore cannot be fixed; and identify which combinations of objective function, dataset and model structure allow insensitive parameters to be estimated. We apply response surface and polynomial chaos methods to yield knowledge of the models' response surfaces and parameter interactions, thereby informing model redesign. A new approach to model structure discrimination is presented based on Pareto methods and cross-validation. It reveals which model structures are acceptable in the sense that they are non-dominated by other structures across calibration and validation periods and across catchments according to specified performance criteria. Finally we present and demonstrate a falsification approach that shows the value of examining scenarios of model structures and parameters to identify any change that might have a specified effect on a prediction.
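The non-domination test behind the Pareto model-structure discrimination step can be sketched directly; the model names and NSE-style scores below are assumed for illustration:

```python
# Pareto non-domination filter over model-structure performance, as in the
# cross-validation discrimination step: a structure is acceptable if no other
# structure is at least as good on every criterion and strictly better on one.
# Scores are illustrative (calibration NSE, validation NSE); higher is better.
scores = {
    "M1": (0.82, 0.74),
    "M2": (0.85, 0.61),
    "M3": (0.80, 0.76),
    "M4": (0.79, 0.60),
}

def dominates(a, b):
    """True if score tuple a dominates b (>= everywhere, > somewhere)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

accepted = [m for m, s in scores.items()
            if not any(dominates(t, s) for n, t in scores.items() if n != m)]
print(accepted)   # M4 is dominated by M1 and drops out
```

The same filter extends unchanged to more criteria, e.g. one score per catchment or per calibration/validation period.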
Uncertainty analysis using corrected first-order approximation method
NASA Astrophysics Data System (ADS)
Tyagi, Aditya; Haan, C. T.
2001-06-01
Application of uncertainty and reliability analysis is an essential part of many problems related to modeling and decision making in the area of environmental and water resources engineering. Computational efficiency, understandability, and ease of application have made the first-order approximation (FOA) method a favored tool for uncertainty analysis. In many instances, situations may arise where the accuracy of FOA estimates becomes questionable. The FOA application is often considered acceptable if the coefficient of variation of the uncertain parameter(s) is <0.2, but this criterion is not correct in all situations. Analytical as well as graphical relations for relative error are developed and presented for a generic power function that can be used as a guide for judging the suitability of the FOA for a specified acceptable error of estimation. Further, these analytical and graphical relations enable FOA estimates for means and variances of model components to be corrected to their true values. Using these corrected values of means and variances for model components, one can determine the exact values of the mean and variance of an output random variable. This technique is applicable when an output variable is a function of several independent random variables in multiplicative, additive, or combined (multiplicative and additive) forms. Two examples are given to demonstrate the application of the technique.
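A quick numerical check of the point about the CV < 0.2 rule, for a generic power function Y = aX^b: FOA gives E[Y] ≈ a·mu^b and Var[Y] ≈ (a·b·mu^(b-1)·sigma)^2, which can be compared against Monte Carlo as the coefficient of variation grows (the parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# FOA for the power function Y = a * X^b:
#   E[Y]  ≈ a * mu^b,    Var[Y] ≈ (a * b * mu**(b-1) * sigma)^2
# compared with Monte Carlo as CV = sigma/mu grows past the 0.2 rule of thumb.
a, b, mu = 1.0, 3.0, 10.0
mean_ratios, std_ratios = [], []
for cv in (0.1, 0.2, 0.4):
    sigma = cv * mu
    x = rng.normal(mu, sigma, 200_000)
    y = a * x ** b
    mean_ratios.append((a * mu ** b) / y.mean())                    # FOA mean / MC mean
    std_ratios.append(abs(a * b * mu ** (b - 1)) * sigma / y.std()) # FOA std  / MC std
print(mean_ratios, std_ratios)   # both ratios drift away from 1 as CV grows
```

For this cubic, the FOA mean already misses by about 3% at CV = 0.1 (the exact mean is mu^3 + 3·mu·sigma^2), which is the kind of relative error the paper's correction relations quantify.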
Reducing spatial uncertainty in climatic maps through geostatistical analysis
NASA Astrophysics Data System (ADS)
Pesquer, Lluís; Ninyerola, Miquel; Pons, Xavier
2014-05-01
Climatic maps from meteorological stations and geographical co-variables can be obtained through correlative models (Ninyerola et al., 2000)*. Nevertheless, the spatial uncertainty of the resulting maps could be reduced. The present work builds on those approaches, aiming to study how to obtain better results while characterizing spatial uncertainty. The study area is Catalonia (32000 km2), a region with highly variable relief (0 to 3143 m). We have used 217 stations (321 to 1244 mm) to model the annual precipitation in two steps: 1/ multiple regression using geographical variables (elevation, distance to the coast, latitude, etc.) and 2/ refinement of the results by adding the spatial interpolation of the regression residuals with inverse distance weighting (IDW), regularized splines with tension (SPT) or ordinary kriging (OK). Spatial uncertainty analysis is based on an independent subsample (test set), randomly selected in previous works. The main contribution of this work is the analysis of this test set as well as the search for an optimal process of division (split) of the stations into two sets, one used to perform the multiple regression and residual interpolation (fit set), and another used to compute the quality (test set); an optimal division should reduce spatial uncertainty and improve the overall quality. Two methods have been evaluated against classical methods (random selection RS and leave-one-out cross-validation LOOCV): selection by Euclidean 2D distance, and selection by anisotropic 2D distance combined with a (suitably weighted) 3D contribution from the most representative independent variable. Both methods define a minimum threshold distance, obtained by variogram analysis, between samples. Main preliminary results for LOOCV, RS (average of 10 executions), the Euclidean criterion (EU), and the anisotropic criterion (with a 1.1 value, the UTMY coordinate has slightly more weight than UTMX) combined with 3D criteria (A3D) (1000 factor for elevation
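The two-step mapping procedure (multiple regression plus residual interpolation, here with the IDW variant) and its test-set evaluation can be sketched on synthetic stations; the co-variables, coefficients, split, and noise level are assumptions, not the Catalan data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-step climatic mapping sketch: 1) multiple regression of precipitation
# on geographic co-variables, 2) IDW interpolation of the regression
# residuals, evaluated on a held-out test set of stations.
n = 200
xy   = rng.uniform(0, 180_000, (n, 2))   # station coordinates [m]
elev = rng.uniform(0, 3000, n)           # elevation [m]
dsea = xy[:, 1] / 1000                   # crude "distance to coast" proxy [km]
precip = 400 + 0.25 * elev + 1.5 * dsea + rng.normal(0, 40, n)  # annual precip [mm]

fit, test = np.arange(0, 160), np.arange(160, n)   # fit/test split of stations

# Step 1: multiple regression on the fit set
A = np.column_stack([np.ones(fit.size), elev[fit], dsea[fit]])
coef, *_ = np.linalg.lstsq(A, precip[fit], rcond=None)
resid = precip[fit] - A @ coef

# Step 2: IDW interpolation of the residuals at the test stations
def idw(target, pts, vals, power=2.0):
    d = np.hypot(*(pts - target).T) + 1e-9   # distances, guarded against zero
    w = d ** -power
    return (w * vals).sum() / w.sum()

pred = np.array([
    np.array([1.0, elev[i], dsea[i]]) @ coef + idw(xy[i], xy[fit], resid)
    for i in test
])
rmse = np.sqrt(np.mean((pred - precip[test]) ** 2))
print(rmse)   # test-set error is the spatial-uncertainty measure
```

Swapping the split indices for a distance-based selection rule is where the paper's optimal fit/test division would enter.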
Thermal hydraulic limits analysis using statistical propagation of parametric uncertainties
Chiang, K. Y.; Hu, L. W.; Forget, B.
2012-07-01
The MIT Research Reactor (MITR) is evaluating the conversion from highly enriched uranium (HEU) to low-enrichment uranium (LEU) fuel. In addition to the fuel element re-design, a reactor power upgrade from 6 MW to 7 MW is proposed in order to maintain the same reactor performance as the HEU core. The previous approach to analyzing the impact of engineering uncertainties on thermal hydraulic limits, via the use of engineering hot channel factors (EHCFs), was unable to explicitly quantify the uncertainty and confidence level in reactor parameters. The objective of this study is to develop a methodology for MITR thermal hydraulic limits analysis by statistically combining engineering uncertainties, with an aim to eliminate unnecessary conservatism inherent in traditional analyses. This method was employed to analyze the Limiting Safety System Settings (LSSS) for the MITR, defined by the avoidance of the onset of nucleate boiling (ONB). Key parameters, such as coolant channel tolerances and heat transfer coefficients, were treated as normal distributions using Oracle Crystal Ball to calculate ONB. The LSSS power is determined with a 99.7% confidence level. The LSSS power calculated using this new methodology is 9.1 MW, based on a core outlet coolant temperature of 60 deg. C and a primary coolant flow rate of 1800 gpm, compared to 8.3 MW obtained from the analytical method using the EHCFs with the same operating conditions. The same methodology was also used to calculate the safety limit (SL) for the MITR, conservatively determined using the onset of flow instability (OFI) as the criterion, to verify that adequate safety margin exists between the LSSS and SL. The calculated SL is 10.6 MW, which is 1.5 MW higher than the LSSS. (authors)
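The statistical alternative to hot-channel factors can be sketched as plain Monte Carlo propagation: sample the uncertain engineering inputs as normal distributions, push them through the limiting-power relation, and read the power at the 99.7% confidence level. The power relation and every number below are toy assumptions, not the MITR thermal hydraulic model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Statistical combination of engineering uncertainties: sample the inputs
# as normals, propagate through a (toy, monotone) ONB-limited-power relation,
# and take the power exceeded with 99.7% confidence (0.3rd percentile).
N = 100_000
gap  = rng.normal(2.2e-3, 0.05e-3, N)   # coolant channel gap [m] (assumed)
htc  = rng.normal(20_000, 1_500, N)     # heat transfer coefficient [W/m^2-K] (assumed)
flow = rng.normal(1800, 40, N)          # primary flow [gpm] (assumed)

# Toy relation: allowable power grows with each input (exponents assumed)
p_limit = 9.0 * (gap / 2.2e-3) * (htc / 20_000) ** 0.4 * (flow / 1800) ** 0.6

lsss = np.percentile(p_limit, 0.3)      # power not met only 0.3% of the time
print(lsss)
```

A deterministic hot-channel-factor analysis would instead stack each input at its worst-case bound, which is why it lands at a lower (more conservative) power than the statistical percentile.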
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar
2016-01-01
This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then, Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all four variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
Brown, J.; Goossens, L.H.J.; Kraan, B.C.P.
1997-06-01
This volume is the first of a two-volume document that summarizes a joint project conducted by the US Nuclear Regulatory Commission and the European Commission to assess uncertainties in the MACCS and COSYMA probabilistic accident consequence codes. These codes were developed primarily for estimating the risks presented by nuclear reactors based on postulated frequencies and magnitudes of potential accidents. This document reports on an ongoing project to assess uncertainty in the MACCS and COSYMA calculations for the offsite consequences of radionuclide releases by hypothetical nuclear power plant accidents. A panel of sixteen experts was formed to compile credible and traceable uncertainty distributions for food chain variables that affect calculations of offsite consequences. The expert judgment elicitation procedure and its outcomes are described in these volumes. Other panels were formed to consider uncertainty in other aspects of the codes. Their results are described in companion reports. Volume 1 contains background information and a complete description of the joint consequence uncertainty study. Volume 2 contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures for both panels, (3) the rationales and results for the panels on soil and plant transfer and animal transfer, (4) short biographies of the experts, and (5) the aggregated results of their responses.
Geoengineering to Avoid Overshoot: An Analysis of Uncertainty
NASA Astrophysics Data System (ADS)
Tanaka, Katsumasa; Cho, Cheolhung; Krey, Volker; Patt, Anthony; Rafaj, Peter; Rao-Skirbekk, Shilpa; Wagner, Fabian
2010-05-01
., 2009) is employed to calculate climate responses including associated uncertainty and to estimate geoengineering profiles to cap the warming at 2°C since preindustrial. The inversion setup for the model ACC2 is used to estimate the uncertain parameters (e.g. climate sensitivity) against associated historical observations (e.g. global-mean surface air temperature). Our preliminary results show that under climate and scenario uncertainties, a geoengineering intervention to avoid an overshoot would be of medium intensity in the latter half of this century (≈ 1 Mt. Pinatubo eruption every 4 years in terms of stratospheric sulfur injections). The start year of the geoengineering intervention does not significantly influence the long-term geoengineering profile. However, a geoengineering intervention of medium intensity could bring about substantial environmental side effects such as the destruction of stratospheric ozone. Our results point to the necessity of persistently pursuing mainstream mitigation efforts. 2) Pollution Abatement and Geoengineering The second study examines the potential of geoengineering combined with clean air policy. A drastic air pollution abatement might result in an abrupt warming because it would suddenly remove the tropospheric aerosols which partly offset the background global warming (e.g. Andreae et al., 2005; Raddatz and Tanaka, 2010). This study investigates the magnitude of unrealized warming under a range of policy assumptions and associated uncertainties. Then the profile of geoengineering is estimated to suppress the warming that would accompany clean air policy. This study is the first attempt to explore uncertainty in the warming caused by clean air policy: Kloster et al. (2009), which assesses regional changes in climate and the hydrological cycle, did not, however, include associated uncertainties in the analysis. A variety of policy assumptions will be devised to represent various degrees of air pollution abatement. These
Dynamic wake prediction and visualization with uncertainty analysis
NASA Technical Reports Server (NTRS)
Holforty, Wendy L. (Inventor); Powell, J. David (Inventor)
2005-01-01
A dynamic wake avoidance system utilizes aircraft and atmospheric parameters readily available in flight to model and predict airborne wake vortices in real time. A novel combination of algorithms allows for a relatively simple yet robust wake model to be constructed based on information extracted from a broadcast. The system predicts the location and movement of the wake based on the nominal wake model and correspondingly performs an uncertainty analysis on the wake model to determine a wake hazard zone (no fly zone), which comprises a plurality of wake planes, each moving independently from another. The system selectively adjusts dimensions of each wake plane to minimize spatial and temporal uncertainty, thereby ensuring that the actual wake is within the wake hazard zone. The predicted wake hazard zone is communicated in real time directly to a user via a realistic visual representation. In an example, the wake hazard zone is visualized on a 3-D flight deck display to enable a pilot to visualize or see a neighboring aircraft as well as its wake. The system substantially enhances the pilot's situational awareness and allows for a further safe decrease in spacing, which could alleviate airport and airspace congestion.
Selection of Representative Models for Decision Analysis Under Uncertainty
NASA Astrophysics Data System (ADS)
Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.
2016-03-01
The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that are supposed to be analyzed so an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. To do so, first a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute-levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.
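One simple way to make a representativeness function concrete is to select the subset whose empirical risk curve (output CDF) stays closest to the full ensemble's, here with a greedy heuristic rather than the paper's optimization tool; the NPV sample and subset size are assumed:

```python
import numpy as np

rng = np.random.default_rng(6)

# Greedy representative-scenario selection: pick k scenarios whose empirical
# CDF of the output (the "risk curve") stays closest to the full ensemble's.
# A synthetic NPV sample stands in for the reservoir scenarios.
npv = rng.normal(100.0, 25.0, 500)   # one output value per scenario (assumed)

def cdf_gap(subset, full):
    """Max vertical distance between subset and full empirical CDFs."""
    grid = np.sort(full)
    c_full = np.arange(1, full.size + 1) / full.size
    c_sub = np.searchsorted(np.sort(subset), grid, side="right") / subset.size
    return np.max(np.abs(c_full - c_sub))

chosen, pool = [], list(range(npv.size))
for _ in range(9):                    # select 9 representative scenarios
    best = min(pool, key=lambda i: cdf_gap(npv[chosen + [i]], npv))
    chosen.append(best)
    pool.remove(best)
print(sorted(npv[chosen]))            # spans the distribution, low bias
```

Because each pick minimizes the remaining CDF gap, the subset avoids clustering in the optimistic or pessimistic tail, which is the bias the abstract warns against.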
Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Boardman, J.; Jones, J.A.; Harper, F.T.; Young, M.L.; Hora, S.C.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequence from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distribution for variables related to the MACCS and COSYMA deposited material and external dose models.
[Parameter uncertainty analysis for urban rainfall runoff modelling].
Huang, Jin-Liang; Lin, Jie; Du, Peng-Fei
2012-07-01
An urban watershed in Xiamen was selected to perform parameter uncertainty analysis for urban stormwater runoff modeling, in terms of identification and sensitivity analysis, based on the storm water management model (SWMM) using Monte Carlo sampling and the regionalized sensitivity analysis (RSA) algorithm. Results show that Dstore-Imperv, Dstore-Perv and Curve Number (CN) are the identifiable parameters, with larger K-S values in the hydrological and hydraulic module; the rank of K-S values in the hydrological and hydraulic module is Dstore-Imperv > CN > Dstore-Perv > N-Perv > conductivity > Con-Mann > N-Imperv. With regard to the water quality module, the parameters of the exponential washoff model (Coefficient and Exponent) and the Max. Buildup parameter of the saturation buildup model in three land cover types are the identifiable parameters with larger K-S values. In comparison, the K-S value of the rate constant in the three land use/cover types is smaller than those of Max. Buildup, Coefficient and Exponent.
WHEELER, TIMOTHY A.; WYSS, GREGORY D.; HARPER, FREDERICK T.
2000-11-01
Uncertainty distributions for specific parameters of the Cassini General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS-RTG) Final Safety Analysis Report consequence risk analysis were revised and updated. The revisions and updates were done for all consequence parameters for which relevant information exists from the joint project on Probabilistic Accident Consequence Uncertainty Analysis by the United States Nuclear Regulatory Commission and the Commission of European Communities.
Regional Frequency and Uncertainty Analysis of Extreme Precipitation in Bangladesh
NASA Astrophysics Data System (ADS)
Mortuza, M. R.; Demissie, Y.; Li, H. Y.
2014-12-01
The increased frequency of extreme precipitation events, especially those with multiday durations, is responsible for recent urban floods and associated significant losses of lives and infrastructure in Bangladesh. Reliable and routinely updated estimates of the frequency of occurrence of such extreme precipitation events are thus important for developing up-to-date hydraulic structures and stormwater drainage systems that can effectively minimize future risk from similar events. In this study, we have updated the intensity-duration-frequency (IDF) curves for Bangladesh using daily precipitation data from 1961 to 2010 and quantified the associated uncertainties. Regional frequency analysis based on L-moments is applied to 1-day, 2-day, and 5-day annual maximum precipitation series because of its advantages over at-site estimation. The regional frequency approach pools information from climatologically similar sites to make reliable quantile estimates, provided the pooling group is homogeneous and of reasonable size. We have used the region of influence (ROI) approach, along with a homogeneity measure based on L-moments, to identify the homogeneous pooling groups for each site. Five 3-parameter distributions (Generalized Logistic, Generalized Extreme Value, Generalized Normal, Pearson Type III, and Generalized Pareto) are used for a thorough selection of appropriate models that fit the sample data. Uncertainties related to the selection of the distributions and to the historical data are quantified using Bayesian Model Averaging and balanced bootstrap approaches, respectively. The results from this study can be used to update the current design and management of hydraulic structures as well as to explore spatio-temporal variations of extreme precipitation and associated risk.
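The sample L-moments underpinning this kind of regional frequency analysis can be computed from probability-weighted moments. A minimal sketch of the standard unbiased estimators for the first two L-moments and the L-CV, on illustrative data:

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments (l1, l2) and the L-CV (tau),
    via unbiased probability-weighted moment estimators."""
    x = np.sort(np.asarray(x, dtype=float))  # order statistics, ascending
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()                            # PWM beta_0
    b1 = np.sum((i - 1) / (n - 1) * x) / n   # PWM beta_1
    l1 = b0                                  # L-location (the mean)
    l2 = 2.0 * b1 - b0                       # L-scale
    return l1, l2, l2 / l1                   # L-CV = l2 / l1

# Illustrative annual-maximum sample (not Bangladesh data).
l1, l2, lcv = sample_l_moments([12.0, 35.0, 18.0, 44.0, 26.0])
```

Ratios such as the L-CV computed per site feed the homogeneity measure used to screen ROI pooling groups.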
Uncertainty Analysis of the Three Pagodas Fault-Source Geometry
NASA Astrophysics Data System (ADS)
Haller, K. M.
2015-12-01
Probabilistic seismic-hazard assessment generally relies on an earthquake catalog (to estimate future seismicity from the locations and rates of past earthquakes) and fault sources (to estimate future seismicity from the known paleoseismic history of surface rupture). The paleoseismic history of potentially active faults in Southeast Asia is addressed at few locations and spans only a few complete recurrence intervals; many faults remain unstudied. Even where the timing of a surface-rupturing earthquake is known, the extent of rupture may not be well constrained. Therefore, the subjective judgment of experts is often used to define the three-dimensional size of future ruptures; limited paleoseismic data can lead to large uncertainties in ground-motion hazard from fault sources because of the preferred models that underlie these judgments. The 300-km-long, strike-slip Three Pagodas fault in western Thailand is possibly one of the most active faults in the country. The fault parallels the plate boundary and may be characterized by a slip rate high enough to result in measurable ground motion at periods of interest for building design. The known paleoseismic history is limited and likely does not include the largest possible earthquake on the fault. This lack of knowledge raises the question: what sizes of earthquakes are expected? Preferred rupture models constrain possible magnitude-frequency distributions, and alternative rupture models can result in different ground-motion hazard near the fault. This analysis includes alternative rupture models for the Three Pagodas fault, a first-level check against gross modeling assumptions to assure that the source model is a reasonable reflection of observed data, and the resulting ground-motion hazard for each alternative. Inadequate paleoseismic data are an important source of uncertainty that can be compensated for by considering alternative rupture models for poorly known seismic sources.
Incorporating parametric uncertainty into population viability analysis models
McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.
2011-01-01
Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish, and wildlife populations, parametric uncertainty is often ignored or discarded in model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process in which parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulations that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
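The two-loop structure described here can be sketched as nested simulation loops: one parametric draw per replicate in the outer loop, one environmental draw per year in the inner loop. All demographic rates and thresholds below are hypothetical illustrations, not the piping plover estimates.

```python
import numpy as np

rng = np.random.default_rng(7)

def extinction_probability(n_reps=1000, n_years=25, param_se=0.05):
    """Two-loop PVA sketch. Outer loop: parametric uncertainty in the
    mean growth rate, drawn once per replicate. Inner loop: temporal
    (environmental) variance, drawn every year."""
    extinct = 0
    for _ in range(n_reps):
        # Outer loop draw: uncertain mean growth rate
        # (point estimate 0.98, standard error param_se -- illustrative).
        mean_lam = rng.normal(0.98, param_se) if param_se > 0 else 0.98
        n = 50.0                              # initial abundance
        for _ in range(n_years):
            lam = max(rng.normal(mean_lam, 0.15), 0.0)  # temporal variance
            n *= lam
        if n < 10.0:                          # quasi-extinction threshold
            extinct += 1
    return extinct / n_reps

p_with = extinction_probability(param_se=0.05)     # both loops active
p_without = extinction_probability(param_se=0.0)   # temporal variance only
```

As in the paper, folding parametric uncertainty into the replication loop inflates the estimated extinction risk relative to the temporal-variance-only simulation.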
Cardoni, Jeffrey N.; Kalinich, Donald A.
2014-02-01
Sandia National Laboratories (SNL) plans to conduct uncertainty analyses (UA) on the Fukushima Daiichi unit 1 (1F1) plant with the MELCOR code. The model to be used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and the Nuclear Regulatory Commission (NRC). However, that study examined only a handful of model inputs and boundary conditions, and its predictions yielded only fair agreement with plant data and current release estimates. The goal of this uncertainty study is to perform a focused evaluation of uncertainty in core melt progression behavior and its effect on key figures of merit (e.g., hydrogen production, vessel lower head failure, etc.). In preparation for the SNL Fukushima UA work, a scoping study has been completed to identify important core melt progression parameters for the uncertainty analysis. The study also lays out a preliminary UA methodology.
Uncertainty analysis of fission fraction for reactor antineutrino experiments
NASA Astrophysics Data System (ADS)
Ma, X. B.; Lu, F.; Wang, L. Z.; Chen, Y. X.; Zhong, W. L.; An, F. P.
2016-06-01
Reactor simulation is an important source of uncertainty for reactor neutrino experiments; how to evaluate the antineutrino flux uncertainty arising from reactor simulation is therefore an important question. In this study, a method for evaluating the antineutrino flux uncertainty arising from reactor simulation was proposed, taking the correlation coefficient into account. To apply this method in the Daya Bay antineutrino experiment, the open-source code DRAGON was improved and used to obtain the fission fractions and correlation coefficients. The average fission fractions from DRAGON and the SCIENCE code were compared, and the difference was less than 5% for all four isotopes. The uncertainty of the fission fraction was evaluated by comparing the simulated atomic densities of the four main isotopes with Takahama-3 experimental measurements. The uncertainty of the antineutrino flux arising from reactor simulation was then evaluated as 0.6% per core for the Daya Bay antineutrino experiment.
Experimental investigations for uncertainty quantification in brake squeal analysis
NASA Astrophysics Data System (ADS)
Renault, A.; Massa, F.; Lallemand, B.; Tison, T.
2016-04-01
The aim of this paper is to improve the correlation between experimental and numerical predictions of unstable frequencies for automotive brake systems under uncertainty. First, an experimental quantification of uncertainty and a discussion analysing the contributions of uncertainty to a numerical squeal simulation are proposed. Frequency and transient simulations are performed considering nominal values of model parameters, determined experimentally. The obtained results are compared with those derived from experimental tests to highlight the limitations of deterministic simulations. The effects of the different kinds of uncertainty detected in working conditions of the brake system (the pad boundary condition, the brake system material properties, and the pad surface topography) are discussed by defining different unstable mode classes. Finally, a correlation between experimental and numerical results considering uncertainty is successfully proposed for an industrial brake system. Results from the different comparisons also reveal a major influence of the pad topography and, consequently, of the contact distribution.
Brown, J.; Goossens, L.H.J.; Kraan, B.C.P.
1997-06-01
This volume is the second of a two-volume document that summarizes a joint project by the US Nuclear Regulatory Commission and the Commission of European Communities to assess uncertainties in the MACCS and COSYMA probabilistic accident consequence codes. These codes were developed primarily for estimating the risks presented by nuclear reactors based on postulated frequencies and magnitudes of potential accidents. This two-volume report, which examines mechanisms and uncertainties of transfer through the food chain, is the first in a series of five such reports. A panel of sixteen experts was formed to compile credible and traceable uncertainty distributions for food chain transfer that affect calculations of offsite radiological consequences. Seven of the experts reported on transfer into the food chain through soil and plants, nine reported on transfer via food products from animals, and two reported on both. The expert judgment elicitation procedure and its outcomes are described in these volumes. This volume contains seven appendices. Appendix A presents a brief discussion of the MACCS and COSYMA model codes. Appendix B is the structure document and elicitation questionnaire for the expert panel on soils and plants. Appendix C presents the rationales and responses of each of the members of the soils and plants expert panel. Appendix D is the structure document and elicitation questionnaire for the expert panel on animal transfer. The rationales and responses of each of the experts on animal transfer are given in Appendix E. Brief biographies of the food chain expert panel members are provided in Appendix F. Aggregated results of expert responses are presented in graph format in Appendix G.
A structured analysis of uncertainty surrounding modeled impacts of groundwater-extraction rules
NASA Astrophysics Data System (ADS)
Guillaume, Joseph H. A.; Qureshi, M. Ejaz; Jakeman, Anthony J.
2012-08-01
Integrating economic and groundwater models for groundwater management can help improve understanding of the trade-offs involved between conflicting socioeconomic and biophysical objectives. However, there is significant uncertainty in most strategic decision-making situations, including in the models constructed to represent them. If not addressed, this uncertainty may be used to challenge the legitimacy of the models and of decisions made using them. In this context, a preliminary uncertainty analysis was conducted of a dynamic coupled economic-groundwater model aimed at assessing groundwater extraction rules. The analysis demonstrates how a variety of uncertainties in such a model can be addressed. A number of methods are used, including propagation of scenarios and bounds on parameters, multiple models, block bootstrap time-series sampling, and robust linear regression for model calibration. These methods are described within the context of a theoretical uncertainty management framework, using a set of fundamental uncertainty management tasks and an uncertainty typology.
Myers, D. R.; Reda, I. M.; Wilcox, S. M.; Stoffel, T. L.
2004-04-01
The measurement of broadband solar radiation has grown in importance since the advent of solar renewable energy technologies in the 1970s and the concern about the Earth's radiation balance related to climate change in the 1990s. In parallel, standardized methods of uncertainty analysis and reporting have been developed. Historical and updated uncertainties are based on the current international standardized uncertainty analysis method. Despite the fact that new and sometimes overlooked sources of uncertainty have been identified over the period 1988 to 2004, uncertainty in broadband solar radiometric instrumentation remains at 3% to 5% for pyranometers and 2% to 3% for pyrheliometers. Improvements in characterizing correction functions for radiometer data may reduce total uncertainty. We analyze the theoretical standardized uncertainty sensitivity coefficients for the instrumentation calibration measurement equation and highlight the single parameter (thermal offset voltages) that contributes the most to the observed calibration responsivities.
Sensitivity and Uncertainty Analysis to Burnup Estimates on ADS using the ACAB Code
NASA Astrophysics Data System (ADS)
Cabellos, O.; Sanz, J.; Rodríguez, A.; González, E.; Embid, M.; Alvarez, F.; Reyes, S.
2005-05-01
Within the scope of the Accelerator Driven System (ADS) concept for nuclear waste management applications, the burnup uncertainty estimates due to uncertainty in the activation cross sections (XSs) are important regarding both the safety and the efficiency of the waste burning process. We have applied both sensitivity analysis and Monte Carlo methodology to actinides burnup calculations in a lead-bismuth cooled subcritical ADS. The sensitivity analysis is used to identify the reaction XSs and the dominant chains that contribute most significantly to the uncertainty. The Monte Carlo methodology gives the burnup uncertainty estimates due to the synergetic/global effect of the complete set of XS uncertainties. These uncertainty estimates are valuable to assess the need of any experimental or systematic re-evaluation of some uncertainty XSs for ADS.
Sensitivity and Uncertainty Analysis to Burn-up Estimates on ADS Using ACAB Code
Cabellos, O; Sanz, J; Rodriguez, A; Gonzalez, E; Embid, M; Alvarez, F; Reyes, S
2005-02-11
Within the scope of the Accelerator Driven System (ADS) concept for nuclear waste management applications, the burnup uncertainty estimates due to uncertainty in the activation cross sections (XSs) are important regarding both the safety and the efficiency of the waste burning process. We have applied both sensitivity analysis and Monte Carlo methodology to actinides burnup calculations in a lead-bismuth cooled subcritical ADS. The sensitivity analysis is used to identify the reaction XSs and the dominant chains that contribute most significantly to the uncertainty. The Monte Carlo methodology gives the burnup uncertainty estimates due to the synergetic/global effect of the complete set of XS uncertainties. These uncertainty estimates are valuable to assess the need of any experimental or systematic reevaluation of some uncertainty XSs for ADS.
Sensitivity and Uncertainty Analysis to Burnup Estimates on ADS using the ACAB Code
Cabellos, O.; Sanz, J.; Rodriguez, A.; Gonzalez, E.; Embid, M.; Alvarez, F.; Reyes, S.
2005-05-24
Within the scope of the Accelerator Driven System (ADS) concept for nuclear waste management applications, the burnup uncertainty estimates due to uncertainty in the activation cross sections (XSs) are important regarding both the safety and the efficiency of the waste burning process. We have applied both sensitivity analysis and Monte Carlo methodology to actinides burnup calculations in a lead-bismuth cooled subcritical ADS. The sensitivity analysis is used to identify the reaction XSs and the dominant chains that contribute most significantly to the uncertainty. The Monte Carlo methodology gives the burnup uncertainty estimates due to the synergetic/global effect of the complete set of XS uncertainties. These uncertainty estimates are valuable to assess the need of any experimental or systematic re-evaluation of some uncertainty XSs for ADS.
Probabilistic uncertainty analysis of an FRF of a structure using a Gaussian process emulator
NASA Astrophysics Data System (ADS)
Fricker, Thomas E.; Oakley, Jeremy E.; Sims, Neil D.; Worden, Keith
2011-11-01
This paper introduces methods for probabilistic uncertainty analysis of a frequency response function (FRF) of a structure obtained via a finite element (FE) model. The methods are applicable to computationally expensive FE models, making use of a Bayesian metamodel known as an emulator. The emulator produces fast predictions of the FE model output, but also accounts for the additional uncertainty induced by only having a limited number of model evaluations. Two approaches to the probabilistic uncertainty analysis of FRFs are developed. The first considers the uncertainty in the response at discrete frequencies, giving pointwise uncertainty intervals. The second considers the uncertainty in an entire FRF across a frequency range, giving an uncertainty envelope function. The methods are demonstrated and compared to alternative approaches in a practical case study.
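The emulator idea can be sketched in a few lines of plain NumPy: fit a Gaussian process to a handful of "expensive" model runs, then read off fast posterior mean predictions and a pointwise code-uncertainty variance. The squared-exponential covariance, the one-DOF oscillator stand-in for the FE model, and all numbers below are illustrative assumptions, not the paper's case study.

```python
import numpy as np

def rbf(a, b, ls=0.2, var=1.0):
    """Squared-exponential covariance matrix between 1-D inputs."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

# Stand-in for an expensive FE run: magnitude of a one-DOF FRF.
def expensive_frf(f):
    return 1.0 / np.sqrt((1.0 - f**2) ** 2 + (0.1 * f) ** 2)

f_train = np.linspace(0.2, 1.8, 9)   # only a few "model evaluations"
y_train = expensive_frf(f_train)

# GP emulator: posterior mean and pointwise variance on a fine grid.
f_new = np.linspace(0.2, 1.8, 100)
K = rbf(f_train, f_train) + 1e-8 * np.eye(f_train.size)   # jitter
K_s = rbf(f_new, f_train)
alpha = np.linalg.solve(K, y_train)
mean = K_s @ alpha                    # fast predictions of the FE output
var = rbf(f_new, f_new).diagonal() - np.einsum(
    "ij,ij->i", K_s, np.linalg.solve(K, K_s.T).T
)                                     # uncertainty from limited runs
```

The `var` term is what distinguishes an emulator from a plain interpolator: it quantifies the extra uncertainty induced by having only a limited number of model evaluations, collapsing to near zero at the training frequencies.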
ACCOUNTING FOR CALIBRATION UNCERTAINTIES IN X-RAY ANALYSIS: EFFECTIVE AREAS IN SPECTRAL FITTING
Lee, Hyunsook; Kashyap, Vinay L.; Drake, Jeremy J.; Ratzlaff, Pete; Siemiginowska, Aneta
2011-04-20
While considerable advances have been made in accounting for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have been generally ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here, we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with a Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.
CASMO5/TSUNAMI-3D spent nuclear fuel reactivity uncertainty analysis
Ferrer, R.; Rhodes, J.; Smith, K.
2012-07-01
The CASMO5 lattice physics code is used in conjunction with the TSUNAMI-3D sequence in ORNL's SCALE 6 code system to estimate the uncertainties in hot-to-cold reactivity changes due to cross-section uncertainty for PWR assemblies at various burnup points. The goal of the analysis is to establish the multiplication factor uncertainty similarity between various fuel assemblies at different conditions in a quantifiable manner and to obtain a bound on the hot-to-cold reactivity uncertainty over the various assembly types and burnup attributed to fundamental cross-section data uncertainty. (authors)
Latin hypercube sampling as a tool in uncertainty analysis of computer models
McKay, M.D.
1992-09-01
This paper addresses several aspects of the analysis of uncertainty in the output of computer models arising from uncertainty in inputs (parameters). Uncertainty of this type, which is separate and distinct from the randomness of a stochastic model, most often arises when input values are guesstimates, or when they are estimated from data, or when the input parameters do not actually correspond to observable quantities, e.g., in lumped-parameter models. Uncertainty in the output is quantified in its probability distribution, which results from treating the inputs as random variables. The assessment of which inputs are important with respect to uncertainty is done relative to the probability distribution of the output.
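A basic Latin hypercube sampler of the kind McKay describes can be written in a few lines: divide each input dimension into equal-probability strata, draw one point per stratum, and shuffle the strata independently per dimension. This is a minimal sketch on the unit hypercube; mapping to actual input distributions (e.g., via inverse CDFs) is left out.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Basic LHS on the unit hypercube: exactly one point falls in
    each of n_samples equal strata in every dimension, with strata
    shuffled independently per dimension."""
    jitter = rng.random((n_samples, n_dims))          # position within stratum
    strata = np.column_stack(
        [rng.permutation(n_samples) for _ in range(n_dims)]
    )                                                 # stratum index per dim
    return (strata + jitter) / n_samples

rng = np.random.default_rng(0)
x = latin_hypercube(10, 3, rng)
```

Because every stratum of every input is sampled exactly once, the output distribution is explored more evenly than with simple random sampling of the same size.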
Measurement uncertainty sources analysis for parasitic time grating sensors
NASA Astrophysics Data System (ADS)
Yang, Hongtao; Zhou, Jiao; Fan, Bin; Fei, Yetai; Peng, Donglin; Wu, Tianfeng
2016-01-01
The signal quality of the traveling wave and the measurement accuracy of a parasitic time grating can be improved by optimizing its structure. This optimization can be guided by building the electrical traveling-wave equation with respect to the structure, together with the traveling-wave signal generation principle. Based on Ansoft Maxwell simulation, the important electromagnetic parameters and the main uncertainty sources were analyzed and determined. In the simulation, parameters such as the excitation signal frequency, the gap width, the relative area of the probe, the number of coils, the excitation signal amplitude, and the core length were set to different values. The simulation results show that the excitation signal frequency, the gap width, and the relative area between the probe and the rotor are the major factors influencing the angular measurement accuracy of the parasitic time grating sensor, while the number of coils, the excitation signal amplitude, and the core length are secondary factors. These results can be used to optimize the structure of the parasitic time grating and to correct measurement error.
Approximate Uncertainty Modeling in Risk Analysis with Vine Copulas
Bedford, Tim; Daneshkhah, Alireza
2015-01-01
Many applications of risk analysis require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modeling joint uncertainties with probability distributions. This article focuses on new methodologies for copulas by developing work of Cooke, Bedford, Kurowicka, and others on vines as a way of constructing higher dimensional distributions that do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. The article provides a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. It further operationalizes this result by showing how minimum information copulas can be used to provide parametric classes of copulas that have such good levels of approximation. We extend previous approaches using vines by considering nonconstant conditional dependencies, which are particularly relevant in financial risk modeling. We discuss how such models may be quantified, in terms of expert judgment or by fitting data, and illustrate the approach by modeling two financial data sets. PMID:26332240
Uncertainty Analysis of NASA Glenn's 8- by 6-Foot Supersonic Wind Tunnel
NASA Technical Reports Server (NTRS)
Stephens, Julia E.; Hubbard, Erin P.; Walter, Joel A.; McElroy, Tyler
2016-01-01
An analysis was performed to determine the measurement uncertainty of the Mach number of the 8- by 6-foot Supersonic Wind Tunnel at the NASA Glenn Research Center. This paper details the analysis process used, including methods for handling limited data and complicated data correlations. Due to the complexity of the equations used, a Monte Carlo method was utilized for this uncertainty analysis. A summary of the findings is presented as it pertains to understanding what the uncertainties are, how they impact various research tests in the facility, and methods of reducing the uncertainties in the future.
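A Monte Carlo propagation of pressure-measurement uncertainty into Mach number might look like the following sketch, using the standard isentropic total-to-static pressure relation. The pressure readings and 1-sigma uncertainties are illustrative assumptions, not the facility's instrumentation values.

```python
import numpy as np

rng = np.random.default_rng(1)

def mach_from_pressures(p0, p, gamma=1.4):
    """Mach number from the isentropic total/static pressure ratio:
    M = sqrt( 2/(gamma-1) * ((p0/p)^((gamma-1)/gamma) - 1) )."""
    return np.sqrt(
        2.0 / (gamma - 1.0) * ((p0 / p) ** ((gamma - 1.0) / gamma) - 1.0)
    )

# Hypothetical nominal readings with 1-sigma measurement uncertainties.
n = 100_000
p0 = rng.normal(101.3, 0.3, n)   # total pressure, kPa
p = rng.normal(19.4, 0.1, n)     # static pressure, kPa

mach = mach_from_pressures(p0, p)
m_mean, m_std = mach.mean(), mach.std()
```

Drawing the correlated or limited-data inputs from appropriate joint distributions, rather than independent normals as here, is exactly the complication the paper's Monte Carlo treatment addresses.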
Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models
The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...
Problem Solving Environment for Uncertainty Analysis and Design Exploration
2008-05-31
PSUADE is a software system that is used to study the relationships between the inputs and outputs of general simulation models for the purpose of performing uncertainty and sensitivity analyses on simulation models.
Problem Solving Environment for Uncertainty Analysis and Design Exploration
2010-05-28
PSUADE is a software system that is used to study the relationships between the inputs and outputs of general simulation models for the purpose of performing uncertainty and sensitivity analyses on simulation models.
Problem Solving Environment for Uncertainty Analysis and Design Exploration
2009-09-02
PSUADE is a software system that is used to study the relationships between the inputs and outputs of general simulation models for the purpose of performing uncertainty and sensitivity analyses on simulation models.
Problem Solving Environment for Uncertainty Analysis and Design Exploration
2011-10-26
PSUADE is a software system that is used to study the relationships between the inputs and outputs of general simulation models for the purpose of performing uncertainty and sensitivity analyses on simulation models.
Model parameter uncertainty analysis for an annual field-scale P loss model
NASA Astrophysics Data System (ADS)
Bolster, Carl H.; Vadas, Peter A.; Boykin, Debbie
2016-08-01
Phosphorus (P) fate and transport models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. Because all models are simplifications of complex systems, there will exist an inherent amount of uncertainty associated with their predictions. It is therefore important that efforts be directed at identifying, quantifying, and communicating the different sources of model uncertainties. In this study, we conducted an uncertainty analysis with the Annual P Loss Estimator (APLE) model. Our analysis included calculating parameter uncertainties and confidence and prediction intervals for five internal regression equations in APLE. We also estimated uncertainties of the model input variables based on values reported in the literature. We then predicted P loss for a suite of fields under different management and climatic conditions while accounting for uncertainties in the model parameters and inputs, and compared the relative contributions of these two sources of uncertainty to the overall uncertainty associated with predictions of P loss. Both the overall magnitude of the prediction uncertainties and the relative contributions of the two sources of uncertainty varied depending on management practices and field characteristics. This was due to differences in the number of model input variables and the uncertainties in the regression equations associated with each P loss pathway. Inspection of the uncertainties in the five regression equations brought attention to a previously unrecognized limitation with the equation used to partition surface-applied fertilizer P between leaching and runoff losses. As a result, an alternate equation was identified that provided similar predictions with much less uncertainty. Our results demonstrate how a thorough uncertainty and model residual analysis can be used to identify limitations with a model. Such insight can then be used to guide future data collection and model
Uncertainty Analysis of the Single-Vector Force Balance Calibration System
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Liu, Tianshu
2002-01-01
This paper presents an uncertainty analysis of the Single-Vector Force Balance Calibration System (SVS). This study is focused on the uncertainty involved in setting the independent variables during the calibration experiment. By knowing the uncertainty in the calibration system, the fundamental limits of the calibration accuracy of a particular balance can be determined. A brief description of the SVS mechanical system is provided. A mathematical model is developed to describe the mechanical system elements. A sensitivity analysis of these parameters is carried out through numerical simulations to assess the sensitivity of the total uncertainty to the elemental error sources. These sensitivity coefficients provide valuable information regarding the relative significance of the elemental sources of error. An example calculation of the total uncertainty for a specific balance is provided. Results from this uncertainty analysis are specific to the Single-Vector System, but the approach is broad in nature and therefore applicable to other measurement and calibration systems.
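The sensitivity-analysis step described here can be sketched as central-difference sensitivity coefficients of a measurement equation, combined by root-sum-square to rank elemental error sources. The deadweight load equation and all uncertainty magnitudes below are hypothetical illustrations, not the SVS mechanical-system model.

```python
import numpy as np

def applied_load(m, g, theta):
    """Hypothetical stand-in for a calibration measurement equation:
    vertical component of a deadweight load at misalignment theta."""
    return m * g * np.cos(theta)

# Nominal values and elemental standard uncertainties (illustrative).
nominal = {"m": 10.0, "g": 9.80665, "theta": 0.002}
u = {"m": 1e-4, "g": 1e-5, "theta": 5e-4}

# Central-difference sensitivity coefficients c_i = dF/dx_i,
# evaluated numerically as in a simulation-based sensitivity study.
coeffs = {}
for name in nominal:
    hi, lo = dict(nominal), dict(nominal)
    h = 1e-6 + 1e-6 * abs(nominal[name])
    hi[name] += h
    lo[name] -= h
    coeffs[name] = (applied_load(**hi) - applied_load(**lo)) / (2 * h)

# Root-sum-square combination: each term c_i * u_i shows the relative
# significance of that elemental error source.
u_total = np.sqrt(sum((coeffs[k] * u[k]) ** 2 for k in u))
```

Comparing the individual `c_i * u_i` terms is what identifies which elemental source dominates the total, mirroring the paper's use of sensitivity coefficients.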
LCOE Uncertainty Analysis for Hydropower using Monte Carlo Simulations
O'Connor, Patrick W; Uria Martinez, Rocio; Kao, Shih-Chieh
2015-01-01
Levelized Cost of Energy (LCOE) is an important metric to evaluate the cost and performance of electricity generation alternatives and, combined with other measures, can be used to assess the economics of future hydropower development. Multiple assumptions on input parameters are required to calculate the LCOE, each of which contains some level of uncertainty, in turn affecting the accuracy of LCOE results. This paper explores these uncertainties, their sources, and ultimately the level of variability they introduce at the screening level of project evaluation for non-powered dams (NPDs) across the U.S. Owing to site-specific differences in design, the LCOE for hydropower varies significantly from project to project, unlike technologies with more standardized configurations such as wind and gas. Therefore, to assess the impact of LCOE input uncertainty on the economics of U.S. hydropower resources, these uncertainties must be modeled across the population of potential opportunities. To demonstrate the impact of uncertainty, resource data from a recent nationwide NPD resource assessment (Hadjerioua et al., 2012) and screening-level predictive cost equations (O'Connor et al., 2015) are used to quantify and evaluate uncertainties in project capital and operations & maintenance costs, and generation potential at broad scale. LCOE dependence on financial assumptions is also evaluated on a sensitivity basis to explore ownership/investment implications on project economics for the U.S. hydropower fleet. The results indicate that the LCOE for U.S. NPDs varies substantially. The LCOE estimates for the potential NPD projects of capacity greater than 1 MW range from 40 to 182 $/MWh, with an average of 106 $/MWh. 4,000 MW could be developed through projects with individual LCOE values below 100 $/MWh. The results also indicate that typically 90% of LCOE uncertainty can be attributed to uncertainties in capital costs and energy production; however
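A minimal sketch of the LCOE calculation and its Monte Carlo treatment, using a capital recovery factor to annualize capital cost. The cost and generation distributions below are illustrative placeholders for a hypothetical NPD project, not values from the assessment.

```python
import numpy as np

rng = np.random.default_rng(3)

def lcoe(capex, om, aep_mwh, rate=0.07, years=30):
    """Levelized cost of energy in $/MWh: annualized capital cost
    (via the capital recovery factor) plus annual O&M, divided by
    annual energy production."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return (crf * capex + om) / aep_mwh

# Screening-level input distributions for a hypothetical NPD project.
n = 50_000
capex = rng.lognormal(np.log(25e6), 0.3, n)   # capital cost, $
om = rng.lognormal(np.log(0.5e6), 0.2, n)     # O&M cost, $/yr
aep = rng.normal(20_000, 2_000, n)            # annual energy, MWh/yr

samples = lcoe(capex, om, aep)                # LCOE distribution, $/MWh
```

Repeating such draws project-by-project across a resource inventory, and varying `rate` and `years` on a sensitivity basis, reproduces the structure of the screening analysis described above.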
Sensitivity and uncertainty analysis applied to the JHR reactivity prediction
Leray, O.; Vaglio-Gaudard, C.; Hudelot, J. P.; Santamarina, A.; Noguere, G.; Di-Salvo, J.
2012-07-01
The on-going AMMON program in the EOLE reactor at CEA Cadarache (France) provides experimental results to qualify the HORUS-3D/N neutronics calculation scheme used for the design and safety studies of the new Material Testing Jules Horowitz Reactor (JHR). This paper presents the determination of technological and nuclear data uncertainties on the core reactivity and the propagation of the latter from the AMMON experiment to JHR. The technological uncertainty propagation was performed with a direct perturbation methodology using the 3D French stochastic code TRIPOLI4 and a statistical methodology using the 2D French deterministic code APOLLO2-MOC, which leads to a value of 289 pcm (1σ). The nuclear data uncertainty propagation relies on a sensitivity study on the main isotopes and the use of a retroactive marginalization method applied to the JEFF 3.1.1 ²⁷Al evaluation in order to obtain a realistic multi-group covariance matrix associated with the considered evaluation. This nuclear data uncertainty propagation leads to a Keff uncertainty of 624 pcm for the JHR core and 684 pcm for the AMMON reference configuration core. Finally, transposition and reduction of the prior uncertainty were made using the representativity method, which demonstrates the similarity of the AMMON experiment with JHR (the representativity factor is 0.95). The final impact of JEFF 3.1.1 nuclear data on the Begin Of Life (BOL) JHR reactivity calculated by HORUS-3D/N V4.0 is a bias of +216 pcm with an associated posterior uncertainty of 304 pcm (1σ). (authors)
Uncertainty Analysis of RELAP5-3D
Alexandra E Gertman; Dr. George L Mesina
2012-07-01
As world-wide energy consumption continues to increase, so does the demand for alternative energy sources such as nuclear energy. Nuclear power plants currently supply over 370 gigawatts of electricity, and more than 60 new nuclear reactors have been commissioned by 15 different countries. The primary concern for nuclear power plant operation and licensing has been safety. The safe operation of nuclear power plants is no simple matter: it involves the training of operators, the design of the reactor, and equipment and design upgrades throughout the lifetime of the reactor. To safely design, operate, and understand nuclear power plants, industry and government alike have relied upon best-estimate simulation codes, which allow an accurate model of any given plant to be created with well-defined margins of safety. The most widely used of these best-estimate simulation codes in the nuclear power industry is RELAP5-3D. Our project focused on improving the modeling capabilities of RELAP5-3D by developing uncertainty estimates for its calculations. This work involved analyzing high-, medium-, and low-ranked phenomena from an INL PIRT on a small-break Loss-Of-Coolant Accident, as well as an analysis of a large-break Loss-Of-Coolant Accident. Statistical analyses were performed using correlation coefficients. To perform the studies, computer programs were written that modify a template RELAP5 input deck to produce one deck for each combination of key input parameters. Python scripting enabled the running of the generated input files with RELAP5-3D on INL's massively parallel cluster system. Data from the studies were collected and analyzed with SAS. A summary of the results of our studies is presented.
Principles and applications of measurement and uncertainty analysis in research and calibration
Wells, C.V.
1992-11-01
Interest in measurement uncertainty analysis has grown in the past several years as it has spread to new fields of application, and research and development of uncertainty methodologies have continued. This paper discusses the subject from the perspectives of both research and calibration environments. It presents a history of the development and an overview of the principles of uncertainty analysis embodied in the United States National Standard, ANSI/ASME PTC 19.1-1985, Measurement Uncertainty. Examples are presented in which uncertainty analysis was utilized or is needed to gain further knowledge of a particular measurement process and to characterize final results. Measurement uncertainty analysis provides a quantitative estimate of the interval about a measured value or an experiment result within which the true value of that quantity is expected to lie. Years ago, Harry Ku of the United States National Bureau of Standards stated that "The informational content of the statement of uncertainty determines, to a large extent, the worth of the calibrated value." Today, that statement is just as true about calibration or research results as it was in 1968. Why is that true? What kind of information should we include in a statement of uncertainty accompanying a calibrated value? How and where do we get the information to include in an uncertainty statement? How should we interpret and use measurement uncertainty information? This discussion will provide answers to these and other questions about uncertainty in research and in calibration. The methodology to be described has been developed by national and international groups over the past nearly thirty years, and individuals were publishing information even earlier. Yet the work is largely unknown in many science and engineering arenas. I will illustrate various aspects of uncertainty analysis with some examples drawn from the radiometry measurement and calibration discipline from research activities.
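The kind of uncertainty statement discussed above combines a bias (systematic) limit with a precision (random) index. A minimal sketch in the spirit of ANSI/ASME PTC 19.1-1985 follows; the numerical values are illustrative, not from the paper's radiometry examples.

```python
import math

def expanded_uncertainty_rss(bias_limit, precision_index, t=2.0):
    """Root-sum-square combination: U_RSS = sqrt(B^2 + (t*S)^2)."""
    return math.sqrt(bias_limit ** 2 + (t * precision_index) ** 2)

def expanded_uncertainty_add(bias_limit, precision_index, t=2.0):
    """Additive combination: U_ADD = B + t*S (the more conservative interval)."""
    return bias_limit + t * precision_index

# Illustrative values, e.g. in % of reading: bias limit B and precision index S.
B, S = 0.30, 0.10
u_rss = expanded_uncertainty_rss(B, S)
u_add = expanded_uncertainty_add(B, S)
```

Reporting either `u_rss` or `u_add` alongside the measured value, together with how B and S were obtained, is the "informational content" the Ku quotation refers to.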
Harper, F.T.; Young, M.L.; Miller, L.A.
1995-01-01
Two new probabilistic accident consequence codes, MACCS and COSYMA, completed in 1990, estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The objective was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation, developed independently, was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model along with the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. Resulting distributions closely replicated the aggregated elicited wet deposition distributions. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the third of a three-volume document describing the project and contains descriptions of the probability assessment principles; the expert identification and selection process; the weighting methods used; the inverse modeling methods; case structures; and summaries of the consequence codes.
Bixler, Nathan E.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie; Eckert-Gallup, Aubrey Celia; Mattie, Patrick D.; Ghosh, S. Tina
2014-02-01
This paper describes the convergence of MELCOR Accident Consequence Code System, Version 2 (MACCS2) probabilistic results of offsite consequences for the uncertainty analysis of the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout scenario at the Peach Bottom Atomic Power Station. The consequence metrics evaluated are individual latent-cancer fatality (LCF) risk and individual early fatality risk. Consequence results are presented as conditional risk (i.e., assuming the accident occurs, risk per event) to individuals of the public as a result of the accident. In order to verify convergence for this uncertainty analysis, as recommended by the Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards, a ‘high’ source term from the original population of Monte Carlo runs has been selected to be used for: (1) a study of the distribution of consequence results stemming solely from epistemic uncertainty in the MACCS2 parameters (i.e., separating the effect from the source term uncertainty), and (2) a comparison between Simple Random Sampling (SRS) and Latin Hypercube Sampling (LHS) in order to validate the original results obtained with LHS. Three replicates (each using a different random seed) of size 1,000 each using LHS and another set of three replicates of size 1,000 using SRS are analyzed. The results show that the LCF risk results are well converged with either LHS or SRS sampling. The early fatality risk results are less well converged at radial distances beyond 2 miles, and this is expected due to the sparse data (predominance of “zero” results).
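The SRS-vs-LHS replicate comparison described above can be illustrated on a toy response. The sketch below uses a hand-rolled one-dimensional Latin hypercube (one draw per equal-probability stratum) and a made-up response function; the sample sizes echo the study's replicates of 1,000, but everything else is illustrative.

```python
import random
import statistics

def srs(n):
    """Simple random sample of n points on [0, 1)."""
    return [random.random() for _ in range(n)]

def lhs(n):
    """Latin hypercube sample: one point in each of n equal strata, shuffled."""
    pts = [(i + random.random()) / n for i in range(n)]
    random.shuffle(pts)
    return pts

def mean_response(samples):
    """Toy consequence model: mean of x^2 over the sampled parameter."""
    return statistics.mean(x ** 2 for x in samples)

random.seed(0)
# Three replicates of size 1,000 per sampling scheme, as in the study.
srs_means = [mean_response(srs(1000)) for _ in range(3)]
lhs_means = [mean_response(lhs(1000)) for _ in range(3)]
```

The true mean is 1/3; the LHS replicates cluster much more tightly around it because stratification removes most of the sampling variance for a smooth response, which is the mechanism behind the convergence comparison in the paper.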
Uncertainty and Sensitivity Analysis in Performance Assessment for the Waste Isolation Pilot Plant
Helton, J.C.
1998-12-17
The Waste Isolation Pilot Plant (WIPP) is under development by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. This development has been supported by a sequence of performance assessments (PAs) carried out by Sandia National Laboratories (SNL) to assess what is known about the WIPP and to provide guidance for future DOE research and development activities. Uncertainty and sensitivity analysis procedures based on Latin hypercube sampling and regression techniques play an important role in these PAs by providing an assessment of the uncertainty in important analysis outcomes and identifying the sources of this uncertainty. Performance assessments for the WIPP are conceptually and computationally interesting due to regulatory requirements to assess and display the effects of both stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, where stochastic uncertainty arises from the possible disruptions that could occur over the 10,000 yr regulatory period associated with the WIPP and subjective uncertainty arises from an inability to unambiguously characterize the many models and associated parameters required in a PA for the WIPP. The interplay between uncertainty analysis, sensitivity analysis, stochastic uncertainty and subjective uncertainty is discussed and illustrated in the context of a recent PA carried out by SNL to support an application by the DOE to the U.S. Environmental Protection Agency for the certification of the WIPP for the disposal of TRU waste.
Use of paired simple and complex models to reduce predictive bias and quantify uncertainty
NASA Astrophysics Data System (ADS)
Doherty, John; Christensen, Steen
2011-12-01
Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promote good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of model simplification, and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive
We use Bayesian uncertainty analysis to explore how to estimate pollutant exposures from biomarker concentrations. The growing number of national databases with exposure data makes such an analysis possible. They contain datasets of pharmacokinetic biomarkers for many polluta...
Gregory, Kent J; Pattison, John E; Bibbo, Giovanni
2015-03-01
The minimal dose covering 90 % of the prostate volume (D90) is arguably the most important dosimetric parameter in low-dose-rate prostate seed brachytherapy. In this study, an analysis of the measurement uncertainties in D90 from low-dose-rate prostate seed brachytherapy was conducted for two common treatment procedures with two different post-implant dosimetry methods. The analysis was undertaken in order to determine the magnitude of D90 uncertainty, how the magnitude of the uncertainty varied when D90 was calculated using different dosimetry methods, and which factors were the major contributors to the uncertainty. The analysis considered the prostate as homogeneous and tissue-equivalent, made use of published data as well as original data collected specifically for this analysis, and was performed according to the Guide to the expression of uncertainty in measurement (GUM). It was found that when prostate imaging and seed implantation were conducted in two separate sessions using only CT images for post-implant analysis, the expanded uncertainty in D90 values was about 25 % at the 95 % confidence interval. When prostate imaging and seed implantation were conducted during a single session using CT and ultrasound images for post-implant analysis, the expanded uncertainty in D90 values was about 33 %. Methods for reducing these uncertainty levels are discussed. It was found that variations in contouring the target tissue made the largest contribution to D90 uncertainty, while the uncertainty in seed source strength made only a small contribution. It is important that clinicians appreciate the overall magnitude of D90 uncertainty and understand the factors that affect it so that clinical decisions are soundly based and resources are appropriately allocated.
Population Uncertainty in Model Ecosystem: Analysis by Stochastic Differential Equation
NASA Astrophysics Data System (ADS)
Morita, Satoru; Tainaka, Kei-ichi; Nagata, Hiroyasu; Yoshimura, Jin
2008-09-01
Perturbation experiments are carried out by the numerical simulations of a contact process and its mean-field version. Here, the mortality rate increases or decreases suddenly. It is known that fluctuation enhancement (FE) occurs after perturbation, where FE indicates population uncertainty. In the present paper, we develop a new theory of stochastic differential equation. The agreement between the theory and the mean-field simulation is almost perfect. This theory enables us to find a much stronger FE than that reported previously. We discuss the population uncertainty in the recovering process of endangered species.
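The perturbation experiment described above (a sudden change in mortality rate followed by enhanced fluctuation) can be mimicked with a simple Euler-Maruyama integration of a logistic stochastic differential equation with demographic noise. The equation form and all parameter values below are illustrative, not those of the paper's contact-process model.

```python
import math
import random

def simulate(steps=2000, dt=0.01, r=1.0, K=1.0, sigma=0.1, shock_at=1000):
    """Euler-Maruyama for dx = g*x*(1 - x/K) dt + sigma*sqrt(x) dW,
    where the growth rate g is halved mid-run (a sudden mortality increase)."""
    x, path = 0.5, []
    for t in range(steps):
        growth = r if t < shock_at else 0.5 * r   # perturbation at shock_at
        dw = random.gauss(0.0, math.sqrt(dt))     # Wiener increment
        x += growth * x * (1 - x / K) * dt + sigma * math.sqrt(max(x, 0.0)) * dw
        x = max(x, 0.0)                           # population stays nonnegative
        path.append(x)
    return path

random.seed(2)
path = simulate()
```

Because the restoring force weakens after the shock while the noise term does not, the post-shock segment of `path` fluctuates more widely, which is the fluctuation enhancement the theory quantifies.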
Assessment of water quality management with a systematic qualitative uncertainty analysis.
Chen, Chi-Feng; Ma, Hwong-wen; Reckhow, Kenneth H
2007-03-01
Uncertainty is an inevitable source of noise in water quality management and will weaken the adequacy of decisions. Uncertainty is derived from imperfect information, natural variability, and knowledge-based inconsistency. To make better decisions, it is necessary to reduce uncertainty. Conventional uncertainty analyses have focused on quantifying the uncertainty of parameters and variables in a probabilistic framework. However, the foundational properties and basic constraints might influence the entire system more than the quantifiable elements and have to be considered in initial analysis steps. According to binary classification, uncertainty includes quantitative uncertainty and non-quantitative uncertainty, which is also called qualitative uncertainty. Qualitative uncertainty originates from human subjective and biased beliefs. This study provides an understanding of qualitative uncertainty in terms of its conceptual definitions and practical applications. A systematic process of qualitative uncertainty analysis is developed for assisting complete uncertainty analysis, in which a qualitative network could then be built with qualitative relationships and quantifiable functions. In the proposed framework, a knowledge elicitation procedure is required to identify influential factors and their interrelationship. To limit biased information, a checklist is helpful to construct the qualitative network. The checklist helps one to ponder arbitrary assumptions that have often been taken for granted and may yield an incomplete or inappropriate decision analysis. The total maximum daily loads (TMDL) program is used as a surrogate for water quality management in this study. Fifteen uncertainty causes of TMDL programs are elicited by reviewing an influence diagram, and a checklist is formed with tabular interrogations corresponding to each uncertainty cause. The checklist enables decision makers to gain insight on the uncertainty level of the system at early steps as a
Worst case uncertainty estimates for routine instrumental analysis.
da Silva, Ricardo J N Bettencourt; Santos, Júlia R; Camões, M Filomena G F C
2002-07-01
A methodology for the worst case measurement uncertainty estimation for analytical methods which include an instrumental quantification step, adequate for routine determinations, is presented. Although the methodology presented should be based on a careful evaluation of the analytical method, the resulting daily calculations are very simple. The methodology is based on the estimation of the maximum value for the different sources of uncertainty and requires the definition of limiting values for certain analytical parameters. The simplification of the instrumental quantification uncertainty estimation involves the use of the standard deviation obtained from control charts relating to the concentrations estimated from the calibration curves for control standards at the highest calibration level. Three levels of simplification are suggested, as alternatives to the detailed approach, which can be selected according to the proximity of the sample results to decision limits. These approaches were applied to the determination of pesticide residues in apples (CEN, EN 12393), for which the most simplified approach showed a relative expanded uncertainty of 37.2% for a confidence level of approximately 95%.
NASA Astrophysics Data System (ADS)
Pu, Zhiqiang; Tan, Xiangmin; Fan, Guoliang; Yi, Jianqiang
2014-08-01
Flexible air-breathing hypersonic vehicles feature significant uncertainties which pose huge challenges to robust controller designs. In this paper, four major categories of uncertainties are analyzed, that is, uncertainties associated with flexible effects, aerodynamic parameter variations, external environmental disturbances, and control-oriented modeling errors. A uniform nonlinear uncertainty model is explored for the first three uncertainties which lumps all uncertainties together and consequently is beneficial for controller synthesis. The fourth uncertainty is additionally considered in stability analysis. Based on these analyses, the starting point of the control design is to decompose the vehicle dynamics into five functional subsystems. Then a robust trajectory linearization control (TLC) scheme consisting of five robust subsystem controllers is proposed. In each subsystem controller, TLC is combined with the extended state observer (ESO) technique for uncertainty compensation. The stability of the overall closed-loop system with the four aforementioned uncertainties and additional singular perturbations is analyzed. Particularly, the stability of nonlinear ESO is also discussed from a Liénard system perspective. At last, simulations demonstrate the great control performance and the uncertainty rejection ability of the robust scheme.
An introductory guide to uncertainty analysis in environmental and health risk assessment
Hoffman, F.O.; Hammonds, J.S.
1992-10-01
To compensate for the potential for overly conservative estimates of risk using standard US Environmental Protection Agency methods, an uncertainty analysis should be performed as an integral part of each risk assessment. Uncertainty analyses allow one to obtain quantitative results in the form of confidence intervals that will aid in decision making and will provide guidance for the acquisition of additional data. To perform an uncertainty analysis, one must frequently rely on subjective judgment in the absence of data to estimate the range and a probability distribution describing the extent of uncertainty about a true but unknown value for each parameter of interest. This information is formulated from professional judgment based on an extensive review of literature, analysis of the data, and interviews with experts. Various analytical and numerical techniques are available to allow statistical propagation of the uncertainty in the model parameters to a statement of uncertainty in the risk to a potentially exposed individual. Although analytical methods may be straightforward for relatively simple models, they rapidly become complicated for more involved risk assessments. Because of the tedious efforts required to mathematically derive analytical approaches to propagate uncertainty in complicated risk assessments, numerical methods such as Monte Carlo simulation should be employed. The primary objective of this report is to provide an introductory guide for performing uncertainty analysis in risk assessments being performed for Superfund sites.
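The Monte Carlo propagation recommended in this abstract can be sketched on a toy risk model. The model form (risk = intake x slope factor) and the lognormal parameter distributions below are illustrative stand-ins for the subjective distributions the guide describes, not Superfund guidance values.

```python
import random

def risk_model(intake, slope_factor):
    """Toy risk model: a simple product of two uncertain parameters."""
    return intake * slope_factor

random.seed(3)
n = 10_000

# Propagate subjective (judgment-based) parameter distributions by sampling.
risks = sorted(
    risk_model(random.lognormvariate(0.0, 0.5),    # intake: lognormal, illustrative
               random.lognormvariate(-7.0, 0.8))   # slope factor: lognormal, illustrative
    for _ in range(n)
)

# Empirical 95% confidence interval and median of the propagated risk.
lo, med, hi = risks[int(0.025 * n)], risks[n // 2], risks[int(0.975 * n)]
```

The resulting interval `(lo, hi)` is exactly the kind of quantitative confidence statement the guide argues should replace a single conservative point estimate.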
Parameter uncertainty analysis for the annual phosphorus loss estimator (APLE) model
Technology Transfer Automated Retrieval System (TEKTRAN)
Phosphorous (P) loss models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. All P loss models, however, have an inherent amount of uncertainty associated with them. In this study, we conducted an uncertainty analysis with ...
Model parameter uncertainty analysis for an annual field-scale phosphorus loss model
Technology Transfer Automated Retrieval System (TEKTRAN)
Phosphorous (P) loss models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. All P loss models, however, have an inherent amount of uncertainty associated with them. In this study, we conducted an uncertainty analysis with ...
Survey of sampling-based methods for uncertainty and sensitivity analysis.
Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)
2006-06-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
Measurement uncertainty analysis on laser tracker combined with articulated CMM
NASA Astrophysics Data System (ADS)
Zhao, Hui-ning; Yu, Lian-dong; Du, Yun; Zhang, Hai-yan
2013-10-01
Combined measurement technology plays an increasingly important role in digitalized assembly. This paper introduces a combined measurement system consisting of a laser tracker and a Flexible Articulated Coordinate Measuring Machine (FACMM), with applications in the inspection of the position of the inner parts in a large-scale device. When these measurement instruments are combined, the resulting coordinate data set contains uncertainties that are a function of the base data sets and complex interactions between the measurement sets. Combining the characteristics of the laser tracker and the FACMM, a Monte Carlo simulation method is employed in the uncertainty evaluation of combined measurement systems. A case study is given to demonstrate the practical applications of this research.
Real options analysis for photovoltaic project under climate uncertainty
NASA Astrophysics Data System (ADS)
Kim, Kyeongseok; Kim, Sejong; Kim, Hyoungkwan
2016-08-01
The decision on photovoltaic project depends on the level of climate environments. Changes in temperature and insolation affect photovoltaic output. It is important for investors to consider future climate conditions for determining investments on photovoltaic projects. We propose a real options-based framework to assess economic feasibility of photovoltaic project under climate change. The framework supports investors to evaluate climate change impact on photovoltaic projects under future climate uncertainty.
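The real-options mechanic behind the framework described above can be sketched with a standard binomial lattice: the option to invest in the project is valued by risk-neutral backward induction over up/down moves in project value (here driven, in the paper's setting, by climate-dependent output). All cash-flow numbers below are hypothetical, and the sketch values a European-style deferral option (exercise only at maturity) for simplicity.

```python
import math

def deferral_option_value(v0, invest, up, down, rate, periods):
    """Binomial-lattice value of the right (not obligation) to invest
    after `periods` steps, given up/down factors on project value."""
    p = (math.exp(rate) - down) / (up - down)       # risk-neutral probability
    # Terminal payoffs: invest only if project value exceeds the cost.
    values = [max(v0 * up**j * down**(periods - j) - invest, 0.0)
              for j in range(periods + 1)]
    # Discounted backward induction through the lattice.
    for step in range(periods, 0, -1):
        values = [math.exp(-rate) * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(step)]
    return values[0]

# Project value 100 vs. investment cost 105: negative NPV if invested today,
# yet the option to wait under uncertainty still carries positive value.
opt = deferral_option_value(100.0, 105.0, up=1.3, down=0.8, rate=0.03, periods=3)
```

This positive option value under a currently negative NPV is the core argument for using real options rather than static discounted cash flow when climate uncertainty is large.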
Zhang, Xuesong; Zhao, Kaiguang
2012-06-01
Bayesian Neural Networks (BNNs) have been shown to be useful tools for analyzing the modeling uncertainty of Neural Networks (NNs). This research focuses on the comparison of two BNNs. The first BNN (BNN-I) uses statistical methods to describe the characteristics of different uncertainty sources (input, parameter, and model structure) and integrates these uncertainties into a Markov Chain Monte Carlo (MCMC) framework to estimate total uncertainty. The second BNN (BNN-II) lumps all uncertainties into a single error term (i.e., the residual between model prediction and measurement). In this study, we propose a simple BNN-II, which uses Genetic Algorithms (GA) and Bayesian Model Averaging (BMA) to calibrate Neural Networks with different structures (numbers of hidden units) and combines the predictions from different NNs to derive predictions and uncertainty analysis. We tested these two BNNs in two watersheds for daily and monthly hydrologic simulation. The BMA-based BNN developed in this study outperforms BNN-I in the two watersheds in terms of both accurate prediction and uncertainty estimation. These results show that, given incomplete understanding of the characteristics associated with each uncertainty source, the simple lumped-error approach may yield better prediction and uncertainty estimation.
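The BMA combination step described above can be sketched in a few lines: each candidate model is weighted by a likelihood-flavoured score of its fit to the observations, and the weighted predictions are averaged. The candidate "models" here are fixed prediction vectors rather than trained NNs, and the exponentiated-SSE weighting is one illustrative choice of BMA weight, not necessarily the paper's.

```python
import math

observed = [1.0, 2.1, 2.9, 4.2]

# Predictions from three hypothetical candidate models of differing structure.
model_preds = [
    [1.0, 2.0, 3.0, 4.0],
    [0.8, 1.9, 3.1, 4.4],
    [1.5, 2.5, 3.5, 4.5],
]

def sse(pred):
    """Sum of squared errors against the observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, observed))

# Weight each model by exp(-SSE/2), then normalize to sum to one.
scores = [math.exp(-sse(p) / 2.0) for p in model_preds]
total = sum(scores)
weights = [s / total for s in scores]

# BMA prediction: weight-averaged prediction at each time step.
bma_pred = [sum(w * p[i] for w, p in zip(weights, model_preds))
            for i in range(len(observed))]
```

Models that fit the data better receive larger weights, so the combined prediction leans toward the better-performing structures while the spread of the weighted ensemble provides the uncertainty estimate.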
Jin Hosang; Palta, Jatinder R.; Kim, You-Hyun; Kim, Siyong
2010-11-01
Purpose: To analyze dose uncertainty using a previously published dose-uncertainty model, and to assess potential dosimetric risks existing in prostate intensity-modulated radiotherapy (IMRT). Methods and Materials: The dose-uncertainty model provides a three-dimensional (3D) dose-uncertainty distribution at a given confidence level. For 8 retrospectively selected patients, dose-uncertainty maps were constructed using the dose-uncertainty model at the 95% confidence level (CL). In addition to uncertainties inherent to the radiation treatment planning system, four scenarios of spatial errors were considered: machine only (S1), S1 + intrafraction, S1 + interfraction, and S1 + both intrafraction and interfraction errors. To evaluate the potential risks of the IMRT plans, three dose-uncertainty-based plan evaluation tools were introduced: confidence-weighted dose-volume histogram, confidence-weighted dose distribution, and dose-uncertainty-volume histogram. Results: Dose uncertainty caused by interfraction setup error was more significant than that of intrafraction motion error. The maximum dose uncertainty (95% confidence) of the clinical target volume (CTV) was smaller than 5% of the prescribed dose in all but two cases (13.9% and 10.2%). The dose uncertainty for 95% of the CTV volume ranged from 1.3% to 2.9% of the prescribed dose. Conclusions: The dose uncertainty in prostate IMRT could be evaluated using the dose-uncertainty model. Prostate IMRT plans satisfying the same plan objectives could generate significantly different dose uncertainties because of a complex interplay of many uncertainty sources. The uncertainty-based plan evaluation contributes to generating reliable and error-resistant treatment plans.
A GLUE uncertainty analysis of a drying model of pharmaceutical granules.
Mortier, Séverine Thérèse F C; Van Hoey, Stijn; Cierkens, Katrijn; Gernaey, Krist V; Seuntjens, Piet; De Baets, Bernard; De Beer, Thomas; Nopens, Ingmar
2013-11-01
A shift from batch processing towards continuous processing is of interest in the pharmaceutical industry. However, this transition requires detailed knowledge and process understanding of all consecutive unit operations in a continuous manufacturing line to design adequate control strategies. This can be facilitated by developing mechanistic models of the multi-phase systems in the process. Since modelling efforts only started recently in this field, uncertainties about the model predictions are generally neglected. However, model predictions have an inherent uncertainty (i.e. prediction uncertainty) originating from uncertainty in input data, model parameters, model structure, boundary conditions and software. In this paper, the model prediction uncertainty is evaluated for a model describing the continuous drying of single pharmaceutical wet granules in a six-segmented fluidized bed drying unit, which is part of the full continuous from-powder-to-tablet manufacturing line (Consigma™, GEA Pharma Systems). A validated model describing the drying behaviour of a single pharmaceutical granule in two consecutive phases is used. First of all, the effect of the assumptions at the particle level on the prediction uncertainty is assessed. Secondly, the paper focuses on the influence of the most sensitive parameters in the model. Finally, a combined analysis (particle level plus most sensitive parameters) is performed and discussed. To propagate the uncertainty originating from the parameter uncertainty to the model output, the Generalized Likelihood Uncertainty Estimation (GLUE) method is used. This method enables a modeller to incorporate the information obtained from the experimental data in the assessment of the uncertain model predictions and to find a balance between model performance and data precision. A detailed evaluation of the obtained uncertainty analysis results is made with respect to the model structure, interactions between parameters and uncertainty
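The GLUE workflow summarized above follows a standard pattern: sample parameters from a prior range, score each run with an informal likelihood, discard non-behavioral runs, and form likelihood-weighted prediction bounds. The one-parameter exponential drying model, Nash-Sutcliffe likelihood, and 0.5 behavioral cut-off below are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-parameter drying model: exponential moisture decay.
def model(k, t):
    return np.exp(-k * t)

t_obs = np.linspace(0.0, 5.0, 11)
obs = model(0.8, t_obs) + rng.normal(0, 0.02, t_obs.size)  # synthetic data

# 1. Monte Carlo sampling of the uncertain parameter from its prior range.
k_samples = rng.uniform(0.1, 2.0, 5000)
sims = np.array([model(k, t_obs) for k in k_samples])

# 2. Informal likelihood (Nash-Sutcliffe efficiency); keep "behavioral" runs.
nse = 1 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()
behavioral = nse > 0.5
weights = nse[behavioral] / nse[behavioral].sum()

# 3. Likelihood-weighted 5-95% prediction bounds at each observation time.
def weighted_quantile(values, w, q):
    idx = np.argsort(values)
    cdf = np.cumsum(w[idx])
    return np.interp(q, cdf / cdf[-1], values[idx])

bounds = np.array([[weighted_quantile(sims[behavioral][:, j], weights, q)
                    for j in range(t_obs.size)] for q in (0.05, 0.95)])
```

The spread between the two rows of `bounds` is the GLUE prediction uncertainty band; widening the behavioral cut-off trades model performance against band width, as discussed in the abstract.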
Song, Junho; Nguyen, Tam H.; Paulino, Glaucio H.
2008-02-15
Probabilistic fracture analysis is performed for predicting uncertain fracture responses of Functionally Graded Material (FGM) structures. The uncertainties in material properties including Young's modulus and fracture toughness are considered. The limit state function for a crack initiation event is defined in terms of the J-integral for FGMs. The First-Order-Reliability-Method (FORM) is used in conjunction with a finite element code that computes the J-integral with high accuracy. A two-step probabilistic analysis procedure is proposed to investigate the effects of the uncertainties in the spatial distribution of Young's modulus on the probability of crack initiation in FGMs. First, we investigate the effects of the uncertainties in the shape of the spatial distribution by considering the slope and the location of the inflection point of a spatial distribution profile as random quantities. Second, we investigate the effects of the spatial fluctuations of Young's modulus by making use of a discretized random field. The companion paper (Part II) implements this method into a finite element fracture analysis code and presents numerical examples.
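A minimal FORM sketch in the spirit of the abstract's first step: the Hasofer-Lind/Rackwitz-Fiessler iteration locates the most probable failure point in standard normal space and yields the reliability index. The linear toughness-minus-demand limit state and the statistics below are hypothetical stand-ins, not the paper's FGM J-integral model.

```python
import numpy as np
from math import erf, sqrt

def phi_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def form_beta(g, u0, tol=1e-8, max_iter=100):
    """Hasofer-Lind/Rackwitz-Fiessler iteration in standard normal space."""
    u = np.asarray(u0, float)
    for _ in range(max_iter):
        eps = 1e-6
        # finite-difference gradient of the limit state function
        grad = np.array([(g(u + eps * e) - g(u - eps * e)) / (2 * eps)
                         for e in np.eye(u.size)])
        u_new = (grad @ u - g(u)) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u)

# Hypothetical crack-initiation limit state: toughness minus applied J,
# both normal; x_i = mu_i + sigma_i * u_i maps to physical space.
mu = np.array([120.0, 60.0])     # [toughness, applied J], kJ/m^2
sig = np.array([18.0, 9.0])
g = lambda u: (mu[0] + sig[0] * u[0]) - (mu[1] + sig[1] * u[1])

beta = form_beta(g, u0=[0.0, 0.0])
pf = phi_cdf(-beta)              # probability of crack initiation
```

For this linear limit state the iteration reproduces the closed-form index (mu_R - mu_S)/sqrt(sigma_R^2 + sigma_S^2); the same routine accepts a nonlinear g, e.g. one wrapping a J-integral finite element evaluation.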
UNCERTAINTY ANALYSIS OF TCE USING THE DOSE EXPOSURE ESTIMATING MODEL (DEEM) IN ACSL
The ACSL-based Dose Exposure Estimating Model (DEEM) under development by EPA is used to perform an uncertainty analysis of a physiologically based pharmacokinetic (PBPK) model of trichloroethylene (TCE). This model involves several circulating metabolites such as trichloroacet...
Uncertainty analysis of diffuse-gray radiation enclosure problems: A hypersensitive case study
NASA Technical Reports Server (NTRS)
Taylor, Robert P.; Luck, Rogelio; Hodge, B. K.; Steele, W. Glenn
1993-01-01
An uncertainty analysis of diffuse-gray enclosure problems is presented. The genesis was a diffuse-gray enclosure problem which proved to be hypersensitive to the specification of view factors. This genesis is discussed in some detail. The uncertainty analysis is presented for the general diffuse-gray enclosure problem and applied to the hypersensitive case study. It was found that the hypersensitivity could be greatly reduced by enforcing both closure and reciprocity for the view factors. The effects of uncertainties in the surface emissivities and temperatures are also investigated.
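The remedy reported above, enforcing both closure and reciprocity, can be illustrated with a small least-squares smoothing of a noisy view-factor matrix. The closed-form KKT correction below (which does not constrain entries to be non-negative or preserve zero self-view factors) and the 3-surface numbers are assumptions for illustration, not the authors' procedure.

```python
import numpy as np

def smooth_view_factors(F, A):
    """Find the exchange matrix G_ij = A_i*F_ij closest (least squares) to
    the input that satisfies reciprocity (G symmetric) and closure
    (row i of G sums to A_i), then map back to view factors."""
    F = np.asarray(F, float)
    A = np.asarray(A, float)
    n = A.size
    M = 0.5 * (A[:, None] * F + (A[:, None] * F).T)  # impose reciprocity
    b = A - M.sum(axis=1)                            # closure defects
    lam = 2.0 * b / n - b.sum() / n ** 2             # KKT multipliers
    G = M + 0.5 * (lam[:, None] + lam[None, :])      # minimal correction
    return G / A[:, None]

# Noisy view factors for a hypothetical 3-surface enclosure:
A = np.array([1.0, 2.0, 1.5])
F_raw = [[0.00, 0.55, 0.45],
         [0.30, 0.05, 0.65],
         [0.28, 0.80, 0.00]]
F = smooth_view_factors(F_raw, A)
```

After smoothing, every row of F sums to one and A_i F_ij = A_j F_ji holds exactly, which is the condition found above to suppress the hypersensitivity.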
Video Scanning Hartmann Optical Tester (VSHOT) Uncertainty Analysis: Preprint
Lewandowski, A.; Gray, A.
2010-10-01
This purely analytical work is based primarily on the geometric optics of the system and shows sensitivities to various design and operational parameters. We discuss sources of error with measuring devices, instrument calibrations, and operator measurements for a parabolic trough test. In this paper, we include both the random (precision) and systematic (bias) errors for VSHOT testing and their contributions to the uncertainty. The contributing factors that we considered in this study are target tilt, target face to laser output distance, instrument vertical offset, scanner tilt, distance between the tool and the test piece, camera calibration, and scanner/calibration.
PROBABILISTIC SENSITIVITY AND UNCERTAINTY ANALYSIS WORKSHOP SUMMARY REPORT
Seitz, R
2008-06-25
Stochastic or probabilistic modeling approaches are being applied more frequently in the United States and globally to quantify uncertainty and enhance understanding of model response in performance assessments for disposal of radioactive waste. This increased use has resulted in global interest in sharing results of research and applied studies that have been completed to date. This technical report reflects the results of a workshop that was held to share results of research and applied work related to performance assessments conducted at United States Department of Energy sites. Key findings of this research and applied work are discussed and recommendations for future activities are provided.
Analysis and Reduction of Complex Networks Under Uncertainty.
Ghanem, Roger G
2014-07-31
This effort was a collaboration with Youssef Marzouk of MIT, Omar Knio of Duke University (at the time at Johns Hopkins University), and Habib Najm of Sandia National Laboratories. The objective of this effort was to develop the mathematical and algorithmic capacity to analyze complex networks under uncertainty. Of interest were chemical reaction networks and smart grid networks. The statements of work for USC focused on the development of stochastic reduced models for uncertain networks. The USC team was led by Professor Roger Ghanem and consisted of one graduate student and a postdoc. The contributions completed by the USC team were: 1) methodology and algorithms to address the eigenvalue problem, a problem of significance to the stability of networks under stochastic perturbations; 2) methodology and algorithms to characterize probability measures on graph structures with random flows, an important problem in characterizing random demand (encountered in smart grids), random degradation (encountered in infrastructure systems), and modeling errors in Markov Chains (with ubiquitous relevance!); and 3) methodology and algorithms for treating inequalities in uncertain systems, an important problem in the context of models for material failure and network flows under uncertainty, where conditions of failure or flow are described in the form of inequalities between the state variables.
A Framework to Support Generator Ramping Uncertainty Analysis and Visualization
2015-12-01
Power system operation requires maintaining a continuous balance between system demand and generation within certain constraints. Traditionally, the balancing processes are based on deterministic models, which do not consider possible random deviations of system generation and load from their predicted values. With the increasing penetration of renewable generation, unexpected balancing problems can occur because of these deviations, posing serious risks to system reliability and efficiency. When the available balancing reserve is not enough to cover the predicted net-load range with uncertainty, a deficiency of balancing needs occurs. In this case, it is necessary to commit or de-commit additional conventional generators to achieve the desired confidence level for the balancing needs. The framework is built to solve this problem. The ramping tool engine is used to predict additional balancing requirements caused by the variability and uncertainty of renewable energy, under the constraints of generation ramping capability and interchange schedules. The web-browser-based GUI visualizes the data in a web environment, giving users the flexibility to view the ramping outputs on any platform. The GOSS structure provides strong support for easy communication among the ramping engine, the system inputs, and the GUI.
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; LLie, Marcel; Shallhorn, Paul A.
2012-01-01
There are inherent uncertainties and errors associated with using Computational Fluid Dynamics (CFD) to predict the flow field, and there is no standard method for evaluating uncertainty in the CFD community. This paper describes an approach to validate the uncertainty in using CFD. The method applies state-of-the-art uncertainty analysis with different turbulence models and draws conclusions on which models provide the least uncertainty and which models most accurately predict the flow over a backward-facing step.
NASA Astrophysics Data System (ADS)
Ralphs, Matthew I.; Smith, Barton L.; Roberts, Nicholas A.
2016-11-01
High thermal conductivity thermal interface materials (TIMs) are needed to extend the life and performance of electronic circuits. A stepped-bar apparatus has been shown to work well for thermal resistance measurements with rigid materials, but most TIMs are elastic. This work studies the uncertainty of using a stepped-bar apparatus to measure the thermal resistance, and a tensile/compression testing machine to estimate the compressed thickness, of polydimethylsiloxane for a measurement of the effective thermal conductivity, k_eff. An a priori, zeroth-order analysis is used to estimate the random uncertainty from the instrumentation; a first-order analysis is used to estimate the statistical variation in samples; and an a posteriori, Nth-order analysis is used to provide an overall uncertainty on k_eff for this measurement method. Bias uncertainty in the thermocouples is found to be the largest single source of uncertainty. The a posteriori uncertainty of the proposed method is 6.5% relative uncertainty (68% confidence), but it could be reduced through calibration and by exploiting correlated biases in the temperature measurements.
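The zeroth/Nth-order bookkeeping described above reduces, in the standard uncertainty-analysis style, to propagating systematic (bias) and random (precision) terms through sensitivity coefficients and combining them in quadrature. The sensitivities and uncertainty values below are hypothetical placeholders, not the paper's measured values.

```python
import numpy as np

def combined_uncertainty(sens, b, s):
    """Quadrature combination of propagated uncertainties.
    sens: dk/dx_i sensitivity of the result to each input;
    b: systematic (bias) standard uncertainty of each input;
    s: random (precision) standard uncertainty of each input."""
    sens, b, s = map(np.asarray, (sens, b, s))
    u_sys = np.sqrt(((sens * b) ** 2).sum())    # systematic contribution
    u_rand = np.sqrt(((sens * s) ** 2).sum())   # random contribution
    return np.sqrt(u_sys ** 2 + u_rand ** 2), u_sys, u_rand

# Hypothetical inputs: two thermocouple temperatures and a thickness.
u_k, u_sys, u_rand = combined_uncertainty(
    sens=[0.04, -0.04, 0.8],   # sensitivity of k_eff to each input
    b=[0.5, 0.5, 0.01],        # bias standard uncertainties
    s=[0.1, 0.1, 0.02])        # random standard uncertainties
```

In this made-up example the thermocouple bias terms dominate the combined uncertainty, mirroring the paper's finding that thermocouple bias is the largest single source.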
NASA Astrophysics Data System (ADS)
Ramírez, Andrea; de Keizer, Corry; Van der Sluijs, Jeroen P.; Olivier, Jos; Brandes, Laurens
This paper presents an assessment of the value added by a Monte Carlo analysis of the uncertainties in the Netherlands inventory of greenhouse gases over a Tier 1 analysis. It also examines which parameters contributed the most to the total emission uncertainty and identifies areas of high priority for further improvement of the accuracy and quality of the inventory. The Monte Carlo analysis resulted in an uncertainty range in total GHG emissions of 5.4% in 1990 and 4.1% in 2004 (with LUCF), and 5.3% in 1990 and 3.9% in 2004 (without LUCF). Uncertainty in the trend was estimated at 4.5%. These values are of the same order of magnitude as those estimated in the Tier 1 analysis. The results show that accounting for correlation among parameters is important; for the Netherlands inventory it has a larger impact on the uncertainty in the trend than on the uncertainty in the total GHG emissions. The main contributors to overall uncertainty are found to be N2O emissions from agricultural soils, the N2O implied emission factors of nitric acid production, CH4 from managed solid waste disposal on land, and the implied emission factor of CH4 from manure management for cattle.
Uncertainty analysis for probabilistic pipe fracture evaluations in LBB applications
Rahman, S.; Ghadiali, N.; Wilkowski, G.
1997-04-01
During the NRC's Short Cracks in Piping and Piping Welds Program at Battelle, a probabilistic methodology was developed to conduct fracture evaluations of circumferentially cracked pipes for application to leak-rate detection. Later, in the IPIRG-2 program, several parameters that may affect leak-before-break and other pipe flaw evaluations were identified. This paper presents new results from several uncertainty analyses that evaluate the effects of normal operating stresses, normal plus safe-shutdown earthquake stresses, off-centered cracks, restraint of pressure-induced bending, and dynamic and cyclic loading rates on the conditional failure probability of piping systems in BWRs and PWRs. For each parameter, the sensitivity of the conditional probability of failure and, hence, the parameter's importance to probabilistic leak-before-break evaluations was determined.
Status of XSUSA for Sampling Based Nuclear Data Uncertainty and Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Zwermann, W.; Gallner, L.; Klein, M.; Krzykacz-Hausmann, B.; Pasichnyk, I.; Pautz, A.; Velkov, K.
2013-03-01
In the present contribution, an overview of the sampling-based XSUSA method for sensitivity and uncertainty analysis with respect to nuclear data is given. The focus is on recent developments and applications of XSUSA. These applications include calculations for critical assemblies, fuel assembly depletion calculations, and steady-state as well as transient reactor core calculations. The analyses are partially performed in the framework of international benchmark working groups (UACSA - Uncertainty Analyses for Criticality Safety Assessment, UAM - Uncertainty Analysis in Modelling). It is demonstrated that, particularly for full-scale reactor calculations, the influence of the nuclear data uncertainties on the results can be substantial. For instance, for the radial fission rate distributions of mixed UO2/MOX light water reactor cores, the 2σ uncertainties in the core centre and periphery can reach values exceeding 10%. For a fast transient, the resulting time behaviour of the reactor power was covered by a wide uncertainty band. Overall, the results confirm the necessity of adding systematic uncertainty analyses to best-estimate reactor calculations.
Strydom, Gerhard; Bostelmann, Friederike; Ivanov, Kostadin
2014-10-01
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable, high-fidelity physics models and robust, efficient, and accurate codes. One way to address the uncertainties in the HTGR analysis tools is to assess the sensitivity of critical parameters (such as the calculated maximum fuel temperature during loss-of-coolant accidents) to a few important input uncertainties. The input parameters were identified by engineering judgement in the past but are today typically based on a Phenomena Identification and Ranking Table (PIRT) process. The input parameters can also be derived from sensitivity studies and are then varied in the analysis to find a spread in the parameter of importance. However, there is often no easy way to compensate for these uncertainties. In engineering system design, a common approach for addressing performance uncertainties is to add compensating margins to the system, but when passive safety properties are credited it is not clear how to apply such margins to the modular HTGR heat removal path. Other, more sophisticated uncertainty modelling approaches, including Monte Carlo analysis, have also been proposed and applied. Ideally, one wishes to apply a more fundamental approach to determine the predictive capability and accuracy of the coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is broader acceptance of the use of uncertainty analysis even in safety studies, and in some cases regulators have accepted it as a replacement for the traditional conservative analysis. Therefore some safety analysis calculations may use a mixture of these approaches for different parameters, depending upon the particular requirements of the analysis problem involved. Sensitivity analysis can, for example, be used to provide information as part of an uncertainty analysis to determine best-estimate-plus-uncertainty results to the
Factors Affecting Pollutant Load Reduction with Uncertainty Analysis in Urban Stormwater BMP Systems
NASA Astrophysics Data System (ADS)
Park, D.
2015-12-01
This study incorporates uncertainty analysis into a model of the performance of stormwater best management practices (BMPs) to characterize the uncertainty in stormwater BMP effluent load that results from uncertainty in BMP performance modeling in an urban stormwater system. Detention basins are used as the BMPs in the urban stormwater systems, and total suspended solids (TSS) are used as the urban nonpoint-source pollutant in Los Angeles, CA. The k-C* model, incorporating uncertainty analysis, is applied to the uncertainty of the stormwater effluent concentration in urban stormwater systems. This study presents a frequency analysis of the runoff volume and BMP overflows to characterize the uncertainty of BMP effluent loads, and the load frequency curve (LFC) is simulated with and without BMP conditions and verified using the observed TSS load. Finally, the effects of imperviousness, BMP volume, and BMP surface area are investigated using a reliability analysis. The results of this study can be used to determine the appropriate BMP size to achieve a specific watershed runoff pollutant load, and the evaluation method can support the adequate sizing of a BMP to meet defined nonpoint-source pollutant regulations. Acknowledgments: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
Uncertainty Analysis Framework - Hanford Site-Wide Groundwater Flow and Transport Model
Cole, Charles R.; Bergeron, Marcel P.; Murray, Christopher J.; Thorne, Paul D.; Wurstner, Signe K.; Rogers, Phillip M.
2001-11-09
Pacific Northwest National Laboratory (PNNL) embarked on a new initiative to strengthen the technical defensibility of the predictions being made with a site-wide groundwater flow and transport model at the U.S. Department of Energy Hanford Site in southeastern Washington State. In FY 2000, the focus of the initiative was on the characterization of major uncertainties in the current conceptual model that would affect model predictions. The long-term goals of the initiative are the development and implementation of an uncertainty estimation methodology in future assessments and analyses using the site-wide model. This report focuses on the development and implementation of an uncertainty analysis framework.
NASA Astrophysics Data System (ADS)
Zobitz, J. M.; Keener, J. P.; Bowling, D. R.
2004-12-01
Quantifying and understanding the uncertainty in isotopic mixing relationships is critical to isotopic applications in carbon cycle studies at all spatial and temporal scales. Studies associated with the North American Carbon Program will depend on stable isotope approaches and quantification of isotopic uncertainty. An important application of isotopic mixing relationships is determination of the isotopic content of large-scale respiration (δ13CR) via an inverse relationship (a Keeling plot) between atmospheric CO2 concentrations ([CO2]) and carbon isotope ratios of CO2 (δ13C). Alternatively, a linear relationship between [CO2] and the product of [CO2] and δ13C (a Miller/Tans plot) can also be applied. We used an extensive dataset of [CO2] and δ13C in forest air from the Niwot Ridge Ameriflux Site to examine contrasting approaches to determining δ13CR and its uncertainty, including Keeling plots, Miller/Tans plots, and Model I and Model II regressions. Our analysis confirms previous observations that increasing the range of measurements (the [CO2] range) reduces the uncertainty associated with δ13CR. For carbon isotope studies, uncertainty in the isotopic measurements has a greater effect on the uncertainty of δ13CR than the uncertainty in [CO2]. Reducing the uncertainty of isotopic measurements reduces the uncertainty of δ13CR even when the [CO2] range of samples is small (< 20 ppm). As a result, improvement in isotope (rather than CO2) measuring capability is needed to substantially reduce uncertainty in δ13CR. We also find for carbon isotope studies no inherent advantage to using either a Keeling or a Miller/Tans approach to determine δ13CR.
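The two regression formulations compared above can be reproduced on synthetic two-member mixing data: the Keeling intercept and the Miller/Tans slope both estimate δ13CR. The background and respiration values below are illustrative assumptions, not Niwot Ridge data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-member mixing: background air plus respired CO2 with a
# "true" respiration signature of -26 permil (illustrative values).
d13c_bg, co2_bg, d13c_r = -8.0, 380.0, -26.0
co2 = co2_bg + rng.uniform(5, 60, 40)        # measured [CO2], ppm
d13c = (co2_bg * d13c_bg + (co2 - co2_bg) * d13c_r) / co2
d13c += rng.normal(0, 0.05, co2.size)        # isotope measurement noise

# Keeling plot: intercept of d13C vs 1/[CO2] estimates d13C_R.
keeling = np.polyfit(1.0 / co2, d13c, 1)[1]          # intercept

# Miller/Tans plot: slope of [CO2]*d13C vs [CO2] estimates d13C_R.
miller_tans = np.polyfit(co2, co2 * d13c, 1)[0]      # slope
```

Shrinking the sampled [CO2] range or inflating the 0.05 permil isotope noise widens the scatter of both estimators around -26, which is the sensitivity behavior the abstract reports.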
Cacuci, Dan G.; Ionescu-Bujor, Mihaela
2004-07-15
Part II of this review paper highlights the salient features of the most popular statistical methods currently used for local and global sensitivity and uncertainty analysis of both large-scale computational models and indirect experimental measurements. These statistical procedures represent sampling-based methods (random sampling, stratified importance sampling, and Latin Hypercube sampling), first- and second-order reliability algorithms (FORM and SORM, respectively), variance-based methods (correlation ratio-based methods, the Fourier Amplitude Sensitivity Test, and the Sobol Method), and screening design methods (classical one-at-a-time experiments, global one-at-a-time design methods, systematic fractional replicate designs, and sequential bifurcation designs). It is emphasized that all statistical uncertainty and sensitivity analysis procedures first commence with the 'uncertainty analysis' stage and only subsequently proceed to the 'sensitivity analysis' stage; this path is the exact reverse of the conceptual path underlying the methods of deterministic sensitivity and uncertainty analysis, where the sensitivities are determined prior to using them for uncertainty analysis. By comparison to deterministic methods, statistical methods for uncertainty and sensitivity analysis are relatively easier to develop and use but cannot yield exact values of the local sensitivities. Furthermore, current statistical methods have two major inherent drawbacks: (1) since many thousands of simulations are needed to obtain reliable results, statistical methods are at best expensive (for small systems) or, at worst, impracticable (e.g., for large time-dependent systems); and (2) since the response sensitivities and parameter uncertainties are inherently and inseparably amalgamated in the results produced by these methods, improvements in parameter uncertainties cannot be directly propagated to improve response uncertainties; rather, the entire set of simulations and
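Of the sampling-based methods listed above, Latin Hypercube sampling is easy to sketch: each dimension is split into n equal-probability strata and exactly one sample falls in each stratum. The minimal implementation below is a generic illustration, not tied to any specific code discussed in the review.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Latin Hypercube sample on [0,1)^d: one point per equal-probability
    stratum in each dimension, with strata independently permuted."""
    strata = rng.permuted(np.tile(np.arange(n_samples), (n_dims, 1)),
                          axis=1).T          # (n_samples, n_dims) stratum ids
    return (strata + rng.uniform(size=(n_samples, n_dims))) / n_samples

rng = np.random.default_rng(42)
u = latin_hypercube(10, 3, rng)
# map to physical parameters, e.g. k = k_min + (k_max - k_min) * u[:, 0],
# or through an inverse CDF for non-uniform input distributions
```

Compared with plain random sampling, the stratification guarantees coverage of each marginal distribution with far fewer model runs, which is exactly why LHS is favored when, as the review notes, thousands of simulations are otherwise needed.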
NASA Astrophysics Data System (ADS)
Wolfsberg, A.; Kang, Q.; Li, C.; Ruskauff, G.; Bhark, E.; Freeman, E.; Prothro, L.; Drellack, S.
2007-12-01
The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. With multiple alternative flow models advanced, the sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The
Three Dimensional Vapor Intrusion Modeling: Model Validation and Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Akbariyeh, S.; Patterson, B.; Rakoczy, A.; Li, Y.
2013-12-01
Volatile organic chemicals (VOCs), such as chlorinated solvents and petroleum hydrocarbons, are prevalent groundwater contaminants due to improper disposal and accidental spillage. In addition to contaminating groundwater, VOCs may partition into the overlying vadose zone and enter buildings through gaps and cracks in foundation slabs or basement walls, a process termed vapor intrusion. Vapor intrusion of VOCs has been recognized as a significant source of human exposure to potentially carcinogenic or toxic compounds. The simulation of vapor intrusion from a subsurface source has been the focus of many studies seeking to better understand the process and guide field investigation. While multiple analytical and numerical models have been developed to simulate the vapor intrusion process, detailed validation of these models against well-controlled experiments is still lacking, owing to the complexity and uncertainties associated with site characterization and with measuring soil gas flux and indoor air concentration. In this work, we present an effort to validate a three-dimensional vapor intrusion model against a well-controlled experimental quantification of the vapor intrusion pathways into a slab-on-ground building under varying environmental conditions. Finally, a probabilistic approach based on Monte Carlo simulations is implemented to determine the probability distribution of the indoor air concentration based on the most uncertain input parameters.
Gilbert, Jennifer A; Meyers, Lauren Ancel; Galvani, Alison P; Townsend, Jeffrey P
2014-03-01
Mathematical modeling of disease transmission has provided quantitative predictions for health policy, facilitating the evaluation of epidemiological outcomes and the cost-effectiveness of interventions. However, typical sensitivity analyses of deterministic dynamic infectious disease models focus on model architecture and the relative importance of parameters but neglect parameter uncertainty when reporting model predictions. Consequently, model results that identify point estimates of intervention levels necessary to terminate transmission yield limited insight into the probability of success. We apply probabilistic uncertainty analysis to a dynamic model of influenza transmission and assess global uncertainty in outcome. We illustrate that when parameter uncertainty is not incorporated into outcome estimates, levels of vaccination and treatment predicted to prevent an influenza epidemic will only have an approximately 50% chance of terminating transmission and that sensitivity analysis alone is not sufficient to obtain this information. We demonstrate that accounting for parameter uncertainty yields probabilities of epidemiological outcomes based on the degree to which data support the range of model predictions. Unlike typical sensitivity analyses of dynamic models that only address variation in parameters, the probabilistic uncertainty analysis described here enables modelers to convey the robustness of their predictions to policy makers, extending the power of epidemiological modeling to improve public health. PMID:24593920
Numerical modeling and uncertainty analysis of light emitting diodes for photometric measurements
NASA Astrophysics Data System (ADS)
Khan, Mohammed Z. U.; Abbas, Mohammed; Al-Hadhrami, Luai M.
2015-06-01
With the rapid evolution of new, energy-efficient solid-state lighting (SSL) systems, a requirement has arisen for new performance metrics and measurement methods to address their unique construction and operating conditions. In this paper, light propagation characteristics in light-emitting diodes (LEDs) are analyzed for measurement uncertainty through numerical modeling and simulation. A general 2D EM simulator with PML boundary conditions is formulated to solve Maxwell's equations using the finite-difference time-domain (FDTD) numerical method to describe the light propagation in LEDs. A practical GaN LED used in SSL systems is simulated for light propagation. The optical properties of dispersive materials are modeled using the multi-pole Lorentz-Drude model. The input dipole source for the LED structure is modeled explicitly through a Gaussian pulse line source at a central wavelength of 460 nm, corresponding to GaN emission. Finally, the expression for the combined standard uncertainty in the light extraction efficiency due to uncertainties in inputs such as emission in the active layer and the EM fields is developed using the GUM law of propagation of uncertainties. The uncertainty in the GaN LED emission wavelength, obtained from the full width at half maximum (FWHM) of the emission spectrum, is computed to be 16.98 nm. The uncertainty analysis model is then used to compute the corresponding uncertainties in the LED output measurements, i.e., light extraction efficiency, LED output power, and EM fields.
Unscented transform-based uncertainty analysis of rotating coil transducers for field mapping
NASA Astrophysics Data System (ADS)
Arpaia, P.; De Matteis, E.; Schiano Lo Moriello, R.
2016-03-01
The uncertainty of a rotating coil transducer for magnetic field mapping is analyzed. The unscented transform and statistical design of experiments are combined to determine the magnetic field expectation, the standard uncertainty, and the separate contributions of the uncertainty sources. For nonlinear measurement models, the unscented transform-based approach is more error-proof than the linearization underlying the "Guide to the Expression of Uncertainty in Measurement" (GUM), owing to the absence of model approximations and derivative computations. When the GUM assumptions are not met, the deterministic sampling strategy strongly reduces the computational burden with respect to the Monte Carlo-based methods proposed by Supplement 1 of the GUM. Furthermore, the design of experiments and the associated statistical analysis allow the domain of uncertainty sources to be explored efficiently, and their significance and individual contributions to be assessed for an effective setup configuration. A straightforward experimental case study highlights that a one-order-of-magnitude reduction in the relative uncertainty of the coil area decreases the uncertainty of the field mapping transducer by a factor of 25 with respect to the worst condition. Moreover, about 700 trials and the related processing achieve results corresponding to 5 × 10^6 brute-force Monte Carlo simulations.
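The deterministic sampling idea can be sketched generically: 2n+1 weighted sigma points are propagated through the measurement model in place of derivatives or random draws. The implementation below is a textbook unscented transform applied to a simple product-type output (think coil area × field), not the authors' full transducer model:

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a mean/covariance pair through a nonlinear scalar
    function f using 2n+1 sigma points (generic textbook UT)."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)     # sigma-point spread
    pts = ([mean] + [mean + S[:, i] for i in range(n)]
                  + [mean - S[:, i] for i in range(n)])
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    y = np.array([f(p) for p in pts])           # model evaluations only
    y_mean = wm @ y
    y_var = wc @ (y - y_mean) ** 2
    return y_mean, y_var

# two independent uncertain inputs with means 2 and 3, variances 0.01 and 0.04
m, v = unscented_transform(lambda x: x[0] * x[1],
                           np.array([2.0, 3.0]), np.diag([0.01, 0.04]))
```

No Jacobian is ever formed, and only 2n+1 model evaluations are needed, which is the source of the large cost reduction relative to brute-force Monte Carlo cited in the abstract.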
Uncertainty analysis of multi-rate kinetics of uranium desorption from sediments
Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.; Zhang, Guannan
2014-01-01
A multi-rate expression for uranyl [U(VI)] surface complexation reactions has been proposed to describe diffusion-limited U(VI) sorption/desorption in heterogeneous subsurface sediments. An important assumption in the rate expression is that its rate constants follow a certain type of probability distribution. In this paper, a Bayes-based, Differential Evolution Markov Chain method was used to assess the distribution assumption and to analyze parameter and model structure uncertainties. U(VI) desorption from a contaminated sediment at the US Hanford 300 Area, Washington was used as an example for detailed analysis. The results indicated that: 1) the rate constants in the multi-rate expression carry uneven uncertainties, with the slower rate constants having relatively larger uncertainties; 2) the lognormal distribution is an effective assumption for the rate constants in the multi-rate model to simulate U(VI) desorption; 3) long-term predictions and their uncertainty may nevertheless be significantly biased by the lognormal assumption for the smaller rate constants; and 4) both parameter and model structure uncertainties affect the extrapolation of the multi-rate model, with the larger uncertainty coming from the model structure. The results provide important insights into the factors contributing to the uncertainties of the multi-rate expression commonly used to describe the diffusion- or mixing-limited sorption/desorption of both organic and inorganic contaminants in subsurface sediments.
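The multi-rate idea is that many sorption sites each relax first-order at their own rate, with the rates drawn from an assumed lognormal distribution. A generic sketch (the μ and σ below are illustrative, not the fitted Hanford values):

```python
import math
import random

rng = random.Random(0)

# hypothetical site-specific rate constants: lognormal, so the slow tail
# is long (these parameters are illustrative, not the Hanford fit)
rates = [rng.lognormvariate(-3.0, 1.5) for _ in range(500)]

def remaining_fraction(t, rate_constants):
    """Sorbed fraction remaining at time t for a multirate first-order
    model: each site desorbs independently at its own rate constant."""
    return sum(math.exp(-k * t) for k in rate_constants) / len(rate_constants)

f_early = remaining_fraction(1.0, rates)
f_late = remaining_fraction(100.0, rates)
```

The long-time tail of `remaining_fraction` is controlled almost entirely by the smallest rate constants, which is why the abstract's point 3) matters: mis-specifying the lower tail of the distribution biases exactly the long-term predictions.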
Pawel, David; Leggett, Richard Wayne; Eckerman, Keith F; Nelson, Christopher
2007-01-01
Federal Guidance Report No. 13 (FGR 13) provides risk coefficients for estimation of the risk of cancer due to low-level exposure to each of more than 800 radionuclides. Uncertainties in risk coefficients were quantified in FGR 13 for 33 cases (exposure to each of 11 radionuclides by each of three exposure pathways) on the basis of sensitivity analyses in which various combinations of plausible biokinetic, dosimetric, and radiation risk models were used to generate alternative risk coefficients. The present report updates the uncertainty analysis in FGR 13 for the cases of inhalation and ingestion of radionuclides and expands the analysis to all radionuclides addressed in that report. The analysis indicates that most risk coefficients for inhalation or ingestion of radionuclides are determined within a factor of 5 or less by current information. That is, application of alternate plausible biokinetic and dosimetric models and radiation risk models (based on the linear, no-threshold hypothesis with an adjustment for the dose and dose rate effectiveness factor) is unlikely to change these coefficients by more than a factor of 5. In this analysis the assessed uncertainty in the radiation risk model was found to be the main determinant of the uncertainty category for most risk coefficients, but conclusions concerning the relative contributions of risk and dose models to the total uncertainty in a risk coefficient may depend strongly on the method of assessing uncertainties in the risk model.
IAEA CRP on HTGR Uncertainty Analysis: Benchmark Definition and Test Cases
Gerhard Strydom; Frederik Reitsma; Hans Gougar; Bismark Tyobeka; Kostadin Ivanov
2012-11-01
Uncertainty and sensitivity studies are essential elements of the reactor simulation code verification and validation process. Although several international uncertainty quantification activities have been launched in recent years in the LWR, BWR and VVER domains (e.g., the OECD/NEA BEMUSE program [1], from which the current OECD/NEA LWR Uncertainty Analysis in Modelling (UAM) benchmark [2] effort was derived), the systematic propagation of uncertainties in cross-section, manufacturing and model parameters for High Temperature Gas-cooled Reactor (HTGR) designs has not yet been attempted. This paper summarises the scope, objectives and exercise definitions of the IAEA Coordinated Research Project (CRP) on HTGR UAM [3]. Note that no results are included here, as the HTGR UAM benchmark was only formally launched in April 2012, and the specification is still under development.
Mean-value second-order uncertainty analysis method: application to water quality modelling
NASA Astrophysics Data System (ADS)
Mailhot, Alain; Villeneuve, Jean-Pierre
Uncertainty analysis in hydrology and water quality modelling is an important issue. Various methods have been proposed to estimate uncertainties on model results based on given uncertainties on model parameters. Among these methods, the mean-value first-order second-moment (MFOSM) method and the advanced mean-value first-order second-moment (AFOSM) method are the most common ones. This paper presents a method based on a second-order approximation of a model output function. The application of this method requires the estimation of first- and second-order derivatives at a mean-value point in the parameter space. Application to a Streeter-Phelps prototype model is presented. Uncertainties on two and six parameters are considered. Exceedance probabilities (EP) of dissolved oxygen concentrations are obtained and compared with EP computed using Monte Carlo, AFOSM and MFOSM methods. These results show that the mean-value second-order method leads to better estimates of EP.
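The mean-value second-order idea can be sketched generically: estimate first- and second-order derivatives of the output function at the mean-value point by finite differences, then correct the first-order (MFOSM) moments with the Hessian terms. The sketch below assumes independent Gaussian inputs and is not the paper's Streeter-Phelps application:

```python
import numpy as np

def second_order_moments(g, mu, sigma, h=1e-4):
    """Mean-value second-order approximation of E[g] and Var[g] for
    independent Gaussian inputs; derivatives come from central finite
    differences at the mean-value point (a generic sketch)."""
    n = len(mu)
    g0 = g(mu)
    grad = np.zeros(n)
    hess = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        gp, gm = g(mu + e), g(mu - e)
        grad[i] = (gp - gm) / (2 * h)
        hess[i, i] = (gp - 2 * g0 + gm) / h**2
    for i in range(n):
        for j in range(i + 1, n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            hess[i, j] = hess[j, i] = (g(mu + ei + ej) - g(mu + ei - ej)
                                       - g(mu - ei + ej) + g(mu - ei - ej)) / (4 * h * h)
    var_in = sigma**2
    # E[g] ~ g(mu) + (1/2) sum_i g_ii sigma_i^2
    mean = g0 + 0.5 * np.sum(np.diag(hess) * var_in)
    # Var[g] ~ sum_i g_i^2 sigma_i^2 + (1/2) sum_ij g_ij^2 sigma_i^2 sigma_j^2
    var = np.sum(grad**2 * var_in) + 0.5 * np.sum(np.outer(var_in, var_in) * hess**2)
    return mean, var

# quadratic test output: the second-order formulas are exact in this case
m2, v2 = second_order_moments(lambda x: x[0]**2 + x[1],
                              np.array([1.0, 2.0]), np.array([0.1, 0.2]))
```

MFOSM keeps only the gradient terms; the Hessian corrections above are what the second-order method adds, which is why it tracks exceedance probabilities of a curved response better.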
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
NASA Astrophysics Data System (ADS)
Wang, Shitao; Iskandarani, Mohamed; Srinivasan, Ashwanth; Thacker, W. Carlisle; Winokur, Justin; Knio, Omar M.
2016-05-01
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas-to-oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel height uncertainties are then mainly due to uncertainties in the 95th percentile of the droplet size and in the entrainment parameters.
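As a one-dimensional illustration of the non-intrusive approach (a generic Gauss-Hermite projection, not the plume model's multi-dimensional surrogate): project f(ξ) = e^ξ with ξ ~ N(0,1) onto probabilists' Hermite polynomials; the output mean and variance then fall directly out of the coefficients:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def pce_coefficients(f, degree=3, nodes=10):
    """Non-intrusive projection of f(xi), xi ~ N(0,1), onto probabilists'
    Hermite polynomials He_k via Gauss-Hermite quadrature."""
    x, w = He.hermegauss(nodes)          # weight exp(-x^2/2); sum w = sqrt(2*pi)
    w = w / math.sqrt(2.0 * math.pi)     # renormalize to the standard normal pdf
    coeffs = []
    for k in range(degree + 1):
        basis = He.hermeval(x, [0.0] * k + [1.0])   # He_k at the nodes
        # c_k = E[f * He_k] / E[He_k^2], with E[He_k^2] = k!
        coeffs.append(np.sum(w * f(x) * basis) / math.factorial(k))
    return np.array(coeffs)

c = pce_coefficients(np.exp)
pc_mean = c[0]                                         # E[f] = c_0
pc_var = sum(c[k] ** 2 * math.factorial(k) for k in range(1, len(c)))
```

In several dimensions, grouping the coefficient variance `c_k^2 * k!` by which input each basis term depends on yields the analysis-of-variance (Sobol-type) decomposition the abstract refers to.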
A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision
NASA Technical Reports Server (NTRS)
Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.
1998-01-01
We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty, assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (σ) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the calculated 46% 1σ model uncertainty, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring the modeled and observed total O3 trends differ by more than 1σ but less than 2σ, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate model formulation
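Latin Hypercube Sampling stratifies each input so that even a modest ensemble (419 runs here) covers every parameter's range evenly. A minimal generic implementation on the unit hypercube:

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    """Latin Hypercube design on the unit hypercube: each parameter's
    range is split into n_samples equal strata, every stratum is sampled
    exactly once, and the columns are shuffled independently."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        columns.append(strata)
    return list(zip(*columns))   # n_samples points of n_params coordinates

# e.g., 419 parameter sets (the paper's ensemble size) over 3 inputs; each
# unit-interval value would then be mapped through the marginal distribution
# of its reaction-rate or photolysis parameter
samples = latin_hypercube(419, 3)
```

Each column contains exactly one value per stratum, so every marginal distribution is sampled without gaps or clusters, unlike plain random sampling at the same ensemble size.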
Uncertainty Analysis of A Global Hydrologic Model Used For Climate Impact Assessment
NASA Astrophysics Data System (ADS)
Kaspar, F.; Doell, P.
WaterGAP 2 is an integrated global model of water use and water availability. The hydrologic submodel (WGHM) computes surface runoff, groundwater recharge and river discharge at a spatial resolution of 0.5 degree. It is calibrated against observed discharge at 724 gauging stations which represent about 50% of the global land area. The model has been used in a number of studies to assess the effects of a changing climate on water resources. The results of global climate models have been used as input for these scenario calculations. Due to the calibration, the hydrologic model performs quite well for simulations of historic periods. An uncertainty analysis has been performed to evaluate the reliability of the scenario calculations. Three sources of uncertainties have been considered: (1) the uncertainty of the internal model parameters, (2) the uncertainty caused by the use of different climate models and (3) the uncertainty due to the model structure. For the first category, 38 model parameters have been considered with their appropriate probability distributions derived from literature research. Latin Hypercube sampling has been used to evaluate the effect on the model output. Two climate models (ECHAM4 and HadCM3) have been used to demonstrate the effects of category 2. The third type of uncertainty was evaluated by comparing two approaches for the evapotranspiration equation (Priestley-Taylor vs. Penman-Monteith). The results are presented in a way that allows a direct comparison of the different sources of uncertainties. The analysis has been performed in different water basins all over the world. This reveals significant regional differences in the importance of the different sources of uncertainties. A comparison of basins with their subbasins demonstrates the relevance of the spatial scale.
Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat
2016-01-01
The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, the computational cost, and the large number of uncertain variables. In this study, a sparse-collocation non-intrusive polynomial chaos approach along with global nonlinear sensitivity analysis was first used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total-order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and the rates of three chemical reactions influencing the N, N(+), O, and O(+) number densities in the flow field.
Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.; Ross, Kyle; Cardoni, Jeffrey N; Kalinich, Donald A.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie; Ghosh, S. Tina
2014-02-01
This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences of each of the selected uncertain parameters, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the Safety Relief Valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependent effect of the cesium chemical form for different accident progressions.
NASA Astrophysics Data System (ADS)
James, S. C.; Makino, H.
2004-12-01
Given pre-existing Groundwater Modeling System (GMS) models of the Horonobe Underground Research Laboratory (URL) at both the regional and site scales, this work performs an example uncertainty analysis for performance assessment (PA) applications. After a general overview of uncertainty and sensitivity analysis techniques, the existing GMS site-scale model is converted to a PA model of the steady-state conditions expected after URL closure. This is done to examine the impact of uncertainty in site-specific data in conjunction with conceptual model uncertainty regarding the location of the Oomagari Fault. In addition, a quantitative analysis of the ratio of dispersive to advective forces, the F-ratio, is performed for stochastic realizations of each conceptual model. All analyses indicate that accurate characterization of the Oomagari Fault with respect to both location and hydraulic conductivity is critical to PA calculations. This work defines and outlines typical uncertainty and sensitivity analysis procedures and demonstrates them with example PA calculations relevant to the Horonobe URL. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Xue, Zhong; Xu, Bing; Shi, Xinyuan; Yang, Chan; Cui, Xianglong; Luo, Gan; Qiao, Yanjiang
2017-01-01
This study presented a new strategy for overall uncertainty measurement in near infrared (NIR) quantitative analysis of cryptotanshinone in tanshinone extract powders. The overall uncertainty of the NIR analysis was fully investigated and discussed using validation data from precision, trueness and robustness studies. Quality by design (QbD) elements, such as risk assessment and design of experiments (DOE), were utilized to organize the validation data. An "I×J×K" (series I, number of repetitions J and level of concentrations K) full factorial design was used to calculate uncertainty from the precision and trueness data. A 2^(7-4) Plackett-Burman matrix with four influence factors resulting from the failure mode and effect analysis (FMEA) was adopted for the robustness study. The overall uncertainty profile was introduced as a graphical decision-making tool to evaluate the validity of the NIR method over the predefined concentration range. In comparison with T. Saffaj's method (Analyst, 2013, 138, 4677) for overall uncertainty assessment, the proposed approach gave almost the same results, demonstrating that the proposed method is reasonable and valid. Moreover, the proposed method can help identify critical factors that influence the NIR prediction performance, which could be used for further optimization of the NIR analytical procedures in routine use. PMID:27404670
Uncertainty Analysis of LROC NAC Derived Elevation Models
NASA Astrophysics Data System (ADS)
Burns, K.; Yates, D. G.; Speyerer, E.; Robinson, M. S.
2012-12-01
One of the primary objectives of the Lunar Reconnaissance Orbiter Camera (LROC) [1] is to gather stereo observations with the Narrow Angle Camera (NAC) to generate digital elevation models (DEMs). From an altitude of 50 km, the NAC acquires images with a pixel scale of 0.5 meters, and a dual NAC observation covers approximately 5 km cross-track by 25 km down-track. This low altitude was common from September 2009 to December 2011. Images acquired during the commissioning phase and those acquired from the fixed orbit (after 11 December 2011) have pixel scales that range from 0.35 meters at the south pole to 2 meters at the north pole. Altimetric observations obtained by the Lunar Orbiter Laser Altimeter (LOLA) provide measurements of ±0.1 m between the spacecraft and the surface [2]. However, uncertainties in the spacecraft positioning can result in offsets (±20 m) between altimeter tracks over many orbits. The LROC team is currently developing a tool to automatically register altimetric observations to NAC DEMs [3]. Using a generalized pattern search (GPS) algorithm, the new automatic registration adjusts the spacecraft position and pointing information during times when NAC images, as well as LOLA measurements, of the same region are acquired to provide an absolute reference frame for the DEM. This information is then imported into SOCET SET to aid in creating controlled NAC DEMs. For every DEM, a figure of merit (FOM) map is generated using SOCET SET software. This is a valuable tool for determining the relative accuracy of a specific pixel in a DEM. Each pixel in a FOM map is assigned a value indicating its "quality": whether the specific pixel was shadowed, saturated, suspicious, interpolated/extrapolated, or successfully correlated. The overall quality of a NAC DEM is a function of both the absolute and relative accuracies. LOLA altimetry provides the most accurate absolute geodetic reference frame with which the NAC DEMs can be compared. Offsets
Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao
2015-01-01
There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including the capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies have focused on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that the LCC and UC vary considerably when these uncertain parameters are taken into account. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of each individual uncertain parameter to the LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when they plan plants. PMID:25459861
Detailed Uncertainty Analysis for Ares I Ascent Aerodynamics Wind Tunnel Database
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Hanke, Jeremy L.; Walker, Eric L.; Houlden, Heather P.
2008-01-01
A detailed uncertainty analysis for the Ares I ascent aero 6-DOF wind tunnel database is described. While the database itself is determined using only the test results for the latest configuration, the data used for the uncertainty analysis comes from four tests on two different configurations at the Boeing Polysonic Wind Tunnel in St. Louis and the Unitary Plan Wind Tunnel at NASA Langley Research Center. Four major error sources are considered: (1) systematic errors from the balance calibration curve fits and model + balance installation, (2) run-to-run repeatability, (3) boundary-layer transition fixing, and (4) tunnel-to-tunnel reproducibility.
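When error sources like these are independent, they combine by root-sum-square into a single standard uncertainty. A minimal sketch with hypothetical magnitudes (the database's actual values are not reproduced here):

```python
import math

def combined_uncertainty(components):
    """Root-sum-square combination of independent 1-sigma error sources."""
    return math.sqrt(sum(u ** 2 for u in components))

# hypothetical 1-sigma magnitudes, in illustrative units of coefficient
# counts (not the Ares I database's actual values)
sources = {
    "balance calibration + installation": 4.0,
    "run-to-run repeatability": 2.0,
    "transition fixing": 1.5,
    "tunnel-to-tunnel reproducibility": 3.0,
}
u_total = combined_uncertainty(sources.values())   # sqrt(31.25) ~ 5.59
```

Because the combination is in quadrature, the largest component dominates: halving the 1.5-count term barely moves the total, while halving the 4.0-count term does.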
Uncertainty analysis of signal deconvolution using a measured instrument response function
NASA Astrophysics Data System (ADS)
Hartouni, E. P.; Beeman, B.; Caggiano, J. A.; Cerjan, C.; Eckart, M. J.; Grim, G. P.; Hatarik, R.; Moore, A. S.; Munro, D. H.; Phillips, T.; Sayre, D. B.
2016-11-01
A common analysis procedure minimizes the negative ln-likelihood that a set of experimental observables matches a parameterized model of the observation. The model includes a description of the underlying physical process as well as the instrument response function (IRF). In the case investigated here, the National Ignition Facility (NIF) neutron time-of-flight (nTOF) spectrometers, the IRF is constructed from measurements and models. IRF measurements have a finite precision that can make significant contributions to the uncertainty estimate of the physical model's parameters. We apply a Bayesian analysis to properly account for IRF uncertainties in calculating the ln-likelihood function used to find the optimum physical parameters.
Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph
2011-12-01
The reliability of biokinetic models is essential in internal dose assessments and radiation risk analysis for the public, occupational workers, and patients exposed to radionuclides. In this paper, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. The paper is divided into two parts. In the first part of the study, published here, the uncertainty sources of the model parameters for zirconium (Zr), developed by the International Commission on Radiological Protection (ICRP), were identified and analyzed. Furthermore, the uncertainty of the biokinetic experimental measurements performed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU) for developing a new biokinetic model of Zr was analyzed according to the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. The confidence intervals and distributions of the model parameters of the ICRP and HMGU Zr biokinetic models were evaluated. From the computational biokinetic modeling, the mean, standard uncertainty, and confidence interval of the model predictions calculated from the model parameter uncertainty were presented and compared to the plasma clearance and urinary excretion measured after intravenous administration. It was shown that for the most important compartment, the plasma, the uncertainty evaluated for the HMGU model was much smaller than that for the ICRP model; this phenomenon was observed for other organs and tissues as well. The uncertainty of the integral of the radioactivity of Zr up to 50 y calculated by the HMGU model after ingestion by adult members of the public was shown to be smaller by a factor of two than that of the ICRP model. It was also shown that the distribution type of the model parameters strongly influences the model prediction, and the correlation of the model input parameters affects the model prediction to a
Uncertainty Analysis of Sonic Boom Levels Measured in a Simulator at NASA Langley
NASA Technical Reports Server (NTRS)
Rathsam, Jonathan; Ely, Jeffry W.
2012-01-01
A sonic boom simulator has been constructed at NASA Langley Research Center for testing the human response to sonic booms heard indoors. Like all measured quantities, sonic boom levels in the simulator are subject to systematic and random errors. To quantify these errors, and their net influence on the measurement result, a formal uncertainty analysis is conducted. Knowledge of the measurement uncertainty, or range of values attributable to the quantity being measured, enables reliable comparisons among measurements at different locations in the simulator as well as comparisons with field data or laboratory data from other simulators. The analysis reported here accounts for acoustic excitation from two sets of loudspeakers: one loudspeaker set at the facility exterior that reproduces the exterior sonic boom waveform and a second set of interior loudspeakers for reproducing indoor rattle sounds. The analysis also addresses the effect of pressure fluctuations generated when exterior doors of the building housing the simulator are opened. An uncertainty budget is assembled to document each uncertainty component, its sensitivity coefficient, and the combined standard uncertainty. The latter quantity will be reported alongside measurement results in future research reports to indicate data reliability.
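An uncertainty budget of this kind rolls each component up through its sensitivity coefficient into a combined standard uncertainty, following the GUM. A sketch with hypothetical entries (not the simulator's actual budget):

```python
import math

# hypothetical budget: component -> (sensitivity coefficient, standard
# uncertainty); names and values are illustrative only
budget = {
    "exterior loudspeaker level": (1.0, 0.30),   # dB out per dB in
    "interior rattle loudspeakers": (0.4, 0.50),
    "door-induced pressure fluctuation": (1.0, 0.15),
    "microphone calibration": (1.0, 0.20),
}

# GUM combined standard uncertainty: u_c = sqrt(sum((c_i * u_i)^2))
u_c = math.sqrt(sum((c * u) ** 2 for c, u in budget.values()))
```

Reporting `u_c` alongside a measured level is what lets two measurements (from different seats in the simulator, or from different facilities) be compared on equal footing.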
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.
Uncertainty Analysis for a Virtual Flow Meter Using an Air-Handling Unit Chilled Water Valve
Song, Li; Wang, Gang; Brambley, Michael R.
2013-04-28
A virtual water flow meter is developed that uses the chilled water control valve on an air-handling unit as a measurement device. The flow rate of water through the valve is calculated using the differential pressure across the valve and its associated coil, the valve command, and an empirically determined valve characteristic curve. Thus, the probability of error in the measurements is significantly greater than for conventionally manufactured flow meters. In this paper, mathematical models are developed and used to conduct uncertainty analysis for the virtual flow meter, and the results from the virtual meter are compared to measurements made with an ultrasonic flow meter. Theoretical uncertainty analysis shows that the total uncertainty in flow rates from the virtual flow meter is 1.46% with 95% confidence; comparison of virtual flow meter results with measurements from an ultrasonic flow meter yielded an uncertainty of 1.46% with 99% confidence. The comparable results from the theoretical uncertainty analysis and the empirical comparison with the ultrasonic flow meter corroborate each other, and tend to validate the approach to computationally estimating uncertainty for virtual sensors introduced in this study.
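The propagation step can be sketched by assuming a simple square-root valve characteristic, Q = Cv·√ΔP; the paper's empirically determined characteristic curve and valve-command dependence are more involved, so treat this only as an illustration of the first-order uncertainty model:

```python
import math

def virtual_flow(cv, dp):
    """Flow through a valve, assuming Q = Cv * sqrt(dP) in consistent units."""
    return cv * math.sqrt(dp)

def relative_flow_uncertainty(rel_u_cv, rel_u_dp):
    """First-order propagation for Q = Cv * sqrt(dP):
    u_Q/Q = sqrt((u_Cv/Cv)^2 + (0.5 * u_dP/dP)^2)."""
    return math.sqrt(rel_u_cv ** 2 + (0.5 * rel_u_dp) ** 2)

# Hypothetical inputs: 1% uncertainty in the valve coefficient, 2% in dP
print(f"{100 * relative_flow_uncertainty(0.01, 0.02):.2f}% relative flow uncertainty")
```

The square-root dependence halves the contribution of the pressure measurement, which is why valve-curve uncertainty tends to dominate in such meters.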
Uncertainty analysis in environmental radioactivity measurements using the Monte Carlo code MCNP5
NASA Astrophysics Data System (ADS)
Gallardo, S.; Querol, A.; Ortiz, J.; Ródenas, J.; Verdú, G.; Villanueva, J. F.
2015-11-01
High Purity Germanium (HPGe) detectors are widely used for environmental radioactivity measurements due to their excellent energy resolution. Monte Carlo (MC) codes are a useful tool to complement experimental measurements in calibration procedures at the laboratory. However, the efficiency curve of the detector can vary due to uncertainties associated with measurements. These uncertainties can be classified into several categories: geometrical parameters of the measurement (source-detector distance, volume of the source), properties of the radiation source (radionuclide activity, branching ratio), and detector characteristics (Ge dead layer, active volume, end cap thickness). The Monte Carlo simulation can also be affected by other kinds of uncertainties, mainly related to cross sections and to the calculation itself. Normally, all these uncertainties are not well known, and a deep analysis is required to determine their effect on the detector efficiency. In this work, the Noether-Wilks formula is used to carry out the uncertainty analysis. A Probability Density Function (PDF) is assigned to each variable involved in the sampling process. The size of the sample is determined from the characteristics of the tolerance intervals by applying the Noether-Wilks formula. Results of the analysis transform the efficiency curve into a region of possible values within the tolerance intervals. Results show a good agreement between experimental measurements and simulations for two different matrices (water and sand).
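The Wilks-type sample-size calculation referred to here can be computed directly: one finds the smallest number of sampled runs whose extreme values bound the chosen coverage at the chosen confidence. A sketch of the standard first-order formulas (the well-known 95%/95% answers are 59 runs one-sided and 93 two-sided):

```python
def wilks_one_sided(coverage, confidence):
    """Smallest N with 1 - coverage**N >= confidence (first order, one-sided)."""
    n = 1
    while 1 - coverage ** n < confidence:
        n += 1
    return n

def wilks_two_sided(coverage, confidence):
    """Smallest N for a first-order two-sided tolerance interval."""
    n = 2
    while 1 - n * coverage ** (n - 1) + (n - 1) * coverage ** n < confidence:
        n += 1
    return n

print(wilks_one_sided(0.95, 0.95))  # 59 runs for a 95%/95% one-sided limit
print(wilks_two_sided(0.95, 0.95))  # 93 runs for the two-sided interval
```

The appeal of the approach, as in the abstract, is that the required sample size is independent of the number of uncertain input parameters.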
A new algorithm for importance analysis of the inputs with distribution parameter uncertainty
NASA Astrophysics Data System (ADS)
Li, Luyi; Lu, Zhenzhou
2016-10-01
Importance analysis is aimed at finding the contributions by the inputs to the uncertainty in a model output. For structural systems involving inputs with distribution parameter uncertainty, the contributions by the inputs to the output uncertainty are governed by both the variability and the parameter uncertainty in their probability distributions. A natural and consistent way to arrive at importance analysis results in such cases would be a three-loop nested Monte Carlo (MC) sampling strategy, in which the parameters are sampled in the outer loop and the inputs are sampled in the inner nested double loop. However, the computational effort of this procedure is often prohibitive for engineering problems. This paper therefore proposes a new, efficient algorithm for importance analysis of the inputs in the presence of parameter uncertainty. By introducing a 'surrogate sampling probability density function (SS-PDF)' and incorporating single-loop MC theory into the computation, the proposed algorithm can reduce the original three-loop nested MC computation into a single-loop one in terms of model evaluations, which requires substantially less computational effort. Methods for choosing a proper SS-PDF are also discussed in the paper. The efficiency and robustness of the proposed algorithm have been demonstrated by results of several examples.
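The single-loop idea can be illustrated with a toy importance-reweighting scheme: draw one sample from a fixed surrogate density, then reuse those same model evaluations under any candidate distribution parameters via likelihood-ratio weights. This is only a schematic of the SS-PDF concept with a toy model, not the paper's algorithm:

```python
import math
import random

random.seed(1)

def model(x):
    """Toy model standing in for an expensive structural simulation."""
    return x ** 2

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Single loop of model evaluations, drawn once from a surrogate density N(0, 2)
xs = [random.gauss(0.0, 2.0) for _ in range(20000)]
ys = [model(x) for x in xs]

def mean_under(mu, sigma):
    """Reuse the same evaluations to estimate E[model(X)] for X ~ N(mu, sigma),
    via self-normalized likelihood-ratio weights p/q (no new model runs)."""
    w = [normal_pdf(x, mu, sigma) / normal_pdf(x, 0.0, 2.0) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# E[X^2] is 1 under N(0,1) and 2 under N(1,1)
print(mean_under(0.0, 1.0), mean_under(1.0, 1.0))
```

Because the weights are cheap to recompute, the outer loop over distribution parameters costs no further model evaluations, which is the source of the claimed speed-up.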
Strydom, Gerhard; Bostelmann, F.
2015-09-01
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal-hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis (SA) and uncertainty analysis (UA) methods. Uncertainty originates from errors in physical data, manufacturing uncertainties, modelling and computational algorithms. (The interested reader is referred to the large body of published SA and UA literature for a more complete overview of the various types of uncertainties, methodologies and results obtained). SA is helpful for ranking the various sources of uncertainty and error in the results of core analyses. SA and UA are required to address cost, safety, and licensing needs and should be applied to all aspects of reactor multi-physics simulation. SA and UA can guide experimental, modelling, and algorithm research and development. Current SA and UA rely either on derivative-based methods such as stochastic sampling methods or on generalized perturbation theory to obtain sensitivity coefficients. Neither approach addresses all needs. In order to benefit from recent advances in modelling and simulation and the availability of new covariance data (nuclear data uncertainties) extensive sensitivity and uncertainty studies are needed for quantification of the impact of different sources of uncertainties on the design and safety parameters of HTGRs. Only a parallel effort in advanced simulation and in nuclear data improvement will be able to provide designers with more robust and well validated calculation tools to meet design target accuracies. In February 2009, the Technical Working Group on Gas-Cooled Reactors (TWG-GCR) of the International Atomic Energy Agency (IAEA) recommended that the proposed Coordinated Research Program (CRP) on
Implementation of a Bayesian Engine for Uncertainty Analysis
Leng Vang; Curtis Smith; Steven Prescott
2014-08-01
In probabilistic risk assessment, it is important to have an environment where analysts have access to shared, secured high-performance computing and a statistical analysis tool package. As part of the advanced small modular reactor probabilistic risk analysis framework implementation, we have identified the need for advanced Bayesian computations. However, in order to make this technology available to non-specialists, there is also a need for a simplified tool that allows users to author models and evaluate them within this framework. As a proof of concept, we have implemented an advanced open source Bayesian inference tool, OpenBUGS, within the browser-based cloud risk analysis framework that is under development at the Idaho National Laboratory. This development, the “OpenBUGS Scripter,” has been implemented as a client-side, visual, web-based and integrated development environment for creating OpenBUGS language scripts. It depends on the shared server environment to execute the generated scripts and to transmit results back to the user. The visual models are in the form of linked diagrams, from which we automatically create the applicable OpenBUGS script that matches the diagram. These diagrams can be saved locally or stored on the server environment to be shared with other users.
A new algorithm for five-hole probe calibration, data reduction, and uncertainty analysis
NASA Technical Reports Server (NTRS)
Reichert, Bruce A.; Wendt, Bruce J.
1994-01-01
A new algorithm for five-hole probe calibration and data reduction using a non-nulling method is developed. The significant features of the algorithm are: (1) two components of the unit vector in the flow direction replace pitch and yaw angles as flow direction variables; and (2) symmetry rules are developed that greatly simplify Taylor's series representations of the calibration data. In data reduction, four pressure coefficients allow total pressure, static pressure, and flow direction to be calculated directly. The new algorithm's simplicity permits an analytical treatment of the propagation of uncertainty in five-hole probe measurement. The objectives of the uncertainty analysis are to quantify uncertainty of five-hole results (e.g., total pressure, static pressure, and flow direction) and determine the dependence of the result uncertainty on the uncertainty of all underlying experimental and calibration measurands. This study outlines a general procedure that other researchers may use to determine five-hole probe result uncertainty and provides guidance to improve measurement technique. The new algorithm is applied to calibrate and reduce data from a rake of five-hole probes. Here, ten individual probes are mounted on a single probe shaft and used simultaneously. Use of this probe is made practical by the simplicity afforded by this algorithm.
Harp, Dylan R.; Atchley, Adam L.; Painter, Scott L.; Coon, Ethan T.; Wilson, Cathy J.; Romanovsky, Vladimir E.; Rowland, Joel C.
2016-02-11
Here, the effects of soil property uncertainties on permafrost thaw projections are studied using a three-phase subsurface thermal hydrology model and calibration-constrained uncertainty analysis. The Null-Space Monte Carlo method is used to identify soil hydrothermal parameter combinations that are consistent with borehole temperature measurements at the study site, the Barrow Environmental Observatory. Each parameter combination is then used in a forward projection of permafrost conditions for the 21st century (from calendar year 2006 to 2100) using atmospheric forcings from the Community Earth System Model (CESM) in the Representative Concentration Pathway (RCP) 8.5 greenhouse gas concentration trajectory. A 100-year projection allows for the evaluation of intra-annual uncertainty due to soil properties and the inter-annual variability due to year to year differences in CESM climate forcings. After calibrating to borehole temperature data at this well-characterized site, soil property uncertainties are still significant and result in significant intra-annual uncertainties in projected active layer thickness and annual thaw depth-duration even with a specified future climate. Intra-annual uncertainties in projected soil moisture content and Stefan number are small. A volume and time integrated Stefan number decreases significantly in the future climate, indicating that latent heat of phase change becomes more important than heat conduction in future climates. Out of 10 soil parameters, ALT, annual thaw depth-duration, and Stefan number are highly dependent on mineral soil porosity, while annual mean liquid saturation of the active layer is highly dependent on the mineral soil residual saturation and moderately dependent on peat residual saturation. By comparing the ensemble statistics to the spread of projected permafrost metrics using different climate models, we show that the effect of calibration-constrained uncertainty in soil properties
Harp, D. R.; Atchley, A. L.; Painter, S. L.; Coon, E. T.; Wilson, C. J.; Romanovsky, V. E.; Rowland, J. C.
2015-06-29
The effects of soil property uncertainties on permafrost thaw projections are studied using a three-phase subsurface thermal hydrology model and calibration-constrained uncertainty analysis. The Null-Space Monte Carlo method is used to identify soil hydrothermal parameter combinations that are consistent with borehole temperature measurements at the study site, the Barrow Environmental Observatory. Each parameter combination is then used in a forward projection of permafrost conditions for the 21st century (from calendar year 2006 to 2100) using atmospheric forcings from the Community Earth System Model (CESM) in the Representative Concentration Pathway (RCP) 8.5 greenhouse gas concentration trajectory. A 100-year projection allows for the evaluation of intra-annual uncertainty due to soil properties and the inter-annual variability due to year to year differences in CESM climate forcings. After calibrating to borehole temperature data at this well-characterized site, soil property uncertainties are still significant and result in significant intra-annual uncertainties in projected active layer thickness and annual thaw depth-duration even with a specified future climate. Intra-annual uncertainties in projected soil moisture content and Stefan number are small. A volume and time integrated Stefan number decreases significantly in the future climate, indicating that latent heat of phase change becomes more important than heat conduction in future climates. Out of 10 soil parameters, ALT, annual thaw depth-duration, and Stefan number are highly dependent on mineral soil porosity, while annual mean liquid saturation of the active layer is highly dependent on the mineral soil residual saturation and moderately dependent on peat residual saturation. By comparing the ensemble statistics to the spread of projected permafrost metrics using different climate models, we show that the effect of calibration-constrained uncertainty in soil properties, although
Uncertainty analysis for absorbed dose from a brain receptor imaging agent
Aydogan, B.; Miller, L.F.; Sparks, R.B.; Stubbs, J.B.
1999-01-01
Absorbed dose estimates are known to contain uncertainties. A recent literature search indicates that prior to this study no rigorous investigation of the uncertainty associated with absorbed dose had been undertaken. A method of uncertainty analysis for absorbed dose calculations has been developed and implemented for the brain receptor imaging agent ¹²³I-IPT. The two major sources of uncertainty considered were the uncertainty associated with the determination of residence time and that associated with the determination of the S values. There are many sources of uncertainty in the determination of the S values, but only the inter-patient organ mass variation was considered in this work. The absorbed dose uncertainties were determined for lung, liver, heart and brain. Ninety-five percent confidence intervals of the organ absorbed dose distributions for each patient and for a seven-patient population group were determined by the Latin Hypercube Sampling method. For an individual patient, the upper bound of the 95% confidence interval of the absorbed dose was found to be about 2.5 times larger than the estimated mean absorbed dose. For the seven-patient population, the upper bound of the 95% confidence interval of the absorbed dose distribution was around 45% more than the estimated population mean. For example, the 95% confidence interval of the population liver dose distribution was found to be between 1.49E+07 Gy/MBq and 4.65E+07 Gy/MBq with a mean of 2.52E+07 Gy/MBq. This study concluded that patients in a population receiving ¹²³I-IPT could receive absorbed doses as much as twice as large as the standard estimated absorbed dose due to these uncertainties.
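Latin Hypercube Sampling stratifies each input's distribution so that every probability stratum is sampled exactly once, which is why it converges faster than plain Monte Carlo for this kind of dose propagation. A self-contained sketch with a hypothetical dose model (dose = residence time × S value, with illustrative lognormal spreads, not the paper's patient data):

```python
import math
import random
from statistics import NormalDist

random.seed(42)

def latin_hypercube(n, dims):
    """n stratified samples in (0,1)^dims: one point per stratum per dimension."""
    cols = []
    for _ in range(dims):
        strata = [(i + random.random()) / n for i in range(n)]
        random.shuffle(strata)  # random pairing across dimensions
        cols.append(strata)
    return list(zip(*cols))

nd = NormalDist()
n = 10000
doses = []
for u1, u2 in latin_hypercube(n, 2):
    # Hypothetical lognormal inputs (geometric mean 1, illustrative spreads):
    residence_time = math.exp(0.3 * nd.inv_cdf(u1))
    s_value = math.exp(0.2 * nd.inv_cdf(u2))
    doses.append(residence_time * s_value)

doses.sort()
print("median:", round(doses[n // 2], 3))
print("upper 95% CI bound:", round(doses[int(0.975 * n)], 3))
```

Sorting the sampled doses and reading off the 2.5th and 97.5th percentiles gives the 95% confidence interval analogous to the ones quoted in the abstract.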
NASA Astrophysics Data System (ADS)
Harp, D. R.; Atchley, A. L.; Painter, S. L.; Coon, E. T.; Wilson, C. J.; Romanovsky, V. E.; Rowland, J. C.
2015-06-01
The effects of soil property uncertainties on permafrost thaw projections are studied using a three-phase subsurface thermal hydrology model and calibration-constrained uncertainty analysis. The Null-Space Monte Carlo method is used to identify soil hydrothermal parameter combinations that are consistent with borehole temperature measurements at the study site, the Barrow Environmental Observatory. Each parameter combination is then used in a forward projection of permafrost conditions for the 21st century (from calendar year 2006 to 2100) using atmospheric forcings from the Community Earth System Model (CESM) in the Representative Concentration Pathway (RCP) 8.5 greenhouse gas concentration trajectory. A 100-year projection allows for the evaluation of intra-annual uncertainty due to soil properties and the inter-annual variability due to year to year differences in CESM climate forcings. After calibrating to borehole temperature data at this well-characterized site, soil property uncertainties are still significant and result in significant intra-annual uncertainties in projected active layer thickness and annual thaw depth-duration even with a specified future climate. Intra-annual uncertainties in projected soil moisture content and Stefan number are small. A volume and time integrated Stefan number decreases significantly in the future climate, indicating that latent heat of phase change becomes more important than heat conduction in future climates. Out of 10 soil parameters, ALT, annual thaw depth-duration, and Stefan number are highly dependent on mineral soil porosity, while annual mean liquid saturation of the active layer is highly dependent on the mineral soil residual saturation and moderately dependent on peat residual saturation. By comparing the ensemble statistics to the spread of projected permafrost metrics using different climate models, we show that the effect of calibration-constrained uncertainty in soil properties, although significant, is
NASA Astrophysics Data System (ADS)
Harp, D. R.; Atchley, A. L.; Painter, S. L.; Coon, E. T.; Wilson, C. J.; Romanovsky, V. E.; Rowland, J. C.
2016-02-01
The effects of soil property uncertainties on permafrost thaw projections are studied using a three-phase subsurface thermal hydrology model and calibration-constrained uncertainty analysis. The null-space Monte Carlo method is used to identify soil hydrothermal parameter combinations that are consistent with borehole temperature measurements at the study site, the Barrow Environmental Observatory. Each parameter combination is then used in a forward projection of permafrost conditions for the 21st century (from calendar year 2006 to 2100) using atmospheric forcings from the Community Earth System Model (CESM) in the Representative Concentration Pathway (RCP) 8.5 greenhouse gas concentration trajectory. A 100-year projection allows for the evaluation of predictive uncertainty (due to soil property (parametric) uncertainty) and the inter-annual climate variability due to year to year differences in CESM climate forcings. After calibrating to measured borehole temperature data at this well-characterized site, soil property uncertainties are still significant and result in significant predictive uncertainties in projected active layer thickness and annual thaw depth-duration even with a specified future climate. Inter-annual climate variability in projected soil moisture content and Stefan number are small. A volume- and time-integrated Stefan number decreases significantly, indicating a shift in subsurface energy utilization in the future climate (latent heat of phase change becomes more important than heat conduction). Out of 10 soil parameters, ALT, annual thaw depth-duration, and Stefan number are highly dependent on mineral soil porosity, while annual mean liquid saturation of the active layer is highly dependent on the mineral soil residual saturation and moderately dependent on peat residual saturation. By comparing the ensemble statistics to the spread of projected permafrost metrics using different climate models, we quantify the relative magnitude of soil
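The separation of parametric (soil-property) uncertainty from inter-annual climate variability in these ensembles amounts to computing spread in two directions of a members-by-years array. A toy sketch (the ALT numbers are illustrative placeholders, not the study's data):

```python
import statistics

# Toy ensemble: rows = calibrated parameter combinations, cols = years (ALT in m)
ensemble = [
    [0.45, 0.48, 0.52, 0.55],
    [0.50, 0.53, 0.57, 0.61],
    [0.42, 0.44, 0.49, 0.52],
]

# Parametric (soil-property) uncertainty: spread across members, per year
parametric = [statistics.stdev(col) for col in zip(*ensemble)]

# Inter-annual variability: spread across years of the ensemble mean
yearly_mean = [statistics.fmean(col) for col in zip(*ensemble)]
interannual = statistics.stdev(yearly_mean)

print("per-year parametric stdev:", [round(s, 3) for s in parametric])
print("inter-annual stdev of ensemble mean:", round(interannual, 3))
```

Comparing these two spreads (and the spread across different climate models) is the kind of ensemble statistic the abstract uses to rank uncertainty sources.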
Haihua Zhao; Vincent A. Mousseau; Nam T. Dinh
2010-10-01
Code Scaling, Applicability, and Uncertainty (CSAU) methodology was developed in the late 1980s by the US NRC to systematically quantify reactor simulation uncertainty. Building on the CSAU methodology, Best Estimate Plus Uncertainty (BEPU) methods have been developed and widely used for new reactor designs and power uprates of existing LWRs. In spite of these successes, several aspects of CSAU have been criticized: (1) subjective judgement in the PIRT process; (2) high cost due to heavy reliance on a large experimental database, many expert man-years of work, and very high computational overhead; (3) mixing of numerical errors with other uncertainties; (4) grid dependence and use of the same numerical grids for both scaled experiments and real plant applications; and (5) user effects. Although a large amount of effort has gone into improving the CSAU methodology, the above issues still exist. With the effort to develop next-generation safety analysis codes, new opportunities appear to take advantage of new numerical methods, better physical models, and modern uncertainty quantification methods. Forward sensitivity analysis (FSA) directly solves the PDEs for parameter sensitivities (defined as the derivative of the physical solution with respect to any constant parameter). When the parameter sensitivities are available in a new advanced system analysis code, CSAU could be significantly improved: (1) Quantifying numerical errors: new codes that are fully implicit and of higher-order accuracy can run much faster, with numerical errors quantified by FSA. (2) Quantitative PIRT (Q-PIRT) to reduce subjective judgement and improve efficiency: treat numerical errors as special sensitivities alongside other physical uncertainties; only parameters having large uncertainty effects on design criteria are considered. (3) Greatly reducing computational costs for uncertainty quantification by (a) choosing optimized time steps and spatial sizes; (b) using gradient information
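Forward sensitivity analysis augments the governing equations with sensitivity equations obtained by differentiating with respect to each parameter and integrating them alongside the state. A minimal ODE sketch of the idea (not a reactor-code implementation): for dy/dt = -k·y, differentiating in k gives ds/dt = -k·s - y for s = dy/dk.

```python
import math

def forward_sensitivity(k=0.5, y0=1.0, t_end=2.0, n=20000):
    """Integrate dy/dt = -k*y together with its forward sensitivity s = dy/dk.

    Differentiating the ODE with respect to k yields the sensitivity equation
    ds/dt = -k*s - y with s(0) = 0, solved alongside the state equation.
    """
    dt = t_end / n
    y, s = y0, 0.0
    for _ in range(n):  # explicit Euler, both equations advanced together
        y, s = y + dt * (-k * y), s + dt * (-k * s - y)
    return y, s

y, s = forward_sensitivity()
# Analytic check: y = exp(-k*t), dy/dk = -t*exp(-k*t)
print(y, math.exp(-1.0))
print(s, -2.0 * math.exp(-1.0))
```

The same augmentation applied to a discretized PDE yields sensitivities of the full solution field at roughly the cost of one extra linear solve per parameter.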
NASA Technical Reports Server (NTRS)
Groves, Curtis E.
2013-01-01
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This proposal describes an approach to validate the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has the potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. The proposed research project includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT and OpenFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around
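The three-grid verification step referenced here is usually carried out with Richardson extrapolation and Roache's Grid Convergence Index (GCI), which turns three systematically refined solutions into an error band on the fine-grid result. A sketch under the usual assumptions (constant refinement ratio, monotone convergence; the input values are illustrative):

```python
import math

def grid_convergence_index(f_fine, f_med, f_coarse, r=2.0, fs=1.25):
    """Roache's Grid Convergence Index from three grid solutions.

    f_* are a scalar result on fine/medium/coarse grids with constant
    refinement ratio r; fs is the usual safety factor for a 3-grid study.
    """
    # Observed order of accuracy from the three solutions
    p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
    # Relative error between the two finest grids
    e21 = abs((f_med - f_fine) / f_fine)
    # GCI: discretization-error band on the fine-grid solution
    return fs * e21 / (r ** p - 1), p

gci, p = grid_convergence_index(0.971, 0.970, 0.966)  # illustrative values
print(f"observed order p = {p:.2f}, fine-grid GCI = {100 * gci:.3f}%")
```

An observed order close to the scheme's formal order is itself a verification check; a mismatch signals that the grids are not yet in the asymptotic range.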
Interval arithmetic operations for uncertainty analysis with correlated interval variables
NASA Astrophysics Data System (ADS)
Jiang, Chao; Fu, Chun-Ming; Ni, Bing-Yu; Han, Xu
2016-08-01
A new interval arithmetic method is proposed to solve interval functions with correlated intervals, through which the overestimation problem existing in interval analysis can be significantly alleviated. The correlation between interval parameters is defined by the multidimensional parallelepiped model, which conveniently describes correlated and independent interval variables in a unified framework. The original interval variables with correlation are transformed into a standard space without correlation, and the relationship between the original variables and the standard interval variables is obtained. The expressions of the four basic interval arithmetic operations, namely addition, subtraction, multiplication, and division, are given in the standard space. Finally, several numerical examples and a two-step bar are used to demonstrate the effectiveness of the proposed method.
A Study of the Critical Uncertainty Contributions in the Analysis of PCBs in Ambient Air
Brown, Andrew S.; Brown, Richard J. C.
2008-01-01
The measurement of polychlorinated biphenyls (PCBs) in ambient air requires a complex, multistep sample preparation procedure prior to analysis by gas chromatography-mass spectrometry (GC-MS). Although routine analytical laboratories regularly carry out these measurements, they are often undertaken with little regard to the accurate calculation of measurement uncertainty, or appreciation of the sensitivity of the accuracy of the measurement to each step of the analysis. A measurement equation is developed for this analysis, and the contributory sources to the overall uncertainty when preparing calibration standards and other solutions by gravimetric and volumetric approaches are discussed and compared. For the example analysis presented, it is found that the uncertainty of the measurement is dominated by the repeatability of the GC-MS analysis, and it is suggested that volumetric (as opposed to gravimetric) preparation of solutions does not adversely affect the overall uncertainty. The methodology presented in this work can also be applied to analogous methods for similar analytes, for example, those used to measure polycyclic aromatic hydrocarbons (PAHs), pesticides, dioxins, or furans in ambient air. PMID:18528517
NASA Astrophysics Data System (ADS)
Djepa, Vera; Badii, Atta
2016-04-01
The sensitivity of the weather and climate system to Sea Ice Thickness (SIT), Sea Ice Draft (SID) and Snow Depth (SD) in the Arctic is recognized from various studies. A decrease in SIT will affect atmospheric circulation, temperature, precipitation and wind speed in the Arctic and beyond. Ice thermodynamic and dynamic properties depend strongly on sea ice density (ID) and SD. SIT, SID, ID and SD are sensitive to environmental changes in the polar region and impact the climate system. Accurate forecasts of climate change, sea ice mass balance, ocean circulation and sea-atmosphere interactions require long-term records of SIT, SID, SD and ID with error and uncertainty analyses. The SID, SIT, ID and freeboard (F) have been retrieved from the Radar Altimeter (RA) on board ENVISAT and the IceBridge Laser Altimeter (LA) and validated using over 10 years of collocated observations of SID and SD in the Arctic, provided by the European Space Agency (ESA CCI sea ice ECV project). Improved algorithms to retrieve SIT from LA and RA have been derived by applying statistical analysis. The snow depth is obtained from AMSR-E/Aqua and the NASA IceBridge Snow Depth radar. The sea ice properties of pancake ice have been retrieved from ENVISAT/Synthetic Aperture Radar (ASAR). The uncertainties of the retrieved climate variables have been analysed, and the impact of snow depth and sea ice density on retrieved SIT has been estimated. The sensitivity analysis illustrates the impact of uncertainties of the input climate variables (ID and SD) on the accuracy of the retrieved output variables (SIT and SID). The developed methodology of uncertainty and sensitivity analysis is essential for assessing the impact of environmental variables on climate change and for better understanding the relationship between input and output variables. The uncertainty analysis quantifies the uncertainties of the model results, and the sensitivity analysis evaluates the contribution of each input variable to
On the potential of uncertainty analysis for prediction of brake squeal propensity
NASA Astrophysics Data System (ADS)
Zhang, Zhi; Oberst, Sebastian; Lai, Joseph C. S.
2016-09-01
Brake squeal is a source of significant warranty-related claims for automotive manufacturers because it is annoying and is often perceived by customers as a safety concern. Brake squeal analysis is complex due to changing environmental and operating conditions, high sensitivity to manufacturing and assembly tolerances, and the not-so-well-understood role of nonlinearities. Although brake squeal is essentially a nonlinear problem, the standard analysis tool in industry is the linear complex eigenvalue analysis (CEA), which may under-predict or over-predict the number of unstable vibration modes. A nonlinear instability analysis is more predictive than CEA but is still computationally too expensive to be used routinely in industry for a full brake finite element model. Also, although the net work analysis of a linearised brake system has shown potential in predicting the origin of brake squeal, it has not been used extensively. In this study, the net work of an analytical, viscously damped, self-excited 4-dof friction oscillator with a cubic contact-force nonlinearity is compared with the instability predictions of the CEA and of a nonlinear instability analysis. Results show that both the net work analysis and the CEA under-predict the instability because of their inability to detect the sub-critical Hopf bifurcation. Uncertainty analysis is then applied to examine whether it can improve the instability prediction of a nonlinear system using linear methods, and to establish its limitations. By applying a variance-based global sensitivity analysis to the parameters of the oscillator, suitable candidates for an uncertainty analysis are identified. Results of uncertainty analyses obtained by applying polynomial chaos expansions to the net work and the CEA correlate well with those of the nonlinear analysis, demonstrating the potential of an uncertainty analysis to improve the prediction of brake squeal propensity with a linear method.
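The variance-based global sensitivity analysis this abstract relies on can be illustrated with a brute-force Monte Carlo estimate of first-order Sobol indices. The three-input linear model below is a hypothetical stand-in for the oscillator response, not the paper's 4-dof model; it is a minimal sketch, assuming independent uniform inputs on [0, 1].

```python
import random
import statistics

def model(x1, x2, x3):
    # Toy response: x1 dominates, x3 matters least (illustrative only)
    return 4.0 * x1 + 2.0 * x2 + x3

def first_order_index(i, n_outer=200, n_inner=200, seed=0):
    """Brute-force first-order Sobol index S_i = Var(E[Y|X_i]) / Var(Y)."""
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = rng.random()                      # one fixed value of X_i
        ys = []
        for _ in range(n_inner):
            x = [rng.random() for _ in range(3)]
            x[i] = xi                          # freeze the studied input
            ys.append(model(*x))
        cond_means.append(statistics.fmean(ys))
        all_y.extend(ys)
    return statistics.pvariance(cond_means) / statistics.pvariance(all_y)

s = [first_order_index(i) for i in range(3)]  # S1 > S2 > S3 expected
```

For this additive model the analytic indices are 16/21, 4/21, and 1/21, so the estimated ranking recovers the dominant input; in practice polynomial chaos expansions, as used in the paper, obtain the same indices far more cheaply.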
A Parallel Disintegrated Model for Uncertainty Analysis in Estimating Electrical Power Outage Areas
NASA Astrophysics Data System (ADS)
Omitaomu, O. A.
2008-05-01
extreme events may lead to model uncertainty, parameter uncertainty, and/or decision uncertainty. The type and source of uncertainty can dictate the methods for characterizing the uncertainty and its impact on effective disaster management strategies. Several techniques, including sensitivity analysis, fuzzy set theory, and Bayes' theorem, have been used to quantify specific sources of uncertainty in various studies. However, these studies focus on individual areas of uncertainty and extreme weather. In this paper, we present some preliminary results in developing a parallel disintegrated model for uncertainty analysis, with application to estimating electric power outage areas. The proposed model is disintegrated in the sense that each element of the impact assessment framework is assessed separately, and parallel in the sense that, for each source of uncertainty, a number of equivalent estimating models are implemented and evaluated. The objectives of the model include identifying the sources of uncertainty to be included in the assessment model and determining the trade-offs in reducing the uncertainty due to major sources. The model would also be useful for uncertainty analysis of extreme weather impact assessments for other critical infrastructures.
Damage functions for climate-related hazards: unification and uncertainty analysis
NASA Astrophysics Data System (ADS)
Prahl, Boris F.; Rybski, Diego; Boettle, Markus; Kropp, Jürgen P.
2016-05-01
Most climate change impacts manifest in the form of natural hazards. Damage assessment typically relies on damage functions that translate the magnitude of extreme events into a quantifiable damage. In practice, the availability of damage functions is limited due to a lack of data sources and a lack of understanding of damage processes. The study of the characteristics of damage functions for different hazards could strengthen the theoretical foundation of damage functions and support their development and validation. Accordingly, we investigate analogies between damage functions for coastal flooding and for wind storms and identify a unified approach. This approach has general applicability to granular portfolios and may also be applied, for example, to heat-related mortality. Moreover, the unification enables the transfer of methodology between hazards and a consistent treatment of uncertainty. This is demonstrated by a sensitivity analysis on the basis of two simple case studies (for coastal flood and storm damage). The analysis reveals the relevance of the various uncertainty sources at varying hazard magnitudes and at both the microscale and macroscale levels. The main findings are the dominance of uncertainty arising from the hazard magnitude and the persistent behaviour of intrinsic uncertainties at both scale levels. Our results shed light on the general role of uncertainties and provide useful insight for the application of the unified approach.
A Methodology For Performing Global Uncertainty And Sensitivity Analysis In Systems Biology
Marino, Simeone; Hogue, Ian B.; Ray, Christian J.; Kirschner, Denise E.
2008-01-01
Accuracy of results from mathematical and computer models of biological systems is often complicated by the presence of uncertainties in the experimental data used to estimate parameter values. Current mathematical modeling approaches typically use either single-parameter or local sensitivity analyses. However, these methods do not accurately assess uncertainty and sensitivity in the system because, by default, they hold all other parameters fixed at baseline values. Using the techniques described here, we demonstrate how a multi-dimensional parameter space can be studied globally so that all uncertainties can be identified. Further, uncertainty and sensitivity analysis techniques can help to identify and ultimately control uncertainties. In this work we develop methods for applying existing analytical tools to perform analyses on a variety of mathematical and computer models. We compare two specific types of global sensitivity analysis indexes that have proven to be among the most robust and efficient. Through familiar and new examples of mathematical and computer models, we provide a complete methodology for performing these analyses, in both deterministic and stochastic settings, and propose novel techniques to handle problems encountered during these types of analyses. PMID:18572196
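A core building block of the global approach described above is Latin hypercube sampling (LHS), which stratifies each parameter's range so that a small sample still covers the whole space. The sketch below is a minimal stdlib-only implementation; the parameter bounds are hypothetical.

```python
import random

def latin_hypercube(n_samples, bounds, seed=1):
    """One LHS draw: each parameter's range is split into n_samples
    equal-probability strata, and each stratum is sampled exactly once."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        # one point per stratum, then shuffle to decorrelate parameters
        pts = [lo + (hi - lo) * (k + rng.random()) / n_samples
               for k in range(n_samples)]
        rng.shuffle(pts)
        columns.append(pts)
    return list(zip(*columns))  # rows = complete parameter sets

# Two hypothetical model parameters with different ranges
samples = latin_hypercube(10, [(0.0, 1.0), (10.0, 20.0)])
```

Each row of `samples` is then fed to the model, and the resulting outputs are analysed with a global sensitivity measure (e.g. the partial rank correlation coefficients the paper discusses).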
Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate
NASA Technical Reports Server (NTRS)
Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.
2013-01-01
There have been few discussions of using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis from computational data alone. The method uses current CFD uncertainty techniques coupled with the Student-t distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied by a specified tolerance or bias error, and the differences in the results are used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations, and conclusions are drawn about the feasibility of using CFD without experimental data. The results provide a tactic for analytically estimating the uncertainty in a CFD model when experimental data are unavailable.
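The Student-t step in this approach amounts to putting a confidence interval around the mean of a small set of perturbed CFD runs. Below is a minimal sketch with five hypothetical heat-transfer-coefficient results; the run values are invented, and the critical value 2.776 is t at 97.5% for 4 degrees of freedom.

```python
import math
import statistics

def t_interval(samples, t_crit):
    """95% confidence interval for the mean of a small sample using the
    Student-t distribution (t_crit must match df = len(samples) - 1)."""
    n = len(samples)
    mean = statistics.fmean(samples)
    half = t_crit * statistics.stdev(samples) / math.sqrt(n)
    return mean - half, mean + half

# Five hypothetical heat-transfer-coefficient runs (W/m^2-K), df = 4
runs = [102.0, 98.5, 101.2, 99.8, 100.5]
lo, hi = t_interval(runs, t_crit=2.776)   # t_{0.975} at 4 dof
```

The half-width of the resulting interval is the reported uncertainty; with more perturbed runs the degrees of freedom rise, the critical value shrinks, and the interval tightens.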
Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling
NASA Astrophysics Data System (ADS)
Pastore, Giovanni; Swiler, L. P.; Hales, J. D.; Novascone, S. R.; Perez, D. M.; Spencer, B. W.; Luzzi, L.; Van Uffelen, P.; Williamson, R. L.
2015-01-01
The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code with a recently implemented physics-based model for fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information in the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior predictions with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, significantly higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
NASA Astrophysics Data System (ADS)
Wei, Sha; Han, Qinkai; Peng, Zhike; Chu, Fulei
2016-05-01
Some system parameters in mechanical systems are always uncertain due to uncertainties in geometric and material properties, lubrication condition, and wear. For a more reasonable dynamic analysis of a parametrically excited system, the effect of uncertain parameters should be taken into account. This paper presents a new non-probabilistic analysis method for solving the dynamic responses of parametrically excited systems under uncertainties and multi-frequency excitations. By combining the multi-dimensional harmonic balance method (MHBM) and the Chebyshev inclusion function (CIF), an interval multi-dimensional harmonic balance method (IMHBM) is obtained. To illustrate the accuracy of the proposed method, a time-varying geared system of a wind turbine with different kinds of uncertainties is demonstrated. Comparison with the results of the scanning method shows that the presented method is valid and effective for parametrically excited systems with uncertainties and multi-frequency excitations. The effects of uncertain system parameters, including uncertain mesh stiffnesses and uncertain bearing stiffnesses, on the frequency responses of the system are also discussed in detail. It is shown that the dynamic responses of the system are insensitive to the uncertain mesh and bearing stiffnesses of the planetary gear stage, whereas the uncertain bearing stiffnesses of the intermediate- and high-speed stages lead to relatively large uncertainties in the dynamic responses around resonant regions. This provides valuable guidance for the optimal design and condition monitoring of wind turbine gearboxes.
Uncertainty analysis of primary water pollutant control in China's pulp and paper industry.
Wen, Zong-guo; Di, Jing-han; Zhang, Xue-ying
2016-03-15
The total emission control target for water pollutants (e.g., COD and NH4-N) in a given industrial sector can be predicted and analysed using popular technology-based bottom-up modelling. However, this methodology carries obvious uncertainty regarding the attainment of mitigation targets. The primary uncertainty comes from macro-production, the pollutant reduction roadmap, and technical parameters. This research takes the pulp and paper industry in China as an example and builds five mitigation scenarios via different combinations of raw material structure, scale structure, process mitigation technology, and end-of-pipe treatment technology. Using Monte Carlo uncertainty analysis, random sampling was conducted over a hundred thousand times. Sensitive parameters that impact total emission control targets, such as industrial output, technique structure, cleaner production technology, and end-of-pipe treatment technology, are discussed in this article. It appears that scenario uncertainty has a larger influence on COD emissions than on NH4-N; hence a looser total emission control target for COD is recommended to increase its feasibility and attainability while maintaining the status quo for NH4-N. Consequently, through uncertainty analysis, this research identifies the sensitive products, techniques, and technologies affecting industrial water pollution.
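The Monte Carlo step can be sketched by propagating uniform parameter ranges through a bottom-up emission identity. All numbers below are illustrative assumptions, not the paper's calibrated values.

```python
import random

def mc_cod_emissions(n=100_000, seed=3):
    """Monte Carlo sketch of a bottom-up COD estimate:
    emission [kt] = output [Mt] * intensity [kg COD/t] * (1 - removal rate).
    All parameter ranges are hypothetical."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        output = rng.uniform(90.0, 110.0)     # annual paper output, Mt
        intensity = rng.uniform(8.0, 12.0)    # pre-treatment COD intensity
        removal = rng.uniform(0.88, 0.95)     # end-of-pipe removal fraction
        totals.append(output * intensity * (1.0 - removal))
    return totals

totals = sorted(mc_cod_emissions())
p5, p95 = totals[len(totals) // 20], totals[-(len(totals) // 20)]
```

The spread between `p5` and `p95` is the kind of interval used to judge whether a fixed total emission control target is attainable under parameter uncertainty.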
Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling
Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; Novascone, Stephen R.; Perez, Danielle M.; Spencer, Benjamin W.; Luzzi, Lelio; Uffelen, Paul Van; Williamson, Richard L.
2014-10-12
The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for the coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
Coherent Uncertainty Analysis of Aerosol Measurements from Multiple Satellite Sensors
NASA Technical Reports Server (NTRS)
Petrenko, M.; Ichoku, C.
2013-01-01
Aerosol retrievals from multiple spaceborne sensors, including MODIS (on Terra and Aqua), MISR, OMI, POLDER, CALIOP, and SeaWiFS (altogether, a total of 11 different aerosol products), were comparatively analyzed using data collocated with ground-based aerosol observations from Aerosol Robotic Network (AERONET) stations within the Multi-sensor Aerosol Products Sampling System (MAPSS, http://giovanni.gsfc.nasa.gov/mapss/ and http://giovanni.gsfc.nasa.gov/aerostat/). The analysis was performed by comparing quality-screened satellite aerosol optical depth or thickness (AOD or AOT) retrievals during 2006-2010 to available collocated AERONET measurements globally, regionally, and seasonally, and deriving a number of statistical measures of accuracy. We used a robust statistical approach to detect and remove possible outliers in the collocated data that could bias the results of the analysis. Overall, the proportion of outliers in each of the quality-screened AOD products was within 12%. Squared correlation coefficient (R2) values of the satellite AOD retrievals relative to AERONET exceeded 0.6, with R2 for most of the products exceeding 0.7 over land and 0.8 over ocean. Root mean square error (RMSE) values for most of the AOD products were within 0.15 over land and 0.09 over ocean. We have been able to generate global maps showing regions where the different products present advantages over the others, as well as the relative performance of each product over different landcover types. It was observed that while MODIS, MISR, and SeaWiFS provide accurate retrievals over most landcover types, multi-angle capabilities make MISR the only sensor able to retrieve reliable AOD over barren and snow/ice surfaces. Likewise, active sensing enables CALIOP to retrieve aerosol properties over bright-surface shrublands more accurately than the other sensors, while POLDER, the only one of the sensors capable of measuring polarized aerosols, outperforms other sensors in
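The two headline accuracy measures in this comparison, R2 and RMSE, are straightforward to compute on collocated pairs. The sketch below uses invented AOD values purely to show the calculation; the Pearson correlation is hand-rolled to stay stdlib-only.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient (hand-rolled for portability)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / math.sqrt(vx * vy)

def accuracy_stats(retrieved, reference):
    """R^2 and RMSE of satellite retrievals vs. collocated ground truth."""
    r = pearson(retrieved, reference)
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(retrieved, reference))
                     / len(retrieved))
    return r * r, rmse

# Hypothetical collocated AOD pairs (satellite vs. AERONET)
sat     = [0.10, 0.22, 0.31, 0.18, 0.40]
aeronet = [0.12, 0.20, 0.33, 0.15, 0.42]
r2, rmse = accuracy_stats(sat, aeronet)
```

In the study these statistics are computed per product, per region, and per season, after the robust outlier screening described above.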
Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Eguía, Pablo; Collazo, Joaquín
2010-01-01
This paper investigates a method for the determination of the maximum sampling error and confidence intervals of thermal properties obtained from thermogravimetric analysis (TG analysis) for several lignocellulosic materials (ground olive stone, almond shell, pine pellets and oak pellets), completing previous work of the same authors. A comparison has been made between results of TG analysis and prompt analysis. Levels of uncertainty and errors were obtained, demonstrating that properties evaluated by TG analysis were representative of the overall fuel composition, and no correlation between prompt and TG analysis exists. Additionally, a study of trends and time correlations is indicated. These results are particularly interesting for biomass energy applications. PMID:21152292
Bayesian Uncertainty Analysis of PBPK Model Predictions for Permethrin in Rats
Uncertainty analysis of human physiologically-based pharmacokinetic (PBPK) model predictions can pose a significant challenge due to data limitations. As a result of these limitations, human models are often derived from extrapolated animal PBPK models, for which there is usuall...
42 CFR 81.11 - Use of uncertainty analysis in NIOSH-IREP.
Code of Federal Regulations, 2014 CFR
2014-10-01
... effectiveness factor or DDREF); and, the role of non-radiation risk factors (such as smoking history). 2 Draft... 42 Public Health 1 2014-10-01 2014-10-01 false Use of uncertainty analysis in NIOSH-IREP. 81.11 Section 81.11 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES...
42 CFR 81.11 - Use of uncertainty analysis in NIOSH-IREP.
Code of Federal Regulations, 2013 CFR
2013-10-01
... effectiveness factor or DDREF); and, the role of non-radiation risk factors (such as smoking history). 2 Draft... 42 Public Health 1 2013-10-01 2013-10-01 false Use of uncertainty analysis in NIOSH-IREP. 81.11 Section 81.11 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES...
42 CFR 81.11 - Use of uncertainty analysis in NIOSH-IREP.
Code of Federal Regulations, 2011 CFR
2011-10-01
... effectiveness factor or DDREF); and, the role of non-radiation risk factors (such as smoking history). 2 Draft... 42 Public Health 1 2011-10-01 2011-10-01 false Use of uncertainty analysis in NIOSH-IREP. 81.11 Section 81.11 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES...
Applying Uncertainty Analysis to a Risk Assessment for the Pesticide Permethrin
We discuss the application of methods of uncertainty analysis from our previous poster to the problem of a risk assessment for exposure to the food-use pesticide permethrin resulting from residential pesticide crack and crevice application. Exposures are simulated by the SHEDS (S...
FORMAL UNCERTAINTY ANALYSIS OF A LAGRANGIAN PHOTOCHEMICAL AIR POLLUTION MODEL. (R824792)
This study applied Monte Carlo analysis with Latin hypercube sampling to evaluate the effects of uncertainty in air parcel trajectory paths, emissions, rate constants, deposition affinities, mixing heights, and atmospheric stability on predictions from a vertically...
Uncertainty analysis of an irrigation scheduling model for water management in crop production
Technology Transfer Automated Retrieval System (TEKTRAN)
Irrigation scheduling tools are critical to allow producers to manage water resources for crop production in an accurate and timely manner. To be useful, these tools need to be accurate, complete, and relatively reliable. The current work presents the uncertainty analysis and its results for the Mis...
1991-03-12
Version 00 SUSD calculates sensitivity coefficients for one- and two-dimensional transport problems. Variance and standard deviation of detector responses or design parameters can be obtained using cross-section covariance matrices. In neutron transport problems, this code can perform sensitivity-uncertainty analysis for secondary angular distribution (SAD) or secondary energy distribution (SED).
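The variance propagation that SUSD performs with cross-section covariance matrices follows the standard "sandwich rule", var(R) = s^T C s, where s is the sensitivity-coefficient vector and C the covariance matrix. A minimal sketch with hypothetical two-group numbers:

```python
def response_variance(sens, cov):
    """Sandwich rule: var(R) = s^T C s, for sensitivity vector s and
    cross-section covariance matrix C (both in relative units)."""
    n = len(sens)
    return sum(sens[i] * cov[i][j] * sens[j]
               for i in range(n) for j in range(n))

sens = [0.8, -0.3]               # hypothetical relative sensitivities
cov = [[0.0004, 0.0001],         # hypothetical relative covariances
       [0.0001, 0.0009]]
var = response_variance(sens, cov)
std = var ** 0.5                 # relative standard deviation of response
```

The off-diagonal covariance terms matter: here the negative cross term (0.8 x -0.3 x 0.0001, counted twice) reduces the total variance below the sum of the diagonal contributions.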
Technology Transfer Automated Retrieval System (TEKTRAN)
In this paper, the Genetic Algorithms (GA) and Bayesian model averaging (BMA) were combined to simultaneously conduct calibration and uncertainty analysis for the Soil and Water Assessment Tool (SWAT). In this hybrid method, several SWAT models with different structures are first selected; next GA i...
Technology Transfer Automated Retrieval System (TEKTRAN)
For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...
A computational framework is presented for analyzing the uncertainty in model estimates of water quality benefits of best management practices (BMPs) in two small (<10 km^2) watersheds in Indiana. The analysis specifically recognizes the significance of the difference b...
James, Scott Carlton; Zimmerman, Dean Anthony
2003-10-01
Incorporating results from a previously developed finite element model, an uncertainty and parameter sensitivity analysis was conducted using preliminary site-specific data from Horonobe, Japan (data available from five boreholes as of 2003). Latin Hypercube Sampling was used to draw random parameter values from the site-specific measured, or approximated, physicochemical uncertainty distributions. Using pathlengths and groundwater velocities extracted from the three-dimensional, finite element flow and particle tracking model, breakthrough curves for multiple realizations were calculated with the semi-analytical, one-dimensional, multirate transport code, STAMMT-L. A stepwise linear regression analysis using the 5, 50, and 95% breakthrough times as the dependent variables and LHS sampled site physicochemical parameters as the independent variables was used to perform a sensitivity analysis. Results indicate that the distribution coefficients and hydraulic conductivities are the parameters responsible for most of the variation among simulated breakthrough times. This suggests that researchers and data collectors at the Horonobe site should focus on accurately assessing these parameters and quantifying their uncertainty. Because the Horonobe Underground Research Laboratory is in an early phase of its development, this work should be considered as a first step toward an integration of uncertainty and sensitivity analyses with decision analysis.
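The regression-based sensitivity ranking used here can be sketched in its simplest single-predictor form: correlate each LHS-sampled parameter with the simulated breakthrough times and rank by the strength of association. The data below are invented, and plain Pearson correlation stands in for the paper's stepwise regression.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient (hand-rolled, stdlib only)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / math.sqrt(vx * vy)

def rank_parameters(columns, response, names):
    """Rank sampled input parameters by |Pearson r| with the response."""
    scores = {name: abs(pearson(col, response))
              for name, col in zip(names, columns)}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical LHS draws and simulated median (50%) breakthrough times
kd       = [1.0, 2.0, 3.0, 4.0, 5.0]        # distribution coefficient
porosity = [0.30, 0.28, 0.33, 0.29, 0.31]
t50      = [12.0, 21.0, 33.0, 41.0, 52.0]
order = rank_parameters([kd, porosity], t50, ["Kd", "porosity"])
```

A stepwise procedure, as in the study, additionally controls for parameters already entered into the regression, but the ranking logic, strongest association first, is the same.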
NASA Astrophysics Data System (ADS)
Gupta, Manika; Garg, Naveen Kumar; Srivastava, Prashant K.
2014-05-01
A sensitivity and uncertainty analysis has been carried out for the scalar parameters (soil hydraulic parameters, SHPs) that govern the simulation of soil water content in the unsaturated soil zone. The study involves field experiments conducted under real field conditions for a wheat crop in Roorkee, India, under irrigation. Soil samples were taken from the 60 cm soil profile at 15 cm intervals in the experimental field to determine soil water retention curves (SWRCs). These experimentally determined SWRCs were used to estimate the SHPs by least-squares optimization under constrained conditions. The sensitivity of the SHPs estimated by various pedotransfer functions (PTFs), which relate easily measurable soil properties such as soil texture, bulk density, and organic carbon content, is compared with that of the lab-derived parameters in simulating the respective soil water retention curves. Sensitivity analysis was carried out using Monte Carlo simulations and the one-factor-at-a-time approach. The different sets of SHPs, along with the experimentally determined saturated permeability, are then used as input parameters in a physically based root water uptake model to ascertain the uncertainties in simulating soil water content. The generalised likelihood uncertainty estimation (GLUE) procedure was subsequently used to estimate the uncertainty bounds (UB) on the model predictions. It was found that the experimentally obtained SHPs were able to simulate the soil water contents with efficiencies of 70-80% at all depths for the three irrigation treatments. The SHPs obtained from the PTFs performed with varying uncertainties in simulating the soil water contents. Keywords: Sensitivity analysis, Uncertainty estimation, Pedotransfer functions, Soil hydraulic parameters, Hydrological modelling
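GLUE, as used above, amounts to sampling parameter sets from priors, keeping the "behavioural" ones whose goodness-of-fit exceeds a threshold, and reading uncertainty bounds off the spread of their predictions. The sketch below uses a toy linear soil-drying model and invented priors, not the study's root water uptake model.

```python
import random

def glue_bounds(observed, simulate, priors, n=2000, threshold=0.6, seed=2):
    """GLUE sketch: keep parameter sets whose Nash-Sutcliffe efficiency
    exceeds a threshold; report min/max of their predictions as bounds."""
    rng = random.Random(seed)
    obs_mean = sum(observed) / len(observed)
    denom = sum((o - obs_mean) ** 2 for o in observed)
    behavioural = []
    for _ in range(n):
        theta = [rng.uniform(lo, hi) for lo, hi in priors]
        sim = simulate(theta)
        nse = 1.0 - sum((s - o) ** 2 for s, o in zip(sim, observed)) / denom
        if nse >= threshold:
            behavioural.append(sim)
    lows = [min(col) for col in zip(*behavioural)]
    highs = [max(col) for col in zip(*behavioural)]
    return lows, highs

# Toy drying model: water content on day t = theta0 - theta1 * t
observed = [0.30, 0.28, 0.26, 0.24]
simulate = lambda th: [th[0] - th[1] * t for t in range(4)]
lows, highs = glue_bounds(observed, simulate,
                          priors=[(0.25, 0.35), (0.0, 0.05)])
```

Full GLUE implementations additionally weight each behavioural run by its likelihood to form percentile bounds rather than a raw min/max envelope.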
Uncertainty analysis of the Measured Performance Rating (MPR) method. Final report
Not Available
1993-11-01
A report was commissioned by the New York State Energy Research and Development Authority and the Electric Power Research Institute to evaluate the uncertainties in the energy monitoring method known as measured performance rating (MPR). The work is intended to help further development of the MPR system by quantitatively analyzing the uncertainties in estimates of the heat loss coefficients and heating system efficiencies. The analysis indicates that the MPR should be able to detect as little as a 7 percent change in the heat loss coefficient at the 95 percent confidence level. MPR appears sufficiently robust for characterizing common weatherization treatments; e.g., increasing attic insulation from R-7 to R-19 in a typical single-story, 1,100 sq. ft. house results in a 19 percent reduction in the heat loss coefficient. Furnace efficiency uncertainties ranged up to three times those of the heat loss coefficients. Measurement uncertainties (at the 95 percent confidence level) were estimated at 1 to 5 percent for heat loss coefficients and 1.5 percent for a typical furnace efficiency. The analysis also shows a limitation in applying MPR to houses with heating ducts in slabs on grade and to those with very large thermal mass. Most of the uncertainties encountered in the study were due more to the methods of estimating the "true" heat loss coefficients, furnace efficiency, and furnace fuel consumption (by collecting fuel bills and simulating two actual houses) than to the MPR approach. These uncertainties in the true parameter values argue for the need for empirical measures of the heat loss coefficient and furnace efficiency, like the MPR method, rather than against it.
U.S. Environmental Protection Agency radiogenic risk projections: uncertainty analysis.
Pawel, David J
2013-01-01
The U.S. Environmental Protection Agency (EPA) has updated its estimates of cancer risks due to low doses of ionizing radiation for the U.S. population, as well as their scientific basis. For the most part, these estimates were calculated using models recommended in the recent National Academy of Sciences' (BEIR VII) report on health effects from low levels of ionizing radiation. The new risk assessment includes uncertainty bounds associated with the projections for gender and cancer site-specific lifetime attributable risks. For most cancer sites, these uncertainty bounds were calculated using probability distributions for BEIR VII model parameter values, derived from a novel Bayesian analysis of cancer incidence data from the atomic bomb survivor lifespan study (LSS) cohort and subjective distributions for other relevant sources of uncertainty. This approach allowed for quantification of uncertainties associated with: 1) the effect of sampling variability on inferences drawn from the LSS cohort about the linear dose response and its dependence on temporal factors such as age-at-exposure, 2) differences in the radiogenic risks in the Japanese LSS cohort versus the U.S. population, 3) dosimetry errors, and 4) several other non-sampling sources. Some of the uncertainty associated with how risk depends on dose and dose rate was also quantified. For uniform whole-body exposures of low-dose gamma radiation to the entire population, EPA's cancer incidence risk coefficients and corresponding 90% uncertainty intervals (Gy) are 9.55 × 10 (4.3 × 10 to 1.8 × 10) for males and 1.35 × 10 (6.5 × 10 to 2.5 × 10) for females, where the numbers in parentheses represent an estimated 90% uncertainty interval. For many individual cancer sites, risk coefficients differ from corresponding uncertainty bounds by factors of about three to five, although uncertainties are larger for cancers of the stomach, prostate, liver, and uterus. Uncertainty intervals for many, but not all
Quantification of source uncertainties in Seismic Probabilistic Tsunami Hazard Analysis (SPTHA)
NASA Astrophysics Data System (ADS)
Selva, J.; Tonini, R.; Molinari, I.; Tiberti, M. M.; Romano, F.; Grezio, A.; Melini, D.; Piatanesi, A.; Basili, R.; Lorito, S.
2016-06-01
We propose a procedure for uncertainty quantification in Probabilistic Tsunami Hazard Analysis (PTHA), with a special emphasis on the uncertainty related to statistical modelling of the earthquake source in Seismic PTHA (SPTHA), and on the separate treatment of subduction and crustal earthquakes (treated as background seismicity). An event tree approach and ensemble modelling are used in place of more classical approaches, such as the hazard integral and the logic tree. The procedure consists of four steps: (1) exploration of aleatory uncertainty through an event tree, with alternative implementations for exploring epistemic uncertainty; (2) numerical computation of tsunami generation and propagation up to a given offshore isobath; (3) (optional) site-specific quantification of inundation; (4) simultaneous quantification of aleatory and epistemic uncertainty through ensemble modelling. The proposed procedure is general and independent of the kind of tsunami source considered; however, we implement step 1, the event tree, specifically for SPTHA, focusing on seismic source uncertainty. To exemplify the procedure, we develop a case study considering seismic sources in the Ionian Sea (central-eastern Mediterranean Sea), using the coasts of Southern Italy as a target zone. The results show that an efficient and complete quantification of all the uncertainties is feasible even when treating a large number of potential sources and a large set of alternative model formulations. We also find that (i) treating subduction and background (crustal) earthquakes separately allows for optimal use of available information and avoids significant biases; (ii) both subduction interface and crustal faults contribute to the SPTHA, in proportions that depend on source-target position and tsunami intensity; (iii) the proposed framework allows sensitivity and deaggregation analyses, demonstrating the applicability of the method for operational assessments.
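Step 4, combining alternative model formulations into one assessment, can be sketched as a weighted ensemble of hazard curves: each alternative source model produces exceedance probabilities on a common intensity grid, and epistemic weights blend them into a mean curve. The two curves and weights below are hypothetical.

```python
def ensemble_hazard(curves, weights):
    """Ensemble modelling sketch: blend alternative hazard curves
    (exceedance probabilities on a common intensity grid) with
    epistemic weights into a weighted-mean curve."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    n = len(curves[0])
    return [sum(w * c[i] for w, c in zip(weights, curves))
            for i in range(n)]

# Hypothetical exceedance probabilities at 3 tsunami-height thresholds
curve_a = [0.10, 0.03, 0.005]     # alternative source model A
curve_b = [0.08, 0.02, 0.002]     # alternative source model B
mean_curve = ensemble_hazard([curve_a, curve_b], [0.6, 0.4])
```

In the full procedure the ensemble spread (not just the mean) is retained, so epistemic uncertainty appears as percentile bands around the mean hazard curve.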
NASA Technical Reports Server (NTRS)
Groves, Curtis Edward
2014-01-01
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions.
A Quality Analysis and Uncertainty Modeling Approach for Crowd-Sourcing Location Check-In Data
NASA Astrophysics Data System (ADS)
Zhou, M.; Hu, Q.; Wang, M.
2013-05-01
Location check-in data, which develop along with social networks, are considered user-generated crowd-sourcing geospatial data. With massive data volume, abundant information content, and high up-to-date status, check-in data provide a new data source for geographic information services, represented by location-based services. However, there is a significant quality issue with crowd-sourcing data, which directly affects data usability. In this paper, a data quality analysis approach is designed for location check-in data and a check-in data uncertainty model is proposed. First, the quality issues of location check-in data are discussed. Then, according to the characteristics of check-in data, a location check-in data quality analysis and data processing approach is proposed, using a standard dataset as reference to conduct an affine transformation of the check-in dataset, during which the RANSAC algorithm is adopted for outlier elimination. Subsequently, combining GIS data uncertainty theory, an uncertainty model of the processed check-in data is set up. Finally, using location check-in data obtained from jiepang.com as experimental data and selected navigation data as the reference standard, multiple location check-in data quality analysis and uncertainty modeling experiments are conducted. Comprehensive analysis of the experimental results verifies the feasibility of the proposed location check-in data quality analysis and processing approach and the availability of the proposed uncertainty model. The approach has practical significance for the study of quality issues in crowd-sourced geographic data.
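A minimal sketch of the outlier-elimination step, using synthetic check-in and reference points rather than the paper's jiepang.com data: RANSAC over an affine fit rejects gross outliers before the final least-squares alignment. The tolerance and iteration count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

ref = rng.uniform(0, 100, size=(40, 2))              # reference (standard) points
A_true = np.array([[1.01, 0.02], [-0.01, 0.99]])
checkin = ref @ A_true.T + np.array([3.0, -2.0])     # systematically shifted copies
checkin[:5] += rng.uniform(20, 40, size=(5, 2))      # 5 gross outliers

def fit_affine(src, dst):
    # Least-squares solve of dst = [src | 1] @ P, with P a 3x2 matrix.
    X = np.hstack([src, np.ones((len(src), 1))])
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return P

best_inliers = np.zeros(40, dtype=bool)
for _ in range(200):                                  # RANSAC iterations
    idx = rng.choice(40, size=3, replace=False)       # minimal affine sample
    P = fit_affine(checkin[idx], ref[idx])
    pred = np.hstack([checkin, np.ones((40, 1))]) @ P
    inliers = np.linalg.norm(pred - ref, axis=1) < 1.0
    if inliers.sum() > best_inliers.sum():
        best_inliers = inliers

# Refit on the consensus set only.
P_final = fit_affine(checkin[best_inliers], ref[best_inliers])
pred = np.hstack([checkin, np.ones((40, 1))]) @ P_final
rmse_inliers = np.sqrt((np.linalg.norm(pred - ref, axis=1)[best_inliers] ** 2).mean())
```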
How uncertainty analysis in ecological risk assessment is used in the courtroom
Hacker, C.; Watson, J.
1995-12-31
The prevalence of uncertainty analysis in environmental decision-making is increasing. Specific methods for estimating and expressing uncertainty are available and continually being improved. Although these methods are intended to provide a measure of the suitability of the data upon which a decision is based, their application in litigation may result in outcomes that are unanticipated by some in the scientific community. This divergence between those estimating uncertainty in assessing ecological risk and those judging its application can be attributed in part to the different ways evidence is used in science and law. This presentation will explain how scientific evidence is used in the courtroom. This explanation will use examples from case law to describe how courts decide who can be qualified to present evidence, what evidence can be presented, and how this evidence will be used in reaching a decision.
Use of SUSA in Uncertainty and Sensitivity Analysis for INL VHTR Coupled Codes
Gerhard Strydom
2010-06-01
The need for a defendable and systematic Uncertainty and Sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This interim milestone report provides an overview of the current status of the implementation and testing of SUSA at the INL VHTR Project Office.
Developing Uncertainty Models for Robust Flutter Analysis Using Ground Vibration Test Data
NASA Technical Reports Server (NTRS)
Potter, Starr; Lind, Rick; Kehoe, Michael W. (Technical Monitor)
2001-01-01
A ground vibration test can be used to obtain information about structural dynamics that is important for flutter analysis. Traditionally, this information, such as natural frequencies of modes, is used to update analytical models used to predict flutter speeds. The ground vibration test can also be used to obtain uncertainty models, such as natural frequencies and their associated variations, that can update analytical models for the purpose of predicting robust flutter speeds. Analyzing test data using the ∞-norm, rather than the traditional 2-norm, is shown to lead to a minimum-size uncertainty description and, consequently, a least-conservative robust flutter speed. This approach is demonstrated using ground vibration test data for the Aerostructures Test Wing. Different norms are used to formulate uncertainty models and their associated robust flutter speeds to evaluate which norm is least conservative.
Uncertainty Analysis on Heat Transfer Correlations for RP-1 Fuel in Copper Tubing
NASA Technical Reports Server (NTRS)
Driscoll, E. A.; Landrum, D. B.
2004-01-01
NASA is studying kerosene (RP-1) for application in Next Generation Launch Technology (NGLT). Accurate heat transfer correlations in narrow passages at high temperatures and pressures are needed. Hydrocarbon fuels, such as RP-1, produce carbon deposition (coke) along the inside of tube walls when heated to high temperatures. A series of tests to measure the heat transfer using RP-1 fuel and examine the coking were performed in NASA Glenn Research Center's Heated Tube Facility. The facility models regenerative cooling by flowing room-temperature RP-1 through resistively heated copper tubing. A regression analysis is performed on the data to determine the heat transfer correlation for Nusselt number as a function of Reynolds and Prandtl numbers. Each measurement and calculation is analyzed to identify sources of uncertainty, including RP-1 property variations. Monte Carlo simulation is used to determine how each uncertainty source propagates through the regression, yielding an overall uncertainty in the predicted heat transfer coefficient. The implications of these uncertainties on engine design and ways to minimize existing uncertainties are discussed.
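The regression-plus-Monte-Carlo idea can be sketched on synthetic data: fit Nu = a·Re^m·Pr^n by least squares in log space, then re-perturb the measurements and re-fit to see how noise propagates into the fitted exponents. The coefficients and the 3% noise level are illustrative assumptions, not the program's results.

```python
import numpy as np

rng = np.random.default_rng(1)

Re = rng.uniform(1e4, 1e5, 50)
Pr = rng.uniform(8.0, 14.0, 50)
Nu_true = 0.023 * Re**0.8 * Pr**0.4          # assumed "true" correlation
Nu_meas = Nu_true * (1 + 0.03 * rng.standard_normal(50))   # 3% measurement noise

def fit(nu):
    # log Nu = ln a + m*log Re + n*log Pr, solved by linear least squares
    X = np.column_stack([np.ones(50), np.log(Re), np.log(Pr)])
    beta, *_ = np.linalg.lstsq(X, np.log(nu), rcond=None)
    return beta                               # [ln a, m, n]

beta_hat = fit(Nu_meas)

# Monte Carlo: re-perturb and re-fit to estimate the spread of the exponents.
samples = np.array([fit(Nu_meas * (1 + 0.03 * rng.standard_normal(50)))
                    for _ in range(500)])
m_lo, m_hi = np.percentile(samples[:, 1], [2.5, 97.5])
```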
An Uncertainty Analysis for Predicting Soil Profile Salinity Using EM Induction Data
NASA Astrophysics Data System (ADS)
Huang, Jingyi; Monteiro Santos, Fernando; Triantafilis, John
2016-04-01
Proximal soil sensing techniques such as electromagnetic (EM) induction have been used to identify and map the areal variation of average soil properties. However, soil varies with depth owing to the action of various soil forming factors (e.g., parent material and topography). In this work we collected EM data using EM38 and EM34 meters along a 22-km transect in the Trangie District, Australia. We jointly inverted these data using EM4Soil software and compared our 2-dimensional model of true electrical conductivity (sigma - mS/m) with depth against the measured electrical conductivity of a saturated soil-paste extract (ECe - dS/m) at depths of 0-16 m. Through the use of a linear regression (LR) model and by varying the forward modelling algorithm (cumulative function and full solution), the inversion algorithm (S1 and S2), and the damping factor (lambda), we determined a suitable electromagnetic conductivity image (EMCI), which was optimal when using the full solution, S2, and lambda = 0.6. To evaluate the uncertainty of the inversion process and the LR model, we conducted an uncertainty analysis. The distribution of the model misfit shows that the largest uncertainty caused by inversion (mostly due to EM34-40) occurs in deeper profiles, while the largest uncertainty of the LR model occurs where the soil profile is most saline. These uncertainty maps also illustrate how the model accuracy can be improved in the future.
NASA Astrophysics Data System (ADS)
Lü, Hongliang; Boilley, David; Abe, Yasuhisa; Shen, Caiwan
2016-09-01
Background: Synthesis of superheavy elements is performed by heavy-ion fusion-evaporation reactions. However, fusion is known to be hindered with respect to what can be observed with lighter ions. Thus some delicate ambiguities remain about the fusion mechanism, which eventually lead to severe discrepancies among the formation probabilities calculated with different fusion models. Purpose: In the present work, we propose a general framework based upon uncertainty analysis in the hope of constraining fusion models. Method: To quantify the uncertainty associated with the formation probability, we propose to propagate uncertainties in data and parameters using the Monte Carlo method in combination with a cascade code called kewpie2, with the aim of determining the associated uncertainty, namely the 95% confidence interval. We also investigate the impact of different models or options, which cannot be modeled by continuous probability distributions, on the final results. An illustrative example is presented in detail and then a systematic study is carried out for a selected set of cold-fusion reactions. Results: It is rigorously shown that, at the 95% confidence level, the total uncertainty of the empirical formation probability is comparable to the discrepancy between calculated values. Conclusions: The results obtained from the present study provide direct evidence for the predictive limitations of existing fusion-evaporation models. It is thus necessary to find other ways to assess such models for the purpose of establishing a more reliable reaction theory, which is expected to guide future experiments on the production of superheavy elements.
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert
1989-01-01
In the design and analysis of robust control systems for uncertain plants, the technique of formulating what is termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents the transfer function matrix M(s) of the nominal system, and delta represents an uncertainty matrix acting on M(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or multiple unstructured uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, and for real parameter variations the diagonal elements are real. As stated in the literature, this structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the literature addresses methods for obtaining this structure, and none of it addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty. Since a delta matrix of minimum order would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta model would be useful. A generalized method of obtaining a minimal M-delta structure for systems with real parameter variations is given.
Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo
NASA Astrophysics Data System (ADS)
Qin, Junsong; Liu, Bingyi; Niu, Dongxiao
By analyzing the factors that influence power grid investment capacity, an investment capacity analysis model is built with depreciation cost, sales price and sales quantity, net profit, financing, and GDP of the secondary industry as input variables. After carrying out Kolmogorov-Smirnov tests, the probability distribution of each influence factor is obtained. Finally, the uncertainty analysis results for grid investment capacity are obtained by Monte Carlo simulation.
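The workflow (distribution test, then Monte Carlo propagation) can be sketched with illustrative numbers; the toy capacity model and all distribution parameters below are assumptions, not values from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Fit one influence factor from "historical" observations and check the fitted
# normal distribution with a Kolmogorov-Smirnov test.
price_hist = rng.normal(0.50, 0.02, 200)       # assumed historical sales price
mu, sd = price_hist.mean(), price_hist.std(ddof=1)
ks = stats.kstest(price_hist, 'norm', args=(mu, sd))

# Monte Carlo propagation through a toy capacity model:
# capacity = sales revenue - depreciation cost + financing.
n = 10_000
price = rng.normal(mu, sd, n)
quantity = rng.normal(1e9, 5e7, n)             # units sold
depreciation = rng.normal(1e8, 1e7, n)
financing = rng.normal(2e8, 2e7, n)
capacity = price * quantity - depreciation + financing
lo, hi = np.percentile(capacity, [5, 95])      # 90% uncertainty band
```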
NASA Technical Reports Server (NTRS)
Davidian, Kenneth J.; Dieck, Ronald H.; Chuang, Isaac
1987-01-01
A preliminary uncertainty analysis has been performed for the High Area Ratio Rocket Nozzle test program which took place at the altitude test capsule of the Rocket Engine Test Facility at the NASA Lewis Research Center. Results from the study establish the uncertainty of measured and calculated parameters required for the calculation of rocket engine specific impulse. A generalized description of the uncertainty methodology used is provided. Specific equations and a detailed description of the analysis are presented. Verification of the uncertainty analysis model was performed by comparison with results from the experimental program's data reduction code. Final results include an uncertainty for specific impulse of 1.30 percent. The largest contributors to this uncertainty were calibration errors from the test capsule pressure and thrust measurement devices.
Spline analysis of Holocene sediment magnetic records: Uncertainty estimates for field modeling
NASA Astrophysics Data System (ADS)
Panovska, S.; Finlay, C. C.; Donadini, F.; Hirt, A. M.
2012-02-01
Sediment and archeomagnetic data spanning the Holocene enable us to reconstruct the evolution of the geomagnetic field on time scales of centuries to millennia. In global field modeling the reliability of data is taken into account by weighting according to uncertainty estimates. Uncertainties in sediment magnetic records arise from (1) imperfections in the paleomagnetic recording processes, (2) coring and (sub)sampling methods, (3) adopted averaging procedures, and (4) uncertainties in the age-depth models. We take a step toward improved uncertainty estimates by performing a comprehensive statistical analysis of the available global database of Holocene magnetic records. Smoothing spline models that capture the robust aspects of individual records are derived. This involves a cross-validation approach, based on an absolute deviation measure of misfit, to determine the smoothing parameter for each spline model, together with the use of a minimum smoothing time derived from the sedimentation rate and assumed lock-in depth. Departures from the spline models provide information concerning the random variability in each record. Temporal resolution analysis reveals that 50% of the records have smoothing times between 80 and 250 years. We also perform comparisons among the sediment magnetic records and archeomagnetic data, as well as with predictions from the global historical and archeomagnetic field models. Combining these approaches, we arrive at individual uncertainty estimates for each sediment record. These range from 2.5° to 11.2° (median: 5.9°; interquartile range: 5.4° to 7.2°) for inclination, 4.1° to 46.9° (median: 13.4°; interquartile range: 11.4° to 18.9°) for relative declination, and 0.59 to 1.32 (median: 0.93; interquartile range: 0.86 to 1.01) for standardized relative paleointensity. These values suggest that uncertainties may have been underestimated in previous studies. No compelling evidence for systematic inclination shallowing is found.
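A minimal sketch of the spline-smoothing step on a synthetic record (not database data): the smoothing level is picked by hold-out cross-validation with an absolute-deviation misfit, in the spirit of the procedure described above, and the residual scatter then estimates the record's random variability.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)

age = np.linspace(0, 10000, 200)                              # years BP
incl = 60 + 5 * np.sin(age / 800) + rng.normal(0, 2, 200)     # inclination, deg

test_idx = np.arange(0, 200, 5)                 # hold out every 5th point
train_idx = np.setdiff1d(np.arange(200), test_idx)

def cv_misfit(s):
    # s is a per-point residual-variance target for the smoothing spline
    spl = UnivariateSpline(age[train_idx], incl[train_idx], s=s * len(train_idx))
    return np.mean(np.abs(spl(age[test_idx]) - incl[test_idx]))

s_grid = [0.5, 1.0, 2.0, 4.0, 8.0]
misfits = [cv_misfit(s) for s in s_grid]
s_best = s_grid[int(np.argmin(misfits))]

spline = UnivariateSpline(age, incl, s=s_best * len(age))
residual_sd = np.std(incl - spline(age))        # random-variability estimate
```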
Gerstl, S.A.W.
1980-01-01
SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
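The variance propagation SENSIT performs follows the first-order "sandwich rule" of generalized perturbation theory: the relative response variance is S·C·S, with S the vector of relative sensitivities and C the relative covariance matrix of the cross sections. The numbers below are illustrative, not from any cross-section library.

```python
import numpy as np

# Relative sensitivities of the integral response to 3 multigroup cross
# sections (dR/R per dSigma/Sigma), illustrative values.
S = np.array([0.8, -0.3, 0.1])
# Relative covariance matrix of those cross sections, illustrative values.
C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])

rel_variance = S @ C @ S          # relative variance of the integral response
rel_std = np.sqrt(rel_variance)   # estimated relative standard deviation
```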
Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling
Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; Novascone, Stephen R.; Perez, Danielle M.; Spencer, Benjamin W.; Luzzi, Lelio; Uffelen, Paul Van; Williamson, Richard L.
2014-10-12
The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for the coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
Operational Implementation of a Pc Uncertainty Construct for Conjunction Assessment Risk Analysis
NASA Technical Reports Server (NTRS)
Newman, Lauri K.; Hejduk, Matthew D.; Johnson, Lauren C.
2016-01-01
Earlier this year the NASA Conjunction Assessment and Risk Analysis (CARA) project presented the theoretical and algorithmic aspects of a method to include the uncertainties in the calculation inputs when computing the probability of collision (Pc) between two space objects, principally uncertainties in the covariances and the hard-body radius. Rather than a single Pc value, this calculation approach produces an entire probability density function representing the range of possible Pc values given the uncertainties in the inputs, bringing CA risk analysis methodologies more in line with modern risk management theory. The present study provides results from exercising this method against an extended dataset of satellite conjunctions in order to determine the effect of its use on the evaluation of conjunction assessment (CA) event risk posture. The effects are found to be considerable: a good number of events are downgraded from or upgraded to a serious risk designation on the basis of consideration of the Pc uncertainty. The findings counsel the integration of the developed methods into NASA CA operations.
Uncertainty of the sample size reduction step in pesticide residue analysis of large-sized crops.
Omeroglu, P Yolci; Ambrus, Á; Boyacioglu, D; Majzik, E Solymosne
2013-01-01
To estimate the uncertainty of the sample size reduction step, each unit in laboratory samples of papaya and cucumber was cut into four segments in longitudinal directions and two opposite segments were selected for further homogenisation while the other two were discarded. Jackfruit was cut into six segments in longitudinal directions, and all segments were kept for further analysis. To determine the pesticide residue concentrations in each segment, they were individually homogenised and analysed by chromatographic methods. One segment from each unit of the laboratory sample was drawn randomly to obtain 50 theoretical sub-samples with an MS Office Excel macro. The residue concentrations in a sub-sample were calculated from the weight of segments and the corresponding residue concentration. The coefficient of variation calculated from the residue concentrations of 50 sub-samples gave the relative uncertainty resulting from the sample size reduction step. The sample size reduction step, which is performed by selecting one longitudinal segment from each unit of the laboratory sample, resulted in relative uncertainties of 17% and 21% for field-treated jackfruits and cucumber, respectively, and 7% for post-harvest treated papaya. The results demonstrated that sample size reduction is an inevitable source of uncertainty in pesticide residue analysis of large-sized crops. The post-harvest treatment resulted in a lower variability because the dipping process leads to a more uniform residue concentration on the surface of the crops than does the foliar application of pesticides.
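The sub-sampling calculation described above can be sketched directly; the unit counts, segment masses, and residue levels below are assumed for illustration, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(4)

# 10 fruit units, each cut into 4 longitudinal segments (assumed numbers).
n_units, n_segments = 10, 4
weights = rng.uniform(0.8, 1.2, (n_units, n_segments))              # segment mass, kg
residues = rng.lognormal(np.log(0.05), 0.5, (n_units, n_segments))  # residue, mg/kg

# Draw one segment per unit to form each of 50 theoretical sub-samples, then
# take the CV of the sub-sample concentrations as the relative uncertainty of
# the sample size reduction step.
concs = []
for _ in range(50):
    seg = rng.integers(0, n_segments, n_units)       # one random segment per unit
    w = weights[np.arange(n_units), seg]
    r = residues[np.arange(n_units), seg]
    concs.append((w * r).sum() / w.sum())            # mass-weighted concentration
concs = np.array(concs)
relative_uncertainty = concs.std(ddof=1) / concs.mean()
```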
Sensitivity Analysis and Insights into Hydrological Processes and Uncertainty at Different Scales
NASA Astrophysics Data System (ADS)
Haghnegahdar, A.; Razavi, S.; Wheater, H. S.; Gupta, H. V.
2015-12-01
Sensitivity analysis (SA) is an essential tool for providing insight into model behavior and for conducting model calibration and uncertainty assessment. Numerous techniques have been used for sensitivity analysis in environmental modelling studies. However, it is often overlooked that the scale of a modelling study and the choice of metric can significantly change the assessment of model sensitivity and uncertainty. In order to identify important hydrological processes across various scales, we conducted a multi-criteria sensitivity analysis using a novel and efficient technique, Variogram Analysis of Response Surfaces (VARS). The analysis was conducted using three different hydrological models: HydroGeoSphere (HGS), the Soil and Water Assessment Tool (SWAT), and Modélisation Environnementale-Surface et Hydrologie (MESH). The models were applied at scales ranging from small (hillslope) to large (watershed). In each case, the sensitivity of simulated streamflow to model processes (represented through parameters) was measured using different metrics selected based on various hydrograph characteristics such as high flows, low flows, and volume. We demonstrate how the scale of the case study and the choice of sensitivity metric(s) can change our assessment of sensitivity and uncertainty. We present some guidelines to better align the metric choice with the objective and scale of a modelling study.
Evaluation of Uncertainty in Runoff Analysis Incorporating Theory of Stochastic Process
NASA Astrophysics Data System (ADS)
Yoshimi, Kazuhiro; Wang, Chao-Wen; Yamada, Tadashi
2015-04-01
The aim of this paper is to provide a theoretical framework for uncertainty estimation in rainfall-runoff analysis based on the theory of stochastic processes. Stochastic differential equations (SDEs) based on this theory have been widely used in mathematical finance to predict stock price movements, and some researchers in civil engineering have applied this knowledge as well (e.g., Kurino et al., 1999; Higashino and Kanda, 2001). However, there have been no studies evaluating uncertainty in runoff phenomena through comparisons between an SDE and the corresponding Fokker-Planck equation. The Fokker-Planck equation is a partial differential equation that describes the temporal variation of a probability density function (PDF), and SDEs and Fokker-Planck equations are mathematically equivalent. In this paper, therefore, the effect of rainfall uncertainty on discharge uncertainty is explained theoretically and mathematically by introducing the theory of stochastic processes. The lumped rainfall-runoff model is represented as an SDE in difference form, because the temporal variation of rainfall is expressed as its average plus a deviation approximated by a Gaussian distribution; this is based on rainfall observed by rain-gauge stations and radar rain-gauge systems. As a result, this paper shows that it is possible to evaluate the uncertainty of discharge by using the relationship between the SDE and the Fokker-Planck equation. Moreover, the results of this study show that the uncertainty of discharge increases as rainfall intensity rises and the nonlinearity of flow resistance grows stronger. These results are clarified by the PDFs of discharge that satisfy the Fokker-Planck equation. It means the reasonable discharge can be
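The SDE / Fokker-Planck correspondence invoked above can be illustrated on a toy process rather than the paper's runoff model: for the Ornstein-Uhlenbeck SDE dX = -theta·X dt + sigma dW, the stationary solution of the Fokker-Planck equation is N(0, sigma²/(2·theta)), so an Euler-Maruyama ensemble can be checked against the analytic PDF.

```python
import numpy as np

rng = np.random.default_rng(5)

theta, sigma, dt = 1.0, 0.5, 0.01
n_paths, n_steps = 2000, 2000          # 2000 paths, simulated to t = 20

x = np.zeros(n_paths)
for _ in range(n_steps):               # Euler-Maruyama update of the SDE
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Stationary variance predicted by the Fokker-Planck equation.
stationary_var = sigma**2 / (2 * theta)    # = 0.125
```

The ensemble variance of the simulated paths should match the Fokker-Planck prediction up to sampling and discretization error.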
Biophysical and Economic Uncertainty in the Analysis of Poverty Impacts of Climate Change
NASA Astrophysics Data System (ADS)
Hertel, T. W.; Lobell, D. B.; Verma, M.
2011-12-01
This paper seeks to understand the main sources of uncertainty in assessing the impacts of climate change on agricultural output, international trade, and poverty. We incorporate biophysical uncertainty by sampling from a distribution of global climate model predictions for temperature and precipitation for 2050. The implications of these realizations for crop yields around the globe are estimated using the recently published statistical crop yield functions provided by Lobell, Schlenker and Costa-Roberts (2011). By comparing these yields to those predicted under current climate, we obtain the likely change in crop yields owing to climate change. The economic uncertainty in our analysis relates to the response of the global economic system to these biophysical shocks. We use a modified version of the GTAP model to elicit the impact of the biophysical shocks on global patterns of production, consumption, trade and poverty. Uncertainty in these responses is reflected in the econometrically estimated parameters governing the responsiveness of international trade, consumption, production (and hence the intensive margin of supply response), and factor supplies (which govern the extensive margin of supply response). We sample from the distributions of these parameters as specified by Hertel et al. (2007) and Keeney and Hertel (2009). We find that, even though it is difficult to predict where in the world agricultural crops will be favorably affected by climate change, the responses of economic variables, including output and exports, can be far more robust (Table 1). This is due to the fact that supply and demand decisions depend on relative prices, and relative prices depend on productivity changes relative to other crops in a given region, or relative to similar crops in other parts of the world. We also find that uncertainty in poverty impacts of climate change appears to be almost entirely driven by biophysical uncertainty.
Variability and Uncertainty in Probabilistic Seismic Hazard Analysis for the Island of Montreal
NASA Astrophysics Data System (ADS)
Elkady, Ahmed Mohamed Ahmed
The current seismic design process for structures in Montreal is based on the 2005 edition of the National Building Code of Canada (NBCC 2005), which adopts a hazard level corresponding to a probability of exceedance of 2% in 50 years. The code is based on the Uniform Hazard Spectrum (UHS) and deaggregation values obtained with the Geological Survey of Canada's (GSC) modified version of the F-RISK software, by a process that did not formally consider epistemic uncertainty. Epistemic uncertainty is related to the uncertainty in model formulation. A seismological model consists of seismic sources (source geometry, source location, recurrence rate, magnitude distribution, and maximum magnitude) and a Ground-Motion Prediction Equation (GMPE). In general, and particularly for Montreal, GMPEs are the main source of epistemic uncertainty relative to the other variables of the seismological model. The objective of this thesis is to use the CRISIS software to investigate the effect of epistemic uncertainty on probabilistic seismic hazard analysis (PSHA) products such as the UHS and deaggregation values by incorporating different new GMPEs. The epsilon parameter, which represents the departure of the target ground motion from that predicted by the GMPE, is also discussed, as it is not very well documented in Eastern Canada. A method is proposed to calculate epsilon values for Montreal relative to a given GMPE and to calculate robust weighted modal epsilon values when epistemic uncertainty is considered. Epsilon values are commonly used in seismic performance evaluations for identifying design events and selecting ground motion records for vulnerability and liquefaction studies. A brief overview of record epsilon, which accounts for the spectral shape of the ground motion time history, is also presented.
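The epsilon calculation described in the thesis has a simple form: epsilon is the distance of a target ground motion from a GMPE's median prediction, in units of the GMPE's log-space standard deviation, and a weighted value can be formed across alternative GMPEs. All numbers below (target spectral acceleration, GMPE medians, sigmas, and weights) are illustrative assumptions.

```python
import numpy as np

ln_target = np.log(0.30)                           # target SA, in g (assumed)
gmpe_median = np.array([0.18, 0.22, 0.15])         # medians of 3 GMPEs, g (assumed)
gmpe_sigma_ln = np.array([0.65, 0.60, 0.70])       # log-space sigmas (assumed)
weights = np.array([0.4, 0.4, 0.2])                # logic-tree weights (assumed)

# epsilon per GMPE: standardized log-space departure from the median
eps = (ln_target - np.log(gmpe_median)) / gmpe_sigma_ln
weighted_eps = weights @ eps                       # weighted epsilon across GMPEs
```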
Quantification of margins and uncertainty for risk-informed decision analysis.
Alvin, Kenneth Fredrick
2010-09-01
QMU stands for 'Quantification of Margins and Uncertainties'. QMU is a basic framework for consistently integrating simulation, data, and subject matter expertise to provide input into a risk-informed decision-making process. QMU is being applied to a wide range of NNSA stockpile issues, from performance to safety, and its implementation varies with lab and application focus. The Advanced Simulation and Computing (ASC) Program develops validated computational simulation tools to be applied in the context of QMU. The completeness aspect of QMU can benefit from the structured methodology and discipline of quantitative risk assessment (QRA)/probabilistic risk assessment (PRA). In characterizing uncertainties, it is important to distinguish between those arising from incomplete knowledge ('epistemic' or systematic) and those arising from device-to-device variation ('aleatory' or random). The national security labs should investigate the utility of a probability of frequency (PoF) approach for presenting uncertainties in the stockpile. A QMU methodology is connected if the interactions between failure modes are included. The design labs should continue to focus on quantifying epistemic uncertainties such as poorly modeled phenomena, numerical errors, coding errors, and systematic uncertainties in experiments. The NNSA and design labs should ensure that the certification plan for any RRW is supported by strong, timely peer review and by ongoing, transparent QMU-based documentation and analysis, to permit the confidence level necessary for eventual certification.
Uncertainty optimization applied to the Monte Carlo analysis of planetary entry trajectories
NASA Astrophysics Data System (ADS)
Way, David Wesley
2001-10-01
Future robotic missions to Mars, as well as any human missions, will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Planning for these missions will depend heavily on Monte Carlo analyses to evaluate active guidance algorithms, assess the impact of off-nominal conditions, and account for uncertainty. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties, because Monte Carlo analysis is a forward-driven problem: it begins with the input uncertainties and proceeds to the forecast output statistics. An improvement to Monte Carlo analysis is needed that allows the problem to be worked in reverse, so that the largest allowable dispersions that still achieve the required mission objectives can be determined quantitatively. This thesis proposes a methodology to optimize the uncertainties in the Monte Carlo analysis of spacecraft landing footprints. A metamodel is first used to write polynomial expressions for the size of the landing footprint as functions of the independent uncertainty extrema. The coefficients of the metamodel are determined by performing experiments. The metamodel is then used in a constrained optimization procedure to minimize a cost-tolerance function. A two-dimensional proof-of-concept problem was first used to evaluate the feasibility of this optimization method. The method was then demonstrated on the Mars Surveyor Program 2001 Lander, to show that the methodology developed during the proof-of-concept could be scaled to larger, more complicated, "real world" problems. This research has shown that it is possible to control the size of the landing footprint and establish tolerances for mission uncertainties. A simplified metamodel was developed, which is enabling for realistic problems with more than just a few uncertainties. A confidence interval on
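The reverse-driven idea above can be sketched as a constrained search over a polynomial metamodel. Everything here is an illustrative assumption: the quadratic coefficients, the reciprocal cost-tolerance function, the two dispersion variables, and the 10 km footprint requirement are stand-ins, not values from the thesis.

```python
import itertools

# Hypothetical quadratic metamodel: landing-footprint major axis (km) as a
# function of two uncertainty extrema, e.g. an atmospheric-density
# dispersion u1 and an entry flight-path-angle dispersion u2. The
# coefficients are illustrative, not fitted to any real vehicle.
def footprint(u1, u2):
    return 0.5 + 0.8 * u1 + 2.0 * u2 + 0.3 * u1 * u2

# Cost-tolerance function: tighter tolerances (smaller u) cost more.
def cost(u1, u2):
    return 1.0 / u1 + 1.0 / u2

REQUIRED = 10.0  # km, assumed mission constraint on footprint size

# Coarse constrained search: among tolerance pairs that keep the footprint
# within the requirement, find the cheapest (i.e. loosest) one.
best = None
for u1, u2 in itertools.product([i / 10 for i in range(1, 51)], repeat=2):
    if footprint(u1, u2) <= REQUIRED:
        c = cost(u1, u2)
        if best is None or c < best[0]:
            best = (c, u1, u2)

c, u1, u2 = best
print(f"largest allowable dispersions: u1={u1}, u2={u2}, cost={c:.2f}")
```

In the thesis the metamodel coefficients come from designed Monte Carlo experiments and the optimizer is more sophisticated; the grid search above only illustrates the structure of the inverse problem.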
NASA Astrophysics Data System (ADS)
Wang, Wenqin; Nguang, Sing Kiong; Zhong, Shouming; Liu, Feng
2014-05-01
This study examines the problem of robust stability of uncertain stochastic genetic regulatory networks with time-varying delays. The system's uncertainties are modeled in both polytopic form and structured linear fractional form. Based on a novel augmented Lyapunov-Krasovskii functional and different integral approaches, new stability conditions are derived. These stability criteria are applicable to both fast and slow time-varying delays. Finally, a numerical example is presented to illustrate the effectiveness of the proposed stability conditions.
NASA Astrophysics Data System (ADS)
Bonadonna, Costanza; Biass, Sébastien; Costa, Antonio
2015-04-01
Despite recent advances in geophysical monitoring and real-time quantitative observations of explosive volcanic eruptions, the characterization of tephra deposits remains one of the largest sources of information on Eruption Source Parameters (ESPs): plume height, erupted volume/mass, Mass Eruption Rate (MER), eruption duration, and Total Grain-Size Distribution (TGSD). ESPs are crucial for the characterization of volcanic systems and for the compilation of comprehensive hazard scenarios, but they are naturally associated with various degrees of uncertainty that are traditionally not well quantified. Recent studies have highlighted the uncertainties associated with the estimation of ESPs, mostly related to: i) the intrinsic variability of the natural system, ii) observational error, and iii) the strategies used to determine physical parameters. Here we review recent studies focused on the characterization of these uncertainties, and we present a sensitivity analysis for the determination of ESPs and a systematic investigation of the propagation of uncertainty applied to two case studies. In particular, we highlight the dependence of ESPs on the specific observations used as input parameters (i.e., diameter of the largest clasts, thickness measurements, area of isopach contours, deposit density, downwind and crosswind range of isopleth maps, and the empirical constants and wind speed used to determine MER). The highest uncertainty is associated with the estimation of MER and eruption duration, and is related to the determination of the crosswind range of isopleth maps and the empirical constants in the parameterization relating MER and plume height. Given the exponential nature of the relation between MER and plume height, the propagation of uncertainty is not symmetrical, and both an underestimation of the empirical constant and an overestimation of plume height have the highest impact on the final outcome. A ± 20% uncertainty on thickness
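The asymmetric propagation noted above follows directly from the power-law form of the height-MER relation. A minimal Monte Carlo sketch; the constants k = 0.3 and p = 0.241 are placeholders of the order used in common empirical fits, not the calibrated values from the study:

```python
import random
import statistics

random.seed(1)

# Illustrative power-law relation H = k * MER**p between plume height H (km)
# and mass eruption rate MER (kg/s); constants are placeholders.
k, p = 0.3, 0.241

def mer_from_height(h_km):
    return (h_km / k) ** (1.0 / p)

# Propagate a Gaussian uncertainty on a 10 km observed plume height
# through the inverse relation (heights clipped to stay positive).
heights = [max(0.1, random.gauss(10.0, 2.0)) for _ in range(20000)]
mers = [mer_from_height(h) for h in heights]

# The strong nonlinearity (exponent 1/p ~ 4) skews the MER distribution:
# overestimating the height inflates MER far more than underestimating
# it deflates it, so the mean sits above the median.
print(statistics.mean(mers) > statistics.median(mers))
```

A symmetric input uncertainty thus produces a right-skewed MER distribution, which is the mechanism behind the asymmetric bounds discussed in the abstract.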
Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L
2016-02-10
Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure.
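The separation of shared and unshared dose errors can be sketched with a toy two-dimensional Monte Carlo: each realization draws one systematic error common to the whole cohort plus an independent error per person. The cohort, lognormal forms, and error magnitudes below are illustrative assumptions, not the Kazakhstan study's dosimetry.

```python
import random
import statistics

random.seed(0)

TRUE_DOSES = [0.05, 0.10, 0.20, 0.40]  # Gy, hypothetical cohort

def dose_vector_realization():
    shared = random.lognormvariate(0.0, 0.3)              # common to cohort
    return [d * shared * random.lognormvariate(0.0, 0.2)  # per-person
            for d in TRUE_DOSES]

# Multiple a priori plausible dose vectors, as used by the Bayesian
# model averaging over realizations.
vectors = [dose_vector_realization() for _ in range(5000)]

# Within one realization all doses move together (shared error), so the
# cohort-mean dose varies across realizations far more than it would if
# every error were independent across subjects.
means = [statistics.mean(v) for v in vectors]
print(f"spread of cohort-mean dose: sd = {statistics.stdev(means):.4f}")
```

A risk model fitted separately to each vector, with the fits then averaged, is the essence of the model-averaging step described in the abstract.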
[Uncertainty analysis of water environmental capacity in the nonpoint source polluted river].
Chen, Ding-jiang; Lü, Jun; Jin, Pei-jian; Shen, Ye-na; Shi, Yi-ming; Gong, Dong-qin
2010-05-01
Based on a one-dimensional model for river water environmental capacity (WEC) and statistical analysis of measured hydrological and water quality variables, an uncertainty analysis method for the WEC of a nonpoint source polluted river was developed. It includes sensitivity analysis of the model's input parameters and probability distribution analysis of the WEC using a Monte Carlo simulation approach. The method, which accounts for the uncertainty in the available information on the river system and the randomness of nonpoint source pollution, provides WEC estimates with associated reliability levels for different hydrological seasons. As a case study, the total nitrogen (TN) WEC of the Changle River in southeast China was calculated with this method. Results indicated that the TN WEC with 90% reliability was 487.9, 949.8 and 1392.8 kg/d in the dry season, average season and flood season, respectively, with the dilution effect of river flow accounting for the main portion of the WEC. To satisfy the river's water quality target, about 1258.3-3591.2 kg/d of the TN currently entering the river should be cut in the watershed, with the largest reduction required during the flood season. The method, which reflects hydrological and water quality variations in a nonpoint source polluted river, provides a more reliable and efficient approach for WEC calculation.
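The Monte Carlo treatment of WEC reliability can be sketched as follows. The flow and concentration distributions, the 2.0 mg/L target, and the dilution-only simplification are illustrative assumptions, not the Changle River parameterization:

```python
import random

random.seed(42)

# Minimal sketch: sample the dilution term of a river WEC model,
# W = 86.4 * Q * (Cs - C0) in kg/d, with Q in m3/s and concentrations
# in mg/L, under hypothetical seasonal flow/quality distributions.
CS = 2.0  # mg/L, assumed TN water-quality target

def wec_sample():
    q = random.lognormvariate(1.0, 0.4)   # flow, m3/s
    c0 = random.uniform(0.8, 1.6)         # upstream TN, mg/L
    return 86.4 * q * (CS - c0)           # kg/d (dilution term only)

samples = sorted(wec_sample() for _ in range(10000))

# WEC with 90% reliability: the value exceeded in 90% of realizations,
# i.e. the 10th percentile of the simulated distribution.
wec_90 = samples[len(samples) // 10]
print(f"TN WEC at 90% reliability: {wec_90:.1f} kg/d")
```

Reporting the 10th percentile rather than the mean is what attaches a reliability level to the capacity estimate, mirroring the 90%-reliability seasonal values quoted in the abstract.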
Haihua Zhao; Vincent A. Mousseau
2012-08-01
Since the Code Scaling, Applicability, and Uncertainty (CSAU) methodology was proposed about two decades ago, it has been widely used for new reactor designs and for power uprates of existing LWRs. In spite of these successes, CSAU has been criticized on two main issues: lack of objectivity and high cost. With the effort to develop next-generation safety analysis codes, new opportunities appear to take advantage of new numerical methods, better physical models, and modern uncertainty quantification methods. Forward sensitivity analysis (FSA) directly solves the partial differential equations for parameter sensitivities. Moreover, our work shows that time and space steps can be treated as special sensitivity parameters, so that numerical errors can be directly compared with physical uncertainties. When FSA is implemented in a new advanced system analysis code, CSAU could be significantly improved by quantifying numerical errors and by allowing a quantitative PIRT (Q-PIRT) to reduce subjective judgment and improve efficiency. This paper reviews the issues with current CSAU implementations, introduces FSA, shows a simple example of performing FSA, and discusses potential improvements to CSAU with FSA. Finally, the general research direction and the requirements for using FSA in an advanced system analysis code are discussed.
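FSA's core idea, augmenting the governing equations with their parameter derivatives, can be shown on a scalar ODE. A minimal sketch for du/dt = -p·u, not a reactor system code:

```python
import math

# Forward sensitivity analysis sketch for du/dt = -p*u: the sensitivity
# s = du/dp satisfies ds/dt = -p*s - u and is integrated alongside the
# state, rather than estimated afterwards by finite differences.
def fsa(p=0.5, u0=1.0, t_end=2.0, dt=1e-4):
    u, s, t = u0, 0.0, 0.0
    while t < t_end - 1e-12:
        du = -p * u
        ds = -p * s - u          # d/dp of -p*u, chain rule through u
        u, s = u + dt * du, s + dt * ds
        t += dt
    return u, s

u, s = fsa()
# Analytic solution: u = exp(-p*t) and du/dp = -t*exp(-p*t) at p=0.5, t=2.
print(abs(u - math.exp(-1.0)) < 1e-3, abs(s + 2.0 * math.exp(-1.0)) < 1e-3)
```

Because the sensitivity equation is integrated with the same discretization as the state, refining dt exposes the numerical error alongside the parametric sensitivity, which is the comparison the abstract describes.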
Bio-physical vs. Economic Uncertainty in the Analysis of Climate Change Impacts on World Agriculture
NASA Astrophysics Data System (ADS)
Hertel, T. W.; Lobell, D. B.
2010-12-01
Accumulating evidence suggests that agricultural production could be greatly affected by climate change, but there remains little quantitative understanding of how these agricultural impacts would affect economic livelihoods in poor countries. The recent paper by Hertel, Burke and Lobell (GEC, 2010) considers three scenarios of agricultural impacts of climate change, corresponding to the fifth, fiftieth, and ninety-fifth percentiles of projected yield distributions for the world’s crops in 2030. They evaluate the resulting changes in global commodity prices, national economic welfare, and the incidence of poverty in a set of 15 developing countries. Although the small price changes under the medium scenario are consistent with previous findings, their low-productivity scenario reveals the potential for much larger food price changes than reported in recent studies, which have hitherto focused on the most likely outcomes. The poverty impacts of price changes under the extremely adverse scenario are quite heterogeneous and very significant in some population strata. They conclude that it is critical to look beyond central-case climate shocks, beyond a simple focus on yields, and beyond highly aggregated poverty impacts. In this paper, we conduct a more formal, systematic sensitivity analysis (SSA) with respect to uncertainty in the biophysical impacts of climate change on agriculture, by explicitly specifying joint distributions for global yield changes, this time focusing on 2050. This permits us to place confidence intervals on the resulting price impacts and poverty results that reflect the uncertainty inherited from the biophysical side of the analysis. We contrast this with the economic uncertainty inherited from the global general equilibrium model (GTAP) by undertaking SSA with respect to the behavioral parameters in that model. This permits us to assess which type of uncertainty is more important for regional price and poverty outcomes. Finally, we undertake a
NASA Astrophysics Data System (ADS)
Ge, Q.-S.; Zheng, J.-Y.; Hao, Z.-X.; Shao, X.-M.; Wang, Wei-Chyung; Luterbacher, Juerg
2010-02-01
Twenty-three published proxy temperature series over China spanning the last 2000 years were selected for an uncertainty analysis in five climate regions. Results indicated that, although large uncertainties are found for the period prior to the 16th century, a high level of consistency was identified in all regions during the recent 500 years, highlighted by the two cold periods of the 1620s-1710s and 1800s-1860s and by the warming during the 20th century. The latter started in Tibet, the Northwest and the Northeast, and migrated to the Central East and Southeast. The analysis also indicates that the warming during the 10th-14th centuries in some regions might be comparable in magnitude to the warming of the last few decades of the 20th century, which was unprecedented within the past 500 years.
Heterogenic Solid Biofuel Sampling Methodology and Uncertainty Associated with Prompt Analysis
Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Patiño, David; Collazo, Joaquín
2010-01-01
Accurate determination of the properties of biomass is of particular interest in studies on biomass combustion or cofiring. The aim of this paper is to develop a methodology for prompt analysis of heterogeneous solid fuels with an acceptable degree of accuracy. Special care must be taken with the sampling procedure to achieve an acceptable degree of error and low statistical uncertainty. A sampling and error determination methodology for prompt analysis is presented and validated. Two approaches for the propagation of errors are also given, and some comparisons are made to determine which may be better in this context. Results show generally low, acceptable levels of uncertainty, demonstrating that the samples obtained in the process are representative of the overall fuel composition. PMID:20559506
Usmani, S.A.; Baughman, P.D.
1996-12-01
The seismic analysis of a CANDU nuclear power plant is governed by the Canadian Standard series N289. However, the dynamic analysis of some equipment and systems, such as the CANDU reactor and fueling machine, must treat unique components not directly covered by the broad recommendations of these standards. This paper examines the damping values and treatment of modeling uncertainty recommended by CSA N289.3, the current state of knowledge and expert opinion as reflected in several current standards and testing results, and the unique aspects of the CANDU system. Damping values are recommended for the component parts of the CANDU reactor and fueling machine system: reactor building, calandria vault, calandria, fuel channel, pressure tube, fueling machine, and support structure. Recommendations for the treatment of modeling and other uncertainties are also presented.
NASA Astrophysics Data System (ADS)
Oates, William S.; Miles, Paul; Leon, Lider; Smith, Ralph
2016-04-01
Density functional theory (DFT) provides exceptional predictions of material properties of ideal crystal structures such as elastic modulus and dielectric constants. This includes ferroelectric crystals where excellent predictions of spontaneous polarization, lattice strain, and elastic moduli have been predicted using DFT. Less analysis has focused on quantifying uncertainty of the energy landscape over a broad range of polarization states in ferroelectric materials. This is non-trivial because the degrees of freedom contained within a unit cell are reduced to a single vector order parameter which is normally polarization. For example, lead titanate contains five atoms and 15 degrees of freedom of atomic nuclei motion which contribute to the overall unit cell polarization. Bayesian statistics is used to identify the uncertainty and propagation of error of a continuum scale, Landau energy function for lead titanate. Uncertainty in different parameters is quantified and this uncertainty is propagated through the model to illustrate error propagation over the energy surface. Such results are shown to have an impact in integration of quantum simulations within a ferroelectric phase field continuum modeling framework.
Uncertainty analysis for an effluent trading system in a typical nonpoint-sources-polluted watershed
Chen, Lei; Han, Zhaoxing; Wang, Guobo; Shen, Zhenyao
2016-01-01
Conventional effluent trading systems (ETSs) between point sources (PSs) and nonpoint sources (NPSs) are often unreliable because of the uncertain characteristics of NPSs. In this study, a new framework was established for PS-NPS ETSs, and a comprehensive analysis was conducted by quantifying the impacts of the uncertainties associated with the water assimilative capacity (WAC), NPS emissions, and measurement effectiveness. The results indicate that the uncertain characteristics of NPSs make a PS-NPS ETS less cost-effective during most hydrological periods, and that a clear transition occurs from the WAC constraint to the water quality constraint when these stochastic factors are considered. Specifically, the emission uncertainty had a greater impact on PSs, but an increase in the emission or abatement uncertainty shifted the abatement effort from NPSs toward PSs. Moreover, the error transitivity from the WAC to conventional ETS approaches is more obvious than that to the WEFZ-based ETS. When NPS emissions are relatively high, structural BMPs should be considered for trading, and vice versa. These results are critical for understanding the impacts of uncertainty on the functionality of PS-NPS ETSs and provide a trade-off between the confidence level and abatement effort. PMID:27406070
Sensitivity and uncertainty analysis for Abreu & Johnson numerical vapor intrusion model.
Ma, Jie; Yan, Guangxu; Li, Haiyan; Guo, Shaohui
2016-03-01
This study conducted one-at-a-time (OAT) sensitivity and uncertainty analysis of a numerical vapor intrusion model for nine input parameters: soil porosity, soil moisture, soil air permeability, aerobic biodegradation rate, building depressurization, crack width, floor thickness, building volume, and indoor air exchange rate. Simulations were performed for three soil types (clay, silt, and sand), two source depths (3 and 8 m), and two source concentrations (1 and 400 g/m3). Model sensitivity and uncertainty for shallow, high-concentration vapor sources (3 m and 400 g/m3) are much smaller than for deep, low-concentration sources (8 m and 1 g/m3). For high-concentration sources, soil air permeability, indoor air exchange rate, and building depressurization (for highly permeable soil like sand) are the key contributors to model output uncertainty. For low-concentration sources, soil porosity, soil moisture, aerobic biodegradation rate, and soil gas permeability are the key contributors. Another important finding is that the impact of aerobic biodegradation on the vapor intrusion potential of petroleum hydrocarbons is negligible when the vapor source concentration is high, because insufficient oxygen supply limits aerobic biodegradation activity.
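The OAT procedure itself is simple to sketch: move one parameter at a time across its range with the others fixed at baseline, and rank parameters by the resulting output swing. The surrogate model, parameter names, baselines, and ranges below are illustrative stand-ins, not the Abreu & Johnson model:

```python
# One-at-a-time (OAT) sensitivity sketch on a toy attenuation-factor
# surrogate; all parameter values and the model form are hypothetical.
BASELINE = {"porosity": 0.35, "moisture": 0.15, "air_exchange": 0.5}
RANGES = {"porosity": (0.25, 0.45),
          "moisture": (0.05, 0.30),
          "air_exchange": (0.2, 1.0)}

def model(p):
    # Illustrative surrogate, not the numerical vapor intrusion model.
    return 1e-4 * p["porosity"] / (p["moisture"] * p["air_exchange"])

swings = {}
for name, (lo, hi) in RANGES.items():
    outs = []
    for v in (lo, hi):
        trial = dict(BASELINE, **{name: v})  # perturb one parameter only
        outs.append(model(trial))
    swings[name] = max(outs) - min(outs)

# Rank parameters by their contribution to output variability.
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: output swing {swing:.2e}")
```

OAT ignores parameter interactions, which is its main limitation relative to global methods such as the variance-based analysis used in other entries on this page.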
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
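The variance-based indices described above can be estimated with the Saltelli pick-freeze scheme. A stdlib-only sketch on a toy linear model standing in for the CICE emulator; for Y = X1 + 2·X2 with Xi uniform on (0,1), the exact first-order indices are S1 = 0.2 and S2 = 0.8:

```python
import random

random.seed(7)

# Sobol' first-order sensitivity indices via the Saltelli pick-freeze
# estimator, demonstrated on a 2-parameter toy model.
def model(x):
    return x[0] + 2.0 * x[1]

N, D = 100000, 2
A = [[random.random() for _ in range(D)] for _ in range(N)]
B = [[random.random() for _ in range(D)] for _ in range(N)]
fA = [model(x) for x in A]
fB = [model(x) for x in B]

mean = sum(fA) / N
var = sum((y - mean) ** 2 for y in fA) / N

S = []
for i in range(D):
    # A with column i taken from B ("pick-freeze" matrices)
    ABi = [A[j][:i] + [B[j][i]] + A[j][i + 1:] for j in range(N)]
    fABi = [model(x) for x in ABi]
    Si = sum(fB[j] * (fABi[j] - fA[j]) for j in range(N)) / N / var
    S.append(Si)

print([round(s, 2) for s in S])  # approximately [0.2, 0.8]
```

In the study, an emulator of CICE replaces the cheap toy model so that the large Sobol' sample (here 4·N model evaluations for D = 2) remains affordable.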
Error analysis and measurement uncertainty for a fiber grating strain-temperature sensor.
Tang, Jaw-Luen; Wang, Jian-Neng
2010-01-01
A fiber grating sensor capable of distinguishing between temperature and strain, using a reference and a dual-wavelength fiber Bragg grating, is presented. Error analysis and measurement uncertainty for this sensor are studied theoretically and experimentally. The measured root mean squared errors for temperature T and strain ε were estimated to be 0.13 °C and 6 με, respectively. The maximum errors for temperature and strain were calculated as 0.00155 T + 2.90 × 10⁻⁶ ε and 3.59 × 10⁻⁵ ε + 0.01887 T, respectively. Using the estimation of expanded uncertainty at a 95% confidence level with a coverage factor of k = 2.205, the temperature and strain measurement uncertainties were evaluated as 2.60 °C and 32.05 με, respectively. For the first time, to our knowledge, we have demonstrated the feasibility of estimating the measurement uncertainty for simultaneous strain-temperature sensing with such a fiber grating sensor.
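Simultaneous discrimination with a dual-wavelength grating amounts to inverting a 2x2 sensitivity matrix relating the two Bragg wavelength shifts to temperature and strain changes. The coefficients below are typical orders of magnitude for silica FBGs, not the calibrated values from this paper:

```python
# 2x2 sensitivity matrix K: [d_lambda1; d_lambda2] = K [dT; d_eps].
# Illustrative coefficients (nm/degC, nm/microstrain) for two gratings.
K = [[10.0e-3, 1.2e-3],
     [ 6.0e-3, 0.8e-3]]

def invert(d_lam1, d_lam2):
    """Recover (dT, d_eps) from the two measured wavelength shifts."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    dT = ( K[1][1] * d_lam1 - K[0][1] * d_lam2) / det
    de = (-K[1][0] * d_lam1 + K[0][0] * d_lam2) / det
    return dT, de

# Forward-simulate a known state, then recover it.
dT_true, de_true = 25.0, 100.0
d1 = K[0][0] * dT_true + K[0][1] * de_true
d2 = K[1][0] * dT_true + K[1][1] * de_true
print(invert(d1, d2))  # recovers (25.0, 100.0) up to rounding
```

Measurement uncertainty in the wavelength shifts propagates through this inversion amplified by the (small) determinant of K, which is why the expanded uncertainties quoted above are much larger than the raw wavelength resolution would suggest.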
Uncertainty Optimization Applied to the Monte Carlo Analysis of Planetary Entry Trajectories
NASA Technical Reports Server (NTRS)
Olds, John; Way, David
2001-01-01
Recently, strong evidence of liquid water under the surface of Mars and a meteorite that might contain ancient microbes have renewed interest in Mars exploration. With this renewed interest, NASA plans to send spacecraft to Mars approximately every 26 months. These future spacecraft will return higher-resolution images, make precision landings, engage in longer-ranging surface maneuvers, and even return Martian soil and rock samples to Earth. Future robotic missions and any human missions to Mars will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Potential sources of water and other interesting geographic features are often located near hazards, such as within craters or along canyon walls. For more accurate landings to be made, spacecraft entering the Martian atmosphere need to use lift to actively control the entry. This active guidance results in much smaller landing footprints. Planning for these missions will depend heavily on Monte Carlo analysis. Monte Carlo trajectory simulations have been used with a high degree of success in recent planetary exploration missions. These analyses ascertain the impact of off-nominal conditions during a flight and account for uncertainty. Uncertainties generally stem from limitations in manufacturing tolerances, measurement capabilities, analysis accuracies, and environmental unknowns. Thousands of off-nominal trajectories are simulated by randomly dispersing uncertainty variables and collecting statistics on forecast variables. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties, because Monte Carlo analysis is a forward-driven problem: it begins with the input uncertainties and proceeds to the forecast outputs. It lacks a mechanism to affect or alter the uncertainties based on the forecast results. If the results are unacceptable, the current practice is to use an iterative, trial
Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems
NASA Technical Reports Server (NTRS)
He, Yuning; Davies, Misty Dawn
2014-01-01
The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.
NASA Astrophysics Data System (ADS)
Magri, Luca; Bauerheim, Michael; Nicoud, Franck; Juniper, Matthew P.
2016-11-01
Monte Carlo and Active Subspace Identification methods are combined with first- and second-order adjoint sensitivities to perform (forward) uncertainty quantification analysis of the thermo-acoustic stability of two annular combustor configurations. This method is applied to evaluate the risk factor, i.e., the probability for the system to be unstable. It is shown that the adjoint approach reduces the number of nonlinear-eigenproblem calculations by as much as the Monte Carlo samples.
Da Cruz, D. F.; Rochman, D.; Koning, A. J.
2012-07-01
This paper discusses the uncertainty analysis on reactivity and inventory for a typical PWR fuel element as a result of uncertainties in U-235 and U-238 nuclear data. A typical Westinghouse 3-loop fuel assembly fuelled with UO2 fuel at 4.8% enrichment has been selected. The Total Monte Carlo (TMC) method has been applied using the deterministic transport code DRAGON. This code allows the generation of few-group nuclear data libraries directly from data contained in the nuclear data evaluation files. The nuclear data used in this study are from the JEFF-3.1 evaluation, and the nuclear data files for U-238 and U-235 (randomized for the generation of the various DRAGON libraries) are taken from the nuclear data library TENDL. The total uncertainty (obtained by randomizing all U-238 and U-235 nuclear data in the ENDF files) on the reactor parameters has been split into different components (different nuclear reaction channels). Results show that the TMC method in combination with a deterministic transport code constitutes a powerful tool for performing uncertainty and sensitivity analysis of reactor physics parameters. (authors)
Helton, J.C.; Johnson, J.D.; McKay, M.D.; Shiver, A.W.; Sprung, J.L.
1995-01-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the early health effects associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 34 imprecisely known input variables on the following reactor accident consequences are studied: number of early fatalities, number of cases of prodromal vomiting, population dose within 10 mi of the reactor, population dose within 1000 mi of the reactor, individual early fatality probability within 1 mi of the reactor, and maximum early fatality distance. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: scaling factor for horizontal dispersion, dry deposition velocity, inhalation protection factor for nonevacuees, groundshine shielding factor for nonevacuees, early fatality hazard function alpha value for bone marrow exposure, and scaling factor for vertical dispersion.
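Latin hypercube sampling, the basis of the study above, stratifies each input into n equal-probability bins and draws exactly one value per bin, with the bin order shuffled independently for each variable. A minimal stdlib-Python sketch (uniform marginals assumed for illustration):

```python
import random

def latin_hypercube(n_samples, n_vars, seed=2):
    # One stratified draw per equal-probability bin, per variable, with the
    # bin order shuffled independently for each variable (column).
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        bins = list(range(n_samples))
        rng.shuffle(bins)
        columns.append([(b + rng.random()) / n_samples for b in bins])
    return [list(row) for row in zip(*columns)]

lhs = latin_hypercube(10, 3)
# Every variable has exactly one value in each decile:
for var in range(3):
    assert sorted(int(10 * row[var]) for row in lhs) == list(range(10))
```

Non-uniform marginals are obtained by pushing each column through the inverse CDF of the desired distribution; the regression and partial-correlation steps then operate on these stratified samples.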
Nasif, Hesham; Neyama, Atsushi
2003-02-26
This paper presents results of an uncertainty and sensitivity analysis for the performance of the different barriers of high-level radioactive waste repositories. SUA is a tool to perform uncertainty and sensitivity analysis on the output of the Wavelet Integrated Repository System (WIRS) model, which is developed to solve a system of nonlinear partial differential equations arising from the model formulation of radionuclide transport through a repository. SUA performs sensitivity analysis (SA) and uncertainty analysis (UA) on a sample output from Monte Carlo simulation. The sample is generated by WIRS and contains the values of the maximum release rate in the form of time series, together with the values of the input variables, for a set of different simulations (runs) realized by varying the model input parameters. The Monte Carlo sample is generated with SUA as a pure random sample or using the Latin hypercube sampling technique. Tchebycheff and Kolmogorov confidence bounds are computed on the maximum release rate for UA, and effective non-parametric statistics are used to rank the influence of the model input parameters for SA. Based on the results, we point out parameters that have primary influences on the performance of the engineered barrier system of a repository. The parameters found to be key contributors to the release rate are the selenium and cesium distribution coefficients in both the geosphere and the major water conducting fault (MWCF), the diffusion depth, and the water flow rate in the excavation-disturbed zone (EDZ).
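The Tchebycheff (Chebyshev) bounds mentioned above are distribution-free: they bound the mean of an output sample without assuming normality, at the cost of being wider than Gaussian intervals. A hedged sketch, with a mock lognormal Monte Carlo sample standing in for the WIRS release-rate output:

```python
import math
import random
import statistics

def tchebycheff_bounds(sample, confidence=0.95):
    # Distribution-free two-sided bound on the population mean:
    # P(|Xbar - mu| >= k * s/sqrt(n)) <= 1/k^2, so k = 1/sqrt(1 - confidence).
    n = len(sample)
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / math.sqrt(n)
    k = 1.0 / math.sqrt(1.0 - confidence)
    return mean - k * sem, mean + k * sem

rng = random.Random(3)
# Mock skewed Monte Carlo output standing in for maximum release rates.
release = [rng.lognormvariate(0.0, 0.5) for _ in range(500)]
lo, hi = tchebycheff_bounds(release)
print(lo, hi)
```

For the same confidence level a Gaussian interval would use k ≈ 1.96 instead of k ≈ 4.47, which is why Chebyshev bounds are the conservative choice for skewed release-rate distributions.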
Uncertainty analysis of an IGCC system with single-stage entrained-flow gasifier
Shastri, Y.; Diwekar, U.; Zitney, S.
2008-01-01
Integrated Gasification Combined Cycle (IGCC) systems using coal gasification are an attractive option for future energy plants. Consequently, understanding the system operation and optimizing gasifier performance in the presence of uncertain operating conditions is essential to extract the maximum benefits from the system. This work focuses on conducting such a study using an IGCC process simulation and a high-fidelity gasifier simulation coupled with stochastic simulation and multi-objective optimization capabilities. Coal gasifiers are the necessary basis of IGCC systems, and hence effective modeling and uncertainty analysis of the gasification process constitutes an important element of overall IGCC process design and operation. In this work, an Aspen Plus® steady-state process model of an IGCC system with carbon capture enables us to conduct simulation studies so that the effect of gasification variability on the whole process can be understood. The IGCC plant design consists of a single-stage entrained-flow gasifier, a physical solvent-based acid gas removal process for carbon capture, two model-7FB combustion turbine generators, two heat recovery steam generators, and one steam turbine generator in a multi-shaft 2x2x1 configuration. In the Aspen Plus process simulation, the gasifier is represented as a simplified lumped-parameter, restricted-equilibrium reactor model. In this work, we also make use of a distributed-parameter FLUENT® computational fluid dynamics (CFD) model to characterize the uncertainty for the entrained-flow gasifier. The CFD-based gasifier model is much more comprehensive, predictive, and hence better suited to understand the effects of uncertainty. The possible uncertain parameters of the gasifier model are identified. These include the input coal composition as well as the mass flow rates of coal, slurry water, and oxidant. Using a selected number of random (Monte Carlo) samples for the different parameters, the CFD model is
NASA Astrophysics Data System (ADS)
Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.
2016-05-01
Evapotranspiration (ET) is an important component of the water cycle - ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements on water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as input land surface temperature for the SSEBop algorithms. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001-2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors less than 20% across multiple flux tower sites distributed across different biomes. This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting systematic error or bias of the SSEBop model is within the
Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.
2016-01-01
Evapotranspiration (ET) is an important component of the water cycle – ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements on water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as input land surface temperature for the SSEBop algorithms. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001–2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors less than 20% across multiple flux tower sites distributed across different biomes. This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting systematic error or bias of the SSEBop model is within
NASA Astrophysics Data System (ADS)
Sykes, J. F.; Kang, M.; Thomson, N. R.
2007-12-01
The TCE release from The Lockformer Company in Lisle, Illinois, resulted in a plume in a confined aquifer that is more than 4 km long and impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments being based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence low-likelihood events, is critical in the determination of the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities, and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and observational data due to errors, biases, and limitations. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo sampling is found to be inadequate for uncertainty analysis of this case study due to its inability to find parameter sets that meet the predefined physical criteria. Successful results are achieved using a Dynamically-Dimensioned Search sampling
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700–1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s⁻¹ for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated
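The per-grid-cell Monte Carlo step can be sketched as follows: each kriged input is perturbed by a normal error with its kriging SD, and the coefficient of variation of the model output is reported. The PET formula and the nominal inputs here are illustrative stand-ins (the SDs are the spring-season averages quoted above), and for simplicity the sketch ignores the cross-correlations among interpolation errors that the study accounts for:

```python
import random
import statistics

def pet(temp_c, rh_pct, wind_ms):
    # Toy stand-in for a PET model (not the formulation used in the paper).
    return max(0.0, 0.05 * temp_c * (1 - rh_pct / 100.0) * (1 + 0.2 * wind_ms))

def cell_uncertainty(temp, rh, wind, sd_temp=2.6, sd_rh=8.7, sd_wind=0.38,
                     n_mc=100, seed=4):
    # Per-grid-cell Monte Carlo: perturb each kriged input by its kriging SD
    # (errors treated as independent here for simplicity).
    rng = random.Random(seed)
    draws = [pet(rng.gauss(temp, sd_temp),
                 rng.gauss(rh, sd_rh),
                 rng.gauss(wind, sd_wind)) for _ in range(n_mc)]
    mean = statistics.mean(draws)
    return mean, statistics.stdev(draws) / mean  # (mean PET, CV)

mean_pet, cv = cell_uncertainty(temp=15.0, rh=60.0, wind=3.0)
print(mean_pet, cv)
```

Repeating this at every grid cell yields the maps of PET means and CVs described in the abstract.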
Bayesian analysis of stage-fall-discharge rating curves and their uncertainties
NASA Astrophysics Data System (ADS)
Mansanarez, Valentin; Le Coz, Jérôme; Renard, Benjamin; Lang, Michel; Pierrefeu, Gilles; Le Boursicaud, Raphaël; Pobanz, Karine
2016-04-01
Stage-fall-discharge (SFD) rating curves are traditionally used to compute streamflow records at sites where the energy slope of the flow is variable due to variable backwater effects. Building on existing Bayesian approaches, we introduce an original hydraulics-based method for developing SFD rating curves used at twin gauge stations and estimating their uncertainties. Conventional power functions for channel and section controls are used, and transition to a backwater-affected channel control is computed based on a continuity condition, solved either analytically or numerically. The difference between the reference levels at the two stations is estimated as another uncertain parameter of the SFD model. The method proposed in this presentation incorporates information from both the hydraulic knowledge (equations of channel or section controls) and the information available in the stage-fall-discharge observations (gauging data). The obtained total uncertainty combines the parametric uncertainty and the remnant uncertainty related to the rating-curve model. This method provides a direct estimation of the physical inputs of the rating curve (roughness, width, bed slope, distance between twin gauges, etc.). The performance of the new method is tested using an application case affected by the variable backwater of a run-of-the-river dam: the Rhône river at Valence, France. In particular, a sensitivity analysis to the prior information and to the gauging dataset is performed. At that site, the stage-fall-discharge domain is well documented with gaugings conducted over a range of backwater-affected and unaffected conditions. The performance of the new model was deemed to be satisfactory. Notably, transition to uniform flow when the overall range of the auxiliary stage is gauged is correctly simulated. The resulting curves are in good agreement with the observations (gaugings) and their uncertainty envelopes are acceptable for computing streamflow records. Similar
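The conventional ingredients named above, a power-law control plus a fall correction, can be illustrated with the classical fall-ratio form. This is a generic textbook relation, not the paper's Bayesian SFD model, and all coefficients below are hypothetical:

```python
def sfd_discharge(stage, fall, a, b, c, fall_ref=1.0):
    # Classical fall-ratio rating: a power-law control Q = a * (h - b)^c,
    # scaled by sqrt(F / F_ref) to correct for variable backwater (the
    # measured fall F between the twin gauges proxies the energy slope).
    return a * (stage - b) ** c * (fall / fall_ref) ** 0.5

# Hypothetical coefficients a, b, c and a 0.4 m fall between twin gauges.
q = sfd_discharge(stage=3.2, fall=0.4, a=50.0, b=1.0, c=1.6)
print(q)
```

The Bayesian method in the paper additionally estimates a, b, c, the reference-level offset between the two gauges, and the uncertainty of each, rather than fixing them as done here.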
Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems
NASA Technical Reports Server (NTRS)
Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.
2005-01-01
The current standards for handling uncertainty in control systems use interval bounds for definition of the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods of μ-analysis can lead to overly conservative controller design. With these methods, worst-case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strong areas of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of response cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system and a non-collocated mass-spring system show the added information provided by this hybrid analysis.
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-01-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster–Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty–sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights. PMID:25843987
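The weight-perturbation Monte Carlo step can be sketched for a single map cell: the criteria weights are jittered, the AHP-style weighted overlay is recomputed, and the spread of the susceptibility score measures its sensitivity to the weighting. The weights, criteria values, and 10% relative error below are illustrative assumptions:

```python
import random
import statistics

def susceptibility(weights, criteria):
    # Weighted linear overlay of normalized criteria scores
    # (the core aggregation step of an AHP-based susceptibility map).
    total = sum(weights)
    return sum(w * c for w, c in zip(weights, criteria)) / total

def weight_uncertainty(weights, criteria, rel_sd=0.1, n_mc=2000, seed=6):
    # Monte Carlo over the criteria weights: each weight is perturbed by a
    # relative error and the susceptibility score is recomputed; the spread
    # of the scores measures the map value's sensitivity to the weighting.
    rng = random.Random(seed)
    scores = []
    for _ in range(n_mc):
        perturbed = [max(1e-9, rng.gauss(w, rel_sd * w)) for w in weights]
        scores.append(susceptibility(perturbed, criteria))
    return statistics.mean(scores), statistics.stdev(scores)

mean_s, sd_s = weight_uncertainty([0.5, 0.3, 0.2], [0.8, 0.4, 0.1])
print(mean_s, sd_s)
```

Doing this for every cell produces an uncertainty surface alongside the susceptibility map, which is the spatially-explicit output the paper compares between AHP and OWA.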
[Evaluation of uncertainty in measurement of radiated disturbance and analysis of the result].
Wang, Weiming; Jiang, Sui
2012-03-01
This paper evaluates the uncertainty in the measurement of radiated disturbance by analyzing and calculating the components that influence the uncertainty. The effectiveness of the uncertainty evaluation has been confirmed through proficiency testing.
AutoMOUSE: An improvement to the MOUSE computerized uncertainty analysis system. Operational manual
Klee, A.J.
1992-08-01
Under a mandate of national environmental laws, the agency strives to formulate and implement actions leading to a compatible balance between human activities and the ability of natural systems to support and nurture life. The Risk Reduction Engineering Laboratory is responsible for planning, implementing, and managing research, development, and demonstration programs to provide an authoritative, defensible engineering basis in support of the policies, programs, and regulations of the EPA with respect to drinking water, wastewater, pesticides, toxic substances, solid and hazardous wastes, and Superfund-related activities. The publication is one of the products of that research and provides a vital communication link between the researcher and the user community. The manual describes a system, called MOUSE (for Modular Oriented Uncertainty SystEm), for dealing with the computational problems of uncertainty, specifically in models that consist of a set of one or more equations. Since such models are frequently encountered in the fields of environmental science, risk analysis, economics, and engineering, the system has broad application throughout these fields. An important part of the MOUSE system is AutoMOUSE, which actually writes the computer programs required for the uncertainty analysis computations. Thus, no prior programming knowledge is needed to learn or use MOUSE and, because of its transportability and compactness, the system can be run on a wide variety of personal computers available to the U.S. Environmental Protection Agency and/or its contractors and grantees.
Reduced Uncertainties in the Flutter Analysis of the Aerostructures Test Wing
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Lung, Shun-fat
2010-01-01
Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. A test-validated finite element model can provide a reliable flutter analysis to define the flutter placard speed to which the aircraft can be flown prior to flight flutter testing. Minimizing the difference between numerical and experimental results is a type of optimization problem. Through the use of the National Aeronautics and Space Administration Dryden Flight Research Center's (Edwards, California, USA) multidisciplinary design, analysis, and optimization tool to optimize the objective function and constraints, the mass properties, the natural frequencies, and the mode shapes are matched to the target data and the mass matrix orthogonality is retained. The approach in this study has been applied to minimize the model uncertainties for the structural dynamic model of the aerostructures test wing, which was designed, built, and tested at the National Aeronautics and Space Administration Dryden Flight Research Center. A 25-percent change in flutter speed has been shown after reducing the uncertainties.
Reduced Uncertainties in the Flutter Analysis of the Aerostructures Test Wing
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Lung, Shun Fat
2011-01-01
Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. A test-validated finite element model can provide a reliable flutter analysis to define the flutter placard speed to which the aircraft can be flown prior to flight flutter testing. Minimizing the difference between numerical and experimental results is a type of optimization problem. Through the use of the National Aeronautics and Space Administration Dryden Flight Research Center's (Edwards, California) multidisciplinary design, analysis, and optimization tool to optimize the objective function and constraints, the mass properties, the natural frequencies, and the mode shapes are matched to the target data, and the mass matrix orthogonality is retained. The approach in this study has been applied to minimize the model uncertainties for the structural dynamic model of the aerostructures test wing, which was designed, built, and tested at the National Aeronautics and Space Administration Dryden Flight Research Center. A 25 percent change in flutter speed has been shown after reducing the uncertainties.
Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo
Herckenrath, Daan; Langevin, Christian D.; Doherty, John
2011-01-01
Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of
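The core NSMC trick, projecting random parameter differences onto the null space of the Jacobian so that calibration is preserved to first order, can be shown on a toy one-observation, two-parameter problem (the real method uses an SVD of a large Jacobian over pilot-point parameter fields):

```python
import math
import random

def nsmc_field(calibrated, jac_row, rng):
    # Null-space Monte Carlo for one observation and two parameters: a random
    # parameter difference is projected onto the null space of the Jacobian
    # row, so adding it to the calibrated parameters leaves the simulated
    # observation unchanged to first order.
    norm = math.hypot(*jac_row)
    u = (jac_row[0] / norm, jac_row[1] / norm)       # solution-space direction
    d = (rng.gauss(0, 1), rng.gauss(0, 1))           # random parameter difference
    dot = d[0] * u[0] + d[1] * u[1]
    d_null = (d[0] - dot * u[0], d[1] - dot * u[1])  # strip solution-space part
    return (calibrated[0] + d_null[0], calibrated[1] + d_null[1])

rng = random.Random(5)
jac = (1.0, 2.0)   # hypothetical sensitivities of the single observation
fields = [nsmc_field((0.0, 0.0), jac, rng) for _ in range(1000)]
# Every realization leaves the simulated observation unchanged to first order:
assert all(abs(p[0] * jac[0] + p[1] * jac[1]) < 1e-9 for p in fields)
```

In practice the projected fields are then briefly re-calibrated (the model is nonlinear, so first-order invariance is not exact), which is the computationally dominant step the abstract discusses.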
NASA Astrophysics Data System (ADS)
Velders, G. J. M.; Daniel, J. S.
2014-03-01
The rates at which ozone-depleting substances (ODSs) are removed from the atmosphere, which determine the lifetimes of these ODSs, are key factors for determining the rate of ozone layer recovery in the coming decades. We present here a comprehensive uncertainty analysis of future mixing ratios of ODSs, levels of equivalent effective stratospheric chlorine (EESC), ozone depletion potentials, and global warming potentials (GWPs), using, among other information, the 2013 WCRP/SPARC (World Climate Research Programme/Stratospheric Processes and their Role in Climate) assessment of lifetimes of ODSs and their uncertainties. The year EESC returns to pre-1980 levels, a metric commonly used to indicate a level of recovery from ODS-induced ozone depletion, is 2048 for midlatitudes and 2075 for Antarctic conditions based on the lifetimes from the SPARC assessment, which is about 2 and 4 yr later, respectively, than based on the lifetimes from the WMO (World Meteorological Organization) assessment of 2011. However, the uncertainty in this return to 1980 levels is much larger than the shift due to this change in lifetimes. The year EESC returns to pre-1980 levels ranges from 2039 to 2064 (95% confidence interval) for midlatitudes and from 2061 to 2105 for the Antarctic spring. The primary contribution to these ranges comes from the uncertainty in the lifetimes, with smaller contributions from uncertainties in other modeled parameters. The earlier years of the return estimates derived by the uncertainty analysis, i.e., 2039 for midlatitudes and 2061 for Antarctic spring, are comparable to a hypothetical scenario in which emissions of ODSs cease in 2014. The later end of the range, i.e., 2064 for midlatitudes and 2105 for Antarctic spring, can also be obtained by a scenario with an additional emission of about 7 Mt CFC-11 eq. (eq. - equivalent) in 2015, which is the same as about 2 times the projected cumulative anthropogenic emissions of all ODSs from 2014 to 2050, or about 12
Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo
NASA Astrophysics Data System (ADS)
Herckenrath, Daan; Langevin, Christian D.; Doherty, John
2011-05-01
Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of
Analysis of Photometric Uncertainties in the OGLE-IV Galactic Bulge Microlensing Survey Data
NASA Astrophysics Data System (ADS)
Skowron, J.; Udalski, A.; Kozłowski, S.; Szymański, M. K.; Mróz, P.; Wyrzykowski, Ł.; Poleski, R.; Pietrukowicz, P.; Ulaczyk, K.; Pawlak, M.; Soszyński, I.
2016-01-01
We present a statistical assessment of both observed and reported photometric uncertainties in the OGLE-IV Galactic bulge microlensing survey data. This dataset is widely used for the detection of variable stars and transient objects, the discovery of microlensing events, and the characterization of exoplanetary systems. Large collections of RR Lyr stars and Cepheids discovered by the OGLE project toward the Galactic bulge provide light curves based on this dataset. We describe the method of analysis and provide a procedure that can be used to update the preliminary photometric uncertainties, provided with the light curves, to ones reflecting the actual observed scatter at a given magnitude and for a given CCD detector of the OGLE-IV camera. This is of key importance for data modeling, in particular for the correct estimation of the goodness of fit.
An uncertainty analysis of the PVT gauging method applied to sub-critical cryogenic propellant tanks
NASA Astrophysics Data System (ADS)
Van Dresar, Neil T.
2004-06-01
The PVT (pressure, volume, temperature) method of liquid quantity gauging in low-gravity is based on gas law calculations assuming conservation of pressurant gas within the propellant tank and the pressurant supply bottle. There is interest in applying this method to cryogenic propellant tanks since the method requires minimal additional hardware or instrumentation. To use PVT with cryogenic fluids, a non-condensable pressurant gas (helium) is required. With cryogens, there will be a significant amount of propellant vapor mixed with the pressurant gas in the tank ullage. This condition, along with the high sensitivity of propellant vapor pressure to temperature, makes the PVT method susceptible to substantially greater measurement uncertainty than is the case with less volatile propellants. A conventional uncertainty analysis is applied to example cases of liquid hydrogen and liquid oxygen tanks. It appears that the PVT method may be feasible for liquid oxygen. Acceptable accuracy will be more difficult to obtain with liquid hydrogen.
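The gas-law bookkeeping behind PVT gauging can be sketched directly: the helium in the ullage behaves as an ideal gas at its partial pressure, i.e. total tank pressure minus propellant vapor pressure, so the ullage (and hence liquid) volume follows from the conserved helium inventory. All tank numbers below are hypothetical, chosen only to show why a vapor pressure close to the total pressure inflates the uncertainty:

```python
R = 8.314  # ideal gas constant, J/(mol K)

def liquid_volume_pvt(p_tank, t_tank, p_vap, v_tank, n_helium):
    # PVT gauging: the non-condensable helium occupies the ullage at its
    # partial pressure (total pressure minus propellant vapor pressure),
    # so the ullage volume follows from the known helium inventory.
    v_ullage = n_helium * R * t_tank / (p_tank - p_vap)
    return v_tank - v_ullage

# Hypothetical liquid-oxygen tank state, for illustration only.
v_liq = liquid_volume_pvt(p_tank=300e3, t_tank=95.0, p_vap=150e3,
                          v_tank=10.0, n_helium=500.0)

# A 1 kPa error in vapor pressure shifts the gauged liquid volume; since
# dP_vap/dT is steep for cryogens, small temperature errors do exactly this.
shift = liquid_volume_pvt(300e3, 95.0, 151e3, 10.0, 500.0) - v_liq
print(v_liq, shift)
```

Because the helium partial pressure appears in a denominator as a difference of two comparable pressures, its relative error, and thus the gauged-volume error, grows as the propellant vapor pressure approaches the tank pressure, which is the liquid-hydrogen difficulty the abstract notes.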
Idealization, uncertainty and heterogeneity: game frameworks defined with formal concept analysis.
Racovitan, M. T.; Sallach, D. L.; Decision and Information Sciences; Northern Illinois Univ.
2006-01-01
The present study begins with Formal Concept Analysis and demonstrates how a succession of game frameworks may, by design, address increasingly complex and interesting social phenomena. We develop a series of multi-agent exchange games, each of which incorporates an additional dimension of complexity. All games are based on coalition patterns in exchanges where diverse cultural markers provide a basis for trust and reciprocity. The first game is characterized by an idealized concept of trust. A second game framework introduces uncertainty regarding the reciprocity of prospective transactions. A third game framework retains idealized trust and uncertainty, and adds agent heterogeneity: cultural markers are not equally salient in conferring or withholding trust, and the result is a richer transactional process.
NASA Astrophysics Data System (ADS)
McKinney, S. W.
2015-12-01
Effectiveness of uncertainty quantification (UQ) and sensitivity analysis (SA) has been improved in ASCEM by choosing from a variety of methods to best suit each model. Previously, ASCEM had a small toolset for UQ and SA, omitting the benefits of the many methods it did not include. Many UQ and SA methods are suited to models with specific characteristics; therefore, programming each of these methods into ASCEM would have been inefficient. Embedding the R programming language into ASCEM grants access to a plethora of UQ and SA methods. As a result, the programming required is drastically decreased, and runtime efficiency and analysis effectiveness are increased relative to each unique model.
Huston, Thomas E; Farfán, Eduardo B; Bolch, W Emmett; Bolch, Wesley E
2003-11-01
An important aspect in model uncertainty analysis is the evaluation of input parameter sensitivities with respect to model outcomes. In previous publications, parameter uncertainties were examined for the ICRP-66 respiratory tract model. The studies were aided by the development and use of a computer code LUDUC (Lung Dose Uncertainty Code), which allows probability density functions to be specified for all ICRP-66 model input parameters. These density functions are sampled using Latin hypercube techniques, with values subsequently propagated through the ICRP-66 model. In the present study, LUDUC has been used to perform a detailed parameter sensitivity analysis of the ICRP-66 model using input parameter density functions specified in previously published articles. The results suggest that most of the variability in the dose to a given target region is explained by only a few input parameters. For example, for particle diameters between 0.1 and 50 microm, about 50% of the variability in the total lung dose (weighted sum of target tissue doses) for 239PuO2 is due to variability in the dose to the alveolar-interstitial (AI) region. In turn, almost 90% of the variability in the dose to the AI region is attributable to uncertainties in only four parameters in the model: the ventilation rate, the AI deposition fraction, the clearance rate constant for slow-phase absorption of deposited material to the blood, and the clearance rate constant for particle transport from the AI2 to bb1 compartment. A general conclusion is that many input parameters do not significantly influence variability in final doses. As a result, future research can focus on improving density functions for those input variables that contribute the most to variability in final dose values. PMID:14571988
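Latin hypercube sampling of the kind LUDUC performs can be sketched in a few lines: each parameter's [0,1) range is split into one stratum per sample, and the strata are shuffled independently per parameter. This is a generic illustration, not the LUDUC code; the parameter bounds below are invented.

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng):
    """Stratified samples on [0,1): exactly one point per stratum per dimension."""
    u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_params):
        u[:, j] = rng.permutation(u[:, j])   # break row alignment between dimensions
    return u

# Map the uniform samples onto (invented) parameter ranges and propagate:
rng = np.random.default_rng(0)
u = latin_hypercube(500, 4, rng)                       # 500 samples, 4 parameters
lows = np.array([0.1, 5.0, 0.01, 0.2])
highs = np.array([0.5, 20.0, 0.1, 0.9])
params = lows + (highs - lows) * u                     # ready to feed to the dose model
```

In practice each column would be mapped through the inverse CDF of that parameter's density function rather than a uniform range.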
Uncertainty Analysis for a De-pressurised Loss of Forced Cooling Event of the PBMR Reactor
Jansen van Rensburg, Pieter A.; Sage, Martin G.
2006-07-01
This paper presents an uncertainty analysis for a De-pressurised Loss of Forced Cooling (DLOFC) event that was performed with the systems CFD (Computational Fluid Dynamics) code Flownex for the PBMR reactor. An uncertainty analysis was performed to determine the variation in maximum fuel, core barrel and reactor pressure vessel (RPV) temperature due to variations in model input parameters. Some of the input parameters that were varied are: thermo-physical properties of helium and the various solid materials, decay heat, neutron and gamma heating, pebble bed pressure loss, pebble bed Nusselt number and pebble bed bypass flows. The Flownex model of the PBMR reactor is a 2-dimensional axisymmetrical model. It is simplified in terms of geometry and some other input values. However, it is believed that the model adequately indicates the effect of changes in certain input parameters on the fuel temperature and other components during a DLOFC event. Firstly, a sensitivity study was performed where input variables were varied individually according to predefined uncertainty ranges and the results were sorted according to the effect on maximum fuel temperature. In the sensitivity study, only seven variables had a significant effect on the maximum fuel temperature (greater than 5 deg. C). The most significant are power distribution profile, decay heat, reflector properties and effective pebble bed conductivity. Secondly, Monte Carlo analyses were performed in which twenty variables were varied simultaneously within predefined uncertainty ranges. For a one-tailed 95% confidence level, the conservatism that should be added to the best estimate calculation of the maximum fuel temperature for a DLOFC was determined as 53 deg. C. This value will probably increase after some model refinements in the future. Flownex was found to be a valuable tool for uncertainty analyses, facilitating both sensitivity studies and Monte Carlo analyses. (authors)
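The one-tailed 95% conservatism can be read directly off a Monte Carlo sample of maximum fuel temperatures: it is the margin between the 95th percentile of the sample and the best-estimate value. A generic sketch (the quoted 53 deg. C comes from the authors' full Flownex model, not from any toy like this):

```python
import numpy as np

def one_tailed_conservatism(mc_max_fuel_temps, best_estimate, level=0.95):
    """Margin to add to the best-estimate maximum fuel temperature so that it
    bounds the Monte Carlo results at the given one-tailed confidence level."""
    return np.quantile(mc_max_fuel_temps, level) - best_estimate
```

Each Monte Carlo realization here would be one Flownex run with all twenty uncertain inputs drawn simultaneously from their predefined ranges.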
NASA Astrophysics Data System (ADS)
Zio, Enrico; Apostolakis, George E.
1999-03-01
This paper illustrates an application of sensitivity and uncertainty analysis techniques within a methodology for evaluating environmental restoration technologies. The methodology consists of two main parts: the first part ("analysis") integrates a wide range of decision criteria and impact evaluation techniques in a framework that emphasizes and incorporates input from stakeholders in all aspects of the process. Its products are the rankings of the alternative options for each stakeholder using, essentially, expected utility theory. The second part ("deliberation") utilizes the analytical results of the "analysis" and attempts to develop consensus among the stakeholders in a session in which the stakeholders discuss and evaluate the analytical results. This paper deals with the analytical part of the approach and the uncertainty and sensitivity analyses that were carried out in preparation for the deliberative process. The objective of these investigations was to test the robustness of the assessments and to identify possible sources of disagreement among the participating stakeholders, thus providing insights for the subsequent deliberative process. Standard techniques, such as differential analysis, Monte Carlo sampling and a two-dimensional policy region analysis proved sufficient for the task.
Hyde, K M; Maier, H R; Colby, C B
2005-12-01
The choice among alternative water supply sources is generally based on the fundamental objective of maximising the ratio of benefits to costs. There is, however, a need to consider sustainability, the environment and social implications in regional water resources planning, in addition to economics. In order to achieve this, multi-criteria decision analysis (MCDA) techniques can be used. Various sources of uncertainty exist in the application of MCDA methods, including the selection of the MCDA method, elicitation of criteria weights and assignment of criteria performance values. The focus of this paper is on the uncertainty in the criteria weights. Sensitivity analysis can be used to analyse the effects of uncertainties associated with the criteria weights. Two existing sensitivity methods are described in this paper and a new distance-based approach is proposed which overcomes limitations of these methods. The benefits of the proposed approach are the concurrent alteration of the criteria weights, the applicability of the method to a range of MCDA techniques and the identification of the most critical criteria weights. The existing and proposed methods are applied to three case studies and the results indicate that simultaneous consideration of the uncertainty in the criteria weights should be an integral part of the decision making process.
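One way to realize a distance-based weight-sensitivity measure of the kind proposed is to search for the smallest simultaneous perturbation of the criteria weights (renormalised onto the simplex) that reverses the top-ranked alternative. This is a simplified random-search stand-in for the paper's method, with invented names:

```python
import numpy as np

def critical_weight_distance(scores, w0, rng, n_trials=5000, spread=0.5):
    """Smallest observed Euclidean distance from the base weights w0 to a
    weight vector that changes the top-ranked alternative.

    scores : (n_alternatives, n_criteria) performance matrix
    w0     : base criteria weights (summing to 1)
    """
    base_best = np.argmax(scores @ w0)
    best_d = np.inf
    for _ in range(n_trials):
        w = w0 + rng.uniform(-spread, spread, size=w0.size)  # joint perturbation
        w = np.clip(w, 1e-9, None)
        w /= w.sum()                                         # back onto the simplex
        if np.argmax(scores @ w) != base_best:
            best_d = min(best_d, np.linalg.norm(w - w0))
    return best_d
```

A small critical distance flags a decision that is fragile with respect to the elicited weights; comparing distances across criteria identifies the most critical weights.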
NASA Astrophysics Data System (ADS)
Elishakoff, I.; Sarlin, N.
2016-06-01
In this paper we provide a general methodology for the analysis and design of systems involving uncertainties. Available experimental data are enclosed by some geometric figure (triangle, rectangle, ellipse, parallelogram, super ellipse) of minimum area. These areas are then inflated, resorting to the Chebyshev inequality, in order to take into account the forecasted data. The next step consists of evaluating the response of the system when uncertainties are confined to one of the above five suitably inflated geometric figures. This step involves a combined theoretical and computational analysis. We evaluate the maximum response of the system subjected to variation of uncertain parameters in each hypothesized region. The results of triangular, interval, ellipsoidal, parallelogram, and super ellipsoidal calculi are compared with the view of identifying the region that leads to the minimum of the maximum response. That response is identified as a result of the suggested predictive inference. The methodology thus synthesizes the probabilistic notion with each of the five calculi. Using the term "pillar" in the title was inspired by the News Release (2013) on according the Honda Prize to J. Tinsley Oden, stating, among others, that "Dr. Oden refers to computational science as the "third pillar" of scientific inquiry, standing beside theoretical and experimental science. Computational science serves as a new paradigm for acquiring knowledge and informing decisions important to humankind". Analysis of systems with uncertainties necessitates employment of all three pillars. The analysis is based on the assumption that the five shapes are each different conservative estimates of the true bounding region. The smallest of the maximal displacements in the x and y directions (for a 2D system) therefore provides the closest estimate of the true displacements based on the above assumption.
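The Chebyshev inflation step can be illustrated for the simplest bounding figure, an axis-aligned box: by the Chebyshev inequality, a half-width of k standard deviations with k = 1/sqrt(alpha) covers each coordinate with probability at least 1 - alpha, whatever the underlying distribution. The paper's five minimum-area figures are not reproduced here; this is a sketch under that one-shape assumption, with invented names.

```python
import itertools
import numpy as np

def chebyshev_inflated_box(samples, alpha=0.05):
    """Axis-aligned box containing each coordinate with probability >= 1 - alpha
    per axis, by the Chebyshev inequality (k = 1/sqrt(alpha))."""
    mu = samples.mean(axis=0)
    sigma = samples.std(axis=0, ddof=1)
    k = 1.0 / np.sqrt(alpha)
    return mu - k * sigma, mu + k * sigma

def max_response_on_box(lo, hi, response):
    """Maximum of a (monotone-in-each-variable) response over the box corners."""
    corners = itertools.product(*zip(lo, hi))
    return max(response(np.array(c)) for c in corners)
```

Repeating the max-response evaluation for each candidate bounding region and taking the smallest maximum mirrors the comparison of the five calculi described above.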
Feizizadeh, Bakhtiar; Blaschke, Thomas
2014-01-01
GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster–Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster–Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC
pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis
NASA Astrophysics Data System (ADS)
White, J.; Brakefield, L. K.
2015-12-01
The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well-suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source Python module that automates the workflow of null-space Monte Carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a Python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in Python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease-of-use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
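The core step that such a workflow automates can be sketched as a projection onto the Jacobian's null space: random parameter vectors are projected so that, to first order, they do not change the calibrated fit, then added to the calibrated parameters. This is a first-order illustration with invented names, not pyNSMC's actual API.

```python
import numpy as np

def null_space_realizations(J, p_cal, n_real, rng, sigma=1.0):
    """Draw parameter realizations that (to first order) leave the calibrated
    fit unchanged.

    J     : Jacobian of observations w.r.t. parameters, shape (n_obs, n_par)
    p_cal : calibrated parameter vector, shape (n_par,)
    """
    # Columns of V beyond the numerical rank of J span its null space.
    U, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > s.max() * max(J.shape) * np.finfo(float).eps))
    V_null = Vt[rank:].T                       # (n_par, n_par - rank)
    draws = sigma * rng.standard_normal((V_null.shape[1], n_real))
    return p_cal[:, None] + V_null @ draws     # (n_par, n_real)
```

In the full method each realization is then re-calibrated cheaply (the null-space projection is only first-order accurate for nonlinear models), which is exactly the file-shuffling the abstract describes as tedious to do by hand.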
Estimating uncertainty in policy analysis: health effects from inhaled sulfur oxides
Amaral, D.A.L.
1983-01-01
This study presents methods for the incorporation of uncertainty into quantitative analysis of the problem of estimating health risks from coal-fired power plants. Probabilistic long-range models of sulfur material balance and sets of plume trajectories are combined to produce probabilistic estimates of population exposure to sulfur air pollution for the addition of a hypothetical coal-burning power plant in the Ohio River Valley. In another segment, the change in population exposure which might occur if ambient sulfate were to be reduced everywhere in the northeastern United States is calculated. A third case is made up of a set of hypothetical urban and rural scenarios representing typical northeastern situations. Models of health impacts obtained through the elicitation of subjective expert judgment are applied to each of these population exposure estimates. Seven leading experts in the field of sulfur air pollution and health participated, yielding five quantitative models for morbidity and/or mortality effects from human exposure to ambient sulfate. In each case analyzed, the predictions based on probability distributions provided by the experts spanned several orders of magnitude, including some predictions of zero effects and some of up to a few percent of the total mortality. It is concluded that uncertainty about whether sulfate has adverse effects dominates the scientific uncertainty about the atmospheric processes which generate and transport this pollutant.
Flood damage analysis: uncertainties for first floor elevation yielded from LiDAR data
NASA Astrophysics Data System (ADS)
Bodoque, Jose Maria; Aroca-Jimenez, Estefania; Guardiola-Albert, Carolina; Eguibar, Miguel Angel
2016-04-01
The use of high-resolution ground-based light detection and ranging (LiDAR) datasets provides the spatial density and vertical precision needed to obtain highly accurate Digital Elevation Models (DEMs). As a result, the reliability of flood damage analysis has improved significantly, since the accuracy of the hydrodynamic model is increased. Additionally, an important error reduction also takes place in estimating first floor elevation, which is a critical parameter in determining structural and content damages in buildings. However, just like any discrete measurement technique, LiDAR data contain object space ambiguities, especially in urban areas where the presence of buildings and the floodplain creates a highly complex landscape; this is largely corrected by using ancillary information based on breaklines. Here, we provide an uncertainty assessment based on: a) improvement of DEMs to be used in flood damage analysis by adding breaklines as ancillary information; b) geostatistical estimation of errors in DEMs; c) implementation of a 2D hydrodynamic model considering the 500 yr flood return period; and d) determination of first floor elevation uncertainty. The main conclusion of this study is the need to process raw LiDAR data in order to generate efficient, high-quality DEMs that minimize the uncertainty in determining first floor elevation, thereby increasing the reliability of flood damage assessment.
NASA Astrophysics Data System (ADS)
Choi, Sungyeol; Park, Jaeyeong; Hoover, Robert O.; Phongikaroon, Supathorn; Simpson, Michael F.; Kim, Kwang-Rag; Hwang, Il Soon
2011-09-01
This study examines how much the cell potential changes under five differently assumed real anode surface area cases. Determining the real anode surface area is a significant issue to be resolved for precise modeling of molten salt electrorefining. Based on a three-dimensional electrorefining model, calculated cell potentials are compared with an experimental cell potential variation over 80 h of operation of the Mark-IV electrorefiner with driver fuel from the Experimental Breeder Reactor II. We achieved good agreement with the overall trend of the experimental data through appropriate selection of a mode for the real anode surface area, but local inconsistencies between theoretical calculation and experimental observation remain. In addition, the results were validated and compared with two-dimensional results to identify possible uncertainty factors that had to be further considered in a computational electrorefining analysis. These uncertainty factors include material properties, heterogeneous material distribution, surface roughness, and current efficiency. Zirconium's abundance and complex behavior have more impact on uncertainty toward the latter period of electrorefining for a given batch of fuel. The benchmark results indicate that anode materials would be dissolved in both axial and radial directions, at least for low burn-up metallic fuels, after the active liquid sodium bonding was dissolved.
Tahsin, Subrina; Chang, Ni-Bin
2016-02-01
Stormwater wet detention ponds have been a commonly employed best management practice for stormwater management throughout the world for many years. In the past, trophic state index values have been used to evaluate seasonal changes in water quality and rank lakes within a region or between several regions; yet, to date, there is no similar index for stormwater wet detention ponds. This study aimed to develop a new multivariate trophic state index (MTSI) suitable for conducting a rapid eutrophication assessment of stormwater wet detention ponds under uncertainty with respect to three typical physical and chemical properties. Six stormwater wet detention ponds in Florida were selected for demonstration of the new MTSI with respect to total phosphorus (TP), total nitrogen (TN), and Secchi disk depth (SDD) as cognitive assessment metrics, to sense eutrophication potential collectively and inform the environmental impact holistically. Due to the involvement of multiple endogenous variables (i.e., TN, TP, and SDD) in the eutrophication assessment simultaneously under uncertainty, fuzzy synthetic evaluation was applied to first standardize and synchronize the sources of uncertainty in the decision analysis. The ordered probit regression model was then formulated for assessment based on the concept of MTSI with the inputs from the fuzzy synthetic evaluation. The results indicate that severe eutrophication is present during fall, likely due to frequent heavy summer storm events contributing high-nutrient inputs to these six ponds. PMID:26733470
Uncertainty Modeling for Robustness Analysis of Control Upset Prevention and Recovery Systems
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Khong, Thuan H.; Shin, Jong-Yeob; Kwatny, Harry; Chang, Bor-Chin; Balas, Gary J.
2005-01-01
Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. Such systems (developed for failure detection, identification, and reconfiguration, as well as upset recovery) need to be evaluated over broad regions of the flight envelope and under extreme flight conditions, and should include various sources of uncertainty. However, formulation of linear fractional transformation (LFT) models for representing system uncertainty can be very difficult for complex parameter-dependent systems. This paper describes a preliminary LFT modeling software tool which uses a matrix-based computational approach that can be directly applied to parametric uncertainty problems involving multivariate matrix polynomial dependencies. Several examples are presented (including an F-16 at an extreme flight condition, a missile model, and a generic example with numerous cross-product terms), and comparisons are given with other LFT modeling tools that are currently available. The LFT modeling method and preliminary software tool presented in this paper are shown to compare favorably with these methods.
NASA Astrophysics Data System (ADS)
Mesgouez, A.; Buis, S.; Ruy, S.; Lefeuve-Mesgouez, G.
2014-05-01
The determination of the hydraulic properties of heterogeneous soils or porous media remains challenging. In the present study, we focus on determining the effective properties of heterogeneous porous media at the Darcy scale, with an analysis of their uncertainties. As a preliminary step, experimental measurements of the hydraulic properties of each component of the heterogeneous medium are obtained. The properties of the effective medium, representing an equivalent homogeneous material, are determined numerically by simulating water flow in a three-dimensional representation of the heterogeneous medium, under steady-state scenarios and using its component properties. One of the major aspects of this study is to take into account the uncertainties of these properties in the computation and evaluation of the effective properties. This is done using a bootstrap method. Numerical evaporation experiments are conducted both on the heterogeneous and on the effective homogeneous materials to evaluate the effectiveness of the proposed approach. First, the impact of the uncertainties of the component properties on the simulated water matric potential is found to be high for the heterogeneous material configuration. Second, it is shown that the strategy developed herein leads to a reduction of this impact. Finally, the agreement between the mean of the simulations for the two configurations confirms the suitability of the homogenization approach, even in the case of dynamic scenarios. Although it is applied to green roof substrates, a two-component medium composed of bark compost and pozzolan used in the construction of buildings, the methodology proposed in this study is generic.
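The bootstrap treatment of measurement uncertainty can be sketched generically: resample the component-property measurements with replacement, recompute the effective property for each resample, and read a confidence interval off the resulting distribution. Here `estimator` stands in for the (much more expensive) 3-D flow computation; all names are illustrative.

```python
import numpy as np

def bootstrap_ci(measurements, estimator, n_boot, rng, alpha=0.05):
    """Percentile bootstrap confidence interval for an effective property
    computed from repeated component-property measurements."""
    n = len(measurements)
    stats = np.array([estimator(measurements[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])
```

In the study's setting each bootstrap replicate would re-run the homogenization simulation with resampled component properties, so the interval reflects how measurement scatter propagates into the effective medium.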
Analysis of parameter uncertainty of a flow and quality stormwater model.
Dotto, C B S; Deletic, A; Fletcher, T D
2009-01-01
Uncertainty is intrinsic to all monitoring programs and all models. It cannot realistically be eliminated, but it is necessary to understand the sources of uncertainty, and their consequences on models and decisions. The aim of this paper is to evaluate uncertainty in a flow and water quality stormwater model, due to the model parameters and the availability of data for calibration and validation of the flow model. The MUSIC model, widely used in Australian stormwater practice, has been investigated. Frequentist and Bayesian methods were used for calibration and sensitivity analysis, respectively. It was found that out of 13 calibration parameters of the rainfall/runoff model, only two matter (the model results were not sensitive to the other 11). This suggests that the model can be simplified without losing its accuracy. The evaluation of the water quality models proved to be much more difficult. For the specific catchment and model tested, we argue that for rainfall/runoff, 6 months of data for calibration and 6 months of data for validation are required to produce reliable predictions. Further work is needed to make similar recommendations for modelling water quality. PMID:19657167
A novel risk-based analysis for the production system under epistemic uncertainty
NASA Astrophysics Data System (ADS)
Khalaj, Mehran; Khalaj, Fereshteh; Khalaj, Amineh
2013-11-01
Risk analysis of a production system, when actual and appropriate data are not available, leads to wrong predictions of system parameters and wrong decisions. Under uncertainty, there are no established measures for decision making; under epistemic uncertainty in particular, we are confronted by a lack of data, so in calculating the system risk we encounter vagueness and must use methods that remain efficient for decision making. In this research, using the Dempster-Shafer method and a risk assessment diagram, the researchers have developed a better method of calculating tool failure risk. Traditional statistical methods for characterizing and evaluating systems are not always appropriate, especially when enough data are not available. The goal of this research was to present a more modern, applicable method for real-world organizations. The findings were applied in a case study, and an appropriate framework and constraints for tool risk were provided. The research presents a promising concept for calculating production system risk, and its results show that under uncertainty, or in the case of a lack of knowledge, selecting an appropriate method will facilitate the decision-making process.
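Dempster's rule of combination, the core of the Dempster-Shafer method the authors use, can be written compactly: evidence from two sources, expressed as mass functions over subsets of a frame of discernment, is multiplied pairwise, intersected, and renormalised to discard conflicting mass. The frame elements below are illustrative, not from the paper.

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over subsets of a frame.

    Masses are dicts mapping frozenset -> mass (each summing to 1);
    mass assigned to empty intersections (conflict) is renormalised away.
    """
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}
```

Belief in a hypothesis is then the mass on its subsets, and plausibility the mass on all sets intersecting it; the gap between the two expresses exactly the epistemic (lack-of-data) uncertainty the abstract describes.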
Chen, Cao-Cao; Liu, Chun-Lan; Li, Zheng; Wang, Hai-Hua; Zhang, Yan; Wang, Lu
2012-01-01
In order to improve the accuracy of evaluating CH4 emissions from municipal solid waste landfills in Beijing, the FOD model and the Monte Carlo method were employed. Based on local data, national data and expert judgment, the uncertainty of the FOD model was identified and a sensitivity analysis of its parameters was performed, quantifying the effect of each parameter on the model output. The results showed that the 95% probability range of CH4 emissions from landfills in Beijing was (11.8-19.76) x 10^4 t/a. The mean value was 15.58 x 10^4 t/a, with an uncertainty range of -24.26% to 26.83%. Among all the parameters, MCF (after 2000) showed the greatest impact on landfill CH4 emissions in 2008, contributing 41.4% of the uncertainty in the emission result. This research improves the accuracy and quality of CH4 emission assessment for municipal solid waste landfills in Beijing and provides a scientific basis for improving the landfill greenhouse gas inventory and data collection.
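The FOD (first-order decay) model underlying such estimates treats the degradable carbon in each year's deposited waste as decaying exponentially, so the CH4 generated in year t sums contributions from all earlier deposits. A minimal sketch in the style of the IPCC waste model; the parameter defaults are illustrative, not the Beijing-specific values the authors used.

```python
import math

def fod_ch4(waste_by_year, t, k=0.09, DOC=0.15, DOCf=0.5, MCF=1.0,
            F=0.5, ox=0.1):
    """First-order-decay CH4 generation (tonnes) in year t.

    waste_by_year : {deposit_year: tonnes of waste}
    k    : decay rate constant (1/yr)        DOC  : degradable organic carbon
    DOCf : fraction of DOC decomposing       MCF  : methane correction factor
    F    : CH4 fraction of landfill gas      ox   : oxidation factor
    """
    ch4 = 0.0
    for x, W in waste_by_year.items():
        if x >= t:
            continue
        ddocm = W * DOC * DOCf * MCF                       # decomposable carbon deposited
        decomposed = ddocm * (math.exp(-k * (t - x - 1))   # carbon decaying
                              - math.exp(-k * (t - x)))    # during year t
        ch4 += decomposed * F * 16.0 / 12.0                # carbon mass -> CH4 mass
    return ch4 * (1.0 - ox)
```

A Monte Carlo uncertainty analysis of the kind described draws k, DOC, DOCf, MCF, F and ox from their distributions and re-evaluates this sum, yielding the quoted probability range for annual emissions.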
Li, W B; Hoeschen, C
2010-01-01
Mathematical models for the kinetics of radiopharmaceuticals in humans were developed and are used to estimate the radiation absorbed dose for patients in nuclear medicine by the International Commission on Radiological Protection and the Medical Internal Radiation Dose (MIRD) Committee. However, because the residence times used were derived from different subjects, some even with different ethnic backgrounds, a large variation in the model parameters propagates into a high uncertainty in the dose estimation. In this work, a method was developed for analysing the uncertainty and sensitivity of biokinetic models that are used to calculate the residence times. The biokinetic model of (18)F-FDG (FDG) developed by the MIRD Committee was analysed with this method. The sources of uncertainty of all model parameters were evaluated based on the experiments. The Latin hypercube sampling technique was used to sample the parameters for model input. Kinetic modelling of FDG in humans was performed. Sensitivity of model parameters was indicated by combining the model input and output, using regression and partial correlation analysis. The transfer rate parameter from plasma to the fast compartment of other tissues is the parameter with the greatest influence on the residence time of plasma. Optimisation of biokinetic data acquisition in clinical practice, exploiting the parameter sensitivities obtained in this study, is discussed. PMID:20185457
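Regression-based sensitivity of sampled inputs against model output, as described, can be illustrated with standardised regression coefficients, a common companion to the partial correlation analysis the authors use: each coefficient measures how many output standard deviations follow from one standard deviation of that input. A generic sketch with invented data, not the FDG model itself:

```python
import numpy as np

def src_sensitivity(X, y):
    """Standardised regression coefficients: linear-model sensitivity indices
    of each input column of X with respect to the output vector y."""
    Xs = (X - X.mean(0)) / X.std(0, ddof=1)   # standardise inputs
    ys = (y - y.mean()) / y.std(ddof=1)       # standardise output
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta
```

Applied to Latin-hypercube-sampled transfer rates and the resulting residence times, the largest |coefficient| singles out the dominant parameter, here found to be the plasma-to-fast-tissue transfer rate.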
Jardine, Timothy D; Kidd, Karen A; Fisk, Aaron T
2006-12-15
Stable isotope analysis (SIA) has become a powerful tool for ecotoxicologists to study dietary exposure and biomagnification of contaminants in wild animal populations. The use of SIA in ecotoxicology continues to expand and, while much more is known about the mechanisms driving patterns of isotopic ratios in consumers, there remain several considerations or sources of uncertainty that can influence interpretation of data from field studies. We outline current uses of SIA in ecotoxicology, including estimating the importance of dietary sources of carbon and their application in biomagnification studies, and we present six main considerations or sources of uncertainty associated with the approach: (1) unequal diet-tissue stable isotope fractionation among species, (2) variable diet-tissue stable isotope fractionation within a given species, (3) different stable isotope ratios in different tissues of the animal, (4) fluctuating baseline stable isotope ratios across systems, (5) the presence of true omnivores, and (6) movement of animals and nutrients between food webs. Since these considerations or sources of uncertainty are difficult to assess in field studies, we advocate that researchers consider the following in designing ecotoxicological research and interpreting results: assess and utilize variation in stable isotope diet-tissue fractionation among animal groups available in the literature; determine stable isotope ratios in multiple tissues to provide a temporal assessment of feeding; adequately characterize baseline isotope ratios; utilize stomach contents when possible; and assess and integrate life history of study animals in a system.
Little, M.P.; Muirhead, C.R.; Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Harper, F.T.; Hora, S.C.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA late health effects models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the expert panel on late health effects, (4) short biographies of the experts, and (5) the aggregated results of their responses.
Uncertainty analysis for an updated dose assessment for a US nuclear test site: Bikini Atoll
Bogen, K.T.; Conrado, C.L.; Robison, W.L.
1995-11-01
A detailed analysis of uncertainty and interindividual variability in estimated doses was conducted for a rehabilitation scenario for Bikini Island at Bikini Atoll, in which the top 40 cm of soil would be removed in the housing and village area and the rest of the island would be treated with potassium fertilizer, prior to an assumed resettlement date of 1999. Predicted doses were considered for the following fallout-related exposure pathways: ingested Cesium-137 and Strontium-90, external gamma exposure, and inhalation and ingestion of Americium-241 + Plutonium-239+240. Two dietary scenarios were considered: (1) imported foods are available (IA), and (2) imported foods are unavailable and only local foods are consumed (IUA). Calculations of uncertainty in estimated population-average dose showed that after approximately 5 y of residence on Bikini, the upper and lower 95% confidence limits on this dose are approximately 2-fold higher and lower than its central value, respectively (under both IA and IUA assumptions). Corresponding calculations of interindividual variability showed that after approximately 5 y of residence, the upper and lower 95% confidence limits with respect to interindividual variability in dose are likewise approximately 2-fold higher and lower than its expected value (under both IA and IUA assumptions). For reference, the expected values of population-average dose at age 70 were estimated to be 1.6 and 5.2 cSv under the IA and IUA dietary assumptions, respectively. Assuming that 200 Bikini resettlers would be exposed to local foods (under both IA and IUA assumptions), the maximum 1-y dose received by any Bikini resident is most likely to be approximately 2 and 8 mSv under the IA and IUA assumptions, respectively.
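The reported "2-fold higher and lower" 95% limits are what a lognormal dose distribution would produce; a quick numerical check of that reading (the lognormal form is our assumption, the 1.6 cSv IA central value is from the abstract) is:

```python
import numpy as np

# If the 95% limits sit a factor of 2 above and below the central (median)
# dose, a lognormal model implies sigma_ln = ln(2) / 1.96 in log space.
median_dose = 1.6                        # cSv, reported IA population-average
sigma_ln = np.log(2.0) / 1.959964

rng = np.random.default_rng(1)
doses = median_dose * np.exp(sigma_ln * rng.standard_normal(200_000))

lo, hi = np.percentile(doses, [2.5, 97.5])
print(round(lo, 2), round(hi, 2))        # near 0.8 and 3.2 cSv (1.6 / 2, 1.6 * 2)
```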
NASA Astrophysics Data System (ADS)
You, Jiong; Pei, Zhiyuan
2015-01-01
With the development of remote sensing technology, its applications in agricultural monitoring systems, crop mapping accuracy, and spatial distribution are increasingly explored by administrators and users. Uncertainty in crop mapping is profoundly affected by the spatial pattern of spectral reflectance values obtained from the applied remote sensing data. Errors in remotely sensed crop cover information, and their propagation into derivative products, need to be quantified and handled correctly. This study therefore discusses methods of error modeling for uncertainty characterization in crop mapping using GF-1 multispectral imagery. An error modeling framework based on geostatistics is proposed, which introduces the sequential Gaussian simulation algorithm to explore the relationship between classification errors and the spectral signature of the remote sensing data source. On this basis, a misclassification probability model is developed that produces a spatially explicit classification error probability surface for a crop map, realizing the uncertainty characterization for crop mapping. In this process, trend surface analysis was carried out to generate a spatially varying mean response and the corresponding spatially varying residual response for the spectral bands of the GF-1 multispectral imagery. Variogram models were employed to measure the spatial dependence in the spectral bands and the derived misclassification probability surfaces. Simulated spectral data and classification results were quantitatively analyzed. Through experiments using data sets from a region of low rolling country in the Yangtze River valley, it was found that GF-1 multispectral imagery can be used for crop mapping with good overall performance, the proposed error modeling framework can quantify the uncertainty in crop mapping, and the misclassification probability model can summarize the spatial variation in map accuracy and is helpful for
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; Turner, Adrian Keith; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
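The variance-based scheme described above can be illustrated with a hand-rolled Saltelli estimator of first-order Sobol' indices; the three-parameter linear "model" below is a stand-in for the CICE emulator, chosen so its true variance shares (16:4:1, i.e. S ≈ 0.76, 0.19, 0.05) are known.

```python
import numpy as np
from scipy.stats import qmc

def sobol_first_order(model, d, n=4096, seed=0):
    """First-order Sobol' indices via the Saltelli estimator (a sketch)."""
    ab = qmc.Sobol(d=2 * d, scramble=True, seed=seed).random(n)
    A, B = ab[:, :d], ab[:, d:]                  # two independent sample sets
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # swap in column i from B
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

# Invented linear model with variance shares 16:4:1 (true S ~ 0.76, 0.19, 0.05).
model = lambda x: 4.0 * x[:, 0] + 2.0 * x[:, 1] + 1.0 * x[:, 2]
S = sobol_first_order(model, d=3)
print(np.round(S, 2))
```

Because all 39 CICE parameters are perturbed together, interactions are captured, which one-at-a-time sweeps miss by construction.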
Gao, D.; Milford, J.B.; Stockwell, W.R.
1996-04-01
This report describes the results of a detailed analysis of uncertainties in the RADM2 chemical mechanism, which was developed by Stockwell et al. (1990) for use in urban and regional scale models of the formation and transport of ozone and other photochemical air pollutants. The uncertainty analysis was conducted for box model simulations of chemical conditions representing summertime smog episodes in polluted rural and urban areas. Estimated uncertainties in the rate parameters and product yields of the mechanism were propagated through the simulations using Monte Carlo analysis with a Latin Hypercube Sampling scheme. Uncertainty estimates for the mechanism parameters were compiled from published reviews, supplemented as necessary by original estimates. Correlations between parameters were considered in the analysis as appropriate.
Perspectives Gained in an Evaluation of Uncertainty, Sensitivity, and Decision Analysis Software
Davis, F.J.; Helton, J.C.
1999-02-24
The following software packages for uncertainty, sensitivity, and decision analysis were reviewed and also tested with several simple analysis problems: Crystal Ball, RiskQ, SUSA-PC, Analytica, PRISM, Ithink, Stella, LHS, STEPWISE, and JMP. Results from the review and test problems are presented. The study resulted in the recognition of the importance of four considerations in the selection of a software package: (1) the availability of an appropriate selection of distributions, (2) the ease with which data flows through the input sampling, model evaluation, and output analysis process, (3) the type of models that can be incorporated into the analysis process, and (4) the level of confidence in the software modeling and results.
NASA Astrophysics Data System (ADS)
Scott, M. J.; Daly, D.; McJeon, H.; Zhou, Y.; Clarke, L.; Rice, J.; Whitney, P.; Kim, S.
2012-12-01
Residential and commercial buildings are a major source of energy consumption and carbon dioxide emissions in the United States, accounting for 41% of energy consumption and 40% of carbon emissions in 2011. Integrated assessment models (IAMs) historically have been used to estimate the impact of energy consumption on greenhouse gas emissions at the national and international level. Increasingly they are being asked to evaluate mitigation and adaptation policies that have a subnational dimension. In the United States, for example, building energy codes are adopted and enforced at the state and local level. Adoption of more efficient appliances and building equipment is sometimes directed or actively promoted by subnational governmental entities for mitigation or adaptation to climate change. The presentation reports on new example results from the Global Change Assessment Model (GCAM) IAM, one of a flexibly-coupled suite of models of human and earth system interactions known as the integrated Regional Earth System Model (iRESM) system. iRESM can evaluate subnational climate policy in the context of the important uncertainties represented by national policy and the earth system. We have added a 50-state detailed U.S. building energy demand capability to GCAM that is sensitive to national climate policy, technology, regional population and economic growth, and climate. We are currently using GCAM in a prototype stakeholder-driven uncertainty characterization process to evaluate regional climate mitigation and adaptation options in a 14-state pilot region in the U.S. upper Midwest. The stakeholder-driven decision process involves several steps, beginning with identifying policy alternatives and decision criteria based on stakeholder outreach, identifying relevant potential uncertainties, then performing sensitivity analysis, characterizing the key uncertainties from the sensitivity analysis, and propagating and quantifying their impact on the relevant decisions. In the
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, Mattia; Pardini, Federica; Spanu, Antonio; Neri, Augusto; Vittoria Salvetti, Maria
2015-04-01
Volcanic ash clouds represent a major hazard for populations living near volcanic centers, producing a risk for humans and a potential threat to crops, ground infrastructure, and aviation traffic. Lagrangian particle dispersal models are commonly used for tracking ash particles emitted from volcanic plumes and transported under the action of atmospheric wind fields. In this work, we present the results of an uncertainty propagation analysis applied to volcanic ash dispersal from weak plumes, with specific focus on the uncertainties related to the grain-size distribution of the mixture. To this aim, the Eulerian fully compressible mesoscale non-hydrostatic model WRF was used to generate the driving wind, representative of the atmospheric conditions occurring during the event of November 24, 2006 at Mt. Etna. Then, the Lagrangian particle model LPAC (de' Michieli Vitturi et al., JGR 2010) was used to simulate the transport of mass particles under the action of atmospheric conditions. The particle motion equations were derived by expressing the Lagrangian particle acceleration as the sum of the forces acting along its trajectory, with drag forces calculated as a function of particle diameter, density, shape and Reynolds number. The simulations were representative of weak plume events of Mt. Etna and aimed to quantify the effect on the dispersal process of the uncertainty in the particle sphericity and in the mean and variance of a log-normal distribution function describing the grain-size of ash particles released from the eruptive column. In order to analyze the sensitivity of particle dispersal to these uncertain parameters with a reasonable number of simulations, and therefore with affordable computational costs, response surfaces in the parameter space were built by using the generalized polynomial chaos technique. The uncertainty analysis allowed us to quantify the most probable values, and the associated probability density functions, of the number of particles as well as of the mean and
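A minimal sketch in the spirit of the polynomial-chaos response-surface step: fit a total-degree-2 polynomial surrogate to a toy two-input "dispersal" model (the model and its inputs are invented, not LPAC), then sample the cheap surrogate to recover output statistics.

```python
import numpy as np

# Toy two-input "dispersal" response (invented, not the LPAC model):
# inputs are a sphericity-like factor s and a grain-size factor m on [-1, 1].
def response(s, m):
    return 1.0 + 0.5 * s - 0.3 * m + 0.2 * s * m

def basis(s, m):
    # Polynomial basis up to total degree 2 (a monomial stand-in for an
    # orthogonal polynomial chaos basis).
    return np.column_stack([np.ones_like(s), s, m, s * m, s**2, m**2])

rng = np.random.default_rng(0)
s, m = rng.uniform(-1, 1, 400), rng.uniform(-1, 1, 400)
coef, *_ = np.linalg.lstsq(basis(s, m), response(s, m), rcond=None)

# The cheap surrogate is then sampled heavily to estimate output statistics.
ss, mm = rng.uniform(-1, 1, 100_000), rng.uniform(-1, 1, 100_000)
ys = basis(ss, mm) @ coef
print(round(ys.mean(), 2))   # true mean of the response is 1.0
```

The point of the surrogate is exactly the one made in the abstract: a handful of expensive runs buys an inexpensive response surface that can be Monte Carlo sampled at will.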
A Grounded Analysis of Career Uncertainty Perceived by College Students in Taiwan
ERIC Educational Resources Information Center
Tien, Hsiu-Lan Shelley; Lin, Chia-Huei; Chen, Shu-Chi
2005-01-01
The authors examined career-related uncertainties perceived by college students in Taiwan. Five hundred thirty-two Taiwanese students responded to a free-response instrument containing 3 questions related to career uncertainties: (1) the sources of career uncertainty; (2) the experiences at the moment of feeling uncertainty; and (3) coping…
Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.
1995-01-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the food pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 87 imprecisely known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, milk growing season dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, area-dependent cost, crop disposal cost, milk disposal cost, condemnation area, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: fraction of cesium deposition on grain fields that is retained on plant surfaces and transferred directly to grain, maximum allowable ground concentrations of Cs-137 and Sr-90 for production of crops, ground concentrations of Cs-134, Cs-137 and I-131 at which the disposal of milk will be initiated due to accidents that occur during the growing season, ground concentrations of Cs-134, I-131 and Sr-90 at which the disposal of crops will be initiated due to accidents that occur during the growing season, rate of depletion of Cs-137 and Sr-90 from the root zone, transfer of Sr-90 from soil to legumes, transfer of Cs-137 from soil to pasture, transfer of cesium from animal feed to meat, and the transfer of cesium, iodine and strontium from animal feed to milk.
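The partial correlation measure used in these MACCS studies can be computed from regression residuals; the toy "consequence" model below (three uniform inputs, one inert) is an invented illustration.

```python
import numpy as np

def partial_corr(x, y, Z):
    """Partial correlation of x and y controlling for the columns of Z,
    computed from the residuals of linear fits (a standard construction)."""
    A = np.column_stack([np.ones(len(x)), Z])
    rx = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Toy consequence model: the "dose" is driven by p1 and p2; p3 is inert.
rng = np.random.default_rng(0)
P = rng.uniform(size=(500, 3))
dose = 3.0 * P[:, 0] + 1.0 * P[:, 1] + 0.1 * rng.standard_normal(500)

others = lambda i: P[:, [j for j in range(3) if j != i]]
pcc = [partial_corr(P[:, i], dose, others(i)) for i in range(3)]
print(np.round(pcc, 2))   # p1 strongest, p3 near zero
```

Ranking inputs by |PCC| is how such studies single out the "dominant contributors to uncertainty" from dozens of imprecisely known variables.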
Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.
1995-01-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the chronic exposure pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 75 imprecisely known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, water ingestion dose, milk growing season dose, long-term groundshine dose, long-term inhalation dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, total latent cancer fatalities, area-dependent cost, crop disposal cost, milk disposal cost, population-dependent cost, total economic cost, condemnation area, condemnation population, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: dry deposition velocity, transfer of cesium from animal feed to milk, transfer of cesium from animal feed to meat, ground concentration of Cs-134 at which the disposal of milk products will be initiated, transfer of Sr-90 from soil to legumes, maximum allowable ground concentration of Sr-90 for production of crops, fraction of cesium entering surface water that is consumed in drinking water, groundshine shielding factor, scale factor defining resuspension, dose reduction associated with decontamination, and ground concentration of I-131 at which disposal of crops will be initiated due to accidents that occur during the growing season.
[Uncertainty analysis of groundwater protection and control zoning in Beijing plain].
Lu, Yan; He, Jiang-Tao; Wang, Jun-Jie; Liu, Li-Ya; Zhang, Xiao-Liang
2012-09-01
Groundwater pollution prevention mapping is important for groundwater protection, pollution prevention, and effective management. A mapping method was built by combining groundwater pollution risk assessment, groundwater value, and wellhead protection area zoning. To make the method more accurate, two series of uncertainty analyses were performed and discussed: one varied the weights of the toxicity, mobility, and degradation of pollutants, and the other varied the weights of groundwater pollution risk, groundwater value, and wellhead protection area zoning. The results showed that the weights of groundwater pollution risk, groundwater value, and wellhead protection area zoning were more sensitive than the weights of toxicity, mobility, and degradation of pollutants.
The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and
Parameter Estimation (UA/SA/PE API) (also known as Calibration, Optimization and Sensitivity and Uncertainty (CUSO)) was developed in a joint effort between several members of both ...
How to assess the Efficiency and "Uncertainty" of Global Sensitivity Analysis?
NASA Astrophysics Data System (ADS)
Haghnegahdar, Amin; Razavi, Saman
2016-04-01
Sensitivity analysis (SA) is an important paradigm for understanding model behavior, characterizing uncertainty, and improving model calibration. Conventional "global" SA (GSA) approaches are rooted in different philosophies, resulting in different, and sometimes conflicting or counter-intuitive, assessments of sensitivity. Moreover, most global sensitivity techniques are highly computationally demanding if they are to generate robust and stable sensitivity metrics over the entire model response surface. Accordingly, a novel sensitivity analysis method called Variogram Analysis of Response Surfaces (VARS) is introduced to overcome these issues. VARS uses the variogram concept to efficiently provide a comprehensive assessment of global sensitivity across a range of scales within the parameter space. Based on the VARS principles, in this study we present ideas for assessing (1) the efficiency of GSA algorithms and (2) the level of confidence we can assign to a sensitivity assessment. We use multiple hydrological models with different levels of complexity to illustrate these ideas.
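The directional variogram that VARS builds on, gamma(h) = 0.5 E[(y(x+h) − y(x))^2] along a parameter axis, can be estimated directly; the one-parameter sine response below is an invented toy, not one of the hydrological models.

```python
import numpy as np

def variogram_1d(f, h_values, n=2000, seed=0):
    """Estimate gamma(h) = 0.5 * E[(f(x + h) - f(x))^2] along one axis."""
    rng = np.random.default_rng(seed)
    gam = []
    for h in h_values:
        x = rng.uniform(0, 1 - h, n)        # keep x + h inside [0, 1]
        gam.append(0.5 * np.mean((f(x + h) - f(x)) ** 2))
    return np.array(gam)

f = lambda x: np.sin(2 * np.pi * x)         # invented one-parameter response
g = variogram_1d(f, [0.01, 0.05, 0.1])
print(np.round(g, 3))                       # gamma grows with the scale h
```

Reading gamma across several h values is what gives VARS its multi-scale view: small h probes local roughness, large h probes broad variation.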
NASA Astrophysics Data System (ADS)
Rose, K.; Bauer, J. R.; Baker, D. V.
2015-12-01
As big data computing capabilities are increasingly paired with spatial analytical tools and approaches, there is a need to ensure that uncertainty associated with the datasets used in these analyses is adequately incorporated and portrayed in results. Often the products of spatial analyses, big data and otherwise, are developed using discontinuous, sparse, and often point-driven data to represent continuous phenomena, and results are generally presented without clear explanations of the uncertainty associated with the interpolated values. The Variable Grid Method (VGM) offers users a flexible approach for analyses where there is a need to study, evaluate, and analyze spatial trends and patterns while maintaining connection to, and communicating, the uncertainty in the underlying spatial datasets. The VGM outputs a simultaneous visualization of the spatial data analyses and a quantification of the underlying uncertainties, which can be calculated from sample density, sample variance, interpolation error, or uncertainty derived from multiple simulations. In this presentation we show how we are utilizing Hadoop to store and perform spatial analysis through the development of custom Spark and MapReduce applications that incorporate ESRI Hadoop libraries. The team will present custom big-data geospatial applications that run on the Hadoop cluster and integrate ESRI ArcMap with the team's probabilistic VGM approach. The VGM-Hadoop tool has been built as a multi-step MapReduce application running on the Hadoop cluster for the purpose of data reduction, accomplished by generating multi-resolution, non-overlapping, attributed topology that is then further processed using ESRI's geostatistical analyst to convey a probabilistic model of a chosen study region. Finally, we will share our approach for implementation of data reduction and topology generation
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; Ilie, Marcel; Shallhorn, Paul A.
2014-01-01
Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically, using inputs and the Student-t distribution. The approach is compared to an exact analytical solution for fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms, coupled with the Student-t distribution, can encompass the exact solution.
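Independent of the oscillatory-input treatment in the paper, the Student-t machinery itself is standard; a sketch of a two-sided 95% interval from a handful of repeated CFD outputs (the numbers are made up) is:

```python
import numpy as np
from scipy import stats

# Two-sided 95% uncertainty band for a small sample of repeated CFD runs
# (the values are invented; the t-interval is the point of the sketch).
runs = np.array([12.1, 11.8, 12.4, 12.0, 11.9])   # e.g., pressure drop, Pa
n = len(runs)
mean, sem = runs.mean(), runs.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)             # ~2.776 for 4 dof
lo, hi = mean - t_crit * sem, mean + t_crit * sem
print(round(lo, 2), round(hi, 2))                 # 11.75 12.33
```

The t-distribution, rather than the normal, is what accounts for the few samples available when each sample is an expensive CFD run.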
NASA Astrophysics Data System (ADS)
Lin, Z.; Radcliffe, D. E.; Doherty, J.
2004-12-01
monthly flow produced a very good fit to the measured data. Nash-Sutcliffe coefficients for daily and monthly flow over the calibration period were 0.60 and 0.86, respectively; over the validation period they were 0.61 and 0.87. Regardless of the level of model-to-measurement fit, however, nonuniqueness of the optimal parameter values makes uncertainty analysis necessary for model predictions. The nonlinear prediction uncertainty analysis showed that caution must be exercised when using the SWAT model to predict instantaneous peak flows. The PEST (Parameter Estimation) free software was used to conduct the two-stage automatic calibration and prediction uncertainty analysis of the SWAT model.
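The Nash-Sutcliffe coefficient quoted above is straightforward to compute; the flow series below are invented for illustration.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model does
    no better than the observed mean, and negative values are worse."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / \
                 np.sum((observed - observed.mean()) ** 2)

obs = np.array([3.0, 5.0, 4.0, 6.0, 2.0])     # invented monthly flows
sim = np.array([2.8, 5.2, 4.1, 5.7, 2.4])
print(round(nse(obs, sim), 2))                # 0.97
```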
Transparent tools for uncertainty analysis in high level waste disposal facilities safety
Lemos, Francisco Luiz de; Helmuth, Karl-Heinz; Sullivan, Terry
2007-07-01
In this paper some results are presented from the further development of a technical cooperation project, initiated in 2004, between CDTN/CNEN, the Brazilian National Nuclear Energy Commission, and STUK, the Finnish Radiation and Nuclear Safety Authority. The objective of this project is to study applications of fuzzy logic and artificial intelligence methods to uncertainty analysis in the safety assessment of high-level waste disposal facilities. Uncertainty analysis is an essential part of the study of the complex interactions of the features, events, and processes that will affect the performance of the HLW disposal system over thousands of years in the future. Very often the development of conceptual and computational models requires simplifications and the selection of overly conservative parameters, which can lead to unrealistic results. These results can mask the existing uncertainties and consequently be an obstacle to a better understanding of the natural processes. A correct evaluation of uncertainties and their role in data interpretation is an important step toward improving confidence in the calculations and public acceptance. This study focuses on dissolution (source), solubility, and sorption (sink) as key processes for determining the release and migration of radionuclides. These factors are affected by a number of parameters that characterize the near and far fields, such as pH, temperature, redox conditions, and other groundwater properties. On the other hand, these parameters are themselves consequences of other processes and conditions, such as water-rock interaction and pH and redox buffering. Fuzzy logic tools have proved well suited to interpreting complex, and sometimes conflicting, data. For example, although some parameters, such as pH and carbonate, are treated as independent, they influence each other and the solubility. The technique of fuzzy cognitive mapping is used for analysis of
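A fuzzy cognitive map of the kind mentioned at the end of this abstract iterates concept activations through a signed weight matrix; the three concepts and all weights below are invented for illustration.

```python
import numpy as np

# Toy fuzzy cognitive map over three coupled concepts (pH, carbonate,
# solubility); the weight matrix is invented, not taken from the study.
W = np.array([[0.0, 0.6, -0.4],    # pH raises carbonate, lowers solubility
              [0.5, 0.0, -0.3],    # carbonate feeds back on pH, solubility
              [0.0, 0.0,  0.0]])   # solubility is a sink concept here
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

state = np.array([0.8, 0.3, 0.5])  # initial activations in [0, 1]
for _ in range(50):                # iterate toward a fixed point
    state = sigmoid(state @ W + state)
print(np.round(state, 2))
```

The fixed-point activations give a qualitative reading of how strongly each concept ends up expressed once the mutual influences (e.g., pH on carbonate on solubility) have played out.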
Sensitivity Analysis and Uncertainty Propagation in a General-Purpose Thermal Analysis Code
Blackwell, Bennie F.; Dowding, Kevin J.
1999-08-04
Methods are discussed for computing the sensitivity of field variables to changes in material properties and initial/boundary condition parameters for heat transfer problems. The method we focus on is termed the ''Sensitivity Equation Method'' (SEM). It involves deriving field equations for sensitivity coefficients by differentiating the original field equations with respect to the parameters of interest and numerically solving the resulting sensitivity field equations. Uncertainties in the model parameters are then propagated through the computational model using results derived from first-order perturbation theory; this technique is identical to the methodology typically used to propagate experimental uncertainty. Numerical results are presented for the design of an experiment to estimate the thermal conductivity of stainless steel using transient temperature measurements made on prototypical hardware of a companion contact conductance experiment. Comments are made relative to extending the SEM to conjugate heat transfer problems.
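The propagation step described above, var(y) ≈ Σ_i (∂y/∂p_i)² var(p_i), can be sketched with finite-difference sensitivities standing in for the SEM's sensitivity field equations; the conduction-flux toy model and its uncertainties are invented.

```python
import numpy as np

# First-order uncertainty propagation: var(y) ~ sum_i (dy/dp_i)^2 var(p_i).
# Here the sensitivities come from central finite differences; the SEM
# derives them from sensitivity field equations, but the propagation step
# is the same.
def propagate(f, p, sigma, h=1e-6):
    p = np.asarray(p, float)
    grads = np.empty_like(p)
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = h
        grads[i] = (f(p + dp) - f(p - dp)) / (2 * h)
    return np.sqrt(np.sum((grads * np.asarray(sigma)) ** 2))

# Invented toy: steady conduction flux q = k * dT / L through a slab.
f = lambda p: p[0] * p[1] / p[2]                  # p = (k, dT, L)
sigma_q = propagate(f, p=[15.0, 50.0, 0.1], sigma=[0.5, 1.0, 0.002])
print(round(sigma_q, 1))                          # 327.9
```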
Identifying the Potential Loss of Monitoring Wells Using an Uncertainty Analysis
Freedman, Vicky L.; Waichler, Scott R.; Cole, Charles R.; Vermeul, Vince R.; Bergeron, Marcel P.
2005-11-01
From the mid-1940s through the 1980s, large volumes of wastewater were discharged at the Hanford Site in southeastern Washington State, causing a large-scale rise (in excess of 20 m) in the water table. When wastewater discharges ceased in 1988, groundwater mounds began to dissipate. This caused a large number of wells to go dry and has made it difficult to monitor contaminant plume migration. To identify the wells that could potentially go dry, a first-order uncertainty analysis was performed using a three-dimensional finite element code (CFEST) coupled with UCODE, a nonlinear parameter estimation code. The analysis was conducted in four steps. First, key parameter values were identified by calibrating to historical hydraulic head data. Second, the model was tested for linearity, a strict requirement for representing output uncertainty. Third, results from the calibration period were used to verify model predictions by comparing monitoring wells' wet/dry status with field data. In the final step, predictions on the number and locations of dry wells were made through the year 2048. A non-physically based model that extrapolated trends at each individual well was also tested as a predictor of a well's wet/dry status. Results demonstrated that when uncertainty in both parameter estimates and measurement error was considered, the CFEST-based model successfully predicted the majority of dry wells, outperforming the trend model. Predictions made through the year 2048 indicated that approximately 50% of the wells in the monitoring well network are likely to go dry, which can aid in decisions for their replacement.
Fission Spectrum Related Uncertainties
G. Aliberti; I. Kodeli; G. Palmiotti; M. Salvatores
2007-10-01
The paper presents a preliminary analysis of potential uncertainties in fission spectrum data. Consistent results are shown for a reference fast reactor design configuration and for experimental thermal configurations. However, the results obtained indicate the need for further analysis, in particular in terms of fission spectrum uncertainty data assessment.
NASA Astrophysics Data System (ADS)
Bacon, D. H.; Keating, E. H.; Viswanathan, H. S.; Dai, Z.
2011-12-01
Accurate prediction of the impact of leaking CO2 on groundwater quality is limited by the complexity of subsurface aquifers and the geochemical reactions that control drinking water compositions. As a result, there is high uncertainty associated with any predictions, which hampers monitoring plans, interpretation of monitoring results, and mitigation plans for a given site. Many physical and geochemical characteristics will dictate a drinking water aquifer's response to a CO2 leak. As part of the National Risk Assessment Program (NRAP), funded by the U.S. Department of Energy, scientists at Pacific Northwest National Laboratory (PNNL) and Los Alamos National Laboratory (LANL) have collaborated on the development of a 3D heterogeneous model of the Edwards Aquifer in Texas to examine the impacts of CO2 leakage into an unconfined, carbonate aquifer. Using the same base-case model, LANL has focused on uncertainty quantification (UQ) of the aquifer's hydraulic properties, whereas PNNL has examined the impact of uncertainty related to geochemical parameters. This abstract describes PNNL's work on geochemical UQ. The uncertainty analysis looks at the impact on several model outputs, including the CO2 leakage rate from the water table, the mean pH value, and the pH < 6.5 volume fraction. The uncertain parameters are the leakage rate into the aquifer, the composition of the aquifer limestone, and the dissolution/precipitation rate for the limestone, which ranges over two orders of magnitude from laboratory-measured values for calcite to those for disordered dolomite. We also examine the impact on drinking water quality, specifically TDS, from leakage of brine from the underlying formation, forced upwards by increased pressure due to CO2 injection. To conduct these simulations we use STOMP-CO2-R, a multiphase flow simulator coupled with the reactive transport module ECKEChem, developed at PNNL to simulate CO2 sequestration in deep saline formations and the associated
NASA Astrophysics Data System (ADS)
Adams, R.; Costelloe, J. F.; Western, A. W.; George, B.
2013-10-01
An improved understanding of water balances of rivers is fundamental in water resource management. Effective use of a water balance approach requires thorough identification of sources of uncertainty around all terms in the analysis and can benefit from additional, independent information that can be used to interpret the accuracy of the residual term of a water balance. We use a Monte Carlo approach to estimate a longitudinal river channel water balance and to identify its sources of uncertainty for a regulated river in south-eastern Australia, assuming that the residual term of this water balance represents fluxes between groundwater and the river. Additional information from short term monitoring of ungauged tributaries and groundwater heads is used to further test our confidence in the estimates of error and variance for the major components of this water balance. We identify the following conclusions from the water balance analysis. First, improved identification of the major sources of error in consecutive reaches of a catchment can be used to support monitoring infrastructure design to best reduce the largest sources of error in a water balance. Second, estimation of ungauged inflow using rainfall-runoff modelling is sensitive to the representativeness of available gauged data in characterising the flow regime of sub-catchments along a perennial to intermittent continuum. Lastly, comparison of temporal variability of stream-groundwater head difference data and a residual water balance term provides an independent means of assessing the assumption that the residual term represents net stream-groundwater fluxes.
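The Monte Carlo water balance approach described above can be sketched in a few lines, with the residual read as the net stream-groundwater flux. All distributions, means, and error magnitudes below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # Monte Carlo realizations

# Illustrative reach water balance terms (ML/day); means and errors assumed.
q_in   = rng.normal(100.0, 5.0, n)   # upstream gauged inflow
q_out  = rng.normal(90.0, 5.0, n)    # downstream gauged outflow
q_trib = rng.normal(4.0, 2.0, n)     # modelled ungauged tributary inflow
evap   = rng.normal(1.0, 0.2, n)     # open-water evaporation loss

# Residual term, interpreted as the net stream-groundwater flux.
residual = q_in + q_trib - q_out - evap

print(f"mean residual = {residual.mean():.1f} ML/day")
print(f"95% interval  = {np.percentile(residual, [2.5, 97.5]).round(1)}")
```

Comparing the spread of this residual against independent groundwater head data is then what lets the residual-equals-flux assumption be tested.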
NASA Technical Reports Server (NTRS)
Ichoku, Charles; Petrenko, Maksym; Leptoukh, Gregory
2010-01-01
Among the known atmospheric constituents, aerosols represent the greatest uncertainty in climate research. Although satellite-based aerosol retrieval has practically become routine, especially during the last decade, there is often disagreement between similar aerosol parameters retrieved from different sensors, leaving users confused as to which sensors to trust for answering important science questions about the distribution, properties, and impacts of aerosols. As long as there is no consensus and the inconsistencies are not well characterized and understood, there will be no way of developing reliable climate data records from satellite aerosol measurements. Fortunately, the most globally representative well-calibrated ground-based aerosol measurements corresponding to the satellite-retrieved products are available from the Aerosol Robotic Network (AERONET). To adequately utilize the advantages offered by this vital resource, an online Multi-sensor Aerosol Products Sampling System (MAPSS) was recently developed. The aim of MAPSS is to facilitate detailed comparative analysis of satellite aerosol measurements from different sensors (Terra-MODIS, Aqua-MODIS, Terra-MISR, Aura-OMI, Parasol-POLDER, and Calipso-CALIOP) based on the collocation of these data products over AERONET stations. In this presentation, we will describe the strategy of the MAPSS system, its potential advantages for the aerosol community, and the preliminary results of an integrated comparative uncertainty analysis of aerosol products from multiple satellite sensors.
A seismic hazard uncertainty analysis for the New Madrid seismic zone
Cramer, C.H.
2001-01-01
A review of the scientific issues relevant to characterizing earthquake sources in the New Madrid seismic zone has led to the development of a logic tree of possible alternative parameters. A variability analysis, using Monte Carlo sampling of this consensus logic tree, is presented and discussed. The analysis shows that for 2%-exceedance-in-50-year hazard, the best-estimate seismic hazard map is similar to previously published seismic hazard maps for the area. For peak ground acceleration (PGA) and spectral acceleration at 0.2 and 1.0 s (0.2 and 1.0 s Sa), the coefficient of variation (COV) representing the knowledge-based uncertainty in seismic hazard can exceed 0.6 over the New Madrid seismic zone and diminishes to about 0.1 away from areas of seismic activity. Sensitivity analyses show that the largest contributor to variability in PGA and 0.2 and 1.0 s Sa seismic hazard is the uncertainty in the location of future 1811-1812 New Madrid-sized earthquakes. This is followed by the variability due to the choice of ground motion attenuation relation, the magnitude of the 1811-1812 New Madrid earthquakes, and the recurrence interval for M>6.5 events. Seismic hazard is not very sensitive to the variability in seismogenic width and length. Published by Elsevier Science B.V.
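Monte Carlo sampling of a logic tree, as used in the variability analysis above, amounts to weighted draws of one branch per node followed by a hazard evaluation per realization. The branches, weights, and the hazard proxy below are invented for illustration and are not the New Madrid consensus values:

```python
import random

random.seed(1)

# Illustrative logic-tree branches as (value, weight) pairs; weights sum to 1.
magnitude = [(7.5, 0.3), (7.8, 0.5), (8.1, 0.2)]      # Mw of characteristic event
recurrence = [(250, 0.25), (500, 0.5), (1000, 0.25)]  # mean recurrence, years

def draw(branches):
    """Weighted draw of one branch value."""
    r, cum = random.random(), 0.0
    for value, weight in branches:
        cum += weight
        if r <= cum:
            return value
    return branches[-1][0]

# Toy hazard proxy: larger, more frequent events give higher hazard.
samples = []
for _ in range(5000):
    m, t = draw(magnitude), draw(recurrence)
    samples.append(10 ** (0.3 * m) / t)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
cov = var ** 0.5 / mean  # coefficient of variation across the tree
print(f"COV = {cov:.2f}")
```

The COV over the sampled realizations is exactly the knowledge-based uncertainty measure the abstract maps spatially.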
Zacharof, A I; Butler, A P
2004-01-01
A mathematical model simulating the hydrological and biochemical processes occurring in landfilled waste is presented and demonstrated. The model combines biochemical and hydrological models into an integrated representation of the landfill environment. Waste decomposition is modelled using traditional biochemical waste decomposition pathways combined with a simplified methodology for representing the rate of decomposition. Water flow through the waste is represented using a statistical velocity model capable of representing the effects of waste heterogeneity on leachate flow through the waste. Given the limitations in data capture from landfill sites, significant emphasis is placed on improving parameter identification and reducing parameter requirements. A sensitivity analysis is performed, highlighting the model's response to changes in input variables. A model test run is also presented, demonstrating the model capabilities. A parameter perturbation model sensitivity analysis was also performed. This has been able to show that although the model is sensitive to certain key parameters, its overall intuitive response provides a good basis for making reasonable predictions of the future state of the landfill system. Finally, due to the high uncertainty associated with landfill data, a tool for handling input data uncertainty is incorporated in the model's structure. It is concluded that the model can be used as a reasonable tool for modelling landfill processes and that further work should be undertaken to assess the model's performance. PMID:15120429
Uncertainty Analysis for the Miniaturized Laser Heterodyne Radiometer (mini-LHR)
NASA Technical Reports Server (NTRS)
Clarke, G. B.; Wilson, E. L.; Miller, J. H.; Melroy, H. R.
2014-01-01
Presented here is a sensitivity analysis for the miniaturized laser heterodyne radiometer (mini-LHR). This passive, ground-based instrument measures carbon dioxide (CO2) in the atmospheric column and has been under development at NASA/GSFC since 2009. The goal of this development is to produce a low-cost, easily-deployable instrument that can extend current ground measurement networks in order to (1) validate column satellite observations, (2) provide coverage in regions of limited satellite observations, (3) target regions of interest such as thawing permafrost, and (4) support the continuity of a long-term climate record. In this paper an uncertainty analysis of the instrument performance is presented and compared with results from three sets of field measurements. The signal-to-noise ratio (SNR) and corresponding uncertainty for a single scan are calculated to be 329.4 +/- 1.3 by applying error propagation through the equation governing the SNR. An absorbance noise of 0.0024 is reported for 6 averaged scans of field data, corresponding to an instrument precision of approximately 0.2 ppmv for CO2.
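The scan-averaging step above follows the standard assumption that uncorrelated noise falls as 1/sqrt(N). The sketch below uses the single-scan SNR from the abstract but is otherwise a generic illustration; the reported 6-scan absorbance noise also reflects error terms beyond this simple scaling:

```python
import math

# Single-scan SNR from the abstract; 1/sqrt(N) scaling is the textbook
# assumption for averaging uncorrelated noise, not the paper's full budget.
snr_single = 329.4
noise_single = 1.0 / snr_single          # fractional noise of one scan

n_scans = 6
noise_avg = noise_single / math.sqrt(n_scans)

print(f"single-scan noise: {noise_single:.4f}")
print(f"{n_scans}-scan noise:     {noise_avg:.4f}")
```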
Gualdrini, G; Tanner, R J; Agosteo, S; Pola, A; Bedogni, R; Ferrari, P; Lacoste, V; Bordy, J-M; Chartier, J-L; de Carlan, L; Gomez Ros, J-M; Grosswendt, B; Kodeli, I; Price, R A; Rollet, S; Schultz, F; Siebert, B; Terrissol, M; Zankl, M
2008-01-01
Within the scope of CONRAD (A Coordinated Action for Radiation Dosimetry), Work Package 4 on Computational Dosimetry collaborated with the other research actions on internal dosimetry, complex mixed radiation fields at workplaces, and medical staff dosimetry. Besides these collaborative actions, WP4 promoted an international comparison on eight problems with their associated experimental data. A first set of three problems, the results of which are herewith summarised, dealt only with the expression of the stochastic uncertainties of the results: the analysis of the response function of a proton recoil telescope detector, the study of a Bonner sphere neutron spectrometer, and the analysis of the neutron spectrum and dosimetric quantity Hp(10) in a thermal neutron facility operated by IRSN Cadarache (the SIGMA facility). A second paper will summarise the results of the other five problems, which dealt with the full uncertainty budget estimate. A third paper will present the results of a comparison on in vivo measurements of the (241)Am bone-seeker nuclide distributed in the knee. All the detailed papers will be presented in the WP4 Final Workshop Proceedings.
An improved method of fuzzy support degree based on uncertainty analysis
NASA Astrophysics Data System (ADS)
Huang, Yuan; Wu, Jing; Wu, Lihua; Sheng, Weidong
2015-10-01
Most multisensor association algorithms based on fuzzy set theory form the opinion of a fuzzy proposition using a simple triangular function, which does not take the randomness of measurements into account. Moreover, the triangular function assumes the sensor variances are known, but in practice the exact variances are difficult to acquire. This paper discusses two situations, with known and with unknown sensor variance. First, with known variance and known mean, a method is proposed that uses the probability ratio to calculate the fuzzy support degree; the interaction between the two objects is considered. Second, with unknown variance and known mean, the sample mean in the grey autocorrelation function is replaced with the true sensor mean to analyze the uncertainty, which is in effect the correlation coefficient between targets and measurements; in this way the method can handle the small-sample case. Finally, the opinion about the fuzzy proposition is formed by weighting the opinions of all the sensors according to the result of the uncertainty analysis. Simulations on several typical scenarios indicate that the presented method is efficient.
Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.; Oedegaard-Jensen, A.
2012-07-01
In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code, in order to perform uncertainty analysis on k-infinity and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for such purposes, where cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest amount of isotopic covariance matrices among the different major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonant self-shielding calculations such as DRAGONv4. (authors)
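Latin Hypercube Sampling as described above can be sketched in a few lines: each input dimension is cut into n equal-probability strata, one draw is taken per stratum, and the strata are shuffled independently per dimension. The three-dimensional input and the N(1, 0.05) relative perturbations are illustrative stand-ins for the JENDL-4 covariance data:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def latin_hypercube(n, d, rng):
    """n LHS points in (0,1)^d: one point per equal-probability stratum."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]  # decorrelate the columns
    return u

# 500 samples of three relative cross-section perturbations ~ N(1, 0.05).
# Dimensionality and sigma are illustrative, not actual covariance data.
inv = NormalDist(mu=1.0, sigma=0.05).inv_cdf
samples = np.vectorize(inv)(latin_hypercube(500, 3, rng))

print(samples.mean(axis=0).round(3))  # stratification pins the mean near 1
```

Because every stratum of each marginal is hit exactly once, sample statistics converge much faster than with simple random sampling of the same size.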
NASA Astrophysics Data System (ADS)
Kinkeldey, Christoph; Schiewe, Jochen; Gerstmann, Henning; Götze, Christian; Kit, Oleksandr; Lüdeke, Matthias; Taubenböck, Hannes; Wurm, Michael
2015-11-01
Extensive research on geodata uncertainty has been conducted in the past decades, mostly related to modeling, quantifying, and communicating uncertainty. But findings on whether and how users can incorporate this information into spatial analyses are still rare. In this paper we address these questions with a focus on land cover change analysis. We conducted semi-structured interviews with three expert groups dealing with change analysis in the fields of climate research, urban development, and vegetation monitoring. During the interviews we used a software prototype to show change scenarios that the experts had analyzed before, extended by visual depiction of uncertainty related to land cover change. This paper describes the study, summarizes results, and discusses findings as well as the study method. Participants came up with several ideas for applications that could be supported by uncertainty information, for example, identification of erroneous change, description of change detection algorithm characteristics, or optimization of change detection parameters. Regarding the aspect of reasoning with uncertainty in land cover change data, the interviewees saw potential in better-informed hypotheses and insights about change. Communication of uncertainty information to users was seen as critical, depending on the users' role and expertise. We judge semi-structured interviews to be suitable for the purpose of this study and emphasize the potential of qualitative methods (workshops, focus groups, etc.) for future uncertainty visualization studies.
Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis
Perkó, Zoltán; Gilli, Luca; Lathouwers, Danny; Kloosterman, Jan Leen
2014-03-01
The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time retaining a similar accuracy to the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to a further reduction in computational time, since the high-order grids necessary for accurately estimating the near-zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistent good performance both
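A one-dimensional illustration of the Non-Intrusive Spectral Projection idea: the PC coefficients of a response of a standard normal input are estimated by Gauss-Hermite quadrature, and mean and variance then follow directly from the coefficients. The response function and truncation order are arbitrary choices for this sketch, not anything from the paper:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def model(x):
    """Stand-in scalar response; a real NISP study wraps a code run here."""
    return np.exp(0.3 * x)

order = 5
nodes, weights = hermegauss(20)          # probabilists' Gauss-Hermite rule
weights = weights / np.sqrt(2 * np.pi)   # normalize to the N(0,1) measure

# Projection: c_k = E[f(X) He_k(X)] / k!, since E[He_k^2] = k! under N(0,1).
coeffs = [
    np.sum(weights * model(nodes) * hermeval(nodes, [0] * k + [1]))
    / math.factorial(k)
    for k in range(order + 1)
]

pce_mean = coeffs[0]
pce_var = sum(math.factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
print(f"PCE mean = {pce_mean:.5f} (exact {math.exp(0.045):.5f})")
print(f"PCE var  = {pce_var:.5f}")
```

Sparse-grid and basis-adaptive schemes like FANISP generalize exactly this projection step to many input dimensions, pruning quadrature points and basis terms whose coefficients are near zero.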
NASA Astrophysics Data System (ADS)
Candela, Angela; Tito Aronica, Giuseppe
2014-05-01
Floods are a global problem and are considered the most frequent natural disaster world-wide. Many studies show that the severity and frequency of floods have increased in recent years and underline the difficulty to separate the effects of natural climatic changes and human influences as land management practices, urbanization etc. Flood risk analysis and assessment is required to provide information on current or future flood hazard and risks in order to accomplish flood risk mitigation, to propose, evaluate and select measures to reduce it. Both components of risk can be mapped individually and are affected by multiple uncertainties as well as the joint estimate of flood risk. Major sources of uncertainty include statistical analysis of extremes events, definition of hydrological input, channel and floodplain topography representation, the choice of effective hydraulic roughness coefficients. The classical procedure to estimate flood discharge for a chosen probability of exceedance is to deal with a rainfall-runoff model associating to risk the same return period of original rainfall, in accordance with the iso-frequency criterion. Alternatively, a flood frequency analysis to a given record of discharge data is applied, but again the same probability is associated to flood discharges and respective risk. Moreover, since flood peaks and corresponding flood volumes are variables of the same phenomenon, they should be, directly, correlated and, consequently, multivariate statistical analyses must be applied. This study presents an innovative approach to obtain flood hazard maps where hydrological input (synthetic flood design event) to a 2D hydraulic model has been defined by generating flood peak discharges and volumes from: a) a classical univariate approach, b) a bivariate statistical analysis, through the use of copulas. The univariate approach considers flood hydrographs generation by an indirect approach (rainfall-runoff transformation using input rainfall
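The bivariate step above couples flood peak and volume through a copula. A Gaussian-copula sketch with invented Gumbel marginals and an assumed correlation of 0.8 (not the study's fitted values) looks like this:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(7)
n, rho = 5000, 0.8  # assumed peak-volume dependence (illustrative)

# Gaussian copula: correlated standard normals -> uniforms -> marginals.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = np.vectorize(NormalDist().cdf)(z)

def gumbel_ppf(p, loc, scale):
    """Inverse CDF of the Gumbel (EV1) distribution, common for flood extremes."""
    return loc - scale * np.log(-np.log(p))

peak = gumbel_ppf(u[:, 0], 300.0, 80.0)   # flood peak, m3/s (invented)
volume = gumbel_ppf(u[:, 1], 20.0, 6.0)   # flood volume, hm3 (invented)

print(f"sample correlation = {np.corrcoef(peak, volume)[0, 1]:.2f}")
```

Each (peak, volume) pair can then parameterize one synthetic design hydrograph for the 2D hydraulic model, preserving the dependence that the univariate approach ignores.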
Hartini, Entin; Andiwijayakusuma, Dinan
2014-09-30
This research concerns the development of a code for uncertainty analysis based on a statistical approach to assessing uncertainty in input parameters. In the burn-up calculation of fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density, and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code. The uncertainty method is based on probability density functions. The code was developed as a Python script coupled with MCNPX for criticality and burn-up calculations. The simulation models the PWR terrace geometry with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculation uses the continuous-energy cross-section data library ENDF/B-VI. MCNPX requires nuclear data in ACE format; interfaces were developed for obtaining ACE-format nuclear data from ENDF through a dedicated NJOY calculation for temperature changes within a certain range.
J.C. Rowland; D.R. Harp; C.J. Wilson; A.L. Atchley; V.E. Romanovsky; E.T. Coon; S.L. Painter
2016-02-02
This Modeling Archive is in support of an NGEE Arctic publication available at doi:10.5194/tc-10-341-2016. This dataset contains an ensemble of thermal-hydro soil parameters including porosity, thermal conductivity, thermal conductivity shape parameters, and residual saturation of peat and mineral soil. The ensemble was generated using a Null-Space Monte Carlo analysis of parameter uncertainty based on a calibration to soil temperatures collected at the Barrow Environmental Observatory site by the NGEE team. The micro-topography of ice wedge polygons present at the site is included in the analysis using three 1D column models to represent polygon center, rim and trough features. The Arctic Terrestrial Simulator (ATS) was used in the calibration to model multiphase thermal and hydrological processes in the subsurface.
An uncertainty analysis of the flood-stage upstream from a bridge.
Sowiński, M
2006-01-01
The paper begins with the formulation of the problem in the form of a general performance function. Next, the Latin hypercube sampling (LHS) technique--a modified version of the Monte Carlo method--is briefly described. The essential uncertainty analysis of the flood-stage upstream from a bridge starts with a description of the hydraulic model. This model concept is based on the HEC-RAS model developed for subcritical flow under a bridge without piers, in which the energy equation is applied. The next section contains the characteristics of the basic variables, including a specification of their statistics (means and variances). Next, the problem of correlated variables is discussed and assumptions concerning correlation among basic variables are formulated. The analysis of results is based on LHS ranking lists obtained from the computer package UNCSAM. Results for two examples are given: one for independent and the other for correlated variables. PMID:16532737
Analysis of uncertainties in α-particle optical-potential assessment below the Coulomb barrier
NASA Astrophysics Data System (ADS)
Avrigeanu, V.; Avrigeanu, M.
2016-08-01
Background: Recent high-precision measurements of α-induced reaction data below the Coulomb barrier have pointed out questions about the α-particle optical-model potential (OMP) which are still unanswered within various mass ranges. Purpose: The applicability of previous optical potentials and eventual uncertainties and/or systematic errors of the OMP assessment at low energies can be further considered on this basis. Method: Nuclear model parameters based on the analysis of recent independent data, particularly γ-ray strength functions, have been involved within statistical model calculations of the (α,x) reaction cross sections. Results: The above-mentioned potential provides a consistent description of the recent α-induced reaction data with no empirical rescaling factors of the γ and/or nucleon widths. Conclusions: A suitable assessment of the α-particle optical potential below the Coulomb barrier should involve the statistical-model parameters beyond this potential on the basis of a former analysis of independent data.
Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model
NASA Astrophysics Data System (ADS)
Prakash, Shashi; Kumar, Nitish; Kumar, Subrata
2016-09-01
CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (poly-methyl-methacrylate). PMMA directly vaporizes when subjected to a high-intensity focused CO2 laser beam. This process results in a clean cut and acceptable surface finish on microchannel walls. Overall, the CO2 laser microchanneling process is cost effective and easy to implement. While fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. There are a few analytical models available to predict the maximum depth of the microchannels and the cut channel profile on a PMMA substrate using a CO2 laser. These models depend upon the values of the thermophysical properties of PMMA and the laser beam parameters. There are a number of variants of transparent PMMA available in the market with different values of thermophysical properties. Therefore, to apply such analytical models, the values of these thermophysical properties must be known exactly. Although the values of the laser beam parameters are readily available, extensive experiments are required to determine the values of the thermophysical properties of PMMA. The unavailability of exact values of these properties restricts proper control over the microchannel dimensions for a given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of fabricated microchannels, it is necessary to have an idea of the uncertainty associated with the predicted microchannel depth. In this research work, the uncertainty associated with the maximum depth dimension has been determined using the Monte Carlo method (MCM). The propagation of uncertainty with different power and scanning speed has been predicted. The relative impact of each thermophysical property has been determined using sensitivity analysis.
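Monte Carlo propagation of thermophysical-property uncertainty can be sketched with a simplified energy-balance depth model. The model form, property distributions, and beam parameters below are generic assumptions for PMMA, not the paper's calibrated model or data:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 50_000  # Monte Carlo samples

# Simplified energy balance: depth ~ P / (v * w * rho * (c*dT + Lv)),
# i.e. absorbed line energy divided by the energy to vaporize a unit volume.
P = 5.0      # laser power, W (assumed)
v = 50e-3    # scan speed, m/s (assumed)
w = 200e-6   # beam width, m (assumed)

rho = rng.normal(1180.0, 30.0, n)    # PMMA density, kg/m3
c   = rng.normal(1470.0, 100.0, n)   # specific heat, J/(kg K)
Lv  = rng.normal(1.0e6, 1.0e5, n)    # vaporization energy, J/kg
dT  = rng.normal(360.0, 20.0, n)     # temperature rise to vaporization, K

depth = P / (v * w * rho * (c * dT + Lv)) * 1e6  # micrometres

print(f"depth = {depth.mean():.0f} +/- {depth.std():.0f} um")
```

The sample standard deviation is the propagated depth uncertainty; rerunning with one property fixed at its mean gives a crude one-at-a-time sensitivity check.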
Moradi, Ali; Tootkaboni, Mazdak; Pennell, Kelly G.
2015-01-01
The Johnson and Ettinger (J&E) model is the most widely used vapor intrusion model in the United States. It is routinely used as part of hazardous waste site assessments to evaluate the potential for vapor intrusion exposure risks. This study incorporates mathematical approaches that allow sensitivity and uncertainty of the J&E model to be evaluated. In addition to performing Monte Carlo simulations to examine the uncertainty in the J&E model output, a powerful global sensitivity analysis technique based on Sobol indices is used to evaluate J&E model sensitivity to variations in the input parameters. The results suggest that the J&E model is most sensitive to the building air exchange rate, regardless of soil type and source depth. Building air exchange rate is not routinely measured during vapor intrusion investigations, but clearly improved estimates and/or measurements of the air exchange rate would lead to improved model predictions. It is also found that the J&E model is more sensitive to effective diffusivity, than effective permeability. Field measurements of effective diffusivity are not commonly collected during vapor intrusion investigations; however, consideration of this parameter warrants additional attention. Finally, the effects of input uncertainties on model predictions for different scenarios (e.g. sandy soil as compared to clayey soil, and “shallow” sources as compared to “deep” sources) are evaluated. Our results, not only identify the range of variability to be expected depending on the scenario at hand, but also mark the important cases where special care is needed when estimating the input parameters to which the J&E model is most sensitive. PMID:25947051
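First-order Sobol indices as used above can be estimated with the Saltelli pick-and-freeze scheme: two independent sample blocks, plus hybrid blocks that take one column from the other matrix. The three-input toy model merely stands in for the J&E model; it is not the actual attenuation equation:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    """Toy attenuation stand-in: exponential decay over a depth-like input,
    scaled by a diffusivity-like factor, divided by an exchange-rate-like one."""
    return np.exp(-x[:, 0]) * x[:, 1] / x[:, 2]

d, n = 3, 50_000
A = rng.lognormal(0.0, 0.3, (n, d))   # two independent input sample blocks
B = rng.lognormal(0.0, 0.3, (n, d))

yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

# S_i = mean(yB * (y(A with column i from B) - yA)) / Var(y)  (Saltelli 2010)
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(np.mean(yB * (model(ABi) - yA)) / var_y)

print("first-order Sobol indices:", np.round(S, 2))
```

Each index is the fraction of output variance explained by that input alone; indices summing well below one would signal strong interactions.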
Voxel-based statistical analysis of uncertainties associated with deformable image registration
NASA Astrophysics Data System (ADS)
Li, Shunshan; Glide-Hurst, Carri; Lu, Mei; Kim, Jinkoo; Wen, Ning; Adams, Jeffrey N.; Gordon, James; Chetty, Indrin J.; Zhong, Hualiang
2013-09-01
Deformable image registration (DIR) algorithms have inherent uncertainties in their displacement vector fields (DVFs). The purpose of this study is to develop an optimal metric to estimate DIR uncertainties. Six computational phantoms have been developed from the CT images of lung cancer patients using a finite element method (FEM). The FEM-generated DVFs were used as a standard for registrations performed on each of these phantoms. A mechanics-based metric, unbalanced energy (UE), was developed to evaluate these registration DVFs. The potential correlation between UE and DIR errors was explored using multivariate analysis, and the results were validated by a landmark approach and compared with two other error metrics: DVF inverse consistency (IC) and image intensity difference (ID). Landmark-based validation was performed using the POPI model. The results show that the Pearson correlation coefficient between UE and DIR error is r(UE, error) = 0.50. This is higher than r(IC, error) = 0.29 for IC and DIR error and r(ID, error) = 0.37 for ID and DIR error. The Pearson correlation coefficient between UE and the product of the DIR displacements and errors is r(UE, error x DVF) = 0.62 for the six patients and r(UE, error x DVF) = 0.73 for the POPI-model data. It has been demonstrated that UE has a strong correlation with DIR errors, and the UE metric outperforms the IC and ID metrics in estimating DIR uncertainties. The quantified UE metric can be a useful tool for adaptive treatment strategies, including probability-based adaptive treatment planning.
Uncertainties on α_S in the MMHT2014 global PDF analysis and implications for SM predictions
NASA Astrophysics Data System (ADS)
Harland-Lang, L. A.; Martin, A. D.; Motylinski, P.; Thorne, R. S.
2015-09-01
We investigate the uncertainty in the strong coupling α_S(M_Z^2) when allowing it to be a free parameter in the recent MMHT global analyses of deep-inelastic and related hard scattering data that were undertaken to determine the parton distribution functions (PDFs) of the proton. The analysis uses the standard framework of leading-twist fixed-order collinear factorisation in the MS-bar scheme. We study the constraints on α_S(M_Z^2) coming from individual data sets by repeating the NNLO and NLO fits spanning the range 0.108 to 0.128 in units of 0.001, making all PDF sets available. The inclusion of the cross section for inclusive t-tbar production allows us to explore the correlation between the mass m_t of the top quark and α_S(M_Z^2). We find that the best-fit values are α_S(M_Z^2) = 0.1201 ± 0.0015 and 0.1172 ± 0.0013 at NLO and NNLO, respectively, with the central values changing to 0.1195 and 0.1178 when the world average of α_S(M_Z^2) is used as a data point. We investigate the interplay between the uncertainties on α_S(M_Z^2) and on the PDFs. In particular we calculate the cross sections for key processes at the LHC and show how the uncertainties from the PDFs and from α_S(M_Z^2) can be provided independently and be combined.
NASA Astrophysics Data System (ADS)
Ichoku, C. M.; Petrenko, M.
2013-05-01
Aerosols are tiny particles suspended in the air, and can be made up of wind-blown dust, smoke from fires, and particulate emissions from automobiles, industries, and other natural and man-made sources. Aerosols can have significant impacts on the air quality, and can interact with clouds and solar radiation in such a way as to affect the water cycle and climate. However, the extent and scale of these impacts are still poorly understood, and this represents one of the greatest uncertainties in climate research to date. To fill this gap in our knowledge, the global and local properties of atmospheric aerosols are being extensively observed and measured, especially during the last decade, using both satellite and ground-based instruments, including such spaceborne sensors as MODIS on the Terra and Aqua satellites, MISR on Terra, OMI on Aura, POLDER on PARASOL, CALIOP on CALIPSO, SeaWiFS on SeaStar, and the ground-based Aerosol Robotic Network (AERONET) of sunphotometers. The aerosol measurements collected by these instruments over the last decade contribute to an unprecedented availability of the most complete set of complementary aerosol measurements ever acquired. Still, to be able to utilize these measurements synergistically, they have to be carefully and uniformly analyzed and inter-compared, in order to understand the uncertainties and limitations of the products - a process that is greatly complicated by the diversity of differences that exist among them. In this presentation, we will show results of a coherent comparative uncertainty analysis of aerosol measurements from the above-named satellite sensors relative to AERONET. We use these results to demonstrate how these sensors perform in different parts of the world over different landcover types as well as their performance relative to one another, thereby facilitating product selection and integration for specific research and applications needs.
Moradi, Ali; Tootkaboni, Mazdak; Pennell, Kelly G
2015-02-01
The Johnson and Ettinger (J&E) model is the most widely used vapor intrusion model in the United States. It is routinely used as part of hazardous waste site assessments to evaluate the potential for vapor intrusion exposure risks. This study incorporates mathematical approaches that allow sensitivity and uncertainty of the J&E model to be evaluated. In addition to performing Monte Carlo simulations to examine the uncertainty in the J&E model output, a powerful global sensitivity analysis technique based on Sobol indices is used to evaluate J&E model sensitivity to variations in the input parameters. The results suggest that the J&E model is most sensitive to the building air exchange rate, regardless of soil type and source depth. Building air exchange rate is not routinely measured during vapor intrusion investigations, but clearly improved estimates and/or measurements of the air exchange rate would lead to improved model predictions. It is also found that the J&E model is more sensitive to effective diffusivity than to effective permeability. Field measurements of effective diffusivity are not commonly collected during vapor intrusion investigations; however, consideration of this parameter warrants additional attention. Finally, the effects of input uncertainties on model predictions for different scenarios (e.g., sandy soil as compared to clayey soil, and "shallow" sources as compared to "deep" sources) are evaluated. Our results not only identify the range of variability to be expected depending on the scenario at hand, but also mark the important cases where special care is needed when estimating the input parameters to which the J&E model is most sensitive. PMID:25947051
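The Sobol analysis described above can be sketched with a pick-freeze Monte Carlo estimator. This is a minimal illustration, not the J&E model: the toy function `model` and its coefficients are hypothetical stand-ins in which the first input dominates, mimicking the paper's finding about the building air exchange rate.

```python
import random

# Hypothetical toy surrogate; the functional form and coefficients are
# illustrative only, chosen so that x0 dominates the output variance.
def model(x0, x1, x2):
    return 1.0 / (x0 + 0.1) + 0.5 * x1 + 0.1 * x2

def first_order_sobol(f, d=3, n=20000, seed=42):
    """Estimate first-order Sobol indices S_i with the Saltelli/Jansen
    pick-freeze estimator, inputs i.i.d. uniform on [0, 1)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [f(*row) for row in A]
    yB = [f(*row) for row in B]
    mu = sum(yA) / n
    var = sum((y - mu) ** 2 for y in yA) / n
    S = []
    for i in range(d):
        # A_B^(i): matrix A with column i swapped in from B
        yABi = [f(*[(B[k][j] if j == i else A[k][j]) for j in range(d)])
                for k in range(n)]
        Vi = sum(yB[k] * (yABi[k] - yA[k]) for k in range(n)) / n
        S.append(Vi / var)
    return S

S = first_order_sobol(model)
# S[0] should be close to 1: the first input explains nearly all the variance.
```

The same estimator applied to the real J&E inputs (air exchange rate, effective diffusivity, effective permeability, ...) would rank them as the abstract reports.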
Uncertainty analysis in 3D global models: Aerosol representation in MOZART-4
NASA Astrophysics Data System (ADS)
Gasore, J.; Prinn, R. G.
2012-12-01
The Probabilistic Collocation Method (PCM) has been proven to be an efficient general method of uncertainty analysis in atmospheric models (Tatang et al. 1997; Cohen & Prinn 2011). However, its application has been mainly limited to urban- and regional-scale models and chemical source-sink models, because of the drastic increase in computational cost as the number of uncertain parameters grows. Moreover, the high-dimensional output of global models has to be reduced to allow a computationally reasonable number of polynomials to be generated. This dimensional reduction has mainly been achieved by grouping the model grid cells into a few regions based on prior knowledge and expectations, urban versus rural for instance. Because the model output is used to estimate the coefficients of the polynomial chaos expansion (PCE), arbitrariness in the regional aggregation can cause problems in estimating uncertainties. To address these issues in a complex model, we apply the probabilistic collocation method of uncertainty analysis to the aerosol representation in MOZART-4, a 3D global chemical transport model (Emmons et al., 2010). Thereafter, we deterministically delineate the model output surface into regions of homogeneous response using Principal Component Analysis, which allows the uncertainty associated with the dimensional reduction to be quantified. Because only a bulk aerosol mass is calculated online in MOZART-4, a lognormal number distribution with a priori fixed scale and location parameters is assumed in order to calculate the surface area for heterogeneous reactions involving tropospheric oxidants. We have applied the PCM to the six parameters of the lognormal number distributions of Black Carbon, Organic Carbon and Sulfate. We have carried out a Monte-Carlo sampling from the probability density functions of the six uncertain parameters, using the reduced PCE model. The global mean concentration of major tropospheric oxidants did not show a
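The collocation idea can be sketched in one dimension. The sketch below fits a degree-2 Hermite PCE to a hypothetical response `exp(a*xi)` of a standard normal input, with collocation points at the roots of He3 (0 and ±√3), i.e., the roots of the next-higher orthogonal polynomial, as in PCM; the response and the parameter `a` are illustrative, not anything from MOZART-4.

```python
import math

# Toy uncertain response y = exp(a*xi), xi ~ N(0,1); a is a made-up constant.
a = 0.3
def response(xi):
    return math.exp(a * xi)

# Degree-2 PCE in probabilists' Hermite polynomials He0=1, He1=x, He2=x^2-1,
# collocated at the roots of He3: 0 and +/- sqrt(3).
r = math.sqrt(3.0)
f0, fp, fm = response(0.0), response(r), response(-r)

c1 = (fp - fm) / (2.0 * r)          # He1 coefficient
c2 = ((fp + fm) / 2.0 - f0) / 3.0   # He2 coefficient
c0 = f0 + c2                        # He0 coefficient = mean of the PCE output

def pce(xi):
    """Cheap surrogate: evaluate the fitted expansion instead of the model."""
    return c0 + c1 * xi + c2 * (xi * xi - 1.0)

# For this response the exact output mean is exp(a**2 / 2); three model runs
# recover it almost exactly, which is the appeal of PCM for expensive models.
```

Monte Carlo sampling on `pce` (rather than the full model) is then essentially free, which is how the reduced PCE model in the abstract is used.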
Uncertainty analysis of flow rate measurement for multiphase flow using CFD
NASA Astrophysics Data System (ADS)
Kim, Joon-Hyung; Jung, Uk-Hee; Kim, Sung; Yoon, Joon-Yong; Choi, Young-Seok
2015-10-01
The venturi meter has an advantage in its use because it can measure flow without being much affected by the type of the measured fluid or the flow conditions. Hence, it has excellent versatility and is widely applied in many industries. The flow of a liquid containing air is a representative example of a multiphase flow and exhibits complex flow characteristics. In particular, the greater the gas volume fraction (GVF), the more inhomogeneous the flow becomes. As a result, using a venturi meter to measure the rate of a flow that has a high GVF generates an error. In this study, the cause of the error that occurs when measuring the flow rate of a multiphase flow with a venturi meter was analyzed using CFD. To ensure the reliability of this study, the accuracy of the multiphase flow models used for the numerical analysis was verified through comparison between the calculated results of the numerical analysis and experimental data. As a result, the Grace model, a multiphase flow model established by an experiment with water and air, was confirmed to have the highest reliability. Finally, the characteristics of the internal flow field in the multiphase flow analysis results generated by applying the Grace model were analyzed to find the cause of the uncertainty that arises when measuring the flow rate of a multiphase flow using a venturi meter. A phase separation phenomenon occurred inside the venturi due to the density difference between water and air, and flow inhomogeneity arose from the flow velocity difference between the phases. It was confirmed that this flow inhomogeneity, which increases with the GVF, is the cause of the uncertainty in the flow measurement.
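The classical single-phase venturi equation makes the error mechanism easy to see. The sketch below uses hypothetical geometry and readings, and illustrates only one simplified facet of the problem: if the meter assumes pure-water density while the actual homogeneous-mixture density is lower because of entrained air, the inferred flow rate is biased (the paper's CFD analysis additionally accounts for phase separation and velocity slip, which this sketch ignores).

```python
import math

def venturi_flow(dp, rho, d_pipe, d_throat, cd=0.98):
    """Ideal incompressible venturi equation:
    Q = Cd * A_throat * sqrt(2*dp / (rho * (1 - beta**4))), beta = d_throat/d_pipe."""
    beta = d_throat / d_pipe
    a_throat = math.pi * (d_throat / 2.0) ** 2
    return cd * a_throat * math.sqrt(2.0 * dp / (rho * (1.0 - beta ** 4)))

# Hypothetical geometry and reading: 100 mm pipe, 50 mm throat, 20 kPa drop.
Q_water = venturi_flow(dp=20e3, rho=998.0, d_pipe=0.10, d_throat=0.05)

# With a GVF of 20%, the homogeneous-mixture density is lower; if the meter
# still assumes pure water, the inferred Q is biased high.
gvf = 0.20
rho_mix = (1.0 - gvf) * 998.0 + gvf * 1.2
Q_mix = venturi_flow(dp=20e3, rho=rho_mix, d_pipe=0.10, d_throat=0.05)
bias = Q_mix / Q_water - 1.0   # relative error from the density mismatch, ~ +12%
```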
Climate change impacts on extreme events in the United States: an uncertainty analysis
Extreme weather and climate events, such as heat waves, droughts and severe precipitation events, have substantial impacts on ecosystems and the economy. However, future climate simulations display large uncertainty in mean changes. As a result, the uncertainty in future changes ...
This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...
NASA Astrophysics Data System (ADS)
Johnson, B.; Clark, D.; Feinholz, M.; Flora, S.; Franz, B.; Houlihan, T.; Mueller, J. A.; Parr, A. C.; Voss, K. J.; Yarbrough, M.
2011-12-01
Substantial effort has been invested by NASA to create and maintain a long-term, consistent, and calibrated time series of ocean color radiometry over multiple missions and satellite sensors. This is a very difficult measurement problem because the water-leaving radiance is a small fraction of the total radiance measured by the satellite sensor. As a result, the SI traceability of ocean color radiometric values relies completely on a vicarious calibration approach utilizing reference oceanic sites. A robust and rigorous uncertainty analysis of this data set is still outstanding. Broadly speaking, there are three aspects to the uncertainty budget for the long-term time series of the global ocean color radiometric data set: the in situ radiometric time series, the in situ to satellite match-up time series for determination of the vicarious calibration gain coefficients, and the global, satellite-derived values for water-leaving radiances (or remote sensing reflectances). The uncertainty budget has elements attributed to sensor characterization functions (which change in time), natural variability, and the veracity and efficacy of the measurement equations (including models and algorithms) that describe the complete methodology. We have recently undertaken a rigorous analysis of uncertainty of the global ocean color radiometric time series data set, emphasizing the in situ uncertainties and their impact on the ocean color time series. Our technical approach is to formulate and analyze measurement equations that model the relationships between the values of the measured quantities and the resulting uncertainties, thus establishing traceability of the values of the MOBY results to stated reference values. Uncertainty estimates are quantitative data products in and of themselves; simply documenting discrepancies between results and equating those discrepancies with uncertainties is not a valid or sufficient approach. We will review the MOBY data set, explain our uncertainty model
TRITIUM UNCERTAINTY ANALYSIS FOR SURFACE WATER SAMPLES AT THE SAVANNAH RIVER SITE
Atkinson, R.
2012-07-31
Radiochemical analyses of surface water samples, in the framework of Environmental Monitoring, have associated uncertainties for the radioisotopic results reported. These uncertainty analyses pertain to the tritium results from surface water samples collected at five locations on the Savannah River near the U.S. Department of Energy's Savannah River Site (SRS). Uncertainties can result from the field-sampling routine, can be incurred during transport due to the physical properties of the sample, from equipment limitations, and from the measurement instrumentation used. The uncertainty reported by the SRS in their Annual Site Environmental Report currently considers only the counting uncertainty in the measurements, which is the standard reporting protocol for radioanalytical chemistry results. The focus of this work is to provide an overview of all uncertainty components associated with SRS tritium measurements, estimate the total uncertainty according to ISO 17025, and to propose additional experiments to verify some of the estimated uncertainties. The main uncertainty components discovered and investigated in this paper are tritium absorption or desorption in the sample container, HTO/H2O isotopic effect during distillation, pipette volume, and tritium standard uncertainty. The goal is to quantify these uncertainties and to establish a combined uncertainty in order to increase the scientific depth of the SRS Annual Site Environmental Report.
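The ISO/GUM-style combination of independent components described above can be sketched as a root-sum-of-squares. The component values below are placeholders, not the SRS figures.

```python
import math

# Hypothetical relative standard uncertainties (%) for a tritium result;
# the numbers are illustrative placeholders, not SRS data.
components = {
    "counting": 2.0,          # the only term in the current reporting protocol
    "pipette_volume": 0.5,
    "tritium_standard": 1.0,
    "distillation_isotope_effect": 1.0,
    "container_sorption": 1.0,
}

# GUM root-sum-of-squares combination of independent components.
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
U_expanded = 2.0 * u_combined   # coverage factor k = 2 (~95% confidence level)
```

Note how the combined value (about 2.7% here) exceeds the counting-only term (2.0%), which is the report's point: counting uncertainty alone understates the total.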
Broekhuizen, Henk; Groothuis-Oudshoorn, Catharina G M; van Til, Janine A; Hummel, J Marjan; IJzerman, Maarten J
2015-05-01
Multi-criteria decision analysis (MCDA) is increasingly used to support decisions in healthcare involving multiple and conflicting criteria. Although uncertainty is usually carefully addressed in health economic evaluations, whether and how the different sources of uncertainty are dealt with and with what methods in MCDA is less known. The objective of this study is to review how uncertainty can be explicitly taken into account in MCDA and to discuss which approach may be appropriate for healthcare decision makers. A literature review was conducted in the Scopus and PubMed databases. Two reviewers independently categorized studies according to research areas, the type of MCDA used, and the approach used to quantify uncertainty. Selected full text articles were read for methodological details. The search strategy identified 569 studies. The five approaches most identified were fuzzy set theory (45% of studies), probabilistic sensitivity analysis (15%), deterministic sensitivity analysis (31%), Bayesian framework (6%), and grey theory (3%). A large number of papers considered the analytic hierarchy process in combination with fuzzy set theory (31%). Only 3% of studies were published in healthcare-related journals. In conclusion, our review identified five different approaches to take uncertainty into account in MCDA. The deterministic approach is most likely sufficient for most healthcare policy decisions because of its low complexity and straightforward implementation. However, more complex approaches may be needed when multiple sources of uncertainty must be considered simultaneously.
Automation and uncertainty analysis of a method for in-vivo range verification in particle therapy.
Frey, K; Unholtz, D; Bauer, J; Debus, J; Min, C H; Bortfeld, T; Paganetti, H; Parodi, K
2014-10-01
We introduce the automation of the range difference calculation deduced from particle-irradiation induced β+-activity distributions with the so-called most-likely-shift approach, and evaluate its reliability via the monitoring of algorithm- and patient-specific uncertainty factors. The calculation of the range deviation is based on the minimization of the absolute profile differences in the distal part of two activity depth profiles shifted against each other. Depending on the workflow of positron emission tomography (PET)-based range verification, the two profiles under evaluation can correspond to measured and simulated distributions, or only measured data from different treatment sessions. In comparison to previous work, the proposed approach includes an automated identification of the distal region of interest for each pair of PET depth profiles, taking the planned dose distribution into consideration, and yields the optimal shift distance. Moreover, it introduces an estimate of the uncertainty associated with the identified shift, which is then used as a weighting factor to 'red flag' problematic large range differences. Furthermore, additional patient-specific uncertainty factors are calculated using available computed tomography (CT) data to support the range analysis. The performance of the new method for in-vivo treatment verification in the clinical routine is investigated with in-room PET images for proton therapy as well as with offline PET images for proton and carbon ion therapy. The comparison between measured PET activity distributions and predictions obtained by Monte Carlo simulations or measurements from previous treatment fractions is performed. For this purpose, a total of 15 patient datasets were analyzed, which were acquired at Massachusetts General Hospital and Heidelberg Ion-Beam Therapy Center with in-room PET and offline PET/CT scanners, respectively. Calculated range differences between the compared activity distributions are reported in a
KODELI, IVAN-ALEXANDER
2008-05-22
latest versions available from NEA-DB). o The memory and data management was updated as well as the language level (code was rewritten from Fortran-77 to Fortran-95). SUSD3D is coupled to several discrete‑ordinates codes via binary interface files. SUSD3D can use the flux moment files produced by discrete ordinates codes: ANISN, DORT, TORT, ONEDANT, TWODANT, and THREEDANT. In some of these codes minor modifications are required. Variable dimensions used in the TORT‑DORT system are supported. In 3D analysis the geometry and material composition is taken directly from the TORT produced VARSCL binary file, reducing in this way the user's input to SUSD3D. Multigroup cross‑section sets are read in the GENDF format of the NJOY/GROUPR code system, and the covariance data are expected in the COVFIL format of NJOY/ERRORR or the COVERX format of PUFF‑2. The ZZ‑VITAMIN‑J/COVA cross section covariance matrix library can be used as an alternative to the NJOY code system. The package includes the ANGELO code to produce the covariance data in the required energy structure in the COVFIL format. The following cross section processing modules to be added to the NJOY‑94 code system are included in the package: o ERR34: an extension of the ERRORR module of the NJOY code system for the File‑34 processing. It is used to prepare multigroup SAD cross sections covariance matrices. o GROUPSR: An additional code module for the preparation of partial cross sections for SAD sensitivity analysis. Updated version of the same code from SUSD, extended to the ENDF‑6 format. o SEADR: An additional code module to prepare group covariance matrices for SAD/SED uncertainty analysis.
Uncertainty Analysis for the Evaluation of a Passive Runway Arresting System
NASA Technical Reports Server (NTRS)
Deloach, Richard; Marlowe, Jill M.; Yager, Thomas J.
2009-01-01
This paper considers the stopping distance of an aircraft involved in a runway overrun incident when the runway has been provided with an extension comprising a material engineered to induce high levels of rolling friction and drag. A formula for stopping distance is derived that is shown to be the product of a known formula for the case of friction without drag, and a dimensionless constant between 0 and 1 that quantifies the further reduction in stopping distance when drag is introduced. This additional quantity, identified as the Drag Reduction Factor, D, is shown to depend on the ratio of drag force to friction force experienced by the aircraft as it enters the overrun area. The specific functional form of D is shown to depend on how drag varies with speed. A detailed uncertainty analysis is presented which reveals how the uncertainty in estimates of stopping distance is influenced by experimental error in the force measurements that are acquired in a typical evaluation experiment conducted to assess candidate overrun materials.
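The decomposition described in the abstract can be sketched for the simplest case of speed-independent drag, where constant deceleration gives D = 1/(1 + r) with r the drag-to-friction force ratio (the paper treats more general speed dependence; the numbers here are hypothetical).

```python
def stopping_distance(v, mu, drag_to_friction, g=9.81):
    """d = [v**2 / (2*mu*g)] * D, with D = 1/(1 + r) and r = F_drag/F_friction.
    The first factor is the friction-only stopping distance; D in (0, 1] is the
    drag reduction factor for the constant (speed-independent) drag case."""
    d_friction_only = v ** 2 / (2.0 * mu * g)
    D = 1.0 / (1.0 + drag_to_friction)
    return d_friction_only * D

# Hypothetical overrun entry at 80 m/s, rolling friction coefficient 0.6,
# drag force equal to half the friction force:
d = stopping_distance(v=80.0, mu=0.6, drag_to_friction=0.5)   # ~362 m vs ~544 m

# Simple sensitivity: d is proportional to 1/mu, so a relative error in the
# measured friction force maps one-to-one into relative error in d.
```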
Efficient Calibration/Uncertainty Analysis Using Paired Complex/Surrogate Models.
Burrows, Wesley; Doherty, John
2015-01-01
The use of detailed groundwater models to simulate complex environmental processes can be hampered by (1) long run-times and (2) a penchant for solution convergence problems. Collectively, these can undermine the ability of a modeler to reduce and quantify predictive uncertainty, and therefore limit the use of such detailed models in the decision-making context. We explain and demonstrate a novel approach to the calibration, and to the exploration of posterior predictive uncertainty, of a complex model; the approach can overcome these problems in many modeling contexts. The methodology relies on the conjunctive use of a simplified surrogate version of the complex model and the complex model itself. The methodology employs gradient-based subspace analysis and is thus readily adapted for use in highly parameterized contexts. In its most basic form, one or more surrogate models are used for calculation of the partial derivatives that collectively comprise the Jacobian matrix. Meanwhile, testing of parameter upgrades and the making of predictions is done by the original complex model. The methodology is demonstrated using a density-dependent seawater intrusion model in which the model domain is characterized by a heterogeneous distribution of hydraulic conductivity.
An uncertainty analysis of air pollution externalities from road transport in Belgium in 2010.
Int Panis, L; De Nocker, L; Cornelis, E; Torfs, R
2004-12-01
Although stricter standards for vehicles will reduce emissions to air significantly by 2010, a number of problems will remain, especially related to particulate concentrations in cities, ground-level ozone, and CO2. To evaluate the impacts of new policy measures, tools need to be available that assess the potential benefits of these measures in terms of the vehicle fleet, fuel choice, modal choice, kilometers driven, emissions, and the impacts on public health and related external costs. The ExternE accounting framework offers the most up-to-date and comprehensive methodology to assess marginal external costs of energy-related pollutants. It combines emission models and air dispersion models at local and regional scales with dose-response functions and valuation rules. Vito has extended this accounting framework with data and models related to the future composition of the vehicle fleet and transportation demand to evaluate the impact of new policy proposals on air quality and aggregated (total) external costs by 2010. Special attention was given to uncertainty analysis. The uncertainty for more than 100 different parameters was combined in Monte Carlo simulations to assess the range of possible outcomes and the main drivers of these results. Although the impacts from emission standards and total fleet mileage look dominant at first, a number of other factors were found to be important as well. These include the number of diesel vehicles, inspection and maintenance (high-emitter cars), use of air conditioning, and heavy duty transit traffic.
A comparison of five forest interception models using global sensitivity and uncertainty analysis
NASA Astrophysics Data System (ADS)
Linhoss, Anna C.; Siegert, Courtney M.
2016-07-01
Interception by the forest canopy plays a critical role in the hydrologic cycle by removing a significant portion of incoming precipitation from the terrestrial component. While there are a number of existing physical models of forest interception, few studies have summarized or compared these models. The objective of this work is to use global sensitivity and uncertainty analysis to compare five mechanistic interception models including the Rutter, Rutter Sparse, Gash, Sparse Gash, and Liu models. Using parameter probability distribution functions of values from the literature, our results show that on average storm duration [Dur], gross precipitation [PG], canopy storage [S] and solar radiation [Rn] are the most important model parameters. On the other hand, empirical parameters used in calculating evaporation and drip (i.e. trunk evaporation as a proportion of evaporation from the saturated canopy [ɛ], the empirical drainage parameter [b], the drainage partitioning coefficient [pd], and the rate of water dripping from the canopy when canopy storage has been reached [Ds]) have relatively low levels of importance in interception modeling. As such, future modeling efforts should aim to decompose parameters that are the most influential in determining model outputs into easily measurable physical components. Because this study compares models, the choices regarding the parameter probability distribution functions are applied across models, which enables a more definitive ranking of model uncertainty.
Fang, Shoufan; Wente, Stephen; Gertner, George Z; Wang, Guangxing; Anderson, Alan
2002-08-01
The US Army Engineering Research Development Center (ERDC) uses a modified form of the Revised Universal Soil Loss Equation (RUSLE) to estimate spatially explicit rates of soil erosion by water across military training facilities. One modification involves the RUSLE support practice factor (P factor), which is used to account for the effect of disturbance by human activities on erosion rates. Since disturbance from off-road military vehicular traffic moving through complex landscapes varies spatially, a spatially explicit nonlinear regression model (disturbance model) is used to predict the distribution of P factor values across a training facility. This research analyzes the uncertainty in this model's disturbance predictions for the Fort Hood training facility in order to determine both the spatial distribution of prediction uncertainty and the contribution of different error sources to that uncertainty. This analysis shows that a three-category vegetation map used by the disturbance model was the greatest source of prediction uncertainty, especially for the map categories shrub and tree. In areas mapped as grass, modeling error (uncertainty associated with the model parameter estimates) was the largest uncertainty source. These results indicate that the use of a high-quality vegetation map that is periodically updated to reflect current vegetation distributions, would produce the greatest reductions in disturbance prediction uncertainty.
Omeroglu, P Yolci; Ambrus, A; Boyacioglu, D
2013-01-01
Extraction and clean-up constitute important steps in pesticide residue analysis. For the correct interpretation of analytical results, uncertainties of the extraction and clean-up steps should be taken into account when the combined uncertainty of the analytical result is estimated. In the scope of this study, uncertainties of the extraction and clean-up steps were investigated by spiking 14C-labelled chlorpyrifos into analytical portions of tomato, orange, apple, green bean, cucumber, jackfruit, papaya and starfruit. After each step, replicate measurements were carried out with a liquid scintillation counter. Uncertainties in the extraction and clean-up steps were estimated separately for every matrix and method combination by using the within-laboratory reproducibility standard deviation, and were characterised with the CV of recoveries. It was observed that the uncertainty of the ethyl acetate extraction step varied between 0.8% and 5.9%. The relative standard uncertainty of the clean-up step with dispersive SPE used in the method known as QuEChERS was estimated to be around 1.5% for tomato, apple and green beans. The highest variation of 4.8% was observed in cucumber. The uncertainty of the clean-up step with gel permeation chromatography ranged between 5.3% and 13.1%, and it was relatively higher than that obtained with the dispersive SPE method.
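The CV-of-recoveries characterization and the quadrature combination of step uncertainties can be sketched as follows; the replicate recoveries are hypothetical, and the 1.5% clean-up figure is taken from the abstract.

```python
import math

# Hypothetical replicate recoveries for one extraction step (fractions of 1).
recoveries = [0.98, 1.01, 0.99, 1.02, 1.00]

def cv_percent(xs):
    """Relative standard uncertainty as the CV (%) of replicate recoveries,
    using the sample (n-1) standard deviation."""
    m = sum(xs) / len(xs)
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    return 100.0 * s / m

u_extraction = cv_percent(recoveries)        # ~1.6% for these replicates
u_cleanup = 1.5                              # dispersive SPE figure (abstract)
u_both = math.sqrt(u_extraction ** 2 + u_cleanup ** 2)   # quadrature combination
```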
Uncertainty Analysis of Radar and Gauge Rainfall Estimates in the Russian River Basin
NASA Astrophysics Data System (ADS)
Cifelli, R.; Chen, H.; Willie, D.; Reynolds, D.; Campbell, C.; Sukovich, E.
2013-12-01
Radar Quantitative Precipitation Estimation (QPE) has been a very important application of weather radar since it was introduced and made widely available after World War II. Although great progress has been made over the last two decades, it is still a challenging process especially in regions of complex terrain such as the western U.S. It is also extremely difficult to make direct use of radar precipitation data in quantitative hydrologic forecasting models. To improve the understanding of rainfall estimation and distributions in the NOAA Hydrometeorology Testbed in northern California (HMT-West), extensive evaluation of radar and gauge QPE products has been performed using a set of independent rain gauge data. This study focuses on the rainfall evaluation in the Russian River Basin. The statistical properties of the different gridded QPE products will be compared quantitatively. The main emphasis of this study will be on the analysis of uncertainties of the radar and gauge rainfall products that are subject to various sources of error. The spatial variation analysis of the radar estimates is performed by measuring the statistical distribution of the radar base data such as reflectivity and by the comparison with a rain gauge cluster. The application of mean field bias values to the radar rainfall data will also be described. The uncertainty analysis of the gauge rainfall will be focused on the comparison of traditional kriging and conditional bias penalized kriging (Seo 2012) methods. This comparison is performed with the retrospective Multisensor Precipitation Estimator (MPE) system installed at the NOAA Earth System Research Laboratory. The independent gauge set will again be used as the verification tool for the newly generated rainfall products.
Report on INL Activities for Uncertainty Reduction Analysis of FY11
G. Palmiotti; H. Hiruta; M. Salvatores
2011-09-01
This report presents the status of activities performed at INL under the ARC Work Package on 'Uncertainty Reduction Analyses', whose main goal is the reduction of uncertainties, associated with nuclear data, in neutronic integral parameters of interest for the design of advanced fast reactors under consideration by the ARC program. First, an analysis of experiments was carried out. For both JOYO (the first Japanese fast reactor) and ZPPR-9 (a large-size zero-power plutonium-fueled experiment performed at ANL-W in Idaho) the performance of ENDF/B-VII.0 is quite satisfactory, except for the sodium void configurations of ZPPR-9, for which one has to take into account the approximations of the modeling. In fact, when a more detailed model is used (calculations performed at ANL in a companion WP), more reasonable results are obtained. A large effort was devoted to the analysis of the irradiation experiments PROFIL-1 and -2 and TRAPU, performed at the French fast reactor PHENIX. For these experiments a pre-release of the ENDF/B-VII.1 cross section files was also used, in order to provide validation feedback to the CSEWG nuclear data evaluation community. In the PROFIL experiments improvements can be observed for the ENDF/B-VII.1 capture data in 238Pu, 241Am, 244Cm, 97Mo, 151Sm, 153Eu, and for 240Pu(n,2n). On the other hand, 240,242Pu, 95Mo, 133Cs and 145Nd capture C/E results are worse. For the major actinides 235U and especially 239Pu, capture C/Es are underestimated. For fission products, 105,106Pd, 143,144Nd and 147,149Sm are significantly underestimated, while 101Ru and 151Sm are overestimated. Other C/E deviations from unity are within the combined experimental and calculated statistical uncertainty. From the TRAPU analysis, the major improvement is in the predicted 243Cm build-up, presumably due to an improved 242Cm capture evaluation. The COSMO experiment was also analyzed in order to provide useful feedback on fission cross sections. It was found that ENDF
Strain Gauge Balance Uncertainty Analysis at NASA Langley: A Technical Review
NASA Technical Reports Server (NTRS)
Tripp, John S.
1999-01-01
This paper describes a method to determine the uncertainties of measured forces and moments from multi-component force balances used in wind tunnel tests. A multivariate regression technique is first employed to estimate the uncertainties of the six balance sensitivities and 156 interaction coefficients derived from established balance calibration procedures. These uncertainties are then employed to calculate the uncertainties of force-moment values computed from observed balance output readings obtained during tests. Confidence and prediction intervals are obtained for each computed force and moment as functions of the actual measurands. Techniques are discussed for separate estimation of balance bias and precision uncertainties.
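The regression-uncertainty idea can be sketched in the univariate case. The real balance calibration regresses six outputs on 162 coefficients; the single-channel data below are hypothetical, and the sketch shows only how a fitted sensitivity acquires a standard error that then propagates into computed loads.

```python
import math

# Hypothetical single-channel calibration: applied load x (known) versus
# bridge output y (measured). Illustrative numbers only.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0, 3.0, 5.0, 7.0, 9.5]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

slope = sxy / sxx                   # fitted sensitivity
intercept = my - slope * mx
sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
s = math.sqrt(sse / (n - 2))        # residual standard deviation
se_slope = s / math.sqrt(sxx)       # standard error of the sensitivity
# A confidence interval on the sensitivity is slope +/- t * se_slope, and the
# same covariance information yields prediction intervals for computed loads.
```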
Hammonds, J.S.; Hoffman, F.O.; Bartell, S.M.
1994-12-01
This report presents guidelines for evaluating uncertainty in mathematical equations and computer models applied to assess human health and environmental risk. Uncertainty analyses involve the propagation of uncertainty in model parameters and model structure to obtain confidence statements for the estimate of risk and to identify the model components of dominant importance. Uncertainty analyses are required when there is no a priori knowledge about uncertainty in the risk estimate and when there is a chance that the failure to assess uncertainty may affect the selection of wrong options for risk reduction. Uncertainty analyses are effective when they are conducted in an iterative mode. When the uncertainty in the risk estimate is intolerable for decision-making, additional data are acquired for the dominant model components that contribute most to uncertainty. This process is repeated until the level of residual uncertainty can be tolerated. Analytical and numerical methods for error propagation are presented, along with methods for identifying the most important contributors to uncertainty. Monte Carlo simulation with either Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) is proposed as the most robust method for propagating uncertainty through either simple or complex models. A distinction is made between simulating a stochastically varying assessment endpoint (i.e., the distribution of individual risks in an exposed population) and quantifying uncertainty due to lack of knowledge about a fixed but unknown quantity (e.g., a specific individual, the maximally exposed individual, or the mean, median, or 95th percentile of the distribution of exposed individuals). Emphasis is placed on the need for subjective judgement to quantify uncertainty when relevant data are absent or incomplete.
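The LHS scheme mentioned above can be sketched in a few lines: each input dimension is cut into n equal-probability strata, one sample is drawn per stratum, and the strata are shuffled independently across dimensions.

```python
import random

def lhs(n, d, rng=None):
    """Latin hypercube sample of n points in [0,1)^d: each dimension is cut
    into n equal strata and exactly one point falls in each stratum."""
    rng = rng or random.Random()
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)                      # decorrelate strata across dims
        cols.append([(k + rng.random()) / n for k in perm])
    return [tuple(col[i] for col in cols) for i in range(n)]

pts = lhs(10, 2, random.Random(0))
# Unlike simple random sampling, every decile of each input is covered exactly
# once, which is why LHS typically needs fewer model runs for stable statistics.
```

Transforming each coordinate through an inverse CDF then yields stratified samples from arbitrary input distributions.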
NASA Astrophysics Data System (ADS)
Gronewold, A. D.; Wolpert, R. L.; Reckhow, K. H.
2007-12-01
Most probable number (MPN) and colony-forming-unit (CFU) are two estimates of fecal coliform bacteria concentration commonly used as measures of water quality in United States shellfish harvesting waters. The MPN is the maximum likelihood estimate (or MLE) of the true fecal coliform concentration based on counts of non-sterile tubes in serial dilution of a sample aliquot, indicating bacterial metabolic activity. The CFU is the MLE of the true fecal coliform concentration based on the number of bacteria colonies emerging on a growth plate after inoculation from a sample aliquot. Each estimating procedure has intrinsic variability and is subject to additional uncertainty arising from minor variations in experimental protocol. Several versions of each procedure (using different sized aliquots or different numbers of tubes, for example) are in common use, each with its own levels of probabilistic and experimental error and uncertainty. It has been observed empirically that the MPN procedure is more variable than the CFU procedure, and that MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the observed variability in, and discrepancy between, MPN and CFU measurements. We then explore how this variability and uncertainty might propagate into shellfish harvesting area management decisions through a two-phased modeling strategy. First, we apply our probabilistic model in a simulation-based analysis of future water quality standard violation frequencies under alternative land use scenarios, such as those evaluated under guidelines of the total maximum daily load (TMDL) program. Second, we apply our model to water quality data from shellfish harvesting areas which at present are closed (either conditionally or permanently) to shellfishing, to determine if alternative laboratory analysis procedures might have led to different
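The MPN as a maximum likelihood estimate can be sketched directly: assuming Poisson-distributed organism counts, a tube receiving sample volume v is positive with probability 1 - exp(-c v) at true concentration c. The three-dilution, five-tube design and the positive-tube counts below are hypothetical inputs for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mpn_mle(volumes, n_tubes, n_positive):
    """MLE of concentration (organisms/mL) from a serial-dilution tube test.

    Model: a tube receiving volume v is positive unless it contains zero
    organisms, i.e. P(positive) = 1 - exp(-c * v) for Poisson counts.
    """
    v = np.asarray(volumes, float)
    n = np.asarray(n_tubes, float)
    p = np.asarray(n_positive, float)

    def neg_log_lik(log_c):
        c = np.exp(log_c)
        prob = np.clip(1.0 - np.exp(-c * v), 1e-12, 1 - 1e-12)
        # log-likelihood (binomial constant dropped):
        #   sum p*log(prob) + (n - p)*log(exp(-c v))
        return -np.sum(p * np.log(prob) - (n - p) * c * v)

    res = minimize_scalar(neg_log_lik, bounds=(-10, 10), method="bounded")
    return np.exp(res.x)

# Hypothetical 3-dilution, 5-tube design: 10, 1, 0.1 mL aliquots, 4/2/0 positive.
c_hat = mpn_mle([10, 1, 0.1], [5, 5, 5], [4, 2, 0])
print(f"MPN ~ {100 * c_hat:.0f} per 100 mL")
```

The discreteness of the tube counts is what makes the MPN intrinsically more variable than a direct colony count, as the abstract's probabilistic model formalizes.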
Accounting for Multiple Sources of Uncertainty in the Statistical Analysis of Holocene Sea Levels
NASA Astrophysics Data System (ADS)
Cahill, N.; Parnell, A. C.; Kemp, A.; Horton, B.
2014-12-01
We perform a Bayesian statistical analysis on historical and late Holocene rates of sea-level change. The data that form the input to the statistical model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. The aims are to estimate rates of sea-level change, to determine when modern rates of rise began, and to observe how these rates have evolved over time. Many current methods for doing this use simple linear regression to estimate rates. This is often inappropriate, as it is too rigid and can ignore uncertainties that arise as part of the data collection exercise, which can lead to over-confidence in the sea-level trends being characterized. The proposed model places a Gaussian process prior on the rate process (i.e. the process that determines how rates of sea-level change evolve over time). The likelihood of the observed data is the integral of this process. When dealing with proxy reconstructions, the model is set in an errors-in-variables framework so as to take account of age uncertainty. It is also necessary to account for glacio-isostatic adjustment, which introduces a covariance between individual age and sea-level observations. This method allows for the estimation of the rate process with full consideration of all sources of uncertainty. The model captures the continuous and dynamic evolution of sea-level change, and results show that modern rates of rise are consistently increasing. Analysis of a global tide-gauge record (Church and White, 2011) indicated that the rate of sea-level rise has increased continuously since 1880 AD and is currently 1.9 mm/yr (95% credible interval of 1.84 to 2.03 mm/yr). Applying the model to a proxy reconstruction from North Carolina (Kemp et al., 2011) indicated that the mean rate of rise in this locality since the middle of the 19th century (current rate of 2.44 mm/yr with a 95% credible interval of 1.91 to 3.01 mm/yr) is unprecedented in at least the last 2000 years.
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...
Components of uncertainty in species distribution analysis: a case study of the Great Grey Shrike.
Dormann, Carsten F; Purschke, Oliver; García Márquez, Jaime R; Lautenbach, Sven; Schröder, Boris
2008-12-01
Sophisticated statistical analyses are common in ecological research, particularly in species distribution modeling. The effects of sometimes arbitrary decisions during the modeling procedure on the final outcome are difficult to assess, and to date are largely unexplored. We conducted an analysis quantifying the contribution of uncertainty in each step during the model-building sequence to variation in model validity and climate change projection uncertainty. Our study system was the distribution of the Great Grey Shrike in the German federal state of Saxony. For each of four steps (data quality, collinearity method, model type, and variable selection), we ran three different options in a factorial experiment, leading to 81 different model approaches. Each was subjected to a fivefold cross-validation, measuring area under curve (AUC) to assess model quality. Next, we used three climate change scenarios times three precipitation realizations to project future distributions from each model, yielding 729 projections. Again, we analyzed which step introduced most variability (the four model-building steps plus the two scenario steps) into predicted species prevalences by the year 2050. Predicted prevalences ranged from a factor of 0.2 to a factor of 10 of present prevalence, with the majority of predictions between 1.1 and 4.2 (inter-quartile range). We found that model type and data quality dominated this analysis. In particular, artificial neural networks yielded low cross-validation robustness and gave very conservative climate change predictions. Generalized linear and additive models were very similar in quality and predictions, and superior to neural networks. Variations in scenarios and realizations had very little effect, due to the small spatial extent of the study region and its relatively small range of climatic conditions. We conclude that, for climate projections, model type and data quality were the most influential factors. Since comparison of model
Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis
NASA Astrophysics Data System (ADS)
Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.
2016-07-01
In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.
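The bootstrap idea at the core of the method (resample the detected events with replacement, then recompute the statistic of interest on each realisation) can be sketched with a toy stand-in for list-mode data; the region tags, weights, and counts below are hypothetical and do not reflect the actual scanner data format:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for list-mode data: each detected event carries a region
# tag (1 = target region, 0 = reference region) and a correction weight.
n_events = 200_000
tags = rng.integers(0, 2, size=n_events)
weights = np.where(tags == 1, 1.2, 1.0)  # target region ~20% "hotter"

def suvr_like(t, w):
    # Ratio of weighted counts in target vs reference region (SUVR-like).
    return w[t == 1].sum() / w[t == 0].sum()

point = suvr_like(tags, weights)

# Bootstrap: resample events with replacement, recompute the statistic.
B = 200
reps = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n_events, size=n_events)
    reps[b] = suvr_like(tags[idx], weights[idx])

print(f"ratio = {point:.3f} +/- {reps.std(ddof=1):.4f} (bootstrap SD)")
```

The point of the GPU implementation in the paper is precisely that each such realisation, trivial here, requires a full reconstruction chain in practice, so generating hundreds of them is only feasible with fast streamed processing.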
Uncertainty of tipping elements on risk analysis in hydrology under climate change
NASA Astrophysics Data System (ADS)
Kiguchi, M.; Iseri, Y.; Tawatari, R.; Kanae, S.; Oki, T.
2015-12-01
Risk analysis in this study characterizes the events that could be caused by climate change and estimates their effects on society. In order to characterize climate change risks, events that might be caused by climate change are investigated, focusing on critical geophysical phenomena such as changes in the thermohaline circulation (THC) of the oceans and the large-scale melting of the Greenland and other ice sheets. The results of numerical experiments with climate models and of paleoclimate studies are referenced in listing these phenomena, and their trigger mechanisms, likelihood of occurrence, and relationship to the global climate are clarified. To clarify the relationship between the RCP scenarios and tipping elements, we identified the year in which the tipping elements "Arctic summer sea ice" and "Greenland ice sheet" would be triggered, using the increase in global average temperature in 5 GCMs under the RCP scenarios (2.6, 4.5, 6.0, and 8.5) from Zickfeld et al. (2013) and IPCC (2013), together with the tipping point of each tipping element from IPCC (2013). For the "Greenland ice sheet" (whose tipping point takes a value within the range of 1.0 °C to 4.0 °C), we found that the ice sheet may melt down when the tipping point is set to its lowest value of 1.0 °C; on the other hand, when the tipping point is set to 4.0 °C, it may not melt down except under RCP 8.5. This illustrates the uncertainty of the tipping point itself. In future work, it is necessary to consider how to reflect such uncertainty in hydrological risk analysis.
Uncertainty in patient set-up margin analysis in radiation therapy.
Suzuki, Junji; Tateoka, Kunihiko; Shima, Katsumi; Yaegashi, Yuji; Fujimoto, Kazunori; Saitoh, Yuichi; Nakata, Akihiro; Abe, Tadanori; Nakazawa, Takuya; Sakata, Kouichi; Hareyama, Masato
2012-07-01
We investigated the uncertainty in patient set-up margin analysis with a small dataset consisting of a limited number of clinical cases over a short time period, and propose a method for determining the optimum set-up margin. Patient set-up errors from 555 registration images of 15 patients with prostate cancer were tested for normality using a quantile-quantile (Q-Q) plot and a Kolmogorov-Smirnov test, under the hypothesis that the data were not normally distributed. The range of set-up errors within the 95% interval of the entire patient data histogram was compared with that of the equivalent normal distribution. The patient set-up errors were not normally distributed. When the set-up error distribution was assumed to be normal, the actual set-up error was underestimated in some patients and overestimated in others. When using a limited dataset of patient set-up errors, consisting of only a small number of cases over a short period of clinical practice, the 2.5% and 97.5% points of the actual patient data histogram (the percentile method) should be used for estimating the set-up margin. Since set-up error data are usually not normally distributed, these intervals should provide a more accurate estimate of the set-up margin. In this way, the uncertainty in patient set-up margin analysis in radiation therapy can be reduced.
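The comparison between the percentile method and the normality assumption can be sketched with synthetic set-up errors drawn from a heavy-tailed mixture (a hypothetical stand-in for the clinical data, here 555 values to echo the study's sample size):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical set-up errors (mm): a two-component Gaussian mixture,
# mimicking non-normal clinical data over a short time period.
errors = np.concatenate([rng.normal(0, 1.0, 450), rng.normal(0, 3.0, 105)])

# Kolmogorov-Smirnov test against a fitted normal (the Q-Q plot is the
# visual analogue of this comparison).
z = (errors - errors.mean()) / errors.std(ddof=1)
stat, p = stats.kstest(z, "norm")
print(f"KS statistic = {stat:.3f}, p = {p:.3g}")

# Percentile method: use the 2.5% and 97.5% points of the actual
# histogram rather than mean +/- 1.96*SD, which assumes normality.
lo, hi = np.percentile(errors, [2.5, 97.5])
g_lo = errors.mean() - 1.96 * errors.std(ddof=1)
g_hi = errors.mean() + 1.96 * errors.std(ddof=1)
print(f"percentile 95% range:        [{lo:.2f}, {hi:.2f}] mm")
print(f"normal-assumption 95% range: [{g_lo:.2f}, {g_hi:.2f}] mm")
```

With heavy-tailed data the two intervals differ, which is exactly the under/overestimation of the margin that the study reports when normality is assumed.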
NASA Astrophysics Data System (ADS)
Zhang, Y.; Li, S.
2014-12-01
Geologic carbon sequestration (GCS) is proposed for the Nugget Sandstone in Moxa Arch, a regional saline aquifer with a large storage potential. For a proposed storage site, this study builds a suite of increasingly complex conceptual "geologic" model families, using subsets of the site characterization data: a homogeneous model family, a stationary petrophysical model family, a stationary facies model family with sub-facies petrophysical variability, and a non-stationary facies model family (with sub-facies variability) conditioned to soft data. These families, representing alternative conceptual site models built with increasing data, were simulated with the same CO2 injection test (50 years at 1/10 Mt per year), followed by 2950 years of monitoring. Using the Design of Experiment, an efficient sensitivity analysis (SA) is conducted for all families, systematically varying uncertain input parameters. Results are compared among the families to identify parameters that have a first-order impact on predicting the CO2 storage ratio (SR) at both the end of injection and the end of monitoring. At this site, geologic modeling factors do not significantly influence the short-term prediction of the storage ratio; they become important over the monitoring period, but only for those families in which such factors are represented. Based on the SA, a response surface analysis is conducted to generate prediction envelopes of the storage ratio, which are compared among the families at both times. Results suggest a large uncertainty in the predicted storage ratio given the uncertainties in model parameters and modeling choices: SR varies from 5-60% (end of injection) to 18-100% (end of monitoring), although its variation among the model families is relatively minor. Moreover, long-term leakage risk is considered small at the proposed site. In the lowest-SR scenarios, all families predict gravity-stable supercritical CO2 migrating toward the bottom of the aquifer. In the highest
Uncertainty in the analysis of the overall equipment effectiveness on the shop floor
NASA Astrophysics Data System (ADS)
Rößler, M. P.; Abele, E.
2013-06-01
In this article an approach is presented that supports transparency regarding the effectiveness of manufacturing equipment by combining fuzzy set theory with the method of overall equipment effectiveness analysis. One of the key principles of lean production, and a fundamental task in production optimization projects, is the prior analysis of the current state of a production system using key performance indicators in order to derive possible future states. In current practice, overall equipment effectiveness analysis is usually performed by accumulating different machine states through decentralized data collection, without consideration of uncertainty. In manual or semi-automated plant data collection systems, the quality of the derived data often diverges and leads optimization teams to distorted conclusions about the real optimization potential of manufacturing equipment. The method discussed in this paper helps practitioners obtain more reliable results in the analysis phase, and thus better outcomes from optimization projects. Results obtained from a case study are discussed.
Tomlinson, M J
2016-09-01
This article suggests that diagnostic semen analysis has no more clinical value today than it had 25-30 years ago, and that both the confusion surrounding its evidence base (in terms of its relationship with conception) and the low level of confidence in the clinical setting are attributable to an associated high level of 'uncertainty'. Consideration of the concept of measurement uncertainty is mandatory for medical laboratories applying for the ISO 15189 standard. It is evident that the entire semen analysis process is prone to error at every step, from specimen collection to the reporting of results, which serves to compound the uncertainty associated with diagnosis or prognosis. Perceived adherence to published guidelines for the assessment of sperm concentration, motility and morphology does not guarantee a reliable and reproducible test result. Moreover, the high level of uncertainty associated with manual sperm motility and morphology assessment can be attributed to subjectivity and the lack of a traceable standard. This article describes where and why uncertainty exists and suggests that semen analysis will continue to be of limited value until uncertainty is more adequately considered and addressed. Although professional guidelines for good practice have provided the foundations for testing procedures for many years, the risk in following rather prescriptive guidance to the letter is that, unless it is based on an overwhelmingly firm evidence base, the quality of semen analysis will remain poor and progress towards the development of more innovative methods for investigating male infertility will be slow. PMID:27529487
Bean, J.E.; Berglund, J.W.; Davis, F.J.; Economy, K.; Garner, J.W.; Helton, J.C.; Johnson, J.D.; MacKinnon, R.J.; Miller, J.; O'Brien, D.G.; Ramsey, J.L.; Schreiber, J.D.; Shinta, A.; Smith, L.N.; Stockman, C.; Stoelzel, D.M.; Vaughn, P.
1998-09-01
The Waste Isolation Pilot Plant (WIPP) is located in southeastern New Mexico and is being developed by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. A detailed performance assessment (PA) for the WIPP was carried out in 1996 and supports an application by the DOE to the U.S. Environmental Protection Agency (EPA) for the certification of the WIPP for the disposal of TRU waste. The 1996 WIPP PA uses a computational structure that maintains a separation between stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, with stochastic uncertainty arising from the many possible disruptions that could occur over the 10,000 yr regulatory period that applies to the WIPP and subjective uncertainty arising from the imprecision with which many of the quantities required in the PA are known. Important parts of this structure are (1) the use of Latin hypercube sampling to incorporate the effects of subjective uncertainty, (2) the use of Monte Carlo (i.e., random) sampling to incorporate the effects of stochastic uncertainty, and (3) the efficient use of the necessarily limited number of mechanistic calculations that can be performed to support the analysis. The use of Latin hypercube sampling generates a mapping from imprecisely known analysis inputs to analysis outcomes of interest that provides both a display of the uncertainty in analysis outcomes (i.e., uncertainty analysis) and a basis for investigating the effects of individual inputs on these outcomes (i.e., sensitivity analysis). The sensitivity analysis procedures used in the PA include examination of scatterplots, stepwise regression analysis, and partial correlation analysis. Uncertainty and sensitivity analysis results obtained as part of the 1996 WIPP PA are presented and discussed. Specific topics considered include two phase flow in the vicinity of the repository, radionuclide release from the repository, fluid flow and radionuclide
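The Latin hypercube mapping from imprecisely known inputs to outcomes, followed by a simple sensitivity screen, can be sketched as follows. The three-input surrogate model is hypothetical, and Spearman rank correlation stands in for the stepwise-regression and partial-correlation procedures used in the PA:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N, K = 500, 3  # LHS samples, uncertain inputs

# Latin hypercube sample of K inputs on [0, 1): one draw per equiprobable
# stratum, with the strata independently permuted for each input.
strata = rng.permuted(np.tile(np.arange(N), (K, 1)), axis=1).T
u = (strata + rng.random((N, K))) / N

# Hypothetical surrogate for a mechanistic-model outcome (illustrative
# only): input x0 dominates, x1 is secondary, x2 is near-irrelevant.
x0, x1, x2 = u.T
y = 10 * x0 + 2 * x1**2 + 0.1 * x2 + rng.normal(0, 0.5, N)

# Sensitivity screen: rank correlation of each input with the outcome.
rhos = []
for k in range(K):
    rho, _ = stats.spearmanr(u[:, k], y)
    rhos.append(rho)
    print(f"input x{k}: Spearman rho = {rho:+.3f}")
```

The same sampled mapping supports both uses named in the abstract: the spread of y displays outcome uncertainty, and the per-input correlations rank the contributors to it.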
Frey, H. Christopher; Rhodes, David S.
1999-04-30
This is Volume 1 of a two-volume set of reports describing work conducted at North Carolina State University sponsored by Grant Number DE-FG05-95ER30250 by the U.S. Department of Energy. The title of the project is “Quantitative Analysis of Variability and Uncertainty in Acid Rain Assessments.” The work conducted under sponsorship of this grant pertains primarily to two main topics: (1) development of new methods for quantitative analysis of variability and uncertainty applicable to any type of model; and (2) analysis of variability and uncertainty in the performance, emissions, and cost of electric power plant combustion-based NOx control technologies. These two main topics are reported separately in Volumes 1 and 2.
NASA Technical Reports Server (NTRS)
da Silva, Arlindo; Redder, Christopher
2010-01-01
MERRA is a NASA reanalysis for the satellite era using a major new version of the Goddard Earth Observing System Data Assimilation System Version 5 (GEOS-5). The project focuses on historical analyses of the hydrological cycle on a broad range of weather and climate time scales and places the NASA EOS suite of observations in a climate context. The characterization of uncertainty in reanalysis fields is a commonly requested feature by users of such data. While intercomparison with reference datasets is common practice for ascertaining the realism of the datasets, such studies are typically restricted to long-term climatological statistics and seldom provide state-dependent measures of the uncertainties involved. In principle, variational data assimilation algorithms have the ability to produce error estimates for the analysis variables (typically surface pressure, winds, temperature, moisture and ozone) consistent with the assumed background and observation error statistics. However, these "perceived error estimates" are expensive to obtain and are limited by the somewhat simplistic errors assumed in the algorithm. The observation-minus-forecast residuals (innovations), a by-product of any assimilation system, constitute a powerful tool for estimating the systematic and random errors in the analysis fields. Unfortunately, such data are usually not readily available with reanalysis products, often requiring the tedious decoding of large datasets in not-so-user-friendly file formats. With MERRA we have introduced a gridded version of the observations/innovations used in the assimilation process, using the same grid and data formats as the regular datasets. This dataset empowers the user to conveniently perform observing-system-related analyses and error estimates. The scope of this dataset will be briefly described. We will present a systematic analysis of MERRA innovation time series for the conventional observing system, including maximum
NASA Astrophysics Data System (ADS)
Campolina, Daniel de A. M.; Lima, Claubia P. B.; Veloso, Maria Auxiliadora F.
2014-06-01
For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for best-estimate calculations, which have been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in the input parameters of the reactor considered included geometry dimensions and densities. This demonstrated the capacity of the sampling-based method for burnup calculations when the sample size is optimized and many parameter uncertainties are investigated together in the same input.
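The Wilks sample-size optimization referenced above can be reproduced directly: find the smallest number of code runs N such that the sample extremes bound the 95%-content interval with 95% confidence. For the two-sided 95/95 case used here the answer is 93 runs:

```python
def wilks_sample_size(gamma=0.95, beta=0.95, two_sided=True):
    """Smallest N such that order-statistic tolerance limits cover a
    gamma-content interval with confidence beta (Wilks' formula)."""
    n = 1
    while True:
        if two_sided:
            # P(sample min/max bracket the central gamma-content interval)
            conf = 1.0 - gamma**n - n * (1.0 - gamma) * gamma ** (n - 1)
        else:
            # P(sample max exceeds the gamma quantile)
            conf = 1.0 - gamma**n
        if conf >= beta:
            return n
        n += 1

print(wilks_sample_size(two_sided=False))  # 59 runs, one-sided 95/95
print(wilks_sample_size(two_sided=True))   # 93 runs, two-sided 95/95
```

This is why sampling-based uncertainty propagation with an expensive Monte Carlo transport code becomes feasible: the required number of runs depends only on the coverage/confidence targets, not on how many input parameters are varied together.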
NASA Astrophysics Data System (ADS)
Sittig, S.; Vrugt, J. A.; Kasteel, R.; Groeneweg, J.; Vereecken, H.
2011-12-01
Persistent antibiotics in the soil potentially contaminate the groundwater and affect the quality of drinking water. To improve our understanding of antibiotic transport in soils, we performed laboratory transport experiments in soil columns under constant irrigation conditions with repeated applications of chloride and radio-labeled SDZ. The tracers were incorporated in the first centimeter, either with pig manure or in solution. Breakthrough curves and concentration profiles of the parent compound and the main transformation products were measured. The goal is to describe the observed nonlinear and kinetic transport behavior of SDZ. Our analysis starts with synthetic transport data for the given laboratory flow conditions, for tracers that exhibit increasingly complex interactions with the solid phase. This first step is necessary to benchmark our inverse modeling approach for ideal situations. We then analyze the transport behavior using the column experiments in the laboratory. Our analysis uses a Markov chain Monte Carlo sampler (the Differential Evolution Adaptive Metropolis algorithm, DREAM) to efficiently search the parameter space of an advection-dispersion model. Sorption of the antibiotics to the soil was described using a model that accounts for both reversible and irreversible sorption. This presentation will discuss our initial findings. We will present the data of our laboratory experiments along with an analysis of parameter uncertainty.
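The MCMC idea behind this kind of parameter-uncertainty analysis can be sketched with a plain Metropolis random walk; DREAM itself is an adaptive multi-chain refinement of this scheme. The two-parameter exponential-decay model below is a hypothetical stand-in for the advection-dispersion model, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic "breakthrough curve": y = a * exp(-b * t) + noise.
t = np.linspace(0, 10, 40)
a_true, b_true, sigma = 2.0, 0.4, 0.05
y_obs = a_true * np.exp(-b_true * t) + rng.normal(0, sigma, t.size)

def log_post(theta):
    a, b = theta
    if a <= 0 or b <= 0:  # flat prior restricted to positive parameters
        return -np.inf
    resid = y_obs - a * np.exp(-b * t)
    return -0.5 * np.sum(resid**2) / sigma**2  # Gaussian likelihood

# Plain Metropolis random walk over (a, b).
step = np.array([0.02, 0.01])
theta = np.array([1.0, 1.0])  # deliberately poor starting point
lp = log_post(theta)
samples = []
for _ in range(20_000):
    prop = theta + rng.normal(0, 1, 2) * step
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)
chain = np.array(samples[5_000:])  # discard burn-in

print("posterior mean (a, b):", chain.mean(axis=0).round(3))
```

The retained chain approximates the joint posterior, so parameter uncertainty falls out as percentiles of the samples rather than from a linearized covariance.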
Analysis of the uncertainty in rainfall forecasts obtained with a probabilistic nowcasting technique
NASA Astrophysics Data System (ADS)
Buil, Álex; Berenguer, Marc; Sempere-Torres, Daniel
2014-05-01
Nowadays different methodologies have been developed for very short-term precipitation forecasting based on radar observations. When the advection of precipitation explains a significant portion of the temporal evolution of precipitation, Lagrangian persistence is the most appropriate method. Unfortunately, this is not the case in convective precipitation episodes, because the growth and decay of precipitation is generally fast and advection provides little information. It is then necessary to introduce probabilistic nowcasting methods that allow the uncertainty associated with the temporal evolution of precipitation to be characterized. SBMcast (Berenguer et al., 2011) is an ensemble nowcasting algorithm based on Lagrangian extrapolation of recent radar observations. It generates a set of future rainfall scenarios (ensemble members) compatible with the observations and preserving the spatial and temporal structure of the rainfall field according to the String of Beads model. The parameters used to generate a member of the ensemble are the time series of a set of variables that model the rainfall field at two levels: the global scale and the pixel scale. We have analyzed these two components of SBMcast with the aim of identifying the role that each component plays in the resulting forecast uncertainty. The final objective of this analysis is to understand the expected impact of using additional information to constrain each part of the algorithm. Conventional scores have been used to compare SBMcast with two reference algorithms: deterministic Lagrangian extrapolation, and the probabilistic "Local Lagrangian" technique [the technique that demonstrated the best skill among those analyzed by Germann and Zawadzki (2004)]. The results have been obtained for a set of rainfall episodes in the vicinity of Barcelona, Catalonia (Spain) using the observations of the Catalan Weather Service radar network. References Berenguer, M., D. Sempere-Torres, and G. Pegram, 2011
NASA Astrophysics Data System (ADS)
Michelioudakis, Dimitrios G.; Hobbs, Richard W.; Caiado, Camila C. S.
2016-04-01
multivariate posterior distribution. The novelty of our approach, and the major difference compared to the traditional semblance-spectrum velocity analysis procedure, is the calculation of the uncertainty of the output model. Because the model is able to estimate the credibility intervals of the corresponding interval velocities, we can produce the most probable PSDM images in an iterative manner. The depths extracted using our statistical algorithm are in very good agreement with the key horizons retrieved from the drilled core DSDP-258, showing that the Bayesian model is able to control the depth migration of the seismic data and to estimate the uncertainty of the depths to the drilling targets.
Clarke, J.U.; McFarland, V.A.
2000-02-01
In regulatory evaluations of contaminated sediments, an equilibrium partitioning-based screening test called theoretical bioaccumulation potential (TBP) is often performed to estimate the probable concentrations of neutral organic contaminants that would eventually accumulate in aquatic organisms from continuous exposure to a sediment. The TBP is calculated from contaminant concentration and organic carbon content of the sediment, lipid content of target organisms, and a partition coefficient, usually the biota-sediment accumulation factor (BSAF). However, routine applications of TBP have not included analysis of uncertainty. This paper demonstrates two methods for uncertainty analysis of TBP: a computational method that incorporates random and systematic error and a simulation method using bootstrap resampling of replicated model input parameters to calculate statistical uncertainty measures. For prediction of polynuclear aromatic hydrocarbon (PAH) bioaccumulation in bivalves exposed to contaminated sediments, uncertainty as a factor of TBP ranged from 1.2 to 4.8 using the computational method and 0.5 to 1.9 based on bootstrap 95% confidence intervals. Sensitivity analysis indicated that BSAF parameters, especially tissue contaminant concentration and lipid content, contributed most to TBP uncertainty. In bootstrap tests of significance, TBP significantly over- or underestimated actual PAH bioaccumulation in bivalves in 41% and 10% of comparisons, respectively.
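The simulation method can be sketched as a bootstrap over the replicated TBP inputs, assuming the standard equilibrium-partitioning form TBP = BSAF x (Cs / fOC) x fLipid; all replicate values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical replicated measurements of the TBP inputs:
cs = np.array([120.0, 135.0, 110.0, 128.0])  # sediment PAH, ng/g dry wt
foc = np.array([0.021, 0.019, 0.023])        # sediment organic-carbon fraction
flip = np.array([0.012, 0.015, 0.011])       # organism lipid fraction
bsaf = np.array([1.7, 2.4, 1.2, 2.0, 1.5])   # biota-sediment accumulation factor

def tbp(cs_m, foc_m, flip_m, bsaf_m):
    # Equilibrium-partitioning form: tissue conc = BSAF * (Cs/fOC) * fLipid.
    return bsaf_m * (cs_m / foc_m) * flip_m

point = tbp(cs.mean(), foc.mean(), flip.mean(), bsaf.mean())

# Bootstrap: resample each replicated input with replacement, recompute TBP.
B = 5_000
reps = np.array([
    tbp(rng.choice(cs, cs.size).mean(), rng.choice(foc, foc.size).mean(),
        rng.choice(flip, flip.size).mean(), rng.choice(bsaf, bsaf.size).mean())
    for _ in range(B)
])
lo, hi = np.percentile(reps, [2.5, 97.5])
print(f"TBP = {point:.0f} ng/g, bootstrap 95% CI [{lo:.0f}, {hi:.0f}]")
```

Because the BSAF replicates dominate the spread here, the bootstrap interval mostly reflects their variability, mirroring the paper's finding that the BSAF parameters contribute most to TBP uncertainty.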
Best-estimate plus uncertainty thermal-hydraulic stability analysis of BWRs using TRACG code
Vedovi, J.; Yang, J.; Klebanov, L.; Vreeland, D. G.; Zino, J. F.
2012-07-01
on the Minimum Critical Power Ratio (MCPR) performance. The purpose of the DSS-CD TRACG analysis is to confirm the inherent MCPR margin afforded by the solution design. This paper presents the Best Estimate Plus Uncertainty (BEPU) DSS-CD TRACG methodology and its application to BWR Thermal-Hydraulic (T-H) stability analyses. The statistical Code Scaling, Applicability and Uncertainty (CSAU) methodology (defined in NUREG/CR-5249) is used to calculate the MCPR uncertainty. The TRACG simulation includes a full-core individual-bundle model in which each fuel bundle is modeled as an individual T-H channel. The complete CSAU analysis of the full-core individual-bundle model is an innovative solution that represents the state of the art in BWR stability analysis and is the first full statistical analysis applied to stability safety analyses. The adoption of BEPU methodologies for stability analyses advances the understanding of the associated physical phenomena and maintains the safety of reactor plant operation in an expanded operating domain with up-rated power. (authors)
Uncertainty Analysis of non-point source pollution control facilities design techniques in Korea
NASA Astrophysics Data System (ADS)
Lee, J.; Okjeong, L.; Gyeong, C. B.; Park, M. W.; Kim, S.
2015-12-01
The design of non-point source control facilities in Korea rests largely on three elements: the stormwater capture ratio, the stormwater load capture ratio, and the pollutant reduction efficiency of the facility. The stormwater capture ratio is given by a design formula as a function of the water quality treatment capacity: the greater the capacity, the more stormwater the facility intercepts. The stormwater load capture ratio is defined as the ratio of the load entering the facility to the total pollutant load generated in the target catchment, and is given by a design formula expressed as a function of the stormwater capture ratio. Estimating the stormwater capture ratio and load capture ratio requires extensive quantitative analysis of the hydrologic processes involved in pollutant emission, yet these formulas have been applied without any verification. Because systematic monitoring programs were insufficient, verification of these formulas was fundamentally impossible. Recently, however, the Korean Ministry of Environment has conducted a long-term systematic monitoring project, making verification possible. In this presentation, the stormwater capture ratio and load capture ratio are re-estimated using actual TP data obtained from the long-term monitoring program at the Noksan industrial complex in Busan, Korea. Through the re-estimation process, the uncertainty embedded in the design process applied until now is shown in quantitative terms. In addition, the uncertainties in the stormwater capture ratio estimate and in the stormwater load capture ratio estimate are expressed separately to quantify their relative impacts on the overall design process for non-point pollutant control facilities. Finally, a SWMM-Matlab coupling module for model parameter estimation is introduced. Acknowledgement This subject is supported by Korea Ministry of Environment as "The Eco Innovation Project : Non
Issues of model accuracy and uncertainty evaluation in the context of multi-model analysis
NASA Astrophysics Data System (ADS)
Hill, M. C.; Foglia, L.; Mehl, S.; Burlando, P.
2009-12-01
Thorough consideration of alternative conceptual models is an important and often neglected step in the study of many natural systems, including groundwater systems. Many modelling efforts are therefore less useful for system management than they could be, because they exclude alternatives considered important by some stakeholders, which makes them more vulnerable to criticism. Important steps include identifying reasonable alternative models and possibly using model discrimination criteria and associated model averaging to improve predictions and measures of prediction uncertainty. Here we use the computer code MMA (Multi-Model Analysis) to: (1) manage the model discrimination statistics produced by many alternative models, (2) manage predictions, and (3) calculate measures of prediction uncertainty. Steps (1) to (3) also assist in understanding the physical processes most important to model fit and to the predictions of interest. We focus on the ability of a groundwater model constructed using MODFLOW to predict heads and flows in the Maggia Valley, Southern Switzerland, where connections between groundwater, surface water and ecology are of interest. Sixty-four alternative models were designed deterministically and differ in how the river, recharge, bedrock topography, and hydraulic conductivity are characterized. None of the models correctly represents heads and flows in the northern and southern parts of the valley simultaneously. A cross-validation experiment was conducted to compare model discrimination results with the ability of the models to predict eight heads and three flows to the stream along three reaches midway along the valley, where ecological consequences and, therefore, model accuracy are of great concern. Results suggest: (1) Model averaging appears to have improved prediction accuracy in the problem considered. (2) The most significant model improvements occurred with the introduction of spatially distributed recharge and improved bedrock topography. (3) The
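Model discrimination statistics can be converted into averaging weights in the standard information-criterion way. The sketch below assumes Akaike-style weights purely for illustration; MMA supports several discrimination criteria, and this is not MMA's code:

```python
import math

def ic_weights(ic_values):
    # Information-criterion weights: w_i = exp(-0.5 * delta_i) / sum_j(...),
    # where delta_i is each model's criterion value minus the best (lowest).
    best = min(ic_values)
    rel = [math.exp(-0.5 * (v - best)) for v in ic_values]
    total = sum(rel)
    return [r / total for r in rel]

def averaged_prediction(predictions, weights):
    # Model-averaged prediction plus the between-model variance component,
    # which measures how strongly the alternative models disagree.
    mean = sum(w * p for w, p in zip(weights, predictions))
    var_between = sum(w * (p - mean) ** 2
                      for w, p in zip(weights, predictions))
    return mean, var_between
```

Models with nearly equal criterion values share the weight almost evenly, which is why model averaging can outperform any single model when none is clearly best.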
Gaming Change: A Many-objective Analysis of Water Supply Portfolios under Uncertainty
NASA Astrophysics Data System (ADS)
Reed, P. M.; Kasprzyk, J.; Characklis, G.; Kirsch, B.
2008-12-01
This study explores the uncertainty and tradeoffs associated with up to six conflicting water supply portfolio planning objectives. A ten-year Monte Carlo simulation model is used to evaluate water supply portfolios blending permanent rights, adaptive options contracts, and spot leases for a single city in the Lower Rio Grande Valley. Historical records of reservoir mass balance, lease pricing, and demand serve as the source data for the Monte Carlo simulation. Portfolio planning decisions include the initial volume and annual increases of permanent rights, thresholds for an adaptive options contract, and anticipatory decision rules for purchasing leases and exercising options. Our work distinguishes three cases: (1) permanent rights as the sole source of supply, (2) permanent rights and adaptive options, and (3) a combination of permanent rights, adaptive options, and leases. The problems have been formulated such that cases 1 and 2 are sub-spaces of the six objective formulation used for case 3. Our solution sets provide the tradeoff surfaces between portfolios' expected values for cost, cost variability, reliability, frequency of purchasing permanent rights increases, frequency of using leases, and dropped (or unused) transfers of water. The tradeoff surfaces for the three cases show that options and leases have a dramatic impact on the marginal costs associated with improving the efficiency and reliability of urban water supplies. Moreover, our many-objective analysis permits the discovery of a broad range of high quality portfolio strategies. We differentiate the value of adaptive options versus leases by testing a representative subset of optimal portfolios' abilities to effectively address regional increases in demand during drought periods. These results provide insights into the tradeoffs inherent to a more flexible, portfolio-style approach to urban water resources management, an approach that should become increasingly attractive in an environment of
NASA Astrophysics Data System (ADS)
Hill, Mary
2016-04-01
Combining different data types can seem like combining apples and oranges, yet integrating different data types into inverse modeling and uncertainty quantification is important for all types of environmental systems. There are two main methods for combining different data types. - Single-objective optimization (SOO) with weighting. - Multi-objective optimization (MOO), in which coefficients for data groups are defined and changed during model development. SOO and MOO are related in that different coefficient values in MOO are equivalent to considering alternative weightings. MOO methods often require many model runs and tend to be much more computationally expensive than SOO, but for SOO the weighting needs to be defined. When alternative models are more important to consider than alternative weightings, SOO can be advantageous (Lu et al. 2012). This presentation considers how to determine the weighting when using SOO. A saltwater intrusion example is used to examine two methods of weighting three data types: weighting based on contributions to the objective function, as suggested by Anderson et al. (2015), and error-based weighting, as suggested by Hill and Tiedeman (2007). The consequences of weighting for measures of uncertainty, the importance and interdependence of parameters, and the importance of observations are presented. This work is relevant to many types of environmental modeling, including climate models, because integrating many kinds of data is often important. The advent of rainfall-runoff models with fewer numerical daemons, such as TOPKAPI and SUMMA, makes the convenient model analysis methods used in this work more useful for many hydrologic problems.
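Error-based weighting of a single objective function can be sketched as follows. This is a minimal illustration assuming independent observation errors; the group names and standard deviations are hypothetical, not from the saltwater intrusion example:

```python
def weighted_sse(residuals_by_group, sigma_by_group):
    # Error-based weighting (in the spirit of Hill and Tiedeman, 2007):
    # each observation is weighted by the inverse of its assumed error
    # variance, making residuals from different data types dimensionless
    # and commensurable in one sum-of-squares objective.
    total = 0.0
    for group, residuals in residuals_by_group.items():
        w = 1.0 / sigma_by_group[group] ** 2
        total += sum(w * r * r for r in residuals)
    return total
```

With this scheme, a 0.5 m head residual measured to +/-1 m contributes far less than a 0.5 unit residual in a data type measured to +/-0.1, which is exactly the behaviour SOO weighting is meant to encode.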
Kumar, Arun; Xagoraraki, Irene
2010-01-01
This study presents a step-wise development of a quantitative pharmaceutical risk assessment (QPhRA, hereafter) framework, including Monte Carlo uncertainty analysis, for meprobamate, carbamazepine, and phenytoin during (1) accidental exposures via stream water and fish consumption and (2) direct ingestion of finished drinking water for children and adults. Average hazard quotients of these pharmaceuticals (i.e., the ratio of chronic daily intake to acceptable daily intake) were found to lie between 1×10⁻¹⁰ and 3×10⁻⁵, and 99th-percentile hazard quotients were below 1×10⁻⁴ for both sub-populations, indicating no potential risk of adverse effects due to pharmaceutical exposures. In addition, pharmaceutical concentrations were observed to be lower than their respective calculated acceptable-daily-intake-equivalent drinking water levels, indicating no potential human health risks. To the authors' knowledge, for the first time in QPhRA studies, this study has attempted to characterize and quantify the effects of factors such as consideration of sensitive sub-populations using subpopulation-specific toxic endpoints and the use of pharmaceutical concentrations in stream and finished drinking waters on risk estimates. Acceptable daily intake was the primary contributor (>93% variance contribution) to the overall uncertainty in hazard quotient estimates, followed by fish consumption and pharmaceutical concentrations in water. Further research is required to standardize the use of acceptable daily intake values to reduce the large variability in hazard quotient estimation. PMID:20152876
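The Monte Carlo propagation behind the hazard quotient statistics can be sketched as follows. The sampler arguments stand in for the study's input distributions; the distributions and values here are illustrative assumptions, not the published ones:

```python
import random

def monte_carlo_hq(n_draws, cdi_sampler, adi_sampler, seed=7):
    # Hazard quotient HQ = chronic daily intake / acceptable daily intake,
    # propagated by Monte Carlo. Returns the mean and the 99th percentile
    # of the simulated HQ distribution, the two summaries reported above.
    random.seed(seed)
    hqs = sorted(cdi_sampler() / adi_sampler() for _ in range(n_draws))
    mean_hq = sum(hqs) / n_draws
    p99 = hqs[int(0.99 * n_draws) - 1]
    return mean_hq, p99
```

In practice the samplers would be lognormal draws (e.g., `random.lognormvariate`) fitted to measured concentrations and intake rates, and a variance decomposition of the inputs would identify the dominant contributor, here found to be the acceptable daily intake.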
NASA Astrophysics Data System (ADS)
Li, Xiaobao; Tsai, Frank T.-C.
2009-09-01
This study introduces a Bayesian model averaging (BMA) method that incorporates multiple groundwater models and multiple hydraulic conductivity estimation methods to predict groundwater heads and evaluate prediction uncertainty. BMA is able to distinguish prediction uncertainty arising from individual models, between models, and between methods. Moreover, BMA is able to identify unfavorable models even though they may present small prediction uncertainty. Uncertainty propagation, from model parameter uncertainty to model prediction uncertainty, can also be studied through BMA. This study adopts a variance window to obtain reasonable BMA weights for the best models, which are usually exaggerated by Occam's window. Results from a synthetic case study show that BMA with the variance window can provide better head prediction than individual models, or at least can obtain better predictions close to the best model. The BMA was applied to predicting groundwater heads in the "1500-foot" sand of the Baton Rouge area in Louisiana. Head prediction uncertainty was assessed by the BMA prediction variance. BMA confirms that large head prediction uncertainty occurs at areas lacking head observations and hydraulic conductivity measurements. Further study in these areas is necessary to reduce head prediction uncertainty.
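The decomposition of BMA prediction variance into within-model and between-model parts can be sketched as below. The weights are assumed given (e.g., posterior model probabilities after a variance-window adjustment); this is a generic textbook form, not the paper's code:

```python
def bma_moments(weights, means, variances):
    # BMA predictive mean and variance components: the total variance is
    # the weighted within-model variance (average individual prediction
    # uncertainty) plus the between-model variance (spread of the model
    # means), mirroring the attribution of uncertainty described above.
    mean = sum(w * m for w, m in zip(weights, means))
    within = sum(w * v for w, v in zip(weights, variances))
    between = sum(w * (m - mean) ** 2 for w, m in zip(weights, means))
    return mean, within, between
```

A model with a small individual variance but an outlying mean inflates the between-model term, which is how BMA can flag an unfavorable model despite its apparently small prediction uncertainty.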
Detailed Uncertainty Analysis of the Ares I A106 Liftoff/Transition Database
NASA Technical Reports Server (NTRS)
Hanke, Jeremy L.
2011-01-01
The Ares I A106 Liftoff/Transition Force and Moment Aerodynamics Database describes the aerodynamics of the Ares I Crew Launch Vehicle (CLV) from the moment of liftoff through the transition from high to low total angles of attack at low subsonic Mach numbers. The database includes uncertainty estimates that were developed using a detailed uncertainty quantification procedure. The Ares I Aerodynamics Panel developed both the database and the uncertainties from wind tunnel test data acquired in the NASA Langley Research Center's 14- by 22-Foot Subsonic Wind Tunnel Test 591 using a 1.75-percent-scale model of the Ares I and the tower assembly. The uncertainty modeling contains three primary uncertainty sources: experimental uncertainty, database modeling uncertainty, and database query interpolation uncertainty. The final database and uncertainty model represent a significant improvement in the quality of the aerodynamic predictions for this regime of flight over the estimates previously used by the Ares Project. The maximum possible aerodynamic force pushing the vehicle towards the launch tower assembly in a dispersed case using this database saw a 40 percent reduction from the worst-case scenario in previously released data for Ares I.
Risk analysis under uncertainty, the precautionary principle, and the new EU chemicals strategy.
Rogers, Michael D
2003-06-01
Three categories of uncertainty in relation to risk assessment are defined: uncertainty in effect, uncertainty in cause, and uncertainty in the relationship between a hypothesised cause and effect. The Precautionary Principle (PP) relates to the third type of uncertainty. Three broad descriptions of the PP are set out: uncertainty justifies action, uncertainty requires action, and uncertainty requires a reversal of the burden of proof for risk assessments. The application of the PP is controversial, but what matters in practice is the precautionary action (PA) that follows. The criteria by which PAs should be judged are detailed. This framework for risk assessment and management under uncertainty is then applied to the envisaged European system for the regulation of chemicals. A new EU regulatory system has been proposed which shifts the burden of proof concerning risk assessments from the regulator to the producer, and embodies the PP in all three of its main regulatory stages. The proposals are critically discussed in relation to three chemicals: atrazine (an endocrine disrupter), cadmium (toxic and possibly carcinogenic), and hydrogen fluoride (a toxic, high-production-volume chemical). Reversing the burden of proof will speed up the regulatory process, but the examples demonstrate that applying the PP appropriately, and balancing the countervailing risks against the socio-economic benefits, will continue to be a difficult task for the regulator. The paper concludes with a discussion of the role of precaution in the management of change and of the importance of trust in the effective regulation of uncertain risks.
Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia; Hough, Patricia Diane; Eddy, John P.
2011-12-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.
Zhu, Zhixi; Bai, Hongtao; Xu, He; Zhu, Tan
2011-11-15
Strategic environmental assessment (SEA) inherently needs to address greater levels of uncertainty in the formulation and implementation of strategic decisions than project-level environmental impact assessment. The range of uncertainties includes factors both internal and external to the complex system addressed by the strategy. Scenario analysis is increasingly being used to cope with uncertainty in SEA. Following a brief introduction to scenarios and scenario analysis, this paper examines the rationale for scenario analysis in SEA in the context of China. The state of the art of scenario analysis applied to SEA in China was reviewed through four SEA case analyses. Lessons learned from these cases indicate that the word 'scenario' is often abused and scenario-based methods misused, owing to a limited understanding of an uncertain future and of scenario analysis. However, good experience was also gained regarding how to integrate scenario analysis into the SEA process in China, how to cope with driving forces including uncertainties, how to combine qualitative scenario storylines with quantitative impact predictions, and how to conduct assessments and propose recommendations based on scenarios. Ways to improve the application of this tool in SEA are also suggested. We conclude by calling for further methodological research on this issue and more practice.
Using predictive uncertainty analysis to optimise tracer test design and data acquisition
NASA Astrophysics Data System (ADS)
Wallis, Ilka; Moore, Catherine; Post, Vincent; Wolf, Leif; Martens, Evelien; Prommer, Henning
2014-07-01
Tracer injection tests are regularly used tools to identify and characterise flow and transport mechanisms in aquifers. Examples of practical applications are manifold and include, among others, managed aquifer recharge schemes, aquifer thermal energy storage systems and, increasingly important, the disposal of produced water from oil and shale gas wells. The hydrogeological and geochemical data collected during injection tests are often employed to assess the potential impacts of injection on receptors such as drinking water wells, and regularly serve as a basis for the development of conceptual and numerical models that underpin the prediction of potential impacts. As all field tracer injection tests impose substantial logistical and financial efforts, it is crucial to develop a solid a priori understanding of the value of the various monitoring data in order to select monitoring strategies that provide the greatest return on investment. In this study, we demonstrate the ability of linear predictive uncertainty analysis (i.e., “data worth analysis”) to quantify the usefulness of different tracer types (bromide, temperature, methane and chloride as examples) and head measurements in the context of a field-scale aquifer injection trial of coal seam gas (CSG) co-produced water. Data worth was evaluated in terms of tracer type, tracer test design (e.g., injection rate, duration of test and the applied measurement frequency) and monitoring disposition to increase the reliability of injection impact assessments. This was followed by an uncertainty-targeted Pareto analysis, which allowed the interdependencies of cost and predictive reliability for alternative monitoring campaigns to be compared directly. For the evaluated injection test, the data worth analysis assessed bromide as superior to head data and all other tracers during early sampling times. However, with time, chloride became a more suitable tracer to constrain simulations of physical transport
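The core of linear data worth analysis reduces, for a single candidate observation, to a Bayes-linear variance update. The scalar sketch below is an illustrative simplification of the full first-order second-moment (FOSM) machinery, which works with sensitivity matrices rather than scalars:

```python
def data_worth(var_pred_prior, cov_pred_obs, var_obs, var_noise):
    # Worth of one candidate observation = drop in prediction variance
    # when the observation is assimilated (linear-Gaussian assumption):
    #   var_post = var_prior - cov(pred, obs)^2 / (var(obs) + var(noise))
    # An observation strongly correlated with the prediction, measured
    # with low noise, buys the largest variance reduction.
    var_post = var_pred_prior - cov_pred_obs ** 2 / (var_obs + var_noise)
    return var_pred_prior - var_post, var_post
```

Ranking candidate tracers and sampling times by this worth, before any field work, is what allows the monitoring design to be optimised against cost in the Pareto analysis described above.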
Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems
NASA Astrophysics Data System (ADS)
Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros
2015-04-01
In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields and can therefore become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the
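The contrast between SR and LH sampling is easiest to see in one dimension. The sketch below (a toy illustration, not the paper's multivariate simulation) draws exactly one uniform value per equal-probability stratum and then maps the stratified uniforms through a lognormal inverse CDF:

```python
import math
import random
from statistics import NormalDist

def latin_hypercube(n, seed=3):
    # 1-D Latin hypercube sample of U(0,1): one draw in each of n
    # equal-probability strata, then a random shuffle. The n realizations
    # cover the distribution more evenly than n simple random draws.
    random.seed(seed)
    samples = [(i + random.random()) / n for i in range(n)]
    random.shuffle(samples)
    return samples

def lognormal_lh(n, mu, sigma, seed=3):
    # Map the stratified uniforms through the lognormal inverse CDF to get
    # a stratified sample of, e.g., a hydraulic conductivity parameter.
    nd = NormalDist(mu, sigma)
    return [math.exp(nd.inv_cdf(u)) for u in latin_hypercube(n, seed)]
```

Because every stratum is guaranteed to be sampled, low- and high-conductivity tails appear in the ensemble even for small N, which is precisely why LH needs fewer flow-and-transport runs than SR for a comparable uncertainty estimate.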
NASA Astrophysics Data System (ADS)
Cochran, J. R.; Tinto, K. J.; Elieff, S. H.; Bell, R. E.
2011-12-01
topography. The free parameters in the gravity interpretation are the bed density and the mean depth to the bed. Sensitivity analyses using gravity data from OIB Antarctic surveys show that a 0.1 g/cm3 uncertainty in bed density translates to ~20-30 m uncertainty in bathymetric relief, both for formal inversions (e.g., a Parker-Oldenburg inversion) and for forward modeling of individual lines. Uncertainty in the mean depth used in an inversion results in what is almost a DC shift in the bed depth. These results emphasize the need for independent geologic evidence on the nature of the bed, and (preferably multiple) tie-points to constrain the bed depth. Where ice is grounded and the bed depth is known from radar, gravity data can be used to study the nature of the bed, including the presence of sedimentary basins and intrusions. Our analysis shows that, with adequate geologic information, careful interpretation of OIB airborne gravity data can resolve features wider than ~5 km in sub-ice bathymetry to within about ±50 m.
Numerical analysis of the Burgers' equation in the presence of uncertainty
Pettersson, Per; Iaccarino, Gianluca; Nordstroem, Jan
2009-12-01
The Burgers' equation with uncertain initial and boundary conditions is investigated using a polynomial chaos (PC) expansion approach where the solution is represented as a truncated series of stochastic, orthogonal polynomials. The analysis of well-posedness for the system resulting after Galerkin projection is presented and follows the pattern of the corresponding deterministic Burgers equation. The numerical discretization is based on spatial derivative operators satisfying the summation by parts property and weak boundary conditions to ensure stability. Similarly to the deterministic case, the explicit time step for the hyperbolic stochastic problem is proportional to the inverse of the largest eigenvalue of the system matrix. The time step naturally decreases compared to the deterministic case since the spectral radius of the continuous problem grows with the number of polynomial chaos coefficients. An estimate of the eigenvalues is provided. A characteristic analysis of the truncated PC system is presented and gives a qualitative description of the development of the system over time for different initial and boundary conditions. It is shown that a precise statistical characterization of the input uncertainty is required and partial information, e.g. the expected values and the variance, are not sufficient to obtain a solution. An analytical solution is derived and the coefficients of the infinite PC expansion are shown to be smooth, while the corresponding coefficients of the truncated expansion are discontinuous.
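The mechanics of a truncated PC expansion can be illustrated on a toy model whose Hermite coefficients are known in closed form: u(ξ) = exp(ξ) with ξ ~ N(0, 1), where u_k = exp(1/2)/k!. This substitute problem is an assumption for illustration only; the paper treats the Galerkin-projected Burgers system:

```python
import math

def pc_coeffs_exp(order):
    # Probabilists' Hermite PC coefficients of u(xi) = exp(xi), xi ~ N(0,1):
    # the classical closed form u_k = exp(1/2) / k!.
    return [math.exp(0.5) / math.factorial(k) for k in range(order + 1)]

def pc_mean_var(coeffs):
    # Orthogonality of He_k (with E[He_k^2] = k!) gives the moments of the
    # truncated expansion directly from its coefficients: the mean is the
    # zeroth coefficient, the variance sums k! * u_k^2 over k >= 1.
    mean = coeffs[0]
    var = sum(math.factorial(k) * c * c
              for k, c in enumerate(coeffs) if k > 0)
    return mean, var
```

The truncated variance converges rapidly to the exact value e² − e as the order grows, a smooth-coefficient behaviour analogous to the smoothness result derived for the infinite expansion in the abstract above.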
Uncertainty analysis for determination of plutonium mass by neutron multiplicity counting
Carrillo, L.A.; Ensslin, N.; Krick, M.S.; Langner, D.G.; Rudy, C.R.
1998-12-31
This paper describes an uncertainty analysis carried out in association with the use of neutron multiplicity counting to collect data and assign a total plutonium mass. During 1997, the Los Alamos Safeguards Science and Technology Group carried out careful calorimetry and neutron multiplicity certification measurements on two 239Pu metal foils used as reference standards at the Idaho National Engineering and Environmental Laboratory (INEEL). The foils were measured using a five-ring neutron multiplicity counter designed for neutron measurement control activities. This multiplicity counter is well characterized, and the detector parameters were reaffirmed before the measurements were made using several well-known Los Alamos standards. The 240Pu effective mass of the foils was then determined directly from the multiplicity analysis, without a conventional calibration curve based on representative standards. Finally, the 240Pu effective mass fraction and the total plutonium mass were calculated using gamma-ray isotopics. Errors from statistical data collection, background subtraction, cosmic ray interaction, dead time corrections, calibration constants, sample geometry, and sample position were carefully estimated and propagated. The authors describe these error sources, the final calculated relative error in the foil assay, and the comparison with very accurate calorimetry measurements.
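The propagation step can be sketched with the standard first-order error budget. This is a simplified illustration of the assay chain (240Pu-effective mass from multiplicity, effective fraction from isotopics), not the authors' full error model, and the numbers are hypothetical:

```python
import math

def combine_rel_uncertainties(components):
    # Root-sum-square combination of independent relative uncertainties,
    # the standard first-order propagation used in assay error budgets.
    return math.sqrt(sum(u * u for u in components))

def total_pu_mass(m240_eff, f240_eff, rel_u_m240, rel_u_f240):
    # Total Pu mass = (240Pu-effective mass) / (240Pu-effective fraction).
    # For a quotient, the relative uncertainties add in quadrature.
    mass = m240_eff / f240_eff
    rel_u = combine_rel_uncertainties([rel_u_m240, rel_u_f240])
    return mass, mass * rel_u
```

Each error source listed in the abstract (background subtraction, dead time, geometry, and so on) would contribute one term to the quadrature sum before the final comparison against calorimetry.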
The use of uncertainty analysis as a food waste estimation tool.
Langley, Joseph; Yoxall, Alaster; Manson, Graeme; Lewis, Walter; Waterhouse, Alison; Thelwall, David; Thelwall, Sarah; Parry, Andrew; Leech, Barbara
2009-05-01
Food waste going to landfill is a significant environmental issue, with 33% of all food we buy simply being thrown away. Not only is this extremely wasteful, but rotting food produces gases that are harmful to the environment and contribute to global warming. The UK government is committed to reducing the amount of household waste directly disposed of at landfills by 10.1 million tonnes over 20 years from 2000 (Waste Composition Analysis: Guidance for Local Authorities, Defra, 2004). As part of this, the Waste Reduction Action Programme was set up to fund and facilitate innovative solutions to reduce waste to landfill. Part of that process was to assess bids by solution providers with regard to the effectiveness of the technologies they were offering. This was found to be a non-trivial task, with multiple input parameters and large variations in data. Establishing which parameters have the greatest effect on food waste estimation was therefore essential in any decision-making process. However, the large number of unknowns, assumptions and parameters makes this understanding both difficult and time consuming. A branch of mathematics known as uncertainty analysis can be used to analyse these types of situations quickly and effectively and is easily adapted to the understanding of food waste estimation. This paper outlines the techniques used to develop an internet-based decision-making tool and demonstrates the methodology with simple case studies.
Managing uncertainty: a review of food system scenario analysis and modelling
Reilly, Michael; Willenbockel, Dirk
2010-01-01
Complex socio-ecological systems like the food system are unpredictable, especially to long-term horizons such as 2050. In order to manage this uncertainty, scenario analysis has been used in conjunction with food system models to explore plausible future outcomes. Food system scenarios use a diversity of scenario types and modelling approaches determined by the purpose of the exercise and by technical, methodological and epistemological constraints. Our case studies do not suggest Malthusian futures for a projected global population of 9 billion in 2050; but international trade will be a crucial determinant of outcomes; and the concept of sustainability across the dimensions of the food system has been inadequately explored so far. The impact of scenario analysis at a global scale could be strengthened with participatory processes involving key actors at other geographical scales. Food system models are valuable in managing existing knowledge on system behaviour and ensuring the credibility of qualitative stories but they are limited by current datasets for global crop production and trade, land use and hydrology. Climate change is likely to challenge the adaptive capacity of agricultural production and there are important knowledge gaps for modelling research to address. PMID:20713402
NASA Astrophysics Data System (ADS)
Sandric, I.; Petropoulos, Y.; Chitu, Z.; Mihai, B.
2012-04-01
The landslide hazard analysis model takes into consideration both predisposing and triggering factors, combined into a Bayesian temporal network with uncertainty propagation. The model uses as predisposing factors the first and second derivatives of the DEM, effective precipitation, runoff, lithology and land use. The latter is expressed not as land use classes, as in CORINE for example, but as leaf area index (LAI). The LAI offers the advantage of modelling not only the changes between periods expressed in years, but also the seasonal changes in land use throughout a year. The LAI was derived from Landsat time series images from 1984 up to 2011. All the images available for the Panatau administrative unit in Buzau County, Romania, were downloaded from http://earthexplorer.usgs.gov, including images with cloud cover. The model is run at a monthly time step, and for each time step all parameter values and the a priori, conditional and posterior probabilities are obtained and stored in a log file. The validation process uses landslides that occurred during the period up to the active time step and checks the recorded probabilities and parameter values of those time steps against the values of the active time step. Each time a landslide is positively identified, new a priori probabilities are recorded for each parameter. A complete log for the entire model is saved and used for statistical analysis, and a NETCDF file is created
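The elementary operation behind the network's monthly recomputation is a single Bayesian update. The sketch below uses one evidence factor with hypothetical probabilities; the actual model combines many factors in a temporal network:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    # One Bayesian updating step: posterior probability of the landslide
    # hypothesis H after observing evidence E (e.g., an exceeded monthly
    # effective-precipitation threshold). All values are illustrative.
    numer = p_e_given_h * prior
    return numer / (numer + p_e_given_not_h * (1.0 - prior))
```

Run at each monthly time step, the posterior from one step becomes the prior for the next, which is how positively identified landslides feed new a priori probabilities back into the network.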
NASA Technical Reports Server (NTRS)
Brink, Jeffrey S.
2005-01-01
The space shuttle Aft Propulsion System (APS) pod requires precision alignment for installation onto the orbiter deck. The Ground Support Equipment (GSE) used to perform this task cannot be manipulated along a single Cartesian axis without causing motion along the other Cartesian axes. As a result, manipulations required to achieve a desired motion are not intuitive. My study calculated the joint angles required to align the APS pod, using reverse kinematic analysis techniques. Knowledge of these joint angles will allow the ground support team to align the APS pod more safely and efficiently. An uncertainty analysis was also performed to estimate the accuracy associated with this approach and to determine whether any inexpensive modifications could be made to further improve accuracy.
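The idea of solving for joint angles from a desired Cartesian position can be illustrated with the classic closed-form inverse kinematics of a two-link planar linkage; the actual APS-pod GSE geometry is more complex, and the link lengths and target point below are invented for the example.

```python
import math

# Hedged illustration: closed-form inverse kinematics for a two-link planar
# linkage, solved from target coordinates back to joint angles. Not the
# actual GSE geometry; lengths and target are assumptions.

def two_link_ik(x, y, l1, l2):
    """Return (theta1, theta2) in radians for an elbow-up reach to (x, y)."""
    d2 = x * x + y * y
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)  # law of cosines
    if abs(cos_t2) > 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(cos_t2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2

# Forward-kinematics check: the recovered angles reproduce the target point.
t1, t2 = two_link_ik(1.2, 0.5, 1.0, 1.0)
fx = math.cos(t1) + math.cos(t1 + t2)
fy = math.sin(t1) + math.sin(t1 + t2)
```

A simple uncertainty estimate in the spirit of the abstract would perturb the measured target coordinates within their tolerances and observe the spread of the resulting joint angles.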
Sonnenblick, R.; Henrion, M.
1997-12-31
The Tracking and Analysis Framework (TAF) team has created a model to estimate the economic and ecological effects of the 1990 Clean Air Act Amendments, Title IV. TAF has been coded in the Analytica modeling environment. Analytica allows model variables to be represented as ranges of values, defined as probability distributions. Using Monte Carlo techniques to propagate uncertain values through the model, model results can reflect the uncertainty in model inputs and construction. Rank correlations and elasticities can be computed to gauge model input parameter importance and model sensitivities. These tools allow modelers to view model results in the proper context: Are model results invariant with respect to model component uncertainty and variability? They also help pinpoint the uncertain model components that most affect model results, and may therefore merit additional research to reduce overall model uncertainty. In this paper, we describe the methods used to characterize uncertainty and variability in the TAF model. We also describe the related processes of uncertainty and sensitivity analysis in the TAF model, and relate the results of these processes back to the progressive refinement of the model itself. We use actual results from the Soils-Aquatics, Visibility, and Human Health modules to demonstrate the techniques described.
Park, Daeryong; Roesner, Larry A
2012-12-15
This study examined pollutant loads released to receiving water from a typical urban watershed in the Los Angeles (LA) Basin of California by applying a best management practice (BMP) performance model that includes uncertainty. This BMP performance model uses the k-C* model and incorporates uncertainty analysis via the first-order second-moment (FOSM) method to assess the effectiveness of BMPs for removing stormwater pollutants. Uncertainties were considered for the influent event mean concentration (EMC) and the areal removal rate constant of the k-C* model. The storage, treatment, overflow and runoff model (STORM) was used to simulate the flow volume from the watershed, the bypass flow volume and the flow volume that passes through the BMP. Detention basins and total suspended solids (TSS) were chosen as the representative stormwater BMP and pollutant, respectively. This paper applies load frequency curves (LFCs), which replace the exceedance percentage with an exceedance frequency, as an alternative to load duration curves (LDCs) for evaluating the effectiveness of BMPs. An evaluation meth
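The combination of a first-order areal removal (k-C*) effluent relation with FOSM variance propagation can be sketched as below. The effluent concentration is C_out = C* + (C_in − C*)·exp(−kA/Q), and FOSM approximates its variance from the partial derivatives with respect to the uncertain inputs. All parameter values are illustrative assumptions, not values from the study.

```python
import math

# Hedged sketch: k-C* effluent model for a detention BMP with first-order
# second-moment (FOSM) propagation of uncertainty in the influent EMC and
# the areal removal rate constant k. Parameter values are invented.

def kcstar_fosm(c_in, var_c_in, k, var_k, c_star, area, flow):
    """Mean and FOSM variance of effluent concentration (mg/L).

    c_in: influent EMC (mg/L); k: areal rate constant (m/yr);
    c_star: background concentration (mg/L); area (m^2); flow (m^3/yr).
    """
    atten = math.exp(-k * area / flow)          # dimensionless attenuation
    c_out = c_star + (c_in - c_star) * atten    # k-C* effluent mean
    dc_dcin = atten                             # sensitivity to influent EMC
    dc_dk = -(c_in - c_star) * (area / flow) * atten  # sensitivity to k
    var_c_out = dc_dcin ** 2 * var_c_in + dc_dk ** 2 * var_k
    return c_out, var_c_out

# Example: 100 mg/L influent TSS (sd 30), k = 30 m/yr (sd 10), C* = 12 mg/L.
mean, var = kcstar_fosm(100.0, 30.0 ** 2, 30.0, 10.0 ** 2, 12.0, 500.0, 10000.0)
```

FOSM assumes the output is approximately linear in the uncertain inputs near their means and that the inputs are uncorrelated; a cross-covariance term would be added otherwise.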