Planck 2013 results. III. LFI systematic uncertainties
NASA Astrophysics Data System (ADS)
Planck Collaboration; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Cardoso, J.-F.; Catalano, A.; Chamballu, A.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Colombi, S.; Colombo, L. P. L.; Crill, B. P.; Cruz, M.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dick, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Gaier, T. C.; Galeotta, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jewell, J.; Jones, W. C.; Juvela, M.; Kangaslahti, P.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Novikov, D.; Novikov, I.; O'Dwyer, I. J.; Osborne, S.; Paci, F.; Pagano, L.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, D.; Peel, M.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Platania, P.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Ricciardi, S.; Riller, T.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-11-01
We present the current estimate of instrumental and systematic effect uncertainties for the Planck-Low Frequency Instrument relevant to the first release of the Planck cosmological results. We give an overview of the main effects and of the tools and methods applied to assess residuals in maps and power spectra. We also present an overall budget of known systematic effect uncertainties, which are dominated by sidelobe straylight pick-up and imperfect calibration. However, even these two effects are at least two orders of magnitude weaker than the cosmic microwave background fluctuations as measured in terms of the angular temperature power spectrum. A residual signal above the noise level is present in the multipole range ℓ < 20, most notably at 30 GHz, and is probably caused by residual Galactic straylight contamination. Current analysis aims to further reduce the level of spurious signals in the data and to improve the systematic effects modelling, in particular with respect to straylight and calibration uncertainties.
On LBNE neutrino flux systematic uncertainties
NASA Astrophysics Data System (ADS)
Lebrun, Paul L. G.; Hylen, James; Marchionni, Alberto; Fields, Laura; Bashyal, Amit; Park, Seongtae; Watson, Blake
2015-10-01
The systematic uncertainties in the neutrino flux of the Long-Baseline Neutrino Experiment, due to alignment uncertanties and tolerances of the neutrino beamline components, are estimated. In particular residual systematics are evaluated in the determination of the neutrino flux at the far detector, assuming that the experiment will be equipped with a near detector with the same target material of the far detector, thereby canceling most of the uncertainties from hadroproduction and neutrino cross sections. This calculation is based on a detailed Geant4-based model of the neutrino beam line that includes the target, two focusing horns, the decay pipe and ancillary items, such as shielding.
On LBNE neutrino flux systematic uncertainties
Lebrun, Paul L. G.; Hylen, James; Marchionni, Alberto; Fields, Laura; Bashyal, Amit; Park, Seongtae; Watson, Blake
2015-10-15
The systematic uncertainties in the neutrino flux of the Long-Baseline Neutrino Experiment, due to alignment uncertanties and tolerances of the neutrino beamline components, are estimated. In particular residual systematics are evaluated in the determination of the neutrino flux at the far detector, assuming that the experiment will be equipped with a near detector with the same target material of the far detector, thereby canceling most of the uncertainties from hadroproduction and neutrino cross sections. This calculation is based on a detailed Geant4-based model of the neutrino beam line that includes the target, two focusing horns, the decay pipe and ancillary items, such as shielding.
Quantifying systematic uncertainties in supernova cosmology
Nordin, Jakob; Goobar, Ariel; Joensson, Jakob E-mail: ariel@physto.se
2008-02-15
Observations of Type Ia supernovae used to map the expansion history of the Universe suffer from systematic uncertainties that need to be propagated into the estimates of cosmological parameters. We propose an iterative Monte Carlo simulation and cosmology fitting technique (SMOCK) to investigate the impact of sources of error upon fits of the dark energy equation of state. This approach is especially useful to track the impact of non-Gaussian, correlated effects, e.g. reddening correction errors, brightness evolution of the supernovae, K-corrections, gravitational lensing, etc. While the tool is primarily aimed at studies and optimization of future instruments, we use the Gold data-set in Riess et al (2007 Astrophys. J. 659 98) to show examples of potential systematic uncertainties that could exceed the quoted statistical uncertainties.
UCNA Systematic Uncertainties: Developments in Analysis and Method
NASA Astrophysics Data System (ADS)
Zeck, Bryan
2012-10-01
The UCNA experiment is an effort to measure the beta-decay asymmetry parameter A of the correlation between the electron momentum and the neutron spin, using bottled polarized ultracold neutrons in a homogenous 1 T magnetic field. Continued improvements in both analysis and method are helping to push the measurement uncertainty to the limits of the current statistical sensitivity (less than 0.4%). The implementation of thinner decay trap windows will be discussed, as will the use of a tagged beta particle calibration source to measure angle-dependent scattering effects and energy loss. Additionally, improvements in position reconstruction and polarization measurements using a new shutter system will be introduced. A full accounting of the current systematic uncertainties will be given.
ON THE ESTIMATION OF SYSTEMATIC UNCERTAINTIES OF STAR FORMATION HISTORIES
Dolphin, Andrew E.
2012-05-20
In most star formation history (SFH) measurements, the reported uncertainties are those due to effects whose sizes can be readily measured: Poisson noise, adopted distance and extinction, and binning choices in the solution itself. However, the largest source of error, systematics in the adopted isochrones, is usually ignored and very rarely explicitly incorporated into the uncertainties. I propose a process by which estimates of the uncertainties due to evolutionary models can be incorporated into the SFH uncertainties. This process relies on application of shifts in temperature and luminosity, the sizes of which must be calibrated for the data being analyzed. While there are inherent limitations, the ability to estimate the effect of systematic errors and include them in the overall uncertainty is significant. The effects of this are most notable in the case of shallow photometry, with which SFH measurements rely on evolved stars.
Systematic uncertainties from halo asphericity in dark matter searches
Bernal, Nicolás; Forero-Romero, Jaime E.; Garani, Raghuveer; Palomares-Ruiz, Sergio E-mail: je.forero@uniandes.edu.co E-mail: sergio.palomares.ruiz@ific.uv.es
2014-09-01
Although commonly assumed to be spherical, dark matter halos are predicted to be non-spherical by N-body simulations and their asphericity has a potential impact on the systematic uncertainties in dark matter searches. The evaluation of these uncertainties is the main aim of this work, where we study the impact of aspherical dark matter density distributions in Milky-Way-like halos on direct and indirect searches. Using data from the large N-body cosmological simulation Bolshoi, we perform a statistical analysis and quantify the systematic uncertainties on the determination of local dark matter density and the so-called J factors for dark matter annihilations and decays from the galactic center. We find that, due to our ignorance about the extent of the non-sphericity of the Milky Way dark matter halo, systematic uncertainties can be as large as 35%, within the 95% most probable region, for a spherically averaged value for the local density of 0.3-0.4 GeV/cm {sup 3}. Similarly, systematic uncertainties on the J factors evaluated around the galactic center can be as large as 10% and 15%, within the 95% most probable region, for dark matter annihilations and decays, respectively.
Efficiently estimating salmon escapement uncertainty using systematically sampled data
Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.
2007-01-01
Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by, on average, from 12% to 98%. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
Calculation of the detection limit in radiation measurements with systematic uncertainties
NASA Astrophysics Data System (ADS)
Kirkpatrick, J. M.; Russ, W.; Venkataraman, R.; Young, B. M.
2015-06-01
The detection limit (LD) or Minimum Detectable Activity (MDA) is an a priori evaluation of assay sensitivity intended to quantify the suitability of an instrument or measurement arrangement for the needs of a given application. Traditional approaches as pioneered by Currie rely on Gaussian approximations to yield simple, closed-form solutions, and neglect the effects of systematic uncertainties in the instrument calibration. These approximations are applicable over a wide range of applications, but are of limited use in low-count applications, when high confidence values are required, or when systematic uncertainties are significant. One proposed modification to the Currie formulation attempts account for systematic uncertainties within a Gaussian framework. We have previously shown that this approach results in an approximation formula that works best only for small values of the relative systematic uncertainty, for which the modification of Currie's method is the least necessary, and that it significantly overestimates the detection limit or gives infinite or otherwise non-physical results for larger systematic uncertainties where such a correction would be the most useful. We have developed an alternative approach for calculating detection limits based on realistic statistical modeling of the counting distributions which accurately represents statistical and systematic uncertainties. Instead of a closed form solution, numerical and iterative methods are used to evaluate the result. Accurate detection limits can be obtained by this method for the general case.
Additional challenges for uncertainty analysis in river engineering
NASA Astrophysics Data System (ADS)
Berends, Koen; Warmink, Jord; Hulscher, Suzanne
2016-04-01
The management of rivers for improving safety, shipping and environment requires conscious effort on the part of river managers. River engineers design hydraulic works to tackle various challenges, from increasing flow conveyance to ensuring minimal water depths for environmental flow and inland shipping. Last year saw the completion of such large scale river engineering in the 'Room for the River' programme for the Dutch Rhine River system, in which several dozen of human interventions were built to increase flood safety. Engineering works in rivers are not completed in isolation from society. Rather, their benefits - increased safety, landscaping beauty - and their disadvantages - expropriation, hindrance - directly affect inhabitants. Therefore river managers are required to carefully defend their plans. The effect of engineering works on river dynamics is being evaluated using hydraulic river models. Two-dimensional numerical models based on the shallow water equations provide the predictions necessary to make decisions on designs and future plans. However, like all environmental models, these predictions are subject to uncertainty. In recent years progress has been made in the identification of the main sources of uncertainty for hydraulic river models. Two of the most important sources are boundary conditions and hydraulic roughness (Warmink et al. 2013). The result of these sources of uncertainty is that the identification of single, deterministic prediction model is a non-trivial task. This is this is a well-understood problem in other fields as well - most notably hydrology - and known as equifinality. However, the particular case of human intervention modelling with hydraulic river models compounds the equifinality case. The model that provides the reference baseline situation is usually identified through calibration and afterwards modified for the engineering intervention. This results in two distinct models, the evaluation of which yields the effect of
A Systematic Procedure for Assigning Uncertainties to Data Evaluations
Younes, W
2007-02-20
In this report, an algorithm that automatically constructs an uncertainty band around any evaluation curve is described. Given an evaluation curve and a corresponding set of experimental data points with x and y error bars, the algorithm expands a symmetric region around the evaluation curve until 68.3% of a set of points, randomly sampled from the experimental data, fall within the region. For a given evaluation curve, the region expanded in this way represents, by definition, a one-standard-deviation interval about the evaluation that accounts for the experimental data. The algorithm is tested against several benchmarks, and is shown to be well-behaved, even when there are large gaps in the available experimental data. The performance of the algorithm is assessed quantitatively using the tools of statistical-inference theory.
McNamara, C; Mehegan, J; O'Mahony, C; Safford, B; Smith, B; Tennant, D; Buck, N; Ehrlich, V; Sardi, M; Haldemann, Y; Nordmann, H; Jasti, P R
2011-12-01
The feasibility of using a retailer fidelity card scheme to estimate food additive intake was investigated in an earlier study. Fidelity card survey information was combined with information provided by the retailer on levels of the food colour Sunset Yellow (E110) in the foods to estimate a daily exposure to the additive in the Swiss population. As with any dietary exposure method the fidelity card scheme is subject to uncertainties and in this paper the impact of uncertainties associated with input variables including the amounts of food purchased, the levels of E110 in food, the proportion of food purchased at the retailer, the rate of fidelity card usage, the proportion of foods consumed outside of the home and bodyweights and with systematic uncertainties was assessed using a qualitative, deterministic and probabilistic approach. An analysis of the sensitivity of the results to each of the probabilistic inputs was also undertaken. The analysis identified the key factors responsible for uncertainty within the model and demonstrated how the application of some simple probabilistic approaches can be used quantitatively to assess uncertainty. PMID:21995790
Systematic uncertainties in long-baseline neutrino oscillations for large θ₁₃
Coloma, Pilar; Huber, Patrick; Kopp, Joachim; Winter, Walter
2013-02-01
We study the physics potential of future long-baseline neutrino oscillation experiments at large θ₁₃, focusing especially on systematic uncertainties. We discuss superbeams, \\bbeams, and neutrino factories, and for the first time compare these experiments on an equal footing with respect to systematic errors. We explicitly simulate near detectors for all experiments, we use the same implementation of systematic uncertainties for all experiments, and we fully correlate the uncertainties among detectors, oscillation channels, and beam polarizations as appropriate. As our primary performance indicator, we use the achievable precision in the measurement of the CP violating phase $\\deltacp$. We find that a neutrino factory is the only instrument that can measure $\\deltacp$ with a precision similar to that of its quark sector counterpart. All neutrino beams operating at peak energies ≳2 GeV are quite robust with respect to systematic uncertainties, whereas especially \\bbeams and \\thk suffer from large cross section uncertainties in the quasi-elastic regime, combined with their inability to measure the appearance signal cross sections at the near detector. A noteworthy exception is the combination of a γ =100 \\bbeam with an \\spl-based superbeam, in which all relevant cross sections can be measured in a self-consistent way. This provides a performance, second only to the neutrino factory. For other superbeam experiments such as \\lbno and the setups studied in the context of the \\lbne reconfiguration effort, statistics turns out to be the bottleneck. In almost all cases, the near detector is not critical to control systematics since the combined fit of appearance and disappearance data already constrains the impact of systematics to be small provided that the three active flavor oscillation framework is valid.
Scolnic, D.; Riess, A.; Brout, D.; Rodney, S.; Rest, A.; Huber, M. E.; Tonry, J. L.; Foley, R. J.; Chornock, R.; Berger, E.; Soderberg, A. M.; Stubbs, C. W.; Kirshner, R. P.; Challis, P.; Czekala, I.; Drout, M.; Narayan, G.; Smartt, S. J.; Botticella, M. T.; Schlafly, E.; and others
2014-11-01
We probe the systematic uncertainties from the 113 Type Ia supernovae (SN Ia) in the Pan-STARRS1 (PS1) sample along with 197 SN Ia from a combination of low-redshift surveys. The companion paper by Rest et al. describes the photometric measurements and cosmological inferences from the PS1 sample. The largest systematic uncertainty stems from the photometric calibration of the PS1 and low-z samples. We increase the sample of observed Calspec standards from 7 to 10 used to define the PS1 calibration system. The PS1 and SDSS-II calibration systems are compared and discrepancies up to ∼0.02 mag are recovered. We find uncertainties in the proper way to treat intrinsic colors and reddening produce differences in the recovered value of w up to 3%. We estimate masses of host galaxies of PS1 supernovae and detect an insignificant difference in distance residuals of the full sample of 0.037 ± 0.031 mag for host galaxies with high and low masses. Assuming flatness and including systematic uncertainties in our analysis of only SNe measurements, we find w =−1.120{sub −0.206}{sup +0.360}(Stat){sub −0.291}{sup +0.269}(Sys). With additional constraints from Baryon acoustic oscillation, cosmic microwave background (CMB) (Planck) and H {sub 0} measurements, we find w=−1.166{sub −0.069}{sup +0.072} and Ω{sub m}=0.280{sub −0.012}{sup +0.013} (statistical and systematic errors added in quadrature). The significance of the inconsistency with w = –1 depends on whether we use Planck or Wilkinson Microwave Anisotropy Probe measurements of the CMB: w{sub BAO+H0+SN+WMAP}=−1.124{sub −0.065}{sup +0.083}.
Accounting for uncertainty in systematic bias in exposure estimates used in relative risk regression
Gilbert, E.S.
1995-12-01
In many epidemiologic studies addressing exposure-response relationships, sources of error that lead to systematic bias in exposure measurements are known to be present, but there is uncertainty in the magnitude and nature of the bias. Two approaches that allow this uncertainty to be reflected in confidence limits and other statistical inferences were developed, and are applicable to both cohort and case-control studies. The first approach is based on a numerical approximation to the likelihood ratio statistic, and the second uses computer simulations based on the score statistic. These approaches were applied to data from a cohort study of workers at the Hanford site (1944-86) exposed occupationally to external radiation; to combined data on workers exposed at Hanford, Oak Ridge National Laboratory, and Rocky Flats Weapons plant; and to artificial data sets created to examine the effects of varying sample size and the magnitude of the risk estimate. For the worker data, sampling uncertainty dominated and accounting for uncertainty in systematic bias did not greatly modify confidence limits. However, with increased sample size, accounting for these uncertainties became more important, and is recommended when there is interest in comparing or combining results from different studies.
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
NASA Astrophysics Data System (ADS)
Helgesson, P.; Sjöstrand, H.; Koning, A. J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.
2016-01-01
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
Statistical uncertainties and systematic errors in weak lensing mass estimates of galaxy clusters
NASA Astrophysics Data System (ADS)
Köhlinger, F.; Hoekstra, H.; Eriksen, M.
2015-11-01
Upcoming and ongoing large area weak lensing surveys will also discover large samples of galaxy clusters. Accurate and precise masses of galaxy clusters are of major importance for cosmology, for example, in establishing well-calibrated observational halo mass functions for comparison with cosmological predictions. We investigate the level of statistical uncertainties and sources of systematic errors expected for weak lensing mass estimates. Future surveys that will cover large areas on the sky, such as Euclid or LSST and to lesser extent DES, will provide the largest weak lensing cluster samples with the lowest level of statistical noise regarding ensembles of galaxy clusters. However, the expected low level of statistical uncertainties requires us to scrutinize various sources of systematic errors. In particular, we investigate the bias due to cluster member galaxies which are erroneously treated as background source galaxies due to wrongly assigned photometric redshifts. We find that this effect is significant when referring to stacks of galaxy clusters. Finally, we study the bias due to miscentring, i.e. the displacement between any observationally defined cluster centre and the true minimum of its gravitational potential. The impact of this bias might be significant with respect to the statistical uncertainties. However, complementary future missions such as eROSITA will allow us to define stringent priors on miscentring parameters which will mitigate this bias significantly.
NASA Astrophysics Data System (ADS)
Cowan, Nicholas; Levy, Peter; Skiba, Ute
2016-04-01
The addition of reactive nitrogen to agricultural soils in the form of artificial fertilisers or animal waste is the largest global source of anthropogenic N2O emissions. Emission factors are commonly used to evaluate N2O emissions released after the application of nitrogen fertilisers on a global scale based on records of fertiliser use. Currently these emission factors are estimated primarily by a combination of results of experiments in which flux chamber methodology is used to estimate annual cumulative fluxes of N2O after nitrogen fertiliser applications on agricultural soils. The use of the eddy covariance method to measure N2O and estimate emission factors is also becoming more common in the flux community as modern rapid gas analyser instruments advance. The aim of the presentation is to highlight the weaknesses and potential systematic biases in current flux measurement methodology. This is important for GHG accounting and for accurate model calibration and verification. The growing interest in top-down / bottom-up comparisons of tall tower and conventional N2O flux measurements is also an area of research in which the uncertainties in flux measurements needs to be properly quantified. The large and unpredictable spatial and temporal variability of N2O fluxes from agricultural soils leads to a significant source of uncertainty in emission factor estimates. N2O flux measurements typically show poor relationships with explanatory co-variates. The true uncertainties in flux measurements at the plot scale are often difficult to propagate to field scale and the annual time scale. This results in very uncertain cumulative flux (emission factor) estimates. Cumulative fluxes estimated using flux chamber and eddy covariance methods can also differ significantly which complicates the matter further. In this presentation, we examine some effects that spatial and temporal variability of N2O fluxes can have on the estimation of emission factors and describe how
InSAR bias and uncertainty due to the systematic and stochastic tropospheric delay
NASA Astrophysics Data System (ADS)
Fattahi, Heresh; Amelung, Falk
2015-12-01
We quantify the bias and uncertainty of interferometric synthetic aperture radar (InSAR) displacement time series and their derivatives, the displacement velocities, by analyzing the systematic and stochastic components of the temporal variation of the tropospheric delay. The biases due to the systematic seasonal delay depend on the SAR acquisition times, whereas the uncertainties depend on the standard deviation of the random delay, the number of acquisitions, the total time span covered, and the covariance of the time series of the stochastic delay between a pixel and the reference. We study the contribution of the wet delay to the InSAR observations along the western India plate boundary using (i) Moderate Resolution Imaging Spectroradiometer precipitable water vapor, (ii) stratified tropospheric delay estimated from the ERA-I global atmospheric model, and (iii) seven Envisat InSAR swaths. Our analysis indicates that the amplitudes of the annual delay vary by up to ~10 cm in this region equivalent to a maximum displacement bias of ~24 cm in InSAR line of sight direction between two epochs (assuming Envisat IS6 beam mode). The stratified tropospheric delay correction mitigates this bias and reduces the scatter due to the stochastic delay. For ~7 years of Envisat acquisitions along the western India plate boundary, the uncertainty of the InSAR velocity field due to the residual stochastic wet delay after stratified tropospheric delay correction using the ERA-I model is in the order of ~2 mm/yr over 100 km and ~4 mm/yr over 400 km. We discuss the implication of the derived uncertainties on the full variance-covariance matrix of the InSAR data.
Single-Ion Atomic Clock with 3 ×10-18 Systematic Uncertainty
NASA Astrophysics Data System (ADS)
Huntemann, N.; Sanner, C.; Lipphardt, B.; Tamm, Chr.; Peik, E.
2016-02-01
We experimentally investigate an optical frequency standard based on the 2S1/2 (F =0 )→ 2F7/2 (F =3 ) electric octupole (E 3 ) transition of a single trapped 171Yb+ ion. For the spectroscopy of this strongly forbidden transition, we utilize a Ramsey-type excitation scheme that provides immunity to probe-induced frequency shifts. The cancellation of these shifts is controlled by interleaved single-pulse Rabi spectroscopy, which reduces the related relative frequency uncertainty to 1.1 ×10-18. To determine the frequency shift due to thermal radiation emitted by the ion's environment, we measure the static scalar differential polarizability of the E 3 transition as 0.888 (16 )×10-40 J m2/V2 and a dynamic correction η (300 K )=-0.0015 (7 ) . This reduces the uncertainty due to thermal radiation to 1.8 ×10-18. The residual motion of the ion yields the largest contribution (2.1 ×10-18 ) to the total systematic relative uncertainty of the clock of 3.2 ×10-18.
NASA Astrophysics Data System (ADS)
Lacerda, Márcio J.; Tognetti, Eduardo S.; Oliveira, Ricardo C. L. F.; Peres, Pedro L. D.
2016-04-01
This paper presents a general framework to cope with full-order ? linear parameter-varying (LPV) filter design subject to inexactly measured parameters. The main novelty is the ability of handling additive and multiplicative uncertainties in the measurements, for both continuous and discrete-time LPV systems, in a unified approach. By conveniently modelling scheduling parameters and uncertainties affecting the measurements, the ? filter design problem can be expressed in terms of robust matrix inequalities that become linear when two scalar parameters are fixed. Therefore, the proposed conditions can be efficiently solved through linear matrix inequality relaxations based on polynomial solutions. Numerical examples are presented to illustrate the improved efficiency of the proposed approach when compared to other methods and, more important, its capability to deal with scenarios where the available strategies in the literature cannot be used.
NASA Astrophysics Data System (ADS)
Rubin, D.; Aldering, G.; Barbary, K.; Boone, K.; Chappell, G.; Currie, M.; Deustua, S.; Fagrelius, P.; Fruchter, A.; Hayden, B.; Lidman, C.; Nordin, J.; Perlmutter, S.; Saunders, C.; Sofiatti, C.; Supernova Cosmology Project, The
2015-11-01
While recent supernova (SN) cosmology research has benefited from improved measurements, current analysis approaches are not statistically optimal and will prove insufficient for future surveys. This paper discusses the limitations of current SN cosmological analyses in treating outliers, selection effects, shape- and color-standardization relations, unexplained dispersion, and heterogeneous observations. We present a new Bayesian framework, called UNITY (Unified Nonlinear Inference for Type-Ia cosmologY), that incorporates significant improvements in our ability to confront these effects. We apply the framework to real SN observations and demonstrate smaller statistical and systematic uncertainties. We verify earlier results that SNe Ia require nonlinear shape and color standardizations, but we now include these nonlinear relations in a statistically well-justified way. This analysis was primarily performed blinded, in that the basic framework was first validated on simulated data before transitioning to real data. We also discuss possible extensions of the method.
NASA Astrophysics Data System (ADS)
Brogniez, Helene; English, Stephen; Mahfouf, Jean-Francois; Behrendt, Andreas; Berg, Wesley; Boukabara, Sid; Buehler, Stefan Alexander; Chambon, Philippe; Gambacorta, Antonia; Geer, Alan; Ingram, William; Kursinski, E. Robert; Matricardi, Marco; Odintsova, Tatyana A.; Payne, Vivienne H.; Thorne, Peter W.; Tretyakov, Mikhail Yu.; Wang, Junhong
2016-05-01
Several recent studies have observed systematic differences between measurements in the 183.31 GHz water vapor line by space-borne sounders and calculations using radiative transfer models, with inputs from either radiosondes (radiosonde observations, RAOBs) or short-range forecasts by numerical weather prediction (NWP) models. This paper discusses all the relevant categories of observation-based or model-based data, quantifies their uncertainties and separates biases that could be common to all causes from those attributable to a particular cause. Reference observations from radiosondes, Global Navigation Satellite System (GNSS) receivers, differential absorption lidar (DIAL) and Raman lidar are thus overviewed. Biases arising from their calibration procedures, NWP models and data assimilation, instrument biases and radiative transfer models (both the models themselves and the underlying spectroscopy) are presented and discussed. Although presently no single process in the comparisons seems capable of explaining the observed structure of bias, recommendations are made in order to better understand the causes.
Laperrière, Hélène
2007-01-01
Several years of professional nursing practices, while living in the poorest neighbourhoods in the outlying areas of Brazil's Amazon region, have led the author to develop a better understanding of marginalized populations. Providing care to people with leprosy and sex workers in riverside communities has taken place in conditions of uncertainty, insecurity, unpredictability and institutional violence. The question raised is how we can develop community health nursing practices in this context. A systematization of personal experiences based on popular education is used and analyzed as a way of learning by obtaining scientific knowledge through critical analysis of field practices. Ties of solidarity and belonging developed in informal, mutual-help action groups are promising avenues for research and the development of knowledge in health promotion, prevention and community care and a necessary contribution to national public health programmers. PMID:17934576
Systematic evaluation of an atomic clock at 2 × 10−18 total uncertainty
Nicholson, T.L.; Campbell, S.L.; Hutson, R.B.; Marti, G.E.; Bloom, B.J.; McNally, R.L.; Zhang, W.; Barrett, M.D.; Safronova, M.S.; Strouse, G.F.; Tew, W.L.; Ye, J.
2015-01-01
The pursuit of better atomic clocks has advanced many research areas, providing better quantum state control, new insights in quantum science, tighter limits on fundamental constant variation and improved tests of relativity. The record for the best stability and accuracy is currently held by optical lattice clocks. Here we take an important step towards realizing the full potential of a many-particle clock with a state-of-the-art stable laser. Our 87Sr optical lattice clock now achieves fractional stability of 2.2 × 10−16 at 1 s. With this improved stability, we perform a new accuracy evaluation of our clock, reducing many systematic uncertainties that limited our previous measurements, such as those in the lattice ac Stark shift, the atoms' thermal environment and the atomic response to room-temperature blackbody radiation. Our combined measurements have reduced the total uncertainty of the JILA Sr clock to 2.1 × 10−18 in fractional frequency units. PMID:25898253
Systematic evaluation of an atomic clock at 2 × 10(-18) total uncertainty.
Nicholson, T L; Campbell, S L; Hutson, R B; Marti, G E; Bloom, B J; McNally, R L; Zhang, W; Barrett, M D; Safronova, M S; Strouse, G F; Tew, W L; Ye, J
2015-01-01
The pursuit of better atomic clocks has advanced many research areas, providing better quantum state control, new insights in quantum science, tighter limits on fundamental constant variation and improved tests of relativity. The record for the best stability and accuracy is currently held by optical lattice clocks. Here we take an important step towards realizing the full potential of a many-particle clock with a state-of-the-art stable laser. Our (87)Sr optical lattice clock now achieves fractional stability of 2.2 × 10(-16) at 1 s. With this improved stability, we perform a new accuracy evaluation of our clock, reducing many systematic uncertainties that limited our previous measurements, such as those in the lattice ac Stark shift, the atoms' thermal environment and the atomic response to room-temperature blackbody radiation. Our combined measurements have reduced the total uncertainty of the JILA Sr clock to 2.1 × 10(-18) in fractional frequency units. PMID:25898253
Systematic Uncertainties in Characterizing Cluster Outskirts: The Case of Abell 133
NASA Astrophysics Data System (ADS)
Paine, Jennie; Ogrean, Georgiana A.; Nulsen, Paul; Farrah, Duncan
2016-01-01
The outskirts of galaxy clusters have low surface brightness compared to the X-ray background, making accurate background subtraction particularly important for analyzing cluster spectra out to and beyond the virial radius. We analyze the thermodynamic properties of the intracluster medium (ICM) of Abell 133 and assess the extent to which uncertainties on background subtraction affect measured quantities. We implement two methods of analyzing the ICM spectra: one in which the blank-sky background is subtracted, and another in which the sky background is modeled. We find that the two methods are consistent within the 90% confidence ranges. We were able to measure the thermodynamic properties of the cluster up to R500. Even at R500, the systematic uncertainties associated with the sky background in the direction of A133 are small, despite the ICM signal constituting only ~25% of the total signal. This work was supported in part by the NSF REU and DoD ASSURE programs under NSF grant no. 1262851 and by the Smithsonian Institution. GAO acknowledges support by NASA through a Hubble Fellowship grant HST-HF2-51345.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
NASA Astrophysics Data System (ADS)
Kollat, J. B.; Reed, P. M.
2011-12-01
This study demonstrates how many-objective long-term groundwater monitoring (LTGM) network design tradeoffs evolve across multiple management periods given systematic models errors (i.e., predictive bias), groundwater flow-and-transport forecasting uncertainties, and contaminant observation uncertainties. Our analysis utilizes the Adaptive Strategies for Sampling in Space and Time (ASSIST) framework, which is composed of three primary components: (1) bias-aware Ensemble Kalman Filtering, (2) many-objective hierarchical Bayesian optimization, and (3) interactive visual analytics for understanding spatiotemporal network design tradeoffs. A physical aquifer experiment is utilized to develop a severely challenging multi-period observation system simulation experiment (OSSE) that reflects the challenges and decisions faced in monitoring contaminated groundwater systems. The experimental aquifer OSSE shows both the influence and consequences of plume dynamics as well as alternative cost-savings strategies in shaping how LTGM many-objective tradeoffs evolve. Our findings highlight the need to move beyond least cost purely statistical monitoring frameworks to consider many-objective evaluations of LTGM tradeoffs. The ASSIST framework provides a highly flexible approach for measuring the value of observables that simultaneously improves how the data are used to inform decisions.
NASA Astrophysics Data System (ADS)
Moore, Joseph Andrew
2011-12-01
External-beam radiotherapy is one of the primary methods for treating cancer. Typically a radiotherapy treatment course consists of radiation delivered to the patient in multiple daily treatment fractions over 6--8 weeks. Each fraction requires the patient to be aligned with the image acquired before the treatment course used in treatment planning. Unfortunately, patient alignment is not perfect and results in residual errors in patient setup. The standard technique for dealing with errors in patient setup is to expand the volume of the target by some margin to ensure the target receives the planned dose in the presence of setup errors. This work develops an alternative to margins for accommodating setup errors in the treatment planning process by directly including patient setup uncertainty in IMRT plan optimization. This probabilistic treatment planning (PTP) operates directly on the planning structure and develops a dose distribution robust to variations in the patient position. Two methods are presented. The first method includes only random setup uncertainty in the planning process by convolving the fluence of each beam with a Gaussian model of the distribution of random setup errors. The second method builds upon this by adding systematic uncertainty to optimization by way of a joint optimization over multiple probable patient positions. To assess the benefit of PTP methods, a PTP plan and a margin-based plan are developed for each of the 28 patients used in this study. Comparisons of plans show that PTP plans generally reduce the dose to normal tissues while maintaining a similar dose to the target structure when compared to margin-based plans. Physician assessment indicates that PTP plans are generally preferred over margin-based plans. PTP methods shows potential for improving patient outcome due to reduced complications associated with treatment.
Study of tracking efficiency and its systematic uncertainty from J/ψ → pp̅π+π- at BESIII
NASA Astrophysics Data System (ADS)
Wen-Long, Yuan; Xiao-Cong, Ai; Xiao-Bin, Ji; Shen-Jian, Chen; Yao, Zhang; Ling-Hui, Wu; Liang-Liang, Wang; Ye, Yuan
2016-02-01
Based on J/ψ events collected with the BESIII detector, with corresponding Monte Carlo samples, the tracking efficiency and its systematic uncertainty are studied using a control sample of J/ψ → pp̅π+π-. Validation methods and different factors influencing the tracking efficiency are presented in detail. The tracking efficiency and its systematic uncertainty for protons and pions with the transverse momentum and polar angle dependence are also discussed. Supported by Joint Funds of National Natural Science Foundation of China (U1232201), National Natural Science Foundation of China (11275210, 11205182, 11205184) and National Key Basic Research Program of China (2015CB856700)
NASA Astrophysics Data System (ADS)
Narayan, Amrendra
The Q-weak experiment aims to measure the weak charge of proton with a precision of 4.2%. The proposed precision on weak charge required a 2.5% measurement of the parity violating asymmetry in elastic electron - proton scattering. Polarimetry was the largest experimental contribution to this uncertainty and a new Compton polarimeter was installed in Hall C at Jefferson Lab to make the goal achievable. In this polarimeter the electron beam collides with green laser light in a low gain Fabry-Perot Cavity; the scattered electrons are detected in 4 planes of a novel diamond micro strip detector while the back scattered photons are detected in lead tungstate crystals. This diamond micro-strip detector is the first such device to be used as a tracking detector in a nuclear and particle physics experiment. The diamond detectors are read out using custom built electronic modules that include a preamplifier, a pulse shaping amplifier and a discriminator for each detector micro-strip. We use field programmable gate array based general purpose logic modules for event selection and histogramming. Extensive Monte Carlo simulations and data acquisition simulations were performed to estimate the systematic uncertainties. Additionally, the Moller and Compton polarimeters were cross calibrated at low electron beam currents using a series of interleaved measurements. In this dissertation, we describe all the subsystems of the Compton polarimeter with emphasis on the electron detector. We focus on the FPGA based data acquisition system built by the author and the data analysis methods implemented by the author. The simulations of the data acquisition and the polarimeter that helped rigorously establish the systematic uncertainties of the polarimeter are also elaborated, resulting in the first sub 1% measurement of low energy (~1GeV) electron beam polarization with a Compton electron detector. We have demonstrated that diamond based micro-strip detectors can be used for tracking in a
Narayan, Amrendra
2015-05-01
The Q-weak experiment aims to measure the weak charge of proton with a precision of 4.2%. The proposed precision on weak charge required a 2.5% measurement of the parity violating asymmetry in elastic electron - proton scattering. Polarimetry was the largest experimental contribution to this uncertainty and a new Compton polarimeter was installed in Hall C at Jefferson Lab to make the goal achievable. In this polarimeter the electron beam collides with green laser light in a low gain Fabry-Perot Cavity; the scattered electrons are detected in 4 planes of a novel diamond micro strip detector while the back scattered photons are detected in lead tungstate crystals. This diamond micro-strip detector is the first such device to be used as a tracking detector in a nuclear and particle physics experiment. The diamond detectors are read out using custom built electronic modules that include a preamplifier, a pulse shaping amplifier and a discriminator for each detector micro-strip. We use field programmable gate array based general purpose logic modules for event selection and histogramming. Extensive Monte Carlo simulations and data acquisition simulations were performed to estimate the systematic uncertainties. Additionally, the Moller and Compton polarimeters were cross calibrated at low electron beam currents using a series of interleaved measurements. In this dissertation, we describe all the subsystems of the Compton polarimeter with emphasis on the electron detector. We focus on the FPGA based data acquisition system built by the author and the data analysis methods implemented by the author. The simulations of the data acquisition and the polarimeter that helped rigorously establish the systematic uncertainties of the polarimeter are also elaborated, resulting in the first sub 1% measurement of low energy (?1 GeV) electron beam polarization with a Compton electron detector. We have demonstrated that diamond based micro-strip detectors can be used for tracking in a
An additional uncertainty of the throughput generated by the constant pressure gas flowmeter
NASA Astrophysics Data System (ADS)
Peksa, L.; Gronych, T.; Řepa, P.; Wild, J.; Tesař, J.; Pražák, D.; Krajíček, Z.; Vičar, M.
2008-03-01
The lower range limit of constant pressure gas flowmeters is about 10-8 Pa×m3/s. Detrimental gas throughputs caused by leaks and gassing from surfaces prevent from its decrease. Even if the flowmeter is entirely vacuum tight the throughput caused by the outgassing from surfaces can be sufficiently reduced only by pumping at elevated temperature. It can be performed with the flowmeters using directly driven bellows or diaphragm bellows in the volume displacers. Despite it, the lower range limit can hardly be decreased more than several ten times with up to now known designs. An additional uncertainty caused by the difference in pressure at the initial and final instant of measurement will increase at generating small throughputs to the extent that it will kill the measurement.
NASA Astrophysics Data System (ADS)
Cochran, J. R.; Tinto, K. J.; Elieff, S. H.; Bell, R. E.
2011-12-01
Airborne geophysical surveys in West Antarctica and Greenland carried out during Operation IceBridge (OIB) utilized the Sander Geophysics AIRGrav gravimeter, which collects high quality data during low-altitude, draped flights. This data has been used to determine bathymetry beneath ice shelves and floating ice tongues (e.g., Tinto et al, 2010, Cochran et al, 2010). This paper systematically investigates uncertainties arising from survey, instrumental and geologic constraints in this type of study and the resulting resolution of the bathymetry model. Gravity line data is low-pass filtered with time-based filters to remove high frequency noise. The spatial filter length is dependent on aircraft speed. For parameters used in OIB (70-140 s filters and 270-290 knots), spatial filter half-wavelengths are ~5-10 km. The half-wavelength does not define a lower limit to the width of feature that can be detected, but shorter wavelength features may appear wider with a lower amplitude. Resolution can be improved either by using a shorter filter or by flying slower. Both involve tradeoffs; a shorter filter allows more noise and slower speeds result in less coverage. These filters are applied along tracks, rather than in a region surrounding a measurement. In areas of large gravity relief, tracks in different directions can sample a very different range of gravity values within the length of the filter. We show that this can lead to crossover mismatches of >5 mGal, complicating interpretation. For dense surveys, gridding the data and then sampling the grid at the measurement points can minimize this effect. Resolution is also affected by the elevation of survey flights. For a distributed mass, the gravity amplitude decreases with distance and short-wavelength components attenuate faster. This is not a serious issue for OIB, which flew draped flights <500 m above the ice surface, but is a serious factor for gravimeters that require a constant elevation above the highest
Amoush, Ahmad; Abdel-Wahab, May; Abazeed, Mohamed; Xia, Ping
2015-01-01
The purpose of this study was to quantify the systematic uncertainties resulting from using free breathing computed tomography (FBCT) as a reference image for image-guided radiation therapy (IGRT) for patients with pancreatic tumors, and to quantify the associated dosimetric impact that resulted from using FBCT as reference for IGRT. Fifteen patients with implanted fiducial markers were selected for this study. For each patient, a FBCT and an average intensity projection computed tomography (AIP) created from four-dimensional computed tomography (4D CT) were acquired at the simulation. The treatment plan was created based on the FBCT. Seventy-five weekly kilovoltage (kV) cone-beam computed tomography (CBCT) images (five for each patient) were selected for this study. Bony alignment without rotation correction was performed 1) between the FBCT and CBCT, 2) between the AIP and CBCT, and 3) between the AIP and FBCT. The contours of the fiducials from the FBCT and AIP were transferred to the corresponding CBCT and were compared. Among the 75 CBCTs, 20 that had > 3 mm differences in centers of mass (COMs) in any directions between the FBCT and AIP were chosen for further dosimetric analysis. These COM discrepancies were converted into isocenter shifts in the corresponding planning FBCT, and dose was recalculated and compared to the initial FBCT plans. For the 75 CBCTs studied, the mean absolute differences in the COMs of the fiducial markers between the FBCT and CBCTs were 3.3 mm ± 2.5 mm, 3.5 mm ± 2.4 mm, and 5.8 mm ± 4.4 mm in the right-left (RL), anterior-posterior (AP), and superior-inferior (SI) directions, respectively. Between the AIP and CBCTs, the mean absolute differences were 3.2 mm ± 2.2mm, 3.3 mm ± 2.3 mm, and 6.3 mm ± 5.4 mm. The absolute mean discrepancies in these COMs shifts between FBCT/CBCT and AIP/CBCT were 1.1 mm ± 0.8 mm, 1.3 mm ± 0.9 mm, and 3.3 mm ± 2.6 mm in RL, AP, and SI, respectively. This represented a potential systematic error
NASA Astrophysics Data System (ADS)
Galli, Silvia; Slatyer, Tracy R.; Valdes, Marcos; Iocco, Fabio
2013-09-01
Anisotropies of the cosmic microwave background (CMB) have proven to be a very powerful tool to constrain dark matter annihilation at the epoch of recombination. However, CMB constraints are currently derived using a number of reasonable but yet untested assumptions that could potentially lead to a misestimation of the true bounds (or any reconstructed signal). In this paper we examine the potential impact of these systematic effects. In particular, we separately study the propagation of the secondary particles produced by annihilation in two energy regimes: first following the shower from the initial particle energy to the keV scale, and then tracking the resulting secondary particles from this scale to the absorption of their energy as heat, ionization, or excitation of the medium. We improve both the high- and low-energy parts of the calculation, in particular finding that our more accurate treatment of losses to sub-10.2 eV photons produced by scattering of high-energy electrons weakens the constraints on particular dark matter annihilation models by up to a factor of 2. On the other hand, we find that the uncertainties we examine for the low-energy propagation do not significantly affect the results for current and upcoming CMB data. We include the evaluation of the precise amount of excitation energy, in the form of Lyman-α photons, produced by the propagation of the shower, and examine the effects of varying the helium fraction and helium ionization fraction. In the recent literature, simple approximations for the fraction of energy absorbed in different channels have often been used to derive CMB constraints: we assess the impact of using accurate vs approximate energy fractions. Finally we check that the choice of recombination code (between RECFAST v1.5 and COSMOREC), to calculate the evolution of the free electron fraction in the presence of dark matter annihilation, introduces negligible differences.
Aad, G.
2015-01-15
The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector using proton–proton collision data with a centre-of-mass energy of \\(\\sqrt{s}=7\\) TeV corresponding to an integrated luminosity of \\(4.7\\) \\(\\,\\,\\text{ fb }^{-1}\\). Jets are reconstructed from energy deposits forming topological clusters of calorimeter cells using the anti-\\(k_{t}\\) algorithm with distance parameters \\(R=0.4\\) or \\(R=0.6\\), and are calibrated using MC simulations. A residual JES correction is applied to account for differences between data and MC simulations. This correction and its systematic uncertainty are estimated using a combination of in situ techniques exploiting the transverse momentum balance between a jet and a reference object such as a photon or a \\(Z\\) boson, for \\({20} \\le p_{\\mathrm {T}}^\\mathrm {jet}<{1000}\\, ~\\mathrm{GeV }\\) and pseudorapidities \\(|\\eta |<{4.5}\\). The effect of multiple proton–proton interactions is corrected for, and an uncertainty is evaluated using in situ techniques. The smallest JES uncertainty of less than 1 % is found in the central calorimeter region (\\(|\\eta |<{1.2}\\)) for jets with \\({55} \\le p_{\\mathrm {T}}^\\mathrm {jet}<{500}\\, ~\\mathrm{GeV }\\). For central jets at lower \\(p_{\\mathrm {T}}\\), the uncertainty is about 3 %. A consistent JES estimate is found using measurements of the calorimeter response of single hadrons in proton–proton collisions and test-beam data, which also provide the estimate for \\(p_{\\mathrm {T}}^\\mathrm {jet}> 1\\) TeV. The calibration of forward jets is derived from dijet \\(p_{\\mathrm {T}}\\) balance measurements. The resulting uncertainty reaches its largest value of 6 % for low-\\(p_{\\mathrm {T}}\\) jets at \\(|\\eta |=4.5\\). In addition, JES uncertainties due to specific event topologies, such as close-by jets or selections of event samples with an enhanced content of jets originating from light quarks or
Aad, G.
2015-01-15
The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector using proton–proton collision data with a centre-of-mass energy of \\(\\sqrt{s}=7\\) TeV corresponding to an integrated luminosity of \\(4.7\\) \\(\\,\\,\\text{ fb }^{-1}\\). Jets are reconstructed from energy deposits forming topological clusters of calorimeter cells using the anti-\\(k_{t}\\) algorithm with distance parameters \\(R=0.4\\) or \\(R=0.6\\), and are calibrated using MC simulations. A residual JES correction is applied to account for differences between data and MC simulations. This correction and its systematic uncertainty are estimated using a combination of in situ techniques exploiting the transversemore » momentum balance between a jet and a reference object such as a photon or a \\(Z\\) boson, for \\({20} \\le p_{\\mathrm {T}}^\\mathrm {jet}<{1000}\\, ~\\mathrm{GeV }\\) and pseudorapidities \\(|\\eta |<{4.5}\\). The effect of multiple proton–proton interactions is corrected for, and an uncertainty is evaluated using in situ techniques. The smallest JES uncertainty of less than 1 % is found in the central calorimeter region (\\(|\\eta |<{1.2}\\)) for jets with \\({55} \\le p_{\\mathrm {T}}^\\mathrm {jet}<{500}\\, ~\\mathrm{GeV }\\). For central jets at lower \\(p_{\\mathrm {T}}\\), the uncertainty is about 3 %. A consistent JES estimate is found using measurements of the calorimeter response of single hadrons in proton–proton collisions and test-beam data, which also provide the estimate for \\(p_{\\mathrm {T}}^\\mathrm {jet}> 1\\) TeV. The calibration of forward jets is derived from dijet \\(p_{\\mathrm {T}}\\) balance measurements. The resulting uncertainty reaches its largest value of 6 % for low-\\(p_{\\mathrm {T}}\\) jets at \\(|\\eta |=4.5\\). In addition, JES uncertainties due to specific event topologies, such as close-by jets or selections of event samples with an enhanced content of jets originating from light
Sensory uncertainty leads to systematic misperception of the direction of motion in depth.
Fulvio, Jacqueline M; Rosen, Monica L; Rokers, Bas
2015-07-01
Although we have made major advances in understanding motion perception based on the processing of lateral (2D) motion signals on computer displays, the majority of motion in the real (3D) world occurs outside of the plane of fixation, and motion directly toward or away from observers has particular behavioral relevance. Previous work has reported a systematic lateral bias in the perception of 3D motion, such that an object on a collision course with an observer's head is frequently judged to miss it, with obvious negative consequences. To better understand this bias, we systematically investigated the accuracy of 3D motion perception while manipulating sensory noise by varying the contrast of a moving target and its position in depth relative to fixation. Inconsistent with previous work, we found little bias under low sensory noise conditions. With increased sensory noise, however, we revealed a novel perceptual phenomenon: observers demonstrated a surprising tendency to confuse the direction of motion-in-depth, such that approaching objects were reported to be receding and vice versa. Subsequent analysis revealed that the lateral and motion-in-depth components of observers' reports are similarly affected, but that the effects on the motion-in-depth component (i.e., the motion-in-depth confusions) are much more apparent than those on the lateral component. In addition to revealing this novel visual phenomenon, these results shed new light on errors that can occur in motion perception and provide a basis for continued development of motion perception models. Finally, our findings suggest methods to evaluate the effectiveness of 3D visualization environments, such as 3D movies and virtual reality devices. PMID:25828462
Systematic uncertainties in RF-based measurement of superconducting cavity quality factors
NASA Astrophysics Data System (ADS)
Holzbauer, J. P.; Pischalnikov, Yu.; Sergatskov, D. A.; Schappert, W.; Smith, S.
2016-09-01
Q0 determinations based on RF power measurements are subject to at least three potentially large systematic effects that have not been previously appreciated. Instrumental factors that can systematically bias RF based measurements of Q0 are quantified and steps that can be taken to improve the determination of Q0 are discussed.
Systematic uncertainties in RF-based measurement of superconducting cavity quality factors
Holzbauer, J. P.; Pischalnikov, Yu.; Sergatskov, D. A.; Schappert, W.; Smith, S.
2016-05-10
Q0 determinations based on RF power measurements are subject to at least three potentially large systematic effects that have not been previously appreciated. Here, instrumental factors that can systematically bias RF based measurements of Q0 are quantified and steps that can be taken to improve the determination of Q0 are discussed.
Juhasz, A.; Henning, Th.; Bouwman, J.; Dullemond, C. P.; Pascucci, I.; Apai, D.
2009-04-20
The spectral region around 10 {mu}m, showing prominent emission bands from various dust species is commonly used for the evaluation of the chemical composition of protoplanetary dust. Different methods of analysis have been proposed for this purpose, but so far, no comparative test has been performed to test the validity of their assumptions. In this paper, we evaluate how good the various methods are in deriving the chemical composition of dust grains from infrared spectroscopy. Synthetic spectra of disk models with different geometries and central sources were calculated, using a two-dimensional radiative transfer code. These spectra were then fitted in a blind test by four spectral decomposition methods. We studied the effect of disk structure (flared versus flat), inclination angle, size of an inner disk hole, and stellar luminosity on the fitted chemical composition. Our results show that the dust parameters obtained by all methods deviate systematically from the input data of the synthetic spectra. The dust composition fitted by the new two-layer temperature distribution method, described in this paper, differs the least from the input dust composition and the results show the weakest systematic effects. The reason for the deviations of the results given by the previously used methods lies in their simplifying assumptions. Due to the radial extent of the 10 {mu}m emitting region there is dust at different temperatures contributing to the flux in the silicate feature. Therefore, the assumption of a single averaged grain temperature can be a strong limitation of the previously used methods. The continuum below the feature can consist of multiple components (e.g., star, inner rim, and disk midplane), which cannot simply be described by a Planck function at a single temperature. In addition, the optically thin emission of 'featureless' grains (e.g., carbon in the considered wavelength range) produces a degeneracy in the models with the optically thick emission of
Reducing model uncertainty effects in flexible manipulators through the addition of passive damping
NASA Technical Reports Server (NTRS)
Alberts, T. E.
1987-01-01
An important issue in the control of practical systems is the effect of model uncertainty on closed loop performance. This is of particular concern when flexible structures are to be controlled, due to the fact that states associated with higher frequency vibration modes are truncated in order to make the control problem tractable. Digital simulations of a single-link manipulator system are employed to demonstrate that passive damping added to the flexible member reduces adverse effects associated with model uncertainty. A controller was designed based on a model including only one flexible mode. This controller was applied to larger order systems to evaluate the effects of modal truncation. Simulations using a Linear Quadratic Regulator (LQR) design assuming full state feedback illustrate the effect of control spillover. Simulations of a system using output feedback illustrate the destabilizing effect of observation spillover. The simulations reveal that the system with passive damping is less susceptible to these effects than the untreated case.
Enhanced flux pinning in MOCVD-YBCO films through Zr-additions:Systematic feasibility studies
Aytug, Tolga; Paranthaman, Mariappan Parans; Specht, Eliot D; Kim, Kyunghoon; Zhang, Yifei; Cantoni, Claudia; Zuev, Yuri L; Goyal, Amit; Christen, David K; Maroni, Victor A.
2009-01-01
Systematic effects of Zr additions on the structural and flux pinning properties of YBa{sub 2}Cu{sub 3}O{sub 7-{delta}} (YBCO) films deposited by metal-organic chemical vapor deposition (MOCVD) have been investigated. Detailed characterization, conducted by coordinated transport, x-ray diffraction, scanning and transmission electron microscopy analyses, and imaging Raman microscopy have revealed trends in the resulting property/performance correlations of these films with respect to varying mole percentages (mol%) of added Zr. For compositions {le} 7.5 mol%, Zr additions lead to improved in-field critical current density, as well as extra correlated pinning along the c-axis direction of the YBCO films via the formation of columnar, self-assembled stacks of BaZrO{sub 3} nanodots.
Enhanced flux pinning in MOCVD-YBCO films through Zr additions : systematic feasibility studies.
Aytug, T.; Paranthaman, M.; Specht, E. D.; Zhang, Y.; Kim, K.; Zuev, Y. L.; Cantoni, C.; Goyal, A.; Christen, D. K.; Maroni, V. A.; Chen, Y.; Selvamanickam, V.; ORNL; SuperPower, Inc.
2010-01-01
Systematic effects of Zr additions on the structural and flux pinning properties of YBa{sub 2}Cu{sub 3}O{sub 7-{delta}} (YBCO) films deposited by metal-organic chemical vapor deposition (MOCVD) have been investigated. Detailed characterization, conducted by coordinated transport, x-ray diffraction, scanning and transmission electron microscopy analyses, and imaging Raman microscopy have revealed trends in the resulting property/performance correlations of these films with respect to varying mole percentages (mol%) of added Zr. For compositions {le} 7.5 mol%, Zr additions lead to improved in-field critical current density, as well as extra correlated pinning along the c-axis direction of the YBCO films via the formation of columnar, self-assembled stacks of BaZrO{sub 3} nanodots.
Trapped ion 88Sr+ optical clock systematic uncertainties - AC Stark shift determination
NASA Astrophysics Data System (ADS)
Barwood, GP; Huang, G.; King, SA; Klein, HA; Gill, P.
2016-06-01
A recent comparison between two trapped-ion 88Sr+ optical clocks at the UK. National Physical Laboratory demonstrated agreement to 4 parts in 1017. One of the uncertainty contributions to the optical clock absolute frequency arises from the blackbody radiation shift which in turn depends on uncertainty in the knowledge of the differential polarisability between the two clocks states. Whilst a recent NRC measurement has determined the DC differential polarisability to high accuracy, there has been no experimental verification to date of the dynamic correction to the DC Stark shift. We report a measurement of the scalar AC Stark shift at 1064 nm with measurements planned at other wavelengths. Our preliminary result using a fibre laser at 1064 nm agrees with calculated values to within ∼3%.
NASA Astrophysics Data System (ADS)
High Resolution Fly'S Eye Collaboration; Abu-Zayyad, T.; Amman, J. F.; Archbold, G.; Belov, K.; Belz, J. W.; Ben Zvi, S. Y.; Bergman, D. R.; Blake, S. A.; Brusova, O.; Burt, G. W.; Cao, Z.; Connolly, B. C.; Deng, W.; Fedorova, Y.; Finley, C. B.; Gray, R. C.; Hanlon, W. F.; Hoffman, C. M.; Hughes, G. A.; Holzscheiter, M. H.; Hüntemeyer, P.; Jones, B. F.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Loh, E. C.; Maestas, M. M.; Manago, N.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; Moore, S. A.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M.; Rodriguez, D.; Sasaki, M.; Schnetzer, S. R.; Scott, L. M.; Sinnis, G.; Smith, J. D.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.; Zhang, X.
2007-06-01
We have studied several sources of systematic uncertainty in calculating the aperture of the High Resolution Fly’s Eye experiment (HiRes) in monocular mode, primarily as they affect the HiRes-II site. The energy dependent aperture is determined with detailed Monte Carlo simulations of the air showers and the detector response. We have studied the effects of changes to the input energy spectrum and composition used in the simulation. A realistic shape of the input spectrum is used in our analysis in order to avoid biases in the aperture estimate due to the limited detector resolution. We have examined the effect of exchanging our input spectrum with a simple E-3 power law in the “ankle” region. Uncertainties in the input composition are shown to be significant for energies below ˜1018 eV for data from the HiRes-II detector. Another source of uncertainties is the choice of the hadronic interaction model in the air shower generator. We compare the aperture estimate for two different models: QGSJet01 and SIBYLL 2.1. We also describe the implications of employing an atmospheric database with hourly measurements of the aerosol component, instead of using an average as has been used in our previously published measurements of the monocular spectra.
GARNATJE, TERESA; GARCIA, SÒNIA; VILATERSANA, ROSER; VALLÈS, JOAN
2006-01-01
• Background and Aims Plant genome size is an important biological characteristic, with relationships to systematics, ecology and distribution. Currently, there is no information regarding nuclear DNA content for any Carthamus species. In addition to improving the knowledge base, this research focuses on interspecific variation and its implications for the infrageneric classification of this genus. Genome size variation in the process of allopolyploid formation is also addressed. • Methods Nuclear DNA samples from 34 populations of 16 species of the genus Carthamus were assessed by flow cytometry using propidium iodide. • Key Results The 2C values ranged from 2·26 pg for C. leucocaulos to 7·46 pg for C. turkestanicus, and monoploid genome size (1Cx-value) ranged from 1·13 pg in C. leucocaulos to 1·53 pg in C. alexandrinus. Mean genome sizes differed significantly, based on sectional classification. Both allopolyploid species (C. creticus and C. turkestanicus) exhibited nuclear DNA contents in accordance with the sum of the putative parental C-values (in one case with a slight reduction, frequent in polyploids), supporting their hybrid origin. • Conclusions Genome size represents a useful tool in elucidating systematic relationships between closely related species. A considerable reduction in monoploid genome size, possibly due to the hybrid formation, is also reported within these taxa. PMID:16390843
No additional value of fusion techniques on anterior discectomy for neck pain: a systematic review.
van Middelkoop, Marienke; Rubinstein, Sidney M; Ostelo, Raymond; van Tulder, Maurits W; Peul, Wilco; Koes, Bart W; Verhagen, Arianne P
2012-11-01
We aimed to assess the effects of additional fusion on surgical interventions to the cervical spine for patients with neck pain with or without radiculopathy or myelopathy by performing a systematic review. The search strategy outlined by the Cochrane Back Review Group (CBRG) was followed. The primary search was conducted in MEDLINE, EMBASE, CINAHL, CENTRAL and PEDro up to June 2011. Only randomised, controlled trials of adults with neck pain that evaluated at least one clinically relevant primary outcome measure (pain, functional status, recovery) were included. Two authors independently assessed the risk of bias by using the criteria recommended by the CBRG and extracted the data. Data were pooled using a random effects model. The quality of the evidence was rated using the GRADE method. In total, 10 randomised, controlled trials were identified comparing additional fusion upon anterior decompression techniques, including 2 studies with a low risk of bias. Results revealed no clinically relevant differences in recovery: the pooled risk difference in the short-term follow-up was -0.06 (95% confidence interval -0.22 to 0.10) and -0.07 (95% confidence interval -0.14 to 0.00) in the long-term follow-up. Pooled risk differences for pain and return to work all demonstrated no differences. There is no additional benefit of fusion techniques applied within an anterior discectomy procedure on pain, recovery and return to work. PMID:22818181
A new approach to systematic uncertainties and self-consistency in helium abundance determinations
Aver, Erik; Olive, Keith A.; Skillman, Evan D. E-mail: olive@umn.edu
2010-05-01
Tests of big bang nucleosynthesis and early universe cosmology require precision measurements for helium abundance determinations. However, efforts to determine the primordial helium abundance via observations of metal poor H II regions have been limited by significant uncertainties (compared with the value inferred from BBN theory using the CMB determined value of the baryon density). This work builds upon previous work by providing an updated and extended program in evaluating these uncertainties. Procedural consistency is achieved by integrating the hydrogen based reddening correction with the helium based abundance calculation, i.e., all physical parameters are solved for simultaneously. We include new atomic data for helium recombination and collisional emission based upon recent work by Porter \\etal and wavelength dependent corrections to underlying absorption are investigated. The set of physical parameters has been expanded here to include the effects of neutral hydrogen collisional emission. It is noted that Hγ and Hδ allow better isolation of the collisional effects from the reddening. Because of a degeneracy between the solutions for density and temperature, the precision of the helium abundance determinations is limited. Also, at lower temperatures (T ∼< 13,000 K) the neutral hydrogen fraction is poorly constrained resulting in a larger uncertainty in the helium abundances. Thus, the derived errors on the helium abundances for individual objects are larger than those typical of previous studies. Seven previously analyzed, ''high quality'' H II region spectra are used for a primordial helium abundance determination. The updated emissivities and neutral hydrogen correction generally raise the abundance. From a regression to zero metallicity, we find Y{sub p} as 0.2561 ± 0.0108, in broad agreement with the WMAP result. Alternatively, a simple average of the data yields Y{sub p} 0.2566 ± 0.0028. Tests with synthetic data show a potential for distinct
Kulkarni, Sonali P; Shah, Kavita R; Sarma, Karthik V; Mahajan, Anish P
2013-06-01
Despite the HIV "test-and-treat" strategy's promise, questions about its clinical rationale, operational feasibility, and ethical appropriateness have led to vigorous debate in the global HIV community. We performed a systematic review of the literature published between January 2009 and May 2012 using PubMed, SCOPUS, Global Health, Web of Science, BIOSIS, Cochrane CENTRAL, EBSCO Africa-Wide Information, and EBSCO CINAHL Plus databases to summarize clinical uncertainties, health service challenges, and ethical complexities that may affect the test-and-treat strategy's success. A thoughtful approach to research and implementation to address clinical and health service questions and meaningful community engagement regarding ethical complexities may bring us closer to safe, feasible, and effective test-and-treat implementation. PMID:23597344
NASA Astrophysics Data System (ADS)
Reed, P. M.; Kollat, J. B.
2012-01-01
This study demonstrates how many-objective long-term groundwater monitoring (LTGM) network design tradeoffs evolve across multiple management periods given systematic models errors (i.e., predictive bias), groundwater flow-and-transport forecasting uncertainties, and contaminant observation uncertainties. Our analysis utilizes the Adaptive Strategies for Sampling in Space and Time (ASSIST) framework, which is composed of three primary components: (1) bias-aware Ensemble Kalman Filtering, (2) many-objective hierarchical Bayesian optimization, and (3) interactive visual analytics for understanding spatiotemporal network design tradeoffs. A physical aquifer experiment is utilized to develop a severely challenging multi-period observation system simulation experiment (OSSE) that reflects the challenges and decisions faced in monitoring contaminated groundwater systems. The experimental aquifer OSSE shows both the influence and consequences of plume dynamics as well as alternative cost-savings strategies in shaping how LTGM many-objective tradeoffs evolve. Our findings highlight the need to move beyond least cost purely statistical monitoring frameworks to consider many-objective evaluations of LTGM tradeoffs. The ASSIST framework provides a highly flexible approach for measuring the value of observables that simultaneously improves how the data are used to inform decisions.
M dwarf metallicities and giant planet occurrence: Ironing out uncertainties and systematics
Gaidos, Eric; Mann, Andrew W.
2014-08-10
Comparisons between the planet populations around solar-type stars and those orbiting M dwarfs shed light on the possible dependence of planet formation and evolution on stellar mass. However, such analyses must control for other factors, i.e., metallicity, a stellar parameter that strongly influences the occurrence of gas giant planets. We obtained infrared spectra of 121 M dwarfs stars monitored by the California Planet Search and determined metallicities with an accuracy of 0.08 dex. The mean and standard deviation of the sample are –0.05 and 0.20 dex, respectively. We parameterized the metallicity dependence of the occurrence of giant planets on orbits with a period less than two years around solar-type stars and applied this to our M dwarf sample to estimate the expected number of giant planets. The number of detected planets (3) is lower than the predicted number (6.4), but the difference is not very significant (12% probability of finding as many or fewer planets). The three M dwarf planet hosts are not especially metal rich and the most likely value of the power-law index relating planet occurrence to metallicity is 1.06 dex per dex for M dwarfs compared to 1.80 for solar-type stars; this difference, however, is comparable to uncertainties. Giant planet occurrence around both types of stars allows, but does not necessarily require, a mass dependence of ∼1 dex per dex. The actual planet-mass-metallicity relation may be complex, and elucidating it will require larger surveys like those to be conducted by ground-based infrared spectrographs and the Gaia space astrometry mission.
NASA Astrophysics Data System (ADS)
Salaris, M.; Cassisi, S.
2007-01-01
Context: Age and metallicity estimates for extragalactic globular clusters, from integrated colour-colour diagrams, are examined. Aims: We investigate biases in cluster ages and [Fe/H] estimated from the (V-K)-(V-I) diagram, arising from inconsistent Horizontal Branch morphology, metal mixture, treatment of core convection between observed clusters and the theoretical colour grid employed for age and metallicity determinations. We also study the role played by statistical fluctuations of the observed colours, caused by the low total mass of typical globulars. Methods: Synthetic samples of globular cluster systems are created, by means of Monte-Carlo techniques. Each sample accounts for a different possible source of bias, among the ones addressed in this investigation. Cumulative age and [Fe/H] distributions are then retrieved by comparisons with a reference theoretical colour-colour grid, and analyzed. Results: Horizontal Branch morphology is potentially the largest source of uncertainty. A single-age system harbouring a large fraction of clusters with an HB morphology systematically bluer than the one accounted for in the theoretical colour grid, can simulate a bimodal population with an age difference as large as ~8 Gyr. When only the redder clusters are considered, this uncertainty is almost negligible, unless there is an extreme mass loss along the Red Giant Branch phase. The metal mixture affects mainly the redder clusters; the effect of colour fluctuations becomes negligible for the redder clusters, or when the integrated MV is brighter than ~-8.5 mag. The treatment of core convection is relevant for ages below ~4 Gyr. The retrieved cumulative [Fe/H] distributions are overall only mildly affected. Colour fluctuations and convective core extension have the largest effect. When 1σ photometric errors reach 0.10 mag, all biases found in our analysis are erased, and bimodal age populations with age differences of up to ~8 Gyr go undetected. The use of both (U
NASA Astrophysics Data System (ADS)
Morley, M. G.; Mihaly, S. F.; Dewey, R. K.; Jeffries, M. A.
2015-12-01
Ocean Networks Canada (ONC) operates the NEPTUNE and VENUS cabled ocean observatories to collect data on physical, chemical, biological, and geological ocean conditions over multi-year time periods. Researchers can download real-time and historical data from a large variety of instruments to study complex earth and ocean processes from their home laboratories. Ensuring that the users are receiving the most accurate data is a high priority at ONC, requiring quality assurance and quality control (QAQC) procedures to be developed for all data types. While some data types have relatively straightforward QAQC tests, such as scalar data range limits that are based on expected observed values or measurement limits of the instrument, for other data types the QAQC tests are more comprehensive. Long time series of ocean currents from Acoustic Doppler Current Profilers (ADCP), stitched together from multiple deployments over many years is one such data type where systematic data biases are more difficult to identify and correct. Data specialists at ONC are working to quantify systematic compass heading uncertainty in long-term ADCP records at each of the major study sites using the internal compass, remotely operated vehicle bearings, and more analytical tools such as principal component analysis (PCA) to estimate the optimal instrument alignments. In addition to using PCA, some work has been done to estimate the main components of the current at each site using tidal harmonic analysis. This paper describes the key challenges and presents preliminary PCA and tidal analysis approaches used by ONC to improve long-term observatory current measurements.
NASA Technical Reports Server (NTRS)
Smalheer, C. V.
1973-01-01
The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.
NASA Astrophysics Data System (ADS)
Anderson, Richard I.
2014-06-01
Context. Classical Cepheids are crucial calibrators of the extragalactic distance scale. The Baade-Wesselink technique can be used to calibrate Cepheid distances using Cepheids in the Galaxy and the Magellanic Clouds. Aims: I report the discovery of modulations in radial velocity (RV) curves of four Galactic classical Cepheids and investigate their impact as a systematic uncertainty for Baade-Wesselink distances. Methods: Highly precise Doppler measurements were obtained using the Coralie high-resolution spectrograph since 2011. Particular care was taken to sample all phase points in order to very accurately trace the RV curve during multiple epochs and to search for differences in linear radius variations derived from observations obtained at different epochs. Different timescales are sampled, ranging from cycle-to-cycle to months and years. Results: The unprecedented combination of excellent phase coverage obtained during multiple epochs and high precision enabled the discovery of significant modulation in the RV curves of the short-period s-Cepheids QZ Normae and V335 Puppis, as well as the long-period fundamental mode Cepheids ℓ Carinae and RS Puppis. The modulations manifest as shape and amplitude variations that vary smoothly on timescales of years for short-period Cepheids and from one pulsation cycle to the next in the long-period Cepheids. The order of magnitude of the effect ranges from several hundred m s-1 to a few km s-1. The resulting difference among linear radius variations derived using data from different epochs can lead to systematic errors of up to 15% for Baade-Wesselink-type distances, if the employed angular and linear radius variations are not determined contemporaneously. Conclusions: The different natures of the Cepheids exhibiting modulation in their RV curves suggests that this phenomenon is common. The observational baseline is not yet sufficient to conclude whether these modulations are periodic. To ensure the accuracy of Baade
Páll-Gergely, Barna; Hunyadi, András; Ablett, Jonathan; Lương, Hào Văn; Fred Naggs; Asami, Takahiro
2015-01-01
Abstract Vietnamese species from the family Plectopylidae are revised based on the type specimens of all known taxa, more than 600 historical non-type museum lots, and almost 200 newly-collected samples. Altogether more than 7000 specimens were investigated. The revision has revealed that species diversity of the Vietnamese Plectopylidae was previously overestimated. Overall, thirteen species names (anterides Gude, 1909, bavayi Gude, 1901, congesta Gude, 1898, fallax Gude, 1909, gouldingi Gude, 1909, hirsuta Möllendorff, 1901, jovia Mabille, 1887, moellendorffi Gude, 1901, persimilis Gude, 1901, pilsbryana Gude, 1901, soror Gude, 1908, tenuis Gude, 1901, verecunda Gude, 1909) were synonymised with other species. In addition to these, Gudeodiscus hemmeni sp. n. and Gudeodiscus messageri raheemi ssp. n. are described from north-western Vietnam. Sixteen species and two subspecies are recognized from Vietnam. The reproductive anatomy of eight taxa is described. Based on anatomical information, Halongella gen. n. is erected to include Plectopylis schlumbergeri and Plectopylis fruhstorferi. Additionally, the genus Gudeodiscus is subdivided into two subgenera (Gudeodiscus and Veludiscus subgen. n.) on the basis of the morphology of the reproductive anatomy and the radula. The Chinese Gudeodiscus phlyarius werneri Páll-Gergely, 2013 is moved to synonymy of Gudeodiscus phlyarius. A spermatophore was found in the organ situated next to the gametolytic sac in one specimen. This suggests that this organ in the Plectopylidae is a diverticulum. Statistically significant evidence is presented for the presence of calcareous hook-like granules inside the penis being associated with the absence of embryos in the uterus in four genera. This suggests that these probably play a role in mating periods before disappearing when embryos develop. Sicradiscus mansuyi is reported from China for the first time. PMID:25632253
Keeling, V; Jin, H; Hossain, S; Ahmad, S; Ali, I
2014-06-15
Purpose: To evaluate setup accuracy and quantify individual systematic and random errors for the various hardware and software components of the frameless 6D-BrainLAB ExacTrac system. Methods: 35 patients with cranial lesions, some with multiple isocenters (50 total lesions treated in 1, 3, 5 fractions), were investigated. All patients were simulated with a rigid head-and-neck mask and the BrainLAB localizer. CT images were transferred to the IPLAN treatment planning system where optimized plans were generated using stereotactic reference frame based on the localizer. The patients were setup initially with infrared (IR) positioning ExacTrac system. Stereoscopic X-ray images (XC: X-ray Correction) were registered to their corresponding digitally-reconstructed-radiographs, based on bony anatomy matching, to calculate 6D-translational and rotational (Lateral, Longitudinal, Vertical, Pitch, Roll, Yaw) shifts. XC combines systematic errors of the mask, localizer, image registration, frame, and IR. If shifts were below tolerance (0.7 mm translational and 1 degree rotational), treatment was initiated; otherwise corrections were applied and additional X-rays were acquired to verify patient position (XV: X-ray Verification). Statistical analysis was used to extract systematic and random errors of the different components of the 6D-ExacTrac system and evaluate the cumulative setup accuracy. Results: Mask systematic errors (translational; rotational) were the largest and varied from one patient to another in the range (−15 to 4mm; −2.5 to 2.5degree) obtained from mean of XC for each patient. Setup uncertainty in IR positioning (0.97,2.47,1.62mm;0.65,0.84,0.96degree) was extracted from standard-deviation of XC. Combined systematic errors of the frame and localizer (0.32,−0.42,−1.21mm; −0.27,0.34,0.26degree) was extracted from mean of means of XC distributions. Final patient setup uncertainty was obtained from the standard deviations of XV (0.57,0.77,0.67mm,0
2014-01-01
Background Seventeen of 172 included studies in a recent systematic review of blood tests for hepatic fibrosis or cirrhosis reported diagnostic accuracy results discordant from 2 × 2 tables, and 60 studies reported inadequate data to construct 2 × 2 tables. This study explores the yield of contacting authors of diagnostic accuracy studies and impact on the systematic review findings. Methods Sixty-six corresponding authors were sent letters requesting additional information or clarification of data from 77 studies. Data received from the authors were synthesized with data included in the previous review, and diagnostic accuracy sensitivities, specificities, and positive and likelihood ratios were recalculated. Results Of the 66 authors, 68% were successfully contacted and 42% provided additional data for 29 out of 77 studies (38%). All authors who provided data at all did so by the third emailed request (ten authors provided data after one request). Authors of more recent studies were more likely to be located and provide data compared to authors of older studies. The effects of requests for additional data on the conclusions regarding the utility of blood tests to identify patients with clinically significant fibrosis or cirrhosis were generally small for ten out of 12 tests. Additional data resulted in reclassification (using median likelihood ratio estimates) from less useful to moderately useful or vice versa for the remaining two blood tests and enabled the calculation of an estimate for a third blood test for which previously the data had been insufficient to do so. We did not identify a clear pattern for the directional impact of additional data on estimates of diagnostic accuracy. Conclusions We successfully contacted and received results from 42% of authors who provided data for 38% of included studies. Contacting authors of studies evaluating the diagnostic accuracy of serum biomarkers for hepatic fibrosis and cirrhosis in hepatitis C patients
NASA Astrophysics Data System (ADS)
Miller, B.; O'Shaughnessy, R.; Littenberg, T. B.; Farr, B.
2015-08-01
Reliable low-latency gravitational wave parameter estimation is essential to target limited electromagnetic follow-up facilities toward astrophysically interesting and electromagnetically relevant sources of gravitational waves. In this study, we examine the trade-off between speed and accuracy. Specifically, we estimate the astrophysical relevance of systematic errors in the posterior parameter distributions derived using a fast-but-approximate waveform model, SpinTaylorF2 (stf2), in parameter estimation with lalinference_mcmc. Though efficient, the stf2 approximation to compact binary inspiral employs approximate kinematics (e.g., a single spin) and an approximate waveform (e.g., frequency domain versus time domain). More broadly, using a large astrophysically motivated population of generic compact binary merger signals, we report on the effectualness and limitations of this single-spin approximation as a method to infer parameters of generic compact binary sources. For most low-mass compact binary sources, we find that the stf2 approximation estimates compact binary parameters with biases comparable to systematic uncertainties in the waveform. We illustrate by example the effect these systematic errors have on posterior probabilities most relevant to low-latency electromagnetic follow-up: whether the secondary has a mass consistent with a neutron star (NS); whether the masses, spins, and orbit are consistent with that neutron star's tidal disruption; and whether the binary's angular momentum axis is oriented along the line of sight.
NASA Astrophysics Data System (ADS)
Iorio, Lorenzo
2009-12-01
We deal with the attempts to measure the Lense-Thirring effect with the Satellite Laser Ranging (SLR) technique applied to the existing LAGEOS and LAGEOS II terrestrial satellites and to the recently approved LARES spacecraft. According to general relativity, a central spinning body of mass M and angular momentum S like the Earth generates a gravitomagnetic field which induces small secular precessions of the orbit of a test particle geodesically moving around it. Extracting this signature from the data is a demanding task because of many classical orbital perturbations having the same pattern as the gravitomagnetic one, like those due to the centrifugal oblateness of the Earth which represents a major source of systematic bias. The first issue addressed here is: are the so far published evaluations of the systematic uncertainty induced by the bad knowledge of the even zonal harmonic coefficients J ℓ of the multipolar expansion of the Earth’s geopotential reliable and realistic? Our answer is negative. Indeed, if the differences Δ J ℓ among the even zonals estimated in different Earth’s gravity field global solutions from the dedicated GRACE mission are assumed for the uncertainties δ J ℓ instead of using their covariance sigmas σ_{J_{ell}} , it turns out that the systematic uncertainty δ μ in the Lense-Thirring test with the nodes Ω of LAGEOS and LAGEOS II may be up to 3 to 4 times larger than in the evaluations so far published (5-10%) based on the use of the sigmas of one model at a time separately. The second issue consists of the possibility of using a different approach in extracting the relativistic signature of interest from the LAGEOS-type data. The third issue is the possibility of reaching a realistic total accuracy of 1% with LAGEOS, LAGEOS II and LARES, which should be launched in November 2009 with a VEGA rocket. While LAGEOS and LAGEOS II fly at altitudes of about 6000 km, LARES will be likely placed at an altitude of 1450 km. Thus
Parolini, Filippo; Indolfi, Giuseppe; Magne, Miguel Garcia; Salemme, Marianna; Cheli, Maurizio; Boroni, Giovanni; Alberti, Daniele
2016-01-01
AIM: To investigate the diagnostic and therapeutic assessment in children with adenomyomatosis of the gallbladder (AMG). METHODS: AMG is a degenerative disease characterized by a proliferation of the mucosal epithelium which deeply invaginates and extends into the thickened muscular layer of the gallbladder, causing intramural diverticula. Although AMG is found in up to 5% of cholecystectomy specimens in adult populations, this condition in childhood is extremely uncommon. Authors provide a detailed systematic review of the pediatric literature according to PRISMA guidelines, focusing on diagnostic and therapeutic assessment. An additional case of AMG is also presented. RESULTS: Five studies were finally enclosed, encompassing 5 children with AMG. Analysis was extended to our additional 11-year-old patient, who presented diffuse AMG and pancreatic acinar metaplasia of the gallbladder mucosa and was successfully managed with laparoscopic cholecystectomy. Mean age at presentation was 7.2 years. Unspecific abdominal pain was the commonest symptom. Abdominal ultrasound was performed on all patients, with a diagnostic accuracy of 100%. Five patients underwent cholecystectomy, and at follow-up were asymptomatic. In the remaining patient, completely asymptomatic at diagnosis, a conservative approach with monthly monitoring via ultrasonography was undertaken. CONCLUSION: Considering the remote but possible degeneration leading to cancer and the feasibility of laparoscopic cholecystectomy even in small children, evidence suggests that elective laparoscopic cholecystectomy represent the treatment of choice. Pre-operative evaluation of the extrahepatic biliary tree anatomy with cholangio-MRI is strongly recommended. PMID:27170933
van Ochten, John; Luijsterburg, Pim A J; van Middelkoop, Marienke; Koes, Bart W; Bierma-Zeinstra, Sita M A
2010-01-01
Objective To summarise the effectiveness of adding supervised exercises to conventional treatment compared with conventional treatment alone in patients with acute lateral ankle sprains. Design Systematic review. Data sources Medline, Embase, Cochrane Central Register of Controlled Trials, Cinahl, and reference screening. Study selection Included studies were randomised controlled trials, quasi-randomised controlled trials, or clinical trials. Patients were adolescents or adults with an acute lateral ankle sprain. The treatment options were conventional treatment alone or conventional treatment combined with supervised exercises. Two reviewers independently assessed the risk of bias, and one reviewer extracted data. Because of clinical heterogeneity we analysed the data using a best evidence synthesis. Follow-up was classified as short term (up to two weeks), intermediate (two weeks to three months), and long term (more than three months). Results 11 studies were included. There was limited to moderate evidence to suggest that the addition of supervised exercises to conventional treatment leads to faster and better recovery and a faster return to sport at short term follow-up than conventional treatment alone. In specific populations (athletes, soldiers, and patients with severe injuries) this evidence was restricted to a faster return to work and sport only. There was no strong evidence of effectiveness for any of the outcome measures. Most of the included studies had a high risk of bias, with few having adequate statistical power to detect clinically relevant differences. Conclusion Additional supervised exercises compared with conventional treatment alone have some benefit for recovery and return to sport in patients with ankle sprain, though the evidence is limited or moderate and many studies are subject to bias. PMID:20978065
Systematics and limit calculations
Fisher, Wade; /Fermilab
2006-12-01
This note discusses the estimation of systematic uncertainties and their incorporation into upper limit calculations. Two different approaches to reducing systematics and their degrading impact on upper limits are introduced. An improved {chi}{sup 2} function is defined which is useful in comparing Poisson distributed data with models marginalized by systematic uncertainties. Also, a technique using profile likelihoods is introduced which provides a means of constraining the degrading impact of systematic uncertainties on limit calculations.
Ngamwong, Yuwadee; Tangamornsuksan, Wimonchat; Lohitnavy, Ornrat; Chaiyakunapruk, Nathorn; Scholfield, C. Norman; Reisfeld, Brad; Lohitnavy, Manupat
2015-01-01
Smoking and asbestos exposure are important risks for lung cancer. Several epidemiological studies have linked asbestos exposure and smoking to lung cancer. To reconcile and unify these results, we conducted a systematic review and meta-analysis to provide a quantitative estimate of the increased risk of lung cancer associated with asbestos exposure and cigarette smoking and to classify their interaction. Five electronic databases were searched from inception to May, 2015 for observational studies on lung cancer. All case-control (N = 10) and cohort (N = 7) studies were included in the analysis. We calculated pooled odds ratios (ORs), relative risks (RRs) and 95% confidence intervals (CIs) using a random-effects model for the association of asbestos exposure and smoking with lung cancer. Lung cancer patients who were not exposed to asbestos and non-smoking (A-S-) were compared with; (i) asbestos-exposed and non-smoking (A+S-), (ii) non-exposure to asbestos and smoking (A-S+), and (iii) asbestos-exposed and smoking (A+S+). Our meta-analysis showed a significant difference in risk of developing lung cancer among asbestos exposed and/or smoking workers compared to controls (A-S-), odds ratios for the disease (95% CI) were (i) 1.70 (A+S-, 1.31–2.21), (ii) 5.65; (A-S+, 3.38–9.42), (iii) 8.70 (A+S+, 5.8–13.10). The additive interaction index of synergy was 1.44 (95% CI = 1.26–1.77) and the multiplicative index = 0.91 (95% CI = 0.63–1.30). Corresponding values for cohort studies were 1.11 (95% CI = 1.00–1.28) and 0.51 (95% CI = 0.31–0.85). Our results point to an additive synergism for lung cancer with co-exposure of asbestos and cigarette smoking. Assessments of industrial health risks should take smoking and other airborne health risks when setting occupational asbestos exposure limits. PMID:26274395
Moore, Nicholas; Arnaud, Mickael; Robinson, Philip; Raschi, Emanuel; De Ponti, Fabrizio; Bégaud, Bernard; Pariente, Antoine
2016-01-01
Objective To quantify the risk of hypoglycaemia associated with the concomitant use of dipeptidyl peptidase-4 (DPP-4) inhibitors and sulphonylureas compared with placebo and sulphonylureas. Design Systematic review and meta-analysis. Data sources Medline, ISI Web of Science, SCOPUS, Cochrane Central Register of Controlled Trials, and clinicaltrial.gov were searched without any language restriction. Study selection Placebo controlled randomised trials comprising at least 50 participants with type 2 diabetes treated with DPP-4 inhibitors and sulphonylureas. Review methods Risk of bias in each trial was assessed using the Cochrane Collaboration tool. The risk ratio of hypoglycaemia with 95% confidence intervals was computed for each study and then pooled using fixed effect models (Mantel Haenszel method) or random effect models, when appropriate. Subgroup analyses were also performed (eg, dose of DPP-4 inhibitors). The number needed to harm (NNH) was estimated according to treatment duration. Results 10 studies were included, representing a total of 6546 participants (4020 received DPP-4 inhibitors plus sulphonylureas, 2526 placebo plus sulphonylureas). The risk ratio of hypoglycaemia was 1.52 (95% confidence interval 1.29 to 1.80). The NNH was 17 (95% confidence interval 11 to 30) for a treatment duration of six months or less, 15 (9 to 26) for 6.1 to 12 months, and 8 (5 to 15) for more than one year. In subgroup analysis, no difference was found between full and low doses of DPP-4 inhibitors: the risk ratio related to full dose DPP-4 inhibitors was 1.66 (1.34 to 2.06), whereas the increased risk ratio related to low dose DPP-4 inhibitors did not reach statistical significance (1.33, 0.92 to 1.94). Conclusions Addition of DPP-4 inhibitors to sulphonylurea to treat people with type 2 diabetes is associated with a 50% increased risk of hypoglycaemia and to one excess case of hypoglycaemia for every 17 patients in the first six months of treatment. This
Network planning under uncertainties
NASA Astrophysics Data System (ADS)
Ho, Kwok Shing; Cheung, Kwok Wai
2008-11-01
One of the main focuses for network planning is on the optimization of network resources required to build a network under certain traffic demand projection. Traditionally, the inputs to this type of network planning problems are treated as deterministic. In reality, the varying traffic requirements and fluctuations in network resources can cause uncertainties in the decision models. The failure to include the uncertainties in the network design process can severely affect the feasibility and economics of the network. Therefore, it is essential to find a solution that can be insensitive to the uncertain conditions during the network planning process. As early as in the 1960's, a network planning problem with varying traffic requirements over time had been studied. Up to now, this kind of network planning problems is still being active researched, especially for the VPN network design. Another kind of network planning problems under uncertainties that has been studied actively in the past decade addresses the fluctuations in network resources. One such hotly pursued research topic is survivable network planning. It considers the design of a network under uncertainties brought by the fluctuations in topology to meet the requirement that the network remains intact up to a certain number of faults occurring anywhere in the network. Recently, the authors proposed a new planning methodology called Generalized Survivable Network that tackles the network design problem under both varying traffic requirements and fluctuations of topology. Although all the above network planning problems handle various kinds of uncertainties, it is hard to find a generic framework under more general uncertainty conditions that allows a more systematic way to solve the problems. With a unified framework, the seemingly diverse models and algorithms can be intimately related and possibly more insights and improvements can be brought out for solving the problem. This motivates us to seek a
Measurement Uncertainty and Probability
NASA Astrophysics Data System (ADS)
Willink, Robin
2013-02-01
Part I. Principles: 1. Introduction; 2. Foundational ideas in measurement; 3. Components of error or uncertainty; 4. Foundational ideas in probability and statistics; 5. The randomization of systematic errors; 6. Beyond the standard confidence interval; Part II. Evaluation of Uncertainty: 7. Final preparation; 8. Evaluation using the linear approximation; 9. Evaluation without the linear approximations; 10. Uncertainty information fit for purpose; Part III. Related Topics: 11. Measurement of vectors and functions; 12. Why take part in a measurement comparison?; 13. Other philosophies; 14. An assessment of objective Bayesian methods; 15. A guide to the expression of uncertainty in measurement; 16. Measurement near a limit - an insoluble problem?; References; Index.
Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
In order to provide a complete description of a materials thermoelectric power factor, in addition to the measured nominal value, an uncertainty interval is required. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, thermocouple cold-finger effect, and measurement parameters; in addition to including uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.
Mayo-Wilson, Evan; Imdad, Aamer; Junior, Jean; Dean, Sohni; Bhutta, Zulfiqar A
2014-01-01
Objective Zinc deficiency is widespread, and preventive supplementation may have benefits in young children. Effects for children over 5 years of age, and effects when coadministered with other micronutrients are uncertain. These are obstacles to scale-up. This review seeks to determine if preventive supplementation reduces mortality and morbidity for children aged 6 months to 12 years. Design Systematic review conducted with the Cochrane Developmental, Psychosocial and Learning Problems Group. Two reviewers independently assessed studies. Meta-analyses were performed for mortality, illness and side effects. Data sources We searched multiple databases, including CENTRAL and MEDLINE in January 2013. Authors were contacted for missing information. Eligibility criteria for selecting studies Randomised trials of preventive zinc supplementation. Hospitalised children and children with chronic diseases were excluded. Results 80 randomised trials with 205 401 participants were included. There was a small but non-significant effect on all-cause mortality (risk ratio (RR) 0.95 (95% CI 0.86 to 1.05)). Supplementation may reduce incidence of all-cause diarrhoea (RR 0.87 (0.85 to 0.89)), but there was evidence of reporting bias. There was no evidence of an effect of incidence or prevalence of respiratory infections or malaria. There was moderate quality evidence of a very small effect on linear growth (standardised mean difference 0.09 (0.06 to 0.13)) and an increase in vomiting (RR 1.29 (1.14 to 1.46)). There was no evidence of an effect on iron status. Comparing zinc with and without iron cosupplementation and direct comparisons of zinc plus iron versus zinc administered alone favoured cointervention for some outcomes and zinc alone for other outcomes. Effects may be larger for children over 1 year of age, but most differences were not significant. Conclusions Benefits of preventive zinc supplementation may outweigh any potentially adverse effects in areas where
Kim, Mee J.; Findlay, Gregory M.; Martin, Beth; Zhao, Jingjing; Bell, Robert J. A.; Smith, Robin P.; Ku, Angel A.; Shendure, Jay; Ahituv, Nadav
2014-01-01
In addition to their protein coding function, exons can also serve as transcriptional enhancers. Mutations in these exonic-enhancers (eExons) could alter both protein function and transcription. However, the functional consequence of eExon mutations is not well known. Here, using massively parallel reporter assays, we dissect the enhancer activity of three liver eExons (SORL1 exon 17, TRAF3IP2 exon 2, PPARG exon 6) at single nucleotide resolution in the mouse liver. We find that both synonymous and non-synonymous mutations have similar effects on enhancer activity and many of the deleterious mutation clusters overlap known liver-associated transcription factor binding sites. Carrying a similar massively parallel reporter assay in HeLa cells with these three eExons found differences in their mutation profiles compared to the liver, suggesting that enhancers could have distinct operating profiles in different tissues. Our results demonstrate that eExon mutations could lead to multiple phenotypes by disrupting both the protein sequence and enhancer activity and that enhancers can have distinct mutation profiles in different cell types. PMID:25340400
Some Aspects of uncertainty in computational fluid dynamics results
NASA Technical Reports Server (NTRS)
Mehta, U. B.
1991-01-01
Uncertainties are inherent in computational fluid dynamics (CFD). These uncertainties need to be systematically addressed and managed. Sources of these uncertainty analysis are discussed. Some recommendations are made for quantification of CFD uncertainties. A practical method of uncertainty analysis is based on sensitivity analysis. When CFD is used to design fluid dynamic systems, sensitivity-uncertainty analysis is essential.
Deriving uncertainty factors for threshold chemical contaminants in drinking water.
Ritter, Leonard; Totman, Céline; Krishnan, Kannan; Carrier, Richard; Vézina, Anne; Morisset, Véronique
2007-10-01
Uncertainty factors are used in the development of drinking-water guidelines to account for uncertainties in the database, including extrapolations of toxicity from animal studies and variability within humans, which result in some uncertainty about risk. The application of uncertainty factors is entrenched in toxicological risk assessment worldwide, but is not applied consistently. This report, prepared in collaboration with Health Canada, provides an assessment of the derivation of the uncertainty factor assumptions used in developing drinking-water quality guidelines for chemical contaminants. Assumptions used by Health Canada in the development of guidelines were compared to several other major regulatory jurisdictions. This assessment has revealed that uncertainty factor assumptions have been substantially influenced by historical practice. While the application of specific uncertainty factors appears to be well entrenched in regulatory practice, a well-documented and disciplined basis for the selection of these factors was not apparent in any of the literature supporting the default assumptions of Canada, the United States, Australia, or the World Health Organization. While there is a basic scheme used in most cases in developing drinking-water quality guidelines for nonthreshold contaminants by the jurisdictions included in this report, additional factors are sometimes included to account for other areas of uncertainty. These factors may include extrapolating subchronic data to anticipated chronic exposure, or use of a LOAEL instead of a NOAEL. The default value attributed to each uncertainty factor is generally a factor of 3 or 10; however, again, no comprehensive guidance to develop and apply these additional uncertainty factors was evident from the literature reviewed. A decision tree has been developed to provide guidance for selection of appropriate uncertainty factors, to account for the range of uncertainty encountered in the risk assessment process
NASA Astrophysics Data System (ADS)
Mikhailov, S. V.; Pimikov, A. V.; Stefanis, N. G.
2016-06-01
We consider the calculation of the pion-photon transition form factor Fγ*γπ0(Q2) within light-cone sum rules focusing attention to the low-mid region of momenta. The central aim is to estimate the theoretical uncertainties which originate from a wide variety of sources related to (i) the relevance of next-to-next-to-leading order radiative corrections (ii) the influence of the twist-four and the twist-six term (iii) the sensitivity of the results on auxiliary parameters, like the Borel scale M2, (iv) the role of the phenomenological description of resonances, and (v) the significance of a small but finite virtuality of the quasireal photon. Predictions for Fγ*γπ0(Q2) are presented which include all these uncertainties and found to comply within the margin of experimental error with the existing data in the Q2 range between 1 and 5 GeV2 , thus justifying the reliability of the applied calculational scheme. This provides a solid basis for confronting theoretical predictions with forthcoming data bearing small statistical errors.
Rashidi, Armin; DiPersio, John F; Sandmaier, Brenda M; Colditz, Graham A; Weisdorf, Daniel J
2016-06-01
Despite extensive research in the last few decades, progress in treatment of acute graft-versus-host disease (aGVHD), a common complication of allogeneic hematopoietic cell transplantation (HCT), has been limited and steroids continue to be the standard frontline treatment. Randomized clinical trials (RCTs) have failed to find a beneficial effect of escalating immunosuppression using additional agents. Considering the small number of RCTs, limited sample sizes, and frequent early termination because of anticipated futility, we conducted a systematic review and an aggregate data meta-analysis to explore whether a true efficacy signal has been missed because of the limitations of individual RCTs. Seven reports met our inclusion criteria. The control arm in all studies was 2 mg/kg/day prednisone (or equivalent). The additional agent(s) used in the experimental arm(s) were higher-dose steroids, antithymocyte globulin, infliximab, anti-interleukin-2 receptor antibody (daclizumab and BT563), CD5-specific immunotoxin, and mycophenolate mofetil. Random effects meta-analysis revealed no efficacy signal in pooled response rates at various times points. Overall survival at 100 days was significantly worse in the experimental arm (relative risk [RR], .83; 95% confidence interval [CI], .74 to .94; P = .004, data from 3 studies) and showed a similar trend (albeit not statistically significantly) at 1 year as well (RR, .86; 95% CI, .68 to 1.09; P = .21, data from 5 studies). In conclusion, these results argue against the value of augmented generic immunosuppression beyond steroids for frontline treatment of aGVHD and emphasize the importance of developing alternative strategies. Novel forms of immunomodulation and targeted therapies against non-immune-related pathways may enhance the efficacy of steroids in this setting, and early predictive and prognostic biomarkers can help identify the subgroup of patients who would likely need treatments other than (or in addition to
Murtagh, Fliss EM
2014-01-01
Background: Primary care has the potential to play significant roles in providing effective palliative care for non-cancer patients. Aim: To identify, critically appraise and synthesise the existing evidence on views on the provision of palliative care for non-cancer patients by primary care providers and reveal any gaps in the evidence. Design: Standard systematic review and narrative synthesis. Data sources: MEDLINE, Embase, CINAHL, PsycINFO, Applied Social Science Abstract and the Cochrane library were searched in 2012. Reference searching, hand searching, expert consultations and grey literature searches complemented these. Papers with the views of patients/carers or professionals on primary palliative care provision to non-cancer patients in the community were included. The amended Hawker’s criteria were used for quality assessment of included studies. Results: A total of 30 studies were included and represent the views of 719 patients, 605 carers and over 400 professionals. In all, 27 studies are from the United Kingdom. Patients and carers expect primary care physicians to provide compassionate care, have appropriate knowledge and play central roles in providing care. The roles of professionals are unclear to patients, carers and professionals themselves. Uncertainty of illness trajectory and lack of collaboration between health-care professionals were identified as barriers to effective care. Conclusions: Effective interprofessional work to deal with uncertainty and maintain coordinated care is needed for better palliative care provision to non-cancer patients in the community. Research into and development of a best model for effective interdisciplinary work are needed. PMID:24821710
Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease
NASA Astrophysics Data System (ADS)
Marsden, Alison
2009-11-01
Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid structure interaction. However inputs to these simulations, including medical image data, catheter-derived pressures and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficient account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries, and layer these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.
Assessing uncertainty in physical constants
NASA Astrophysics Data System (ADS)
Henrion, Max; Fischhoff, Baruch
1986-09-01
Assessing the uncertainty due to possible systematic errors in a physical measurement unavoidably involves an element of subjective judgment. Examination of historical measurements and recommended values for the fundamental physical constants shows that the reported uncertainties have a consistent bias towards underestimating the actual errors. These findings are comparable to findings of persistent overconfidence in psychological research on the assessment of subjective probability distributions. Awareness of these biases could help in interpreting the precision of measurements, as well as provide a basis for improving the assessment of uncertainty in measurements.
Caputo, Carmela; Prior, David; Inder, Warrick J
2015-11-01
Present recommendations by the US Food and Drug Administration advise that patients with prolactinoma treated with cabergoline should have an annual echocardiogram to screen for valvular heart disease. Here, we present new clinical data and a systematic review of the scientific literature showing that the prevalence of cabergoline-associated valvulopathy is very low. We prospectively assessed 40 patients with prolactinoma taking cabergoline. Cardiovascular examination before echocardiography detected an audible systolic murmur in 10% of cases (all were functional murmurs), and no clinically significant valvular lesion was shown on echocardiogram in the 90% of patients without a murmur. Our systematic review identified 21 studies that assessed the presence of valvular abnormalities in patients with prolactinoma treated with cabergoline. Including our new clinical data, only two (0·11%) of 1811 patients were confirmed to have cabergoline-associated valvulopathy (three [0·17%] if possible cases were included). The probability of clinically significant valvular heart disease is low in the absence of a murmur. On the basis of these findings, we challenge the present recommendations to do routine echocardiography in all patients taking cabergoline for prolactinoma every 12 months. We propose that such patients should be screened by a clinical cardiovascular examination and that echocardiogram should be reserved for those patients with an audible murmur, those treated for more than 5 years at a dose of more than 3 mg per week, or those who maintain cabergoline treatment after the age of 50 years. PMID:25466526
ERIC Educational Resources Information Center
Duerdoth, Ian
2009-01-01
The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…
Uncertainty quantification and error analysis
Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip
2010-01-01
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Exploring Uncertainty with Projectile Launchers
ERIC Educational Resources Information Center
Orzel, Chad; Reich, Gary; Marr, Jonathan
2012-01-01
The proper choice of a measurement technique that minimizes systematic and random uncertainty is an essential part of experimental physics. These issues are difficult to teach in the introductory laboratory, though. Because most experiments involve only a single measurement technique, students are often unable to make a clear distinction between…
Uncertainties of modelling emissions from road transport
NASA Astrophysics Data System (ADS)
Kühlwein, J.; Friedrich, R.
To determine emission data from road transport, complex methods and models are applied. Emission data are characterized by a huge variety of source types as well as a high resolution of the spatial allocation and temporal variation. So far, the uncertainties of such calculated emission data have been largely unknown. As emission data is used to aid policy decisions, the accuracy of the data should be known. So, in the following, the determination of uncertainties of emission data is described. Using the IER emission model for generating regional or national emission data, the uncertainties of model input data and the total errors on different aggregation levels are exemplarily investigated for the pollutants NO x and NMHC in 1994 for the area of West Germany. The results of statistical error analysis carried out for annual emissions on road sections show variation coefficients (68.3% confidence interval) of 15-25%. In addition, systematic errors of common input data sets have been identified especially affecting emissions on motorway sections. The statistical errors of urban emissions with warm engine on town level amount to 35%. Therefore they are considerably higher than the errors outside towns. Error ranges of additional cold start emissions determined so far have been found in the same order. Additional uncertainties of temporally highly resolved (hourly) emission data depend strongly on the daytime, the weekday and the road category. Variation coefficients have been determined in the range between 10 and 70% for light-duty vehicles and between 15 and 100% for heavy-duty vehicles. All total errors determined here have to be regarded as lower limits of the real total errors.
Uncertainty quantification for proton-proton fusion in chiral effective field theory
NASA Astrophysics Data System (ADS)
Acharya, B.; Carlsson, B. D.; Ekström, A.; Forssén, C.; Platter, L.
2016-09-01
We compute the S-factor of the proton-proton (pp) fusion reaction using chiral effective field theory (χEFT) up to next-to-next-to-leading order (NNLO) and perform a rigorous uncertainty analysis of the results. We quantify the uncertainties due to (i) the computational method used to compute the pp cross section in momentum space, (ii) the statistical uncertainties in the low-energy coupling constants of χEFT, (iii) the systematic uncertainty due to the χEFT cutoff, and (iv) systematic variations in the database used to calibrate the nucleon-nucleon interaction. We also examine the robustness of the polynomial extrapolation procedure, which is commonly used to extract the threshold S-factor and its energy-derivatives. By performing a statistical analysis of the polynomial fit of the energy-dependent S-factor at several different energy intervals, we eliminate a systematic uncertainty that can arise from the choice of the fit interval in our calculations. In addition, we explore the statistical correlations between the S-factor and few-nucleon observables such as the binding energies and point-proton radii of 2,3H and 3He as well as the D-state probability and quadrupole moment of 2H, and the β-decay of 3H. We find that, with the state-of-the-art optimization of the nuclear Hamiltonian, the statistical uncertainty in the threshold S-factor cannot be reduced beyond 0.7%.
Known and unknown unknowns: uncertainty estimation in satellite remote sensing
NASA Astrophysics Data System (ADS)
Povey, A. C.; Grainger, R. G.
2015-11-01
This paper discusses a best-practice representation of uncertainty in satellite remote sensing data. An estimate of uncertainty is necessary to make appropriate use of the information conveyed by a measurement. Traditional error propagation quantifies the uncertainty in a measurement due to well-understood perturbations in a measurement and in auxiliary data - known, quantified "unknowns". The under-constrained nature of most satellite remote sensing observations requires the use of various approximations and assumptions that produce non-linear systematic errors that are not readily assessed - known, unquantifiable "unknowns". Additional errors result from the inability to resolve all scales of variation in the measured quantity - unknown "unknowns". The latter two categories of error are dominant in under-constrained remote sensing retrievals, and the difficulty of their quantification limits the utility of existing uncertainty estimates, degrading confidence in such data. This paper proposes the use of ensemble techniques to present multiple self-consistent realisations of a data set as a means of depicting unquantified uncertainties. These are generated using various systems (different algorithms or forward models) believed to be appropriate to the conditions observed. Benefiting from the experience of the climate modelling community, an ensemble provides a user with a more complete representation of the uncertainty as understood by the data producer and greater freedom to consider different realisations of the data.
Byrne, N; Velasco Forte, M; Tandon, A; Valverde, I
2016-01-01
Background Shortcomings in existing methods of image segmentation preclude the widespread adoption of patient-specific 3D printing as a routine decision-making tool in the care of those with congenital heart disease. We sought to determine the range of cardiovascular segmentation methods and how long each of these methods takes. Methods A systematic review of literature was undertaken. Medical imaging modality, segmentation methods, segmentation time, segmentation descriptive quality (SDQ) and segmentation software were recorded. Results Totally 136 studies met the inclusion criteria (1 clinical trial; 80 journal articles; 55 conference, technical and case reports). The most frequently used image segmentation methods were brightness thresholding, region growing and manual editing, as supported by the most popular piece of proprietary software: Mimics (Materialise NV, Leuven, Belgium, 1992–2015). The use of bespoke software developed by individual authors was not uncommon. SDQ indicated that reporting of image segmentation methods was generally poor with only one in three accounts providing sufficient detail for their procedure to be reproduced. Conclusions and implication of key findings Predominantly anecdotal and case reporting precluded rigorous assessment of risk of bias and strength of evidence. This review finds a reliance on manual and semi-automated segmentation methods which demand a high level of expertise and a significant time commitment on the part of the operator. In light of the findings, we have made recommendations regarding reporting of 3D printing studies. We anticipate that these findings will encourage the development of advanced image segmentation methods. PMID:27170842
Courtney, H; Kirkland, J; Viguerie, P
1997-01-01
At the heart of the traditional approach to strategy lies the assumption that by applying a set of powerful analytic tools, executives can predict the future of any business accurately enough to allow them to choose a clear strategic direction. But what happens when the environment is so uncertain that no amount of analysis will allow us to predict the future? What makes for a good strategy in highly uncertain business environments? The authors, consultants at McKinsey & Company, argue that uncertainty requires a new way of thinking about strategy. All too often, they say, executives take a binary view: either they underestimate uncertainty to come up with the forecasts required by their companies' planning or capital-budging processes, or they overestimate it, abandon all analysis, and go with their gut instinct. The authors outline a new approach that begins by making a crucial distinction among four discrete levels of uncertainty that any company might face. They then explain how a set of generic strategies--shaping the market, adapting to it, or reserving the right to play at a later time--can be used in each of the four levels. And they illustrate how these strategies can be implemented through a combination of three basic types of actions: big bets, options, and no-regrets moves. The framework can help managers determine which analytic tools can inform decision making under uncertainty--and which cannot. At a broader level, it offers executives a discipline for thinking rigorously and systematically about uncertainty and its implications for strategy. PMID:10174798
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application.In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
Uncertainty, joint uncertainty, and the quantum uncertainty principle
NASA Astrophysics Data System (ADS)
Narasimhachar, Varun; Poostindouz, Alireza; Gour, Gilad
2016-03-01
Historically, the element of uncertainty in quantum mechanics has been expressed through mathematical identities called uncertainty relations, a great many of which continue to be discovered. These relations use diverse measures to quantify uncertainty (and joint uncertainty). In this paper we use operational information-theoretic principles to identify the common essence of all such measures, thereby defining measure-independent notions of uncertainty and joint uncertainty. We find that most existing entropic uncertainty relations use measures of joint uncertainty that yield themselves to a small class of operational interpretations. Our notion relaxes this restriction, revealing previously unexplored joint uncertainty measures. To illustrate the utility of our formalism, we derive an uncertainty relation based on one such new measure. We also use our formalism to gain insight into the conditions under which measure-independent uncertainty relations can be found.
Uncertainty quantification of effective nuclear interactions
NASA Astrophysics Data System (ADS)
Pérez, R. Navarro; Amaro, J. E.; Arriola, E. Ruiz
2016-03-01
We give a brief review on the development of phenomenological NN interactions and the corresponding quantification of statistical uncertainties. We look into the uncertainty of effective interactions broadly used in mean field calculations through the Skyrme parameters and effective field theory counterterms by estimating both statistical and systematic uncertainties stemming from the NN interaction. We also comment on the role played by different fitting strategies on the light of recent developments.
Thomas, R.E.
1982-03-01
An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
Asymptotic entropic uncertainty relations
NASA Astrophysics Data System (ADS)
Adamczak, Radosław; Latała, Rafał; Puchała, Zbigniew; Życzkowski, Karol
2016-03-01
We analyze entropic uncertainty relations for two orthogonal measurements on a N-dimensional Hilbert space, performed in two generic bases. It is assumed that the unitary matrix U relating both bases is distributed according to the Haar measure on the unitary group. We provide lower bounds on the average Shannon entropy of probability distributions related to both measurements. The bounds are stronger than those obtained with use of the entropic uncertainty relation by Maassen and Uffink, and they are optimal up to additive constants. We also analyze the case of a large number of measurements and obtain strong entropic uncertainty relations, which hold with high probability with respect to the random choice of bases. The lower bounds we obtain are optimal up to additive constants and allow us to prove a conjecture by Wehner and Winter on the asymptotic behavior of constants in entropic uncertainty relations as the dimension tends to infinity. As a tool we develop estimates on the maximum operator norm of a submatrix of a fixed size of a random unitary matrix distributed according to the Haar measure, which are of independent interest.
Oldgren, Jonas; Wallentin, Lars; Alexander, John H.; James, Stefan; Jönelid, Birgitta; Steg, Gabriel; Sundström, Johan
2013-01-01
Background Oral anticoagulation in addition to antiplatelet treatment after an acute coronary syndrome might reduce ischaemic events but increase bleeding risk. We performed a meta-analysis to evaluate the efficacy and safety of adding direct thrombin or factor-Xa inhibition by any of the novel oral anticoagulants (apixaban, dabigatran, darexaban, rivaroxaban, and ximelagatran) to single (aspirin) or dual (aspirin and clopidogrel) antiplatelet therapy in this setting. Methods and results All seven published randomized, placebo-controlled phase II and III studies of novel oral anticoagulants in acute coronary syndromes were included. The database consisted of 30 866 patients, 4135 (13.4%) on single, and 26 731 (86.6%) on dual antiplatelet therapy, with a non-ST- or ST-elevation acute coronary syndrome within the last 7–14 days. We defined major adverse cardiovascular events (MACEs) as the composite of all-cause mortality, myocardial infarction, or stroke; and clinically significant bleeding as the composite of major and non-major bleeding requiring medical attention according to the study definitions. When compared with aspirin alone the combination of an oral anticoagulant and aspirin reduced the incidence of MACE [hazard ratio (HR) and 95% confidence interval 0.70; 0.59–0.84], but increased clinically significant bleeding (HR: 1.79; 1.54–2.09). Compared with dual antiplatelet therapy with aspirin and clopidogrel, adding an oral anticoagulant decreased the incidence of MACE modestly (HR: 0.87; 0.80–0.95), but more than doubled the bleeding (HR: 2.34; 2.06–2.66). Heterogeneity between studies was low, and results were similar when restricting the analysis to phase III studies. Conclusion In patients with a recent acute coronary syndrome, the addition of a new oral anticoagulant to antiplatelet therapy results in a modest reduction in cardiovascular events but a substantial increase in bleeding, most pronounced when new oral anticoagulants are combined with
Ahmadizar, Fariba; Onland-Moret, N. Charlotte; de Boer, Anthonius; Liu, Geoffrey; Maitland-van der Zee, Anke H.
2015-01-01
Aim To evaluate the efficacy and safety of bevacizumab in the adjuvant cancer therapy setting within different subset of patients. Methods & Design/ Results PubMed, EMBASE, Cochrane and Clinical trials.gov databases were searched for English language studies of randomized controlled trials comparing bevacizumab and adjuvant therapy with adjuvant therapy alone published from January 1966 to 7th of May 2014. Progression free survival, overall survival, overall response rate, safety and quality of life were analyzed using random- or fixed-effects models according to the PRISMA guidelines. We obtained data from 44 randomized controlled trials (30,828 patients). Combining bevacizumab with different adjuvant therapies resulted in significant improvement of progression free survival (log hazard ratio, 0.87; 95% confidence interval (CI), 0.84–0.89), overall survival (log hazard ratio, 0.96; 95% CI, 0.94–0.98) and overall response rate (relative risk, 1.46; 95% CI: 1.33–1.59) compared to adjuvant therapy alone in all studied tumor types. In subgroup analyses, there were no interactions of bevacizumab with baseline characteristics on progression free survival and overall survival, while overall response rate was influenced by tumor type and bevacizumab dose (p-value: 0.02). Although bevacizumab use resulted in additional expected adverse drug reactions except anemia and fatigue, it was not associated with a significant decline in quality of life. There was a trend towards a higher risk of several side effects in patients treated by high-dose bevacizumab compared to the low-dose e.g. all grade proteinuria (9.24; 95% CI: 6.60–12.94 vs. 2.64; 95% CI: 1.29–5.40). Conclusions Combining bevacizumab with different adjuvant therapies provides a survival benefit across all major subsets of patients, including by tumor type, type of adjuvant therapy, and duration and dose of bevacizumab therapy. Though bevacizumab was associated with increased risks of some adverse drug
Uncertainties in offsite consequence analysis
Young, M.L.; Harper, F.T.; Lui, C.H.
1996-03-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the U.S. Nuclear Regulatory Commission and the European Commission began co-sponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables using a formal expert judgment elicitation and evaluation process. This paper focuses on the methods used in and results of this on-going joint effort.
Orbital State Uncertainty Realism
NASA Astrophysics Data System (ADS)
Horwood, J.; Poore, A. B.
2012-09-01
Fundamental to the success of the space situational awareness (SSA) mission is the rigorous inclusion of uncertainty in the space surveillance network. The *proper characterization of uncertainty* in the orbital state of a space object is a common requirement to many SSA functions including tracking and data association, resolution of uncorrelated tracks (UCTs), conjunction analysis and probability of collision, sensor resource management, and anomaly detection. While tracking environments, such as air and missile defense, make extensive use of Gaussian and local linearity assumptions within algorithms for uncertainty management, space surveillance is inherently different due to long time gaps between updates, high misdetection rates, nonlinear and non-conservative dynamics, and non-Gaussian phenomena. The latter implies that "covariance realism" is not always sufficient. SSA also requires "uncertainty realism"; the proper characterization of both the state and covariance and all non-zero higher-order cumulants. In other words, a proper characterization of a space object's full state *probability density function (PDF)* is required. In order to provide a more statistically rigorous treatment of uncertainty in the space surveillance tracking environment and to better support the aforementioned SSA functions, a new class of multivariate PDFs are formulated which more accurately characterize the uncertainty of a space object's state or orbit. The new distribution contains a parameter set controlling the higher-order cumulants which gives the level sets a distinctive "banana" or "boomerang" shape and degenerates to a Gaussian in a suitable limit. Using the new class of PDFs within the general Bayesian nonlinear filter, the resulting filter prediction step (i.e., uncertainty propagation) is shown to have the *same computational cost as the traditional unscented Kalman filter* with the former able to maintain a proper characterization of the uncertainty for up to *ten
Uncertainties in radiation flow experiments
NASA Astrophysics Data System (ADS)
Fryer, C. L.; Dodd, E.; Even, W.; Fontes, C. J.; Greeff, C.; Hungerford, A.; Kline, J.; Mussack, K.; Tregillis, I.; Workman, J. B.; Benstead, J.; Guymer, T. M.; Moore, A. S.; Morton, J.
2016-03-01
Although the fundamental physics behind radiation and matter flow is understood, many uncertainties remain in the exact behavior of macroscopic fluids in systems ranging from pure turbulence to coupled radiation hydrodynamics. Laboratory experiments play an important role in studying this physics to allow scientists to test their macroscopic models of these phenomena. However, because the fundamental physics is well understood, precision experiments are required to validate existing codes already tested by a suite of analytic, manufactured and convergence solutions. To conduct such high-precision experiments requires a detailed understanding of the experimental errors and the nature of their uncertainties on the observed diagnostics. In this paper, we study the uncertainties plaguing many radiation-flow experiments, focusing on those using a hohlraum (dynamic or laser-driven) source and a foam-density target. This study focuses on the effect these uncertainties have on the breakout time of the radiation front. We find that, even if the errors in the initial conditions and numerical methods are Gaussian, the errors in the breakout time are asymmetric, leading to a systematic bias in the observed data. We must understand these systematics to produce the high-precision experimental results needed to study this physics.
Quantifying Mixed Uncertainties in Cyber Attacker Payoffs
Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna; Oster, Matthew R.; Saha, Sudip
2015-04-15
Representation and propagation of uncertainty in cyber attacker payoffs is a key aspect of security games. Past research has primarily focused on representing the defender’s beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and intervals. Within cyber-settings, continuous probability distributions may still be appropriate for addressing statistical (aleatory) uncertainties where the defender may assume that the attacker’s payoffs differ over time. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker’s payoff generation mechanism. Such epistemic uncertainties are more suitably represented as probability boxes with intervals. In this study, we explore the mathematical treatment of such mixed payoff uncertainties.
Uncertainty Estimation in Intensity-Modulated Radiotherapy Absolute Dosimetry Verification
Sanchez-Doblado, Francisco . E-mail: paco@us.es; Hartmann, Guenther H.; Pena, Javier; Capote, Roberto; Paiusco, Marta; Rhein, Bernhard; Leal, Antonio; Lagares, Juan Ignacio
2007-05-01
Purpose: Intensity-modulated radiotherapy (IMRT) represents an important method for improving RT. The IMRT relative dosimetry checks are well established; however, open questions remain in reference dosimetry with ionization chambers (ICs). The main problem is the departure of the measurement conditions from the reference ones; thus, additional uncertainty is introduced into the dose determination. The goal of this study was to assess this effect systematically. Methods and Materials: Monte Carlo calculations and dosimetric measurements with five different detectors were performed for a number of representative IMRT cases, covering both step-and-shoot and dynamic delivery. Results: Using ICs with volumes of about 0.125 cm{sup 3} or less, good agreement was observed among the detectors in most of the situations studied. These results also agreed well with the Monte Carlo-calculated nonreference correction factors (c factors). Additionally, we found a general correlation between the IC position relative to a segment and the derived correction factor c, which can be used to estimate the expected overall uncertainty of the treatment. Conclusion: The increase of the reference dose relative standard uncertainty measured with ICs introduced by nonreference conditions when verifying an entire IMRT plan is about 1-1.5%, provided that appropriate small-volume chambers are used. The overall standard uncertainty of the measured IMRT dose amounts to about 2.3%, including the 0.5% of reproducibility and 1.5% of uncertainty associated with the beam calibration factor. Solid state detectors and large-volume chambers are not well suited to IMRT verification dosimetry because of the greater uncertainties. An action level of 5% is appropriate for IMRT verification. Greater discrepancies should lead to a review of the dosimetric procedure, including visual inspection of treatment segments and energy fluence.
The relationship between aerosol model uncertainty and radiative forcing uncertainty
NASA Astrophysics Data System (ADS)
Carslaw, Ken; Lee, Lindsay; Reddington, Carly
2016-04-01
There has been no systematic assessment of how reduction in the uncertainty of global aerosol models will feed through to the uncertainty in the predicted forcing. We use a global model perturbed parameter ensemble to show that tight observational constraint of aerosol concentrations in the model has a relatively small effect on the aerosol-related uncertainty in the calculated aerosol-cloud forcing between pre-industrial and present day periods. One factor is the low sensitivity of present-day aerosol to natural emissions that determine the pre-industrial aerosol state. But the major cause of the weak constraint is that the full uncertainty space of the model generates a large number of model variants that are "equally acceptable" compared to present-day aerosol observations. The narrow range of aerosol concentrations in the observationally constrained model gives the impression of low aerosol model uncertainty, but this hides a range of very different aerosol models. These multiple so-called "equifinal" model variants predict a wide range of forcings. Equifinality in the aerosol model means that tuning of a small number of model processes to achieve model-observation agreement could give a misleading impression of model robustness.
Uncertainty-induced quantum nonlocality
NASA Astrophysics Data System (ADS)
Wu, Shao-xiong; Zhang, Jun; Yu, Chang-shui; Song, He-shan
2014-01-01
Based on the skew information, we present a quantity, uncertainty-induced quantum nonlocality (UIN) to measure the quantum correlation. It can be considered as the updated version of the original measurement-induced nonlocality (MIN) preserving the good computability but eliminating the non-contractivity problem. For 2×d-dimensional state, it is shown that UIN can be given by a closed form. In addition, we also investigate the maximal uncertainty-induced nonlocality.
Messaging climate change uncertainty
NASA Astrophysics Data System (ADS)
Cooke, Roger M.
2015-01-01
Climate change is full of uncertainty and the messengers of climate science are not getting the uncertainty narrative right. To communicate uncertainty one must first understand it, and then avoid repeating the mistakes of the past.
Uncertainty quantified trait predictions
NASA Astrophysics Data System (ADS)
Fazayeli, Farideh; Kattge, Jens; Banerjee, Arindam; Schrodt, Franziska; Reich, Peter
2015-04-01
Functional traits of organisms are key to understanding and predicting biodiversity and ecological change, which motivates continuous collection of traits and their integration into global databases. Such composite trait matrices are inherently sparse, severely limiting their usefulness for further analyses. On the other hand, traits are characterized by the phylogenetic trait signal, trait-trait correlations and environmental constraints, all of which provide information that could be used to statistically fill gaps. We propose the application of probabilistic models which, for the first time, utilize all three characteristics to fill gaps in trait databases and predict trait values at larger spatial scales. For this purpose we introduce BHPMF, a hierarchical Bayesian extension of Probabilistic Matrix Factorization (PMF). PMF is a machine learning technique which exploits the correlation structure of sparse matrices to impute missing entries. BHPMF additionally utilizes the taxonomic hierarchy for trait prediction. Implemented in the context of a Gibbs Sampler MCMC approach BHPMF provides uncertainty estimates for each trait prediction. We present comprehensive experimental results on the problem of plant trait prediction using the largest database of plant traits, where BHPMF shows strong empirical performance in uncertainty quantified trait prediction, outperforming the state-of-the-art based on point estimates. Further, we show that BHPMF is more accurate when it is confident, whereas the error is high when the uncertainty is high.
Characterizing Epistemic Uncertainty for Launch Vehicle Designs
NASA Technical Reports Server (NTRS)
Novack, Steven D.; Rogers, Jim; Al Hassan, Mohammad; Hark, Frank
2016-01-01
NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk, and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counter intuitive results, and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods, such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty, are rendered obsolete, since sensitivity to uncertainty changes are not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis of the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper describes how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.
Characterizing Epistemic Uncertainty for Launch Vehicle Designs
NASA Technical Reports Server (NTRS)
Novack, Steven D.; Rogers, Jim; Hark, Frank; Al Hassan, Mohammad
2016-01-01
NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counter intuitive results and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty are rendered obsolete since sensitivity to uncertainty changes are not reflected in propagation of uncertainty using Monte Carlo methods.This paper provides a basis of the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper shows how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.
Communicating uncertainties in earth sciences in view of user needs
NASA Astrophysics Data System (ADS)
de Vries, Wim; Kros, Hans; Heuvelink, Gerard
2014-05-01
uncertain model parameters (parametric variability). These uncertainties can be quantified by uncertainty propagation methods such as Monte Carlo simulation methods. Examples of intrinsic uncertainties that generally cannot be expressed in mathematical terms are errors or biases in: • Results of experiments and observations due to inadequate sampling and errors in analyzing data in the laboratory and even in data reporting. • Results of (laboratory) experiments that are limited to a specific domain or performed under circumstances that differ from field circumstances. • Model structure, due to lack of knowledge of the underlying processes. Structural uncertainty, which may cause model inadequacy/ bias, is inherent in model approaches since models are approximations of reality. Intrinsic uncertainties often occur in an emerging field where ongoing new findings, either experiments or field observations of new model findings, challenge earlier work. In this context, climate scientists working within the IPCC have adopted a lexicon to communicate confidence in their findings, ranging from "very high", "high", "medium", "low" and "very low" confidence. In fact, there are also statistical methods to gain insight in uncertainties in model predictions due to model assumptions (i.e. model structural error). Examples are comparing model results with independent observations or a systematic intercomparison of predictions from multiple models. In the latter case, Bayesian model averaging techniques can be used, in which each model considered gets an assigned prior probability of being the 'true' model. This approach works well with statistical (regression) models, but extension to physically-based models is cumbersome. An alternative is the use of state-space models in which structural errors are represent as (additive) noise terms. In this presentation, we focus on approaches that are relevant at the science - policy interface, including multiple scientific disciplines and
Validation of an Experimentally Derived Uncertainty Model
NASA Technical Reports Server (NTRS)
Lim, K. B.; Cox, D. E.; Balas, G. J.; Juang, J.-N.
1996-01-01
The results show that uncertainty models can be obtained directly from system identification data by using a minimum norm model validation approach. The error between the test data and an analytical nominal model is modeled as a combination of unstructured additive and structured input multiplicative uncertainty. Robust controllers which use the experimentally derived uncertainty model show significant stability and performance improvements over controllers designed with assumed ad hoc uncertainty levels. Use of the identified uncertainty model also allowed a strong correlation between design predictions and experimental results.
Simple uncertainty propagation for early design phase aircraft sizing
NASA Astrophysics Data System (ADS)
Lenz, Annelise
Many designers and systems analysts are aware of the uncertainty inherent in their aircraft sizing studies; however, few incorporate methods to address and quantify this uncertainty. Many aircraft design studies use semi-empirical predictors based on a historical database and contain uncertainty -- a portion of which can be measured and quantified. In cases where historical information is not available, surrogate models built from higher-fidelity analyses often provide predictors for design studies where the computational cost of directly using the high-fidelity analyses is prohibitive. These surrogate models contain uncertainty, some of which is quantifiable. However, rather than quantifying this uncertainty, many designers merely include a safety factor or design margin in the constraints to account for the variability between the predicted and actual results. This can become problematic if a designer does not estimate the amount of variability correctly, which then can result in either an "over-designed" or "under-designed" aircraft. "Under-designed" and some "over-designed" aircraft will likely require design changes late in the process and will ultimately require more time and money to create; other "over-designed" aircraft concepts may not require design changes, but could end up being more costly than necessary. Including and propagating uncertainty early in the design phase so designers can quantify some of the errors in the predictors could help mitigate the extent of this additional cost. The method proposed here seeks to provide a systematic approach for characterizing a portion of the uncertainties that designers are aware of and propagating it throughout the design process in a procedure that is easy to understand and implement. Using Monte Carlo simulations that sample from quantified distributions will allow a systems analyst to use a carpet plot-like approach to make statements like: "The aircraft is 'P'% likely to weigh 'X' lbs or less, given the
Finite Frames and Graph Theoretic Uncertainty Principles
NASA Astrophysics Data System (ADS)
Koprowski, Paul J.
The subject of analytical uncertainty principles is an important field within harmonic analysis, quantum physics, and electrical engineering. We explore uncertainty principles in the context of the graph Fourier transform, and we prove additive results analogous to the multiplicative version of the classical uncertainty principle. We establish additive uncertainty principles for finite Parseval frames. Lastly, we examine the feasibility region of simultaneous values of the norms of a graph differential operator acting on a function f ∈ l2(G) and its graph Fourier transform.
Lewandowsky, Stephan; Ballard, Timothy; Pancost, Richard D.
2015-01-01
This issue of Philosophical Transactions examines the relationship between scientific uncertainty about climate change and knowledge. Uncertainty is an inherent feature of the climate system. Considerable effort has therefore been devoted to understanding how to effectively respond to a changing, yet uncertain climate. Politicians and the public often appeal to uncertainty as an argument to delay mitigative action. We argue that the appropriate response to uncertainty is exactly the opposite: uncertainty provides an impetus to be concerned about climate change, because greater uncertainty increases the risks associated with climate change. We therefore suggest that uncertainty can be a source of actionable knowledge. We survey the papers in this issue, which address the relationship between uncertainty and knowledge from physical, economic and social perspectives. We also summarize the pervasive psychological effects of uncertainty, some of which may militate against a meaningful response to climate change, and we provide pointers to how those difficulties may be ameliorated. PMID:26460108
Neutrino Spectra and Uncertainties for MINOS
Kopp, Sacha
2008-02-21
The MINOS experiment at Fermilab has released an updated result on muon disappearance. The experiment utilizes the intense source of muon neutrinos provided by the NuMI beam line. This note summarizes the systematic uncertainties in the experiment's knowledge of the flux and energy spectrum of the neutrinos from NuMI.
An Approach of Uncertainty Evaluation for Thermal-Hydraulic Analysis
Katsunori Ogura; Hisashi Ninokata
2002-07-01
An approach to evaluate uncertainty systematically for thermal-hydraulic analysis programs is demonstrated. The approach is applied to the Peach Bottom Unit 2 Turbine Trip 2 Benchmark and is validated. (authors)
Numerical Uncertainty Quantification for Radiation Analysis Tools
NASA Technical Reports Server (NTRS)
Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha
2007-01-01
Recently a new emphasis has been placed on engineering applications of space radiation analyses and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing, as the number of rays increase the associated uncertainty decreases, but the computational expense increases. Thus, a cost benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is what is the number of thicknesses that is needed to get an accurate result. So convergence testing is performed to quantify the uncertainty associated with interpolating over different shield thickness spatial grids.
Identifying uncertainties in Arctic climate change projections
NASA Astrophysics Data System (ADS)
Hodson, Daniel L. R.; Keeley, Sarah P. E.; West, Alex; Ridley, Jeff; Hawkins, Ed; Hewitt, Helene T.
2013-06-01
Wide ranging climate changes are expected in the Arctic by the end of the 21st century, but projections of the size of these changes vary widely across current global climate models. This variation represents a large source of uncertainty in our understanding of the evolution of Arctic climate. Here we systematically quantify and assess the model uncertainty in Arctic climate changes in two CO2 doubling experiments: a multimodel ensemble (CMIP3) and an ensemble constructed using a single model (HadCM3) with multiple parameter perturbations (THC-QUMP). These two ensembles allow us to assess the contribution that both structural and parameter variations across models make to the total uncertainty and to begin to attribute sources of uncertainty in projected changes. We find that parameter uncertainty is an major source of uncertainty in certain aspects of Arctic climate. But also that uncertainties in the mean climate state in the 20th century, most notably in the northward Atlantic ocean heat transport and Arctic sea ice volume, are a significant source of uncertainty for projections of future Arctic change. We suggest that better observational constraints on these quantities will lead to significant improvements in the precision of projections of future Arctic climate change.
Vale, Claire L; Burdett, Sarah; Rydzewska, Larysa H M; Albiges, Laurence; Clarke, Noel W; Fisher, David; Fizazi, Karim; Gravis, Gwenaelle; James, Nicholas D; Mason, Malcolm D; Parmar, Mahesh K B; Sweeney, Christopher J; Sydes, Matthew R; Tombal, Bertrand; Tierney, Jayne F
2016-01-01
Summary Background Results from large randomised controlled trials combining docetaxel or bisphosphonates with standard of care in hormone-sensitive prostate cancer have emerged. In order to investigate the effects of these therapies and to respond to emerging evidence, we aimed to systematically review all relevant trials using a framework for adaptive meta-analysis. Methods For this systematic review and meta-analysis, we searched MEDLINE, Embase, LILACS, and the Cochrane Central Register of Controlled Trials, trial registers, conference proceedings, review articles, and reference lists of trial publications for all relevant randomised controlled trials (published, unpublished, and ongoing) comparing either standard of care with or without docetaxel or standard of care with or without bisphosphonates for men with high-risk localised or metastatic hormone-sensitive prostate cancer. For each trial, we extracted hazard ratios (HRs) of the effects of docetaxel or bisphosphonates on survival (time from randomisation until death from any cause) and failure-free survival (time from randomisation to biochemical or clinical failure or death from any cause) from published trial reports or presentations or obtained them directly from trial investigators. HRs were combined using the fixed-effect model (Mantel-Haenzsel). Findings We identified five eligible randomised controlled trials of docetaxel in men with metastatic (M1) disease. Results from three (CHAARTED, GETUG-15, STAMPEDE) of these trials (2992 [93%] of 3206 men randomised) showed that the addition of docetaxel to standard of care improved survival. The HR of 0·77 (95% CI 0·68–0·87; p<0·0001) translates to an absolute improvement in 4-year survival of 9% (95% CI 5–14). Docetaxel in addition to standard of care also improved failure-free survival, with the HR of 0·64 (0·58–0·70; p<0·0001) translating into a reduction in absolute 4-year failure rates of 16% (95% CI 12–19). We identified 11 trials of
Direct Aerosol Forcing Uncertainty
Mccomiskey, Allison
2008-01-15
Understanding sources of uncertainty in aerosol direct radiative forcing (DRF), the difference in a given radiative flux component with and without aerosol, is essential to quantifying changes in Earth's radiation budget. We examine the uncertainty in DRF due to measurement uncertainty in the quantities on which it depends: aerosol optical depth, single scattering albedo, asymmetry parameter, solar geometry, and surface albedo. Direct radiative forcing at the top of the atmosphere and at the surface as well as sensitivities, the changes in DRF in response to unit changes in individual aerosol or surface properties, are calculated at three locations representing distinct aerosol types and radiative environments. The uncertainty in DRF associated with a given property is computed as the product of the sensitivity and typical measurement uncertainty in the respective aerosol or surface property. Sensitivity and uncertainty values permit estimation of total uncertainty in calculated DRF and identification of properties that most limit accuracy in estimating forcing. Total uncertainties in modeled local diurnally averaged forcing range from 0.2 to 1.3 W m-2 (42 to 20%) depending on location (from tropical to polar sites), solar zenith angle, surface reflectance, aerosol type, and aerosol optical depth. The largest contributor to total uncertainty in DRF is usually single scattering albedo; however decreasing measurement uncertainties for any property would increase accuracy in DRF. Comparison of two radiative transfer models suggests the contribution of modeling error is small compared to the total uncertainty although comparable to uncertainty arising from some individual properties.
Universal Uncertainty Relations
NASA Astrophysics Data System (ADS)
Gour, Gilad
2014-03-01
Uncertainty relations are a distinctive characteristic of quantum theory that imposes intrinsic limitations on the precision with which physical properties can be simultaneously determined. The modern work on uncertainty relations employs entropic measures to quantify the lack of knowledge associated with measuring non-commuting observables. However, I will show here that there is no fundamental reason for using entropies as quantifiers; in fact, any functional relation that characterizes the uncertainty of the measurement outcomes can be used to define an uncertainty relation. Starting from a simple assumption that any measure of uncertainty is non-decreasing under mere relabeling of the measurement outcomes, I will show that Schur-concave functions are the most general uncertainty quantifiers. I will then introduce a novel fine-grained uncertainty relation written in terms of a majorization relation, which generates an infinite family of distinct scalar uncertainty relations via the application of arbitrary measures of uncertainty. This infinite family of uncertainty relations includes all the known entropic uncertainty relations, but is not limited to them. In this sense, the relation is universally valid and captures the essence of the uncertainty principle in quantum theory. This talk is based on a joint work with Shmuel Friedland and Vlad Gheorghiu. This research is supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada and by the Pacific Institute for Mathematical Sciences (PIMS).
Fission Spectrum Related Uncertainties
G. Aliberti; I. Kodeli; G. Palmiotti; M. Salvatores
2007-10-01
The paper presents a preliminary uncertainty analysis related to potential uncertainties on the fission spectrum data. Consistent results are shown for a reference fast reactor design configuration and for experimental thermal configurations. However the results obtained indicate the need for further analysis, in particular in terms of fission spectrum uncertainty data assessment.
Two basic Uncertainty Relations in Quantum Mechanics
Angelow, Andrey
2011-04-07
In the present article, we discuss two types of uncertainty relations in Quantum Mechanics-multiplicative and additive inequalities for two canonical observables. The multiplicative uncertainty relation was discovered by Heisenberg. Few years later (1930) Erwin Schroedinger has generalized and made it more precise than the original. The additive uncertainty relation is based on the three independent statistical moments in Quantum Mechanics-Cov(q,p), Var(q) and Var(p). We discuss the existing symmetry of both types of relations and applicability of the additive form for the estimation of the total error.
Two basic Uncertainty Relations in Quantum Mechanics
NASA Astrophysics Data System (ADS)
Angelow, Andrey
2011-04-01
In the present article, we discuss two types of uncertainty relations in Quantum Mechanics-multiplicative and additive inequalities for two canonical observables. The multiplicative uncertainty relation was discovered by Heisenberg. Few years later (1930) Erwin Schrödinger has generalized and made it more precise than the original. The additive uncertainty relation is based on the three independent statistical moments in Quantum Mechanics-Cov(q,p), Var(q) and Var(p). We discuss the existing symmetry of both types of relations and applicability of the additive form for the estimation of the total error.
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Uncertainty analyses. 436.24 Section 436.24 Energy... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... by conducting additional analyses using any standard engineering economics method such as...
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false Uncertainty analyses. 436.24 Section 436.24 Energy... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... by conducting additional analyses using any standard engineering economics method such as...
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...
Uncertainty and Sensitivity Analyses Plan
Simpson, J.C.; Ramsdell, J.V. Jr.
1993-04-01
Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.
Pore Velocity Estimation Uncertainties
NASA Astrophysics Data System (ADS)
Devary, J. L.; Doctor, P. G.
1982-08-01
Geostatistical data analysis techniques were used to stochastically model the spatial variability of groundwater pore velocity in a potential waste repository site. Kriging algorithms were applied to Hanford Reservation data to estimate hydraulic conductivities, hydraulic head gradients, and pore velocities. A first-order Taylor series expansion for pore velocity was used to statistically combine hydraulic conductivity, hydraulic head gradient, and effective porosity surfaces and uncertainties to characterize the pore velocity uncertainty. Use of these techniques permits the estimation of pore velocity uncertainties when pore velocity measurements do not exist. Large pore velocity estimation uncertainties were found to be located in the region where the hydraulic head gradient relative uncertainty was maximal.
NASA Astrophysics Data System (ADS)
Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.
2013-09-01
We study uncertainty quantification in remote sensing of aerosols in the atmosphere with top of the atmosphere reflectance measurements from the nadir-viewing Ozone Monitoring Instrument (OMI). Focus is on the uncertainty in aerosol model selection of pre-calculated aerosol models and on the statistical modelling of the model inadequacies. The aim is to apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness (AOT) retrieval by propagating model selection and model error related uncertainties more realistically. We utilise Bayesian model selection and model averaging methods for the model selection problem and use Gaussian processes to model the smooth systematic discrepancies from the modelled to observed reflectance. The systematic model error is learned from an ensemble of operational retrievals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud free, over land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques. The method is demonstrated with four examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara dessert dust. The presented statistical methodology is general; it is not restricted to this particular satellite retrieval application.
Uncertainty compliant design flood estimation
NASA Astrophysics Data System (ADS)
Botto, A.; Ganora, D.; Laio, F.; Claps, P.
2014-05-01
Hydraulic infrastructures are commonly designed with reference to target values of flood peak, estimated using probabilistic techniques, such as flood frequency analysis. The application of these techniques underlies levels of uncertainty, which are sometimes quantified but normally not accounted for explicitly in the decision regarding design discharges. The present approach aims at defining a procedure which enables the definition of Uncertainty Compliant Design (UNCODE) values of flood peaks. To pursue this goal, we first demonstrate the equivalence of the Standard design based on the return period and the cost-benefit procedure, when linear cost and damage functions are used. We then use this result to assign an expected cost to estimation errors, thus setting a framework to obtain a design flood estimator which minimizes the total expected cost. This procedure properly accounts for the uncertainty which is inherent in the frequency curve estimation. Applications of the UNCODE procedure to real cases leads to remarkable displacement of the design flood from the Standard values. UNCODE estimates are systematically larger than the Standard ones, with substantial differences (up to 55%) when large return periods or short data samples are considered.
Uncertainty and Cognitive Control
Mushtaq, Faisal; Bland, Amy R.; Schaefer, Alexandre
2011-01-01
A growing trend of neuroimaging, behavioral, and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty are still a matter of debate. In this article, we reviewed evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) There is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) There is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) The perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the “need for control”; (4) Potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders. PMID:22007181
Reducing Zero-point Systematics in Dark Energy Supernova Experiments
Faccioli, Lorenzo; Kim, Alex G; Miquel, Ramon; Bernstein, Gary; Bonissent, Alain; Brown, Matthew; Carithers, William; Christiansen, Jodi; Connolly, Natalia; Deustua, Susana; Gerdes, David; Gladney, Larry; Kushner, Gary; Linder, Eric; McKee, Shawn; Mostek, Nick; Shukla, Hemant; Stebbins, Albert; Stoughton, Chris; Tucker, David
2011-04-01
We study the effect of filter zero-point uncertainties on future supernova dark energy missions. Fitting for calibration parameters using simultaneous analysis of all Type Ia supernova standard candles achieves a significant improvement over more traditional fit methods. This conclusion is robust under diverse experimental configurations (number of observed supernovae, maximum survey redshift, inclusion of additional systematics). This approach to supernova fitting considerably eases otherwise stringent mission cali- bration requirements. As an example we simulate a space-based mission based on the proposed JDEM satellite; however the method and conclusions are general and valid for any future supernova dark energy mission, ground or space-based.
RUMINATIONS ON NDA MEASUREMENT UNCERTAINTY COMPARED TO DA UNCERTAINTY
Salaymeh, S.; Ashley, W.; Jeffcoat, R.
2010-06-17
It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support: criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail suitable for subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non routine situations. This is because data interpretation often involves complex algorithms and logic combined in a highly intertwined way. The methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to developing and understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice and embracing the continuous improvement philosophy a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in non destructive assay of nuclear materials. This paper outlines the needs analysis, objectives and on-going development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community we hope this article will encourage dialog and sharing of best practice and furthermore motivate developers to revisit the treatment of measurement uncertainty.
Quantifying uncertainty from material inhomogeneity.
Battaile, Corbett Chandler; Emery, John M.; Brewer, Luke N.; Boyce, Brad Lee
2009-09-01
Most engineering materials are inherently inhomogeneous in their processing, internal structure, properties, and performance. Their properties are therefore statistical rather than deterministic. These inhomogeneities manifest across multiple length and time scales, leading to variabilities, i.e. statistical distributions, that are necessary to accurately describe each stage in the process-structure-properties hierarchy, and are ultimately the primary source of uncertainty in performance of the material and component. When localized events are responsible for component failure, or when component dimensions are on the order of microstructural features, this uncertainty is particularly important. For ultra-high reliability applications, the uncertainty is compounded by a lack of data describing the extremely rare events. Hands-on testing alone cannot supply sufficient data for this purpose. To date, there is no robust or coherent method to quantify this uncertainty so that it can be used in a predictive manner at the component length scale. The research presented in this report begins to address this lack of capability through a systematic study of the effects of microstructure on the strain concentration at a hole. To achieve the strain concentration, small circular holes (approximately 100 {micro}m in diameter) were machined into brass tensile specimens using a femto-second laser. The brass was annealed at 450 C, 600 C, and 800 C to produce three hole-to-grain size ratios of approximately 7, 1, and 1/7. Electron backscatter diffraction experiments were used to guide the construction of digital microstructures for finite element simulations of uniaxial tension. Digital image correlation experiments were used to qualitatively validate the numerical simulations. The simulations were performed iteratively to generate statistics describing the distribution of plastic strain at the hole in varying microstructural environments. In both the experiments and simulations, the
Systematic reviews need systematic searchers
McGowan, Jessie; Sampson, Margaret
2005-01-01
Purpose: This paper will provide a description of the methods, skills, and knowledge of expert searchers working on systematic review teams. Brief Description: Systematic reviews and meta-analyses are very important to health care practitioners, who need to keep abreast of the medical literature and make informed decisions. Searching is a critical part of conducting these systematic reviews, as errors made in the search process potentially result in a biased or otherwise incomplete evidence base for the review. Searches for systematic reviews need to be constructed to maximize recall and deal effectively with a number of potentially biasing factors. Librarians who conduct the searches for systematic reviews must be experts. Discussion/Conclusion: Expert searchers need to understand the specifics about data structure and functions of bibliographic and specialized databases, as well as the technical and methodological issues of searching. Search methodology must be based on research about retrieval practices, and it is vital that expert searchers keep informed about, advocate for, and, moreover, conduct research in information retrieval. Expert searchers are an important part of the systematic review team, crucial throughout the review process—from the development of the proposal and research question to publication. PMID:15685278
The Scientific Basis of Uncertainty Factors Used in Setting Occupational Exposure Limits.
Dankovic, D A; Naumann, B D; Maier, A; Dourson, M L; Levy, L S
2015-01-01
The uncertainty factor concept is integrated into health risk assessments for all aspects of public health practice, including by most organizations that derive occupational exposure limits. The use of uncertainty factors is predicated on the assumption that a sufficient reduction in exposure from those at the boundary for the onset of adverse effects will yield a safe exposure level for at least the great majority of the exposed population, including vulnerable subgroups. There are differences in the application of the uncertainty factor approach among groups that conduct occupational assessments; however, there are common areas of uncertainty which are considered by all or nearly all occupational exposure limit-setting organizations. Five key uncertainties that are often examined include interspecies variability in response when extrapolating from animal studies to humans, response variability in humans, uncertainty in estimating a no-effect level from a dose where effects were observed, extrapolation from shorter duration studies to a full life-time exposure, and other insufficiencies in the overall health effects database indicating that the most sensitive adverse effect may not have been evaluated. In addition, a modifying factor is used by some organizations to account for other remaining uncertainties-typically related to exposure scenarios or accounting for the interplay among the five areas noted above. Consideration of uncertainties in occupational exposure limit derivation is a systematic process whereby the factors applied are not arbitrary, although they are mathematically imprecise. As the scientific basis for uncertainty factor application has improved, default uncertainty factors are now used only in the absence of chemical-specific data, and the trend is to replace them with chemical-specific adjustment factors whenever possible. The increased application of scientific data in the development of uncertainty factors for individual chemicals also has
Uncertainty in hydrological signatures
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; McMillan, H. K.
2015-09-01
Information about rainfall-runoff processes is essential for hydrological analyses, modelling and water-management applications. A hydrological, or diagnostic, signature quantifies such information from observed data as an index value. Signatures are widely used, e.g. for catchment classification, model calibration and change detection. Uncertainties in the observed data - including measurement inaccuracy and representativeness as well as errors relating to data management - propagate to the signature values and reduce their information content. Subjective choices in the calculation method are a further source of uncertainty. We review the uncertainties relevant to different signatures based on rainfall and flow data. We propose a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrate it in two catchments for common signatures including rainfall-runoff thresholds, recession analysis and basic descriptive signatures of flow distribution and dynamics. Our intention is to contribute to awareness and knowledge of signature uncertainty, including typical sources, magnitude and methods for its assessment. We found that the uncertainties were often large (i.e. typical intervals of ±10-40 % relative uncertainty) and highly variable between signatures. There was greater uncertainty in signatures that use high-frequency responses, small data subsets, or subsets prone to measurement errors. There was lower uncertainty in signatures that use spatial or temporal averages. Some signatures were sensitive to particular uncertainty types such as rating-curve form. We found that signatures can be designed to be robust to some uncertainty sources. Signature uncertainties of the magnitudes we found have the potential to change the conclusions of hydrological and ecohydrological analyses, such as cross-catchment comparisons or inferences about dominant processes.
Uncertainty in hydrological signatures
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; McMillan, H. K.
2015-04-01
Information about rainfall-runoff processes is essential for hydrological analyses, modelling and water-management applications. A hydrological, or diagnostic, signature quantifies such information from observed data as an index value. Signatures are widely used, including for catchment classification, model calibration and change detection. Uncertainties in the observed data - including measurement inaccuracy and representativeness as well as errors relating to data management - propagate to the signature values and reduce their information content. Subjective choices in the calculation method are a further source of uncertainty. We review the uncertainties relevant to different signatures based on rainfall and flow data. We propose a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrate it in two catchments for common signatures including rainfall-runoff thresholds, recession analysis and basic descriptive signatures of flow distribution and dynamics. Our intention is to contribute to awareness and knowledge of signature uncertainty, including typical sources, magnitude and methods for its assessment. We found that the uncertainties were often large (i.e. typical intervals of ±10-40% relative uncertainty) and highly variable between signatures. There was greater uncertainty in signatures that use high-frequency responses, small data subsets, or subsets prone to measurement errors. There was lower uncertainty in signatures that use spatial or temporal averages. Some signatures were sensitive to particular uncertainty types such as rating-curve form. We found that signatures can be designed to be robust to some uncertainty sources. Signature uncertainties of the magnitudes we found have the potential to change the conclusions of hydrological and ecohydrological analyses, such as cross-catchment comparisons or inferences about dominant processes.
Uncertainty of decibel levels.
Taraldsen, Gunnar; Berge, Truls; Haukland, Frode; Lindqvist, Bo Henry; Jonasson, Hans
2015-09-01
The mean sound exposure level from a source is routinely estimated by the mean of the observed sound exposures from repeated measurements. A formula for the standard uncertainty based on the Guide to the expression of Uncertainty in Measurement (GUM) is derived. An alternative formula is derived for the case where the GUM method fails. The formulas are applied on several examples, and compared with a Monte Carlo calculation of the standard uncertainty. The recommended formula can be seen simply as a convenient translation of the uncertainty on an energy scale into the decibel level scale, but with a theoretical foundation. PMID:26428824
Uncertainty in hydrological signatures
NASA Astrophysics Data System (ADS)
McMillan, Hilary; Westerberg, Ida
2015-04-01
Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing responses of different catchments to understand catchment organisation and similarity, and for many other modelling and water-management applications. Such information types derived as an index value from observed data are known as hydrological signatures, and can include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), the flow variability, flow duration curve, and runoff ratio. Because the hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitude and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures. The study was made for two data rich catchments, the 50 km2 Mahurangi catchment in New Zealand and the 135 km2 Brue catchment in the UK. For rainfall data the uncertainty sources included point measurement uncertainty, the number of gauges used in calculation of the catchment spatial average, and uncertainties relating to lack of quality control. For flow data the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve. The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty
Bartine, D.E.; Cacuci, D.G.
1983-09-13
This paper describes sources of uncertainty in the data used for calculating dose estimates for the Hiroshima explosion and details a methodology for systematically obtaining best estimates and reduced uncertainties for the radiation doses received. (ACR)
Food additives are substances that become part of a food product when they are added during the processing or making of that food. "Direct" food additives are often added during processing to: Add nutrients ...
Saccade Adaptation and Visual Uncertainty
Souto, David; Gegenfurtner, Karl R.; Schütz, Alexander C.
2016-01-01
Visual uncertainty may affect saccade adaptation in two complementary ways. First, an ideal adaptor should take into account the reliability of visual information for determining the amount of correction, predicting that increasing visual uncertainty should decrease adaptation rates. We tested this by comparing observers' direction discrimination and adaptation rates in an intra-saccadic-step paradigm. Second, clearly visible target steps may generate a slower adaptation rate since the error can be attributed to an external cause, instead of an internal change in the visuo-motor mapping that needs to be compensated. We tested this prediction by measuring saccade adaptation to different step sizes. Most remarkably, we found little correlation between estimates of visual uncertainty and adaptation rates and no slower adaptation rates with more visible step sizes. Additionally, we show that for low contrast targets backward steps are perceived as stationary after the saccade, but that adaptation rates are independent of contrast. We suggest that the saccadic system uses different position signals for adapting dysmetric saccades and for generating a trans-saccadic stable visual percept, explaining that saccade adaptation is found to be independent of visual uncertainty. PMID:27252635
Attitudes toward Others Depend upon Self and Other Causal Uncertainty
Tobin, Stephanie J.; Osika, Matylda M.; McLanders, Mia
2014-01-01
People who are high in causal uncertainty doubt their own ability to understand the causes of social events. In three studies, we examined the effects of target and perceiver causal uncertainty on attitudes toward the target. Target causal uncertainty was manipulated via responses on a causal uncertainty scale in Studies 1 and 2, and with a scenario in Study 3. In Studies 1 and 2, we found that participants liked the low causal uncertainty target more than the high causal uncertainty target. This preference was stronger for low relative to high causal uncertainty participants because high causal uncertainty participants held more uncertain ideals. In Study 3, we examined the value individuals place upon causal understanding (causal importance) as an additional moderator. We found that regardless of their own causal uncertainty level, participants who were high in causal importance liked the low causal uncertainty target more than the high causal uncertainty target. However, when participants were low in causal importance, low causal uncertainty perceivers showed no preference and high causal uncertainty perceivers preferred the high causal uncertainty target. These findings reveal that goal importance and ideals can influence how perceivers respond to causal uncertainty in others. PMID:24504048
MOUSE UNCERTAINTY ANALYSIS SYSTEM
The original MOUSE (Modular Oriented Uncertainty System) system was designed to deal with the problem of uncertainties in Environmental engineering calculations, such as a set of engineering cost or risk analysis equations. t was especially intended for use by individuals with li...
Electoral Knowledge and Uncertainty.
ERIC Educational Resources Information Center
Blood, R. Warwick; And Others
Research indicates that the media play a role in shaping the information that voters have about election options. Knowledge of those options has been related to actual vote, but has not been shown to be strongly related to uncertainty. Uncertainty, however, does seem to motivate voters to engage in communication activities, some of which may…
Spencer, Michael
1974-01-01
Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857
Physical Uncertainty Bounds (PUB)
Vaughan, Diane Elizabeth; Preston, Dean L.
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Economic uncertainty and econophysics
NASA Astrophysics Data System (ADS)
Schinckus, Christophe
2009-10-01
The objective of this paper is to provide a methodological link between econophysics and economics. I will study a key notion of both fields: uncertainty and the ways of thinking about it developed by the two disciplines. After having presented the main economic theories of uncertainty (provided by Knight, Keynes and Hayek), I show how this notion is paradoxically excluded from the economic field. In economics, uncertainty is totally reduced by an a priori Gaussian framework-in contrast to econophysics, which does not use a priori models because it works directly on data. Uncertainty is then not shaped by a specific model, and is partially and temporally reduced as models improve. This way of thinking about uncertainty has echoes in the economic literature. By presenting econophysics as a Knightian method, and a complementary approach to a Hayekian framework, this paper shows that econophysics can be methodologically justified from an economic point of view.
NASA Astrophysics Data System (ADS)
Sciacchitano, Andrea; Wieneke, Bernhard
2016-08-01
This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It is shown that the uncertainty of vorticity and velocity divergence requires the knowledge of the spatial correlation between the error of the x and y particle image displacement, which depends upon the measurement spatial resolution. The uncertainty of statistical quantities is often dominated by the random uncertainty due to the finite sample size and decreases with the square root of the effective number of independent samples. Monte Carlo simulations are conducted to assess the accuracy of the uncertainty propagation formulae. Furthermore, three experimental assessments are carried out. In the first experiment, a turntable is used to simulate a rigid rotation flow field. The estimated uncertainty of the vorticity is compared with the actual vorticity error root-mean-square, with differences between the two quantities within 5–10% for different interrogation window sizes and overlap factors. A turbulent jet flow is investigated in the second experimental assessment. The reference velocity, which is used to compute the reference value of the instantaneous flow properties of interest, is obtained with an auxiliary PIV system, which features a higher dynamic range than the measurement system. Finally, the uncertainty quantification of statistical quantities is assessed via PIV measurements in a cavity flow. The comparison between estimated uncertainty and actual error demonstrates the accuracy of the proposed uncertainty propagation methodology.
The Scientific Basis of Uncertainty Factors Used in Setting Occupational Exposure Limits
Dankovic, D. A.; Naumann, B. D.; Maier, A.; Dourson, M. L.; Levy, L. S.
2015-01-01
The uncertainty factor concept is integrated into health risk assessments for all aspects of public health practice, including by most organizations that derive occupational exposure limits. The use of uncertainty factors is predicated on the assumption that a sufficient reduction in exposure from those at the boundary for the onset of adverse effects will yield a safe exposure level for at least the great majority of the exposed population, including vulnerable subgroups. There are differences in the application of the uncertainty factor approach among groups that conduct occupational assessments; however, there are common areas of uncertainty which are considered by all or nearly all occupational exposure limit-setting organizations. Five key uncertainties that are often examined include interspecies variability in response when extrapolating from animal studies to humans, response variability in humans, uncertainty in estimating a no-effect level from a dose where effects were observed, extrapolation from shorter duration studies to a full life-time exposure, and other insufficiencies in the overall health effects database indicating that the most sensitive adverse effect may not have been evaluated. In addition, a modifying factor is used by some organizations to account for other remaining uncertainties—typically related to exposure scenarios or accounting for the interplay among the five areas noted above. Consideration of uncertainties in occupational exposure limit derivation is a systematic process whereby the factors applied are not arbitrary, although they are mathematically imprecise. As the scientific basis for uncertainty factor application has improved, default uncertainty factors are now used only in the absence of chemical-specific data, and the trend is to replace them with chemical-specific adjustment factors whenever possible. The increased application of scientific data in the development of uncertainty factors for individual chemicals also
Assessing MODIS Macrophysical Cloud Property Uncertainties
NASA Astrophysics Data System (ADS)
Maddux, B. C.; Ackerman, S. A.; Frey, R.; Holz, R.
2013-12-01
Cloud, being multifarious and ephemeral, is difficult to observe and quantify in a systematic way. Even basic terminology used to describe cloud observations is fraught with ambiguity in the scientific literature. Any observational technique, method, or platform will contain inherent and unavoidable measurement uncertainties. Quantifying these uncertainties in cloud observations is a complex task that requires an understanding of all aspects of the measurement. We will use cloud observations obtained from the Moderate Resolution Imaging Spectroradiameter(MODIS) to obtain metrics of the uncertainty of its cloud observations. Our uncertainty analyses will contain two main components, 1) an attempt to create a bias or uncertainty with respect to active measurements from CALIPSO and 2) a relative uncertainty within the MODIS cloud climatologies themselves. Our method will link uncertainty to the physical observation and its environmental/scene characteristics. Our aim is to create statistical uncertainties that are based on the cloud observational values, satellite view geometry, surface type, etc, for cloud amount and cloud top pressure. The MODIS instruments on the NASA Terra and Aqua satellites provide observations over a broad spectral range (36 bands between 0.415 and 14.235 micron) and high spatial resolution (250 m for two bands, 500 m for five bands, 1000 m for 29 bands), which the MODIS cloud mask algorithm (MOD35) utilizes to provide clear/cloud determinations over a wide array of surface types, solar illuminations and view geometries. For this study we use the standard MODIS products, MOD03, MOD06 and MOD35, all of which were obtained from the NASA Level 1 and Atmosphere Archive and Distribution System.
Uncertainty of upland soil carbon sink estimate for Finland
NASA Astrophysics Data System (ADS)
Lehtonen, Aleksi; Heikkinen, Juha
2016-04-01
Changes in the soil carbon stock of Finnish upland soils were quantified using forest inventory data, forest statistics, biomass models, litter turnover rates, and the Yasso07 soil model. Uncertainty in the estimated stock changes was assessed by combining model and sampling errors associated with the various data sources into variance-covariance matrices that allowed computationally efficient error propagation in the context of Yasso07 simulations. In sensitivity analysis, we found that the uncertainty increased drastically as a result of adding random year-to-year variation to the litter input. Such variation is smoothed out when using periodic inventory data with constant biomass models and turnover rates. Model errors (biomass, litter, understorey vegetation) and the systematic error of total drain had a marginal effect on the uncertainty regarding soil carbon stock change. Most of the uncertainty appears to be related to uncaptured annual variation in litter amounts. This is due to fact that variation in the slopes of litter input trends dictates the uncertainty of soil carbon stock change. If we assume that there is annual variation only in foliage and fine root litter rates and that this variation is less than 10% from year to year, then we can claim that Finnish upland forest soils have accumulated carbon during the first Kyoto period (2008-2012). The results of the study underline superiority of permanent sample plots compared to temporary ones, when soil model litter input trends have been estimated from forest inventory data. In addition, we also found that the use of IPCC guidelines leads to underestimation of the uncertainty of soil carbon stock change. This underestimation of the error results from the guidance to remove inter-annual variation from the model inputs, here illustrated with constant litter life spans. Model assumptions and model input estimation should be evaluated critically, when GHG-inventory results are used for policy planning
Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.
2012-07-01
Sampling based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of uncertainty is minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible as compared to the impact from sampling the epistemic uncertainties. Obviously, this process may cause high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples with much smaller number of Monte Carlo histories each. The method is applied along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va to various critical assemblies and a full scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, with a high reduction of computing time by factors of the magnitude of 100. (authors)
NASA Astrophysics Data System (ADS)
Povey, A. C.; Grainger, R. G.
2015-08-01
This paper discusses a best-practice representation of uncertainty in satellite remote sensing data. An estimate of uncertainty is necessary to make appropriate use of the information conveyed by a measurement. Traditional error propagation quantifies the uncertainty in a measurement due to well-understood perturbations in a measurement and auxiliary data - known, quantified "unknowns". The underconstrained nature of most satellite remote sensing observations requires the use of various approximations and assumptions that produce non-linear systematic errors that are not readily assessed - known, unquantifiable "unknowns". Additional errors result from the inability to resolve all scales of variation in the measured quantity - unknown "unknowns". The latter two categories of error are dominant in underconstrained remote sensing retrievals and the difficulty of their quantification limits the utility of existing uncertainty estimates, degrading confidence in such data. This paper proposes the use of ensemble techniques to present multiple self-consistent realisations of a data set as a means of depicting unquantified uncertainties. These are generated using various systems (different algorithms or forward models) believed to be appropriate to the conditions observed. Benefiting from the experience of the climate modelling community, an ensemble provides a user with a more complete representation of the uncertainty as understood by the data producer and greater freedom to consider different realisations of the data.
Uncertainty Evaluation of the Diffusive Gradients in Thin Films Technique
2015-01-01
Although the analytical performance of the diffusive gradients in thin films (DGT) technique is well investigated, there is no systematic analysis of the DGT measurement uncertainty and its sources. In this study we determine the uncertainties of bulk DGT measurements (not considering labile complexes) and of DGT-based chemical imaging using laser ablation - inductively coupled plasma mass spectrometry. We show that under well-controlled experimental conditions the relative combined uncertainties of bulk DGT measurements are ∼10% at a confidence interval of 95%. While several factors considerably contribute to the uncertainty of bulk DGT, the uncertainty of DGT LA-ICP-MS mainly depends on the signal variability of the ablation analysis. The combined uncertainties determined in this study support the use of DGT as a monitoring instrument. It is expected that the analytical requirements of legal frameworks, for example, the EU Drinking Water Directive, are met by DGT sampling. PMID:25579402
NASA Astrophysics Data System (ADS)
Carpenter, Kenneth; Currie, Philip J.
1992-07-01
In recent years dinosaurs have captured the attention of the public at an unprecedented level. At the heart of this resurgence in popular interest is an increased level of research activity, much of which is innovative in the field of paleontology. For instance, whereas earlier paleontological studies emphasized basic morphologic description and taxonomic classification, modern studies attempt to examine the role and nature of dinosaurs as living animals. More than ever before, we understand how these extinct species functioned, behaved, interacted with each other and the environment, and evolved. Nevertheless, these studies rely on certain basic building blocks of knowledge, including facts about dinosaur anatomy and taxonomic relationships. One of the purposes of this volume is to unravel some of the problems surrounding dinosaur systematics and to increase our understanding of dinosaurs as a biological species. Dinosaur Systematics presents a current overview of dinosaur systematics using various examples to explore what is a species in a dinosaur, what separates genders in dinosaurs, what morphological changes occur with maturation of a species, and what morphological variations occur within a species.
Extended uncertainty from first principles
NASA Astrophysics Data System (ADS)
Costa Filho, Raimundo N.; Braga, João P. M.; Lira, Jorge H. S.; Andrade, José S.
2016-04-01
A translation operator acting in a space with a diagonal metric is introduced to describe the motion of a particle in a quantum system. We show that the momentum operator and, as a consequence, the uncertainty relation now depend on the metric. It is also shown that, for any metric expanded up to second order, this formalism naturally leads to an extended uncertainty principle (EUP) with a minimum momentum dispersion. The Ehrenfest theorem is modified to include an additional term related to a tidal force arriving from the space curvature introduced by the metric. For one-dimensional systems, we show how to map a harmonic potential to an effective potential in Euclidean space using different metrics.
On the relationship between aerosol model uncertainty and radiative forcing uncertainty
NASA Astrophysics Data System (ADS)
Lee, Lindsay A.; Reddington, Carly L.; Carslaw, Kenneth S.
2016-05-01
The largest uncertainty in the historical radiative forcing of climate is caused by the interaction of aerosols with clouds. Historical forcing is not a directly measurable quantity, so reliable assessments depend on the development of global models of aerosols and clouds that are well constrained by observations. However, there has been no systematic assessment of how reduction in the uncertainty of global aerosol models will feed through to the uncertainty in the predicted forcing. We use a global model perturbed parameter ensemble to show that tight observational constraint of aerosol concentrations in the model has a relatively small effect on the aerosol-related uncertainty in the calculated forcing between preindustrial and present-day periods. One factor is the low sensitivity of present-day aerosol to natural emissions that determine the preindustrial aerosol state. However, the major cause of the weak constraint is that the full uncertainty space of the model generates a large number of model variants that are equally acceptable compared to present-day aerosol observations. The narrow range of aerosol concentrations in the observationally constrained model gives the impression of low aerosol model uncertainty. However, these multiple “equifinal” models predict a wide range of forcings. To make progress, we need to develop a much deeper understanding of model uncertainty and ways to use observations to constrain it. Equifinality in the aerosol model means that tuning of a small number of model processes to achieve model‑observation agreement could give a misleading impression of model robustness.
On the relationship between aerosol model uncertainty and radiative forcing uncertainty.
Lee, Lindsay A; Reddington, Carly L; Carslaw, Kenneth S
2016-05-24
The largest uncertainty in the historical radiative forcing of climate is caused by the interaction of aerosols with clouds. Historical forcing is not a directly measurable quantity, so reliable assessments depend on the development of global models of aerosols and clouds that are well constrained by observations. However, there has been no systematic assessment of how reduction in the uncertainty of global aerosol models will feed through to the uncertainty in the predicted forcing. We use a global model perturbed parameter ensemble to show that tight observational constraint of aerosol concentrations in the model has a relatively small effect on the aerosol-related uncertainty in the calculated forcing between preindustrial and present-day periods. One factor is the low sensitivity of present-day aerosol to natural emissions that determine the preindustrial aerosol state. However, the major cause of the weak constraint is that the full uncertainty space of the model generates a large number of model variants that are equally acceptable compared to present-day aerosol observations. The narrow range of aerosol concentrations in the observationally constrained model gives the impression of low aerosol model uncertainty. However, these multiple "equifinal" models predict a wide range of forcings. To make progress, we need to develop a much deeper understanding of model uncertainty and ways to use observations to constrain it. Equifinality in the aerosol model means that tuning of a small number of model processes to achieve model-observation agreement could give a misleading impression of model robustness. PMID:26848136
On the relationship between aerosol model uncertainty and radiative forcing uncertainty
Reddington, Carly L.; Carslaw, Kenneth S.
2016-01-01
The largest uncertainty in the historical radiative forcing of climate is caused by the interaction of aerosols with clouds. Historical forcing is not a directly measurable quantity, so reliable assessments depend on the development of global models of aerosols and clouds that are well constrained by observations. However, there has been no systematic assessment of how reduction in the uncertainty of global aerosol models will feed through to the uncertainty in the predicted forcing. We use a global model perturbed parameter ensemble to show that tight observational constraint of aerosol concentrations in the model has a relatively small effect on the aerosol-related uncertainty in the calculated forcing between preindustrial and present-day periods. One factor is the low sensitivity of present-day aerosol to natural emissions that determine the preindustrial aerosol state. However, the major cause of the weak constraint is that the full uncertainty space of the model generates a large number of model variants that are equally acceptable compared to present-day aerosol observations. The narrow range of aerosol concentrations in the observationally constrained model gives the impression of low aerosol model uncertainty. However, these multiple “equifinal” models predict a wide range of forcings. To make progress, we need to develop a much deeper understanding of model uncertainty and ways to use observations to constrain it. Equifinality in the aerosol model means that tuning of a small number of model processes to achieve model−observation agreement could give a misleading impression of model robustness. PMID:26848136
The Crucial Role of Error Correlation for Uncertainty Modeling of CFD-Based Aerodynamics Increments
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Walker, Eric L.
2011-01-01
The Ares I ascent aerodynamics database for Design Cycle 3 (DAC-3) was built from wind-tunnel test results and CFD solutions. The wind tunnel results were used to build the baseline response surfaces for wind-tunnel Reynolds numbers at power-off conditions. The CFD solutions were used to build increments to account for Reynolds number effects. We calculate the validation errors for the primary CFD code results at wind tunnel Reynolds number power-off conditions and would like to be able to use those errors to predict the validation errors for the CFD increments. However, the validation errors are large compared to the increments. We suggest a way forward that is consistent with common practice in wind tunnel testing which is to assume that systematic errors in the measurement process and/or the environment will subtract out when increments are calculated, thus making increments more reliable with smaller uncertainty than absolute values of the aerodynamic coefficients. A similar practice has arisen for the use of CFD to generate aerodynamic database increments. The basis of this practice is the assumption of strong correlation of the systematic errors inherent in each of the results used to generate an increment. The assumption of strong correlation is the inferential link between the observed validation uncertainties at wind-tunnel Reynolds numbers and the uncertainties to be predicted for flight. In this paper, we suggest a way to estimate the correlation coefficient and demonstrate the approach using code-to-code differences that were obtained for quality control purposes during the Ares I CFD campaign. Finally, since we can expect the increments to be relatively small compared to the baseline response surface and to be typically of the order of the baseline uncertainty, we find that it is necessary to be able to show that the correlation coefficients are close to unity to avoid overinflating the overall database uncertainty with the addition of the increments.
Statistical Uncertainty Analysis Applied to Criticality Calculation
Hartini, Entin; Andiwijayakusuma, Dinan; Susmikanti, Mike; Nursinta, A. W.
2010-06-22
In this paper, we present an uncertainty methodology based on a statistical approach, for assessing uncertainties in criticality prediction using monte carlo method due to uncertainties in the isotopic composition of the fuel. The methodology has been applied to criticality calculations with MCNP5 with additional stochastic input of the isotopic fuel composition. The stochastic input were generated using the latin hypercube sampling method based one the probability density function of each nuclide composition. The automatic passing of the stochastic input to the MCNP and the repeated criticality calculation is made possible by using a python script to link the MCNP and our latin hypercube sampling code.
Incorporating climate change into systematic conservation planning
Groves, Craig R.; Game, Edward T.; Anderson, Mark G.; Cross, Molly; Enquist, Carolyn; Ferdana, Zach; Girvetz, Evan; Gondor, Anne; Hall, Kimberly R.; Higgins, Jonathan; Marshall, Rob; Popper, Ken; Schill, Steve; Shafer, Sarah L.
2012-01-01
The principles of systematic conservation planning are now widely used by governments and non-government organizations alike to develop biodiversity conservation plans for countries, states, regions, and ecoregions. Many of the species and ecosystems these plans were designed to conserve are now being affected by climate change, and there is a critical need to incorporate new and complementary approaches into these plans that will aid species and ecosystems in adjusting to potential climate change impacts. We propose five approaches to climate change adaptation that can be integrated into existing or new biodiversity conservation plans: (1) conserving the geophysical stage, (2) protecting climatic refugia, (3) enhancing regional connectivity, (4) sustaining ecosystem process and function, and (5) capitalizing on opportunities emerging in response to climate change. We discuss both key assumptions behind each approach and the trade-offs involved in using the approach for conservation planning. We also summarize additional data beyond those typically used in systematic conservation plans required to implement these approaches. A major strength of these approaches is that they are largely robust to the uncertainty in how climate impacts may manifest in any given region.
Multi-thresholds for fault isolation in the presence of uncertainties.
Touati, Youcef; Mellal, Mohamed Arezki; Benazzouz, Djamel
2016-05-01
Monitoring of the faults is an important task in mechatronics. It involves the detection and isolation of faults which are performed by using the residuals. These residuals represent numerical values that define certain intervals called thresholds. In fact, the fault is detected if the residuals exceed the thresholds. In addition, each considered fault must activate a unique set of residuals to be isolated. However, in the presence of uncertainties, false decisions can occur due to the low sensitivity of certain residuals towards faults. In this paper, an efficient approach to make decision on fault isolation in the presence of uncertainties is proposed. Based on the bond graph tool, the approach is developed in order to generate systematically the relations between residuals and faults. The generated relations allow the estimation of the minimum detectable and isolable fault values. The latter is used to calculate the thresholds of isolation for each residual. PMID:26928518
Communicating scientific uncertainty
Fischhoff, Baruch; Davis, Alex L.
2014-01-01
All science has uncertainty. Unless that uncertainty is communicated effectively, decision makers may put too much or too little faith in it. The information that needs to be communicated depends on the decisions that people face. Are they (i) looking for a signal (e.g., whether to evacuate before a hurricane), (ii) choosing among fixed options (e.g., which medical treatment is best), or (iii) learning to create options (e.g., how to regulate nanotechnology)? We examine these three classes of decisions in terms of how to characterize, assess, and convey the uncertainties relevant to each. We then offer a protocol for summarizing the many possible sources of uncertainty in standard terms, designed to impose a minimal burden on scientists, while gradually educating those whose decisions depend on their work. Its goals are better decisions, better science, and better support for science. PMID:25225390
Evaluating prediction uncertainty
McKay, M.D.
1995-03-01
The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented.
Conundrums with uncertainty factors.
Cooke, Roger
2010-03-01
The practice of uncertainty factors as applied to noncancer endpoints in the IRIS database harkens back to traditional safety factors. In the era before risk quantification, these were used to build in a "margin of safety." As risk quantification takes hold, the safety factor methods yield to quantitative risk calculations to guarantee safety. Many authors believe that uncertainty factors can be given a probabilistic interpretation as ratios of response rates, and that the reference values computed according to the IRIS methodology can thus be converted to random variables whose distributions can be computed with Monte Carlo methods, based on the distributions of the uncertainty factors. Recent proposals from the National Research Council echo this view. Based on probabilistic arguments, several authors claim that the current practice of uncertainty factors is overprotective. When interpreted probabilistically, uncertainty factors entail very strong assumptions on the underlying response rates. For example, the factor for extrapolating from animal to human is the same whether the dosage is chronic or subchronic. Together with independence assumptions, these assumptions entail that the covariance matrix of the logged response rates is singular. In other words, the accumulated assumptions entail a log-linear dependence between the response rates. This in turn means that any uncertainty analysis based on these assumptions is ill-conditioned; it effectively computes uncertainty conditional on a set of zero probability. The practice of uncertainty factors is due for a thorough review. Two directions are briefly sketched, one based on standard regression models, and one based on nonparametric continuous Bayesian belief nets. PMID:20030767
Hard Constraints in Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2008-01-01
This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
Berglund, F
1978-01-01
The use of additives to food fulfils many purposes, as shown by the index issued by the Codex Committee on Food Additives: Acids, bases and salts; Preservatives, Antioxidants and antioxidant synergists; Anticaking agents; Colours; Emulfifiers; Thickening agents; Flour-treatment agents; Extraction solvents; Carrier solvents; Flavours (synthetic); Flavour enhancers; Non-nutritive sweeteners; Processing aids; Enzyme preparations. Many additives occur naturally in foods, but this does not exclude toxicity at higher levels. Some food additives are nutrients, or even essential nutritents, e.g. NaCl. Examples are known of food additives causing toxicity in man even when used according to regulations, e.g. cobalt in beer. In other instances, poisoning has been due to carry-over, e.g. by nitrate in cheese whey - when used for artificial feed for infants. Poisonings also occur as the result of the permitted substance being added at too high levels, by accident or carelessness, e.g. nitrite in fish. Finally, there are examples of hypersensitivity to food additives, e.g. to tartrazine and other food colours. The toxicological evaluation, based on animal feeding studies, may be complicated by impurities, e.g. orthotoluene-sulfonamide in saccharin; by transformation or disappearance of the additive in food processing in storage, e.g. bisulfite in raisins; by reaction products with food constituents, e.g. formation of ethylurethane from diethyl pyrocarbonate; by metabolic transformation products, e.g. formation in the gut of cyclohexylamine from cyclamate. Metabolic end products may differ in experimental animals and in man: guanylic acid and inosinic acid are metabolized to allantoin in the rat but to uric acid in man. The magnitude of the safety margin in man of the Acceptable Daily Intake (ADI) is not identical to the "safety factor" used when calculating the ADI. The symptoms of Chinese Restaurant Syndrome, although not hazardous, furthermore illustrate that the whole ADI
SYSTEMATIC SENSITIVITY ANALYSIS OF AIR QUALITY SIMULATION MODELS
This report reviews and assesses systematic sensitivity and uncertainty analysis methods for applications to air quality simulation models. The discussion of the candidate methods presents their basic variables, mathematical foundations, user motivations and preferences, computer...
Quantifying Uncertainty in Epidemiological Models
Ramanathan, Arvind; Jha, Sumit Kumar
2012-01-01
Modern epidemiology has made use of a number of mathematical models, including ordinary differential equation (ODE) based models and agent based models (ABMs) to describe the dynamics of how a disease may spread within a population and enable the rational design of strategies for intervention that effectively contain the spread of the disease. Although such predictions are of fundamental importance in preventing the next global pandemic, there is a significant gap in trusting the outcomes/predictions solely based on such models. Hence, there is a need to develop approaches such that mathematical models can be calibrated against historical data. In addition, there is a need to develop rigorous uncertainty quantification approaches that can provide insights into when a model will fail and characterize the confidence in the (possibly multiple) model outcomes/predictions, when such retrospective analysis cannot be performed. In this paper, we outline an approach to develop uncertainty quantification approaches for epidemiological models using formal methods and model checking. By specifying the outcomes expected from a model in a suitable spatio-temporal logic, we use probabilistic model checking methods to quantify the probability with which the epidemiological model satisfies the specification. We argue that statistical model checking methods can solve the uncertainty quantification problem for complex epidemiological models.
Uncertainty in flood risk mapping
NASA Astrophysics Data System (ADS)
Gonçalves, Luisa M. S.; Fonte, Cidália C.; Gomes, Ricardo
2014-05-01
A flood refers to a sharp increase of water level or volume in rivers and seas caused by sudden rainstorms or melting ice due to natural factors. In this paper, the flooding of riverside urban areas caused by sudden rainstorms will be studied. In this context, flooding occurs when the water runs above the level of the minor river bed and enters the major river bed. The level of the major bed determines the magnitude and risk of the flooding. The prediction of the flooding extent is usually deterministic, and corresponds to the expected limit of the flooded area. However, there are many sources of uncertainty in the process of obtaining these limits, which influence the obtained flood maps used for watershed management or as instruments for territorial and emergency planning. In addition, small variations in the delineation of the flooded area can be translated into erroneous risk prediction. Therefore, maps that reflect the uncertainty associated with the flood modeling process have started to be developed, associating a degree of likelihood with the boundaries of the flooded areas. In this paper an approach is presented that enables the influence of the parameters uncertainty to be evaluated, dependent on the type of Land Cover Map (LCM) and Digital Elevation Model (DEM), on the estimated values of the peak flow and the delineation of flooded areas (different peak flows correspond to different flood areas). The approach requires modeling the DEM uncertainty and its propagation to the catchment delineation. The results obtained in this step enable a catchment with fuzzy geographical extent to be generated, where a degree of possibility of belonging to the basin is assigned to each elementary spatial unit. Since the fuzzy basin may be considered as a fuzzy set, the fuzzy area of the basin may be computed, generating a fuzzy number. The catchment peak flow is then evaluated using fuzzy arithmetic. With this methodology a fuzzy number is obtained for the peak flow
Classification images with uncertainty
Tjan, Bosco S.; Nandy, Anirvan S.
2009-01-01
Classification image and other similar noise-driven linear methods have found increasingly wider applications in revealing psychophysical receptive field structures or perceptual templates. These techniques are relatively easy to deploy, and the results are simple to interpret. However, being a linear technique, the utility of the classification-image method is believed to be limited. Uncertainty about the target stimuli on the part of an observer will result in a classification image that is the superposition of all possible templates for all the possible signals. In the context of a well-established uncertainty model, which pools the outputs of a large set of linear frontends with a max operator, we show analytically, in simulations, and with human experiments that the effect of intrinsic uncertainty can be limited or even eliminated by presenting a signal at a relatively high contrast in a classification-image experiment. We further argue that the subimages from different stimulus-response categories should not be combined, as is conventionally done. We show that when the signal contrast is high, the subimages from the error trials contain a clear high-contrast image that is negatively correlated with the perceptual template associated with the presented signal, relatively unaffected by uncertainty. The subimages also contain a “haze” that is of a much lower contrast and is positively correlated with the superposition of all the templates associated with the erroneous response. In the case of spatial uncertainty, we show that the spatial extent of the uncertainty can be estimated from the classification subimages. We link intrinsic uncertainty to invariance and suggest that this signal-clamped classification-image method will find general applications in uncovering the underlying representations of high-level neural and psychophysical mechanisms. PMID:16889477
Quantifying radar-rainfall uncertainties in urban drainage flow modelling
NASA Astrophysics Data System (ADS)
Rico-Ramirez, M. A.; Liguori, S.; Schellart, A. N. A.
2015-09-01
This work presents the results of the implementation of a probabilistic system to model the uncertainty associated to radar rainfall (RR) estimates and the way this uncertainty propagates through the sewer system of an urban area located in the North of England. The spatial and temporal correlations of the RR errors as well as the error covariance matrix were computed to build a RR error model able to generate RR ensembles that reproduce the uncertainty associated with the measured rainfall. The results showed that the RR ensembles provide important information about the uncertainty in the rainfall measurement that can be propagated in the urban sewer system. The results showed that the measured flow peaks and flow volumes are often bounded within the uncertainty area produced by the RR ensembles. In 55% of the simulated events, the uncertainties in RR measurements can explain the uncertainties observed in the simulated flow volumes. However, there are also some events where the RR uncertainty cannot explain the whole uncertainty observed in the simulated flow volumes indicating that there are additional sources of uncertainty that must be considered such as the uncertainty in the urban drainage model structure, the uncertainty in the urban drainage model calibrated parameters, and the uncertainty in the measured sewer flows.
Uncertainty quantification in lattice QCD calculations for nuclear physics
Beane, Silas R.; Detmold, William; Orginos, Kostas; Savage, Martin J.
2015-02-05
The numerical technique of Lattice QCD holds the promise of connecting the nuclear forces, nuclei, the spectrum and structure of hadrons, and the properties of matter under extreme conditions with the underlying theory of the strong interactions, quantum chromodynamics. A distinguishing, and thus far unique, feature of this formulation is that all of the associated uncertainties, both statistical and systematic can, in principle, be systematically reduced to any desired precision with sufficient computational and human resources. As a result, we review the sources of uncertainty inherent in Lattice QCD calculations for nuclear physics, and discuss how each is quantified in current efforts.
Uncertainty quantification in lattice QCD calculations for nuclear physics
NASA Astrophysics Data System (ADS)
Beane, Silas R.; Detmold, William; Orginos, Kostas; Savage, Martin J.
2015-03-01
The numerical technique of lattice quantum chromodynamics (LQCD) holds the promise of connecting the nuclear forces, nuclei, the spectrum and structure of hadrons, and the properties of matter under extreme conditions with the underlying theory of the strong interactions, quantum chromodynamics. A distinguishing, and thus far unique, feature of this formulation is that all of the associated uncertainties, both statistical and systematic can, in principle, be systematically reduced to any desired precision with sufficient computational and human resources. We review the sources of uncertainty inherent in LQCD calculations for nuclear physics, and discuss how each is quantified in current efforts.
Uncertainties in gamma-ray spectrometry
NASA Astrophysics Data System (ADS)
Lépy, M. C.; Pearce, A.; Sima, O.
2015-06-01
High resolution gamma-ray spectrometry is a well-established metrological technique that can be applied to a large number of photon-emitting radionuclides, activity levels and sample shapes and compositions. Three kinds of quantitative information can be derived using this technique: detection efficiency calibration, radionuclide activity and photon emission intensities. In contrast to other radionuclide measurement techniques gamma-ray spectrometry provides unambiguous identification of gamma-ray emitting radionuclides in addition to activity values. This extra information comes at a cost of increased complexity and inherently higher uncertainties when compared with other secondary techniques. The relative combined standard uncertainty associated with any result obtained by gamma-ray spectrometry depends not only on the uncertainties of the main input parameters but also on different correction factors. To reduce the uncertainties, the experimental conditions must be optimized in terms of the signal processing electronics and the physical parameters of the measured sample should be accurately characterized. Measurement results and detailed examination of the associated uncertainties are presented with a specific focus on the efficiency calibration, peak area determination and correction factors. It must be noted that some of the input values used in quantitative analysis calculation can be correlated, which should be taken into account in fitting procedures or calculation of the uncertainties associated with quantitative results. It is shown that relative combined standard uncertainties are rarely lower than 1% in gamma-ray spectrometry measurements.
Estimating uncertainty of inference for validation
Booker, Jane M; Langenbrunner, James R; Hemez, Francois M; Ross, Timothy J
2010-09-30
We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Imbedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10{sup 13}-10{sup 14} neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the
Experimental Basis for Robust On-orbit Uncertainty Estimates for CLARREO InfraRed Sensors
NASA Astrophysics Data System (ADS)
Dykema, J. A.; Revercomb, H. E.; Anderson, J.
2009-12-01
As defined by the National Research Council Decadal Survey of 2006, the CLimate Absolute Radiance and REfractivity Observatory (CLARREO) satisfies the need for “a long-term global benchmark record of critical climate variables that are accurate over very long time periods, can be tested for systematic errors by future generations, are unaffected by interruption, and are pinned to international standards.” These observational requirements— testing for systematic errors, accuracy over indefinite time, and linkage to internationally recognized measurement standards—are achievable through an appeal to the concept of SI traceability. That is, measurements are made such that they are linked through an unbroken chain of comparisons, where each comparison has a stated and credible uncertainty, back to the definitions of the International System (SI) Units. While the concept of SI traceability is a straightforward one, achieving credible estimates of uncertainty, particularly in the case of complex sensors deployed in orbit, poses a significant challenge. Recently, a set of principles has been proposed to guide the development of sensors that realize fully the benefits of SI traceability. The application of these principles to the spectral infrared sensor that is part of the CLARREO mission is discussed. These principles include, but are not limited to: basing the sensor calibration on a reproducible physical property of matter, devising experimental tests for known sources of measurement bias (or systematic uncertainty), and providing independent system-level checks for the end-to-end radiometric performance of the sensor. The application of these principles to the infrared sensor leads to the following conclusions. To obtain the lowest uncertainty (or highest accuracy), the calibration should be traceable to the definition of the Kelvin—that is, the triple point of water. Realization of a Kelvin-based calibration is achieved through the use of calibration
Practical issues in handling data input and uncertainty in a budget impact analysis.
Nuijten, M J C; Mittendorf, T; Persson, U
2011-06-01
The objective of this paper was to address the importance of dealing systematically and comprehensively with uncertainty in a budget impact analysis (BIA) in more detail. The handling of uncertainty in health economics was used as a point of reference for addressing the uncertainty in a BIA. This overview shows that standard methods of sensitivity analysis, which are used for standard data set in a health economic model (clinical probabilities, treatment patterns, resource utilisation and prices/tariffs), cannot always be used for the input data for the BIA model beyond the health economic data set for various reasons. Whereas in a health economic model, only limited data may come from a Delphi panel, a BIA model often relies on a majority of data taken from a Delphi panel. In addition, the dataset in a BIA model also includes forecasts (e.g. annual growth, uptakes curves, substitution effects, changes in prescription restrictions and guidelines, future distribution of the available treatment modalities, off-label use). As a consequence, the use of standard sensitivity analyses for BIA data set might be limited because of the lack of appropriate distributions as data sources are limited, or because of the need for forecasting. Therefore, scenario analyses might be more appropriate to capture the uncertainty in the BIA data set in the overall BIA model. PMID:20364289
NASA Astrophysics Data System (ADS)
Jones, P. W.; Strelitz, R. A.
2012-12-01
The output of a simulation is best comprehended through the agency and methods of visualization, but a vital component of good science is knowledge of uncertainty. While great strides have been made in the quantification of uncertainty, especially in simulation, there is still a notable gap: there is no widely accepted means of simultaneously viewing the data and the associated uncertainty in one pane. Visualization saturates the screen, using the full range of color, shadow, opacity and tricks of perspective to display even a single variable. There is no room in the visualization expert's repertoire left for uncertainty. We present a method of visualizing uncertainty without sacrificing the clarity and power of the underlying visualization that works as well in 3-D and time-varying visualizations as it does in 2-D. At its heart, it relies on a principal tenet of continuum mechanics, replacing the notion of value at a point with a more diffuse notion of density as a measure of content in a region. First, the uncertainties calculated or tabulated at each point are transformed into a piecewise continuous field of uncertainty density . We next compute a weighted Voronoi tessellation of a user specified N convex polygonal/polyhedral cells such that each cell contains the same amount of uncertainty as defined by . The problem thus devolves into minimizing . Computation of such a spatial decomposition is O(N*N ), and can be computed iteratively making it possible to update easily over time as well as faster. The polygonal mesh does not interfere with the visualization of the data and can be easily toggled on or off. In this representation, a small cell implies a great concentration of uncertainty, and conversely. The content weighted polygons are identical to the cartogram familiar to the information visualization community in the depiction of things voting results per stat. Furthermore, one can dispense with the mesh or edges entirely to be replaced by symbols or glyphs
Quantifying Uncertainties in Land Surface Microwave Emissivity Retrievals
NASA Technical Reports Server (NTRS)
Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko
2012-01-01
Uncertainties in the retrievals of microwave land surface emissivities were quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including SSM/I, TMI and AMSR-E, were studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging 14% (312 K) over desert and 17% (320 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.52% (26 K). In particular, at 85.0/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are mostly likely caused by rain/cloud contamination, which can lead to random errors up to 1017 K under the most severe conditions.
Quantifying Uncertainties in Land-Surface Microwave Emissivity Retrievals
NASA Technical Reports Server (NTRS)
Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko
2013-01-01
Uncertainties in the retrievals of microwaveland-surface emissivities are quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including the Special Sensor Microwave Imager, the Tropical Rainfall Measuring Mission Microwave Imager, and the Advanced Microwave Scanning Radiometer for Earth Observing System, are studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land-surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors inthe retrievals. Generally, these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging 1%-4% (3-12 K) over desert and 1%-7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.
Interpreting uncertainty terms.
Holtgraves, Thomas
2014-08-01
Uncertainty terms (e.g., some, possible, good, etc.) are words that do not have a fixed referent and hence are relatively ambiguous. A model is proposed that specifies how, from the hearer's perspective, recognition of facework as a potential motive for the use of an uncertainty term results in a calibration of the intended meaning of that term. Four experiments are reported that examine the impact of face threat, and the variables that affect it (e.g., power), on the manner in which a variety of uncertainty terms (probability terms, quantifiers, frequency terms, etc.) are interpreted. Overall, the results demonstrate that increased face threat in a situation will result in a more negative interpretation of an utterance containing an uncertainty term. That the interpretation of so many different types of uncertainty terms is affected in the same way suggests the operation of a fundamental principle of language use, one with important implications for the communication of risk, subjective experience, and so on. PMID:25090127
Uncertainty analysis of statistical downscaling methods
NASA Astrophysics Data System (ADS)
Khan, Mohammad Sajjad; Coulibaly, Paulin; Dibike, Yonas
2006-03-01
Three downscaling models namely Statistical Down-Scaling Model (SDSM), Long Ashton Research Station Weather Generator (LARS-WG) model and Artificial Neural Network (ANN) model have been compared in terms various uncertainty assessments exhibited in their downscaled results of daily precipitation, daily maximum and minimum temperatures. In case of daily maximum and minimum temperature, uncertainty is assessed by comparing monthly mean and variance of downscaled and observed daily maximum and minimum temperature at each month of the year at 95% confidence level. In addition, uncertainties of the monthly means and variances of downscaled daily temperature have been calculated using 95% confidence intervals, which are compared with the observed uncertainties of means and variances. In daily precipitation downscaling, in addition to comparing means and variances, uncertainties have been assessed by comparing monthly mean dry and wet spell lengths and their confidence intervals, cumulative frequency distributions (cdfs) of monthly mean of daily precipitation, and the distributions of monthly wet and dry days for observed and downscaled daily precipitation. The study has been carried out using 40 years of observed and downscaled daily precipitation, daily maximum and minimum temperature data using NCEP (National Center for Environmental Prediction) reanalysis predictors starting from 1961 to 2000. The uncertainty assessment results indicate that the SDSM is the most capable of reproducing various statistical characteristics of observed data in its downscaled results with 95% confidence level, the ANN is the least capable in this respect, and the LARS-WG is in between SDSM and ANN.
Stronger Schrödinger-like uncertainty relations
NASA Astrophysics Data System (ADS)
Song, Qiu-Cheng; Qiao, Cong-Feng
2016-08-01
Uncertainty relation is one of the fundamental building blocks of quantum theory. Nevertheless, the traditional uncertainty relations do not fully capture the concept of incompatible observables. Here we present a stronger Schrödinger-like uncertainty relation, which is stronger than the relation recently derived by Maccone and Pati (2014) [11]. Furthermore, we give an additive uncertainty relation which holds for three incompatible observables, which is stronger than the relation newly obtained by Kechrimparis and Weigert (2014) [12] and the simple extension of the Schrödinger uncertainty relation.
Rudolf Keller
2004-08-10
In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $ 0.023 per pound of aluminum produced is projected for a 200 kA pot.
Harrup, Mason K; Rollins, Harry W
2013-11-26
An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin of a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.
Quantifying reliability uncertainty : a proof of concept.
Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna; Lorio, John F.; Fatherley, Quinn; Anderson-Cook, Christine; Wilson, Alyson G.; Zurn, Rena M.
2009-10-01
This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
Measurement uncertainty relations
Busch, Paul; Lahti, Pekka; Werner, Reinhard F.
2014-04-15
Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.
Measurement uncertainty relations
NASA Astrophysics Data System (ADS)
Busch, Paul; Lahti, Pekka; Werner, Reinhard F.
2014-04-01
Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.
Individuals’ Uncertainty about Future Social Security Benefits and Portfolio Choice
Delavande, Adeline
2013-01-01
Summary Little is known about the degree to which individuals are uncertain about their future Social Security benefits, how this varies within the U.S. population, and whether this uncertainty influences financial decisions related to retirement planning. To illuminate these issues, we present empirical evidence from the Health and Retirement Study Internet Survey and document systematic variation in respondents’ uncertainty about their future Social Security benefits by individual characteristics. We find that respondents with higher levels of uncertainty about future benefits hold a smaller share of their wealth in stocks. PMID:23914049
Serenity in political uncertainty.
Doumit, Rita; Afifi, Rema A; Devon, Holli A
2015-01-01
College students are often faced with academic and personal stressors that threaten their well-being. Added to that may be political and environmental stressors such as acts of violence on the streets, interruptions in schooling, car bombings, targeted religious intimidations, financial hardship, and uncertainty of obtaining a job after graduation. Research on how college students adapt to the latter stressors is limited. The aims of this study were (1) to investigate the associations between stress, uncertainty, resilience, social support, withdrawal coping, and well-being for Lebanese youth during their first year of college and (2) to determine whether these variables predicted well-being. A sample of 293 first-year students enrolled in a private university in Lebanon completed a self-reported questionnaire in the classroom setting. The mean age of sample participants was 18.1 years, with nearly an equal percentage of males and females (53.2% vs 46.8%), who lived with their family (92.5%), and whose family reported high income levels (68.4%). Multiple regression analyses revealed that best determinants of well-being are resilience, uncertainty, social support, and gender that accounted for 54.1% of the variance. Despite living in an environment of frequent violence and political uncertainty, Lebanese youth in this study have a strong sense of well-being and are able to go on with their lives. This research adds to our understanding on how adolescents can adapt to stressors of frequent violence and political uncertainty. Further research is recommended to understand the mechanisms through which young people cope with political uncertainty and violence. PMID:25658930
Systematic Effects in Atomic Fountain Clocks
NASA Astrophysics Data System (ADS)
Gibble, Kurt
2016-06-01
We describe recent advances in the accuracies of atomic fountain clocks. New rigorous treatments of the previously large systematic uncertainties, distributed cavity phase, microwave lensing, and background gas collisions, enabled these advances. We also discuss background gas collisions of optical lattice and ion clocks and derive the smooth transition of the microwave lensing frequency shift to photon recoil shifts for large atomic wave packets.
Weighted Uncertainty Relations
NASA Astrophysics Data System (ADS)
Xiao, Yunlong; Jing, Naihuan; Li-Jost, Xianqing; Fei, Shao-Ming
2016-03-01
Recently, Maccone and Pati have given two stronger uncertainty relations based on the sum of variances and one of them is nontrivial when the quantum state is not an eigenstate of the sum of the observables. We derive a family of weighted uncertainty relations to provide an optimal lower bound for all situations and remove the restriction on the quantum state. Generalization to multi-observable cases is also given and an optimal lower bound for the weighted sum of the variances is obtained in general quantum situation.
Weighted Uncertainty Relations
Xiao, Yunlong; Jing, Naihuan; Li-Jost, Xianqing; Fei, Shao-Ming
2016-01-01
Recently, Maccone and Pati have given two stronger uncertainty relations based on the sum of variances and one of them is nontrivial when the quantum state is not an eigenstate of the sum of the observables. We derive a family of weighted uncertainty relations to provide an optimal lower bound for all situations and remove the restriction on the quantum state. Generalization to multi-observable cases is also given and an optimal lower bound for the weighted sum of the variances is obtained in general quantum situation. PMID:26984295
NASA Technical Reports Server (NTRS)
Brown, Laurie M.
1993-01-01
An historical account is given of the circumstances whereby the uncertainty relations were introduced into physics by Heisenberg. The criticisms of QED on measurement-theoretical grounds by Landau and Peierls are then discussed, as well as the response to them by Bohr and Rosenfeld. Finally, some examples are given of how the new freedom to advance radical proposals, in part the result of the revolution brought about by 'uncertainty,' was implemented in dealing with the new phenomena encountered in elementary particle physics in the 1930's.
NASA Astrophysics Data System (ADS)
Silverman, Mark P.
2014-07-01
1. Tools of the trade; 2. The 'fundamental problem' of a practical physicist; 3. Mother of all randomness I: the random disintegration of matter; 4. Mother of all randomness II: the random creation of light; 5. A certain uncertainty; 6. Doing the numbers: nuclear physics and the stock market; 7. On target: uncertainties of projectile flight; 8. The guesses of groups; 9. The random flow of energy I: power to the people; 10. The random flow of energy II: warning from the weather underground; Index.
Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
2000-01-01
Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.
Measurement uncertainty of adsorption testing of desiccant materials
Bingham, C E; Pesaran, A A
1988-12-01
The technique of measurement uncertainty analysis as described in the current ANSI/ASME standard is applied to the testing of desiccant materials in SERI`s Sorption Test Facility. This paper estimates the elemental precision and systematic errors in these tests and propagates them separately to obtain the resulting uncertainty of the test parameters, including relative humidity ({plus_minus}.03) and sorption capacity ({plus_minus}.002 g/g). Errors generated by instrument calibration, data acquisition, and data reduction are considered. Measurement parameters that would improve the uncertainty of the results are identified. Using the uncertainty in the moisture capacity of a desiccant, the design engineer can estimate the uncertainty in performance of a dehumidifier for desiccant cooling systems with confidence. 6 refs., 2 figs., 8 tabs.
Uncertainties in Hauser-Feshbach Neutron Capture Calculations for Astrophysics
Bertolli, M.G. Kawano, T.; Little, H.
2014-06-15
The calculation of neutron capture cross sections in a statistical Hauser-Feshbach method has proved successful in numerous astrophysical applications. Of increasing interest is the uncertainty associated with the calculated Maxwellian averaged cross sections (MACS). Aspects of a statistical model that introduce a large amount of uncertainty are the level density model, γ-ray strength function parameter, and the placement of E{sub low} – the cut-off energy below which the Hauser-Feshbach method is not applicable. Utilizing the Los Alamos statistical model code CoH3 we investigate the appropriate treatment of these sources of uncertainty via systematics of nuclei in a local region for which experimental or evaluated data is available. In order to show the impact of uncertainty analysis on nuclear data for astrophysical applications, these new uncertainties will be propagated through the nucleosynthesis code NuGrid.
Avoiding climate change uncertainties in Strategic Environmental Assessment
Larsen, Sanne Vammen; Kørnøv, Lone; Driscoll, Patrick
2013-11-15
This article is concerned with how Strategic Environmental Assessment (SEA) practice handles climate change uncertainties within the Danish planning system. First, a hypothetical model is set up for how uncertainty is handled and not handled in decision-making. The model incorporates the strategies ‘reduction’ and ‘resilience’, ‘denying’, ‘ignoring’ and ‘postponing’. Second, 151 Danish SEAs are analysed with a focus on the extent to which climate change uncertainties are acknowledged and presented, and the empirical findings are discussed in relation to the model. The findings indicate that despite incentives to do so, climate change uncertainties were systematically avoided or downplayed in all but 5 of the 151 SEAs that were reviewed. Finally, two possible explanatory mechanisms are proposed to explain this: conflict avoidance and a need to quantify uncertainty.
Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Harrison, J.D.; Harper, F.T.; Hora, S.C.
1998-04-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequence from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distribution for variables related to the MACCS and COSYMA internal dosimetry models.
Little, M.P.; Muirhead, C.R.; Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Harper, F.T.; Hora, S.C.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequence from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distribution for variables related to the MACCS and COSYMA late health effects models.
Haskin, F.E.; Harper, F.T.; Goossens, L.H.J.; Kraan, B.C.P.; Grupa, J.B.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequence from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distribution for variables related to the MACCS and COSYMA early health effects models.
Analysis of Hydrogeologic Conceptual Model and Parameter Uncertainty
Meyer, Philip D.; Nicholson, Thomas J.; Mishra, Srikanta
2003-06-24
A systematic methodology for assessing hydrogeologic conceptual model, parameter, and scenario uncertainties is being developed to support technical reviews of environmental assessments related to decommissioning of nuclear facilities. The first major task being undertaken is to produce a coupled parameter and conceptual model uncertainty assessment methodology. This task is based on previous studies that have primarily dealt individually with these two types of uncertainties. Conceptual model uncertainty analysis is based on the existence of alternative conceptual models that are generated using a set of clearly stated guidelines targeted at the needs of NRC staff. Parameter uncertainty analysis makes use of generic site characterization data as well as site-specific characterization and monitoring data to evaluate parameter uncertainty in each of the alternative conceptual models. Propagation of parameter uncertainty will be carried out through implementation of a general stochastic model of groundwater flow and transport in the saturated and unsaturated zones. Evaluation of prediction uncertainty will make use of Bayesian model averaging and visualization of model results. The goal of this study is to develop a practical tool to quantify uncertainties in the conceptual model and parameters identified in performance assessments.
Uncertainty in NIST Force Measurements
Bartel, Tom
2005-01-01
This paper focuses upon the uncertainty of force calibration measurements at the National Institute of Standards and Technology (NIST). The uncertainty of the realization of force for the national deadweight force standards at NIST is discussed, as well as the uncertainties associated with NIST’s voltage-ratio measuring instruments and with the characteristics of transducers being calibrated. The combined uncertainty is related to the uncertainty of dissemination for force transfer standards sent to NIST for calibration. PMID:27308181
NASA Astrophysics Data System (ADS)
Thyer, Mark; Renard, Benjamin; Kavetski, Dmitri; Kuczera, George; Franks, Stewart William; Srikanthan, Sri
2009-12-01
The lack of a robust framework for quantifying the parametric and predictive uncertainty of conceptual rainfall-runoff (CRR) models remains a key challenge in hydrology. The Bayesian total error analysis (BATEA) methodology provides a comprehensive framework to hypothesize, infer, and evaluate probability models describing input, output, and model structural error. This paper assesses the ability of BATEA and standard calibration approaches (standard least squares (SLS) and weighted least squares (WLS)) to address two key requirements of uncertainty assessment: (1) reliable quantification of predictive uncertainty and (2) reliable estimation of parameter uncertainty. The case study presents a challenging calibration of the lumped GR4J model to a catchment with ephemeral responses and large rainfall gradients. Postcalibration diagnostics, including checks of predictive distributions using quantile-quantile analysis, suggest that while still far from perfect, BATEA satisfied its assumed probability models better than SLS and WLS. In addition, WLS/SLS parameter estimates were highly dependent on the selected rain gauge and calibration period. This will obscure potential relationships between CRR parameters and catchment attributes and prevent the development of meaningful regional relationships. Conversely, BATEA provided consistent, albeit more uncertain, parameter estimates and thus overcomes one of the obstacles to parameter regionalization. However, significant departures from the calibration assumptions remained even in BATEA, e.g., systematic overestimation of predictive uncertainty, especially in validation. This is likely due to the inferred rainfall errors compensating for simplified treatment of model structural error.
Uncertainty in Computational Aerodynamics
NASA Technical Reports Server (NTRS)
Luckring, J. M.; Hemsch, M. J.; Morrison, J. H.
2003-01-01
An approach is presented to treat computational aerodynamics as a process, subject to the fundamental quality assurance principles of process control and process improvement. We consider several aspects affecting uncertainty for the computational aerodynamic process and present a set of stages to determine the level of management required to meet risk assumptions desired by the customer of the predictions.
ERIC Educational Resources Information Center
Wargo, John
1985-01-01
Draws conclusions on the scientific uncertainty surrounding most chemical use regulatory decisions, examining the evolution of law and science, benefit analysis, and improving information. Suggests: (1) rapid development of knowledge of chemical risks and (2) a regulatory system which is flexible to new scientific knowledge. (DH)
Uncertainties in repository modeling
Wilson, J.R.
1996-12-31
The distant future is ver difficult to predict. Unfortunately, our regulators are being enchouraged to extend ther regulatory period form the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.
Uncertainty and nonseparability
NASA Astrophysics Data System (ADS)
de La Torre, A. C.; Catuogno, P.; Ferrando, S.
1989-06-01
A quantum covariance function is introduced whose real and imaginary parts are related to the independent contributions to the uncertainty principle: noncommutativity of the operators and nonseparability. It is shown that factorizability of states is a sufficient but not necessary condition for separability. It is suggested that all quantum effects could be considered to be a consequence of nonseparability alone.
ICYESS 2013: Understanding and Interpreting Uncertainty
NASA Astrophysics Data System (ADS)
Rauser, F.; Niederdrenk, L.; Schemann, V.; Schmidt, A.; Suesser, D.; Sonntag, S.
2013-12-01
We will report the outcomes and highlights of the Interdisciplinary Conference of Young Earth System Scientists (ICYESS) on Understanding and Interpreting Uncertainty in September 2013, Hamburg, Germany. This conference is aimed at early career scientists (Masters to Postdocs) from a large variety of scientific disciplines and backgrounds (natural, social and political sciences) and will enable 3 days of discussions on a variety of uncertainty-related aspects: 1) How do we deal with implicit and explicit uncertainty in our daily scientific work? What is uncertain for us, and for which reasons? 2) How can we communicate these uncertainties to other disciplines? E.g., is uncertainty in cloud parameterization and respectively equilibrium climate sensitivity a concept that is understood equally well in natural and social sciences that deal with Earth System questions? Or vice versa, is, e.g., normative uncertainty as in choosing a discount rate relevant for natural scientists? How can those uncertainties be reconciled? 3) How can science communicate this uncertainty to the public? Is it useful at all? How are the different possible measures of uncertainty understood in different realms of public discourse? Basically, we want to learn from all disciplines that work together in the broad Earth System Science community how to understand and interpret uncertainty - and then transfer this understanding to the problem of how to communicate with the public, or its different layers / agents. ICYESS is structured in a way that participation is only possible via presentation, so every participant will give their own professional input into how the respective disciplines deal with uncertainty. Additionally, a large focus is put onto communication techniques; there are no 'standard presentations' in ICYESS. Keynote lectures by renowned scientists and discussions will lead to a deeper interdisciplinary understanding of what we do not really know, and how to deal with it. Many
An uncertainty inventory demonstration - a primary step in uncertainty quantification
Langenbrunner, James R.; Booker, Jane M; Hemez, Francois M; Salazar, Issac F; Ross, Timothy J
2009-01-01
Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore, PRA's lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that results in linguistic descriptors, such as importance ranked with words, 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.
Optimal uncertainty quantification with model uncertainty and legacy data
NASA Astrophysics Data System (ADS)
Kamga, P.-H. T.; Li, B.; McKerns, M.; Nguyen, L. H.; Ortiz, M.; Owhadi, H.; Sullivan, T. J.
2014-12-01
We present an optimal uncertainty quantification (OUQ) protocol for systems that are characterized by an existing physics-based model and for which only legacy data is available, i.e., no additional experimental testing of the system is possible. Specifically, the OUQ strategy developed in this work consists of using the legacy data to establish, in a probabilistic sense, the level of error of the model, or modeling error, and to subsequently use the validated model as a basis for the determination of probabilities of outcomes. The quantification of modeling uncertainty specifically establishes, to a specified confidence, the probability that the actual response of the system lies within a certain distance of the model. Once the extent of model uncertainty has been established in this manner, the model can be conveniently used to stand in for the actual or empirical response of the system in order to compute probabilities of outcomes. To this end, we resort to the OUQ reduction theorem of Owhadi et al. (2013) in order to reduce the computation of optimal upper and lower bounds on probabilities of outcomes to a finite-dimensional optimization problem. We illustrate the resulting UQ protocol by means of an application concerned with the response to hypervelocity impact of 6061-T6 Aluminum plates by Nylon 6/6 impactors at impact velocities in the range of 5-7 km/s. The ability of the legacy OUQ protocol to process diverse information on the system and its ability to supply rigorous bounds on system performance under realistic-and less than ideal-scenarios demonstrated by the hypervelocity impact application is remarkable.
NASA Astrophysics Data System (ADS)
Luna, D.; Alexander, P.; de la Torre, A.
2013-09-01
The application of the Global Positioning System (GPS) radio occultation (RO) method to the atmosphere enables the determination of height profiles of temperature, among other variables. From these measurements, gravity wave activity is usually quantified by calculating the potential energy through the integration of the ratio of perturbation and background temperatures between two given altitudes in each profile. The uncertainty in the estimation of wave activity depends on the systematic biases and random errors of the measured temperature, but also on additional factors like the selected vertical integration layer and the separation method between background and perturbation temperatures. In this study, the contributions of different parameters and variables to the uncertainty in the calculation of gravity wave potential energy in the lower stratosphere are investigated and quantified. In particular, a Monte Carlo method is used to evaluate the uncertainty that results from different GPS RO temperature error distributions. In addition, our analysis shows that RO data above 30 km height becomes dubious for gravity waves potential energy calculations.
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R., Jr.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
NASA Astrophysics Data System (ADS)
Weihs, Philipp; Staiger, Henning; Tinz, Birger; Batchvarova, Ekaterina; Rieder, Harald; Vuilleumier, Laurent; Maturilli, Marion; Jendritzky, Gerd
2012-05-01
In the present study, we investigate the determination accuracy of the Universal Thermal Climate Index (UTCI). We study especially the UTCI uncertainties due to uncertainties in radiation fluxes, whose impacts on UTCI are evaluated via the mean radiant temperature ( Tmrt). We assume "normal conditions", which means that usual meteorological information and data are available but no special additional measurements. First, the uncertainty arising only from the measurement uncertainties of the meteorological data is determined. Here, simulations show that uncertainties between 0.4 and 2 K due to the uncertainty of just one of the meteorological input parameters may be expected. We then analyse the determination accuracy when not all radiation data are available and modelling of the missing data is required. Since radiative transfer models require a lot of information that is usually not available, we concentrate only on the determination accuracy achievable with empirical models. The simulations show that uncertainties in the calculation of the diffuse irradiance may lead to Tmrt uncertainties of up to ±2.9 K. If long-wave radiation is missing, we may expect an uncertainty of ±2 K. If modelling of diffuse radiation and of longwave radiation is used for the calculation of Tmrt, we may then expect a determination uncertainty of ±3 K. If all radiative fluxes are modelled based on synoptic observation, the uncertainty in Tmrt is ±5.9 K. Because Tmrt is only one of the four input data required in the calculation of UTCI, the uncertainty in UTCI due to the uncertainty in radiation fluxes is less than ±2 K. The UTCI uncertainties due to uncertainties of the four meteorological input values are not larger than the 6 K reference intervals of the UTCI scale, which means that UTCI may only be wrong by one UTCI scale. This uncertainty may, however, be critical at the two temperature extremes, i.e. under extreme hot or extreme cold conditions.
Sources of uncertainty in flood inundation maps
Bales, J.D.; Wagner, C.R.
2009-01-01
Flood inundation maps typically have been used to depict inundated areas for floods having specific exceedance levels. The uncertainty associated with the inundation boundaries is seldom quantified, in part, because all of the sources of uncertainty are not recognized and because data available to quantify uncertainty seldom are available. Sources of uncertainty discussed in this paper include hydrologic data used for hydraulic model development and validation, topographic data, and the hydraulic model. The assumption of steady flow, which typically is made to produce inundation maps, has less of an effect on predicted inundation at lower flows than for higher flows because more time typically is required to inundate areas at high flows than at low flows. Difficulties with establishing reasonable cross sections that do not intersect and that represent water-surface slopes in tributaries contribute additional uncertainties in the hydraulic modelling. As a result, uncertainty in the flood inundation polygons simulated with a one-dimensional model increases with distance from the main channel.
Uncertainty of Pyrometers in a Casting Facility
Mee, D.K.; Elkins, J.E.; Fleenor, R.M.; Morrision, J.M.; Sherrill, M.W.; Seiber, L.E.
2001-12-07
This work has established uncertainty limits for the EUO filament pyrometers, digital pyrometers, two-color automatic pyrometers, and the standards used to certify these instruments (Table 1). If symmetrical limits are used, filament pyrometers calibrated in Production have certification uncertainties of not more than {+-}20.5 C traceable to NIST over the certification period. Uncertainties of these pyrometers were roughly {+-}14.7 C before introduction of the working standard that allowed certification in the field. Digital pyrometers addressed in this report have symmetrical uncertainties of not more than {+-}12.7 C or {+-}18.1 C when certified on a Y-12 Standards Laboratory strip lamp or in a production area tube furnace, respectively. Uncertainty estimates for automatic two-color pyrometers certified in Production are {+-}16.7 C. Additional uncertainty and bias are introduced when measuring production melt temperatures. A -19.4 C bias was measured in a large 1987 data set which is believed to be caused primarily by use of Pyrex{trademark} windows (not present in current configuration) and window fogging. Large variability (2{sigma} = 28.6 C) exists in the first 10 m of the hold period. This variability is attributed to emissivity variation across the melt and reflection from hot surfaces. For runs with hold periods extending to 20 m, the uncertainty approaches the calibration uncertainty of the pyrometers. When certifying pyrometers on a strip lamp at the Y-12 Standards Laboratory, it is important to limit ambient temperature variation (23{+-}4 C), to order calibration points from high to low temperatures, to allow 6 m for the lamp to reach thermal equilibrium (12 m for certifications below 1200 C) to minimize pyrometer bias, and to calibrate the pyrometer if error exceeds vendor specifications. A procedure has been written to assure conformance.
Temporal uncertainty of geographical information
NASA Astrophysics Data System (ADS)
Shu, Hong; Qi, Cuihong
2005-10-01
Temporal uncertainty is a crossing point of temporal and error-aware geographical information systems. In Geoinformatics, temporal uncertainty is of the same importance as spatial and thematic uncertainty of geographical information. However, until very recently, the standard organizations of ISO/TC211 and FGDC subsequently claimed that temporal uncertainty is one of geospatial data quality elements. Over the past decades, temporal uncertainty of geographical information is modeled insufficiently. To lay down a foundation of logically or physically modeling temporal uncertainty, this paper is aimed to clarify the semantics of temporal uncertainty to some extent. The general uncertainty is conceptualized with a taxonomy of uncertainty. Semantically, temporal uncertainty is progressively classified into uncertainty of time coordinates, changes, and dynamics. Uncertainty of multidimensional time (valid time, database time, and conceptual time, etc.) has been emphasized. It is realized that time scale (granularity) transition may lead to temporal uncertainty because of missing transition details. It is dialectically concluded that temporal uncertainty is caused by the complexity of the human-machine-earth system.
Uncertainty Modeling for Structural Control Analysis and Synthesis
NASA Technical Reports Server (NTRS)
Campbell, Mark E.; Crawley, Edward F.
1996-01-01
The development of an accurate model of uncertainties for the control of structures that undergo a change in operational environment, based solely on modeling and experimentation in the original environment is studied. The application used throughout this work is the development of an on-orbit uncertainty model based on ground modeling and experimentation. A ground based uncertainty model consisting of mean errors and bounds on critical structural parameters is developed. The uncertainty model is created using multiple data sets to observe all relevant uncertainties in the system. The Discrete Extended Kalman Filter is used as an identification/parameter estimation method for each data set, in addition to providing a covariance matrix which aids in the development of the uncertainty model. Once ground based modal uncertainties have been developed, they are localized to specific degrees of freedom in the form of mass and stiffness uncertainties. Two techniques are presented: a matrix method which develops the mass and stiffness uncertainties in a mathematical manner; and a sensitivity method which assumes a form for the mass and stiffness uncertainties in macroelements and scaling factors. This form allows the derivation of mass and stiffness uncertainties in a more physical manner. The mass and stiffness uncertainties of the ground based system are then mapped onto the on-orbit system, and projected to create an analogous on-orbit uncertainty model in the form of mean errors and bounds on critical parameters. The Middeck Active Control Experiment is introduced as experimental verification for the localization and projection methods developed. In addition, closed loop results from on-orbit operations of the experiment verify the use of the uncertainty model for control analysis and synthesis in space.
On the uncertainties of stellar mass estimates via colour measurements
NASA Astrophysics Data System (ADS)
Roediger, Joel C.; Courteau, Stéphane
2015-09-01
Mass-to-light versus colour relations (MLCRs), derived from stellar population synthesis models, are widely used to estimate galaxy stellar masses (M*), yet a detailed investigation of their inherent biases and limitations is still lacking. We quantify several potential sources of uncertainty, using optical and near-infrared (NIR) photometry for a representative sample of nearby galaxies from the Virgo cluster. Our method for combining multiband photometry with MLCRs yields robust stellar masses, while errors in M* decrease as more bands are simultaneously considered. The prior assumptions in one's stellar population modelling dominate the error budget, creating a colour-dependent bias of up to 0.6 dex if NIR fluxes are used (0.3 dex otherwise). This matches the systematic errors associated with the method of spectral energy distribution (SED) fitting, indicating that MLCRs do not suffer from much additional bias. Moreover, MLCRs and SED fitting yield similar degrees of random error (˜0.1-0.14 dex) when applied to mock galaxies and, on average, equivalent masses for real galaxies with M* ˜ 108-11 M⊙. The use of integrated photometry introduces additional uncertainty in M* measurements, at the level of 0.05-0.07 dex. We argue that using MLCRs, instead of time-consuming SED fits, is justified in cases with complex model parameter spaces (involving, for instance, multiparameter star formation histories) and/or for large data sets. Spatially resolved methods for measuring M* should be applied for small sample sizes and/or when accuracies less than 0.1 dex are required. An appendix provides our MLCR transformations for 10 colour permutations of the grizH filter set.
Uncertainty Quantification Techniques of SCALE/TSUNAMI
Rearden, Bradley T; Mueller, Don
2011-01-01
additional administrative margin to account for gap in the validation data or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.
BICEP2 III: Instrumental systematics
Ade, P. A. R.
2015-11-23
In a companion paper, we have reported a >5σ detection of degree scale B-mode polarization at 150 GHz by the Bicep2 experiment. Here we provide a detailed study of potential instrumental systematic contamination to that measurement. We focus extensively on spurious polarization that can potentially arise from beam imperfections. We present a heuristic classification of beam imperfections according to their symmetries and uniformities, and discuss how resulting contamination adds or cancels in maps that combine observations made at multiple orientations of the telescope about its boresight axis. We introduce a technique, which we call "deprojection," for filtering the leading order beam-induced contamination from time-ordered data, and show that it reduces power in Bicep2's actual and null-test BB spectra consistent with predictions using high signal-to-noise beam shape measurements. We detail the simulation pipeline that we use to directly simulate instrumental systematics and the calibration data used as input to that pipeline. Finally, we present the constraints on BB contamination from individual sources of potential systematics. We find that systematics contribute BB power that is a factor of ~10× below Bicep2's three-year statistical uncertainty, and negligible compared to the observed BB signal. Lastly, the contribution to the best-fit tensor/scalar ratio is at a level equivalent to r = (3–6) × 10^{–3}.
BICEP2 III: Instrumental Systematics
NASA Astrophysics Data System (ADS)
BICEP2 Collaboration; Ade, P. A. R.; Aikin, R. W.; Barkats, D.; Benton, S. J.; Bischoff, C. A.; Bock, J. J.; Brevik, J. A.; Buder, I.; Bullock, E.; Dowell, C. D.; Duband, L.; Filippini, J. P.; Fliescher, S.; Golwala, S. R.; Halpern, M.; Hasselfield, M.; Hildebrandt, S. R.; Hilton, G. C.; Irwin, K. D.; Karkare, K. S.; Kaufman, J. P.; Keating, B. G.; Kernasovskiy, S. A.; Kovac, J. M.; Kuo, C. L.; Leitch, E. M.; Lueker, M.; Netterfield, C. B.; Nguyen, H. T.; O'Brient, R.; Ogburn, R. W., IV; Orlando, A.; Pryke, C.; Richter, S.; Schwarz, R.; Sheehy, C. D.; Staniszewski, Z. K.; Sudiwala, R. V.; Teply, G. P.; Tolan, J. E.; Turner, A. D.; Vieregg, A. G.; Wong, C. L.; Yoon, K. W.
2015-12-01
In a companion paper, we have reported a >5σ detection of degree scale B-mode polarization at 150 GHz by the BICEP2 experiment. Here we provide a detailed study of potential instrumental systematic contamination to that measurement. We focus extensively on spurious polarization that can potentially arise from beam imperfections. We present a heuristic classification of beam imperfections according to their symmetries and uniformities, and discuss how resulting contamination adds or cancels in maps that combine observations made at multiple orientations of the telescope about its boresight axis. We introduce a technique, which we call "deprojection," for filtering the leading order beam-induced contamination from time-ordered data, and show that it reduces power in BICEP2's actual and null-test BB spectra consistent with predictions using high signal-to-noise beam shape measurements. We detail the simulation pipeline that we use to directly simulate instrumental systematics and the calibration data used as input to that pipeline. Finally, we present the constraints on BB contamination from individual sources of potential systematics. We find that systematics contribute BB power that is a factor of ∼10× below BICEP2's three-year statistical uncertainty, and negligible compared to the observed BB signal. The contribution to the best-fit tensor/scalar ratio is at a level equivalent to r = (3-6) × 10-3.
BICEP2 III: Instrumental systematics
Ade, P. A. R.
2015-11-23
In a companion paper, we have reported a >5σ detection of degree scale B-mode polarization at 150 GHz by the Bicep2 experiment. Here we provide a detailed study of potential instrumental systematic contamination to that measurement. We focus extensively on spurious polarization that can potentially arise from beam imperfections. We present a heuristic classification of beam imperfections according to their symmetries and uniformities, and discuss how resulting contamination adds or cancels in maps that combine observations made at multiple orientations of the telescope about its boresight axis. We introduce a technique, which we call "deprojection," for filtering the leading ordermore » beam-induced contamination from time-ordered data, and show that it reduces power in Bicep2's actual and null-test BB spectra consistent with predictions using high signal-to-noise beam shape measurements. We detail the simulation pipeline that we use to directly simulate instrumental systematics and the calibration data used as input to that pipeline. Finally, we present the constraints on BB contamination from individual sources of potential systematics. We find that systematics contribute BB power that is a factor of ~10× below Bicep2's three-year statistical uncertainty, and negligible compared to the observed BB signal. Lastly, the contribution to the best-fit tensor/scalar ratio is at a level equivalent to r = (3–6) × 10–3.« less
Uncertainties in climate stabilization
Wigley, T. M.; Clarke, Leon E.; Edmonds, James A.; Jacoby, H. D.; Paltsev, S.; Pitcher, Hugh M.; Reilly, J. M.; Richels, Richard G.; Sarofim, M. C.; Smith, Steven J.
2009-11-01
We explore the atmospheric composition, temperature and sea level implications of new reference and cost-optimized stabilization emissions scenarios produced using three different Integrated Assessment (IA) models for U.S. Climate Change Science Program (CCSP) Synthesis and Assessment Product 2.1a. We also consider an extension of one of these sets of scenarios out to 2300. Stabilization is defined in terms of radiative forcing targets for the sum of gases potentially controlled under the Kyoto Protocol. For the most stringent stabilization case (“Level 1” with CO2 concentration stabilizing at about 450 ppm), peak CO2 emissions occur close to today, implying a need for immediate CO2 emissions abatement if we wish to stabilize at this level. In the extended reference case, CO2 stabilizes at 1000 ppm in 2200 – but even to achieve this target requires large and rapid CO2 emissions reductions over the 22nd century. Future temperature changes for the Level 1 stabilization case show considerable uncertainty even when a common set of climate model parameters is used (a result of different assumptions for non-Kyoto gases). Uncertainties are about a factor of three when climate sensitivity uncertainties are accounted for. We estimate the probability that warming from pre-industrial times will be less than 2oC to be about 50%. For one of the IA models, warming in the Level 1 case is greater out to 2050 than in the reference case, due to the effect of decreasing SO2 emissions that occur as a side effect of the policy-driven reduction in CO2 emissions. Sea level rise uncertainties for the Level 1 case are very large, with increases ranging from 12 to 100 cm over 2000 to 2300.
Calibration Under Uncertainty.
Swiler, Laura Painton; Trucano, Timothy Guy
2005-03-01
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the %22true%22 deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
Thyroid disrupting chemicals in plastic additives and thyroid health.
Andra, Syam S; Makris, Konstantinos C
2012-01-01
The globally escalating thyroid nodule incidence rates may be only partially ascribed to better diagnostics, allowing for the assessment of environmental risk factors on thyroid disease. Endocrine disruptors or thyroid-disrupting chemicals (TDC) like bisphenol A, phthalates, and polybrominated diphenyl ethers are widely used as plastic additives in consumer products. This comprehensive review studied the magnitude and uncertainty of TDC exposures and their effects on thyroid hormones for sensitive subpopulation groups like pregnant women, infants, and children. Our findings qualitatively suggest the mixed, significant (α = 0.05) TDC associations with natural thyroid hormones (positive or negative sign). Future studies should undertake systematic meta-analyses to elucidate pooled TDC effect estimates on thyroid health indicators and outcomes. PMID:22690712
SU-E-T-573: The Robustness of a Combined Margin Recipe for Uncertainties During Radiotherapy
Stroom, J; Vieira, S; Greco, C
2014-06-01
Purpose: To investigate the variability of a safety margin recipe that combines CTV and PTV margins quadratically, with several tumor, treatment, and user related factors. Methods: Margin recipes were calculated by monte-carlo simulations in 5 steps. 1. A spherical tumor with or without isotropic microscopic was irradiated with a 5 field dose plan2. PTV: Geometric uncertainties were introduced using systematic (Sgeo) and random (sgeo) standard deviations. CTV: Microscopic disease distribution was modelled by semi-gaussian (Smicro) with varying number of islets (Ni)3. For a specific uncertainty set (Sgeo, sgeo, Smicro(Ni)), margins were varied until pre-defined decrease in TCP or dose coverage was fulfilled. 4. First, margin recipes were calculated for each of the three uncertainties separately. CTV and PTV recipes were then combined quadratically to yield a final recipe M(Sgeo, sgeo, Smicro(Ni)).5. The final M was verified by simultaneous simulations of the uncertainties.Now, M has been calculated for various changing parameters like margin criteria, penumbra steepness, islet radio-sensitivity, dose conformity, and number of fractions. We subsequently investigated A: whether the combined recipe still holds in all these situations, and B: what the margin variation was in all these cases. Results: We found that the accuracy of the combined margin recipes remains on average within 1mm for all situations, confirming the correctness of the quadratic addition. Depending on the specific parameter, margin factors could change such that margins change over 50%. Especially margin recipes based on TCP-criteria are more sensitive to more parameters than those based on purely geometric Dmin-criteria. Interestingly, measures taken to minimize treatment field sizes (by e.g. optimizing dose conformity) are counteracted by the requirement of larger margins to get the same tumor coverage. Conclusion: Margin recipes combining geometric and microscopic uncertainties quadratically are
A Guideline for Applying Systematic Reviews to Child Language Intervention
ERIC Educational Resources Information Center
Hargrove, Patricia; Lund, Bonnie; Griffer, Mona
2005-01-01
This article focuses on applying systematic reviews to the Early Intervention (EI) literature. Systematic reviews are defined and differentiated from traditional, or narrative, reviews and from meta-analyses. In addition, the steps involved in critiquing systematic reviews and an illustration of a systematic review from the EI literature are…
Uncertainties in global ocean surface heat flux climatologies derived from ship observations
Gleckler, P.J.; Weare, B.C.
1995-08-01
A methodology to define uncertainties associated with ocean surface heat flux calculations has been developed and applied to a revised version of the Oberhuber global climatology, which utilizes a summary of the COADS surface observations. Systematic and random uncertainties in the net oceanic heat flux and each of its four components at individual grid points and for zonal averages have been estimated for each calendar month and the annual mean. The most important uncertainties of the 2{degree} x 2{degree} grid cell values of each of the heat fluxes are described. Annual mean net shortwave flux random uncertainties associated with errors in estimating cloud cover in the tropics yield total uncertainties which are greater than 25 W m{sup {minus}2}. In the northern latitudes, where the large number of observations substantially reduce the influence of these random errors, the systematic uncertainties in the utilized parameterization are largely responsible for total uncertainties in the shortwave fluxes which usually remain greater than 10 W m{sup {minus}2}. Systematic uncertainties dominate in the zonal means because spatial averaging has led to a further reduction of the random errors. The situation for the annual mean latent heat flux is somewhat different in that even for grid point values the contributions of the systematic uncertainties tend to be larger than those of the random uncertainties at most all latitudes. Latent heat flux uncertainties are greater than 20 W m{sup {minus}2} nearly everywhere south of 40{degree}N, and in excess of 30 W m{sup {minus}2} over broad areas of the subtropics, even those with large numbers of observations. Resulting zonal mean latent heat flux uncertainties are largest ({approximately}30 W m{sup {minus}2}) in the middle latitudes and subtropics and smallest ({approximately}10--25 W m{sup {minus}2}) near the equator and over the northernmost regions.
Uncertainty of the beam energy measurement in the e+e- collision using Compton backscattering
NASA Astrophysics Data System (ADS)
Mo, Xiao-Hu
2014-10-01
The beam energy is measured in the e+e- collision by using Compton backscattering. The uncertainty of this measurement process is studied by virtue of analytical formulas, and the special effects of variant energy spread and energy drift on the systematic uncertainty estimation are also studied with the Monte Carlo sampling technique. These quantitative conclusions are especially important for understanding the uncertainty of the beam energy measurement system.
Using Models that Incorporate Uncertainty
ERIC Educational Resources Information Center
Caulkins, Jonathan P.
2002-01-01
In this article, the author discusses the use in policy analysis of models that incorporate uncertainty. He believes that all models should consider incorporating uncertainty, but that at the same time it is important to understand that sampling variability is not usually the dominant driver of uncertainty in policy analyses. He also argues that…
Reviewing the literature, how systematic is systematic?
MacLure, Katie; Paudyal, Vibhu; Stewart, Derek
2016-06-01
Introduction Professor Archibald Cochrane, after whom the Cochrane Collaboration is named, was influential in promoting evidence-based clinical practice. He called for "relevant, valid research" to underpin all aspects of healthcare. Systematic reviews of the literature are regarded as a high quality source of cumulative evidence but it is unclear how truly systematic they, or other review articles, are or 'how systematic is systematic?' Today's evidence-based review industry is a burgeoning mix of specialist terminology, collaborations and foundations, databases, portals, handbooks, tools, criteria and training courses. Aim of the review This study aims to identify uses and types of reviews, key issues in planning, conducting, reporting and critiquing reviews, and factors which limit claims to be systematic. Method A rapid review of review articles published in IJCP. Results This rapid review identified 17 review articles published in IJCP between 2010 and 2015 inclusive. It explored the use of different types of review article, the variation and widely available range of guidelines, checklists and criteria which, through systematic application, aim to promote best practice. It also identified common pitfalls in endeavouring to conduct reviews of the literature systematically. Discussion Although a limited set of IJCP reviews were identified, there is clear evidence of the variation in adoption and application of systematic methods. The burgeoning evidence industry offers the tools and guidelines required to conduct systematic reviews, and other types of review, systematically. This rapid review was limited to the database of one journal over a period of 6 years. Although this review was conducted systematically, it is not presented as a systematic review. Conclusion As a research community we have yet to fully engage with readily available guidelines and tools which would help to avoid the common pitfalls. Therefore the question remains, of not just IJCP but
Analysis of automated highway system risks and uncertainties. Volume 5
Sicherman, A.
1994-10-01
This volume describes a risk analysis performed to help identify important Automated Highway System (AHS) deployment uncertainties and quantify their effect on costs and benefits for a range of AHS deployment scenarios. The analysis identified a suite of key factors affecting vehicle and roadway costs, capacities and market penetrations for alternative AHS deployment scenarios. A systematic protocol was utilized for obtaining expert judgments of key factor uncertainties in the form of subjective probability percentile assessments. Based on these assessments, probability distributions on vehicle and roadway costs, capacity and market penetration were developed for the different scenarios. The cost/benefit risk methodology and analysis provide insights by showing how uncertainties in key factors translate into uncertainties in summary cost/benefit indices.
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2011-09-01
Verification and validation (V&V) are playing more important roles to quantify uncertainties and realize high fidelity simulations in engineering system analyses, such as transients happened in a complex nuclear reactor system. Traditional V&V in the reactor system analysis focused more on the validation part or did not differentiate verification and validation. The traditional approach to uncertainty quantification is based on a 'black box' approach. The simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also not efficient to perform sensitivity analysis. Contrary to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents the forward sensitivity analysis as a method to help uncertainty qualification. By including time step and potentially spatial step as special sensitivity parameters, the forward sensitivity method is extended as one method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflect global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty qualification. By knowing the relative sensitivity of time and space steps with other interested physical parameters, the simulation is allowed
Picturing Data With Uncertainty
NASA Technical Reports Server (NTRS)
Kao, David; Love, Alison; Dungan, Jennifer L.; Pang, Alex
2004-01-01
NASA is in the business of creating maps for scientific purposes to represent important biophysical or geophysical quantities over space and time. For example, maps of surface temperature over the globe tell scientists where and when the Earth is heating up; regional maps of the greenness of vegetation tell scientists where and when plants are photosynthesizing. There is always uncertainty associated with each value in any such map due to various factors. When uncertainty is fully modeled, instead of a single value at each map location, there is a distribution expressing a set of possible outcomes at each location. We consider such distribution data as multi-valued data since it consists of a collection of values about a single variable. Thus, a multi-valued data represents both the map and its uncertainty. We have been working on ways to visualize spatial multi-valued data sets effectively for fields with regularly spaced units or grid cells such as those in NASA's Earth science applications. A new way to display distributions at multiple grid locations is to project the distributions from an individual row, column or other user-selectable straight transect from the 2D domain. First at each grid cell in a given slice (row, column or transect), we compute a smooth density estimate from the underlying data. Such a density estimate for the probability density function (PDF) is generally more useful than a histogram, which is a classic density estimate. Then, the collection of PDFs along a given slice are presented vertically above the slice and form a wall. To minimize occlusion of intersecting slices, the corresponding walls are positioned at the far edges of the boundary. The PDF wall depicts the shapes of the distributions very dearly since peaks represent the modes (or bumps) in the PDFs. We've defined roughness as the number of peaks in the distribution. Roughness is another useful summary information for multimodal distributions. The uncertainty of the multi
Satellite altitude determination uncertainties
NASA Technical Reports Server (NTRS)
Siry, J. W.
1972-01-01
Satellite altitude determination uncertainties will be discussed from the standpoint of the GEOS-C satellite, from the longer range viewpoint afforded by the Geopause concept. Data are focused on methods for short-arc tracking which are essentially geometric in nature. One uses combinations of lasers and collocated cameras. The other method relies only on lasers, using three or more to obtain the position fix. Two typical locales are looked at, the Caribbean area, and a region associated with tracking sites at Goddard, Bermuda and Canada which encompasses a portion of the Gulf Stream in which meanders develop.
Addressing uncertainty in adaptation planning for agriculture
Vermeulen, Sonja J.; Challinor, Andrew J.; Thornton, Philip K.; Campbell, Bruce M.; Eriyagama, Nishadi; Vervoort, Joost M.; Kinyangi, James; Jarvis, Andy; Läderach, Peter; Ramirez-Villegas, Julian; Nicklin, Kathryn J.; Hawkins, Ed; Smith, Daniel R.
2013-01-01
We present a framework for prioritizing adaptation approaches at a range of timeframes. The framework is illustrated by four case studies from developing countries, each with associated characterization of uncertainty. Two cases on near-term adaptation planning in Sri Lanka and on stakeholder scenario exercises in East Africa show how the relative utility of capacity vs. impact approaches to adaptation planning differ with level of uncertainty and associated lead time. An additional two cases demonstrate that it is possible to identify uncertainties that are relevant to decision making in specific timeframes and circumstances. The case on coffee in Latin America identifies altitudinal thresholds at which incremental vs. transformative adaptation pathways are robust options. The final case uses three crop–climate simulation studies to demonstrate how uncertainty can be characterized at different time horizons to discriminate where robust adaptation options are possible. We find that impact approaches, which use predictive models, are increasingly useful over longer lead times and at higher levels of greenhouse gas emissions. We also find that extreme events are important in determining predictability across a broad range of timescales. The results demonstrate the potential for robust knowledge and actions in the face of uncertainty. PMID:23674681
What's new in atopic eczema? An analysis of systematic reviews published in 2008 and 2009.
Batchelor, J M; Grindlay, D J C; Williams, H C
2010-12-01
This review summarizes clinically important findings from nine systematic reviews of the causes, treatment and prevention of atopic eczema (AE) published between August 2008 and August 2009. Two systematic reviews concluded that there is a strong and consistent association between filaggrin (FLG) mutations and development of eczema. The associations between FLG mutations and atopic sensitization, rhinitis and asthma are weaker than between FLG mutations and eczema, especially if those who also have eczema are excluded. The relationship between transforming growth factor levels in breast milk and eczema development is still unclear. A further systematic review found no strong evidence of a protective effect of exclusive breastfeeding for at least 3 months against eczema, even in those with a positive family history of atopy. Based on a systematic review and meta-analysis of six randomized controlled trials, supplementation with omega-3 and omega-6 oils is unlikely to play an important role in the primary prevention of eczema or allergic diseases in general. There is little evidence to support dietary restrictions of certain foods in unselected children with AE. There is also little evidence to suggest a clinically useful benefit from using probiotics in patients with established eczema. A systematic review of topical pimecrolimus and tacrolimus added little additional information to previous reviews, and did not provide any new data on long-term safety. Both of these drugs work in AE, and may reduce flares and usage of topical corticosteroids; however, there is still uncertainty about how they compare with topical corticosteroids. PMID:20649899
Flight Departure Delay and Rerouting Under Uncertainty in En Route Convective Weather
NASA Technical Reports Server (NTRS)
Mukherjee, Avijit; Grabbe, Shon; Sridhar, Banavar
2011-01-01
Delays caused by uncertainty in weather forecasts can be reduced by improving traffic flow management decisions. This paper presents a methodology for traffic flow management under uncertainty in convective weather forecasts. An algorithm for assigning departure delays and reroutes to aircraft is presented. Departure delay and route assignment are executed at multiple stages, during which, updated weather forecasts and flight schedules are used. At each stage, weather forecasts up to a certain look-ahead time are treated as deterministic and flight scheduling is done to mitigate the impact of weather on four-dimensional flight trajectories. Uncertainty in weather forecasts during departure scheduling results in tactical airborne holding of flights. The amount of airborne holding depends on the accuracy of forecasts as well as the look-ahead time included in the departure scheduling. The weather forecast look-ahead time is varied systematically within the experiments performed in this paper to analyze its effect on flight delays. Based on the results, longer look-ahead times cause higher departure delays and additional flying time due to reroutes. However, the amount of airborne holding necessary to prevent weather incursions reduces when the forecast look-ahead times are higher. For the chosen day of traffic and weather, setting the look-ahead time to 90 minutes yields the lowest total delay cost.
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2013-01-01
This paper presents the extended forward sensitivity analysis as a method to help uncertainty qualification. By including time step and potentially spatial step as special sensitivity parameters, the forward sensitivity method is extended as one method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflect global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty qualification. By knowing the relative sensitivity of time and space steps with other interested physical parameters, the simulation is allowed to run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results. The time and space steps forward sensitivity analysis method can also replace the traditional time step and grid convergence study with much less computational cost. Two well-defined benchmark problems with manufactured solutions are utilized to demonstrate the method.
Using data assimilation for systematic model improvement
NASA Astrophysics Data System (ADS)
Lang, Matthew S.; van Leeuwen, Peter Jan; Browne, Phil
2016-04-01
In Numerical Weather Prediction parameterisations are used to simulate missing physics in the model. These can be due to a lack of scientific understanding or a lack of computing power available to address all the known physical processes. Parameterisations are sources of large uncertainty in a model as parameter values used in these parameterisations cannot be measured directly and hence are often not well known, and the parameterisations themselves are approximations of the processes present in the true atmosphere. Whilst there are many efficient and effective methods for combined state/parameter estimation in data assimilation, such as state augmentation, these are not effective at estimating the structure of parameterisations. A new method of parameterisation estimation is proposed that uses sequential data assimilation methods to estimate errors in the numerical models at each space-time point for each model equation. These errors are then fitted to predetermined functional forms of missing physics or parameterisations, that are based upon prior information. The method picks out the functional form, or that combination of functional forms, that bests fits the error structure. The prior information typically takes the form of expert knowledge. We applied the method to a one-dimensional advection model with additive model error, and it is shown that the method can accurately estimate parameterisations, with consistent error estimates. It is also demonstrated that state augmentation is not successful. The results indicate that this new method is a powerful tool in systematic model improvement.
Evaluation of measurement uncertainty of glucose in clinical chemistry.
Berçik Inal, B; Koldas, M; Inal, H; Coskun, C; Gümüs, A; Döventas, Y
2007-04-01
The definition of the uncertainty of measurement used in the International Vocabulary of Basic and General Terms in Metrology (VIM) is a parameter associated with the result of a measurement, which characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty of measurement comprises many components. In addition to every parameter, the measurement uncertainty is that a value should be given by all institutions that have been accredited. This value shows reliability of the measurement. GUM, published by NIST, contains uncertainty directions. Eurachem/CITAC Guide CG4 was also published by Eurachem/CITAC Working Group in the year 2000. Both of them offer a mathematical model, for uncertainty can be calculated. There are two types of uncertainty in measurement. Type A is the evaluation of uncertainty through the statistical analysis and type B is the evaluation of uncertainty through other means, for example, certificate reference material. Eurachem Guide uses four types of distribution functions: (1) rectangular distribution that gives limits without specifying a level of confidence (u(x)=a/ radical3) to a certificate; (2) triangular distribution that values near to the same point (u(x)=a/ radical6); (3) normal distribution in which an uncertainty is given in the form of a standard deviation s, a relative standard deviation s/ radicaln, or a coefficient of variance CV% without specifying the distribution (a = certificate value, u = standard uncertainty); and (4) confidence interval. PMID:17460183
Uncertainty, entropy, and non-Gaussianity for mixed states
NASA Astrophysics Data System (ADS)
Mandilara, Aikaterini; Karpov, Evgueni; Cerf, Nicolas J.
2010-06-01
In the space of mixed states the Schrödinger-Robertson uncertainty relation holds though it can never be saturated. Two tight extensions of this relation in the space of mixed states exist; one proposed by Dodonov and Man'ko, where the lower limit on the uncertainty depends on the purity of the state, and another where the uncertainty is bounded by the von Neumann entropy of the state proposed by Bastiaans. Driven by the needs that have emerged in the field of quantum information, in a recent work we have extended the puritybounded uncertainty relation by adding an additional parameter characterizing the state, namely its degree of non-Gaussianity. In this work we alternatively present a extension of the entropy-bounded uncertainty relation. The common points and differences between the two extensions of the uncertainty relation help us to draw more general conclusions concerning the bounds on the non-Gaussianity of mixed states.
NASA Astrophysics Data System (ADS)
Hobson, Art
2011-10-01
An earlier paper2 introduces quantum physics by means of four experiments: Youngs double-slit interference experiment using (1) a light beam, (2) a low-intensity light beam with time-lapse photography, (3) an electron beam, and (4) a low-intensity electron beam with time-lapse photography. It's ironic that, although these experiments demonstrate most of the quantum fundamentals, conventional pedagogy stresses their difficult and paradoxical nature. These paradoxes (i.e., logical contradictions) vanish, and understanding becomes simpler, if one takes seriously the fact that quantum mechanics is the nonrelativistic limit of our most accurate physical theory, namely quantum field theory, and treats the Schroedinger wave function, as well as the electromagnetic field, as quantized fields.2 Both the Schroedinger field, or "matter field," and the EM field are made of "quanta"—spatially extended but energetically discrete chunks or bundles of energy. Each quantum comes nonlocally from the entire space-filling field and interacts with macroscopic systems such as the viewing screen by collapsing into an atom instantaneously and randomly in accordance with the probability amplitude specified by the field. Thus, uncertainty and nonlocality are inherent in quantum physics. This paper is about quantum uncertainty. A planned later paper will take up quantum nonlocality.
The maintenance of uncertainty
NASA Astrophysics Data System (ADS)
Smith, L. A.
Introduction Preliminaries State-space dynamics Linearized dynamics of infinitesimal uncertainties Instantaneous infinitesimal dynamics Finite-time evolution of infinitesimal uncertainties Lyapunov exponents and predictability The Baker's apprentice map Infinitesimals and predictability Dimensions The Grassberger-Procaccia algorithm Towards a better estimate from Takens' estimators Space-time-separation diagrams Intrinsic limits to the analysis of geometry Takens' theorem The method of delays Noise Prediction, prophecy, and pontification Introduction Simulations, models and physics Ground rules Data-based models: dynamic reconstructions Analogue prediction Local prediction Global prediction Accountable forecasts of chaotic systems Evaluating ensemble forecasts The annulus Prophecies Aids for more reliable nonlinear analysis Significant results: surrogate data, synthetic data and self-deception Surrogate data and the bootstrap Surrogate predictors: Is my model any good? Hints for the evaluation of new techniques Avoiding simple straw men Feasibility tests for the identification of chaos On detecting "tiny" data sets Building models consistent with the observations Cost functions ι-shadowing: Is my model any good? (reprise) Casting infinitely long shadows (out-of-sample) Distinguishing model error and system sensitivity Forecast error and model sensitivity Accountability Residual predictability Deterministic or stochastic dynamics? Using ensembles to distinguish the expectation from the expected Numerical Weather Prediction Probabilistic prediction with a deterministic model The analysis Constructing and interpreting ensembles The outlook(s) for today Conclusion Summary
Antarctic Photochemistry: Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Stewart, Richard W.; McConnell, Joseph R.
1999-01-01
Understanding the photochemistry of the Antarctic region is important for several reasons. Analysis of ice cores provides historical information on several species such as hydrogen peroxide and sulfur-bearing compounds. The former can potentially provide information on the history of oxidants in the troposphere and the latter may shed light on DMS-climate relationships. Extracting such information requires that we be able to model the photochemistry of the Antarctic troposphere and relate atmospheric concentrations to deposition rates and sequestration in the polar ice. This paper deals with one aspect of the uncertainty inherent in photochemical models of the high latitude troposphere: that arising from imprecision in the kinetic data used in the calculations. Such uncertainties in Antarctic models tend to be larger than those in models of mid to low latitude clean air. One reason is the lower temperatures which result in increased imprecision in kinetic data, assumed to be best characterized at 298K. Another is the inclusion of a DMS oxidation scheme in the present model. Many of the rates in this scheme are less precisely known than are rates in the standard chemistry used in many stratospheric and tropospheric models.
Uncertainty in adaptive capacity
NASA Astrophysics Data System (ADS)
Adger, W. Neil; Vincent, Katharine
2005-03-01
The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, and literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. To cite this article: W.N. Adger, K. Vincent, C. R. Geoscience 337 (2005).
Probabilistic Mass Growth Uncertainties
NASA Technical Reports Server (NTRS)
Plumer, Eric; Elliott, Darren
2013-01-01
Mass has been widely used as a variable input parameter for Cost Estimating Relationships (CER) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, hence adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of a NASA on-going effort to publish mass growth datasheet for adjusting single-point Technical Baseline Estimates (TBE) of masses of space instruments as well as spacecraft, for both earth orbiting and deep space missions at various stages of a project's lifecycle. This paper will also discusses the long term strategy of NASA Headquarters in publishing similar results, using a variety of cost driving metrics, on an annual basis. This paper provides quantitative results that show decreasing mass growth uncertainties as mass estimate maturity increases. This paper's analysis is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.
Systematic Alternatives to Proposal Preparation.
ERIC Educational Resources Information Center
Knirk, Frederick G.; And Others
Educators who have to develop proposals must be concerned with making effective decisions. This paper discusses a number of educational systems management tools which can be used to reduce the time and effort in developing a proposal. In addition, ways are introduced to systematically increase the quality of the proposal through the development of…
Earthquake Loss Estimation Uncertainties
NASA Astrophysics Data System (ADS)
Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Aleksander
2013-04-01
The paper addresses the reliability issues of strong earthquakes loss assessment following strong earthquakes with worldwide Systems' application in emergency mode. Timely and correct action just after an event can result in significant benefits in saving lives. In this case the information about possible damage and expected number of casualties is very critical for taking decision about search, rescue operations and offering humanitarian assistance. Such rough information may be provided by, first of all, global systems, in emergency mode. The experience of earthquakes disasters in different earthquake-prone countries shows that the officials who are in charge of emergency response at national and international levels are often lacking prompt and reliable information on the disaster scope. Uncertainties on the parameters used in the estimation process are numerous and large: knowledge about physical phenomena and uncertainties on the parameters used to describe them; global adequacy of modeling techniques to the actual physical phenomena; actual distribution of population at risk at the very time of the shaking (with respect to immediate threat: buildings or the like); knowledge about the source of shaking, etc. Needless to be a sharp specialist to understand, for example, that the way a given building responds to a given shaking obeys mechanical laws which are poorly known (if not out of the reach of engineers for a large portion of the building stock); if a carefully engineered modern building is approximately predictable, this is far not the case for older buildings which make up the bulk of inhabited buildings. The way population, inside the buildings at the time of shaking, is affected by the physical damage caused to the buildings is not precisely known, by far. The paper analyzes the influence of uncertainties in strong event parameters determination by Alert Seismological Surveys, of simulation models used at all stages from, estimating shaking intensity
ENHANCED UNCERTAINTY ANALYSIS FOR SRS COMPOSITE ANALYSIS
Smith, F.; Phifer, M.
2011-06-30
The Composite Analysis (CA) performed for the Savannah River Site (SRS) in 2009 (SRS CA 2009) included a simplified uncertainty analysis. The uncertainty analysis in the CA (Smith et al. 2009b) was limited to considering at most five sources in a separate uncertainty calculation performed for each POA. To perform the uncertainty calculations in a reasonable amount of time, the analysis was limited to using 400 realizations, 2,000 years of simulated transport time, and the time steps used for the uncertainty analysis were increased from what was used in the CA base case analysis. As part of the CA maintenance plan, the Savannah River National Laboratory (SRNL) committed to improving the CA uncertainty/sensitivity analysis. The previous uncertainty analysis was constrained by the standard GoldSim licensing which limits the user to running at most four Monte Carlo uncertainty calculations (also called realizations) simultaneously. Some of the limitations on the number of realizations that could be practically run and the simulation time steps were removed by building a cluster of three HP Proliant windows servers with a total of 36 64-bit processors and by licensing the GoldSim DP-Plus distributed processing software. This allowed running as many as 35 realizations simultaneously (one processor is reserved as a master process that controls running the realizations). These enhancements to SRNL computing capabilities made uncertainty analysis: using 1000 realizations, using the time steps employed in the base case CA calculations, with more sources, and simulating radionuclide transport for 10,000 years feasible. In addition, an importance screening analysis was performed to identify the class of stochastic variables that have the most significant impact on model uncertainty. This analysis ran the uncertainty model separately testing the response to variations in the following five sets of model parameters: (a) K{sub d} values (72 parameters for the 36 CA elements in
Uncertainty relation in Schwarzschild spacetime
NASA Astrophysics Data System (ADS)
Feng, Jun; Zhang, Yao-Zhong; Gould, Mark D.; Fan, Heng
2015-04-01
We explore the entropic uncertainty relation in the curved background outside a Schwarzschild black hole, and find that Hawking radiation introduces a nontrivial modification on the uncertainty bound for particular observer, therefore it could be witnessed by proper uncertainty game experimentally. We first investigate an uncertainty game between a free falling observer and his static partner holding a quantum memory initially entangled with the quantum system to be measured. Due to the information loss from Hawking decoherence, we find an inevitable increase of the uncertainty on the outcome of measurements in the view of static observer, which is dependent on the mass of the black hole, the distance of observer from event horizon, and the mode frequency of quantum memory. To illustrate the generality of this paradigm, we relate the entropic uncertainty bound with other uncertainty probe, e.g., time-energy uncertainty. In an alternative game between two static players, we show that quantum information of qubit can be transferred to quantum memory through a bath of fluctuating quantum fields outside the black hole. For a particular choice of initial state, we show that the Hawking decoherence cannot counteract entanglement generation after the dynamical evolution of system, which triggers an effectively reduced uncertainty bound that violates the intrinsic limit -log2 c. Numerically estimation for a proper choice of initial state shows that our result is comparable with possible real experiments. Finally, a discussion on the black hole firewall paradox in the context of entropic uncertainty relation is given.
Summary of long-baseline systematics session at CETUP*2014
Cherdack, Daniel; Worcester, Elizabeth
2015-10-15
A session studying systematics in long-baseline neutrino oscillation physics was held July 14-18, 2014 as part of CETUP* 2014. Systematic effects from flux normalization and modeling, modeling of cross sections and nuclear interactions, and far detector effects were addressed. Experts presented the capabilities of existing and planned tools. A program of study to determine estimates of and requirements for the size of these effects was designed. This document summarizes the results of the CETUP* systematics workshop and the current status of systematic uncertainty studies in long-baseline neutrino oscillation measurements.
Satellite altitude determination uncertainties
NASA Technical Reports Server (NTRS)
Siry, J. W.
1971-01-01
Satellite altitude determination uncertainties are discussed from the standpoint of the GEOS-C satellite. GEOS-C will be tracked by a number of the conventional satellite tracking systems, as well as by two advanced systems; a satellite-to-satellite tracking system and lasers capable of decimeter accuracies which are being developed in connection with the Goddard Earth and Ocean Dynamics Applications program. The discussion is organized in terms of a specific type of GEOS-C orbit which would satisfy a number of scientific objectives including the study of the gravitational field by means of both the altimeter and the satellite-to-satellite tracking system, studies of tides, and the Gulf Stream meanders.
NASA Astrophysics Data System (ADS)
Petzinger, Tom
I am trying to make money in the biotech industry from complexity science. And I am doing it with inspiration that I picked up on the edge of Appalachia spending time with June Holley and ACEnet when I was a Wall Street Journal reporter. I took some of those ideas to Pittsburgh, in biotechnology, in a completely private setting with an economic development focus, but also with a mission t o return profit to private capital. And we are doing that. I submit as a hypothesis, something we are figuring out in the post- industrial era, that business evolves. It is not the definition of business, but business critically involves the design of systems in which uncertainty is treated as a certainty. That is what I have seen and what I have tried to put into practice.
Rau, N.; Fong, C.C.; Grigg, C.H.; Silverstein, B.
1994-11-01
In the electric utility industry, only one thing can be guaranteed with absolute certainty: one lives and works with many unknowns. Thus, the industry has embraced probability methods to varying degrees over the last 25 years. These techniques aid decision makers in planning, operations, and maintenance by quantifying uncertainty. Examples include power system reliability, production costing simulation, and assessment of environmental factors. A series of brainstorming sessions was conducted by the Application of Probability Methods (APM) Subcommittee of the IEEE Power Engineering Society to identify research and development needs and to ask the question, ''where should we go from here '' The subcommittee examined areas of need in data development, applications, and methods for decision making. The purpose of this article is to share the thoughts of APM members with a broader audience to the findings and to invite comments and participation.
Minamino, Akihiro
2015-05-15
The Hyper-Kamiokande (Hyper-K) detector is a next generation underground water Chrenkov detector. The J-PARC to Hyper-K experiment has good potential for precision measurements of neutrino oscillation parameters and discovery reach for CP violation in the lepton sector. With a total exposure of 10 years to a neutrino beam produced by the 750 kW J-PARC proton synchrotron, it is expected that the CP phase δ can be determined to better than 18 degree for all possible values of δ if sin{sup 2} 2θ{sub 13} > 0.03 and the mass hierarchy is known. Control of systematic uncertainties is critical to make maximum use of the Hyper-K potential. Based on learning from T2K experience, a strategy to reduce systematic uncertainties in J-PARC/Hyper-K are developed.
ACCOUNTING FOR CALIBRATION UNCERTAINTIES IN X-RAY ANALYSIS: EFFECTIVE AREAS IN SPECTRAL FITTING
Lee, Hyunsook; Kashyap, Vinay L.; Drake, Jeremy J.; Ratzlaff, Pete; Siemiginowska, Aneta E-mail: vkashyap@cfa.harvard.edu E-mail: rpete@head.cfa.harvard.edu
2011-04-20
While considerable advance has been made to account for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have been generally ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here, we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with a Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.
Sensitivity and Uncertainty Analysis to Burn-up Estimates on ADS Using ACAB Code
Cabellos, O; Sanz, J; Rodriguez, A; Gonzalez, E; Embid, M; Alvarez, F; Reyes, S
2005-02-11
Within the scope of the Accelerator Driven System (ADS) concept for nuclear waste management applications, the burnup uncertainty estimates due to uncertainty in the activation cross sections (XSs) are important regarding both the safety and the efficiency of the waste burning process. We have applied both sensitivity analysis and Monte Carlo methodology to actinides burnup calculations in a lead-bismuth cooled subcritical ADS. The sensitivity analysis is used to identify the reaction XSs and the dominant chains that contribute most significantly to the uncertainty. The Monte Carlo methodology gives the burnup uncertainty estimates due to the synergetic/global effect of the complete set of XS uncertainties. These uncertainty estimates are valuable to assess the need of any experimental or systematic reevaluation of some uncertainty XSs for ADS.
Sensitivity and Uncertainty Analysis to Burnup Estimates on ADS using the ACAB Code
Cabellos, O.; Sanz, J.; Rodriguez, A.; Gonzalez, E.; Embid, M.; Alvarez, F.; Reyes, S.
2005-05-24
Within the scope of the Accelerator Driven System (ADS) concept for nuclear waste management applications, the burnup uncertainty estimates due to uncertainty in the activation cross sections (XSs) are important regarding both the safety and the efficiency of the waste burning process. We have applied both sensitivity analysis and Monte Carlo methodology to actinides burnup calculations in a lead-bismuth cooled subcritical ADS. The sensitivity analysis is used to identify the reaction XSs and the dominant chains that contribute most significantly to the uncertainty. The Monte Carlo methodology gives the burnup uncertainty estimates due to the synergetic/global effect of the complete set of XS uncertainties. These uncertainty estimates are valuable to assess the need of any experimental or systematic re-evaluation of some uncertainty XSs for ADS.
Supporting qualified database for uncertainty evaluation
Petruzzi, A.; Fiori, F.; Kovtonyuk, A.; D'Auria, F.
2012-07-01
Uncertainty evaluation constitutes a key feature of BEPU (Best Estimate Plus Uncertainty) process. The uncertainty can be the result of a Monte Carlo type analysis involving input uncertainty parameters or the outcome of a process involving the use of experimental data and connected code calculations. Those uncertainty methods are discussed in several papers and guidelines (IAEA-SRS-52, OECD/NEA BEMUSE reports). The present paper aims at discussing the role and the depth of the analysis required for merging from one side suitable experimental data and on the other side qualified code calculation results. This aspect is mostly connected with the second approach for uncertainty mentioned above, but it can be used also in the framework of the first approach. Namely, the paper discusses the features and structure of the database that includes the following kinds of documents: 1. The' RDS-facility' (Reference Data Set for the selected facility): this includes the description of the facility, the geometrical characterization of any component of the facility, the instrumentations, the data acquisition system, the evaluation of pressure losses, the physical properties of the material and the characterization of pumps, valves and heat losses; 2. The 'RDS-test' (Reference Data Set for the selected test of the facility): this includes the description of the main phenomena investigated during the test, the configuration of the facility for the selected test (possible new evaluation of pressure and heat losses if needed) and the specific boundary and initial conditions; 3. The 'QR' (Qualification Report) of the code calculation results: this includes the description of the nodalization developed following a set of homogeneous techniques, the achievement of the steady state conditions and the qualitative and quantitative analysis of the transient with the characterization of the Relevant Thermal-Hydraulics Aspects (RTA); 4. The EH (Engineering Handbook) of the input nodalization
NASA Astrophysics Data System (ADS)
Jordan, Michelle
Uncertainty is ubiquitous in life, and learning is an activity particularly likely to be fraught with uncertainty. Previous research suggests that students and teachers struggle in their attempts to manage the psychological experience of uncertainty and that students often fail to experience uncertainty when uncertainty may be warranted. Yet, few educational researchers have explicitly and systematically observed what students do, their behaviors and strategies, as they attempt to manage the uncertainty they experience during academic tasks. In this study I investigated how students in one fifth grade class managed uncertainty they experienced while engaged in collaborative robotics engineering projects, focusing particularly on how uncertainty management was influenced by task structure and students' interactions with their peer collaborators. The study was initiated at the beginning of instruction related to robotics engineering and preceded through the completion of several long-term collaborative robotics projects, one of which was a design project. I relied primarily on naturalistic observation of group sessions, semi-structured interviews, and collection of artifacts. My data analysis was inductive and interpretive, using qualitative discourse analysis techniques and methods of grounded theory. Three theoretical frameworks influenced the conception and design of this study: community of practice, distributed cognition, and complex adaptive systems theory. Uncertainty was a pervasive experience for the students collaborating in this instructional context. Students experienced uncertainty related to the project activity and uncertainty related to the social system as they collaborated to fulfill the requirements of their robotics engineering projects. They managed their uncertainty through a diverse set of tactics for reducing, ignoring, maintaining, and increasing uncertainty. Students experienced uncertainty from more different sources and used more and
Impact of discharge data uncertainty on nutrient load uncertainty
NASA Astrophysics Data System (ADS)
Westerberg, Ida; Gustavsson, Hanna; Sonesten, Lars
2016-04-01
Uncertainty in the rating-curve model of the stage-discharge relationship leads to uncertainty in discharge time series. These uncertainties in turn affect many other analyses based on discharge data, such as nutrient load estimations. It is important to understand how large the impact of discharge data uncertainty is on such analyses, since they are often used as the basis to take important environmental management decisions. In the Baltic Sea basin, nutrient load estimates from river mouths are a central information basis for managing and reducing eutrophication in the Baltic Sea. In this study we investigated rating curve uncertainty and its propagation to discharge data uncertainty and thereafter to uncertainty in the load of phosphorous and nitrogen for twelve Swedish river mouths. We estimated rating curve uncertainty using the Voting Point method, which accounts for random and epistemic errors in the stage-discharge relation and allows drawing multiple rating-curve realisations consistent with the total uncertainty. We sampled 40,000 rating curves, and for each sampled curve we calculated a discharge time series from 15-minute water level data for the period 2005-2014. Each discharge time series was then aggregated to daily scale and used to calculate the load of phosphorous and nitrogen from linearly interpolated monthly water samples, following the currently used methodology for load estimation. Finally the yearly load estimates were calculated and we thus obtained distributions with 40,000 load realisations per year - one for each rating curve. We analysed how the rating curve uncertainty propagated to the discharge time series at different temporal resolutions, and its impact on the yearly load estimates. Two shorter periods of daily water quality sampling around the spring flood peak allowed a comparison of load uncertainty magnitudes resulting from discharge data with those resulting from the monthly water quality sampling.
Effects of Upstream Turbulence on Measurement Uncertainty of Flow Rate by Venturi
NASA Astrophysics Data System (ADS)
Lee, Jungho; Yoon, Seok Ho; Yu, Cheong-Hwan; Park, Sang-Jin; Chung, Chang-Hwan
2010-06-01
Venturi has been widely used for measuring flow rate in a variety of engineering applications since pressure loss is relatively small compared with other measuring method. The current study focuses on making detailed estimation of measured uncertainties as the upstream turbulence affects uncertainty levels of the water flows in the closed-loop testing. Upstream turbulences can be controlled by selecting 9 different swirl generators. Measurement uncertainty of flow rate has been estimated by a quantitative uncertainty analysis which is based on the ANSI/ASME PTC 19.1-2005 standard. The best way to reduce error in measuring flow rate was investigated for evaluating its measurement uncertainty. The results of flow rate uncertainty analysis show that the case with systematic error has higher than that without systematic error. Especially the result with systematic error exhibits that the uncertainty of flow rate was gradually increased by upstream turbulence. Uncertainty of flow rate measurement can be mainly affected by differential pressure and discharge coefficient. Flow disturbance can be also reduced by increasing of the upstream straight length of Venturi.
Uncertainties in the deprojection of the observed bar properties
Zou, Yanfei; Shen, Juntai; Li, Zhao-Yu
2014-08-10
In observations, it is important to deproject the two fundamental quantities characterizing a bar, i.e., its length (a) and ellipticity (e), to face-on values before any careful analyses. However, systematic estimation on the uncertainties of the commonly used deprojection methods is still lacking. Simulated galaxies are well suited in this study. We project two simulated barred galaxies onto a two-dimensional (2D) plane with different bar orientations and disk inclination angles (i). Bar properties are measured and deprojected with the popular deprojection methods in the literature. Generally speaking, deprojection uncertainties increase with increasing i. All of the deprojection methods behave badly when i is larger than 60°, due to the vertical thickness of the bar. Thus, future statistical studies of barred galaxies should exclude galaxies more inclined than 60°. At moderate inclination angles (i ≤ 60°), 2D deprojection methods (analytical and image stretching), and Fourier-based methods (Fourier decomposition and bar-interbar contrast) perform reasonably well with uncertainties ∼10% in both the bar length and ellipticity, whereas the uncertainties of the one-dimensional (1D) analytical deprojection can be as high as 100% in certain extreme cases. We find that different bar measurement methods show systematic differences in the deprojection uncertainties. We further discuss the deprojection uncertainty factors with the emphasis on the most important one, i.e., the three-dimensional structure of the bar itself. We construct two triaxial toy bar models that can qualitatively reproduce the results of the 1D and 2D analytical deprojections; they confirm that the vertical thickness of the bar is the main source of uncertainties.
A review of uncertainty visualization within the IPCC reports
NASA Astrophysics Data System (ADS)
Nocke, Thomas; Reusser, Dominik; Wrobel, Markus
2015-04-01
Results derived from climate model simulations confront non-expert users with a variety of uncertainties. This gives rise to the challenge that the scientific information must be communicated such that it can be easily understood, however, the complexity of the science behind is still incorporated. With respect to the assessment reports of the IPCC, the situation is even more complicated, because heterogeneous sources and multiple types of uncertainties need to be compiled together. Within this work, we systematically (1) analyzed the visual representation of uncertainties in the IPCC AR4 and AR5 reports, and (2) executed a questionnaire to evaluate how different user groups such as decision-makers and teachers understand these uncertainty visualizations. Within the first step, we classified visual uncertainty metaphors for spatial, temporal and abstract representations. As a result, we clearly identified a high complexity of the IPCC visualizations compared to standard presentation graphics, sometimes even integrating two or more uncertainty classes / measures together with the "certain" (mean) information. Further we identified complex written uncertainty explanations within image captions even within the "summary reports for policy makers". In the second step, based on these observations, we designed a questionnaire to investigate how non-climate experts understand these visual representations of uncertainties, how visual uncertainty coding might hinder the perception of the "non-uncertain" data, and if alternatives for certain IPCC visualizations exist. Within the talk/poster, we will present first results from this questionnaire. Summarizing, we identified a clear trend towards complex images within the latest IPCC reports, with a tendency to incorporate as much as possible information into the visual representations, resulting in proprietary, non-standard graphic representations that are not necessarily easy to comprehend on one glimpse. We conclude that
Uncertainty and Anticipation in Anxiety
Grupe, Dan W.; Nitschke, Jack B.
2014-01-01
Uncertainty about a possible future threat disrupts our ability to avoid it or to mitigate its negative impact, and thus results in anxiety. Here, we focus the broad literature on the neurobiology of anxiety through the lens of uncertainty. We identify five processes essential for adaptive anticipatory responses to future threat uncertainty, and propose that alterations to the neural instantiation of these processes results in maladaptive responses to uncertainty in pathological anxiety. This framework has the potential to advance the classification, diagnosis, and treatment of clinical anxiety. PMID:23783199
Assessing Groundwater Model Uncertainty for the Central Nevada Test Area
Greg Pohll; Karl Pohlmann; Ahmed Hassan; Jenny Chapman; Todd Mihevc
2002-06-14
The purpose of this study is to quantify the flow and transport model uncertainty for the Central Nevada Test Area (CNTA). Six parameters were identified as uncertain, including the specified head boundary conditions used in the flow model, the spatial distribution of the underlying welded tuff unit, effective porosity, sorption coefficients, matrix diffusion coefficient, and the geochemical release function which describes nuclear glass dissolution. The parameter uncertainty was described by assigning prior statistical distributions for each of these parameters. Standard Monte Carlo techniques were used to sample from the parameter distributions to determine the full prediction uncertainty. Additional analysis is performed to determine the most cost-beneficial characterization activities. The maximum radius of the tritium and strontium-90 contaminant boundary was used as the output metric for evaluation of prediction uncertainty. The results indicate that combining all of the uncertainty in the parameters listed above propagates to a prediction uncertainty in the maximum radius of the contaminant boundary of 234 to 308 m and 234 to 302 m, for tritium and strontium-90, respectively. Although the uncertainty in the input parameters is large, the prediction uncertainty in the contaminant boundary is relatively small. The relatively small prediction uncertainty is primarily due to the small transport velocities such that large changes in the uncertain input parameters causes small changes in the contaminant boundary. This suggests that the model is suitable in terms of predictive capability for the contaminant boundary delineation.
Uncertainty and Expectation in Sentence Processing: Evidence From Subcategorization Distributions.
Linzen, Tal; Jaeger, T Florian
2016-08-01
There is now considerable evidence that human sentence processing is expectation based: As people read a sentence, they use their statistical experience with their language to generate predictions about upcoming syntactic structure. This study examines how sentence processing is affected by readers' uncertainty about those expectations. In a self-paced reading study, we use lexical subcategorization distributions to factorially manipulate both the strength of expectations and the uncertainty about them. We compare two types of uncertainty: uncertainty about the verb's complement, reflecting the next prediction step; and uncertainty about the full sentence, reflecting an unbounded number of prediction steps. We find that uncertainty about the full structure, but not about the next step, was a significant predictor of processing difficulty: Greater reduction in uncertainty was correlated with increased reading times (RTs). We additionally replicated previously observed effects of expectation violation (surprisal), orthogonal to the effect of uncertainty. This suggests that both surprisal and uncertainty affect human RTs. We discuss the consequences for theories of sentence comprehension. PMID:26286681
Estimating uncertainties in statistics computed from direct numerical simulation
NASA Astrophysics Data System (ADS)
Oliver, Todd A.; Malaya, Nicholas; Ulerich, Rhys; Moser, Robert D.
2014-03-01
Rigorous assessment of uncertainty is crucial to the utility of direct numerical simulation (DNS) results. Uncertainties in the computed statistics arise from two sources: finite statistical sampling and the discretization of the Navier-Stokes equations. Due to the presence of non-trivial sampling error, standard techniques for estimating discretization error (such as Richardson extrapolation) fail or are unreliable. This work provides a systematic and unified approach for estimating these errors. First, a sampling error estimator that accounts for correlation in the input data is developed. Then, this sampling error estimate is used as part of a Bayesian extension of Richardson extrapolation in order to characterize the discretization error. These methods are tested using the Lorenz equations and are shown to perform well. These techniques are then used to investigate the sampling and discretization errors in the DNS of a wall-bounded turbulent flow at Reτ ≈ 180. Both small (Lx/δ × Lz/δ = 4π × 2π) and large (Lx/δ × Lz/δ = 12π × 4π) domain sizes are investigated. For each case, a sequence of meshes was generated by first designing a "nominal" mesh using standard heuristics for wall-bounded simulations. These nominal meshes were then coarsened to generate a sequence of grid resolutions appropriate for the Bayesian Richardson extrapolation method. In addition, the small box case is computationally inexpensive enough to allow simulation on a finer mesh, enabling the results of the extrapolation to be validated in a weak sense. For both cases, it is found that while the sampling uncertainty is large enough to make the order of accuracy difficult to determine, the estimated discretization errors are quite small. This indicates that the commonly used heuristics provide adequate resolution for this class of problems. However, it is also found that, for some quantities, the discretization error is not small relative to sampling error, indicating that the
Kranen, Simon van; Beek, Suzanne van; Rasch, Coen; Herk, Marcel van; Sonke, Jan-Jakob
2009-04-01
Purpose: To quantify local geometrical uncertainties in anatomical sub-regions during radiotherapy for head-and-neck cancer patients. Methods and Materials: Local setup accuracy was analyzed for 38 patients, who had received intensity-modulated radiotherapy and were regularly scanned during treatment with cone beam computed tomography (CBCT) for offline patient setup correction. In addition to the clinically used large region of interest (ROI), we defined eight ROIs in the planning CT that contained rigid bony structures: the mandible, larynx, jugular notch, occiput bone, vertebrae C1-C3, C3-C5, and C5-C7, and the vertebrae caudal of C7. By local rigid registration to successive CBCT scans, the local setup accuracy of each ROI was determined and compared with the overall setup error assessed with the large ROI. Deformations were distinguished from rigid body movements by expressing movement relative to a reference ROI (vertebrae C1-C3). Results: The offline patient setup correction protocol using the large ROI resulted in residual systematic errors (1 SD) within 1.2 mm and random errors within 1.5 mm for each direction. Local setup errors were larger, ranging from 1.1 to 3.4 mm (systematic) and 1.3 to 2.5 mm (random). Systematic deformations ranged from 0.4 mm near the reference C1-C3 to 3.8 mm for the larynx. Random deformations ranged from 0.5 to 3.6 mm. Conclusion: Head-and-neck cancer patients show considerable local setup variations, exceeding residual global patient setup uncertainty in an offline correction protocol. Current planning target volume margins may be inadequate to account for these uncertainties. We propose registration of multiple ROIs to drive correction protocols and adaptive radiotherapy to reduce the impact of local setup variations.
Climate model uncertainty versus conceptual geological uncertainty in hydrological modeling
NASA Astrophysics Data System (ADS)
Sonnenborg, T. O.; Seifert, D.; Refsgaard, J. C.
2015-09-01
Projections of climate change impact are associated with a cascade of uncertainties including in CO2 emission scenarios, climate models, downscaling and impact models. The relative importance of the individual uncertainty sources is expected to depend on several factors including the quantity that is projected. In the present study the impacts of climate model uncertainty and geological model uncertainty on hydraulic head, stream flow, travel time and capture zones are evaluated. Six versions of a physically based and distributed hydrological model, each containing a unique interpretation of the geological structure of the model area, are forced by 11 climate model projections. Each projection of future climate is a result of a GCM-RCM model combination (from the ENSEMBLES project) forced by the same CO2 scenario (A1B). The changes from the reference period (1991-2010) to the future period (2081-2100) in projected hydrological variables are evaluated and the effects of geological model and climate model uncertainties are quantified. The results show that uncertainty propagation is context-dependent. While the geological conceptualization is the dominating uncertainty source for projection of travel time and capture zones, the uncertainty due to the climate models is more important for groundwater hydraulic heads and stream flow.
Climate model uncertainty vs. conceptual geological uncertainty in hydrological modeling
NASA Astrophysics Data System (ADS)
Sonnenborg, T. O.; Seifert, D.; Refsgaard, J. C.
2015-04-01
Projections of climate change impact are associated with a cascade of uncertainties including CO2 emission scenario, climate model, downscaling and impact model. The relative importance of the individual uncertainty sources is expected to depend on several factors including the quantity that is projected. In the present study the impacts of climate model uncertainty and geological model uncertainty on hydraulic head, stream flow, travel time and capture zones are evaluated. Six versions of a physically based and distributed hydrological model, each containing a unique interpretation of the geological structure of the model area, are forced by 11 climate model projections. Each projection of future climate is a result of a GCM-RCM model combination (from the ENSEMBLES project) forced by the same CO2 scenario (A1B). The changes from the reference period (1991-2010) to the future period (2081-2100) in projected hydrological variables are evaluated and the effects of geological model and climate model uncertainties are quantified. The results show that uncertainty propagation is context dependent. While the geological conceptualization is the dominating uncertainty source for projection of travel time and capture zones, the uncertainty on the climate models is more important for groundwater hydraulic heads and stream flow.
Demartin, Federico; Mariani, Elisa; Forte, Stefano; Vicini, Alessandro; Rojo, Juan
2010-07-01
We present a systematic study of uncertainties due to parton distributions (PDFs) and the strong coupling on the gluon-fusion production cross section of the standard model Higgs at the Tevatron and LHC colliders. We compare procedures and results when three recent sets of PDFs are used, CTEQ6.6, MSTW08, and NNPDF1.2, and we discuss specifically the way PDF and strong coupling uncertainties are combined. We find that results obtained from different PDF sets are in reasonable agreement if a common value of the strong coupling is adopted. We show that the addition in quadrature of PDF and {alpha}{sub s} uncertainties provides an adequate approximation to the full result with exact error propagation. We discuss a simple recipe to determine a conservative PDF+{alpha}{sub s} uncertainty from available global parton sets, and we use it to estimate this uncertainty on the given process to be about 10% at the Tevatron and 5% at the LHC for a light Higgs.
Analysis of Infiltration Uncertainty
R. McCurley
2003-10-27
The primary objectives of this uncertainty analysis are: (1) to develop and justify a set of uncertain parameters along with associated distributions; and (2) to use the developed uncertain parameter distributions and the results from selected analog site calculations done in ''Simulation of Net Infiltration for Modern and Potential Future Climates'' (USGS 2001 [160355]) to obtain the net infiltration weighting factors for the glacial transition climate. These weighting factors are applied to unsaturated zone (UZ) flow fields in Total System Performance Assessment (TSPA), as outlined in the ''Total System Performance Assessment-License Application Methods and Approach'' (BSC 2002 [160146], Section 3.1) as a method for the treatment of uncertainty. This report is a scientific analysis because no new and mathematical physical models are developed herein, and it is based on the use of the models developed in or for ''Simulation of Net Infiltration for Modern and Potential Future Climates'' (USGS 2001 [160355]). Any use of the term model refers to those developed in the infiltration numerical model report. TSPA License Application (LA) has included three distinct climate regimes in the comprehensive repository performance analysis for Yucca Mountain: present-day, monsoon, and glacial transition. Each climate regime was characterized using three infiltration-rate maps, including a lower- and upper-bound and a mean value (equal to the average of the two boundary values). For each of these maps, which were obtained based on analog site climate data, a spatially averaged value was also calculated by the USGS. For a more detailed discussion of these infiltration-rate maps, see ''Simulation of Net Infiltration for Modern and Potential Future Climates'' (USGS 2001 [160355]). For this Scientific Analysis Report, spatially averaged values were calculated for the lower-bound, mean, and upper-bound climate analogs only for the glacial transition climate regime, within the
NASA Astrophysics Data System (ADS)
Hagos Subagadis, Yohannes; Schütze, Niels; Grundmann, Jens
2015-04-01
The planning and implementation of effective water resources management strategies need an assessment of multiple (physical, environmental, and socio-economic) issues, and often requires new research in which knowledge of diverse disciplines are combined in a unified methodological and operational frameworks. Such integrative research to link different knowledge domains faces several practical challenges. Such complexities are further compounded by multiple actors frequently with conflicting interests and multiple uncertainties about the consequences of potential management decisions. A fuzzy-stochastic multiple criteria decision analysis tool was developed in this study to systematically quantify both probabilistic and fuzzy uncertainties associated with complex hydrosystems management. It integrated physical process-based models, fuzzy logic, expert involvement and stochastic simulation within a general framework. Subsequently, the proposed new approach is applied to a water-scarce coastal arid region water management problem in northern Oman, where saltwater intrusion into a coastal aquifer due to excessive groundwater extraction for irrigated agriculture has affected the aquifer sustainability, endangering associated socio-economic conditions as well as traditional social structure. Results from the developed method have provided key decision alternatives which can serve as a platform for negotiation and further exploration. In addition, this approach has enabled to systematically quantify both probabilistic and fuzzy uncertainties associated with the decision problem. Sensitivity analysis applied within the developed tool has shown that the decision makers' risk aversion and risk taking attitude may yield in different ranking of decision alternatives. The developed approach can be applied to address the complexities and uncertainties inherent in water resources systems to support management decisions, while serving as a platform for stakeholder participation.
Asymmetric Uncertainty Expression for High Gradient Aerodynamics
NASA Technical Reports Server (NTRS)
Pinier, Jeremy T
2012-01-01
When the physics of the flow around an aircraft changes very abruptly either in time or space (e.g., flow separation/reattachment, boundary layer transition, unsteadiness, shocks, etc), the measurements that are performed in a simulated environment like a wind tunnel test or a computational simulation will most likely incorrectly predict the exact location of where (or when) the change in physics happens. There are many reasons for this, includ- ing the error introduced by simulating a real system at a smaller scale and at non-ideal conditions, or the error due to turbulence models in a computational simulation. The un- certainty analysis principles that have been developed and are being implemented today do not fully account for uncertainty in the knowledge of the location of abrupt physics changes or sharp gradients, leading to a potentially underestimated uncertainty in those areas. To address this problem, a new asymmetric aerodynamic uncertainty expression containing an extra term to account for a phase-uncertainty, the magnitude of which is emphasized in the high-gradient aerodynamic regions is proposed in this paper. Additionally, based on previous work, a method for dispersing aerodynamic data within asymmetric uncer- tainty bounds in a more realistic way has been developed for use within Monte Carlo-type analyses.
MODIS Radiometric Calibration and Uncertainty Assessment
NASA Technical Reports Server (NTRS)
Xiong, Xiaoxiong; Chiang, Vincent; Sun, Junqiang; Wu, Aisheng
2011-01-01
Since launch, Terra and Aqua MODIS have collected more than II and 9 years of datasets for comprehensive studies of the Earth's land, ocean, and atmospheric properties. MODIS observations are made in 36 spectral bands: 20 reflective solar bands (RSB) and 16 thermal emissive bands (TEB). Compared to its heritage sensors, MODIS was developed with very stringent calibration and uncertainty requirements. As a result, MODIS was designed and built with a set of state of the art on-board calibrators (OBC), which allow key sensor performance parameters and on-orbit calibration coefficients to be monitored and updated if necessary. In terms of its calibration traceability, MODIS RSB calibration is reflectance based using an on-board solar diffuser (SD) and the TEB calibration is radiance based using an on-board blackbody (BB). In addition to on-orbit calibration coefficients derived from its OBC, calibration parameters determined from sensor pre-launch calibration and characterization are used in both the RSB and TEB calibration and retrieval algorithms. This paper provides a brief description of MODIS calibration methodologies and discusses details of its on-orbit calibration uncertainties. It assesses uncertainty contributions from individual components and differences between Terra and Aqua MODIS due to their design characteristics and on-orbit periormance. Also discussed in this paper is the use of MODIS LIB uncertainty index CUI) product.
Uncertainty and risk in wildland fire management: a review.
Thompson, Matthew P; Calkin, Dave E
2011-08-01
Wildland fire management is subject to manifold sources of uncertainty. Beyond the unpredictability of wildfire behavior, uncertainty stems from inaccurate/missing data, limited resource value measures to guide prioritization across fires and resources at risk, and an incomplete scientific understanding of ecological response to fire, of fire behavior response to treatments, and of spatiotemporal dynamics involving disturbance regimes and climate change. This work attempts to systematically align sources of uncertainty with the most appropriate decision support methodologies, in order to facilitate cost-effective, risk-based wildfire planning efforts. We review the state of wildfire risk assessment and management, with a specific focus on uncertainties challenging implementation of integrated risk assessments that consider a suite of human and ecological values. Recent advances in wildfire simulation and geospatial mapping of highly valued resources have enabled robust risk-based analyses to inform planning across a variety of scales, although improvements are needed in fire behavior and ignition occurrence models. A key remaining challenge is a better characterization of non-market resources at risk, both in terms of their response to fire and how society values those resources. Our findings echo earlier literature identifying wildfire effects analysis and value uncertainty as the primary challenges to integrated wildfire risk assessment and wildfire management. We stress the importance of identifying and characterizing uncertainties in order to better quantify and manage them. Leveraging the most appropriate decision support tools can facilitate wildfire risk assessment and ideally improve decision-making. PMID:21489684
Uncertainty in simulating wheat yields under climate change
NASA Astrophysics Data System (ADS)
Asseng, S.; Ewert, F.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P. J.; Rötter, R. P.; Cammarano, D.; Brisson, N.; Basso, B.; Martre, P.; Aggarwal, P. K.; Angulo, C.; Bertuzzi, P.; Biernath, C.; Challinor, A. J.; Doltra, J.; Gayler, S.; Goldberg, R.; Grant, R.; Heng, L.; Hooker, J.; Hunt, L. A.; Ingwersen, J.; Izaurralde, R. C.; Kersebaum, K. C.; Müller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.; Olesen, J. E.; Osborne, T. M.; Palosuo, T.; Priesack, E.; Ripoche, D.; Semenov, M. A.; Shcherbak, I.; Steduto, P.; Stöckle, C.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Travasso, M.; Waha, K.; Wallach, D.; White, J. W.; Williams, J. R.; Wolf, J.
2013-09-01
Projections of climate change impacts on crop yields are inherently uncertain. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic and objective comparisons among process-based crop simulation models are difficult. Here we present the largest standardized model intercomparison for climate change impacts so far. We found that individual crop models are able to simulate measured wheat grain yields accurately under a range of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi-model ensembles. Less uncertainty in describing how climate change may affect agricultural productivity will aid adaptation strategy development andpolicymaking.
Uncertainty in Simulating Wheat Yields Under Climate Change
NASA Technical Reports Server (NTRS)
Asseng, S.; Ewert, F.; Rosenzweig, Cynthia; Jones, J. W.; Hatfield, J. W.; Ruane, A. C.; Boote, K. J.; Thornburn, P. J.; Rotter, R. P.; Cammarano, D.; Brisson, N.; Basso, B.; Martre, P.; Angulo, C.; Bertuzzi, P.; Biernath, C.; Challinor, A. J.; Doltra, J.; Gayler, S.; Goldberg, R.; Grant, R.; Heng, L.; Hooker, J.; Hunt, L. A.; Ingwersen, J.
2013-01-01
Projections of climate change impacts on crop yields are inherently uncertain1. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate2. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic and objective comparisons among process-based crop simulation models1,3 are difficult4. Here we present the largest standardized model intercomparison for climate change impacts so far. We found that individual crop models are able to simulate measured wheat grain yields accurately under a range of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi-model ensembles. Less uncertainty in describing how climate change may affect agricultural productivity will aid adaptation strategy development and policymaking.
Uncertainty in Simulating Wheat Yields Under Climate Change
Asseng, S.; Ewert, F.; Rosenzweig, C.; Jones, J.W.; Hatfield, Jerry; Ruane, Alex; Boote, K. J.; Thorburn, Peter; Rotter, R.P.; Cammarano, D.; Brisson, N.; Basso, B.; Martre, P.; Aggarwal, P.K.; Angulo, C.; Bertuzzi, P.; Biernath, C.; Challinor, AJ; Doltra, J.; Gayler, S.; Goldberg, R.; Grant, Robert; Heng, L.; Hooker, J.; Hunt, L.A.; Ingwersen, J.; Izaurralde, Roberto C.; Kersebaum, K.C.; Mueller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.O.; Olesen, JE; Osborne, T.; Palosuo, T.; Priesack, E.; Ripoche, D.; Semenov, M.A.; Shcherbak, I.; Steduto, P.; Stockle, Claudio O.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Travasso, M.; Waha, K.; Wallach, D.; White, J.W.; Williams, J.R.; Wolf, J.
2013-09-01
Anticipating the impacts of climate change on crop yields is critical for assessing future food security. Process-based crop simulation models are the most commonly used tools in such assessments1,2. Analysis of uncertainties in future greenhouse gas emissions and their impacts on future climate change has been increasingly described in the literature3,4 while assessments of the uncertainty in crop responses to climate change are very rare. Systematic and objective comparisons across impact studies is difficult, and thus has not been fully realized5. Here we present the largest coordinated and standardized crop model intercomparison for climate change impacts on wheat production to date. We found that several individual crop models are able to reproduce measured grain yields under current diverse environments, particularly if sufficient details are provided to execute them. However, simulated climate change impacts can vary across models due to differences in model structures and algorithms. The crop-model component of uncertainty in climate change impact assessments was considerably larger than the climate-model component from Global Climate Models (GCMs). Model responses to high temperatures and temperature-by-CO2 interactions are identified as major sources of simulated impact uncertainties. Significant reductions in impact uncertainties through model improvements in these areas and improved quantification of uncertainty through multi-model ensembles are urgently needed for a more reliable translation of climate change scenarios into agricultural impacts in order to develop adaptation strategies and aid policymaking.
NASA Astrophysics Data System (ADS)
Xing, Changhu; Jensen, Colby; Ban, Heng; Phillips, Jeffrey
2011-07-01
A technique adapted from the guarded-comparative-longitudinal heat flow method was selected for the measurement of the thermal conductivity of a nuclear fuel compact over a temperature range characteristic of its usage. This technique fulfills the requirement for non-destructive measurement of the composite compact. Although numerous measurement systems have been created based on the guarded-comparative method, comprehensive systematic (bias) and measurement (precision) uncertainty associated with this technique have not been fully analyzed. In addition to the geometric effect in the bias error, which has been analyzed previously, this paper studies the working condition which is another potential error source. Using finite element analysis, this study showed the effect of these two types of error sources in the thermal conductivity measurement process and the limitations in the design selection of various parameters by considering their effect on the precision error. The results and conclusions provide valuable reference for designing and operating an experimental measurement system using this technique.
On the Directional Dependence and Null Space Freedom in Uncertainty Bound Identification
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
1997-01-01
In previous work, the determination of uncertainty models via minimum norm model validation is based on a single set of input and output measurement data. Since uncertainty bounds at each frequency is directionally dependent for multivariable systems, this will lead to optimistic uncertainty levels. In addition, the design freedom in the uncertainty model has not been utilized to further reduce uncertainty levels. The above issues are addressed by formulating a min- max problem. An analytical solution to the min-max problem is given to within a generalized eigenvalue problem, thus avoiding a direct numerical approach. This result will lead to less conservative and more realistic uncertainty models for use in robust control.
Hydrology, society, change and uncertainty
NASA Astrophysics Data System (ADS)
Koutsoyiannis, Demetris
2014-05-01
Heraclitus, who predicated that "panta rhei", also proclaimed that "time is a child playing, throwing dice". Indeed, change and uncertainty are tightly connected. The type of change that can be predicted with accuracy is usually trivial. Also, decision making under certainty is mostly trivial. The current acceleration of change, due to unprecedented human achievements in technology, inevitably results in increased uncertainty. In turn, the increased uncertainty makes the society apprehensive about the future, insecure and credulous to a developing future-telling industry. Several scientific disciplines, including hydrology, tend to become part of this industry. The social demand for certainties, no matter if these are delusional, is combined by a misconception in the scientific community confusing science with uncertainty elimination. However, recognizing that uncertainty is inevitable and tightly connected with change will help to appreciate the positive sides of both. Hence, uncertainty becomes an important object to study, understand and model. Decision making under uncertainty, developing adaptability and resilience for an uncertain future, and using technology and engineering means for planned change to control the environment are important and feasible tasks, all of which will benefit from advancements in the Hydrology of Uncertainty.
Housing Uncertainty and Childhood Impatience
ERIC Educational Resources Information Center
Anil, Bulent; Jordan, Jeffrey L.; Zahirovic-Herbert, Velma
2011-01-01
The study demonstrates a direct link between housing uncertainty and children's time preferences, or patience. We show that students who face housing uncertainties through mortgage foreclosures and eviction learn impatient behavior and are therefore at greater risk of making poor intertemporal choices such as dropping out of school. We find that…
Mama Software Features: Uncertainty Testing
Ruggiero, Christy E.; Porter, Reid B.
2014-05-30
This document reviews how the uncertainty in the calculations is being determined with test image data. The results of this testing give an ‘initial uncertainty’ number than can be used to estimate the ‘back end’ uncertainty in digital image quantification in images. Statisticians are refining these numbers as part of a UQ effort.
Quantification of Emission Factor Uncertainty
Emissions factors are important for estimating and characterizing emissions from sources of air pollution. There is no quantitative indication of uncertainty for these emission factors, most factors do not have an adequate data set to compute uncertainty, and it is very difficult...
Error and Uncertainty in High-resolution Quantitative Sediment Budgets
NASA Astrophysics Data System (ADS)
Grams, P. E.; Schmidt, J. C.; Topping, D. J.; Yackulic, C. B.
2012-12-01
Sediment budgets are a fundamental tool in fluvial geomorphology. The power of the sediment budget is in the explicit coupling of sediment flux and sediment storage through the Exner equation for bed sediment conservation. Thus, sediment budgets may be calculated either from the divergence of the sediment flux or from measurements of morphologic change. Until recently, sediment budgets were typically calculated using just one of these methods, and often with sparse data. Recent advances in measurement methods for sediment transport have made it possible to measure sediment flux at much higher temporal resolution, while advanced methods for high-resolution topographic and bathymetric mapping have made it possible to measure morphologic change with much greater spatial resolution. Thus, it is now possible to measure all terms of a sediment budget and more thoroughly evaluate uncertainties in measurement methods and sampling strategies. However, measurements of sediment flux and morphologic change involve different types of uncertainty that are encountered over different time and space scales. Three major factors contribute uncertainty to sediment budgets computed from measurements of sediment flux. These are measurement error, the accumulation of error over time, and physical processes that cause systematic bias. In the absence of bias, uncertainty is proportional to measurement error and the ratio of fluxes at the two measurement stations. For example, if the ratio between measured sediment fluxes is more than 0.8, measurement uncertainty must be less than 10 percent in order to calculate a meaningful sediment budget. Systematic bias in measurements of flux can introduce much larger uncertainty. The uncertainties in sediment budgets computed from morphologic measurements fall into three similar categories. These are measurement error, the spatial and temporal propagation of error, and physical processes that cause bias when measurements are interpolated or
Decisions on new product development under uncertainties
NASA Astrophysics Data System (ADS)
Huang, Yeu-Shiang; Liu, Li-Chen; Ho, Jyh-Wen
2015-04-01
In an intensively competitive market, developing a new product has become a valuable strategy for companies to establish their market positions and enhance their competitive advantages. Therefore, it is essential to effectively manage the process of new product development (NPD). However, since various problems may arise in NPD projects, managers should set up some milestones and subsequently construct evaluative mechanisms to assess their feasibility. This paper employed the approach of Bayesian decision analysis to deal with the two crucial uncertainties for NPD, which are the future market share and the responses of competitors. The proposed decision process can provide a systematic analytical procedure to determine whether an NPD project should be continued or not under the consideration of whether effective usage is being made of the organisational resources. Accordingly, the proposed decision model can assist the managers in effectively addressing the NPD issue under the competitive market.
Uncertainty in Integrated Assessment Scenarios
Mort Webster
2005-10-17
The determination of climate policy is a decision under uncertainty. The uncertainty in future climate change impacts is large, as is the uncertainty in the costs of potential policies. Rational and economically efficient policy choices will therefore seek to balance the expected marginal costs with the expected marginal benefits. This approach requires that the risks of future climate change be assessed. The decision process need not be formal or quantitative for descriptions of the risks to be useful. Whatever the decision procedure, a useful starting point is to have as accurate a description of climate risks as possible. Given the goal of describing uncertainty in future climate change, we need to characterize the uncertainty in the main causes of uncertainty in climate impacts. One of the major drivers of uncertainty in future climate change is the uncertainty in future emissions, both of greenhouse gases and other radiatively important species such as sulfur dioxide. In turn, the drivers of uncertainty in emissions are uncertainties in the determinants of the rate of economic growth and in the technologies of production and how those technologies will change over time. This project uses historical experience and observations from a large number of countries to construct statistical descriptions of variability and correlation in labor productivity growth and in AEEI. The observed variability then provides a basis for constructing probability distributions for these drivers. The variance of uncertainty in growth rates can be further modified by expert judgment if it is believed that future variability will differ from the past. But often, expert judgment is more readily applied to projected median or expected paths through time. Analysis of past variance and covariance provides initial assumptions about future uncertainty for quantities that are less intuitive and difficult for experts to estimate, and these variances can be normalized and then applied to mean
Reformulating the Quantum Uncertainty Relation.
Li, Jun-Li; Qiao, Cong-Feng
2015-01-01
Uncertainty principle is one of the cornerstones of quantum theory. In the literature, there are two types of uncertainty relations, the operator form concerning the variances of physical observables and the entropy form related to entropic quantities. Both these forms are inequalities involving pairwise observables, and are found to be nontrivial to incorporate multiple observables. In this work we introduce a new form of uncertainty relation which may give out complete trade-off relations for variances of observables in pure and mixed quantum systems. Unlike the prevailing uncertainty relations, which are either quantum state dependent or not directly measurable, our bounds for variances of observables are quantum state independent and immune from the "triviality" problem of having zero expectation values. Furthermore, the new uncertainty relation may provide a geometric explanation for the reason why there are limitations on the simultaneous determination of different observables in N-dimensional Hilbert space. PMID:26234197
Reformulating the Quantum Uncertainty Relation
NASA Astrophysics Data System (ADS)
Li, Jun-Li; Qiao, Cong-Feng
2015-08-01
Uncertainty principle is one of the cornerstones of quantum theory. In the literature, there are two types of uncertainty relations, the operator form concerning the variances of physical observables and the entropy form related to entropic quantities. Both these forms are inequalities involving pairwise observables, and are found to be nontrivial to incorporate multiple observables. In this work we introduce a new form of uncertainty relation which may give out complete trade-off relations for variances of observables in pure and mixed quantum systems. Unlike the prevailing uncertainty relations, which are either quantum state dependent or not directly measurable, our bounds for variances of observables are quantum state independent and immune from the “triviality” problem of having zero expectation values. Furthermore, the new uncertainty relation may provide a geometric explanation for the reason why there are limitations on the simultaneous determination of different observables in N-dimensional Hilbert space.
Mutualism Disruption Threatens Global Plant Biodiversity: A Systematic Review
Aslan, Clare E.; Zavaleta, Erika S.; Tershy, Bernie; Croll, Donald
2013-01-01
Background As global environmental change accelerates, biodiversity losses can disrupt interspecific interactions. Extinctions of mutualist partners can create “widow” species, which may face reduced ecological fitness. Hypothetically, such mutualism disruptions could have cascading effects on biodiversity by causing additional species coextinctions. However, the scope of this problem – the magnitude of biodiversity that may lose mutualist partners and the consequences of these losses – remains unknown. Methodology/Principal Findings We conducted a systematic review and synthesis of data from a broad range of sources to estimate the threat posed by vertebrate extinctions to the global biodiversity of vertebrate-dispersed and -pollinated plants. Though enormous research gaps persist, our analysis identified Africa, Asia, the Caribbean, and global oceanic islands as geographic regions at particular risk of disruption of these mutualisms; within these regions, percentages of plant species likely affected range from 2.1–4.5%. Widowed plants are likely to experience reproductive declines of 40–58%, potentially threatening their persistence in the context of other global change stresses. Conclusions Our systematic approach demonstrates that thousands of species may be impacted by disruption in one class of mutualisms, but extinctions will likely disrupt other mutualisms, as well. Although uncertainty is high, there is evidence that mutualism disruption directly threatens significant biodiversity in some geographic regions. Conservation measures with explicit focus on mutualistic functions could be necessary to bolster populations of widowed species and maintain ecosystem functions. PMID:23840571
Uncertainty in monitoring E. coli concentrations in streams and stormwater runoff
NASA Astrophysics Data System (ADS)
Harmel, R. D.; Hathaway, J. M.; Wagner, K. L.; Wolfe, J. E.; Karthikeyan, R.; Francesconi, W.; McCarthy, D. T.
2016-03-01
Microbial contamination of surface waters, a substantial public health concern throughout the world, is typically identified by fecal indicator bacteria such as Escherichia coli. Thus, monitoring E. coli concentrations is critical to evaluate current conditions, determine restoration effectiveness, and inform model development and calibration. An often overlooked component of these monitoring and modeling activities is understanding the inherent random and systematic uncertainty present in measured data. In this research, a review and subsequent analysis was performed to identify, document, and analyze measurement uncertainty of E. coli data collected in stream flow and stormwater runoff as individual discrete samples or throughout a single runoff event. Data on the uncertainty contributed by sample collection, sample preservation/storage, and laboratory analysis in measured E. coli concentrations were compiled and analyzed, and differences in sampling method and data quality scenarios were compared. The analysis showed that: (1) manual integrated sampling produced the lowest random and systematic uncertainty in individual samples, but automated sampling typically produced the lowest uncertainty when sampling throughout runoff events; (2) sample collection procedures often contributed the highest amount of uncertainty, although laboratory analysis introduced substantial random uncertainty and preservation/storage introduced substantial systematic uncertainty under some scenarios; and (3) the uncertainty in measured E. coli concentrations was greater than that of sediment and nutrients, but the difference was not as great as may be assumed. This comprehensive analysis of uncertainty in E. coli concentrations measured in streamflow and runoff should provide valuable insight for designing E. coli monitoring projects, reducing uncertainty in quality assurance efforts, regulatory and policy decision making, and fate and transport modeling.
Communicating Storm Surge Forecast Uncertainty
NASA Astrophysics Data System (ADS)
Troutman, J. A.; Rhome, J.
2015-12-01
When it comes to tropical cyclones, storm surge is often the greatest threat to life and property along the coastal United States. The coastal population density has dramatically increased over the past 20 years, putting more people at risk. Informing emergency managers, decision-makers and the public about the potential for wind driven storm surge, however, has been extremely difficult. Recently, the Storm Surge Unit at the National Hurricane Center in Miami, Florida has developed a prototype experimental storm surge watch/warning graphic to help communicate this threat more effectively by identifying areas most at risk for life-threatening storm surge. This prototype is the initial step in the transition toward a NWS storm surge watch/warning system and highlights the inundation levels that have a 10% chance of being exceeded. The guidance for this product is the Probabilistic Hurricane Storm Surge (P-Surge) model, which predicts the probability of various storm surge heights by statistically evaluating numerous SLOSH model simulations. Questions remain, however, if exceedance values in addition to the 10% may be of equal importance to forecasters. P-Surge data from 2014 Hurricane Arthur is used to ascertain the practicality of incorporating other exceedance data into storm surge forecasts. Extracting forecast uncertainty information through analyzing P-surge exceedances overlaid with track and wind intensity forecasts proves to be beneficial for forecasters and decision support.
An active learning approach with uncertainty, representativeness, and diversity.
He, Tianxu; Zhang, Shukui; Xin, Jie; Zhao, Pengpeng; Wu, Jian; Xian, Xuefeng; Li, Chunhua; Cui, Zhiming
2014-01-01
Big data from the Internet of Things may create big challenge for data classification. Most active learning approaches select either uncertain or representative unlabeled instances to query their labels. Although several active learning algorithms have been proposed to combine the two criteria for query selection, they are usually ad hoc in finding unlabeled instances that are both informative and representative and fail to take the diversity of instances into account. We address this challenge by presenting a new active learning framework which considers uncertainty, representativeness, and diversity creation. The proposed approach provides a systematic way for measuring and combining the uncertainty, representativeness, and diversity of an instance. Firstly, use instances' uncertainty and representativeness to constitute the most informative set. Then, use the kernel k-means clustering algorithm to filter the redundant samples and the resulting samples are queried for labels. Extensive experimental results show that the proposed approach outperforms several state-of-the-art active learning approaches. PMID:25180208
An Active Learning Approach with Uncertainty, Representativeness, and Diversity
He, Tianxu; Zhang, Shukui; Xin, Jie; Xian, Xuefeng; Li, Chunhua; Cui, Zhiming
2014-01-01
Big data from the Internet of Things may create big challenge for data classification. Most active learning approaches select either uncertain or representative unlabeled instances to query their labels. Although several active learning algorithms have been proposed to combine the two criteria for query selection, they are usually ad hoc in finding unlabeled instances that are both informative and representative and fail to take the diversity of instances into account. We address this challenge by presenting a new active learning framework which considers uncertainty, representativeness, and diversity creation. The proposed approach provides a systematic way for measuring and combining the uncertainty, representativeness, and diversity of an instance. Firstly, use instances' uncertainty and representativeness to constitute the most informative set. Then, use the kernel k-means clustering algorithm to filter the redundant samples and the resulting samples are queried for labels. Extensive experimental results show that the proposed approach outperforms several state-of-the-art active learning approaches. PMID:25180208
A neural representation of categorization uncertainty in the human brain.
Grinband, Jack; Hirsch, Joy; Ferrera, Vincent P
2006-03-01
The ability to classify visual objects into discrete categories ("friend" versus "foe"; "edible" versus "poisonous") is essential for survival and is a fundamental cognitive function. The cortical substrates that mediate this function, however, have not been identified in humans. To identify brain regions involved in stimulus categorization, we developed a task in which subjects classified stimuli according to a variable categorical boundary. Psychophysical functions were used to define a decision variable, categorization uncertainty, which was systematically manipulated. Using event-related functional MRI, we discovered that activity in a fronto-striatal-thalamic network, consisting of the medial frontal gyrus, anterior insula, ventral striatum, and dorsomedial thalamus, was modulated by categorization uncertainty. We found this network to be distinct from the frontoparietal attention network, consisting of the frontal and parietal eye fields, where activity was not correlated with categorization uncertainty. PMID:16504950
Calibration and systematic error analysis for the COBE(1) DMR 4year sky maps
Kogut, A.; Banday, A.J.; Bennett, C.L.; Gorski, K.M.; Hinshaw,G.; Jackson, P.D.; Keegstra, P.; Lineweaver, C.; Smoot, G.F.; Tenorio,L.; Wright, E.L.
1996-01-04
The Differential Microwave Radiometers (DMR) instrument aboard the Cosmic Background Explorer (COBE) has mapped the full microwave sky to mean sensitivity 26 mu K per 7 degrees held of view. The absolute calibration is determined to 0.7 percent with drifts smaller than 0.2 percent per year. We have analyzed both the raw differential data and the pixelized sky maps for evidence of contaminating sources such as solar system foregrounds, instrumental susceptibilities, and artifacts from data recovery and processing. Most systematic effects couple only weakly to the sky maps. The largest uncertainties in the maps result from the instrument susceptibility to Earth's magnetic field, microwave emission from Earth, and upper limits to potential effects at the spacecraft spin period. Systematic effects in the maps are small compared to either the noise or the celestial signal: the 95 percent confidence upper limit for the pixel-pixel rms from all identified systematics is less than 6 mu K in the worst channel. A power spectrum analysis of the (A-B)/2 difference maps shows no evidence for additional undetected systematic effects.
Assessing hydrologic prediction uncertainty resulting from soft land cover classification
NASA Astrophysics Data System (ADS)
Loosvelt, Lien; De Baets, Bernard; Pauwels, Valentijn R. N.; Verhoest, Niko E. C.
2014-09-01
For predictions in ungauged basins (PUB), environmental data is generally not available and needs to be inferred by indirect means. Existing technologies such as remote sensing are valuable tools for estimating the lacking data, as these technologies become more widely available and have a high areal coverage. However, indirect estimates of the environmental characteristics are prone to uncertainty. Hence, an improved understanding of the quality of the estimates and the development of methods for dealing with their associated uncertainty are essential to evolve towards accurate PUB. In this study, the impact of the uncertainty associated with the classification of land cover based on multi-temporal SPOT imagery, resulting from the use of the Random Forests classifier, on the predictions of the hydrologic model TOPLATS is investigated through a Monte Carlo simulation. The results show that the predictions of evapotranspiration, runoff and baseflow are hardly affected by the classification uncertainty when area-averaged predictions are intended, implying that uncertainty propagation is only advisable in case a spatial distribution of the predictions is relevant for decision making or is coupled to other spatially distributed models. Based on the resulting uncertainty map, guidelines for additional data collection are formulated in order to reduce the uncertainty for future model applications. Because a Monte Carlo-based uncertainty analysis is computationally very demanding, especially when complex models are involved, we developed a fast indicative uncertainty assessment method that allows for generating proxies of the Monte Carlo-based result in terms of the mean prediction and its associated uncertainty based on a single model evaluation. These proxies are shown to perform well and provide a good indication of the impact of classification uncertainty on the prediction result.
Uncertainty quantification in nanomechanical measurements using the atomic force microscope
NASA Astrophysics Data System (ADS)
Wagner, Ryan; Moon, Robert; Pratt, Jon; Shaw, Gordon; Raman, Arvind
2011-11-01
Quantifying uncertainty in measured properties of nanomaterials is a prerequisite for the manufacture of reliable nanoengineered materials and products. Yet, rigorous uncertainty quantification (UQ) is rarely applied for material property measurements with the atomic force microscope (AFM), a widely used instrument that can measure properties at nanometer scale resolution of both inorganic and biological surfaces and nanomaterials. We present a framework to ascribe uncertainty to local nanomechanical properties of any nanoparticle or surface measured with the AFM by taking into account the main uncertainty sources inherent in such measurements. We demonstrate the framework by quantifying uncertainty in AFM-based measurements of the transverse elastic modulus of cellulose nanocrystals (CNCs), an abundant, plant-derived nanomaterial whose mechanical properties are comparable to Kevlar fibers. For a single, isolated CNC the transverse elastic modulus was found to have a mean of 8.1 GPa and a 95% confidence interval of 2.7-20 GPa. A key result is that multiple replicates of force-distance curves do not sample the important sources of uncertainty, which are systematic in nature. The dominant source of uncertainty is the nondimensional photodiode sensitivity calibration rather than the cantilever stiffness or Z-piezo calibrations. The results underscore the great need for, and open a path towards, quantifying and minimizing uncertainty in AFM-based material property measurements of nanoparticles, nanostructured surfaces, thin films, polymers and biomaterials. This work is a partial contribution of the USDA Forest Service and NIST, agencies of the US government, and is not subject to copyright.
Estimating Uncertainties in Statistics Computed from DNS
NASA Astrophysics Data System (ADS)
Malaya, Nicholas; Oliver, Todd; Ulerich, Rhys; Moser, Robert
2013-11-01
Rigorous assessment of uncertainty is crucial to the utility of DNS results. Uncertainties in the computed statistics arise from two sources: finite sampling and the discretization of the Navier-Stokes equations. Due to the presence of non-trivial sampling error, standard techniques for estimating discretization error (such as Richardson Extrapolation) fail or are unreliable. This talk provides a systematic and unified approach for estimating these errors. First, a sampling error estimator that accounts for correlation in the input data is developed. Then, this sampling error estimate is used as an input to a probabilistic extension of Richardson extrapolation in order to characterize the discretization error. These techniques are used to investigate the sampling and discretization errors in the DNS of a wall-bounded turbulent flow at Reτ = 180. We will show a well-resolved DNS simulation which, for the centerline velocity, possesses 0.02% sampling error and discretization errors of 0.003%. These results imply that standard resolution heuristics for DNS accurately predict required grid sizes. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].
Uncertainty, energy, and multiple-valued logics
Hayes, J.P.
1986-02-01
The multiple-valued logics obtained by introducing uncertainty and energy considerations into classical switching theory are studied in this paper. First, the nature of uncertain or unknown signals is examined, and two general uncertainty types called U-values and P-values are identified. It is shown that multiple-valued logics composed of U/P-values can be systematically derived from 2-valued Boolean algebra. These are useful for timing and hazard analysis, and provide a rigorous framework for designing gate-level logic simulation programs. Next, signals of the form (..nu..,S) are considered where ..nu.. and S denote logic level and strength, respectively, and the product vs corresponds to energy flow or power. It is shown that these signals from a type of lattice called a Pseudo-Boolean algebra. Such algebras characterize the behavior of digital circuits at a level (the switch level) intermediate between the conventional analog and logical levels. They provide the mathematical basis for an efficient new class of switch-level simulation programs used in MOS VLSI design.
Amplification uncertainty relation for probabilistic amplifiers
NASA Astrophysics Data System (ADS)
Namiki, Ryo
2015-09-01
Traditionally, quantum amplification limit refers to the property of inevitable noise addition on canonical variables when the field amplitude of an unknown state is linearly transformed through a quantum channel. Recent theoretical studies have determined amplification limits for cases of probabilistic quantum channels or general quantum operations by specifying a set of input states or a state ensemble. However, it remains open how much excess noise on canonical variables is unavoidable and whether there exists a fundamental trade-off relation between the canonical pair in a general amplification process. In this paper we present an uncertainty-product form of amplification limits for general quantum operations by assuming an input ensemble of Gaussian-distributed coherent states. It can be derived as a straightforward consequence of canonical uncertainty relations and retrieves basic properties of the traditional amplification limit. In addition, our amplification limit turns out to give a physical limitation on probabilistic reduction of an Einstein-Podolsky-Rosen uncertainty. In this regard, we find a condition that probabilistic amplifiers can be regarded as local filtering operations to distill entanglement. This condition establishes a clear benchmark to verify an advantage of non-Gaussian operations beyond Gaussian operations with a feasible input set of coherent states and standard homodyne measurements.
Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Harrison, J.D.; Harper, F.T.; Hora, S.C.
1998-04-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequence from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distribution for variables related to the MACCS and COSYMA internal dosimetry models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on internal dosimetry, (4) short biographies of the experts, and (5) the aggregated results of their responses.
Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Boardman, J.; Jones, J.A.; Harper, F.T.; Young, M.L.; Hora, S.C.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequence from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distribution for variables related to the MACCS and COSYMA deposited material and external dose models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on deposited material and external doses, (4) short biographies of the experts, and (5) the aggregated results of their responses.
Haskin, F.E.; Harper, F.T.; Goossens, L.H.J.; Kraan, B.C.P.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequence from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distribution for variables related to the MACCS and COSYMA early health effects models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on early health effects, (4) short biographies of the experts, and (5) the aggregated results of their responses.
Quantifying and Qualifying USGS ShakeMap Uncertainty
Wald, David J.; Lin, Kuo-Wan; Quitoriano, Vincent
2008-01-01
We describe algorithms for quantifying and qualifying uncertainties associated with USGS ShakeMap ground motions. The uncertainty values computed consist of latitude/longitude grid-based multiplicative factors that scale the standard deviation associated with the ground motion prediction equation (GMPE) used within the ShakeMap algorithm for estimating ground motions. The resulting grid-based 'uncertainty map' is essential for evaluation of losses derived using ShakeMaps as the hazard input. For ShakeMap, ground motion uncertainty at any point is dominated by two main factors: (i) the influence of any proximal ground motion observations, and (ii) the uncertainty of estimating ground motions from the GMPE, most notably, elevated uncertainty due to initial, unconstrained source rupture geometry. The uncertainty is highest for larger magnitude earthquakes when source finiteness is not yet constrained and, hence, the distance to rupture is also uncertain. In addition to a spatially-dependant, quantitative assessment, many users may prefer a simple, qualitative grading for the entire ShakeMap. We developed a grading scale that allows one to quickly gauge the appropriate level of confidence when using rapidly produced ShakeMaps as part of the post-earthquake decision-making process or for qualitative assessments of archived or historical earthquake ShakeMaps. We describe an uncertainty letter grading ('A' through 'F', for high to poor quality, respectively) based on the uncertainty map. A middle-range ('C') grade corresponds to a ShakeMap for a moderate-magnitude earthquake suitably represented with a point-source location. Lower grades 'D' and 'F' are assigned for larger events (M>6) where finite-source dimensions are not yet constrained. The addition of ground motion observations (or observed macroseismic intensities) reduces uncertainties over data-constrained portions of the map. Higher grades ('A' and 'B') correspond to ShakeMaps with constrained fault dimensions
Latin hypercube approach to estimate uncertainty in ground water vulnerability.
Gurdak, Jason J; McCray, John E; Thyne, Geoffrey; Qi, Sharon L
2007-01-01
A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. PMID:17470124
A posteriori uncertainty quantification of PIV-based pressure data
NASA Astrophysics Data System (ADS)
Azijli, Iliass; Sciacchitano, Andrea; Ragni, Daniele; Palha, Artur; Dwight, Richard P.
2016-05-01
A methodology for a posteriori uncertainty quantification of pressure data retrieved from particle image velocimetry (PIV) is proposed. It relies upon the Bayesian framework, where the posterior distribution (probability distribution of the true velocity, given the PIV measurements) is obtained from the prior distribution (prior knowledge of properties of the velocity field, e.g., divergence-free) and the statistical model of PIV measurement uncertainty. Once the posterior covariance matrix of the velocity is known, it is propagated through the discretized Poisson equation for pressure. Numerical assessment of the proposed method on a steady Lamb-Oseen vortex shows excellent agreement with Monte Carlo simulations, while linear uncertainty propagation underestimates the uncertainty in the pressure by up to 30 %. The method is finally applied to an experimental test case of a turbulent boundary layer in air, obtained using time-resolved tomographic PIV. Simultaneously with the PIV measurements, microphone measurements were carried out at the wall. The pressure reconstructed from the tomographic PIV data is compared to the microphone measurements. Realizing that the uncertainty of the latter is significantly smaller than the PIV-based pressure, this allows us to obtain an estimate for the true error of the former. The comparison between true error and estimated uncertainty demonstrates the accuracy of the uncertainty estimates on the pressure. In addition, enforcing the divergence-free constraint is found to result in a significantly more accurate reconstructed pressure field. The estimated uncertainty confirms this result.
Dimensionality reduction for uncertainty quantification of nuclear engineering models.
Roderick, O.; Wang, Z.; Anitescu, M.
2011-01-01
The task of uncertainty quantification consists of relating the available information on uncertainties in the model setup to the resulting variation in the outputs of the model. Uncertainty quantification plays an important role in complex simulation models of nuclear engineering, where better understanding of uncertainty results in greater confidence in the model and in the improved safety and efficiency of engineering projects. In our previous work, we have shown that the effect of uncertainty can be approximated by polynomial regression with derivatives (PRD): a hybrid regression method that uses first-order derivatives of the model output as additional fitting conditions for a polynomial expansion. Numerical experiments have demonstrated the advantage of this approach over classical methods of uncertainty analysis: in precision, computational efficiency, or both. To obtain derivatives, we used automatic differentiation (AD) on the simulation code; hand-coded derivatives are acceptable for simpler models. We now present improvements on the method. We use a tuned version of the method of snapshots, a technique based on proper orthogonal decomposition (POD), to set up the reduced order representation of essential information on uncertainty in the model inputs. The automatically obtained sensitivity information is required to set up the method. Dimensionality reduction in combination with PRD allows analysis on a larger dimension of the uncertainty space (>100), at modest computational cost.
Uncertainties of Mayak urine data
Miller, Guthrie; Vostrotin, Vadim; Vvdensky, Vladimir
2008-01-01
For internal dose calculations for the Mayak worker epidemiological study, quantitative estimates of uncertainty of the urine measurements are necessary. Some of the data consist of measurements of 24h urine excretion on successive days (e.g. 3 or 4 days). In a recent publication, dose calculations were done where the uncertainty of the urine measurements was estimated starting from the statistical standard deviation of these replicate mesurements. This approach is straightforward and accurate when the number of replicate measurements is large, however, a Monte Carlo study showed it to be problematic for the actual number of replicate measurements (median from 3 to 4). Also, it is sometimes important to characterize the uncertainty of a single urine measurement. Therefore this alternate method has been developed. A method of parameterizing the uncertainty of Mayak urine bioassay measmements is described. The Poisson lognormal model is assumed and data from 63 cases (1099 urine measurements in all) are used to empirically determine the lognormal normalization uncertainty, given the measurement uncertainties obtained from count quantities. The natural logarithm of the geometric standard deviation of the normalization uncertainty is found to be in the range 0.31 to 0.35 including a measurement component estimated to be 0.2.
Krayer von Krauss, Martin Paul; Kaiser, Matthias; Almaas, Vibeke; van der Sluijs, Jeroen; Kloprogge, Penny
2008-02-01
Uncertainty often becomes problematic when science is used to support decision making in the policy process. Scientists can contribute to a more constructive approach to uncertainty by making their uncertainties transparent. In this article, an approach to systematic uncertainty diagnosis is demonstrated on the case study of transgene silencing and GMO risk assessment. Detailed interviews were conducted with five experts on transgene silencing to obtain quantitative and qualitative information on their perceptions of the uncertainty characterising our knowledge of the phenomena. The results indicate that there are competing interpretations of the cause-effect relationships leading to gene silencing (model structure uncertainty). In particular, the roles of post-transcriptional gene silencing, position effects, DNA-DNA interactions, direct-repeat DNA structures, recognition factors and dsRNA and aberrant zRNA are debated. The study highlights several sources of uncertainty beyond the statistical uncertainty commonly reported in risk assessment. The results also reveal a discrepancy between the way in which uncertainties would be prioritized on the basis of the uncertainty analysis conducted, and the way in which they would be prioritized on the basis of expert willingness to pay to eliminate uncertainty. The results also reveal a diversity of expert opinions on the uncertainty characterizing transgene silencing. Because the methodology used to diagnose uncertainties was successful in revealing a broad spectrum of uncertainties as well as a diversity of expert opinion, it is concluded that the methodology used could contribute to increasing transparency and fostering a critical discussion on uncertainty in the decision making process. PMID:17988720
Probabilistic simulation of uncertainties in thermal structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Shiao, Michael
1990-01-01
Development of probabilistic structural analysis methods for hot structures is a major activity at NASA-Lewis, and consists of five program elements: (1) probabilistic loads, (2) probabilistic finite element analysis, (3) probabilistic material behavior, (4) assessment of reliability and risk, and (5) probabilistic structural performance evaluation. Attention is given to quantification of the effects of uncertainties for several variables on High Pressure Fuel Turbopump blade temperature, pressure, and torque of the Space Shuttle Main Engine; the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; evaluation of the failure probability; reliability and risk-cost assessment; and an outline of an emerging approach for eventual hot structures certification. Collectively, the results demonstrate that the structural durability/reliability of hot structural components can be effectively evaluated in a formal probabilistic framework. In addition, the approach can be readily extended to computationally simulate certification of hot structures for aerospace environments.
On solar geoengineering and climate uncertainty
NASA Astrophysics Data System (ADS)
MacMartin, Douglas G.; Kravitz, Ben; Rasch, Philip J.
2015-09-01
Uncertain climate system response has been raised as a concern regarding solar geoengineering. We explore the effects of geoengineering on one source of climate system uncertainty by evaluating the intermodel spread across 12 climate models participating in the Geoengineering Model Intercomparison project. The model spread in simulations of climate change and the model spread in the response to solar geoengineering are not additive but rather partially cancel. That is, the model spread in regional temperature and precipitation changes is reduced with CO2 and a solar reduction, in comparison to the case with increased CO2 alone. Furthermore, differences between models in their efficacy (the relative global mean temperature effect of solar versus CO2 radiative forcing) explain most of the regional differences between models in their response to an increased CO2 concentration that is offset by a solar reduction. These conclusions are important for clarifying geoengineering risks regarding uncertainty.
On solar geoengineering and climate uncertainty
MacMartin, Douglas; Kravitz, Benjamin S.; Rasch, Philip J.
2015-09-03
Uncertainty in the climate system response has been raised as a concern regarding solar geoengineering. Here we show that model projections of regional climate change outcomes may have greater agreement under solar geoengineering than with CO2 alone. We explore the effects of geoengineering on one source of climate system uncertainty by evaluating the inter-model spread across 12 climate models participating in the Geoengineering Model Intercomparison project (GeoMIP). The model spread in regional temperature and precipitation changes is reduced with CO2 and a solar reduction, in comparison to the case with increased CO2 alone. That is, the intermodel spread in predictions of climate change and the model spread in the response to solar geoengineering are not additive but rather partially cancel. Furthermore, differences in efficacy explain most of the differences between models in their temperature response to an increase in CO2 that is offset by a solar reduction. These conclusions are important for clarifying geoengineering risks.
NASA Astrophysics Data System (ADS)
Zheng, Yi; Keller, Arturo A.
2007-08-01
Watershed-scale water quality models involve substantial uncertainty in model output because of sparse water quality observations and other sources of uncertainty. Assessing the uncertainty is very important for those who use the models to support management decision making. Systematic uncertainty analysis for these models has rarely been done and remains a major challenge. This study aimed (1) to develop a framework to characterize all important sources of uncertainty and their interactions in management-oriented watershed modeling, (2) to apply the generalized likelihood uncertainty estimation (GLUE) approach for quantifying simulation uncertainty for complex watershed models, and (3) to investigate the influence of subjective choices (especially the likelihood measure) in a GLUE analysis, as well as the availability of observational data, on the outcome of the uncertainty analysis. A two-stage framework was first established as the basis for uncertainty assessment and probabilistic decision-making. A watershed model (watershed analysis risk management framework (WARMF)) was implemented using data from the Santa Clara River Watershed in southern California. A typical catchment was constructed on which a series of experiments was conducted. The results show that GLUE can be implemented with affordable computational cost, yielding insights into the model behavior. However, in complex watershed water quality modeling, the uncertainty results highly depend on the subjective choices made by the modeler as well as the availability of observational data. The importance of considering management concerns in the uncertainty estimation was also demonstrated. Overall, this study establishes guidance for uncertainty assessment in management-oriented watershed modeling. The study results have suggested future efforts we could make in a GLUE-based uncertainty analysis, which has led to the development of a new method, as will be introduced in a companion paper. Eventually, the
Credible Computations: Standard and Uncertainty
NASA Technical Reports Server (NTRS)
Mehta, Unmeel B.; VanDalsem, William (Technical Monitor)
1995-01-01
The discipline of computational fluid dynamics (CFD) is at a crossroad. Most of the significant advances related to computational methods have taken place. The emphasis is now shifting from methods to results. Significant efforts are made in applying CFD to solve design problems. The value of CFD results in design depends on the credibility of computed results for the intended use. The process of establishing credibility requires a standard so that there is a consistency and uniformity in this process and in the interpretation of its outcome. The key element for establishing the credibility is the quantification of uncertainty. This paper presents salient features of a proposed standard and a procedure for determining the uncertainty. A customer of CFD products - computer codes and computed results - expects the following: A computer code in terms of its logic, numerics, and fluid dynamics and the results generated by this code are in compliance with specified requirements. This expectation is fulfilling by verification and validation of these requirements. The verification process assesses whether the problem is solved correctly and the validation process determines whether the right problem is solved. Standards for these processes are recommended. There is always some uncertainty, even if one uses validated models and verified computed results. The value of this uncertainty is important in the design process. This value is obtained by conducting a sensitivity-uncertainty analysis. Sensitivity analysis is generally defined as the procedure for determining the sensitivities of output parameters to input parameters. This analysis is a necessary step in the uncertainty analysis, and the results of this analysis highlight which computed quantities and integrated quantities in computations need to be determined accurately and which quantities do not require such attention. Uncertainty analysis is generally defined as the analysis of the effect of the uncertainties
Analysis and reduction of chemical models under uncertainty.
Oxberry, Geoff; Debusschere, Bert J.; Najm, Habib N.
2008-08-01
While models of combustion processes have been successful in developing engines with improved fuel economy, more costly simulations are required to accurately model pollution chemistry. These simulations will also involve significant parametric uncertainties. Computational singular perturbation (CSP) and polynomial chaos-uncertainty quantification (PC-UQ) can be used to mitigate the additional computational cost of modeling combustion with uncertain parameters. PC-UQ was used to interrogate and analyze the Davis-Skodje model, where the deterministic parameter in the model was replaced with an uncertain parameter. In addition, PC-UQ was combined with CSP to explore how model reduction could be combined with uncertainty quantification to understand how reduced models are affected by parametric uncertainty.
Managing uncertainty in family practice.
Biehn, J.
1982-01-01
Because patients present in the early stages of undifferentiated problems, the family physician often faces uncertainty, especially in diagnosis and management. The physician's uncertainty may be unacceptable to the patient and may lead to inappropriate use of diagnostic procedures. The problem is intensified by the physician's hospital training, which emphasizes mastery of available knowledge and decision-making based on certainty. Strategies by which a physician may manage uncertainty include (a) a more open doctor-patient relationship, (b) understanding the patient's reason for attending the office, (c) a thorough assessment of the problem, (d) a commitment to reassessment and (e) appropriate consultation. PMID:7074488
Uncertainties in large space systems
NASA Technical Reports Server (NTRS)
Fuh, Jon-Shen
1988-01-01
Uncertainties of a large space system (LSS) can be deterministic or stochastic in nature. The former may result in, for example, an energy spillover problem by which the interaction between unmodeled modes and controls may cause system instability. The stochastic uncertainties are responsible for mode localization and estimation errors, etc. We will address the effects of uncertainties on structural model formulation, use of available test data to verify and modify analytical models before orbiting, and how the system model can be further improved in the on-orbit environment.
Quantum Cryptography Without Quantum Uncertainties
NASA Astrophysics Data System (ADS)
Durt, Thomas
2002-06-01
Quantum cryptography aims at transmitting a random key in such a way that the presence of a spy eavesdropping the communication would be revealed by disturbances in the transmission of the message. In standard quantum cryptography, this unavoidable disturbance is a consequence of the uncertainty principle of Heisenberg. We propose in this paper to replace quantum uncertainties by generalised, technological uncertainties, and discuss the realisability of such an idea. The proposed protocol can be considered as a simplification, but also as a generalisation of the standard quantum cryptographic protocols.
ERIC Educational Resources Information Center
Jordan, Michelle E.; Babrow, Austin S.
2013-01-01
This study offers a systematic analysis of uncertainty in communication education by examining communication goals and challenges in the context of collaborative creative problem-solving in engineering assignments. Engineering design projects are seen as having the potential to help K-12 students learn to deal with uncertainty as well as a means…
Systematic errors in precipitation measurements with different rain gauge sensors
NASA Astrophysics Data System (ADS)
Sungmin, O.; Foelsche, Ulrich
2015-04-01
Ground-level rain gauges provide the most direct measurement of precipitation and therefore such precipitation measurement datasets are often utilized for the evaluation of precipitation estimates via remote sensing and in climate model simulations. However, measured precipitation by means of national standard gauge networks is constrained by their spatial density. For this reason, in order to accurately measure precipitation it is of essential importance to understand the performance and reliability of rain gauges. This study is aimed to assess the systematic errors between measurements taken with different rain gauge sensors. We will mainly address extreme precipitation events as these are connected with high uncertainties in the measurements. Precipitation datasets for the study are available from WegenerNet, a dense network of 151 meteorological stations within an area of about 20 km × 15 km centred near the city of Feldbach in the southeast of Austria. The WegenerNet has a horizontal resolution of about 1.4-km and employs 'tripping bucket' rain gauges for precipitation measurements with three different types of sensors; a reference station provides measurements from all types of sensors. The results will illustrate systematic errors via the comparison of the precipitation datasets gained with different types of sensors. The analyses will be carried out by direct comparison between the datasets from the reference station. In addition, the dependence of the systematic errors on meteorological conditions, e.g. precipitation intensity and wind speed, will be investigated to assess the feasibility of applying the WegenerNet datasets for the study of extreme precipitation events. The study can be regarded as a pre-processing research to further studies in hydro-meteorological applications, which require high-resolution precipitation datasets, such as satellite/radar-derived precipitation validation and hydrodynamic modelling.
Systematic review automation technologies
2014-01-01
Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors for the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage the optimized workflow will lead to system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128
Farrance, Ian; Frenkel, Robert
2014-01-01
The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more ‘constants’, each of which has an empirically derived numerical value. Such empirically derived ‘constants’ must also have associated uncertainties which propagate through the functional
Uncertainty assessment tool for climate change impact indicators
NASA Astrophysics Data System (ADS)
Otto, Juliane; Keup-Thiel, Elke; Jacob, Daniela; Rechid, Diana; Lückenkötter, Johannes; Juckes, Martin
2015-04-01
A major difficulty in the study of climate change impact indicators is dealing with the numerous sources of uncertainties of climate and non-climate data . Its assessment, however, is needed to communicate to users the degree of certainty of climate change impact indicators. This communication of uncertainty is an important component of the FP7 project "Climate Information Portal for Copernicus" (CLIPC). CLIPC is developing a portal to provide a central point of access for authoritative scientific information on climate change. In this project the Climate Service Center 2.0 is in charge of the development of a tool to assess the uncertainty of climate change impact indicators. The calculation of climate change impact indicators will include climate data from satellite and in-situ observations, climate models and re-analyses, and non-climate data. There is a lack of a systematic classification of uncertainties arising from the whole range of climate change impact indicators. We develop a framework that intends to clarify the potential sources of uncertainty of a given indicator and provides - if possible - solutions how to quantify the uncertainties. To structure the sources of uncertainties of climate change impact indicators, we first classify uncertainties along a 'cascade of uncertainty' (Reyer 2013). Our cascade consists of three levels which correspond to the CLIPC meta-classification of impact indicators: Tier-1 indicators are intended to give information on the climate system. Tier-2 indicators attempt to quantify the impacts of climate change on biophysical systems (i.e. flood risks). Tier-3 indicators primarily aim at providing information on the socio-economic systems affected by climate change. At each level, the potential sources of uncertainty of the input data sets and its processing will be discussed. Reference: Reyer, C. (2013): The cascade of uncertainty in modeling forest ecosystem responses to environmental change and the challenge of sustainable
Non-scalar uncertainty: Uncertainty in dynamic systems
NASA Technical Reports Server (NTRS)
Martinez, Salvador Gutierrez
1992-01-01
The following point is stated throughout the paper: dynamic systems are usually subject to uncertainty, be it the unavoidable quantic uncertainty when working with sufficiently small scales or when working in large scales uncertainty can be allowed by the researcher in order to simplify the problem, or it can be introduced by nonlinear interactions. Even though non-quantic uncertainty can generally be dealt with by using the ordinary probability formalisms, it can also be studied with the proposed non-scalar formalism. Thus, non-scalar uncertainty is a more general theoretical framework giving insight into the nature of uncertainty and providing a practical tool in those cases in which scalar uncertainty is not enough, such as when studying highly nonlinear dynamic systems. This paper's specific contribution is the general concept of non-scalar uncertainty and a first proposal for a methodology. Applications should be based upon this methodology. The advantage of this approach is to provide simpler mathematical models for prediction of the system states. Present conventional tools for dealing with uncertainty prove insufficient for an effective description of some dynamic systems. The main limitations are overcome abandoning ordinary scalar algebra in the real interval (0, 1) in favor of a tensor field with a much richer structure and generality. This approach gives insight into the interpretation of Quantum Mechanics and will have its most profound consequences in the fields of elementary particle physics and nonlinear dynamic systems. Concepts like 'interfering alternatives' and 'discrete states' have an elegant explanation in this framework in terms of properties of dynamic systems such as strange attractors and chaos. The tensor formalism proves especially useful to describe the mechanics of representing dynamic systems with models that are closer to reality and have relatively much simpler solutions. It was found to be wise to get an approximate solution to an
Climate Projections and Uncertainty Communication.
Joslyn, Susan L; LeClerc, Jared E
2016-01-01
Lingering skepticism about climate change might be due in part to the way climate projections are perceived by members of the public. Variability between scientists' estimates might give the impression that scientists disagree about the fact of climate change rather than about details concerning the extent or timing. Providing uncertainty estimates might clarify that the variability is due in part to quantifiable uncertainty inherent in the prediction process, thereby increasing people's trust in climate projections. This hypothesis was tested in two experiments. Results suggest that including uncertainty estimates along with climate projections leads to an increase in participants' trust in the information. Analyses explored the roles of time, place, demographic differences (e.g., age, gender, education level, political party affiliation), and initial belief in climate change. Implications are discussed in terms of the potential benefit of adding uncertainty estimates to public climate projections. PMID:26695995
Visualizing uncertainty about the future.
Spiegelhalter, David; Pearson, Mike; Short, Ian
2011-09-01
We are all faced with uncertainty about the future, but we can get the measure of some uncertainties in terms of probabilities. Probabilities are notoriously difficult to communicate effectively to lay audiences, and in this review we examine current practice for communicating uncertainties visually, using examples drawn from sport, weather, climate, health, economics, and politics. Despite the burgeoning interest in infographics, there is limited experimental evidence on how different types of visualizations are processed and understood, although the effectiveness of some graphics clearly depends on the relative numeracy of an audience. Fortunately, it is increasingly easy to present data in the form of interactive visualizations and in multiple types of representation that can be adjusted to user needs and capabilities. Nonetheless, communicating deeper uncertainties resulting from incomplete or disputed knowledge--or from essential indeterminacy about the future--remains a challenge. PMID:21903802
Dynamical Realism and Uncertainty Propagation
NASA Astrophysics Data System (ADS)
Park, Inkwan
In recent years, Space Situational Awareness (SSA) has become increasingly important as the number of tracked Resident Space Objects (RSOs) continues their growth. One of the most significant technical discussions in SSA is how to propagate state uncertainty in a consistent way with the highly nonlinear dynamical environment. In order to keep pace with this situation, various methods have been proposed to propagate uncertainty accurately by capturing the nonlinearity of the dynamical system. We notice that all of the methods commonly focus on a way to describe the dynamical system as precisely as possible based on a mathematical perspective. This study proposes a new perspective based on understanding dynamics of the evolution of uncertainty itself. We expect that profound insights of the dynamical system could present the possibility to develop a new method for accurate uncertainty propagation. These approaches are naturally concluded in goals of the study. At first, we investigate the most dominant factors in the evolution of uncertainty to realize the dynamical system more rigorously. Second, we aim at developing the new method based on the first investigation enabling orbit uncertainty propagation efficiently while maintaining accuracy. We eliminate the short-period variations from the dynamical system, called a simplified dynamical system (SDS), to investigate the most dominant factors. In order to achieve this goal, the Lie transformation method is introduced since this transformation can define the solutions for each variation separately. From the first investigation, we conclude that the secular variations, including the long-period variations, are dominant for the propagation of uncertainty, i.e., short-period variations are negligible. Then, we develop the new method by combining the SDS and the higher-order nonlinear expansion method, called state transition tensors (STTs). The new method retains advantages of the SDS and the STTs and propagates
Wildfire Decision Making Under Uncertainty
NASA Astrophysics Data System (ADS)
Thompson, M.
2013-12-01
Decisions relating to wildfire management are subject to multiple sources of uncertainty, and are made by a broad range of individuals, across a multitude of environmental and socioeconomic contexts. In this presentation I will review progress towards identification and characterization of uncertainties and how this information can support wildfire decision-making. First, I will review a typology of uncertainties common to wildfire management, highlighting some of the more salient sources of uncertainty and how they present challenges to assessing wildfire risk. This discussion will cover the expanding role of burn probability modeling, approaches for characterizing fire effects, and the role of multi-criteria decision analysis, and will provide illustrative examples of integrated wildfire risk assessment across a variety of planning scales. Second, I will describe a related uncertainty typology that focuses on the human dimensions of wildfire management, specifically addressing how social, psychological, and institutional factors may impair cost-effective risk mitigation. This discussion will encompass decision processes before, during, and after fire events, with a specific focus on active management of complex wildfire incidents. An improved ability to characterize uncertainties faced in wildfire management could lead to improved delivery of decision support, targeted communication strategies, and ultimately to improved wildfire management outcomes.
Uncertainty in measurements by counting
NASA Astrophysics Data System (ADS)
Bich, Walter; Pennecchi, Francesca
2012-02-01
Counting is at the base of many high-level measurements, such as, for example, frequency measurements. In some instances the measurand itself is a number of events, such as spontaneous decays in activity measurements, or objects, such as colonies of bacteria in microbiology. Countings also play a fundamental role in everyday life. In any case, a counting is a measurement. A measurement result, according to its present definition, as given in the 'International Vocabulary of Metrology—Basic and general concepts and associated terms (VIM)', must include a specification concerning the estimated uncertainty. As concerns measurements by counting, this specification is not easy to encompass in the well-known framework of the 'Guide to the Expression of Uncertainty in Measurement', known as GUM, in which there is no guidance on the topic. Furthermore, the issue of uncertainty in countings has received little or no attention in the literature, so that it is commonly accepted that this category of measurements constitutes an exception in which the concept of uncertainty is not applicable, or, alternatively, that results of measurements by counting have essentially no uncertainty. In this paper we propose a general model for measurements by counting which allows an uncertainty evaluation compliant with the general framework of the GUM.
Uncertainty of empirical correlation equations
NASA Astrophysics Data System (ADS)
Feistel, R.; Lovell-Smith, J. W.; Saunders, P.; Seitz, S.
2016-08-01
The International Association for the Properties of Water and Steam (IAPWS) has published a set of empirical reference equations of state, forming the basis of the 2010 Thermodynamic Equation of Seawater (TEOS-10), from which all thermodynamic properties of seawater, ice, and humid air can be derived in a thermodynamically consistent manner. For each of the equations of state, the parameters have been found by simultaneously fitting equations for a range of different derived quantities using large sets of measurements of these quantities. In some cases, uncertainties in these fitted equations have been assigned based on the uncertainties of the measurement results. However, because uncertainties in the parameter values have not been determined, it is not possible to estimate the uncertainty in many of the useful quantities that can be calculated using the parameters. In this paper we demonstrate how the method of generalised least squares (GLS), in which the covariance of the input data is propagated into the values calculated by the fitted equation, and in particular into the covariance matrix of the fitted parameters, can be applied to one of the TEOS-10 equations of state, namely IAPWS-95 for fluid pure water. Using the calculated parameter covariance matrix, we provide some preliminary estimates of the uncertainties in derived quantities, namely the second and third virial coefficients for water. We recommend further investigation of the GLS method for use as a standard method for calculating and propagating the uncertainties of values computed from empirical equations.
Visual Scanning Hartmann Optical Tester (VSHOT) Uncertainty Analysis (Milestone Report)
Gray, A.; Lewandowski, A.; Wendelin, T.
2010-10-01
In 1997, an uncertainty analysis was conducted of the Video Scanning Hartmann Optical Tester (VSHOT). In 2010, we have completed a new analysis, based primarily on the geometric optics of the system, and it shows sensitivities to various design and operational parameters. We discuss sources of error with measuring devices, instrument calibrations, and operator measurements for a parabolic trough mirror panel test. These help to guide the operator in proper setup, and help end-users to understand the data they are provided. We include both the systematic (bias) and random (precision) errors for VSHOT testing and their contributions to the uncertainty. The contributing factors we considered in this study are: target tilt; target face to laser output distance; instrument vertical offset; laser output angle; distance between the tool and the test piece; camera calibration; and laser scanner. These contributing factors were applied to the calculated slope error, focal length, and test article tilt that are generated by the VSHOT data processing. Results show the estimated 2-sigma uncertainty in slope error for a parabolic trough line scan test to be +/-0.2 milliradians; uncertainty in the focal length is +/- 0.1 mm, and the uncertainty in test article tilt is +/- 0.04 milliradians.
The NASA Langley Multidisciplinary Uncertainty Quantification Challenge
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.
Uncertainty Quantification of Calculated Temperatures for the AGR-1 Experiment
Binh T. Pham; Jeffrey J. Einerson; Grant L. Hawkes
2012-04-01
This report documents an effort to quantify the uncertainty of the calculated temperature data for the first Advanced Gas Reactor (AGR-1) fuel irradiation experiment conducted in the INL's Advanced Test Reactor (ATR) in support of the Next Generation Nuclear Plant (NGNP) R&D program. Recognizing uncertainties inherent in physics and thermal simulations of the AGR-1 test, the results of the numerical simulations can be used in combination with the statistical analysis methods to improve qualification of measured data. Additionally, the temperature simulation data for AGR tests can be used for validation of the fuel transport and fuel performance simulation models. The crucial roles of the calculated fuel temperatures in ensuring achievement of the AGR experimental program objectives require accurate determination of the model temperature uncertainties. The report is organized into three chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program and provides overviews of AGR-1 measured data, AGR-1 test configuration and test procedure, and thermal simulation. Chapters 2 describes the uncertainty quantification procedure for temperature simulation data of the AGR-1 experiment, namely, (i) identify and quantify uncertainty sources; (ii) perform sensitivity analysis for several thermal test conditions; (iii) use uncertainty propagation to quantify overall response temperature uncertainty. A set of issues associated with modeling uncertainties resulting from the expert assessments are identified. This also includes the experimental design to estimate the main effects and interactions of the important thermal model parameters. Chapter 3 presents the overall uncertainty results for the six AGR-1 capsules. This includes uncertainties for the daily volume-average and peak fuel temperatures, daily average temperatures at TC locations, and time-average volume-average and time-average peak fuel temperatures.
Uncertainty Quantification of Calculated Temperatures for the AGR-1 Experiment
Binh T. Pham; Jeffrey J. Einerson; Grant L. Hawkes
2013-03-01
This report documents an effort to quantify the uncertainty of the calculated temperature data for the first Advanced Gas Reactor (AGR-1) fuel irradiation experiment conducted in the INL’s Advanced Test Reactor (ATR) in support of the Next Generation Nuclear Plant (NGNP) R&D program. Recognizing uncertainties inherent in physics and thermal simulations of the AGR-1 test, the results of the numerical simulations can be used in combination with the statistical analysis methods to improve qualification of measured data. Additionally, the temperature simulation data for AGR tests can be used for validation of the fuel transport and fuel performance simulation models. The crucial roles of the calculated fuel temperatures in ensuring achievement of the AGR experimental program objectives require accurate determination of the model temperature uncertainties. The report is organized into three chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program and provides overviews of AGR-1 measured data, AGR-1 test configuration and test procedure, and thermal simulation. Chapters 2 describes the uncertainty quantification procedure for temperature simulation data of the AGR-1 experiment, namely, (i) identify and quantify uncertainty sources; (ii) perform sensitivity analysis for several thermal test conditions; (iii) use uncertainty propagation to quantify overall response temperature uncertainty. A set of issues associated with modeling uncertainties resulting from the expert assessments are identified. This also includes the experimental design to estimate the main effects and interactions of the important thermal model parameters. Chapter 3 presents the overall uncertainty results for the six AGR-1 capsules. This includes uncertainties for the daily volume-average and peak fuel temperatures, daily average temperatures at TC locations, and time-average volume-average and time-average peak fuel temperatures.
Preliminary assessment of the impact of conceptual model uncertainty on site performance
Gallegos, D.P.; Pohl, P.I.; Olague, N.E.; Knowlton, R.G.; Updegraff, C.D.
1990-10-01
The US Department of Energy is responsible for the design, construction, operation, and decommission of a site for the deep geologic disposal of high-level radioactive waste (HLW). This involves site characterization and the use of performance assessment to demonstrate compliance with regulations for HLW disposal from the US Environmental Protection Agency (EPA) and the US Nuclear Regulatory Commission. The EPA standard states that a performance assessment should consider the associated uncertainties involved in estimating cumulative release of radionuclides to the accessible environment. To date, the majority of the efforts in uncertainty analysis have been directed toward data and parameter uncertainty, whereas little effort has been made to treat model uncertainty. Model uncertainty includes conceptual model uncertainty, mathematical model uncertainty, and any uncertainties derived from implementing the mathematical model in a computer code. Currently there is no systematic approach that is designed to address the uncertainty in conceptual models. The purpose of this investigation is to take a first step at addressing conceptual model uncertainty. This will be accomplished by assessing the relative impact of alternative conceptual models on the integrated release of radionuclides to the accessible environment for an HLW repository site located in unsaturated, fractured tuff. 4 refs., 2 figs.
Structural model uncertainty in stochastic simulation
McKay, M.D.; Morrison, J.D.
1997-09-01
Prediction uncertainty in stochastic simulation models can be described by a hierarchy of components: stochastic variability at the lowest level, input and parameter uncertainty at a higher level, and structural model uncertainty at the top. It is argued that a usual paradigm for analysis of input uncertainty is not suitable for application to structural model uncertainty. An approach more likely to produce an acceptable methodology for analyzing structural model uncertainty is one that uses characteristics specific to the particular family of models.
Reducing Uncertainties in Neutron-Induced Fission Cross Sections Using a Time Projection Chamber
NASA Astrophysics Data System (ADS)
Manning, Brett; Niffte Collaboration
2015-10-01
Neutron-induced fission cross sections for actinides have long been of great interest for nuclear energy and stockpile stewardship. Traditionally, measurements were performed using fission chambers which provided limited information about the detected fission events. For the case of 239Pu(n,f), sensitivity studies have shown a need for more precise measurements. Recently the Neutron Induced Fission Fragment Tracking Experiment (NIFFTE) has developed the fission Time Projection Chamber (fissionTPC) to measure fission cross sections to better than 1% uncertainty by providing 3D tracking of fission fragments. The fissionTPC collected data to calculate the 239Pu(n,f) cross section at the Weapons Neutron Research facility at the Los Alamos Neutron Science Center during the 2014 run cycle. Preliminary analysis has been focused on studying particle identification and target and beam non-uniformities to reduce the uncertainty on the cross section. Additionally, the collaboration is investigating other systematic errors that could not be well studied with a traditional fission chamber. LA-UR-15-24906.
Xu, Junrui; Lambert, James H
2015-04-01
Access management, which systematically limits opportunities for egress and ingress of vehicles to highway lanes, is critical to protect trillions of dollars of current investment in transportation. This article addresses allocating resources for access management with incomplete and partially relevant data on crash rates, travel speeds, and other factors. While access management can be effective to avoid crashes, reduce travel times, and increase route capacities, the literature suggests a need for performance metrics to guide investments in resource allocation across large corridor networks and several time horizons. In this article, we describe a quantitative decision model to support an access management program via risk-cost-benefit analysis under data uncertainties from diverse sources of data and expertise. The approach quantifies potential benefits, including safety improvement and travel time savings, and costs of access management through functional relationships of input parameters including crash rates, corridor access point densities, and traffic volumes. Parameter uncertainties, which vary across locales and experts, are addressed via numerical interval analyses. This approach is demonstrated at several geographic scales across 7,000 kilometers of highways in a geographic region and several subregions. The demonstration prioritizes route segments that would benefit from risk management, including (i) additional data or elicitation, (ii) right-of-way purchases, (iii) restriction or closing of access points, (iv) new alignments, (v) developer proffers, and (vi) etc. The approach ought to be of wide interest to analysts, planners, policymakers, and stakeholders who rely on heterogeneous data and expertise for risk management. PMID:24924626
Modelling ecosystem service flows under uncertainty with stochiastic SPAN
Johnson, Gary W.; Snapp, Robert R.; Villa, Ferdinando; Bagstad, Kenneth J.
2012-01-01
Ecosystem service models are increasingly in demand for decision making. However, the data required to run these models are often patchy, missing, outdated, or untrustworthy. Further, communication of data and model uncertainty to decision makers is often either absent or unintuitive. In this work, we introduce a systematic approach to addressing both the data gap and the difﬁculty in communicating uncertainty through a stochastic adaptation of the Service Path Attribution Networks (SPAN) framework. The SPAN formalism assesses ecosystem services through a set of up to 16 maps, which characterize the services in a study area in terms of ﬂow pathways between ecosystems and human beneﬁciaries. Although the SPAN algorithms were originally deﬁned deterministically, we present them here in a stochastic framework which combines probabilistic input data with a stochastic transport model in order to generate probabilistic spatial outputs. This enables a novel feature among ecosystem service models: the ability to spatially visualize uncertainty in the model results. The stochastic SPAN model can analyze areas where data limitations are prohibitive for deterministic models. Greater uncertainty in the model inputs (including missing data) should lead to greater uncertainty expressed in the model’s output distributions. By using Bayesian belief networks to ﬁll data gaps and expert-provided trust assignments to augment untrustworthy or outdated information, we can account for uncertainty in input data, producing a model that is still able to run and provide information where strictly deterministic models could not. Taken together, these attributes enable more robust and intuitive modelling of ecosystem services under uncertainty.
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2008-09-01
This report presents the forward sensitivity analysis method as a means for quantification of uncertainty in system analysis. The traditional approach to uncertainty quantification is based on a “black box” approach. The simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in and the distribution of the outputs is measured and correlated back to the original input distribution. This approach requires large number of simulation runs and therefore has high computational cost. Contrary to the “black box” method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In this approach equations for the propagation of uncertainty are constructed and the sensitivity is solved for as variables in the same simulation. This “glass box” method can generate similar sensitivity information as the above “black box” approach with couples of runs to cover a large uncertainty region. Because only small numbers of runs are required, those runs can be done with a high accuracy in space and time ensuring that the uncertainty of the physical model is being measured and not simply the numerical error caused by the coarse discretization. In the forward sensitivity method, the model is differentiated with respect to each parameter to yield an additional system of the same size as the original one, the result of which is the solution sensitivity. The sensitivity of any output variable can then be directly obtained from these sensitivities by applying the chain rule of differentiation. We extend the forward sensitivity method to include time and spatial steps as special parameters so that the numerical errors can be quantified against other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty analysis. By knowing the relative sensitivity of time and space steps with other
Uncertainty in Regional Air Quality Modeling
NASA Astrophysics Data System (ADS)
Digar, Antara
concentrations (oxides of nitrogen) have been used to adjust probabilistic estimates of pollutant sensitivities based on the performance of simulations in reliably reproducing ambient measurements. Various observational metrics have been explored for better scientific understanding of how sensitivity estimates vary with measurement constraints. Future work could extend these methods to incorporate additional modeling uncertainties and alternate observational metrics, and explore the responsiveness of future air quality to project trends in emissions and climate change.
MODIS On-orbit Calibration Uncertainty Assessment
NASA Technical Reports Server (NTRS)
Chiang, Vincent; Sun, Junqiang; Wu, Aisheng
2011-01-01
MODIS has 20 reflective solar bands (RSB) and 16 thermal emissive bands (TEB). Compared to its heritage sensors, MODIS was developed with very stringent calibration uncertainty requirements. As a result, MODIS was designed and built with a set of on-board calibrators (OBC), which allow key sensor performance parameters and on-orbit calibration coefficients to be monitored and updated. In terms of its calibration traceability, MODIS RSB calibration is reflectance based using an on-board solar diffuser (SD) and the TEB calibration is radiance based using an on-board blackbody (BB). In addition to on-orbit calibration coefficients derived from its OBC, calibration parameters determined from sensor pre-launch calibration and characterization are used in both the RSB and TEB calibration and retrieval algorithms. This paper provides a brief description of MODIS calibration methodologies and an in-depth analysis of its on-orbit calibration uncertainties. Also discussed in this paper are uncertainty contributions from individual components and differences due to Terra and Aqua MODIS instrument characteristics and on-orbit performance.
Uncertainty Analysis of non-point source pollution control facilities design techniques in Korea
NASA Astrophysics Data System (ADS)
Lee, J.; Okjeong, L.; Gyeong, C. B.; Park, M. W.; Kim, S.
2015-12-01
The design of non-point sources control facilities in Korea is divided largely by the stormwater capture ratio, the stormwater load capture ratio, and the pollutant reduction efficiency of the facility. The stormwater capture ratio is given by a design formula as a function of the water quality treatment capacity, the greater the capacity, the more the amount of stormwater intercepted by the facility. The stormwater load capture ratio is defined as the ratio of the load entering the facility of the total pollutant load generated in the target catchment, and is given as a design formula represented by a function of the stormwater capture ratio. In order to estimate the stormwater capture ratio and load capture ratio, a lot of quantitative analysis of hydrologic processes acted in pollutant emission is required, but these formulas have been applied without any verification. Since systematic monitoring programs were insufficient, verification of these formulas was fundamentally impossible. However, recently the Korean ministry of Environment has conducted an long-term systematic monitoring project, and thus the verification of the formulas became possible. In this presentation, the stormwater capture ratio and load capture ratio are re-estimated using actual TP data obtained from long-term monitoring program at Noksan industrial complex located in Busan, Korea. Through the re-estimated process, the uncertainty included in the design process that has been applied until now will be shown in a quantitative extent. In addition, each uncertainty included in the stormwater capture ratio estimation and in the stormwater load capture ratio estimation will be expressed to quantify the relative impact on the overall non-point pollutant control facilities design process. Finally, the SWMM-Matlab interlocking module for model parameters estimation will be introduced. Acknowledgement This subject is supported by Korea Ministry of Environment as "The Eco Innovation Project : Non
Uncertainty relations for angular momentum
NASA Astrophysics Data System (ADS)
Dammeier, Lars; Schwonnek, René; Werner, Reinhard F.
2015-09-01
In this work we study various notions of uncertainty for angular momentum in the spin-s representation of SU(2). We characterize the ‘uncertainty regions’ given by all vectors, whose components are specified by the variances of the three angular momentum components. A basic feature of this set is a lower bound for the sum of the three variances. We give a method for obtaining optimal lower bounds for uncertainty regions for general operator triples, and evaluate these for small s. Further lower bounds are derived by generalizing the technique by which Robertson obtained his state-dependent lower bound. These are optimal for large s, since they are saturated by states taken from the Holstein-Primakoff approximation. We show that, for all s, all variances are consistent with the so-called vector model, i.e., they can also be realized by a classical probability measure on a sphere of radius \\sqrt{s(s+1)}. Entropic uncertainty relations can be discussed similarly, but are minimized by different states than those minimizing the variances for small s. For large s the Maassen-Uffink bound becomes sharp and we explicitly describe the extremalizing states. Measurement uncertainty, as recently discussed by Busch, Lahti and Werner for position and momentum, is introduced and a generalized observable (POVM) which minimizes the worst case measurement uncertainty of all angular momentum components is explicitly determined, along with the minimal uncertainty. The output vectors for the optimal measurement all have the same length r(s), where r(s)/s\\to 1 as s\\to ∞ .
Measuring the uncertainties of discharge measurements: interlaboratory experiments in hydrometry
NASA Astrophysics Data System (ADS)
Le Coz, Jérôme; Blanquart, Bertrand; Pobanz, Karine; Dramais, Guillaume; Pierrefeu, Gilles; Hauet, Alexandre; Despax, Aurélien
2015-04-01
Quantifying the uncertainty of streamflow data is key for hydrological sciences. The conventional uncertainty analysis based on error propagation techniques is restricted by the absence of traceable discharge standards and by the weight of difficult-to-predict errors related to the operator, procedure and measurement environment. Field interlaboratory experiments recently emerged as an efficient, standardized method to 'measure' the uncertainties of a given streamgauging technique in given measurement conditions. Both uncertainty approaches are compatible and should be developed jointly in the field of hydrometry. In the recent years, several interlaboratory experiments have been reported by different hydrological services. They involved different streamgauging techniques, including acoustic profilers (ADCP), current-meters and handheld radars (SVR). Uncertainty analysis was not always their primary goal: most often, testing the proficiency and homogeneity of instruments, makes and models, procedures and operators was the original motivation. When interlaboratory experiments are processed for uncertainty analysis, once outliers have been discarded all participants are assumed to be equally skilled and to apply the same streamgauging technique in equivalent conditions. A universal requirement is that all participants simultaneously measure the same discharge, which shall be kept constant within negligible variations. To our best knowledge, we were the first to apply the interlaboratory method for computing the uncertainties of streamgauging techniques, according to the authoritative international documents (ISO standards). Several specific issues arise due to the measurements conditions in outdoor canals and rivers. The main limitation is that the best available river discharge references are usually too uncertain to quantify the bias of the streamgauging technique, i.e. the systematic errors that are common to all participants in the experiment. A reference or a
Quantifying uncertainty in stable isotope mixing models
NASA Astrophysics Data System (ADS)
Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.
2015-05-01
Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods, Stable Isotope Analysis in R (SIAR), a pure Monte Carlo technique (PMC), and Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O) but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened where any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors where solutions included sources that were not contributing to the sample. In Phase III some sources were eliminated based on assumed site knowledge and assumed nitrate concentrations, substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition as well as demonstrating the value of additional information in reducing the uncertainty in calculated
Quantifying uncertainty in stable isotope mixing models
Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.
2015-05-19
Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods, SIAR [Parnell et al., 2010] a pure Monte Carlo technique (PMC), and Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ^{15}N and δ^{18}O) but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened where any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors where solutions included sources that were not contributing to the sample. In Phase III some sources were eliminated based on assumed site knowledge and assumed nitrate concentrations, substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition as well as demonstrating the value of additional information in reducing the
Quantifying uncertainty in stable isotope mixing models
Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.
2015-05-19
Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods, SIAR [Parnell et al., 2010] a pure Monte Carlo technique (PMC), and Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O) but all methods testedmore » are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened where any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors where solutions included sources that were not contributing to the sample. In Phase III some sources were eliminated based on assumed site knowledge and assumed nitrate concentrations, substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition as well as demonstrating the value of additional information in reducing the uncertainty in calculated
Cost uncertainty for different levels of technology maturity
DeMuth, S.F.; Franklin, A.L.
1996-08-07
It is difficult at best to apply a single methodology for estimating cost uncertainties related to technologies of differing maturity. While highly mature technologies may have significant performance and manufacturing cost data available, less well developed technologies may be defined in only conceptual terms. Regardless of the degree of technical maturity, often a cost estimate relating to application of the technology may be required to justify continued funding for development. Yet, a cost estimate without its associated uncertainty lacks the information required to assess the economic risk. For this reason, it is important for the developer to provide some type of uncertainty along with a cost estimate. This study demonstrates how different methodologies for estimating uncertainties can be applied to cost estimates for technologies of different maturities. For a less well developed technology an uncertainty analysis of the cost estimate can be based on a sensitivity analysis; whereas, an uncertainty analysis of the cost estimate for a well developed technology can be based on an error propagation technique from classical statistics. It was decided to demonstrate these uncertainty estimation techniques with (1) an investigation of the additional cost of remediation due to beyond baseline, nearly complete, waste heel retrieval from underground storage tanks (USTs) at Hanford; and (2) the cost related to the use of crystalline silico-titanate (CST) rather than the baseline CS100 ion exchange resin for cesium separation from UST waste at Hanford.
Reactions to Uncertainty and the Accuracy of Diagnostic Mammography
Yi, Joyce P.; Abraham, Linn A.; Miglioretti, Diana L.; Aiello, Erin J.; Gerrity, Martha S.; Reisch, Lisa; Berns, Eric A.; Sickles, Edward A.; Elmore, Joann G.
2007-01-01
Background Reactions to uncertainty in clinical medicine can affect decision making. Objective To assess the extent to which radiologists’ reactions to uncertainty influence diagnostic mammography interpretation. Design Cross-sectional responses to a mailed survey assessed reactions to uncertainty using a well-validated instrument. Responses were linked to radiologists’ diagnostic mammography interpretive performance obtained from three regional mammography registries. Participants One hundred thirty-two radiologists from New Hampshire, Colorado, and Washington. Measurement Mean scores and either standard errors or confidence intervals were used to assess physicians’ reactions to uncertainty. Multivariable logistic regression models were fit via generalized estimating equations to assess the impact of uncertainty on diagnostic mammography interpretive performance while adjusting for potential confounders. Results When examining radiologists’ interpretation of additional diagnostic mammograms (those after screening mammograms that detected abnormalities), a 5-point increase in the reactions to uncertainty score was associated with a 17% higher odds of having a positive mammogram given cancer was diagnosed during follow-up (sensitivity), a 6% lower odds of a negative mammogram given no cancer (specificity), a 4% lower odds (not significant) of a cancer diagnosis given a positive mammogram (positive predictive value [PPV]), and a 5% higher odds of having a positive mammogram (abnormal interpretation). Conclusion Mammograms interpreted by radiologists who have more discomfort with uncertainty have higher likelihood of being recalled. PMID:17356992
The Role of Uncertainty, Awareness, and Trust in Visual Analytics.
Sacha, Dominik; Senaratne, Hansi; Kwon, Bum Chul; Ellis, Geoffrey; Keim, Daniel A
2016-01-01
Visual analytics supports humans in generating knowledge from large and often complex datasets. Evidence is collected, collated and cross-linked with our existing knowledge. In the process, a myriad of analytical and visualisation techniques are employed to generate a visual representation of the data. These often introduce their own uncertainties, in addition to the ones inherent in the data, and these propagated and compounded uncertainties can result in impaired decision making. The user's confidence or trust in the results depends on the extent of user's awareness of the underlying uncertainties generated on the system side. This paper unpacks the uncertainties that propagate through visual analytics systems, illustrates how human's perceptual and cognitive biases influence the user's awareness of such uncertainties, and how this affects the user's trust building. The knowledge generation model for visual analytics is used to provide a terminology and framework to discuss the consequences of these aspects in knowledge construction and though examples, machine uncertainty is compared to human trust measures with provenance. Furthermore, guidelines for the design of uncertainty-aware systems are presented that can aid the user in better decision making. PMID:26529704
Entropic uncertainty and measurement reversibility
NASA Astrophysics Data System (ADS)
Berta, Mario; Wehner, Stephanie; Wilde, Mark M.
2016-07-01
The entropic uncertainty relation with quantum side information (EUR-QSI) from (Berta et al 2010 Nat. Phys. 6 659) is a unifying principle relating two distinctive features of quantum mechanics: quantum uncertainty due to measurement incompatibility, and entanglement. In these relations, quantum uncertainty takes the form of preparation uncertainty where one of two incompatible measurements is applied. In particular, the ‘uncertainty witness’ lower bound in the EUR-QSI is not a function of a post-measurement state. An insightful proof of the EUR-QSI from (Coles et al 2012 Phys. Rev. Lett. 108 210405) makes use of a fundamental mathematical consequence of the postulates of quantum mechanics known as the non-increase of quantum relative entropy under quantum channels. Here, we exploit this perspective to establish a tightening of the EUR-QSI which adds a new state-dependent term in the lower bound, related to how well one can reverse the action of a quantum measurement. As such, this new term is a direct function of the post-measurement state and can be thought of as quantifying how much disturbance a given measurement causes. Our result thus quantitatively unifies this feature of quantum mechanics with the others mentioned above. We have experimentally tested our theoretical predictions on the IBM quantum experience and find reasonable agreement between our predictions and experimental outcomes.
Uncertainty Quantification in Solidification Modelling
NASA Astrophysics Data System (ADS)
Fezi, K.; Krane, M. J. M.
2015-06-01
Numerical models have been used to simulate solidification processes, to gain insight into physical phenomena that cannot be observed experimentally. Often validation of such models has been done through comparison to a few or single experiments, in which agreement is dependent on both model and experimental uncertainty. As a first step to quantifying the uncertainty in the models, sensitivity and uncertainty analysis were performed on a simple steady state 1D solidification model of continuous casting of weld filler rod. This model includes conduction, advection, and release of latent heat was developed for use in uncertainty quantification in the calculation of the position of the liquidus and solidus and the solidification time. Using this model, a Smolyak sparse grid algorithm constructed a response surface that fit model outputs based on the range of uncertainty in the inputs to the model. The response surface was then used to determine the probability density functions (PDF's) of the model outputs and sensitivities of the inputs. This process was done for a linear fraction solid and temperature relationship, for which there is an analytical solution, and a Scheil relationship. Similar analysis was also performed on a transient 2D model of solidification in a rectangular domain.
Oscillator Stengths and Their Uncertainties
NASA Astrophysics Data System (ADS)
Wahlgren, G. M.
2010-11-01
The oscillator strength is a key parameter in the description of the line absorption coefficient. It can be determined through experiment, abinitio and semi-empirical calculations, and backward analysis of line profiles. Each method has its advantages, and the uncertainty attached to its determination can range from low to indeterminable. For analysis of line profiles or equivalent widths the uncertainty in the oscillator strength can rival or surpass the difference between the derived element abundance from classical LTE and non-LTE analyses. It is therefore important to understand the nature of oscillator strength uncertainties and to assess whether this uncertainty can be a factor in choosing to initiate a non-LTE analysis or in the interpretation of its results. Methods for the determination of the oscillator strength are presented, prioritizing experiments, along with commentary about the sources and impact of the uncertainties. The Sei spectrum is used to illustrate how gf-values can be constructed from published data on atomic lifetimes and line intensities.
Space Radiation Cancer Risks and Uncertainties for Mars Missions
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Schimmerling, W.; Wilson, J. W.; Peterson, L. E.; Badhwar, G. D.; Saganti, P. B.; Dicello, J. F.
2001-01-01
Projecting cancer risks from exposure to space radiation is highly uncertain because of the absence of data for humans and because of the limited radiobiology data available for estimating late effects from the high-energy and charge (HZE) ions present in the galactic cosmic rays (GCR). Cancer risk projections involve many biological and physical factors, each of which has a differential range of uncertainty due to the lack of data and knowledge. We discuss an uncertainty assessment within the linear-additivity model using the approach of Monte Carlo sampling from subjective error distributions that represent the lack of knowledge in each factor to quantify the overall uncertainty in risk projections. Calculations are performed using the space radiation environment and transport codes for several Mars mission scenarios. This approach leads to estimates of the uncertainties in cancer risk projections of 400-600% for a Mars mission. The uncertainties in the quality factors are dominant. Using safety standards developed for low-Earth orbit, long-term space missions (>90 days) outside the Earth's magnetic field are currently unacceptable if the confidence levels in risk projections are considered. Because GCR exposures involve multiple particle or delta-ray tracks per cellular array, our results suggest that the shape of the dose response at low dose rates may be an additional uncertainty for estimating space radiation risks.
Space radiation cancer risks and uncertainties for Mars missions.
Cucinotta, F A; Schimmerling, W; Wilson, J W; Peterson, L E; Badhwar, G D; Saganti, P B; Dicello, J F
2001-11-01
Projecting cancer risks from exposure to space radiation is highly uncertain because of the absence of data for humans and because of the limited radiobiology data available for estimating late effects from the high-energy and charge (HZE) ions present in the galactic cosmic rays (GCR). Cancer risk projections involve many biological and physical factors, each of which has a differential range of uncertainty due to the lack of data and knowledge. We discuss an uncertainty assessment within the linear-additivity model using the approach of Monte Carlo sampling from subjective error distributions that represent the lack of knowledge in each factor to quantify the overall uncertainty in risk projections. Calculations are performed using the space radiation environment and transport codes for several Mars mission scenarios. This approach leads to estimates of the uncertainties in cancer risk projections of 400-600% for a Mars mission. The uncertainties in the quality factors are dominant. Using safety standards developed for low-Earth orbit, long-term space missions (>90 days) outside the Earth's magnetic field are currently unacceptable if the confidence levels in risk projections are considered. Because GCR exposures involve multiple particle or delta-ray tracks per cellular array, our results suggest that the shape of the dose response at low dose rates may be an additional uncertainty for estimating space radiation risks. PMID:11604093
Analyzing the uncertainty of suspended sediment load prediction using sequential data assimilation
NASA Astrophysics Data System (ADS)
Leisenring, Marc; Moradkhani, Hamid
2012-10-01
SummaryA first step in understanding the impacts of sediment and controlling the sources of sediment is to quantify the mass loading. Since mass loading is the product of flow and concentration, the quantification of loads first requires the quantification of runoff volume. Using the National Weather Service's SNOW-17 and the Sacramento Soil Moisture Accounting (SAC-SMA) models, this study employed particle filter based Bayesian data assimilation methods to predict seasonal snow water equivalent (SWE) and runoff within a small watershed in the Lake Tahoe Basin located in California, USA. A procedure was developed to scale the variance multipliers (a.k.a hyperparameters) for model parameters and predictions based on the accuracy of the mean predictions relative to the ensemble spread. In addition, an online bias correction algorithm based on the lagged average bias was implemented to detect and correct for systematic bias in model forecasts prior to updating with the particle filter. Both of these methods significantly improved the performance of the particle filter without requiring excessively wide prediction bounds. The flow ensemble was linked to a non-linear regression model that was used to predict suspended sediment concentrations (SSCs) based on runoff rate and time of year. Runoff volumes and SSC were then combined to produce an ensemble of suspended sediment load estimates. Annual suspended sediment loads for the 5 years of simulation were finally computed along with 95% prediction intervals that account for uncertainty in both the SSC regression model and flow rate estimates. Understanding the uncertainty associated with annual suspended sediment load predictions is critical for making sound watershed management decisions aimed at maintaining the exceptional clarity of Lake Tahoe. The computational methods developed and applied in this research could assist with similar studies where it is important to quantify the predictive uncertainty of pollutant load
Capturing the complexity of uncertainty language to maximise its use.
NASA Astrophysics Data System (ADS)
Juanchich, Marie; Sirota, Miroslav
2016-04-01
Uncertainty is often communicated verbally, using uncertainty phrases such as 'there is a small risk of earthquake', 'flooding is possible' or 'it is very likely the sea level will rise'. Prior research has only examined a limited number of properties of uncertainty phrases: mainly the probability conveyed (e.g., 'a small chance' convey a small probability whereas 'it is likely' convey a high probability). We propose a new analytical framework that captures more of the complexity of uncertainty phrases by studying their semantic, pragmatic and syntactic properties. Further, we argue that the complexity of uncertainty phrases is functional and can be leveraged to best describe uncertain outcomes and achieve the goals of speakers. We will present findings from a corpus study and an experiment where we assessed the following properties of uncertainty phrases: probability conveyed, subjectivity, valence, nature of the subject, grammatical category of the uncertainty quantifier and whether the quantifier elicits a positive or a negative framing. Natural language processing techniques applied to corpus data showed that people use a very large variety of uncertainty phrases representing different configurations of the properties of uncertainty phrases (e.g., phrases that convey different levels of subjectivity, phrases with different grammatical construction). In addition, the corpus analysis uncovered that uncertainty phrases commonly studied in psychology are not the most commonly used in real life. In the experiment we manipulated the amount of evidence indicating that a fact was true and whether the participant was required to prove the fact was true or that it was false. Participants produced a phrase to communicate the likelihood that the fact was true (e.g., 'it is not sure…', 'I am convinced that…'). The analyses of the uncertainty phrases produced showed that participants leveraged the properties of uncertainty phrases to reflect the strength of evidence but
Communicating spatial uncertainty to non-experts using R
NASA Astrophysics Data System (ADS)
Luzzi, Damiano; Sawicka, Kasia; Heuvelink, Gerard; de Bruin, Sytze
2016-04-01
Effective visualisation methods are important for the efficient use of uncertainty information for various groups of users. Uncertainty propagation analysis is often used with spatial environmental models to quantify the uncertainty within the information. A challenge arises when trying to effectively communicate the uncertainty information to non-experts (not statisticians) in a wide range of cases. Due to the growing popularity and applicability of the open source programming language R, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. The package has implemented Monte Carlo algorithms for uncertainty propagation, the output of which is represented by an ensemble of model outputs (i.e. a sample from a probability distribution). Numerous visualisation methods exist that aim to present such spatial uncertainty information both statically, dynamically and interactively. To provide the most universal visualisation tools for non-experts, we conducted a survey on a group of 20 university students and assessed the effectiveness of selected static and interactive methods for visualising uncertainty in spatial variables such as DEM and land cover. The static methods included adjacent maps and glyphs for continuous variables. Both allow for displaying maps with information about the ensemble mean, variance/standard deviation and prediction intervals. Adjacent maps were also used for categorical data, displaying maps of the most probable class, as well as its associated probability. The interactive methods included a graphical user interface, which in addition to displaying the previously mentioned variables also allowed for comparison of joint uncertainties at multiple locations. The survey indicated that users could understand the basics of the uncertainty information displayed in the static maps, with the interactive interface allowing for more in-depth information. Subsequently, the R
Lack of data drives uncertainty in PCB health risk assessments.
Cogliano, Vincent James
2016-02-01
Health risk assessments generally involve many extrapolations: for example, from animals to humans or from high doses to lower doses. Health risk assessments for PCBs involve all the usual uncertainties, plus additional uncertainties due to the nature of PCBs as a dynamic, complex mixture. Environmental processes alter PCB mixtures after release into the environment, so that people are exposed to mixtures that might not resemble the mixtures where there are toxicity data. This paper discusses the evolution of understanding in assessments of the cancer and noncancer effects of PCBs. It identifies where a lack of data in the past contributed to significant uncertainty and where new data subsequently altered the prevailing understanding of the toxicity of PCB mixtures, either qualitatively or quantitatively. Finally, the paper identifies some uncertainties remaining for current PCB health assessments, particularly those that result from a lack of data on exposure through nursing or on effects from inhalation of PCBs. PMID:26347413
Uncertain LDA: Including Observation Uncertainties in Discriminative Transforms.
Saeidi, Rahim; Astudillo, Ramon Fernandez; Kolossa, Dorothea
2016-07-01
Linear discriminant analysis (LDA) is a powerful technique in pattern recognition to reduce the dimensionality of data vectors. It maximizes discriminability by retaining only those directions that minimize the ratio of within-class and between-class variance. In this paper, using the same principles as for conventional LDA, we propose to employ uncertainties of the noisy or distorted input data in order to estimate maximally discriminant directions. We demonstrate the efficiency of the proposed uncertain LDA on two applications using state-of-the-art techniques. First, we experiment with an automatic speech recognition task, in which the uncertainty of observations is imposed by real-world additive noise. Next, we examine a full-scale speaker recognition system, considering the utterance duration as the source of uncertainty in authenticating a speaker. The experimental results show that when employing an appropriate uncertainty estimation algorithm, uncertain LDA outperforms its conventional LDA counterpart. PMID:26415158
Analysis of uncertainties in turbine metal temperature predictions
NASA Technical Reports Server (NTRS)
Stepka, F. S.
1980-01-01
An analysis was conducted to examine the extent to which various factors influence the accuracy of analytically predicting turbine blade metal temperatures and to determine the uncertainties in these predictions for several accuracies of the influence factors. The advanced turbofan engine gas conditions of 1700 K and 40 atmospheres were considered along with those of a highly instrumented high temperature turbine test rig and a low temperature turbine rig that simulated the engine conditions. The analysis showed that the uncertainty in analytically predicting local blade temperature was as much as 98 K, or 7.6 percent of the metal absolute temperature, with current knowledge of the influence factors. The expected reductions in uncertainties in the influence factors with additional knowledge and tests should reduce the uncertainty in predicting blade metal temperature to 28 K, or 2.1 percent of the metal absolute temperature.
Systematic study of (α ,γ ) reactions for stable nickel isotopes
NASA Astrophysics Data System (ADS)
Simon, A.; Beard, M.; Spyrou, A.; Quinn, S. J.; Bucher, B.; Couder, M.; DeYoung, P. A.; Dombos, A. C.; Görres, J.; Kontos, A.; Long, A.; Moran, M. T.; Paul, N.; Pereira, J.; Robertson, D.; Smith, K.; Stech, E.; Talwar, R.; Tan, W. P.; Wiescher, M.
2015-08-01
A systematic measurement of the (α ,γ ) reaction for all the stable nickel isotopes has been performed using the γ -summing technique. For two of the isotopes, 60Ni and 61Ni, the α -capture cross sections have been experimentally measured for the first time. For 58,62,64Ni, the current measurement is in excellent agreement with earlier results found in the literature, and additionally extends the energy range of the measured cross sections up to 8.7 MeV. The data provided a tool for testing the cross section predictions of Hauser-Feshbach calculations. The experimental results were compared to the cross sections calculated with the talys 1.6 code and commonly used databases non-smoker and bruslib. For each of the investigated isotopes a combination of input parameter for talys was identified that best reproduces the experimental data, and recommended reaction rate has been calculated. Additionally, a set of inputs for Hauser-Feshbach calculations was given that, simultaneously for all the isotopes under consideration, reproduces the experimental data within the experimental uncertainties.
Davis, C B
1987-08-01
The uncertainties of calculations of loss-of-feedwater transients at Davis-Besse Unit 1 were determined to address concerns of the US Nuclear Regulatory Commission relative to the effectiveness of feed and bleed cooling. Davis-Besse Unit 1 is a pressurized water reactor of the raised-loop Babcock and Wilcox design. A detailed, quality-assured RELAP5/MOD2 model of Davis-Besse was developed at the Idaho National Engineering Laboratory. The model was used to perform an analysis of the loss-of-feedwater transient that occurred at Davis-Besse on June 9, 1985. A loss-of-feedwater transient followed by feed and bleed cooling was also calculated. The evaluation of uncertainty was based on the comparisons of calculations and data, comparisons of different calculations of the same transient, sensitivity calculations, and the propagation of the estimated uncertainty in initial and boundary conditions to the final calculated results.
Sub-Heisenberg phase uncertainties
NASA Astrophysics Data System (ADS)
Pezzé, Luca
2013-12-01
Phase shift estimation with uncertainty below the Heisenberg limit, ΔϕHL∝1/N¯T, where N¯T is the total average number of particles employed, is a mirage of linear quantum interferometry. Recently, Rivas and Luis, [New J. Phys.NJOPFM1367-263010.1088/1367-2630/14/9/093052 14, 093052 (2012)] proposed a scheme to achieve a phase uncertainty Δϕ∝1/N¯Tk, with k an arbitrary exponent. This sparked an intense debate in the literature which, ultimately, does not exclude the possibility to overcome ΔϕHL at specific phase values. Our numerical analysis of the Rivas and Luis proposal shows that sub-Heisenberg uncertainties are obtained only when the estimator is strongly biased. No violation of the Heisenberg limit is found after bias correction or when using a bias-free Bayesian analysis.
Climate negotiations under scientific uncertainty
Barrett, Scott; Dannenberg, Astrid
2012-01-01
How does uncertainty about “dangerous” climate change affect the prospects for international cooperation? Climate negotiations usually are depicted as a prisoners’ dilemma game; collectively, countries are better off reducing their emissions, but self-interest impels them to keep on emitting. We provide experimental evidence, grounded in an analytical framework, showing that the fear of crossing a dangerous threshold can turn climate negotiations into a coordination game, making collective action to avoid a dangerous threshold virtually assured. These results are robust to uncertainty about the impact of crossing a threshold, but uncertainty about the location of the threshold turns the game back into a prisoners’ dilemma, causing cooperation to collapse. Our research explains the paradox of why countries would agree to a collective goal, aimed at reducing the risk of catastrophe, but act as if they were blind to this risk. PMID:23045685
Uncertainties in hydrocarbon charge prediction
NASA Astrophysics Data System (ADS)
Visser, W.; Bell, A.
Computer simulations allow the prediction of hydrocarbon volumes, composition and charge timing in undrilled petroleum prospects. Whereas different models may give different hydrocarbon charge predictions, it has now become evident that a dominant cause of erroneous predictions is the poor quality of input data. The main culprit for prediction errors is the uncertainty in the initial hydrogen index (H/C) of the source rock. A 10% uncertainty in the H/C may lead to 50% error in the predicted hydrocarbon volumes, and associated gas-oil ratio. Similarly, uncertainties in the maximum burial temperature and the kinetics of hydrocarbon generation may lead to 20-50% error. Despite this, charge modelling can have great value for the ranking of prospects in the same area with comparable geological histories.
Significant predictors of patients' uncertainty in primary brain tumors.
Lin, Lin; Chien, Lung-Chang; Acquaye, Alvina A; Vera-Bolanos, Elizabeth; Gilbert, Mark R; Armstrong, Terri S
2015-05-01
Patients with primary brain tumors (PBT) face uncertainty related to prognosis, symptoms and treatment response and toxicity. Uncertainty is correlated to negative mood states and symptom severity and interference. This study identified predictors of uncertainty during different treatment stages (newly-diagnosed, on treatment, followed-up without active treatment). One hundred eighty six patients with PBT were accrued at various points in the illness trajectory. Data collection tools included: a clinical checklist/a demographic data sheet/the Mishel Uncertainty in Illness Scale-Brain Tumor Form. The structured additive regression model was used to identify significant demographic and clinical predictors of illness-related uncertainty. Participants were primarily white (80 %) males (53 %). They ranged in age from 19-80 (mean = 44.2 ± 12.6). Thirty-two of the 186 patients were newly-diagnosed, 64 were on treatment at the time of clinical visit with MRI evaluation, 21 were without MRI, and 69 were not on active treatment. Three subscales (ambiguity/inconsistency; unpredictability-disease prognoses; unpredictability-symptoms and other triggers) were different amongst the treatment groups (P < .01). However, patients' uncertainty during active treatment was as high as in newly-diagnosed period. Other than treatment stages, change of employment status due to the illness was the most significant predictor of illness-related uncertainty. The illness trajectory of PBT remains ambiguous, complex, and unpredictable, leading to a high incidence of uncertainty. There was variation in the subscales of uncertainty depending on treatment status. Although patients who are newly diagnosed reported the highest scores on most of the subscales, patients on treatment felt more uncertain about unpredictability of symptoms than other groups. Due to the complexity and impact of the disease, associated symptoms, and interference with functional status, comprehensive assessment of patients
The visualization of spatial uncertainty
Srivastava, R.M.
1994-12-31
Geostatistical conditions simulation is gaining acceptance as a numerical modeling tool in the petroleum industry. Unfortunately, many of the new users of conditional simulation work with only one outcome or ``realization`` and ignore the many other outcomes that could be produced by their conditional simulation tools. 3-D visualization tools allow them to create very realistic images of this single outcome as reality. There are many methods currently available for presenting the uncertainty information from a family of possible outcomes; most of these, however, use static displays and many present uncertainty in a format that is not intuitive. This paper explores the visualization of uncertainty through dynamic displays that exploit the intuitive link between uncertainty and change by presenting the use with a constantly evolving model. The key technical challenge to such a dynamic presentation is the ability to create numerical models that honor the available well data and geophysical information and yet are incrementally different so that successive frames can be viewed rapidly as an animated cartoon. An example of volumetric uncertainty from a Gulf Coast reservoir will be used to demonstrate that such a dynamic presentation is the ability to create numerical models that honor the available well data and geophysical information and yet are incrementally different so that successive frames can be viewed rapidly as an animated cartoon. An example of volumetric uncertainty from a Gulf Coast reservoir will be used to demonstrate that such animation is possible and to show that such dynamic displays can be an effective tool in risk analysis for the petroleum industry.
Confronting uncertainty in flood damage predictions
NASA Astrophysics Data System (ADS)
Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno
2015-04-01
Reliable flood damage models are a prerequisite for the practical usefulness of the model results. Oftentimes, traditional uni-variate damage models as for instance depth-damage curves fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks. For model evaluation we use empirical damage data which are available from computer aided telephone interviews that were respectively compiled after the floods in 2002, 2005 and 2006, in the Elbe and Danube catchments in Germany. We carry out a split sample test by sub-setting the damage records. One sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further we stratify the sample according to catchments which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of the individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error) as well as in terms of reliability which is represented by the proportion of the number of observations that fall within the 95-quantile and 5-quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty which is crucial to assess the reliability of model predictions and improves the usefulness of model results.
Statistics, Uncertainty, and Transmitted Variation
Wendelberger, Joanne Roth
2014-11-05
The field of Statistics provides methods for modeling and understanding data and making decisions in the presence of uncertainty. When examining response functions, variation present in the input variables will be transmitted via the response function to the output variables. This phenomenon can potentially have significant impacts on the uncertainty associated with results from subsequent analysis. This presentation will examine the concept of transmitted variation, its impact on designed experiments, and a method for identifying and estimating sources of transmitted variation in certain settings.
Geographic Uncertainty in Environmental Security
NASA Astrophysics Data System (ADS)
Ahlquist, Jon
2008-06-01
This volume contains 17 papers presented at the NATO Advanced Research Workshop on Fuzziness and Uncertainty held in Kiev, Ukraine, 28 June to 1 July 2006. Eleven of the papers deal with fuzzy set concepts, while the other six (papers 5, 7, 13, 14, 15, and 16) are not fuzzy. A reader with no prior exposure to fuzzy set theory would benefit from having an introductory text at hand, but the papers are accessible to a wide audience. In general, the papers deal with broad issues of classification and uncertainty in geographic information.
Awe, uncertainty, and agency detection.
Valdesolo, Piercarlo; Graham, Jesse
2014-01-01
Across five studies, we found that awe increases both supernatural belief (Studies 1, 2, and 5) and intentional-pattern perception (Studies 3 and 4)-two phenomena that have been linked to agency detection, or the tendency to interpret events as the consequence of intentional and purpose-driven agents. Effects were both directly and conceptually replicated, and mediational analyses revealed that these effects were driven by the influence of awe on tolerance for uncertainty. Experiences of awe decreased tolerance for uncertainty, which, in turn, increased the tendency to believe in nonhuman agents and to perceive human agency in random events. PMID:24247728
Systematics of Fission-Product Yields
A.C. Wahl
2002-05-01
Empirical equations representing systematics of fission-product yields have been derived from experimental data. The systematics give some insight into nuclear-structure effects on yields, and the equations allow estimation of yields from fission of any nuclide with atomic number Z{sub F} = 90 thru 98, mass number A{sub F} = 230 thru 252, and precursor excitation energy (projectile kinetic plus binding energies) PE = 0 thru {approx}200 MeV--the ranges of these quantities for the fissioning nuclei investigated. Calculations can be made with the computer program CYFP. Estimates of uncertainties in the yield estimates are given by equations, also in CYFP, and range from {approx} 15% for the highest yield values to several orders of magnitude for very small yield values. A summation method is used to calculate weighted average parameter values for fast-neutron ({approx} fission spectrum) induced fission reactions.
Thorough approach to measurement uncertainty analysis applied to immersed heat exchanger testing
Farrington, R B; Wells, C V
1986-04-01
This paper discusses the value of an uncertainty analysis, discusses how to determine measurement uncertainty, and then details the sources of error in instrument calibration, data acquisition, and data reduction for a particular experiment. Methods are discussed to determine both the systematic (or bias) error in an experiment as well as to determine the random (or precision) error in the experiment. The detailed analysis is applied to two sets of conditions in measuring the effectiveness of an immersed coil heat exchanger. It shows the value of such analysis as well as an approach to reduce overall measurement uncertainty and to improve the experiment. This paper outlines how to perform an uncertainty analysis and then provides a detailed example of how to apply the methods discussed in the paper. The authors hope this paper will encourage researchers and others to become more concerned with their measurement processes and to report measurement uncertainty with all of their test results.
Linear Programming Problems for Generalized Uncertainty
ERIC Educational Resources Information Center
Thipwiwatpotjana, Phantipa
2010-01-01
Uncertainty occurs when there is more than one realization that can represent an information. This dissertation concerns merely discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well known model that can handle…
NASA Astrophysics Data System (ADS)
Zhang, Donghua; Madsen, Henrik; Ridler, Marc E.; Refsgaard, Jens C.; Jensen, Karsten H.
2015-12-01
The ensemble Kalman filter (EnKF) is a popular data assimilation (DA) technique that has been extensively used in environmental sciences for combining complementary information from model predictions and observations. One of the major challenges in EnKF applications is the description of model uncertainty. In most hydrological EnKF applications, an ad hoc model uncertainty is defined with the aim of avoiding a collapse of the filter. The present work provides a systematic assessment of model uncertainty in DA applications based on combinations of forcing, model parameters, and state uncertainties. This is tested in a case where groundwater hydraulic heads are assimilated into a distributed and integrated catchment-scale model of the Karup catchment in Denmark. A series of synthetic data assimilation experiments are carried out to analyse the impact of different model uncertainty assumptions on the feasibility and efficiency of the assimilation. The synthetic data used in the assimilation study makes it possible to diagnose model uncertainty assumptions statistically. Besides the model uncertainty, other factors such as observation error, observation locations, and ensemble size are also analysed with respect to performance and sensitivity. Results show that inappropriate definition of model uncertainty can greatly degrade the assimilation performance, and an appropriate combination of different model uncertainty sources is advised.
Carbon cycle uncertainty in the Alaskan Arctic
NASA Astrophysics Data System (ADS)
Fisher, J. B.; Sikka, M.; Oechel, W. C.; Huntzinger, D. N.; Melton, J. R.; Koven, C. D.; Ahlström, A.; Arain, A. M.; Baker, I.; Chen, J. M.; Ciais, P.; Davidson, C.; Dietze, M.; El-Masri, B.; Hayes, D.; Huntingford, C.; Jain, A.; Levy, P. E.; Lomas, M. R.; Poulter, B.; Price, D.; Sahoo, A. K.; Schaefer, K.; Tian, H.; Tomelleri, E.; Verbeeck, H.; Viovy, N.; Wania, R.; Zeng, N.; Miller, C. E.
2014-02-01
Climate change is leading to a disproportionately large warming in the high northern latitudes, but the magnitude and sign of the future carbon balance of the Arctic are highly uncertain. Using 40 terrestrial biosphere models for Alaska, we provide a baseline of terrestrial carbon cycle structural and parametric uncertainty, defined as the multi-model standard deviation (σ) against the mean (x\\bar) for each quantity. Mean annual uncertainty (σ/x\\bar) was largest for net ecosystem exchange (NEE) (-0.01± 0.19 kg C m-2 yr-1), then net primary production (NPP) (0.14 ± 0.33 kg C m-2 yr-1), autotrophic respiration (Ra) (0.09 ± 0.20 kg C m-2 yr-1), gross primary production (GPP) (0.22 ± 0.50 kg C m-2 yr-1), ecosystem respiration (Re) (0.23 ± 0.38 kg C m-2 yr-1), CH4 flux (2.52 ± 4.02 g CH4 m-2 yr-1), heterotrophic respiration (Rh) (0.14 ± 0.20 kg C m-2 yr-1), and soil carbon (14.0± 9.2 kg C m-2). The spatial patterns in regional carbon stocks and fluxes varied widely with some models showing NEE for Alaska as a strong carbon sink, others as a strong carbon source, while still others as carbon neutral. Additionally, a feedback (i.e., sensitivity) analysis was conducted of 20th century NEE to CO2 fertilization (β) and climate (γ), which showed that uncertainty in γ was 2x larger than that of β, with neither indicating that the Alaskan Arctic is shifting towards a certain net carbon sink or source. Finally, AmeriFlux data are used at two sites in the Alaskan Arctic to evaluate the regional patterns; observed seasonal NEE was captured within multi-model uncertainty. This assessment of carbon cycle uncertainties may be used as a baseline for the improvement of experimental and modeling activities, as well as a reference for future trajectories in carbon cycling with climate change in the Alaskan Arctic.
The financial value of uncertainty in terrestrial carbon
NASA Astrophysics Data System (ADS)
Smith, M. J.; Joppa, L.; Purves, D.; Vanderwel, M.; Lyutsarev, V.; Emmott, S.
2013-12-01
Estimates of biogeochemical properties should ideally be reported with uncertainty. But what are the consequences of that uncertainty for real world decisions, applications and future research? We recently published the world's first fully data-constrained global terrestrial carbon model - in which all parameters of a simple process-based carbon model have been inferred as probability distributions from empirical datasets on carbon stocks and fluxes. This estimates potential terrestrial carbon storage for every locality on earth as a probability distribution. Here we explore the implications of that uncertainty for Agriculture, Forestry, and Land Use change (AFOLU) projects aiming to generate money from carbon fixation and storage. We estimate that, at $20 per ton avoided CO2 emissions, further reducing uncertainty in the model parameters alone would generate thousands of additional dollars per hectare for individual projects, exceeding returns from crops and timber in many places, and of the order of billions of additional dollars for global carbon markets overall. This shows a very real financial incentive for performing research to further reduce uncertainty in terrestrial carbon estimates as well as a financial measure of the impact of performing additional research.
Hybrid processing of stochastic and subjective uncertainty data
Cooper, J.A.; Ferson, S.; Ginzburg, L.
1995-11-01
Uncertainty analyses typically recognize separate stochastic and subjective sources of uncertainty, but do not systematically combine the two, although a large amount of data used in analyses is partly stochastic and partly subjective. We have developed methodology for mathematically combining stochastic and subjective data uncertainty, based on new ``hybrid number`` approaches. The methodology can be utilized in conjunction with various traditional techniques, such as PRA (probabilistic risk assessment) and risk analysis decision support. Hybrid numbers have been previously examined as a potential method to represent combinations of stochastic and subjective information, but mathematical processing has been impeded by the requirements inherent in the structure of the numbers, e.g., there was no known way to multiply hybrids. In this paper, we will demonstrate methods for calculating with hybrid numbers that avoid the difficulties. By formulating a hybrid number as a probability distribution that is only fuzzy known, or alternatively as a random distribution of fuzzy numbers, methods are demonstrated for the full suite of arithmetic operations, permitting complex mathematical calculations. It will be shown how information about relative subjectivity (the ratio of subjective to stochastic knowledge about a particular datum) can be incorporated. Techniques are also developed for conveying uncertainty information visually, so that the stochastic and subjective constituents of the uncertainty, as well as the ratio of knowledge about the two, are readily apparent. The techniques demonstrated have the capability to process uncertainty information for independent, uncorrelated data, and for some types of dependent and correlated data. Example applications are suggested, illustrative problems are worked, and graphical results are given.
TRITIUM UNCERTAINTY ANALYSIS FOR SURFACE WATER SAMPLES AT THE SAVANNAH RIVER SITE
Atkinson, R.
2012-07-31
Radiochemical analyses of surface water samples, in the framework of Environmental Monitoring, have associated uncertainties for the radioisotopic results reported. These uncertainty analyses pertain to the tritium results from surface water samples collected at five locations on the Savannah River near the U.S. Department of Energy's Savannah River Site (SRS). Uncertainties can result from the field-sampling routine, can be incurred during transport due to the physical properties of the sample, from equipment limitations, and from the measurement instrumentation used. The uncertainty reported by the SRS in their Annual Site Environmental Report currently considers only the counting uncertainty in the measurements, which is the standard reporting protocol for radioanalytical chemistry results. The focus of this work is to provide an overview of all uncertainty components associated with SRS tritium measurements, estimate the total uncertainty according to ISO 17025, and to propose additional experiments to verify some of the estimated uncertainties. The main uncertainty components discovered and investigated in this paper are tritium absorption or desorption in the sample container, HTO/H{sub 2}O isotopic effect during distillation, pipette volume, and tritium standard uncertainty. The goal is to quantify these uncertainties and to establish a combined uncertainty in order to increase the scientific depth of the SRS Annual Site Environmental Report.
The uncertainty of the atmospheric integrated water vapour estimated from GNSS observations
NASA Astrophysics Data System (ADS)
Ning, T.; Wang, J.; Elgered, G.; Dick, G.; Wickert, J.; Bradke, M.; Sommer, M.
2015-08-01
Within the Global Climate Observing System (GCOS) Reference Upper Air Network (GRUAN) there is a need for an assessment of the uncertainty in the Integrated Water Vapour (IWV) in the atmosphere estimated from ground-based GNSS observations. All relevant error sources in GNSS-derived IWV is therefore essential to be investigated. We present two approaches, a statistical and a theoretical analysis, for the assessment of the uncertainty of the IWV. It will be implemented to the GNSS IWV data stream for GRUAN in order to obtain a specific uncertainty for each data point. In addition, specific recommendations are made to GRUAN on hardware, software, and data processing practices to minimize the IWV uncertainty. By combining the uncertainties associated with the input variables in the estimations of the IWV, we calculated the IWV uncertainties for several GRUAN sites with different weather conditions. The results show a similar relative importance of all uncertainty contributions where the uncertainties in the Zenith Total Delay (ZTD) dominate the error budget of the IWV contributing with over 75 % to the total IWV uncertainty. The impact of the uncertainty associated with the conversion factor between the IWV and the Zenith Wet Delay (ZWD) is proportional to the amount of water vapour and increases slightly for moist weather conditions. The GRUAN GNSS IWV uncertainty data will provide a quantified confidence to be used for the validation of other measurement techniques, taking the uncertainty into account from diurnal to decadal time scales.
Erythropoietin, uncertainty principle and cancer related anaemia
Clark, Otavio; Adams, Jared R; Bennett, Charles L; Djulbegovic, Benjamin
2002-01-01
Background This study was designed to evaluate if erythropoietin (EPO) is effective in the treatment of cancer related anemia, and if its effect remains unchanged when data are analyzed according to various clinical and methodological characteristics of the studies. We also wanted to demonstrate that cumulative meta-analysis (CMA) can be used to resolve uncertainty regarding clinical questions. Methods Systematic Review (SR) of the published literature on the role of EPO in cancer-related anemia. A cumulative meta-analysis (CMA) using a conservative approach was performed to determine the point in time when uncertainty about the effect of EPO on transfusion-related outcomes could be considered resolved. Participants: Patients included in randomized studies that compared EPO versus no therapy or placebo. Main outcome measures: Number of patients requiring transfusions. Results Nineteen trials were included. The pooled results indicated a significant effect of EPO in reducing the number of patients requiring transfusions [odds ratio (OR) = 0.41; 95%CI: 0.33 to 0.5; p < 0.00001;relative risk (RR) = 0.61; 95% CI: 0.54 to 0.68]. The results remain unchanged after the sensitivity analyses were performed according to the various clinical and methodological characteristics of the studies. The heterogeneity was less pronounced when OR was used instead of RR as the measure of the summary point estimate. Analysis according to OR was not heterogeneous, but the pooled RR was highly heterogeneous. A stepwise metaregression analysis did point to the possibility that treatment effect could have been exaggerated by inadequacy in allocation concealment and that larger treatment effects are seen at hb level > 11.5 g/dl. We identified 1995 as the point in time when a statistically significant effect of EPO was demonstrated and after which we considered that uncertainty about EPO efficacy was resolved. Conclusion EPO is effective in the treatment of anemia in cancer patients. This
PIV uncertainty quantification by image matching
NASA Astrophysics Data System (ADS)
Sciacchitano, Andrea; Wieneke, Bernhard; Scarano, Fulvio
2013-04-01
A novel method is presented to quantify the uncertainty of PIV data. The approach is a posteriori, i.e. the unknown actual error of the measured velocity field is estimated using the velocity field itself as input along with the original images. The principle of the method relies on the concept of super-resolution: the image pair is matched according to the cross-correlation analysis and the residual distance between matched particle image pairs (particle disparity vector) due to incomplete match between the two exposures is measured. The ensemble of disparity vectors within the interrogation window is analyzed statistically. The dispersion of the disparity vector returns the estimate of the random error, whereas the mean value of the disparity indicates the occurrence of a systematic error. The validity of the working principle is first demonstrated via Monte Carlo simulations. Two different interrogation algorithms are considered, namely the cross-correlation with discrete window offset and the multi-pass with window deformation. In the simulated recordings, the effects of particle image displacement, its gradient, out-of-plane motion, seeding density and particle image diameter are considered. In all cases good agreement is retrieved, indicating that the error estimator is able to follow the trend of the actual error with satisfactory precision. Experiments where time-resolved PIV data are available are used to prove the concept under realistic measurement conditions. In this case the ‘exact’ velocity field is unknown; however a high accuracy estimate is obtained with an advanced interrogation algorithm that exploits the redundant information of highly temporally oversampled data (pyramid correlation, Sciacchitano et al (2012 Exp. Fluids 53 1087-105)). The image-matching estimator returns the instantaneous distribution of the estimated velocity measurement error. The spatial distribution compares very well with that of the actual error with maxima in the
Uncertainty Analysis of Model Coupling
NASA Astrophysics Data System (ADS)
Held, H.; Knopf, B.; Schneider von Deimling, T.; Schellnhuber, H.-J.
The Earth System is a highly complex system that is often modelled by coupling sev- eral nonlinear submodules. For predicting the climate with these models, the following uncertainties play an essential role: parameter uncertainty, uncertainty in initial con- ditions or model uncertainty. Here we will address uncertainty in initial conditions as well as model uncertainty. As the process of coupling is an important part of model- ing, the main aspect of this work is the investigation of uncertainties that are due to the coupling process. For this study we use conceptual models that, compared to GCMs, have the advantage that the model itself as well as the output can be treated in a mathematically elabo- rated way. As the time for running the model is much shorter, the investigation is also possible for a longer period, e.g. for paleo runs. In consideration of these facts it is feasible to analyse the whole phase space of the model. The process of coupling is investigated by using different methods of examining low order coupled atmosphere-ocean systems. In the dynamical approach a fully coupled system of the two submodules can be compared to a system where one submodule forces the other. For a particular atmosphere-ocean system, based on the Lorenz model for the atmosphere, there can be shown significant differences in the predictability of a forced system depending whether the subsystems are coupled in a linear or a non- linear way. In [1] it is shown that in the linear case the forcing cannot represent the coupling, but in the nonlinear case, that we investigated in our study, the variability and the statistics of the coupled system can be reproduced by the forcing. Another approach to analyse the coupling is to carry out a bifurcation analysis. Here the bifurcation diagram of a single atmosphere system is compared to that of a cou- pled atmosphere-ocean system. Again it can be seen from the different behaviour of the coupled and the uncoupled system, that the
Uncertainty Analysis in Large Area Aboveground Biomass Mapping
NASA Astrophysics Data System (ADS)
Baccini, A.; Carvalho, L.; Dubayah, R.; Goetz, S. J.; Friedl, M. A.
2011-12-01
Satellite and aircraft-based remote sensing observations are being more frequently used to generate spatially explicit estimates of aboveground carbon stock of forest ecosystems. Because deforestation and forest degradation account for circa 10% of anthropogenic carbon emissions to the atmosphere, policy mechanisms are increasingly recognized as a low-cost mitigation option to reduce carbon emission. They are, however, contingent upon the capacity to accurately measures carbon stored in the forests. Here we examine the sources of uncertainty and error propagation in generating maps of aboveground biomass. We focus on characterizing uncertainties associated with maps at the pixel and spatially aggregated national scales. We pursue three strategies to describe the error and uncertainty properties of aboveground biomass maps, including: (1) model-based assessment using confidence intervals derived from linear regression methods; (2) data-mining algorithms such as regression trees and ensembles of these; (3) empirical assessments using independently collected data sets.. The latter effort explores error propagation using field data acquired within satellite-based lidar (GLAS) acquisitions versus alternative in situ methods that rely upon field measurements that have not been systematically collected for this purpose (e.g. from forest inventory data sets). A key goal of our effort is to provide multi-level characterizations that provide both pixel and biome-level estimates of uncertainties at different scales.
Differentiating intolerance of uncertainty from three related but distinct constructs.
Rosen, Natalie O; Ivanova, Elena; Knäuper, Bärbel
2014-01-01
Individual differences in uncertainty have been associated with heightened anxiety, stress and approach-oriented coping. Intolerance of uncertainty (IU) is a trait characteristic that arises from negative beliefs about uncertainty and its consequences. Researchers have established the central role of IU in the development of problematic worry and maladaptive coping, highlighting the importance of this construct to anxiety disorders. However, there is a need to improve our understanding of the phenomenology of IU. The goal of this paper was to present hypotheses regarding the similarities and differences between IU and three related constructs--intolerance of ambiguity, uncertainty orientation, and need for cognitive closure--and to call for future empirical studies to substantiate these hypotheses. To assist with achieving this goal, we conducted a systematic review of the literature, which also served to identify current gaps in knowledge. This paper differentiates these constructs by outlining each definition and general approaches to assessment, reviewing the existing empirical relations, and proposing theoretical similarities and distinctions. Findings may assist researchers in selecting the appropriate construct to address their research questions. Future research directions for the application of these constructs, particularly within the field of clinical and health psychology, are discussed. PMID:23849047
NASA Astrophysics Data System (ADS)
Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.
2014-12-01
Model Model complexity resulting from the need to integrate environmental system components cannot be understated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represent for the decision process, it should not be avoided or the value and science behind the models will be undermined. These two issues; i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of the model development so as to quantify not only the uncertainty introduced by the addition of new environmental components, but the effect that these new components have over existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping
Intrinsic uncertainty on the nature of dark energy
NASA Astrophysics Data System (ADS)
Valkenburg, Wessel; Kunz, Martin; Marra, Valerio
2013-12-01
We argue that there is an intrinsic noise on measurements of the equation of state parameter w = p/ρ from large-scale structure around us. The presence of the large-scale structure leads to an ambiguity in the definition of the background universe and thus there is a maximal precision with which we can determine the equation of state of dark energy. To study the uncertainty due to local structure, we model density perturbations stemming from a standard inflationary power spectrum by means of the exact Lemaître-Tolman-Bondi solution of Einstein’s equation, and show that the usual distribution of matter inhomogeneities in a ΛCDM cosmology causes a variation of w - as inferred from distance measures - of several percent. As we observe only one universe, or equivalently because of the cosmic variance, this uncertainty is systematic in nature.
Ng, K H; Peh, W C
2010-05-01
Evidence-based medicine (EBM) aims to combine the best available scientific evidence with clinical experience and individual judgment of patient needs. In the hierarchy of scientific evidence, systematic reviews (along with meta-analyses) occupy the highest levels in terms of the quality of evidence. A systematic review is the process of searching, selecting, appraising, synthesising and reporting clinical evidence on a particular question or topic. It is currently considered the best, least biased and most rational way to organise, gather, evaluate and integrate scientific evidence from the rapidly-changing medical and healthcare literature. Systematic reviews could be used to present current concepts or serve as review articles and replace the traditional expert opinion or narrative review. This article explains the structure and content of a systematic review. PMID:20593139
Uncertainty Quantification for Nuclear Currents: A Bayesian χ-EFT view of the Triton and β- Decay
NASA Astrophysics Data System (ADS)
Wendt, Kyle
2014-09-01
Chiral Effective Field Theory (χ-EFT) provides a framework for the generation and systematic improvement of model independent inter-nucleon interaction Hamiltonians and nuclear current operators. Within χ-EFT, short and mid distance physics is encoded through a gradient expansion and multiple pion exchange parameterized by a set of low energy constants (LECs). The LECs are often constrained via non-linear least squares using nuclear bound state and scattering observables. This has produced reasonable low-energy descriptions in the past, but has been plagued by LECs that are unnaturally large. Additional issues manifest in medium mass nuclei where the χ-EFT Hamiltonians fail to adequately describe saturation properties. It has been suggested that Bayesian approaches may remedy the unnaturally large LECs using carefully selected priors. Other analyses have suggested that the inclusion and feedback of nuclear currents into the constraints of the LECs may improve saturation properties. We combine these approaches using Markov chain Monte Carlo (MCMC) to study and quantify uncertainties in the Triton and the χ-EFT axial-vector current, with the aim of providing a foundation for quantifying χ-EFT uncertainties for weak processes in nuclei. Chiral Effective Field Theory (χ-EFT) provides a framework for the generation and systematic improvement of model independent inter-nucleon interaction Hamiltonians and nuclear current operators. Within χ-EFT, short and mid distance physics is encoded through a gradient expansion and multiple pion exchange parameterized by a set of low energy constants (LECs). The LECs are often constrained via non-linear least squares using nuclear bound state and scattering observables. This has produced reasonable low-energy descriptions in the past, but has been plagued by LECs that are unnaturally large. Additional issues manifest in medium mass nuclei where the χ-EFT Hamiltonians fail to adequately describe saturation properties. It has
Mittal, Chiraag; Griskevicius, Vladas
2014-10-01
Past research found that environmental uncertainty leads people to behave differently depending on their childhood environment. For example, economic uncertainty leads people from poor childhoods to become more impulsive while leading people from wealthy childhoods to become less impulsive. Drawing on life history theory, we examine the psychological mechanism driving such diverging responses to uncertainty. Five experiments show that uncertainty alters people's sense of control over the environment. Exposure to uncertainty led people from poorer childhoods to have a significantly lower sense of control than those from wealthier childhoods. In addition, perceptions of control statistically mediated the effect of uncertainty on impulsive behavior. These studies contribute by demonstrating that sense of control is a psychological driver of behaviors associated with fast and slow life history strategies. We discuss the implications of this for theory and future research, including that environmental uncertainty might lead people who grew up poor to quit challenging tasks sooner than people who grew up wealthy. PMID:25133717
Vergnes, Jean-Noel; Marchal-Sixou, Christine; Nabet, Cathy; Maret, Delphine; Hamel, Olivier
2010-12-01
Since its introduction by the Nuremberg Code and the Declaration of Helsinki, the place held by ethics in biomedical research has been continuously increasing in importance. The past 30 years have also seen exponential growth in the number of biomedical articles published. A systematic review of the literature is the scientific way of synthesising a plethora of information, by exhaustively searching out and objectively analysing the studies dealing with a given issue. However, the question of ethics in systematic reviews is rarely touched upon. This could lead to some drawbacks, as systematic reviews may contain studies with ethical insufficiencies, may be a possible way to publish unethical research and may also be prone to conflict of interest. Finally, informed consent given for an original study is not necessarily still valid at the systematic review level. There is no doubt that routine ethical assessment in systematic reviews would help to improve the ethical and methodological quality of studies in general. However, ethical issues change so much with time and location, and are so broad in scope and in context that it appears illusory to search for a universal, internationally accepted standard for ethical assessment in systematic reviews. Some simple suggestions could nevertheless be drawn from the present reflection and are discussed in the paper. PMID:20952493
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
Neutrino Scattering Uncertainties and their Role in Long Baseline Oscillation Experiments
D.A. Harris; G. Blazey; Arie Bodek; D. Boehnlein; S. Boyd; William Brooks; Antje Bruell; Howard S. Budd; R. Burnstein; D. Casper; A. Chakravorty; Michael Christy; Jesse Chvojka; M.A.C. Cummings; P. deBarbaro; D. Drakoulakos; J. Dunmore; Rolf Ent; Hugh Gallagher; David Gaskell; Ronald Gilman; Charles Glashausser; Wendy Hinton; Xiaodong Jiang; T. Kafka; O. Kamaev; Cynthia Keppel; M. Kostin; Sergey Kulagin; Gerfried Kumbartzki; Steven Manly; W.A. Mann; Kevin Mcfarland-porter; Wolodymyr Melnitchouk; Jorge Morfin; D. Naples; John Nelson; Gabriel Niculescu; Maria-ioana Niculescu; W. Oliver; Michael Paolone; Emmanuel Paschos; A. Pla-Dalmau; Ronald Ransome; C. Regis; P. Rubinov; V. Rykalin; Willis Sakumoto; P. Shanahan; N. Solomey; P. Spentzouris; P. Stamoulis; G. Tzanakos; Stephen Wood; F.X. Yumiceva; B. Ziemer; M. Zois
2004-10-01
The field of oscillation physics is about to make an enormous leap forward in statistical precision: first through the MINOS experiment in the coming year, and later through the NOvA and T2K experiments. Because of the relatively poor understanding of neutrino interactions in the energy ranges of these experiments, there are systematics that can arise in interpreting far detector data that can be as large as or even larger than the expected statistical uncertainties. We describe how these systematic errors arise, and how specific measurements in a dedicated neutrino scattering experiment like MINERvA can reduce the cross section systematic errors to well below the statistical errors.
Uncertainty of calculation results in vehicle collision analysis.
Wach, Wojciech; Unarski, Jan
2007-04-11
In the analysis of road accidents two types of calculation result uncertainty can be distinguished: modelling uncertainty and uncertainty in calculation results [R.M. Brach, M. Brach, Vehicle Accident Analysis & Reconstruction Methods, SAE International Publisher, Warrendale, 2005]. The problem becomes very important first of all when minor modifications of input parameters or application of different models of the phenomenon lead to a fundamentally different answer to the question posed by the court. The aim of the paper was to prove the necessity of including the problem of uncertainty in calculations related to vehicle collision mechanics and to justify the application of different error analysis methods recommendable in vehicle collision reconstruction. The data file from crash test No. 7 [H. Burg, M. Lindenmann, Unfallversuche, Verlag Information Ambs, Kippenheim, 1982] was used, the selection restricted to the range typical of average police records of collision place. Collision speeds were calculated using two methods: reconstruction and simulation. The analysis of uncertainty was carried out. Maximum and mean square uncertainty were calculated by means of total differential of relevant forms. Since the reconstruction resulted in very broad error intervals of uniform distribution, additional calculations were performed by the Monte Carlo method using algorithm described in [W. Wach, J. Unarski, Determination of vehicle velocities and collision location by means of Monte Carlo simulation method, Special Publication Accident Reconstruction SP-1999, SAE Paper No. 2006-01-0907, 2006]. PMID:16884874
Prediction uncertainty and optimal experimental design for learning dynamical systems
NASA Astrophysics Data System (ADS)
Letham, Benjamin; Letham, Portia A.; Rudin, Cynthia; Browne, Edward P.
2016-06-01
Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
Structural uncertainty in watershed phosphorus modeling: Toward a stochastic framework
NASA Astrophysics Data System (ADS)
Chen, Lei; Gong, Yongwei; Shen, Zhenyao
2016-06-01
Structural uncertainty is an important source of model predictive errors, but few studies have been conducted on the error-transitivity from model structure to nonpoint source (NPS) prediction. In this study, we focused on the structural uncertainty caused by the algorithms and equations that are used to describe the phosphorus (P) cycle at the watershed scale. The sensitivity of simulated P to each algorithm/equation was quantified using the Soil and Water Assessment Tool (SWAT) in the Three Gorges Reservoir Area, China. The results indicated that the ratios of C:N and P:N for humic materials, as well as the algorithm of fertilization and P leaching contributed the largest output uncertainties. In comparison, the initiation of inorganic P in the soil layer and the transformation algorithm between P pools are less sensitive for the NPS-P predictions. In addition, the coefficient of variation values were quantified as 0.028-0.086, indicating that the structure-induced uncertainty is minor compared to NPS-P prediction uncertainty caused by the model input and parameters. Using the stochastic framework, the cumulative probability of simulated NPS-P data provided a trade-off between expenditure burden and desired risk. In this sense, this paper provides valuable information for the control of model structural uncertainty, and can be extrapolated to other model-based studies.
Launch Risk Acceptability Considering Uncertainty in Risk Estimates
NASA Astrophysics Data System (ADS)
Collins, J. D.; Carbon, S. L.
2010-09-01
Quantification of launch risk is difficult and uncertain due to the assumptions made in the modeling process and the difficulty in developing the supporting data. This means that estimates of the risks are uncertain and the decision maker must decide on the acceptability of the launch under uncertainty. This paper describes the process to quantify the uncertainty and, in the process, describes the separate roles of aleatory and epistemic uncertainty in obtaining the point estimate of the casualty expectation and, ultimately, the distribution of the uncertainty in the computed casualty expectation. Tables are included of the significant sources and the nature of the contributing uncertainties. In addition, general procedures and an example are also included to describe the computational procedure. The second part of the paper discusses how the quantified uncertainty should be applied to the decision-making process. This discussion describes the procedure proposed and adopted by the Risk Committee of the Range Commander’s Council Range Safety Group which will be published in RCC 321-10 [1].
Uncertainty quantification for systems of conservation laws
Poette, Gael Despres, Bruno Lucor, Didier
2009-04-20
Uncertainty quantification through stochastic spectral methods has been recently applied to several kinds of non-linear stochastic PDEs. In this paper, we introduce a formalism based on kinetic theory to tackle uncertain hyperbolic systems of conservation laws with Polynomial Chaos (PC) methods. The idea is to introduce a new variable, the entropic variable, in bijection with our vector of unknowns, which we develop on the polynomial basis: by performing a Galerkin projection, we obtain a deterministic system of conservation laws. We state several properties of this deterministic system in the case of a general uncertain system of conservation laws. We then apply the method to the case of the inviscid Burgers' equation with random initial conditions and we present some preliminary results for the Euler system. We systematically compare results from our new approach to results from the stochastic Galerkin method. In the vicinity of discontinuities, the new method bounds the oscillations due to Gibbs phenomenon to a certain range through the entropy of the system without the use of any adaptative random space discretizations. It is found to be more precise than the stochastic Galerkin method for smooth cases but above all for discontinuous cases.
Video Scanning Hartmann Optical Tester (VSHOT) Uncertainty Analysis: Preprint
Lewandowski, A.; Gray, A.
2010-10-01
This purely analytical work is based primarily on the geometric optics of the system and shows sensitivities to various design and operational parameters. We discuss sources of error with measuring devices, instrument calibrations, and operator measurements for a parabolic trough test. In this paper, we include both the random (precision) and systematic (bias) errors for VSHOT testing and their contributions to the uncertainty. The contributing factors that we considered in this study are target tilt, target face to laser output distance, instrument vertical offset, scanner tilt, distance between the tool and the test piece, camera calibration, and scanner/calibration.
On scale-dependent cosmic shear systematic effects
NASA Astrophysics Data System (ADS)
Kitching, T. D.; Taylor, A. N.; Cropper, M.; Hoekstra, H.; Hood, R. K. E.; Massey, R.; Niemi, S.
2016-01-01
In this paper, we investigate the impact that realistic scale-dependent systematic effects may have on cosmic shear tomography. We model spatially varying residual galaxy ellipticity and galaxy size variations in weak lensing measurements and propagate these through to predicted changes in the uncertainty and bias of cosmological parameters. We show that the survey strategy - whether it is regular or randomized - is an important factor in determining the impact of a systematic effect: a purely randomized survey strategy produces the smallest biases, at the expense of larger parameter uncertainties, and a very regularized survey strategy produces large biases, but unaffected uncertainties. However, by removing, or modelling, the affected scales (ℓ-modes) in the regular cases the biases are reduced to negligible levels. We find that the integral of the systematic power spectrum is not a good metric for dark energy performance, and we advocate that systematic effects should be modelled accurately in real space, where they enter the measurement process, and their effect subsequently propagated into power spectrum contributions.
Uncertainty in mapping urban air quality using crowdsourcing techniques
NASA Astrophysics Data System (ADS)
Schneider, Philipp; Castell, Nuria; Lahoz, William; Bartonova, Alena
2016-04-01
Small and low-cost sensors measuring various air pollutants have become available in recent years owing to advances in sensor technology. Such sensors have significant potential for improving high-resolution mapping of air quality in the urban environment as they can be deployed in comparatively large numbers and therefore are able to provide information at unprecedented spatial detail. However, such sensor devices are subject to significant and currently little understood uncertainties that affect their usability. Not only do these devices exhibit random errors and biases of occasionally substantial magnitudes, but these errors may also shift over time. In addition, there often tends to be significant inter-sensor variability even when supposedly identical sensors from the same manufacturer are used. We need to quantify accurately these uncertainties to make proper use of the information they provide. Furthermore, when making use of the data and producing derived products such as maps, the measurement uncertainties that propagate throughout the analysis need to be clearly communicated to the scientific and non-scientific users of the map products. Based on recent experiences within the EU-funded projects CITI-SENSE and hackAIR we discuss the uncertainties along the entire processing chain when using crowdsourcing techniques for mapping urban air quality. Starting with the uncertainties exhibited by the sensors themselves, we present ways of quantifying the error characteristics of a network of low-cost microsensors and show suitable statistical metrics for summarizing them. Subsequently, we briefly present a data-fusion-based method for mapping air quality in the urban environment and illustrate how we propagate the uncertainties of the individual sensors throughout the mapping system, resulting in detailed maps that document the pixel-level uncertainty for each concentration field. Finally, we present methods for communicating the resulting spatial uncertainty
Application of Evidence Theory to Quantify Uncertainty in Turbulent Flow Simulations
NASA Astrophysics Data System (ADS)
Poroseva, S. V.; Hussaini, M. Y.; Woodruff, S. L.
2004-11-01
When turbulence models are used to simulate flows for which experimental results do not exist, there is as yet no reliable and systematic procedure for choosing a proper model or for quantifying the uncertainty of the results. In fact, there is no standard procedure to quantify model uncertainty even where experimental data are available. Hence, evaluation of model performance even in controlled conditions is very subjective. A systematic approach based on statistical theories for addressing these issues is highly desirable. Of the theories capable of treating uncertainty (aleatory, due to the stochastic nature of phenomena, and epistemic, due to lack of knowledge), evidence theory appears to be most appropriate, and is used here to develop a systematic approach to model uncertainty. This approach allows one to fuse results obtained with different turbulence models. During the validation phase, the uncertainty in the results from each model is quantified. This information is exploited in the prediction methodology for flows (for which no experimental results exist for validation) to enhance the overall credibility of the prediction results. A numerical value for the degree of support for predictions is provided. Such information is unavailable in traditional approaches to turbulence flow predictions. The present approach is applied to the problem of subsonic flow around the RAE 2822 airfoil using different two-equation turbulence models.
NASA Astrophysics Data System (ADS)
Swindle, R.; Gal, R. R.; La Barbera, F.; de Carvalho, R. R.
2011-10-01
We present robust statistical estimates of the accuracy of early-type galaxy stellar masses derived from spectral energy distribution (SED) fitting as functions of various empirical and theoretical assumptions. Using large samples consisting of ~40,000 galaxies from the Sloan Digital Sky Survey (SDSS; ugriz), of which ~5000 are also in the UKIRT Infrared Deep Sky Survey (YJHK), with spectroscopic redshifts in the range 0.05 <= z <= 0.095, we test the reliability of some commonly used stellar population models and extinction laws for computing stellar masses. Spectroscopic ages (t), metallicities (Z), and extinctions (AV ) are also computed from fits to SDSS spectra using various population models. These external constraints are used in additional tests to estimate the systematic errors in the stellar masses derived from SED fitting, where t, Z, and AV are typically left as free parameters. We find reasonable agreement in mass estimates among stellar population models, with variation of the initial mass function and extinction law yielding systematic biases on the mass of nearly a factor of two, in agreement with other studies. Removing the near-infrared bands changes the statistical bias in mass by only ~0.06 dex, adding uncertainties of ~0.1 dex at the 95% CL. In contrast, we find that removing an ultraviolet band is more critical, introducing 2σ uncertainties of ~0.15 dex. Finally, we find that the stellar masses are less affected by the absence of metallicity and/or dust extinction knowledge. However, there is a definite systematic offset in the mass estimate when the stellar population age is unknown, up to a factor of 2.5 for very old (12 Gyr) stellar populations. We present the stellar masses for our sample, corrected for the measured systematic biases due to photometrically determined ages, finding that age errors produce lower stellar masses by ~0.15 dex, with errors of ~0.02 dex at the 95% CL for the median stellar age subsample.
Mackenzie, S G; Leinonen, I; Ferguson, N; Kyriazakis, I
2015-06-01
The objective of the study was to develop a life cycle assessment (LCA) for pig farming systems that would account for uncertainty and variability in input data and allow systematic environmental impact comparisons between production systems. The environmental impacts of commercial pig production for 2 regions in Canada (Eastern and Western) were compared using a cradle-to-farm gate LCA. These systems had important contrasting characteristics such as typical feed ingredients used, herd performance, and expected emission factors from manure management. The study used detailed production data supplied by the industry and incorporated uncertainty/variation in all major aspects of the system including life cycle inventory data for feed ingredients, animal performance, energy inputs, and emission factors. The impacts were defined using 5 metrics-global warming potential, acidification potential, eutrophication potential (EP), abiotic resource use, and nonrenewable energy use-and were expressed per kilogram carcass weight at farm gate. Eutrophication potential was further separated into marine EP (MEP) and freshwater EP (FEP). Uncertainties in the model inputs were separated into 2 types: uncertainty in the data used to describe the system (α uncertainties) and uncertainty in impact calculations or background data that affects all systems equally (β uncertainties). The impacts of pig production in the 2 regions were systematically compared based on the differences in the systems (α uncertainties). The method of ascribing uncertainty influenced the outcomes. In eastern systems, EP, MEP, and FEP were lower (P < 0.05) when assuming that all uncertainty in the emission factors for leaching from manure application was β. This was mainly due to increased EP resulting from field emissions for typical ingredients in western diets. When uncertainty in these emission factors was assumed to be α, only FEP was lower in eastern systems (P < 0.05). The environmental impacts for
Siebert, Uwe; Rochau, Ursula; Claxton, Karl
2013-01-01
Decision analysis (DA) and value-of-information (VOI) analysis provide a systematic, quantitative methodological framework that explicitly considers the uncertainty surrounding the currently available evidence to guide healthcare decisions. In medical decision making under uncertainty, there are two fundamental questions: 1) What decision should be made now given the best available evidence (and its uncertainty)?; 2) Subsequent to the current decision and given the magnitude of the remaining uncertainty, should we gather further evidence (i.e., perform additional studies), and if yes, which studies should be undertaken (e.g., efficacy, side effects, quality of life, costs), and what sample sizes are needed? Using the currently best available evidence, VoI analysis focuses on the likelihood of making a wrong decision if the new intervention is adopted. The value of performing further studies and gathering additional evidence is based on the extent to which the additional information will reduce this uncertainty. A quantitative framework allows for the valuation of the additional information that is generated by further research, and considers the decision maker's objectives and resource constraints. Claxton et al. summarise: "Value of information analysis can be used to inform a range of policy questions including whether a new technology should be approved based on existing evidence, whether it should be approved but additional research conducted or whether approval should be withheld until the additional evidence becomes available." [Claxton K. Value of information entry in Encyclopaedia of Health Economics, Elsevier, forthcoming 2014.] The purpose of this tutorial is to introduce the framework of systematic VoI analysis to guide further research. In our tutorial article, we explain the theoretical foundations and practical methods of decision analysis and value-of-information analysis. To illustrate, we use a simple case example of a foot ulcer (e.g., with
NASA Astrophysics Data System (ADS)
Harmel, D.
2014-12-01
In spite of pleas for uncertainty analysis - such as Beven's (2006) "Should it not be required that every paper in both field and modeling studies attempt to evaluate the uncertainty in the results?" - the uncertainty associated with hydrology and water quality data is rarely quantified and rarely considered in model evaluation. This oversight, justified in the past by mainly tenuous philosophical concerns, diminishes the value of measured data and ignores the environmental and socio-economic benefits of improved decisions and policies based on data with estimated uncertainty. This oversight extends to researchers, who typically fail to estimate uncertainty in measured discharge and water quality data because of additional effort required, lack of adequate scientific understanding on the subject, and fear of negative perception if data with "high" uncertainty are reported; however, the benefits are certain. Furthermore, researchers have a responsibility for scientific integrity in reporting what is known and what is unknown, including the quality of measured data. In response we produced an uncertainty estimation framework and the first cumulative uncertainty estimates for measured water quality data (Harmel et al., 2006). From that framework, DUET-H/WQ was developed (Harmel et al., 2009). Application to several real-world data sets indicated that substantial uncertainty can be contributed by each data collection procedural category and that uncertainties typically occur in order discharge < sediment < dissolved N and P < total N and P. Similarly, modelers address certain aspects of model uncertainty but ignore others, such as the impact of uncertainty in discharge and water quality data. Thus, we developed methods to incorporate prediction uncertainty as well as calibration/validation data uncertainty into model goodness-of-fit evaluation (Harmel and Smith, 2007; Harmel et al., 2010). These enhance model evaluation by: appropriately sharing burden with "data
Adjoint-Based Uncertainty Quantification with MCNP
Seifried, Jeffrey E.
2011-09-01
This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
Uncertainties in Interpolated Spectral Data
Gardner, James L.
2003-01-01
Interpolation is often used to improve the accuracy of integrals over spectral data convolved with various response functions or power distributions. Formulae are developed for propagation of uncertainties through the interpolation process, specifically for Lagrangian interpolation increasing a regular data set by factors of 5 and 2, and for cubic-spline interpolation. The interpolated data are correlated; these correlations must be considered when combining the interpolated values, as in integration. Examples are given using a common spectral integral in photometry. Correlation coefficients are developed for Lagrangian interpolation where the input data are uncorrelated. It is demonstrated that in practical cases, uncertainties for the integral formed using interpolated data can be reliably estimated using the original data.
Credible Software and Simulation Uncertainty
NASA Technical Reports Server (NTRS)
Mehta, Unmeel B.; Nixon, David (Technical Monitor)
1998-01-01
The utility of software primarily depends on its reliability and performance; whereas, its significance depends solely on its credibility for intended use. The credibility of simulations confirms the credibility of software. The level of veracity and the level of validity of simulations determine the degree of credibility of simulations. The process of assessing this credibility in fields such as computational mechanics (CM) differs from that followed by the Defense Modeling and Simulation Office in operations research. Verification and validation (V&V) of CM simulations is not the same as V&V of CM software. Uncertainty is the measure of simulation credibility. Designers who use software are concerned with management of simulation uncertainty. Terminology and concepts are presented with a few examples from computational fluid dynamics.
Human errors and measurement uncertainty
NASA Astrophysics Data System (ADS)
Kuselman, Ilya; Pennecchi, Francesca
2015-04-01
Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
Confronting dynamics and uncertainty in optimal decision making for conservation
Williams, Byron K.; Johnson, Fred A.
2013-01-01
The effectiveness of conservation efforts ultimately depends on the recognition that decision making, and the systems that it is designed to affect, are inherently dynamic and characterized by multiple sources of uncertainty. To cope with these challenges, conservation planners are increasingly turning to the tools of decision analysis, especially dynamic optimization methods. Here we provide a general framework for optimal, dynamic conservation and then explore its capacity for coping with various sources and degrees of uncertainty. In broadest terms, the dynamic optimization problem in conservation is choosing among a set of decision options at periodic intervals so as to maximize some conservation objective over the planning horizon. Planners must account for immediate objective returns, as well as the effect of current decisions on future resource conditions and, thus, on future decisions. Undermining the effectiveness of such a planning process are uncertainties concerning extant resource conditions (partial observability), the immediate consequences of decision choices (partial controllability), the outcomes of uncontrolled, environmental drivers (environmental variation), and the processes structuring resource dynamics (structural uncertainty). Where outcomes from these sources of uncertainty can be described in terms of probability distributions, a focus on maximizing the expected objective return, while taking state-specific actions, is an effective mechanism for coping with uncertainty. When such probability distributions are unavailable or deemed unreliable, a focus on maximizing robustness is likely to be the preferred approach. Here the idea is to choose an action (or state-dependent policy) that achieves at least some minimum level of performance regardless of the (uncertain) outcomes. We provide some examples of how the dynamic optimization problem can be framed for problems involving management of habitat for an imperiled species, conservation of a
Confronting dynamics and uncertainty in optimal decision making for conservation
NASA Astrophysics Data System (ADS)
Williams, Byron K.; Johnson, Fred A.
2013-06-01
The effectiveness of conservation efforts ultimately depends on the recognition that decision making, and the systems that it is designed to affect, are inherently dynamic and characterized by multiple sources of uncertainty. To cope with these challenges, conservation planners are increasingly turning to the tools of decision analysis, especially dynamic optimization methods. Here we provide a general framework for optimal, dynamic conservation and then explore its capacity for coping with various sources and degrees of uncertainty. In broadest terms, the dynamic optimization problem in conservation is choosing among a set of decision options at periodic intervals so as to maximize some conservation objective over the planning horizon. Planners must account for immediate objective returns, as well as the effect of current decisions on future resource conditions and, thus, on future decisions. Undermining the effectiveness of such a planning process are uncertainties concerning extant resource conditions (partial observability), the immediate consequences of decision choices (partial controllability), the outcomes of uncontrolled, environmental drivers (environmental variation), and the processes structuring resource dynamics (structural uncertainty). Where outcomes from these sources of uncertainty can be described in terms of probability distributions, a focus on maximizing the expected objective return, while taking state-specific actions, is an effective mechanism for coping with uncertainty. When such probability distributions are unavailable or deemed unreliable, a focus on maximizing robustness is likely to be the preferred approach. Here the idea is to choose an action (or state-dependent policy) that achieves at least some minimum level of performance regardless of the (uncertain) outcomes. We provide some examples of how the dynamic optimization problem can be framed for problems involving management of habitat for an imperiled species, conservation of a
NASA Astrophysics Data System (ADS)
Pu, Zhiqiang; Tan, Xiangmin; Fan, Guoliang; Yi, Jianqiang
2014-08-01
Flexible air-breathing hypersonic vehicles feature significant uncertainties which pose huge challenges to robust controller designs. In this paper, four major categories of uncertainties are analyzed, that is, uncertainties associated with flexible effects, aerodynamic parameter variations, external environmental disturbances, and control-oriented modeling errors. A uniform nonlinear uncertainty model is explored for the first three uncertainties which lumps all uncertainties together and consequently is beneficial for controller synthesis. The fourth uncertainty is additionally considered in stability analysis. Based on these analyses, the starting point of the control design is to decompose the vehicle dynamics into five functional subsystems. Then a robust trajectory linearization control (TLC) scheme consisting of five robust subsystem controllers is proposed. In each subsystem controller, TLC is combined with the extended state observer (ESO) technique for uncertainty compensation. The stability of the overall closed-loop system with the four aforementioned uncertainties and additional singular perturbations is analyzed. Particularly, the stability of nonlinear ESO is also discussed from a Liénard system perspective. At last, simulations demonstrate the great control performance and the uncertainty rejection ability of the robust scheme.
A Qualitative Approach to Uncertainty
NASA Astrophysics Data System (ADS)
Ghosh, Sujata; Velázquez-Quesada, Fernando R.
We focus on modelling dual epistemic attitudes (belief-disbelief, knowledge-ignorance, like-dislike) of an agent. This provides an interesting way to express different levels of uncertainties explicitly in the logical language. After introducing a dual modal framework, we discuss the different possibilities of an agent's attitude towards a proposition that can be expressed in this framework, and provide a preliminary look at the dynamics of the situation.
Age models and their uncertainties
NASA Astrophysics Data System (ADS)
Marwan, N.; Rehfeld, K.; Goswami, B.; Breitenbach, S. F. M.; Kurths, J.
2012-04-01
The usefulness of a proxy record is largely dictated by accuracy and precision of its age model, i.e., its depth-age relationship. Only if age model uncertainties are minimized correlations or lead-lag relations can be reliably studied. Moreover, due to different dating strategies (14C, U-series, OSL dating, or counting of varves), dating errors or diverging age models lead to difficulties in comparing different palaeo proxy records. Uncertainties in the age model are even more important if an exact dating is necessary in order to calculate, e.g., data series of flux or rates (like dust flux records, pollen deposition rates). Several statistical approaches exist to handle the dating uncertainties themselves and to estimate the age-depth relationship. Nevertheless, linear interpolation is still the most commonly used method for age modeling. The uncertainties of a certain event at a given time due to the dating errors are often even completely neglected. Here we demonstrate the importance of considering dating errors and implications for the interpretation of variations in palaeo-climate proxy records from stalagmites (U-series dated). We present a simple approach for estimating age models and their confidence levels based on Monte Carlo methods and non-linear interpolation. This novel algorithm also allows for removing age reversals. Our approach delivers a time series of a proxy record with a value range for each age depth also, if desired, on an equidistant time axis. The algorithm is implemented in interactive scripts for use with MATLAB®, Octave, and FreeMat.
Ozone Uncertainties Study Algorithm (OUSA)
NASA Technical Reports Server (NTRS)
Bahethi, O. P.
1982-01-01
An algorithm to carry out sensitivities, uncertainties and overall imprecision studies to a set of input parameters for a one dimensional steady ozone photochemistry model is described. This algorithm can be used to evaluate steady state perturbations due to point source or distributed ejection of H2O, CLX, and NOx, besides, varying the incident solar flux. This algorithm is operational on IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).
Ozone Uncertainties Study Algorithm (OUSA)
NASA Astrophysics Data System (ADS)
Bahethi, O. P.
An algorithm to carry out sensitivities, uncertainties and overall imprecision studies to a set of input parameters for a one dimensional steady ozone photochemistry model is described. This algorithm can be used to evaluate steady state perturbations due to point source or distributed ejection of H2O, CLX, and NOx, besides, varying the incident solar flux. This algorithm is operational on IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).
The uncertainty of the atmospheric integrated water vapour estimated from GNSS observations
NASA Astrophysics Data System (ADS)
Ning, T.; Wang, J.; Elgered, G.; Dick, G.; Wickert, J.; Bradke, M.; Sommer, M.; Querel, R.; Smale, D.
2016-01-01
Within the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) there is a need for an assessment of the uncertainty in the integrated water vapour (IWV) in the atmosphere estimated from ground-based global navigation satellite system (GNSS) observations. All relevant error sources in GNSS-derived IWV are therefore essential to be investigated. We present two approaches, a statistical and a theoretical analysis, for the assessment of the uncertainty of the IWV. The method is valuable for all applications of GNSS IWV data in atmospheric research and weather forecast. It will be implemented to the GNSS IWV data stream for GRUAN in order to assign a specific uncertainty to each data point. In addition, specific recommendations are made to GRUAN on hardware, software, and data processing practices to minimise the IWV uncertainty. By combining the uncertainties associated with the input variables in the estimations of the IWV, we calculated the IWV uncertainties for several GRUAN sites with different weather conditions. The results show a similar relative importance of all uncertainty contributions where the uncertainties in the zenith total delay (ZTD) dominate the error budget of the IWV, contributing over 75 % of the total IWV uncertainty. The impact of the uncertainty associated with the conversion factor between the IWV and the zenith wet delay (ZWD) is proportional to the amount of water vapour and increases slightly for moist weather conditions. The GRUAN GNSS IWV uncertainty data will provide a quantified confidence to be used for the validation of other measurement techniques.
Sonic Boom Pressure Signature Uncertainty Calculation and Propagation to Ground Noise
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Bretl, Katherine N.; Walker, Eric L.; Pinier, Jeremy T.
2015-01-01
The objective of this study was to outline an approach for the quantification of uncertainty in sonic boom measurements and to investigate the effect of various near-field uncertainty representation approaches on ground noise predictions. These approaches included a symmetric versus asymmetric uncertainty band representation and a dispersion technique based on a partial sum Fourier series that allows for the inclusion of random error sources in the uncertainty. The near-field uncertainty was propagated to the ground level, along with additional uncertainty in the propagation modeling. Estimates of perceived loudness were obtained for the various types of uncertainty representation in the near-field. Analyses were performed on three configurations of interest to the sonic boom community: the SEEB-ALR, the 69o DeltaWing, and the LM 1021-01. Results showed that representation of the near-field uncertainty plays a key role in ground noise predictions. Using a Fourier series based dispersion approach can double the amount of uncertainty in the ground noise compared to a pure bias representation. Compared to previous computational fluid dynamics results, uncertainty in ground noise predictions were greater when considering the near-field experimental uncertainty.
NASA Astrophysics Data System (ADS)
Tsai, Frank T.-C.; Elshall, Ahmed S.
2013-09-01
Analysts are often faced with competing propositions for each uncertain model component. How can we judge that we select a correct proposition(s) for an uncertain model component out of numerous possible propositions? We introduce the hierarchical Bayesian model averaging (HBMA) method as a multimodel framework for uncertainty analysis. The HBMA allows for segregating, prioritizing, and evaluating different sources of uncertainty and their corresponding competing propositions through a hierarchy of BMA models that forms a BMA tree. We apply the HBMA to conduct uncertainty analysis on the reconstructed hydrostratigraphic architectures of the Baton Rouge aquifer-fault system, Louisiana. Due to uncertainty in model data, structure, and parameters, multiple possible hydrostratigraphic models are produced and calibrated as base models. The study considers four sources of uncertainty. With respect to data uncertainty, the study considers two calibration data sets. With respect to model structure, the study considers three different variogram models, two geological stationarity assumptions and two fault conceptualizations. The base models are produced following a combinatorial design to allow for uncertainty segregation. Thus, these four uncertain model components with their corresponding competing model propositions result in 24 base models. The results show that the systematic dissection of the uncertain model components along with their corresponding competing propositions allows for detecting the robust model propositions and the major sources of uncertainty.
Interaction of loading pattern and nuclear data uncertainties in reactor core calculations
Klein, M.; Gallner, L.; Krzykacz-Hausmann, B.; Pautz, A.; Velkov, K.; Zwermann, W.
2012-07-01
Along with best-estimate calculations for design and safety analysis, understanding uncertainties is important to determine appropriate design margins. In this framework, nuclear data uncertainties and their propagation to full core calculations are a critical issue. To deal with this task, different error propagation techniques, deterministic and stochastic are currently developed to evaluate the uncertainties in the output quantities. Among these is the sampling based uncertainty and sensitivity software XSUSA which is able to quantify the influence of nuclear data covariance on reactor core calculations. In the present work, this software is used to investigate systematically the uncertainties in the power distributions of two PWR core loadings specified in the OECD UAM-Benchmark suite. With help of a statistical sensitivity analysis, the main contributors to the uncertainty are determined. Using this information a method is studied with which loading patterns of reactor cores can be optimized with regard to minimizing power distribution uncertainties. It is shown that this technique is able to halve the calculation uncertainties of a MOX/UOX core configuration. (authors)
Word learning under infinite uncertainty.
Blythe, Richard A; Smith, Andrew D M; Smith, Kenny
2016-06-01
Language learners must learn the meanings of many thousands of words, despite those words occurring in complex environments in which infinitely many meanings might be inferred by the learner as a word's true meaning. This problem of infinite referential uncertainty is often attributed to Willard Van Orman Quine. We provide a mathematical formalisation of an ideal cross-situational learner attempting to learn under infinite referential uncertainty, and identify conditions under which word learning is possible. As Quine's intuitions suggest, learning under infinite uncertainty is in fact possible, provided that learners have some means of ranking candidate word meanings in terms of their plausibility; furthermore, our analysis shows that this ranking could in fact be exceedingly weak, implying that constraints which allow learners to infer the plausibility of candidate word meanings could themselves be weak. This approach lifts the burden of explanation from 'smart' word learning constraints in learners, and suggests a programme of research into weak, unreliable, probabilistic constraints on the inference of word meaning in real word learners. PMID:26927884
Fuzzy-algebra uncertainty assessment
Cooper, J.A.; Cooper, D.K.
1994-12-01
A significant number of analytical problems (for example, abnormal-environment safety analysis) depend on data that are partly or mostly subjective. Since fuzzy algebra depends on subjective operands, we have been investigating its applicability to these forms of assessment, particularly for portraying uncertainty in the results of PRA (probabilistic risk analysis) and in risk-analysis-aided decision-making. Since analysis results can be a major contributor to a safety-measure decision process, risk management depends on relating uncertainty to only known (not assumed) information. The uncertainties due to abnormal environments are even more challenging than those in normal-environment safety assessments; and therefore require an even more judicious approach. Fuzzy algebra matches these requirements well. One of the most useful aspects of this work is that we have shown the potential for significant differences (especially in perceived margin relative to a decision threshold) between fuzzy assessment and probabilistic assessment based on subtle factors inherent in the choice of probability distribution models. We have also shown the relation of fuzzy algebra assessment to ``bounds`` analysis, as well as a description of how analyses can migrate from bounds analysis to fuzzy-algebra analysis, and to probabilistic analysis as information about the process to be analyzed is obtained. Instructive examples are used to illustrate the points.
Blade tip timing (BTT) uncertainties
NASA Astrophysics Data System (ADS)
Russhard, Pete
2016-06-01
Blade Tip Timing (BTT) is an alternative technique for characterising blade vibration in which non-contact timing probes (e.g. capacitance or optical probes), typically mounted on the engine casing (figure 1), and are used to measure the time at which a blade passes each probe. This time is compared with the time at which the blade would have passed the probe if it had been undergoing no vibration. For a number of years the aerospace industry has been sponsoring research into Blade Tip Timing technologies that have been developed as tools to obtain rotor blade tip deflections. These have been successful in demonstrating the potential of the technology, but rarely produced quantitative data, along with a demonstration of a traceable value for measurement uncertainty. BTT technologies have been developed under a cloak of secrecy by the gas turbine OEM's due to the competitive advantages it offered if it could be shown to work. BTT measurements are sensitive to many variables and there is a need to quantify the measurement uncertainty of the complete technology and to define a set of guidelines as to how BTT should be applied to different vehicles. The data shown in figure 2 was developed from US government sponsored program that bought together four different tip timing system and a gas turbine engine test. Comparisons showed that they were just capable of obtaining measurement within a +/-25% uncertainty band when compared to strain gauges even when using the same input data sets.
Quantification of uncertainties in composites
NASA Technical Reports Server (NTRS)
Liaw, D. G.; Singhal, S. N.; Murthy, P. L. N.; Chamis, Christos C.
1993-01-01
An integrated methodology is developed for computationally simulating the probabilistic composite material properties at all composite scales. The simulation requires minimum input consisting of the description of uncertainties at the lowest scale (fiber and matrix constituents) of the composite and in the fabrication process variables. The methodology allows the determination of the sensitivity of the composite material behavior to all the relevant primitive variables. This information is crucial for reducing the undesirable scatter in composite behavior at its macro scale by reducing the uncertainties in the most influential primitive variables at the micro scale. The methodology is computationally efficient. The computational time required by the methodology described herein is an order of magnitude less than that for Monte Carlo Simulation. The methodology has been implemented into the computer code PICAN (Probabilistic Integrated Composite ANalyzer). The accuracy and efficiency of the methodology/code are demonstrated by simulating the uncertainties in the heat-transfer, thermal, and mechanical properties of a typical laminate and comparing the results with the Monte Carlo simulation method and experimental data. The important observation is that the computational simulation for probabilistic composite mechanics has sufficient flexibility to capture the observed scatter in composite properties.
Communicating Uncertainties for Microwave-Based ESDRs
NASA Astrophysics Data System (ADS)
Wentz, F. J.; Mears, C. A.; Smith, D. K.
2011-12-01
Currently as part of NASA's MEaSUREs program, there is a 25-year archive of consistently-processed and carefully inter-calibrated Earth Science Data Records (ESDR) consisting of geophysical products derived from satellite microwave radiometers. These products include ocean surface temperature and wind speed, total atmospheric water vapor and cloud water, surface rain rate, and deep-layer averages of atmospheric temperature. The product retrievals are based on a radiative transfer model (RTM) for the surface and intervening atmosphere. Thus, the accuracy of the retrieved products depends both on the accuracy of the RTM, the accuracy of the measured brightness temperatures that serve as inputs to the retrieval algorithm, and on the accuracy of any ancillary data used to adjust for unmeasured geophysical conditions. In addition, for gridded products that are averages over time or space, sampling error can become important. It is important not only to calculate the uncertainties associated with the ESDRs but also to effectively communicate these uncertainties to the Users in a way that is helpful for their particular set of applications. This is a challenging task that will require a multi-faceted approach consisting of (1) error bars assigned to each retrieval, (2) detailed interactive validation reports, and (3) peer-reviewed scientific papers on long-term trends. All of this information needs to be linked to the ESDR's in a manner that facilitates integration into the User's applications. Our talk will discuss the progress we are making in implementing these approaches.
Probabilistic simulation of uncertainties in thermal structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Shiao, Michael
1990-01-01
Development of probabilistic structural analysis methods for hot structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) blade temperature, pressure, and torque of the Space Shuttle Main Engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; (3) evaluation of the failure probability; (4) reliability and risk-cost assessment, and (5) an outline of an emerging approach for eventual hot structures certification. Collectively, the results demonstrate that the structural durability/reliability of hot structural components can be effectively evaluated in a formal probabilistic framework. In addition, the approach can be readily extended to computationally simulate certification of hot structures for aerospace environments.
Quantifying Global Uncertainties in a Simple Microwave Rainfall Algorithm
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Berg, Wesley; Thomas-Stahle, Jody; Masunaga, Hirohiko
2006-01-01
While a large number of methods exist in the literature for retrieving rainfall from passive microwave brightness temperatures, little has been written about the quantitative assessment of the expected uncertainties in these rainfall products at various time and space scales. The latter is the result of two factors: sparse validation sites over most of the world's oceans, and algorithm sensitivities to rainfall regimes that cause inconsistencies against validation data collected at different locations. To make progress in this area, a simple probabilistic algorithm is developed. The algorithm uses an a priori database constructed from the Tropical Rainfall Measuring Mission (TRMM) radar data coupled with radiative transfer computations. Unlike efforts designed to improve rainfall products, this algorithm takes a step backward in order to focus on uncertainties. In addition to inversion uncertainties, the construction of the algorithm allows errors resulting from incorrect databases, incomplete databases, and time- and space-varying databases to be examined. These are quantified. Results show that the simple algorithm reduces errors introduced by imperfect knowledge of precipitation radar (PR) rain by a factor of 4 relative to an algorithm that is tuned to the PR rainfall. Database completeness does not introduce any additional uncertainty at the global scale, while climatologically distinct space/time domains add approximately 25% uncertainty that cannot be detected by a radiometer alone. Of this value, 20% is attributed to changes in cloud morphology and microphysics, while 5% is a result of changes in the rain/no-rain thresholds. All but 2%-3% of this variability can be accounted for by considering the implicit assumptions in the algorithm. Additional uncertainties introduced by the details of the algorithm formulation are not quantified in this study because of the need for independent measurements that are beyond the scope of this paper. A validation strategy
Jones, D.W.
2002-05-16
In previous reports, we have identified two potentially important issues, solutions to which would increase the attractiveness of DOE-developed technologies in commercial buildings energy systems. One issue concerns the fact that in addition to saving energy, many new technologies offer non-energy benefits that contribute to building productivity (firm profitability). The second issue is that new technologies are typically unproven in the eyes of decision makers and must bear risk premiums that offset cost advantages resulting from laboratory calculations. Even though a compelling case can be made for the importance of these issues, for building decision makers to incorporate them in business decisions and for DOE to use them in R&D program planning there must be robust empirical evidence of their existence and size. This paper investigates how such measurements could be made and offers recommendations as to preferred options. There is currently little systematic information on either of these concepts in the literature. Of the two there is somewhat more information on non-energy benefits, but little as regards office buildings. Office building productivity impacts can be observed casually, but must be estimated statistically, because buildings have many interacting attributes and observations based on direct behavior can easily confuse the process of attribution. For example, absenteeism can be easily observed. However, absenteeism may be down because a more healthy space conditioning system was put into place, because the weather was milder, or because firm policy regarding sick days had changed. There is also a general dearth of appropriate information for purposes of estimation. To overcome these difficulties, we propose developing a new data base and applying the technique of hedonic price analysis. This technique has been used extensively in the analysis of residential dwellings. There is also a literature on its application to commercial and industrial
Coping with model uncertainty in data assimilation using optimal mass transport
NASA Astrophysics Data System (ADS)
Ning, L.; Carli, F. P.; Ebtehaj, M.; Foufoula-Georgiou, E.; Georgiou, T.
2013-12-01
Most data assimilation methods address the problem of optimally combining model predictions with observations in the presence of zero-mean Gaussian random errors. However, in many hydro-meteorological applications uncertainty in model parameters and/or model structure often result in systematic errors (bias). Examples include the prediction of precipitation or land surface fluxes at the wrong location and/or timing due to a drift in the model, unknown initial conditions, or non-additive error amplification. Existing bias-aware data assimilation methods require characterization of the bias in terms of a well-defined set of parameters or removal of bias, which is not always feasible. Here we present a new variational data assimilation framework to cope with model bias in a non-parametric fashion via an appropriate 'regularization' of the state evolution dynamics. In the context of weak-constraint 4D-VAR, our method can be seen as enforcing a minimum nonlinear distance (regularization or correction) in the evolution of the state so as to reconcile measurements with errors in the model dynamics. While a quadratic functional is typically sufficient to quantify errors in measurements, errors in state evolution is most naturally quantified by a transportation metric (Wasserstein metric) originating in the theory of Optimal Mass Transport (OMT). The proposed framework allows the use of additional regularization functionals, such as the L1-norm regularization of the state in an appropriately chosen domain, as recently introduced by the authors for states that exhibit sparsity and non-Gaussian priors, such as precipitation and soil moisture. We demonstrate the performance of the proposed method using as an example the 1-D and 2-D advection diffusion equation with systematic errors in the velocity and diffusivity parameters. Extension to real world data assimilation settings is currently under way.
Uncertainty Analysis and Order-by-Order Optimization of Chiral Nuclear Interactions
NASA Astrophysics Data System (ADS)
Carlsson, B. D.; Ekström, A.; Forssén, C.; Strömberg, D. Fahlin; Jansen, G. R.; Lilja, O.; Lindby, M.; Mattsson, B. A.; Wendt, K. A.
2016-01-01
Chiral effective field theory (χ EFT ) provides a systematic approach to describe low-energy nuclear forces. Moreover, χ EFT is able to provide well-founded estimates of statistical and systematic uncertainties—although this unique advantage has not yet been fully exploited. We fill this gap by performing an optimization and statistical analysis of all the low-energy constants (LECs) up to next-to-next-to-leading order. Our optimization protocol corresponds to a simultaneous fit to scattering and bound-state observables in the pion-nucleon, nucleon-nucleon, and few-nucleon sectors, thereby utilizing the full model capabilities of χ EFT . Finally, we study the effect on other observables by demonstrating forward-error-propagation methods that can easily be adopted by future works. We employ mathematical optimization and implement automatic differentiation to attain efficient and machine-precise first- and second-order derivatives of the objective function with respect to the LECs. This is also vital for the regression analysis. We use power-counting arguments to estimate the systematic uncertainty that is inherent to χ EFT , and we construct chiral interactions at different orders with quantified uncertainties. Statistical error propagation is compared with Monte Carlo sampling, showing that statistical errors are, in general, small compared to systematic ones. In conclusion, we find that a simultaneous fit to different sets of data is critical to (i) identify the optimal set of LECs, (ii) capture all relevant correlations, (iii) reduce the statistical uncertainty, and (iv) attain order-by-order convergence in χ EFT . Furthermore, certain systematic uncertainties in the few-nucleon sector are shown to get substantially magnified in the many-body sector, in particular when varying the cutoff in the chiral potentials. The methodology and results presented in this paper open a new frontier for uncertainty quantification in ab initio nuclear theory.
NASA Astrophysics Data System (ADS)
Wilbert, Stefan; Kleindiek, Stefan; Nouri, Bijan; Geuder, Norbert; Habte, Aron; Schwandt, Marko; Vignola, Frank
2016-05-01
Concentrating solar power projects require accurate direct normal irradiance (DNI) data including uncertainty specifications for plant layout and cost calculations. Ground measured data are necessary to obtain the required level of accuracy and are often obtained with Rotating Shadowband Irradiometers (RSI) that use photodiode pyranometers and correction functions to account for systematic effects. The uncertainty of Si-pyranometers has been investigated, but so far basically empirical studies were published or decisive uncertainty influences had to be estimated based on experience in analytical studies. One of the most crucial estimated influences is the spectral irradiance error because Si-photodiode-pyranometers only detect visible and color infrared radiation and have a spectral response that varies strongly within this wavelength interval. Furthermore, analytic studies did not discuss the role of correction functions and the uncertainty introduced by imperfect shading. In order to further improve the bankability of RSI and Si-pyranometer data, a detailed uncertainty analysis following the Guide to the Expression of Uncertainty in Measurement (GUM) has been carried out. The study defines a method for the derivation of the spectral error and spectral uncertainties and presents quantitative values of the spectral and overall uncertainties. Data from the PSA station in southern Spain was selected for the analysis. Average standard uncertainties for corrected 10 min data of 2 % for global horizontal irradiance (GHI), and 2.9 % for DNI (for GHI and DNI over 300 W/m²) were found for the 2012 yearly dataset when separate GHI and DHI calibration constants were used. Also the uncertainty in 1 min resolution was analyzed. The effect of correction functions is significant. The uncertainties found in this study are consistent with results of previous empirical studies.
Uncertainty Quantification in Climate Modeling
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Safta, C.; Berry, R.; Debusschere, B.; Najm, H.
2011-12-01
We address challenges that sensitivity analysis and uncertainty quantification methods face when dealing with complex computational models. In particular, climate models are computationally expensive and typically depend on a large number of input parameters. We consider the Community Land Model (CLM), which consists of a nested computational grid hierarchy designed to represent the spatial heterogeneity of the land surface. Each computational cell can be composed of multiple land types, and each land type can incorporate one or more sub-models describing the spatial and depth variability. Even for simulations at a regional scale, the computational cost of a single run is quite high and the number of parameters that control the model behavior is very large. Therefore, the parameter sensitivity analysis and uncertainty propagation face significant difficulties for climate models. This work employs several algorithmic avenues to address some of the challenges encountered by classical uncertainty quantification methodologies when dealing with expensive computational models, specifically focusing on the CLM as a primary application. First of all, since the available climate model predictions are extremely sparse due to the high computational cost of model runs, we adopt a Bayesian framework that effectively incorporates this lack-of-knowledge as a source of uncertainty, and produces robust predictions with quantified uncertainty even if the model runs are extremely sparse. In particular, we infer Polynomial Chaos spectral expansions that effectively encode the uncertain input-output relationship and allow efficient propagation of all sources of input uncertainties to outputs of interest. Secondly, the predictability analysis of climate models strongly suffers from the curse of dimensionality, i.e. the large number of input parameters. While single-parameter perturbation studies can be efficiently performed in a parallel fashion, the multivariate uncertainty analysis
Systematics and the biodiversity crisis
Savage, J.M.
1995-11-01
This article discusses the importance of systematics in evaluating the global biodiversity crisis. Topics covered include the following: what systematic biology is; the diversity of species and higher taxa; biodiversity undersiege; systematics and conservation; systematics and global climatic change. 28 refs., 2 figs., 1 tab.
Two new kinds of uncertainty relations
NASA Technical Reports Server (NTRS)
Uffink, Jos
1994-01-01
We review a statistical-geometrical and a generalized entropic approach to the uncertainty principle. Both approaches provide a strengthening and generalization of the standard Heisenberg uncertainty relations, but in different directions.
Minimal length uncertainty and accelerating universe
NASA Astrophysics Data System (ADS)
Farmany, A.; Mortazavi, S. S.
2016-06-01
In this paper, minimal length uncertainty is used as a constraint to solve the Friedman equation. It is shown that, based on the minimal length uncertainty principle, the Hubble scale is decreasing which corresponds to an accelerating universe.
NASA Astrophysics Data System (ADS)
Car, Nicholas; Cox, Simon; Fitch, Peter
2015-04-01
With earth-science datasets increasingly being published to enable re-use in projects disassociated from the original data acquisition or generation, there is an urgent need for associated metadata to be connected, in order to guide their application. In particular, provenance traces should support the evaluation of data quality and reliability. However, while standards for describing provenance are emerging (e.g. PROV-O), these do not include the necessary statistical descriptors and confidence assessments. UncertML has a mature conceptual model that may be used to record uncertainty metadata. However, by itself UncertML does not support the representation of uncertainty of multi-part datasets, and provides no direct way of associating the uncertainty information - metadata in relation to a dataset - with dataset objects.We present a method to address both these issues by combining UncertML with PROV-O, and delivering resulting uncertainty-enriched provenance traces through the Linked Data API. UncertProv extends the PROV-O provenance ontology with an RDF formulation of the UncertML conceptual model elements, adds further elements to support uncertainty representation without a conceptual model and the integration of UncertML through links to documents. The Linked ID API provides a systematic way of navigating from dataset objects to their UncertProv metadata and back again. The Linked Data API's 'views' capability enables access to UncertML and non-UncertML uncertainty metadata representations for a dataset. With this approach, it is possible to access and navigate the uncertainty metadata associated with a published dataset using standard semantic web tools, such as SPARQL queries. Where the uncertainty data follows the UncertML model it can be automatically interpreted and may also support automatic uncertainty propagation . Repositories wishing to enable uncertainty propagation for all datasets must ensure that all elements that are associated with uncertainty
Integrating uncertainty and interindividual variability in environmental risk assessment.
Bogen, K T; Spear, R C
1987-12-01
An integrated, quantitative approach to incorporating both uncertainty and interindividual variability into risk prediction models is described. Individual risk R is treated as a variable distributed in both an uncertainty dimension and a variability dimension, whereas population risk I (the number of additional cases caused by R) is purely uncertain. I is shown to follow a compound Poisson-binomial distribution, which in low-level risk contexts can often be approximated well by a corresponding compound Poisson distribution. The proposed analytic framework is illustrated with an application to cancer risk assessment for a California population exposed to 1,2-dibromo-3-chloropropane from ground water. PMID:3444930
Brief review of uncertainty quantification for particle image velocimetry
NASA Astrophysics Data System (ADS)
Farias, M. H.; Teixeira, R. S.; Koiller, J.; Santos, A. M.
2016-07-01
Metrological studies for particle image velocimetry (PIV) are recent in literature. An attempt to evaluate the uncertainty quantifications (UQ) of the PIV velocity field are in evidence. Therefore, a short review on main sources of uncertainty in PIV and available methodologies for its quantification are presented. In addition, the potential of some mathematical techniques, coming from the area of geometric mechanics and control, that could interest the fluids UQ community are highlighted as good possibilities. “We must measure what is measurable and make measurable what cannot be measured” (Galileo)
Measuring, Estimating, and Deciding under Uncertainty.
Michel, Rolf
2016-03-01
The problem of uncertainty as a general consequence of incomplete information and the approach to quantify uncertainty in metrology is addressed. Then, this paper discusses some of the controversial aspects of the statistical foundation of the concepts of uncertainty in measurements. The basics of the ISO Guide to the Expression of Uncertainty in Measurement as well as of characteristic limits according to ISO 11929 are described and the needs for a revision of the latter standard are explained. PMID:26688360
Optimization of environmental water purchases with uncertainty
NASA Astrophysics Data System (ADS)
Hollinshead, Sarah P.; Lund, Jay R.
2006-08-01
Water managers are turning increasingly to market solutions to meet new environmental demands for water in fully allocated systems. This paper presents a three-stage probabilistic optimization model that identifies least cost strategies for staged seasonal water purchases for an environmental water acquisition program given hydrologic, operational, and biological uncertainties. Multistage linear programming is used to minimize the expected cost of long-term, spot, and option water purchases used to meet uncertain environmental demands. Results prescribe the location, timing, and type of optimal water purchases and illustrate how least cost strategies change as information becomes available during the year. Results also provide sensitivity analysis, including shadow values that estimate the expected cost of additional dedicated environmental water. The model's application to California's Environmental Water Account is presented with a discussion of its utility for planning and policy purposes. Model limitations and sensitivity analysis are discussed, as are operational and research recommendations.
Risk Analysis and Uncertainty: Implications for Counselling
ERIC Educational Resources Information Center
Hassenzahl, David
2004-01-01
Over the past two decades, the risk analysis community has made substantial advances in understanding and describing uncertainty. Uncertainty is ubiquitous, complex, both quantitative and qualitative in nature, and often irreducible. Uncertainty thus creates a challenge when using risk analysis to evaluate the rationality of group and individual…
Quantum mechanics and the generalized uncertainty principle
Bang, Jang Young; Berger, Micheal S.
2006-12-15
The generalized uncertainty principle has been described as a general consequence of incorporating a minimal length from a theory of quantum gravity. We consider a simple quantum mechanical model where the operator corresponding to position has discrete eigenvalues and show how the generalized uncertainty principle results for minimum uncertainty wave packets.
Regarding Uncertainty in Teachers and Teaching
ERIC Educational Resources Information Center
Helsing, Deborah
2007-01-01
The literature on teacher uncertainty suggests that it is a significant and perhaps inherent feature of teaching. Yet there is disagreement about the effects of these uncertainties on teachers as well as about the ways that teachers should regard them. Recognition of uncertainties can be viewed alternatively as a liability or an asset to effective…
Shannon Revisited: Information in Terms of Uncertainty.
ERIC Educational Resources Information Center
Cole, Charles
1993-01-01
Discusses the meaning of information in terms of Shannon's mathematical theory of communication and the concept of uncertainty. The uncertainty associated with the transmission of the signal is argued to have more significance for information science than the uncertainty associated with the selection of a message from a set of possible messages.…
NASA Astrophysics Data System (ADS)
Morávek, Zdenek; Rickhey, Mark; Hartmann, Matthias; Bogner, Ludwig
2009-08-01
Treatment plans for intensity-modulated proton therapy may be sensitive to some sources of uncertainty. One source is correlated with approximations of the algorithms applied in the treatment planning system and another one depends on how robust the optimization is with regard to intra-fractional tissue movements. The irradiated dose distribution may substantially deteriorate from the planning when systematic errors occur in the dose algorithm. This can influence proton ranges and lead to improper modeling of the Braggpeak degradation in heterogeneous structures or particle scatter or the nuclear interaction part. Additionally, systematic errors influence the optimization process, which leads to the convergence error. Uncertainties with regard to organ movements are related to the robustness of a chosen beam setup to tissue movements on irradiation. We present the inverse Monte Carlo treatment planning system IKO for protons (IKO-P), which tries to minimize the errors described above to a large extent. Additionally, robust planning is introduced by beam angle optimization according to an objective function penalizing paths representing strongly longitudinal and transversal tissue heterogeneities. The same score function is applied to optimize spot planning by the selection of a robust choice of spots. As spots can be positioned on different energy grids or on geometric grids with different space filling factors, a variety of grids were used to investigate the influence on the spot-weight distribution as a result of optimization. A tighter distribution of spot weights was assumed to result in a more robust plan with respect to movements. IKO-P is described in detail and demonstrated on a test case and a lung cancer case as well. Different options of spot planning and grid types are evaluated, yielding a superior plan quality with dose delivery to the spots from all beam directions over optimized beam directions. This option shows a tighter spot-weight distribution
Hammonds, J.S.; Hoffman, F.O.; Bartell, S.M.
1994-12-01
This report presents guidelines for evaluating uncertainty in mathematical equations and computer models applied to assess human health and environmental risk. Uncertainty analyses involve the propagation of uncertainty in model parameters and model structure to obtain confidence statements for the estimate of risk and identify the model components of dominant importance. Uncertainty analyses are required when there is no a priori knowledge about uncertainty in the risk estimate and when there is a chance that the failure to assess uncertainty may affect the selection of wrong options for risk reduction. Uncertainty analyses are effective when they are conducted in an iterative mode. When the uncertainty in the risk estimate is intolerable for decision-making, additional data are acquired for the dominant model components that contribute most to uncertainty. This process is repeated until the level of residual uncertainty can be tolerated. A analytical and numerical methods for error propagation are presented along with methods for identifying the most important contributors to uncertainty. Monte Carlo simulation with either Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) is proposed as the most robust method for propagating uncertainty through either simple or complex models. A distinction is made between simulating a stochastically varying assessment endpoint (i.e., the distribution of individual risks in an exposed population) and quantifying uncertainty due to lack of knowledge about a fixed but unknown quantity (e.g., a specific individual, the maximally exposed individual, or the mean, median, or 95%-tile of the distribution of exposed individuals). Emphasis is placed on the need for subjective judgement to quantify uncertainty when relevant data are absent or incomplete.
WE-B-19A-01: SRT II: Uncertainties in SRT
Dieterich, S; Schlesinger, D; Geneser, S
2014-06-15
SRS delivery has undergone major technical changes in the last decade, transitioning from predominantly frame-based treatment delivery to imageguided, frameless SRS. It is important for medical physicists working in SRS to understand the magnitude and sources of uncertainty involved in delivering SRS treatments for a multitude of technologies (Gamma Knife, CyberKnife, linac-based SRS and protons). Sources of SRS planning and delivery uncertainty include dose calculation, dose fusion, and intra- and inter-fraction motion. Dose calculations for small fields are particularly difficult because of the lack of electronic equilibrium and greater effect of inhomogeneities within and near the PTV. Going frameless introduces greater setup uncertainties that allows for potentially increased intra- and interfraction motion, The increased use of multiple imaging modalities to determine the tumor volume, necessitates (deformable) image and contour fusion, and the resulting uncertainties introduced in the image registration process further contribute to overall treatment planning uncertainties. Each of these uncertainties must be quantified and their impact on treatment delivery accuracy understood. If necessary, the uncertainties may then be accounted for during treatment planning either through techniques to make the uncertainty explicit, or by the appropriate addition of PTV margins. Further complicating matters, the statistics of 1-5 fraction SRS treatments differ from traditional margin recipes relying on Poisson statistics. In this session, we will discuss uncertainties introduced during each step of the SRS treatment planning and delivery process and present margin recipes to appropriately account for such uncertainties. Learning Objectives: To understand the major contributors to the total delivery uncertainty in SRS for Gamma Knife, CyberKnife, and linac-based SRS. Learn the various uncertainties introduced by image fusion, deformable image registration, and contouring
Uncertainty Analysis of Simulated Hydraulic Fracturing
NASA Astrophysics Data System (ADS)
Chen, M.; Sun, Y.; Fu, P.; Carrigan, C. R.; Lu, Z.
2012-12-01
Artificial hydraulic fracturing is being used widely to stimulate production of oil, natural gas, and geothermal reservoirs with low natural permeability. Optimization of field design and operation is limited by the incomplete characterization of the reservoir, as well as the complexity of hydrological and geomechanical processes that control the fracturing. Thus, there are a variety of uncertainties associated with the pre-existing fracture distribution, rock mechanics, and hydraulic-fracture engineering that require evaluation of their impact on the optimized design. In this study, a multiple-stage scheme was employed to evaluate the uncertainty. We first define the ranges and distributions of 11 input parameters that characterize the natural fracture topology, in situ stress, geomechanical behavior of the rock matrix and joint interfaces, and pumping operation, to cover a wide spectrum of potential conditions expected for a natural reservoir. These parameters were then sampled 1,000 times in an 11-dimensional parameter space constrained by the specified ranges using the Latin-hypercube method. These 1,000 parameter sets were fed into the fracture simulators, and the outputs were used to construct three designed objective functions, i.e. fracture density, opened fracture length and area density. Using PSUADE, three response surfaces (11-dimensional) of the objective functions were developed and global sensitivity was analyzed to identify the most sensitive parameters for the objective functions representing fracture connectivity, which are critical for sweep efficiency of the recovery process. The second-stage high resolution response surfaces were constructed with dimension reduced to the number of the most sensitive parameters. An additional response surface with respect to the objective function of the fractal dimension for fracture distributions was constructed in this stage. Based on these response surfaces, comprehensive uncertainty analyses were conducted
Assessment of parametric uncertainty for groundwater reactive transport modeling,
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood
A General Uncertainty Quantification Methodology for Cloud Microphysical Property Retrievals
NASA Astrophysics Data System (ADS)
Tang, Q.; Xie, S.; Chen, X.; Zhao, C.
2014-12-01
The US Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program provides long-term (~20 years) ground-based cloud remote sensing observations. However, there are large uncertainties in the retrieval products of cloud microphysical properties based on the active and/or passive remote-sensing measurements. To address this uncertainty issue, a DOE Atmospheric System Research scientific focus study, Quantification of Uncertainties in Cloud Retrievals (QUICR), has been formed. In addition to an overview of recent progress of QUICR, we will demonstrate the capacity of an observation-based general uncertainty quantification (UQ) methodology via the ARM Climate Research Facility baseline cloud microphysical properties (MICROBASE) product. This UQ method utilizes the Karhunen-Loéve expansion (KLE) and Central Limit Theorems (CLT) to quantify the retrieval uncertainties from observations and algorithm parameters. The input perturbations are imposed on major modes to take into account the cross correlations between input data, which greatly reduces the dimension of random variables (up to a factor of 50) and quantifies vertically resolved full probability distribution functions of retrieved quantities. Moreover, this KLE/CLT approach has the capability of attributing the uncertainties in the retrieval output to individual uncertainty source and thus sheds light on improving the retrieval algorithm and observations. We will present the results of a case study for the ice water content at the Southern Great Plains during an intensive observing period on March 9, 2000. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
IAEA CRP on HTGR Uncertainty Analysis: Benchmark Definition and Test Cases
Gerhard Strydom; Frederik Reitsma; Hans Gougar; Bismark Tyobeka; Kostadin Ivanov
2012-11-01
Uncertainty and sensitivity studies are essential elements of the reactor simulation code verification and validation process. Although several international uncertainty quantification activities have been launched in recent years in the LWR, BWR and VVER domains (e.g. the OECD/NEA BEMUSE program [1], from which the current OECD/NEA LWR Uncertainty Analysis in Modelling (UAM) benchmark [2] effort was derived), the systematic propagation of uncertainties in cross-section, manufacturing and model parameters for High Temperature Reactor (HTGR) designs has not been attempted yet. This paper summarises the scope, objectives and exercise definitions of the IAEA Coordinated Research Project (CRP) on HTGR UAM [3]. Note that no results will be included here, as the HTGR UAM benchmark was only launched formally in April 2012, and the specification is currently still under development.
Uncertainty Quantification of the Eigensystem Realization Algorithm Using the Unscented Transform
NASA Astrophysics Data System (ADS)
Diz, Martin; Majji, Manoranjan; Singla, Puneet
2013-12-01
The Unscented Transformation (UT) is used in this paper to provide a systematic procedure to perform identified model parameter uncertainty quantification. The statistics of signal space uncertainties (input output I/O experimental data) are first mapped into the I/O model space using the Observer Kalman/Filter Identification (OKID) theory. The statistics of the state space model parameters are then computed by an application of the unscented transform on the statistics of the system Markov parameters. Numerical simulations and comparisons with Monte-Carlo error statistics demonstrate the efficacy of UT for model parameter uncertainty calculations presented in this paper. The algorithm is applied to quantify uncertainties associated with the pitch dynamics model of a 2 degree of freedom helicopter model. It is also applied to quantify the error statistics associated with the dynamics of a flexible beam.
Development and Uncertainty Analysis of an Automatic Testing System for Diffusion Pump Performance
NASA Astrophysics Data System (ADS)
Zhang, S. W.; Liang, W. S.; Zhang, Z. J.
A newly developed automatic testing system used in laboratory for diffusion pump performance measurement is introduced in this paper. By using two optical fiber sensors to indicate the oil level in glass-buret and a needle valve driven by a stepper motor to regulate the pressure in the test dome, the system can automatically test the ultimate pressure and pumping speed of a diffusion pump in accordance with ISO 1608. The uncertainty analysis theory is applied to analyze pumping speed measurement results. Based on the test principle and system structure, it is studied how much influence each component and test step contributes to the final uncertainty. According to differential method, the mathematical model for systematic uncertainty transfer function is established. Finally, by case study, combined uncertainties of manual operation and automatic operation are compared with each other (6.11% and 5.87% respectively). The reasonableness and practicality of this newly developed automatic testing system is proved.
Systematic Underreproduction of Time Is Independent of Judgment Certainty
Riemer, Martin; Rhodes, Darren; Wolbers, Thomas
2016-01-01
We recently proposed that systematic underreproduction of time is caused by a general judgment bias towards earlier responses, instead of reflecting a genuine misperception of temporal intervals. Here we tested whether this bias can be explained by the uncertainty associated with temporal judgments. We applied transcranial magnetic stimulation (TMS) to inhibit neuronal processes in the right posterior parietal cortex (PPC) and tested its effects on time discrimination and reproduction tasks. The results show increased certainty for discriminative time judgments after PPC inhibition. They suggest that the right PPC plays an inhibitory role for time perception, possibly by mediating the multisensory integration between temporal stimuli and other quantities. Importantly, this increased judgment certainty had no influence on the degree of temporal underreproduction. We conclude that the systematic underreproduction of time is not caused by uncertainty for temporal judgments. PMID:26881127
Distinguishing Systemic from Systematic.
ERIC Educational Resources Information Center
Carr, Alison A.
1996-01-01
Describes the difference between systemic and systematic as they relate to school reform and instructional design. Highlights include a history of systems theory; systems engineering; instructional systems design; systemic versus reductionist thinking; social systems; and systemic change in education, including power relationships. (LRW)
Nuclear Charge Radii Systematics
Marinova, Krassimira
2015-09-15
This paper is a brief overview of the existing systematics on nuclear mean square charge radii, obtained by a combined analysis of data from different types of experiment. The various techniques yielding data on nuclear charge radii are summarized. Their specific feature complexities and the accuracy and precision of the obtained information are also discussed.
Connectionist Semantic Systematicity
ERIC Educational Resources Information Center
Frank, Stefan L.; Haselager, Willem F. G.; van Rooij, Iris
2009-01-01
Fodor and Pylyshyn [Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. "Cognition," 28, 3-71] argue that connectionist models are not able to display systematicity other than by implementing a classical symbol system. This claim entails that connectionism cannot compete with the classical…
REDD+ emissions estimation and reporting: dealing with uncertainty
NASA Astrophysics Data System (ADS)
Pelletier, Johanne; Martin, Davy; Potvin, Catherine
2013-09-01
The United Nations Framework Convention on Climate Change (UNFCCC) defined the technical and financial modalities of policy approaches and incentives to reduce emissions from deforestation and forest degradation in developing countries (REDD+). Substantial technical challenges hinder precise and accurate estimation of forest-related emissions and removals, as well as the setting and assessment of reference levels. These challenges could limit country participation in REDD+, especially if REDD+ emission reductions were to meet quality standards required to serve as compliance grade offsets for developed countries’ emissions. Using Panama as a case study, we tested the matrix approach proposed by Bucki et al (2012 Environ. Res. Lett. 7 024005) to perform sensitivity and uncertainty analysis distinguishing between ‘modelling sources’ of uncertainty, which refers to model-specific parameters and assumptions, and ‘recurring sources’ of uncertainty, which refers to random and systematic errors in emission factors and activity data. The sensitivity analysis estimated differences in the resulting fluxes ranging from 4.2% to 262.2% of the reference emission level. The classification of fallows and the carbon stock increment or carbon accumulation of intact forest lands were the two key parameters showing the largest sensitivity. The highest error propagated using Monte Carlo simulations was caused by modelling sources of uncertainty, which calls for special attention to ensure consistency in REDD+ reporting which is essential for securing environmental integrity. Due to the role of these modelling sources of uncertainty, the adoption of strict rules for estimation and reporting would favour comparability of emission reductions between countries. We believe that a reduction of the bias in emission factors will arise, among other things, from a globally concerted effort to improve allometric equations for tropical forests. Public access to datasets and methodology
Uncertainties in debris growth predictions
McKnight, D.S. )
1991-01-10
The growth of artificial space debris in Earth orbit may pose a significant hazard to satellites in the future though the collision hazard to operational spacecraft is presently manageable. The stability of the environment is dependent on the growth of debris from satellite deployment, mission operations and fragmentation events. Growth trends of the trackable on-orbit population are investigated highlighting the complexities and limitations of using the data that supports this modeling. The debris produced by breakup events may be a critical aspect of the present and future environment. As a result, growth predictions produced using existing empirically-based models may have large, possibly even unacceptable, uncertainties.
Buoyancy contribution to uncertainty of mass, conventional mass and force
NASA Astrophysics Data System (ADS)
Malengo, Andrea; Bich, Walter
2016-04-01
The conventional mass is a useful concept introduced to reduce the impact of the buoyancy correction in everyday mass measurements, thus avoiding in most cases its accurate determination, necessary in measurements of ‘true’ mass. Although usage of conventional mass is universal and standardized, the concept is considered as a sort of second-choice tool, to be avoided in high-accuracy applications. In this paper we show that this is a false belief, by elucidating the role played by covariances between volume and mass and between volume and conventional mass at the various stages of the dissemination chain and in the relationship between the uncertainties of mass and conventional mass. We arrive at somewhat counter-intuitive results: the volume of the transfer standard plays a comparatively minor role in the uncertainty budget of the standard under calibration. In addition, conventional mass is preferable to mass in normal, in-air operation, as its uncertainty is smaller than that of mass, if covariance terms are properly taken into account, and the uncertainty over-stating (typically) resulting from neglecting them is less severe than that (always) occurring with mass. The same considerations hold for force. In this respect, we show that the associated uncertainty is the same using mass or conventional mass, and, again, that the latter is preferable if covariance terms are neglected.
Uncertainty of temperature measurement with thermal cameras
NASA Astrophysics Data System (ADS)
Chrzanowski, Krzysztof; Matyszkiel, Robert; Fischer, Joachim; Barela, Jaroslaw
2001-06-01
All main international metrological organizations are proposing a parameter called uncertainty as a measure of the accuracy of measurements. A mathematical model that enables the calculations of uncertainty of temperature measurement with thermal cameras is presented. The standard uncertainty or the expanded uncertainty of temperature measurement of the tested object can be calculated when the bounds within which the real object effective emissivity (epsilon) r, the real effective background temperature Tba(r), and the real effective atmospheric transmittance (tau) a(r) are located and can be estimated; and when the intrinsic uncertainty of the thermal camera and the relative spectral sensitivity of the thermal camera are known.
Uncertainty Calculation for Spectral-Responsivity Measurements
Lehman, John H; Wang, CM; Dowell, Marla L; Hadler, Joshua A
2009-01-01
This paper discusses a procedure for measuring the absolute spectral responsivity of optical-fiber power meters and computation of the calibration uncertainty. The procedure reconciles measurement results associated with a monochromator-based measurement system with those obtained with laser sources coupled with optical fiber. Relative expanded uncertainties based on the methods from the Guide to the Expression of Uncertainty in Measurement and from Supplement 1 to the “Guide to the Expression of Uncertainty in Measurement”-Propagation of Distributions using a Monte Carlo Method are derived and compared. An example is used to illustrate the procedures and calculation of uncertainties.
NASA Astrophysics Data System (ADS)
Dupuis, Jan; Kuhlmann, Heiner
2014-06-01
Triangulation-based range sensors, e.g. laser line scanners, are used for high-precision geometrical acquisition of free-form surfaces, for reverse engineering tasks or quality management. In contrast to classical tactile measuring devices, these scanners generate a great amount of 3D-points in a short period of time and enable the inspection of soft materials. However, for accurate measurements, a number of aspects have to be considered to minimize measurement uncertainties. This study outlines possible sources of uncertainties during the measurement process regarding the scanner warm-up, the impact of laser power and exposure time as well as scanner’s reaction to areas of discontinuity, e.g. edges. All experiments were performed using a fixed scanner position to avoid effects resulting from imaging geometry. The results show a significant dependence of measurement accuracy on the correct adaption of exposure time as a function of surface reflectivity and laser power. Additionally, it is illustrated that surface structure as well as edges can cause significant systematic uncertainties.
ON THE ESTIMATION OF RANDOM UNCERTAINTIES OF STAR FORMATION HISTORIES
Dolphin, Andrew E.
2013-09-20
The standard technique for measurement of random uncertainties of star formation histories (SFHs) is the bootstrap Monte Carlo, in which the color-magnitude diagram (CMD) is repeatedly resampled. The variation in SFHs measured from the resampled CMDs is assumed to represent the random uncertainty in the SFH measured from the original data. However, this technique systematically and significantly underestimates the uncertainties for times in which the measured star formation rate is low or zero, leading to overly (and incorrectly) high confidence in that measurement. This study proposes an alternative technique, the Markov Chain Monte Carlo (MCMC), which samples the probability distribution of the parameters used in the original solution to directly estimate confidence intervals. While the most commonly used MCMC algorithms are incapable of adequately sampling a probability distribution that can involve thousands of highly correlated dimensions, the Hybrid Monte Carlo algorithm is shown to be extremely effective and efficient for this particular task. Several implementation details, such as the handling of implicit priors created by parameterization of the SFH, are discussed in detail.
Fission cross section uncertainties with the NIFFTE TPC
NASA Astrophysics Data System (ADS)
Sangiorgio, Samuele; Niffte Collaboration
2014-09-01
Nuclear data such as neutron-induced fission cross sections play a fundamental role in nuclear energy and defense applications. In recent years, understanding of these systems has become increasingly dependent upon advanced simulation and modeling, where uncertainties in nuclear data propagate in the expected performances of existing and future systems. It is important therefore that uncertainties in nuclear data are minimized and fully understood. For this reason, the Neutron Induced Fission Fragment Tracking Experiment (NIFFTE) uses a Time Projection Chamber (TPC) to measure energy-differential (n,f) cross sections with unprecedented precision. The presentation will discuss how the capabilities of the NIFFTE TPC allow to directly measures systematic uncertainties in fission cross sections, in particular for what concerns fission-fragment identification, and target and beam uniformity. Preliminary results from recent analysis of 238U/235U and 239Pu/235U data collected with the TPC will be presented. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Uncertainty-based Optimization Algorithms in Designing Fractionated Spacecraft
Ning, Xin; Yuan, Jianping; Yue, Xiaokui
2016-01-01
A fractionated spacecraft is an innovative application of a distributive space system. To fully understand the impact of various uncertainties on its development, launch and in-orbit operation, we use the stochastic missioncycle cost to comprehensively evaluate the survivability, flexibility, reliability and economy of the ways of dividing the various modules of the different configurations of fractionated spacecraft. We systematically describe its concept and then analyze its evaluation and optimal design method that exists during recent years and propose the stochastic missioncycle cost for comprehensive evaluation. We also establish the models of the costs such as module development, launch and deployment and the impacts of their uncertainties respectively. Finally, we carry out the Monte Carlo simulation of the complete missioncycle costs of various configurations of the fractionated spacecraft under various uncertainties and give and compare the probability density distribution and statistical characteristics of its stochastic missioncycle cost, using the two strategies of timing module replacement and non-timing module replacement. The simulation results verify the effectiveness of the comprehensive evaluation method and show that our evaluation method can comprehensively evaluate the adaptability of the fractionated spacecraft under different technical and mission conditions. PMID:26964755
Uncertainty-based Optimization Algorithms in Designing Fractionated Spacecraft.
Ning, Xin; Yuan, Jianping; Yue, Xiaokui
2016-01-01
A fractionated spacecraft is an innovative application of a distributive space system. To fully understand the impact of various uncertainties on its development, launch and in-orbit operation, we use the stochastic missioncycle cost to comprehensively evaluate the survivability, flexibility, reliability and economy of the ways of dividing the various modules of the different configurations of fractionated spacecraft. We systematically describe its concept and then analyze its evaluation and optimal design method that exists during recent years and propose the stochastic missioncycle cost for comprehensive evaluation. We also establish the models of the costs such as module development, launch and deployment and the impacts of their uncertainties respectively. Finally, we carry out the Monte Carlo simulation of the complete missioncycle costs of various configurations of the fractionated spacecraft under various uncertainties and give and compare the probability density distribution and statistical characteristics of its stochastic missioncycle cost, using the two strategies of timing module replacement and non-timing module replacement. The simulation results verify the effectiveness of the comprehensive evaluation method and show that our evaluation method can comprehensively evaluate the adaptability of the fractionated spacecraft under different technical and mission conditions. PMID:26964755
Uncertainty-based Optimization Algorithms in Designing Fractionated Spacecraft
NASA Astrophysics Data System (ADS)
Ning, Xin; Yuan, Jianping; Yue, Xiaokui
2016-03-01
A fractionated spacecraft is an innovative application of a distributive space system. To fully understand the impact of various uncertainties on its development, launch and in-orbit operation, we use the stochastic missioncycle cost to comprehensively evaluate the survivability, flexibility, reliability and economy of the ways of dividing the various modules of the different configurations of fractionated spacecraft. We systematically describe its concept and then analyze its evaluation and optimal design method that exists during recent years and propose the stochastic missioncycle cost for comprehensive evaluation. We also establish the models of the costs such as module development, launch and deployment and the impacts of their uncertainties respectively. Finally, we carry out the Monte Carlo simulation of the complete missioncycle costs of various configurations of the fractionated spacecraft under various uncertainties and give and compare the probability density distribution and statistical characteristics of its stochastic missioncycle cost, using the two strategies of timing module replacement and non-timing module replacement. The simulation results verify the effectiveness of the comprehensive evaluation method and show that our evaluation method can comprehensively evaluate the adaptability of the fractionated spacecraft under different technical and mission conditions.
Kennedy, Marc C; van der Voet, Hilko; Roelofs, Victoria J; Roelofs, Willem; Glass, C Richard; de Boer, Waldo J; Kruisselbrink, Johannes W; Hart, Andy D M
2015-05-01
Risk assessments for human exposures to plant protection products (PPPs) have traditionally focussed on single routes of exposure and single compounds. Extensions to estimate aggregate (multi-source) and cumulative (multi-compound) exposure from PPPs present many new challenges and additional uncertainties that should be addressed as part of risk analysis and decision-making. A general approach is outlined for identifying and classifying the relevant uncertainties and variabilities. The implementation of uncertainty analysis within the MCRA software, developed as part of the EU-funded ACROPOLIS project to address some of these uncertainties, is demonstrated. An example is presented for dietary and non-dietary exposures to the triazole class of compounds. This demonstrates the chaining of models, linking variability and uncertainty generated from an external model for bystander exposure with variability and uncertainty in MCRA dietary exposure assessments. A new method is also presented for combining pesticide usage survey information with limited residue monitoring data, to address non-detect uncertainty. The results show that incorporating usage information reduces uncertainty in parameters of the residue distribution but that in this case quantifying uncertainty is not a priority, at least for UK grown crops. A general discussion of alternative approaches to treat uncertainty, either quantitatively or qualitatively, is included. PMID:25688423
Performance of Trajectory Models with Wind Uncertainty
NASA Technical Reports Server (NTRS)
Lee, Alan G.; Weygandt, Stephen S.; Schwartz, Barry; Murphy, James R.
2009-01-01
Typical aircraft trajectory predictors use wind forecasts but do not account for the forecast uncertainty. A method for generating estimates of wind prediction uncertainty is described and its effect on aircraft trajectory prediction uncertainty is investigated. The procedure for estimating the wind prediction uncertainty relies uses a time-lagged ensemble of weather model forecasts from the hourly updated Rapid Update Cycle (RUC) weather prediction system. Forecast uncertainty is estimated using measures of the spread amongst various RUC time-lagged ensemble forecasts. This proof of concept study illustrates the estimated uncertainty and the actual wind errors, and documents the validity of the assumed ensemble-forecast accuracy relationship. Aircraft trajectory predictions are made using RUC winds with provision for the estimated uncertainty. Results for a set of simulated flights indicate this simple approach effectively translates the wind uncertainty estimate into an aircraft trajectory uncertainty. A key strength of the method is the ability to relate uncertainty to specific weather phenomena (contained in the various ensemble members) allowing identification of regional variations in uncertainty.
The legal status of uncertainty
NASA Astrophysics Data System (ADS)
Ferraris, L.; Miozzo, D.
2009-09-01
Authorities of civil protection are giving extreme importance to the scientific assessment throughout the widespread use of mathematical models that have been implemented in order to prevent and mitigate the effect of natural hazards. These models, however, are far from deterministic; moreover, the uncertainty that characterizes them plays an important role in the scheme of prevention of natural hazards. We are, in fact, presently experiencing a detrimental increase of legal actions taken against the authorities of civil protection whom, relying on the forecasts of mathematical models, fail in protecting the population. It is our profound concern that civilians have granted the right of being protected by any means, and at the same extent, from natural hazards and from the fallacious behaviour of whom should grant individual safety. But, at the same time, a dangerous overcriminalization could have a negative impact on the Civil Protection system inducing a dangerous defensive behaviour which is costly and ineffective. A few case studies are presented in which the role of uncertainty, in numerical predictions, is made evident and discussed. Scientists, thus, need to help policymakers to agree on sound procedures that must recognize the real level of unpredictability. Hence, we suggest the creation of an international and interdisciplinary committee, with the scope of having politics, jurisprudence and science communicate, to find common solutions to a common problem.
Entropic uncertainty relations under the relativistic motion
NASA Astrophysics Data System (ADS)
Feng, Jun; Zhang, Yao-Zhong; Gould, Mark D.; Fan, Heng
2013-10-01
The uncertainty principle bounds our ability to simultaneously predict two incompatible observables of a quantum particle. Assisted by a quantum memory to store the particle, this uncertainty could be reduced and quantified by a new Entropic Uncertainty Relation (EUR). In this Letter, we explore how the relativistic motion of the system would affect the EUR in two sample scenarios. First, we show that the Unruh effect of an accelerating particle would surely increase the uncertainty if the system and particle entangled initially. On the other hand, the entanglement could be generated from nonuniform motion once the Unruh decoherence is prevented by utilizing the cavity. We show that, in a uncertainty game between an inertial cavity and a nonuniformly accelerated one, the uncertainty evolves periodically with respect to the duration of acceleration segment. Therefore, with properly chosen cavity parameters, the uncertainty bound could be protected. Implications of our results for gravitation are also discussed.
Entropic uncertainty relation in de Sitter space
NASA Astrophysics Data System (ADS)
Jia, Lijuan; Tian, Zehua; Jing, Jiliang
2015-02-01
The uncertainty principle restricts our ability to simultaneously predict the measurement outcomes of two incompatible observables of a quantum particle. However, this uncertainty could be reduced and quantified by a new Entropic Uncertainty Relation (EUR). By the open quantum system approach, we explore how the nature of de Sitter space affects the EUR. When the quantum memory A freely falls in the de Sitter space, we demonstrate that the entropic uncertainty acquires an increase resulting from a thermal bath with the Gibbons-Hawking temperature. And for the static case, we find that the temperature coming from both the intrinsic thermal nature of the de Sitter space and the Unruh effect associated with the proper acceleration of A also brings effect on entropic uncertainty, and the higher the temperature, the greater the uncertainty and the quicker the uncertainty reaches the maximal value. And finally the possible mechanism behind this phenomenon is also explored.
Quantifying Uncertainty in Wetland Change Mapping for Alaska: Part I - Uncertainty in Input SAR Data
NASA Astrophysics Data System (ADS)
Clewley, D.; Whitcomb, J.; Bunting, P.; Moghaddam, M.; Chapman, B. D.; McDonald, K. C.
2013-12-01
datasets, with improvements expected from the PALSAR data due to the addition of the HV channel and improved radiometric accuracy. The inclusion of ancillary layers such as SAR texture, slope and distance to water in the final classification will aid aid separation of classes. The uncertainty caused by the radiometric differences on the overall change product will depend on the method used for change detection, which is the subject of future research.
[Food additives and healthiness].
Heinonen, Marina
2014-01-01
Additives are used for improving food structure or preventing its spoilage, for example. Many substances used as additives are also naturally present in food. The safety of additives is evaluated according to commonly agreed principles. If high concentrations of an additive cause adverse health effects for humans, a limit of acceptable daily intake (ADI) is set for it. An additive is a risk only when ADI is exceeded. The healthiness of food is measured on the basis of nutrient density and scientifically proven effects. PMID:24772784
Polyimide processing additives
NASA Technical Reports Server (NTRS)
Pratt, J. R.; St. Clair, T. L.; Burks, H. D.; Stoakley, D. M.
1987-01-01
A method has been found for enhancing the melt flow of thermoplastic polyimides during processing. A high molecular weight 422 copoly(amic acid) or copolyimide was fused with approximately 0.05 to 5 pct by weight of a low molecular weight amic acid or imide additive, and this melt was studied by capillary rheometry. Excellent flow and improved composite properties on graphite resulted from the addition of a PMDA-aniline additive to LARC-TPI. Solution viscosity studies imply that amic acid additives temporarily lower molecular weight and, hence, enlarge the processing window. Thus, compositions containing the additive have a lower melt viscosity for a longer time than those unmodified.
Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications
NASA Astrophysics Data System (ADS)
Arbanas, G.; Williams, M. L.; Leal, L. C.; Dunn, M. E.; Khuwaileh, B. A.; Wang, C.; Abdel-Khalik, H.
2015-01-01
The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX cross section processing system [M.E. Dunn and N.M. Greene, "AMPX-2000: A Cross-Section Processing System for Generating Nuclear Data for Criticality Safety Applications," Trans. Am. Nucl. Soc. 86, 118-119 (2002)]. The IS/UQ method aims to quantify and prioritize the cross section measurements along with uncertainties needed to yield a given nuclear application(s) target response uncertainty, and doing this at a minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of the present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore, we have incorporated integral benchmark experiments (IBEs) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method could be applied to systematic and statistical uncertainties in a self-consistent way and how it could be used to optimize uncertainties of IBEs and differential cross section data simultaneously. We itemize contributions to the cost of differential data measurements needed to define a realistic cost function.
Sources of Uncertainty in Predicting Land Surface Fluxes Using Diverse Data and Models
NASA Technical Reports Server (NTRS)
Dungan, Jennifer L.; Wang, Weile; Michaelis, Andrew; Votava, Petr; Nemani, Ramakrishma
2010-01-01
In the domain of predicting land surface fluxes, models are used to bring data from large observation networks and satellite remote sensing together to make predictions about present and future states of the Earth. Characterizing the uncertainty about such predictions is a complex process and one that is not yet fully understood. Uncertainty exists about initialization, measurement and interpolation of input variables; model parameters; model structure; and mixed spatial and temporal supports. Multiple models or structures often exist to describe the same processes. Uncertainty about structure is currently addressed by running an ensemble of different models and examining the distribution of model outputs. To illustrate structural uncertainty, a multi-model ensemble experiment we have been conducting using the Terrestrial Observation and Prediction System (TOPS) will be discussed. TOPS uses public versions of process-based ecosystem models that use satellite-derived inputs along with surface climate data and land surface characterization to produce predictions of ecosystem fluxes including gross and net primary production and net ecosystem exchange. Using the TOPS framework, we have explored the uncertainty arising from the application of models with different assumptions, structures, parameters, and variable definitions. With a small number of models, this only begins to capture the range of possible spatial fields of ecosystem fluxes. Few attempts have been made to systematically address the components of uncertainty in such a framework. We discuss the characterization of uncertainty for this approach including both quantifiable and poorly known aspects.
Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications
Arbanas, Goran; Williams, Mark L; Leal, Luiz C; Dunn, Michael E; Khuwaileh, Bassam A.; Wang, C; Abdel-Khalik, Hany
2015-01-01
The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimiator (INSURE) module of the AMPX system [1]. The IS/UQ method aims to quantify and prioritize the cross section measurements along with uncertainties needed to yield a given nuclear application(s) target response uncertainty, and doing this at a minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of the present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore we have incorporated integral benchmark experiments (IBEs) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method could be applied to systematic and statistical uncertainties in a self-consistent way. We show how the IS/UQ method could be used to optimize uncertainties of IBEs and differential cross section data simultaneously.
Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications
Arbanas, G.; Williams, M.L.; Leal, L.C.; Dunn, M.E.; Khuwaileh, B.A.; Wang, C.; Abdel-Khalik, H.
2015-01-15
The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX cross section processing system [M.E. Dunn and N.M. Greene, “AMPX-2000: A Cross-Section Processing System for Generating Nuclear Data for Criticality Safety Applications,” Trans. Am. Nucl. Soc. 86, 118–119 (2002)]. The IS/UQ method aims to quantify and prioritize the cross section measurements along with uncertainties needed to yield a give