Planck 2013 results. III. LFI systematic uncertainties
NASA Astrophysics Data System (ADS)
Planck Collaboration; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Cardoso, J.-F.; Catalano, A.; Chamballu, A.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Colombi, S.; Colombo, L. P. L.; Crill, B. P.; Cruz, M.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dick, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Gaier, T. C.; Galeotta, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jewell, J.; Jones, W. C.; Juvela, M.; Kangaslahti, P.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P. 
R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Novikov, D.; Novikov, I.; O'Dwyer, I. J.; Osborne, S.; Paci, F.; Pagano, L.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, D.; Peel, M.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Platania, P.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Ricciardi, S.; Riller, T.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-11-01
We present the current estimate of instrumental and systematic effect uncertainties for the Planck Low Frequency Instrument (LFI) relevant to the first release of the Planck cosmological results. We give an overview of the main effects and of the tools and methods applied to assess residuals in maps and power spectra. We also present an overall budget of known systematic effect uncertainties, which are dominated by sidelobe straylight pick-up and imperfect calibration. However, even these two effects are at least two orders of magnitude weaker than the cosmic microwave background fluctuations as measured in terms of the angular temperature power spectrum. A residual signal above the noise level is present in the multipole range ℓ < 20, most notably at 30 GHz, and is probably caused by residual Galactic straylight contamination. Current analysis aims to further reduce the level of spurious signals in the data and to improve the systematic effects modelling, in particular with respect to straylight and calibration uncertainties.
On LBNE neutrino flux systematic uncertainties
NASA Astrophysics Data System (ADS)
Lebrun, Paul L. G.; Hylen, James; Marchionni, Alberto; Fields, Laura; Bashyal, Amit; Park, Seongtae; Watson, Blake
2015-10-01
The systematic uncertainties in the neutrino flux of the Long-Baseline Neutrino Experiment, due to alignment uncertainties and tolerances of the neutrino beamline components, are estimated. In particular, residual systematics are evaluated in the determination of the neutrino flux at the far detector, assuming that the experiment will be equipped with a near detector with the same target material as the far detector, thereby canceling most of the uncertainties from hadroproduction and neutrino cross sections. This calculation is based on a detailed Geant4-based model of the neutrino beamline that includes the target, two focusing horns, the decay pipe and ancillary items, such as shielding.
Quantifying systematic uncertainties in supernova cosmology
Nordin, Jakob; Goobar, Ariel; Jönsson, Jakob
2008-02-15
Observations of Type Ia supernovae used to map the expansion history of the Universe suffer from systematic uncertainties that need to be propagated into the estimates of cosmological parameters. We propose an iterative Monte Carlo simulation and cosmology fitting technique (SMOCK) to investigate the impact of sources of error upon fits of the dark energy equation of state. This approach is especially useful to track the impact of non-Gaussian, correlated effects, e.g. reddening correction errors, brightness evolution of the supernovae, K-corrections, gravitational lensing, etc. While the tool is primarily aimed at studies and optimization of future instruments, we use the Gold dataset in Riess et al. (2007, Astrophys. J., 659, 98) to show examples of potential systematic uncertainties that could exceed the quoted statistical uncertainties.
UCNA Systematic Uncertainties: Developments in Analysis and Method
NASA Astrophysics Data System (ADS)
Zeck, Bryan
2012-10-01
The UCNA experiment is an effort to measure the beta-decay asymmetry parameter A of the correlation between the electron momentum and the neutron spin, using bottled polarized ultracold neutrons in a homogenous 1 T magnetic field. Continued improvements in both analysis and method are helping to push the measurement uncertainty to the limits of the current statistical sensitivity (less than 0.4%). The implementation of thinner decay trap windows will be discussed, as will the use of a tagged beta particle calibration source to measure angle-dependent scattering effects and energy loss. Additionally, improvements in position reconstruction and polarization measurements using a new shutter system will be introduced. A full accounting of the current systematic uncertainties will be given.
ON THE ESTIMATION OF SYSTEMATIC UNCERTAINTIES OF STAR FORMATION HISTORIES
Dolphin, Andrew E.
2012-05-20
In most star formation history (SFH) measurements, the reported uncertainties are those due to effects whose sizes can be readily measured: Poisson noise, adopted distance and extinction, and binning choices in the solution itself. However, the largest source of error, systematics in the adopted isochrones, is usually ignored and very rarely explicitly incorporated into the uncertainties. I propose a process by which estimates of the uncertainties due to evolutionary models can be incorporated into the SFH uncertainties. This process relies on application of shifts in temperature and luminosity, the sizes of which must be calibrated for the data being analyzed. While there are inherent limitations, the ability to estimate the effect of systematic errors and include them in the overall uncertainty is significant. The effects of this are most notable in the case of shallow photometry, for which SFH measurements must rely on evolved stars.
Systematic uncertainties from halo asphericity in dark matter searches
Bernal, Nicolás; Forero-Romero, Jaime E.; Garani, Raghuveer; Palomares-Ruiz, Sergio
2014-09-01
Although commonly assumed to be spherical, dark matter halos are predicted to be non-spherical by N-body simulations and their asphericity has a potential impact on the systematic uncertainties in dark matter searches. The evaluation of these uncertainties is the main aim of this work, where we study the impact of aspherical dark matter density distributions in Milky-Way-like halos on direct and indirect searches. Using data from the large N-body cosmological simulation Bolshoi, we perform a statistical analysis and quantify the systematic uncertainties on the determination of local dark matter density and the so-called J factors for dark matter annihilations and decays from the galactic center. We find that, due to our ignorance about the extent of the non-sphericity of the Milky Way dark matter halo, systematic uncertainties can be as large as 35%, within the 95% most probable region, for a spherically averaged value for the local density of 0.3-0.4 GeV/cm³. Similarly, systematic uncertainties on the J factors evaluated around the galactic center can be as large as 10% and 15%, within the 95% most probable region, for dark matter annihilations and decays, respectively.
Efficiently estimating salmon escapement uncertainty using systematically sampled data
Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.
2007-01-01
Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias, on average, by 12% to 98%. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
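As a sketch of the contrast this abstract draws, here are the naive simple-random-sampling variance estimator and the successive-difference estimator for a nonreplicated 1-in-k systematic count sample; the function and notation are illustrative, not the paper's code:

```python
import numpy as np

def estimate_total_and_variance(counts, k):
    """Expansion estimate of total passage from a 1-in-k systematic sample,
    with two variance estimators: the naive SRS estimator and the
    successive-difference estimator, which is typically less biased for
    autocorrelated passage data such as diurnal fish counts."""
    y = np.asarray(counts, dtype=float)
    n = y.size
    N = n * k                    # population size (all time units)
    f = 1.0 / k                  # sampling fraction
    total = k * y.sum()          # expansion estimator of the total
    # Treat the systematic sample as if it were a simple random sample
    v_srs = N**2 * (1 - f) * y.var(ddof=1) / n
    # Successive differences remove smooth (e.g. diurnal) trends first
    v_sd = N**2 * (1 - f) * np.sum(np.diff(y)**2) / (2 * n * (n - 1))
    return total, v_srs, v_sd
```

For strongly trended counts the SRS estimator returns a much larger variance than the successive-difference estimator, which illustrates how the choice of estimator interacts with the underlying process pattern.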
Calculation of the detection limit in radiation measurements with systematic uncertainties
NASA Astrophysics Data System (ADS)
Kirkpatrick, J. M.; Russ, W.; Venkataraman, R.; Young, B. M.
2015-06-01
The detection limit (LD) or Minimum Detectable Activity (MDA) is an a priori evaluation of assay sensitivity intended to quantify the suitability of an instrument or measurement arrangement for the needs of a given application. Traditional approaches as pioneered by Currie rely on Gaussian approximations to yield simple, closed-form solutions, and neglect the effects of systematic uncertainties in the instrument calibration. These approximations are applicable over a wide range of applications, but are of limited use in low-count applications, when high confidence values are required, or when systematic uncertainties are significant. One proposed modification to the Currie formulation attempts to account for systematic uncertainties within a Gaussian framework. We have previously shown that this approach results in an approximation formula that works best only for small values of the relative systematic uncertainty, for which the modification of Currie's method is the least necessary, and that it significantly overestimates the detection limit or gives infinite or otherwise non-physical results for larger systematic uncertainties where such a correction would be the most useful. We have developed an alternative approach for calculating detection limits based on realistic statistical modeling of the counting distributions which accurately represents statistical and systematic uncertainties. Instead of a closed form solution, numerical and iterative methods are used to evaluate the result. Accurate detection limits can be obtained by this method for the general case.
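A toy version of such a numerical, iterative detection-limit calculation, assuming a Poisson counting model with the critical level fixed by the false-positive rate α and a calibration efficiency smeared by a Gaussian systematic uncertainty marginalised by Monte Carlo (all names and values here are illustrative, not the paper's):

```python
import numpy as np
from scipy.stats import poisson

def critical_level(b, alpha=0.05):
    """Smallest integer n_c with false-positive rate P(N > n_c | background b) <= alpha."""
    n_c = 0
    while poisson.sf(n_c, b) > alpha:
        n_c += 1
    return n_c

def detection_limit(b, eff, t, alpha=0.05, beta=0.05, sig_eff_rel=0.1,
                    n_mc=20000, rng=None):
    """Scan upward in activity A until the detection probability, averaged
    over the Gaussian systematic uncertainty in the efficiency, reaches
    1 - beta. Returns (A, n_c)."""
    rng = np.random.default_rng() if rng is None else rng
    n_c = critical_level(b, alpha)
    # Sampled efficiencies represent the systematic calibration uncertainty
    e = rng.normal(eff, sig_eff_rel * eff, size=n_mc).clip(min=1e-12)
    A, step = 0.0, 0.01 * (n_c + 1) / (eff * t)
    while True:
        A += step
        # Detection probability marginalised over the efficiency distribution
        p_detect = np.mean(poisson.sf(n_c, b + A * e * t))
        if p_detect >= 1 - beta:
            return A, n_c
```

Unlike the closed-form Gaussian result, this scan stays well behaved as the relative systematic uncertainty grows, since the smearing only flattens the detection-probability curve rather than producing non-physical values.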
Additional challenges for uncertainty analysis in river engineering
NASA Astrophysics Data System (ADS)
Berends, Koen; Warmink, Jord; Hulscher, Suzanne
2016-04-01
The management of rivers for improving safety, shipping and environment requires conscious effort on the part of river managers. River engineers design hydraulic works to tackle various challenges, from increasing flow conveyance to ensuring minimal water depths for environmental flow and inland shipping. Last year saw the completion of such large-scale river engineering in the 'Room for the River' programme for the Dutch Rhine River system, in which several dozen human interventions were built to increase flood safety. Engineering works in rivers are not completed in isolation from society. Rather, their benefits - increased safety, landscaping beauty - and their disadvantages - expropriation, hindrance - directly affect inhabitants. Therefore river managers are required to carefully defend their plans. The effect of engineering works on river dynamics is evaluated using hydraulic river models. Two-dimensional numerical models based on the shallow water equations provide the predictions necessary to make decisions on designs and future plans. However, like all environmental models, these predictions are subject to uncertainty. In recent years progress has been made in the identification of the main sources of uncertainty for hydraulic river models. Two of the most important sources are boundary conditions and hydraulic roughness (Warmink et al. 2013). The result of these sources of uncertainty is that the identification of a single, deterministic prediction model is a non-trivial task. This is a well-understood problem in other fields as well - most notably hydrology - and is known as equifinality. However, the particular case of human intervention modelling with hydraulic river models compounds the equifinality case. The model that provides the reference baseline situation is usually identified through calibration and afterwards modified for the engineering intervention. This results in two distinct models, the evaluation of which yields the effect of
A Systematic Procedure for Assigning Uncertainties to Data Evaluations
Younes, W.
2007-02-20
In this report, an algorithm that automatically constructs an uncertainty band around any evaluation curve is described. Given an evaluation curve and a corresponding set of experimental data points with x and y error bars, the algorithm expands a symmetric region around the evaluation curve until 68.3% of a set of points, randomly sampled from the experimental data, fall within the region. For a given evaluation curve, the region expanded in this way represents, by definition, a one-standard-deviation interval about the evaluation that accounts for the experimental data. The algorithm is tested against several benchmarks, and is shown to be well-behaved, even when there are large gaps in the available experimental data. The performance of the algorithm is assessed quantitatively using the tools of statistical-inference theory.
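Since the band is symmetric about the evaluation curve, "expand until 68.3% of the sampled points fall inside" is equivalent to taking the 68.3% quantile of the sampled points' distances from the curve. A minimal sketch under that reading (function and variable names are ours, not the report's):

```python
import numpy as np

def uncertainty_band(f, x, y, sx, sy, n_samples=100000, coverage=0.683, rng=None):
    """One-standard-deviation band half-width around the evaluation curve f.

    Points are randomly sampled from the experimental data, Gaussian in
    both x (sx) and y (sy); the band is expanded until `coverage` of the
    sampled points fall within it."""
    rng = np.random.default_rng() if rng is None else rng
    x, y, sx, sy = map(np.asarray, (x, y, sx, sy))
    # Pick a random data point for each draw, then sample within its error bars
    idx = rng.integers(0, x.size, size=n_samples)
    xs = rng.normal(x[idx], sx[idx])
    ys = rng.normal(y[idx], sy[idx])
    # Band half-width containing `coverage` of the sampled distances
    return np.quantile(np.abs(ys - f(xs)), coverage)
```

As a sanity check, for a flat evaluation curve with unit Gaussian y errors the half-width approaches 1, as expected for a one-standard-deviation interval; large gaps in x are handled gracefully because only the sampled data points contribute.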
McNamara, C; Mehegan, J; O'Mahony, C; Safford, B; Smith, B; Tennant, D; Buck, N; Ehrlich, V; Sardi, M; Haldemann, Y; Nordmann, H; Jasti, P R
2011-12-01
The feasibility of using a retailer fidelity card scheme to estimate food additive intake was investigated in an earlier study. Fidelity card survey information was combined with information provided by the retailer on levels of the food colour Sunset Yellow (E110) in the foods to estimate a daily exposure to the additive in the Swiss population. As with any dietary exposure method, the fidelity card scheme is subject to uncertainties. In this paper, the impact of uncertainties associated with the input variables (the amounts of food purchased, the levels of E110 in food, the proportion of food purchased at the retailer, the rate of fidelity card usage, the proportion of foods consumed outside of the home, and body weights), as well as systematic uncertainties, was assessed using qualitative, deterministic and probabilistic approaches. An analysis of the sensitivity of the results to each of the probabilistic inputs was also undertaken. The analysis identified the key factors responsible for uncertainty within the model and demonstrated how the application of some simple probabilistic approaches can be used quantitatively to assess uncertainty. PMID:21995790
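The deterministic/probabilistic distinction amounts to propagating either point values or full distributions for each input through the intake equation. A toy Monte Carlo sketch in that spirit; every distribution and number below is an illustrative placeholder, not a value from the study:

```python
import numpy as np

def simulate_intake(n=100000, rng=None):
    """Toy probabilistic intake model (mg additive per kg body weight per day).
    Recorded card purchases are scaled up by the retailer's market share and
    the rate of fidelity card usage; all input distributions are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    purchased = rng.lognormal(np.log(50.0), 0.4, size=n)  # g food/day on the card
    level = rng.uniform(10.0, 60.0, size=n)               # mg additive per kg food
    share_retailer = rng.beta(8, 2, size=n)               # fraction bought at retailer
    card_use = rng.beta(9, 1, size=n)                     # fraction of purchases carded
    bw = rng.normal(70.0, 12.0, size=n).clip(min=30.0)    # body weight, kg
    return purchased / (share_retailer * card_use) * level * 1e-3 / bw

# A high percentile of the simulated distribution can then be compared with
# the acceptable daily intake; sensitivity is assessed by varying one input
# distribution at a time.
intake = simulate_intake(rng=np.random.default_rng(1))
p95 = np.percentile(intake, 95)
```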
Systematic uncertainties in long-baseline neutrino oscillations for large θ₁₃
Coloma, Pilar; Huber, Patrick; Kopp, Joachim; Winter, Walter
2013-02-01
We study the physics potential of future long-baseline neutrino oscillation experiments at large θ₁₃, focusing especially on systematic uncertainties. We discuss superbeams, beta beams, and neutrino factories, and for the first time compare these experiments on an equal footing with respect to systematic errors. We explicitly simulate near detectors for all experiments, we use the same implementation of systematic uncertainties for all experiments, and we fully correlate the uncertainties among detectors, oscillation channels, and beam polarizations as appropriate. As our primary performance indicator, we use the achievable precision in the measurement of the CP-violating phase δ_CP. We find that a neutrino factory is the only instrument that can measure δ_CP with a precision similar to that of its quark-sector counterpart. All neutrino beams operating at peak energies ≳2 GeV are quite robust with respect to systematic uncertainties, whereas beta beams and T2HK in particular suffer from large cross-section uncertainties in the quasi-elastic regime, combined with their inability to measure the appearance-signal cross sections at the near detector. A noteworthy exception is the combination of a γ = 100 beta beam with an SPL-based superbeam, in which all relevant cross sections can be measured in a self-consistent way; this provides a performance second only to the neutrino factory. For other superbeam experiments, such as LBNO and the setups studied in the context of the LBNE reconfiguration effort, statistics turns out to be the bottleneck. In almost all cases, the near detector is not critical to control systematics, since the combined fit of appearance and disappearance data already constrains the impact of systematics to be small, provided that the three-active-flavor oscillation framework is valid.
Scolnic, D.; Riess, A.; Brout, D.; Rodney, S.; Rest, A.; Huber, M. E.; Tonry, J. L.; Foley, R. J.; Chornock, R.; Berger, E.; Soderberg, A. M.; Stubbs, C. W.; Kirshner, R. P.; Challis, P.; Czekala, I.; Drout, M.; Narayan, G.; Smartt, S. J.; Botticella, M. T.; Schlafly, E.; and others
2014-11-01
We probe the systematic uncertainties from the 113 Type Ia supernovae (SN Ia) in the Pan-STARRS1 (PS1) sample along with 197 SN Ia from a combination of low-redshift surveys. The companion paper by Rest et al. describes the photometric measurements and cosmological inferences from the PS1 sample. The largest systematic uncertainty stems from the photometric calibration of the PS1 and low-z samples. We increase the sample of observed Calspec standards used to define the PS1 calibration system from 7 to 10. The PS1 and SDSS-II calibration systems are compared and discrepancies up to ∼0.02 mag are recovered. We find uncertainties in the proper way to treat intrinsic colors and reddening produce differences in the recovered value of w up to 3%. We estimate masses of host galaxies of PS1 supernovae and detect an insignificant difference in distance residuals of the full sample of 0.037 ± 0.031 mag between host galaxies with high and low masses. Assuming flatness and including systematic uncertainties in our analysis of only SNe measurements, we find w = −1.120 +0.360/−0.206 (stat) +0.269/−0.291 (sys). With additional constraints from baryon acoustic oscillation, cosmic microwave background (CMB; Planck) and H₀ measurements, we find w = −1.166 +0.072/−0.069 and Ω_m = 0.280 +0.013/−0.012 (statistical and systematic errors added in quadrature). The significance of the inconsistency with w = −1 depends on whether we use Planck or Wilkinson Microwave Anisotropy Probe measurements of the CMB: w_(BAO+H0+SN+WMAP) = −1.124 +0.083/−0.065.
Accounting for uncertainty in systematic bias in exposure estimates used in relative risk regression
Gilbert, E.S.
1995-12-01
In many epidemiologic studies addressing exposure-response relationships, sources of error that lead to systematic bias in exposure measurements are known to be present, but there is uncertainty in the magnitude and nature of the bias. Two approaches that allow this uncertainty to be reflected in confidence limits and other statistical inferences were developed, and are applicable to both cohort and case-control studies. The first approach is based on a numerical approximation to the likelihood ratio statistic, and the second uses computer simulations based on the score statistic. These approaches were applied to data from a cohort study of workers at the Hanford site (1944-86) exposed occupationally to external radiation; to combined data on workers exposed at Hanford, Oak Ridge National Laboratory, and Rocky Flats Weapons plant; and to artificial data sets created to examine the effects of varying sample size and the magnitude of the risk estimate. For the worker data, sampling uncertainty dominated and accounting for uncertainty in systematic bias did not greatly modify confidence limits. However, with increased sample size, accounting for these uncertainties became more important, and is recommended when there is interest in comparing or combining results from different studies.
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
NASA Astrophysics Data System (ADS)
Helgesson, P.; Sjöstrand, H.; Koning, A. J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.
2016-01-01
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
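The underlying model (independent Gaussian random errors plus one shared, fully correlated systematic error) and the equivalence the paper demonstrates can be sketched as follows; this is a toy illustration, not the authors' implementation:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def likelihood_matrix(y, t, sig_rand, sig_sys):
    """Conventional likelihood: multivariate Gaussian whose covariance is
    diag(sig_rand**2) plus a constant block sig_sys**2 from the shared
    systematic error (evaluating it requires inverting the covariance)."""
    cov = np.diag(sig_rand**2) + sig_sys**2 * np.ones((y.size, y.size))
    return multivariate_normal.pdf(y, mean=t, cov=cov)

def likelihood_sampled(y, t, sig_rand, sig_sys, n_samples=100000, rng=None):
    """Estimate the same likelihood without matrix inversion: sample the
    systematic shift and average the products of univariate Gaussians."""
    rng = np.random.default_rng() if rng is None else rng
    eta = rng.normal(0.0, sig_sys, size=n_samples)   # sampled systematic shifts
    resid = np.asarray(y) - np.asarray(t)
    logp = norm.logpdf(resid[None, :] - eta[:, None],
                       scale=np.asarray(sig_rand)[None, :]).sum(axis=1)
    m = logp.max()                # log-mean-exp for numerical stability
    return np.exp(m) * np.mean(np.exp(logp - m))
```

As the number of sampled systematic shifts grows, the second estimate converges to the first, which is the equivalence shown in the paper; the slow convergence relative to a single matrix inversion is the drawback the authors report.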
Statistical uncertainties and systematic errors in weak lensing mass estimates of galaxy clusters
NASA Astrophysics Data System (ADS)
Köhlinger, F.; Hoekstra, H.; Eriksen, M.
2015-11-01
Upcoming and ongoing large area weak lensing surveys will also discover large samples of galaxy clusters. Accurate and precise masses of galaxy clusters are of major importance for cosmology, for example, in establishing well-calibrated observational halo mass functions for comparison with cosmological predictions. We investigate the level of statistical uncertainties and sources of systematic errors expected for weak lensing mass estimates. Future surveys that will cover large areas on the sky, such as Euclid or LSST and to a lesser extent DES, will provide the largest weak lensing cluster samples with the lowest level of statistical noise regarding ensembles of galaxy clusters. However, the expected low level of statistical uncertainties requires us to scrutinize various sources of systematic errors. In particular, we investigate the bias due to cluster member galaxies which are erroneously treated as background source galaxies due to wrongly assigned photometric redshifts. We find that this effect is significant when referring to stacks of galaxy clusters. Finally, we study the bias due to miscentring, i.e. the displacement between any observationally defined cluster centre and the true minimum of its gravitational potential. The impact of this bias might be significant with respect to the statistical uncertainties. However, complementary future missions such as eROSITA will allow us to define stringent priors on miscentring parameters which will mitigate this bias significantly.
NASA Astrophysics Data System (ADS)
Cowan, Nicholas; Levy, Peter; Skiba, Ute
2016-04-01
The addition of reactive nitrogen to agricultural soils in the form of artificial fertilisers or animal waste is the largest global source of anthropogenic N2O emissions. Emission factors are commonly used to evaluate N2O emissions released after the application of nitrogen fertilisers on a global scale based on records of fertiliser use. Currently these emission factors are estimated primarily from experiments in which flux chamber methodology is used to estimate annual cumulative fluxes of N2O after nitrogen fertiliser applications on agricultural soils. The use of the eddy covariance method to measure N2O and estimate emission factors is also becoming more common in the flux community as modern rapid gas analyser instruments advance. The aim of the presentation is to highlight the weaknesses and potential systematic biases in current flux measurement methodology. This is important for GHG accounting and for accurate model calibration and verification. The growing interest in top-down / bottom-up comparisons of tall tower and conventional N2O flux measurements is also an area of research in which the uncertainties in flux measurements need to be properly quantified. The large and unpredictable spatial and temporal variability of N2O fluxes from agricultural soils leads to a significant source of uncertainty in emission factor estimates. N2O flux measurements typically show poor relationships with explanatory co-variates. The true uncertainties in flux measurements at the plot scale are often difficult to propagate to the field scale and the annual time scale. This results in very uncertain cumulative flux (emission factor) estimates. Cumulative fluxes estimated using flux chamber and eddy covariance methods can also differ significantly, which complicates the matter further. In this presentation, we examine some effects that spatial and temporal variability of N2O fluxes can have on the estimation of emission factors and describe how
InSAR bias and uncertainty due to the systematic and stochastic tropospheric delay
NASA Astrophysics Data System (ADS)
Fattahi, Heresh; Amelung, Falk
2015-12-01
We quantify the bias and uncertainty of interferometric synthetic aperture radar (InSAR) displacement time series and their derivatives, the displacement velocities, by analyzing the systematic and stochastic components of the temporal variation of the tropospheric delay. The biases due to the systematic seasonal delay depend on the SAR acquisition times, whereas the uncertainties depend on the standard deviation of the random delay, the number of acquisitions, the total time span covered, and the covariance of the time series of the stochastic delay between a pixel and the reference. We study the contribution of the wet delay to the InSAR observations along the western India plate boundary using (i) Moderate Resolution Imaging Spectroradiometer precipitable water vapor, (ii) stratified tropospheric delay estimated from the ERA-I global atmospheric model, and (iii) seven Envisat InSAR swaths. Our analysis indicates that the amplitudes of the annual delay vary by up to ~10 cm in this region equivalent to a maximum displacement bias of ~24 cm in InSAR line of sight direction between two epochs (assuming Envisat IS6 beam mode). The stratified tropospheric delay correction mitigates this bias and reduces the scatter due to the stochastic delay. For ~7 years of Envisat acquisitions along the western India plate boundary, the uncertainty of the InSAR velocity field due to the residual stochastic wet delay after stratified tropospheric delay correction using the ERA-I model is in the order of ~2 mm/yr over 100 km and ~4 mm/yr over 400 km. We discuss the implication of the derived uncertainties on the full variance-covariance matrix of the InSAR data.
Single-Ion Atomic Clock with 3×10⁻¹⁸ Systematic Uncertainty
NASA Astrophysics Data System (ADS)
Huntemann, N.; Sanner, C.; Lipphardt, B.; Tamm, Chr.; Peik, E.
2016-02-01
We experimentally investigate an optical frequency standard based on the ²S₁/₂(F=0) → ²F₇/₂(F=3) electric octupole (E3) transition of a single trapped ¹⁷¹Yb⁺ ion. For the spectroscopy of this strongly forbidden transition, we utilize a Ramsey-type excitation scheme that provides immunity to probe-induced frequency shifts. The cancellation of these shifts is controlled by interleaved single-pulse Rabi spectroscopy, which reduces the related relative frequency uncertainty to 1.1×10⁻¹⁸. To determine the frequency shift due to thermal radiation emitted by the ion's environment, we measure the static scalar differential polarizability of the E3 transition as 0.888(16)×10⁻⁴⁰ J m²/V² and a dynamic correction η(300 K) = −0.0015(7). This reduces the uncertainty due to thermal radiation to 1.8×10⁻¹⁸. The residual motion of the ion yields the largest contribution (2.1×10⁻¹⁸) to the total systematic relative uncertainty of the clock of 3.2×10⁻¹⁸.
NASA Astrophysics Data System (ADS)
Lacerda, Márcio J.; Tognetti, Eduardo S.; Oliveira, Ricardo C. L. F.; Peres, Pedro L. D.
2016-04-01
This paper presents a general framework to cope with full-order H∞ linear parameter-varying (LPV) filter design subject to inexactly measured parameters. The main novelty is the ability to handle additive and multiplicative uncertainties in the measurements, for both continuous- and discrete-time LPV systems, in a unified approach. By conveniently modelling the scheduling parameters and the uncertainties affecting the measurements, the H∞ filter design problem can be expressed in terms of robust matrix inequalities that become linear when two scalar parameters are fixed. Therefore, the proposed conditions can be efficiently solved through linear matrix inequality relaxations based on polynomial solutions. Numerical examples are presented to illustrate the improved efficiency of the proposed approach when compared to other methods and, more importantly, its capability to deal with scenarios where the available strategies in the literature cannot be used.
NASA Astrophysics Data System (ADS)
Rubin, D.; Aldering, G.; Barbary, K.; Boone, K.; Chappell, G.; Currie, M.; Deustua, S.; Fagrelius, P.; Fruchter, A.; Hayden, B.; Lidman, C.; Nordin, J.; Perlmutter, S.; Saunders, C.; Sofiatti, C.; Supernova Cosmology Project, The
2015-11-01
While recent supernova (SN) cosmology research has benefited from improved measurements, current analysis approaches are not statistically optimal and will prove insufficient for future surveys. This paper discusses the limitations of current SN cosmological analyses in treating outliers, selection effects, shape- and color-standardization relations, unexplained dispersion, and heterogeneous observations. We present a new Bayesian framework, called UNITY (Unified Nonlinear Inference for Type-Ia cosmologY), that incorporates significant improvements in our ability to confront these effects. We apply the framework to real SN observations and demonstrate smaller statistical and systematic uncertainties. We verify earlier results that SNe Ia require nonlinear shape and color standardizations, but we now include these nonlinear relations in a statistically well-justified way. This analysis was primarily performed blinded, in that the basic framework was first validated on simulated data before transitioning to real data. We also discuss possible extensions of the method.
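The shape- and color-standardization the abstract refers to is, in its classic linear form, the Tripp relation that UNITY generalizes to nonlinear dependence. A minimal sketch, with placeholder coefficient values (alpha, beta, M below are illustrative, not fitted UNITY parameters):

```python
def distance_modulus(m_B, x1, c, alpha=0.14, beta=3.1, M=-19.1):
    """Linear Tripp-style SN Ia standardization:
    mu = m_B - M + alpha * x1 - beta * c
    where x1 is the light-curve shape and c the color parameter."""
    return m_B - M + alpha * x1 - beta * c

# A hypothetical supernova with m_B = 24.0 mag, x1 = 0.5, c = 0.05:
print(distance_modulus(24.0, 0.5, 0.05))
```

UNITY's point is that alpha and beta should not be single constants: the framework lets the standardization be nonlinear in x1 and c, and embeds it in a Bayesian model with outlier and selection-effect terms.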
NASA Astrophysics Data System (ADS)
Brogniez, Helene; English, Stephen; Mahfouf, Jean-Francois; Behrendt, Andreas; Berg, Wesley; Boukabara, Sid; Buehler, Stefan Alexander; Chambon, Philippe; Gambacorta, Antonia; Geer, Alan; Ingram, William; Kursinski, E. Robert; Matricardi, Marco; Odintsova, Tatyana A.; Payne, Vivienne H.; Thorne, Peter W.; Tretyakov, Mikhail Yu.; Wang, Junhong
2016-05-01
Several recent studies have observed systematic differences between measurements in the 183.31 GHz water vapor line by space-borne sounders and calculations using radiative transfer models, with inputs from either radiosondes (radiosonde observations, RAOBs) or short-range forecasts by numerical weather prediction (NWP) models. This paper discusses all the relevant categories of observation-based or model-based data, quantifies their uncertainties and separates biases that could be common to all causes from those attributable to a particular cause. Reference observations from radiosondes, Global Navigation Satellite System (GNSS) receivers, differential absorption lidar (DIAL) and Raman lidar are reviewed. Biases arising from their calibration procedures, NWP models and data assimilation, instrument biases and radiative transfer models (both the models themselves and the underlying spectroscopy) are presented and discussed. Although presently no single process in the comparisons seems capable of explaining the observed structure of bias, recommendations are made in order to better understand the causes.
Laperrière, Hélène
2007-01-01
Several years of professional nursing practice, while living in the poorest neighbourhoods in the outlying areas of Brazil's Amazon region, have led the author to develop a better understanding of marginalized populations. Providing care to people with leprosy and to sex workers in riverside communities has taken place in conditions of uncertainty, insecurity, unpredictability and institutional violence. The question raised is how we can develop community health nursing practices in this context. A systematization of personal experiences based on popular education is used and analyzed as a way of learning by obtaining scientific knowledge through critical analysis of field practices. Ties of solidarity and belonging developed in informal, mutual-help action groups are promising avenues for research and the development of knowledge in health promotion, prevention and community care, and a necessary contribution to national public health programmes. PMID:17934576
Systematic evaluation of an atomic clock at 2 × 10−18 total uncertainty
Nicholson, T.L.; Campbell, S.L.; Hutson, R.B.; Marti, G.E.; Bloom, B.J.; McNally, R.L.; Zhang, W.; Barrett, M.D.; Safronova, M.S.; Strouse, G.F.; Tew, W.L.; Ye, J.
2015-01-01
The pursuit of better atomic clocks has advanced many research areas, providing better quantum state control, new insights in quantum science, tighter limits on fundamental constant variation and improved tests of relativity. The record for the best stability and accuracy is currently held by optical lattice clocks. Here we take an important step towards realizing the full potential of a many-particle clock with a state-of-the-art stable laser. Our 87Sr optical lattice clock now achieves fractional stability of 2.2 × 10−16 at 1 s. With this improved stability, we perform a new accuracy evaluation of our clock, reducing many systematic uncertainties that limited our previous measurements, such as those in the lattice ac Stark shift, the atoms' thermal environment and the atomic response to room-temperature blackbody radiation. Our combined measurements have reduced the total uncertainty of the JILA Sr clock to 2.1 × 10−18 in fractional frequency units. PMID:25898253
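The quoted short-term stability fixes how long such a clock must average before its statistical error reaches the systematic floor. For white-frequency noise the Allan deviation scales as σ(τ) = σ(1 s)/√τ, a standard relation (the calculation below uses only the numbers quoted in the abstract):

```python
def time_to_reach(sigma_1s, target):
    """Averaging time (s) for white-frequency noise to average from
    sigma_1s at 1 s down to the target instability: tau = (sigma_1s/target)^2."""
    return (sigma_1s / target) ** 2

# From 2.2e-16 at 1 s down to the 2.1e-18 systematic uncertainty:
tau = time_to_reach(2.2e-16, 2.1e-18)
print(tau, "s =", tau / 3600.0, "h")
```

That is roughly three hours of averaging, which is why the improved laser stability is presented as the enabling step for the accuracy evaluation.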
Systematic Uncertainties in Characterizing Cluster Outskirts: The Case of Abell 133
NASA Astrophysics Data System (ADS)
Paine, Jennie; Ogrean, Georgiana A.; Nulsen, Paul; Farrah, Duncan
2016-01-01
The outskirts of galaxy clusters have low surface brightness compared to the X-ray background, making accurate background subtraction particularly important for analyzing cluster spectra out to and beyond the virial radius. We analyze the thermodynamic properties of the intracluster medium (ICM) of Abell 133 and assess the extent to which uncertainties on background subtraction affect measured quantities. We implement two methods of analyzing the ICM spectra: one in which the blank-sky background is subtracted, and another in which the sky background is modeled. We find that the two methods are consistent within the 90% confidence ranges. We were able to measure the thermodynamic properties of the cluster up to R500. Even at R500, the systematic uncertainties associated with the sky background in the direction of A133 are small, despite the ICM signal constituting only ~25% of the total signal. This work was supported in part by the NSF REU and DoD ASSURE programs under NSF grant no. 1262851 and by the Smithsonian Institution. GAO acknowledges support by NASA through a Hubble Fellowship grant HST-HF2-51345.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
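Why the ~25% signal fraction makes background handling so delicate can be seen from a simple counting argument (this is an illustrative propagation sketch, not the paper's spectral analysis; the 2% background mis-estimate is an assumed example value):

```python
def icm_relative_error(f_icm, bkg_rel_err):
    """If the ICM contributes a fraction f_icm of the total signal
    (S = T - B, with B = (1 - f_icm) * T), a relative error bkg_rel_err
    on the background B maps to a relative error on the ICM signal of
    (1 - f_icm) / f_icm * bkg_rel_err."""
    return (1.0 - f_icm) / f_icm * bkg_rel_err

# ICM at ~25% of the total signal, with a 2% background mis-estimate:
print(icm_relative_error(0.25, 0.02))
```

With the ICM at a quarter of the total, background errors are amplified threefold in the subtracted spectrum, which is why the consistency between the subtraction and modeling approaches is a meaningful check.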
NASA Astrophysics Data System (ADS)
Kollat, J. B.; Reed, P. M.
2011-12-01
This study demonstrates how many-objective long-term groundwater monitoring (LTGM) network design tradeoffs evolve across multiple management periods given systematic models errors (i.e., predictive bias), groundwater flow-and-transport forecasting uncertainties, and contaminant observation uncertainties. Our analysis utilizes the Adaptive Strategies for Sampling in Space and Time (ASSIST) framework, which is composed of three primary components: (1) bias-aware Ensemble Kalman Filtering, (2) many-objective hierarchical Bayesian optimization, and (3) interactive visual analytics for understanding spatiotemporal network design tradeoffs. A physical aquifer experiment is utilized to develop a severely challenging multi-period observation system simulation experiment (OSSE) that reflects the challenges and decisions faced in monitoring contaminated groundwater systems. The experimental aquifer OSSE shows both the influence and consequences of plume dynamics as well as alternative cost-savings strategies in shaping how LTGM many-objective tradeoffs evolve. Our findings highlight the need to move beyond least cost purely statistical monitoring frameworks to consider many-objective evaluations of LTGM tradeoffs. The ASSIST framework provides a highly flexible approach for measuring the value of observables that simultaneously improves how the data are used to inform decisions.
NASA Astrophysics Data System (ADS)
Moore, Joseph Andrew
2011-12-01
External-beam radiotherapy is one of the primary methods for treating cancer. Typically, a radiotherapy treatment course consists of radiation delivered to the patient in multiple daily treatment fractions over 6-8 weeks. Each fraction requires the patient to be aligned with the image acquired before the treatment course and used in treatment planning. Unfortunately, patient alignment is not perfect, and this results in residual errors in patient setup. The standard technique for dealing with errors in patient setup is to expand the volume of the target by some margin to ensure the target receives the planned dose in the presence of setup errors. This work develops an alternative to margins for accommodating setup errors in the treatment planning process by directly including patient setup uncertainty in IMRT plan optimization. This probabilistic treatment planning (PTP) operates directly on the planning structure and develops a dose distribution robust to variations in the patient position. Two methods are presented. The first method includes only random setup uncertainty in the planning process by convolving the fluence of each beam with a Gaussian model of the distribution of random setup errors. The second method builds upon this by adding systematic uncertainty to the optimization by way of a joint optimization over multiple probable patient positions. To assess the benefit of PTP methods, a PTP plan and a margin-based plan are developed for each of the 28 patients used in this study. Comparisons of the plans show that PTP plans generally reduce the dose to normal tissues while maintaining a similar dose to the target structure when compared to margin-based plans. Physician assessment indicates that PTP plans are generally preferred over margin-based plans. PTP methods show potential for improving patient outcomes through the reduced complications associated with treatment.
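The first method's core operation, blurring a beam fluence with a Gaussian model of random setup error, can be sketched in a few lines (the flat-beam profile, grid spacing, and 3 mm sigma are illustrative assumptions, not values from the dissertation):

```python
import numpy as np

def blur_fluence(fluence, sigma, dx=1.0):
    """Convolve a 1D fluence profile with a normalized Gaussian kernel of
    standard deviation sigma (same units as the grid spacing dx)."""
    x = np.arange(-4 * sigma, 4 * sigma + dx, dx)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                       # preserve total fluence
    return np.convolve(fluence, kernel, mode="same")

# A flat 20 mm-wide beam on a 1 mm grid, blurred by a 3 mm random setup SD:
fluence = np.zeros(60)
fluence[20:40] = 1.0
blurred = blur_fluence(fluence, 3.0)
print(blurred.max())
```

The blurred profile has softened penumbra but the same integral, which is exactly the expected dose pattern when random setup errors average over many fractions.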
Study of tracking efficiency and its systematic uncertainty from J/ψ → pp̅π+π- at BESIII
NASA Astrophysics Data System (ADS)
Wen-Long, Yuan; Xiao-Cong, Ai; Xiao-Bin, Ji; Shen-Jian, Chen; Yao, Zhang; Ling-Hui, Wu; Liang-Liang, Wang; Ye, Yuan
2016-02-01
Based on J/ψ events collected with the BESIII detector, with corresponding Monte Carlo samples, the tracking efficiency and its systematic uncertainty are studied using a control sample of J/ψ → pp̅π+π-. Validation methods and different factors influencing the tracking efficiency are presented in detail. The tracking efficiency and its systematic uncertainty for protons and pions with the transverse momentum and polar angle dependence are also discussed. Supported by Joint Funds of National Natural Science Foundation of China (U1232201), National Natural Science Foundation of China (11275210, 11205182, 11205184) and National Key Basic Research Program of China (2015CB856700)
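The basic efficiency estimate behind such a control-sample study, with its binomial statistical uncertainty, can be sketched as follows (the counts are illustrative placeholders, not BESIII numbers):

```python
import math

def efficiency(n_found, n_expected):
    """Tracking efficiency and its binomial 1-sigma uncertainty:
    eps = k/N, sigma = sqrt(eps * (1 - eps) / N)."""
    eps = n_found / n_expected
    err = math.sqrt(eps * (1.0 - eps) / n_expected)
    return eps, err

# Hypothetical control sample: 9850 tracks found out of 10000 expected.
eps, err = efficiency(9850, 10000)
print(eps, err)
```

In practice the systematic uncertainty on tracking is then taken from the data/MC difference of such efficiencies, binned in transverse momentum and polar angle as described in the abstract.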
NASA Astrophysics Data System (ADS)
Narayan, Amrendra
The Q-weak experiment aims to measure the weak charge of the proton with a precision of 4.2%. The proposed precision on the weak charge required a 2.5% measurement of the parity-violating asymmetry in elastic electron-proton scattering. Polarimetry was the largest experimental contribution to this uncertainty, and a new Compton polarimeter was installed in Hall C at Jefferson Lab to make the goal achievable. In this polarimeter the electron beam collides with green laser light in a low-gain Fabry-Perot cavity; the scattered electrons are detected in 4 planes of a novel diamond micro-strip detector while the backscattered photons are detected in lead tungstate crystals. This diamond micro-strip detector is the first such device to be used as a tracking detector in a nuclear and particle physics experiment. The diamond detectors are read out using custom-built electronic modules that include a preamplifier, a pulse shaping amplifier and a discriminator for each detector micro-strip. We use field programmable gate array (FPGA) based general purpose logic modules for event selection and histogramming. Extensive Monte Carlo simulations and data acquisition simulations were performed to estimate the systematic uncertainties. Additionally, the Møller and Compton polarimeters were cross calibrated at low electron beam currents using a series of interleaved measurements. In this dissertation, we describe all the subsystems of the Compton polarimeter with emphasis on the electron detector. We focus on the FPGA-based data acquisition system built by the author and the data analysis methods implemented by the author. The simulations of the data acquisition and the polarimeter that helped rigorously establish the systematic uncertainties of the polarimeter are also elaborated, resulting in the first sub-1% measurement of low energy (~1 GeV) electron beam polarization with a Compton electron detector. We have demonstrated that diamond based micro-strip detectors can be used for tracking in a
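The polarization extraction in a Compton polarimeter reduces, at its simplest, to dividing a measured counting asymmetry by the calculated analyzing power. A minimal sketch with wholly illustrative numbers (the counts and the 0.023 analyzing power are hypothetical, not Q-weak values):

```python
# Counting asymmetry between the two laser/beam helicity states,
# and the polarization P = A_measured / A_analyzing.
n_plus, n_minus = 1_020_000, 980_000     # hypothetical counts per state
a_meas = (n_plus - n_minus) / (n_plus + n_minus)
a_analyzing = 0.023                      # hypothetical mean analyzing power
P = a_meas / a_analyzing
print(a_meas, P)
```

The systematic uncertainty budget of the polarimeter is then dominated by how well the analyzing power (detector response, laser polarization, backgrounds) is known, which is what the simulations in the dissertation establish.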
An additional uncertainty of the throughput generated by the constant pressure gas flowmeter
NASA Astrophysics Data System (ADS)
Peksa, L.; Gronych, T.; Řepa, P.; Wild, J.; Tesař, J.; Pražák, D.; Krajíček, Z.; Vičar, M.
2008-03-01
The lower range limit of constant-pressure gas flowmeters is about 10^-8 Pa·m^3/s. Detrimental gas throughputs caused by leaks and outgassing from surfaces prevent its further decrease. Even if the flowmeter is entirely vacuum-tight, the throughput caused by outgassing from surfaces can be sufficiently reduced only by pumping at elevated temperature. This can be done in flowmeters that use directly driven bellows or diaphragm bellows in the volume displacers. Even so, with the designs known to date the lower range limit can hardly be decreased by more than a few tens of times. When generating small throughputs, an additional uncertainty caused by the difference in pressure between the initial and final instants of measurement grows to the extent that it dominates the measurement.
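The additional uncertainty term can be made concrete with a rough back-of-the-envelope model (all numbers below are assumed example values, not from this record): the nominal throughput is q = p·dV/dt, and a pressure mismatch dp between the initial and final instants adds a spurious term of order V·dp/dt, where V is the gas volume of the flowmeter.

```python
# Nominal throughput vs the spurious term from a start/end pressure mismatch.
p = 100.0     # Pa, working pressure (assumed)
dV = 1e-8     # m^3 displaced during the measurement (assumed)
dt = 100.0    # s, measurement duration (assumed)
V = 1e-4      # m^3, flowmeter gas volume (assumed)
dp = 1e-3     # Pa, pressure difference between initial and final instants (assumed)

q_nominal = p * dV / dt    # = 1e-8 Pa m^3/s, at the lower range limit
q_error = V * dp / dt      # = 1e-9 Pa m^3/s spurious contribution
print(q_nominal, q_error, q_error / q_nominal)
```

Even a millipascal-level mismatch contributes 10% of the signal at the current lower range limit, and the ratio worsens in direct proportion as smaller throughputs are generated, which is the article's point.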
NASA Astrophysics Data System (ADS)
Cochran, J. R.; Tinto, K. J.; Elieff, S. H.; Bell, R. E.
2011-12-01
Airborne geophysical surveys in West Antarctica and Greenland carried out during Operation IceBridge (OIB) utilized the Sander Geophysics AIRGrav gravimeter, which collects high quality data during low-altitude, draped flights. This data has been used to determine bathymetry beneath ice shelves and floating ice tongues (e.g., Tinto et al, 2010, Cochran et al, 2010). This paper systematically investigates uncertainties arising from survey, instrumental and geologic constraints in this type of study and the resulting resolution of the bathymetry model. Gravity line data is low-pass filtered with time-based filters to remove high frequency noise. The spatial filter length is dependent on aircraft speed. For parameters used in OIB (70-140 s filters and 270-290 knots), spatial filter half-wavelengths are ~5-10 km. The half-wavelength does not define a lower limit to the width of feature that can be detected, but shorter wavelength features may appear wider with a lower amplitude. Resolution can be improved either by using a shorter filter or by flying slower. Both involve tradeoffs; a shorter filter allows more noise and slower speeds result in less coverage. These filters are applied along tracks, rather than in a region surrounding a measurement. In areas of large gravity relief, tracks in different directions can sample a very different range of gravity values within the length of the filter. We show that this can lead to crossover mismatches of >5 mGal, complicating interpretation. For dense surveys, gridding the data and then sampling the grid at the measurement points can minimize this effect. Resolution is also affected by the elevation of survey flights. For a distributed mass, the gravity amplitude decreases with distance and short-wavelength components attenuate faster. This is not a serious issue for OIB, which flew draped flights <500 m above the ice surface, but is a serious factor for gravimeters that require a constant elevation above the highest
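The speed-dependence of the spatial filter length quoted in the text can be checked in a few lines (the knots-to-m/s conversion is standard; only the OIB parameters quoted in the abstract are used):

```python
KNOT = 0.514444  # m/s per knot

def half_wavelength_km(speed_knots, filter_s):
    """Spatial half-wavelength of a time-domain low-pass filter applied
    along track: speed * filter_length / 2."""
    return speed_knots * KNOT * filter_s / 2.0 / 1000.0

# The extremes of the quoted OIB parameters (70-140 s, 270-290 knots):
print(half_wavelength_km(270, 70))   # shortest filter, slowest speed
print(half_wavelength_km(290, 140))  # longest filter, fastest speed
```

The two extremes reproduce the stated ~5-10 km half-wavelength range, and the formula makes the stated tradeoff explicit: flying slower or filtering less both shrink the half-wavelength.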
Amoush, Ahmad; Abdel-Wahab, May; Abazeed, Mohamed; Xia, Ping
2015-01-01
The purpose of this study was to quantify the systematic uncertainties resulting from using free breathing computed tomography (FBCT) as a reference image for image-guided radiation therapy (IGRT) for patients with pancreatic tumors, and to quantify the associated dosimetric impact of using FBCT as the reference for IGRT. Fifteen patients with implanted fiducial markers were selected for this study. For each patient, a FBCT and an average intensity projection computed tomography (AIP) created from four-dimensional computed tomography (4D CT) were acquired at simulation. The treatment plan was created based on the FBCT. Seventy-five weekly kilovoltage (kV) cone-beam computed tomography (CBCT) images (five for each patient) were selected for this study. Bony alignment without rotation correction was performed 1) between the FBCT and CBCT, 2) between the AIP and CBCT, and 3) between the AIP and FBCT. The contours of the fiducials from the FBCT and AIP were transferred to the corresponding CBCT and were compared. Among the 75 CBCTs, 20 that had > 3 mm differences in centers of mass (COMs) in any direction between the FBCT and AIP were chosen for further dosimetric analysis. These COM discrepancies were converted into isocenter shifts in the corresponding planning FBCT, and the dose was recalculated and compared to the initial FBCT plans. For the 75 CBCTs studied, the mean absolute differences in the COMs of the fiducial markers between the FBCT and CBCTs were 3.3 mm ± 2.5 mm, 3.5 mm ± 2.4 mm, and 5.8 mm ± 4.4 mm in the right-left (RL), anterior-posterior (AP), and superior-inferior (SI) directions, respectively. Between the AIP and CBCTs, the mean absolute differences were 3.2 mm ± 2.2 mm, 3.3 mm ± 2.3 mm, and 6.3 mm ± 5.4 mm. The absolute mean discrepancies in these COM shifts between FBCT/CBCT and AIP/CBCT were 1.1 mm ± 0.8 mm, 1.3 mm ± 0.9 mm, and 3.3 mm ± 2.6 mm in RL, AP, and SI, respectively. This represented a potential systematic error
NASA Astrophysics Data System (ADS)
Galli, Silvia; Slatyer, Tracy R.; Valdes, Marcos; Iocco, Fabio
2013-09-01
Anisotropies of the cosmic microwave background (CMB) have proven to be a very powerful tool to constrain dark matter annihilation at the epoch of recombination. However, CMB constraints are currently derived using a number of reasonable but yet untested assumptions that could potentially lead to a misestimation of the true bounds (or any reconstructed signal). In this paper we examine the potential impact of these systematic effects. In particular, we separately study the propagation of the secondary particles produced by annihilation in two energy regimes: first following the shower from the initial particle energy to the keV scale, and then tracking the resulting secondary particles from this scale to the absorption of their energy as heat, ionization, or excitation of the medium. We improve both the high- and low-energy parts of the calculation, in particular finding that our more accurate treatment of losses to sub-10.2 eV photons produced by scattering of high-energy electrons weakens the constraints on particular dark matter annihilation models by up to a factor of 2. On the other hand, we find that the uncertainties we examine for the low-energy propagation do not significantly affect the results for current and upcoming CMB data. We include the evaluation of the precise amount of excitation energy, in the form of Lyman-α photons, produced by the propagation of the shower, and examine the effects of varying the helium fraction and helium ionization fraction. In the recent literature, simple approximations for the fraction of energy absorbed in different channels have often been used to derive CMB constraints: we assess the impact of using accurate vs approximate energy fractions. Finally we check that the choice of recombination code (between RECFAST v1.5 and COSMOREC), to calculate the evolution of the free electron fraction in the presence of dark matter annihilation, introduces negligible differences.
Aad, G.
2015-01-15
The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector using proton-proton collision data at a centre-of-mass energy of √s = 7 TeV, corresponding to an integrated luminosity of 4.7 fb^-1. Jets are reconstructed from energy deposits forming topological clusters of calorimeter cells using the anti-kt algorithm with distance parameters R = 0.4 or R = 0.6, and are calibrated using MC simulations. A residual JES correction is applied to account for differences between data and MC simulations. This correction and its systematic uncertainty are estimated using a combination of in situ techniques exploiting the transverse momentum balance between a jet and a reference object such as a photon or a Z boson, for 20 ≤ pT^jet < 1000 GeV and pseudorapidities |η| < 4.5. The effect of multiple proton-proton interactions is corrected for, and an uncertainty is evaluated using in situ techniques. The smallest JES uncertainty of less than 1% is found in the central calorimeter region (|η| < 1.2) for jets with 55 ≤ pT^jet < 500 GeV. For central jets at lower pT, the uncertainty is about 3%. A consistent JES estimate is found using measurements of the calorimeter response of single hadrons in proton-proton collisions and test-beam data, which also provide the estimate for pT^jet > 1 TeV. The calibration of forward jets is derived from dijet pT balance measurements. The resulting uncertainty reaches its largest value of 6% for low-pT jets at |η| = 4.5. In addition, JES uncertainties due to specific event topologies, such as close-by jets or selections of event samples with an enhanced content of jets originating from light quarks or
Sensory uncertainty leads to systematic misperception of the direction of motion in depth.
Fulvio, Jacqueline M; Rosen, Monica L; Rokers, Bas
2015-07-01
Although we have made major advances in understanding motion perception based on the processing of lateral (2D) motion signals on computer displays, the majority of motion in the real (3D) world occurs outside of the plane of fixation, and motion directly toward or away from observers has particular behavioral relevance. Previous work has reported a systematic lateral bias in the perception of 3D motion, such that an object on a collision course with an observer's head is frequently judged to miss it, with obvious negative consequences. To better understand this bias, we systematically investigated the accuracy of 3D motion perception while manipulating sensory noise by varying the contrast of a moving target and its position in depth relative to fixation. Inconsistent with previous work, we found little bias under low sensory noise conditions. With increased sensory noise, however, we revealed a novel perceptual phenomenon: observers demonstrated a surprising tendency to confuse the direction of motion-in-depth, such that approaching objects were reported to be receding and vice versa. Subsequent analysis revealed that the lateral and motion-in-depth components of observers' reports are similarly affected, but that the effects on the motion-in-depth component (i.e., the motion-in-depth confusions) are much more apparent than those on the lateral component. In addition to revealing this novel visual phenomenon, these results shed new light on errors that can occur in motion perception and provide a basis for continued development of motion perception models. Finally, our findings suggest methods to evaluate the effectiveness of 3D visualization environments, such as 3D movies and virtual reality devices. PMID:25828462
Systematic uncertainties in RF-based measurement of superconducting cavity quality factors
NASA Astrophysics Data System (ADS)
Holzbauer, J. P.; Pischalnikov, Yu.; Sergatskov, D. A.; Schappert, W.; Smith, S.
2016-09-01
Q0 determinations based on RF power measurements are subject to at least three potentially large systematic effects that have not been previously appreciated. Instrumental factors that can systematically bias RF based measurements of Q0 are quantified and steps that can be taken to improve the determination of Q0 are discussed.
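The relation underlying RF-based Q0 determinations is the standard CW definition Q0 = ωU/P_diss, with the stored energy U inferred from pickup-probe signals; any systematic error in the inferred U or P_diss therefore biases Q0 directly. A minimal sketch with illustrative values (not from the paper):

```python
import math

f = 1.3e9       # Hz, cavity frequency (assumed, typical SRF elliptical cavity)
U = 20.0        # J, stored energy inferred from the pickup probe (assumed)
P_diss = 10.0   # W, RF power dissipated in the cavity walls (assumed)

# Intrinsic quality factor: Q0 = omega * U / P_diss
Q0 = 2 * math.pi * f * U / P_diss
print(Q0)
```

Because Q0 enters linearly in U and inversely in P_diss, percent-level instrumental biases in either quantity, of the kind the paper quantifies, translate one-to-one into percent-level Q0 errors.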
Juhasz, A.; Henning, Th.; Bouwman, J.; Dullemond, C. P.; Pascucci, I.; Apai, D.
2009-04-20
The spectral region around 10 μm, showing prominent emission bands from various dust species, is commonly used for the evaluation of the chemical composition of protoplanetary dust. Different methods of analysis have been proposed for this purpose, but so far no comparative test has been performed to check the validity of their assumptions. In this paper, we evaluate how well the various methods derive the chemical composition of dust grains from infrared spectroscopy. Synthetic spectra of disk models with different geometries and central sources were calculated using a two-dimensional radiative transfer code. These spectra were then fitted in a blind test by four spectral decomposition methods. We studied the effect of disk structure (flared versus flat), inclination angle, size of an inner disk hole, and stellar luminosity on the fitted chemical composition. Our results show that the dust parameters obtained by all methods deviate systematically from the input data of the synthetic spectra. The dust composition fitted by the new two-layer temperature distribution method, described in this paper, differs the least from the input dust composition, and its results show the weakest systematic effects. The reason for the deviations of the results given by the previously used methods lies in their simplifying assumptions. Due to the radial extent of the 10 μm emitting region, there is dust at different temperatures contributing to the flux in the silicate feature. Therefore, the assumption of a single averaged grain temperature can be a strong limitation of the previously used methods. The continuum below the feature can consist of multiple components (e.g., star, inner rim, and disk midplane), which cannot simply be described by a Planck function at a single temperature. In addition, the optically thin emission of 'featureless' grains (e.g., carbon in the considered wavelength range) produces a degeneracy in the models with the optically thick emission of
Reducing model uncertainty effects in flexible manipulators through the addition of passive damping
NASA Technical Reports Server (NTRS)
Alberts, T. E.
1987-01-01
An important issue in the control of practical systems is the effect of model uncertainty on closed loop performance. This is of particular concern when flexible structures are to be controlled, due to the fact that states associated with higher frequency vibration modes are truncated in order to make the control problem tractable. Digital simulations of a single-link manipulator system are employed to demonstrate that passive damping added to the flexible member reduces adverse effects associated with model uncertainty. A controller was designed based on a model including only one flexible mode. This controller was applied to larger order systems to evaluate the effects of modal truncation. Simulations using a Linear Quadratic Regulator (LQR) design assuming full state feedback illustrate the effect of control spillover. Simulations of a system using output feedback illustrate the destabilizing effect of observation spillover. The simulations reveal that the system with passive damping is less susceptible to these effects than the untreated case.
Enhanced flux pinning in MOCVD-YBCO films through Zr additions: systematic feasibility studies
Aytug, Tolga; Paranthaman, Mariappan Parans; Specht, Eliot D; Kim, Kyunghoon; Zhang, Yifei; Cantoni, Claudia; Zuev, Yuri L; Goyal, Amit; Christen, David K; Maroni, Victor A.
2009-01-01
Systematic effects of Zr additions on the structural and flux pinning properties of YBa₂Cu₃O₇₋δ (YBCO) films deposited by metal-organic chemical vapor deposition (MOCVD) have been investigated. Detailed characterization, conducted by coordinated transport, x-ray diffraction, scanning and transmission electron microscopy analyses, and imaging Raman microscopy, has revealed trends in the resulting property/performance correlations of these films with respect to varying mole percentages (mol%) of added Zr. For compositions ≤ 7.5 mol%, Zr additions lead to improved in-field critical current density, as well as extra correlated pinning along the c-axis direction of the YBCO films via the formation of columnar, self-assembled stacks of BaZrO₃ nanodots.
Trapped ion 88Sr+ optical clock systematic uncertainties - AC Stark shift determination
NASA Astrophysics Data System (ADS)
Barwood, GP; Huang, G.; King, SA; Klein, HA; Gill, P.
2016-06-01
A recent comparison between two trapped-ion 88Sr+ optical clocks at the UK National Physical Laboratory demonstrated agreement to 4 parts in 10¹⁷. One of the uncertainty contributions to the optical clock absolute frequency arises from the blackbody radiation shift, which in turn depends on uncertainty in the knowledge of the differential polarisability between the two clock states. Whilst a recent NRC measurement has determined the DC differential polarisability to high accuracy, there has been no experimental verification to date of the dynamic correction to the DC Stark shift. We report a measurement of the scalar AC Stark shift at 1064 nm, with measurements planned at other wavelengths. Our preliminary result using a fibre laser at 1064 nm agrees with calculated values to within ∼3%.
NASA Astrophysics Data System (ADS)
High Resolution Fly'S Eye Collaboration; Abu-Zayyad, T.; Amman, J. F.; Archbold, G.; Belov, K.; Belz, J. W.; Ben Zvi, S. Y.; Bergman, D. R.; Blake, S. A.; Brusova, O.; Burt, G. W.; Cao, Z.; Connolly, B. C.; Deng, W.; Fedorova, Y.; Finley, C. B.; Gray, R. C.; Hanlon, W. F.; Hoffman, C. M.; Hughes, G. A.; Holzscheiter, M. H.; Hüntemeyer, P.; Jones, B. F.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Loh, E. C.; Maestas, M. M.; Manago, N.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; Moore, S. A.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M.; Rodriguez, D.; Sasaki, M.; Schnetzer, S. R.; Scott, L. M.; Sinnis, G.; Smith, J. D.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.; Zhang, X.
2007-06-01
We have studied several sources of systematic uncertainty in calculating the aperture of the High Resolution Fly’s Eye experiment (HiRes) in monocular mode, primarily as they affect the HiRes-II site. The energy dependent aperture is determined with detailed Monte Carlo simulations of the air showers and the detector response. We have studied the effects of changes to the input energy spectrum and composition used in the simulation. A realistic shape of the input spectrum is used in our analysis in order to avoid biases in the aperture estimate due to the limited detector resolution. We have examined the effect of exchanging our input spectrum with a simple E⁻³ power law in the “ankle” region. Uncertainties in the input composition are shown to be significant for energies below ∼10¹⁸ eV for data from the HiRes-II detector. Another source of uncertainties is the choice of the hadronic interaction model in the air shower generator. We compare the aperture estimate for two different models: QGSJet01 and SIBYLL 2.1. We also describe the implications of employing an atmospheric database with hourly measurements of the aerosol component, instead of using an average as has been used in our previously published measurements of the monocular spectra.
GARNATJE, TERESA; GARCIA, SÒNIA; VILATERSANA, ROSER; VALLÈS, JOAN
2006-01-01
• Background and Aims: Plant genome size is an important biological characteristic, with relationships to systematics, ecology and distribution. Currently, there is no information regarding nuclear DNA content for any Carthamus species. In addition to improving the knowledge base, this research focuses on interspecific variation and its implications for the infrageneric classification of this genus. Genome size variation in the process of allopolyploid formation is also addressed. • Methods: Nuclear DNA samples from 34 populations of 16 species of the genus Carthamus were assessed by flow cytometry using propidium iodide. • Key Results: The 2C values ranged from 2·26 pg for C. leucocaulos to 7·46 pg for C. turkestanicus, and monoploid genome size (1Cx-value) ranged from 1·13 pg in C. leucocaulos to 1·53 pg in C. alexandrinus. Mean genome sizes differed significantly, based on sectional classification. Both allopolyploid species (C. creticus and C. turkestanicus) exhibited nuclear DNA contents in accordance with the sum of the putative parental C-values (in one case with a slight reduction, frequent in polyploids), supporting their hybrid origin. • Conclusions: Genome size represents a useful tool in elucidating systematic relationships between closely related species. A considerable reduction in monoploid genome size, possibly due to hybrid formation, is also reported within these taxa. PMID:16390843
No additional value of fusion techniques on anterior discectomy for neck pain: a systematic review.
van Middelkoop, Marienke; Rubinstein, Sidney M; Ostelo, Raymond; van Tulder, Maurits W; Peul, Wilco; Koes, Bart W; Verhagen, Arianne P
2012-11-01
We aimed to assess the effects of additional fusion on surgical interventions to the cervical spine for patients with neck pain with or without radiculopathy or myelopathy by performing a systematic review. The search strategy outlined by the Cochrane Back Review Group (CBRG) was followed. The primary search was conducted in MEDLINE, EMBASE, CINAHL, CENTRAL and PEDro up to June 2011. Only randomised, controlled trials of adults with neck pain that evaluated at least one clinically relevant primary outcome measure (pain, functional status, recovery) were included. Two authors independently assessed the risk of bias using the criteria recommended by the CBRG and extracted the data. Data were pooled using a random effects model. The quality of the evidence was rated using the GRADE method. In total, 10 randomised, controlled trials comparing anterior decompression with and without additional fusion were identified, including 2 studies with a low risk of bias. Results revealed no clinically relevant differences in recovery: the pooled risk difference was -0.06 (95% confidence interval -0.22 to 0.10) at short-term follow-up and -0.07 (95% confidence interval -0.14 to 0.00) at long-term follow-up. Pooled risk differences for pain and return to work likewise demonstrated no differences. There is no additional benefit of fusion techniques applied within an anterior discectomy procedure on pain, recovery and return to work. PMID:22818181
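Random-effects pooling of per-study risk differences, as used in the review, can be sketched with a DerSimonian-Laird estimator (one common choice; the abstract does not name its exact estimator). The per-study risk differences and variances below are hypothetical, not the review's data:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects (DerSimonian-Laird) pooled effect and 95% CI."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                           # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    Q = np.sum(w * (effects - fixed) ** 2)        # Cochran's Q heterogeneity
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                 # between-study variance
    w_star = 1.0 / (variances + tau2)             # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical risk differences (fusion vs. discectomy alone) and variances:
rd = [-0.10, 0.02, -0.05]
var = [0.004, 0.003, 0.005]
pooled, ci = dersimonian_laird(rd, var)
print(round(pooled, 3), tuple(round(x, 3) for x in ci))
```

A pooled confidence interval spanning zero, as in the review's short-term result of -0.06 (-0.22 to 0.10), indicates no clinically relevant difference.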
A new approach to systematic uncertainties and self-consistency in helium abundance determinations
Aver, Erik; Olive, Keith A.; Skillman, Evan D. E-mail: olive@umn.edu
2010-05-01
Tests of big bang nucleosynthesis and early universe cosmology require precision measurements for helium abundance determinations. However, efforts to determine the primordial helium abundance via observations of metal poor H II regions have been limited by significant uncertainties (compared with the value inferred from BBN theory using the CMB determined value of the baryon density). This work builds upon previous work by providing an updated and extended program in evaluating these uncertainties. Procedural consistency is achieved by integrating the hydrogen based reddening correction with the helium based abundance calculation, i.e., all physical parameters are solved for simultaneously. We include new atomic data for helium recombination and collisional emission based upon recent work by Porter et al., and wavelength dependent corrections to underlying absorption are investigated. The set of physical parameters has been expanded here to include the effects of neutral hydrogen collisional emission. It is noted that Hγ and Hδ allow better isolation of the collisional effects from the reddening. Because of a degeneracy between the solutions for density and temperature, the precision of the helium abundance determinations is limited. Also, at lower temperatures (T ≲ 13,000 K) the neutral hydrogen fraction is poorly constrained, resulting in a larger uncertainty in the helium abundances. Thus, the derived errors on the helium abundances for individual objects are larger than those typical of previous studies. Seven previously analyzed, "high quality" H II region spectra are used for a primordial helium abundance determination. The updated emissivities and neutral hydrogen correction generally raise the abundance. From a regression to zero metallicity, we find Y_p = 0.2561 ± 0.0108, in broad agreement with the WMAP result. Alternatively, a simple average of the data yields Y_p = 0.2566 ± 0.0028. Tests with synthetic data show a potential for distinct
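The "regression to zero metallicity" step can be sketched as a weighted linear fit of helium mass fraction Y against metallicity, whose intercept is the primordial value Y_p. The (O/H, Y) pairs and errors below are illustrative, not the paper's seven-object data set:

```python
import numpy as np

# Hypothetical (O/H, Y) measurements with uncertainties for a few H II regions:
oh = np.array([2.0, 4.5, 6.0, 8.0, 9.5]) * 1e-5      # metallicity O/H
Y  = np.array([0.2570, 0.2590, 0.2610, 0.2625, 0.2640])
sY = np.array([0.004, 0.005, 0.004, 0.006, 0.005])   # 1-sigma errors on Y

# Weighted least squares for Y = Y_p + m * (O/H); the intercept Y_p is the
# primordial helium abundance (extrapolation to zero metallicity).
w = 1.0 / sY**2
A = np.vstack([np.ones_like(oh), oh]).T
W = np.diag(w)
Y_p, slope = np.linalg.solve(A.T @ W @ A, A.T @ W @ Y)
print(round(Y_p, 4))
```

The alternative quoted in the abstract, a simple (error-weighted) average of the Y values, trades the extrapolation uncertainty for a metallicity-dependent bias, which is why both numbers are reported.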
Kulkarni, Sonali P; Shah, Kavita R; Sarma, Karthik V; Mahajan, Anish P
2013-06-01
Despite the HIV "test-and-treat" strategy's promise, questions about its clinical rationale, operational feasibility, and ethical appropriateness have led to vigorous debate in the global HIV community. We performed a systematic review of the literature published between January 2009 and May 2012 using PubMed, SCOPUS, Global Health, Web of Science, BIOSIS, Cochrane CENTRAL, EBSCO Africa-Wide Information, and EBSCO CINAHL Plus databases to summarize clinical uncertainties, health service challenges, and ethical complexities that may affect the test-and-treat strategy's success. A thoughtful approach to research and implementation to address clinical and health service questions and meaningful community engagement regarding ethical complexities may bring us closer to safe, feasible, and effective test-and-treat implementation. PMID:23597344
NASA Astrophysics Data System (ADS)
Reed, P. M.; Kollat, J. B.
2012-01-01
This study demonstrates how many-objective long-term groundwater monitoring (LTGM) network design tradeoffs evolve across multiple management periods given systematic model errors (i.e., predictive bias), groundwater flow-and-transport forecasting uncertainties, and contaminant observation uncertainties. Our analysis utilizes the Adaptive Strategies for Sampling in Space and Time (ASSIST) framework, which is composed of three primary components: (1) bias-aware Ensemble Kalman Filtering, (2) many-objective hierarchical Bayesian optimization, and (3) interactive visual analytics for understanding spatiotemporal network design tradeoffs. A physical aquifer experiment is utilized to develop a severely challenging multi-period observation system simulation experiment (OSSE) that reflects the challenges and decisions faced in monitoring contaminated groundwater systems. The experimental aquifer OSSE shows both the influence and consequences of plume dynamics as well as alternative cost-savings strategies in shaping how LTGM many-objective tradeoffs evolve. Our findings highlight the need to move beyond least-cost, purely statistical monitoring frameworks to consider many-objective evaluations of LTGM tradeoffs. The ASSIST framework provides a highly flexible approach for measuring the value of observables that simultaneously improves how the data are used to inform decisions.
M dwarf metallicities and giant planet occurrence: Ironing out uncertainties and systematics
Gaidos, Eric; Mann, Andrew W.
2014-08-10
Comparisons between the planet populations around solar-type stars and those orbiting M dwarfs shed light on the possible dependence of planet formation and evolution on stellar mass. However, such analyses must control for other factors, i.e., metallicity, a stellar parameter that strongly influences the occurrence of gas giant planets. We obtained infrared spectra of 121 M dwarf stars monitored by the California Planet Search and determined metallicities with an accuracy of 0.08 dex. The mean and standard deviation of the sample are –0.05 and 0.20 dex, respectively. We parameterized the metallicity dependence of the occurrence of giant planets on orbits with a period less than two years around solar-type stars and applied this to our M dwarf sample to estimate the expected number of giant planets. The number of detected planets (3) is lower than the predicted number (6.4), but the difference is not very significant (12% probability of finding as many or fewer planets). The three M dwarf planet hosts are not especially metal rich and the most likely value of the power-law index relating planet occurrence to metallicity is 1.06 dex per dex for M dwarfs compared to 1.80 for solar-type stars; this difference, however, is comparable to uncertainties. Giant planet occurrence around both types of stars allows, but does not necessarily require, a mass dependence of ∼1 dex per dex. The actual planet-mass-metallicity relation may be complex, and elucidating it will require larger surveys like those to be conducted by ground-based infrared spectrographs and the Gaia space astrometry mission.
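The "expected number of giant planets" calculation amounts to summing a metallicity-dependent occurrence rate over the sample. The normalization f0, power-law index beta, and simulated [Fe/H] values below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated [Fe/H] values for a sample of 121 M dwarfs (the real sample has
# mean -0.05 dex and scatter 0.20 dex; these draws are stand-ins).
feh = rng.normal(-0.05, 0.20, size=121)

def occurrence(feh, f0=0.07, beta=1.80):
    """Power-law giant-planet occurrence f = f0 * 10**(beta * [Fe/H]).
    f0 and beta here are illustrative, not the paper's parameters."""
    return f0 * 10.0 ** (beta * feh)

# The expected number of giant-planet hosts is the sum of per-star
# occurrence probabilities over the sample.
expected = occurrence(feh).sum()
print(round(expected, 1))
```

Comparing this expectation against the observed count (3 detections vs. 6.4 predicted in the paper) is then a Poisson/binomial tail probability, quoted in the abstract as 12%.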
NASA Astrophysics Data System (ADS)
Salaris, M.; Cassisi, S.
2007-01-01
Context: Age and metallicity estimates for extragalactic globular clusters, from integrated colour-colour diagrams, are examined. Aims: We investigate biases in cluster ages and [Fe/H] estimated from the (V-K)-(V-I) diagram, arising from inconsistent Horizontal Branch morphology, metal mixture, and treatment of core convection between observed clusters and the theoretical colour grid employed for age and metallicity determinations. We also study the role played by statistical fluctuations of the observed colours, caused by the low total mass of typical globulars. Methods: Synthetic samples of globular cluster systems are created by means of Monte Carlo techniques. Each sample accounts for a different possible source of bias, among the ones addressed in this investigation. Cumulative age and [Fe/H] distributions are then retrieved by comparisons with a reference theoretical colour-colour grid, and analyzed. Results: Horizontal Branch morphology is potentially the largest source of uncertainty. A single-age system harbouring a large fraction of clusters with an HB morphology systematically bluer than the one accounted for in the theoretical colour grid can simulate a bimodal population with an age difference as large as ~8 Gyr. When only the redder clusters are considered, this uncertainty is almost negligible, unless there is an extreme mass loss along the Red Giant Branch phase. The metal mixture affects mainly the redder clusters; the effect of colour fluctuations becomes negligible for the redder clusters, or when the integrated MV is brighter than ~-8.5 mag. The treatment of core convection is relevant for ages below ~4 Gyr. The retrieved cumulative [Fe/H] distributions are overall only mildly affected. Colour fluctuations and convective core extension have the largest effect. When 1σ photometric errors reach 0.10 mag, all biases found in our analysis are erased, and bimodal age populations with age differences of up to ~8 Gyr go undetected. The use of both (U
NASA Astrophysics Data System (ADS)
Morley, M. G.; Mihaly, S. F.; Dewey, R. K.; Jeffries, M. A.
2015-12-01
Ocean Networks Canada (ONC) operates the NEPTUNE and VENUS cabled ocean observatories to collect data on physical, chemical, biological, and geological ocean conditions over multi-year time periods. Researchers can download real-time and historical data from a large variety of instruments to study complex earth and ocean processes from their home laboratories. Ensuring that users receive the most accurate data is a high priority at ONC, requiring quality assurance and quality control (QAQC) procedures to be developed for all data types. While some data types have relatively straightforward QAQC tests, such as scalar data range limits based on expected observed values or the measurement limits of the instrument, for other data types the QAQC tests are more comprehensive. Long time series of ocean currents from Acoustic Doppler Current Profilers (ADCPs), stitched together from multiple deployments over many years, are one such data type where systematic data biases are more difficult to identify and correct. Data specialists at ONC are working to quantify systematic compass heading uncertainty in long-term ADCP records at each of the major study sites using the internal compass, remotely operated vehicle bearings, and more analytical tools such as principal component analysis (PCA) to estimate the optimal instrument alignments. In addition to using PCA, some work has been done to estimate the main components of the current at each site using tidal harmonic analysis. This paper describes the key challenges and presents preliminary PCA and tidal analysis approaches used by ONC to improve long-term observatory current measurements.
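The PCA step can be sketched as finding the leading eigenvector of the horizontal velocity covariance matrix: for a strongly rectilinear (e.g., tidal) flow, the principal axis recovered from the data can be compared with the instrument's internal compass heading to estimate a bias. The flow geometry and noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated horizontal currents: tidal flow oriented 30 degrees (true bearing),
# plus weak cross-axis noise, as an ADCP might record them.
true_axis_deg = 30.0
along = 0.4 * np.sin(np.linspace(0, 40 * np.pi, 2000))  # dominant tidal axis
cross = 0.05 * rng.standard_normal(2000)                # weak cross-axis noise
theta = np.radians(true_axis_deg)
east = along * np.sin(theta) + cross * np.cos(theta)
north = along * np.cos(theta) - cross * np.sin(theta)

# PCA: the leading eigenvector of the velocity covariance matrix gives the
# principal current axis (modulo 180 degrees, since flow reverses).
cov = np.cov(np.vstack([east, north]))
eigvals, eigvecs = np.linalg.eigh(cov)
v = eigvecs[:, np.argmax(eigvals)]                # leading principal component
axis_deg = np.degrees(np.arctan2(v[0], v[1])) % 180.0
print(round(axis_deg, 1))
```

The 180-degree ambiguity is why PCA alone cannot fix a compass sign flip; independent bearings (e.g., from a remotely operated vehicle, as the abstract notes) resolve it.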
NASA Technical Reports Server (NTRS)
Smalheer, C. V.
1973-01-01
The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.
NASA Astrophysics Data System (ADS)
Anderson, Richard I.
2014-06-01
Context. Classical Cepheids are crucial calibrators of the extragalactic distance scale. The Baade-Wesselink technique can be used to calibrate Cepheid distances using Cepheids in the Galaxy and the Magellanic Clouds. Aims: I report the discovery of modulations in radial velocity (RV) curves of four Galactic classical Cepheids and investigate their impact as a systematic uncertainty for Baade-Wesselink distances. Methods: Highly precise Doppler measurements were obtained using the Coralie high-resolution spectrograph since 2011. Particular care was taken to sample all phase points in order to very accurately trace the RV curve during multiple epochs and to search for differences in linear radius variations derived from observations obtained at different epochs. Different timescales are sampled, ranging from cycle-to-cycle to months and years. Results: The unprecedented combination of excellent phase coverage obtained during multiple epochs and high precision enabled the discovery of significant modulation in the RV curves of the short-period s-Cepheids QZ Normae and V335 Puppis, as well as the long-period fundamental mode Cepheids ℓ Carinae and RS Puppis. The modulations manifest as shape and amplitude variations that vary smoothly on timescales of years for short-period Cepheids and from one pulsation cycle to the next in the long-period Cepheids. The order of magnitude of the effect ranges from several hundred m s⁻¹ to a few km s⁻¹. The resulting difference among linear radius variations derived using data from different epochs can lead to systematic errors of up to 15% for Baade-Wesselink-type distances, if the employed angular and linear radius variations are not determined contemporaneously. Conclusions: The different natures of the Cepheids exhibiting modulation in their RV curves suggests that this phenomenon is common. The observational baseline is not yet sufficient to conclude whether these modulations are periodic. To ensure the accuracy of Baade
Páll-Gergely, Barna; Hunyadi, András; Ablett, Jonathan; Lương, Hào Văn; Naggs, Fred; Asami, Takahiro
2015-01-01
Abstract Vietnamese species from the family Plectopylidae are revised based on the type specimens of all known taxa, more than 600 historical non-type museum lots, and almost 200 newly-collected samples. Altogether more than 7000 specimens were investigated. The revision has revealed that species diversity of the Vietnamese Plectopylidae was previously overestimated. Overall, thirteen species names (anterides Gude, 1909, bavayi Gude, 1901, congesta Gude, 1898, fallax Gude, 1909, gouldingi Gude, 1909, hirsuta Möllendorff, 1901, jovia Mabille, 1887, moellendorffi Gude, 1901, persimilis Gude, 1901, pilsbryana Gude, 1901, soror Gude, 1908, tenuis Gude, 1901, verecunda Gude, 1909) were synonymised with other species. In addition to these, Gudeodiscus hemmeni sp. n. and Gudeodiscus messageri raheemi ssp. n. are described from north-western Vietnam. Sixteen species and two subspecies are recognized from Vietnam. The reproductive anatomy of eight taxa is described. Based on anatomical information, Halongella gen. n. is erected to include Plectopylis schlumbergeri and Plectopylis fruhstorferi. Additionally, the genus Gudeodiscus is subdivided into two subgenera (Gudeodiscus and Veludiscus subgen. n.) on the basis of the morphology of the reproductive anatomy and the radula. The Chinese Gudeodiscus phlyarius werneri Páll-Gergely, 2013 is moved to synonymy of Gudeodiscus phlyarius. A spermatophore was found in the organ situated next to the gametolytic sac in one specimen. This suggests that this organ in the Plectopylidae is a diverticulum. Statistically significant evidence is presented for the presence of calcareous hook-like granules inside the penis being associated with the absence of embryos in the uterus in four genera. This suggests that these probably play a role in mating periods before disappearing when embryos develop. Sicradiscus mansuyi is reported from China for the first time. PMID:25632253
Keeling, V; Jin, H; Hossain, S; Ahmad, S; Ali, I
2014-06-15
Purpose: To evaluate setup accuracy and quantify individual systematic and random errors for the various hardware and software components of the frameless 6D-BrainLAB ExacTrac system. Methods: 35 patients with cranial lesions, some with multiple isocenters (50 total lesions treated in 1, 3, 5 fractions), were investigated. All patients were simulated with a rigid head-and-neck mask and the BrainLAB localizer. CT images were transferred to the IPLAN treatment planning system, where optimized plans were generated using a stereotactic reference frame based on the localizer. The patients were set up initially with the infrared (IR) positioning ExacTrac system. Stereoscopic X-ray images (XC: X-ray Correction) were registered to their corresponding digitally-reconstructed radiographs, based on bony anatomy matching, to calculate 6D translational and rotational (Lateral, Longitudinal, Vertical, Pitch, Roll, Yaw) shifts. XC combines systematic errors of the mask, localizer, image registration, frame, and IR. If shifts were below tolerance (0.7 mm translational and 1 degree rotational), treatment was initiated; otherwise corrections were applied and additional X-rays were acquired to verify patient position (XV: X-ray Verification). Statistical analysis was used to extract systematic and random errors of the different components of the 6D-ExacTrac system and evaluate the cumulative setup accuracy. Results: Mask systematic errors (translational; rotational) were the largest and varied from one patient to another in the range (−15 to 4 mm; −2.5 to 2.5 degrees), obtained from the mean of XC for each patient. Setup uncertainty in IR positioning (0.97, 2.47, 1.62 mm; 0.65, 0.84, 0.96 degrees) was extracted from the standard deviation of XC. Combined systematic errors of the frame and localizer (0.32, −0.42, −1.21 mm; −0.27, 0.34, 0.26 degrees) were extracted from the mean of means of the XC distributions. Final patient setup uncertainty was obtained from the standard deviations of XV (0.57, 0.77, 0.67 mm, 0
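The statistical extraction described above follows a common radiotherapy convention: the group systematic error is the spread of per-patient mean shifts, and the group random error is the root-mean-square of per-patient standard deviations. A sketch on simulated shifts (the offsets and noise levels are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated lateral setup shifts (mm) from X-ray correction images:
# each patient has a fixed offset (e.g., mask fit) plus day-to-day variation.
n_patients, n_fractions = 35, 5
offsets = rng.normal(0.0, 1.5, size=n_patients)           # per-patient systematic
shifts = offsets[:, None] + rng.normal(0.0, 0.9, size=(n_patients, n_fractions))

patient_means = shifts.mean(axis=1)
# Group systematic error: spread of the per-patient mean shifts.
Sigma = patient_means.std(ddof=1)
# Group random error: RMS of the per-patient standard deviations.
sigma = np.sqrt((shifts.std(axis=1, ddof=1) ** 2).mean())
print(round(Sigma, 2), round(sigma, 2))
```

Separating the two matters clinically because systematic errors shift the whole dose distribution while random errors blur it, so they enter treatment-margin recipes with different weights.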
2014-01-01
Background: Seventeen of 172 included studies in a recent systematic review of blood tests for hepatic fibrosis or cirrhosis reported diagnostic accuracy results discordant from 2 × 2 tables, and 60 studies reported inadequate data to construct 2 × 2 tables. This study explores the yield of contacting authors of diagnostic accuracy studies and the impact on the systematic review findings. Methods: Sixty-six corresponding authors were sent letters requesting additional information or clarification of data from 77 studies. Data received from the authors were synthesized with data included in the previous review, and diagnostic accuracy sensitivities, specificities, and positive and negative likelihood ratios were recalculated. Results: Of the 66 authors, 68% were successfully contacted and 42% provided additional data for 29 out of 77 studies (38%). All authors who provided data did so by the third emailed request (ten authors provided data after one request). Authors of more recent studies were more likely to be located and provide data compared to authors of older studies. The effects of requests for additional data on the conclusions regarding the utility of blood tests to identify patients with clinically significant fibrosis or cirrhosis were generally small for ten out of 12 tests. Additional data resulted in reclassification (using median likelihood ratio estimates) from less useful to moderately useful or vice versa for the remaining two blood tests and enabled the calculation of an estimate for a third blood test for which previously the data had been insufficient to do so. We did not identify a clear pattern for the directional impact of additional data on estimates of diagnostic accuracy. Conclusions: We successfully contacted and received results from 42% of authors who provided data for 38% of included studies. Contacting authors of studies evaluating the diagnostic accuracy of serum biomarkers for hepatic fibrosis and cirrhosis in hepatitis C patients
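The diagnostic accuracy statistics recalculated in the review all follow from the four counts of a 2 × 2 table, which is why discordant or missing tables were worth chasing. A minimal sketch with hypothetical counts (not from any included study):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)          # true positives among diseased
    spec = tn / (tn + fp)          # true negatives among non-diseased
    lr_pos = sens / (1.0 - spec)   # positive likelihood ratio
    lr_neg = (1.0 - sens) / spec   # negative likelihood ratio
    return sens, spec, lr_pos, lr_neg

# Hypothetical counts for a serum fibrosis marker vs. a biopsy reference:
sens, spec, lr_pos, lr_neg = diagnostic_accuracy(tp=45, fp=10, fn=15, tn=80)
print(sens, round(spec, 3), round(lr_pos, 2), round(lr_neg, 2))
```

Because sensitivity and specificity reported in a paper must be reproducible from its 2 × 2 table, a quick recomputation like this is how the review flagged the seventeen discordant studies.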
NASA Astrophysics Data System (ADS)
Miller, B.; O'Shaughnessy, R.; Littenberg, T. B.; Farr, B.
2015-08-01
Reliable low-latency gravitational wave parameter estimation is essential to target limited electromagnetic follow-up facilities toward astrophysically interesting and electromagnetically relevant sources of gravitational waves. In this study, we examine the trade-off between speed and accuracy. Specifically, we estimate the astrophysical relevance of systematic errors in the posterior parameter distributions derived using a fast-but-approximate waveform model, SpinTaylorF2 (stf2), in parameter estimation with lalinference_mcmc. Though efficient, the stf2 approximation to compact binary inspiral employs approximate kinematics (e.g., a single spin) and an approximate waveform (e.g., frequency domain versus time domain). More broadly, using a large astrophysically motivated population of generic compact binary merger signals, we report on the effectualness and limitations of this single-spin approximation as a method to infer parameters of generic compact binary sources. For most low-mass compact binary sources, we find that the stf2 approximation estimates compact binary parameters with biases comparable to systematic uncertainties in the waveform. We illustrate by example the effect these systematic errors have on posterior probabilities most relevant to low-latency electromagnetic follow-up: whether the secondary has a mass consistent with a neutron star (NS); whether the masses, spins, and orbit are consistent with that neutron star's tidal disruption; and whether the binary's angular momentum axis is oriented along the line of sight.
NASA Astrophysics Data System (ADS)
Iorio, Lorenzo
2009-12-01
We deal with the attempts to measure the Lense-Thirring effect with the Satellite Laser Ranging (SLR) technique applied to the existing LAGEOS and LAGEOS II terrestrial satellites and to the recently approved LARES spacecraft. According to general relativity, a central spinning body of mass M and angular momentum S like the Earth generates a gravitomagnetic field which induces small secular precessions of the orbit of a test particle geodesically moving around it. Extracting this signature from the data is a demanding task because of many classical orbital perturbations having the same pattern as the gravitomagnetic one, like those due to the centrifugal oblateness of the Earth, which represents a major source of systematic bias. The first issue addressed here is: are the so far published evaluations of the systematic uncertainty induced by the bad knowledge of the even zonal harmonic coefficients J_ℓ of the multipolar expansion of the Earth's geopotential reliable and realistic? Our answer is negative. Indeed, if the differences ΔJ_ℓ among the even zonals estimated in different Earth gravity field global solutions from the dedicated GRACE mission are assumed for the uncertainties δJ_ℓ, instead of using their covariance sigmas σ_{J_ℓ}, it turns out that the systematic uncertainty δμ in the Lense-Thirring test with the nodes Ω of LAGEOS and LAGEOS II may be up to 3 to 4 times larger than in the evaluations so far published (5-10%), based on the use of the sigmas of one model at a time separately. The second issue consists of the possibility of using a different approach in extracting the relativistic signature of interest from the LAGEOS-type data. The third issue is the possibility of reaching a realistic total accuracy of 1% with LAGEOS, LAGEOS II and LARES, which should be launched in November 2009 with a VEGA rocket. While LAGEOS and LAGEOS II fly at altitudes of about 6000 km, LARES will be likely placed at an altitude of 1450 km. Thus
Parolini, Filippo; Indolfi, Giuseppe; Magne, Miguel Garcia; Salemme, Marianna; Cheli, Maurizio; Boroni, Giovanni; Alberti, Daniele
2016-01-01
AIM: To investigate the diagnostic and therapeutic assessment in children with adenomyomatosis of the gallbladder (AMG). METHODS: AMG is a degenerative disease characterized by a proliferation of the mucosal epithelium which deeply invaginates and extends into the thickened muscular layer of the gallbladder, causing intramural diverticula. Although AMG is found in up to 5% of cholecystectomy specimens in adult populations, this condition in childhood is extremely uncommon. The authors provide a detailed systematic review of the pediatric literature according to PRISMA guidelines, focusing on diagnostic and therapeutic assessment. An additional case of AMG is also presented. RESULTS: Five studies were finally included, encompassing 5 children with AMG. Analysis was extended to our additional 11-year-old patient, who presented diffuse AMG and pancreatic acinar metaplasia of the gallbladder mucosa and was successfully managed with laparoscopic cholecystectomy. Mean age at presentation was 7.2 years. Unspecific abdominal pain was the commonest symptom. Abdominal ultrasound was performed on all patients, with a diagnostic accuracy of 100%. Five patients underwent cholecystectomy, and at follow-up were asymptomatic. In the remaining patient, completely asymptomatic at diagnosis, a conservative approach with monthly monitoring via ultrasonography was undertaken. CONCLUSION: Considering the remote but possible degeneration leading to cancer and the feasibility of laparoscopic cholecystectomy even in small children, evidence suggests that elective laparoscopic cholecystectomy represents the treatment of choice. Pre-operative evaluation of the extrahepatic biliary tree anatomy with cholangio-MRI is strongly recommended. PMID:27170933
van Ochten, John; Luijsterburg, Pim A J; van Middelkoop, Marienke; Koes, Bart W; Bierma-Zeinstra, Sita M A
2010-01-01
Objective To summarise the effectiveness of adding supervised exercises to conventional treatment compared with conventional treatment alone in patients with acute lateral ankle sprains. Design Systematic review. Data sources Medline, Embase, Cochrane Central Register of Controlled Trials, Cinahl, and reference screening. Study selection Included studies were randomised controlled trials, quasi-randomised controlled trials, or clinical trials. Patients were adolescents or adults with an acute lateral ankle sprain. The treatment options were conventional treatment alone or conventional treatment combined with supervised exercises. Two reviewers independently assessed the risk of bias, and one reviewer extracted data. Because of clinical heterogeneity we analysed the data using a best evidence synthesis. Follow-up was classified as short term (up to two weeks), intermediate (two weeks to three months), and long term (more than three months). Results 11 studies were included. There was limited to moderate evidence to suggest that the addition of supervised exercises to conventional treatment leads to faster and better recovery and a faster return to sport at short term follow-up than conventional treatment alone. In specific populations (athletes, soldiers, and patients with severe injuries) this evidence was restricted to a faster return to work and sport only. There was no strong evidence of effectiveness for any of the outcome measures. Most of the included studies had a high risk of bias, with few having adequate statistical power to detect clinically relevant differences. Conclusion Additional supervised exercises compared with conventional treatment alone have some benefit for recovery and return to sport in patients with ankle sprain, though the evidence is limited or moderate and many studies are subject to bias. PMID:20978065
Systematics and limit calculations
Fisher, Wade; /Fermilab
2006-12-01
This note discusses the estimation of systematic uncertainties and their incorporation into upper limit calculations. Two different approaches to reducing systematics and their degrading impact on upper limits are introduced. An improved χ² function is defined which is useful in comparing Poisson-distributed data with models marginalized by systematic uncertainties. Also, a technique using profile likelihoods is introduced which provides a means of constraining the degrading impact of systematic uncertainties on limit calculations.
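The profile-likelihood treatment of a nuisance parameter described in the note can be sketched for a single-bin Poisson counting experiment: at each fixed signal strength μ, the background b is fitted away subject to its Gaussian constraint. All numbers here (signal expectation, observed counts, background estimate) are illustrative assumptions, not values from the note.

```python
import math
from scipy.optimize import minimize_scalar

S_EXP = 10.0  # assumed signal expectation for unit signal strength (illustrative)

def nll(mu, b, n_obs=55, b_nominal=50.0, sigma_b=5.0):
    """-log L: Poisson(n | mu*s + b), with a Gaussian constraint on the background b."""
    lam = mu * S_EXP + b
    return (lam - n_obs * math.log(lam)) + 0.5 * ((b - b_nominal) / sigma_b) ** 2

def profile_nll(mu):
    """Profile out the nuisance parameter: minimize the NLL over b at fixed mu."""
    res = minimize_scalar(lambda b: nll(mu, b), bounds=(1e-6, 200.0), method="bounded")
    return res.fun

# Scanning mu traces the profiled curve; a limit follows from where it rises
# by the appropriate amount above its minimum.
curve = {m: profile_nll(m) for m in (0.0, 0.5, 2.0)}
```

With these toy numbers the profiled curve dips near μ ≈ 0.5 (where μ·s + b matches the observed count with no constraint penalty), illustrating how the constrained background absorbs part of the systematic effect.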
Ngamwong, Yuwadee; Tangamornsuksan, Wimonchat; Lohitnavy, Ornrat; Chaiyakunapruk, Nathorn; Scholfield, C. Norman; Reisfeld, Brad; Lohitnavy, Manupat
2015-01-01
Smoking and asbestos exposure are important risks for lung cancer. Several epidemiological studies have linked asbestos exposure and smoking to lung cancer. To reconcile and unify these results, we conducted a systematic review and meta-analysis to provide a quantitative estimate of the increased risk of lung cancer associated with asbestos exposure and cigarette smoking and to classify their interaction. Five electronic databases were searched from inception to May, 2015 for observational studies on lung cancer. All case-control (N = 10) and cohort (N = 7) studies were included in the analysis. We calculated pooled odds ratios (ORs), relative risks (RRs) and 95% confidence intervals (CIs) using a random-effects model for the association of asbestos exposure and smoking with lung cancer. Lung cancer patients who were not exposed to asbestos and non-smoking (A-S-) were compared with: (i) asbestos-exposed and non-smoking (A+S-), (ii) non-exposure to asbestos and smoking (A-S+), and (iii) asbestos-exposed and smoking (A+S+). Our meta-analysis showed a significant difference in risk of developing lung cancer among asbestos exposed and/or smoking workers compared to controls (A-S-); odds ratios for the disease (95% CI) were (i) 1.70 (A+S-, 1.31–2.21), (ii) 5.65 (A-S+, 3.38–9.42), (iii) 8.70 (A+S+, 5.8–13.10). The additive interaction index of synergy was 1.44 (95% CI = 1.26–1.77) and the multiplicative index = 0.91 (95% CI = 0.63–1.30). Corresponding values for cohort studies were 1.11 (95% CI = 1.00–1.28) and 0.51 (95% CI = 0.31–0.85). Our results point to an additive synergism for lung cancer with co-exposure of asbestos and cigarette smoking. Assessments of industrial health risks should take smoking and other airborne hazards into account when setting occupational asbestos exposure limits. PMID:26274395
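The additive and multiplicative interaction indices quoted above follow directly from the three pooled odds ratios; the sketch below reproduces that arithmetic using Rothman's synergy index for the additive scale.

```python
def synergy_index(or_joint, or_a, or_b):
    """Rothman's additive synergy index S: joint excess risk over the
    sum of the single-exposure excess risks."""
    return (or_joint - 1.0) / ((or_a - 1.0) + (or_b - 1.0))

def multiplicative_index(or_joint, or_a, or_b):
    """Multiplicative interaction: joint OR over the product of single ORs."""
    return or_joint / (or_a * or_b)

# Pooled odds ratios from the meta-analysis: A+S-, A-S+, A+S+
or_asbestos, or_smoking, or_both = 1.70, 5.65, 8.70

s_add = synergy_index(or_both, or_asbestos, or_smoking)          # ≈ 1.44
s_mul = multiplicative_index(or_both, or_asbestos, or_smoking)   # ≈ 0.91
```

S > 1 on the additive scale with a multiplicative index below 1 is exactly the "additive synergism" pattern the abstract reports.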
Moore, Nicholas; Arnaud, Mickael; Robinson, Philip; Raschi, Emanuel; De Ponti, Fabrizio; Bégaud, Bernard; Pariente, Antoine
2016-01-01
Objective To quantify the risk of hypoglycaemia associated with the concomitant use of dipeptidyl peptidase-4 (DPP-4) inhibitors and sulphonylureas compared with placebo and sulphonylureas. Design Systematic review and meta-analysis. Data sources Medline, ISI Web of Science, SCOPUS, Cochrane Central Register of Controlled Trials, and clinicaltrial.gov were searched without any language restriction. Study selection Placebo controlled randomised trials comprising at least 50 participants with type 2 diabetes treated with DPP-4 inhibitors and sulphonylureas. Review methods Risk of bias in each trial was assessed using the Cochrane Collaboration tool. The risk ratio of hypoglycaemia with 95% confidence intervals was computed for each study and then pooled using fixed effect models (Mantel Haenszel method) or random effect models, when appropriate. Subgroup analyses were also performed (eg, dose of DPP-4 inhibitors). The number needed to harm (NNH) was estimated according to treatment duration. Results 10 studies were included, representing a total of 6546 participants (4020 received DPP-4 inhibitors plus sulphonylureas, 2526 placebo plus sulphonylureas). The risk ratio of hypoglycaemia was 1.52 (95% confidence interval 1.29 to 1.80). The NNH was 17 (95% confidence interval 11 to 30) for a treatment duration of six months or less, 15 (9 to 26) for 6.1 to 12 months, and 8 (5 to 15) for more than one year. In subgroup analysis, no difference was found between full and low doses of DPP-4 inhibitors: the risk ratio related to full dose DPP-4 inhibitors was 1.66 (1.34 to 2.06), whereas the increased risk ratio related to low dose DPP-4 inhibitors did not reach statistical significance (1.33, 0.92 to 1.94). Conclusions Addition of DPP-4 inhibitors to sulphonylurea to treat people with type 2 diabetes is associated with a 50% increased risk of hypoglycaemia and with one excess case of hypoglycaemia for every 17 patients in the first six months of treatment. This
Network planning under uncertainties
NASA Astrophysics Data System (ADS)
Ho, Kwok Shing; Cheung, Kwok Wai
2008-11-01
One of the main focuses for network planning is on the optimization of network resources required to build a network under a certain traffic demand projection. Traditionally, the inputs to this type of network planning problem are treated as deterministic. In reality, the varying traffic requirements and fluctuations in network resources can cause uncertainties in the decision models. The failure to include these uncertainties in the network design process can severely affect the feasibility and economics of the network. Therefore, it is essential to find a solution that is insensitive to the uncertain conditions during the network planning process. As early as the 1960s, a network planning problem with traffic requirements varying over time had been studied. Up to now, this kind of network planning problem is still being actively researched, especially for VPN network design. Another kind of network planning problem under uncertainty that has been studied actively in the past decade addresses the fluctuations in network resources. One such hotly pursued research topic is survivable network planning. It considers the design of a network under uncertainties brought by fluctuations in topology, to meet the requirement that the network remains intact up to a certain number of faults occurring anywhere in the network. Recently, the authors proposed a new planning methodology called Generalized Survivable Network that tackles the network design problem under both varying traffic requirements and fluctuations of topology. Although all the above network planning problems handle various kinds of uncertainties, it is hard to find a generic framework under more general uncertainty conditions that allows a more systematic way to solve the problems. With a unified framework, the seemingly diverse models and algorithms can be intimately related, and possibly more insights and improvements can be brought out for solving the problem. This motivates us to seek a
Measurement Uncertainty and Probability
NASA Astrophysics Data System (ADS)
Willink, Robin
2013-02-01
Part I. Principles: 1. Introduction; 2. Foundational ideas in measurement; 3. Components of error or uncertainty; 4. Foundational ideas in probability and statistics; 5. The randomization of systematic errors; 6. Beyond the standard confidence interval; Part II. Evaluation of Uncertainty: 7. Final preparation; 8. Evaluation using the linear approximation; 9. Evaluation without the linear approximations; 10. Uncertainty information fit for purpose; Part III. Related Topics: 11. Measurement of vectors and functions; 12. Why take part in a measurement comparison?; 13. Other philosophies; 14. An assessment of objective Bayesian methods; 15. A guide to the expression of uncertainty in measurement; 16. Measurement near a limit - an insoluble problem?; References; Index.
Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
In order to provide a complete description of a material's thermoelectric power factor, in addition to the measured nominal value, an uncertainty interval is required. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, thermocouple cold-finger effect, and measurement parameters, in addition to including uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.
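A common way to combine systematic (bias) and statistical (precision) contributions into the single interval the abstract calls for is root-sum-square addition. The bias limits and sample values below are illustrative placeholders, not ZEM-3 specifications.

```python
import math
import statistics

def combined_uncertainty(bias_limits, repeat_measurements):
    """Root-sum-square combination of systematic bias limits with the
    standard error of the mean from repeated measurements."""
    systematic = math.sqrt(sum(b * b for b in bias_limits))
    precision = statistics.stdev(repeat_measurements) / math.sqrt(len(repeat_measurements))
    return math.sqrt(systematic ** 2 + precision ** 2)

# e.g. probe-placement and cold-finger bias limits (percent of nominal),
# plus five repeat readings of a nominal-100 quantity
u = combined_uncertainty([1.5, 2.0], [100.1, 99.8, 100.3, 99.9, 100.0])
```

With a tight repeat scatter the systematic terms dominate, which mirrors the abstract's emphasis on enumerating bias sources rather than relying on repetition statistics alone.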
Mayo-Wilson, Evan; Imdad, Aamer; Junior, Jean; Dean, Sohni; Bhutta, Zulfiqar A
2014-01-01
Objective Zinc deficiency is widespread, and preventive supplementation may have benefits in young children. Effects for children over 5 years of age, and effects when coadministered with other micronutrients are uncertain. These are obstacles to scale-up. This review seeks to determine if preventive supplementation reduces mortality and morbidity for children aged 6 months to 12 years. Design Systematic review conducted with the Cochrane Developmental, Psychosocial and Learning Problems Group. Two reviewers independently assessed studies. Meta-analyses were performed for mortality, illness and side effects. Data sources We searched multiple databases, including CENTRAL and MEDLINE in January 2013. Authors were contacted for missing information. Eligibility criteria for selecting studies Randomised trials of preventive zinc supplementation. Hospitalised children and children with chronic diseases were excluded. Results 80 randomised trials with 205 401 participants were included. There was a small but non-significant effect on all-cause mortality (risk ratio (RR) 0.95 (95% CI 0.86 to 1.05)). Supplementation may reduce incidence of all-cause diarrhoea (RR 0.87 (0.85 to 0.89)), but there was evidence of reporting bias. There was no evidence of an effect of incidence or prevalence of respiratory infections or malaria. There was moderate quality evidence of a very small effect on linear growth (standardised mean difference 0.09 (0.06 to 0.13)) and an increase in vomiting (RR 1.29 (1.14 to 1.46)). There was no evidence of an effect on iron status. Comparing zinc with and without iron cosupplementation and direct comparisons of zinc plus iron versus zinc administered alone favoured cointervention for some outcomes and zinc alone for other outcomes. Effects may be larger for children over 1 year of age, but most differences were not significant. Conclusions Benefits of preventive zinc supplementation may outweigh any potentially adverse effects in areas where
Kim, Mee J.; Findlay, Gregory M.; Martin, Beth; Zhao, Jingjing; Bell, Robert J. A.; Smith, Robin P.; Ku, Angel A.; Shendure, Jay; Ahituv, Nadav
2014-01-01
In addition to their protein coding function, exons can also serve as transcriptional enhancers. Mutations in these exonic enhancers (eExons) could alter both protein function and transcription. However, the functional consequence of eExon mutations is not well known. Here, using massively parallel reporter assays, we dissect the enhancer activity of three liver eExons (SORL1 exon 17, TRAF3IP2 exon 2, PPARG exon 6) at single nucleotide resolution in the mouse liver. We find that both synonymous and non-synonymous mutations have similar effects on enhancer activity and many of the deleterious mutation clusters overlap known liver-associated transcription factor binding sites. Carrying out a similar massively parallel reporter assay with these three eExons in HeLa cells revealed differences in their mutation profiles compared to the liver, suggesting that enhancers could have distinct operating profiles in different tissues. Our results demonstrate that eExon mutations could lead to multiple phenotypes by disrupting both the protein sequence and enhancer activity and that enhancers can have distinct mutation profiles in different cell types. PMID:25340400
Some Aspects of uncertainty in computational fluid dynamics results
NASA Technical Reports Server (NTRS)
Mehta, U. B.
1991-01-01
Uncertainties are inherent in computational fluid dynamics (CFD). These uncertainties need to be systematically addressed and managed. Sources of these uncertainties are discussed. Some recommendations are made for the quantification of CFD uncertainties. A practical method of uncertainty analysis is based on sensitivity analysis. When CFD is used to design fluid dynamic systems, sensitivity-uncertainty analysis is essential.
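The sensitivity-based uncertainty analysis recommended here can be sketched with first-order propagation: perturb each input by a finite difference, record the output sensitivity, and combine the contributions in quadrature (assuming independent inputs). The "model" below is a toy drag formula standing in for a CFD output, not anything from the paper.

```python
import math

def sensitivity_uncertainty(model, inputs, input_uncertainties, h=1e-6):
    """First-order propagation: u_y^2 = sum_i (dF/dx_i * u_i)^2,
    with sensitivities estimated by forward finite differences."""
    f0 = model(inputs)
    total = 0.0
    for i, (x, u) in enumerate(zip(inputs, input_uncertainties)):
        perturbed = list(inputs)
        perturbed[i] = x + h
        dfdx = (model(perturbed) - f0) / h
        total += (dfdx * u) ** 2
    return math.sqrt(total)

# Illustrative "CFD output": drag = 0.5 * rho * v^2 * Cd * A
drag = lambda p: 0.5 * p[0] * p[1] ** 2 * p[2] * p[3]
u_drag = sensitivity_uncertainty(drag,
                                 [1.2, 30.0, 0.3, 2.0],    # rho, v, Cd, A
                                 [0.01, 0.5, 0.02, 0.0])   # their uncertainties
```

The per-input terms also rank which inputs dominate the output uncertainty, which is the practical payoff of a sensitivity-based approach.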
Deriving uncertainty factors for threshold chemical contaminants in drinking water.
Ritter, Leonard; Totman, Céline; Krishnan, Kannan; Carrier, Richard; Vézina, Anne; Morisset, Véronique
2007-10-01
Uncertainty factors are used in the development of drinking-water guidelines to account for uncertainties in the database, including extrapolations of toxicity from animal studies and variability within humans, which result in some uncertainty about risk. The application of uncertainty factors is entrenched in toxicological risk assessment worldwide, but is not applied consistently. This report, prepared in collaboration with Health Canada, provides an assessment of the derivation of the uncertainty factor assumptions used in developing drinking-water quality guidelines for chemical contaminants. Assumptions used by Health Canada in the development of guidelines were compared to several other major regulatory jurisdictions. This assessment has revealed that uncertainty factor assumptions have been substantially influenced by historical practice. While the application of specific uncertainty factors appears to be well entrenched in regulatory practice, a well-documented and disciplined basis for the selection of these factors was not apparent in any of the literature supporting the default assumptions of Canada, the United States, Australia, or the World Health Organization. While there is a basic scheme used in most cases in developing drinking-water quality guidelines for threshold contaminants by the jurisdictions included in this report, additional factors are sometimes included to account for other areas of uncertainty. These factors may include extrapolating subchronic data to anticipated chronic exposure, or use of a LOAEL instead of a NOAEL. The default value attributed to each uncertainty factor is generally a factor of 3 or 10; however, again, no comprehensive guidance to develop and apply these additional uncertainty factors was evident from the literature reviewed. A decision tree has been developed to provide guidance for selection of appropriate uncertainty factors, to account for the range of uncertainty encountered in the risk assessment process.
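The multiplicative scheme described above, where each area of uncertainty contributes a default factor of 3 or 10 to a composite divisor applied to the point of departure, can be sketched as follows. The NOAEL and the particular factor choices are illustrative, not values from the report.

```python
def tolerable_daily_intake(noael_mg_per_kg, uncertainty_factors):
    """TDI = NOAEL / (product of uncertainty factors), e.g. interspecies,
    intraspecies, and study-quality factors applied multiplicatively."""
    divisor = 1
    for f in uncertainty_factors:
        divisor *= f
    return noael_mg_per_kg / divisor

# 10 (animal-to-human) x 10 (human variability) x 3 (subchronic-to-chronic)
tdi = tolerable_daily_intake(30.0, [10, 10, 3])   # 30 / 300 = 0.1 mg/kg bw/day
```

The report's point is visible even in this sketch: each additional default factor changes the result by a multiple of 3 or 10, so undocumented factor choices dominate the final guideline value.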
NASA Astrophysics Data System (ADS)
Mikhailov, S. V.; Pimikov, A. V.; Stefanis, N. G.
2016-06-01
We consider the calculation of the pion-photon transition form factor F_{γ*γπ⁰}(Q²) within light-cone sum rules, focusing attention on the low-to-mid momentum region. The central aim is to estimate the theoretical uncertainties which originate from a wide variety of sources related to (i) the relevance of next-to-next-to-leading order radiative corrections, (ii) the influence of the twist-four and the twist-six terms, (iii) the sensitivity of the results to auxiliary parameters, like the Borel scale M², (iv) the role of the phenomenological description of resonances, and (v) the significance of a small but finite virtuality of the quasireal photon. Predictions for F_{γ*γπ⁰}(Q²) are presented which include all these uncertainties and are found to comply, within the margin of experimental error, with the existing data in the Q² range between 1 and 5 GeV², thus justifying the reliability of the applied calculational scheme. This provides a solid basis for confronting theoretical predictions with forthcoming data bearing small statistical errors.
Rashidi, Armin; DiPersio, John F; Sandmaier, Brenda M; Colditz, Graham A; Weisdorf, Daniel J
2016-06-01
Despite extensive research in the last few decades, progress in treatment of acute graft-versus-host disease (aGVHD), a common complication of allogeneic hematopoietic cell transplantation (HCT), has been limited and steroids continue to be the standard frontline treatment. Randomized clinical trials (RCTs) have failed to find a beneficial effect of escalating immunosuppression using additional agents. Considering the small number of RCTs, limited sample sizes, and frequent early termination because of anticipated futility, we conducted a systematic review and an aggregate data meta-analysis to explore whether a true efficacy signal has been missed because of the limitations of individual RCTs. Seven reports met our inclusion criteria. The control arm in all studies was 2 mg/kg/day prednisone (or equivalent). The additional agent(s) used in the experimental arm(s) were higher-dose steroids, antithymocyte globulin, infliximab, anti-interleukin-2 receptor antibody (daclizumab and BT563), CD5-specific immunotoxin, and mycophenolate mofetil. Random effects meta-analysis revealed no efficacy signal in pooled response rates at various times points. Overall survival at 100 days was significantly worse in the experimental arm (relative risk [RR], .83; 95% confidence interval [CI], .74 to .94; P = .004, data from 3 studies) and showed a similar trend (albeit not statistically significantly) at 1 year as well (RR, .86; 95% CI, .68 to 1.09; P = .21, data from 5 studies). In conclusion, these results argue against the value of augmented generic immunosuppression beyond steroids for frontline treatment of aGVHD and emphasize the importance of developing alternative strategies. Novel forms of immunomodulation and targeted therapies against non-immune-related pathways may enhance the efficacy of steroids in this setting, and early predictive and prognostic biomarkers can help identify the subgroup of patients who would likely need treatments other than (or in addition to
Murtagh, Fliss EM
2014-01-01
Background: Primary care has the potential to play significant roles in providing effective palliative care for non-cancer patients. Aim: To identify, critically appraise and synthesise the existing evidence on views on the provision of palliative care for non-cancer patients by primary care providers and reveal any gaps in the evidence. Design: Standard systematic review and narrative synthesis. Data sources: MEDLINE, Embase, CINAHL, PsycINFO, Applied Social Science Abstract and the Cochrane library were searched in 2012. Reference searching, hand searching, expert consultations and grey literature searches complemented these. Papers with the views of patients/carers or professionals on primary palliative care provision to non-cancer patients in the community were included. The amended Hawker’s criteria were used for quality assessment of included studies. Results: A total of 30 studies were included and represent the views of 719 patients, 605 carers and over 400 professionals. In all, 27 studies are from the United Kingdom. Patients and carers expect primary care physicians to provide compassionate care, have appropriate knowledge and play central roles in providing care. The roles of professionals are unclear to patients, carers and professionals themselves. Uncertainty of illness trajectory and lack of collaboration between health-care professionals were identified as barriers to effective care. Conclusions: Effective interprofessional work to deal with uncertainty and maintain coordinated care is needed for better palliative care provision to non-cancer patients in the community. Research into and development of a best model for effective interdisciplinary work are needed. PMID:24821710
Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease
NASA Astrophysics Data System (ADS)
Marsden, Alison
2009-11-01
Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid-structure interaction. However, inputs to these simulations, including medical image data, catheter-derived pressures, and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficiently account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries, and combine these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.
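Stochastic collocation in its simplest non-adaptive form evaluates the model at quadrature nodes of the input distribution and recovers output statistics from the weighted samples. The sketch below uses Gauss-Hermite nodes for a single Gaussian input and a toy model; it stands in for, and is much simpler than, the adaptive scheme and cardiovascular solver of the abstract.

```python
import numpy as np

def collocation_stats(model, mean, std, n_nodes=8):
    """Mean and variance of model(X) for X ~ N(mean, std), computed by
    evaluating the model at probabilists' Gauss-Hermite collocation nodes."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    weights = weights / weights.sum()          # normalize to a probability measure
    samples = model(mean + std * nodes)        # one model run per node
    mu = np.sum(weights * samples)
    var = np.sum(weights * (samples - mu) ** 2)
    return mu, var

# For f(x) = x^2 with X ~ N(0, 1) the exact answers are mean 1, variance 2,
# and an 8-node rule reproduces them since the integrands are low-degree polynomials.
mu, var = collocation_stats(lambda x: x ** 2, 0.0, 1.0)
```

The appeal over Monte Carlo is that smooth models need only a handful of solver runs per uncertain input; the adaptive variant in the abstract concentrates nodes where the output varies most.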
Assessing uncertainty in physical constants
NASA Astrophysics Data System (ADS)
Henrion, Max; Fischhoff, Baruch
1986-09-01
Assessing the uncertainty due to possible systematic errors in a physical measurement unavoidably involves an element of subjective judgment. Examination of historical measurements and recommended values for the fundamental physical constants shows that the reported uncertainties have a consistent bias towards underestimating the actual errors. These findings are comparable to findings of persistent overconfidence in psychological research on the assessment of subjective probability distributions. Awareness of these biases could help in interpreting the precision of measurements, as well as provide a basis for improving the assessment of uncertainty in measurements.
Caputo, Carmela; Prior, David; Inder, Warrick J
2015-11-01
Present recommendations by the US Food and Drug Administration advise that patients with prolactinoma treated with cabergoline should have an annual echocardiogram to screen for valvular heart disease. Here, we present new clinical data and a systematic review of the scientific literature showing that the prevalence of cabergoline-associated valvulopathy is very low. We prospectively assessed 40 patients with prolactinoma taking cabergoline. Cardiovascular examination before echocardiography detected an audible systolic murmur in 10% of cases (all were functional murmurs), and no clinically significant valvular lesion was shown on echocardiogram in the 90% of patients without a murmur. Our systematic review identified 21 studies that assessed the presence of valvular abnormalities in patients with prolactinoma treated with cabergoline. Including our new clinical data, only two (0·11%) of 1811 patients were confirmed to have cabergoline-associated valvulopathy (three [0·17%] if possible cases were included). The probability of clinically significant valvular heart disease is low in the absence of a murmur. On the basis of these findings, we challenge the present recommendations to do routine echocardiography in all patients taking cabergoline for prolactinoma every 12 months. We propose that such patients should be screened by a clinical cardiovascular examination and that echocardiogram should be reserved for those patients with an audible murmur, those treated for more than 5 years at a dose of more than 3 mg per week, or those who maintain cabergoline treatment after the age of 50 years. PMID:25466526
ERIC Educational Resources Information Center
Duerdoth, Ian
2009-01-01
The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…
Uncertainty quantification and error analysis
Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip
2010-01-01
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Exploring Uncertainty with Projectile Launchers
ERIC Educational Resources Information Center
Orzel, Chad; Reich, Gary; Marr, Jonathan
2012-01-01
The proper choice of a measurement technique that minimizes systematic and random uncertainty is an essential part of experimental physics. These issues are difficult to teach in the introductory laboratory, though. Because most experiments involve only a single measurement technique, students are often unable to make a clear distinction between…
Uncertainties of modelling emissions from road transport
NASA Astrophysics Data System (ADS)
Kühlwein, J.; Friedrich, R.
To determine emission data from road transport, complex methods and models are applied. Emission data are characterized by a huge variety of source types as well as a high resolution of the spatial allocation and temporal variation. So far, the uncertainties of such calculated emission data have been largely unknown. As emission data are used to aid policy decisions, the accuracy of the data should be known. So, in the following, the determination of uncertainties of emission data is described. Using the IER emission model for generating regional or national emission data, the uncertainties of model input data and the total errors on different aggregation levels are investigated, as an example, for the pollutants NOx and NMHC in 1994 for the area of West Germany. The results of the statistical error analysis carried out for annual emissions on road sections show variation coefficients (68.3% confidence interval) of 15-25%. In addition, systematic errors of common input data sets have been identified, especially affecting emissions on motorway sections. The statistical errors of urban warm-engine emissions at the town level amount to 35%; they are therefore considerably higher than the errors outside towns. Error ranges of additional cold-start emissions determined so far have been found to be of the same order. Additional uncertainties of temporally highly resolved (hourly) emission data depend strongly on the time of day, the weekday, and the road category. Variation coefficients have been determined in the range between 10 and 70% for light-duty vehicles and between 15 and 100% for heavy-duty vehicles. All total errors determined here have to be regarded as lower limits of the real total errors.
Uncertainty quantification for proton-proton fusion in chiral effective field theory
NASA Astrophysics Data System (ADS)
Acharya, B.; Carlsson, B. D.; Ekström, A.; Forssén, C.; Platter, L.
2016-09-01
We compute the S-factor of the proton-proton (pp) fusion reaction using chiral effective field theory (χEFT) up to next-to-next-to-leading order (NNLO) and perform a rigorous uncertainty analysis of the results. We quantify the uncertainties due to (i) the computational method used to compute the pp cross section in momentum space, (ii) the statistical uncertainties in the low-energy coupling constants of χEFT, (iii) the systematic uncertainty due to the χEFT cutoff, and (iv) systematic variations in the database used to calibrate the nucleon-nucleon interaction. We also examine the robustness of the polynomial extrapolation procedure, which is commonly used to extract the threshold S-factor and its energy-derivatives. By performing a statistical analysis of the polynomial fit of the energy-dependent S-factor at several different energy intervals, we eliminate a systematic uncertainty that can arise from the choice of the fit interval in our calculations. In addition, we explore the statistical correlations between the S-factor and few-nucleon observables such as the binding energies and point-proton radii of 2,3H and 3He as well as the D-state probability and quadrupole moment of 2H, and the β-decay of 3H. We find that, with the state-of-the-art optimization of the nuclear Hamiltonian, the statistical uncertainty in the threshold S-factor cannot be reduced beyond 0.7%.
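The threshold-extrapolation robustness test described above can be sketched as a polynomial fit of S(E) over different energy windows, comparing the extrapolated S(0) across windows. The data and coefficients below are synthetic stand-ins, not the χEFT results.

```python
import numpy as np

def threshold_s_factor(energies, s_values, degree=2):
    """Fit S(E) with a polynomial and return the extrapolated threshold
    value S(0) and its energy derivative S'(0)."""
    coeffs = np.polyfit(energies, s_values, degree)
    p = np.poly1d(coeffs)
    return p(0.0), p.deriv()(0.0)

# Synthetic S-factor samples; fit on the full window and a low-energy subwindow
# to check that the extrapolated S(0) is stable against the choice of interval.
E = np.linspace(0.01, 0.5, 25)
S = 4.0 + 11.0 * E + 3.0 * E ** 2            # toy energy dependence
s0_full, _ = threshold_s_factor(E, S)
s0_low, _ = threshold_s_factor(E[:10], S[:10])
```

When the two windows disagree, the spread in S(0) is itself a fit-interval systematic, which is the uncertainty the authors eliminate by analysing several intervals statistically.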
Known and unknown unknowns: uncertainty estimation in satellite remote sensing
NASA Astrophysics Data System (ADS)
Povey, A. C.; Grainger, R. G.
2015-11-01
This paper discusses a best-practice representation of uncertainty in satellite remote sensing data. An estimate of uncertainty is necessary to make appropriate use of the information conveyed by a measurement. Traditional error propagation quantifies the uncertainty in a measurement due to well-understood perturbations in a measurement and in auxiliary data - known, quantified "unknowns". The under-constrained nature of most satellite remote sensing observations requires the use of various approximations and assumptions that produce non-linear systematic errors that are not readily assessed - known, unquantifiable "unknowns". Additional errors result from the inability to resolve all scales of variation in the measured quantity - unknown "unknowns". The latter two categories of error are dominant in under-constrained remote sensing retrievals, and the difficulty of their quantification limits the utility of existing uncertainty estimates, degrading confidence in such data. This paper proposes the use of ensemble techniques to present multiple self-consistent realisations of a data set as a means of depicting unquantified uncertainties. These are generated using various systems (different algorithms or forward models) believed to be appropriate to the conditions observed. Benefiting from the experience of the climate modelling community, an ensemble provides a user with a more complete representation of the uncertainty as understood by the data producer and greater freedom to consider different realisations of the data.
Byrne, N; Velasco Forte, M; Tandon, A; Valverde, I
2016-01-01
Background Shortcomings in existing methods of image segmentation preclude the widespread adoption of patient-specific 3D printing as a routine decision-making tool in the care of those with congenital heart disease. We sought to determine the range of cardiovascular segmentation methods and how long each of these methods takes. Methods A systematic review of the literature was undertaken. Medical imaging modality, segmentation methods, segmentation time, segmentation descriptive quality (SDQ) and segmentation software were recorded. Results In total, 136 studies met the inclusion criteria (1 clinical trial; 80 journal articles; 55 conference, technical and case reports). The most frequently used image segmentation methods were brightness thresholding, region growing and manual editing, as supported by the most popular piece of proprietary software: Mimics (Materialise NV, Leuven, Belgium, 1992–2015). The use of bespoke software developed by individual authors was not uncommon. SDQ indicated that reporting of image segmentation methods was generally poor, with only one in three accounts providing sufficient detail for the procedure to be reproduced. Conclusions and implications of key findings Predominantly anecdotal and case reporting precluded rigorous assessment of risk of bias and strength of evidence. This review finds a reliance on manual and semi-automated segmentation methods which demand a high level of expertise and a significant time commitment on the part of the operator. In light of the findings, we have made recommendations regarding the reporting of 3D printing studies. We anticipate that these findings will encourage the development of advanced image segmentation methods. PMID:27170842
Courtney, H; Kirkland, J; Viguerie, P
1997-01-01
At the heart of the traditional approach to strategy lies the assumption that by applying a set of powerful analytic tools, executives can predict the future of any business accurately enough to allow them to choose a clear strategic direction. But what happens when the environment is so uncertain that no amount of analysis will allow us to predict the future? What makes for a good strategy in highly uncertain business environments? The authors, consultants at McKinsey & Company, argue that uncertainty requires a new way of thinking about strategy. All too often, they say, executives take a binary view: either they underestimate uncertainty to come up with the forecasts required by their companies' planning or capital-budgeting processes, or they overestimate it, abandon all analysis, and go with their gut instinct. The authors outline a new approach that begins by making a crucial distinction among four discrete levels of uncertainty that any company might face. They then explain how a set of generic strategies--shaping the market, adapting to it, or reserving the right to play at a later time--can be used in each of the four levels. And they illustrate how these strategies can be implemented through a combination of three basic types of actions: big bets, options, and no-regrets moves. The framework can help managers determine which analytic tools can inform decision making under uncertainty--and which cannot. At a broader level, it offers executives a discipline for thinking rigorously and systematically about uncertainty and its implications for strategy. PMID:10174798
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skill. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is the better choice.
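A minimal sketch of the contrast between the two error models on synthetic data. The model form y = a·x^b·ε follows the multiplicative convention, but every number below is an assumption for illustration, not a satellite-data fit.

```python
import numpy as np

# Synthetic "daily precipitation" truth x with multiplicative measurement error:
#   additive view:        y = x + e
#   multiplicative view:  y = a * x^b * eps,  log(eps) ~ N(0, s^2)
rng = np.random.default_rng(1)
x = rng.gamma(shape=0.5, scale=10.0, size=20_000) + 0.1        # truth (mm/day)
y = 0.8 * x**1.1 * np.exp(0.3 * rng.standard_normal(x.size))   # measurement

# Additive residuals: spread grows with rain rate (heteroscedastic).
add_resid = y - x
# Multiplicative (log-space) residuals: homoscedastic by construction.
mul_resid = np.log(y) - (np.log(0.8) + 1.1 * np.log(x))

lo, hi = x < np.median(x), x >= np.median(x)
print("additive residual std, light vs heavy rain:",
      add_resid[lo].std(), add_resid[hi].std())
print("multiplicative residual std, light vs heavy rain:",
      mul_resid[lo].std(), mul_resid[hi].std())
```

The nonconstant variance of the additive residuals is exactly the weakness the letter identifies, while the log-space residuals separate cleanly into a systematic part (a, b) and a constant-variance random part.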
Uncertainty, joint uncertainty, and the quantum uncertainty principle
NASA Astrophysics Data System (ADS)
Narasimhachar, Varun; Poostindouz, Alireza; Gour, Gilad
2016-03-01
Historically, the element of uncertainty in quantum mechanics has been expressed through mathematical identities called uncertainty relations, a great many of which continue to be discovered. These relations use diverse measures to quantify uncertainty (and joint uncertainty). In this paper we use operational information-theoretic principles to identify the common essence of all such measures, thereby defining measure-independent notions of uncertainty and joint uncertainty. We find that most existing entropic uncertainty relations use measures of joint uncertainty that lend themselves to only a small class of operational interpretations. Our notion relaxes this restriction, revealing previously unexplored joint uncertainty measures. To illustrate the utility of our formalism, we derive an uncertainty relation based on one such new measure. We also use our formalism to gain insight into the conditions under which measure-independent uncertainty relations can be found.
Uncertainty quantification of effective nuclear interactions
NASA Astrophysics Data System (ADS)
Pérez, R. Navarro; Amaro, J. E.; Arriola, E. Ruiz
2016-03-01
We give a brief review of the development of phenomenological NN interactions and the corresponding quantification of statistical uncertainties. We look into the uncertainty of effective interactions widely used in mean-field calculations through the Skyrme parameters and effective field theory counterterms, estimating both statistical and systematic uncertainties stemming from the NN interaction. We also comment on the role played by different fitting strategies in light of recent developments.
Thomas, R.E.
1982-03-01
An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
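For the linear case Ax = b mentioned above, the adjoint idea can be sketched as follows. This is a generic example with made-up matrices, not the report's Cork and Bottle problem: one adjoint solve yields the sensitivities of a scalar response to every entry of b at once, which a finite-difference check confirms.

```python
import numpy as np

# Adjoint sensitivities of a scalar response R = c^T x for A x = b.
# Solving A^T lam = c gives all dR/db_i in a single solve.
rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned system
b = rng.standard_normal(n)
c = rng.standard_normal(n)                        # response weights

x = np.linalg.solve(A, b)
R = c @ x

lam = np.linalg.solve(A.T, c)                     # adjoint solution
dR_db_adjoint = lam                               # dR/db_i = lam_i

# Brute-force finite-difference check: one extra solve per parameter.
eps = 1e-6
dR_db_fd = np.array([
    (c @ np.linalg.solve(A, b + eps * np.eye(n)[i]) - R) / eps
    for i in range(n)
])
print(np.max(np.abs(dR_db_adjoint - dR_db_fd)))
```

The contrast in cost (one solve versus n solves, or many statistical samples) is the efficiency argument the evaluation makes for the adjoint method.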
Asymptotic entropic uncertainty relations
NASA Astrophysics Data System (ADS)
Adamczak, Radosław; Latała, Rafał; Puchała, Zbigniew; Życzkowski, Karol
2016-03-01
We analyze entropic uncertainty relations for two orthogonal measurements on an N-dimensional Hilbert space, performed in two generic bases. It is assumed that the unitary matrix U relating the two bases is distributed according to the Haar measure on the unitary group. We provide lower bounds on the average Shannon entropy of the probability distributions related to both measurements. The bounds are stronger than those obtained using the entropic uncertainty relation of Maassen and Uffink, and they are optimal up to additive constants. We also analyze the case of a large number of measurements and obtain strong entropic uncertainty relations, which hold with high probability with respect to the random choice of bases. The lower bounds we obtain are optimal up to additive constants and allow us to prove a conjecture by Wehner and Winter on the asymptotic behavior of the constants in entropic uncertainty relations as the dimension tends to infinity. As a tool we develop estimates on the maximum operator norm of a fixed-size submatrix of a random unitary matrix distributed according to the Haar measure, which are of independent interest.
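The setting can be illustrated numerically (a sketch of the setup, not the paper's proof machinery): draw a Haar-random unitary, measure a random pure state in the standard and rotated bases, and check the Maassen-Uffink bound H(p) + H(q) ≥ -2 log c with c = max_ij |U_ij|.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 16

def haar_unitary(n):
    # QR of a complex Ginibre matrix with the standard phase correction
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))        # scale columns by phases of diag(R)

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

U = haar_unitary(N)
psi = rng.standard_normal(N) + 1j * rng.standard_normal(N)
psi /= np.linalg.norm(psi)

p = np.abs(psi) ** 2                  # standard-basis outcome probabilities
q = np.abs(U.conj().T @ psi) ** 2     # rotated-basis outcome probabilities
mu_bound = -2.0 * np.log(np.max(np.abs(U)))
print(shannon(p) + shannon(q), ">=", mu_bound)
```

The paper's result is that, on average over Haar-random U, the entropy sum exceeds this Maassen-Uffink bound by a margin that is optimal up to additive constants.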
Oldgren, Jonas; Wallentin, Lars; Alexander, John H.; James, Stefan; Jönelid, Birgitta; Steg, Gabriel; Sundström, Johan
2013-01-01
Background Oral anticoagulation in addition to antiplatelet treatment after an acute coronary syndrome might reduce ischaemic events but increase bleeding risk. We performed a meta-analysis to evaluate the efficacy and safety of adding direct thrombin or factor-Xa inhibition by any of the novel oral anticoagulants (apixaban, dabigatran, darexaban, rivaroxaban, and ximelagatran) to single (aspirin) or dual (aspirin and clopidogrel) antiplatelet therapy in this setting. Methods and results All seven published randomized, placebo-controlled phase II and III studies of novel oral anticoagulants in acute coronary syndromes were included. The database consisted of 30 866 patients, 4135 (13.4%) on single, and 26 731 (86.6%) on dual antiplatelet therapy, with a non-ST- or ST-elevation acute coronary syndrome within the last 7–14 days. We defined major adverse cardiovascular events (MACEs) as the composite of all-cause mortality, myocardial infarction, or stroke; and clinically significant bleeding as the composite of major and non-major bleeding requiring medical attention according to the study definitions. When compared with aspirin alone the combination of an oral anticoagulant and aspirin reduced the incidence of MACE [hazard ratio (HR) and 95% confidence interval 0.70; 0.59–0.84], but increased clinically significant bleeding (HR: 1.79; 1.54–2.09). Compared with dual antiplatelet therapy with aspirin and clopidogrel, adding an oral anticoagulant decreased the incidence of MACE modestly (HR: 0.87; 0.80–0.95), but more than doubled the bleeding (HR: 2.34; 2.06–2.66). Heterogeneity between studies was low, and results were similar when restricting the analysis to phase III studies. Conclusion In patients with a recent acute coronary syndrome, the addition of a new oral anticoagulant to antiplatelet therapy results in a modest reduction in cardiovascular events but a substantial increase in bleeding, most pronounced when new oral anticoagulants are combined with
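The pooling underlying such hazard-ratio estimates can be sketched with a fixed-effect (inverse-variance) combination on the log scale. The three (HR, CI) tuples below are invented placeholders, not the trial data from the meta-analysis.

```python
import math

# Fixed-effect pooling of hazard ratios: weight each study's log HR by the
# inverse of its variance, recovered from the reported 95% CI.
studies = [(0.85, 0.72, 1.00), (0.90, 0.78, 1.04), (0.80, 0.65, 0.98)]

num = den = 0.0
for hr, lo, hi in studies:
    log_hr = math.log(hr)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from the 95% CI width
    w = 1.0 / se**2                                  # inverse-variance weight
    num += w * log_hr
    den += w

pooled = math.exp(num / den)
se_pooled = 1.0 / math.sqrt(den)
ci = (math.exp(num / den - 1.96 * se_pooled),
      math.exp(num / den + 1.96 * se_pooled))
print(f"pooled HR = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

With the low between-study heterogeneity the authors report, fixed- and random-effects pooling give similar results.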
Ahmadizar, Fariba; Onland-Moret, N. Charlotte; de Boer, Anthonius; Liu, Geoffrey; Maitland-van der Zee, Anke H.
2015-01-01
Aim To evaluate the efficacy and safety of bevacizumab in the adjuvant cancer therapy setting within different subsets of patients. Methods & Design/Results PubMed, EMBASE, Cochrane and ClinicalTrials.gov databases were searched for English language studies of randomized controlled trials comparing bevacizumab and adjuvant therapy with adjuvant therapy alone published from January 1966 to 7th of May 2014. Progression free survival, overall survival, overall response rate, safety and quality of life were analyzed using random- or fixed-effects models according to the PRISMA guidelines. We obtained data from 44 randomized controlled trials (30,828 patients). Combining bevacizumab with different adjuvant therapies resulted in significant improvement of progression free survival (log hazard ratio, 0.87; 95% confidence interval (CI), 0.84–0.89), overall survival (log hazard ratio, 0.96; 95% CI, 0.94–0.98) and overall response rate (relative risk, 1.46; 95% CI: 1.33–1.59) compared to adjuvant therapy alone in all studied tumor types. In subgroup analyses, there were no interactions of bevacizumab with baseline characteristics on progression free survival and overall survival, while overall response rate was influenced by tumor type and bevacizumab dose (p-value: 0.02). Although bevacizumab use resulted in additional expected adverse drug reactions except anemia and fatigue, it was not associated with a significant decline in quality of life. There was a trend towards a higher risk of several side effects in patients treated with high-dose bevacizumab compared with the low dose, e.g., all-grade proteinuria (9.24; 95% CI: 6.60–12.94 vs. 2.64; 95% CI: 1.29–5.40). Conclusions Combining bevacizumab with different adjuvant therapies provides a survival benefit across all major subsets of patients, including by tumor type, type of adjuvant therapy, and duration and dose of bevacizumab therapy. Though bevacizumab was associated with increased risks of some adverse drug
Uncertainties in offsite consequence analysis
Young, M.L.; Harper, F.T.; Lui, C.H.
1996-03-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the U.S. Nuclear Regulatory Commission and the European Commission began co-sponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables using a formal expert judgment elicitation and evaluation process. This paper focuses on the methods used in and results of this on-going joint effort.
Orbital State Uncertainty Realism
NASA Astrophysics Data System (ADS)
Horwood, J.; Poore, A. B.
2012-09-01
Fundamental to the success of the space situational awareness (SSA) mission is the rigorous inclusion of uncertainty in the space surveillance network. The proper characterization of uncertainty in the orbital state of a space object is a common requirement of many SSA functions, including tracking and data association, resolution of uncorrelated tracks (UCTs), conjunction analysis and probability of collision, sensor resource management, and anomaly detection. While tracking environments such as air and missile defense make extensive use of Gaussian and local linearity assumptions within algorithms for uncertainty management, space surveillance is inherently different due to long time gaps between updates, high misdetection rates, nonlinear and non-conservative dynamics, and non-Gaussian phenomena. The latter implies that "covariance realism" is not always sufficient. SSA also requires "uncertainty realism": the proper characterization of both the state and covariance and all non-zero higher-order cumulants. In other words, a proper characterization of a space object's full state probability density function (PDF) is required. In order to provide a more statistically rigorous treatment of uncertainty in the space surveillance tracking environment and to better support the aforementioned SSA functions, a new class of multivariate PDFs is formulated which more accurately characterizes the uncertainty of a space object's state or orbit. The new distribution contains a parameter set controlling the higher-order cumulants, which gives the level sets a distinctive "banana" or "boomerang" shape and degenerates to a Gaussian in a suitable limit. Using the new class of PDFs within the general Bayesian nonlinear filter, the resulting filter prediction step (i.e., uncertainty propagation) is shown to have the same computational cost as the traditional unscented Kalman filter, with the former able to maintain a proper characterization of the uncertainty for up to ten
Uncertainties in radiation flow experiments
NASA Astrophysics Data System (ADS)
Fryer, C. L.; Dodd, E.; Even, W.; Fontes, C. J.; Greeff, C.; Hungerford, A.; Kline, J.; Mussack, K.; Tregillis, I.; Workman, J. B.; Benstead, J.; Guymer, T. M.; Moore, A. S.; Morton, J.
2016-03-01
Although the fundamental physics behind radiation and matter flow is understood, many uncertainties remain in the exact behavior of macroscopic fluids in systems ranging from pure turbulence to coupled radiation hydrodynamics. Laboratory experiments play an important role in studying this physics to allow scientists to test their macroscopic models of these phenomena. However, because the fundamental physics is well understood, precision experiments are required to validate existing codes already tested by a suite of analytic, manufactured and convergence solutions. To conduct such high-precision experiments requires a detailed understanding of the experimental errors and the nature of their uncertainties on the observed diagnostics. In this paper, we study the uncertainties plaguing many radiation-flow experiments, focusing on those using a hohlraum (dynamic or laser-driven) source and a foam-density target. This study focuses on the effect these uncertainties have on the breakout time of the radiation front. We find that, even if the errors in the initial conditions and numerical methods are Gaussian, the errors in the breakout time are asymmetric, leading to a systematic bias in the observed data. We must understand these systematics to produce the high-precision experimental results needed to study this physics.
Quantifying Mixed Uncertainties in Cyber Attacker Payoffs
Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna; Oster, Matthew R.; Saha, Sudip
2015-04-15
Representation and propagation of uncertainty in cyber attacker payoffs is a key aspect of security games. Past research has primarily focused on representing the defender’s beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and intervals. Within cyber-settings, continuous probability distributions may still be appropriate for addressing statistical (aleatory) uncertainties where the defender may assume that the attacker’s payoffs differ over time. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker’s payoff generation mechanism. Such epistemic uncertainties are more suitably represented as probability boxes with intervals. In this study, we explore the mathematical treatment of such mixed payoff uncertainties.
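A probability box of the kind described can be sketched as an envelope of CDFs. The payoff interval and the distributions below are invented for illustration; the study's actual p-box construction may differ.

```python
import numpy as np
from math import erf, sqrt

# Toy p-box: an attacker payoff with aleatory spread (sigma = 2) whose mean is
# only known epistemically to lie in the interval [8, 12]. The p-box is the
# envelope of all CDFs consistent with that interval.
grid = np.linspace(0.0, 20.0, 401)

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

cdf_upper = np.array([normal_cdf(x, 8.0, 2.0) for x in grid])   # left-most CDF
cdf_lower = np.array([normal_cdf(x, 12.0, 2.0) for x in grid])  # right-most CDF

# Any aleatory-only model with a mean inside [8, 12] lies inside the box.
cdf_mid = np.array([normal_cdf(x, 10.0, 2.0) for x in grid])
inside = bool(np.all((cdf_lower <= cdf_mid) & (cdf_mid <= cdf_upper)))
print("mid CDF inside p-box:", inside)
```

The gap between the two bounding CDFs carries the epistemic (knowledge) uncertainty, while the slope of each CDF carries the aleatory (statistical) part.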
Uncertainty Estimation in Intensity-Modulated Radiotherapy Absolute Dosimetry Verification
Sanchez-Doblado, Francisco (E-mail: paco@us.es); Hartmann, Guenther H.; Pena, Javier; Capote, Roberto; Paiusco, Marta; Rhein, Bernhard; Leal, Antonio; Lagares, Juan Ignacio
2007-05-01
Purpose: Intensity-modulated radiotherapy (IMRT) represents an important method for improving RT. The IMRT relative dosimetry checks are well established; however, open questions remain in reference dosimetry with ionization chambers (ICs). The main problem is the departure of the measurement conditions from the reference ones; thus, additional uncertainty is introduced into the dose determination. The goal of this study was to assess this effect systematically. Methods and Materials: Monte Carlo calculations and dosimetric measurements with five different detectors were performed for a number of representative IMRT cases, covering both step-and-shoot and dynamic delivery. Results: Using ICs with volumes of about 0.125 cm³ or less, good agreement was observed among the detectors in most of the situations studied. These results also agreed well with the Monte Carlo-calculated nonreference correction factors (c factors). Additionally, we found a general correlation between the IC position relative to a segment and the derived correction factor c, which can be used to estimate the expected overall uncertainty of the treatment. Conclusion: The increase of the reference dose relative standard uncertainty measured with ICs introduced by nonreference conditions when verifying an entire IMRT plan is about 1-1.5%, provided that appropriate small-volume chambers are used. The overall standard uncertainty of the measured IMRT dose amounts to about 2.3%, including the 0.5% of reproducibility and 1.5% of uncertainty associated with the beam calibration factor. Solid state detectors and large-volume chambers are not well suited to IMRT verification dosimetry because of the greater uncertainties. An action level of 5% is appropriate for IMRT verification. Greater discrepancies should lead to a review of the dosimetric procedure, including visual inspection of treatment segments and energy fluence.
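The quoted total is consistent with combining the independent components in quadrature, the standard GUM-style treatment. The exact breakdown below is our assumption (taking the upper end of the 1-1.5% nonreference contribution); the abstract quotes about 2.3%.

```python
import math

# Combined standard uncertainty from independent components, in quadrature.
u_nonref = 1.5   # % nonreference-condition contribution (upper end of 1-1.5%)
u_repro = 0.5    # % reproducibility
u_calib = 1.5    # % beam calibration factor

u_total = math.sqrt(u_nonref**2 + u_repro**2 + u_calib**2)
print(f"combined standard uncertainty ~ {u_total:.1f}%")
```

This lands near the ~2.3% quoted above; the abstract does not spell out the full component list, so the small residual difference is expected.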
The relationship between aerosol model uncertainty and radiative forcing uncertainty
NASA Astrophysics Data System (ADS)
Carslaw, Ken; Lee, Lindsay; Reddington, Carly
2016-04-01
There has been no systematic assessment of how reduction in the uncertainty of global aerosol models will feed through to the uncertainty in the predicted forcing. We use a global model perturbed parameter ensemble to show that tight observational constraint of aerosol concentrations in the model has a relatively small effect on the aerosol-related uncertainty in the calculated aerosol-cloud forcing between pre-industrial and present day periods. One factor is the low sensitivity of present-day aerosol to natural emissions that determine the pre-industrial aerosol state. But the major cause of the weak constraint is that the full uncertainty space of the model generates a large number of model variants that are "equally acceptable" compared to present-day aerosol observations. The narrow range of aerosol concentrations in the observationally constrained model gives the impression of low aerosol model uncertainty, but this hides a range of very different aerosol models. These multiple so-called "equifinal" model variants predict a wide range of forcings. Equifinality in the aerosol model means that tuning of a small number of model processes to achieve model-observation agreement could give a misleading impression of model robustness.
Uncertainty-induced quantum nonlocality
NASA Astrophysics Data System (ADS)
Wu, Shao-xiong; Zhang, Jun; Yu, Chang-shui; Song, He-shan
2014-01-01
Based on the skew information, we present a quantity, uncertainty-induced quantum nonlocality (UIN), to measure quantum correlation. It can be considered an updated version of the original measurement-induced nonlocality (MIN), preserving its good computability but eliminating the non-contractivity problem. For 2×d-dimensional states, it is shown that UIN can be given in closed form. In addition, we also investigate the maximal uncertainty-induced nonlocality.
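For reference, the underlying Wigner-Yanase skew information (standard definition; the maximization over observables that defines UIN is specified in the paper) is

\[ I(\rho, K) = -\tfrac{1}{2}\,\mathrm{Tr}\!\left( \left[ \sqrt{\rho},\, K \right]^{2} \right), \]

where K is a Hermitian observable and [·,·] denotes the commutator. It vanishes when ρ and K commute, which is what makes it a natural uncertainty-based substitute for the Hilbert-Schmidt norm used in MIN.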
Messaging climate change uncertainty
NASA Astrophysics Data System (ADS)
Cooke, Roger M.
2015-01-01
Climate change is full of uncertainty and the messengers of climate science are not getting the uncertainty narrative right. To communicate uncertainty one must first understand it, and then avoid repeating the mistakes of the past.
Uncertainty quantified trait predictions
NASA Astrophysics Data System (ADS)
Fazayeli, Farideh; Kattge, Jens; Banerjee, Arindam; Schrodt, Franziska; Reich, Peter
2015-04-01
Functional traits of organisms are key to understanding and predicting biodiversity and ecological change, which motivates continuous collection of traits and their integration into global databases. Such composite trait matrices are inherently sparse, severely limiting their usefulness for further analyses. On the other hand, traits are characterized by the phylogenetic trait signal, trait-trait correlations and environmental constraints, all of which provide information that could be used to statistically fill gaps. We propose the application of probabilistic models which, for the first time, utilize all three characteristics to fill gaps in trait databases and predict trait values at larger spatial scales. For this purpose we introduce BHPMF, a hierarchical Bayesian extension of Probabilistic Matrix Factorization (PMF). PMF is a machine learning technique which exploits the correlation structure of sparse matrices to impute missing entries. BHPMF additionally utilizes the taxonomic hierarchy for trait prediction. Implemented in the context of a Gibbs sampler MCMC approach, BHPMF provides uncertainty estimates for each trait prediction. We present comprehensive experimental results on the problem of plant trait prediction using the largest database of plant traits, where BHPMF shows strong empirical performance in uncertainty quantified trait prediction, outperforming the state-of-the-art based on point estimates. Further, we show that BHPMF is more accurate when it is confident, whereas the error is high when the uncertainty is high.
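The PMF core that BHPMF builds on can be sketched as a toy gap filler: a low-rank "trait matrix" with missing entries, recovered by alternating least squares on the observed cells only. This omits the taxonomic hierarchy and the Gibbs-sampled uncertainty estimates that distinguish BHPMF, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n_species, n_traits, rank = 60, 8, 3

# Synthetic exactly-low-rank trait matrix with ~30% of entries missing.
truth = rng.standard_normal((n_species, rank)) @ rng.standard_normal((rank, n_traits))
mask = rng.random(truth.shape) < 0.7            # True = observed
obs = np.where(mask, truth, 0.0)

U = rng.standard_normal((n_species, rank))
V = rng.standard_normal((rank, n_traits))
reg = 1e-3                                       # small ridge term
for _ in range(50):                              # alternating least squares
    for i in range(n_species):
        m = mask[i]
        A = V[:, m] @ V[:, m].T + reg * np.eye(rank)
        U[i] = np.linalg.solve(A, V[:, m] @ obs[i, m])
    for j in range(n_traits):
        m = mask[:, j]
        A = U[m].T @ U[m] + reg * np.eye(rank)
        V[:, j] = np.linalg.solve(A, U[m].T @ obs[m, j])

rmse_gap = np.sqrt(np.mean((U @ V - truth)[~mask] ** 2))
print("RMSE on the gap-filled (unobserved) entries:", rmse_gap)
```

BHPMF replaces this point estimate with posterior samples of U and V, which is what yields a per-entry uncertainty alongside each imputed trait.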
Characterizing Epistemic Uncertainty for Launch Vehicle Designs
NASA Technical Reports Server (NTRS)
Novack, Steven D.; Rogers, Jim; Al Hassan, Mohammad; Hark, Frank
2016-01-01
NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk, and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design would be greater than that of a design of well-understood heritage equipment. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counterintuitive results and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods, such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty, are rendered obsolete, since changes in uncertainty are not reflected in its propagation using Monte Carlo methods. This paper explains the basis of the uncertainty underestimation for complex systems and, due to nuances of launch vehicle logic, especially for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper describes how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.
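The underestimation effect can be demonstrated with a toy series-system model (assumed numbers, not a NASA PRA): when similar components share one state-of-knowledge error factor, sampling that factor independently per component averages it away, while sampling it once per trial, as state-of-knowledge correlation requires, preserves the spread.

```python
import numpy as np

rng = np.random.default_rng(5)
n_comp, p_med, n_trials = 10, 1e-3, 100_000
sigma = np.log(3.0)                      # roughly an error factor of 3

z_indep = rng.standard_normal((n_trials, n_comp))   # one draw per component
z_corr = rng.standard_normal(n_trials)              # one draw per trial

# System failure probability: sum over the 10 series components.
p_sys_indep = np.sum(p_med * np.exp(sigma * z_indep), axis=1)
p_sys_corr = n_comp * p_med * np.exp(sigma * z_corr)

spread = lambda s: np.percentile(s, 95) / np.percentile(s, 5)
print("95th/5th ratio, independent epistemic sampling:", spread(p_sys_indep))
print("95th/5th ratio, correlated epistemic sampling: ", spread(p_sys_corr))
```

Both schemes give nearly the same mean risk, but the independent scheme reports a far narrower uncertainty band, which is the counterintuitive underestimation the paper describes.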
Communicating uncertainties in earth sciences in view of user needs
NASA Astrophysics Data System (ADS)
de Vries, Wim; Kros, Hans; Heuvelink, Gerard
2014-05-01
uncertain model parameters (parametric variability). These uncertainties can be quantified by uncertainty propagation methods such as Monte Carlo simulation. Examples of intrinsic uncertainties that generally cannot be expressed in mathematical terms are errors or biases in:
• Results of experiments and observations, due to inadequate sampling and errors in analyzing data in the laboratory and even in data reporting.
• Results of (laboratory) experiments that are limited to a specific domain or performed under circumstances that differ from field circumstances.
• Model structure, due to lack of knowledge of the underlying processes.
Structural uncertainty, which may cause model inadequacy or bias, is inherent in model approaches, since models are approximations of reality. Intrinsic uncertainties often occur in an emerging field where ongoing new findings, from either experiments, field observations or new model results, challenge earlier work. In this context, climate scientists working within the IPCC have adopted a lexicon to communicate confidence in their findings, ranging from "very high" through "high", "medium" and "low" to "very low" confidence. There are also statistical methods to gain insight into uncertainties in model predictions due to model assumptions (i.e., model structural error), such as comparing model results with independent observations or systematically intercomparing predictions from multiple models. In the latter case, Bayesian model averaging techniques can be used, in which each model considered is assigned a prior probability of being the 'true' model. This approach works well with statistical (regression) models, but extension to physically based models is cumbersome. An alternative is the use of state-space models in which structural errors are represented as (additive) noise terms. In this presentation, we focus on approaches that are relevant at the science-policy interface, including multiple scientific disciplines and
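The Bayesian model averaging idea mentioned above can be sketched with two competing regression models weighted by approximate posterior model probabilities (here BIC weights). The data and candidate models are invented for illustration.

```python
import numpy as np

# Two candidate models (linear, quadratic) for noisy synthetic data; BIC
# approximates the model evidence, and predictions are the weighted mixture.
rng = np.random.default_rng(6)
x = np.linspace(0.0, 1.0, 40)
y = 1.0 + 2.0 * x + 0.5 * x**2 + 0.05 * rng.standard_normal(x.size)

def fit(deg):
    coeffs = np.polyfit(x, y, deg)
    resid = y - np.polyval(coeffs, x)
    k = deg + 2                              # coefficients + noise variance
    n = x.size
    bic = n * np.log(np.mean(resid**2)) + k * np.log(n)
    return coeffs, bic

models = [fit(1), fit(2)]
bics = np.array([b for _, b in models])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                  # approximate posterior weights
pred = sum(wi * np.polyval(c, 0.5) for wi, (c, _) in zip(w, models))
print("model weights:", w, "BMA prediction at x=0.5:", pred)
```

The weights themselves communicate structural uncertainty: when no single model dominates, the mixture's extra spread reflects the model-choice error that a single best-fit model would hide.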
Validation of an Experimentally Derived Uncertainty Model
NASA Technical Reports Server (NTRS)
Lim, K. B.; Cox, D. E.; Balas, G. J.; Juang, J.-N.
1996-01-01
The results show that uncertainty models can be obtained directly from system identification data by using a minimum norm model validation approach. The error between the test data and an analytical nominal model is modeled as a combination of unstructured additive and structured input multiplicative uncertainty. Robust controllers which use the experimentally derived uncertainty model show significant stability and performance improvements over controllers designed with assumed ad hoc uncertainty levels. Use of the identified uncertainty model also allowed a strong correlation between design predictions and experimental results.
Simple uncertainty propagation for early design phase aircraft sizing
NASA Astrophysics Data System (ADS)
Lenz, Annelise
Many designers and systems analysts are aware of the uncertainty inherent in their aircraft sizing studies; however, few incorporate methods to address and quantify this uncertainty. Many aircraft design studies use semi-empirical predictors based on a historical database and contain uncertainty -- a portion of which can be measured and quantified. In cases where historical information is not available, surrogate models built from higher-fidelity analyses often provide predictors for design studies where the computational cost of directly using the high-fidelity analyses is prohibitive. These surrogate models contain uncertainty, some of which is quantifiable. However, rather than quantifying this uncertainty, many designers merely include a safety factor or design margin in the constraints to account for the variability between the predicted and actual results. This can become problematic if a designer does not estimate the amount of variability correctly, which can then result in either an "over-designed" or "under-designed" aircraft. "Under-designed" and some "over-designed" aircraft will likely require design changes late in the process and will ultimately require more time and money to create; other "over-designed" aircraft concepts may not require design changes, but could end up being more costly than necessary. Including and propagating uncertainty early in the design phase, so designers can quantify some of the errors in the predictors, could help mitigate the extent of this additional cost. The method proposed here seeks to provide a systematic approach for characterizing a portion of the uncertainties that designers are aware of and propagating them throughout the design process in a procedure that is easy to understand and implement. Using Monte Carlo simulations that sample from quantified distributions will allow a systems analyst to use a carpet plot-like approach to make statements like: "The aircraft is 'P'% likely to weigh 'X' lbs or less, given the
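A minimal sketch of such a Monte Carlo sizing statement, assuming a hypothetical semi-empirical weight predictor and invented input distributions (none of these values come from the thesis itself):

```python
import random

random.seed(1)

def gross_weight(payload, sfc):
    # hypothetical semi-empirical sizing predictor (illustrative only):
    # weight in lbs as a function of payload (lbs) and engine SFC
    return 12000.0 + 2.1 * payload + 30000.0 * sfc

weights = []
for _ in range(50000):
    payload = random.gauss(3000.0, 150.0)  # uncertain payload, lbs
    sfc = random.gauss(0.55, 0.02)         # uncertain specific fuel consumption
    weights.append(gross_weight(payload, sfc))

# read a percentile off the sampled distribution
weights.sort()
p90 = weights[int(0.90 * len(weights))]
print(f"The aircraft is 90% likely to weigh {p90:.0f} lbs or less")
```

Repeating this for a grid of design variables yields the carpet-plot-like probabilistic statements described in the abstract.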
Finite Frames and Graph Theoretic Uncertainty Principles
NASA Astrophysics Data System (ADS)
Koprowski, Paul J.
The subject of analytical uncertainty principles is an important field within harmonic analysis, quantum physics, and electrical engineering. We explore uncertainty principles in the context of the graph Fourier transform, and we prove additive results analogous to the multiplicative version of the classical uncertainty principle. We establish additive uncertainty principles for finite Parseval frames. Lastly, we examine the feasibility region of simultaneous values of the norms of a graph differential operator acting on a function f ∈ l2(G) and its graph Fourier transform.
Lewandowsky, Stephan; Ballard, Timothy; Pancost, Richard D.
2015-01-01
This issue of Philosophical Transactions examines the relationship between scientific uncertainty about climate change and knowledge. Uncertainty is an inherent feature of the climate system. Considerable effort has therefore been devoted to understanding how to effectively respond to a changing, yet uncertain climate. Politicians and the public often appeal to uncertainty as an argument to delay mitigative action. We argue that the appropriate response to uncertainty is exactly the opposite: uncertainty provides an impetus to be concerned about climate change, because greater uncertainty increases the risks associated with climate change. We therefore suggest that uncertainty can be a source of actionable knowledge. We survey the papers in this issue, which address the relationship between uncertainty and knowledge from physical, economic and social perspectives. We also summarize the pervasive psychological effects of uncertainty, some of which may militate against a meaningful response to climate change, and we provide pointers to how those difficulties may be ameliorated. PMID:26460108
Neutrino Spectra and Uncertainties for MINOS
Kopp, Sacha
2008-02-21
The MINOS experiment at Fermilab has released an updated result on muon disappearance. The experiment utilizes the intense source of muon neutrinos provided by the NuMI beam line. This note summarizes the systematic uncertainties in the experiment's knowledge of the flux and energy spectrum of the neutrinos from NuMI.
An Approach of Uncertainty Evaluation for Thermal-Hydraulic Analysis
Katsunori Ogura; Hisashi Ninokata
2002-07-01
An approach to evaluate uncertainty systematically for thermal-hydraulic analysis programs is demonstrated. The approach is applied to the Peach Bottom Unit 2 Turbine Trip 2 Benchmark and is validated. (authors)
Numerical Uncertainty Quantification for Radiation Analysis Tools
NASA Technical Reports Server (NTRS)
Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha
2007-01-01
Recently a new emphasis has been placed on engineering applications of space radiation analyses, and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis in vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing: as the number of rays increases, the associated uncertainty decreases, but the computational expense increases. Thus, a cost-benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is how many thicknesses are needed to get an accurate result, so convergence testing is performed to quantify the uncertainty associated with interpolating over different shield-thickness spatial grids.
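The convergence testing described for the dose-vs-depth interpolation can be illustrated as follows; the exponential dose curve, the grid sizes, and the target thickness are assumptions for demonstration, not the radiation tools' actual data:

```python
import math

def dose(depth):
    # smooth stand-in for a dose-vs-depth curve (illustrative only)
    return 100.0 * math.exp(-depth / 8.0)

def interp_dose(depth, grid):
    # piecewise-linear interpolation over a table of shield thicknesses
    for d0, d1 in zip(grid, grid[1:]):
        if d0 <= depth <= d1:
            t = (depth - d0) / (d1 - d0)
            return (1 - t) * dose(d0) + t * dose(d1)
    raise ValueError("depth outside grid")

target = 5.3  # shield thickness at which exposure is evaluated
prev = None
for n in (3, 5, 9, 17, 33):          # successively refined thickness grids
    grid = [20.0 * i / (n - 1) for i in range(n)]
    est = interp_dose(target, grid)
    err = abs(est - dose(target))    # known here because dose() is analytic
    if prev is not None:
        # in practice only |change| between refinements is observable
        print(f"n={n:3d}  est={est:.4f}  |change|={abs(est - prev):.4f}  err={err:.4f}")
    prev = est
```

The grid is considered converged once the change between successive refinements falls below the accuracy required of the exposure estimate.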
Identifying uncertainties in Arctic climate change projections
NASA Astrophysics Data System (ADS)
Hodson, Daniel L. R.; Keeley, Sarah P. E.; West, Alex; Ridley, Jeff; Hawkins, Ed; Hewitt, Helene T.
2013-06-01
Wide-ranging climate changes are expected in the Arctic by the end of the 21st century, but projections of the size of these changes vary widely across current global climate models. This variation represents a large source of uncertainty in our understanding of the evolution of Arctic climate. Here we systematically quantify and assess the model uncertainty in Arctic climate changes in two CO2 doubling experiments: a multimodel ensemble (CMIP3) and an ensemble constructed using a single model (HadCM3) with multiple parameter perturbations (THC-QUMP). These two ensembles allow us to assess the contribution that both structural and parameter variations across models make to the total uncertainty and to begin to attribute sources of uncertainty in projected changes. We find that parameter uncertainty is a major source of uncertainty in certain aspects of Arctic climate. We also find that uncertainties in the mean climate state in the 20th century, most notably in the northward Atlantic ocean heat transport and Arctic sea ice volume, are a significant source of uncertainty for projections of future Arctic change. We suggest that better observational constraints on these quantities will lead to significant improvements in the precision of projections of future Arctic climate change.
Vale, Claire L; Burdett, Sarah; Rydzewska, Larysa H M; Albiges, Laurence; Clarke, Noel W; Fisher, David; Fizazi, Karim; Gravis, Gwenaelle; James, Nicholas D; Mason, Malcolm D; Parmar, Mahesh K B; Sweeney, Christopher J; Sydes, Matthew R; Tombal, Bertrand; Tierney, Jayne F
2016-01-01
Background: Results from large randomised controlled trials combining docetaxel or bisphosphonates with standard of care in hormone-sensitive prostate cancer have emerged. In order to investigate the effects of these therapies and to respond to emerging evidence, we aimed to systematically review all relevant trials using a framework for adaptive meta-analysis. Methods: For this systematic review and meta-analysis, we searched MEDLINE, Embase, LILACS, and the Cochrane Central Register of Controlled Trials, trial registers, conference proceedings, review articles, and reference lists of trial publications for all relevant randomised controlled trials (published, unpublished, and ongoing) comparing either standard of care with or without docetaxel or standard of care with or without bisphosphonates for men with high-risk localised or metastatic hormone-sensitive prostate cancer. For each trial, we extracted hazard ratios (HRs) of the effects of docetaxel or bisphosphonates on survival (time from randomisation until death from any cause) and failure-free survival (time from randomisation to biochemical or clinical failure or death from any cause) from published trial reports or presentations, or obtained them directly from trial investigators. HRs were combined using the fixed-effect model (Mantel-Haenszel). Findings: We identified five eligible randomised controlled trials of docetaxel in men with metastatic (M1) disease. Results from three (CHAARTED, GETUG-15, STAMPEDE) of these trials (2992 [93%] of 3206 men randomised) showed that the addition of docetaxel to standard of care improved survival. The HR of 0·77 (95% CI 0·68–0·87; p<0·0001) translates to an absolute improvement in 4-year survival of 9% (95% CI 5–14). Docetaxel in addition to standard of care also improved failure-free survival, with the HR of 0·64 (0·58–0·70; p<0·0001) translating into a reduction in absolute 4-year failure rates of 16% (95% CI 12–19). We identified 11 trials of
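Fixed-effect pooling of hazard ratios of the kind used in such meta-analyses can be sketched with inverse-variance weighting on the log-HR scale (a common fixed-effect variant; the review itself cites Mantel-Haenszel). The trial names, HRs, and confidence intervals below are hypothetical, not the review's data:

```python
import math

# hypothetical per-trial hazard ratios with 95% CIs: (name, HR, lo, hi)
trials = [
    ("A", 0.73, 0.60, 0.89),
    ("B", 0.88, 0.68, 1.14),
    ("C", 0.76, 0.66, 0.87),
]

num = den = 0.0
for name, hr, lo, hi in trials:
    log_hr = math.log(hr)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from CI width
    w = 1.0 / se ** 2                                # inverse-variance weight
    num += w * log_hr
    den += w

log_pooled = num / den
se_pooled = math.sqrt(1.0 / den)
pooled = math.exp(log_pooled)
ci = (math.exp(log_pooled - 1.96 * se_pooled),
      math.exp(log_pooled + 1.96 * se_pooled))
print(f"pooled HR = {pooled:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```

Precisely weighted trials dominate the pooled estimate, which is the defining property of the fixed-effect model.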
Direct Aerosol Forcing Uncertainty
Mccomiskey, Allison
2008-01-15
Understanding sources of uncertainty in aerosol direct radiative forcing (DRF), the difference in a given radiative flux component with and without aerosol, is essential to quantifying changes in Earth's radiation budget. We examine the uncertainty in DRF due to measurement uncertainty in the quantities on which it depends: aerosol optical depth, single scattering albedo, asymmetry parameter, solar geometry, and surface albedo. Direct radiative forcing at the top of the atmosphere and at the surface, as well as sensitivities, the changes in DRF in response to unit changes in individual aerosol or surface properties, are calculated at three locations representing distinct aerosol types and radiative environments. The uncertainty in DRF associated with a given property is computed as the product of the sensitivity and typical measurement uncertainty in the respective aerosol or surface property. Sensitivity and uncertainty values permit estimation of total uncertainty in calculated DRF and identification of properties that most limit accuracy in estimating forcing. Total uncertainties in modeled local diurnally averaged forcing range from 0.2 to 1.3 W m-2 (42 to 20%) depending on location (from tropical to polar sites), solar zenith angle, surface reflectance, aerosol type, and aerosol optical depth. The largest contributor to total uncertainty in DRF is usually single scattering albedo; however, decreasing measurement uncertainties for any property would increase accuracy in DRF. Comparison of two radiative transfer models suggests the contribution of modeling error is small compared with the total uncertainty, although comparable to uncertainty arising from some individual properties.
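The recipe described here, per-property contribution = sensitivity × measurement uncertainty, combined in quadrature for the total, can be sketched as follows; the sensitivity and uncertainty values are illustrative stand-ins, not the paper's numbers:

```python
import math

# sensitivities dDRF/dx (W m^-2 per unit of each property) paired with
# typical measurement uncertainties dx; all values are invented for illustration
terms = {
    "aerosol optical depth":    (25.0, 0.02),
    "single scattering albedo": (30.0, 0.03),
    "asymmetry parameter":      (10.0, 0.03),
    "surface albedo":           (15.0, 0.02),
}

# per-property uncertainty contribution = sensitivity * measurement uncertainty
contrib = {name: s * dx for name, (s, dx) in terms.items()}

# total uncertainty: root-sum-square of independent contributions
total = math.sqrt(sum(c ** 2 for c in contrib.values()))
largest = max(contrib, key=contrib.get)
print(f"total DRF uncertainty = {total:.2f} W m^-2, dominated by {largest}")
```

Ranking the contributions identifies which measurement most limits accuracy, mirroring the paper's finding that single scattering albedo usually dominates.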
Universal Uncertainty Relations
NASA Astrophysics Data System (ADS)
Gour, Gilad
2014-03-01
Uncertainty relations are a distinctive characteristic of quantum theory that imposes intrinsic limitations on the precision with which physical properties can be simultaneously determined. The modern work on uncertainty relations employs entropic measures to quantify the lack of knowledge associated with measuring non-commuting observables. However, I will show here that there is no fundamental reason for using entropies as quantifiers; in fact, any functional relation that characterizes the uncertainty of the measurement outcomes can be used to define an uncertainty relation. Starting from a simple assumption that any measure of uncertainty is non-decreasing under mere relabeling of the measurement outcomes, I will show that Schur-concave functions are the most general uncertainty quantifiers. I will then introduce a novel fine-grained uncertainty relation written in terms of a majorization relation, which generates an infinite family of distinct scalar uncertainty relations via the application of arbitrary measures of uncertainty. This infinite family of uncertainty relations includes all the known entropic uncertainty relations, but is not limited to them. In this sense, the relation is universally valid and captures the essence of the uncertainty principle in quantum theory. This talk is based on a joint work with Shmuel Friedland and Vlad Gheorghiu. This research is supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada and by the Pacific Institute for Mathematical Sciences (PIMS).
Fission Spectrum Related Uncertainties
G. Aliberti; I. Kodeli; G. Palmiotti; M. Salvatores
2007-10-01
The paper presents a preliminary uncertainty analysis related to potential uncertainties in the fission spectrum data. Consistent results are shown for a reference fast reactor design configuration and for experimental thermal configurations. However, the results obtained indicate the need for further analysis, in particular in terms of fission spectrum uncertainty data assessment.
Two basic Uncertainty Relations in Quantum Mechanics
Angelow, Andrey
2011-04-07
In the present article, we discuss two types of uncertainty relations in Quantum Mechanics: multiplicative and additive inequalities for two canonical observables. The multiplicative uncertainty relation was discovered by Heisenberg. A few years later (1930), Erwin Schrödinger generalized it and made it more precise than the original. The additive uncertainty relation is based on the three independent statistical moments in Quantum Mechanics: Cov(q,p), Var(q) and Var(p). We discuss the existing symmetry of both types of relations and the applicability of the additive form for the estimation of the total error.
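The multiplicative (Robertson/Heisenberg-type) relation, and an additive inequality that follows from it by the AM-GM inequality, can be checked numerically in a simple finite-dimensional setting; the choice of Pauli observables and state is an illustrative assumption, not the canonical q, p pair treated in the article:

```python
import math

# Pauli matrices as nested tuples (rows); entries may be complex
SX = ((0, 1), (1, 0))
SY = ((0, -1j), (1j, 0))
SZ = ((1, 0), (0, -1))

def expval(M, psi):
    # <psi|M|psi> for a normalized 2-component state
    Mv = [sum(M[i][j] * psi[j] for j in range(2)) for i in range(2)]
    return sum(psi[i].conjugate() * Mv[i] for i in range(2))

def var(M, psi):
    # Var(M) = <M^2> - <M>^2
    M2 = tuple(tuple(sum(M[i][k] * M[k][j] for k in range(2))
                     for j in range(2)) for i in range(2))
    m = expval(M, psi)
    return (expval(M2, psi) - m * m).real

theta = 0.3
psi = (math.cos(theta), math.sin(theta))  # example real state

vx, vy = var(SX, psi), var(SY, psi)
# Robertson bound: |<[SX,SY]>/2i|^2 = |<SZ>|^2 since [SX,SY] = 2i*SZ
bound = abs(expval(SZ, psi)) ** 2

assert vx * vy >= bound - 1e-12                  # multiplicative relation
assert vx + vy >= 2 * math.sqrt(bound) - 1e-12   # additive consequence (AM-GM)
print(f"Var(X)Var(Y) = {vx * vy:.4f} >= {bound:.4f}")
```

For this real state the multiplicative bound is saturated, while the additive inequality holds with slack, illustrating that the two forms constrain the variances differently.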
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Uncertainty analyses. 436.24 Section 436.24 Energy... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... by conducting additional analyses using any standard engineering economics method such as...
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false Uncertainty analyses. 436.24 Section 436.24 Energy... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... by conducting additional analyses using any standard engineering economics method such as...
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...
Uncertainty and Sensitivity Analyses Plan
Simpson, J.C.; Ramsdell, J.V. Jr.
1993-04-01
Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.
Pore Velocity Estimation Uncertainties
NASA Astrophysics Data System (ADS)
Devary, J. L.; Doctor, P. G.
1982-08-01
Geostatistical data analysis techniques were used to stochastically model the spatial variability of groundwater pore velocity in a potential waste repository site. Kriging algorithms were applied to Hanford Reservation data to estimate hydraulic conductivities, hydraulic head gradients, and pore velocities. A first-order Taylor series expansion for pore velocity was used to statistically combine hydraulic conductivity, hydraulic head gradient, and effective porosity surfaces and uncertainties to characterize the pore velocity uncertainty. Use of these techniques permits the estimation of pore velocity uncertainties when pore velocity measurements do not exist. Large pore velocity estimation uncertainties were found to be located in the region where the hydraulic head gradient relative uncertainty was maximal.
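The first-order Taylor-series combination described above can be sketched for a pore velocity of the form v = K i / n (hydraulic conductivity times head gradient over effective porosity); the parameter values and uncertainties are invented for illustration, not Hanford data:

```python
import math

# pore velocity v = K * i / n; illustrative values with 1-sigma uncertainties
K, sK = 1.5e-4, 0.5e-4   # hydraulic conductivity, m/s
i, si = 2.0e-3, 0.4e-3   # hydraulic head gradient, dimensionless
n, sn = 0.25, 0.05       # effective porosity, dimensionless

v = K * i / n

# first-order Taylor propagation: for a product/quotient of independent
# inputs, the relative variances add
rel_var = (sK / K) ** 2 + (si / i) ** 2 + (sn / n) ** 2
sv = v * math.sqrt(rel_var)
print(f"pore velocity v = {v:.3e} m/s +/- {sv:.3e} m/s")
```

The squared relative terms also show at a glance which input (here the conductivity) dominates the velocity uncertainty.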
NASA Astrophysics Data System (ADS)
Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.
2013-09-01
We study uncertainty quantification in remote sensing of aerosols in the atmosphere with top of the atmosphere reflectance measurements from the nadir-viewing Ozone Monitoring Instrument (OMI). Focus is on the uncertainty in aerosol model selection of pre-calculated aerosol models and on the statistical modelling of the model inadequacies. The aim is to apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness (AOT) retrieval by propagating model selection and model error related uncertainties more realistically. We utilise Bayesian model selection and model averaging methods for the model selection problem and use Gaussian processes to model the smooth systematic discrepancies between the modelled and the observed reflectance. The systematic model error is learned from an ensemble of operational retrievals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud-free, over-land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques. The method is demonstrated with four examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara desert dust. The presented statistical methodology is general; it is not restricted to this particular satellite retrieval application.
Uncertainty compliant design flood estimation
NASA Astrophysics Data System (ADS)
Botto, A.; Ganora, D.; Laio, F.; Claps, P.
2014-05-01
Hydraulic infrastructures are commonly designed with reference to target values of flood peak, estimated using probabilistic techniques such as flood frequency analysis. The application of these techniques involves levels of uncertainty which are sometimes quantified but normally not accounted for explicitly in the decision regarding design discharges. The present approach aims to define a procedure which enables the estimation of Uncertainty Compliant Design (UNCODE) values of flood peaks. To pursue this goal, we first demonstrate the equivalence of the Standard design based on the return period and the cost-benefit procedure, when linear cost and damage functions are used. We then use this result to assign an expected cost to estimation errors, thus setting a framework to obtain a design flood estimator which minimizes the total expected cost. This procedure properly accounts for the uncertainty which is inherent in the frequency curve estimation. Application of the UNCODE procedure to real cases leads to a remarkable displacement of the design flood from the Standard values. UNCODE estimates are systematically larger than the Standard ones, with substantial differences (up to 55%) when large return periods or short data samples are considered.
Uncertainty and Cognitive Control
Mushtaq, Faisal; Bland, Amy R.; Schaefer, Alexandre
2011-01-01
A growing trend of neuroimaging, behavioral, and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we review evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) There is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) There is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) The perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the "need for control"; (4) Potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders. PMID:22007181
Reducing Zero-point Systematics in Dark Energy Supernova Experiments
Faccioli, Lorenzo; Kim, Alex G; Miquel, Ramon; Bernstein, Gary; Bonissent, Alain; Brown, Matthew; Carithers, William; Christiansen, Jodi; Connolly, Natalia; Deustua, Susana; Gerdes, David; Gladney, Larry; Kushner, Gary; Linder, Eric; McKee, Shawn; Mostek, Nick; Shukla, Hemant; Stebbins, Albert; Stoughton, Chris; Tucker, David
2011-04-01
We study the effect of filter zero-point uncertainties on future supernova dark energy missions. Fitting for calibration parameters using simultaneous analysis of all Type Ia supernova standard candles achieves a significant improvement over more traditional fit methods. This conclusion is robust under diverse experimental configurations (number of observed supernovae, maximum survey redshift, inclusion of additional systematics). This approach to supernova fitting considerably eases otherwise stringent mission calibration requirements. As an example, we simulate a space-based mission based on the proposed JDEM satellite; however, the method and conclusions are general and valid for any future supernova dark energy mission, ground- or space-based.
RUMINATIONS ON NDA MEASUREMENT UNCERTAINTY COMPARED TO DA UNCERTAINTY
Salaymeh, S.; Ashley, W.; Jeffcoat, R.
2010-06-17
It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support: criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete, or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail for the subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non-routine situations. This is because data interpretation often involves complex algorithms and logic combined in a highly intertwined way. The methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to developing an understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice and embracing the continuous-improvement philosophy, a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in nondestructive assay of nuclear materials. This paper outlines the needs analysis, objectives, and ongoing development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community, we hope this article will encourage dialog and sharing of best practice, and furthermore motivate developers to revisit the treatment of measurement uncertainty.
Quantifying uncertainty from material inhomogeneity.
Battaile, Corbett Chandler; Emery, John M.; Brewer, Luke N.; Boyce, Brad Lee
2009-09-01
Most engineering materials are inherently inhomogeneous in their processing, internal structure, properties, and performance. Their properties are therefore statistical rather than deterministic. These inhomogeneities manifest across multiple length and time scales, leading to variabilities, i.e. statistical distributions, that are necessary to accurately describe each stage in the process-structure-properties hierarchy, and are ultimately the primary source of uncertainty in performance of the material and component. When localized events are responsible for component failure, or when component dimensions are on the order of microstructural features, this uncertainty is particularly important. For ultra-high reliability applications, the uncertainty is compounded by a lack of data describing the extremely rare events. Hands-on testing alone cannot supply sufficient data for this purpose. To date, there is no robust or coherent method to quantify this uncertainty so that it can be used in a predictive manner at the component length scale. The research presented in this report begins to address this lack of capability through a systematic study of the effects of microstructure on the strain concentration at a hole. To achieve the strain concentration, small circular holes (approximately 100 µm in diameter) were machined into brass tensile specimens using a femto-second laser. The brass was annealed at 450 °C, 600 °C, and 800 °C to produce three hole-to-grain size ratios of approximately 7, 1, and 1/7. Electron backscatter diffraction experiments were used to guide the construction of digital microstructures for finite element simulations of uniaxial tension. Digital image correlation experiments were used to qualitatively validate the numerical simulations. The simulations were performed iteratively to generate statistics describing the distribution of plastic strain at the hole in varying microstructural environments. In both the experiments and simulations, the
Systematic reviews need systematic searchers
McGowan, Jessie; Sampson, Margaret
2005-01-01
Purpose: This paper will provide a description of the methods, skills, and knowledge of expert searchers working on systematic review teams. Brief Description: Systematic reviews and meta-analyses are very important to health care practitioners, who need to keep abreast of the medical literature and make informed decisions. Searching is a critical part of conducting these systematic reviews, as errors made in the search process potentially result in a biased or otherwise incomplete evidence base for the review. Searches for systematic reviews need to be constructed to maximize recall and deal effectively with a number of potentially biasing factors. Librarians who conduct the searches for systematic reviews must be experts. Discussion/Conclusion: Expert searchers need to understand the specifics about data structure and functions of bibliographic and specialized databases, as well as the technical and methodological issues of searching. Search methodology must be based on research about retrieval practices, and it is vital that expert searchers keep informed about, advocate for, and, moreover, conduct research in information retrieval. Expert searchers are an important part of the systematic review team, crucial throughout the review process—from the development of the proposal and research question to publication. PMID:15685278
The Scientific Basis of Uncertainty Factors Used in Setting Occupational Exposure Limits.
Dankovic, D A; Naumann, B D; Maier, A; Dourson, M L; Levy, L S
2015-01-01
The uncertainty factor concept is integrated into health risk assessments for all aspects of public health practice, including by most organizations that derive occupational exposure limits. The use of uncertainty factors is predicated on the assumption that a sufficient reduction in exposure from those at the boundary for the onset of adverse effects will yield a safe exposure level for at least the great majority of the exposed population, including vulnerable subgroups. There are differences in the application of the uncertainty factor approach among groups that conduct occupational assessments; however, there are common areas of uncertainty which are considered by all or nearly all occupational exposure limit-setting organizations. Five key uncertainties that are often examined include interspecies variability in response when extrapolating from animal studies to humans, response variability in humans, uncertainty in estimating a no-effect level from a dose where effects were observed, extrapolation from shorter duration studies to a full life-time exposure, and other insufficiencies in the overall health effects database indicating that the most sensitive adverse effect may not have been evaluated. In addition, a modifying factor is used by some organizations to account for other remaining uncertainties, typically related to exposure scenarios or to the interplay among the five areas noted above. Consideration of uncertainties in occupational exposure limit derivation is a systematic process whereby the factors applied are not arbitrary, although they are mathematically imprecise. As the scientific basis for uncertainty factor application has improved, default uncertainty factors are now used only in the absence of chemical-specific data, and the trend is to replace them with chemical-specific adjustment factors whenever possible. The increased application of scientific data in the development of uncertainty factors for individual chemicals also has
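The composite-factor arithmetic behind such derivations can be sketched as follows; the point of departure and the individual factor values are hypothetical illustrations, not any organization's defaults:

```python
# hypothetical animal no-observed-adverse-effect level (point of departure)
noael = 50.0  # mg/m^3

# illustrative uncertainty factors for the areas discussed above
factors = {
    "interspecies (animal to human)": 3.0,
    "human interindividual variability": 10.0,
    "shorter-duration to lifetime exposure": 3.0,
}

# the composite factor is the product of the individual factors
composite = 1.0
for f in factors.values():
    composite *= f

# candidate exposure limit: point of departure divided by composite factor
oel = noael / composite
print(f"composite UF = {composite:g}, candidate OEL = {oel:.2f} mg/m^3")
```

Replacing any default factor with a chemical-specific adjustment factor simply swaps the corresponding entry in the product.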
Uncertainty in hydrological signatures
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; McMillan, H. K.
2015-09-01
Information about rainfall-runoff processes is essential for hydrological analyses, modelling and water-management applications. A hydrological, or diagnostic, signature quantifies such information from observed data as an index value. Signatures are widely used, e.g. for catchment classification, model calibration and change detection. Uncertainties in the observed data - including measurement inaccuracy and representativeness as well as errors relating to data management - propagate to the signature values and reduce their information content. Subjective choices in the calculation method are a further source of uncertainty. We review the uncertainties relevant to different signatures based on rainfall and flow data. We propose a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrate it in two catchments for common signatures including rainfall-runoff thresholds, recession analysis and basic descriptive signatures of flow distribution and dynamics. Our intention is to contribute to awareness and knowledge of signature uncertainty, including typical sources, magnitude and methods for its assessment. We found that the uncertainties were often large (i.e. typical intervals of ±10-40 % relative uncertainty) and highly variable between signatures. There was greater uncertainty in signatures that use high-frequency responses, small data subsets, or subsets prone to measurement errors. There was lower uncertainty in signatures that use spatial or temporal averages. Some signatures were sensitive to particular uncertainty types such as rating-curve form. We found that signatures can be designed to be robust to some uncertainty sources. Signature uncertainties of the magnitudes we found have the potential to change the conclusions of hydrological and ecohydrological analyses, such as cross-catchment comparisons or inferences about dominant processes.
Uncertainty in hydrological signatures
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; McMillan, H. K.
2015-04-01
Information about rainfall-runoff processes is essential for hydrological analyses, modelling and water-management applications. A hydrological, or diagnostic, signature quantifies such information from observed data as an index value. Signatures are widely used, including for catchment classification, model calibration and change detection. Uncertainties in the observed data - including measurement inaccuracy and representativeness as well as errors relating to data management - propagate to the signature values and reduce their information content. Subjective choices in the calculation method are a further source of uncertainty. We review the uncertainties relevant to different signatures based on rainfall and flow data. We propose a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrate it in two catchments for common signatures including rainfall-runoff thresholds, recession analysis and basic descriptive signatures of flow distribution and dynamics. Our intention is to contribute to awareness and knowledge of signature uncertainty, including typical sources, magnitude and methods for its assessment. We found that the uncertainties were often large (i.e. typical intervals of ±10-40% relative uncertainty) and highly variable between signatures. There was greater uncertainty in signatures that use high-frequency responses, small data subsets, or subsets prone to measurement errors. There was lower uncertainty in signatures that use spatial or temporal averages. Some signatures were sensitive to particular uncertainty types such as rating-curve form. We found that signatures can be designed to be robust to some uncertainty sources. Signature uncertainties of the magnitudes we found have the potential to change the conclusions of hydrological and ecohydrological analyses, such as cross-catchment comparisons or inferences about dominant processes.
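The Monte Carlo approach described in the abstract above can be sketched in a few lines. The synthetic flow record, the ±10% multiplicative rating-curve error, and the choice of Q95 as the example signature are illustrative assumptions, not the authors' data or exact method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily flow record (mm/day); in practice this would come
# from a rating curve applied to observed stage data.
flow = rng.gamma(shape=2.0, scale=1.5, size=365)

def q95(series):
    """Signature: flow exceeded 95% of the time (a low-flow index)."""
    return np.percentile(series, 5)

# Assume a +/-10% (1-sigma) multiplicative rating-curve error,
# lognormally distributed, applied independently to each day.
n_mc = 5000
sig_samples = np.empty(n_mc)
for i in range(n_mc):
    perturbed = flow * rng.lognormal(mean=0.0, sigma=0.1, size=flow.size)
    sig_samples[i] = q95(perturbed)

best = q95(flow)
lo, hi = np.percentile(sig_samples, [2.5, 97.5])
print(f"Q95 = {best:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```

The same pattern applies to any signature: perturb the input data according to an assumed error model, recompute the signature, and summarise the spread of the results.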
Uncertainty of decibel levels.
Taraldsen, Gunnar; Berge, Truls; Haukland, Frode; Lindqvist, Bo Henry; Jonasson, Hans
2015-09-01
The mean sound exposure level from a source is routinely estimated by the mean of the observed sound exposures from repeated measurements. A formula for the standard uncertainty based on the Guide to the expression of uncertainty in measurement (GUM) is derived. An alternative formula is derived for the case where the GUM method fails. The formulas are applied to several examples and compared with a Monte Carlo calculation of the standard uncertainty. The recommended formula can be seen simply as a convenient translation of the uncertainty on an energy scale into the decibel level scale, but with a theoretical foundation. PMID:26428824
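The "translation of the uncertainty on an energy scale into the decibel level scale" can be illustrated with a first-order GUM-style sketch; the measurement values and the specific steps below are assumptions for illustration, not the paper's derivation.

```python
import math

# Hypothetical repeated sound exposure level measurements (dB).
levels_db = [78.2, 79.1, 77.6, 80.3, 78.8]

# Convert to the energy scale, average, convert back to a level.
energies = [10 ** (L / 10) for L in levels_db]
n = len(energies)
mean_e = sum(energies) / n
mean_level = 10 * math.log10(mean_e)

# Standard uncertainty of the mean on the energy scale.
var_e = sum((e - mean_e) ** 2 for e in energies) / (n - 1)
u_mean_e = math.sqrt(var_e / n)

# First-order (GUM) translation to the decibel scale: dL/dE = 10 / (E ln 10),
# so u_L ~= (10 / ln 10) * u_E / E.
u_level_db = (10 / math.log(10)) * u_mean_e / mean_e
print(f"mean level = {mean_level:.2f} dB, u = {u_level_db:.2f} dB")
```

Note that averaging on the energy scale gives a mean level at or above the arithmetic mean of the decibel values, since the energy mean is dominated by the louder events.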
Uncertainty in hydrological signatures
NASA Astrophysics Data System (ADS)
McMillan, Hilary; Westerberg, Ida
2015-04-01
Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing responses of different catchments to understand catchment organisation and similarity, and for many other modelling and water-management applications. Such information, derived as index values from observed data, is known as hydrological signatures; these can include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), flow variability, the flow duration curve, and the runoff ratio. Because the hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitude and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures. The study was carried out for two data-rich catchments, the 50 km2 Mahurangi catchment in New Zealand and the 135 km2 Brue catchment in the UK. For rainfall data the uncertainty sources included point measurement uncertainty, the number of gauges used in the calculation of the catchment spatial average, and uncertainties relating to lack of quality control. For flow data the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve. The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty
Bartine, D.E.; Cacuci, D.G.
1983-09-13
This paper describes sources of uncertainty in the data used for calculating dose estimates for the Hiroshima explosion and details a methodology for systematically obtaining best estimates and reduced uncertainties for the radiation doses received. (ACR)
Saccade Adaptation and Visual Uncertainty
Souto, David; Gegenfurtner, Karl R.; Schütz, Alexander C.
2016-01-01
Visual uncertainty may affect saccade adaptation in two complementary ways. First, an ideal adaptor should take into account the reliability of visual information for determining the amount of correction, predicting that increasing visual uncertainty should decrease adaptation rates. We tested this by comparing observers' direction discrimination and adaptation rates in an intra-saccadic-step paradigm. Second, clearly visible target steps may generate a slower adaptation rate since the error can be attributed to an external cause, instead of an internal change in the visuo-motor mapping that needs to be compensated. We tested this prediction by measuring saccade adaptation to different step sizes. Most remarkably, we found little correlation between estimates of visual uncertainty and adaptation rates and no slower adaptation rates with more visible step sizes. Additionally, we show that for low contrast targets backward steps are perceived as stationary after the saccade, but that adaptation rates are independent of contrast. We suggest that the saccadic system uses different position signals for adapting dysmetric saccades and for generating a trans-saccadic stable visual percept, explaining that saccade adaptation is found to be independent of visual uncertainty. PMID:27252635
Attitudes toward Others Depend upon Self and Other Causal Uncertainty
Tobin, Stephanie J.; Osika, Matylda M.; McLanders, Mia
2014-01-01
People who are high in causal uncertainty doubt their own ability to understand the causes of social events. In three studies, we examined the effects of target and perceiver causal uncertainty on attitudes toward the target. Target causal uncertainty was manipulated via responses on a causal uncertainty scale in Studies 1 and 2, and with a scenario in Study 3. In Studies 1 and 2, we found that participants liked the low causal uncertainty target more than the high causal uncertainty target. This preference was stronger for low relative to high causal uncertainty participants because high causal uncertainty participants held more uncertain ideals. In Study 3, we examined the value individuals place upon causal understanding (causal importance) as an additional moderator. We found that regardless of their own causal uncertainty level, participants who were high in causal importance liked the low causal uncertainty target more than the high causal uncertainty target. However, when participants were low in causal importance, low causal uncertainty perceivers showed no preference and high causal uncertainty perceivers preferred the high causal uncertainty target. These findings reveal that goal importance and ideals can influence how perceivers respond to causal uncertainty in others. PMID:24504048
MOUSE UNCERTAINTY ANALYSIS SYSTEM
The original MOUSE (Modular Oriented Uncertainty System) system was designed to deal with the problem of uncertainties in environmental engineering calculations, such as a set of engineering cost or risk analysis equations. It was especially intended for use by individuals with li...
Electoral Knowledge and Uncertainty.
ERIC Educational Resources Information Center
Blood, R. Warwick; And Others
Research indicates that the media play a role in shaping the information that voters have about election options. Knowledge of those options has been related to actual vote, but has not been shown to be strongly related to uncertainty. Uncertainty, however, does seem to motivate voters to engage in communication activities, some of which may…
Spencer, Michael
1974-01-01
Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857
Physical Uncertainty Bounds (PUB)
Vaughan, Diane Elizabeth; Preston, Dean L.
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Economic uncertainty and econophysics
NASA Astrophysics Data System (ADS)
Schinckus, Christophe
2009-10-01
The objective of this paper is to provide a methodological link between econophysics and economics. I will study a key notion of both fields: uncertainty, and the ways of thinking about it developed by the two disciplines. After having presented the main economic theories of uncertainty (provided by Knight, Keynes and Hayek), I show how this notion is paradoxically excluded from the economic field. In economics, uncertainty is totally reduced by an a priori Gaussian framework; in contrast, econophysics does not use a priori models because it works directly on data. Uncertainty is then not shaped by a specific model, and is partially and temporarily reduced as models improve. This way of thinking about uncertainty has echoes in the economic literature. By presenting econophysics as a Knightian method, and a complementary approach to a Hayekian framework, this paper shows that econophysics can be methodologically justified from an economic point of view.
NASA Astrophysics Data System (ADS)
Sciacchitano, Andrea; Wieneke, Bernhard
2016-08-01
This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It is shown that the uncertainty of vorticity and velocity divergence requires the knowledge of the spatial correlation between the error of the x and y particle image displacement, which depends upon the measurement spatial resolution. The uncertainty of statistical quantities is often dominated by the random uncertainty due to the finite sample size and decreases with the square root of the effective number of independent samples. Monte Carlo simulations are conducted to assess the accuracy of the uncertainty propagation formulae. Furthermore, three experimental assessments are carried out. In the first experiment, a turntable is used to simulate a rigid rotation flow field. The estimated uncertainty of the vorticity is compared with the actual vorticity error root-mean-square, with differences between the two quantities within 5–10% for different interrogation window sizes and overlap factors. A turbulent jet flow is investigated in the second experimental assessment. The reference velocity, which is used to compute the reference value of the instantaneous flow properties of interest, is obtained with an auxiliary PIV system, which features a higher dynamic range than the measurement system. Finally, the uncertainty quantification of statistical quantities is assessed via PIV measurements in a cavity flow. The comparison between estimated uncertainty and actual error demonstrates the accuracy of the proposed uncertainty propagation methodology.
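The role of the spatial error correlation in the vorticity uncertainty can be sketched for a central-difference scheme; the homogeneous-uncertainty assumption and the function below are an illustrative simplification, not the authors' exact formulation.

```python
import numpy as np

def vorticity_uncertainty(sigma_u, sigma_v, rho, dx, dy):
    """First-order uncertainty of central-difference vorticity
    omega = dv/dx - du/dy, assuming homogeneous displacement
    uncertainties sigma_u, sigma_v and a spatial correlation rho
    between the errors at the two points of each central difference.

    Var[(v2 - v1) / (2 dx)] = 2 sigma_v^2 (1 - rho) / (2 dx)^2
    """
    var_dvdx = 2 * sigma_v**2 * (1 - rho) / (2 * dx) ** 2
    var_dudy = 2 * sigma_u**2 * (1 - rho) / (2 * dy) ** 2
    return np.sqrt(var_dvdx + var_dudy)

# Uncorrelated errors (rho = 0) vs. 50% correlated errors: positive
# correlation between neighbouring errors reduces the propagated
# uncertainty of the difference.
u0 = vorticity_uncertainty(0.1, 0.1, 0.0, 1.0, 1.0)
u5 = vorticity_uncertainty(0.1, 0.1, 0.5, 1.0, 1.0)
print(u0, u5)
```

This is why the paper stresses that the correlation between neighbouring displacement errors, set by the measurement spatial resolution, must be known before vorticity or divergence uncertainties can be quantified.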
The Scientific Basis of Uncertainty Factors Used in Setting Occupational Exposure Limits
Dankovic, D. A.; Naumann, B. D.; Maier, A.; Dourson, M. L.; Levy, L. S.
2015-01-01
The uncertainty factor concept is integrated into health risk assessments for all aspects of public health practice, including by most organizations that derive occupational exposure limits. The use of uncertainty factors is predicated on the assumption that a sufficient reduction in exposure from those at the boundary for the onset of adverse effects will yield a safe exposure level for at least the great majority of the exposed population, including vulnerable subgroups. There are differences in the application of the uncertainty factor approach among groups that conduct occupational assessments; however, there are common areas of uncertainty which are considered by all or nearly all occupational exposure limit-setting organizations. Five key uncertainties that are often examined include interspecies variability in response when extrapolating from animal studies to humans, response variability in humans, uncertainty in estimating a no-effect level from a dose where effects were observed, extrapolation from shorter duration studies to a full life-time exposure, and other insufficiencies in the overall health effects database indicating that the most sensitive adverse effect may not have been evaluated. In addition, a modifying factor is used by some organizations to account for other remaining uncertainties—typically related to exposure scenarios or accounting for the interplay among the five areas noted above. Consideration of uncertainties in occupational exposure limit derivation is a systematic process whereby the factors applied are not arbitrary, although they are mathematically imprecise. As the scientific basis for uncertainty factor application has improved, default uncertainty factors are now used only in the absence of chemical-specific data, and the trend is to replace them with chemical-specific adjustment factors whenever possible. The increased application of scientific data in the development of uncertainty factors for individual chemicals also
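The composite-factor arithmetic behind an occupational exposure limit derivation can be sketched as follows; the factor values and the point of departure are hypothetical defaults for illustration, not taken from any specific assessment.

```python
# Hypothetical default uncertainty factors; real assessments replace
# these with chemical-specific adjustment factors where data allow.
uncertainty_factors = {
    "interspecies": 10,        # animal-to-human extrapolation
    "intraspecies": 10,        # human response variability
    "loael_to_noael": 3,       # observed-effect dose to no-effect level
    "subchronic_to_chronic": 3,  # shorter study to lifetime exposure
    "database": 1,             # database judged adequate here
}

point_of_departure = 90.0  # mg/m3, hypothetical NOAEL

# The composite factor is the product of the individual factors.
composite_uf = 1
for factor in uncertainty_factors.values():
    composite_uf *= factor

exposure_limit = point_of_departure / composite_uf
print(composite_uf, exposure_limit)
```

The multiplication of several factors of 3 and 10 is exactly why the abstract describes the process as systematic but "mathematically imprecise": each factor is a judgement, and their product spans orders of magnitude.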
Assessing MODIS Macrophysical Cloud Property Uncertainties
NASA Astrophysics Data System (ADS)
Maddux, B. C.; Ackerman, S. A.; Frey, R.; Holz, R.
2013-12-01
Cloud, being multifarious and ephemeral, is difficult to observe and quantify in a systematic way. Even basic terminology used to describe cloud observations is fraught with ambiguity in the scientific literature. Any observational technique, method, or platform will contain inherent and unavoidable measurement uncertainties. Quantifying these uncertainties in cloud observations is a complex task that requires an understanding of all aspects of the measurement. We use cloud observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) to derive metrics of the uncertainty of its cloud observations. Our uncertainty analyses will contain two main components: 1) an estimate of bias and uncertainty with respect to active measurements from CALIPSO, and 2) a relative uncertainty within the MODIS cloud climatologies themselves. Our method will link uncertainty to the physical observation and its environmental/scene characteristics. Our aim is to create statistical uncertainties that are based on the cloud observational values, satellite view geometry, surface type, etc., for cloud amount and cloud top pressure. The MODIS instruments on the NASA Terra and Aqua satellites provide observations over a broad spectral range (36 bands between 0.415 and 14.235 micron) and high spatial resolution (250 m for two bands, 500 m for five bands, 1000 m for 29 bands), which the MODIS cloud mask algorithm (MOD35) utilizes to provide clear/cloud determinations over a wide array of surface types, solar illuminations and view geometries. For this study we use the standard MODIS products, MOD03, MOD06 and MOD35, all of which were obtained from the NASA Level 1 and Atmosphere Archive and Distribution System.
Uncertainty of upland soil carbon sink estimate for Finland
NASA Astrophysics Data System (ADS)
Lehtonen, Aleksi; Heikkinen, Juha
2016-04-01
Changes in the soil carbon stock of Finnish upland soils were quantified using forest inventory data, forest statistics, biomass models, litter turnover rates, and the Yasso07 soil model. Uncertainty in the estimated stock changes was assessed by combining model and sampling errors associated with the various data sources into variance-covariance matrices that allowed computationally efficient error propagation in the context of Yasso07 simulations. In sensitivity analysis, we found that the uncertainty increased drastically as a result of adding random year-to-year variation to the litter input. Such variation is smoothed out when using periodic inventory data with constant biomass models and turnover rates. Model errors (biomass, litter, understorey vegetation) and the systematic error of total drain had a marginal effect on the uncertainty regarding soil carbon stock change. Most of the uncertainty appears to be related to uncaptured annual variation in litter amounts. This is because variation in the slopes of litter input trends dictates the uncertainty of soil carbon stock change. If we assume that there is annual variation only in foliage and fine root litter rates and that this variation is less than 10% from year to year, then we can claim that Finnish upland forest soils have accumulated carbon during the first Kyoto period (2008-2012). The results of the study underline the superiority of permanent sample plots over temporary ones when soil-model litter input trends are estimated from forest inventory data. In addition, we found that the use of IPCC guidelines leads to underestimation of the uncertainty of soil carbon stock change. This underestimation of the error results from the guidance to remove inter-annual variation from the model inputs, here illustrated with constant litter life spans. Model assumptions and model input estimation should be evaluated critically, when GHG-inventory results are used for policy planning
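The variance-covariance error propagation mentioned above can be sketched with a first-order (delta-method) calculation; the standard errors, correlations, and sensitivities below are invented for illustration, not the study's values.

```python
import numpy as np

# Hypothetical 1-sigma errors and correlations for three litter-input terms
# (e.g. foliage litter, fine-root litter, understorey vegetation).
sigma = np.array([0.15, 0.10, 0.20])
corr = np.array([
    [1.0, 0.3, 0.0],
    [0.3, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
cov = np.outer(sigma, sigma) * corr  # variance-covariance matrix

# Sensitivities (Jacobian) of the stock-change estimate to each input.
jac = np.array([1.0, 0.8, 0.5])

# First-order propagation: var(f) = J Sigma J^T.
var_f = jac @ cov @ jac
print(np.sqrt(var_f))
```

Assembling one covariance matrix per error source and propagating it through the Jacobian is what makes this kind of uncertainty assessment computationally cheap compared with full Monte Carlo reruns of the soil model.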
Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.
2012-07-01
Sampling based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of uncertainty is minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible as compared to the impact from sampling the epistemic uncertainties. Obviously, this process may cause high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples with a much smaller number of Monte Carlo histories each. The method is applied along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va to various critical assemblies and a full scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, with a reduction of computing time by factors of the order of 100. (authors)
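The two-series idea can be illustrated with a toy model: the paired samples share the same epistemic parameter draw but carry independent aleatoric (Monte Carlo) noise, so the covariance of the paired series isolates the epistemic variance. All numbers below are illustrative, not from the XSUSA/KENO-Va application.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200

# Epistemic part: an uncertain input parameter sampled once per pair.
epistemic = rng.normal(loc=1.0, scale=0.05, size=n_samples)

# Two series sharing the epistemic draws but with independent
# "Monte Carlo" (aleatoric) noise, as from a run with few histories.
aleatoric_sd = 0.08
series_a = epistemic + rng.normal(0.0, aleatoric_sd, n_samples)
series_b = epistemic + rng.normal(0.0, aleatoric_sd, n_samples)

# The covariance of the paired series estimates the epistemic variance
# alone; the aleatoric noise is independent between the series and
# cancels on average, so few histories per run suffice.
epistemic_var_est = np.cov(series_a, series_b)[0, 1]
total_var = np.var(series_a, ddof=1)
print(epistemic_var_est, total_var)
```

The cost saving comes from not having to drive the aleatoric noise itself to zero: it only needs to be independent between the two series.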
NASA Astrophysics Data System (ADS)
Povey, A. C.; Grainger, R. G.
2015-08-01
This paper discusses a best-practice representation of uncertainty in satellite remote sensing data. An estimate of uncertainty is necessary to make appropriate use of the information conveyed by a measurement. Traditional error propagation quantifies the uncertainty in a measurement due to well-understood perturbations in a measurement and auxiliary data - known, quantified "unknowns". The underconstrained nature of most satellite remote sensing observations requires the use of various approximations and assumptions that produce non-linear systematic errors that are not readily assessed - known, unquantifiable "unknowns". Additional errors result from the inability to resolve all scales of variation in the measured quantity - unknown "unknowns". The latter two categories of error are dominant in underconstrained remote sensing retrievals and the difficulty of their quantification limits the utility of existing uncertainty estimates, degrading confidence in such data. This paper proposes the use of ensemble techniques to present multiple self-consistent realisations of a data set as a means of depicting unquantified uncertainties. These are generated using various systems (different algorithms or forward models) believed to be appropriate to the conditions observed. Benefiting from the experience of the climate modelling community, an ensemble provides a user with a more complete representation of the uncertainty as understood by the data producer and greater freedom to consider different realisations of the data.
Uncertainty Evaluation of the Diffusive Gradients in Thin Films Technique
2015-01-01
Although the analytical performance of the diffusive gradients in thin films (DGT) technique is well investigated, there is no systematic analysis of the DGT measurement uncertainty and its sources. In this study we determine the uncertainties of bulk DGT measurements (not considering labile complexes) and of DGT-based chemical imaging using laser ablation - inductively coupled plasma mass spectrometry. We show that under well-controlled experimental conditions the relative combined uncertainties of bulk DGT measurements are ∼10% at a confidence interval of 95%. While several factors considerably contribute to the uncertainty of bulk DGT, the uncertainty of DGT LA-ICP-MS mainly depends on the signal variability of the ablation analysis. The combined uncertainties determined in this study support the use of DGT as a monitoring instrument. It is expected that the analytical requirements of legal frameworks, for example, the EU Drinking Water Directive, are met by DGT sampling. PMID:25579402
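A GUM-style root-sum-square combination of relative uncertainty components, of the kind that yields a combined uncertainty of roughly 10% at a 95% confidence level, can be sketched as follows; the component budget is hypothetical, not the paper's actual breakdown.

```python
import math

# Hypothetical relative standard uncertainties (as fractions) for a
# bulk DGT measurement under well-controlled conditions.
components = {
    "diffusion_coefficient": 0.03,
    "gel_thickness": 0.02,
    "deployment_time": 0.005,
    "temperature": 0.01,
    "elution_and_analysis": 0.025,
}

# Combined relative standard uncertainty, assuming uncorrelated inputs
# (GUM root-sum-square rule).
u_c = math.sqrt(sum(u ** 2 for u in components.values()))

# Expanded uncertainty at ~95% confidence (coverage factor k = 2).
U = 2 * u_c
print(f"u_c = {u_c:.3f}, U(k=2) = {U:.3f}")
```

With this (invented) budget the expanded uncertainty lands near 10%, and the budget also shows which components would repay tighter control.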
NASA Astrophysics Data System (ADS)
Carpenter, Kenneth; Currie, Philip J.
1992-07-01
In recent years dinosaurs have captured the attention of the public at an unprecedented level. At the heart of this resurgence in popular interest is an increased level of research activity, much of which is innovative in the field of paleontology. For instance, whereas earlier paleontological studies emphasized basic morphologic description and taxonomic classification, modern studies attempt to examine the role and nature of dinosaurs as living animals. More than ever before, we understand how these extinct species functioned, behaved, interacted with each other and the environment, and evolved. Nevertheless, these studies rely on certain basic building blocks of knowledge, including facts about dinosaur anatomy and taxonomic relationships. One of the purposes of this volume is to unravel some of the problems surrounding dinosaur systematics and to increase our understanding of dinosaurs as a biological species. Dinosaur Systematics presents a current overview of dinosaur systematics using various examples to explore what is a species in a dinosaur, what separates genders in dinosaurs, what morphological changes occur with maturation of a species, and what morphological variations occur within a species.
Extended uncertainty from first principles
NASA Astrophysics Data System (ADS)
Costa Filho, Raimundo N.; Braga, João P. M.; Lira, Jorge H. S.; Andrade, José S.
2016-04-01
A translation operator acting in a space with a diagonal metric is introduced to describe the motion of a particle in a quantum system. We show that the momentum operator and, as a consequence, the uncertainty relation now depend on the metric. It is also shown that, for any metric expanded up to second order, this formalism naturally leads to an extended uncertainty principle (EUP) with a minimum momentum dispersion. The Ehrenfest theorem is modified to include an additional term related to a tidal force arising from the space curvature introduced by the metric. For one-dimensional systems, we show how to map a harmonic potential to an effective potential in Euclidean space using different metrics.
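A commonly used form of the extended uncertainty principle, and the minimum momentum dispersion it implies, can be written as follows (the notation with a positive constant $\alpha$ set by the metric is an assumption here; the paper's exact expression may differ):

```latex
\Delta x\,\Delta p \;\geq\; \frac{\hbar}{2}\left[1 + \alpha\,(\Delta x)^2\right]
\quad\Longrightarrow\quad
\Delta p \;\geq\; \frac{\hbar}{2}\left(\frac{1}{\Delta x} + \alpha\,\Delta x\right)
\;\geq\; \hbar\sqrt{\alpha}
```

The lower bound on the right-hand side is minimized at $\Delta x = 1/\sqrt{\alpha}$, which gives the minimum momentum dispersion $\Delta p_{\min} = \hbar\sqrt{\alpha}$.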
On the relationship between aerosol model uncertainty and radiative forcing uncertainty
NASA Astrophysics Data System (ADS)
Lee, Lindsay A.; Reddington, Carly L.; Carslaw, Kenneth S.
2016-05-01
The largest uncertainty in the historical radiative forcing of climate is caused by the interaction of aerosols with clouds. Historical forcing is not a directly measurable quantity, so reliable assessments depend on the development of global models of aerosols and clouds that are well constrained by observations. However, there has been no systematic assessment of how reduction in the uncertainty of global aerosol models will feed through to the uncertainty in the predicted forcing. We use a global model perturbed parameter ensemble to show that tight observational constraint of aerosol concentrations in the model has a relatively small effect on the aerosol-related uncertainty in the calculated forcing between preindustrial and present-day periods. One factor is the low sensitivity of present-day aerosol to natural emissions that determine the preindustrial aerosol state. However, the major cause of the weak constraint is that the full uncertainty space of the model generates a large number of model variants that are equally acceptable compared to present-day aerosol observations. The narrow range of aerosol concentrations in the observationally constrained model gives the impression of low aerosol model uncertainty. However, these multiple “equifinal” models predict a wide range of forcings. To make progress, we need to develop a much deeper understanding of model uncertainty and ways to use observations to constrain it. Equifinality in the aerosol model means that tuning of a small number of model processes to achieve model‑observation agreement could give a misleading impression of model robustness.
On the relationship between aerosol model uncertainty and radiative forcing uncertainty.
Lee, Lindsay A; Reddington, Carly L; Carslaw, Kenneth S
2016-05-24
The largest uncertainty in the historical radiative forcing of climate is caused by the interaction of aerosols with clouds. Historical forcing is not a directly measurable quantity, so reliable assessments depend on the development of global models of aerosols and clouds that are well constrained by observations. However, there has been no systematic assessment of how reduction in the uncertainty of global aerosol models will feed through to the uncertainty in the predicted forcing. We use a global model perturbed parameter ensemble to show that tight observational constraint of aerosol concentrations in the model has a relatively small effect on the aerosol-related uncertainty in the calculated forcing between preindustrial and present-day periods. One factor is the low sensitivity of present-day aerosol to natural emissions that determine the preindustrial aerosol state. However, the major cause of the weak constraint is that the full uncertainty space of the model generates a large number of model variants that are equally acceptable compared to present-day aerosol observations. The narrow range of aerosol concentrations in the observationally constrained model gives the impression of low aerosol model uncertainty. However, these multiple "equifinal" models predict a wide range of forcings. To make progress, we need to develop a much deeper understanding of model uncertainty and ways to use observations to constrain it. Equifinality in the aerosol model means that tuning of a small number of model processes to achieve model-observation agreement could give a misleading impression of model robustness. PMID:26848136
The Crucial Role of Error Correlation for Uncertainty Modeling of CFD-Based Aerodynamics Increments
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Walker, Eric L.
2011-01-01
The Ares I ascent aerodynamics database for Design Cycle 3 (DAC-3) was built from wind-tunnel test results and CFD solutions. The wind tunnel results were used to build the baseline response surfaces for wind-tunnel Reynolds numbers at power-off conditions. The CFD solutions were used to build increments to account for Reynolds number effects. We calculate the validation errors for the primary CFD code results at wind tunnel Reynolds number power-off conditions and would like to be able to use those errors to predict the validation errors for the CFD increments. However, the validation errors are large compared to the increments. We suggest a way forward that is consistent with common practice in wind tunnel testing which is to assume that systematic errors in the measurement process and/or the environment will subtract out when increments are calculated, thus making increments more reliable with smaller uncertainty than absolute values of the aerodynamic coefficients. A similar practice has arisen for the use of CFD to generate aerodynamic database increments. The basis of this practice is the assumption of strong correlation of the systematic errors inherent in each of the results used to generate an increment. The assumption of strong correlation is the inferential link between the observed validation uncertainties at wind-tunnel Reynolds numbers and the uncertainties to be predicted for flight. In this paper, we suggest a way to estimate the correlation coefficient and demonstrate the approach using code-to-code differences that were obtained for quality control purposes during the Ares I CFD campaign. Finally, since we can expect the increments to be relatively small compared to the baseline response surface and to be typically of the order of the baseline uncertainty, we find that it is necessary to be able to show that the correlation coefficients are close to unity to avoid overinflating the overall database uncertainty with the addition of the increments.
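The role of the correlation coefficient can be made concrete with the standard propagation formula for the difference of two correlated quantities (a generic textbook relation with illustrative numbers, not the Ares I values):

```python
import math

def increment_uncertainty(sigma_a, sigma_b, rho):
    """Standard uncertainty of an increment A - B when the errors in A and B
    have correlation coefficient rho (generic propagation formula)."""
    return math.sqrt(sigma_a**2 + sigma_b**2 - 2.0 * rho * sigma_a * sigma_b)

# Strongly correlated systematic errors nearly cancel in the increment;
# uncorrelated errors add in quadrature and inflate it instead.
u_corr = increment_uncertainty(0.05, 0.05, 0.99)   # rho close to unity
u_indep = increment_uncertainty(0.05, 0.05, 0.0)
print(u_corr, u_indep)
```

This is why the paper stresses showing that the correlation coefficients are close to unity: with rho near zero, adding the increments inflates the overall database uncertainty.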
Statistical Uncertainty Analysis Applied to Criticality Calculation
Hartini, Entin; Andiwijayakusuma, Dinan; Susmikanti, Mike; Nursinta, A. W.
2010-06-22
In this paper, we present an uncertainty methodology based on a statistical approach for assessing uncertainties in criticality prediction with the Monte Carlo method due to uncertainties in the isotopic composition of the fuel. The methodology has been applied to criticality calculations with MCNP5 with additional stochastic input of the isotopic fuel composition. The stochastic inputs were generated using the Latin hypercube sampling method based on the probability density function of each nuclide composition. The automatic passing of the stochastic input to MCNP and the repeated criticality calculation are made possible by using a Python script to link MCNP and our Latin hypercube sampling code.
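A minimal Latin hypercube sampler of the kind the abstract describes can be sketched as follows (generic stratified sampling on the unit hypercube; mapping each column through a nuclide's inverse CDF, and the MCNP coupling itself, are omitted):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Minimal Latin hypercube sampler on [0, 1)^d: each dimension is divided
    into n_samples equal strata, and exactly one sample falls in each stratum."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        rng.shuffle(u[:, j])   # decouple the strata across dimensions
    return u

rng = np.random.default_rng(42)
samples = latin_hypercube(10, 3, rng)
# One sample per stratum in every dimension: sorting the stratum indices
# column-wise recovers 0..9 in each column.
strata = np.sort(np.floor(samples * 10).astype(int), axis=0)
print(strata[:, 0])
```

Each column of samples would then be mapped through the inverse CDF of the corresponding nuclide's distribution before being written into the MCNP input deck.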
Incorporating climate change into systematic conservation planning
Groves, Craig R.; Game, Edward T.; Anderson, Mark G.; Cross, Molly; Enquist, Carolyn; Ferdana, Zach; Girvetz, Evan; Gondor, Anne; Hall, Kimberly R.; Higgins, Jonathan; Marshall, Rob; Popper, Ken; Schill, Steve; Shafer, Sarah L.
2012-01-01
The principles of systematic conservation planning are now widely used by governments and non-government organizations alike to develop biodiversity conservation plans for countries, states, regions, and ecoregions. Many of the species and ecosystems these plans were designed to conserve are now being affected by climate change, and there is a critical need to incorporate new and complementary approaches into these plans that will aid species and ecosystems in adjusting to potential climate change impacts. We propose five approaches to climate change adaptation that can be integrated into existing or new biodiversity conservation plans: (1) conserving the geophysical stage, (2) protecting climatic refugia, (3) enhancing regional connectivity, (4) sustaining ecosystem process and function, and (5) capitalizing on opportunities emerging in response to climate change. We discuss both key assumptions behind each approach and the trade-offs involved in using the approach for conservation planning. We also summarize additional data beyond those typically used in systematic conservation plans required to implement these approaches. A major strength of these approaches is that they are largely robust to the uncertainty in how climate impacts may manifest in any given region.
Multi-thresholds for fault isolation in the presence of uncertainties.
Touati, Youcef; Mellal, Mohamed Arezki; Benazzouz, Djamel
2016-05-01
Monitoring of faults is an important task in mechatronics. It involves the detection and isolation of faults, which are performed by using residuals. These residuals represent numerical values that define certain intervals called thresholds; a fault is detected if the residuals exceed the thresholds. In addition, each considered fault must activate a unique set of residuals to be isolated. However, in the presence of uncertainties, false decisions can occur due to the low sensitivity of certain residuals towards faults. In this paper, an efficient approach to making decisions on fault isolation in the presence of uncertainties is proposed. Based on the bond graph tool, the approach is developed to generate systematically the relations between residuals and faults. The generated relations allow the estimation of the minimum detectable and isolable fault values, which are used to calculate the isolation thresholds for each residual. PMID:26928518
Communicating scientific uncertainty
Fischhoff, Baruch; Davis, Alex L.
2014-01-01
All science has uncertainty. Unless that uncertainty is communicated effectively, decision makers may put too much or too little faith in it. The information that needs to be communicated depends on the decisions that people face. Are they (i) looking for a signal (e.g., whether to evacuate before a hurricane), (ii) choosing among fixed options (e.g., which medical treatment is best), or (iii) learning to create options (e.g., how to regulate nanotechnology)? We examine these three classes of decisions in terms of how to characterize, assess, and convey the uncertainties relevant to each. We then offer a protocol for summarizing the many possible sources of uncertainty in standard terms, designed to impose a minimal burden on scientists, while gradually educating those whose decisions depend on their work. Its goals are better decisions, better science, and better support for science. PMID:25225390
Evaluating prediction uncertainty
McKay, M.D.
1995-03-01
The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented.
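The variance-ratio importance indicator can be sketched with a crude binned estimate of Var(E[Y | X]) / Var(Y) on a toy model (illustrative only; the paper's replicated Latin hypercube estimator is more careful than this):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
y = 3.0 * x1 + 0.5 * x2**2 + rng.normal(0, 0.1, n)   # toy model: x1 dominates

def variance_ratio(x, y, bins=50):
    """Crude estimate of Var(E[Y | X]) / Var(Y) by binning X; a ratio near 1
    flags X as a dominant cause of uncertainty in the prediction."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    return float(np.average((cond_means - y.mean()) ** 2, weights=counts) / y.var())

r1, r2 = variance_ratio(x1, y), variance_ratio(x2, y)
print(r1, r2)
```

Note the indicator flags x1 even though the relation to x2 is nonlinear: no linearity assumption is needed, which is the point made in the abstract.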
Conundrums with uncertainty factors.
Cooke, Roger
2010-03-01
The practice of uncertainty factors as applied to noncancer endpoints in the IRIS database harkens back to traditional safety factors. In the era before risk quantification, these were used to build in a "margin of safety." As risk quantification takes hold, the safety factor methods yield to quantitative risk calculations to guarantee safety. Many authors believe that uncertainty factors can be given a probabilistic interpretation as ratios of response rates, and that the reference values computed according to the IRIS methodology can thus be converted to random variables whose distributions can be computed with Monte Carlo methods, based on the distributions of the uncertainty factors. Recent proposals from the National Research Council echo this view. Based on probabilistic arguments, several authors claim that the current practice of uncertainty factors is overprotective. When interpreted probabilistically, uncertainty factors entail very strong assumptions on the underlying response rates. For example, the factor for extrapolating from animal to human is the same whether the dosage is chronic or subchronic. Together with independence assumptions, these assumptions entail that the covariance matrix of the logged response rates is singular. In other words, the accumulated assumptions entail a log-linear dependence between the response rates. This in turn means that any uncertainty analysis based on these assumptions is ill-conditioned; it effectively computes uncertainty conditional on a set of zero probability. The practice of uncertainty factors is due for a thorough review. Two directions are briefly sketched, one based on standard regression models, and one based on nonparametric continuous Bayesian belief nets. PMID:20030767
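The probabilistic reading of uncertainty factors that the abstract critiques can be sketched as a Monte Carlo product of lognormal factors (the distribution parameters below are illustrative, not taken from IRIS or any proposal):

```python
import numpy as np

# Treat each uncertainty factor (interspecies, intraspecies, ...) as an
# independent lognormal random variable and combine them by Monte Carlo,
# as in the probabilistic interpretation discussed in the abstract.
rng = np.random.default_rng(7)
n = 100_000
interspecies = rng.lognormal(mean=np.log(10), sigma=0.5, size=n)   # median 10
intraspecies = rng.lognormal(mean=np.log(10), sigma=0.5, size=n)   # median 10
combined = interspecies * intraspecies   # median 100, like the fixed 10 x 10 product

med = float(np.median(combined))
p95 = float(np.percentile(combined, 95))
print(med, p95)   # the distribution spreads widely around the deterministic 100
```

The abstract's point is that such a computation silently encodes strong assumptions (identical factors across exposure regimes, independence) that make the implied joint distribution degenerate.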
Hard Constraints in Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2008-01-01
This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
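For a linear requirement under a hyper-spherical uncertainty model, the largest feasible uncertainty set has a closed form, which makes the sizing idea easy to sketch (illustrative numbers; general nonlinear constraints require optimization):

```python
import numpy as np

# Hard-constraint feasibility for a linear requirement g(p) = a @ p - b <= 0
# under a hyper-spherical uncertainty model ||p - p0|| <= r. For linear g the
# worst case over the sphere is a @ p0 + r * ||a||, so the largest feasible
# radius (the margin against the requirement) is explicit.
a = np.array([2.0, -1.0])
b = 5.0
p0 = np.array([1.0, 0.5])   # nominal parameter value (illustrative)

worst_case = lambda r: a @ p0 + r * np.linalg.norm(a) - b
r_star = (b - a @ p0) / np.linalg.norm(a)   # largest r with worst_case(r) <= 0

print(r_star, worst_case(r_star))
```

Comparing r_star with the radius of the actual uncertainty model then gives the analytically verifiable robustness assessment the abstract describes: the design is robust if r_star exceeds the model's radius.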
Berglund, F
1978-01-01
The use of additives to food fulfils many purposes, as shown by the index issued by the Codex Committee on Food Additives: Acids, bases and salts; Preservatives, Antioxidants and antioxidant synergists; Anticaking agents; Colours; Emulfifiers; Thickening agents; Flour-treatment agents; Extraction solvents; Carrier solvents; Flavours (synthetic); Flavour enhancers; Non-nutritive sweeteners; Processing aids; Enzyme preparations. Many additives occur naturally in foods, but this does not exclude toxicity at higher levels. Some food additives are nutrients, or even essential nutritents, e.g. NaCl. Examples are known of food additives causing toxicity in man even when used according to regulations, e.g. cobalt in beer. In other instances, poisoning has been due to carry-over, e.g. by nitrate in cheese whey - when used for artificial feed for infants. Poisonings also occur as the result of the permitted substance being added at too high levels, by accident or carelessness, e.g. nitrite in fish. Finally, there are examples of hypersensitivity to food additives, e.g. to tartrazine and other food colours. The toxicological evaluation, based on animal feeding studies, may be complicated by impurities, e.g. orthotoluene-sulfonamide in saccharin; by transformation or disappearance of the additive in food processing in storage, e.g. bisulfite in raisins; by reaction products with food constituents, e.g. formation of ethylurethane from diethyl pyrocarbonate; by metabolic transformation products, e.g. formation in the gut of cyclohexylamine from cyclamate. Metabolic end products may differ in experimental animals and in man: guanylic acid and inosinic acid are metabolized to allantoin in the rat but to uric acid in man. The magnitude of the safety margin in man of the Acceptable Daily Intake (ADI) is not identical to the "safety factor" used when calculating the ADI. The symptoms of Chinese Restaurant Syndrome, although not hazardous, furthermore illustrate that the whole ADI
SYSTEMATIC SENSITIVITY ANALYSIS OF AIR QUALITY SIMULATION MODELS
This report reviews and assesses systematic sensitivity and uncertainty analysis methods for applications to air quality simulation models. The discussion of the candidate methods presents their basic variables, mathematical foundations, user motivations and preferences, computer...
Quantifying Uncertainty in Epidemiological Models
Ramanathan, Arvind; Jha, Sumit Kumar
2012-01-01
Modern epidemiology has made use of a number of mathematical models, including ordinary differential equation (ODE) based models and agent based models (ABMs) to describe the dynamics of how a disease may spread within a population and enable the rational design of strategies for intervention that effectively contain the spread of the disease. Although such predictions are of fundamental importance in preventing the next global pandemic, there is a significant gap in trusting the outcomes/predictions solely based on such models. Hence, there is a need to develop approaches such that mathematical models can be calibrated against historical data. In addition, there is a need to develop rigorous uncertainty quantification approaches that can provide insights into when a model will fail and characterize the confidence in the (possibly multiple) model outcomes/predictions, when such retrospective analysis cannot be performed. In this paper, we outline an approach to develop uncertainty quantification approaches for epidemiological models using formal methods and model checking. By specifying the outcomes expected from a model in a suitable spatio-temporal logic, we use probabilistic model checking methods to quantify the probability with which the epidemiological model satisfies the specification. We argue that statistical model checking methods can solve the uncertainty quantification problem for complex epidemiological models.
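Statistical model checking of the kind the abstract argues for can be sketched by simulation: estimate the probability that a stochastic epidemic model satisfies a stated property (a toy chain-binomial SIR model with illustrative parameters, not a calibrated one):

```python
import numpy as np

def sir_peak(beta, gamma, rng, n_pop=1000, i0=5, days=160):
    """One stochastic (chain-binomial) SIR trajectory; returns the peak infected count."""
    s, i, peak = n_pop - i0, i0, i0
    for _ in range(days):
        new_inf = rng.binomial(s, 1.0 - np.exp(-beta * i / n_pop))
        new_rec = rng.binomial(i, 1.0 - np.exp(-gamma))
        s -= new_inf
        i += new_inf - new_rec
        peak = max(peak, i)
    return peak

# Property to check: "peak infections stay below 40% of the population".
# The satisfaction probability is estimated from repeated simulation runs.
rng = np.random.default_rng(3)
runs = 500
p_hat = float(np.mean([sir_peak(0.3, 0.1, rng) < 0.4 * 1000 for _ in range(runs)]))
print(p_hat)
```

A full probabilistic model checker would express the property in a temporal logic and attach statistical confidence bounds to p_hat; the sampling core is the same.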
Uncertainty in flood risk mapping
NASA Astrophysics Data System (ADS)
Gonçalves, Luisa M. S.; Fonte, Cidália C.; Gomes, Ricardo
2014-05-01
A flood refers to a sharp increase of water level or volume in rivers and seas caused by sudden rainstorms or melting ice due to natural factors. In this paper, the flooding of riverside urban areas caused by sudden rainstorms will be studied. In this context, flooding occurs when the water runs above the level of the minor river bed and enters the major river bed. The level of the major bed determines the magnitude and risk of the flooding. The prediction of the flooding extent is usually deterministic, and corresponds to the expected limit of the flooded area. However, there are many sources of uncertainty in the process of obtaining these limits, which influence the obtained flood maps used for watershed management or as instruments for territorial and emergency planning. In addition, small variations in the delineation of the flooded area can be translated into erroneous risk prediction. Therefore, maps that reflect the uncertainty associated with the flood modeling process have started to be developed, associating a degree of likelihood with the boundaries of the flooded areas. In this paper an approach is presented that enables the influence of the parameters uncertainty to be evaluated, dependent on the type of Land Cover Map (LCM) and Digital Elevation Model (DEM), on the estimated values of the peak flow and the delineation of flooded areas (different peak flows correspond to different flood areas). The approach requires modeling the DEM uncertainty and its propagation to the catchment delineation. The results obtained in this step enable a catchment with fuzzy geographical extent to be generated, where a degree of possibility of belonging to the basin is assigned to each elementary spatial unit. Since the fuzzy basin may be considered as a fuzzy set, the fuzzy area of the basin may be computed, generating a fuzzy number. The catchment peak flow is then evaluated using fuzzy arithmetic. With this methodology a fuzzy number is obtained for the peak flow
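The fuzzy arithmetic step can be sketched with alpha-cuts: each fuzzy quantity is an interval at every membership level, and the peak-flow computation becomes interval arithmetic cut by cut (the triangular shapes and all numbers below are illustrative, not from the case study):

```python
# Alpha-cut fuzzy arithmetic: a triangular fuzzy number (a, b, c) is represented
# by its interval at each membership level alpha, and arithmetic proceeds
# interval-by-interval.

def tri_alpha_cut(a, b, c, alpha):
    """Interval of a triangular fuzzy number (a, b, c) at membership level alpha."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def mul_cut(x, y):
    """Interval multiplication (both intervals assumed positive here)."""
    return (x[0] * y[0], x[1] * y[1])

# Fuzzy catchment area (km^2) times fuzzy specific peak flow (m^3/s per km^2):
for alpha in (0.0, 0.5, 1.0):
    area = tri_alpha_cut(45.0, 50.0, 58.0, alpha)
    q_spec = tri_alpha_cut(0.8, 1.0, 1.3, alpha)
    print(alpha, mul_cut(area, q_spec))
```

At alpha = 1 the intervals collapse to the deterministic estimate, while alpha = 0 gives the full possibility range; stacking the cuts yields the fuzzy peak flow described in the abstract.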
Classification images with uncertainty
Tjan, Bosco S.; Nandy, Anirvan S.
2009-01-01
Classification image and other similar noise-driven linear methods have found increasingly wider applications in revealing psychophysical receptive field structures or perceptual templates. These techniques are relatively easy to deploy, and the results are simple to interpret. However, being a linear technique, the utility of the classification-image method is believed to be limited. Uncertainty about the target stimuli on the part of an observer will result in a classification image that is the superposition of all possible templates for all the possible signals. In the context of a well-established uncertainty model, which pools the outputs of a large set of linear frontends with a max operator, we show analytically, in simulations, and with human experiments that the effect of intrinsic uncertainty can be limited or even eliminated by presenting a signal at a relatively high contrast in a classification-image experiment. We further argue that the subimages from different stimulus-response categories should not be combined, as is conventionally done. We show that when the signal contrast is high, the subimages from the error trials contain a clear high-contrast image that is negatively correlated with the perceptual template associated with the presented signal, relatively unaffected by uncertainty. The subimages also contain a “haze” that is of a much lower contrast and is positively correlated with the superposition of all the templates associated with the erroneous response. In the case of spatial uncertainty, we show that the spatial extent of the uncertainty can be estimated from the classification subimages. We link intrinsic uncertainty to invariance and suggest that this signal-clamped classification-image method will find general applications in uncovering the underlying representations of high-level neural and psychophysical mechanisms. PMID:16889477
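The basic classification-image computation, before any uncertainty corrections, can be sketched with a simulated linear-template observer (signal-absent trials only; the observer model and all settings are illustrative):

```python
import numpy as np

# Minimal classification image: average the noise fields by response category
# and take the "yes" minus "no" difference as the template estimate.
rng = np.random.default_rng(9)
n_trials, n_pix = 20_000, 64
template = np.zeros(n_pix)
template[28:36] = 1.0                  # the simulated observer's internal template

noise = rng.standard_normal((n_trials, n_pix))
responses = noise @ template > 0       # "yes" when the template match is positive

cimg = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
corr = float(np.corrcoef(cimg, template)[0, 1])   # recovery of the true template
print(round(corr, 2))
```

With intrinsic uncertainty, the observer would pool several templates through a max operator and cimg would blur into their superposition, which is the failure mode the high-contrast method in the abstract is designed to limit.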
Quantifying radar-rainfall uncertainties in urban drainage flow modelling
NASA Astrophysics Data System (ADS)
Rico-Ramirez, M. A.; Liguori, S.; Schellart, A. N. A.
2015-09-01
This work presents the results of the implementation of a probabilistic system to model the uncertainty associated with radar rainfall (RR) estimates and the way this uncertainty propagates through the sewer system of an urban area located in the north of England. The spatial and temporal correlations of the RR errors as well as the error covariance matrix were computed to build a RR error model able to generate RR ensembles that reproduce the uncertainty associated with the measured rainfall. The results showed that the RR ensembles provide important information about the uncertainty in the rainfall measurement that can be propagated through the urban sewer system. The measured flow peaks and flow volumes are often bounded within the uncertainty area produced by the RR ensembles, and in 55% of the simulated events the uncertainties in RR measurements can explain the uncertainties observed in the simulated flow volumes. However, there are also some events where the RR uncertainty cannot explain the whole uncertainty observed in the simulated flow volumes, indicating that additional sources of uncertainty must be considered, such as the uncertainty in the urban drainage model structure, in its calibrated parameters, and in the measured sewer flows.
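Generating error ensembles with a prescribed spatial correlation, as in the RR error model, can be sketched with a Cholesky factor of an assumed covariance matrix (the exponential correlation model and all numbers are illustrative):

```python
import numpy as np

# Correlated-error ensemble generation: draw independent Gaussians and colour
# them with the Cholesky factor of the target error covariance matrix.
rng = np.random.default_rng(11)
n_sites = 6
distances = np.abs(np.subtract.outer(np.arange(n_sites), np.arange(n_sites)))
corr = np.exp(-distances / 3.0)   # assumed exponential decay of spatial correlation
sigma = 0.2                       # assumed error standard deviation
cov = sigma**2 * corr

L = np.linalg.cholesky(cov)
n_members = 5000
errors = rng.standard_normal((n_members, n_sites)) @ L.T   # correlated ensemble

sample_cov = np.cov(errors, rowvar=False)   # check: reproduces the target covariance
max_dev = float(np.abs(sample_cov - cov).max())
print(max_dev)
```

Each ensemble member, added to the radar field, gives one plausible rainfall realization that can be run through the sewer model to propagate the measurement uncertainty.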
Uncertainty quantification in lattice QCD calculations for nuclear physics
Beane, Silas R.; Detmold, William; Orginos, Kostas; Savage, Martin J.
2015-02-05
The numerical technique of Lattice QCD holds the promise of connecting the nuclear forces, nuclei, the spectrum and structure of hadrons, and the properties of matter under extreme conditions with the underlying theory of the strong interactions, quantum chromodynamics. A distinguishing, and thus far unique, feature of this formulation is that all of the associated uncertainties, both statistical and systematic can, in principle, be systematically reduced to any desired precision with sufficient computational and human resources. As a result, we review the sources of uncertainty inherent in Lattice QCD calculations for nuclear physics, and discuss how each is quantified in current efforts.
Uncertainty quantification in lattice QCD calculations for nuclear physics
NASA Astrophysics Data System (ADS)
Beane, Silas R.; Detmold, William; Orginos, Kostas; Savage, Martin J.
2015-03-01
The numerical technique of lattice quantum chromodynamics (LQCD) holds the promise of connecting the nuclear forces, nuclei, the spectrum and structure of hadrons, and the properties of matter under extreme conditions with the underlying theory of the strong interactions, quantum chromodynamics. A distinguishing, and thus far unique, feature of this formulation is that all of the associated uncertainties, both statistical and systematic can, in principle, be systematically reduced to any desired precision with sufficient computational and human resources. We review the sources of uncertainty inherent in LQCD calculations for nuclear physics, and discuss how each is quantified in current efforts.
Uncertainties in gamma-ray spectrometry
NASA Astrophysics Data System (ADS)
Lépy, M. C.; Pearce, A.; Sima, O.
2015-06-01
High resolution gamma-ray spectrometry is a well-established metrological technique that can be applied to a large number of photon-emitting radionuclides, activity levels and sample shapes and compositions. Three kinds of quantitative information can be derived using this technique: detection efficiency calibration, radionuclide activity and photon emission intensities. In contrast to other radionuclide measurement techniques gamma-ray spectrometry provides unambiguous identification of gamma-ray emitting radionuclides in addition to activity values. This extra information comes at a cost of increased complexity and inherently higher uncertainties when compared with other secondary techniques. The relative combined standard uncertainty associated with any result obtained by gamma-ray spectrometry depends not only on the uncertainties of the main input parameters but also on different correction factors. To reduce the uncertainties, the experimental conditions must be optimized in terms of the signal processing electronics and the physical parameters of the measured sample should be accurately characterized. Measurement results and detailed examination of the associated uncertainties are presented with a specific focus on the efficiency calibration, peak area determination and correction factors. It must be noted that some of the input values used in quantitative analysis calculation can be correlated, which should be taken into account in fitting procedures or calculation of the uncertainties associated with quantitative results. It is shown that relative combined standard uncertainties are rarely lower than 1% in gamma-ray spectrometry measurements.
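The combination of the main input uncertainties can be sketched GUM-style, adding relative standard uncertainties in quadrature under an assumption of uncorrelated inputs (the abstract notes correlations must be handled when present; all values below are illustrative):

```python
import math

# GUM-style combination for an activity A = N / (eps * I_gamma * t * C):
# with uncorrelated inputs, relative standard uncertainties add in quadrature.
u_rel = {
    "peak_area": 0.008,           # counting statistics and peak fitting
    "efficiency": 0.012,          # detection efficiency calibration
    "emission_intensity": 0.004,  # tabulated photon emission intensity
    "corrections": 0.005,         # summing, decay and geometry corrections
}
u_combined = math.sqrt(sum(u ** 2 for u in u_rel.values()))
print(round(100 * u_combined, 2))  # relative combined standard uncertainty, percent
```

Even with optimistic inputs the combined value lands above 1%, consistent with the abstract's observation that relative combined standard uncertainties are rarely lower than 1% in gamma-ray spectrometry.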
Estimating uncertainty of inference for validation
Booker, Jane M; Langenbrunner, James R; Hemez, Francois M; Ross, Timothy J
2010-09-30
We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code are an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the
Experimental Basis for Robust On-orbit Uncertainty Estimates for CLARREO InfraRed Sensors
NASA Astrophysics Data System (ADS)
Dykema, J. A.; Revercomb, H. E.; Anderson, J.
2009-12-01
As defined by the National Research Council Decadal Survey of 2006, the CLimate Absolute Radiance and REfractivity Observatory (CLARREO) satisfies the need for “a long-term global benchmark record of critical climate variables that are accurate over very long time periods, can be tested for systematic errors by future generations, are unaffected by interruption, and are pinned to international standards.” These observational requirements— testing for systematic errors, accuracy over indefinite time, and linkage to internationally recognized measurement standards—are achievable through an appeal to the concept of SI traceability. That is, measurements are made such that they are linked through an unbroken chain of comparisons, where each comparison has a stated and credible uncertainty, back to the definitions of the International System (SI) Units. While the concept of SI traceability is a straightforward one, achieving credible estimates of uncertainty, particularly in the case of complex sensors deployed in orbit, poses a significant challenge. Recently, a set of principles has been proposed to guide the development of sensors that realize fully the benefits of SI traceability. The application of these principles to the spectral infrared sensor that is part of the CLARREO mission is discussed. These principles include, but are not limited to: basing the sensor calibration on a reproducible physical property of matter, devising experimental tests for known sources of measurement bias (or systematic uncertainty), and providing independent system-level checks for the end-to-end radiometric performance of the sensor. The application of these principles to the infrared sensor leads to the following conclusions. To obtain the lowest uncertainty (or highest accuracy), the calibration should be traceable to the definition of the Kelvin—that is, the triple point of water. Realization of a Kelvin-based calibration is achieved through the use of calibration
Practical issues in handling data input and uncertainty in a budget impact analysis.
Nuijten, M J C; Mittendorf, T; Persson, U
2011-06-01
The objective of this paper was to address in more detail the importance of dealing systematically and comprehensively with uncertainty in a budget impact analysis (BIA). The handling of uncertainty in health economics was used as a point of reference for addressing the uncertainty in a BIA. This overview shows that standard methods of sensitivity analysis, which are used for the standard data set in a health economic model (clinical probabilities, treatment patterns, resource utilisation and prices/tariffs), cannot always be used for the input data of a BIA model beyond the health economic data set, for various reasons. Whereas in a health economic model only limited data may come from a Delphi panel, a BIA model often relies on a majority of data taken from a Delphi panel. In addition, the data set in a BIA model also includes forecasts (e.g. annual growth, uptake curves, substitution effects, changes in prescription restrictions and guidelines, future distribution of the available treatment modalities, off-label use). As a consequence, the use of standard sensitivity analyses for the BIA data set might be limited, either because appropriate distributions are unavailable given the limited data sources, or because of the need for forecasting. Therefore, scenario analyses might be more appropriate to capture the uncertainty in the BIA data set in the overall BIA model. PMID:20364289
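The scenario-analysis alternative the abstract recommends can be sketched directly: re-run the budget impact under discrete low/base/high assumptions instead of sampling distributions (all figures are illustrative, for a hypothetical treatment in a hypothetical market):

```python
# Scenario analysis for a budget impact model: instead of probabilistic
# sensitivity analysis (often infeasible for forecasted inputs), the model
# is evaluated under discrete low/base/high scenarios.
scenarios = {
    "low":  {"eligible": 8000,  "uptake": 0.10, "cost_per_patient": 4500.0},
    "base": {"eligible": 10000, "uptake": 0.20, "cost_per_patient": 5000.0},
    "high": {"eligible": 12000, "uptake": 0.35, "cost_per_patient": 5500.0},
}
impact = {name: s["eligible"] * s["uptake"] * s["cost_per_patient"]
          for name, s in scenarios.items()}
for name, value in impact.items():
    print(name, round(value))
```

The scenario spread plays the role that the output distribution plays in a probabilistic sensitivity analysis: it bounds the budget impact under the forecasted inputs.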
NASA Astrophysics Data System (ADS)
Jones, P. W.; Strelitz, R. A.
2012-12-01
The output of a simulation is best comprehended through the agency and methods of visualization, but a vital component of good science is knowledge of uncertainty. While great strides have been made in the quantification of uncertainty, especially in simulation, there is still a notable gap: there is no widely accepted means of simultaneously viewing the data and the associated uncertainty in one pane. Visualization saturates the screen, using the full range of color, shadow, opacity and tricks of perspective to display even a single variable. There is no room in the visualization expert's repertoire left for uncertainty. We present a method of visualizing uncertainty without sacrificing the clarity and power of the underlying visualization that works as well in 3-D and time-varying visualizations as it does in 2-D. At its heart, it relies on a principal tenet of continuum mechanics, replacing the notion of value at a point with a more diffuse notion of density as a measure of content in a region. First, the uncertainties calculated or tabulated at each point are transformed into a piecewise continuous field of uncertainty density. We next compute a weighted Voronoi tessellation of a user-specified number N of convex polygonal/polyhedral cells such that each cell contains the same amount of uncertainty as defined by this density. The problem thus devolves into one of minimization. Computing such a spatial decomposition is O(N^2), and it can be computed iteratively, making it possible to update it easily and quickly over time. The polygonal mesh does not interfere with the visualization of the data and can be easily toggled on or off. In this representation, a small cell implies a great concentration of uncertainty, and conversely. The content-weighted polygons are identical to the cartograms familiar to the information visualization community in the depiction of things like voting results per state. Furthermore, one can dispense with the mesh or edges entirely, replacing them with symbols or glyphs.
Quantifying Uncertainties in Land Surface Microwave Emissivity Retrievals
NASA Technical Reports Server (NTRS)
Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko
2012-01-01
Uncertainties in the retrievals of microwave land surface emissivities were quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including SSM/I, TMI and AMSR-E, were studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging from 1% to 4% (3-12 K) over desert and from 1% to 7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.
Quantifying Uncertainties in Land-Surface Microwave Emissivity Retrievals
NASA Technical Reports Server (NTRS)
Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko
2013-01-01
Uncertainties in the retrievals of microwave land-surface emissivities are quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including the Special Sensor Microwave Imager, the Tropical Rainfall Measuring Mission Microwave Imager, and the Advanced Microwave Scanning Radiometer for Earth Observing System, are studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land-surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally, these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging from 1% to 4% (3-12 K) over desert and from 1% to 7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.
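The systematic/random decomposition used in these studies can be illustrated with a toy calculation (all numbers hypothetical): over a surface whose true emissivity is assumed constant, the mean difference between two retrieval datasets estimates their systematic difference, while the scatter of each dataset about its own mean estimates its random error.

```python
import random, statistics

random.seed(1)
true_emis = 0.95  # assumed-constant surface emissivity over the study period

# Two hypothetical retrieval datasets with different bias (systematic error)
# and scatter (random error), one value per day for a year
retr_a = [true_emis + 0.02 + random.gauss(0, 0.005) for _ in range(365)]
retr_b = [true_emis - 0.01 + random.gauss(0, 0.010) for _ in range(365)]

# Systematic difference between datasets: difference of the time means
systematic = statistics.mean(retr_a) - statistics.mean(retr_b)

# Random error within each dataset: scatter about its own mean
random_a = statistics.stdev(retr_a)
random_b = statistics.stdev(retr_b)

print(round(systematic, 3), round(random_a, 3), round(random_b, 3))
```

With the constant-truth assumption, neither estimate requires knowing the true emissivity itself, which is exactly why the stable desert and rainforest sites were chosen.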
Interpreting uncertainty terms.
Holtgraves, Thomas
2014-08-01
Uncertainty terms (e.g., some, possible, good, etc.) are words that do not have a fixed referent and hence are relatively ambiguous. A model is proposed that specifies how, from the hearer's perspective, recognition of facework as a potential motive for the use of an uncertainty term results in a calibration of the intended meaning of that term. Four experiments are reported that examine the impact of face threat, and the variables that affect it (e.g., power), on the manner in which a variety of uncertainty terms (probability terms, quantifiers, frequency terms, etc.) are interpreted. Overall, the results demonstrate that increased face threat in a situation will result in a more negative interpretation of an utterance containing an uncertainty term. That the interpretation of so many different types of uncertainty terms is affected in the same way suggests the operation of a fundamental principle of language use, one with important implications for the communication of risk, subjective experience, and so on. PMID:25090127
Uncertainty analysis of statistical downscaling methods
NASA Astrophysics Data System (ADS)
Khan, Mohammad Sajjad; Coulibaly, Paulin; Dibike, Yonas
2006-03-01
Three downscaling models, namely the Statistical Down-Scaling Model (SDSM), the Long Ashton Research Station Weather Generator (LARS-WG) model, and an Artificial Neural Network (ANN) model, have been compared in terms of various uncertainty assessments exhibited in their downscaled results of daily precipitation and daily maximum and minimum temperatures. In the case of daily maximum and minimum temperature, uncertainty is assessed by comparing the monthly mean and variance of downscaled and observed daily maximum and minimum temperature for each month of the year at the 95% confidence level. In addition, uncertainties of the monthly means and variances of downscaled daily temperature have been calculated using 95% confidence intervals, which are compared with the observed uncertainties of means and variances. In daily precipitation downscaling, in addition to comparing means and variances, uncertainties have been assessed by comparing monthly mean dry and wet spell lengths and their confidence intervals, cumulative frequency distributions (CDFs) of the monthly mean of daily precipitation, and the distributions of monthly wet and dry days for observed and downscaled daily precipitation. The study has been carried out using 40 years of observed and downscaled daily precipitation and daily maximum and minimum temperature data, using NCEP (National Center for Environmental Prediction) reanalysis predictors, from 1961 to 2000. The uncertainty assessment results indicate that the SDSM is the most capable of reproducing the various statistical characteristics of the observed data in its downscaled results at the 95% confidence level, the ANN is the least capable in this respect, and the LARS-WG falls between SDSM and ANN.
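The core comparison step above can be sketched as follows (synthetic data, normal-approximation intervals): compute a 95% confidence interval for the monthly mean of both the observed and the downscaled series, and judge them consistent if the intervals overlap.

```python
import random, statistics, math

random.seed(2)

# Hypothetical daily maximum temperatures for one calendar month across
# 40 years: observed vs. downscaled
obs  = [random.gauss(25.0, 3.0) for _ in range(40 * 31)]
down = [random.gauss(25.1, 3.2) for _ in range(40 * 31)]

def ci95(data):
    """95% confidence interval for the mean (normal approximation)."""
    m = statistics.mean(data)
    half = 1.96 * statistics.stdev(data) / math.sqrt(len(data))
    return m - half, m + half

lo_o, hi_o = ci95(obs)
lo_d, hi_d = ci95(down)

# The downscaled monthly mean is judged consistent with the observed one
# (at the 95% level) if the two intervals overlap
overlap = lo_d <= hi_o and lo_o <= hi_d
print(overlap, round(hi_o - lo_o, 3))
```

The same pattern applies to variances, spell lengths, and the other statistics compared in the study, with the appropriate interval formula for each.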
Stronger Schrödinger-like uncertainty relations
NASA Astrophysics Data System (ADS)
Song, Qiu-Cheng; Qiao, Cong-Feng
2016-08-01
The uncertainty relation is one of the fundamental building blocks of quantum theory. Nevertheless, the traditional uncertainty relations do not fully capture the concept of incompatible observables. Here we present a Schrödinger-like uncertainty relation that is stronger than the one recently derived by Maccone and Pati (2014) [11]. Furthermore, we give an additive uncertainty relation for three incompatible observables that is stronger than both the relation newly obtained by Kechrimparis and Weigert (2014) [12] and the simple extension of the Schrödinger uncertainty relation.
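For reference, the baseline these works strengthen is the Schrödinger uncertainty relation, (ΔA)²(ΔB)² ≥ |½⟨{A,B}⟩ − ⟨A⟩⟨B⟩|² + |½⟨[A,B]⟩|². The sketch below checks it numerically for an illustrative qubit state with A = σx, B = σy (a case in which the bound happens to be saturated for pure states).

```python
import math, cmath

# Pauli matrices as 2x2 nested lists of complex numbers
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expval(M, psi):
    """Expectation value <psi|M|psi> for a 2-component state."""
    Mv = [M[0][0] * psi[0] + M[0][1] * psi[1],
          M[1][0] * psi[0] + M[1][1] * psi[1]]
    return psi[0].conjugate() * Mv[0] + psi[1].conjugate() * Mv[1]

# An arbitrary pure qubit state (angles chosen for illustration)
theta, phi = 0.3, 0.5
psi = [complex(math.cos(theta)), cmath.exp(1j * phi) * math.sin(theta)]

eA, eB = expval(sx, psi).real, expval(sy, psi).real
varA = expval(matmul(sx, sx), psi).real - eA**2
varB = expval(matmul(sy, sy), psi).real - eB**2

e_AB, e_BA = expval(matmul(sx, sy), psi), expval(matmul(sy, sx), psi)
lhs = varA * varB
rhs = abs(0.5 * (e_AB + e_BA) - eA * eB)**2 + abs(0.5 * (e_AB - e_BA))**2

# The relation lhs >= rhs holds; for this observable pair and any pure
# qubit state it is saturated (lhs == rhs up to rounding)
print(lhs >= rhs - 1e-12, abs(lhs - rhs) < 1e-9)
```

The sum-of-variances relations of Maccone and Pati avoid the trivial-bound problem of such product relations when the state is an eigenstate of one observable.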
Rudolf Keller
2004-08-10
In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $0.023 per pound of aluminum produced is projected for a 200 kA pot.
Harrup, Mason K; Rollins, Harry W
2013-11-26
An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose, and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.
Quantifying reliability uncertainty : a proof of concept.
Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna; Lorio, John F.; Fatherley, Quinn; Anderson-Cook, Christine; Wilson, Alyson G.; Zurn, Rena M.
2009-10-01
This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
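A minimal sketch of the Bayesian side of such a calculation, with made-up go/no-go data and system layout: each component reliability gets a Beta posterior from its pass/fail counts, and Monte Carlo draws are propagated through the series/parallel structure to yield an uncertainty interval on system reliability.

```python
import random

random.seed(3)

# Hypothetical go/no-go data per component: (successes, trials)
data = {"c1": (48, 50), "c2": (29, 30), "c3": (20, 20)}

# Beta(1,1) prior -> Beta(1+s, 1+(n-s)) posterior for each component
def draw_post(s, n):
    return random.betavariate(1 + s, 1 + (n - s))

# Hypothetical system: c1 in series with a parallel pair (c2 || c3)
def system_rel(r):
    return r["c1"] * (1 - (1 - r["c2"]) * (1 - r["c3"]))

draws = []
for _ in range(20000):
    r = {k: draw_post(s, n) for k, (s, n) in data.items()}
    draws.append(system_rel(r))

# 90% credible interval on system reliability
draws.sort()
lo, hi = draws[int(0.05 * len(draws))], draws[int(0.95 * len(draws))]
print(round(lo, 3), round(hi, 3))
```

Note how the zero-failure component c3 still carries nontrivial posterior spread, reflecting the sensitivity to no-failure components mentioned in the abstract.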
Measurement uncertainty relations
Busch, Paul; Lahti, Pekka; Werner, Reinhard F.
2014-04-15
Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.
Measurement uncertainty relations
NASA Astrophysics Data System (ADS)
Busch, Paul; Lahti, Pekka; Werner, Reinhard F.
2014-04-01
Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.
Individuals’ Uncertainty about Future Social Security Benefits and Portfolio Choice
Delavande, Adeline
2013-01-01
Summary Little is known about the degree to which individuals are uncertain about their future Social Security benefits, how this varies within the U.S. population, and whether this uncertainty influences financial decisions related to retirement planning. To illuminate these issues, we present empirical evidence from the Health and Retirement Study Internet Survey and document systematic variation in respondents’ uncertainty about their future Social Security benefits by individual characteristics. We find that respondents with higher levels of uncertainty about future benefits hold a smaller share of their wealth in stocks. PMID:23914049
Serenity in political uncertainty.
Doumit, Rita; Afifi, Rema A; Devon, Holli A
2015-01-01
College students are often faced with academic and personal stressors that threaten their well-being. Added to these may be political and environmental stressors such as acts of violence on the streets, interruptions in schooling, car bombings, targeted religious intimidation, financial hardship, and uncertainty about obtaining a job after graduation. Research on how college students adapt to the latter stressors is limited. The aims of this study were (1) to investigate the associations between stress, uncertainty, resilience, social support, withdrawal coping, and well-being for Lebanese youth during their first year of college and (2) to determine whether these variables predicted well-being. A sample of 293 first-year students enrolled in a private university in Lebanon completed a self-reported questionnaire in the classroom setting. The mean age of sample participants was 18.1 years, with nearly equal percentages of males and females (53.2% vs 46.8%), most of whom lived with their family (92.5%) and whose family reported high income levels (68.4%). Multiple regression analyses revealed that the best determinants of well-being are resilience, uncertainty, social support, and gender, which together accounted for 54.1% of the variance. Despite living in an environment of frequent violence and political uncertainty, the Lebanese youth in this study have a strong sense of well-being and are able to go on with their lives. This research adds to our understanding of how adolescents can adapt to the stressors of frequent violence and political uncertainty. Further research is recommended to understand the mechanisms through which young people cope with political uncertainty and violence. PMID:25658930
Systematic Effects in Atomic Fountain Clocks
NASA Astrophysics Data System (ADS)
Gibble, Kurt
2016-06-01
We describe recent advances in the accuracies of atomic fountain clocks. New rigorous treatments of the previously large systematic uncertainties, distributed cavity phase, microwave lensing, and background gas collisions, enabled these advances. We also discuss background gas collisions of optical lattice and ion clocks and derive the smooth transition of the microwave lensing frequency shift to photon recoil shifts for large atomic wave packets.
Weighted Uncertainty Relations
NASA Astrophysics Data System (ADS)
Xiao, Yunlong; Jing, Naihuan; Li-Jost, Xianqing; Fei, Shao-Ming
2016-03-01
Recently, Maccone and Pati have given two stronger uncertainty relations based on the sum of variances and one of them is nontrivial when the quantum state is not an eigenstate of the sum of the observables. We derive a family of weighted uncertainty relations to provide an optimal lower bound for all situations and remove the restriction on the quantum state. Generalization to multi-observable cases is also given and an optimal lower bound for the weighted sum of the variances is obtained in general quantum situation.
Weighted Uncertainty Relations
Xiao, Yunlong; Jing, Naihuan; Li-Jost, Xianqing; Fei, Shao-Ming
2016-01-01
Recently, Maccone and Pati have given two stronger uncertainty relations based on the sum of variances and one of them is nontrivial when the quantum state is not an eigenstate of the sum of the observables. We derive a family of weighted uncertainty relations to provide an optimal lower bound for all situations and remove the restriction on the quantum state. Generalization to multi-observable cases is also given and an optimal lower bound for the weighted sum of the variances is obtained in general quantum situation. PMID:26984295
NASA Technical Reports Server (NTRS)
Brown, Laurie M.
1993-01-01
An historical account is given of the circumstances whereby the uncertainty relations were introduced into physics by Heisenberg. The criticisms of QED on measurement-theoretical grounds by Landau and Peierls are then discussed, as well as the response to them by Bohr and Rosenfeld. Finally, some examples are given of how the new freedom to advance radical proposals, in part the result of the revolution brought about by 'uncertainty,' was implemented in dealing with the new phenomena encountered in elementary particle physics in the 1930's.
NASA Astrophysics Data System (ADS)
Silverman, Mark P.
2014-07-01
1. Tools of the trade; 2. The 'fundamental problem' of a practical physicist; 3. Mother of all randomness I: the random disintegration of matter; 4. Mother of all randomness II: the random creation of light; 5. A certain uncertainty; 6. Doing the numbers: nuclear physics and the stock market; 7. On target: uncertainties of projectile flight; 8. The guesses of groups; 9. The random flow of energy I: power to the people; 10. The random flow of energy II: warning from the weather underground; Index.
Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
2000-01-01
Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.
Measurement uncertainty of adsorption testing of desiccant materials
Bingham, C E; Pesaran, A A
1988-12-01
The technique of measurement uncertainty analysis as described in the current ANSI/ASME standard is applied to the testing of desiccant materials in SERI's Sorption Test Facility. This paper estimates the elemental precision and systematic errors in these tests and propagates them separately to obtain the resulting uncertainty of the test parameters, including relative humidity (±0.03) and sorption capacity (±0.002 g/g). Errors generated by instrument calibration, data acquisition, and data reduction are considered. Measurement parameters that would improve the uncertainty of the results are identified. Using the uncertainty in the moisture capacity of a desiccant, the design engineer can estimate the uncertainty in performance of a dehumidifier for desiccant cooling systems with confidence. 6 refs., 2 figs., 8 tabs.
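The separate propagation described above follows the root-sum-square recipe of the ANSI/ASME measurement-uncertainty standard: bias (systematic) and precision (random) contributions are each combined by root-sum-square, then merged with a coverage factor. The elemental values below are hypothetical, chosen only to land near the quoted ±0.03 relative-humidity result.

```python
import math

# Hypothetical elemental systematic (bias) errors for relative humidity,
# e.g. from sensor calibration, data acquisition, and data reduction
bias = [0.02, 0.015, 0.01]

# Hypothetical elemental precision (random) errors, as standard
# deviations of the mean
prec = [0.008, 0.005]

# Propagate each error class separately by root-sum-square
B = math.sqrt(sum(b**2 for b in bias))
S = math.sqrt(sum(s**2 for s in prec))
t = 2.0  # coverage factor for ~95% confidence with large samples

# Combine into an overall uncertainty
U = math.sqrt(B**2 + (t * S)**2)
print(round(U, 4))
```

Propagating the two classes separately, as the paper does, preserves the information needed to decide whether better calibration (reducing B) or more repeat measurements (reducing S) would tighten the result.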
Uncertainties in Hauser-Feshbach Neutron Capture Calculations for Astrophysics
Bertolli, M. G.; Kawano, T.; Little, H.
2014-06-15
The calculation of neutron capture cross sections in a statistical Hauser-Feshbach method has proved successful in numerous astrophysical applications. Of increasing interest is the uncertainty associated with the calculated Maxwellian averaged cross sections (MACS). Aspects of a statistical model that introduce a large amount of uncertainty are the level density model, the γ-ray strength function parameter, and the placement of E_low, the cut-off energy below which the Hauser-Feshbach method is not applicable. Utilizing the Los Alamos statistical model code CoH3, we investigate the appropriate treatment of these sources of uncertainty via systematics of nuclei in a local region for which experimental or evaluated data is available. In order to show the impact of uncertainty analysis on nuclear data for astrophysical applications, these new uncertainties will be propagated through the nucleosynthesis code NuGrid.
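For reference, the Maxwellian-averaged cross section that the statistical-model outputs feed into is MACS(kT) = (2/√π) (kT)⁻² ∫ σ(E) E e^(−E/kT) dE. The sketch below (with invented cross-section parameters) evaluates this numerically and checks it against the analytic fact that a pure 1/v cross section gives MACS = σ(kT).

```python
import math

kT = 0.030  # MeV; 30 keV is the standard s-process comparison energy

# Hypothetical 1/v capture cross section (barns): sigma(E) = sigma0*sqrt(E0/E)
sigma0, E0 = 0.5, 0.025

def sigma(E):
    return sigma0 * math.sqrt(E0 / E)

# MACS(kT) = (2/sqrt(pi)) * kT**-2 * integral of sigma(E)*E*exp(-E/kT) dE,
# evaluated with a midpoint rule truncated at 40*kT (the tail is negligible)
n, Emax = 100000, 40 * kT
dE = Emax / n
integral = 0.0
for i in range(n):
    E = (i + 0.5) * dE
    integral += sigma(E) * E * math.exp(-E / kT) * dE

macs = (2 / math.sqrt(math.pi)) / kT**2 * integral

# Analytic check: for a pure 1/v cross section, MACS equals sigma(kT)
print(round(macs, 4), round(sigma(kT), 4))
```

In an uncertainty study like the one above, the level-density and strength-function uncertainties enter through σ(E), and the same averaging propagates them into the MACS.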
Avoiding climate change uncertainties in Strategic Environmental Assessment
Larsen, Sanne Vammen; Kørnøv, Lone; Driscoll, Patrick
2013-11-15
This article is concerned with how Strategic Environmental Assessment (SEA) practice handles climate change uncertainties within the Danish planning system. First, a hypothetical model is set up for how uncertainty is handled and not handled in decision-making. The model incorporates the strategies ‘reduction’ and ‘resilience’, ‘denying’, ‘ignoring’ and ‘postponing’. Second, 151 Danish SEAs are analysed with a focus on the extent to which climate change uncertainties are acknowledged and presented, and the empirical findings are discussed in relation to the model. The findings indicate that despite incentives to do so, climate change uncertainties were systematically avoided or downplayed in all but 5 of the 151 SEAs that were reviewed. Finally, two possible explanatory mechanisms are proposed to explain this: conflict avoidance and a need to quantify uncertainty.
Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Harrison, J.D.; Harper, F.T.; Hora, S.C.
1998-04-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA internal dosimetry models.
Little, M.P.; Muirhead, C.R.; Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Harper, F.T.; Hora, S.C.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA late health effects models.
Haskin, F.E.; Harper, F.T.; Goossens, L.H.J.; Kraan, B.C.P.; Grupa, J.B.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA early health effects models.
Analysis of Hydrogeologic Conceptual Model and Parameter Uncertainty
Meyer, Philip D.; Nicholson, Thomas J.; Mishra, Srikanta
2003-06-24
A systematic methodology for assessing hydrogeologic conceptual model, parameter, and scenario uncertainties is being developed to support technical reviews of environmental assessments related to decommissioning of nuclear facilities. The first major task being undertaken is to produce a coupled parameter and conceptual model uncertainty assessment methodology. This task is based on previous studies that have primarily dealt individually with these two types of uncertainties. Conceptual model uncertainty analysis is based on the existence of alternative conceptual models that are generated using a set of clearly stated guidelines targeted at the needs of NRC staff. Parameter uncertainty analysis makes use of generic site characterization data as well as site-specific characterization and monitoring data to evaluate parameter uncertainty in each of the alternative conceptual models. Propagation of parameter uncertainty will be carried out through implementation of a general stochastic model of groundwater flow and transport in the saturated and unsaturated zones. Evaluation of prediction uncertainty will make use of Bayesian model averaging and visualization of model results. The goal of this study is to develop a practical tool to quantify uncertainties in the conceptual model and parameters identified in performance assessments.
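Bayesian model averaging, one of the evaluation tools named above, can be sketched in a few lines (all predictions, probabilities, and variances invented): the averaged variance adds a between-model spread term to the within-model parameter uncertainty, so conceptual-model uncertainty is not hidden by any single model's tight error bars.

```python
# Hypothetical peak-dose predictions (mSv) from three alternative conceptual
# models, with posterior model probabilities inferred from site data
pred = {"M1": 0.12, "M2": 0.30, "M3": 0.08}
prob = {"M1": 0.5, "M2": 0.3, "M3": 0.2}

# Within-model predictive variances (parameter uncertainty propagated
# through each conceptual model)
var_within = {"M1": 0.002, "M2": 0.010, "M3": 0.001}

# BMA mean: probability-weighted average of the model predictions
mean = sum(prob[m] * pred[m] for m in pred)

# BMA variance: within-model uncertainty plus between-model spread
var = (sum(prob[m] * var_within[m] for m in pred)
       + sum(prob[m] * (pred[m] - mean)**2 for m in pred))

print(round(mean, 3), round(var, 4))
```

Here the between-model term dominates, which is the typical signature of cases where the choice of conceptual model, not parameter estimation, controls the prediction uncertainty.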
Uncertainty in NIST Force Measurements
Bartel, Tom
2005-01-01
This paper focuses upon the uncertainty of force calibration measurements at the National Institute of Standards and Technology (NIST). The uncertainty of the realization of force for the national deadweight force standards at NIST is discussed, as well as the uncertainties associated with NIST’s voltage-ratio measuring instruments and with the characteristics of transducers being calibrated. The combined uncertainty is related to the uncertainty of dissemination for force transfer standards sent to NIST for calibration. PMID:27308181
NASA Astrophysics Data System (ADS)
Thyer, Mark; Renard, Benjamin; Kavetski, Dmitri; Kuczera, George; Franks, Stewart William; Srikanthan, Sri
2009-12-01
The lack of a robust framework for quantifying the parametric and predictive uncertainty of conceptual rainfall-runoff (CRR) models remains a key challenge in hydrology. The Bayesian total error analysis (BATEA) methodology provides a comprehensive framework to hypothesize, infer, and evaluate probability models describing input, output, and model structural error. This paper assesses the ability of BATEA and standard calibration approaches (standard least squares (SLS) and weighted least squares (WLS)) to address two key requirements of uncertainty assessment: (1) reliable quantification of predictive uncertainty and (2) reliable estimation of parameter uncertainty. The case study presents a challenging calibration of the lumped GR4J model to a catchment with ephemeral responses and large rainfall gradients. Postcalibration diagnostics, including checks of predictive distributions using quantile-quantile analysis, suggest that while still far from perfect, BATEA satisfied its assumed probability models better than SLS and WLS. In addition, WLS/SLS parameter estimates were highly dependent on the selected rain gauge and calibration period. This will obscure potential relationships between CRR parameters and catchment attributes and prevent the development of meaningful regional relationships. Conversely, BATEA provided consistent, albeit more uncertain, parameter estimates and thus overcomes one of the obstacles to parameter regionalization. However, significant departures from the calibration assumptions remained even in BATEA, e.g., systematic overestimation of predictive uncertainty, especially in validation. This is likely due to the inferred rainfall errors compensating for simplified treatment of model structural error.
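The quantile-quantile diagnostic mentioned above can be sketched via the probability integral transform (PIT): if predictive uncertainty is reliable, each observation's quantile under its own predictive distribution is uniformly distributed. Everything below is synthetic data for illustration, not GR4J output.

```python
import random, math

random.seed(4)

def norm_cdf(x, mu, sd):
    return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

# Hypothetical Gaussian predictive distributions (mu, sd) for 500 time steps,
# and observations generated so the predictions are well calibrated
preds = [(random.uniform(1, 10), 0.5) for _ in range(500)]
obs = [random.gauss(mu, 0.5) for mu, _ in preds]

# PIT: quantile of each observation under its predictive distribution;
# reliable predictive uncertainty => PIT values ~ Uniform(0, 1)
pit = sorted(norm_cdf(o, mu, sd) for o, (mu, sd) in zip(obs, preds))

# Quantile-quantile check against the uniform: maximum deviation of the
# sorted PIT values from the uniform quantiles (a KS-style statistic)
dev = max(abs(p - (i + 0.5) / len(pit)) for i, p in enumerate(pit))
print(round(dev, 3))
```

Systematic overestimation of predictive uncertainty, of the kind reported in validation above, would show up as PIT values piled toward 0.5 rather than spread uniformly.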
Uncertainty in Computational Aerodynamics
NASA Technical Reports Server (NTRS)
Luckring, J. M.; Hemsch, M. J.; Morrison, J. H.
2003-01-01
An approach is presented to treat computational aerodynamics as a process, subject to the fundamental quality assurance principles of process control and process improvement. We consider several aspects affecting uncertainty for the computational aerodynamic process and present a set of stages to determine the level of management required to meet risk assumptions desired by the customer of the predictions.
ERIC Educational Resources Information Center
Wargo, John
1985-01-01
Draws conclusions on the scientific uncertainty surrounding most chemical use regulatory decisions, examining the evolution of law and science, benefit analysis, and improving information. Suggests: (1) rapid development of knowledge of chemical risks and (2) a regulatory system which is flexible to new scientific knowledge. (DH)
Uncertainties in repository modeling
Wilson, J.R.
1996-12-31
The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend the regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified, given the uncertainties in dating, calibration, and modeling.
Uncertainty and nonseparability
NASA Astrophysics Data System (ADS)
de La Torre, A. C.; Catuogno, P.; Ferrando, S.
1989-06-01
A quantum covariance function is introduced whose real and imaginary parts are related to the independent contributions to the uncertainty principle: noncommutativity of the operators and nonseparability. It is shown that factorizability of states is a sufficient but not necessary condition for separability. It is suggested that all quantum effects could be considered to be a consequence of nonseparability alone.
ICYESS 2013: Understanding and Interpreting Uncertainty
NASA Astrophysics Data System (ADS)
Rauser, F.; Niederdrenk, L.; Schemann, V.; Schmidt, A.; Suesser, D.; Sonntag, S.
2013-12-01
We will report the outcomes and highlights of the Interdisciplinary Conference of Young Earth System Scientists (ICYESS) on Understanding and Interpreting Uncertainty in September 2013, Hamburg, Germany. This conference is aimed at early career scientists (Masters to Postdocs) from a large variety of scientific disciplines and backgrounds (natural, social and political sciences) and will enable 3 days of discussions on a variety of uncertainty-related aspects: 1) How do we deal with implicit and explicit uncertainty in our daily scientific work? What is uncertain for us, and for which reasons? 2) How can we communicate these uncertainties to other disciplines? E.g., is uncertainty in cloud parameterization and respectively equilibrium climate sensitivity a concept that is understood equally well in natural and social sciences that deal with Earth System questions? Or vice versa, is, e.g., normative uncertainty as in choosing a discount rate relevant for natural scientists? How can those uncertainties be reconciled? 3) How can science communicate this uncertainty to the public? Is it useful at all? How are the different possible measures of uncertainty understood in different realms of public discourse? Basically, we want to learn from all disciplines that work together in the broad Earth System Science community how to understand and interpret uncertainty - and then transfer this understanding to the problem of how to communicate with the public, or its different layers / agents. ICYESS is structured in a way that participation is only possible via presentation, so every participant will give their own professional input into how the respective disciplines deal with uncertainty. Additionally, a large focus is put onto communication techniques; there are no 'standard presentations' in ICYESS. Keynote lectures by renowned scientists and discussions will lead to a deeper interdisciplinary understanding of what we do not really know, and how to deal with it. Many
An uncertainty inventory demonstration - a primary step in uncertainty quantification
Langenbrunner, James R.; Booker, Jane M; Hemez, Francois M; Salazar, Issac F; Ross, Timothy J
2009-01-01
Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with the words 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step in the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.
Optimal uncertainty quantification with model uncertainty and legacy data
NASA Astrophysics Data System (ADS)
Kamga, P.-H. T.; Li, B.; McKerns, M.; Nguyen, L. H.; Ortiz, M.; Owhadi, H.; Sullivan, T. J.
2014-12-01
We present an optimal uncertainty quantification (OUQ) protocol for systems that are characterized by an existing physics-based model and for which only legacy data are available, i.e., no additional experimental testing of the system is possible. Specifically, the OUQ strategy developed in this work consists of using the legacy data to establish, in a probabilistic sense, the level of error of the model, or modeling error, and to subsequently use the validated model as a basis for the determination of probabilities of outcomes. The quantification of modeling uncertainty specifically establishes, to a specified confidence, the probability that the actual response of the system lies within a certain distance of the model. Once the extent of model uncertainty has been established in this manner, the model can be conveniently used to stand in for the actual or empirical response of the system in order to compute probabilities of outcomes. To this end, we resort to the OUQ reduction theorem of Owhadi et al. (2013) in order to reduce the computation of optimal upper and lower bounds on probabilities of outcomes to a finite-dimensional optimization problem. We illustrate the resulting UQ protocol by means of an application concerned with the response to hypervelocity impact of 6061-T6 Aluminum plates by Nylon 6/6 impactors at impact velocities in the range of 5-7 km/s. The hypervelocity impact application demonstrates the remarkable ability of the legacy OUQ protocol to process diverse information on the system and to supply rigorous bounds on system performance under realistic, less-than-ideal scenarios.
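The OUQ reduction theorem referred to above says that, for moment-constrained problems, the worst case over all admissible probability measures is attained on discrete measures with few support points. A minimal sketch of that idea, a single mean constraint on a bounded interval with a grid search over two-point measures (all numbers illustrative, far simpler than the paper's impact application):

```python
import numpy as np

def ouq_upper_bound(a, b, mu, t, n=400):
    """Worst-case P(X >= t) over all distributions on [a, b] with mean mu,
    searched over two-point measures, the extreme points to which the
    OUQ reduction collapses this simple moment problem."""
    best = 0.0
    for x1 in np.linspace(a, t, n, endpoint=False):   # mass below threshold
        for x2 in np.linspace(t, b, n):               # mass at/above threshold
            if x2 == x1:
                continue
            p = (mu - x1) / (x2 - x1)                 # weight on x2 fixed by the mean
            if 0.0 <= p <= 1.0:
                best = max(best, p)
    return best

# Sanity check: with only a mean constraint the sharp bound is the
# classical Markov-type value (mu - a)/(t - a), here 0.4.
print(ouq_upper_bound(0.0, 1.0, mu=0.2, t=0.5))
```

The grid search attains the analytic bound because the maximizing measure puts mass at the interval endpoint and at the threshold itself.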
NASA Astrophysics Data System (ADS)
Luna, D.; Alexander, P.; de la Torre, A.
2013-09-01
The application of the Global Positioning System (GPS) radio occultation (RO) method to the atmosphere enables the determination of height profiles of temperature, among other variables. From these measurements, gravity wave activity is usually quantified by calculating the potential energy through the integration of the ratio of perturbation and background temperatures between two given altitudes in each profile. The uncertainty in the estimation of wave activity depends on the systematic biases and random errors of the measured temperature, but also on additional factors like the selected vertical integration layer and the separation method between background and perturbation temperatures. In this study, the contributions of different parameters and variables to the uncertainty in the calculation of gravity wave potential energy in the lower stratosphere are investigated and quantified. In particular, a Monte Carlo method is used to evaluate the uncertainty that results from different GPS RO temperature error distributions. In addition, our analysis shows that RO data above 30 km height become dubious for gravity wave potential energy calculations.
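Gravity wave potential energy from RO temperature profiles is commonly computed as E_p = (1/2)(g/N)^2 <(T'/T_bar)^2>. The Monte Carlo idea described above can be sketched with a synthetic profile and an assumed 0.5 K Gaussian temperature error (all profile and error values are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
g, N = 9.81, 0.02                         # gravity (m/s^2), buoyancy frequency (1/s), assumed constant
z = np.arange(20e3, 30e3, 200.0)          # integration layer 20-30 km, 200 m sampling
T_bg = 220.0 + 0.002 * (z - 20e3)         # idealized background temperature (K)
T_prime = 1.5 * np.sin(2 * np.pi * z / 4e3)  # synthetic 4-km wave perturbation (K)

def pot_energy(tp, tb):
    # E_p = (1/2) (g/N)^2 <(T'/T_bar)^2>, layer average, in J/kg
    return 0.5 * (g / N) ** 2 * np.mean((tp / tb) ** 2)

# Monte Carlo: add Gaussian RO temperature errors (assumed sigma = 0.5 K)
# and inspect the spread, and bias, this induces in E_p.
sigma_T = 0.5
samples = [pot_energy(T_prime + rng.normal(0, sigma_T, z.size), T_bg)
           for _ in range(2000)]
print(np.mean(samples), np.std(samples))
```

Note that uncorrelated noise biases E_p upward by roughly (g/N)^2 sigma^2 / (2 T_bar^2), which is one reason the error distribution itself matters.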
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R., Jr.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
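For context, the quantity the IVE attaches uncertainty to is the midsection velocity-area discharge sum. The sketch below computes that sum with generic first-order error propagation; it is not the IVE itself, which instead estimates variance from at-site interpolation residuals, and all numbers are hypothetical:

```python
import numpy as np

# Hypothetical gauging: station positions b (m), depths d (m),
# mean point velocities v (m/s) at each vertical.
b = np.array([0.0, 1.0, 2.5, 4.0, 5.5, 7.0, 8.0])
d = np.array([0.0, 0.6, 1.1, 1.4, 1.0, 0.5, 0.0])
v = np.array([0.0, 0.25, 0.45, 0.60, 0.40, 0.20, 0.0])

# Midsection method: each vertical represents a panel of width
# (b[i+1] - b[i-1]) / 2, half-panels at the banks.
w = np.empty_like(b)
w[1:-1] = (b[2:] - b[:-2]) / 2.0
w[0] = (b[1] - b[0]) / 2.0
w[-1] = (b[-1] - b[-2]) / 2.0
Q = np.sum(v * d * w)                     # total discharge (m^3/s)

# First-order propagation of independent random errors in v and d
# (relative standard uncertainties are assumptions, not from the paper).
sv, sd = 0.05, 0.03
q = v * d * w
var_Q = np.sum((q * sv) ** 2 + (q * sd) ** 2)
print(Q, np.sqrt(var_Q))
```

The paper's point is that the per-vertical error terms should come from the measurement data themselves rather than from fixed laboratory-derived percentages like the `sv`, `sd` assumed here.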
NASA Astrophysics Data System (ADS)
Weihs, Philipp; Staiger, Henning; Tinz, Birger; Batchvarova, Ekaterina; Rieder, Harald; Vuilleumier, Laurent; Maturilli, Marion; Jendritzky, Gerd
2012-05-01
In the present study, we investigate the determination accuracy of the Universal Thermal Climate Index (UTCI). We study especially the UTCI uncertainties due to uncertainties in radiation fluxes, whose impacts on UTCI are evaluated via the mean radiant temperature (Tmrt). We assume "normal conditions", meaning that the usual meteorological information and data are available but no special additional measurements. First, the uncertainty arising only from the measurement uncertainties of the meteorological data is determined. Here, simulations show that uncertainties between 0.4 and 2 K may be expected due to the uncertainty of just one of the meteorological input parameters. We then analyse the determination accuracy when not all radiation data are available and modelling of the missing data is required. Since radiative transfer models require a lot of information that is usually not available, we concentrate only on the determination accuracy achievable with empirical models. The simulations show that uncertainties in the calculation of the diffuse irradiance may lead to Tmrt uncertainties of up to ±2.9 K. If longwave radiation is missing, we may expect an uncertainty of ±2 K. If modelling of diffuse radiation and of longwave radiation is used for the calculation of Tmrt, we may then expect a determination uncertainty of ±3 K. If all radiative fluxes are modelled based on synoptic observation, the uncertainty in Tmrt is ±5.9 K. Because Tmrt is only one of the four inputs required in the calculation of UTCI, the uncertainty in UTCI due to the uncertainty in radiation fluxes is less than ±2 K. The UTCI uncertainties due to uncertainties of the four meteorological input values are not larger than the 6 K reference intervals of the UTCI scale, which means that UTCI may only be wrong by one UTCI scale interval. This uncertainty may, however, be critical at the two temperature extremes, i.e. under extreme hot or extreme cold conditions.
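The leverage of flux errors on Tmrt can be illustrated with the Stefan-Boltzmann inversion Tmrt = (flux/(eps*sigma))^(1/4): near 300 K the sensitivity is roughly T/(4F), about 0.17 K per W m^-2. A Monte Carlo sketch (the nominal absorbed flux and the error levels are assumptions, not values from the paper):

```python
import numpy as np

SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)

def t_mrt(flux_absorbed):
    """Mean radiant temperature (K) from total absorbed radiant flux density
    via the Stefan-Boltzmann law; human-body emissivity taken as 0.97."""
    return (flux_absorbed / (0.97 * SIGMA)) ** 0.25

rng = np.random.default_rng(1)
flux0 = 450.0                       # nominal absorbed flux (W m^-2), illustrative
for sig in (5.0, 15.0, 30.0):       # flux uncertainty from better/worse models
    t = t_mrt(rng.normal(flux0, sig, 20000))
    print(sig, t.std())             # resulting Tmrt uncertainty (K)
```

With a 30 W m^-2 flux error the Tmrt spread lands at the several-kelvin level, consistent in magnitude with the ±5.9 K quoted above for fully modelled fluxes.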
Sources of uncertainty in flood inundation maps
Bales, J.D.; Wagner, C.R.
2009-01-01
Flood inundation maps typically have been used to depict inundated areas for floods having specific exceedance levels. The uncertainty associated with the inundation boundaries is seldom quantified, in part, because all of the sources of uncertainty are not recognized and because data available to quantify uncertainty seldom are available. Sources of uncertainty discussed in this paper include hydrologic data used for hydraulic model development and validation, topographic data, and the hydraulic model. The assumption of steady flow, which typically is made to produce inundation maps, has less of an effect on predicted inundation at lower flows than for higher flows because more time typically is required to inundate areas at high flows than at low flows. Difficulties with establishing reasonable cross sections that do not intersect and that represent water-surface slopes in tributaries contribute additional uncertainties in the hydraulic modelling. As a result, uncertainty in the flood inundation polygons simulated with a one-dimensional model increases with distance from the main channel.
Uncertainty of Pyrometers in a Casting Facility
Mee, D.K.; Elkins, J.E.; Fleenor, R.M.; Morrision, J.M.; Sherrill, M.W.; Seiber, L.E.
2001-12-07
This work has established uncertainty limits for the EUO filament pyrometers, digital pyrometers, two-color automatic pyrometers, and the standards used to certify these instruments (Table 1). If symmetrical limits are used, filament pyrometers calibrated in Production have certification uncertainties of not more than ±20.5 °C traceable to NIST over the certification period. Uncertainties of these pyrometers were roughly ±14.7 °C before introduction of the working standard that allowed certification in the field. Digital pyrometers addressed in this report have symmetrical uncertainties of not more than ±12.7 °C or ±18.1 °C when certified on a Y-12 Standards Laboratory strip lamp or in a production-area tube furnace, respectively. Uncertainty estimates for automatic two-color pyrometers certified in Production are ±16.7 °C. Additional uncertainty and bias are introduced when measuring production melt temperatures. A -19.4 °C bias was measured in a large 1987 data set; it is believed to be caused primarily by the use of Pyrex™ windows (not present in the current configuration) and window fogging. Large variability (2σ = 28.6 °C) exists in the first 10 min of the hold period. This variability is attributed to emissivity variation across the melt and reflection from hot surfaces. For runs with hold periods extending to 20 min, the uncertainty approaches the calibration uncertainty of the pyrometers. When certifying pyrometers on a strip lamp at the Y-12 Standards Laboratory, it is important to limit ambient temperature variation (23 ± 4 °C), to order calibration points from high to low temperatures, to allow 6 min for the lamp to reach thermal equilibrium (12 min for certifications below 1200 °C) to minimize pyrometer bias, and to recalibrate the pyrometer if its error exceeds vendor specifications. A procedure has been written to assure conformance.
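Certification budgets like these are conventionally combined in quadrature when the components are independent. A minimal root-sum-square helper; the component values below are invented placeholders, not the certified budget from the report:

```python
import math

def rss(*components):
    """Root-sum-square combination of independent, symmetric
    uncertainty components (all in the same units)."""
    return math.sqrt(sum(c * c for c in components))

# Placeholder components in degrees C, for illustration only:
lamp_standard = 5.0      # strip-lamp calibration uncertainty
certification = 10.0     # field certification repeatability
window = 6.0             # window transmission effects
print(rss(lamp_standard, certification, window))
```

Because the combination is quadratic, the largest component dominates: the 10 °C term above contributes most of the combined value.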
Temporal uncertainty of geographical information
NASA Astrophysics Data System (ADS)
Shu, Hong; Qi, Cuihong
2005-10-01
Temporal uncertainty is a crossing point of temporal and error-aware geographical information systems. In Geoinformatics, temporal uncertainty is of the same importance as the spatial and thematic uncertainty of geographical information. However, only very recently have the standards organizations ISO/TC 211 and FGDC recognized temporal uncertainty as one of the geospatial data quality elements. Over the past decades, the temporal uncertainty of geographical information has been modeled insufficiently. To lay a foundation for logically or physically modeling temporal uncertainty, this paper aims to clarify the semantics of temporal uncertainty to some extent. General uncertainty is conceptualized with a taxonomy of uncertainty. Semantically, temporal uncertainty is progressively classified into uncertainty of time coordinates, of changes, and of dynamics. Uncertainty of multidimensional time (valid time, database time, conceptual time, etc.) is emphasized. It is realized that time-scale (granularity) transitions may lead to temporal uncertainty because transition details are lost. It is dialectically concluded that temporal uncertainty is caused by the complexity of the human-machine-earth system.
Uncertainty Modeling for Structural Control Analysis and Synthesis
NASA Technical Reports Server (NTRS)
Campbell, Mark E.; Crawley, Edward F.
1996-01-01
The development of an accurate model of uncertainties for the control of structures that undergo a change in operational environment, based solely on modeling and experimentation in the original environment is studied. The application used throughout this work is the development of an on-orbit uncertainty model based on ground modeling and experimentation. A ground based uncertainty model consisting of mean errors and bounds on critical structural parameters is developed. The uncertainty model is created using multiple data sets to observe all relevant uncertainties in the system. The Discrete Extended Kalman Filter is used as an identification/parameter estimation method for each data set, in addition to providing a covariance matrix which aids in the development of the uncertainty model. Once ground based modal uncertainties have been developed, they are localized to specific degrees of freedom in the form of mass and stiffness uncertainties. Two techniques are presented: a matrix method which develops the mass and stiffness uncertainties in a mathematical manner; and a sensitivity method which assumes a form for the mass and stiffness uncertainties in macroelements and scaling factors. This form allows the derivation of mass and stiffness uncertainties in a more physical manner. The mass and stiffness uncertainties of the ground based system are then mapped onto the on-orbit system, and projected to create an analogous on-orbit uncertainty model in the form of mean errors and bounds on critical parameters. The Middeck Active Control Experiment is introduced as experimental verification for the localization and projection methods developed. In addition, closed loop results from on-orbit operations of the experiment verify the use of the uncertainty model for control analysis and synthesis in space.
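The identification step described above can be illustrated in its simplest linear special case: a Kalman update for a constant parameter observed through a known regressor, which also yields the covariance used to set uncertainty bounds. A sketch with synthetic data (not the MACE hardware, and a linear stand-in for the Discrete Extended Kalman Filter):

```python
import numpy as np

rng = np.random.default_rng(2)
theta_true = 2.0                 # "structural parameter" to identify
R = 0.1 ** 2                     # measurement noise variance
theta, P = 0.0, 10.0             # prior estimate and covariance

for k in range(200):
    h = 1.0 + 0.5 * np.sin(0.1 * k)          # known, time-varying regressor
    y = h * theta_true + rng.normal(0, 0.1)  # noisy measurement
    # Kalman update for a constant parameter:
    S = h * P * h + R                        # innovation variance
    K = P * h / S                            # gain
    theta += K * (y - h * theta)
    P -= K * h * P

print(theta, P)   # estimate converges toward theta_true; P bounds the error
```

Repeating this over multiple data sets, as the paper does, gives both mean parameter errors and covariance-based bounds for the uncertainty model.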
On the uncertainties of stellar mass estimates via colour measurements
NASA Astrophysics Data System (ADS)
Roediger, Joel C.; Courteau, Stéphane
2015-09-01
Mass-to-light versus colour relations (MLCRs), derived from stellar population synthesis models, are widely used to estimate galaxy stellar masses (M*), yet a detailed investigation of their inherent biases and limitations is still lacking. We quantify several potential sources of uncertainty, using optical and near-infrared (NIR) photometry for a representative sample of nearby galaxies from the Virgo cluster. Our method for combining multiband photometry with MLCRs yields robust stellar masses, while errors in M* decrease as more bands are simultaneously considered. The prior assumptions in one's stellar population modelling dominate the error budget, creating a colour-dependent bias of up to 0.6 dex if NIR fluxes are used (0.3 dex otherwise). This matches the systematic errors associated with the method of spectral energy distribution (SED) fitting, indicating that MLCRs do not suffer from much additional bias. Moreover, MLCRs and SED fitting yield similar degrees of random error (~0.1-0.14 dex) when applied to mock galaxies and, on average, equivalent masses for real galaxies with M* ~ 10^8-10^11 M⊙. The use of integrated photometry introduces additional uncertainty in M* measurements, at the level of 0.05-0.07 dex. We argue that using MLCRs, instead of time-consuming SED fits, is justified in cases with complex model parameter spaces (involving, for instance, multiparameter star formation histories) and/or for large data sets. Spatially resolved methods for measuring M* should be applied for small sample sizes and/or when accuracies less than 0.1 dex are required. An appendix provides our MLCR transformations for 10 colour permutations of the grizH filter set.
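An MLCR converts a colour into a mass-to-light ratio via log10(M/L) = a + b * colour, which then multiplies the measured luminosity. A sketch of the conversion; the coefficients and solar absolute magnitude below are placeholders of plausible magnitude, not the paper's fitted relations:

```python
def stellar_mass(abs_mag_i, g_minus_i, a=-0.68, b=0.70):
    """Stellar mass (Msun) from i-band absolute magnitude and g-i colour,
    via the MLCR log10(M/L_i) = a + b*(g-i).  Coefficients and the solar
    magnitude are illustrative assumptions, not fitted values."""
    M_SUN_I = 4.53                         # assumed solar absolute magnitude, i band
    log_L = 0.4 * (M_SUN_I - abs_mag_i)    # log10 luminosity in L_sun
    log_ML = a + b * g_minus_i             # log10 mass-to-light ratio
    return 10 ** (log_L + log_ML)

# A bright, red galaxy (M_i = -21, g-i = 1.0) comes out near 10^10 Msun.
print(stellar_mass(-21.0, 1.0))
```

The colour-dependent bias quantified in the abstract corresponds to systematic shifts in `a` and `b` between different stellar population model assumptions.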
Uncertainty Quantification Techniques of SCALE/TSUNAMI
Rearden, Bradley T; Mueller, Don
2011-01-01
additional administrative margin to account for gaps in the validation data or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.
BICEP2 III: Instrumental systematics
Ade, P. A. R.
2015-11-23
In a companion paper, we have reported a >5σ detection of degree scale B-mode polarization at 150 GHz by the Bicep2 experiment. Here we provide a detailed study of potential instrumental systematic contamination to that measurement. We focus extensively on spurious polarization that can potentially arise from beam imperfections. We present a heuristic classification of beam imperfections according to their symmetries and uniformities, and discuss how resulting contamination adds or cancels in maps that combine observations made at multiple orientations of the telescope about its boresight axis. We introduce a technique, which we call "deprojection," for filtering the leading order beam-induced contamination from time-ordered data, and show that it reduces power in Bicep2's actual and null-test BB spectra consistent with predictions using high signal-to-noise beam shape measurements. We detail the simulation pipeline that we use to directly simulate instrumental systematics and the calibration data used as input to that pipeline. Finally, we present the constraints on BB contamination from individual sources of potential systematics. We find that systematics contribute BB power that is a factor of ~10× below Bicep2's three-year statistical uncertainty, and negligible compared to the observed BB signal. Lastly, the contribution to the best-fit tensor/scalar ratio is at a level equivalent to r = (3–6) × 10^{–3}.
BICEP2 III: Instrumental Systematics
NASA Astrophysics Data System (ADS)
BICEP2 Collaboration; Ade, P. A. R.; Aikin, R. W.; Barkats, D.; Benton, S. J.; Bischoff, C. A.; Bock, J. J.; Brevik, J. A.; Buder, I.; Bullock, E.; Dowell, C. D.; Duband, L.; Filippini, J. P.; Fliescher, S.; Golwala, S. R.; Halpern, M.; Hasselfield, M.; Hildebrandt, S. R.; Hilton, G. C.; Irwin, K. D.; Karkare, K. S.; Kaufman, J. P.; Keating, B. G.; Kernasovskiy, S. A.; Kovac, J. M.; Kuo, C. L.; Leitch, E. M.; Lueker, M.; Netterfield, C. B.; Nguyen, H. T.; O'Brient, R.; Ogburn, R. W., IV; Orlando, A.; Pryke, C.; Richter, S.; Schwarz, R.; Sheehy, C. D.; Staniszewski, Z. K.; Sudiwala, R. V.; Teply, G. P.; Tolan, J. E.; Turner, A. D.; Vieregg, A. G.; Wong, C. L.; Yoon, K. W.
2015-12-01
In a companion paper, we have reported a >5σ detection of degree scale B-mode polarization at 150 GHz by the BICEP2 experiment. Here we provide a detailed study of potential instrumental systematic contamination to that measurement. We focus extensively on spurious polarization that can potentially arise from beam imperfections. We present a heuristic classification of beam imperfections according to their symmetries and uniformities, and discuss how resulting contamination adds or cancels in maps that combine observations made at multiple orientations of the telescope about its boresight axis. We introduce a technique, which we call "deprojection," for filtering the leading order beam-induced contamination from time-ordered data, and show that it reduces power in BICEP2's actual and null-test BB spectra consistent with predictions using high signal-to-noise beam shape measurements. We detail the simulation pipeline that we use to directly simulate instrumental systematics and the calibration data used as input to that pipeline. Finally, we present the constraints on BB contamination from individual sources of potential systematics. We find that systematics contribute BB power that is a factor of ∼10× below BICEP2's three-year statistical uncertainty, and negligible compared to the observed BB signal. The contribution to the best-fit tensor/scalar ratio is at a level equivalent to r = (3-6) × 10^{-3}.
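At its core, deprojection amounts to regressing known leakage templates out of the time-ordered data and subtracting the fit. A toy least-squares version with synthetic regressors; in the real pipeline the templates are built from derivatives of the temperature map along the scan, and everything below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
signal = 0.01 * rng.standard_normal(n)        # sky signal in the timestream
# Synthetic leakage templates standing in for beam-mismatch modes:
templates = np.vstack([np.sin(np.linspace(0, 40, n)),
                       np.cos(np.linspace(0, 40, n))])
leak_coeffs = np.array([0.3, -0.2])           # unknown beam-mismatch couplings
tod = signal + leak_coeffs @ templates + 0.001 * rng.standard_normal(n)

# Deprojection: least-squares fit of the templates to the time-ordered
# data, then subtraction of the fitted leakage.
coeffs, *_ = np.linalg.lstsq(templates.T, tod, rcond=None)
cleaned = tod - coeffs @ templates
print(coeffs)                     # recovers the injected couplings
print(np.std(cleaned - signal))   # small residual after deprojection
```

Because the sky signal is uncorrelated with the templates, the regression recovers only the leakage couplings and leaves the signal largely untouched, which is the property that makes deprojection safe as a filter.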
Uncertainties in climate stabilization
Wigley, T. M.; Clarke, Leon E.; Edmonds, James A.; Jacoby, H. D.; Paltsev, S.; Pitcher, Hugh M.; Reilly, J. M.; Richels, Richard G.; Sarofim, M. C.; Smith, Steven J.
2009-11-01
We explore the atmospheric composition, temperature and sea level implications of new reference and cost-optimized stabilization emissions scenarios produced using three different Integrated Assessment (IA) models for U.S. Climate Change Science Program (CCSP) Synthesis and Assessment Product 2.1a. We also consider an extension of one of these sets of scenarios out to 2300. Stabilization is defined in terms of radiative forcing targets for the sum of gases potentially controlled under the Kyoto Protocol. For the most stringent stabilization case ("Level 1" with CO2 concentration stabilizing at about 450 ppm), peak CO2 emissions occur close to today, implying a need for immediate CO2 emissions abatement if we wish to stabilize at this level. In the extended reference case, CO2 stabilizes at 1000 ppm in 2200 – but even to achieve this target requires large and rapid CO2 emissions reductions over the 22nd century. Future temperature changes for the Level 1 stabilization case show considerable uncertainty even when a common set of climate model parameters is used (a result of different assumptions for non-Kyoto gases). Uncertainties are about a factor of three when climate sensitivity uncertainties are accounted for. We estimate the probability that warming from pre-industrial times will be less than 2 °C to be about 50%. For one of the IA models, warming in the Level 1 case is greater out to 2050 than in the reference case, due to the effect of decreasing SO2 emissions that occur as a side effect of the policy-driven reduction in CO2 emissions. Sea level rise uncertainties for the Level 1 case are very large, with increases ranging from 12 to 100 cm over 2000 to 2300.
Calibration Under Uncertainty.
Swiler, Laura Painton; Trucano, Timothy Guy
2005-03-01
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
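The contrast drawn above can be sketched numerically: plain least squares treats the model as exact, while a CUU-flavoured posterior widens the likelihood with a model-discrepancy variance so the parameter uncertainty honestly reflects both data noise and model error. A toy example (the model, the data, and the discrepancy variance are all invented for illustration; this is not the report's method):

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 50)
# "Experiment": the truth has curvature the model lacks, i.e. model-form error.
y_obs = 1.3 * x + 0.2 * x**2 + rng.normal(0, 0.05, x.size)

def model(a):
    return a * x                           # deliberately imperfect linear model

a_grid = np.linspace(0.5, 2.5, 2001)

# 1) Plain least squares: model treated as the true representation.
sse = [np.sum((y_obs - model(a)) ** 2) for a in a_grid]
a_ls = a_grid[int(np.argmin(sse))]

# 2) CUU-flavoured posterior: inflate the variance with an assumed
#    model-discrepancy term, then form a normalized grid posterior.
s2 = 0.05 ** 2 + 0.1 ** 2                  # data variance + assumed discrepancy variance
logpost = np.array([-np.sum((y_obs - model(a)) ** 2) / (2 * s2) for a in a_grid])
w = np.exp(logpost - logpost.max())
w /= w.sum()
a_mean = np.sum(w * a_grid)
a_std = np.sqrt(np.sum(w * (a_grid - a_mean) ** 2))
print(a_ls, a_mean, a_std)
```

Both estimators land near the same best-fit slope, but the inflated-variance posterior reports a visibly wider (more honest) uncertainty than the least-squares fit implies.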
Thyroid disrupting chemicals in plastic additives and thyroid health.
Andra, Syam S; Makris, Konstantinos C
2012-01-01
The globally escalating thyroid nodule incidence rates can be only partially ascribed to better diagnostics, leaving room for the assessment of environmental risk factors in thyroid disease. Endocrine disruptors or thyroid-disrupting chemicals (TDC) such as bisphenol A, phthalates, and polybrominated diphenyl ethers are widely used as plastic additives in consumer products. This comprehensive review studied the magnitude and uncertainty of TDC exposures and their effects on thyroid hormones for sensitive subpopulation groups such as pregnant women, infants, and children. Our findings qualitatively suggest mixed but significant (α = 0.05) associations between TDC exposure and natural thyroid hormone levels (positive or negative in sign). Future studies should undertake systematic meta-analyses to elucidate pooled TDC effect estimates on thyroid health indicators and outcomes. PMID:22690712
SU-E-T-573: The Robustness of a Combined Margin Recipe for Uncertainties During Radiotherapy
Stroom, J; Vieira, S; Greco, C
2014-06-01
Purpose: To investigate the variability of a safety margin recipe that combines CTV and PTV margins quadratically with several tumor-, treatment-, and user-related factors. Methods: Margin recipes were calculated by Monte Carlo simulations in 5 steps. 1. A spherical tumor, with or without isotropic microscopic disease, was irradiated with a 5-field dose plan. 2. PTV: geometric uncertainties were introduced using systematic (Sgeo) and random (sgeo) standard deviations. CTV: the microscopic disease distribution was modelled by a semi-Gaussian (Smicro) with a varying number of islets (Ni). 3. For a specific uncertainty set (Sgeo, sgeo, Smicro(Ni)), margins were varied until a pre-defined decrease in TCP or dose coverage was reached. 4. First, margin recipes were calculated for each of the three uncertainties separately. The CTV and PTV recipes were then combined quadratically to yield a final recipe M(Sgeo, sgeo, Smicro(Ni)). 5. The final M was verified by simultaneous simulation of all uncertainties. M has then been calculated for various changing parameters, such as margin criteria, penumbra steepness, islet radio-sensitivity, dose conformity, and number of fractions. We subsequently investigated A: whether the combined recipe still holds in all these situations, and B: what the margin variation was in all these cases. Results: We found that the accuracy of the combined margin recipes remains on average within 1 mm for all situations, confirming the correctness of the quadratic addition. Depending on the specific parameter, margin factors could change such that margins change by over 50%. Margin recipes based on TCP criteria are especially sensitive to more parameters than those based on purely geometric Dmin criteria. Interestingly, measures taken to minimize treatment field sizes (e.g., by optimizing dose conformity) are counteracted by the requirement of larger margins to obtain the same tumor coverage. Conclusion: Margin recipes combining geometric and microscopic uncertainties quadratically are
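The quadratic combination at the heart of the abstract can be sketched directly. For the geometric part we substitute the widely used van Herk recipe 2.5*Sigma + 0.7*sigma, which is a standard stand-in and not the abstract's own TCP-derived recipe; all numbers are illustrative:

```python
import math

def geometric_margin(S_geo, s_geo):
    """PTV margin (mm) from systematic (S) and random (s) geometric
    uncertainties, using the van Herk-style recipe 2.5*Sigma + 0.7*sigma
    (a common stand-in, not this study's fitted recipe)."""
    return 2.5 * S_geo + 0.7 * s_geo

def combined_margin(S_geo, s_geo, M_micro):
    """Quadratic (root-sum-square) combination of the geometric PTV margin
    with a microscopic-disease CTV margin, as the abstract proposes."""
    return math.sqrt(geometric_margin(S_geo, s_geo) ** 2 + M_micro ** 2)

# Illustrative inputs in mm, not taken from the study:
print(combined_margin(S_geo=2.0, s_geo=3.0, M_micro=5.0))
```

As with any quadrature sum, the combined margin is dominated by the larger of the two contributions, which is why the recipe stays within about 1 mm of the fully simulated margins.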
A Guideline for Applying Systematic Reviews to Child Language Intervention
ERIC Educational Resources Information Center
Hargrove, Patricia; Lund, Bonnie; Griffer, Mona
2005-01-01
This article focuses on applying systematic reviews to the Early Intervention (EI) literature. Systematic reviews are defined and differentiated from traditional, or narrative, reviews and from meta-analyses. In addition, the steps involved in critiquing systematic reviews and an illustration of a systematic review from the EI literature are…
Uncertainties in global ocean surface heat flux climatologies derived from ship observations
Gleckler, P.J.; Weare, B.C.
1995-08-01
A methodology to define uncertainties associated with ocean surface heat flux calculations has been developed and applied to a revised version of the Oberhuber global climatology, which utilizes a summary of the COADS surface observations. Systematic and random uncertainties in the net oceanic heat flux and each of its four components at individual grid points and for zonal averages have been estimated for each calendar month and the annual mean. The most important uncertainties of the 2° x 2° grid cell values of each of the heat fluxes are described. Annual mean net shortwave flux random uncertainties associated with errors in estimating cloud cover in the tropics yield total uncertainties which are greater than 25 W m⁻². In the northern latitudes, where the large number of observations substantially reduces the influence of these random errors, the systematic uncertainties in the utilized parameterization are largely responsible for total uncertainties in the shortwave fluxes, which usually remain greater than 10 W m⁻². Systematic uncertainties dominate in the zonal means because spatial averaging has led to a further reduction of the random errors. The situation for the annual mean latent heat flux is somewhat different in that, even for grid point values, the contributions of the systematic uncertainties tend to be larger than those of the random uncertainties at almost all latitudes. Latent heat flux uncertainties are greater than 20 W m⁻² nearly everywhere south of 40°N, and in excess of 30 W m⁻² over broad areas of the subtropics, even those with large numbers of observations. The resulting zonal mean latent heat flux uncertainties are largest (~30 W m⁻²) in the middle latitudes and subtropics and smallest (~10-25 W m⁻²) near the equator and over the northernmost regions.
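The recurring pattern in this abstract is that averaging N independent observations shrinks random errors by sqrt(N) but leaves systematic errors untouched, so data-rich regions hit a systematic floor. A sketch with illustrative W m^-2 values (not the climatology's actual numbers):

```python
import math

def total_uncertainty(systematic, random_single, n_obs):
    """Combine a systematic uncertainty (unchanged by averaging) with a
    random single-observation uncertainty (reduced by sqrt(n_obs))."""
    return math.sqrt(systematic ** 2 + random_single ** 2 / n_obs)

# As observation counts grow, the total collapses onto the systematic floor:
for n in (1, 10, 100, 1000):
    print(n, round(total_uncertainty(10.0, 25.0, n), 1))
```

This is the mechanism behind both findings above: systematic terms dominate the zonal means, and the northern latitudes remain above 10 W m⁻² no matter how many ship reports they accumulate.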
Uncertainty of the beam energy measurement in the e+e- collision using Compton backscattering
NASA Astrophysics Data System (ADS)
Mo, Xiao-Hu
2014-10-01
The beam energy in e+e- collisions is measured using Compton backscattering. The uncertainty of this measurement process is studied by means of analytical formulas, and the effects of varying energy spread and energy drift on the systematic uncertainty estimate are also studied with a Monte Carlo sampling technique. These quantitative conclusions are especially important for understanding the uncertainty of the beam energy measurement system.
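In the ultra-relativistic, head-on approximation the Compton backscattering edge is w_max = 4*w0*E^2 / (m^2 + 4*w0*E), which inverts in closed form to E = (w_max/2) * (1 + sqrt(1 + m^2/(w0*w_max))). A sketch that round-trips the formula and propagates an assumed 10 keV edge-measurement uncertainty by Monte Carlo (laser and beam values are illustrative, not from this paper):

```python
import numpy as np

ME = 0.510998928          # electron mass (MeV)

def edge_energy(E, w0):
    """Compton backscattering edge (MeV) for beam energy E and head-on
    laser photon energy w0, ultra-relativistic approximation."""
    return 4 * w0 * E ** 2 / (ME ** 2 + 4 * w0 * E)

def beam_energy(w_max, w0):
    """Closed-form inversion of the edge formula for the beam energy."""
    return 0.5 * w_max * (1 + np.sqrt(1 + ME ** 2 / (w0 * w_max)))

w0 = 0.117e-6             # CO2-laser photon energy, ~0.117 eV in MeV
E_true = 1777.0           # illustrative beam energy (MeV), tau-threshold region
w_max = edge_energy(E_true, w0)
print(beam_energy(w_max, w0))          # round-trips to the input energy

# Propagate an assumed 10 keV Gaussian edge uncertainty by Monte Carlo:
rng = np.random.default_rng(5)
E_mc = beam_energy(rng.normal(w_max, 1e-2, 10000), w0)
print(E_mc.std())                      # resulting beam-energy spread (MeV)
```

The edge sits at only a few MeV while the beam energy is in the GeV range, so small edge errors are amplified by a large factor; the Monte Carlo spread makes that sensitivity explicit, which is the kind of effect the paper's analytic and sampling studies quantify.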
Using Models that Incorporate Uncertainty
ERIC Educational Resources Information Center
Caulkins, Jonathan P.
2002-01-01
In this article, the author discusses the use in policy analysis of models that incorporate uncertainty. He believes that all models should consider incorporating uncertainty, but that at the same time it is important to understand that sampling variability is not usually the dominant driver of uncertainty in policy analyses. He also argues that…
Reviewing the literature, how systematic is systematic?
MacLure, Katie; Paudyal, Vibhu; Stewart, Derek
2016-06-01
Introduction Professor Archibald Cochrane, after whom the Cochrane Collaboration is named, was influential in promoting evidence-based clinical practice. He called for "relevant, valid research" to underpin all aspects of healthcare. Systematic reviews of the literature are regarded as a high quality source of cumulative evidence but it is unclear how truly systematic they, or other review articles, are or 'how systematic is systematic?' Today's evidence-based review industry is a burgeoning mix of specialist terminology, collaborations and foundations, databases, portals, handbooks, tools, criteria and training courses. Aim of the review This study aims to identify uses and types of reviews, key issues in planning, conducting, reporting and critiquing reviews, and factors which limit claims to be systematic. Method A rapid review of review articles published in IJCP. Results This rapid review identified 17 review articles published in IJCP between 2010 and 2015 inclusive. It explored the use of different types of review article, the variation and widely available range of guidelines, checklists and criteria which, through systematic application, aim to promote best practice. It also identified common pitfalls in endeavouring to conduct reviews of the literature systematically. Discussion Although a limited set of IJCP reviews were identified, there is clear evidence of the variation in adoption and application of systematic methods. The burgeoning evidence industry offers the tools and guidelines required to conduct systematic reviews, and other types of review, systematically. This rapid review was limited to the database of one journal over a period of 6 years. Although this review was conducted systematically, it is not presented as a systematic review. Conclusion As a research community we have yet to fully engage with readily available guidelines and tools which would help to avoid the common pitfalls. Therefore the question remains, of not just IJCP but
Analysis of automated highway system risks and uncertainties. Volume 5
Sicherman, A.
1994-10-01
This volume describes a risk analysis performed to help identify important Automated Highway System (AHS) deployment uncertainties and quantify their effect on costs and benefits for a range of AHS deployment scenarios. The analysis identified a suite of key factors affecting vehicle and roadway costs, capacities and market penetrations for alternative AHS deployment scenarios. A systematic protocol was utilized for obtaining expert judgments of key factor uncertainties in the form of subjective probability percentile assessments. Based on these assessments, probability distributions on vehicle and roadway costs, capacity and market penetration were developed for the different scenarios. The cost/benefit risk methodology and analysis provide insights by showing how uncertainties in key factors translate into uncertainties in summary cost/benefit indices.
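The protocol described above, turning expert percentile assessments into probability distributions and propagating them to a summary index, can be sketched with Monte Carlo sampling. The factors, units, and percentile values below are invented for illustration, not taken from the AHS study:

```python
import random

def sample_factor(p10, p50, p90):
    """Crude stand-in for an expert percentile assessment: treat the
    10/50/90 percentiles as the corners of a triangular distribution.
    (A real analysis would fit a distribution to the percentiles.)"""
    return random.triangular(p10, p90, p50)

def cost_benefit_index(n=20000, seed=1):
    """Propagate key-factor uncertainties to a benefit/cost ratio
    and summarize the result as 10th/50th/90th percentiles."""
    random.seed(seed)
    ratios = []
    for _ in range(n):
        vehicle_cost = sample_factor(2.0, 3.0, 5.0)    # $k per vehicle
        capacity_gain = sample_factor(1.5, 2.0, 3.0)   # x baseline throughput
        penetration = sample_factor(0.1, 0.3, 0.5)     # market fraction
        benefit = 10.0 * capacity_gain * penetration   # $k-equivalent per vehicle
        ratios.append(benefit / vehicle_cost)
    ratios.sort()
    return ratios[len(ratios) // 10], ratios[len(ratios) // 2], ratios[9 * len(ratios) // 10]

low, median, high = cost_benefit_index()
```

The spread between the low and high percentiles is the "risk" output: it shows how uncertainty in the key factors translates into uncertainty in the summary cost/benefit index.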
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2011-09-01
Verification and validation (V&V) play increasingly important roles in quantifying uncertainties and achieving high-fidelity simulations in engineering system analyses, such as transients in a complex nuclear reactor system. Traditional V&V in reactor system analysis has focused more on the validation part, or has not differentiated verification from validation. The traditional approach to uncertainty quantification is a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties, and it is inefficient for sensitivity analysis. In contrast, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code: equations for the propagation of uncertainty are constructed, and the sensitivities are solved for directly as variables in the simulation. This paper presents forward sensitivity analysis as a method to aid uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time-step and spatial-step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of time and space steps compared with other physical parameters of interest, the simulation is allowed
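A minimal sketch of the forward sensitivity idea (ours, not the authors' code): for du/dt = f(u,p) = -p·u, the sensitivity s = ∂u/∂p obeys ds/dt = (∂f/∂u)·s + ∂f/∂p = -p·s - u and is integrated alongside the state, rather than estimated by black-box re-runs:

```python
import math

def forward_sensitivity(p=0.5, t_end=2.0, dt=1e-4):
    """Forward-Euler integration of du/dt = -p*u augmented with its
    forward sensitivity equation ds/dt = -p*s - u, s(0) = 0, so that
    s = du/dp is solved directly as a simulation variable."""
    u, s, t = 1.0, 0.0, 0.0
    while t < t_end - 1e-12:
        du = -p * u
        ds = -p * s - u
        u += dt * du
        s += dt * ds
        t += dt
    return u, s

u, s = forward_sensitivity()
# Analytic solution for comparison: u = exp(-p*t), s = -t*exp(-p*t).
u_exact = math.exp(-0.5 * 2.0)
s_exact = -2.0 * math.exp(-0.5 * 2.0)
```

Treating dt itself as one more sensitivity parameter, as the paper proposes, then puts discretization error on the same footing as the physical-parameter sensitivities.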
Picturing Data With Uncertainty
NASA Technical Reports Server (NTRS)
Kao, David; Love, Alison; Dungan, Jennifer L.; Pang, Alex
2004-01-01
NASA is in the business of creating maps for scientific purposes to represent important biophysical or geophysical quantities over space and time. For example, maps of surface temperature over the globe tell scientists where and when the Earth is heating up; regional maps of the greenness of vegetation tell scientists where and when plants are photosynthesizing. There is always uncertainty associated with each value in any such map due to various factors. When uncertainty is fully modeled, instead of a single value at each map location there is a distribution expressing a set of possible outcomes at each location. We consider such distribution data to be multi-valued data, since they consist of a collection of values about a single variable; multi-valued data thus represent both the map and its uncertainty. We have been working on ways to visualize spatial multi-valued data sets effectively for fields with regularly spaced units or grid cells, such as those in NASA's Earth science applications. A new way to display distributions at multiple grid locations is to project the distributions from an individual row, column, or other user-selectable straight transect of the 2D domain. First, at each grid cell in a given slice (row, column, or transect), we compute a smooth density estimate from the underlying data. Such a density estimate of the probability density function (PDF) is generally more useful than a histogram, which is a classic density estimate. Then the collection of PDFs along a given slice is presented vertically above the slice, forming a wall. To minimize occlusion of intersecting slices, the corresponding walls are positioned at the far edges of the boundary. The PDF wall depicts the shapes of the distributions very clearly, since peaks represent the modes (or bumps) in the PDFs. We have defined roughness as the number of peaks in the distribution; roughness is another useful piece of summary information for multimodal distributions. The uncertainty of the multi
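The two computational steps described above, a smooth density estimate per grid cell followed by a peak count ("roughness"), can be sketched as follows; the bandwidth and the bimodal sample data are illustrative choices, not the authors':

```python
import numpy as np

def kde(samples, grid, bandwidth=0.4):
    """Smooth density estimate: a Gaussian kernel placed at each
    sample, evaluated on a regular grid."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    weights = np.exp(-0.5 * diffs**2)
    return weights.sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

def roughness(density):
    """Roughness = number of interior local maxima (modes) in the PDF."""
    interior = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
    return int(np.sum(interior))

rng = np.random.default_rng(0)
# One grid cell's multi-valued data: a bimodal set of possible outcomes.
samples = np.concatenate([rng.normal(-2.0, 0.4, 1000),
                          rng.normal(2.0, 0.4, 1000)])
grid = np.linspace(-4.0, 4.0, 201)
density = kde(samples, grid)
```

Stacking such density curves for every cell along a slice gives the "PDF wall"; the roughness value flags cells where a single summary statistic would hide multimodality.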
Satellite altitude determination uncertainties
NASA Technical Reports Server (NTRS)
Siry, J. W.
1972-01-01
Satellite altitude determination uncertainties are discussed from the standpoint of the GEOS-C satellite and from the longer-range viewpoint afforded by the Geopause concept. Attention is focused on methods for short-arc tracking that are essentially geometric in nature: one uses combinations of lasers and collocated cameras; the other relies only on lasers, using three or more to obtain the position fix. Two typical locales are examined: the Caribbean area, and a region associated with tracking sites at Goddard, Bermuda, and Canada that encompasses a portion of the Gulf Stream in which meanders develop.
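The laser-only geometric fix amounts to solving three range equations |x - s_i| = r_i for the satellite position x, e.g. by Gauss-Newton iteration. The station coordinates below are hypothetical, not the actual Goddard/Bermuda/Canada geometry:

```python
import numpy as np

def position_fix(stations, ranges, guess, iters=20):
    """Gauss-Newton solution of the laser range equations
    |x - s_i| = r_i for the satellite position x."""
    x = np.asarray(guess, dtype=float)
    for _ in range(iters):
        diffs = x - stations                  # (3, 3): vectors station -> satellite
        dists = np.linalg.norm(diffs, axis=1)
        residuals = dists - ranges
        jacobian = diffs / dists[:, None]     # rows are unit line-of-sight vectors
        dx, *_ = np.linalg.lstsq(jacobian, -residuals, rcond=None)
        x = x + dx
    return x

# Hypothetical three-station geometry, km.
stations = np.array([[0.0, 0.0, 0.0],
                     [1200.0, 300.0, 0.0],
                     [500.0, 1500.0, 0.0]])
truth = np.array([600.0, 600.0, 900.0])       # assumed satellite position, km
ranges = np.linalg.norm(truth - stations, axis=1)
fix = position_fix(stations, ranges, guess=[500.0, 500.0, 800.0])
```

With noise-free ranges the fix recovers the assumed position; in practice the range errors and station geometry set the altitude uncertainty the abstract discusses.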
Addressing uncertainty in adaptation planning for agriculture
Vermeulen, Sonja J.; Challinor, Andrew J.; Thornton, Philip K.; Campbell, Bruce M.; Eriyagama, Nishadi; Vervoort, Joost M.; Kinyangi, James; Jarvis, Andy; Läderach, Peter; Ramirez-Villegas, Julian; Nicklin, Kathryn J.; Hawkins, Ed; Smith, Daniel R.
2013-01-01
We present a framework for prioritizing adaptation approaches at a range of timeframes. The framework is illustrated by four case studies from developing countries, each with associated characterization of uncertainty. Two cases on near-term adaptation planning in Sri Lanka and on stakeholder scenario exercises in East Africa show how the relative utility of capacity vs. impact approaches to adaptation planning differ with level of uncertainty and associated lead time. An additional two cases demonstrate that it is possible to identify uncertainties that are relevant to decision making in specific timeframes and circumstances. The case on coffee in Latin America identifies altitudinal thresholds at which incremental vs. transformative adaptation pathways are robust options. The final case uses three crop–climate simulation studies to demonstrate how uncertainty can be characterized at different time horizons to discriminate where robust adaptation options are possible. We find that impact approaches, which use predictive models, are increasingly useful over longer lead times and at higher levels of greenhouse gas emissions. We also find that extreme events are important in determining predictability across a broad range of timescales. The results demonstrate the potential for robust knowledge and actions in the face of uncertainty. PMID:23674681
What's new in atopic eczema? An analysis of systematic reviews published in 2008 and 2009.
Batchelor, J M; Grindlay, D J C; Williams, H C
2010-12-01
This review summarizes clinically important findings from nine systematic reviews of the causes, treatment and prevention of atopic eczema (AE) published between August 2008 and August 2009. Two systematic reviews concluded that there is a strong and consistent association between filaggrin (FLG) mutations and development of eczema. The associations between FLG mutations and atopic sensitization, rhinitis and asthma are weaker than between FLG mutations and eczema, especially if those who also have eczema are excluded. The relationship between transforming growth factor levels in breast milk and eczema development is still unclear. A further systematic review found no strong evidence of a protective effect of exclusive breastfeeding for at least 3 months against eczema, even in those with a positive family history of atopy. Based on a systematic review and meta-analysis of six randomized controlled trials, supplementation with omega-3 and omega-6 oils is unlikely to play an important role in the primary prevention of eczema or allergic diseases in general. There is little evidence to support dietary restrictions of certain foods in unselected children with AE. There is also little evidence to suggest a clinically useful benefit from using probiotics in patients with established eczema. A systematic review of topical pimecrolimus and tacrolimus added little additional information to previous reviews, and did not provide any new data on long-term safety. Both of these drugs work in AE, and may reduce flares and usage of topical corticosteroids; however, there is still uncertainty about how they compare with topical corticosteroids. PMID:20649899
Flight Departure Delay and Rerouting Under Uncertainty in En Route Convective Weather
NASA Technical Reports Server (NTRS)
Mukherjee, Avijit; Grabbe, Shon; Sridhar, Banavar
2011-01-01
Delays caused by uncertainty in weather forecasts can be reduced by improving traffic flow management decisions. This paper presents a methodology for traffic flow management under uncertainty in convective weather forecasts. An algorithm for assigning departure delays and reroutes to aircraft is presented. Departure delay and route assignment are executed at multiple stages, during which, updated weather forecasts and flight schedules are used. At each stage, weather forecasts up to a certain look-ahead time are treated as deterministic and flight scheduling is done to mitigate the impact of weather on four-dimensional flight trajectories. Uncertainty in weather forecasts during departure scheduling results in tactical airborne holding of flights. The amount of airborne holding depends on the accuracy of forecasts as well as the look-ahead time included in the departure scheduling. The weather forecast look-ahead time is varied systematically within the experiments performed in this paper to analyze its effect on flight delays. Based on the results, longer look-ahead times cause higher departure delays and additional flying time due to reroutes. However, the amount of airborne holding necessary to prevent weather incursions reduces when the forecast look-ahead times are higher. For the chosen day of traffic and weather, setting the look-ahead time to 90 minutes yields the lowest total delay cost.
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2013-01-01
This paper presents the extended forward sensitivity analysis as a method to help uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time-step and spatial-step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of time and space steps compared with other physical parameters of interest, the simulation can be run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results. The time- and space-step forward sensitivity analysis method can also replace the traditional time-step and grid convergence study at much lower computational cost. Two well-defined benchmark problems with manufactured solutions are used to demonstrate the method.
Using data assimilation for systematic model improvement
NASA Astrophysics Data System (ADS)
Lang, Matthew S.; van Leeuwen, Peter Jan; Browne, Phil
2016-04-01
In Numerical Weather Prediction, parameterisations are used to simulate missing physics in the model, whether due to a lack of scientific understanding or a lack of computing power to address all the known physical processes. Parameterisations are sources of large uncertainty in a model: the parameter values they use cannot be measured directly and hence are often not well known, and the parameterisations themselves are approximations of the processes present in the true atmosphere. While there are many efficient and effective methods for combined state/parameter estimation in data assimilation, such as state augmentation, these are not effective at estimating the structure of parameterisations. A new method of parameterisation estimation is proposed that uses sequential data assimilation to estimate the errors in the numerical model at each space-time point, for each model equation. These errors are then fitted to predetermined functional forms of missing physics or parameterisations based upon prior information, which typically takes the form of expert knowledge. The method picks out the functional form, or the combination of functional forms, that best fits the error structure. We applied the method to a one-dimensional advection model with additive model error, and it is shown that the method can accurately estimate parameterisations, with consistent error estimates. It is also demonstrated that state augmentation is not successful. The results indicate that this new method is a powerful tool for systematic model improvement.
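A toy version of the fitting step can be sketched as a least-squares regression of estimated model errors onto a library of candidate functional forms. The candidate forms, coefficients, and synthetic "truth" below are all invented for illustration:

```python
import numpy as np

def laplacian(u, dx):
    return np.gradient(np.gradient(u, dx), dx)

def advection(u, dx):
    return -np.gradient(u, dx)

def fit_missing_physics(u, errors, dx, candidates):
    """Least-squares fit of model-error estimates (e.g. assimilation
    increments per space-time point) to candidate functional forms;
    the dominant coefficients indicate which combination of forms
    best explains the error structure."""
    design = np.column_stack([c(u, dx) for c in candidates])
    coeffs, *_ = np.linalg.lstsq(design, errors, rcond=None)
    return coeffs

rng = np.random.default_rng(2)
dx = 0.1
x = np.arange(0.0, 10.0, dx)
u = np.sin(x)
# Synthetic truth: the model is "missing" a diffusion term 0.3 * u_xx.
errors = 0.3 * laplacian(u, dx) + 0.01 * rng.standard_normal(x.size)
coeffs = fit_missing_physics(u, errors, dx, [laplacian, advection])
```

The fit assigns a coefficient near 0.3 to the diffusion form and near zero to the advection form, i.e. it picks out the functional form that generated the errors.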
Evaluation of measurement uncertainty of glucose in clinical chemistry.
Berçik Inal, B; Koldas, M; Inal, H; Coskun, C; Gümüs, A; Döventas, Y
2007-04-01
The International Vocabulary of Basic and General Terms in Metrology (VIM) defines uncertainty of measurement as a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty of measurement comprises many components. A measurement uncertainty value should be reported, alongside every parameter, by all accredited institutions, as this value indicates the reliability of the measurement. The GUM (Guide to the Expression of Uncertainty in Measurement) provides directions for evaluating uncertainty, and the Eurachem/CITAC Guide CG4, published by the Eurachem/CITAC Working Group in 2000, likewise offers a mathematical model with which uncertainty can be calculated. There are two types of uncertainty evaluation: type A, the evaluation of uncertainty through statistical analysis, and type B, the evaluation of uncertainty through other means, for example a certified reference material. The Eurachem Guide uses four types of distribution function: (1) a rectangular distribution, for a certificate that gives limits without specifying a level of confidence (u(x) = a/√3); (2) a triangular distribution, for values concentrated near the same point (u(x) = a/√6); (3) a normal distribution, in which an uncertainty is given in the form of a standard deviation s, a relative standard deviation s/√n, or a coefficient of variation CV% without specifying the distribution (a = certificate value, u = standard uncertainty); and (4) a confidence interval. PMID:17460183
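The distribution-dependent divisors above can be sketched directly; the glucose numbers here are illustrative, not from the paper:

```python
import math

def standard_uncertainty(half_width, distribution):
    """Type B standard uncertainty for a quoted limit of +/-a,
    using the divisors from the GUM/Eurachem guidance:
    rectangular -> a/sqrt(3), triangular -> a/sqrt(6)."""
    divisors = {"rectangular": math.sqrt(3), "triangular": math.sqrt(6)}
    return half_width / divisors[distribution]

def combined_uncertainty(components):
    """Combine independent standard uncertainties in quadrature."""
    return math.sqrt(sum(u**2 for u in components))

# Illustrative glucose budget: a certificate limit of +/-0.05 mmol/L
# with no stated confidence level (rectangular), plus a repeatability SD.
u_cal = standard_uncertainty(0.05, "rectangular")
u_rep = 0.04  # type A component from replicate measurements, mmol/L
u_c = combined_uncertainty([u_cal, u_rep])
U = 2 * u_c   # expanded uncertainty with coverage factor k = 2
```

The expanded uncertainty U is the value an accredited laboratory would report alongside the measurement result.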
Uncertainty, entropy, and non-Gaussianity for mixed states
NASA Astrophysics Data System (ADS)
Mandilara, Aikaterini; Karpov, Evgueni; Cerf, Nicolas J.
2010-06-01
In the space of mixed states the Schrödinger-Robertson uncertainty relation holds, though it can never be saturated. Two tight extensions of this relation in the space of mixed states exist: one proposed by Dodonov and Man'ko, where the lower limit on the uncertainty depends on the purity of the state, and another, proposed by Bastiaans, where the uncertainty is bounded by the von Neumann entropy of the state. Driven by needs that have emerged in the field of quantum information, in a recent work we extended the purity-bounded uncertainty relation by adding an additional parameter characterizing the state, namely its degree of non-Gaussianity. In this work we present an analogous extension of the entropy-bounded uncertainty relation. The common points and differences between the two extensions of the uncertainty relation help us draw more general conclusions concerning the bounds on the non-Gaussianity of mixed states.
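The phenomenon that a Robertson-type bound holds but is not saturated for mixed states can be checked numerically in the simplest setting, a qubit with Pauli operators (this finite-dimensional example is ours, not the paper's continuous-variable setting):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def variance(rho, op):
    mean = np.trace(rho @ op).real
    return np.trace(rho @ op @ op).real - mean**2

def robertson_gap(rho):
    """Slack in Delta(sx)*Delta(sy) >= |<[sx,sy]>|/2 = |<sz>|;
    non-negative for every valid state."""
    lhs = np.sqrt(variance(rho, sx) * variance(rho, sy))
    rhs = abs(np.trace(rho @ sz).real)
    return lhs - rhs

# Mixing |0><0| with the maximally mixed state opens a strict gap.
p = 0.8
rho_mixed = p * np.array([[1, 0], [0, 0]], dtype=complex) + (1 - p) * np.eye(2) / 2
rho_pure = np.array([[1, 0], [0, 0]], dtype=complex)
```

For the pure state the bound is saturated (zero gap), while the mixed state leaves a strictly positive gap, which is the room that purity- and entropy-dependent bounds tighten.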
NASA Astrophysics Data System (ADS)
Hobson, Art
2011-10-01
An earlier paper² introduces quantum physics by means of four experiments: Young's double-slit interference experiment using (1) a light beam, (2) a low-intensity light beam with time-lapse photography, (3) an electron beam, and (4) a low-intensity electron beam with time-lapse photography. It is ironic that, although these experiments demonstrate most of the quantum fundamentals, conventional pedagogy stresses their difficult and paradoxical nature. These paradoxes (i.e., logical contradictions) vanish, and understanding becomes simpler, if one takes seriously the fact that quantum mechanics is the nonrelativistic limit of our most accurate physical theory, namely quantum field theory, and treats the Schrödinger wave function, as well as the electromagnetic field, as quantized fields.² Both the Schrödinger field, or "matter field," and the EM field are made of "quanta": spatially extended but energetically discrete chunks or bundles of energy. Each quantum comes nonlocally from the entire space-filling field and interacts with macroscopic systems, such as the viewing screen, by collapsing into an atom instantaneously and randomly, in accordance with the probability amplitude specified by the field. Thus, uncertainty and nonlocality are inherent in quantum physics. This paper is about quantum uncertainty. A planned later paper will take up quantum nonlocality.
The maintenance of uncertainty
NASA Astrophysics Data System (ADS)
Smith, L. A.
Contents: Introduction; Preliminaries; State-space dynamics; Linearized dynamics of infinitesimal uncertainties; Instantaneous infinitesimal dynamics; Finite-time evolution of infinitesimal uncertainties; Lyapunov exponents and predictability; The Baker's apprentice map; Infinitesimals and predictability; Dimensions; The Grassberger-Procaccia algorithm; Towards a better estimate from Takens' estimators; Space-time-separation diagrams; Intrinsic limits to the analysis of geometry; Takens' theorem; The method of delays; Noise; Prediction, prophecy, and pontification; Introduction; Simulations, models and physics; Ground rules; Data-based models: dynamic reconstructions; Analogue prediction; Local prediction; Global prediction; Accountable forecasts of chaotic systems; Evaluating ensemble forecasts; The annulus; Prophecies; Aids for more reliable nonlinear analysis; Significant results: surrogate data, synthetic data and self-deception; Surrogate data and the bootstrap; Surrogate predictors: Is my model any good?; Hints for the evaluation of new techniques; Avoiding simple straw men; Feasibility tests for the identification of chaos; On detecting "tiny" data sets; Building models consistent with the observations; Cost functions; ι-shadowing: Is my model any good? (reprise); Casting infinitely long shadows (out-of-sample); Distinguishing model error and system sensitivity; Forecast error and model sensitivity; Accountability; Residual predictability; Deterministic or stochastic dynamics?; Using ensembles to distinguish the expectation from the expected; Numerical Weather Prediction; Probabilistic prediction with a deterministic model; The analysis; Constructing and interpreting ensembles; The outlook(s) for today; Conclusion; Summary
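The connection between Lyapunov exponents and the growth of infinitesimal uncertainties, central to the contents listed above, can be illustrated with a standard toy example (the logistic map, which is ours, not the chapter's):

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, n=100000, burn=1000):
    """Lyapunov exponent of the logistic map x -> r*x*(1-x): the
    orbit-average of log|f'(x)| = log|r*(1-2x)|, the local stretching
    rate of infinitesimal uncertainties. A positive exponent means
    uncertainties grow exponentially, limiting predictability."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n
```

For r = 4 the exponent converges to ln 2 ≈ 0.693: after k steps an infinitesimal uncertainty has grown by roughly 2^k, so each iteration costs about one bit of predictability.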
Antarctic Photochemistry: Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Stewart, Richard W.; McConnell, Joseph R.
1999-01-01
Understanding the photochemistry of the Antarctic region is important for several reasons. Analysis of ice cores provides historical information on several species, such as hydrogen peroxide and sulfur-bearing compounds; the former can potentially provide information on the history of oxidants in the troposphere, and the latter may shed light on DMS-climate relationships. Extracting such information requires that we be able to model the photochemistry of the Antarctic troposphere and relate atmospheric concentrations to deposition rates and sequestration in the polar ice. This paper deals with one aspect of the uncertainty inherent in photochemical models of the high-latitude troposphere: that arising from imprecision in the kinetic data used in the calculations. Such uncertainties in Antarctic models tend to be larger than those in models of mid- to low-latitude clean air. One reason is the lower temperatures, which result in increased imprecision in kinetic data assumed to be best characterized at 298 K. Another is the inclusion of a DMS oxidation scheme in the present model; many of the rates in this scheme are less precisely known than the rates in the standard chemistry used in many stratospheric and tropospheric models.
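The growth of kinetic imprecision away from 298 K is commonly encoded, for example in the JPL kinetics evaluations, as a temperature-dependent uncertainty factor. The sketch below uses that functional form with illustrative parameter values, not rates from the model in this paper:

```python
import math

def rate_uncertainty_factor(f298, g, temp):
    """JPL-style uncertainty factor for a rate coefficient:
    f(T) = f(298) * exp(|g * (1/T - 1/298)|),
    so imprecision grows away from 298 K, where kinetic data
    are assumed to be best characterized."""
    return f298 * math.exp(abs(g * (1.0 / temp - 1.0 / 298.0)))

# Illustrative parameters for a single reaction.
f_room = rate_uncertainty_factor(1.3, 100.0, 298.0)       # equals f(298)
f_antarctic = rate_uncertainty_factor(1.3, 100.0, 240.0)  # larger at 240 K
```

Propagating such inflated factors through a Monte Carlo over all rate coefficients is one standard way to quantify the model uncertainty the abstract describes.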