Science.gov

Sample records for additional systematic uncertainty

  1. On LBNE neutrino flux systematic uncertainties

    SciTech Connect

    Lebrun, Paul L. G.; Hylen, James; Marchionni, Alberto; Fields, Laura; Bashyal, Amit; Park, Seongtae; Watson, Blake

    2015-10-15

    The systematic uncertainties in the neutrino flux of the Long-Baseline Neutrino Experiment, due to alignment uncertainties and tolerances of the neutrino beamline components, are estimated. In particular, residual systematics are evaluated in the determination of the neutrino flux at the far detector, assuming that the experiment will be equipped with a near detector with the same target material as the far detector, thereby canceling most of the uncertainties from hadroproduction and neutrino cross sections. This calculation is based on a detailed Geant4-based model of the neutrino beamline that includes the target, two focusing horns, the decay pipe, and ancillary items such as shielding.
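
    As a toy illustration of the cancellation argument (not the Geant4 beamline model itself), the sketch below varies a horn alignment offset within an assumed tolerance and compares the spread of the far-detector flux with that of the far/near ratio. The function toy_flux, the detector distances, and the tolerance value are all hypothetical.

```python
# Toy Monte Carlo: alignment tolerance -> flux spread, before and after
# taking the far/near ratio. Purely illustrative stand-in for the
# Geant4-based beamline simulation described above.
import numpy as np

rng = np.random.default_rng(0)

def toy_flux(offset_mm, distance_km):
    # Hypothetical focusing response: flux drops quadratically with horn
    # misalignment, with a slightly different coefficient near vs. far.
    base = 1.0 / distance_km**2
    return base * (1.0 - 0.01 * offset_mm**2 * (1.0 + 0.1 / distance_km))

tolerance_mm = 0.5                            # assumed 1-sigma tolerance
offsets = rng.normal(0.0, tolerance_mm, size=10_000)

near = toy_flux(offsets, distance_km=0.5)     # near detector
far = toy_flux(offsets, distance_km=1300.0)   # far detector

ratio = far / near                            # most variation cancels here
print("relative spread, far flux :", far.std() / far.mean())
print("relative spread, far/near :", ratio.std() / ratio.mean())
```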

  2. Planck 2015 results. III. LFI systematic uncertainties

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Battaglia, P.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Burigana, C.; Butler, R. C.; Calabrese, E.; Catalano, A.; Christensen, P. R.; Colombo, L. P. L.; Cruz, M.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Doré, O.; Ducout, A.; Dupac, X.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Frailis, M.; Franceschet, C.; Franceschi, E.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Harrison, D. L.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Knoche, J.; Krachmalnicoff, N.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Meinhold, P. R.; Mennella, A.; Migliaccio, M.; Mitra, S.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Nati, F.; Natoli, P.; Noviello, F.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Pearson, T. J.; Perdereau, O.; Pettorino, V.; Piacentini, F.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Stolyarov, V.; Stompor, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vassallo, T.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zibin, J. P.; Zonca, A.

    2016-08-01

    We present the current accounting of systematic effect uncertainties for the Low Frequency Instrument (LFI) that are relevant to the 2015 release of the Planck cosmological results, showing the robustness and consistency of our data set, especially for polarization analysis. We use two complementary approaches: (i) simulations based on measured data and physical models of the known systematic effects; and (ii) analysis of difference maps containing the same sky signal ("null-maps"). The LFI temperature data are limited by instrumental noise. At large angular scales the systematic effects are below the cosmic microwave background (CMB) temperature power spectrum by several orders of magnitude. In polarization the systematic uncertainties are dominated by calibration uncertainties and compete with the CMB E-modes in the multipole range 10-20. Based on our model of all known systematic effects, we show that these effects introduce a slight bias of around 0.2σ on the reionization optical depth derived from the 70 GHz EE spectrum using the 30 and 353 GHz channels as foreground templates. At 30 GHz the systematic effects are smaller than the Galactic foreground at all scales in temperature and polarization, which allows us to consider this channel as a reliable template of synchrotron emission. We assess the residual uncertainties due to LFI effects on CMB maps and power spectra after component separation and show that these effects are smaller than the CMB amplitude at all scales. We also assess the impact on non-Gaussianity studies and find it to be negligible. Some residuals still appear in null maps from particular sky survey pairs, particularly at 30 GHz, suggesting possible straylight contamination due to imperfect knowledge of the beam far sidelobes.
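
    The null-map idea admits a minimal sketch: two observations of the same sky are half-differenced, so the common signal cancels exactly and only noise plus systematic residuals survive. Everything below (pixel counts, noise levels, the sinusoidal drift standing in for a systematic) is illustrative, not LFI data.

```python
# Minimal null-map sketch: the sky signal common to two surveys cancels in
# the half-difference, exposing noise and systematic residuals.
import numpy as np

rng = np.random.default_rng(1)
npix = 100_000

sky = 100.0 * rng.standard_normal(npix)        # common sky signal (uK)
noise1 = 5.0 * rng.standard_normal(npix)       # survey-1 instrumental noise
noise2 = 5.0 * rng.standard_normal(npix)       # survey-2 instrumental noise
drift = 0.5 * np.sin(np.linspace(0, 20 * np.pi, npix))  # toy systematic

map1 = sky + noise1 + drift
map2 = sky + noise2 - drift        # systematic flips sign between surveys

null = 0.5 * (map1 - map2)         # sky cancels exactly
print("null-map rms   :", null.std())
print("noise-only rms :", 5.0 / np.sqrt(2))   # excess indicates systematics
```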

  3. ON THE ESTIMATION OF SYSTEMATIC UNCERTAINTIES OF STAR FORMATION HISTORIES

    SciTech Connect

    Dolphin, Andrew E.

    2012-05-20

    In most star formation history (SFH) measurements, the reported uncertainties are those due to effects whose sizes can be readily measured: Poisson noise, adopted distance and extinction, and binning choices in the solution itself. However, the largest source of error, systematics in the adopted isochrones, is usually ignored and very rarely explicitly incorporated into the uncertainties. I propose a process by which estimates of the uncertainties due to evolutionary models can be incorporated into the SFH uncertainties. This process relies on the application of shifts in temperature and luminosity, the sizes of which must be calibrated for the data being analyzed. While there are inherent limitations, the ability to estimate the effect of systematic errors and include them in the overall uncertainty is significant. The effects are most notable in the case of shallow photometry, for which SFH measurements rely on evolved stars.
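
    A minimal sketch of the proposed shift procedure, assuming rigid offsets in log Teff and log L whose sizes have been calibrated for the data in hand; the isochrone points, shift values, and the refit placeholder are all illustrative.

```python
# Sketch: perturb the isochrone set by calibrated (dlogTeff, dlogL) shifts,
# re-derive the SFH for each perturbation, and read off the spread.
import numpy as np

def shift_isochrone(log_teff, log_lum, d_teff, d_lum):
    """Rigid shift of an isochrone in the theoretical (log Teff, log L) plane."""
    return log_teff + d_teff, log_lum + d_lum

log_teff = np.array([3.80, 3.75, 3.70])   # toy isochrone points
log_lum = np.array([0.2, 0.5, 0.9])
shifts = [(0.0, 0.0), (+0.02, 0.0), (-0.02, 0.0), (0.0, +0.05), (0.0, -0.05)]

refit_results = []
for dt, dl in shifts:
    t, l = shift_isochrone(log_teff, log_lum, dt, dl)
    # Placeholder for the actual CMD refit with the shifted isochrones:
    refit_results.append(np.hypot(t, l).mean())

# The spread across shifts estimates the systematic term to combine with
# the random (Poisson, distance/extinction, binning) uncertainties.
print("systematic spread (toy):", np.std(refit_results))
```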

  4. Systematic uncertainties from halo asphericity in dark matter searches

    SciTech Connect

    Bernal, Nicolás; Forero-Romero, Jaime E.; Garani, Raghuveer; Palomares-Ruiz, Sergio

    2014-09-01

    Although commonly assumed to be spherical, dark matter halos are predicted to be non-spherical by N-body simulations, and their asphericity has a potential impact on the systematic uncertainties in dark matter searches. The evaluation of these uncertainties is the main aim of this work, where we study the impact of aspherical dark matter density distributions in Milky-Way-like halos on direct and indirect searches. Using data from the large N-body cosmological simulation Bolshoi, we perform a statistical analysis and quantify the systematic uncertainties on the determination of local dark matter density and the so-called J factors for dark matter annihilations and decays from the galactic center. We find that, due to our ignorance about the extent of the non-sphericity of the Milky Way dark matter halo, systematic uncertainties can be as large as 35%, within the 95% most probable region, for a spherically averaged value for the local density of 0.3-0.4 GeV/cm³. Similarly, systematic uncertainties on the J factors evaluated around the galactic center can be as large as 10% and 15%, within the 95% most probable region, for dark matter annihilations and decays, respectively.
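
    For context, the J factor for annihilation is the line-of-sight integral of the squared dark matter density; the sketch below evaluates it for a spherical NFW profile with assumed parameters (asphericity would make the density direction-dependent, which is the origin of the quoted spread). Values are illustrative and unit conversions are omitted.

```python
# J factor (annihilation) for a spherical NFW halo by line-of-sight
# integration; illustrative parameter values, unit conversion omitted.
import numpy as np
from scipy.integrate import quad

R0 = 8.0                  # Sun-Galactic-centre distance (kpc), assumed
rs, rho_s = 20.0, 0.26    # NFW scale radius (kpc) and density (GeV/cm^3)

def rho_nfw(r):
    x = r / rs
    return rho_s / (x * (1.0 + x) ** 2)

def j_factor(psi):
    """Integrate rho^2 along the line of sight at angle psi from the GC."""
    def integrand(s):
        r = np.sqrt(R0**2 + s**2 - 2.0 * R0 * s * np.cos(psi))
        return rho_nfw(r) ** 2
    value, _ = quad(integrand, 0.0, 100.0, limit=200)
    return value          # GeV^2 cm^-6 kpc

print("J(1 deg) =", j_factor(np.radians(1.0)))
```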

  5. Systematic and random uncertainties of HOAPS-3.2 evaporation

    NASA Astrophysics Data System (ADS)

    Kinzel, Julian; Fennig, Karsten; Schröder, Marc; Andersson, Axel; Bumke, Karl; Dietzsch, Felix

    2015-04-01

    The German Research Foundation (DFG) funds the research programme 'FOR1740 - Atlantic freshwater cycle', which aims at analysing and better understanding the freshwater budget of the Atlantic Ocean and the role of freshwater fluxes (evaporation minus precipitation) in the context of oceanic surface salinity variability. It is well known that these freshwater fluxes play an essential role in the global hydrological cycle and thus act as a key boundary condition for coupled ocean-atmosphere general circulation models. However, it remains unclear how uncertain evaporation (E) and precipitation (P) are. Once quantified, freshwater flux fields and their underlying total uncertainty (systematic plus random) may be assimilated into ocean models to compute ocean transports and run-off estimates, which in turn serve as a stringent test on the quality of the input data. The Hamburg Ocean Atmosphere Parameters and Fluxes from Satellite Data (HOAPS) (Andersson et al. (2010), Fennig et al. (2012)) is an entirely satellite-based climatology, based on microwave radiometers, overcoming the lack of oceanic in-situ records. Its most current version, HOAPS-3.2, comprises 21 years (1987-2008) of pixel-level resolution data of numerous geophysical parameters over the global ice-free oceans. Amongst others, these include wind speed (u), near-surface specific humidity (q), and sea surface temperature (SST). Their uncertainties essentially contribute to the uncertainty in latent heat flux (LHF) and consequently to that of evaporation (E). Here, we will present HOAPS-3.2 pixel-level total uncertainty estimates of evaporation, based on a full error propagation of uncertainties in u, q, and SST. Both systematic and random uncertainty components are derived on the basis of collocated match-ups of satellite pixels, selected buoys, and ship records. The in-situ data are restricted to the period 1995-2008 and are provided by the Seewetteramt Hamburg as well as ICOADS Version 2.5 (Woodruff et al.).
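
    The propagation step can be sketched with a standard bulk formula, LHF = rho_a * L_v * C_E * u * (q_s(SST) - q_a), and first-order Gaussian error propagation; the coefficients, the crude q_s parameterization, and the uncertainty values below are illustrative assumptions, not HOAPS-3.2 numbers.

```python
# First-order propagation of u, q and SST uncertainties into the latent
# heat flux from a bulk formula; all numbers are illustrative assumptions.
import numpy as np

rho_a, L_v, C_E = 1.2, 2.45e6, 1.2e-3   # air density, latent heat, exchange coeff.

def q_sat(sst_c):
    """Crude saturation specific humidity (kg/kg) from a Magnus-type formula."""
    e_s = 6.112 * np.exp(17.62 * sst_c / (243.12 + sst_c))   # hPa
    return 0.98 * 0.622 * e_s / 1013.25   # 0.98 accounts for salinity

u, q_a, sst = 8.0, 0.012, 20.0            # wind (m/s), humidity (kg/kg), SST (C)
sig_u, sig_q, sig_sst = 1.0, 0.001, 0.5   # assumed random uncertainties

lhf = rho_a * L_v * C_E * u * (q_sat(sst) - q_a)

# Partial derivatives (dq_sat/dSST via finite difference)
d_du = rho_a * L_v * C_E * (q_sat(sst) - q_a)
d_dq = -rho_a * L_v * C_E * u
dqs = (q_sat(sst + 0.01) - q_sat(sst - 0.01)) / 0.02
d_dsst = rho_a * L_v * C_E * u * dqs

# Random components combine in quadrature; systematic (bias) components
# would instead be carried through with their signs.
sig_lhf = np.sqrt((d_du * sig_u)**2 + (d_dq * sig_q)**2 + (d_dsst * sig_sst)**2)
print(f"LHF = {lhf:.0f} W/m^2, random uncertainty = {sig_lhf:.0f} W/m^2")
```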

  6. Efficiently estimating salmon escapement uncertainty using systematically sampled data

    USGS Publications Warehouse

    Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.

    2007-01-01

    Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared with the other estimators, the least biased estimator reduced bias by 12% to 98%, on average. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
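
    Two of the estimator families in play can be sketched directly: the simple-random-sampling (SRS) variance formula, which ignores autocorrelation, and a successive-difference estimator, which tracks smooth diurnal patterns better. The expansion via the sampling fraction f and the toy counts are illustrative, not the Kvichak River data, and the paper's full set of five estimators is not reproduced here.

```python
# Variance of an expanded total from a nonreplicated systematic sample:
# SRS formula vs. a successive-difference estimator.
import numpy as np

def total_and_variances(y, f):
    """y: systematically sampled counts; f = n/N: sampling fraction."""
    y = np.asarray(y, dtype=float)
    n = y.size
    N = n / f
    total = N * y.mean()                   # expanded total escapement

    var_srs = N**2 * (1 - f) * y.var(ddof=1) / n
    var_sd = N**2 * (1 - f) * (np.diff(y) ** 2).sum() / (2 * (n - 1) * n)
    return total, var_srs, var_sd

# Smooth diurnal pattern: autocorrelation makes the SRS formula overstate
# the variance relative to the successive-difference estimator.
hours = np.arange(0, 24, 2)
counts = 500 + 400 * np.sin(2 * np.pi * hours / 24)
print(total_and_variances(counts, f=1 / 12))
```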

  7. Systematic Uncertainties in High-Energy Hadronic Interaction Models

    NASA Astrophysics Data System (ADS)

    Zha, M.; Knapp, J.; Ostapchenko, S.

    2003-07-01

    Hadronic interaction models for cosmic ray energies are uncertain since our knowledge of hadronic interactions is extrapolated from accelerator experiments at much lower energies. At present most high-energy models are based on the Gribov-Regge theory of multi-Pomeron exchange, which provides a theoretical framework to evaluate cross sections and particle production. While experimental data constrain some of the model parameters, others are not well determined and are therefore a source of systematic uncertainties. In this paper we evaluate the variation of results obtained with the QGSJET model when modifying parameters relating to three major sources of uncertainty: the form of the parton structure function, the role of diffractive interactions, and the string hadronisation. Results on inelastic cross sections, on secondary particle production, and on air shower development are discussed.

  8. A systematic uncertainty analysis for liner impedance eduction technology

    NASA Astrophysics Data System (ADS)

    Zhou, Lin; Bodén, Hans

    2015-11-01

    The so-called impedance eduction technology is widely used for obtaining acoustic properties of liners used in aircraft engines. The measurement uncertainties of this technology are still not well understood, though such understanding is essential for data quality assessment and model validation. A systematic framework based on multivariate analysis is presented in this paper to provide 95 percent confidence interval uncertainty estimates in the process of impedance eduction. The analysis is made using a straightforward single-mode method based on transmission coefficients involving the classic Ingard-Myers boundary condition. The multivariate technique makes it possible to obtain an uncertainty analysis for the possibly correlated real and imaginary parts of the complex quantities. The results show that the errors in impedance results at low frequency mainly depend on the variability of the transmission coefficients, while the mean Mach number accuracy is the most important source of error at high frequencies. The effect of the Mach numbers used in the wave dispersion equation and in the Ingard-Myers boundary condition has been separated for comparison of the outcome of impedance eduction. A local Mach number based on friction velocity is suggested as a way to reduce the inconsistencies found when estimating impedance using upstream and downstream acoustic excitation.
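
    The multivariate step can be sketched as first-order covariance propagation through the eduction map, treating the impedance as the 2-vector (Re z, Im z) so that the correlation between real and imaginary parts is carried along. The eduction function below is a stand-in placeholder, not the Ingard-Myers solver, and all input values are assumed.

```python
# Covariance propagation for a complex educed quantity, treated as the
# 2-vector (Re z, Im z); placeholder eduction map, assumed input values.
import numpy as np

def educe_impedance(params):
    """Hypothetical eduction map: [Re T, Im T, Mach] -> [Re z, Im z]."""
    t = params[0] + 1j * params[1]
    mach = params[2]
    z = (1.0 + t) / (1.0 - t) * (1.0 + 0.5 * mach)   # placeholder relation
    return np.array([z.real, z.imag])

p0 = np.array([0.3, -0.2, 0.25])           # measured means (assumed)
cov_in = np.diag([0.01, 0.01, 0.02]) ** 2  # input covariance (assumed)

# Finite-difference Jacobian, then first-order covariance propagation.
eps = 1e-6
J = np.column_stack([
    (educe_impedance(p0 + eps * e) - educe_impedance(p0 - eps * e)) / (2 * eps)
    for e in np.eye(3)
])
cov_z = J @ cov_in @ J.T    # 2x2, generally with Re/Im correlation

# 95% confidence ellipse semi-axes from the eigenvalues (chi2_2(0.95) = 5.991).
evals, _ = np.linalg.eigh(cov_z)
print("cov(Re z, Im z):\n", cov_z)
print("95% ellipse semi-axes:", np.sqrt(5.991 * evals))
```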

  9. Quantifying Systematic Errors and Total Uncertainties in Satellite-based Precipitation Measurements

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Peters-Lidard, C. D.

    2010-12-01

    Determining the uncertainties in precipitation measurements by satellite remote sensing is of fundamental importance to many applications. These uncertainties result mostly from the interplay of systematic errors and random errors. In this presentation, we will summarize our recent efforts in quantifying the error characteristics in satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMaP). For systematic errors, we devised an error decomposition to separate errors in precipitation estimates into three independent components: hit biases, missed precipitation, and false precipitation. This decomposition scheme reveals more error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. Our analysis reveals that the six different products share many error features. For example, they all detected strong precipitation (> 40 mm/day) well, but with various biases. They tend to overestimate in summer and underestimate in winter. They miss a significant amount of light precipitation (< 10 mm/day). In addition, hit biases and missed precipitation are the two leading error sources. However, their systematic errors also exhibit substantial differences, especially in winter and over rough topography, which greatly contribute to the uncertainties. To estimate the measurement uncertainties, we calculated the measurement spread from the ensemble of these six quasi-independent products. A global map of measurement uncertainties was thus produced. The map yields a global view of the error characteristics and their regional and seasonal variations, and reveals many undocumented error features over areas with no validation data available. The uncertainties are relatively small (40-60%) over the
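
    The decomposition satisfies an exact identity: total bias = hit bias + missed precipitation (a negative term) + false precipitation (a positive term), i.e. the three components sum to the difference of the satellite and reference totals. The sketch below verifies this on synthetic fields; all data are illustrative.

```python
# Error decomposition of a satellite precipitation product against a
# reference: hit bias + missed + false = total bias. Synthetic data.
import numpy as np

rng = np.random.default_rng(2)
ref = np.where(rng.random(1000) < 0.3, rng.gamma(2.0, 5.0, 1000), 0.0)
sat = np.where(rng.random(1000) < 0.05, 0.0,              # 5% missed events
               ref * rng.lognormal(0.1, 0.3, 1000))       # biased hits
sat += np.where((ref == 0) & (rng.random(1000) < 0.05),   # 5% false alarms
                rng.gamma(1.0, 2.0, 1000), 0.0)

hits = (sat > 0) & (ref > 0)
miss = (sat == 0) & (ref > 0)
false = (sat > 0) & (ref == 0)

hit_bias = (sat[hits] - ref[hits]).sum()
missed = -ref[miss].sum()      # what the product failed to see
false_p = sat[false].sum()     # what it invented

total = sat.sum() - ref.sum()
print(hit_bias, missed, false_p)
print("components sum to total bias:",
      np.isclose(hit_bias + missed + false_p, total))
```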

  10. Taming systematic uncertainties at the LHC with the central limit theorem

    NASA Astrophysics Data System (ADS)

    Fichet, Sylvain

    2016-10-01

    We study the simplifications occurring in any likelihood function in the presence of a large number of small systematic uncertainties. We find that the marginalisation of these uncertainties can be done analytically by means of second-order error propagation, error combination, the Lyapunov central limit theorem, and mild approximations which are typically satisfied for LHC likelihoods. The outcomes of this analysis are: (i) a very light treatment of systematic uncertainties; and (ii) a convenient way of reporting the main effects of systematic uncertainties, such as the detector effects occurring in LHC measurements.
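
    The combination step can be illustrated numerically: many small, independent systematic shifts marginalise into a single Gaussian whose variance is the sum of the individual variances, regardless of the shape of each individual prior. The numbers below are toys, not an LHC likelihood.

```python
# Lyapunov-CLT illustration: 50 small independent systematics with flat
# (non-Gaussian) unit-variance priors combine into one Gaussian shift.
import numpy as np

rng = np.random.default_rng(3)
n_sys, n_toys = 50, 100_000
deltas = rng.uniform(0.1, 0.5, n_sys)   # 1-sigma effect of each source

# Independent nuisance parameters with a deliberately non-Gaussian (flat)
# unit-variance prior, to show the combined shift still comes out Gaussian.
theta = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(n_sys, n_toys))
shift = (deltas[:, None] * theta).sum(axis=0)

print("sampled std :", shift.std())
print("formula std :", np.sqrt(np.sum(deltas**2)))   # error combination
# A normality check (e.g. kurtosis near 0) would show the CLT at work.
```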

  11. Systematic uncertainties in the precise determination of the strangeness magnetic moment of the nucleon

    SciTech Connect

    D.B. Leinweber; S. Boinepalli; A.W. Thomas; A.G. Williams; R.D. Young; J.B. Zhang; J.M. Zanotti

    2004-06-01

    Systematic uncertainties in the recent precise determination of the strangeness magnetic moment of the nucleon are identified and quantified. In summary, G_M^s = -0.046 ± 0.019 μ_N.

  12. Additional challenges for uncertainty analysis in river engineering

    NASA Astrophysics Data System (ADS)

    Berends, Koen; Warmink, Jord; Hulscher, Suzanne

    2016-04-01

    The management of rivers for improving safety, shipping, and the environment requires conscious effort on the part of river managers. River engineers design hydraulic works to tackle various challenges, from increasing flow conveyance to ensuring minimal water depths for environmental flow and inland shipping. Last year saw the completion of such large-scale river engineering in the 'Room for the River' programme for the Dutch Rhine River system, in which several dozen human interventions were built to increase flood safety. Engineering works in rivers are not completed in isolation from society. Rather, their benefits - increased safety, landscaping beauty - and their disadvantages - expropriation, hindrance - directly affect inhabitants. Therefore river managers are required to carefully defend their plans. The effect of engineering works on river dynamics is evaluated using hydraulic river models. Two-dimensional numerical models based on the shallow water equations provide the predictions necessary to make decisions on designs and future plans. However, like all environmental models, these predictions are subject to uncertainty. In recent years progress has been made in the identification of the main sources of uncertainty for hydraulic river models. Two of the most important sources are boundary conditions and hydraulic roughness (Warmink et al. 2013). The result of these sources of uncertainty is that the identification of a single, deterministic prediction model is a non-trivial task. This is a well-understood problem in other fields as well - most notably hydrology - and is known as equifinality. However, the particular case of human intervention modelling with hydraulic river models compounds the equifinality case. The model that provides the reference baseline situation is usually identified through calibration and afterwards modified for the engineering intervention. This results in two distinct models, the evaluation of which yields the effect of

  13. Uncertainty

    USGS Publications Warehouse

    Hunt, Randall J.

    2012-01-01

    Management decisions will often be directly informed by model predictions. However, we now know there can be no expectation of a single ‘true’ model; thus, model results are uncertain. Understandable reporting of underlying uncertainty provides necessary context to decision-makers, as model results are used for management decisions. This, in turn, forms a mechanism by which groundwater models inform a risk-management framework because uncertainty around a prediction provides the basis for estimating the probability or likelihood of some event occurring. Given that the consequences of management decisions vary, it follows that the extent of and resources devoted to an uncertainty analysis may depend on the consequences. For events with low impact, a qualitative, limited uncertainty analysis may be sufficient for informing a decision. For events with a high impact, on the other hand, the risks might be better assessed and associated decisions made using a more robust and comprehensive uncertainty analysis. The purpose of this chapter is to provide guidance on uncertainty analysis through discussion of concepts and approaches, which can vary from heuristic (i.e. the modeller’s assessment of prediction uncertainty based on trial and error and experience) to a comprehensive, sophisticated, statistics-based uncertainty analysis. Most of the material presented here is taken from Doherty et al. (2010) if not otherwise cited. Although the treatment here is necessarily brief, the reader can find citations for the source material and additional references within this chapter.

  14. Quantifying uncertainty of determination by standard additions and serial dilutions methods taking into account standard uncertainties in both axes.

    PubMed

    Hyk, Wojciech; Stojek, Zbigniew

    2013-06-18

    The analytical expressions for the calculation of the standard uncertainty of the predictor variable, either extrapolated or interpolated from a calibration line, that take into account uncertainties in both axes have been derived and successfully verified using Monte Carlo modeling. These expressions are essential additions to the process of analyte quantification realized with either the method of standard additions (SAM) or the method of serial dilutions (MSD). The latter has been proposed as an alternative to the SAM procedure. In the MSD approach, instead of a sequence of standard additions, a sequence of solvent additions to the spiked sample is performed. The comparison of the calculation results based on the derived expressions with their equivalents obtained from the Monte Carlo simulation, applied to real experimental data sets, confirmed that these expressions are valid in real analytical practice. The estimation of the standard uncertainty of the analyte concentration, quantified via SAM, MSD, or simply a calibration curve, is of great importance for the construction of the uncertainty budget of an analytical procedure. The correct estimation of the standard uncertainty of the analyte concentration is a key issue in quality assurance in instrumental analysis.
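
    A minimal Monte Carlo of the kind used for the verification, assuming Gaussian standard uncertainties on both axes: perturb the added concentrations and the responses, refit the calibration line each time, and take the spread of the extrapolated analyte concentration c0 = a/b. All numbers are illustrative.

```python
# Monte Carlo check for standard additions (SAM) with uncertainty in both
# axes: the spread of the extrapolated x-intercept gives u(c0). Toy data.
import numpy as np

rng = np.random.default_rng(4)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])       # added standard (e.g. mM)
y = np.array([0.52, 1.03, 1.49, 2.02, 2.51])  # instrument response
u_x, u_y = 0.02, 0.02                         # standard uncertainties, both axes

c0_samples = []
for _ in range(20_000):
    xs = x + rng.normal(0.0, u_x, x.size)     # perturb the x axis too
    ys = y + rng.normal(0.0, u_y, y.size)
    b, a = np.polyfit(xs, ys, 1)              # slope, intercept
    c0_samples.append(a / b)                  # |x-intercept| of the line

c0 = np.array(c0_samples)
print(f"c0 = {c0.mean():.3f} +/- {c0.std(ddof=1):.3f}")
```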

  15. Jet energy measurement and its systematic uncertainty in proton-proton collisions at √s = 7 TeV with the ATLAS detector

    NASA Astrophysics Data System (ADS)

    Aad, G.; Abajyan, T.; Abbott, B.; Abdallah, J.; Abdel Khalek, S.; Abdinov, O.; Aben, R.; Abi, B.; Abolins, M.; AbouZeid, O. S.; Abramowicz, H.; Abreu, H.; Abulaiti, Y.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Addy, T. N.; Adelman, J.; Adomeit, S.; Adye, T.; Aefsky, S.; Agatonovic-Jovin, T.; Aguilar-Saavedra, J. A.; Agustoni, M.; Ahlen, S. P.; Ahmad, A.; Ahmadov, F.; Aielli, G.; Åkesson, T. P. A.; Akimoto, G.; Akimov, A. V.; Alam, M. A.; Albert, J.; Albrand, S.; Alconada Verzini, M. J.; Aleksa, M.; Aleksandrov, I. N.; Alessandria, F.; Alexa, C.; Alexander, G.; Alexandre, G.; Alexopoulos, T.; Alhroob, M.; Aliev, M.; Alimonti, G.; Alio, L.; Alison, J.; Allbrooke, B. M. M.; Allison, L. J.; Allport, P. P.; Allwood-Spiers, S. E.; Almond, J.; Aloisio, A.; Alon, R.; Alonso, A.; Alonso, F.; Altheimer, A.; Alvarez Gonzalez, B.; Alviggi, M. G.; Amako, K.; Amaral Coutinho, Y.; Amelung, C.; Ammosov, V. V.; Amor Dos Santos, S. P.; Amorim, A.; Amoroso, S.; Amram, N.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, G.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Anduaga, X. S.; Angelidakis, S.; Anger, P.; Angerami, A.; Anghinolfi, F.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antonaki, A.; Antonelli, M.; Antonov, A.; Antos, J.; Anulli, F.; Aoki, M.; Aperio Bella, L.; Apolle, R.; Arabidze, G.; Aracena, I.; Arai, Y.; Arce, A. T. H.; Arfaoui, S.; Arguin, J.-F.; Argyropoulos, S.; Arik, E.; Arik, M.; Armbruster, A. J.; Arnaez, O.; Arnal, V.; Arslan, O.; Artamonov, A.; Artoni, G.; Asai, S.; Asbah, N.; Ask, S.; Åsman, B.; Asquith, L.; Assamagan, K.; Astalos, R.; Astbury, A.; Atkinson, M.; Atlay, N. B.; Auerbach, B.; Auge, E.; Augsten, K.; Aurousseau, M.; Avolio, G.; Azuelos, G.; Azuma, Y.; Baak, M. A.; Bacci, C.; Bach, A. M.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Backus Mayes, J.; Badescu, E.; Bagiacchi, P.; Bagnaia, P.; Bai, Y.; Bailey, D. C.; Bain, T.; Baines, J. T.; Baker, O. K.; Baker, S.; Balek, P.; Balli, F.; Banas, E.; Banerjee, Sw.; Banfi, D.; Bangert, A.; Bansal, V.; Bansil, H. S.; Barak, L.; Baranov, S. P.; Barber, T.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnett, B. M.; Barnett, R. M.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Bartoldus, R.; Barton, A. E.; Bartos, P.; Bartsch, V.; Bassalat, A.; Basye, A.; Bates, R. L.; Batkova, L.; Batley, J. R.; Battistin, M.; Bauer, F.; Bawa, H. S.; Beau, T.; Beauchemin, P. H.; Beccherle, R.; Bechtle, P.; Beck, H. P.; Becker, K.; Becker, S.; Beckingham, M.; Beddall, A. J.; Beddall, A.; Bedikian, S.; Bednyakov, V. A.; Bee, C. P.; Beemster, L. J.; Beermann, T. A.; Begel, M.; Behr, K.; Belanger-Champagne, C.; Bell, P. J.; Bell, W. H.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belloni, A.; Beloborodova, O. L.; Belotskiy, K.; Beltramello, O.; Benary, O.; Benchekroun, D.; Bendtz, K.; Benekos, N.; Benhammou, Y.; Benhar Noccioli, E.; Benitez Garcia, J. A.; Benjamin, D. P.; Bensinger, J. R.; Benslama, K.; Bentvelsen, S.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Berghaus, F.; Berglund, E.; Beringer, J.; Bernard, C.; Bernat, P.; Bernhard, R.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertolucci, F.; Besana, M. I.; Besjes, G. J.; Bessidskaia, O.; Besson, N.; Bethke, S.; Bhimji, W.; Bianchi, R. M.; Bianchini, L.; Bianco, M.; Biebel, O.; Bieniek, S. 
P.; Bierwagen, K.; Biesiada, J.; Biglietti, M.; Bilbao De Mendizabal, J.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Bittner, B.; Black, C. W.; Black, J. E.; Black, K. M.; Blackburn, D.; Blair, R. E.; Blanchard, J.-B.; Blazek, T.; Bloch, I.; Blocker, C.; Blocki, J.; Blum, W.; Blumenschein, U.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Boddy, C. R.; Boehler, M.; Boek, J.; Boek, T. T.; Boelaert, N.; Bogaerts, J. A.; Bogdanchikov, A. G.; Bogouch, A.; Bohm, C.; Bohm, J.; Boisvert, V.; Bold, T.; Boldea, V.; Boldyrev, A. S.; Bolnet, N. M.; Bomben, M.; Bona, M.; Boonekamp, M.; Bordoni, S.; Borer, C.; Borisov, A.; Borissov, G.; Borri, M.; Borroni, S.; Bortfeldt, J.; Bortolotto, V.; Bos, K.; Boscherini, D.; Bosman, M.; Boterenbrood, H.; Bouchami, J.; Boudreau, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Bousson, N.; Boutouil, S.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bozovic-Jelisavcic, I.; Bracinik, J.; Branchini, P.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Brazzale, S. F.; Brelier, B.; Brendlinger, K.; Brenner, R.; Bressler, S.; Bristow, T. M.; Britton, D.; Brochu, F. M.; Brock, I.; Brock, R.; Broggi, F.; Bromberg, C.; Bronner, J.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Brown, G.; Brown, J.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruneliere, R.; Brunet, S.; Bruni, A.; Bruni, G.; Bruschi, M.; Bryngemark, L.; Buanes, T.; Buat, Q.; Bucci, F.; Buchholz, P.; Buckingham, R. M.; Buckley, A. G.; Buda, S. I.; Budagov, I. A.; Budick, B.; Buehrer, F.; Bugge, L.; Bugge, M. K.; Bulekov, O.; Bundock, A. C.; Bunse, M.; Burckhart, H.; Burdin, S.; Burgess, T.; Burghgrave, B.; Burke, S.; Burmeister, I.; Busato, E.; Büscher, V.; Bussey, P.; Buszello, C. P.; Butler, B.; Butler, J. M.; Butt, A. I.; Buttar, C. M.; Butterworth, J. M.; Buttinger, W.; Buzatu, A.; Byszewski, M.; Cabrera Urbán, S.; Caforio, D.; Cakir, O.; Calafiura, P.; Calderini, G.; Calfayan, P.; Calkins, R.; Caloba, L. P.; Caloi, R.; Calvet, D.; Calvet, S.; Camacho Toro, R.; Camarri, P.; Cameron, D.; Caminada, L. M.; Caminal Armadans, R.; Campana, S.; Campanelli, M.; Canale, V.; Canelli, F.; Canepa, A.; Cantero, J.; Cantrill, R.; Cao, T.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Cardarelli, R.; Carli, T.; Carlino, G.; Carminati, L.; Caron, S.; Carquin, E.; Carrillo-Montoya, G. D.; Carter, A. A.; Carter, J. R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Caso, C.; Castaneda-Miranda, E.; Castelli, A.; Castillo Gimenez, V.; Castro, N. F.; Catastini, P.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Cattani, G.; Caughron, S.; Cavaliere, V.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Cerio, B.; Cerny, K.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chalupkova, I.; Chan, K.; Chang, P.; Chapleau, B.; Chapman, J. D.; Charfeddine, D.; Charlton, D. G.; Chavda, V.; Chavez Barajas, C. A.; Cheatham, S.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, H.; Chen, K.; Chen, L.; Chen, S.; Chen, X.; Chen, Y.; Cheng, Y.; Cheplakov, A.; Cherkaoui El Moursli, R.; Chernyatin, V.; Cheu, E.; Chevalier, L.; Chiarella, V.; Chiefari, G.; Childers, J. T.; Chilingarov, A.; Chiodini, G.; Chisholm, A. S.; Chislett, R. T.; Chitan, A.; Chizhov, M. V.; Chouridou, S.; Chow, B. K. B.; Christidi, I. A.; Chromek-Burckhart, D.; Chu, M. L.; Chudoba, J.; Ciapetti, G.; Ciftci, A. 
K.; Ciftci, R.; Cinca, D.; Cindro, V.; Ciocio, A.; Cirilli, M.; Cirkovic, P.; Citron, Z. H.; Citterio, M.; Ciubancan, M.; Clark, A.; Clark, P. J.; Clarke, R. N.; Cleland, W.; Clemens, J. C.; Clement, B.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coelli, S.; Coffey, L.; Cogan, J. G.; Coggeshall, J.; Colas, J.; Cole, B.; Cole, S.; Colijn, A. P.; Collins-Tooth, C.; Collot, J.; Colombo, T.; Colon, G.; Compostella, G.; Conde Muiño, P.; Coniavitis, E.; Conidi, M. C.; Connelly, I. A.; Consonni, S. M.; Consorti, V.; Constantinescu, S.; Conta, C.; Conti, G.; Conventi, F.; Cooke, M.; Cooper, B. D.; Cooper-Sarkar, A. M.; Cooper-Smith, N. J.; Copic, K.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Côté, D.; Cottin, G.; Courneyea, L.; Cowan, G.; Cox, B. E.; Cranmer, K.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Crispin Ortuzar, M.; Cristinziani, M.; Crosetti, G.; Cuciuc, C.-M.; Cuenca Almenar, C.; Cuhadar Donszelmann, T.; Cummings, J.; Curatolo, M.; Cuthbert, C.; Czirr, H.; Czodrowski, P.; Czyczula, Z.; D'Auria, S.; D'Onofrio, M.; D'Orazio, A.; Da Cunha Sargedas De Sousa, M. J.; Da Via, C.; Dabrowski, W.; Dafinca, A.; Dai, T.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Daniells, A. C.; Dano Hoffmann, M.; Dao, V.; Darbo, G.; Darlea, G. L.; Darmora, S.; Dassoulas, J. A.; Davey, W.; David, C.; Davidek, T.; Davies, E.; Davies, M.; Davignon, O.; Davison, A. R.; Davygora, Y.; Dawe, E.; Dawson, I.; Daya-Ishmukhametova, R. K.; De, K.; de Asmundis, R.; De Castro, S.; De Cecco, S.; de Graat, J.; De Groot, N.; de Jong, P.; De La Taille, C.; De la Torre, H.; De Lorenzi, F.; De Nooij, L.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Vivie De Regie, J. B.; De Zorzi, G.; Dearnaley, W. J.; Debbe, R.; Debenedetti, C.; Dechenaux, B.; Dedovich, D. V.; Degenhardt, J.; Del Peso, J.; Del Prete, T.; Delemontex, T.; Deliot, F.; Deliyergiyev, M.; Dell'Acqua, A.; Dell'Asta, L.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delsart, P. A.; Deluca, C.; Demers, S.; Demichev, M.; Demilly, A.; Demirkoz, B.; Denisov, S. P.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deviveiros, P. O.; Dewhurst, A.; DeWilde, B.; Dhaliwal, S.; Dhullipudi, R.; Di Ciaccio, A.; Di Ciaccio, L.; Di Domenico, A.; Di Donato, C.; Di Girolamo, A.; Di Girolamo, B.; Di Mattia, A.; Di Micco, B.; Di Nardo, R.; Di Simone, A.; Di Sipio, R.; Di Valentino, D.; Diaz, M. A.; Diehl, E. B.; Dietrich, J.; Dietzsch, T. A.; Diglio, S.; Dindar Yagci, K.; Dingfelder, J.; Dionisi, C.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; do Vale, M. A. B.; Do Valle Wemans, A.; Doan, T. K. O.; Dobos, D.; Dobson, E.; Dodd, J.; Doglioni, C.; Doherty, T.; Dohmae, T.; Dolejsi, J.; Dolezal, Z.; Dolgoshein, B. A.; Donadelli, M.; Donati, S.; Dondero, P.; Donini, J.; Dopke, J.; Doria, A.; Dos Anjos, A.; Dotti, A.; Dova, M. T.; Doyle, A. T.; Dris, M.; Dubbert, J.; Dube, S.; Dubreuil, E.; Duchovni, E.; Duckeck, G.; Ducu, O. A.; Duda, D.; Dudarev, A.; Dudziak, F.; Duflot, L.; Duguid, L.; Dührssen, M.; Dunford, M.; Duran Yildiz, H.; Düren, M.; Dwuznik, M.; Ebke, J.; Edson, W.; Edwards, C. A.; Edwards, N. C.; Ehrenfeld, W.; Eifert, T.; Eigen, G.; Einsweiler, K.; Eisenhandler, E.; Ekelof, T.; El Kacimi, M.; Ellert, M.; Elles, S.; Ellinghaus, F.; Ellis, K.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Endner, O. 
C.; Endo, M.; Engelmann, R.; Erdmann, J.; Ereditato, A.; Eriksson, D.; Ernis, G.; Ernst, J.; Ernst, M.; Ernwein, J.; Errede, D.; Errede, S.; Ertel, E.; Escalier, M.; Esch, H.; Escobar, C.; Espinal Curull, X.; Esposito, B.; Etienne, F.; Etienvre, A. I.; Etzion, E.; Evangelakou, D.; Evans, H.; Fabbri, L.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farooque, T.; Farrell, S.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Fatholahzadeh, B.; Favareto, A.; Fayard, L.; Federic, P.; Fedin, O. L.; Fedorko, W.; Fehling-Kaschek, M.; Feligioni, L.; Feng, C.; Feng, E. J.; Feng, H.; Fenyuk, A. B.; Fernando, W.; Ferrag, S.; Ferrando, J.; Ferrara, V.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferreira de Lima, D. E.; Ferrer, A.; Ferrere, D.; Ferretti, C.; Ferretto Parodi, A.; Fiascaris, M.; Fiedler, F.; Filipčič, A.; Filipuzzi, M.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Firan, A.; Fischer, J.; Fisher, M. J.; Fitzgerald, E. A.; Flechl, M.; Fleck, I.; Fleischmann, P.; Fleischmann, S.; Fletcher, G. T.; Fletcher, G.; Flick, T.; Floderus, A.; Flores Castillo, L. R.; Florez Bustos, A. C.; Flowerdew, M. J.; Fonseca Martin, T.; Formica, A.; Forti, A.; Fortin, D.; Fournier, D.; Fox, H.; Francavilla, P.; Franchini, M.; Franchino, S.; Francis, D.; Franklin, M.; Franz, S.; Fraternali, M.; Fratina, S.; French, S. T.; Friedrich, C.; Friedrich, F.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fullana Torregrosa, E.; Fulsom, B. G.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gadatsch, S.; Gadfort, T.; Gadomski, S.; Gagliardi, G.; Gagnon, P.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallo, V.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Gandrajula, R. P.; Gao, J.; Gao, Y. S.; Garay Walls, F. M.; Garberson, F.; García, C.; García Navarro, J. E.; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Gatti, C.; Gaudio, G.; Gaur, B.; Gauthier, L.; Gauzzi, P.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Ge, P.; Gecse, Z.; Gee, C. N. P.; Geerts, D. A. A.; Geich-Gimbel, Ch.; Gellerstedt, K.; Gemme, C.; Gemmell, A.; Genest, M. H.; Gentile, S.; George, M.; George, S.; Gerbaudo, D.; Gershon, A.; Ghazlane, H.; Ghodbane, N.; Giacobbe, B.; Giagu, S.; Giangiobbe, V.; Giannetti, P.; Gianotti, F.; Gibbard, B.; Gibson, S. M.; Gilchriese, M.; Gillam, T. P. S.; Gillberg, D.; Gillman, A. R.; Gingrich, D. M.; Giokaris, N.; Giordani, M. P.; Giordano, R.; Giorgi, F. M.; Giovannini, P.; Giraud, P. F.; Giugni, D.; Giuliani, C.; Giunta, M.; Gjelsten, B. K.; Gkialas, I.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glazov, A.; Glonti, G. L.; Goblirsch-Kolb, M.; Goddard, J. R.; Godfrey, J.; Godlewski, J.; Goeringer, C.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Gomez Fajardo, L. S.; Gonçalo, R.; Goncalves Pinto Firmino Da Costa, J.; Gonella, L.; González de la Hoz, S.; Gonzalez Parra, G.; Gonzalez Silva, M. L.; Gonzalez-Sevilla, S.; Goodson, J. J.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorfine, G.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Goshaw, A. T.; Gössling, C.; Gostkin, M. I.; Gouighri, M.; Goujdami, D.; Goulette, M. P.; Goussiou, A. G.; Goy, C.; Gozpinar, S.; Grabas, H. M. X.; Graber, L.; Grabowska-Bold, I.; Grafström, P.; Grahn, K.-J.; Gramling, J.; Gramstad, E.; Grancagnolo, F.; Grancagnolo, S.; Grassi, V.; Gratchev, V.; Gray, H. M.; Gray, J. A.; Graziani, E.; Grebenyuk, O. G.; Greenwood, Z. D.; Gregersen, K.; Gregor, I. 
M.; Grenier, P.; Griffiths, J.; Grigalashvili, N.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grishkevich, Y. V.; Grivaz, J.-F.; Grohs, J. P.; Grohsjean, A.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Groth-Jensen, J.; Grout, Z. J.; Grybel, K.; Guescini, F.; Guest, D.; Gueta, O.; Guicheney, C.; Guido, E.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Gunther, J.; Guo, J.; Gupta, S.; Gutierrez, P.; Gutierrez Ortiz, N. G.; Gutschow, C.; Guttman, N.; Guyot, C.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haber, C.; Hadavand, H. K.; Haefner, P.; Hageboeck, S.; Hajduk, Z.; Hakobyan, H.; Haleem, M.; Hall, D.; Halladjian, G.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamer, M.; Hamilton, A.; Hamilton, S.; Han, L.; Hanagaki, K.; Hanawa, K.; Hance, M.; Hanke, P.; Hansen, J. R.; Hansen, J. B.; Hansen, J. D.; Hansen, P. H.; Hansson, P.; Hara, K.; Hard, A. S.; Harenberg, T.; Harkusha, S.; Harper, D.; Harrington, R. D.; Harris, O. M.; Harrison, P. F.; Hartjes, F.; Harvey, A.; Hasegawa, S.; Hasegawa, Y.; Hassani, S.; Haug, S.; Hauschild, M.; Hauser, R.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hawkins, A. D.; Hayashi, T.; Hayden, D.; Hays, C. P.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heim, S.; Heinemann, B.; Heisterkamp, S.; Hejbal, J.; Helary, L.; Heller, C.; Heller, M.; Hellman, S.; Hellmich, D.; Helsens, C.; Henderson, J.; Henderson, R. C. W.; Henrichs, A.; Henriques Correia, A. M.; Henrot-Versille, S.; Hensel, C.; Herbert, G. H.; Hernandez, C. M.; Hernández Jiménez, Y.; Herrberg-Schubert, R.; Herten, G.; Hertenberger, R.; Hervas, L.; Hesketh, G. G.; Hessey, N. P.; Hickling, R.; Higón-Rodriguez, E.; Hill, J. C.; Hiller, K. H.; Hillert, S.; Hillier, S. J.; Hinchliffe, I.; Hines, E.; Hirose, M.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoffman, J.; Hoffmann, D.; Hofmann, J. I.; Hohlfeld, M.; Holmes, T. R.; Hong, T. M.; Hooft van Huysduynen, L.; Hostachy, J.-Y.; Hou, S.; Hoummada, A.; Howard, J.; Howarth, J.; Hrabovsky, M.; Hristova, I.; Hrivnac, J.; Hryn'ova, T.; Hsu, P. J.; Hsu, S.-C.; Hu, D.; Hu, X.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huettmann, A.; Huffman, T. B.; Hughes, E. W.; Hughes, G.; Huhtinen, M.; Hülsing, T. A.; Hurwitz, M.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Idarraga, J.; Ideal, E.; Iengo, P.; Igonkina, O.; Iizawa, T.; Ikegami, Y.; Ikematsu, K.; Ikeno, M.; Iliadis, D.; Ilic, N.; Inamaru, Y.; Ince, T.; Ioannou, P.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Irles Quiles, A.; Isaksson, C.; Ishino, M.; Ishitsuka, M.; Ishmukhametov, R.; Issever, C.; Istin, S.; Ivashin, A. V.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jackson, B.; Jackson, J. N.; Jackson, M.; Jackson, P.; Jaekel, M. R.; Jain, V.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jakubek, J.; Jamin, D. O.; Jana, D. K.; Jansen, E.; Jansen, H.; Janssen, J.; Janus, M.; Jared, R. C.; Jarlskog, G.; Jeanty, L.; Jeng, G.-Y.; Jen-La Plante, I.; Jennens, D.; Jenni, P.; Jentzsch, J.; Jeske, C.; Jézéquel, S.; Jha, M. K.; Ji, H.; Ji, W.; Jia, J.; Jiang, Y.; Jimenez Belenguer, M.; Jin, S.; Jinaru, A.; Jinnouchi, O.; Joergensen, M. D.; Joffe, D.; Johansson, K. E.; Johansson, P.; Johns, K. A.; Jon-And, K.; Jones, G.; Jones, R. W. L.; Jones, T. J.; Jorge, P. M.; Joshi, K. D.; Jovicevic, J.; Ju, X.; Jung, C. A.; Jungst, R. 
M.; Jussel, P.; Juste Rozas, A.; Kaci, M.; Kaczmarska, A.; Kadlecik, P.; Kado, M.; Kagan, H.; Kagan, M.; Kajomovitz, E.; Kalinin, S.; Kama, S.; Kanaya, N.; Kaneda, M.; Kaneti, S.; Kanno, T.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kapliy, A.; Kar, D.; Karakostas, K.; Karastathis, N.; Karnevskiy, M.; Karpov, S. N.; Karthik, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kashif, L.; Kasieczka, G.; Kass, R. D.; Kastanas, A.; Kataoka, Y.; Katre, A.; Katzy, J.; Kaushik, V.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kazama, S.; Kazanin, V. F.; Kazarinov, M. Y.; Keeler, R.; Keener, P. T.; Kehoe, R.; Keil, M.; Keller, J. S.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Kessoku, K.; Keung, J.; Khalil-zada, F.; Khandanyan, H.; Khanov, A.; Kharchenko, D.; Khodinov, A.; Khomich, A.; Khoo, T. J.; Khoriauli, G.; Khoroshilov, A.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kim, H.; Kim, S. H.; Kimura, N.; Kind, O.; King, B. T.; King, M.; King, R. S. B.; King, S. B.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kitamura, T.; Kittelmann, T.; Kiuchi, K.; Kladiva, E.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klinger, J. A.; Klinkby, E. B.; Klioutchnikova, T.; Klok, P. F.; Kluge, E.-E.; Kluit, P.; Kluth, S.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koenig, S.; Koevesarki, P.; Koffas, T.; Koffeman, E.; Kogan, L. A.; Kohlmann, S.; Kohout, Z.; Kohriki, T.; Koi, T.; Kolanoski, H.; Koletsou, I.; Koll, J.; Komar, A. A.; Komori, Y.; Kondo, T.; Köneke, K.; König, A. C.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Kopeliansky, R.; Koperny, S.; Köpke, L.; Kopp, A. K.; Korcyl, K.; Kordas, K.; Korn, A.; Korol, A. A.; Korolkov, I.; Korolkova, E. V.; Korotkov, V. A.; Kortner, O.; Kortner, S.; Kostyukhin, V. V.; Kotov, S.; Kotov, V. M.; Kotwal, A.; Kourkoumelis, C.; Kouskoura, V.; Koutsman, A.; Kowalewski, R.; Kowalski, T. Z.; Kozanecki, W.; Kozhin, A. S.; Kral, V.; Kramarenko, V. A.; Kramberger, G.; Krasny, M. W.; Krasznahorkay, A.; Kraus, J. K.; Kravchenko, A.; Kreiss, S.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, N.; Krieger, P.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Kruker, T.; Krumnack, N.; Krumshteyn, Z. V.; Kruse, A.; Kruse, M. C.; Kruskal, M.; Kubota, T.; Kuday, S.; Kuehn, S.; Kugel, A.; Kuhl, T.; Kukhtin, V.; Kulchitsky, Y.; Kuleshov, S.; Kuna, M.; Kunkle, J.; Kupco, A.; Kurashige, H.; Kurata, M.; Kurochkin, Y. A.; Kurumida, R.; Kus, V.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; Kwee, R.; La Rosa, A.; La Rotonda, L.; Labarga, L.; Lablak, S.; Lacasta, C.; Lacava, F.; Lacey, J.; Lacker, H.; Lacour, D.; Lacuesta, V. R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Laier, H.; Laisne, E.; Lambourne, L.; Lampen, C. L.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lang, V. S.; Lange, C.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Laplace, S.; Lapoire, C.; Laporte, J. F.; Lari, T.; Larner, A.; Lassnig, M.; Laurelli, P.; Lavorini, V.; Lavrijsen, W.; Laycock, P.; Le, B. T.; Le Dortz, O.; Le Guirriec, E.; Le Menedeu, E.; LeCompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, H.; Lee, J. S. H.; Lee, S. C.; Lee, L.; Lefebvre, G.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehan, A.; Lehmacher, M.; Lehmann Miotto, G.; Leister, A. G.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Lendermann, V.; Leney, K. J. C.; Lenz, T.; Lenzen, G.; Lenzi, B.; Leone, R.; Leonhardt, K.; Leontsinis, S.; Leroy, C.; Lessard, J.-R.; Lester, C. 
G.; Lester, C. M.; Levêque, J.; Levin, D.; Levinson, L. J.; Lewis, A.; Lewis, G. H.; Leyko, A. M.; Leyton, M.; Li, B.; Li, B.; Li, H.; Li, H. L.; Li, S.; Li, X.; Liang, Z.; Liao, H.; Liberti, B.; Lichard, P.; Lie, K.; Liebal, J.; Liebig, W.; Limbach, C.; Limosani, A.; Limper, M.; Lin, S. C.; Linde, F.; Lindquist, B. E.; Linnemann, J. T.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lissauer, D.; Lister, A.; Litke, A. M.; Liu, B.; Liu, D.; Liu, J. B.; Liu, K.; Liu, L.; Liu, M.; Liu, M.; Liu, Y.; Livan, M.; Livermore, S. S. A.; Lleres, A.; Llorente Merino, J.; Lloyd, S. L.; Lo Sterzo, F.; Lobodzinska, E.; Loch, P.; Lockman, W. S.; Loddenkoetter, T.; Loebinger, F. K.; Loevschall-Jensen, A. E.; Loginov, A.; Loh, C. W.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Lombardo, V. P.; Long, J. D.; Long, R. E.; Lopes, L.; Lopez Mateos, D.; Lopez Paredes, B.; Lorenz, J.; Lorenzo Martinez, N.; Losada, M.; Loscutoff, P.; Losty, M. J.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lowe, A. J.; Lu, F.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Ludwig, D.; Ludwig, I.; Luehring, F.; Lukas, W.; Luminari, L.; Lund, E.; Lundberg, J.; Lundberg, O.; Lund-Jensen, B.; Lungwitz, M.; Lynn, D.; Lysak, R.; Lytken, E.; Ma, H.; Ma, L. L.; Maccarrone, G.; Macchiolo, A.; Maček, B.; Machado Miguens, J.; Macina, D.; Mackeprang, R.; Madar, R.; Madaras, R. J.; Maddocks, H. J.; Mader, W. F.; Madsen, A.; Maeno, M.; Maeno, T.; Magnoni, L.; Magradze, E.; Mahboubi, K.; Mahlstedt, J.; Mahmoud, S.; Mahout, G.; Maiani, C.; Maidantchik, C.; Maio, A.; Majewski, S.; Makida, Y.; Makovec, N.; Mal, P.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyshev, V. M.; Malyukov, S.; Mamuzic, J.; Mandelli, L.; Mandić, I.; Mandrysch, R.; Maneira, J.; Manfredini, A.; Manhaes de Andrade Filho, L.; Manjarres Ramos, J. A.; Mann, A.; Manning, P. M.; Manousakis-Katsikakis, A.; Mansoulie, B.; Mantifel, R.; Mapelli, L.; March, L.; Marchand, J. F.; Marchese, F.; Marchiori, G.; Marcisovsky, M.; Marino, C. P.; Marques, C. N.; Marroquim, F.; Marshall, Z.; Marti, L. F.; Marti-Garcia, S.; Martin, B.; Martin, B.; Martin, J. P.; Martin, T. A.; Martin, V. J.; Martin dit Latour, B.; Martinez, H.; Martinez, M.; Martin-Haugh, S.; Martyniuk, A. C.; Marx, M.; Marzano, F.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Massa, I.; Massol, N.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Matsunaga, H.; Matsushita, T.; Mättig, P.; Mättig, S.; Mattmann, J.; Mattravers, C.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Mazzaferro, L.; Mazzanti, M.; Mc Goldrick, G.; Mc Kee, S. P.; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McCubbin, N. A.; McFarlane, K. W.; Mcfayden, J. A.; Mchedlidze, G.; Mclaughlan, T.; McMahon, S. J.; McPherson, R. A.; Meade, A.; Mechnich, J.; Mechtel, M.; Medinnis, M.; Meehan, S.; Meera-Lebbai, R.; Mehlhase, S.; Mehta, A.; Meier, K.; Meineck, C.; Meirose, B.; Melachrinos, C.; Mellado Garcia, B. R.; Meloni, F.; Mendoza Navas, L.; Mengarelli, A.; Menke, S.; Meoni, E.; Mercurio, K. M.; Mergelmeyer, S.; Meric, N.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Merritt, H.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meyer, C.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Meyer, J.; Michal, S.; Middleton, R. P.; Migas, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Miller, D. W.; Mills, C.; Milov, A.; Milstead, D. A.; Milstein, D.; Minaenko, A. A.; Miñano Moya, M.; Minashvili, I. A.; Mincer, A. 
I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L. M.; Mirabelli, G.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Mitsui, S.; Miyagawa, P. S.; Mjörnmark, J. U.; Moa, T.; Moeller, V.; Mohapatra, S.; Mohr, W.; Molander, S.; Moles-Valls, R.; Molfetas, A.; Mönig, K.; Monini, C.; Monk, J.; Monnier, E.; Montejo Berlingen, J.; Monticelli, F.; Monzani, S.; Moore, R. W.; Mora Herrera, C.; Moraes, A.; Morange, N.; Morel, J.; Moreno, D.; Moreno Llácer, M.; Morettini, P.; Morgenstern, M.; Morii, M.; Moritz, S.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Morvaj, L.; Moser, H. G.; Mosidze, M.; Moss, J.; Mount, R.; Mountricha, E.; Mouraviev, S. V.; Moyse, E. J. W.; Mudd, R. D.; Mueller, F.; Mueller, J.; Mueller, K.; Mueller, T.; Mueller, T.; Muenstermann, D.; Munwes, Y.; Murillo Quijada, J. A.; Murray, W. J.; Mussche, I.; Musto, E.; Myagkov, A. G.; Myska, M.; Nackenhorst, O.; Nadal, J.; Nagai, K.; Nagai, R.; Nagai, Y.; Nagano, K.; Nagarkar, A.; Nagasaka, Y.; Nagel, M.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Namasivayam, H.; Nanava, G.; Napier, A.; Narayan, R.; Nash, M.; Nattermann, T.; Naumann, T.; Navarro, G.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Negri, A.; Negri, G.; Negrini, M.; Nektarijevic, S.; Nelson, A.; Nelson, T. K.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Neubauer, M. S.; Neumann, M.; Neusiedl, A.; Neves, R. M.; Nevski, P.; Newcomer, F. M.; Newman, P. R.; Nguyen, D. H.; Nguyen Thi Hong, V.; Nickerson, R. B.; Nicolaidou, R.; Nicquevert, B.; Nielsen, J.; Nikiforou, N.; Nikiforov, A.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolics, K.; Nikolopoulos, K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nisius, R.; Nobe, T.; Nodulman, L.; Nomachi, M.; Nomidis, I.; Norberg, S.; Nordberg, M.; Novakova, J.; Nozaki, M.; Nozka, L.; Ntekas, K.; Nuncio-Quiroz, A.-E.; Nunes Hanninger, G.; Nunnemann, T.; Nurse, E.; O'Brien, B. J.; O'Grady, F.; O'Neil, D. C.; O'Shea, V.; Oakes, L. B.; Oakham, F. G.; Oberlack, H.; Ocariz, J.; Ochi, A.; Ochoa, M. I.; Oda, S.; Odaka, S.; Ogren, H.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohshima, T.; Okamura, W.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Olchevski, A. G.; Olivares Pino, S. A.; Oliveira, M.; Oliveira Damazio, D.; Oliver Garcia, E.; Olivito, D.; Olszewski, A.; Olszowska, J.; Onofre, A.; Onyisi, P. U. E.; Oram, C. J.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlando, N.; Oropeza Barrera, C.; Orr, R. S.; Osculati, B.; Ospanov, R.; Otero y Garzon, G.; Otono, H.; Ouchrif, M.; Ouellette, E. A.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Ovcharova, A.; Owen, M.; Owen, S.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pacheco Pages, A.; Padilla Aranda, C.; Pagan Griso, S.; Paganis, E.; Pahl, C.; Paige, F.; Pais, P.; Pajchel, K.; Palacino, G.; Palestini, S.; Pallin, D.; Palma, A.; Palmer, J. D.; Pan, Y. B.; Panagiotopoulou, E.; Panduro Vazquez, J. G.; Pani, P.; Panikashvili, N.; Panitkin, S.; Pantea, D.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Paredes Hernandez, D.; Parker, M. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pashapour, S.; Pasqualucci, E.; Passaggio, S.; Passeri, A.; Pastore, F.; Pastore, Fr.; Pásztor, G.; Pataraia, S.; Patel, N. D.; Pater, J. R.; Patricelli, S.; Pauly, T.; Pearce, J.; Pedersen, M.; Pedraza Lopez, S.; Pedro, R.; Peleganchuk, S. V.; Pelikan, D.; Peng, H.; Penning, B.; Penwell, J.; Perepelitsa, D. V.; Perez Cavalcanti, T.; Perez Codina, E.; Pérez García-Estañ, M. T.; Perez Reale, V.; Perini, L.; Pernegger, H.; Perrino, R.; Peschke, R.; Peshekhonov, V. D.; Peters, K.; Peters, R. F. 
Y.; Petersen, B. A.; Petersen, J.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petrolo, E.; Petrucci, F.; Petteni, M.; Pezoa, R.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Piccaro, E.; Piccinini, M.; Piec, S. M.; Piegaia, R.; Pignotti, D. T.; Pilcher, J. E.; Pilkington, A. D.; Pina, J.; Pinamonti, M.; Pinder, A.; Pinfold, J. L.; Pingel, A.; Pinto, B.; Pizio, C.; Pleier, M.-A.; Pleskot, V.; Plotnikova, E.; Plucinski, P.; Poddar, S.; Podlyski, F.; Poettgen, R.; Poggioli, L.; Pohl, D.; Pohl, M.; Polesello, G.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Pomeroy, D.; Pommès, K.; Pontecorvo, L.; Pope, B. G.; Popeneciu, G. A.; Popovic, D. S.; Poppleton, A.; Portell Bueso, X.; Pospelov, G. E.; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. J.; Potter, C. T.; Poulard, G.; Poveda, J.; Pozdnyakov, V.; Prabhu, R.; Pralavorio, P.; Pranko, A.; Prasad, S.; Pravahan, R.; Prell, S.; Price, D.; Price, J.; Price, L. E.; Prieur, D.; Primavera, M.; Proissl, M.; Prokofiev, K.; Prokoshin, F.; Protopapadaki, E.; Protopopescu, S.; Proudfoot, J.; Prudent, X.; Przybycien, M.; Przysiezniak, H.; Psoroulas, S.; Ptacek, E.; Pueschel, E.; Puldon, D.; Purohit, M.; Puzo, P.; Pylypchenko, Y.; Qian, J.; Quadt, A.; Quarrie, D. R.; Quayle, W. B.; Quilty, D.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Ragusa, F.; Rahal, G.; Rajagopalan, S.; Rammensee, M.; Rammes, M.; Randle-Conde, A. S.; Rangel-Smith, C.; Rao, K.; Rauscher, F.; Rave, T. C.; Ravenscroft, T.; Raymond, M.; Read, A. L.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Reinsch, A.; Reisin, H.; Reisinger, I.; Relich, M.; Rembser, C.; Ren, Z. L.; Renaud, A.; Rescigno, M.; Resconi, S.; Resende, B.; Reznicek, P.; Rezvani, R.; Richter, R.; Ridel, M.; Rieck, P.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Ritsch, E.; Riu, I.; Rivoltella, G.; Rizatdinova, F.; Rizvi, E.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Rocha de Lima, J. G.; Roda, C.; Roda Dos Santos, D.; Rodrigues, L.; Roe, S.; Røhne, O.; Rolli, S.; Romaniouk, A.; Romano, M.; Romeo, G.; Romero Adam, E.; Rompotis, N.; Roos, L.; Ros, E.; Rosati, S.; Rosbach, K.; Rose, A.; Rose, M.; Rosendahl, P. L.; Rosenthal, O.; Rossetti, V.; Rossi, E.; Rossi, L. P.; Rosten, R.; Rotaru, M.; Roth, I.; Rothberg, J.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rubinskiy, I.; Rud, V. I.; Rudolph, C.; Rudolph, M. S.; Rühr, F.; Ruiz-Martinez, A.; Rumyantsev, L.; Rurikova, Z.; Rusakovich, N. A.; Ruschke, A.; Rutherfoord, J. P.; Ruthmann, N.; Ruzicka, P.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryder, N. C.; Saavedra, A. F.; Sacerdoti, S.; Saddique, A.; Sadeh, I.; Sadrozinski, H. F.-W.; Sadykov, R.; Safai Tehrani, F.; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salamon, A.; Saleem, M.; Salek, D.; Sales De Bruin, P. H.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvachua Ferrando, B. M.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sampsonidis, D.; Sanchez, A.; Sánchez, J.; Sanchez Martinez, V.; Sandaker, H.; Sander, H. G.; Sanders, M. P.; Sandhoff, M.; Sandoval, T.; Sandoval, C.; Sandstroem, R.; Sankey, D. P. C.; Sansoni, A.; Santoni, C.; Santonico, R.; Santos, H.; Santoyo Castillo, I.; Sapp, K.; Sapronov, A.; Saraiva, J. G.; Sarkisyan-Grinbaum, E.; Sarrazin, B.; Sartisohn, G.; Sasaki, O.; Sasaki, Y.; Sasao, N.; Satsounkevitch, I.; Sauvage, G.; Sauvan, E.; Sauvan, J. B.; Savard, P.; Savinov, V.; Savu, D. O.; Sawyer, C.; Sawyer, L.; Saxon, D. 
H.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Scarcella, M.; Schaarschmidt, J.; Schacht, P.; Schaefer, D.; Schaelicke, A.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Scharf, V.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Scherzer, M. I.; Schiavi, C.; Schieck, J.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmidt, E.; Schmieden, K.; Schmitt, C.; Schmitt, C.; Schmitt, S.; Schneider, B.; Schnellbach, Y. J.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schorlemmer, A. L. S.; Schott, M.; Schouten, D.; Schovancova, J.; Schram, M.; Schramm, S.; Schreyer, M.; Schroeder, C.; Schroer, N.; Schuh, N.; Schultens, M. J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwartzman, A.; Schwegler, Ph.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Schwindt, T.; Schwoerer, M.; Sciacca, F. G.; Scifo, E.; Sciolla, G.; Scott, W. G.; Scutti, F.; Searcy, J.; Sedov, G.; Sedykh, E.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. M.; Sekhniaidze, G.; Sekula, S. J.; Selbach, K. E.; Seliverstov, D. M.; Sellers, G.; Seman, M.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Serkin, L.; Serre, T.; Seuster, R.; Severini, H.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shamim, M.; Shan, L. Y.; Shank, J. T.; Shao, Q. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Sherwood, P.; Shimizu, S.; Shimojima, M.; Shin, T.; Shiyakova, M.; Shmeleva, A.; Shochet, M. J.; Short, D.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Shushkevich, S.; Sicho, P.; Sidorov, D.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silbert, O.; Silva, J.; Silver, Y.; Silverstein, D.; Silverstein, S. B.; Simak, V.; Simard, O.; Simic, Lj.; Simion, S.; Simioni, E.; Simmons, B.; Simoniello, R.; Simonyan, M.; Sinervo, P.; Sinev, N. B.; Sipica, V.; Siragusa, G.; Sircar, A.; Sisakyan, A. N.; Sivoklokov, S. Yu.; Sjölin, J.; Sjursen, T. B.; Skinnari, L. A.; Skottowe, H. P.; Skovpen, K. Yu.; Skubic, P.; Slater, M.; Slavicek, T.; Sliwa, K.; Smakhtin, V.; Smart, B. H.; Smestad, L.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, K. M.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snidero, G.; Snow, J.; Snyder, S.; Sobie, R.; Socher, F.; Sodomka, J.; Soffer, A.; Soh, D. A.; Solans, C. A.; Solar, M.; Solc, J.; Soldatov, E. Yu.; Soldevila, U.; Solfaroli Camillocci, E.; Solodkov, A. A.; Solovyanov, O. V.; Solovyev, V.; Soni, N.; Sood, A.; Sopko, V.; Sopko, B.; Sosebee, M.; Soualah, R.; Soueid, P.; Soukharev, A. M.; South, D.; Spagnolo, S.; Spanò, F.; Spearman, W. R.; Spighi, R.; Spigo, G.; Spousta, M.; Spreitzer, T.; Spurlock, B.; St. Denis, R. D.; Stahlman, J.; Stamen, R.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stanescu-Bellu, M.; Stanitzki, M. M.; Stapnes, S.; Starchenko, E. A.; Stark, J.; Staroba, P.; Starovoitov, P.; Staszewski, R.; Stavina, P.; Steele, G.; Steinbach, P.; Steinberg, P.; Stekl, I.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stern, S.; Stewart, G. A.; Stillings, J. A.; Stockton, M. C.; Stoebe, M.; Stoerig, K.; Stoicea, G.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, E.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Stucci, S. A.; Stugu, B.; Stumer, I.; Stupak, J.; Sturm, P.; Styles, N. A.; Su, D.; Su, J.; Subramania, HS.; Subramaniam, R.; Succurro, A.; Sugaya, Y.; Suhr, C.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, X.; Sundermann, J. E.; Suruliz, K.; Susinno, G.; Sutton, M. 
R.; Suzuki, Y.; Svatos, M.; Swedish, S.; Swiatlowski, M.; Sykora, I.; Sykora, T.; Ta, D.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Taiblum, N.; Takahashi, Y.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tam, J. Y. C.; Tamsett, M. C.; Tan, K. G.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tanaka, S.; Tanasijczuk, A. J.; Tani, K.; Tannoury, N.; Tapprogge, S.; Tarem, S.; Tarrade, F.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Tavares Delgado, A.; Tayalati, Y.; Taylor, C.; Taylor, F. E.; Taylor, G. N.; Taylor, W.; Teischinger, F. A.; Teixeira Dias Castanheira, M.; Teixeira-Dias, P.; Temming, K. K.; Ten Kate, H.; Teng, P. K.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Therhaag, J.; Theveneaux-Pelzer, T.; Thoma, S.; Thomas, J. P.; Thompson, E. N.; Thompson, P. D.; Thompson, P. D.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Thomson, M.; Thong, W. M.; Thun, R. P.; Tian, F.; Tibbetts, M. J.; Tic, T.; Tikhomirov, V. O.; Tikhonov, Yu. A.; Timoshenko, S.; Tiouchichine, E.; Tipton, P.; Tisserant, S.; Todorov, T.; Todorova-Nova, S.; Toggerson, B.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tollefson, K.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Topilin, N. D.; Torrence, E.; Torres, H.; Torró Pastor, E.; Toth, J.; Touchard, F.; Tovey, D. R.; Tran, H. L.; Trefzger, T.; Tremblet, L.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Triplett, N.; Trischuk, W.; Trocmé, B.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; True, P.; Trzebinski, M.; Trzupek, A.; Tsarouchas, C.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsionou, D.; Tsipolitis, G.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsukerman, I. I.; Tsulaia, V.; Tsung, J.-W.; Tsuno, S.; Tsybychev, D.; Tua, A.; Tudorache, A.; Tudorache, V.; Tuggle, J. M.; Tuna, A. N.; Tupputi, S. A.; Turchikhin, S.; Turecek, D.; Turk Cakir, I.; Turra, R.; Tuts, P. M.; Tykhonov, A.; Tylmad, M.; Tyndel, M.; Uchida, K.; Ueda, I.; Ueno, R.; Ughetto, M.; Ugland, M.; Uhlenbrock, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Urbaniec, D.; Urquijo, P.; Usai, G.; Usanova, A.; Vacavant, L.; Vacek, V.; Vachon, B.; Valencic, N.; Valentinetti, S.; Valero, A.; Valery, L.; Valkar, S.; Valladolid Gallego, E.; Vallecorsa, S.; Valls Ferrer, J. A.; Van Berg, R.; Van Der Deijl, P. C.; van der Geer, R.; van der Graaf, H.; Van Der Leeuw, R.; van der Ster, D.; van Eldik, N.; van Gemmeren, P.; Van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vaniachine, A.; Vankov, P.; Vannucci, F.; Vardanyan, G.; Vari, R.; Varnes, E. W.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vassilakopoulos, V. I.; Vazeille, F.; Vazquez Schroeder, T.; Veatch, J.; Veloso, F.; Veneziano, S.; Ventura, A.; Ventura, D.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vest, A.; Vetterli, M. C.; Viazlo, O.; Vichou, I.; Vickey, T.; Vickey Boeriu, O. E.; Viehhauser, G. H. A.; Viel, S.; Vigne, R.; Villa, M.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Virzi, J.; Vitells, O.; Viti, M.; Vivarelli, I.; Vives Vaque, F.; Vlachos, S.; Vladoiu, D.; Vlasak, M.; Vogel, A.; Vokac, P.; Volpi, G.; Volpi, M.; Volpini, G.; von der Schmitt, H.; von Radziewski, H.; von Toerne, E.; Vorobel, V.; Vos, M.; Voss, R.; Vossebeld, J. 
H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vu Anh, T.; Vuillermet, R.; Vukotic, I.; Vykydal, Z.; Wagner, W.; Wagner, P.; Wahrmund, S.; Wakabayashi, J.; Walch, S.; Walder, J.; Walker, R.; Walkowiak, W.; Wall, R.; Waller, P.; Walsh, B.; Wang, C.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, K.; Wang, R.; Wang, S. M.; Wang, T.; Wang, X.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Warsinsky, M.; Washbrook, A.; Wasicki, C.; Watanabe, I.; Watkins, P. M.; Watson, A. T.; Watson, I. J.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, A. T.; Waugh, B. M.; Webb, S.; Weber, M. S.; Weber, S. W.; Webster, J. S.; Weidberg, A. R.; Weigell, P.; Weingarten, J.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wendland, D.; Weng, Z.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Wessels, M.; Wetter, J.; Whalen, K.; White, A.; White, M. J.; White, R.; White, S.; Whiteson, D.; Whittington, D.; Wicke, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wijeratne, P. A.; Wildauer, A.; Wildt, M. A.; Wilhelm, I.; Wilkens, H. G.; Will, J. Z.; Williams, H. H.; Williams, S.; Willis, W.; Willocq, S.; Wilson, J. A.; Wilson, A.; Wingerter-Seez, I.; Winkelmann, S.; Winklmeier, F.; Wittgen, M.; Wittig, T.; Wittkowski, J.; Wollstadt, S. J.; Wolter, M. W.; Wolters, H.; Wong, W. C.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wozniak, K. W.; Wraight, K.; Wright, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wulf, E.; Wyatt, T. R.; Wynne, B. M.; Xella, S.; Xiao, M.; Xu, C.; Xu, D.; Xu, L.; Yabsley, B.; Yacoob, S.; Yamada, M.; Yamaguchi, H.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, K.; Yamamoto, S.; Yamamura, T.; Yamanaka, T.; Yamauchi, K.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, U. K.; Yang, Y.; Yanush, S.; Yao, L.; Yasu, Y.; Yatsenko, E.; Yau Wong, K. H.; Ye, J.; Ye, S.; Yen, A. L.; Yildirim, E.; Yilmaz, M.; Yoosoofmiya, R.; Yorita, K.; Yoshida, R.; Yoshihara, K.; Young, C.; Young, C. J. S.; Youssef, S.; Yu, D. R.; Yu, J.; Yu, J.; Yuan, L.; Yurkewicz, A.; Zabinski, B.; Zaidan, R.; Zaitsev, A. M.; Zaman, A.; Zambito, S.; Zanello, L.; Zanzi, D.; Zaytsev, A.; Zeitnitz, C.; Zeman, M.; Zemla, A.; Zengel, K.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zevi della Porta, G.; Zhang, D.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, X.; Zhang, Z.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, L.; Zhou, N.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, R.; Zimmermann, S.; Zimmermann, S.; Zinonos, Z.; Ziolkowski, M.; Zitoun, R.; Zobernig, G.; Zoccoli, A.; zur Nedden, M.; Zurzolo, G.; Zutshi, V.; Zwalinski, L.

    2015-01-01

    The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector using proton-proton collision data with a centre-of-mass energy of √s = 7 TeV corresponding to an integrated luminosity of 4.7 fb⁻¹. Jets are reconstructed from energy deposits forming topological clusters of calorimeter cells using the anti-k_t algorithm with distance parameters R = 0.4 or R = 0.6, and are calibrated using MC simulations. A residual JES correction is applied to account for differences between data and MC simulations. This correction and its systematic uncertainty are estimated using a combination of in situ techniques exploiting the transverse momentum balance between a jet and a reference object such as a photon or a Z boson, for 20 ≤ p_T^jet < 1000 GeV and pseudorapidities |η| < 4.5. The effect of multiple proton-proton interactions is corrected for, and an uncertainty is evaluated using in situ techniques. The smallest JES uncertainty of less than 1 % is found in the central calorimeter region (|η| < 1.2) for jets with 55 ≤ p_T^jet < 500 GeV. For central jets at lower p_T, the uncertainty is about 3 %. A consistent JES estimate is found using measurements of the calorimeter response of single hadrons in proton-proton collisions and test-beam data, which also provide the estimate for p_T^jet > 1 TeV. The calibration of forward jets is derived from dijet p_T balance measurements. The resulting uncertainty reaches its largest value of 6 % for low-p_T jets at |η| = 4.5. Additional JES uncertainties due to specific event topologies, such as close-by jets or selections of event samples with an enhanced content of jets originating from light quarks or gluons, are also discussed. The magnitude of these uncertainties depends on the event sample used in a given physics analysis, but typically amounts to 0.5-3 %.

  16. A Systematic Procedure for Assigning Uncertainties to Data Evaluations

    SciTech Connect

    Younes, W

    2007-02-20

    In this report, an algorithm that automatically constructs an uncertainty band around any evaluation curve is described. Given an evaluation curve and a corresponding set of experimental data points with x and y error bars, the algorithm expands a symmetric region around the evaluation curve until 68.3% of a set of points, randomly sampled from the experimental data, fall within the region. For a given evaluation curve, the region expanded in this way represents, by definition, a one-standard-deviation interval about the evaluation that accounts for the experimental data. The algorithm is tested against several benchmarks, and is shown to be well-behaved, even when there are large gaps in the available experimental data. The performance of the algorithm is assessed quantitatively using the tools of statistical-inference theory.
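
    The expansion procedure lends itself to a compact numerical sketch. The following Python fragment (hypothetical names; a schematic of the resampling-and-expansion idea, not the report's code, and sampling only the y error bars for brevity) widens a symmetric band around an evaluation curve until 68.3% of points resampled from the data fall inside it:

    ```python
    import numpy as np

    def band_half_width(curve, x, y, y_err, n_samples=100000, coverage=0.683):
        """Half-width of a symmetric band around curve(x) containing
        `coverage` of points resampled from the experimental data."""
        rng = np.random.default_rng(0)
        idx = rng.integers(0, x.size, n_samples)      # pick random data points
        y_sampled = rng.normal(y[idx], y_err[idx])    # resample within y errors
        dist = np.abs(y_sampled - curve(x[idx]))      # distance from the curve
        # The 68.3rd percentile of the distances is, by construction, the
        # half-width that contains 68.3% of the sampled points.
        return np.quantile(dist, coverage)

    # Example: a linear evaluation with noisy synthetic data.
    x = np.linspace(0.0, 10.0, 50)
    y = 2.0 * x + np.random.default_rng(1).normal(0.0, 0.5, x.size)
    print(band_half_width(lambda t: 2.0 * t, x, y, np.full(x.size, 0.5)))
    ```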

  17. Assessment of water quality management with a systematic qualitative uncertainty analysis.

    PubMed

    Chen, Chi-Feng; Ma, Hwong-wen; Reckhow, Kenneth H

    2007-03-01

    Uncertainty is an inevitable source of noise in water quality management and will weaken the adequacy of decisions. Uncertainty is derived from imperfect information, natural variability, and knowledge-based inconsistency. To make better decisions, it is necessary to reduce uncertainty. Conventional uncertainty analyses have focused on quantifying the uncertainty of parameters and variables in a probabilistic framework. However, the foundational properties and basic constraints might influence the entire system more than the quantifiable elements and have to be considered in initial analysis steps. According to binary classification, uncertainty includes quantitative uncertainty and non-quantitative uncertainty, which is also called qualitative uncertainty. Qualitative uncertainty originates from human subjective and biased beliefs. This study provides an understanding of qualitative uncertainty in terms of its conceptual definitions and practical applications. A systematic process of qualitative uncertainty analysis is developed for assisting complete uncertainty analysis, in which a qualitative network could then be built with qualitative relationships and quantifiable functions. In the proposed framework, a knowledge elicitation procedure is required to identify influential factors and their interrelationships. To limit biased information, a checklist is helpful to construct the qualitative network. The checklist helps one to ponder arbitrary assumptions that have often been taken for granted and may yield an incomplete or inappropriate decision analysis. The total maximum daily loads (TMDL) program is used as a surrogate for water quality management in this study. Fifteen uncertainty causes of TMDL programs are elicited by reviewing an influence diagram, and a checklist is formed with tabular interrogations corresponding to each uncertainty cause. The checklist enables decision makers to gain insight into the uncertainty level of the system at early steps as a

  18. Uncertainty about nest position influences systematic search strategies in desert ants.

    PubMed

    Merkle, Tobias; Knaden, Markus; Wehner, Rüdiger

    2006-09-01

    Foraging desert ants return to their starting point, the nest, by means of path integration. If the path-integration vector has been run off but the nest has not yet been reached, the ants engage in systematic search behavior. This behavior results in a system of search loops of ever increasing size and finally leads to a search density profile peaking at the location where the path integration system has been reset to zero. In this study we investigate whether this systematic search behavior is adapted to the uncertainty resulting from the preceding foraging run. We show first that the longer the distances of the foraging excursions, the larger the errors occurring during path integration, and second that the ants adapt their systematic search strategy to their increasing uncertainty by extending their search pattern. Hence, the density of the systematic search pattern is correlated with the ants' confidence in their path integrator. This confidence decreases with increasing foraging distances.

  19. SUPERNOVA CONSTRAINTS AND SYSTEMATIC UNCERTAINTIES FROM THE FIRST THREE YEARS OF THE SUPERNOVA LEGACY SURVEY

    SciTech Connect

    Conley, A.; Carlberg, R. G.; Perrett, K. M.; Guy, J.; Regnault, N.; Astier, P.; Balland, C.; Hardin, D.; Pain, R.; Sullivan, M.; Hook, I. M.; Basa, S.; Fouchez, D.; Howell, D. A.; Palanque-Delabrouille, N.; Rich, J.; Ruhlmann-Kleider, V.; Baumont, S.

    2011-01-15

    We combine high-redshift Type Ia supernovae from the first three years of the Supernova Legacy Survey (SNLS) with other supernova (SN) samples, primarily at lower redshifts, to form a high-quality joint sample of 472 SNe (123 low-z, 93 SDSS, 242 SNLS, and 14 Hubble Space Telescope). SN data alone require cosmic acceleration at >99.999% confidence, including systematic effects. For the dark energy equation of state parameter (assumed constant out to at least z = 1.4) in a flat universe, we find w = -0.91^{+0.16}_{-0.20}(stat)^{+0.07}_{-0.14}(sys) from SNe only, consistent with a cosmological constant. Our fits include a correction for the recently discovered relationship between host-galaxy mass and SN absolute brightness. We pay particular attention to systematic uncertainties, characterizing them using a systematic covariance matrix that incorporates the redshift dependence of these effects, as well as the shape-luminosity and color-luminosity relationships. Unlike previous work, we include the effects of systematic terms on the empirical light-curve models. The total systematic uncertainty is dominated by calibration terms. We describe how the systematic uncertainties can be reduced with soon-to-be-available improved nearby and intermediate-redshift samples, particularly those calibrated onto USNO/SDSS-like systems.

  20. Systematic uncertainties associated with the cosmological analysis of the first Pan-STARRS1 type Ia supernova sample

    SciTech Connect

    Scolnic, D.; Riess, A.; Brout, D.; Rodney, S.; Rest, A.; Huber, M. E.; Tonry, J. L.; Foley, R. J.; Chornock, R.; Berger, E.; Soderberg, A. M.; Stubbs, C. W.; Kirshner, R. P.; Challis, P.; Czekala, I.; Drout, M.; Narayan, G.; Smartt, S. J.; Botticella, M. T.; Schlafly, E.; and others

    2014-11-01

    We probe the systematic uncertainties from the 113 Type Ia supernovae (SN Ia) in the Pan-STARRS1 (PS1) sample along with 197 SN Ia from a combination of low-redshift surveys. The companion paper by Rest et al. describes the photometric measurements and cosmological inferences from the PS1 sample. The largest systematic uncertainty stems from the photometric calibration of the PS1 and low-z samples. We increase the sample of observed Calspec standards from 7 to 10 used to define the PS1 calibration system. The PS1 and SDSS-II calibration systems are compared and discrepancies up to ∼0.02 mag are recovered. We find that uncertainties in the proper way to treat intrinsic colors and reddening produce differences in the recovered value of w up to 3%. We estimate masses of host galaxies of PS1 supernovae and detect an insignificant difference in distance residuals of the full sample of 0.037 ± 0.031 mag for host galaxies with high and low masses. Assuming flatness and including systematic uncertainties in our analysis of only SNe measurements, we find w = −1.120^{+0.360}_{−0.206}(Stat)^{+0.269}_{−0.291}(Sys). With additional constraints from baryon acoustic oscillation (BAO), cosmic microwave background (CMB; Planck) and H_0 measurements, we find w = −1.166^{+0.072}_{−0.069} and Ω_m = 0.280^{+0.013}_{−0.012} (statistical and systematic errors added in quadrature). The significance of the inconsistency with w = −1 depends on whether we use Planck or Wilkinson Microwave Anisotropy Probe measurements of the CMB: w_{BAO+H0+SN+WMAP} = −1.124^{+0.083}_{−0.065}.
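
    The final step quoted above, adding statistical and systematic errors in quadrature, is done componentwise on the asymmetric error bars. A quick check of that arithmetic on the SNe-only result (numbers taken from the abstract above; componentwise quadrature on asymmetric errors is the stated convention, not a derivation):

    ```python
    from math import hypot

    # w = -1.120 +0.360/-0.206 (Stat) +0.269/-0.291 (Sys), from the abstract.
    stat_up, stat_dn = 0.360, 0.206
    sys_up, sys_dn = 0.269, 0.291

    total_up = hypot(stat_up, sys_up)  # upper error bars in quadrature, ~0.45
    total_dn = hypot(stat_dn, sys_dn)  # lower error bars in quadrature, ~0.36
    print(f"w = -1.120 +{total_up:.2f} / -{total_dn:.2f}")
    ```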

  1. Systematic study of the uncertainties in fitting the cosmic positron data by AMS-02

    SciTech Connect

    Yuan, Qiang; Bi, Xiao-Jun E-mail: bixj@ihep.ac.cn

    2015-03-01

    The operation of AMS-02 opens a new era for the study of cosmic ray physics with unprecedentedly precise data which are comparable with the laboratory measurements. The high precision data allow a quantitative study on the cosmic ray physics and give strict constraints on the nature of cosmic ray sources. However, the intrinsic errors from the theoretical models to interpret the data become dominant over the errors in the data. In the present work we try to give a systematic study on the uncertainties of the models to explain the AMS-02 positron fraction data, which shows the cosmic ray e⁺e⁻ excesses together with the PAMELA and Fermi-LAT measurements. The excesses can be attributed to contributions from the extra e⁺e⁻ sources, such as pulsars or the dark matter annihilation. The possible systematic uncertainties of the theoretical models considered include the cosmic ray propagation, the treatment of the low energy data, the solar modulation, the pp interaction models, the nuclei injection spectrum and so on. We find that in general a spectral hardening of the primary electron injection spectrum above ∼50–100 GeV is favored by the data. Furthermore, the present model uncertainties may lead to a factor of ∼2 enlargement in the determination of the parameter regions of the extra source, such as the dark matter mass, annihilation rate and so on.

  2. The effect of random and systematic measurement uncertainties on temporal and spatial upscaling of N2O fluxes

    NASA Astrophysics Data System (ADS)

    Cowan, Nicholas; Levy, Peter; Skiba, Ute

    2016-04-01

    The addition of reactive nitrogen to agricultural soils in the form of artificial fertilisers or animal waste is the largest global source of anthropogenic N2O emissions. Emission factors are commonly used to evaluate N2O emissions released after the application of nitrogen fertilisers on a global scale based on records of fertiliser use. Currently these emission factors are estimated primarily by a combination of results of experiments in which flux chamber methodology is used to estimate annual cumulative fluxes of N2O after nitrogen fertiliser applications on agricultural soils. The use of the eddy covariance method to measure N2O and estimate emission factors is also becoming more common in the flux community as modern rapid gas analyser instruments advance. The aim of the presentation is to highlight the weaknesses and potential systematic biases in current flux measurement methodology. This is important for GHG accounting and for accurate model calibration and verification. The growing interest in top-down / bottom-up comparisons of tall tower and conventional N2O flux measurements is also an area of research in which the uncertainties in flux measurements need to be properly quantified. The large and unpredictable spatial and temporal variability of N2O fluxes from agricultural soils leads to a significant source of uncertainty in emission factor estimates. N2O flux measurements typically show poor relationships with explanatory co-variates. The true uncertainties in flux measurements at the plot scale are often difficult to propagate to field scale and the annual time scale. This results in very uncertain cumulative flux (emission factor) estimates. Cumulative fluxes estimated using flux chamber and eddy covariance methods can also differ significantly, which complicates the matter further. In this presentation, we examine some effects that spatial and temporal variability of N2O fluxes can have on the estimation of emission factors and describe how
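
    One way to see why plot-to-field upscaling is so uncertain is a small Monte Carlo: chamber-scale N2O fluxes are typically strongly right-skewed, so a field mean estimated from a handful of chambers carries a wide sampling error. A hedged sketch with invented lognormal fluxes (illustrative parameters only, not the presented data):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical field of lognormally distributed chamber-scale fluxes.
    field = rng.lognormal(mean=0.0, sigma=1.5, size=100000)
    print(f"true field mean: {field.mean():.2f}")

    # A survey measures only 10 chamber locations.
    sample = rng.choice(field, size=10, replace=False)

    # Bootstrap the sampling uncertainty of the upscaled (field-mean) flux.
    boot = np.array([rng.choice(sample, sample.size).mean() for _ in range(5000)])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"estimated mean: {sample.mean():.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
    ```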

  3. A new approach to handle additive and multiplicative uncertainties in the measurement for H∞ LPV filtering

    NASA Astrophysics Data System (ADS)

    Lacerda, Márcio J.; Tognetti, Eduardo S.; Oliveira, Ricardo C. L. F.; Peres, Pedro L. D.

    2016-04-01

    This paper presents a general framework to cope with full-order H∞ linear parameter-varying (LPV) filter design subject to inexactly measured parameters. The main novelty is the ability of handling additive and multiplicative uncertainties in the measurements, for both continuous and discrete-time LPV systems, in a unified approach. By conveniently modelling scheduling parameters and uncertainties affecting the measurements, the H∞ filter design problem can be expressed in terms of robust matrix inequalities that become linear when two scalar parameters are fixed. Therefore, the proposed conditions can be efficiently solved through linear matrix inequality relaxations based on polynomial solutions. Numerical examples are presented to illustrate the improved efficiency of the proposed approach when compared to other methods and, more importantly, its capability to deal with scenarios where the available strategies in the literature cannot be used.
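
    The practical payoff of conditions like these is that, once the two scalars are fixed, the synthesis reduces to a semidefinite program. As a toy illustration of posing an LMI feasibility problem to a standard solver (a discrete-time Lyapunov inequality for a fixed stable A, not the paper's filter-synthesis conditions; cvxpy is an assumed dependency):

    ```python
    import cvxpy as cp
    import numpy as np

    # A fixed, Schur-stable system matrix (illustrative values).
    A = np.array([[0.5, 0.2],
                  [0.0, 0.8]])

    # LMI feasibility: find P = P^T > 0 such that A^T P A - P < 0.
    P = cp.Variable((2, 2), symmetric=True)
    eps = 1e-6
    constraints = [P >> eps * np.eye(2),
                   A.T @ P @ A - P << -eps * np.eye(2)]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    print(problem.status)
    print(P.value)
    ```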

  4. A review of sources of systematic errors and uncertainties in observations and simulations at 183 GHz

    NASA Astrophysics Data System (ADS)

    Brogniez, Helene; English, Stephen; Mahfouf, Jean-Francois; Behrendt, Andreas; Berg, Wesley; Boukabara, Sid; Buehler, Stefan Alexander; Chambon, Philippe; Gambacorta, Antonia; Geer, Alan; Ingram, William; Kursinski, E. Robert; Matricardi, Marco; Odintsova, Tatyana A.; Payne, Vivienne H.; Thorne, Peter W.; Tretyakov, Mikhail Yu.; Wang, Junhong

    2016-05-01

    Several recent studies have observed systematic differences between measurements in the 183.31 GHz water vapor line by space-borne sounders and calculations using radiative transfer models, with inputs from either radiosondes (radiosonde observations, RAOBs) or short-range forecasts by numerical weather prediction (NWP) models. This paper discusses all the relevant categories of observation-based or model-based data, quantifies their uncertainties and separates biases that could be common to all causes from those attributable to a particular cause. Reference observations from radiosondes, Global Navigation Satellite System (GNSS) receivers, differential absorption lidar (DIAL) and Raman lidar are thus overviewed. Biases arising from their calibration procedures, NWP models and data assimilation, instrument biases and radiative transfer models (both the models themselves and the underlying spectroscopy) are presented and discussed. Although presently no single process in the comparisons seems capable of explaining the observed structure of bias, recommendations are made in order to better understand the causes.

  5. Community health nursing practices in contexts of poverty, uncertainty and unpredictability: a systematization of personal experiences.

    PubMed

    Laperrière, Hélène

    2007-01-01

    Several years of professional nursing practices, while living in the poorest neighbourhoods in the outlying areas of Brazil's Amazon region, have led the author to develop a better understanding of marginalized populations. Providing care to people with leprosy and sex workers in riverside communities has taken place in conditions of uncertainty, insecurity, unpredictability and institutional violence. The question raised is how we can develop community health nursing practices in this context. A systematization of personal experiences based on popular education is used and analyzed as a way of learning by obtaining scientific knowledge through critical analysis of field practices. Ties of solidarity and belonging developed in informal, mutual-help action groups are promising avenues for research and the development of knowledge in health promotion, prevention and community care and a necessary contribution to national public health programmes.

  6. UNITY: Confronting Supernova Cosmology's Statistical and Systematic Uncertainties in a Unified Bayesian Framework

    NASA Astrophysics Data System (ADS)

    Rubin, D.; Aldering, G.; Barbary, K.; Boone, K.; Chappell, G.; Currie, M.; Deustua, S.; Fagrelius, P.; Fruchter, A.; Hayden, B.; Lidman, C.; Nordin, J.; Perlmutter, S.; Saunders, C.; Sofiatti, C.; Supernova Cosmology Project, The

    2015-11-01

    While recent supernova (SN) cosmology research has benefited from improved measurements, current analysis approaches are not statistically optimal and will prove insufficient for future surveys. This paper discusses the limitations of current SN cosmological analyses in treating outliers, selection effects, shape- and color-standardization relations, unexplained dispersion, and heterogeneous observations. We present a new Bayesian framework, called UNITY (Unified Nonlinear Inference for Type-Ia cosmologY), that incorporates significant improvements in our ability to confront these effects. We apply the framework to real SN observations and demonstrate smaller statistical and systematic uncertainties. We verify earlier results that SNe Ia require nonlinear shape and color standardizations, but we now include these nonlinear relations in a statistically well-justified way. This analysis was primarily performed blinded, in that the basic framework was first validated on simulated data before transitioning to real data. We also discuss possible extensions of the method.

  7. Systematic Uncertainties in Characterizing Cluster Outskirts: The Case of Abell 133

    NASA Astrophysics Data System (ADS)

    Paine, Jennie; Ogrean, Georgiana A.; Nulsen, Paul; Farrah, Duncan

    2016-01-01

    The outskirts of galaxy clusters have low surface brightness compared to the X-ray background, making accurate background subtraction particularly important for analyzing cluster spectra out to and beyond the virial radius. We analyze the thermodynamic properties of the intracluster medium (ICM) of Abell 133 and assess the extent to which uncertainties on background subtraction affect measured quantities. We implement two methods of analyzing the ICM spectra: one in which the blank-sky background is subtracted, and another in which the sky background is modeled. We find that the two methods are consistent within the 90% confidence ranges. We were able to measure the thermodynamic properties of the cluster up to R500. Even at R500, the systematic uncertainties associated with the sky background in the direction of A133 are small, despite the ICM signal constituting only ~25% of the total signal. This work was supported in part by the NSF REU and DoD ASSURE programs under NSF grant no. 1262851 and by the Smithsonian Institution. GAO acknowledges support by NASA through a Hubble Fellowship grant HST-HF2-51345.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.

  8. Systematic evaluation of an atomic clock at 2 × 10⁻¹⁸ total uncertainty.

    PubMed

    Nicholson, T L; Campbell, S L; Hutson, R B; Marti, G E; Bloom, B J; McNally, R L; Zhang, W; Barrett, M D; Safronova, M S; Strouse, G F; Tew, W L; Ye, J

    2015-04-21

    The pursuit of better atomic clocks has advanced many research areas, providing better quantum state control, new insights in quantum science, tighter limits on fundamental constant variation and improved tests of relativity. The record for the best stability and accuracy is currently held by optical lattice clocks. Here we take an important step towards realizing the full potential of a many-particle clock with a state-of-the-art stable laser. Our ⁸⁷Sr optical lattice clock now achieves fractional stability of 2.2 × 10⁻¹⁶ at 1 s. With this improved stability, we perform a new accuracy evaluation of our clock, reducing many systematic uncertainties that limited our previous measurements, such as those in the lattice ac Stark shift, the atoms' thermal environment and the atomic response to room-temperature blackbody radiation. Our combined measurements have reduced the total uncertainty of the JILA Sr clock to 2.1 × 10⁻¹⁸ in fractional frequency units.
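
    As background for the stability figure, fractional stability at an averaging time τ is conventionally quantified by the Allan deviation. A minimal non-overlapping implementation (a generic textbook estimator applied to simulated white frequency noise at the quoted level, not the authors' analysis code):

    ```python
    import numpy as np

    def allan_deviation(y, m):
        """Non-overlapping Allan deviation of fractional-frequency samples y
        at an averaging time of m basic sampling intervals."""
        n = y.size // m
        y_bar = y[: n * m].reshape(n, m).mean(axis=1)   # bin averages
        # sigma_y^2(tau) = <(y_bar[k+1] - y_bar[k])^2> / 2
        return np.sqrt(0.5 * np.mean(np.diff(y_bar) ** 2))

    # White frequency noise at the 2.2e-16 level (simulated, for illustration);
    # for this noise type the Allan deviation falls off as 1/sqrt(tau).
    y = np.random.default_rng(0).normal(0.0, 2.2e-16, 1_000_000)
    for m in (1, 10, 100):
        print(f"tau = {m:>3}: sigma_y = {allan_deviation(y, m):.2e}")
    ```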

  9. Systematic evaluation of an atomic clock at 2 × 10⁻¹⁸ total uncertainty.

    PubMed

    Nicholson, T L; Campbell, S L; Hutson, R B; Marti, G E; Bloom, B J; McNally, R L; Zhang, W; Barrett, M D; Safronova, M S; Strouse, G F; Tew, W L; Ye, J

    2015-01-01

    The pursuit of better atomic clocks has advanced many research areas, providing better quantum state control, new insights in quantum science, tighter limits on fundamental constant variation and improved tests of relativity. The record for the best stability and accuracy is currently held by optical lattice clocks. Here we take an important step towards realizing the full potential of a many-particle clock with a state-of-the-art stable laser. Our ⁸⁷Sr optical lattice clock now achieves fractional stability of 2.2 × 10⁻¹⁶ at 1 s. With this improved stability, we perform a new accuracy evaluation of our clock, reducing many systematic uncertainties that limited our previous measurements, such as those in the lattice ac Stark shift, the atoms' thermal environment and the atomic response to room-temperature blackbody radiation. Our combined measurements have reduced the total uncertainty of the JILA Sr clock to 2.1 × 10⁻¹⁸ in fractional frequency units. PMID:25898253

  10. Systematic evaluation of an atomic clock at 2 × 10⁻¹⁸ total uncertainty

    PubMed Central

    Nicholson, T.L.; Campbell, S.L.; Hutson, R.B.; Marti, G.E.; Bloom, B.J.; McNally, R.L.; Zhang, W.; Barrett, M.D.; Safronova, M.S.; Strouse, G.F.; Tew, W.L.; Ye, J.

    2015-01-01

    The pursuit of better atomic clocks has advanced many research areas, providing better quantum state control, new insights in quantum science, tighter limits on fundamental constant variation and improved tests of relativity. The record for the best stability and accuracy is currently held by optical lattice clocks. Here we take an important step towards realizing the full potential of a many-particle clock with a state-of-the-art stable laser. Our ⁸⁷Sr optical lattice clock now achieves fractional stability of 2.2 × 10⁻¹⁶ at 1 s. With this improved stability, we perform a new accuracy evaluation of our clock, reducing many systematic uncertainties that limited our previous measurements, such as those in the lattice ac Stark shift, the atoms' thermal environment and the atomic response to room-temperature blackbody radiation. Our combined measurements have reduced the total uncertainty of the JILA Sr clock to 2.1 × 10⁻¹⁸ in fractional frequency units. PMID:25898253

  11. Determination of electron beam polarization using electron detector in Compton polarimeter with less than 1% statistical and systematic uncertainty

    SciTech Connect

    Narayan, Amrendra

    2015-05-01

    The Q-weak experiment aims to measure the weak charge of proton with a precision of 4.2%. The proposed precision on weak charge required a 2.5% measurement of the parity violating asymmetry in elastic electron - proton scattering. Polarimetry was the largest experimental contribution to this uncertainty and a new Compton polarimeter was installed in Hall C at Jefferson Lab to make the goal achievable. In this polarimeter the electron beam collides with green laser light in a low gain Fabry-Perot Cavity; the scattered electrons are detected in 4 planes of a novel diamond micro strip detector while the back scattered photons are detected in lead tungstate crystals. This diamond micro-strip detector is the first such device to be used as a tracking detector in a nuclear and particle physics experiment. The diamond detectors are read out using custom built electronic modules that include a preamplifier, a pulse shaping amplifier and a discriminator for each detector micro-strip. We use field programmable gate array based general purpose logic modules for event selection and histogramming. Extensive Monte Carlo simulations and data acquisition simulations were performed to estimate the systematic uncertainties. Additionally, the Moller and Compton polarimeters were cross calibrated at low electron beam currents using a series of interleaved measurements. In this dissertation, we describe all the subsystems of the Compton polarimeter with emphasis on the electron detector. We focus on the FPGA based data acquisition system built by the author and the data analysis methods implemented by the author. The simulations of the data acquisition and the polarimeter that helped rigorously establish the systematic uncertainties of the polarimeter are also elaborated, resulting in the first sub 1% measurement of low energy (∼1 GeV) electron beam polarization with a Compton electron detector. We have demonstrated that diamond based micro-strip detectors can be used for tracking in a

  12. Determination of electron beam polarization using electron detector in Compton polarimeter with less than 1% statistical and systematic uncertainty

    NASA Astrophysics Data System (ADS)

    Narayan, Amrendra

    The Q-weak experiment aims to measure the weak charge of proton with a precision of 4.2%. The proposed precision on weak charge required a 2.5% measurement of the parity violating asymmetry in elastic electron - proton scattering. Polarimetry was the largest experimental contribution to this uncertainty and a new Compton polarimeter was installed in Hall C at Jefferson Lab to make the goal achievable. In this polarimeter the electron beam collides with green laser light in a low gain Fabry-Perot Cavity; the scattered electrons are detected in 4 planes of a novel diamond micro strip detector while the back scattered photons are detected in lead tungstate crystals. This diamond micro-strip detector is the first such device to be used as a tracking detector in a nuclear and particle physics experiment. The diamond detectors are read out using custom built electronic modules that include a preamplifier, a pulse shaping amplifier and a discriminator for each detector micro-strip. We use field programmable gate array based general purpose logic modules for event selection and histogramming. Extensive Monte Carlo simulations and data acquisition simulations were performed to estimate the systematic uncertainties. Additionally, the Moller and Compton polarimeters were cross calibrated at low electron beam currents using a series of interleaved measurements. In this dissertation, we describe all the subsystems of the Compton polarimeter with emphasis on the electron detector. We focus on the FPGA based data acquisition system built by the author and the data analysis methods implemented by the author. The simulations of the data acquisition and the polarimeter that helped rigorously establish the systematic uncertainties of the polarimeter are also elaborated, resulting in the first sub 1% measurement of low energy (~1GeV) electron beam polarization with a Compton electron detector. We have demonstrated that diamond based micro-strip detectors can be used for tracking in a

  13. Systematic Analysis of Resolution and Uncertainties in Gravity Interpretation of Bathymetry Beneath Floating Ice

    NASA Astrophysics Data System (ADS)

    Cochran, J. R.; Tinto, K. J.; Elieff, S. H.; Bell, R. E.

    2011-12-01

    Airborne geophysical surveys in West Antarctica and Greenland carried out during Operation IceBridge (OIB) utilized the Sander Geophysics AIRGrav gravimeter, which collects high quality data during low-altitude, draped flights. These data have been used to determine bathymetry beneath ice shelves and floating ice tongues (e.g., Tinto et al., 2010; Cochran et al., 2010). This paper systematically investigates uncertainties arising from survey, instrumental and geologic constraints in this type of study and the resulting resolution of the bathymetry model. Gravity line data are low-pass filtered with time-based filters to remove high frequency noise. The spatial filter length is dependent on aircraft speed. For parameters used in OIB (70-140 s filters and 270-290 knots), spatial filter half-wavelengths are ~5-10 km. The half-wavelength does not define a lower limit to the width of feature that can be detected, but shorter wavelength features may appear wider with a lower amplitude. Resolution can be improved either by using a shorter filter or by flying slower. Both involve tradeoffs; a shorter filter allows more noise and slower speeds result in less coverage. These filters are applied along tracks, rather than in a region surrounding a measurement. In areas of large gravity relief, tracks in different directions can sample a very different range of gravity values within the length of the filter. We show that this can lead to crossover mismatches of >5 mGal, complicating interpretation. For dense surveys, gridding the data and then sampling the grid at the measurement points can minimize this effect. Resolution is also affected by the elevation of survey flights. For a distributed mass, the gravity amplitude decreases with distance and short-wavelength components attenuate faster. This is not a serious issue for OIB, which flew draped flights <500 m above the ice surface, but is a serious factor for gravimeters that require a constant elevation above the highest
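
    The quoted ~5-10 km half-wavelengths follow directly from the time-domain filter length and the aircraft ground speed. A quick check of that arithmetic (unit conversion only):

    ```python
    KNOT = 0.514444  # metres per second

    for speed_kn in (270, 290):
        for filter_s in (70, 140):
            half_wl_km = speed_kn * KNOT * filter_s / 2.0 / 1000.0
            print(f"{speed_kn} kn, {filter_s} s filter: "
                  f"half-wavelength ~ {half_wl_km:.1f} km")
    ```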

  14. Jet energy measurement and its systematic uncertainty in proton–proton collisions at √s = 7 TeV with the ATLAS detector

    SciTech Connect

    Aad, G.

    2015-01-15

    The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector using proton–proton collision data with a centre-of-mass energy of √s = 7 TeV corresponding to an integrated luminosity of 4.7 fb⁻¹. Jets are reconstructed from energy deposits forming topological clusters of calorimeter cells using the anti-k_t algorithm with distance parameters R = 0.4 or R = 0.6, and are calibrated using MC simulations. A residual JES correction is applied to account for differences between data and MC simulations. This correction and its systematic uncertainty are estimated using a combination of in situ techniques exploiting the transverse momentum balance between a jet and a reference object such as a photon or a Z boson, for 20 ≤ p_T^jet < 1000 GeV and pseudorapidities |η| < 4.5. The effect of multiple proton–proton interactions is corrected for, and an uncertainty is evaluated using in situ techniques. The smallest JES uncertainty of less than 1 % is found in the central calorimeter region (|η| < 1.2) for jets with 55 ≤ p_T^jet < 500 GeV. For central jets at lower p_T, the uncertainty is about 3 %. A consistent JES estimate is found using measurements of the calorimeter response of single hadrons in proton–proton collisions and test-beam data, which also provide the estimate for p_T^jet > 1 TeV. The calibration of forward jets is derived from dijet p_T balance measurements. The resulting uncertainty reaches its largest value of 6 % for low-p_T jets at |η| = 4.5. In addition, JES uncertainties due to specific event topologies, such as close-by jets or selections of event samples with an enhanced content of jets originating from light quarks or gluons, are also discussed. The magnitude of these uncertainties depends on the event sample used in a given physics analysis, but typically amounts to 0.5-3 %.

  15. Jet energy measurement and its systematic uncertainty in proton–proton collisions at √s = 7 TeV with the ATLAS detector

    DOE PAGES

    Aad, G.

    2015-01-15

    The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector using proton–proton collision data with a centre-of-mass energy of √s = 7 TeV corresponding to an integrated luminosity of 4.7 fb⁻¹. Jets are reconstructed from energy deposits forming topological clusters of calorimeter cells using the anti-k_t algorithm with distance parameters R = 0.4 or R = 0.6, and are calibrated using MC simulations. A residual JES correction is applied to account for differences between data and MC simulations. This correction and its systematic uncertainty are estimated using a combination of in situ techniques exploiting the transverse momentum balance between a jet and a reference object such as a photon or a Z boson, for 20 ≤ p_T^jet < 1000 GeV and pseudorapidities |η| < 4.5. The effect of multiple proton–proton interactions is corrected for, and an uncertainty is evaluated using in situ techniques. The smallest JES uncertainty of less than 1 % is found in the central calorimeter region (|η| < 1.2) for jets with 55 ≤ p_T^jet < 500 GeV. For central jets at lower p_T, the uncertainty is about 3 %. A consistent JES estimate is found using measurements of the calorimeter response of single hadrons in proton–proton collisions and test-beam data, which also provide the estimate for p_T^jet > 1 TeV. The calibration of forward jets is derived from dijet p_T balance measurements. The resulting uncertainty reaches its largest value of 6 % for low-p_T jets at |η| = 4.5. In addition, JES uncertainties due to specific event topologies, such as close-by jets or selections of event samples with an enhanced content of jets originating from light quarks or gluons, are also discussed. The magnitude of these uncertainties depends on the event sample used in a given physics analysis, but typically amounts to 0.5-3 %.

  16. A program for confidence interval calculations for a Poisson process with background including systematic uncertainties: POLE 1.0

    NASA Astrophysics Data System (ADS)

    Conrad, Jan

    2004-04-01

    A Fortran 77 routine has been developed to calculate confidence intervals with and without systematic uncertainties using a frequentist confidence interval construction with a Bayesian treatment of the systematic uncertainties. The routine can account for systematic uncertainties in the background prediction and signal/background efficiencies. The uncertainties may be separately parametrized by a Gauss, log-normal or flat probability density function (PDF), though since a Monte Carlo approach is chosen to perform the necessary integrals a generalization to other parameterizations is particularly simple. Full correlation between signal and background efficiency is optional. The ordering schemes for frequentist construction currently supported are the likelihood ratio ordering (also known as Feldman-Cousins) and Neyman ordering. Optionally, both schemes can be used with conditioning, meaning the probability density function is conditioned on the fact that the actual outcome of the background process can not have been larger than the number of observed events. Program summary: Title of program: POLE version 1.0; Catalogue identifier: ADTA; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTA; Program available from: CPC Program Library, Queen's University of Belfast, N. Ireland; Licensing provisions: None; Computer for which the program is designed: DELL PC 1 GB 2.0 GHz Pentium IV; Operating system under which the program has been tested: RH Linux 7.2 Kernel 2.4.7-10; Programming language used: Fortran 77; Memory required to execute with typical data: ~1.6 Mbytes; No. of bytes in distributed program, including test data, etc.: 373745; No. of lines in distributed program, including test data, etc.: 2700; Distribution format: tar gzip file; Keywords: Confidence interval calculation, Systematic uncertainties; Nature of the physical problem: The problem is to calculate a frequentist confidence interval on the parameter of a Poisson process with known background in presence of
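
    The Monte Carlo treatment of nuisance parameters can be sketched in a few lines: smear the background prediction with its PDF, average the Poisson probability, and scan the signal strength. The sketch below computes a simple 90% CL upper limit with a Gaussian-smeared background; it illustrates the smearing idea only, not POLE's Feldman-Cousins ordering or conditioning options:

    ```python
    import numpy as np
    from scipy.stats import poisson

    def upper_limit(n_obs, b_mean, b_sigma, cl=0.90, n_mc=20000):
        """Smallest signal s excluded at confidence level `cl`, for n_obs
        observed events and a Gaussian background b_mean +/- b_sigma."""
        rng = np.random.default_rng(0)
        b = np.clip(rng.normal(b_mean, b_sigma, n_mc), 0.0, None)

        def p_le_nobs(s):
            # P(n <= n_obs | s + b), averaged over the background PDF.
            return poisson.cdf(n_obs, s + b).mean()

        # The limit satisfies P(n <= n_obs | s_up + b) = 1 - cl; bisect for it.
        lo, hi = 0.0, 100.0
        while hi - lo > 1e-3:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if p_le_nobs(mid) > 1.0 - cl else (lo, mid)
        return 0.5 * (lo + hi)

    print(upper_limit(n_obs=3, b_mean=1.0, b_sigma=0.5))  # roughly 5.7
    ```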

  17. Sensory uncertainty leads to systematic misperception of the direction of motion in depth.

    PubMed

    Fulvio, Jacqueline M; Rosen, Monica L; Rokers, Bas

    2015-07-01

    Although we have made major advances in understanding motion perception based on the processing of lateral (2D) motion signals on computer displays, the majority of motion in the real (3D) world occurs outside of the plane of fixation, and motion directly toward or away from observers has particular behavioral relevance. Previous work has reported a systematic lateral bias in the perception of 3D motion, such that an object on a collision course with an observer's head is frequently judged to miss it, with obvious negative consequences. To better understand this bias, we systematically investigated the accuracy of 3D motion perception while manipulating sensory noise by varying the contrast of a moving target and its position in depth relative to fixation. Inconsistent with previous work, we found little bias under low sensory noise conditions. With increased sensory noise, however, we revealed a novel perceptual phenomenon: observers demonstrated a surprising tendency to confuse the direction of motion-in-depth, such that approaching objects were reported to be receding and vice versa. Subsequent analysis revealed that the lateral and motion-in-depth components of observers' reports are similarly affected, but that the effects on the motion-in-depth component (i.e., the motion-in-depth confusions) are much more apparent than those on the lateral component. In addition to revealing this novel visual phenomenon, these results shed new light on errors that can occur in motion perception and provide a basis for continued development of motion perception models. Finally, our findings suggest methods to evaluate the effectiveness of 3D visualization environments, such as 3D movies and virtual reality devices.

  18. Systematic uncertainties in RF-based measurement of superconducting cavity quality factors

    NASA Astrophysics Data System (ADS)

    Holzbauer, J. P.; Pischalnikov, Yu.; Sergatskov, D. A.; Schappert, W.; Smith, S.

    2016-09-01

    Q0 determinations based on RF power measurements are subject to at least three potentially large systematic effects that have not been previously appreciated. Instrumental factors that can systematically bias RF based measurements of Q0 are quantified and steps that can be taken to improve the determination of Q0 are discussed.
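
    For context on why power readings matter: in the standard RF technique the loaded quality factor comes from the stored-energy decay time, and the coupling factor from forward/reflected power ratios, so a systematic error in a single power reading propagates directly into Q0. A hedged sketch of that propagation using the textbook relations for an overcoupled cavity (illustrative numbers, not the paper's data):

    ```python
    import math

    def q0_from_rf(f0_hz, tau_s, p_fwd_w, p_ref_w):
        """Q0 from the stored-energy decay time and CW forward/reflected
        power, assuming a single, overcoupled input coupler:
        QL = omega0 * tau,  beta = (1 + r) / (1 - r) with r = sqrt(Pr/Pf),
        and Q0 = (1 + beta) * QL."""
        q_loaded = 2.0 * math.pi * f0_hz * tau_s
        r = math.sqrt(p_ref_w / p_fwd_w)
        beta = (1.0 + r) / (1.0 - r)
        return (1.0 + beta) * q_loaded

    nominal = q0_from_rf(1.3e9, tau_s=0.1, p_fwd_w=100.0, p_ref_w=25.0)
    biased = q0_from_rf(1.3e9, tau_s=0.1, p_fwd_w=100.0, p_ref_w=27.5)
    print(f"Q0 = {nominal:.2e}; with a +10% reflected-power error: {biased:.2e}")
    ```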

  19. Systematic uncertainties in RF-based measurement of superconducting cavity quality factors

    DOE PAGES

    Holzbauer, J. P.; Pischalnikov, Yu.; Sergatskov, D. A.; Schappert, W.; Smith, S.

    2016-05-10

    Q0 determinations based on RF power measurements are subject to at least three potentially large systematic effects that have not been previously appreciated. Here, instrumental factors that can systematically bias RF based measurements of Q0 are quantified and steps that can be taken to improve the determination of Q0 are discussed.

  20. DO WE REALLY KNOW THE DUST? SYSTEMATICS AND UNCERTAINTIES OF THE MID-INFRARED SPECTRAL ANALYSIS METHODS

    SciTech Connect

    Juhasz, A.; Henning, Th.; Bouwman, J.; Dullemond, C. P.; Pascucci, I.; Apai, D.

    2009-04-20

    The spectral region around 10 μm, showing prominent emission bands from various dust species is commonly used for the evaluation of the chemical composition of protoplanetary dust. Different methods of analysis have been proposed for this purpose, but so far, no comparative test has been performed to test the validity of their assumptions. In this paper, we evaluate how good the various methods are in deriving the chemical composition of dust grains from infrared spectroscopy. Synthetic spectra of disk models with different geometries and central sources were calculated, using a two-dimensional radiative transfer code. These spectra were then fitted in a blind test by four spectral decomposition methods. We studied the effect of disk structure (flared versus flat), inclination angle, size of an inner disk hole, and stellar luminosity on the fitted chemical composition. Our results show that the dust parameters obtained by all methods deviate systematically from the input data of the synthetic spectra. The dust composition fitted by the new two-layer temperature distribution method, described in this paper, differs the least from the input dust composition and the results show the weakest systematic effects. The reason for the deviations of the results given by the previously used methods lies in their simplifying assumptions. Due to the radial extent of the 10 μm emitting region there is dust at different temperatures contributing to the flux in the silicate feature. Therefore, the assumption of a single averaged grain temperature can be a strong limitation of the previously used methods. The continuum below the feature can consist of multiple components (e.g., star, inner rim, and disk midplane), which cannot simply be described by a Planck function at a single temperature. In addition, the optically thin emission of 'featureless' grains (e.g., carbon in the considered wavelength range) produces a degeneracy in the models with the optically thick emission of

  1. Trapped ion 88Sr+ optical clock systematic uncertainties - AC Stark shift determination

    NASA Astrophysics Data System (ADS)

    Barwood, GP; Huang, G.; King, SA; Klein, HA; Gill, P.

    2016-06-01

    A recent comparison between two trapped-ion 88Sr+ optical clocks at the UK National Physical Laboratory demonstrated agreement to 4 parts in 10¹⁷. One of the uncertainty contributions to the optical clock absolute frequency arises from the blackbody radiation shift, which in turn depends on uncertainty in the knowledge of the differential polarisability between the two clock states. Whilst a recent NRC measurement has determined the DC differential polarisability to high accuracy, there has been no experimental verification to date of the dynamic correction to the DC Stark shift. We report a measurement of the scalar AC Stark shift at 1064 nm, with measurements planned at other wavelengths. Our preliminary result using a fibre laser at 1064 nm agrees with calculated values to within ∼3%.

  2. A new approach to systematic uncertainties and self-consistency in helium abundance determinations

    SciTech Connect

    Aver, Erik; Olive, Keith A.; Skillman, Evan D. E-mail: olive@umn.edu

    2010-05-01

    Tests of big bang nucleosynthesis and early universe cosmology require precision measurements for helium abundance determinations. However, efforts to determine the primordial helium abundance via observations of metal poor H II regions have been limited by significant uncertainties (compared with the value inferred from BBN theory using the CMB determined value of the baryon density). This work builds upon previous work by providing an updated and extended program in evaluating these uncertainties. Procedural consistency is achieved by integrating the hydrogen based reddening correction with the helium based abundance calculation, i.e., all physical parameters are solved for simultaneously. We include new atomic data for helium recombination and collisional emission based upon recent work by Porter et al., and wavelength dependent corrections to underlying absorption are investigated. The set of physical parameters has been expanded here to include the effects of neutral hydrogen collisional emission. It is noted that Hγ and Hδ allow better isolation of the collisional effects from the reddening. Because of a degeneracy between the solutions for density and temperature, the precision of the helium abundance determinations is limited. Also, at lower temperatures (T ≲ 13,000 K) the neutral hydrogen fraction is poorly constrained, resulting in a larger uncertainty in the helium abundances. Thus, the derived errors on the helium abundances for individual objects are larger than those typical of previous studies. Seven previously analyzed, "high quality" H II region spectra are used for a primordial helium abundance determination. The updated emissivities and neutral hydrogen correction generally raise the abundance. From a regression to zero metallicity, we find Y_p = 0.2561 ± 0.0108, in broad agreement with the WMAP result. Alternatively, a simple average of the data yields Y_p = 0.2566 ± 0.0028. Tests with synthetic data show a potential for distinct
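
    The last step quoted above, "a regression to zero metallicity", is a weighted linear extrapolation: fit Y against a metallicity tracer such as O/H and read off the intercept as Y_p. A minimal weighted-least-squares sketch with invented data points (the real analysis propagates the full per-object error budget):

    ```python
    import numpy as np

    # Invented (O/H, Y, sigma_Y) values, for illustration only.
    oh = np.array([4.0e-5, 7.0e-5, 9.0e-5, 1.2e-4, 1.5e-4])
    y = np.array([0.2581, 0.2602, 0.2590, 0.2628, 0.2635])
    sig = np.array([0.0030, 0.0042, 0.0035, 0.0050, 0.0038])

    # Weighted least squares for Y = Y_p + slope * (O/H).
    w = 1.0 / sig**2
    A = np.vstack([np.ones_like(oh), oh]).T
    cov = np.linalg.inv(A.T @ (w[:, None] * A))   # parameter covariance
    y_p, slope = cov @ A.T @ (w * y)
    print(f"Y_p = {y_p:.4f} +/- {np.sqrt(cov[0, 0]):.4f}")
    ```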

  3. Clinical uncertainties, health service challenges, and ethical complexities of HIV "test-and-treat": a systematic review.

    PubMed

    Kulkarni, Sonali P; Shah, Kavita R; Sarma, Karthik V; Mahajan, Anish P

    2013-06-01

    Despite the HIV "test-and-treat" strategy's promise, questions about its clinical rationale, operational feasibility, and ethical appropriateness have led to vigorous debate in the global HIV community. We performed a systematic review of the literature published between January 2009 and May 2012 using PubMed, SCOPUS, Global Health, Web of Science, BIOSIS, Cochrane CENTRAL, EBSCO Africa-Wide Information, and EBSCO CINAHL Plus databases to summarize clinical uncertainties, health service challenges, and ethical complexities that may affect the test-and-treat strategy's success. A thoughtful approach to research and implementation to address clinical and health service questions and meaningful community engagement regarding ethical complexities may bring us closer to safe, feasible, and effective test-and-treat implementation.

  4. M Dwarf Metallicities and Giant Planet Occurrence: Ironing Out Uncertainties and Systematics

    NASA Astrophysics Data System (ADS)

    Gaidos, Eric; Mann, Andrew W.

    2014-08-01

    Comparisons between the planet populations around solar-type stars and those orbiting M dwarfs shed light on the possible dependence of planet formation and evolution on stellar mass. However, such analyses must control for other factors, i.e., metallicity, a stellar parameter that strongly influences the occurrence of gas giant planets. We obtained infrared spectra of 121 M dwarf stars monitored by the California Planet Search and determined metallicities with an accuracy of 0.08 dex. The mean and standard deviation of the sample are -0.05 and 0.20 dex, respectively. We parameterized the metallicity dependence of the occurrence of giant planets on orbits with a period less than two years around solar-type stars and applied this to our M dwarf sample to estimate the expected number of giant planets. The number of detected planets (3) is lower than the predicted number (6.4), but the difference is not very significant (12% probability of finding as many or fewer planets). The three M dwarf planet hosts are not especially metal rich and the most likely value of the power-law index relating planet occurrence to metallicity is 1.06 dex per dex for M dwarfs compared to 1.80 for solar-type stars; this difference, however, is comparable to uncertainties. Giant planet occurrence around both types of stars allows, but does not necessarily require, a mass dependence of ~1 dex per dex. The actual planet-mass-metallicity relation may be complex, and elucidating it will require larger surveys like those to be conducted by ground-based infrared spectrographs and the Gaia space astrometry mission.
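
    The "12% probability of finding as many or fewer planets" is a simple Poisson statement that can be checked directly: with an expectation of 6.4 detections, the chance of seeing 3 or fewer is about 0.12.

    ```python
    from scipy.stats import poisson

    expected = 6.4   # predicted number of giant planets (from the abstract)
    detected = 3     # planets actually found

    p = poisson.cdf(detected, expected)
    print(f"P(N <= {detected} | mu = {expected}) = {p:.3f}")  # ~0.119
    ```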

  5. M dwarf metallicities and giant planet occurrence: Ironing out uncertainties and systematics

    SciTech Connect

    Gaidos, Eric; Mann, Andrew W.

    2014-08-10

    Comparisons between the planet populations around solar-type stars and those orbiting M dwarfs shed light on the possible dependence of planet formation and evolution on stellar mass. However, such analyses must control for other factors, i.e., metallicity, a stellar parameter that strongly influences the occurrence of gas giant planets. We obtained infrared spectra of 121 M dwarf stars monitored by the California Planet Search and determined metallicities with an accuracy of 0.08 dex. The mean and standard deviation of the sample are –0.05 and 0.20 dex, respectively. We parameterized the metallicity dependence of the occurrence of giant planets on orbits with a period less than two years around solar-type stars and applied this to our M dwarf sample to estimate the expected number of giant planets. The number of detected planets (3) is lower than the predicted number (6.4), but the difference is not very significant (12% probability of finding as many or fewer planets). The three M dwarf planet hosts are not especially metal rich and the most likely value of the power-law index relating planet occurrence to metallicity is 1.06 dex per dex for M dwarfs compared to 1.80 for solar-type stars; this difference, however, is comparable to uncertainties. Giant planet occurrence around both types of stars allows, but does not necessarily require, a mass dependence of ∼1 dex per dex. The actual planet-mass-metallicity relation may be complex, and elucidating it will require larger surveys like those to be conducted by ground-based infrared spectrographs and the Gaia space astrometry mission.

  6. Probiotics as Additives on Therapy in Allergic Airway Diseases: A Systematic Review of Benefits and Risks

    PubMed Central

    Das, Rashmi Ranjan; Naik, Sushree Samiksha; Singh, Meenu

    2013-01-01

    Background. We conducted a systematic review to find out the role of probiotics in the treatment of allergic airway diseases. Methods. A comprehensive search of the major electronic databases was performed up to March 2013. Trials comparing the effect of probiotics versus placebo were included. A predefined set of outcome measures was assessed. Continuous data were expressed as standardized mean differences with 95% CI. Dichotomous data were expressed as odds ratios with 95% CI. A P value < 0.05 was considered significant. Results. A total of 12 studies were included. Probiotic intake was associated with a significantly improved quality of life score in patients with allergic rhinitis (SMD −1.9 (95% CI −3.62, −0.19); P = 0.03), though there was a high degree of heterogeneity. No improvement in quality of life score was noted in asthmatics. Probiotic intake also improved the following parameters: longer time free from episodes of asthma and rhinitis and a decrease in the number of episodes of rhinitis per year. Adverse events were not significant. Conclusion. As the current evidence was generated from few trials with a high degree of heterogeneity, routine use of probiotics as an additive on therapy in subjects with allergic airway diseases cannot be recommended. PMID:23956972

  7. Measurement uncertainty.

    PubMed

    Bartley, David; Lidén, Göran

    2008-08-01

    The reporting of measurement uncertainty has recently undergone a major harmonization whereby characteristics of a measurement method obtained during establishment and application are combined componentwise. For example, the sometimes-pesky systematic error is included. A bias component of uncertainty can often be easily established as the uncertainty in the bias. However, beyond simply arriving at a value for uncertainty, meaning can sometimes be given to this uncertainty in terms of prediction confidence in uncertainty-based intervals covering what is to be measured. To this end, a link between the concepts of accuracy and uncertainty is established through a simple yet accurate approximation to a random variable known as the non-central Student's t-distribution. "Without a measureless and perpetual uncertainty, the drama of human life would be destroyed" (Winston Churchill).
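
    As a rough illustration of the componentwise combination and the non-central-t idea mentioned above, the sketch below combines an invented bias component with a precision component in quadrature and derives a coverage factor from SciPy's non-central Student's t. The noncentrality choice is an assumption for illustration, not the paper's approximation.

    ```python
    import numpy as np
    from scipy import stats

    # Invented components: bias treated as an uncertainty, combined with the
    # random (precision) component in quadrature.
    u_bias, u_prec = 0.03, 0.05
    u_combined = np.hypot(u_bias, u_prec)

    # Coverage factor from the non-central Student's t; the noncentrality
    # parameter here is an illustrative choice reflecting the relative bias.
    df = 9
    delta = (u_bias / u_prec) * np.sqrt(df + 1)
    k = stats.nct.ppf(0.975, df, delta)
    print(f"u_c = {u_combined:.3f}, expanded interval = +/- {k * u_combined:.3f}")
    ```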

  8. Colour-colour diagrams and extragalactic globular cluster ages. Systematic uncertainties using the (V - K) - (V - I) diagram

    NASA Astrophysics Data System (ADS)

    Salaris, M.; Cassisi, S.

    2007-01-01

    Context: Age and metallicity estimates for extragalactic globular clusters, from integrated colour-colour diagrams, are examined. Aims: We investigate biases in cluster ages and [Fe/H] estimated from the (V-K)-(V-I) diagram, arising from inconsistent Horizontal Branch morphology, metal mixture, and treatment of core convection between observed clusters and the theoretical colour grid employed for age and metallicity determinations. We also study the role played by statistical fluctuations of the observed colours, caused by the low total mass of typical globulars. Methods: Synthetic samples of globular cluster systems are created by means of Monte Carlo techniques. Each sample accounts for a different possible source of bias, among the ones addressed in this investigation. Cumulative age and [Fe/H] distributions are then retrieved by comparison with a reference theoretical colour-colour grid, and analyzed. Results: Horizontal Branch morphology is potentially the largest source of uncertainty. A single-age system harbouring a large fraction of clusters with an HB morphology systematically bluer than the one accounted for in the theoretical colour grid can simulate a bimodal population with an age difference as large as ~8 Gyr. When only the redder clusters are considered, this uncertainty is almost negligible, unless there is an extreme mass loss along the Red Giant Branch phase. The metal mixture affects mainly the redder clusters; the effect of colour fluctuations becomes negligible for the redder clusters, or when the integrated MV is brighter than ~-8.5 mag. The treatment of core convection is relevant for ages below ~4 Gyr. The retrieved cumulative [Fe/H] distributions are overall only mildly affected. Colour fluctuations and convective core extension have the largest effect. When 1σ photometric errors reach 0.10 mag, all biases found in our analysis are erased, and bimodal age populations with age differences of up to ~8 Gyr go undetected. The use of both (U

  9. Using Principal Component and Tidal Analysis as a Quality Metric for Detecting Systematic Heading Uncertainty in Long-Term Acoustic Doppler Current Profiler Data

    NASA Astrophysics Data System (ADS)

    Morley, M. G.; Mihaly, S. F.; Dewey, R. K.; Jeffries, M. A.

    2015-12-01

    Ocean Networks Canada (ONC) operates the NEPTUNE and VENUS cabled ocean observatories to collect data on physical, chemical, biological, and geological ocean conditions over multi-year time periods. Researchers can download real-time and historical data from a large variety of instruments to study complex earth and ocean processes from their home laboratories. Ensuring that users receive the most accurate data is a high priority at ONC, requiring quality assurance and quality control (QAQC) procedures to be developed for all data types. While some data types have relatively straightforward QAQC tests, such as scalar data range limits based on expected observed values or the measurement limits of the instrument, for other data types the QAQC tests are more comprehensive. Long time series of ocean currents from Acoustic Doppler Current Profilers (ADCPs), stitched together from multiple deployments over many years, are one such data type where systematic data biases are more difficult to identify and correct. Data specialists at ONC are working to quantify systematic compass heading uncertainty in long-term ADCP records at each of the major study sites using the internal compass, remotely operated vehicle bearings, and analytical tools such as principal component analysis (PCA) to estimate the optimal instrument alignments. In addition to using PCA, some work has been done to estimate the main components of the current at each site using tidal harmonic analysis. This paper describes the key challenges and presents the preliminary PCA and tidal analysis approaches used by ONC to improve long-term observatory current measurements.
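
    A minimal sketch of the PCA step described above, run on synthetic current data: the leading eigenvector of the (u, v) covariance matrix gives the principal flow axis, and a drift of that axis between deployments would flag a heading offset. All numbers are invented.

    ```python
    import numpy as np

    # Synthetic east/north velocities with a known principal direction.
    rng = np.random.default_rng(1)
    theta = np.deg2rad(35.0)                 # "true" principal axis
    along = rng.normal(0, 0.30, 5000)        # strong along-axis flow (m/s)
    cross = rng.normal(0, 0.05, 5000)        # weak cross-axis flow (m/s)
    u = along * np.cos(theta) - cross * np.sin(theta)
    v = along * np.sin(theta) + cross * np.cos(theta)

    # Principal axis = eigenvector of the covariance with the largest eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(np.cov(np.vstack([u, v])))
    axis = eigvecs[:, np.argmax(eigvals)]
    if axis[0] < 0:                          # fix the sign ambiguity
        axis = -axis
    print(f"estimated principal axis: {np.rad2deg(np.arctan2(axis[1], axis[0])):.1f} deg")
    ```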

  10. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  11. Meta-code for systematic analysis of chemical addition (SACHA): application to fluorination of C70 and carbon nanostructure growth.

    PubMed

    Ewels, Christopher P; Van Lier, Gregory; Geerlings, Paul; Charlier, Jean-Christophe

    2007-01-01

    We present a new computer program able to systematically study chemical addition to and growth or evolution of carbon nanostructures. SACHA is a meta-code able to exploit a wide variety of pre-existing molecular structure codes, automating the otherwise onerous task of constructing, running, and analyzing the large number of input files that are required when exploring structural isomers and addition paths. By way of examples we consider fluorination of the fullerene cage C70 and carbon nanostructure growth through C2 addition. We discuss the possibilities for extension of this technique to rapidly and efficiently explore structural energy landscapes and application to other areas of chemical and materials research.

  12. Cross calibration of the JLab, Hall C, Compton and Møller polarimeters and a study of systematic uncertainties of the Compton electron detector

    NASA Astrophysics Data System (ADS)

    Narayan, Amrendra

    2014-03-01

    A Compton polarimeter was commissioned at Jefferson Lab, Hall C, for continuous non-invasive measurement of the electron beam polarization. It uses ~1.5 kW of green light for polarized electron-photon (e-γ) scattering. The polarimeter has several planes of diamond micro-strip detectors to detect the Compton-scattered electrons and a PbWO4 crystal for detecting back-scattered photons. It was successfully used to measure the electron beam polarization along with periodic polarization measurements by the standard Møller polarimeter. The diamond micro-strip electron detector provided a standalone measurement of the beam polarization with <1% statistical uncertainty per hour, for a 1.16 GeV, 180 μA electron beam. The systematic uncertainties are projected to be better than 1%. We will discuss the various contributions to the systematic uncertainties for the electron detector. We also collected data at low current for a Møller-Compton cross-calibration. The preliminary results from the analysis of these data will be presented. This work was supported by DOE grant number DE-FG02-07ER41528 for "Precision Measurements at Medium Energy."

  13. Systematic Uncertainties in the Spectroscopic Measurements of Neutron-star Masses and Radii from Thermonuclear X-Ray Bursts. III. Absolute Flux Calibration

    NASA Astrophysics Data System (ADS)

    Güver, Tolga; Özel, Feryal; Marshall, Herman; Psaltis, Dimitrios; Guainazzi, Matteo; Díaz-Trigo, Maria

    2016-09-01

    Many techniques for measuring neutron star radii rely on absolute flux measurements in the X-rays. As a result, one of the fundamental uncertainties in these spectroscopic measurements arises from the absolute flux calibrations of the detectors being used. Using the stable X-ray burster GS 1826-238 and its simultaneous observations by Chandra HETG/ACIS-S and RXTE/PCA, as well as by XMM-Newton EPIC-pn and RXTE/PCA, we quantify the degree of uncertainty in the flux calibration by assessing the differences between the measured fluxes during bursts. We find that the RXTE/PCA and the Chandra gratings measurements agree with each other within their formal uncertainties, increasing our confidence in these flux measurements. In contrast, XMM-Newton EPIC-pn measures 14.0 ± 0.3% less flux than the RXTE/PCA. This is consistent with the previously reported discrepancy of the EPIC-pn flux measurements compared with the EPIC MOS1, MOS2, and ACIS-S detectors. We also show that any intrinsic time-dependent systematic uncertainty that may exist in the calibration of the satellites has already been implicitly taken into account in the neutron star radius measurements.

  14. A systematic study of well-known electrolyte additives in LiCoO2/graphite pouch cells

    NASA Astrophysics Data System (ADS)

    Wang, David Yaohui; Sinha, N. N.; Petibon, R.; Burns, J. C.; Dahn, J. R.

    2014-04-01

    The effectiveness of well-known electrolyte additives, singly or in combination, in LiCoO2/graphite pouch cells has been systematically investigated and compared using the ultra high precision charger (UHPC) at Dalhousie University and electrochemical impedance spectroscopy (EIS). UHPC studies are believed to identify the best electrolyte additives, singly or in combination, within a short time period (several weeks). Three parameters were measured for LiCoO2/graphite pouch cells with different electrolyte additives, singly or in combination: (1) the coulombic efficiency (CE); (2) the charge endpoint capacity slippage (slippage); and (3) the charge transfer resistance (Rct). The results for over 55 additive sets are compared. The experimental results suggest that a combination of electrolyte additives can be more effective than a single electrolyte additive. However, of all the additive sets tested, simply using 2 wt.% vinylene carbonate yielded cells very competitive in CE, slippage and Rct. It is hoped that this comprehensive report can be used as a guide and reference for the study of other electrolyte additives singly or in combination.
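
    For readers unfamiliar with the first two UHPC metrics, here is a hypothetical sketch of how CE and charge endpoint capacity slippage fall out of per-cycle capacities; the capacity values are invented and the EIS-derived Rct is omitted.

    ```python
    import numpy as np

    # Invented per-cycle capacities (Ah) for a single pouch cell.
    charge_cap = np.array([1.0000, 1.0012, 1.0023, 1.0033])
    discharge_cap = np.array([0.9980, 0.9993, 1.0005, 1.0016])

    # 1) Coulombic efficiency: discharge capacity over the charge capacity
    #    of the same cycle (parasitic reactions push CE below 1).
    ce = discharge_cap / charge_cap
    # 2) Charge endpoint capacity slippage: cycle-to-cycle growth of the
    #    charge endpoint, approximated here by the charge-capacity increase.
    slippage = np.diff(charge_cap)

    print("CE per cycle:", np.round(ce, 5))
    print("slippage per cycle (Ah):", np.round(slippage, 5))
    ```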

  15. Uncertainties of modeling gross primary productivity over Europe: A systematic study on the effects of using different drivers and terrestrial biosphere models

    NASA Astrophysics Data System (ADS)

    Jung, Martin; Vetter, Mona; Herold, Martin; Churkina, Galina; Reichstein, Markus; Zaehle, Soenke; Ciais, Philippe; Viovy, Nicolas; Bondeau, Alberte; Chen, Youmin; Trusilova, Kristina; Feser, Frauke; Heimann, Martin

    2007-12-01

    Continental- to global-scale modeling of the carbon cycle using process-based models is subject to large uncertainties. These uncertainties originate from the model structure and from uncertainty in the model forcing fields; however, little is known about their relative importance. A thorough understanding and quantification of uncertainties is necessary to correctly interpret carbon cycle simulations and guide further model developments. This study elucidates the effects of different state-of-the-art land cover and meteorological data set options and biosphere models on simulations of gross primary productivity (GPP) over Europe. The analysis is based on (1) three different process-oriented terrestrial biosphere models (Biome-BGC, LPJ, and Orchidee) driven with the same input data and one model (Biome-BGC) driven with (2) two different meteorological data sets (ECMWF and REMO), (3) three different land cover data sets (GLC2000, MODIS, and SYNMAP), and (4) three different spatial resolutions of the land cover (0.25° fractional, 0.25° dominant, and 0.5° dominant). We systematically investigate effects on the magnitude, spatial pattern, and interannual variation of GPP. While changing the land cover map or the spatial resolution has only little effect on the model outcomes, changing the meteorological drivers and, especially, the biosphere model itself results in substantial differences. Uncertainties in the meteorological forcings affect interannual variations of simulated GPP particularly strongly. By decomposing modeled GPP into its biophysical and ecophysiological components (absorbed photosynthetically active radiation (APAR) and radiation use efficiency (RUE), respectively) we show that differences in interannual GPP variations among models result primarily from differences in simulating RUE. Major discrepancies appear to be related to the feedback through the carbon-nitrogen interactions in one model (Biome-BGC) and water stress effects, besides the modeling of croplands. We suggest
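
    A toy sketch of the decomposition used above, GPP = APAR × RUE: recombining one varying component with the other held at its mean attributes interannual variance to APAR versus RUE. The numbers are synthetic, not model output.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    years = 10
    apar = rng.normal(1000, 40, years)   # absorbed PAR (arbitrary units)
    rue = rng.normal(1.5, 0.15, years)   # radiation use efficiency

    gpp = apar * rue
    gpp_apar = apar * rue.mean()         # only APAR varies
    gpp_rue = apar.mean() * rue          # only RUE varies

    for name, series in [("full", gpp), ("APAR-driven", gpp_apar), ("RUE-driven", gpp_rue)]:
        print(f"{name:12s} interannual std: {series.std():.1f}")
    ```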

  16. Systematics of the family Plectopylidae in Vietnam with additional information on Chinese taxa (Gastropoda, Pulmonata, Stylommatophora)

    PubMed Central

    Páll-Gergely, Barna; Hunyadi, András; Ablett, Jonathan; Lương, Hào Văn; Naggs, Fred; Asami, Takahiro

    2015-01-01

    Vietnamese species from the family Plectopylidae are revised based on the type specimens of all known taxa, more than 600 historical non-type museum lots, and almost 200 newly-collected samples. Altogether more than 7000 specimens were investigated. The revision has revealed that the species diversity of the Vietnamese Plectopylidae was previously overestimated. Overall, thirteen species names (anterides Gude, 1909, bavayi Gude, 1901, congesta Gude, 1898, fallax Gude, 1909, gouldingi Gude, 1909, hirsuta Möllendorff, 1901, jovia Mabille, 1887, moellendorffi Gude, 1901, persimilis Gude, 1901, pilsbryana Gude, 1901, soror Gude, 1908, tenuis Gude, 1901, verecunda Gude, 1909) were synonymised with other species. In addition to these, Gudeodiscus hemmeni sp. n. and Gudeodiscus messageri raheemi ssp. n. are described from north-western Vietnam. Sixteen species and two subspecies are recognized from Vietnam. The reproductive anatomy of eight taxa is described. Based on anatomical information, Halongella gen. n. is erected to include Plectopylis schlumbergeri and Plectopylis fruhstorferi. Additionally, the genus Gudeodiscus is subdivided into two subgenera (Gudeodiscus and Veludiscus subgen. n.) on the basis of the morphology of the reproductive anatomy and the radula. The Chinese Gudeodiscus phlyarius werneri Páll-Gergely, 2013 is moved to synonymy of Gudeodiscus phlyarius. A spermatophore was found in the organ situated next to the gametolytic sac in one specimen. This suggests that this organ in the Plectopylidae is a diverticulum. Statistically significant evidence is presented for the presence of calcareous hook-like granules inside the penis being associated with the absence of embryos in the uterus in four genera. This suggests that these probably play a role in mating periods before disappearing when embryos develop. Sicradiscus mansuyi is reported from China for the first time. PMID:25632253

  17. SU-E-CAMPUS-J-05: Quantitative Investigation of Random and Systematic Uncertainties From Hardware and Software Components in the Frameless 6D-BrainLAB ExacTrac System

    SciTech Connect

    Keeling, V; Jin, H; Hossain, S; Ahmad, S; Ali, I

    2014-06-15

    Purpose: To evaluate setup accuracy and quantify individual systematic and random errors for the various hardware and software components of the frameless 6D-BrainLAB ExacTrac system. Methods: 35 patients with cranial lesions, some with multiple isocenters (50 total lesions treated in 1, 3, or 5 fractions), were investigated. All patients were simulated with a rigid head-and-neck mask and the BrainLAB localizer. CT images were transferred to the IPLAN treatment planning system, where optimized plans were generated using a stereotactic reference frame based on the localizer. The patients were initially set up with the infrared (IR) positioning ExacTrac system. Stereoscopic X-ray images (XC: X-ray Correction) were registered to their corresponding digitally-reconstructed radiographs, based on bony anatomy matching, to calculate 6D translational and rotational (Lateral, Longitudinal, Vertical, Pitch, Roll, Yaw) shifts. XC combines systematic errors of the mask, localizer, image registration, frame, and IR. If shifts were below tolerance (0.7 mm translational and 1 degree rotational), treatment was initiated; otherwise corrections were applied and additional X-rays were acquired to verify patient position (XV: X-ray Verification). Statistical analysis was used to extract systematic and random errors of the different components of the 6D ExacTrac system and evaluate the cumulative setup accuracy. Results: Mask systematic errors (translational; rotational) were the largest and varied from one patient to another in the range (−15 to 4 mm; −2.5 to 2.5 degrees), obtained from the mean of XC for each patient. Setup uncertainty in IR positioning (0.97, 2.47, 1.62 mm; 0.65, 0.84, 0.96 degrees) was extracted from the standard deviation of XC. Combined systematic errors of the frame and localizer (0.32, −0.42, −1.21 mm; −0.27, 0.34, 0.26 degrees) were extracted from the mean of means of the XC distributions. Final patient setup uncertainty was obtained from the standard deviations of XV (0.57, 0.77, 0.67 mm, 0
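
    The statistical extraction described above reduces to simple statistics over the per-patient XC shift distributions; the sketch below illustrates this on synthetic shifts for a single translation axis.

    ```python
    import numpy as np

    # Synthetic per-fraction shifts: a per-patient systematic offset plus
    # fraction-to-fraction random error (both in mm, invented values).
    rng = np.random.default_rng(3)
    n_patients, n_fractions = 35, 5
    offsets = rng.normal(0, 1.5, n_patients)
    shifts = offsets[:, None] + rng.normal(0, 1.0, (n_patients, n_fractions))

    per_patient_mean = shifts.mean(axis=1)        # estimates each systematic error
    sigma_systematic = per_patient_mean.std()     # spread of per-patient means
    sigma_random = shifts.std(axis=1).mean()      # average within-patient spread
    print(f"systematic SD = {sigma_systematic:.2f} mm, random SD = {sigma_random:.2f} mm")
    ```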

  18. Measurement Uncertainty

    NASA Astrophysics Data System (ADS)

    Koch, Michael

    Measurement uncertainty is one of the key issues in quality assurance. It became increasingly important for analytical chemistry laboratories with accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for deciding whether a measurement result is fit for purpose. It also helps in deciding whether a specification limit is exceeded or not. Estimation of measurement uncertainty is often not trivial. Several strategies have been developed for this purpose and are described briefly in this chapter. In addition, the different possibilities for taking uncertainty into account in compliance assessment are explained.

  19. Impacts of nitrogen addition on plant biodiversity in mountain grasslands depend on dose, application duration and climate: a systematic review.

    PubMed

    Humbert, Jean-Yves; Dwyer, John M; Andrey, Aline; Arlettaz, Raphaël

    2016-01-01

    Although the influence of nitrogen (N) addition on grassland plant communities has been widely studied, it is still unclear whether observed patterns and underlying mechanisms are constant across biomes. In this systematic review, we use meta-analysis and metaregression to investigate the influence of N addition (here referring mostly to fertilization) upon the biodiversity of temperate mountain grasslands (including montane, subalpine and alpine zones). Forty-two studies met our inclusion criteria, resulting in 134 measures of effect size. The main general responses of mountain grasslands to N addition were increases in phytomass and reductions in plant species richness, as observed in lowland grasslands. More specifically, the analysis reveals that negative effects on species richness were exacerbated by the dose (kg N ha−1 year−1) and duration (years) of N application in an additive manner. Thus, sustained application of low to moderate levels of N over time had effects similar to short-term application of high N doses. The climatic context also played an important role: the overall effects of N addition on plant species richness and diversity (Shannon index) were less pronounced in mountain grasslands experiencing cool rather than warm summers. Furthermore, the relative negative effect of N addition on species richness was more pronounced in managed communities and was strongly negatively related to N-induced increases in phytomass, that is, the greater the phytomass response to N addition, the greater the decline in richness. Altogether, this review not only establishes that plant biodiversity of mountain grasslands is negatively affected by N addition, but also demonstrates that several local management and abiotic factors interact with N addition to drive plant community changes. This synthesis yields essential information for a more sustainable management of mountain grasslands, emphasizing the importance of preserving and restoring grasslands with both low

  1. A 20 million year record of planktic foraminiferal B/Ca ratios: Systematics and uncertainties in pCO2 reconstructions

    NASA Astrophysics Data System (ADS)

    Tripati, Aradhna K.; Roberts, Christopher D.; Eagle, Robert A.; Li, Gaojun

    2011-05-01

    We use new and published data representing a 20-million-year-long record to discuss the systematics of interpreting planktic foraminiferal B/Ca ratios. B/Ca-based reconstructions of seawater carbonate chemistry and atmospheric pCO2 assume that the incorporation of boron into foraminiferal tests can be empirically described by an apparent partition coefficient, KD = [B/Ca]test / ([B(OH)4^-]/[HCO3^-]) (Hemming and Hanson, 1992). It has also been proposed that there is a species-specific relationship between KD and temperature (Yu et al., 2007). As we discuss, although these relationships may be robust, there remain significant uncertainties over the controls on boron incorporation into foraminifera. It is difficult to be certain that the empirically defined correlation between temperature and KD is not simply a result of covariance of temperature and other hydrographic variables in the ocean, including carbonate system parameters. There is also some evidence that KD may be affected by solution [HCO3^-]/[CO3^2-] ratios (i.e., pH), or by [CO3^2-]. In addition, the theoretical basis for the definition of KD and for a temperature control on KD is a matter of debate. We also discuss the sensitivity of pCO2 reconstructions to different KD-temperature calibrations and seawater B/Ca. If a KD-temperature calibration is estimated using ice core pCO2 values between 0 and 200 ka, B/Ca ratios can be used to reasonably approximate atmospheric pCO2 between 200 and 800 ka; however, the absolute values of pCO2 calculated are sensitive to the choice of KD-temperature relationship. For older time periods, the absolute values of pCO2 are also dependent on the evolution of seawater B concentrations. However, we find that over the last 20 Ma, reconstructed changes in declining pCO2 across the Mid-Pleistocene Transition, Pliocene glacial intensification, and the Middle Miocene Climate Transition are supported by the B/Ca record even if a constant core-top KD is used, or different KD
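
    A schematic sketch of the partition-coefficient relation quoted above; the B/Ca value and KD are invented, and the remaining carbonate-system step (closing the system with a second parameter such as alkalinity to reach pCO2) is deliberately left out.

    ```python
    # KD = [B/Ca]_test / ([B(OH)4-]/[HCO3-]): given a measured B/Ca and an
    # assumed (e.g. temperature-calibrated) KD, invert for the seawater
    # borate-to-bicarbonate ratio. Values are illustrative only.
    b_ca = 110e-6   # B/Ca of the foraminiferal test (mol/mol)
    kd = 1.4e-3     # assumed apparent partition coefficient

    borate_over_bicarb = b_ca / kd
    print(f"[B(OH)4-]/[HCO3-] = {borate_over_bicarb:.4f}")
    # Converting this ratio to pH or pCO2 requires boron speciation (pKB*)
    # plus a second carbonate-system parameter, which is omitted here.
    ```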

  2. Systematic analysis of the in situ crosstalk of tyrosine modifications reveals no additional natural selection on multiply modified residues

    PubMed Central

    Pan, Zhicheng; Liu, Zexian; Cheng, Han; Wang, Yongbo; Gao, Tianshun; Ullah, Shahid; Ren, Jian; Xue, Yu

    2014-01-01

    Recent studies have indicated that different post-translational modifications (PTMs) synergistically orchestrate specific biological processes through crosstalk. However, the preferences of the crosstalk among different PTMs and the evolutionary constraints on PTM crosstalk need further dissection. In this study, the in situ crosstalk at the same positions among three tyrosine PTMs, including sulfation, nitration and phosphorylation, was systematically analyzed. The experimentally identified sulfation, nitration and phosphorylation sites were collected and integrated with reliable predictions to perform large-scale analyses of in situ crosstalk. From the results, we observed that the in situ crosstalk between sulfation and nitration is significantly under-represented, whereas both sulfation and nitration prefer to co-occupy the same tyrosines with phosphorylation. Further analyses suggested that sulfation and nitration preferentially co-occur with phosphorylation at specific positions in proteins, and participate in distinct biological processes and functions. More interestingly, the long-term evolutionary analysis indicated that multi-PTM-targeted tyrosines did not show any higher conservation than singly modified ones. Also, the analysis of human genetic variations demonstrated that there is no additional functional constraint on inherited-disease, cancer or rare mutations of multiply modified tyrosines. Taken together, our systematic analyses provide a better understanding of the in situ crosstalk among PTMs. PMID:25476580
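
    One generic way to ask whether two PTMs co-occupy tyrosines less often than chance, in the spirit of the analysis above, is a hypergeometric test. This sketch is illustrative, with invented counts, and is not necessarily the authors' exact statistic.

    ```python
    from scipy import stats

    # N modifiable tyrosines, K with PTM A, n with PTM B, k observed overlaps.
    N, K, n, k = 10000, 400, 300, 3   # invented counts

    # Probability of k or fewer co-occupied sites under random overlap.
    p_under = stats.hypergeom.cdf(k, N, K, n)
    print(f"expected overlap = {K * n / N:.1f}, observed = {k}, P(<=k) = {p_under:.3g}")
    ```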

  3. Characterization factors for terrestrial acidification at the global scale: a systematic analysis of spatial variability and uncertainty.

    PubMed

    Roy, Pierre-Olivier; Azevedo, Ligia B; Margni, Manuele; van Zelm, Rosalie; Deschênes, Louise; Huijbregts, Mark A J

    2014-12-01

    Characterization factors (CFs) are used in life cycle assessment (LCA) to quantify the potential impact per unit of emission. CFs are obtained from a characterization model which assesses the environmental mechanisms along the cause-effect chain linking an emission to its potential damage on a given area of protection, such as loss in ecosystem quality. Until now, CFs for acidifying emissions have not covered the global scale and were only representative of the geographical scope of their characterization model. Consequently, current LCA practice implicitly assumes that all emissions from a global supply chain occur within the continent covered by the characterization method's geographical scope. This paper provides worldwide 2° × 2.5° spatially explicit CFs, representing the change in relative loss of terrestrial vascular plant species due to an emission change of nitrogen oxides (NOx), ammonia (NH3) and sulfur dioxide (SO2). We found that spatial variability in the CFs is much larger than the statistical uncertainty (six orders of magnitude vs. two orders of magnitude). Spatial variability is mainly caused by the atmospheric fate factor and the soil sensitivity factor, while the ecological effect factor is the dominant contributor to the statistical uncertainty. The CFs provided in our study allow the worldwide spatially explicit evaluation of life cycle impacts related to acidifying emissions. This opens the door to evaluating the regional life cycle emissions of different products in a global economy.

  4. Impact of contacting study authors to obtain additional data for systematic reviews: diagnostic accuracy studies for hepatic fibrosis

    PubMed Central

    2014-01-01

    Background Seventeen of 172 included studies in a recent systematic review of blood tests for hepatic fibrosis or cirrhosis reported diagnostic accuracy results discordant from 2 × 2 tables, and 60 studies reported inadequate data to construct 2 × 2 tables. This study explores the yield of contacting authors of diagnostic accuracy studies and the impact on the systematic review findings. Methods Sixty-six corresponding authors were sent letters requesting additional information or clarification of data from 77 studies. Data received from the authors were synthesized with data included in the previous review, and diagnostic accuracy sensitivities, specificities, and positive and negative likelihood ratios were recalculated. Results Of the 66 authors, 68% were successfully contacted and 42% provided additional data for 29 out of 77 studies (38%). All authors who provided data did so by the third emailed request (ten authors provided data after one request). Authors of more recent studies were more likely to be located and to provide data compared to authors of older studies. The effects of requests for additional data on the conclusions regarding the utility of blood tests to identify patients with clinically significant fibrosis or cirrhosis were generally small for ten out of 12 tests. Additional data resulted in reclassification (using median likelihood ratio estimates) from less useful to moderately useful or vice versa for the remaining two blood tests and enabled the calculation of an estimate for a third blood test for which previously the data had been insufficient to do so. We did not identify a clear pattern for the directional impact of additional data on estimates of diagnostic accuracy. Conclusions We successfully contacted and received results from 42% of authors who provided data for 38% of included studies. Contacting authors of studies evaluating the diagnostic accuracy of serum biomarkers for hepatic fibrosis and cirrhosis in hepatitis C patients
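
    For reference, the recalculated quantities follow directly from the four cells of each reconstructed 2 × 2 table; the counts below are invented.

    ```python
    # 2x2 table: test result (rows) versus significant fibrosis (columns).
    tp, fp, fn, tn = 45, 12, 15, 88   # invented counts

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
    print(f"Se={sensitivity:.2f} Sp={specificity:.2f} LR+={lr_pos:.2f} LR-={lr_neg:.2f}")
    ```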

  5. Adenomyomatosis of the gallbladder in childhood: A systematic review of the literature and an additional case report

    PubMed Central

    Parolini, Filippo; Indolfi, Giuseppe; Magne, Miguel Garcia; Salemme, Marianna; Cheli, Maurizio; Boroni, Giovanni; Alberti, Daniele

    2016-01-01

    AIM: To investigate the diagnostic and therapeutic assessment of children with adenomyomatosis of the gallbladder (AMG). METHODS: AMG is a degenerative disease characterized by a proliferation of the mucosal epithelium which deeply invaginates and extends into the thickened muscular layer of the gallbladder, causing intramural diverticula. Although AMG is found in up to 5% of cholecystectomy specimens in adult populations, this condition is extremely uncommon in childhood. The authors provide a detailed systematic review of the pediatric literature according to PRISMA guidelines, focusing on diagnostic and therapeutic assessment. An additional case of AMG is also presented. RESULTS: Five studies were finally included, encompassing 5 children with AMG. The analysis was extended to our additional 11-year-old patient, who presented with diffuse AMG and pancreatic acinar metaplasia of the gallbladder mucosa and was successfully managed with laparoscopic cholecystectomy. Mean age at presentation was 7.2 years. Unspecific abdominal pain was the commonest symptom. Abdominal ultrasound was performed on all patients, with a diagnostic accuracy of 100%. Five patients underwent cholecystectomy and were asymptomatic at follow-up. In the remaining patient, completely asymptomatic at diagnosis, a conservative approach with monthly ultrasonographic monitoring was undertaken. CONCLUSION: Considering the remote but possible degeneration leading to cancer and the feasibility of laparoscopic cholecystectomy even in small children, the evidence suggests that elective laparoscopic cholecystectomy represents the treatment of choice. Pre-operative evaluation of the extrahepatic biliary tree anatomy with cholangio-MRI is strongly recommended. PMID:27170933

  6. Systematics and limit calculations

    SciTech Connect

    Fisher, Wade (Fermilab)

    2006-12-01

    This note discusses the estimation of systematic uncertainties and their incorporation into upper limit calculations. Two different approaches to reducing systematics and their degrading impact on upper limits are introduced. An improved χ² function is defined which is useful in comparing Poisson-distributed data with models marginalized by systematic uncertainties. Also, a technique using profile likelihoods is introduced which provides a means of constraining the degrading impact of systematic uncertainties on limit calculations.
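
    A minimal sketch of the profile-likelihood technique named above: a correlated normalization systematic enters as a nuisance parameter θ with a unit-Gaussian penalty and is profiled out by minimization. The χ² form and all numbers are illustrative, not the note's exact definitions.

    ```python
    import numpy as np

    data = np.array([12, 18, 25, 31])         # observed counts (invented)
    model = np.array([10.0, 20.0, 24.0, 33.0])
    sys_frac = 0.05                           # 5% correlated systematic

    def chi2(theta):
        mu = model * (1.0 + sys_frac * theta)
        return np.sum((data - mu) ** 2 / mu) + theta ** 2  # Gaussian penalty

    thetas = np.linspace(-5, 5, 1001)
    values = np.array([chi2(t) for t in thetas])
    print(f"profiled theta = {thetas[values.argmin()]:.2f}, chi2_min = {values.min():.2f}")
    ```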

  7. Additive Synergism between Asbestos and Smoking in Lung Cancer Risk: A Systematic Review and Meta-Analysis

    PubMed Central

    Ngamwong, Yuwadee; Tangamornsuksan, Wimonchat; Lohitnavy, Ornrat; Chaiyakunapruk, Nathorn; Scholfield, C. Norman; Reisfeld, Brad; Lohitnavy, Manupat

    2015-01-01

    Smoking and asbestos exposure are important risks for lung cancer. Several epidemiological studies have linked asbestos exposure and smoking to lung cancer. To reconcile and unify these results, we conducted a systematic review and meta-analysis to provide a quantitative estimate of the increased risk of lung cancer associated with asbestos exposure and cigarette smoking and to classify their interaction. Five electronic databases were searched from inception to May, 2015 for observational studies on lung cancer. All case-control (N = 10) and cohort (N = 7) studies were included in the analysis. We calculated pooled odds ratios (ORs), relative risks (RRs) and 95% confidence intervals (CIs) using a random-effects model for the association of asbestos exposure and smoking with lung cancer. Lung cancer patients who were not exposed to asbestos and non-smoking (A-S-) were compared with: (i) asbestos-exposed and non-smoking (A+S-), (ii) non-exposed to asbestos and smoking (A-S+), and (iii) asbestos-exposed and smoking (A+S+). Our meta-analysis showed a significant difference in the risk of developing lung cancer among asbestos-exposed and/or smoking workers compared to controls (A-S-); odds ratios for the disease (95% CI) were (i) 1.70 (A+S-, 1.31–2.21), (ii) 5.65 (A-S+, 3.38–9.42), and (iii) 8.70 (A+S+, 5.8–13.10). The additive interaction index of synergy was 1.44 (95% CI = 1.26–1.77) and the multiplicative index = 0.91 (95% CI = 0.63–1.30). Corresponding values for cohort studies were 1.11 (95% CI = 1.00–1.28) and 0.51 (95% CI = 0.31–0.85). Our results point to an additive synergism for lung cancer with co-exposure to asbestos and cigarette smoking. Assessments of industrial health risks should take smoking and other airborne health risks into account when setting occupational asbestos exposure limits. PMID:26274395
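
    The two interaction indices can be reproduced directly from the pooled case-control odds ratios quoted above, assuming the standard definitions of the additive synergy index and the multiplicative interaction index:

    ```python
    # Pooled odds ratios from the abstract (case-control studies).
    or_asbestos, or_smoking, or_both = 1.70, 5.65, 8.70

    # Additive synergy index: joint excess over the sum of single excesses.
    synergy = (or_both - 1) / ((or_asbestos - 1) + (or_smoking - 1))
    # Multiplicative index: joint OR over the product of the single ORs.
    multiplicative = or_both / (or_asbestos * or_smoking)
    print(f"S = {synergy:.2f}, multiplicative index = {multiplicative:.2f}")
    # -> S = 1.44 and 0.91, matching the values reported in the abstract.
    ```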

  9. Addition of dipeptidyl peptidase-4 inhibitors to sulphonylureas and risk of hypoglycaemia: systematic review and meta-analysis

    PubMed Central

    Moore, Nicholas; Arnaud, Mickael; Robinson, Philip; Raschi, Emanuel; De Ponti, Fabrizio; Bégaud, Bernard; Pariente, Antoine

    2016-01-01

    Objective To quantify the risk of hypoglycaemia associated with the concomitant use of dipeptidyl peptidase-4 (DPP-4) inhibitors and sulphonylureas compared with placebo and sulphonylureas. Design Systematic review and meta-analysis. Data sources Medline, ISI Web of Science, SCOPUS, Cochrane Central Register of Controlled Trials, and clinicaltrials.gov were searched without any language restriction. Study selection Placebo-controlled randomised trials comprising at least 50 participants with type 2 diabetes treated with DPP-4 inhibitors and sulphonylureas. Review methods Risk of bias in each trial was assessed using the Cochrane Collaboration tool. The risk ratio of hypoglycaemia with 95% confidence intervals was computed for each study and then pooled using fixed effect models (Mantel-Haenszel method) or random effect models, when appropriate. Subgroup analyses were also performed (eg, dose of DPP-4 inhibitors). The number needed to harm (NNH) was estimated according to treatment duration. Results 10 studies were included, representing a total of 6546 participants (4020 received DPP-4 inhibitors plus sulphonylureas, 2526 placebo plus sulphonylureas). The risk ratio of hypoglycaemia was 1.52 (95% confidence interval 1.29 to 1.80). The NNH was 17 (95% confidence interval 11 to 30) for a treatment duration of six months or less, 15 (9 to 26) for 6.1 to 12 months, and 8 (5 to 15) for more than one year. In subgroup analysis, no difference was found between full and low doses of DPP-4 inhibitors: the risk ratio related to full dose DPP-4 inhibitors was 1.66 (1.34 to 2.06), whereas the increased risk ratio related to low dose DPP-4 inhibitors did not reach statistical significance (1.33, 0.92 to 1.94). Conclusions Addition of DPP-4 inhibitors to a sulphonylurea to treat people with type 2 diabetes is associated with a 50% increased risk of hypoglycaemia and one excess case of hypoglycaemia for every 17 patients in the first six months of treatment. This
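
    The reported NNH values follow from the pooled risk ratio and the control-group hypoglycaemia risk over each treatment duration, via NNH = 1/(ARC × (RR − 1)). The control risks below are invented to illustrate the arithmetic; only RR = 1.52 comes from the abstract.

    ```python
    rr = 1.52  # pooled risk ratio from the abstract

    # Invented control-group risks chosen only to illustrate the formula.
    for duration, control_risk in [("<= 6 months", 0.11), ("6.1-12 months", 0.13)]:
        nnh = 1.0 / (control_risk * (rr - 1.0))
        print(f"{duration}: NNH ~ {nnh:.0f}")
    ```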

  10. Network planning under uncertainties

    NASA Astrophysics Data System (ADS)

    Ho, Kwok Shing; Cheung, Kwok Wai

    2008-11-01

    One of the main focuses of network planning is the optimization of the network resources required to build a network under a certain traffic demand projection. Traditionally, the inputs to this type of network planning problem are treated as deterministic. In reality, varying traffic requirements and fluctuations in network resources can cause uncertainties in the decision models. Failure to include these uncertainties in the network design process can severely affect the feasibility and economics of the network. Therefore, it is essential to find a solution that is insensitive to the uncertain conditions during the network planning process. As early as the 1960s, a network planning problem with traffic requirements varying over time had been studied. Up to now, this kind of network planning problem is still being actively researched, especially for VPN network design. Another kind of network planning problem under uncertainty that has been studied actively in the past decade addresses fluctuations in network resources. One such hotly pursued research topic is survivable network planning. It considers the design of a network, under uncertainties brought by fluctuations in topology, to meet the requirement that the network remain intact up to a certain number of faults occurring anywhere in the network. Recently, the authors proposed a new planning methodology called the Generalized Survivable Network that tackles the network design problem under both varying traffic requirements and fluctuations of topology. Although all the above network planning problems handle various kinds of uncertainties, it is hard to find a generic framework under more general uncertainty conditions that allows a more systematic way to solve the problems. With a unified framework, the seemingly diverse models and algorithms can be intimately related and possibly more insights and improvements can be brought out for solving the problem. This motivates us to seek a

  11. Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    In order to provide a complete description of a material's thermoelectric power factor, an uncertainty interval is required in addition to the measured nominal value. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, the thermocouple cold-finger effect, and measurement parameters, in addition to uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.
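
    As an illustration of combining such components, a first-order propagation of Seebeck and resistivity uncertainties into the power factor PF = S²/ρ might look as follows (values invented, correlations neglected):

    ```python
    import numpy as np

    S, u_S = 180e-6, 4e-6        # Seebeck coefficient (V/K) and uncertainty
    rho, u_rho = 1.2e-5, 0.4e-6  # resistivity (ohm*m) and uncertainty

    pf = S ** 2 / rho
    # Relative uncertainties add in quadrature; S enters squared, hence the 2.
    u_pf = pf * np.sqrt((2 * u_S / S) ** 2 + (u_rho / rho) ** 2)
    print(f"PF = {pf * 1e3:.2f} +/- {u_pf * 1e3:.2f} mW/(m*K^2)")
    ```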

  12. Errors and Uncertainty in Physics Measurement.

    ERIC Educational Resources Information Center

    Blasiak, Wladyslaw

    1983-01-01

    Classifies errors as either systematic or blunder and uncertainties as either systematic or random. Discusses use of error/uncertainty analysis in direct/indirect measurement, describing the process of planning experiments to ensure lowest possible uncertainty. Also considers appropriate level of error analysis for high school physics students'…

  13. Preventive zinc supplementation for children, and the effect of additional iron: a systematic review and meta-analysis

    PubMed Central

    Mayo-Wilson, Evan; Imdad, Aamer; Junior, Jean; Dean, Sohni; Bhutta, Zulfiqar A

    2014-01-01

    Objective Zinc deficiency is widespread, and preventive supplementation may have benefits in young children. Effects for children over 5 years of age, and effects when coadministered with other micronutrients, are uncertain. These are obstacles to scale-up. This review seeks to determine if preventive supplementation reduces mortality and morbidity for children aged 6 months to 12 years. Design Systematic review conducted with the Cochrane Developmental, Psychosocial and Learning Problems Group. Two reviewers independently assessed studies. Meta-analyses were performed for mortality, illness and side effects. Data sources We searched multiple databases, including CENTRAL and MEDLINE, in January 2013. Authors were contacted for missing information. Eligibility criteria for selecting studies Randomised trials of preventive zinc supplementation. Hospitalised children and children with chronic diseases were excluded. Results 80 randomised trials with 205 401 participants were included. There was a small but non-significant effect on all-cause mortality (risk ratio (RR) 0.95 (95% CI 0.86 to 1.05)). Supplementation may reduce the incidence of all-cause diarrhoea (RR 0.87 (0.85 to 0.89)), but there was evidence of reporting bias. There was no evidence of an effect on the incidence or prevalence of respiratory infections or malaria. There was moderate quality evidence of a very small effect on linear growth (standardised mean difference 0.09 (0.06 to 0.13)) and an increase in vomiting (RR 1.29 (1.14 to 1.46)). There was no evidence of an effect on iron status. Comparing zinc with and without iron cosupplementation, and direct comparisons of zinc plus iron versus zinc administered alone, favoured cointervention for some outcomes and zinc alone for other outcomes. Effects may be larger for children over 1 year of age, but most differences were not significant. Conclusions Benefits of preventive zinc supplementation may outweigh any potentially adverse effects in areas where

  14. Some Aspects of uncertainty in computational fluid dynamics results

    NASA Technical Reports Server (NTRS)

    Mehta, U. B.

    1991-01-01

    Uncertainties are inherent in computational fluid dynamics (CFD). These uncertainties need to be systematically addressed and managed. Sources of these uncertainties are discussed. Some recommendations are made for the quantification of CFD uncertainties. A practical method of uncertainty analysis is based on sensitivity analysis. When CFD is used to design fluid dynamic systems, sensitivity-uncertainty analysis is essential.
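
    A minimal sketch of the sensitivity-based approach mentioned above: central-difference sensitivities of a stand-in model are combined with input standard uncertainties in quadrature. The model and all numbers are placeholders, not a CFD code.

    ```python
    import numpy as np

    def model(x):
        mach, aoa = x
        return 0.1 + 0.05 * mach + 0.11 * aoa + 0.02 * mach * aoa  # mock output

    x0 = np.array([0.8, 2.0])    # nominal inputs (Mach number, angle of attack)
    u_x = np.array([0.01, 0.1])  # input standard uncertainties

    grad = np.empty_like(x0)
    for i, h in enumerate(1e-4 * np.maximum(np.abs(x0), 1.0)):
        dx = np.zeros_like(x0)
        dx[i] = h
        grad[i] = (model(x0 + dx) - model(x0 - dx)) / (2 * h)

    u_y = np.sqrt(np.sum((grad * u_x) ** 2))  # first-order propagation
    print(f"y = {model(x0):.4f} +/- {u_y:.4f}")
    ```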

  15. Deriving uncertainty factors for threshold chemical contaminants in drinking water.

    PubMed

    Ritter, Leonard; Totman, Céline; Krishnan, Kannan; Carrier, Richard; Vézina, Anne; Morisset, Véronique

    2007-10-01

    Uncertainty factors are used in the development of drinking-water guidelines to account for uncertainties in the database, including extrapolations of toxicity from animal studies and variability within humans, which result in some uncertainty about risk. The application of uncertainty factors is entrenched in toxicological risk assessment worldwide, but is not applied consistently. This report, prepared in collaboration with Health Canada, provides an assessment of the derivation of the uncertainty factor assumptions used in developing drinking-water quality guidelines for chemical contaminants. Assumptions used by Health Canada in the development of guidelines were compared to several other major regulatory jurisdictions. This assessment has revealed that uncertainty factor assumptions have been substantially influenced by historical practice. While the application of specific uncertainty factors appears to be well entrenched in regulatory practice, a well-documented and disciplined basis for the selection of these factors was not apparent in any of the literature supporting the default assumptions of Canada, the United States, Australia, or the World Health Organization. While there is a basic scheme used in most cases in developing drinking-water quality guidelines for nonthreshold contaminants by the jurisdictions included in this report, additional factors are sometimes included to account for other areas of uncertainty. These factors may include extrapolating subchronic data to anticipated chronic exposure, or use of a LOAEL instead of a NOAEL. The default value attributed to each uncertainty factor is generally a factor of 3 or 10; however, again, no comprehensive guidance to develop and apply these additional uncertainty factors was evident from the literature reviewed. A decision tree has been developed to provide guidance for selection of appropriate uncertainty factors, to account for the range of uncertainty encountered in the risk assessment process.
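
    Schematically, stacked uncertainty factors enter a guideline calculation as divisors of a point of departure. The sketch below uses generic default values purely for illustration and does not reproduce any agency's assessment.

    ```python
    # Point of departure (e.g. a NOAEL) divided by stacked uncertainty factors.
    pod = 10.0        # mg/kg bw/day (invented)
    uf = 10 * 10 * 3  # interspecies x intraspecies x database deficiency

    tdi = pod / uf    # tolerable daily intake, mg/kg bw/day

    # Guideline value from standard exposure assumptions (also generic).
    bw, water, allocation = 70.0, 2.0, 0.2  # kg, L/day, fraction of TDI to water
    guideline = tdi * bw * allocation / water
    print(f"TDI = {tdi:.4f} mg/kg/day; guideline = {guideline:.3f} mg/L")
    ```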

  17. A framework to facilitate consistent characterization of read across uncertainty.

    PubMed

    Blackburn, Karen; Stuard, Sharon B

    2014-04-01

    A process for evaluating analogues for use in structure activity relationship (SAR) assessments was previously published (Wu et al., 2010) and tested using a series of case studies (Blackburn et al., 2011). SAR-based "read across" approaches continue to be broadly used to address toxicological data gaps. The potential additional uncertainty introduced into risk assessments as a result of application of read across approaches to fill data gaps has been widely discussed (OECD, 2007; ECETOC, 2012; Patlewicz et al., 2013), but to date a systematic framework to guide the characterization of uncertainty in read across assessments has not been proposed. The current manuscript presents both a systematic framework to describe potential areas of additional uncertainty that may arise in read across (evaluated based on the number and suitability of analogues contributing data, severity of the critical effect, and effects and potency concordance), as well as a questionnaire for evaluating and documenting consideration of these potential additional sources of uncertainty by risk assessors. Application of this framework represents a next step in standardizing the read across process, both by providing a means to transparently assign a level of uncertainty to a SAR-based read across assessment and by facilitating consistency in read across conclusions drawn by different risk assessors.

  18. Additional use of an aldosterone antagonist in patients with mild to moderate chronic heart failure: a systematic review and meta-analysis

    PubMed Central

    Hu, Li-jun; Chen, Yun-qing; Deng, Song-bai; Du, Jian-lin; She, Qiang

    2013-01-01

    Aims Aldosterone antagonists (AldoAs) have been used to treat severe chronic heart failure (CHF). There is uncertainty regarding the efficacy of using AldoAs in mild to moderate CHF with New York Heart Association (NYHA) classifications of I to II. This study summarizes the evidence for the efficacy of spironolactone (SP), eplerenone (EP) and canrenone in mild to moderate CHF patients. Methods PubMed, MEDLINE, EMBASE and OVID databases were searched before June 2012 for randomized and quasi-randomized controlled trials assessing AldoA treatment in CHF patients with NYHA classes I to II. Data concerning each study's design, patients' characteristics and outcomes were extracted. Risk ratios (RR) and weighted mean differences (WMD) or standardized mean differences were calculated using either fixed or random effects models. Results Eight trials involving 3929 CHF patients were included. AldoAs were superior to the control in all-cause mortality (RR 0.79, 95% CI 0.66, 0.95) and in re-hospitalization for cardiac causes (RR 0.62, 95% CI 0.52, 0.74); the left ventricular ejection fraction was improved by AldoA treatment (WMD 2.94%, P = 0.52). Moreover, AldoA therapy decreased the left ventricular end-diastolic volume (WMD −14.04 ml, P < 0.00001) and the left ventricular end-systolic volume (WMD −14.09 ml, P < 0.00001). A stratified analysis showed a statistical superiority in the benefits of SP over EP in reducing LVEDV and LVESV. AldoAs reduced B-type natriuretic peptide concentrations (WMD −37.76 pg ml−1, P < 0.00001), increased serum creatinine (WMD 8.69 μmol l−1, P = 0.0003) and increased the occurrence of hyperkalaemia (RR 1.78, 95% CI 1.43, 2.23). Conclusions Additional use of AldoAs in CHF patients may decrease mortality and re-hospitalization for cardiac reasons, improve cardiac function and simultaneously ameliorate LV reverse remodelling. PMID:23088367

  19. The challenges of uncertainty and interprofessional collaboration in palliative care for non-cancer patients in the community: A systematic review of views from patients, carers and health-care professionals

    PubMed Central

    Murtagh, Fliss EM

    2014-01-01

    Background: Primary care has the potential to play significant roles in providing effective palliative care for non-cancer patients. Aim: To identify, critically appraise and synthesise the existing evidence on views on the provision of palliative care for non-cancer patients by primary care providers and reveal any gaps in the evidence. Design: Standard systematic review and narrative synthesis. Data sources: MEDLINE, Embase, CINAHL, PsycINFO, Applied Social Science Abstract and the Cochrane library were searched in 2012. Reference searching, hand searching, expert consultations and grey literature searches complemented these. Papers with the views of patients/carers or professionals on primary palliative care provision to non-cancer patients in the community were included. The amended Hawker’s criteria were used for quality assessment of included studies. Results: A total of 30 studies were included and represent the views of 719 patients, 605 carers and over 400 professionals. In all, 27 studies are from the United Kingdom. Patients and carers expect primary care physicians to provide compassionate care, have appropriate knowledge and play central roles in providing care. The roles of professionals are unclear to patients, carers and professionals themselves. Uncertainty of illness trajectory and lack of collaboration between health-care professionals were identified as barriers to effective care. Conclusions: Effective interprofessional work to deal with uncertainty and maintain coordinated care is needed for better palliative care provision to non-cancer patients in the community. Research into and development of a best model for effective interdisciplinary work are needed. PMID:24821710

  20. Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease

    NASA Astrophysics Data System (ADS)

    Marsden, Alison

    2009-11-01

    Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid structure interaction. However, inputs to these simulations, including medical image data, catheter-derived pressures and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficiently account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries, and layer these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.
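
    As a rough illustration of non-intrusive stochastic collocation, the sketch below propagates a single Gaussian-uncertain input through a toy model with a Gauss-Hermite quadrature rule. The `model` function and its parameters are placeholders, not a hemodynamics solver, and the adaptive, multi-dimensional machinery of the paper is not reproduced.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def model(p):
    # Hypothetical scalar map from an uncertain input (say, a pressure)
    # to an output of interest; a stand-in only.
    return 2.0 * np.sqrt(p) + 0.1 * p

mu, sigma = 80.0, 5.0              # assumed input mean and standard deviation
nodes, weights = hermegauss(7)     # probabilists' Gauss-Hermite rule
weights = weights / weights.sum()  # normalize to a probability measure

q = model(mu + sigma * nodes)      # one deterministic model run per node
mean = np.sum(weights * q)
std = np.sqrt(np.sum(weights * (q - mean) ** 2))
print(f"output mean = {mean:.3f}, output std = {std:.3f}")
```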

  1. Simplified propagation of standard uncertainties

    SciTech Connect

    Shull, A.H.

    1997-06-09

    An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine if the standard is adequate for its intended use or can one calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined or determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. A system of simplifying the calculations by dividing into subgroups of absolute and relative uncertainties is utilized. These methods also incorporate the International Standards Organization (ISO) concepts for combining systematic and random uncertainties as published in their Guide to the Expression of Measurement Uncertainty. Details of the simplified methods and examples of their use are included in the paper.
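
    In its simplest case, the shortcut described here reduces to combining relative uncertainties in quadrature within one subgroup and absolute uncertainties within the other, then combining the two subgroup totals. A minimal sketch, with invented component values:

```python
import numpy as np

value = 10.00                            # prepared standard, e.g. mg/L (invented)

rel = np.array([0.002, 0.001, 0.0015])   # relative components: purity, balance, volume
abs_ = np.array([0.005, 0.003])          # absolute components, same units as value

u_from_rel = value * np.sqrt(np.sum(rel**2))   # relative subgroup, made absolute
u_from_abs = np.sqrt(np.sum(abs_**2))          # absolute subgroup
u_total = np.sqrt(u_from_rel**2 + u_from_abs**2)
print(f"combined standard uncertainty = {u_total:.4f}")
```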

  2. Teaching Uncertainties

    ERIC Educational Resources Information Center

    Duerdoth, Ian

    2009-01-01

    The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…

  3. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  4. Exploring Uncertainty with Projectile Launchers

    ERIC Educational Resources Information Center

    Orzel, Chad; Reich, Gary; Marr, Jonathan

    2012-01-01

    The proper choice of a measurement technique that minimizes systematic and random uncertainty is an essential part of experimental physics. These issues are difficult to teach in the introductory laboratory, though. Because most experiments involve only a single measurement technique, students are often unable to make a clear distinction between…

  5. Uncertainty quantification for proton-proton fusion in chiral effective field theory

    NASA Astrophysics Data System (ADS)

    Acharya, B.; Carlsson, B. D.; Ekström, A.; Forssén, C.; Platter, L.

    2016-09-01

    We compute the S-factor of the proton-proton (pp) fusion reaction using chiral effective field theory (χEFT) up to next-to-next-to-leading order (NNLO) and perform a rigorous uncertainty analysis of the results. We quantify the uncertainties due to (i) the computational method used to compute the pp cross section in momentum space, (ii) the statistical uncertainties in the low-energy coupling constants of χEFT, (iii) the systematic uncertainty due to the χEFT cutoff, and (iv) systematic variations in the database used to calibrate the nucleon-nucleon interaction. We also examine the robustness of the polynomial extrapolation procedure, which is commonly used to extract the threshold S-factor and its energy-derivatives. By performing a statistical analysis of the polynomial fit of the energy-dependent S-factor at several different energy intervals, we eliminate a systematic uncertainty that can arise from the choice of the fit interval in our calculations. In addition, we explore the statistical correlations between the S-factor and few-nucleon observables such as the binding energies and point-proton radii of 2,3H and 3He as well as the D-state probability and quadrupole moment of 2H, and the β-decay of 3H. We find that, with the state-of-the-art optimization of the nuclear Hamiltonian, the statistical uncertainty in the threshold S-factor cannot be reduced beyond 0.7%.
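
    The fit-interval systematic discussed above can be illustrated with a toy version of the polynomial extrapolation: fit S(E) "data" over several energy windows and compare the extrapolated threshold value S(0). The functional form, noise level and energy scale below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
E = np.linspace(0.01, 1.0, 60)                 # energies (arbitrary units)
S_true = 4.0 + 11.0 * E + 3.0 * E**2           # invented smooth S(E)
S_obs = S_true * (1 + 0.005 * rng.standard_normal(E.size))

for e_max in (0.2, 0.5, 1.0):                  # different fit intervals
    sel = E <= e_max
    coeffs = np.polyfit(E[sel], S_obs[sel], deg=2)
    print(f"E_max = {e_max:.1f}: extrapolated S(0) = {np.polyval(coeffs, 0.0):.3f}")
```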

  6. Strategy under uncertainty.

    PubMed

    Courtney, H; Kirkland, J; Viguerie, P

    1997-01-01

    At the heart of the traditional approach to strategy lies the assumption that by applying a set of powerful analytic tools, executives can predict the future of any business accurately enough to allow them to choose a clear strategic direction. But what happens when the environment is so uncertain that no amount of analysis will allow us to predict the future? What makes for a good strategy in highly uncertain business environments? The authors, consultants at McKinsey & Company, argue that uncertainty requires a new way of thinking about strategy. All too often, they say, executives take a binary view: either they underestimate uncertainty to come up with the forecasts required by their companies' planning or capital-budgeting processes, or they overestimate it, abandon all analysis, and go with their gut instinct. The authors outline a new approach that begins by making a crucial distinction among four discrete levels of uncertainty that any company might face. They then explain how a set of generic strategies--shaping the market, adapting to it, or reserving the right to play at a later time--can be used in each of the four levels. And they illustrate how these strategies can be implemented through a combination of three basic types of actions: big bets, options, and no-regrets moves. The framework can help managers determine which analytic tools can inform decision making under uncertainty--and which cannot. At a broader level, it offers executives a discipline for thinking rigorously and systematically about uncertainty and its implications for strategy.

  7. The need for annual echocardiography to detect cabergoline-associated valvulopathy in patients with prolactinoma: a systematic review and additional clinical data.

    PubMed

    Caputo, Carmela; Prior, David; Inder, Warrick J

    2015-11-01

    Present recommendations by the US Food and Drug Administration advise that patients with prolactinoma treated with cabergoline should have an annual echocardiogram to screen for valvular heart disease. Here, we present new clinical data and a systematic review of the scientific literature showing that the prevalence of cabergoline-associated valvulopathy is very low. We prospectively assessed 40 patients with prolactinoma taking cabergoline. Cardiovascular examination before echocardiography detected an audible systolic murmur in 10% of cases (all were functional murmurs), and no clinically significant valvular lesion was shown on echocardiogram in the 90% of patients without a murmur. Our systematic review identified 21 studies that assessed the presence of valvular abnormalities in patients with prolactinoma treated with cabergoline. Including our new clinical data, only two (0·11%) of 1811 patients were confirmed to have cabergoline-associated valvulopathy (three [0·17%] if possible cases were included). The probability of clinically significant valvular heart disease is low in the absence of a murmur. On the basis of these findings, we challenge the present recommendations to do routine echocardiography in all patients taking cabergoline for prolactinoma every 12 months. We propose that such patients should be screened by a clinical cardiovascular examination and that echocardiogram should be reserved for those patients with an audible murmur, those treated for more than 5 years at a dose of more than 3 mg per week, or those who maintain cabergoline treatment after the age of 50 years.

  8. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
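
    The letter's first criterion, cleaner separation of systematic and random errors, is easy to demonstrate on synthetic data: when the true error is multiplicative, additive residuals are heteroscedastic while log-space residuals are not. A sketch under assumed distributions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.gamma(shape=0.5, scale=10.0, size=20000)    # "true" daily precipitation
Y = X * np.exp(0.3 * rng.standard_normal(X.size))   # truly multiplicative error

light, heavy = X < 5.0, X > 20.0
add_res = Y - X                     # residuals under the additive model
mul_res = np.log(Y) - np.log(X)     # residuals under the multiplicative model
print("additive residual std, light/heavy rain:",
      add_res[light].std(), add_res[heavy].std())
print("log residual std, light/heavy rain     :",
      mul_res[light].std(), mul_res[heavy].std())
```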

  9. Uncertainty analysis

    SciTech Connect

    Thomas, R.E.

    1982-03-01

    An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
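
    For readers unfamiliar with the Latin hypercube method being compared here, a minimal sketch using SciPy's quasi-Monte Carlo module (assuming scipy >= 1.7) against simple random sampling on a toy model:

```python
import numpy as np
from scipy.stats import qmc        # Latin hypercube sampler, scipy >= 1.7

def model(x):                      # toy 2-parameter model
    return np.sin(x[:, 0]) + x[:, 1] ** 2

n = 100
lhs = qmc.LatinHypercube(d=2, seed=42).random(n)   # stratified in [0, 1)^2
srs = np.random.default_rng(42).random((n, 2))     # simple random sample

print("LHS estimate of output mean:", model(lhs).mean())
print("SRS estimate of output mean:", model(srs).mean())
```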

  10. Asymptotic entropic uncertainty relations

    NASA Astrophysics Data System (ADS)

    Adamczak, Radosław; Latała, Rafał; Puchała, Zbigniew; Życzkowski, Karol

    2016-03-01

    We analyze entropic uncertainty relations for two orthogonal measurements on an N-dimensional Hilbert space, performed in two generic bases. It is assumed that the unitary matrix U relating both bases is distributed according to the Haar measure on the unitary group. We provide lower bounds on the average Shannon entropy of probability distributions related to both measurements. The bounds are stronger than those obtained with use of the entropic uncertainty relation by Maassen and Uffink, and they are optimal up to additive constants. We also analyze the case of a large number of measurements and obtain strong entropic uncertainty relations, which hold with high probability with respect to the random choice of bases. The lower bounds we obtain are optimal up to additive constants and allow us to prove a conjecture by Wehner and Winter on the asymptotic behavior of constants in entropic uncertainty relations as the dimension tends to infinity. As a tool we develop estimates on the maximum operator norm of a submatrix of a fixed size of a random unitary matrix distributed according to the Haar measure, which are of independent interest.
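
    A quick numerical companion to this setting: draw a Haar-random unitary, pick a random pure state, and check the Maassen-Uffink bound H(p) + H(q) >= -2 log max|U_ij| that the paper's results strengthen. Dimension and seed below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 32

# Haar-distributed unitary via QR of a complex Gaussian matrix,
# with the standard phase correction on the diagonal of R.
Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
Q, R = np.linalg.qr(Z)
d = np.diagonal(R)
U = Q * (d / np.abs(d))

psi = rng.standard_normal(N) + 1j * rng.standard_normal(N)
psi /= np.linalg.norm(psi)

def shannon(p):
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

H_sum = shannon(np.abs(psi) ** 2) + shannon(np.abs(U @ psi) ** 2)
bound = -2.0 * np.log(np.max(np.abs(U)))
print(f"H1 + H2 = {H_sum:.3f} >= Maassen-Uffink bound {bound:.3f}")
```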

  11. A systematic review of image segmentation methodology, used in the additive manufacture of patient-specific 3D printed models of the cardiovascular system

    PubMed Central

    Byrne, N; Velasco Forte, M; Tandon, A; Valverde, I

    2016-01-01

    Background Shortcomings in existing methods of image segmentation preclude the widespread adoption of patient-specific 3D printing as a routine decision-making tool in the care of those with congenital heart disease. We sought to determine the range of cardiovascular segmentation methods and how long each of these methods takes. Methods A systematic review of literature was undertaken. Medical imaging modality, segmentation methods, segmentation time, segmentation descriptive quality (SDQ) and segmentation software were recorded. Results In total, 136 studies met the inclusion criteria (1 clinical trial; 80 journal articles; 55 conference, technical and case reports). The most frequently used image segmentation methods were brightness thresholding, region growing and manual editing, as supported by the most popular piece of proprietary software: Mimics (Materialise NV, Leuven, Belgium, 1992–2015). The use of bespoke software developed by individual authors was not uncommon. SDQ indicated that reporting of image segmentation methods was generally poor, with only one in three accounts providing sufficient detail for their procedure to be reproduced. Conclusions and implication of key findings Predominantly anecdotal and case reporting precluded rigorous assessment of risk of bias and strength of evidence. This review finds a reliance on manual and semi-automated segmentation methods which demand a high level of expertise and a significant time commitment on the part of the operator. In light of the findings, we have made recommendations regarding reporting of 3D printing studies. We anticipate that these findings will encourage the development of advanced image segmentation methods. PMID:27170842
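
    For orientation, brightness thresholding, the most frequently reported method in the review, amounts to keeping voxels within an operator-chosen intensity window. A minimal sketch on a stand-in volume; the array values and window are placeholders, not a validated protocol.

```python
import numpy as np

rng = np.random.default_rng(3)
volume = rng.integers(0, 1000, size=(64, 64, 64))   # stand-in image volume

lo, hi = 200, 600                                   # operator-chosen window
mask = (volume >= lo) & (volume <= hi)              # binary segmentation
print(f"segmented voxel fraction: {mask.mean():.3f}")
```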

  12. Efficacy of additional psychosocial intervention in reducing low birth weight and preterm birth in teenage pregnancy: A systematic review and meta-analysis.

    PubMed

    Sukhato, Kanokporn; Wongrathanandha, Chathaya; Thakkinstian, Ammarin; Dellow, Alan; Horsuwansak, Pornpot; Anothaisintawee, Thunyarat

    2015-10-01

    This systematic review aimed to assess the efficacy of psychosocial interventions in reducing risk of low birth weight (LBW) and preterm birth (PTB) in teenage pregnancy. Relevant studies were identified from Medline, Scopus, CINAHL, and CENTRAL databases. Randomized controlled trials investigating the effect of psychosocial interventions on risk of LBW and PTB, compared to routine antenatal care (ANC), were eligible. Relative risks (RR) of LBW and PTB were pooled using the inverse variance method. Mean differences of birth weight (BW) between intervention and control groups were pooled using unstandardized mean difference (USMD). Five studies were included in the review. Compared with routine ANC, psychosocial interventions significantly reduced risk of LBW by 40% (95% CI: 8%, 62%) but not PTB (pooled RR = 0.67, 95% CI: 0.42, 1.05). Mean BW of the intervention group was significantly higher than that of the control group, with a USMD of 200.63 g (95% CI: 21.02, 380.25). Results of our study suggest that psychosocial interventions significantly reduced risk of LBW in teenage pregnancy.

  13. Uncertainties in offsite consequence analysis

    SciTech Connect

    Young, M.L.; Harper, F.T.; Lui, C.H.

    1996-03-01

    The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the U.S. Nuclear Regulatory Commission and the European Commission began co-sponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables using a formal expert judgment elicitation and evaluation process. This paper focuses on the methods used in and results of this on-going joint effort.

  14. Visualizing Flow of Uncertainty through Analytical Processes.

    PubMed

    Wu, Yingcai; Yuan, Guo-Xun; Ma, Kwan-Liu

    2012-12-01

    Uncertainty can arise in any stage of a visual analytics process, especially in data-intensive applications with a sequence of data transformations. Additionally, throughout the process of multidimensional, multivariate data analysis, uncertainty due to data transformation and integration may split, merge, increase, or decrease. This dynamic characteristic along with other features of uncertainty pose a great challenge to effective uncertainty-aware visualization. This paper presents a new framework for modeling uncertainty and characterizing the evolution of the uncertainty information through analytical processes. Based on the framework, we have designed a visual metaphor called uncertainty flow to visually and intuitively summarize how uncertainty information propagates over the whole analysis pipeline. Our system allows analysts to interact with and analyze the uncertainty information at different levels of detail. Three experiments were conducted to demonstrate the effectiveness and intuitiveness of our design.

  15. A genetic uncertainty problem.

    PubMed

    Tautz, D

    2000-11-01

    The existence of genes that, when knocked out, result in no obvious phenotype has puzzled biologists for many years. The phenomenon is often ascribed to redundancy in regulatory networks, caused by duplicated genes. However, a recent systematic analysis of data from the yeast genome projects does not support a link between gene duplications and redundancies. An alternative explanation suggests that genes might also evolve by very weak selection, which would mean that their true function cannot be studied in normal laboratory experiments. This problem is comparable to Heisenberg's uncertainty relationship in physics. It is possible to formulate an analogous relationship for biology, which, at its extreme, predicts that the understanding of the full function of a gene might require experiments on an evolutionary scale, involving the entire effective population size of a given species.

  16. Generalized uncertainty relations

    NASA Astrophysics Data System (ADS)

    Akten, Burcu Elif

    1999-12-01

    The Heisenberg uncertainty relation has been put into a stronger form by Schrödinger and Robertson. This inequality is also canonically invariant. We ask if there are other independent inequalities for higher orders. The aim is to find a systematic way for writing these inequalities. After an overview of the Heisenberg and Schrödinger-Robertson inequalities and their minimal states in Chapter 1, we start by constructing the higher order invariants in Chapter 2. We construct some of the simpler invariants by direct calculation, which suggests a schematic way of representing all invariants. Diagrams describing invariants help us see their structure and their symmetries immediately and various simplifications in their calculations are obtained as a result. With these new tools, a more systematic approach to construct and classify invariants using group theory is introduced next. In Chapter 4, various methods of obtaining higher order inequalities are discussed and compared. First, the original approach of HUR is applied to the next order and a new inequality is obtained by working in a specific frame where the expectation value tensor is in its simplest form. However, this method can not be used for higher orders as the significant simplifications of a specific frame is no longer available. The second method consists of working with a state vector written as a sum of the eigenvectors of the operator (qp)s and has a Gaussian distribution about the state which makes s=0 . Finally, we try to obtain a general inequality for a whole class of invariants by writing the state vector as a sum of harmonic oscillator eigenstates. In Chapter 4, realistic measurements of the canonical variables are discussed in relation to the uncertainty relations. Finally, in Chapter 5, squeezed state generation by an optical parametric oscillator is described as an explicit demonstration of the HUR for the electromagnetic field. A similar approach is developed for testing higher order

  17. Efficacy and Safety Assessment of the Addition of Bevacizumab to Adjuvant Therapy Agents in Cancer Patients: A Systematic Review and Meta-Analysis of Randomized Controlled Trials

    PubMed Central

    Ahmadizar, Fariba; Onland-Moret, N. Charlotte; de Boer, Anthonius; Liu, Geoffrey; Maitland-van der Zee, Anke H.

    2015-01-01

    Aim To evaluate the efficacy and safety of bevacizumab in the adjuvant cancer therapy setting within different subsets of patients. Methods & Design/Results PubMed, EMBASE, Cochrane and ClinicalTrials.gov databases were searched for English language studies of randomized controlled trials comparing bevacizumab and adjuvant therapy with adjuvant therapy alone, published from January 1966 to 7 May 2014. Progression free survival, overall survival, overall response rate, safety and quality of life were analyzed using random- or fixed-effects models according to the PRISMA guidelines. We obtained data from 44 randomized controlled trials (30,828 patients). Combining bevacizumab with different adjuvant therapies resulted in significant improvement of progression free survival (log hazard ratio, 0.87; 95% confidence interval (CI), 0.84–0.89), overall survival (log hazard ratio, 0.96; 95% CI, 0.94–0.98) and overall response rate (relative risk, 1.46; 95% CI: 1.33–1.59) compared to adjuvant therapy alone in all studied tumor types. In subgroup analyses, there were no interactions of bevacizumab with baseline characteristics on progression free survival and overall survival, while overall response rate was influenced by tumor type and bevacizumab dose (p-value: 0.02). Although bevacizumab use resulted in additional expected adverse drug reactions except anemia and fatigue, it was not associated with a significant decline in quality of life. There was a trend towards a higher risk of several side effects in patients treated by high-dose bevacizumab compared to the low-dose, e.g. all grade proteinuria (9.24; 95% CI: 6.60–12.94 vs. 2.64; 95% CI: 1.29–5.40). Conclusions Combining bevacizumab with different adjuvant therapies provides a survival benefit across all major subsets of patients, including by tumor type, type of adjuvant therapy, and duration and dose of bevacizumab therapy. Though bevacizumab was associated with increased risks of some adverse drug

  18. Uncertainties in radiation flow experiments

    NASA Astrophysics Data System (ADS)

    Fryer, C. L.; Dodd, E.; Even, W.; Fontes, C. J.; Greeff, C.; Hungerford, A.; Kline, J.; Mussack, K.; Tregillis, I.; Workman, J. B.; Benstead, J.; Guymer, T. M.; Moore, A. S.; Morton, J.

    2016-03-01

    Although the fundamental physics behind radiation and matter flow is understood, many uncertainties remain in the exact behavior of macroscopic fluids in systems ranging from pure turbulence to coupled radiation hydrodynamics. Laboratory experiments play an important role in studying this physics to allow scientists to test their macroscopic models of these phenomena. However, because the fundamental physics is well understood, precision experiments are required to validate existing codes already tested by a suite of analytic, manufactured and convergence solutions. To conduct such high-precision experiments requires a detailed understanding of the experimental errors and the nature of their uncertainties on the observed diagnostics. In this paper, we study the uncertainties plaguing many radiation-flow experiments, focusing on those using a hohlraum (dynamic or laser-driven) source and a foam-density target. This study focuses on the effect these uncertainties have on the breakout time of the radiation front. We find that, even if the errors in the initial conditions and numerical methods are Gaussian, the errors in the breakout time are asymmetric, leading to a systematic bias in the observed data. We must understand these systematics to produce the high-precision experimental results needed to study this physics.
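
    The paper's central observation, Gaussian input errors producing asymmetric output errors, follows from nonlinearity alone and can be reproduced with a toy model. The power-law breakout-time expression below is a stand-in chosen for illustration, not the experiments' physics.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
rho = 0.1 * (1 + 0.05 * rng.standard_normal(n))   # foam density, 5% Gaussian error
T = 1.0 * (1 + 0.05 * rng.standard_normal(n))     # drive temperature, 5% error

t_breakout = rho ** 1.5 / T ** 4                  # hypothetical nonlinear scaling

m, med = t_breakout.mean(), np.median(t_breakout)
skew = np.mean((t_breakout - m) ** 3) / t_breakout.std() ** 3
print(f"mean = {m:.4f}, median = {med:.4f}, skewness = {skew:.2f}")
```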

  19. Quantifying Mixed Uncertainties in Cyber Attacker Payoffs

    SciTech Connect

    Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna; Oster, Matthew R.; Saha, Sudip

    2015-04-15

    Representation and propagation of uncertainty in cyber attacker payoffs is a key aspect of security games. Past research has primarily focused on representing the defender’s beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and intervals. Within cyber-settings, continuous probability distributions may still be appropriate for addressing statistical (aleatory) uncertainties where the defender may assume that the attacker’s payoffs differ over time. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker’s payoff generation mechanism. Such epistemic uncertainties are more suitably represented as probability boxes with intervals. In this study, we explore the mathematical treatment of such mixed payoff uncertainties.
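
    A probability box of the kind described can be constructed directly from interval-valued payoffs: the CDF envelope comes from placing each interval at its lower or upper endpoint. A sketch with invented intervals:

```python
import numpy as np

# Interval-valued payoffs (epistemic uncertainty); values are illustrative.
intervals = np.array([[2.0, 5.0], [3.0, 4.0], [1.0, 6.0], [4.0, 7.0]])

grid = np.linspace(0.0, 8.0, 81)
# Any distribution consistent with the intervals has a CDF between:
# the fraction of intervals with upper end <= x (lower envelope), and
# the fraction of intervals with lower end <= x (upper envelope).
cdf_upper = (intervals[:, 0][None, :] <= grid[:, None]).mean(axis=1)
cdf_lower = (intervals[:, 1][None, :] <= grid[:, None]).mean(axis=1)

for x in (2.5, 4.5, 6.5):
    i = np.searchsorted(grid, x)
    print(f"P(payoff <= {x}): [{cdf_lower[i]:.2f}, {cdf_upper[i]:.2f}]")
```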

  20. Uncertainty-induced quantum nonlocality

    NASA Astrophysics Data System (ADS)

    Wu, Shao-xiong; Zhang, Jun; Yu, Chang-shui; Song, He-shan

    2014-01-01

    Based on the skew information, we present a quantity, uncertainty-induced quantum nonlocality (UIN) to measure the quantum correlation. It can be considered as the updated version of the original measurement-induced nonlocality (MIN) preserving the good computability but eliminating the non-contractivity problem. For 2×d-dimensional state, it is shown that UIN can be given by a closed form. In addition, we also investigate the maximal uncertainty-induced nonlocality.
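
    For reference, the skew-information ingredient that UIN builds on is short to compute. The sketch below evaluates the Wigner-Yanase skew information I(rho, K) = -1/2 Tr([sqrt(rho), K]^2) for an arbitrary qubit state and observable; it does not implement the UIN optimization itself.

```python
import numpy as np
from scipy.linalg import sqrtm

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)   # a valid density matrix
K = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)    # Pauli-Z observable

s = sqrtm(rho)                       # matrix square root of the state
comm = s @ K - K @ s                 # commutator [sqrt(rho), K]
I = -0.5 * np.trace(comm @ comm).real
print(f"skew information = {I:.4f}")
```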

  1. Calibration procedure for a laser triangulation scanner with uncertainty evaluation

    NASA Astrophysics Data System (ADS)

    Genta, Gianfranco; Minetola, Paolo; Barbato, Giulio

    2016-11-01

    Most low-cost 3D scanning devices that are nowadays available on the market are sold without a user calibration procedure to correct measurement errors related to changes in environmental conditions. In addition, there is no specific international standard defining a procedure to check the performance of a 3D scanner over time. This paper details a thorough methodology to calibrate a 3D scanner and assess its measurement uncertainty. The proposed procedure is based on the use of a reference ball plate and applied to a triangulation laser scanner. Experimental results show that the metrological performance of the instrument can be greatly improved by the application of the calibration procedure, which corrects systematic errors and reduces the device's measurement uncertainty.

  2. Fractionated Lung IMPT Treatments: Sensitivity to Setup Uncertainties and Motion Effects Based on Single-Field Homogeneity.

    PubMed

    Dowdell, Stephen; Grassberger, Clemens; Sharp, Greg; Paganetti, Harald

    2016-10-01

    Significantly larger variations were observed in ΔEUD and ΔV95 in IMPTfull lung treatments, in addition to higher mean ΔD1-D99. The steep intra-target dose gradients in IMPTfull make it more susceptible to systematic and random setup uncertainties.

  3. Uncertainty quantified trait predictions

    NASA Astrophysics Data System (ADS)

    Fazayeli, Farideh; Kattge, Jens; Banerjee, Arindam; Schrodt, Franziska; Reich, Peter

    2015-04-01

    Functional traits of organisms are key to understanding and predicting biodiversity and ecological change, which motivates continuous collection of traits and their integration into global databases. Such composite trait matrices are inherently sparse, severely limiting their usefulness for further analyses. On the other hand, traits are characterized by the phylogenetic trait signal, trait-trait correlations and environmental constraints, all of which provide information that could be used to statistically fill gaps. We propose the application of probabilistic models which, for the first time, utilize all three characteristics to fill gaps in trait databases and predict trait values at larger spatial scales. For this purpose we introduce BHPMF, a hierarchical Bayesian extension of Probabilistic Matrix Factorization (PMF). PMF is a machine learning technique which exploits the correlation structure of sparse matrices to impute missing entries. BHPMF additionally utilizes the taxonomic hierarchy for trait prediction. Implemented in the context of a Gibbs sampler MCMC approach, BHPMF provides uncertainty estimates for each trait prediction. We present comprehensive experimental results on the problem of plant trait prediction using the largest database of plant traits, where BHPMF shows strong empirical performance in uncertainty quantified trait prediction, outperforming the state-of-the-art based on point estimates. Further, we show that BHPMF is more accurate when it is confident, whereas the error is high when the uncertainty is high.
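
    To make the PMF core concrete, the sketch below gap-fills a synthetic trait matrix with plain alternating least squares. BHPMF itself goes further, adding the taxonomic hierarchy and Gibbs-sampled uncertainty estimates, which this toy deliberately omits.

```python
import numpy as np

rng = np.random.default_rng(0)
n_species, n_traits, k, lam = 100, 8, 3, 0.1

true = rng.standard_normal((n_species, k)) @ rng.standard_normal((k, n_traits))
mask = rng.random(true.shape) < 0.4                 # only 40% of entries observed
X = np.where(mask, true, np.nan)

U = rng.standard_normal((n_species, k))
V = rng.standard_normal((n_traits, k))
for _ in range(30):                                 # alternating least squares
    for i in range(n_species):                      # update species factors
        m = mask[i]
        A = V[m].T @ V[m] + lam * np.eye(k)
        U[i] = np.linalg.solve(A, V[m].T @ X[i, m])
    for j in range(n_traits):                       # update trait factors
        m = mask[:, j]
        A = U[m].T @ U[m] + lam * np.eye(k)
        V[j] = np.linalg.solve(A, U[m].T @ X[m, j])

rmse = np.sqrt(np.mean((U @ V.T - true)[~mask] ** 2))
print(f"held-out RMSE: {rmse:.3f}")
```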

  4. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

    Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric errors and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfying. Moreover, the quantification of uncertainty is also satisfying, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results vary considerably depending on site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are also discussed.
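
    A Monte Carlo caricature of the propagation idea: discharge from a power-law rating curve Q = a(h - b)^c, with a systematic gauge offset shared across all stage readings and independent per-reading noise. Parameters and error magnitudes are illustrative, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(11)
a, b, c = 30.0, 0.2, 1.7                          # assumed rating-curve parameters
h = np.array([0.8, 1.1, 1.6, 2.3])                # observed stages (m)

n = 50_000
sys_err = 0.01 * rng.standard_normal((n, 1))         # systematic offset (m), shared
rand_err = 0.005 * rng.standard_normal((n, h.size))  # per-reading noise (m)

Q = a * (h + sys_err + rand_err - b) ** c         # discharge ensemble (m^3/s)
for j, hj in enumerate(h):
    lo, hi = np.percentile(Q[:, j], [2.5, 97.5])
    print(f"h = {hj:.1f} m: Q in [{lo:.1f}, {hi:.1f}] m^3/s (95%)")

# The shared offset does not average out over the series:
print("std of series-mean discharge:", Q.mean(axis=1).std())
```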

  5. Characterizing Epistemic Uncertainty for Launch Vehicle Designs

    NASA Technical Reports Server (NTRS)

    Novack, Steven D.; Rogers, Jim; Al Hassan, Mohammad; Hark, Frank

    2016-01-01

    NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk, and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counterintuitive results, and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods, such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty, are rendered obsolete, since sensitivity to uncertainty changes are not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis for the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper describes how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.
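
    One way to see the kind of underestimation the paper describes: epistemic uncertainty is a shared state of knowledge, so sampling it independently per component washes it out. A toy series system with assumed numbers, not a PRA model:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50_000, 10                          # Monte Carlo trials, components

# Imperfectly known failure probability with a lognormal state-of-knowledge.
p_shared = rng.lognormal(np.log(1e-3), 0.8, size=(n, 1))   # one draw per trial
p_indep = rng.lognormal(np.log(1e-3), 0.8, size=(n, m))    # one draw per component

risk_shared = 1 - (1 - p_shared) ** m               # epistemic correlation kept
risk_indep = 1 - np.prod(1 - p_indep, axis=1)       # epistemic correlation lost

print("95th percentile, shared draw     :", np.percentile(risk_shared, 95))
print("95th percentile, independent draw:", np.percentile(risk_indep, 95))
```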

  7. Quantifying uncertainty in LCA-modelling of waste management systems

    SciTech Connect

    Clavreul, Julie; Guyonnet, Dominique; Christensen, Thomas H.

    2012-12-15

    Highlights: • Uncertainty in LCA-modelling of waste management is significant. • Model, scenario and parameter uncertainties contribute. • Sequential procedure for quantifying uncertainty is proposed. • Application of procedure is illustrated by a case-study. - Abstract: Uncertainty analysis in LCA studies has been subject to major progress over the last years. In the context of waste management, various methods have been implemented but a systematic method for uncertainty analysis of waste-LCA studies is lacking. The objective of this paper is (1) to present the sources of uncertainty specifically inherent to waste-LCA studies, (2) to select and apply several methods for uncertainty analysis and (3) to develop a general framework for quantitative uncertainty assessment of LCA of waste management systems. The suggested method is a sequence of four steps combining the selected methods: (Step 1) a sensitivity analysis evaluating the sensitivities of the results with respect to the input uncertainties, (Step 2) an uncertainty propagation providing appropriate tools for representing uncertainties and calculating the overall uncertainty of the model results, (Step 3) an uncertainty contribution analysis quantifying the contribution of each parameter uncertainty to the final uncertainty and (Step 4) as a new approach, a combined sensitivity analysis providing a visualisation of the shift in the ranking of different options due to variations of selected key parameters. This tiered approach optimises the resources available to LCA practitioners by only propagating the most influential uncertainties.

  8. Communication and Uncertainty Management.

    ERIC Educational Resources Information Center

    Brashers, Dale E.

    2001-01-01

    Suggests the fundamental challenge for refining theories of communication and uncertainty is to abandon the assumption that uncertainty will produce anxiety. Outlines and extends a theory of uncertainty management and reviews current theory and research. Concludes that people want to reduce uncertainty because it is threatening, but uncertainty…

  9. Uncertainty as knowledge

    PubMed Central

    Lewandowsky, Stephan; Ballard, Timothy; Pancost, Richard D.

    2015-01-01

    This issue of Philosophical Transactions examines the relationship between scientific uncertainty about climate change and knowledge. Uncertainty is an inherent feature of the climate system. Considerable effort has therefore been devoted to understanding how to effectively respond to a changing, yet uncertain climate. Politicians and the public often appeal to uncertainty as an argument to delay mitigative action. We argue that the appropriate response to uncertainty is exactly the opposite: uncertainty provides an impetus to be concerned about climate change, because greater uncertainty increases the risks associated with climate change. We therefore suggest that uncertainty can be a source of actionable knowledge. We survey the papers in this issue, which address the relationship between uncertainty and knowledge from physical, economic and social perspectives. We also summarize the pervasive psychological effects of uncertainty, some of which may militate against a meaningful response to climate change, and we provide pointers to how those difficulties may be ameliorated. PMID:26460108

  10. Neutrino Spectra and Uncertainties for MINOS

    SciTech Connect

    Kopp, Sacha

    2008-02-21

    The MINOS experiment at Fermilab has released an updated result on muon disappearance. The experiment utilizes the intense source of muon neutrinos provided by the NuMI beam line. This note summarizes the systematic uncertainties in the experiment's knowledge of the flux and energy spectrum of the neutrinos from NuMI.

  11. An Approach of Uncertainty Evaluation for Thermal-Hydraulic Analysis

    SciTech Connect

    Katsunori Ogura; Hisashi Ninokata

    2002-07-01

    An approach to evaluate uncertainty systematically for thermal-hydraulic analysis programs is demonstrated. The approach is applied to the Peach Bottom Unit 2 Turbine Trip 2 Benchmark and is validated. (authors)

  12. Numerical Uncertainty Quantification for Radiation Analysis Tools

    NASA Technical Reports Server (NTRS)

    Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha

    2007-01-01

    Recently a new emphasis has been placed on engineering applications of space radiation analyses, and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing: as the number of rays increases, the associated uncertainty decreases, but the computational expense increases. Thus, a cost-benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is how many thicknesses are needed to obtain an accurate result. Convergence testing is therefore performed to quantify the uncertainty associated with interpolating over different shield thickness spatial grids.
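
    The ray-count trade-off has the familiar Monte Carlo form: statistical uncertainty falls like 1/sqrt(N) while cost grows like N, so each factor of 10 more rays buys roughly a 3x smaller spread. A sketch with a placeholder per-ray response, not the radiation tools themselves:

```python
import numpy as np

rng = np.random.default_rng(9)

def dose_per_ray(n):
    # Placeholder response: exponential attenuation through a randomly
    # sampled shield thickness along each of n rays.
    thickness = rng.uniform(1.0, 5.0, n)
    return np.exp(-0.3 * thickness)

for n in (100, 1_000, 10_000, 100_000):
    estimates = np.array([dose_per_ray(n).mean() for _ in range(50)])
    print(f"N = {n:>7}: spread of estimate (std) = {estimates.std():.5f}")
```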

  13. Identifying uncertainties in Arctic climate change projections

    NASA Astrophysics Data System (ADS)

    Hodson, Daniel L. R.; Keeley, Sarah P. E.; West, Alex; Ridley, Jeff; Hawkins, Ed; Hewitt, Helene T.

    2013-06-01

    Wide-ranging climate changes are expected in the Arctic by the end of the 21st century, but projections of the size of these changes vary widely across current global climate models. This variation represents a large source of uncertainty in our understanding of the evolution of Arctic climate. Here we systematically quantify and assess the model uncertainty in Arctic climate changes in two CO2 doubling experiments: a multimodel ensemble (CMIP3) and an ensemble constructed using a single model (HadCM3) with multiple parameter perturbations (THC-QUMP). These two ensembles allow us to assess the contribution that both structural and parameter variations across models make to the total uncertainty and to begin to attribute sources of uncertainty in projected changes. We find that parameter uncertainty is a major source of uncertainty in certain aspects of Arctic climate, but also that uncertainties in the mean climate state in the 20th century, most notably in the northward Atlantic ocean heat transport and Arctic sea ice volume, are a significant source of uncertainty for projections of future Arctic change. We suggest that better observational constraints on these quantities will lead to significant improvements in the precision of projections of future Arctic climate change.

  14. The meaning of the bias uncertainty measure.

    PubMed

    Bartley, David L

    2008-08-01

    Characterization of measurement uncertainty in terms of root sums of squares of both unknown systematic and random error components is given meaning in the sense of prediction intervals. Both types of errors are commonly encountered with industrial hygiene air monitoring of hazardous substances. Two extreme types of measurement methods are presented for illustrating how confidence levels may be ascribed to prediction intervals defined by such uncertainty values. In the case of method calibration at each measurement, systematic error or bias may enter from a biased calibrant. At another extreme, a single initial method evaluation may leave residual bias owing to random error in the evaluation itself or to the use of a biased reference method. Analysis is simplified through new simple approximations to probabilistic limits (quantiles) on the magnitude of a non-central Student t-distributed random variable. Connection is established between traditional confidence limits, accuracy measures in the case of bias minimization and an uncertainty measure.
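
    The prediction-interval meaning of the root-sum-of-squares measure can be checked by simulation: fix an unknown bias per evaluated method, add random error per measurement, and count how often the truth lands inside x ± 2·sqrt(u_b² + u_r²). All values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
u_b, u_r, true = 0.03, 0.05, 1.00          # bias and random std, true value
n_methods, n_meas = 2000, 20

bias = u_b * rng.standard_normal((n_methods, 1))            # fixed per method
x = true + bias + u_r * rng.standard_normal((n_methods, n_meas))

U = 2.0 * np.sqrt(u_b**2 + u_r**2)                          # expanded uncertainty
coverage = np.mean(np.abs(x - true) <= U)
print(f"coverage of the +/-U prediction interval: {coverage:.3f}")   # near 0.95
```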

  15. Fission Spectrum Related Uncertainties

    SciTech Connect

    G. Aliberti; I. Kodeli; G. Palmiotti; M. Salvatores

    2007-10-01

    The paper presents a preliminary uncertainty analysis related to potential uncertainties on the fission spectrum data. Consistent results are shown for a reference fast reactor design configuration and for experimental thermal configurations. However the results obtained indicate the need for further analysis, in particular in terms of fission spectrum uncertainty data assessment.

  16. Direct Aerosol Forcing Uncertainty

    DOE Data Explorer

    Mccomiskey, Allison

    2008-01-15

    Understanding sources of uncertainty in aerosol direct radiative forcing (DRF), the difference in a given radiative flux component with and without aerosol, is essential to quantifying changes in Earth's radiation budget. We examine the uncertainty in DRF due to measurement uncertainty in the quantities on which it depends: aerosol optical depth, single scattering albedo, asymmetry parameter, solar geometry, and surface albedo. Direct radiative forcing at the top of the atmosphere and at the surface as well as sensitivities, the changes in DRF in response to unit changes in individual aerosol or surface properties, are calculated at three locations representing distinct aerosol types and radiative environments. The uncertainty in DRF associated with a given property is computed as the product of the sensitivity and typical measurement uncertainty in the respective aerosol or surface property. Sensitivity and uncertainty values permit estimation of total uncertainty in calculated DRF and identification of properties that most limit accuracy in estimating forcing. Total uncertainties in modeled local diurnally averaged forcing range from 0.2 to 1.3 W m-2 (42 to 20%) depending on location (from tropical to polar sites), solar zenith angle, surface reflectance, aerosol type, and aerosol optical depth. The largest contributor to total uncertainty in DRF is usually single scattering albedo; however decreasing measurement uncertainties for any property would increase accuracy in DRF. Comparison of two radiative transfer models suggests the contribution of modeling error is small compared to the total uncertainty although comparable to uncertainty arising from some individual properties.
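
    The paper's bookkeeping in miniature: each property contributes its sensitivity times its measurement uncertainty, and the total is their root sum of squares. The sensitivities and uncertainties below are placeholders, not values from the study.

```python
import numpy as np

properties = ["optical depth", "single-scat. albedo", "asymmetry", "surface albedo"]
sensitivity = np.array([-25.0, 40.0, 12.0, 8.0])   # dDRF/dx, W m^-2 per unit (assumed)
meas_unc = np.array([0.01, 0.02, 0.02, 0.015])     # measurement uncertainty (assumed)

contrib = sensitivity * meas_unc                   # per-property contribution
total = np.sqrt(np.sum(contrib ** 2))              # root-sum-of-squares total
for name, c in zip(properties, contrib):
    print(f"{name:>20}: {c:+.3f} W m^-2")
print(f"{'total (RSS)':>20}: {total:.3f} W m^-2")
```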

  17. The uncertainty of reference standards--a guide to understanding factors impacting uncertainty, uncertainty calculations, and vendor certifications.

    PubMed

    Gates, Kevin; Chang, Ning; Dilek, Isil; Jian, Huahua; Pogue, Sherri; Sreenivasan, Uma

    2009-10-01

    Certified solution standards are widely used in forensic toxicological, clinical/diagnostic, and environmental testing. Typically, these standards are purchased as ampouled solutions with a certified concentration. Vendors present concentration and uncertainty differently on their Certificates of Analysis. Understanding the factors that impact uncertainty and which factors have been considered in the vendor's assignment of uncertainty are critical to understanding the accuracy of the standard and the impact on testing results. Understanding these variables is also important for laboratories seeking to comply with ISO/IEC 17025 requirements and for those preparing reference solutions from neat materials at the bench. The impact of uncertainty associated with the neat material purity (including residual water, residual solvent, and inorganic content), mass measurement (weighing techniques), and solvent addition (solution density) on the overall uncertainty of the certified concentration is described along with uncertainty calculations.
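
    For a concentration of the form C = m·P/V (mass weighed, purity of the neat material, volume of solvent added), the relative standard uncertainties the article discusses combine in quadrature. A minimal sketch with invented values:

```python
import numpy as np

m, u_m = 10.00e-3, 0.02e-3        # mass (g) and balance uncertainty
P, u_P = 0.995, 0.003             # purity (mass fraction) and its uncertainty
V, u_V = 100.0e-3, 0.15e-3        # volume (L) and volumetric uncertainty

C = m * P / V                     # certified concentration, g/L
u_rel = np.sqrt((u_m / m)**2 + (u_P / P)**2 + (u_V / V)**2)
print(f"C = {C:.5f} g/L, u = {C * u_rel:.5f} g/L ({100 * u_rel:.2f}%)")
```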

  18. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  19. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  20. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  1. Two basic Uncertainty Relations in Quantum Mechanics

    SciTech Connect

    Angelow, Andrey

    2011-04-07

    In the present article, we discuss two types of uncertainty relations in Quantum Mechanics: multiplicative and additive inequalities for two canonical observables. The multiplicative uncertainty relation was discovered by Heisenberg. A few years later (1930), Erwin Schrödinger generalized it and made it more precise than the original. The additive uncertainty relation is based on the three independent statistical moments in Quantum Mechanics: Cov(q,p), Var(q) and Var(p). We discuss the existing symmetry of both types of relations and the applicability of the additive form for the estimation of the total error.

  2. Two basic Uncertainty Relations in Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Angelow, Andrey

    2011-04-01

    In the present article, we discuss two types of uncertainty relations in Quantum Mechanics: multiplicative and additive inequalities for two canonical observables. The multiplicative uncertainty relation was discovered by Heisenberg. A few years later (1930), Erwin Schrödinger generalized it and made it more precise than the original. The additive uncertainty relation is based on the three independent statistical moments in Quantum Mechanics: Cov(q,p), Var(q) and Var(p). We discuss the existing symmetry of both types of relations and the applicability of the additive form for the estimation of the total error.
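
    Both inequalities are easy to verify numerically. The sketch below builds q and p in a truncated Fock basis (hbar = 1) and checks the Heisenberg product and the Schrödinger-Robertson form Var(q)Var(p) - Cov(q,p)² >= 1/4 for a random low-lying oscillator state; the dimension and seed are arbitrary.

```python
import numpy as np

N = 40
a = np.diag(np.sqrt(np.arange(1, N)), k=1)          # annihilation operator
q = (a + a.conj().T) / np.sqrt(2)
p = 1j * (a.conj().T - a) / np.sqrt(2)

rng = np.random.default_rng(6)
psi = np.zeros(N, dtype=complex)
psi[:10] = rng.standard_normal(10) + 1j * rng.standard_normal(10)
psi /= np.linalg.norm(psi)                          # random low-lying state

def ev(A):                                          # expectation value
    return (psi.conj() @ A @ psi).real

var_q = ev(q @ q) - ev(q) ** 2
var_p = ev(p @ p) - ev(p) ** 2
cov = ev((q @ p + p @ q) / 2) - ev(q) * ev(p)

print(f"Heisenberg : Var(q)Var(p)          = {var_q * var_p:.4f} >= 0.25")
print(f"Schrodinger: Var(q)Var(p) - Cov^2  = {var_q * var_p - cov**2:.4f} >= 0.25")
```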

  3. The effectiveness of selected feed and water additives for reducing Salmonella spp. of public health importance in broiler chickens: a systematic review, meta-analysis, and meta-regression approach.

    PubMed

    Totton, Sarah C; Farrar, Ashley M; Wilkins, Wendy; Bucher, Oliver; Waddell, Lisa A; Wilhelm, Barbara J; McEwen, Scott A; Rajić, Andrijana

    2012-10-01

    Eating inappropriately prepared poultry meat is a major cause of foodborne salmonellosis. Our objectives were to determine the efficacy of feed and water additives (other than competitive exclusion and antimicrobials) on reducing Salmonella prevalence or concentration in broiler chickens using systematic review-meta-analysis and to explore sources of heterogeneity found in the meta-analysis through meta-regression. Six electronic databases were searched (Current Contents (1999-2009), Agricola (1924-2009), MEDLINE (1860-2009), Scopus (1960-2009), Centre for Agricultural Bioscience (CAB) (1913-2009), and CAB Global Health (1971-2009)), five topic experts were contacted, and the bibliographies of review articles and a topic-relevant textbook were manually searched to identify all relevant research. Study inclusion criteria comprised: English-language primary research investigating the effects of feed and water additives on the Salmonella prevalence or concentration in broiler chickens. Data extraction and study methodological assessment were conducted by two reviewers independently using pretested forms. Seventy challenge studies (n=910 unique treatment-control comparisons), seven controlled studies (n=154), and one quasi-experiment (n=1) met the inclusion criteria. Compared to an assumed control group prevalence of 44 of 1000 broilers, random-effects meta-analysis indicated that the Salmonella cecal colonization in groups with prebiotics (fructooligosaccharide, lactose, whey, dried milk, lactulose, lactosucrose, sucrose, maltose, mannanoligosaccharide) added to feed or water was 15 out of 1000 broilers; with lactose added to feed or water it was 10 out of 1000 broilers; with experimental chlorate product (ECP) added to feed or water it was 21 out of 1000. For ECP the concentration of Salmonella in the ceca was decreased by 0.61 log(10)cfu/g in the treated group compared to the control group. Significant heterogeneity (Cochran's Q-statistic p≤0.10) was observed

  4. Uncertainty and Sensitivity Analyses Plan

    SciTech Connect

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.

  5. Systematic error analysis of rotating coil using computer simulation

    SciTech Connect

    Li, Wei-chuan; Coles, M.

    1993-04-01

    This report describes a study of the systematic and random measurement uncertainties of magnetic multipoles which are due to construction errors, rotational speed variation, and electronic noise in a digitally bucked tangential coil assembly with dipole bucking windings. The sensitivities of the systematic multipole uncertainty to construction errors are estimated analytically and using a computer simulation program.

  6. Uncertainty in tsunami sediment transport modeling

    USGS Publications Warehouse

    Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.

    2016-01-01

    Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.
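
    As one example of the techniques mentioned, an Ensemble Kalman Filter analysis step can be sketched in a few lines. The forward model and numbers here are toy assumptions, not a sediment transport model; the sketch only shows how a single observation updates an ensemble of uncertain parameters through the ensemble covariance:

        import numpy as np

        rng = np.random.default_rng(0)

        def enkf_update(X, y, H, R):
            # One stochastic-EnKF analysis step.
            # X: (n, N) forecast ensemble; y: (m,) observations;
            # H: (m, n) observation operator; R: (m, m) observation-error covariance.
            N = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
            P = A @ A.T / (N - 1)                          # sample forecast covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
            return X + K @ (Y - H @ X)                     # perturbed-observation update

        # Toy inverse problem: infer a friction parameter from one depth observation
        theta = rng.normal(0.02, 0.005, size=40)             # hypothetical parameter prior
        depth = 4.0 + 80.0 * theta + rng.normal(0, 0.2, 40)  # toy forward model
        X = np.vstack([theta, depth])                        # joint parameter-state ensemble
        Xa = enkf_update(X, y=np.array([6.2]), H=np.array([[0.0, 1.0]]), R=np.array([[0.04]]))
        print("theta: prior mean %.4f -> posterior mean %.4f" % (theta.mean(), Xa[0].mean()))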

  7. Uncertainty and cognitive control.

    PubMed

    Mushtaq, Faisal; Bland, Amy R; Schaefer, Alexandre

    2011-01-01

    A growing body of neuroimaging, behavioral, and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we review evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) there is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) there is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) the perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the "need for control"; and (4) potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.

  8. A modular approach to linear uncertainty analysis.

    PubMed

    Weathers, J B; Luck, R; Weathers, J W

    2010-01-01

    This paper introduces a methodology to simplify the uncertainty analysis of large-scale problems where many outputs and/or inputs are of interest. The modular uncertainty technique presented here can be utilized to analyze the results spanning a wide range of engineering problems with constant sensitivities within parameter uncertainty bounds. The proposed modular approach provides the same results as the traditional propagation of errors methodology with fewer conceptual steps allowing for a relatively straightforward implementation of a comprehensive uncertainty analysis effort. The structure of the modular technique allows easy integration into most experimental/modeling programs or data acquisition systems. The proposed methodology also provides correlation information between all outputs, thus providing information not easily obtained using the traditional uncertainty process based on analyzing one data reduction equation (DRE)/model at a time. Finally, the paper presents a straightforward methodology to obtain the covariance matrix for the input variables using uncorrelated elemental sources of systematic uncertainties along with uncorrelated sources corresponding to random uncertainties.
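
    A minimal sketch of the linear propagation underlying such a modular analysis, with hypothetical sensitivities and uncertainty sources: the input covariance matrix is assembled from uncorrelated random uncertainties plus rank-one contributions from uncorrelated elemental systematic sources (a source shared by several inputs correlates them), and a single matrix product then yields the output uncertainties together with their correlations:

        import numpy as np

        # Sensitivities (Jacobian) of two outputs with respect to three inputs;
        # values are hypothetical and assumed constant within the uncertainty bounds
        J = np.array([[1.0, 0.5, 0.0],
                      [0.0, 2.0, 1.5]])

        # Random (uncorrelated) standard uncertainties of the inputs
        s_rand = np.array([0.10, 0.05, 0.20])

        # Elemental systematic sources: each row gives the systematic standard
        # uncertainty one independent source contributes to each input; a shared
        # source (e.g. a common calibration standard) correlates the inputs
        B = np.array([[0.08, 0.08, 0.00],    # source 1 shared by inputs 1 and 2
                      [0.00, 0.00, 0.12]])   # source 2 affects input 3 only

        # Input covariance: diagonal random part plus rank-one systematic parts
        Sx = np.diag(s_rand**2) + sum(np.outer(b, b) for b in B)

        # Output covariance in one step; off-diagonals give output correlations
        Sy = J @ Sx @ J.T
        u_y = np.sqrt(np.diag(Sy))
        corr = Sy[0, 1] / (u_y[0] * u_y[1])
        print("output std uncertainties:", u_y, "output correlation:", round(corr, 3))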

  9. Addition of docetaxel or bisphosphonates to standard of care in men with localised or metastatic, hormone-sensitive prostate cancer: a systematic review and meta-analyses of aggregate data

    PubMed Central

    Vale, Claire L; Burdett, Sarah; Rydzewska, Larysa H M; Albiges, Laurence; Clarke, Noel W; Fisher, David; Fizazi, Karim; Gravis, Gwenaelle; James, Nicholas D; Mason, Malcolm D; Parmar, Mahesh K B; Sweeney, Christopher J; Sydes, Matthew R; Tombal, Bertrand; Tierney, Jayne F

    2016-01-01

    Summary Background Results from large randomised controlled trials combining docetaxel or bisphosphonates with standard of care in hormone-sensitive prostate cancer have emerged. In order to investigate the effects of these therapies and to respond to emerging evidence, we aimed to systematically review all relevant trials using a framework for adaptive meta-analysis. Methods For this systematic review and meta-analysis, we searched MEDLINE, Embase, LILACS, and the Cochrane Central Register of Controlled Trials, trial registers, conference proceedings, review articles, and reference lists of trial publications for all relevant randomised controlled trials (published, unpublished, and ongoing) comparing either standard of care with or without docetaxel or standard of care with or without bisphosphonates for men with high-risk localised or metastatic hormone-sensitive prostate cancer. For each trial, we extracted hazard ratios (HRs) of the effects of docetaxel or bisphosphonates on survival (time from randomisation until death from any cause) and failure-free survival (time from randomisation to biochemical or clinical failure or death from any cause) from published trial reports or presentations or obtained them directly from trial investigators. HRs were combined using the fixed-effect (Mantel-Haenszel) model. Findings We identified five eligible randomised controlled trials of docetaxel in men with metastatic (M1) disease. Results from three (CHAARTED, GETUG-15, STAMPEDE) of these trials (2992 [93%] of 3206 men randomised) showed that the addition of docetaxel to standard of care improved survival. The HR of 0·77 (95% CI 0·68–0·87; p<0·0001) translates to an absolute improvement in 4-year survival of 9% (95% CI 5–14). Docetaxel in addition to standard of care also improved failure-free survival, with the HR of 0·64 (0·58–0·70; p<0·0001) translating into a reduction in absolute 4-year failure rates of 16% (95% CI 12–19). We identified 11 trials of
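
    The pooling step can be sketched as follows. The review combined HRs with the fixed-effect Mantel-Haenszel method; the sketch below instead uses the closely related inverse-variance weighting on hypothetical trial results (not the review's data), recovering standard errors of the log HRs from the confidence-interval widths:

        import numpy as np

        # Hypothetical trials: (hazard ratio, lower 95% CI, upper 95% CI)
        trials = [(0.73, 0.59, 0.89), (1.01, 0.75, 1.36), (0.76, 0.62, 0.92)]

        # Standard error of each log HR recovered from the CI width
        logs = np.array([np.log(hr) for hr, lo, hi in trials])
        ses = np.array([(np.log(hi) - np.log(lo)) / (2 * 1.96) for hr, lo, hi in trials])

        # Fixed-effect (inverse-variance) pooled hazard ratio
        w = 1 / ses**2
        pooled = np.sum(w * logs) / np.sum(w)
        se = np.sqrt(1 / np.sum(w))
        print(f"pooled HR = {np.exp(pooled):.2f} "
              f"(95% CI {np.exp(pooled - 1.96*se):.2f}-{np.exp(pooled + 1.96*se):.2f})")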

  11. RUMINATIONS ON NDA MEASUREMENT UNCERTAINTY COMPARED TO DA UNCERTAINTY

    SciTech Connect

    Salaymeh, S.; Ashley, W.; Jeffcoat, R.

    2010-06-17

    It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete, or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail for the subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non-routine situations, because data interpretation often involves complex algorithms and logic combined in a highly intertwined way. The methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to a shared understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice and embracing the continuous-improvement philosophy, a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in nondestructive assay of nuclear materials. This paper outlines the needs analysis, objectives, and ongoing development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community, we hope this article will encourage dialogue and the sharing of best practice, and furthermore motivate developers to revisit the treatment of measurement uncertainty.
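
    A minimal GUM-style sketch of the kind of uncertainty budget discussed, with hypothetical NDA components combined in quadrature and expanded with a coverage factor of k = 2:

        import math

        # Hypothetical budget: (component, standard uncertainty in %, sensitivity)
        budget = [
            ("counting statistics",        1.8, 1.0),   # random
            ("calibration standard",       1.2, 1.0),   # systematic
            ("matrix/geometry correction", 2.5, 1.0),   # systematic, model-based
            ("nuclear data",               0.6, 1.0),   # systematic
        ]

        # GUM-style combination: root-sum-square of sensitivity-weighted components
        u_c = math.sqrt(sum((c * u) ** 2 for _, u, c in budget))
        U = 2 * u_c   # expanded uncertainty, coverage factor k = 2 (approx. 95%)
        print(f"combined standard uncertainty = {u_c:.2f} %, expanded (k=2) = {U:.2f} %")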

  12. Communicating uncertainties in earth sciences in view of user needs

    NASA Astrophysics Data System (ADS)

    de Vries, Wim; Kros, Hans; Heuvelink, Gerard

    2014-05-01

    uncertain model parameters (parametric variability). These uncertainties can be quantified by uncertainty propagation methods such as Monte Carlo simulation. Examples of intrinsic uncertainties that generally cannot be expressed in mathematical terms are errors or biases in:
    • Results of experiments and observations, due to inadequate sampling, errors in analyzing data in the laboratory, and even errors in data reporting.
    • Results of (laboratory) experiments that are limited to a specific domain or performed under circumstances that differ from field circumstances.
    • Model structure, due to lack of knowledge of the underlying processes. Structural uncertainty, which may cause model inadequacy/bias, is inherent in model approaches, since models are approximations of reality.
    Intrinsic uncertainties often occur in an emerging field where ongoing new findings, from experiments, field observations, or new model results, challenge earlier work. In this context, climate scientists working within the IPCC have adopted a lexicon to communicate confidence in their findings, ranging over "very high", "high", "medium", "low", and "very low" confidence. In fact, there are also statistical methods to gain insight into uncertainties in model predictions due to model assumptions (i.e. model structural error). Examples are comparing model results with independent observations, or a systematic intercomparison of predictions from multiple models. In the latter case, Bayesian model averaging techniques can be used, in which each model considered is assigned a prior probability of being the 'true' model. This approach works well with statistical (regression) models, but extension to physically based models is cumbersome. An alternative is the use of state-space models in which structural errors are represented as (additive) noise terms. In this presentation, we focus on approaches that are relevant at the science-policy interface, including multiple scientific disciplines and
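
    A minimal sketch of the Bayesian model averaging idea described above, with two hypothetical models, Gaussian likelihoods, and an assumed observation error; the BMA variance separates within-model and between-model (structural) contributions:

        import numpy as np

        rng = np.random.default_rng(1)
        y_obs = rng.normal(3.0, 0.5, size=20)      # hypothetical observations

        # Two competing model predictions of the observed quantity (assumed values)
        models = {"model_A": 2.8, "model_B": 3.4}
        prior = {"model_A": 0.5, "model_B": 0.5}   # equal prior model probabilities
        sigma = 0.5                                # assumed observation error

        # Posterior model probabilities from Gaussian likelihoods of the observations
        def log_like(pred):
            return float(np.sum(-0.5 * ((y_obs - pred) / sigma) ** 2))

        ll = {m: log_like(p) for m, p in models.items()}
        shift = max(ll.values())                   # subtract for numerical stability
        post = {m: prior[m] * np.exp(ll[m] - shift) for m in models}
        z = sum(post.values())
        post = {m: w / z for m, w in post.items()}

        # BMA prediction: mixture mean, with within- plus between-model variance
        mean = sum(post[m] * models[m] for m in models)
        var = sigma**2 + sum(post[m] * (models[m] - mean) ** 2 for m in models)
        print({m: round(p, 3) for m, p in post.items()})
        print(f"BMA mean = {mean:.2f}, BMA variance = {var:.3f}")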

  13. Quantifying uncertainty from material inhomogeneity.

    SciTech Connect

    Battaile, Corbett Chandler; Emery, John M.; Brewer, Luke N.; Boyce, Brad Lee

    2009-09-01

    Most engineering materials are inherently inhomogeneous in their processing, internal structure, properties, and performance. Their properties are therefore statistical rather than deterministic. These inhomogeneities manifest across multiple length and time scales, leading to variabilities, i.e. statistical distributions, that are necessary to accurately describe each stage in the process-structure-properties hierarchy, and are ultimately the primary source of uncertainty in performance of the material and component. When localized events are responsible for component failure, or when component dimensions are on the order of microstructural features, this uncertainty is particularly important. For ultra-high-reliability applications, the uncertainty is compounded by a lack of data describing the extremely rare events. Hands-on testing alone cannot supply sufficient data for this purpose. To date, there is no robust or coherent method to quantify this uncertainty so that it can be used in a predictive manner at the component length scale. The research presented in this report begins to address this lack of capability through a systematic study of the effects of microstructure on the strain concentration at a hole. To achieve the strain concentration, small circular holes (approximately 100 µm in diameter) were machined into brass tensile specimens using a femtosecond laser. The brass was annealed at 450 °C, 600 °C, and 800 °C to produce three hole-to-grain-size ratios of approximately 7, 1, and 1/7. Electron backscatter diffraction experiments were used to guide the construction of digital microstructures for finite element simulations of uniaxial tension. Digital image correlation experiments were used to qualitatively validate the numerical simulations. The simulations were performed iteratively to generate statistics describing the distribution of plastic strain at the hole in varying microstructural environments. In both the experiments and simulations, the

  14. Ground-based imaging remote sensing of ice clouds: uncertainties caused by sensor, method and atmosphere

    NASA Astrophysics Data System (ADS)

    Zinner, Tobias; Hausmann, Petra; Ewald, Florian; Bugliaro, Luca; Emde, Claudia; Mayer, Bernhard

    2016-09-01

    In this study a method is introduced for the retrieval of optical thickness and effective particle size of ice clouds, over a wide range of optical thickness, from ground-based transmitted radiance measurements. The low optical thickness of cirrus clouds and their complex microphysics present a challenge for cloud remote sensing. In transmittance, the relationship between optical depth and radiance is ambiguous. To resolve this ambiguity the retrieval utilizes the spectral slope of radiance between 485 and 560 nm in addition to the commonly employed combination of a visible and a short-wave infrared wavelength. An extensive test of retrieval sensitivity was conducted using synthetic test spectra in which all parameters introducing uncertainty into the retrieval were varied systematically: ice crystal habit and aerosol properties, instrument noise, calibration uncertainty, and the interpolation in the lookup table required by the retrieval process. The most important source of error identified is the habit assumption: averaged over all test spectra, systematic biases of several micrometres can arise in the effective radius retrieval, and the statistical uncertainty of any individual retrieval can easily exceed 10 µm. Optical thickness biases are mostly below 1, while statistical uncertainties are in the range of 1 to 2.5. For demonstration and comparison to satellite data the retrieval is applied to observations by the Munich hyperspectral imager specMACS (spectrometer of the Munich Aerosol and Cloud Scanner) at the Schneefernerhaus observatory (2650 m a.s.l.) during the ACRIDICON-Zugspitze campaign in September and October 2012. Results are compared to MODIS and SEVIRI satellite-based cirrus retrievals (ACRIDICON - Aerosol, Cloud, Precipitation, and Radiation Interactions and Dynamics of Convective Cloud Systems; MODIS - Moderate Resolution Imaging Spectroradiometer; SEVIRI - Spinning Enhanced Visible and Infrared Imager). Considering the identified

  15. The Scientific Basis of Uncertainty Factors Used in Setting Occupational Exposure Limits.

    PubMed

    Dankovic, D A; Naumann, B D; Maier, A; Dourson, M L; Levy, L S

    2015-01-01

    The uncertainty factor concept is integrated into health risk assessments for all aspects of public health practice, including by most organizations that derive occupational exposure limits. The use of uncertainty factors is predicated on the assumption that a sufficient reduction in exposure from those at the boundary for the onset of adverse effects will yield a safe exposure level for at least the great majority of the exposed population, including vulnerable subgroups. There are differences in the application of the uncertainty factor approach among groups that conduct occupational assessments; however, there are common areas of uncertainty which are considered by all or nearly all occupational exposure limit-setting organizations. Five key uncertainties that are often examined include interspecies variability in response when extrapolating from animal studies to humans, response variability in humans, uncertainty in estimating a no-effect level from a dose where effects were observed, extrapolation from shorter-duration studies to a full lifetime exposure, and other insufficiencies in the overall health effects database indicating that the most sensitive adverse effect may not have been evaluated. In addition, a modifying factor is used by some organizations to account for other remaining uncertainties, typically related to exposure scenarios or accounting for the interplay among the five areas noted above. Consideration of uncertainties in occupational exposure limit derivation is a systematic process whereby the factors applied are not arbitrary, although they are mathematically imprecise. As the scientific basis for uncertainty factor application has improved, default uncertainty factors are now used only in the absence of chemical-specific data, and the trend is to replace them with chemical-specific adjustment factors whenever possible. The increased application of scientific data in the development of uncertainty factors for individual chemicals also has
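
    A minimal sketch of how such factors combine in a derivation: the exposure limit is obtained by dividing a point of departure by the product of the individual uncertainty factors. All numbers below are illustrative assumptions, not any organization's actual defaults:

        # Hypothetical derivation of an occupational exposure limit (OEL) from a
        # point of departure (POD) divided by the product of uncertainty factors
        pod_mg_m3 = 50.0          # POD, e.g. a NOAEL from an animal study
        uf = {
            "interspecies":   3.0,   # animal-to-human extrapolation
            "intraspecies":   5.0,   # human response variability
            "loael_to_noael": 1.0,   # not needed here: the POD is a NOAEL
            "duration":       2.0,   # subchronic-to-chronic extrapolation
            "database":       1.0,   # adequate health-effects database
        }
        total_uf = 1.0
        for factor in uf.values():
            total_uf *= factor
        oel = pod_mg_m3 / total_uf
        print(f"composite UF = {total_uf:g}, OEL = {oel:.2f} mg/m3")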

  17. [Ethics, empiricism and uncertainty].

    PubMed

    Porz, R; Zimmermann, H; Exadaktylos, A K

    2011-01-01

    Accidents can lead to difficult boundary situations. Such situations often take place in emergency units. The medical team thus often and inevitably faces professional uncertainty in its decision-making. It is essential to communicate these uncertainties within the medical team, instead of downplaying or overriding existential hurdles in decision-making. Acknowledging uncertainties might lead to alert and prudent decisions. Thus uncertainty can have ethical value in the treatment or withdrawal of treatment. It does not need to be covered by evidence-based arguments, especially as some singular situations of individual tragedies cannot be grasped in terms of evidence-based medicine.

  18. Uncertainty in hydrological signatures

    NASA Astrophysics Data System (ADS)

    McMillan, Hilary; Westerberg, Ida

    2015-04-01

    Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing the responses of different catchments, for understanding catchment organisation and similarity, and for many other modelling and water-management applications. Such information, derived as index values from observed data, is known as hydrological signatures; examples include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), flow variability, the flow duration curve, and the runoff ratio. Because hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water-resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitudes, and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures. The study was made for two data-rich catchments, the 50 km2 Mahurangi catchment in New Zealand and the 135 km2 Brue catchment in the UK. For rainfall data the uncertainty sources included point measurement uncertainty, the number of gauges used in calculation of the catchment spatial average, and uncertainties relating to lack of quality control. For flow data the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve. The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty
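
    A minimal sketch of the Monte Carlo approach proposed, on a synthetic flow series with assumed error magnitudes: each realization applies a systematic rating-curve-type bias plus random daily noise, and the spread of the recomputed signatures gives their uncertainty:

        import numpy as np

        rng = np.random.default_rng(7)
        q_obs = rng.gamma(shape=2.0, scale=1.5, size=365)   # synthetic daily flows (mm/d)

        def signatures(q):
            # Two simple signatures: mean flow and the Q95 low-flow quantile
            # (flow exceeded 95% of the time, i.e. the 5th percentile)
            return np.mean(q), np.percentile(q, 5)

        n = 2000
        out = np.empty((n, 2))
        for i in range(n):
            bias = rng.normal(1.0, 0.05)                    # assumed 5% systematic error
            noise = rng.normal(1.0, 0.03, size=q_obs.size)  # assumed 3% random error
            out[i] = signatures(q_obs * bias * noise)

        for name, col in zip(("mean flow", "Q95"), out.T):
            lo, hi = np.percentile(col, [2.5, 97.5])
            print(f"{name}: 95% uncertainty interval ({lo:.2f}, {hi:.2f}) mm/d")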

  19. Sensitivity and uncertainty investigations for Hiroshima dose estimates and the applicability of the Little Boy mockup measurements

    SciTech Connect

    Bartine, D.E.; Cacuci, D.G.

    1983-09-13

    This paper describes sources of uncertainty in the data used for calculating dose estimates for the Hiroshima explosion and details a methodology for systematically obtaining best estimates and reduced uncertainties for the radiation doses received. (ACR)

  20. Saccade Adaptation and Visual Uncertainty

    PubMed Central

    Souto, David; Gegenfurtner, Karl R.; Schütz, Alexander C.

    2016-01-01

    Visual uncertainty may affect saccade adaptation in two complementary ways. First, an ideal adaptor should take into account the reliability of visual information for determining the amount of correction, predicting that increasing visual uncertainty should decrease adaptation rates. We tested this by comparing observers' direction discrimination and adaptation rates in an intra-saccadic-step paradigm. Second, clearly visible target steps may generate a slower adaptation rate since the error can be attributed to an external cause, instead of an internal change in the visuo-motor mapping that needs to be compensated. We tested this prediction by measuring saccade adaptation to different step sizes. Most remarkably, we found little correlation between estimates of visual uncertainty and adaptation rates and no slower adaptation rates with more visible step sizes. Additionally, we show that for low contrast targets backward steps are perceived as stationary after the saccade, but that adaptation rates are independent of contrast. We suggest that the saccadic system uses different position signals for adapting dysmetric saccades and for generating a trans-saccadic stable visual percept, explaining that saccade adaptation is found to be independent of visual uncertainty. PMID:27252635

  1. Saccade Adaptation and Visual Uncertainty.

    PubMed

    Souto, David; Gegenfurtner, Karl R; Schütz, Alexander C

    2016-01-01

    Visual uncertainty may affect saccade adaptation in two complementary ways. First, an ideal adaptor should take into account the reliability of visual information for determining the amount of correction, predicting that increasing visual uncertainty should decrease adaptation rates. We tested this by comparing observers' direction discrimination and adaptation rates in an intra-saccadic-step paradigm. Second, clearly visible target steps may generate a slower adaptation rate since the error can be attributed to an external cause, instead of an internal change in the visuo-motor mapping that needs to be compensated. We tested this prediction by measuring saccade adaptation to different step sizes. Most remarkably, we found little correlation between estimates of visual uncertainty and adaptation rates and no slower adaptation rates with more visible step sizes. Additionally, we show that for low contrast targets backward steps are perceived as stationary after the saccade, but that adaptation rates are independent of contrast. We suggest that the saccadic system uses different position signals for adapting dysmetric saccades and for generating a trans-saccadic stable visual percept, explaining that saccade adaptation is found to be independent of visual uncertainty.

  2. Attitudes toward Others Depend upon Self and Other Causal Uncertainty

    PubMed Central

    Tobin, Stephanie J.; Osika, Matylda M.; McLanders, Mia

    2014-01-01

    People who are high in causal uncertainty doubt their own ability to understand the causes of social events. In three studies, we examined the effects of target and perceiver causal uncertainty on attitudes toward the target. Target causal uncertainty was manipulated via responses on a causal uncertainty scale in Studies 1 and 2, and with a scenario in Study 3. In Studies 1 and 2, we found that participants liked the low causal uncertainty target more than the high causal uncertainty target. This preference was stronger for low relative to high causal uncertainty participants because high causal uncertainty participants held more uncertain ideals. In Study 3, we examined the value individuals place upon causal understanding (causal importance) as an additional moderator. We found that regardless of their own causal uncertainty level, participants who were high in causal importance liked the low causal uncertainty target more than the high causal uncertainty target. However, when participants were low in causal importance, low causal uncertainty perceivers showed no preference and high causal uncertainty perceivers preferred the high causal uncertainty target. These findings reveal that goal importance and ideals can influence how perceivers respond to causal uncertainty in others. PMID:24504048

  3. Attitudes toward others depend upon self and other causal uncertainty.

    PubMed

    Tobin, Stephanie J; Osika, Matylda M; McLanders, Mia

    2014-01-01

    People who are high in causal uncertainty doubt their own ability to understand the causes of social events. In three studies, we examined the effects of target and perceiver causal uncertainty on attitudes toward the target. Target causal uncertainty was manipulated via responses on a causal uncertainty scale in Studies 1 and 2, and with a scenario in Study 3. In Studies 1 and 2, we found that participants liked the low causal uncertainty target more than the high causal uncertainty target. This preference was stronger for low relative to high causal uncertainty participants because high causal uncertainty participants held more uncertain ideals. In Study 3, we examined the value individuals place upon causal understanding (causal importance) as an additional moderator. We found that regardless of their own causal uncertainty level, participants who were high in causal importance liked the low causal uncertainty target more than the high causal uncertainty target. However, when participants were low in causal importance, low causal uncertainty perceivers showed no preference and high causal uncertainty perceivers preferred the high causal uncertainty target. These findings reveal that goal importance and ideals can influence how perceivers respond to causal uncertainty in others.

  4. Military veterans with mental health problems: a protocol for a systematic review to identify whether they have an additional risk of contact with criminal justice systems compared with other veterans groups

    PubMed Central

    2012-01-01

    Background There is concern that some veterans of armed forces, in particular those with mental health, drug or alcohol problems, experience difficulty returning to a civilian way of life and may subsequently come into contact with criminal justice services and imprisonment. The aim of this review is to examine whether military veterans with mental health problems, including substance use, have an additional risk of contact with criminal justice systems when compared with veterans who do not have such problems. The review will also seek to identify veterans’ views and experiences on their contact with criminal justice services, what contributed to or influenced their contact and whether there are any differences, including international and temporal, in incidence, contact type, veteran type, their presenting health needs and reported experiences. Methods/design In this review we will adopt a methodological model similar to that previously used by other researchers when reviewing intervention studies. The model, which we will use as a framework for conducting a review of observational and qualitative studies, consists of two parallel synthesis stages within the review process; one for quantitative research and the other for qualitative research. The third stage involves a cross study synthesis, enabling a deeper understanding of the results of the quantitative synthesis. A range of electronic databases, including MEDLINE, PsychINFO, CINAHL, will be systematically searched, from 1939 to present day, using a broad range of search terms that cover four key concepts: mental health, military veterans, substance misuse, and criminal justice. Studies will be screened against topic specific inclusion/exclusion criteria and then against a smaller subset of design specific inclusion/exclusion criteria. Data will be extracted for those studies that meet the inclusion criteria, and all eligible studies will be critically appraised. Included studies, both quantitative and

  5. Deterministic uncertainty analysis

    SciTech Connect

    Worley, B.A.

    1987-01-01

    Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig.

  6. Economic uncertainty and econophysics

    NASA Astrophysics Data System (ADS)

    Schinckus, Christophe

    2009-10-01

    The objective of this paper is to provide a methodological link between econophysics and economics. I will study a key notion of both fields: uncertainty, and the ways of thinking about it developed by the two disciplines. After having presented the main economic theories of uncertainty (provided by Knight, Keynes and Hayek), I show how this notion is paradoxically excluded from the economic field. In economics, uncertainty is totally reduced by an a priori Gaussian framework; in contrast, econophysics does not use a priori models because it works directly on data. Uncertainty is then not shaped by a specific model, and is partially and temporarily reduced as models improve. This way of thinking about uncertainty has echoes in the economic literature. By presenting econophysics as a Knightian method, and a complementary approach to a Hayekian framework, this paper shows that econophysics can be methodologically justified from an economic point of view.

  7. Physical Uncertainty Bounds (PUB)

    SciTech Connect

    Vaughan, Diane Elizabeth; Preston, Dean L.

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  8. Intolerance of Uncertainty

    PubMed Central

    Beier, Meghan L.

    2015-01-01

    Multiple sclerosis (MS) is a chronic and progressive neurologic condition that, by its nature, carries uncertainty as a hallmark characteristic. Although all patients face uncertainty, there is variability in how individuals cope with its presence. In other populations, the concept of “intolerance of uncertainty” has been conceptualized to explain this variability such that individuals who have difficulty tolerating the possibility of future occurrences may engage in thoughts or behaviors by which they attempt to exert control over that possibility or lessen the uncertainty but may, as a result, experience worse outcomes, particularly in terms of psychological well-being. This topical review introduces MS-focused researchers, clinicians, and patients to intolerance of uncertainty, integrates the concept with what is already understood about coping with MS, and suggests future steps for conceptual, assessment, and treatment-focused research that may benefit from integrating intolerance of uncertainty as a central feature. PMID:26300700

  9. Characterizing spatial uncertainty when integrating social data in conservation planning.

    PubMed

    Lechner, A M; Raymond, C M; Adams, V M; Polyakov, M; Gordon, A; Rhodes, J R; Mills, M; Stein, A; Ives, C D; Lefroy, E C

    2014-12-01

    Recent conservation planning studies have presented approaches for integrating spatially referenced social (SRS) data with a view to improving the feasibility of conservation action. We reviewed the growing conservation literature on SRS data, focusing on elicited or stated preferences derived through social survey methods such as choice experiments and public participation geographic information systems. Elicited SRS data include the spatial distribution of willingness to sell, willingness to pay, willingness to act, and assessments of social and cultural values. We developed a typology for assessing elicited SRS data uncertainty, which describes how social survey uncertainty propagates when projected spatially, and the importance of accounting for spatial uncertainty such as scale effects and data quality. These uncertainties will propagate when elicited SRS data are integrated with biophysical data for conservation planning and may have important consequences for assessing the feasibility of conservation actions. To explore this issue further, we conducted a systematic review of the elicited SRS data literature. We found that social survey uncertainty was commonly tested for, but that these uncertainties were ignored when projected spatially. Based on these results we developed a framework that will help researchers and practitioners estimate social survey uncertainty and use these quantitative estimates to systematically address uncertainty within an analysis. This is important when using SRS data in conservation applications because decisions need to be made irrespective of data quality, and well-characterized uncertainty can be incorporated into decision-theoretic approaches.

  10. PIV uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Sciacchitano, Andrea; Wieneke, Bernhard

    2016-08-01

    This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It is shown that the uncertainty of vorticity and velocity divergence requires the knowledge of the spatial correlation between the error of the x and y particle image displacement, which depends upon the measurement spatial resolution. The uncertainty of statistical quantities is often dominated by the random uncertainty due to the finite sample size and decreases with the square root of the effective number of independent samples. Monte Carlo simulations are conducted to assess the accuracy of the uncertainty propagation formulae. Furthermore, three experimental assessments are carried out. In the first experiment, a turntable is used to simulate a rigid rotation flow field. The estimated uncertainty of the vorticity is compared with the actual vorticity error root-mean-square, with differences between the two quantities within 5-10% for different interrogation window sizes and overlap factors. A turbulent jet flow is investigated in the second experimental assessment. The reference velocity, which is used to compute the reference value of the instantaneous flow properties of interest, is obtained with an auxiliary PIV system, which features a higher dynamic range than the measurement system. Finally, the uncertainty quantification of statistical quantities is assessed via PIV measurements in a cavity flow. The comparison between estimated uncertainty and actual error demonstrates the accuracy of the proposed uncertainty propagation methodology.
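
    A minimal sketch of the vorticity part of this propagation, assuming central differences and a single cross-correlation coefficient rho between the errors of the two displacement values entering each difference (in practice this correlation depends on the interrogation-window overlap, as the abstract notes); all field values are hypothetical:

        import numpy as np

        def vorticity_uncertainty(Uu, Uv, dx, dy, rho):
            # Standard uncertainty of vorticity omega = dv/dx - du/dy from
            # central differences.
            # Uu, Uv: 2D arrays of instantaneous velocity uncertainties (m/s)
            # rho: assumed error cross-correlation at points 2*dx (2*dy) apart
            var_dvdx = (Uv[1:-1, 2:]**2 + Uv[1:-1, :-2]**2
                        - 2 * rho * Uv[1:-1, 2:] * Uv[1:-1, :-2]) / (2 * dx)**2
            var_dudy = (Uu[2:, 1:-1]**2 + Uu[:-2, 1:-1]**2
                        - 2 * rho * Uu[2:, 1:-1] * Uu[:-2, 1:-1]) / (2 * dy)**2
            return np.sqrt(var_dvdx + var_dudy)   # u- and v-errors assumed independent

        # Hypothetical uniform uncertainty field on a 64x64 grid with 1 mm spacing
        Uu = np.full((64, 64), 0.05)
        Uv = np.full((64, 64), 0.05)
        print(vorticity_uncertainty(Uu, Uv, 1e-3, 1e-3, rho=0.3).mean(), "1/s")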

  11. The Scientific Basis of Uncertainty Factors Used in Setting Occupational Exposure Limits

    PubMed Central

    Dankovic, D. A.; Naumann, B. D.; Maier, A.; Dourson, M. L.; Levy, L. S.

    2015-01-01

    The uncertainty factor concept is integrated into health risk assessments for all aspects of public health practice, including by most organizations that derive occupational exposure limits. The use of uncertainty factors is predicated on the assumption that a sufficient reduction in exposure from those at the boundary for the onset of adverse effects will yield a safe exposure level for at least the great majority of the exposed population, including vulnerable subgroups. There are differences in the application of the uncertainty factor approach among groups that conduct occupational assessments; however, there are common areas of uncertainty which are considered by all or nearly all occupational exposure limit-setting organizations. Five key uncertainties that are often examined include interspecies variability in response when extrapolating from animal studies to humans, response variability in humans, uncertainty in estimating a no-effect level from a dose where effects were observed, extrapolation from shorter duration studies to a full life-time exposure, and other insufficiencies in the overall health effects database indicating that the most sensitive adverse effect may not have been evaluated. In addition, a modifying factor is used by some organizations to account for other remaining uncertainties—typically related to exposure scenarios or accounting for the interplay among the five areas noted above. Consideration of uncertainties in occupational exposure limit derivation is a systematic process whereby the factors applied are not arbitrary, although they are mathematically imprecise. As the scientific basis for uncertainty factor application has improved, default uncertainty factors are now used only in the absence of chemical-specific data, and the trend is to replace them with chemical-specific adjustment factors whenever possible. The increased application of scientific data in the development of uncertainty factors for individual chemicals also

  12. Assessing MODIS Macrophysical Cloud Property Uncertainties

    NASA Astrophysics Data System (ADS)

    Maddux, B. C.; Ackerman, S. A.; Frey, R.; Holz, R.

    2013-12-01

    Cloud, being multifarious and ephemeral, is difficult to observe and quantify in a systematic way. Even basic terminology used to describe cloud observations is fraught with ambiguity in the scientific literature. Any observational technique, method, or platform will contain inherent and unavoidable measurement uncertainties. Quantifying these uncertainties in cloud observations is a complex task that requires an understanding of all aspects of the measurement. We will use cloud observations obtained from the Moderate Resolution Imaging Spectroradiometer (MODIS) to obtain metrics of the uncertainty of its cloud observations. Our uncertainty analyses will contain two main components: 1) an estimate of bias or uncertainty with respect to active measurements from CALIPSO, and 2) a relative uncertainty within the MODIS cloud climatologies themselves. Our method will link uncertainty to the physical observation and its environmental/scene characteristics. Our aim is to create statistical uncertainties that are based on the cloud observational values, satellite view geometry, surface type, etc., for cloud amount and cloud-top pressure. The MODIS instruments on the NASA Terra and Aqua satellites provide observations over a broad spectral range (36 bands between 0.415 and 14.235 micron) and high spatial resolution (250 m for two bands, 500 m for five bands, 1000 m for 29 bands), which the MODIS cloud mask algorithm (MOD35) utilizes to provide clear/cloud determinations over a wide array of surface types, solar illuminations, and view geometries. For this study we use the standard MODIS products, MOD03, MOD06 and MOD35, all of which were obtained from the NASA Level 1 and Atmosphere Archive and Distribution System.

  13. Uncertainty Analysis of in situ Ocean Color Radiometry for the Vicarious Calibration of Ocean Color Satellite Sensors

    NASA Astrophysics Data System (ADS)

    Johnson, B.; Clark, D.; Feinholz, M.; Flora, S.; Franz, B.; Houlihan, T.; Mueller, J. A.; Parr, A. C.; Voss, K. J.; Yarbrough, M.

    2011-12-01

    , and describe the planned approach. We describe known sources of systematic bias and discuss our approach for investigating additional potential sources of error.

  14. Uncertainty of upland soil carbon sink estimate for Finland

    NASA Astrophysics Data System (ADS)

    Lehtonen, Aleksi; Heikkinen, Juha

    2016-04-01

    Changes in the soil carbon stock of Finnish upland soils were quantified using forest inventory data, forest statistics, biomass models, litter turnover rates, and the Yasso07 soil model. Uncertainty in the estimated stock changes was assessed by combining model and sampling errors associated with the various data sources into variance-covariance matrices that allowed computationally efficient error propagation in the context of Yasso07 simulations. In a sensitivity analysis, we found that the uncertainty increased drastically as a result of adding random year-to-year variation to the litter input. Such variation is smoothed out when using periodic inventory data with constant biomass models and turnover rates. Model errors (biomass, litter, understorey vegetation) and the systematic error of total drain had a marginal effect on the uncertainty of the soil carbon stock change. Most of the uncertainty appears to be related to uncaptured annual variation in litter amounts, because variation in the slopes of litter-input trends dictates the uncertainty of the soil carbon stock change. If we assume that there is annual variation only in foliage and fine-root litter rates, and that this variation is less than 10% from year to year, then we can claim that Finnish upland forest soils have accumulated carbon during the first Kyoto period (2008-2012). The results of the study underline the superiority of permanent sample plots over temporary ones when soil-model litter-input trends are estimated from forest inventory data. In addition, we found that the use of the IPCC guidelines leads to underestimation of the uncertainty of the soil carbon stock change; this underestimation results from the guidance to remove inter-annual variation from the model inputs, here illustrated with constant litter life spans. Model assumptions and model input estimation should be evaluated critically when GHG-inventory results are used for policy planning

  15. Systematic reviews need systematic searchers

    PubMed Central

    McGowan, Jessie; Sampson, Margaret

    2005-01-01

    Purpose: This paper will provide a description of the methods, skills, and knowledge of expert searchers working on systematic review teams. Brief Description: Systematic reviews and meta-analyses are very important to health care practitioners, who need to keep abreast of the medical literature and make informed decisions. Searching is a critical part of conducting these systematic reviews, as errors made in the search process potentially result in a biased or otherwise incomplete evidence base for the review. Searches for systematic reviews need to be constructed to maximize recall and deal effectively with a number of potentially biasing factors. Librarians who conduct the searches for systematic reviews must be experts. Discussion/Conclusion: Expert searchers need to understand the specifics about data structure and functions of bibliographic and specialized databases, as well as the technical and methodological issues of searching. Search methodology must be based on research about retrieval practices, and it is vital that expert searchers keep informed about, advocate for, and, moreover, conduct research in information retrieval. Expert searchers are an important part of the systematic review team, crucial throughout the review process—from the development of the proposal and research question to publication. PMID:15685278

  16. Uncertainty Evaluation of the Diffusive Gradients in Thin Films Technique

    PubMed Central

    2015-01-01

    Although the analytical performance of the diffusive gradients in thin films (DGT) technique is well investigated, there is no systematic analysis of the DGT measurement uncertainty and its sources. In this study we determine the uncertainties of bulk DGT measurements (not considering labile complexes) and of DGT-based chemical imaging using laser ablation - inductively coupled plasma mass spectrometry. We show that under well-controlled experimental conditions the relative combined uncertainties of bulk DGT measurements are ∼10% at a confidence interval of 95%. While several factors considerably contribute to the uncertainty of bulk DGT, the uncertainty of DGT LA-ICP-MS mainly depends on the signal variability of the ablation analysis. The combined uncertainties determined in this study support the use of DGT as a monitoring instrument. It is expected that the analytical requirements of legal frameworks, for example, the EU Drinking Water Directive, are met by DGT sampling. PMID:25579402

  17. Uncertainty evaluation of the diffusive gradients in thin films technique.

    PubMed

    Kreuzeder, Andreas; Santner, Jakob; Zhang, Hao; Prohaska, Thomas; Wenzel, Walter W

    2015-02-01

    Although the analytical performance of the diffusive gradients in thin films (DGT) technique is well investigated, there is no systematic analysis of the DGT measurement uncertainty and its sources. In this study we determine the uncertainties of bulk DGT measurements (not considering labile complexes) and of DGT-based chemical imaging using laser ablation - inductively coupled plasma mass spectrometry. We show that under well-controlled experimental conditions the relative combined uncertainties of bulk DGT measurements are ∼10% at a confidence interval of 95%. While several factors considerably contribute to the uncertainty of bulk DGT, the uncertainty of DGT LA-ICP-MS mainly depends on the signal variability of the ablation analysis. The combined uncertainties determined in this study support the use of DGT as a monitoring instrument. It is expected that the analytical requirements of legal frameworks, for example, the EU Drinking Water Directive, are met by DGT sampling.

  18. Adaptive framework for uncertainty analysis in electromagnetic field measurements.

    PubMed

    Prieto, Javier; Alonso, Alonso A; de la Rosa, Ramón; Carrera, Albano

    2015-04-01

    Misinterpretation of uncertainty in the measurement of electromagnetic field (EMF) strength may lead to an underestimation of exposure risk or an overestimation of required measurements. The Guide to the Expression of Uncertainty in Measurement (GUM) has been adopted internationally as a de facto standard for uncertainty assessment. However, analyses under such an approach commonly assume unrealistic static models or neglect relevant prior information, resulting in non-robust uncertainties. This study proposes a principled and systematic framework for uncertainty analysis that fuses information from current measurements and prior knowledge. The framework dynamically adapts to data by exploiting a likelihood function based on kernel mixtures and incorporates flexible choices of prior information by applying importance sampling. The validity of the proposed techniques is assessed with measurements performed with a broadband radiation meter and an isotropic field probe. The developed framework significantly outperforms the GUM approach, achieving a reduction of 28% in measurement uncertainty.
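
    A minimal sketch of the two ingredients named in the abstract, a kernel-mixture likelihood and importance sampling, using hypothetical readings and an assumed Gaussian prior as the proposal distribution; this illustrates the idea, not the authors' implementation:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        meas = rng.normal(1.9, 0.15, size=12)   # hypothetical field readings (V/m)

        # Kernel-mixture likelihood: a Gaussian KDE built from current measurements
        kde = stats.gaussian_kde(meas)

        # Prior knowledge of the true field strength (assumed from earlier surveys)
        prior = stats.norm(loc=2.0, scale=0.4)

        # Importance sampling with the prior as proposal: weights are the
        # kernel-mixture likelihood evaluated at the proposal samples
        theta = prior.rvs(size=50_000, random_state=rng)
        w = kde(theta)
        w /= w.sum()

        mean = np.sum(w * theta)
        var = np.sum(w * (theta - mean) ** 2)
        print(f"posterior mean = {mean:.3f} V/m, std = {np.sqrt(var):.3f} V/m")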

  19. Extended uncertainty from first principles

    NASA Astrophysics Data System (ADS)

    Costa Filho, Raimundo N.; Braga, João P. M.; Lira, Jorge H. S.; Andrade, José S.

    2016-04-01

    A translation operator acting in a space with a diagonal metric is introduced to describe the motion of a particle in a quantum system. We show that the momentum operator and, as a consequence, the uncertainty relation now depend on the metric. It is also shown that, for any metric expanded up to second order, this formalism naturally leads to an extended uncertainty principle (EUP) with a minimum momentum dispersion. The Ehrenfest theorem is modified to include an additional term related to a tidal force arising from the space curvature introduced by the metric. For one-dimensional systems, we show how to map a harmonic potential to an effective potential in Euclidean space using different metrics.
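
    The abstract does not give the explicit form of the modified relation. A commonly quoted extended uncertainty principle of this type, shown here as an illustrative assumption (with a curvature-related constant \alpha > 0), is

        \Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\Bigl(1 + \alpha\,(\Delta x)^2\Bigr)
        \quad\Longrightarrow\quad
        \Delta p \;\ge\; \frac{\hbar}{2}\Bigl(\frac{1}{\Delta x} + \alpha\,\Delta x\Bigr) \;\ge\; \hbar\sqrt{\alpha},

    where the right-hand side is minimized at \Delta x = 1/\sqrt{\alpha}, giving a minimum momentum dispersion \Delta p_{min} = \hbar\sqrt{\alpha} of the kind the abstract describes.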

  20. Individual differences in causal uncertainty.

    PubMed

    Weary, G; Edwards, J A

    1994-08-01

    This article presents a scale that measures chronic individual differences in people's uncertainty about their ability to understand and detect cause-and-effect relationships in the social world: the Causal Uncertainty Scale (CUS). The results of Study 1 indicated that the scale has good internal and adequate test-retest reliability. Additionally, the results of a factor analysis suggested that the scale appears to be tapping a single construct. Study 2 examined the convergent and discriminant validity of the scale, and Studies 3 and 4 examined the predictive and incremental validity of the scale. The importance of the CUS to work on depressives' social information processing and for basic research and theory on human social judgment processes is discussed.

  1. Uncertainty in the multielemental quantification by total-reflection X-ray fluorescence: theoretical and empirical approximation.

    PubMed

    Fernández-Ruiz, R

    2008-11-15

    Quality assurance of analytical results is becoming increasingly important. This work presents a basic theoretical and empirical approximation to the expanded uncertainty associated with TXRF measurements. Two theoretical models have been proposed and compared systematically with the empirical expanded uncertainty obtained. The main consequences derived from this work are the following: theoretical model B explains, with a high degree of agreement, the empirical expanded uncertainties associated with the TXRF measurements, while theoretical model A only partially explains the instrumental repeatability of the TXRF system. On the other hand, an unexpected U-shaped behavior has been found for the empirical uncertainty in TXRF measurements, whose explanation may lie in the sum of several sources of uncertainty not considered, such as variations of the Compton background or the nonlinearity of the Si(Li) detector quantum efficiency. Additionally, it has been shown that the roughness and small geometrical variations of the sample depositions are the most important uncertainty sources in the experimental TXRF measurements.

  2. On the relationship between aerosol model uncertainty and radiative forcing uncertainty

    PubMed Central

    Reddington, Carly L.; Carslaw, Kenneth S.

    2016-01-01

    The largest uncertainty in the historical radiative forcing of climate is caused by the interaction of aerosols with clouds. Historical forcing is not a directly measurable quantity, so reliable assessments depend on the development of global models of aerosols and clouds that are well constrained by observations. However, there has been no systematic assessment of how reduction in the uncertainty of global aerosol models will feed through to the uncertainty in the predicted forcing. We use a global model perturbed parameter ensemble to show that tight observational constraint of aerosol concentrations in the model has a relatively small effect on the aerosol-related uncertainty in the calculated forcing between preindustrial and present-day periods. One factor is the low sensitivity of present-day aerosol to natural emissions that determine the preindustrial aerosol state. However, the major cause of the weak constraint is that the full uncertainty space of the model generates a large number of model variants that are equally acceptable compared to present-day aerosol observations. The narrow range of aerosol concentrations in the observationally constrained model gives the impression of low aerosol model uncertainty. However, these multiple “equifinal” models predict a wide range of forcings. To make progress, we need to develop a much deeper understanding of model uncertainty and ways to use observations to constrain it. Equifinality in the aerosol model means that tuning of a small number of model processes to achieve model−observation agreement could give a misleading impression of model robustness. PMID:26848136

  3. On the relationship between aerosol model uncertainty and radiative forcing uncertainty.

    PubMed

    Lee, Lindsay A; Reddington, Carly L; Carslaw, Kenneth S

    2016-05-24

    The largest uncertainty in the historical radiative forcing of climate is caused by the interaction of aerosols with clouds. Historical forcing is not a directly measurable quantity, so reliable assessments depend on the development of global models of aerosols and clouds that are well constrained by observations. However, there has been no systematic assessment of how reduction in the uncertainty of global aerosol models will feed through to the uncertainty in the predicted forcing. We use a global model perturbed parameter ensemble to show that tight observational constraint of aerosol concentrations in the model has a relatively small effect on the aerosol-related uncertainty in the calculated forcing between preindustrial and present-day periods. One factor is the low sensitivity of present-day aerosol to natural emissions that determine the preindustrial aerosol state. However, the major cause of the weak constraint is that the full uncertainty space of the model generates a large number of model variants that are equally acceptable compared to present-day aerosol observations. The narrow range of aerosol concentrations in the observationally constrained model gives the impression of low aerosol model uncertainty. However, these multiple "equifinal" models predict a wide range of forcings. To make progress, we need to develop a much deeper understanding of model uncertainty and ways to use observations to constrain it. Equifinality in the aerosol model means that tuning of a small number of model processes to achieve model-observation agreement could give a misleading impression of model robustness.

  4. Optimal Universal Uncertainty Relations

    PubMed Central

    Li, Tao; Xiao, Yunlong; Ma, Teng; Fei, Shao-Ming; Jing, Naihuan; Li-Jost, Xianqing; Wang, Zhi-Xi

    2016-01-01

    We study universal uncertainty relations and present a method called joint probability distribution diagram to improve the majorization bounds constructed independently in [Phys. Rev. Lett. 111, 230401 (2013)] and [J. Phys. A. 46, 272002 (2013)]. The results give rise to state independent uncertainty relations satisfied by any nonnegative Schur-concave functions. On the other hand, a remarkable recent result of entropic uncertainty relation is the direct-sum majorization relation. In this paper, we illustrate our bounds by showing how they provide a complement to that in [Phys. Rev. A. 89, 052115 (2014)]. PMID:27775010

  5. Statistical Uncertainty Analysis Applied to Criticality Calculation

    SciTech Connect

    Hartini, Entin; Andiwijayakusuma, Dinan; Susmikanti, Mike; Nursinta, A. W.

    2010-06-22

    In this paper, we present an uncertainty methodology based on a statistical approach for assessing uncertainties in criticality prediction with the Monte Carlo method due to uncertainties in the isotopic composition of the fuel. The methodology has been applied to criticality calculations with MCNP5, with additional stochastic input of the isotopic fuel composition. The stochastic inputs were generated using the Latin hypercube sampling method, based on the probability density function of each nuclide composition. The automatic passing of the stochastic inputs to MCNP and the repeated criticality calculations are made possible by using a Python script to link MCNP and our Latin hypercube sampling code.
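
    The sampling step is straightforward to sketch with modern libraries (scipy's qmc module here; the nuclide list, the uncertainties, and the stand-in for the MCNP5 run are all invented for illustration):

      import numpy as np
      from scipy.stats import qmc, norm

      # nominal isotopic fractions and standard uncertainties (illustrative values)
      nominal = np.array([0.045, 0.955])      # e.g. U-235 / U-238 mass fractions
      sigma   = np.array([0.001, 0.001])

      sampler = qmc.LatinHypercube(d=nominal.size, seed=1)
      u = sampler.random(n=100)                       # stratified uniforms in [0, 1)
      comps = norm.ppf(u, loc=nominal, scale=sigma)   # map through each nuclide's PDF
      comps /= comps.sum(axis=1, keepdims=True)       # renormalise each composition

      def run_mcnp(c):
          # placeholder for writing an MCNP5 input deck and parsing k_eff back out;
          # a toy surrogate stands in for the actual transport calculation here
          return 1.0 + 4.0 * (c[0] - nominal[0])

      keff = np.array([run_mcnp(c) for c in comps])
      print(keff.mean(), keff.std(ddof=1))    # spread of k_eff due to composition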

  6. Extrapolation, uncertainty factors, and the precautionary principle.

    PubMed

    Steel, Daniel

    2011-09-01

    This essay examines the relationship between the precautionary principle and uncertainty factors used by toxicologists to estimate acceptable exposure levels for toxic chemicals from animal experiments. It shows that the adoption of uncertainty factors in the United States in the 1950s can be understood by reference to the precautionary principle, but not by cost-benefit analysis because of a lack of relevant quantitative data at that time. In addition, it argues that uncertainty factors continue to be relevant to efforts to implement the precautionary principle and that the precautionary principle should not be restricted to cases involving unquantifiable hazards.

  7. Food additives

    PubMed Central

    Spencer, Michael

    1974-01-01

    Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857

  8. The Crucial Role of Error Correlation for Uncertainty Modeling of CFD-Based Aerodynamics Increments

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Walker, Eric L.

    2011-01-01

    The Ares I ascent aerodynamics database for Design Cycle 3 (DAC-3) was built from wind-tunnel test results and CFD solutions. The wind tunnel results were used to build the baseline response surfaces for wind-tunnel Reynolds numbers at power-off conditions. The CFD solutions were used to build increments to account for Reynolds number effects. We calculate the validation errors for the primary CFD code results at wind tunnel Reynolds number power-off conditions and would like to be able to use those errors to predict the validation errors for the CFD increments. However, the validation errors are large compared to the increments. We suggest a way forward that is consistent with common practice in wind tunnel testing which is to assume that systematic errors in the measurement process and/or the environment will subtract out when increments are calculated, thus making increments more reliable with smaller uncertainty than absolute values of the aerodynamic coefficients. A similar practice has arisen for the use of CFD to generate aerodynamic database increments. The basis of this practice is the assumption of strong correlation of the systematic errors inherent in each of the results used to generate an increment. The assumption of strong correlation is the inferential link between the observed validation uncertainties at wind-tunnel Reynolds numbers and the uncertainties to be predicted for flight. In this paper, we suggest a way to estimate the correlation coefficient and demonstrate the approach using code-to-code differences that were obtained for quality control purposes during the Ares I CFD campaign. Finally, since we can expect the increments to be relatively small compared to the baseline response surface and to be typically of the order of the baseline uncertainty, we find that it is necessary to be able to show that the correlation coefficients are close to unity to avoid overinflating the overall database uncertainty with the addition of the increments.
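
    The error algebra behind this practice is compact: if two results share correlated systematic errors, the increment they generate has variance Var(Δ) = σ_A² + σ_B² - 2ρ σ_A σ_B, so ρ close to unity is what keeps the increment uncertainty small. A numerical sketch of backing ρ out of code-to-code differences (all values invented, not from the Ares I campaign):

      import numpy as np

      sigma_a, sigma_b = 0.020, 0.022   # assumed validation std. dev. of each code
      # hypothetical code-to-code differences at matched conditions (QC pairs)
      diffs = np.array([0.004, -0.002, 0.003, 0.001, -0.003, 0.002])

      # back out the correlation coefficient of the systematic errors from
      # Var(e_A - e_B) = sigma_a^2 + sigma_b^2 - 2*rho*sigma_a*sigma_b
      rho = (sigma_a**2 + sigma_b**2 - diffs.var(ddof=1)) / (2 * sigma_a * sigma_b)

      # uncertainty of an increment built from the two correlated results
      sigma_inc = np.sqrt(sigma_a**2 + sigma_b**2 - 2 * rho * sigma_a * sigma_b)
      print(rho, sigma_inc)   # rho near 1 keeps sigma_inc well below sigma_a, sigma_b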

  9. Communicating scientific uncertainty

    PubMed Central

    Fischhoff, Baruch; Davis, Alex L.

    2014-01-01

    All science has uncertainty. Unless that uncertainty is communicated effectively, decision makers may put too much or too little faith in it. The information that needs to be communicated depends on the decisions that people face. Are they (i) looking for a signal (e.g., whether to evacuate before a hurricane), (ii) choosing among fixed options (e.g., which medical treatment is best), or (iii) learning to create options (e.g., how to regulate nanotechnology)? We examine these three classes of decisions in terms of how to characterize, assess, and convey the uncertainties relevant to each. We then offer a protocol for summarizing the many possible sources of uncertainty in standard terms, designed to impose a minimal burden on scientists, while gradually educating those whose decisions depend on their work. Its goals are better decisions, better science, and better support for science. PMID:25225390

  10. Communicating scientific uncertainty.

    PubMed

    Fischhoff, Baruch; Davis, Alex L

    2014-09-16

    All science has uncertainty. Unless that uncertainty is communicated effectively, decision makers may put too much or too little faith in it. The information that needs to be communicated depends on the decisions that people face. Are they (i) looking for a signal (e.g., whether to evacuate before a hurricane), (ii) choosing among fixed options (e.g., which medical treatment is best), or (iii) learning to create options (e.g., how to regulate nanotechnology)? We examine these three classes of decisions in terms of how to characterize, assess, and convey the uncertainties relevant to each. We then offer a protocol for summarizing the many possible sources of uncertainty in standard terms, designed to impose a minimal burden on scientists, while gradually educating those whose decisions depend on their work. Its goals are better decisions, better science, and better support for science.

  11. Evaluating prediction uncertainty

    SciTech Connect

    McKay, M.D.

    1995-03-01

    The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented.
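
    As a toy illustration of the variance-ratio idea (plain Monte Carlo with binning rather than the report's replicated Latin hypercube machinery, and with an invented model), the importance indicator Var(E[Y|X_i])/Var(Y) can be estimated as follows; note that the inert input x3 and the nonlinear input x2 are still ranked correctly without any linearity assumption:

      import numpy as np

      rng = np.random.default_rng(2)
      n = 10_000
      x1, x2, x3 = rng.uniform(-1.0, 1.0, (3, n))
      y = 4.0 * x1 + x2 ** 2 + 0.1 * rng.normal(size=n)   # toy model; x3 is inert

      def importance(x, y, bins=20):
          # estimate Var(E[Y | X]) / Var(Y) by comparing the prediction
          # distribution with conditional prediction distributions
          edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1)[1:-1])
          idx = np.digitize(x, edges)
          cond = np.array([y[idx == b].mean() for b in range(bins)])
          cnt = np.array([(idx == b).sum() for b in range(bins)])
          return float(np.sum(cnt * (cond - y.mean()) ** 2) / (n * y.var()))

      for name, x in (("x1", x1), ("x2", x2), ("x3", x3)):
          print(name, round(importance(x, y), 3))   # x1 dominates, x3 ~ 0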

  12. Conundrums with uncertainty factors.

    PubMed

    Cooke, Roger

    2010-03-01

    The practice of uncertainty factors as applied to noncancer endpoints in the IRIS database harkens back to traditional safety factors. In the era before risk quantification, these were used to build in a "margin of safety." As risk quantification takes hold, the safety factor methods yield to quantitative risk calculations to guarantee safety. Many authors believe that uncertainty factors can be given a probabilistic interpretation as ratios of response rates, and that the reference values computed according to the IRIS methodology can thus be converted to random variables whose distributions can be computed with Monte Carlo methods, based on the distributions of the uncertainty factors. Recent proposals from the National Research Council echo this view. Based on probabilistic arguments, several authors claim that the current practice of uncertainty factors is overprotective. When interpreted probabilistically, uncertainty factors entail very strong assumptions on the underlying response rates. For example, the factor for extrapolating from animal to human is the same whether the dosage is chronic or subchronic. Together with independence assumptions, these assumptions entail that the covariance matrix of the logged response rates is singular. In other words, the accumulated assumptions entail a log-linear dependence between the response rates. This in turn means that any uncertainty analysis based on these assumptions is ill-conditioned; it effectively computes uncertainty conditional on a set of zero probability. The practice of uncertainty factors is due for a thorough review. Two directions are briefly sketched, one based on standard regression models, and one based on nonparametric continuous Bayesian belief nets. PMID:20030767
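
    The singularity claim is easy to reproduce numerically. Below is a hypothetical construction (not from the paper) in which logged response rates are built from independent logged uncertainty factors; reusing the same animal-to-human factor for chronic and subchronic dosing creates an exact log-linear dependence, so the covariance matrix is rank-deficient:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000
      base  = rng.normal(size=n)       # log response rate: animal, subchronic
      uf_ah = rng.normal(size=n)       # log animal-to-human factor (dose-independent)
      uf_sc = rng.normal(size=n)       # log subchronic-to-chronic factor

      # the factor assumptions express all four logged response rates
      # through only three independent quantities:
      animal_sub   = base
      animal_chron = base - uf_sc
      human_sub    = base - uf_ah
      human_chron  = base - uf_ah - uf_sc   # same animal-to-human factor reused

      cov = np.cov(np.vstack([animal_sub, animal_chron, human_sub, human_chron]))
      print(np.linalg.matrix_rank(cov))     # 3 < 4: the covariance matrix is singular
      # exact dependence: human_chron - human_sub - animal_chron + animal_sub = 0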

  13. Dasymetric Modeling and Uncertainty

    PubMed Central

    Nagle, Nicholas N.; Buttenfield, Barbara P.; Leyk, Stefan; Speilman, Seth

    2014-01-01

    Dasymetric models increase the spatial resolution of population data by incorporating related ancillary data layers. The role of uncertainty in dasymetric modeling has not yet been fully addressed. Uncertainty is usually present because most population data are themselves uncertain, and/or the geographic processes that connect population and the ancillary data layers are not precisely known. A new dasymetric methodology - the Penalized Maximum Entropy Dasymetric Model (P-MEDM) - is presented that enables these sources of uncertainty to be represented and modeled. The P-MEDM propagates uncertainty through the model and yields fine-resolution population estimates with associated measures of uncertainty. This methodology contains a number of other benefits of theoretical and practical interest. In dasymetric modeling, researchers often struggle with identifying a relationship between population and ancillary data layers. The P-MEDM simplifies this step by unifying how ancillary data are included. The P-MEDM also allows a rich array of data to be included, with disparate spatial resolutions, attribute resolutions, and uncertainties. While the P-MEDM does not necessarily produce more precise estimates than do existing approaches, it does help to unify how data enter the dasymetric model, it increases the types of data that may be used, and it allows geographers to characterize the quality of their dasymetric estimates. We present an application of the P-MEDM that includes household-level survey data combined with higher spatial resolution data such as from census tracts, block groups, and land cover classifications. PMID:25067846

  14. Uncertainty in QSAR predictions.

    PubMed

    Sahlin, Ullrika

    2013-03-01

    It is relevant to consider uncertainty in individual predictions when quantitative structure-activity (or property) relationships (QSARs) are used to support decisions of high societal concern. Successful communication of uncertainty in the integration of QSARs in chemical safety assessment under the EU Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) system can be facilitated by a common understanding of how to define, characterise, assess and evaluate uncertainty in QSAR predictions. A QSAR prediction is, compared to experimental estimates, subject to added uncertainty that comes from the use of a model instead of empirically-based estimates. A framework is provided to aid the distinction between different types of uncertainty in a QSAR prediction: quantitative, i.e. for regressions related to the error in a prediction and characterised by a predictive distribution; and qualitative, by expressing our confidence in the model for predicting a particular compound based on a quantitative measure of predictive reliability. It is possible to assess a quantitative (i.e. probabilistic) predictive distribution, given the supervised learning algorithm, the underlying QSAR data, a probability model for uncertainty and a statistical principle for inference. The integration of QSARs into risk assessment may be facilitated by the inclusion of the assessment of predictive error and predictive reliability into the "unambiguous algorithm", as outlined in the second OECD principle.
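
    To make the quantitative side concrete, here is a minimal sketch of a probabilistic predictive distribution for a regression-type QSAR, with Bayesian ridge regression as one possible choice of supervised learning algorithm and probability model; the descriptors and activities are simulated, not a real REACH dataset:

      import numpy as np
      from sklearn.linear_model import BayesianRidge

      rng = np.random.default_rng(4)
      X = rng.normal(size=(60, 3))                      # toy molecular descriptors
      y = X @ np.array([0.8, -0.5, 0.2]) + 0.1 * rng.normal(size=60)  # toy activity

      model = BayesianRidge().fit(X, y)
      x_new = rng.normal(size=(1, 3))                   # compound to be predicted
      mean, std = model.predict(x_new, return_std=True)
      # 'mean' is the point prediction; 'std' characterises the predictive
      # distribution, i.e. the quantitative uncertainty of this single prediction
      print(mean[0], std[0])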

  15. Multi-thresholds for fault isolation in the presence of uncertainties.

    PubMed

    Touati, Youcef; Mellal, Mohamed Arezki; Benazzouz, Djamel

    2016-05-01

    Monitoring of faults is an important task in mechatronics. It involves the detection and isolation of faults, which are performed by using residuals. These residuals represent numerical values that define certain intervals called thresholds. In fact, a fault is detected if the residuals exceed the thresholds. In addition, each considered fault must activate a unique set of residuals to be isolated. However, in the presence of uncertainties, false decisions can occur due to the low sensitivity of certain residuals towards faults. In this paper, an efficient approach to making decisions on fault isolation in the presence of uncertainties is proposed. Based on the bond graph tool, the approach is developed in order to generate systematically the relations between residuals and faults. The generated relations allow the estimation of the minimum detectable and isolable fault values. These values are then used to calculate the isolation thresholds for each residual.
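
    The isolation logic reduces to comparing the pattern of fired residuals against a fault signature matrix. A schematic sketch (the signatures and thresholds below are invented; deriving them systematically from a bond graph model is the paper's contribution):

      import numpy as np

      # rows: residuals, columns: faults; 1 = residual is sensitive to that fault
      signature = np.array([[1, 0, 1],
                            [0, 1, 1],
                            [1, 1, 0]])
      thresholds = np.array([0.05, 0.08, 0.04])   # per-residual isolation thresholds

      def isolate(residuals):
          fired = (np.abs(residuals) > thresholds).astype(int)
          # a fault is isolated when the fired pattern matches its unique signature
          return [j for j in range(signature.shape[1])
                  if np.array_equal(fired, signature[:, j])]

      print(isolate(np.array([0.06, 0.00, 0.05])))  # -> [0]: fault 0 isolated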

  16. Hard Constraints in Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2008-01-01

    This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
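
    One way to read the methodology: find the largest uncertainty-set size for which the worst-case constraint value remains feasible, then compare that critical size with the actual uncertainty model. A minimal sampling-plus-bisection sketch for a hyper-spherical set (the constraint, the dimensions, and the boundary-sampling shortcut are illustrative assumptions; the paper's tools are optimization-based):

      import numpy as np

      rng = np.random.default_rng(5)

      def requirement(p):
          # hard constraint g(p) <= 0 on the uncertain parameters (toy example)
          return p[:, 0] ** 2 + 0.5 * p[:, 1] - 1.0

      def worst_case(radius, nominal, n=20_000):
          # sample the sphere surface (assumes the worst case lies on the boundary)
          d = rng.normal(size=(n, nominal.size))
          d *= radius / np.linalg.norm(d, axis=1, keepdims=True)
          return requirement(nominal + d).max()

      nominal = np.zeros(2)
      lo, hi = 0.0, 5.0
      for _ in range(40):                    # bisect on the largest safe radius
          mid = 0.5 * (lo + hi)
          lo, hi = (mid, hi) if worst_case(mid, nominal) <= 0.0 else (lo, mid)
      print(lo)   # critical radius; compare against the given uncertainty model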

  17. Systematic errors in long baseline oscillation experiments

    SciTech Connect

    Harris, Deborah A.; /Fermilab

    2006-02-01

    This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

  18. Quantifying Uncertainty in Epidemiological Models

    SciTech Connect

    Ramanathan, Arvind; Jha, Sumit Kumar

    2012-01-01

    Modern epidemiology has made use of a number of mathematical models, including ordinary differential equation (ODE) based models and agent based models (ABMs) to describe the dynamics of how a disease may spread within a population and enable the rational design of strategies for intervention that effectively contain the spread of the disease. Although such predictions are of fundamental importance in preventing the next global pandemic, there is a significant gap in trusting the outcomes/predictions solely based on such models. Hence, there is a need to develop approaches such that mathematical models can be calibrated against historical data. In addition, there is a need to develop rigorous uncertainty quantification approaches that can provide insights into when a model will fail and characterize the confidence in the (possibly multiple) model outcomes/predictions, when such retrospective analysis cannot be performed. In this paper, we outline an approach to develop uncertainty quantification approaches for epidemiological models using formal methods and model checking. By specifying the outcomes expected from a model in a suitable spatio-temporal logic, we use probabilistic model checking methods to quantify the probability with which the epidemiological model satisfies the specification. We argue that statistical model checking methods can solve the uncertainty quantification problem for complex epidemiological models.
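
    In the spirit of statistical model checking (here a fixed-sample Monte Carlo estimate rather than the sequential tests used by probabilistic model checkers, and with an invented stochastic SIR model), the probability that a model satisfies a property can be estimated with a binomial confidence interval:

      import numpy as np

      rng = np.random.default_rng(6)

      def peak_infected(beta=0.3, gamma=0.1, n=1000, i0=5, days=200):
          # stochastic SIR with daily binomial transitions
          s, i, r, peak = n - i0, i0, 0, i0
          for _ in range(days):
              new_inf = rng.binomial(s, 1.0 - np.exp(-beta * i / n))
              new_rec = rng.binomial(i, 1.0 - np.exp(-gamma))
              s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
              peak = max(peak, i)
          return peak

      # property: "the infected peak never exceeds 30% of the population"
      hold = np.array([peak_infected() < 300 for _ in range(2000)])
      p = hold.mean()
      half = 1.96 * np.sqrt(p * (1 - p) / hold.size)
      print(f"P(property) ~ {p:.3f} +/- {half:.3f}")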

  19. Atmospheric feedback uncertainty dominates ocean heat uptake uncertainty for the transient climate response

    NASA Astrophysics Data System (ADS)

    MacDougall, Andrew H.; Swart, Neil C.; Knutti, Reto

    2015-04-01

    By absorbing heat and carbon the world ocean acts to slow the transient rate of climate change and to a great extent determines the magnitude of warming given a fixed budget of carbon emissions. The projected magnitude of future ocean heat uptake (OHU) varies substantially between the climate model simulations stored in the CMIP5 archive. In this study, analytical and statistical methods, in addition to climate model simulations with an intermediate-complexity climate model, are used to partition the uncertainty in future OHU in CMIP5 models into uncertainty in radiative forcing, the climate feedback parameter, ocean surface wind fields, and the structure of ocean models. We estimate that if only uncertainty in ocean model structure remained then the uncertainty in OHU would be reduced by 61%, and if only uncertainty in ocean surface wind fields remained then OHU uncertainty would be reduced by 87%. The regression method used to simultaneously estimate radiative forcing and the climate feedback parameter from climate model output leaves these parameters with anti-correlated uncertainty. If only uncertainty in radiative forcing and the climate feedback parameter remained then the uncertainty in OHU would be reduced by 9%. These results suggest that most of the uncertainty in OHU seen in CMIP5 models originates in uncertainties in how the atmosphere will respond to anthropogenic increases in greenhouse gas concentrations. Therefore, efforts to improve the representation of the ocean in climate models will have only a limited effect on reducing the uncertainty in the rate of transient climate change unless concurrent improvements are made in constraining atmospheric feedbacks.

  20. Classification images with uncertainty

    PubMed Central

    Tjan, Bosco S.; Nandy, Anirvan S.

    2009-01-01

    Classification image and other similar noise-driven linear methods have found increasingly wider applications in revealing psychophysical receptive field structures or perceptual templates. These techniques are relatively easy to deploy, and the results are simple to interpret. However, being a linear technique, the utility of the classification-image method is believed to be limited. Uncertainty about the target stimuli on the part of an observer will result in a classification image that is the superposition of all possible templates for all the possible signals. In the context of a well-established uncertainty model, which pools the outputs of a large set of linear frontends with a max operator, we show analytically, in simulations, and with human experiments that the effect of intrinsic uncertainty can be limited or even eliminated by presenting a signal at a relatively high contrast in a classification-image experiment. We further argue that the subimages from different stimulus-response categories should not be combined, as is conventionally done. We show that when the signal contrast is high, the subimages from the error trials contain a clear high-contrast image that is negatively correlated with the perceptual template associated with the presented signal, relatively unaffected by uncertainty. The subimages also contain a “haze” that is of a much lower contrast and is positively correlated with the superposition of all the templates associated with the erroneous response. In the case of spatial uncertainty, we show that the spatial extent of the uncertainty can be estimated from the classification subimages. We link intrinsic uncertainty to invariance and suggest that this signal-clamped classification-image method will find general applications in uncovering the underlying representations of high-level neural and psychophysical mechanisms. PMID:16889477
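
    The basic recipe, and the paper's point about keeping the subimages separate, can be sketched with a simulated linear observer (all task parameters below are invented):

      import numpy as np

      rng = np.random.default_rng(7)
      n, dim = 5000, 64
      signal = np.zeros(dim); signal[28:36] = 1.0          # hypothetical target
      template = signal / np.linalg.norm(signal)

      noise = rng.normal(0.0, 1.0, (n, dim))
      present = rng.random(n) < 0.5
      stim = noise + np.where(present[:, None], 0.8 * signal, 0.0)
      resp = stim @ template + 0.5 * rng.normal(size=n) > 0.4  # noisy linear observer

      # the four stimulus-response subimages (mean noise field per category)
      sub = {(s, r): noise[(present == s) & (resp == r)].mean(axis=0)
             for s in (True, False) for r in (True, False)}
      # conventional combined classification image; the paper argues the subimages
      # are more informative and should be inspected separately at high contrast
      ci = sub[(True, True)] + sub[(False, True)] - sub[(True, False)] - sub[(False, False)]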

  1. Classification images with uncertainty.

    PubMed

    Tjan, Bosco S; Nandy, Anirvan S

    2006-04-04

    Classification image and other similar noise-driven linear methods have found increasingly wider applications in revealing psychophysical receptive field structures or perceptual templates. These techniques are relatively easy to deploy, and the results are simple to interpret. However, being a linear technique, the utility of the classification-image method is believed to be limited. Uncertainty about the target stimuli on the part of an observer will result in a classification image that is the superposition of all possible templates for all the possible signals. In the context of a well-established uncertainty model, which pools the outputs of a large set of linear frontends with a max operator, we show analytically, in simulations, and with human experiments that the effect of intrinsic uncertainty can be limited or even eliminated by presenting a signal at a relatively high contrast in a classification-image experiment. We further argue that the subimages from different stimulus-response categories should not be combined, as is conventionally done. We show that when the signal contrast is high, the subimages from the error trials contain a clear high-contrast image that is negatively correlated with the perceptual template associated with the presented signal, relatively unaffected by uncertainty. The subimages also contain a "haze" that is of a much lower contrast and is positively correlated with the superposition of all the templates associated with the erroneous response. In the case of spatial uncertainty, we show that the spatial extent of the uncertainty can be estimated from the classification subimages. We link intrinsic uncertainty to invariance and suggest that this signal-clamped classification-image method will find general applications in uncovering the underlying representations of high-level neural and psychophysical mechanisms.

  2. Grasping Objects with Environmentally Induced Position Uncertainty

    PubMed Central

    Christopoulos, Vassilios N.; Schrater, Paul R.

    2009-01-01

    Due to noisy motor commands and imprecise and ambiguous sensory information, there is often substantial uncertainty about the relative location between our body and objects in the environment. Little is known about how well people manage and compensate for this uncertainty in purposive movement tasks like grasping. Grasping objects requires reach trajectories to generate object-finger contacts that permit stable lifting. For objects with position uncertainty, some trajectories are more efficient than others in terms of the probability of producing stable grasps. We hypothesize that people attempt to generate efficient grasp trajectories that produce stable grasps at first contact without requiring post-contact adjustments. We tested this hypothesis by comparing human uncertainty compensation in grasping objects against optimal predictions. Participants grasped and lifted a cylindrical object with position uncertainty, introduced by moving the cylinder with a robotic arm over a sequence of 5 positions sampled from a strongly oriented 2D Gaussian distribution. Preceding each reach, vision of the object was removed for the remainder of the trial and the cylinder was moved one additional time. In accord with optimal predictions, we found that people compensate by aligning the approach direction with the covariance angle to maintain grasp efficiency. This compensation results in a higher probability of achieving stable grasps at first contact than non-compensation strategies when grasping objects with directional position uncertainty, and the results provide the first demonstration that humans compensate for uncertainty in a complex purposive task. PMID:19834543

  3. Estimating uncertainty of inference for validation

    SciTech Connect

    Booker, Jane M; Langenbrunner, James R; Hemez, Francois M; Ross, Timothy J

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and its code are an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  4. Uncertainty quantification in lattice QCD calculations for nuclear physics

    SciTech Connect

    Beane, Silas R.; Detmold, William; Orginos, Kostas; Savage, Martin J.

    2015-02-05

    The numerical technique of Lattice QCD holds the promise of connecting the nuclear forces, nuclei, the spectrum and structure of hadrons, and the properties of matter under extreme conditions with the underlying theory of the strong interactions, quantum chromodynamics. A distinguishing, and thus far unique, feature of this formulation is that all of the associated uncertainties, both statistical and systematic, can, in principle, be systematically reduced to any desired precision with sufficient computational and human resources. As a result, we review the sources of uncertainty inherent in Lattice QCD calculations for nuclear physics, and discuss how each is quantified in current efforts.

  5. Visualization of Uncertainty

    NASA Astrophysics Data System (ADS)

    Jones, P. W.; Strelitz, R. A.

    2012-12-01

    The output of a simulation is best comprehended through the agency and methods of visualization, but a vital component of good science is knowledge of uncertainty. While great strides have been made in the quantification of uncertainty, especially in simulation, there is still a notable gap: there is no widely accepted means of simultaneously viewing the data and the associated uncertainty in one pane. Visualization saturates the screen, using the full range of color, shadow, opacity and tricks of perspective to display even a single variable. There is no room left in the visualization expert's repertoire for uncertainty. We present a method of visualizing uncertainty without sacrificing the clarity and power of the underlying visualization that works as well in 3-D and time-varying visualizations as it does in 2-D. At its heart, it relies on a principal tenet of continuum mechanics, replacing the notion of value at a point with a more diffuse notion of density as a measure of content in a region. First, the uncertainties calculated or tabulated at each point are transformed into a piecewise continuous field of uncertainty density. We next compute a weighted Voronoi tessellation of N user-specified convex polygonal/polyhedral cells such that each cell contains the same amount of uncertainty as defined by that density field. The problem thus devolves into a minimization. Computation of such a spatial decomposition is O(N*N), and can be computed iteratively, making it possible to update easily, and quickly, over time. The polygonal mesh does not interfere with the visualization of the data and can be easily toggled on or off. In this representation, a small cell implies a great concentration of uncertainty, and conversely. The content-weighted polygons are identical to the cartograms familiar to the information visualization community in the depiction of quantities such as voting results per state. Furthermore, one can dispense with the mesh or edges entirely, to be replaced by symbols or glyphs.

  6. Interpreting uncertainty terms.

    PubMed

    Holtgraves, Thomas

    2014-08-01

    Uncertainty terms (e.g., some, possible, good, etc.) are words that do not have a fixed referent and hence are relatively ambiguous. A model is proposed that specifies how, from the hearer's perspective, recognition of facework as a potential motive for the use of an uncertainty term results in a calibration of the intended meaning of that term. Four experiments are reported that examine the impact of face threat, and the variables that affect it (e.g., power), on the manner in which a variety of uncertainty terms (probability terms, quantifiers, frequency terms, etc.) are interpreted. Overall, the results demonstrate that increased face threat in a situation will result in a more negative interpretation of an utterance containing an uncertainty term. That the interpretation of so many different types of uncertainty terms is affected in the same way suggests the operation of a fundamental principle of language use, one with important implications for the communication of risk, subjective experience, and so on.

  7. Incorporating climate change into systematic conservation planning

    USGS Publications Warehouse

    Groves, Craig R.; Game, Edward T.; Anderson, Mark G.; Cross, Molly; Enquist, Carolyn; Ferdana, Zach; Girvetz, Evan; Gondor, Anne; Hall, Kimberly R.; Higgins, Jonathan; Marshall, Rob; Popper, Ken; Schill, Steve; Shafer, Sarah L.

    2012-01-01

    The principles of systematic conservation planning are now widely used by governments and non-government organizations alike to develop biodiversity conservation plans for countries, states, regions, and ecoregions. Many of the species and ecosystems these plans were designed to conserve are now being affected by climate change, and there is a critical need to incorporate new and complementary approaches into these plans that will aid species and ecosystems in adjusting to potential climate change impacts. We propose five approaches to climate change adaptation that can be integrated into existing or new biodiversity conservation plans: (1) conserving the geophysical stage, (2) protecting climatic refugia, (3) enhancing regional connectivity, (4) sustaining ecosystem process and function, and (5) capitalizing on opportunities emerging in response to climate change. We discuss both key assumptions behind each approach and the trade-offs involved in using the approach for conservation planning. We also summarize additional data beyond those typically used in systematic conservation plans required to implement these approaches. A major strength of these approaches is that they are largely robust to the uncertainty in how climate impacts may manifest in any given region.

  8. Geometric formulation of the uncertainty principle

    NASA Astrophysics Data System (ADS)

    Bosyk, G. M.; Osán, T. M.; Lamberti, P. W.; Portesi, M.

    2014-03-01

    A geometric approach to formulate the uncertainty principle between quantum observables acting on an N-dimensional Hilbert space is proposed. We consider the fidelity between a density operator associated with a quantum system and a projector associated with an observable, and interpret it as the probability of obtaining the outcome corresponding to that projector. We make use of fidelity-based metrics such as angle, Bures, and root infidelity to propose a measure of uncertainty. The triangle inequality allows us to derive a family of uncertainty relations. In the case of the angle metric, we recover the Landau-Pollak inequality for pure states and show, in a natural way, how to extend it to the case of mixed states in arbitrary dimension. In addition, we derive and compare alternative uncertainty relations when using other known fidelity-based metrics.
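
    Schematically, and in notation assumed here rather than taken from the paper, the construction for the angle metric looks as follows; the triangle inequality turns closeness of the state to two projectors into a bound of Landau-Pollak type.

      % fidelity of a state with a projector is an outcome probability
      \[
        F(\rho, P_A) = \mathrm{Tr}(\rho P_A) = p_A , \qquad
        \mathcal{A}(\rho, P_A) = \arccos \sqrt{F(\rho, P_A)} .
      \]
      % the angle \mathcal{A} is a metric, so the triangle inequality
      % \mathcal{A}(P_A, P_B) \le \mathcal{A}(\rho, P_A) + \mathcal{A}(\rho, P_B)
      % yields an uncertainty relation on the two outcome probabilities:
      \[
        \arccos \sqrt{p_A} + \arccos \sqrt{p_B} \;\ge\; \mathcal{A}(P_A, P_B) .
      \]
      % for pure states and rank-one projectors this reproduces the Landau-Pollak
      % inequality; the Bures and root-infidelity metrics give further members
      % of the family of relations.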

  9. Quantifying reliability uncertainty : a proof of concept.

    SciTech Connect

    Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna; Lorio, John F.; Fatherley, Quinn; Anderson-Cook, Christine; Wilson, Alyson G.; Zurn, Rena M.

    2009-10-01

    This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
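
    For the go/no-go part of such data, the Bayesian branch is easy to sketch: put Beta priors on component reliabilities, sample the posteriors, and push the draws through the system structure. The counts, priors, and pure-series structure below are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(8)
      # go/no-go data per component: (successes, trials)
      data = [(48, 50), (29, 30), (10, 10)]      # last component: no observed failures

      # posterior draws under uniform Beta(1, 1) priors
      draws = np.column_stack([rng.beta(s + 1, n - s + 1, 200_000) for s, n in data])
      system = draws.prod(axis=1)                # series system: all components must work

      print(system.mean(), np.percentile(system, [2.5, 97.5]))
      # the zero-failure component's prior choice drives the lower bound,
      # echoing the sensitivity noted in the abstract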

  10. Optimising uncertainty in physical sample preparation.

    PubMed

    Lyn, Jennifer A; Ramsey, Michael H; Damant, Andrew P; Wood, Roger

    2005-11-01

    Uncertainty associated with the result of a measurement can be dominated by the physical sample preparation stage of the measurement process. In view of this, the Optimised Uncertainty (OU) methodology has been further developed to allow the optimisation of the uncertainty from this source, in addition to that from the primary sampling and the subsequent chemical analysis. This new methodology for the optimisation of physical sample preparation uncertainty (u(prep), estimated as s(prep)) is applied for the first time, to a case study of myclobutanil in retail strawberries. An increase in expenditure (+7865%) on the preparatory process was advised in order to reduce the s(prep) by the recommended 69%. This reduction is desirable given the predicted overall saving, under optimised conditions, of 33,000 pounds Sterling per batch. This new methodology has been shown to provide guidance on the appropriate distribution of resources between the three principal stages of a measurement process, including physical sample preparation.

  11. Measurement uncertainty relations

    SciTech Connect

    Busch, Paul; Lahti, Pekka; Werner, Reinhard F.

    2014-04-15

    Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.

  12. Quantifying Uncertainties in Land-Surface Microwave Emissivity Retrievals

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko

    2013-01-01

    Uncertainties in the retrievals of microwave land-surface emissivities are quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including the Special Sensor Microwave Imager, the Tropical Rainfall Measuring Mission Microwave Imager, and the Advanced Microwave Scanning Radiometer for Earth Observing System, are studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land-surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally, these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging 1%-4% (3-12 K) over desert and 1%-7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.

  13. Quantifying Uncertainties in Land Surface Microwave Emissivity Retrievals

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko

    2012-01-01

    Uncertainties in the retrievals of microwave land surface emissivities were quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including SSM/I, TMI and AMSR-E, were studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging 1-4% (3-12 K) over desert and 1-7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.

  14. Serenity in political uncertainty.

    PubMed

    Doumit, Rita; Afifi, Rema A; Devon, Holli A

    2015-01-01

    College students are often faced with academic and personal stressors that threaten their well-being. Added to that may be political and environmental stressors such as acts of violence on the streets, interruptions in schooling, car bombings, targeted religious intimidations, financial hardship, and uncertainty of obtaining a job after graduation. Research on how college students adapt to the latter stressors is limited. The aims of this study were (1) to investigate the associations between stress, uncertainty, resilience, social support, withdrawal coping, and well-being for Lebanese youth during their first year of college and (2) to determine whether these variables predicted well-being. A sample of 293 first-year students enrolled in a private university in Lebanon completed a self-reported questionnaire in the classroom setting. The mean age of sample participants was 18.1 years, with nearly an equal percentage of males and females (53.2% vs 46.8%), who lived with their family (92.5%), and whose family reported high income levels (68.4%). Multiple regression analyses revealed that the best determinants of well-being were resilience, uncertainty, social support, and gender, which accounted for 54.1% of the variance. Despite living in an environment of frequent violence and political uncertainty, Lebanese youth in this study have a strong sense of well-being and are able to go on with their lives. This research adds to our understanding of how adolescents can adapt to stressors of frequent violence and political uncertainty. Further research is recommended to understand the mechanisms through which young people cope with political uncertainty and violence. PMID:25658930

  15. Serenity in political uncertainty.

    PubMed

    Doumit, Rita; Afifi, Rema A; Devon, Holli A

    2015-01-01

    College students are often faced with academic and personal stressors that threaten their well-being. Added to that may be political and environmental stressors such as acts of violence on the streets, interruptions in schooling, car bombings, targeted religious intimidations, financial hardship, and uncertainty of obtaining a job after graduation. Research on how college students adapt to the latter stressors is limited. The aims of this study were (1) to investigate the associations between stress, uncertainty, resilience, social support, withdrawal coping, and well-being for Lebanese youth during their first year of college and (2) to determine whether these variables predicted well-being. A sample of 293 first-year students enrolled in a private university in Lebanon completed a self-reported questionnaire in the classroom setting. The mean age of sample participants was 18.1 years, with nearly an equal percentage of males and females (53.2% vs 46.8%), who lived with their family (92.5%), and whose family reported high income levels (68.4%). Multiple regression analyses revealed that the best determinants of well-being were resilience, uncertainty, social support, and gender, which accounted for 54.1% of the variance. Despite living in an environment of frequent violence and political uncertainty, Lebanese youth in this study have a strong sense of well-being and are able to go on with their lives. This research adds to our understanding of how adolescents can adapt to stressors of frequent violence and political uncertainty. Further research is recommended to understand the mechanisms through which young people cope with political uncertainty and violence.

  16. Uncertainty and calibration analysis

    SciTech Connect

    Coutts, D.A.

    1991-03-01

    All measurements contain some deviation from the true value which is being measured. In the common vernacular this deviation between the true value and the measured value is called an inaccuracy, an error, or a mistake. Since all measurements contain errors, it is necessary to accept that there is a limit to how accurate a measurement can be. The uncertainty interval, combined with the confidence level, is one measure of the accuracy for a measurement or value. Without a statement of uncertainty (or a similar parameter) it is not possible to evaluate whether the accuracy of the measurement, or data, is appropriate. The preparation of technical reports, calibration evaluations, and design calculations should consider the accuracy of measurements and data being used. There are many methods to accomplish this. This report provides a consistent method for the handling of measurement tolerances, calibration evaluations and uncertainty calculations. The SRS Quality Assurance (QA) Program requires that the uncertainty of technical data and instrument calibrations be acknowledged and estimated. The QA Program makes some specific technical requirements related to the subject but does not provide a philosophy or method on how uncertainty should be estimated. This report was prepared to provide a technical basis to support the calculation of uncertainties and the calibration of measurement and test equipment for any activity within the Experimental Thermal-Hydraulics (ETH) Group. The methods proposed in this report provide a graded approach for estimating the uncertainty of measurements, data, and calibrations. The method is based on the national consensus standard, ANSI/ASME PTC 19.1.

  17. Application of a Novel Dose-Uncertainty Model for Dose-Uncertainty Analysis in Prostate Intensity-Modulated Radiotherapy

    SciTech Connect

    Jin Hosang; Palta, Jatinder R.; Kim, You-Hyun; Kim, Siyong

    2010-11-01

    Purpose: To analyze dose uncertainty using a previously published dose-uncertainty model, and to assess potential dosimetric risks existing in prostate intensity-modulated radiotherapy (IMRT). Methods and Materials: The dose-uncertainty model provides a three-dimensional (3D) dose-uncertainty distribution at a given confidence level. For 8 retrospectively selected patients, dose-uncertainty maps were constructed using the dose-uncertainty model at the 95% confidence level. In addition to uncertainties inherent to the radiation treatment planning system, four scenarios of spatial errors were considered: machine only (S1), S1 + intrafraction, S1 + interfraction, and S1 + both intrafraction and interfraction errors. To evaluate the potential risks of the IMRT plans, three dose-uncertainty-based plan evaluation tools were introduced: the confidence-weighted dose-volume histogram, the confidence-weighted dose distribution, and the dose-uncertainty-volume histogram. Results: Dose uncertainty caused by interfraction setup error was more significant than that of intrafraction motion error. The maximum dose uncertainty (95% confidence) of the clinical target volume (CTV) was smaller than 5% of the prescribed dose in all but two cases (13.9% and 10.2%). The dose uncertainty for 95% of the CTV volume ranged from 1.3% to 2.9% of the prescribed dose. Conclusions: The dose uncertainty in prostate IMRT could be evaluated using the dose-uncertainty model. Prostate IMRT plans satisfying the same plan objectives could generate significantly different dose uncertainties because of a complex interplay of many uncertainty sources. The uncertainty-based plan evaluation contributes to generating reliable and error-resistant treatment plans.

  18. Individuals’ Uncertainty about Future Social Security Benefits and Portfolio Choice

    PubMed Central

    Delavande, Adeline

    2013-01-01

    Little is known about the degree to which individuals are uncertain about their future Social Security benefits, how this varies within the U.S. population, and whether this uncertainty influences financial decisions related to retirement planning. To illuminate these issues, we present empirical evidence from the Health and Retirement Study Internet Survey and document systematic variation in respondents’ uncertainty about their future Social Security benefits by individual characteristics. We find that respondents with higher levels of uncertainty about future benefits hold a smaller share of their wealth in stocks. PMID:23914049

  19. Weighted Uncertainty Relations

    PubMed Central

    Xiao, Yunlong; Jing, Naihuan; Li-Jost, Xianqing; Fei, Shao-Ming

    2016-01-01

    Recently, Maccone and Pati have given two stronger uncertainty relations based on the sum of variances and one of them is nontrivial when the quantum state is not an eigenstate of the sum of the observables. We derive a family of weighted uncertainty relations to provide an optimal lower bound for all situations and remove the restriction on the quantum state. Generalization to multi-observable cases is also given and an optimal lower bound for the weighted sum of the variances is obtained in general quantum situation. PMID:26984295

  20. The legacy of uncertainty

    NASA Technical Reports Server (NTRS)

    Brown, Laurie M.

    1993-01-01

    An historical account is given of the circumstances whereby the uncertainty relations were introduced into physics by Heisenberg. The criticisms of QED on measurement-theoretical grounds by Landau and Peierls are then discussed, as well as the response to them by Bohr and Rosenfeld. Finally, some examples are given of how the new freedom to advance radical proposals, in part the result of the revolution brought about by 'uncertainty,' was implemented in dealing with the new phenomena encountered in elementary particle physics in the 1930's.

  1. Uncertainties in Hauser-Feshbach Neutron Capture Calculations for Astrophysics

    SciTech Connect

    Bertolli, M. G.; Kawano, T.; Little, H.

    2014-06-15

    The calculation of neutron capture cross sections in a statistical Hauser-Feshbach method has proved successful in numerous astrophysical applications. Of increasing interest is the uncertainty associated with the calculated Maxwellian averaged cross sections (MACS). Aspects of a statistical model that introduce a large amount of uncertainty are the level density model, the γ-ray strength function parameter, and the placement of E_low - the cut-off energy below which the Hauser-Feshbach method is not applicable. Utilizing the Los Alamos statistical model code CoH3, we investigate the appropriate treatment of these sources of uncertainty via systematics of nuclei in a local region for which experimental or evaluated data are available. In order to show the impact of uncertainty analysis on nuclear data for astrophysical applications, these new uncertainties will be propagated through the nucleosynthesis code NuGrid.

  2. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  3. Measurement uncertainty of adsorption testing of desiccant materials

    SciTech Connect

    Bingham, C E; Pesaran, A A

    1988-12-01

    The technique of measurement uncertainty analysis as described in the current ANSI/ASME standard is applied to the testing of desiccant materials in SERI's Sorption Test Facility. This paper estimates the elemental precision and systematic errors in these tests and propagates them separately to obtain the resulting uncertainty of the test parameters, including relative humidity (±0.03) and sorption capacity (±0.002 g/g). Errors generated by instrument calibration, data acquisition, and data reduction are considered. Measurement parameters that would improve the uncertainty of the results are identified. Using the uncertainty in the moisture capacity of a desiccant, the design engineer can estimate the uncertainty in performance of a dehumidifier for desiccant cooling systems with confidence. 6 refs., 2 figs., 8 tabs.
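
    The separate propagation of precision (random) and bias (systematic) errors described here reduces to two root-sum-square combinations. A minimal sketch, assuming a large-sample coverage factor t ≈ 2 and illustrative error values (not the actual SERI budget):

    ```python
    import math

    def combined_uncertainty(bias_errors, precision_errors, t=2.0):
        """ANSI/ASME-style combination: root-sum-square the elemental
        systematic (bias) errors and the elemental random (precision)
        errors separately, then combine; t ~ 2 gives roughly 95% coverage
        for large samples."""
        B = math.sqrt(sum(b**2 for b in bias_errors))       # systematic
        S = math.sqrt(sum(s**2 for s in precision_errors))  # random, 1-sigma
        return math.sqrt(B**2 + (t * S)**2)

    # e.g. relative humidity: calibration, data acquisition, data reduction
    U_rh = combined_uncertainty([0.02, 0.01], [0.01, 0.005])
    ```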

  4. Avoiding climate change uncertainties in Strategic Environmental Assessment

    SciTech Connect

    Larsen, Sanne Vammen; Kørnøv, Lone; Driscoll, Patrick

    2013-11-15

    This article is concerned with how Strategic Environmental Assessment (SEA) practice handles climate change uncertainties within the Danish planning system. First, a hypothetical model is set up for how uncertainty is handled and not handled in decision-making. The model incorporates the strategies ‘reduction’ and ‘resilience’, ‘denying’, ‘ignoring’ and ‘postponing’. Second, 151 Danish SEAs are analysed with a focus on the extent to which climate change uncertainties are acknowledged and presented, and the empirical findings are discussed in relation to the model. The findings indicate that despite incentives to do so, climate change uncertainties were systematically avoided or downplayed in all but 5 of the 151 SEAs that were reviewed. Finally, two possible explanatory mechanisms are proposed to explain this: conflict avoidance and a need to quantify uncertainty.

  5. Probabilistic accident consequence uncertainty analysis -- Late health effects uncertainty assessment. Volume 1: Main report

    SciTech Connect

    Little, M.P.; Muirhead, C.R.; Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Harper, F.T.; Hora, S.C.

    1997-12-01

    The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA late health effects models.

  6. Probabilistic accident consequence uncertainty analysis -- Early health effects uncertainty assessment. Volume 1: Main report

    SciTech Connect

    Haskin, F.E.; Harper, F.T.; Goossens, L.H.J.; Kraan, B.C.P.; Grupa, J.B.

    1997-12-01

    The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA early health effects models.

  7. ICYESS 2013: Understanding and Interpreting Uncertainty

    NASA Astrophysics Data System (ADS)

    Rauser, F.; Niederdrenk, L.; Schemann, V.; Schmidt, A.; Suesser, D.; Sonntag, S.

    2013-12-01

    We will report the outcomes and highlights of the Interdisciplinary Conference of Young Earth System Scientists (ICYESS) on Understanding and Interpreting Uncertainty, held in September 2013 in Hamburg, Germany. This conference is aimed at early-career scientists (Masters to Postdocs) from a large variety of scientific disciplines and backgrounds (natural, social and political sciences) and will enable 3 days of discussions on a variety of uncertainty-related aspects: 1) How do we deal with implicit and explicit uncertainty in our daily scientific work? What is uncertain for us, and for which reasons? 2) How can we communicate these uncertainties to other disciplines? E.g., is uncertainty in cloud parameterization, and hence in equilibrium climate sensitivity, a concept that is understood equally well in the natural and social sciences that deal with Earth System questions? Or, vice versa, is normative uncertainty, as in choosing a discount rate, relevant for natural scientists? How can those uncertainties be reconciled? 3) How can science communicate this uncertainty to the public? Is it useful at all? How are the different possible measures of uncertainty understood in different realms of public discourse? Basically, we want to learn from all disciplines that work together in the broad Earth System Science community how to understand and interpret uncertainty - and then transfer this understanding to the problem of how to communicate with the public, or its different layers and agents. ICYESS is structured in a way that participation is only possible via presentation, so every participant will give their own professional input into how the respective disciplines deal with uncertainty. Additionally, a large focus is put on communication techniques; there are no 'standard presentations' at ICYESS. Keynote lectures by renowned scientists and discussions will lead to a deeper interdisciplinary understanding of what we do not really know, and how to deal with it.

  8. Uncertainties in repository modeling

    SciTech Connect

    Wilson, J.R.

    1996-12-31

    The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend the regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified, given the uncertainties in dating, calibration, and modeling.

  9. Reciprocity and uncertainty.

    PubMed

    Bereby-Meyer, Yoella

    2012-02-01

    Guala points to a discrepancy between the strong negative reciprocity observed in the lab and the way cooperation is sustained "in the wild." This commentary suggests that in lab experiments, strong negative reciprocity is limited when uncertainty exists regarding the players' actions and intentions. Thus, costly punishment is indeed a limited mechanism for sustaining cooperation in an uncertain environment.

  10. Uncertainty in Computational Aerodynamics

    NASA Technical Reports Server (NTRS)

    Luckring, J. M.; Hemsch, M. J.; Morrison, J. H.

    2003-01-01

    An approach is presented to treat computational aerodynamics as a process, subject to the fundamental quality assurance principles of process control and process improvement. We consider several aspects affecting uncertainty for the computational aerodynamic process and present a set of stages to determine the level of management required to meet risk assumptions desired by the customer of the predictions.

  11. Reciprocity and uncertainty.

    PubMed

    Bereby-Meyer, Yoella

    2012-02-01

    Guala points to a discrepancy between the strong negative reciprocity observed in the lab and the way cooperation is sustained "in the wild." This commentary suggests that in lab experiments, strong negative reciprocity is limited when uncertainty exists regarding the players' actions and intentions. Thus, costly punishment is indeed a limited mechanism for sustaining cooperation in an uncertain environment. PMID:22289307

  12. An uncertainty inventory demonstration - a primary step in uncertainty quantification

    SciTech Connect

    Langenbrunner, James R.; Booker, Jane M; Hemez, Francois M; Salazar, Issac F; Ross, Timothy J

    2009-01-01

    Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with the words 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.

  13. Quantification and Propagation of Nuclear Data Uncertainties

    NASA Astrophysics Data System (ADS)

    Rising, Michael E.

    The use of several uncertainty quantification and propagation methodologies is investigated in the context of the prompt fission neutron spectrum (PFNS) uncertainties and their impact on critical reactor assemblies. First, the first-order, linear Kalman filter is used as a nuclear data evaluation and uncertainty quantification tool, combining available PFNS experimental data and a modified version of the Los Alamos (LA) model. The experimental covariance matrices, not generally given in the EXFOR database, are computed using the GMA methodology used by the IAEA to establish more appropriate correlations within each experiment. Then, using systematics relating the LA model parameters across a suite of isotopes, the PFNS for both the uranium and plutonium actinides are evaluated, leading to a new evaluation including cross-isotope correlations. Next, an alternative evaluation approach, the unified Monte Carlo (UMC) method, is studied for the evaluation of the PFNS for the n(0.5 MeV)+Pu-239 fission reaction and compared to the Kalman filter. The UMC approach to nuclear data evaluation is implemented in a variety of ways to test convergence toward the Kalman filter results and to determine the nonlinearities present in the LA model. Ultimately, the UMC approach is shown to be comparable to the Kalman filter for a realistic data evaluation of the PFNS and is capable of capturing the nonlinearities present in the LA model. Next, the impact that the PFNS uncertainties have on important critical assemblies is investigated. Using the PFNS covariance matrices in the ENDF/B-VII.1 nuclear data library, the uncertainties of the effective multiplication factor, leakage, and spectral indices of the Lady Godiva and Jezebel critical assemblies are quantified. Principal component analysis of the PFNS covariance matrices shows that only 2-3 principal components are needed to retain the PFNS uncertainties. Then, using the polynomial chaos expansion (PCE) on the uncertain output
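
    The covariance-reduction step can be illustrated with a short eigendecomposition sketch; the function name and the retained-variance threshold are assumptions for illustration, not the dissertation's code:

    ```python
    import numpy as np

    def leading_components(cov, var_fraction=0.99):
        """Return the smallest set of principal components of a covariance
        matrix that retains `var_fraction` of the total variance (trace)."""
        w, v = np.linalg.eigh(cov)       # eigenvalues in ascending order
        w, v = w[::-1], v[:, ::-1]       # re-sort to descending
        cum = np.cumsum(w) / np.sum(w)
        k = int(np.searchsorted(cum, var_fraction) + 1)
        return w[:k], v[:, :k]
    ```

    Applied to a PFNS covariance matrix, a call like `leading_components(cov)` would return the 2-3 dominant components reported above.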

  14. Phosphazene additives

    DOEpatents

    Harrup, Mason K; Rollins, Harry W

    2013-11-26

    An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.

  15. Potlining Additives

    SciTech Connect

    Rudolf Keller

    2004-08-10

    In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $0.023 per pound of aluminum produced is projected for a 200 kA pot.

  16. The uncertainty of UTCI due to uncertainties in the determination of radiation fluxes derived from measured and observed meteorological data

    NASA Astrophysics Data System (ADS)

    Weihs, Philipp; Staiger, Henning; Tinz, Birger; Batchvarova, Ekaterina; Rieder, Harald; Vuilleumier, Laurent; Maturilli, Marion; Jendritzky, Gerd

    2012-05-01

    In the present study, we investigate the determination accuracy of the Universal Thermal Climate Index (UTCI). We study especially the UTCI uncertainties due to uncertainties in radiation fluxes, whose impacts on UTCI are evaluated via the mean radiant temperature (Tmrt). We assume "normal conditions", meaning that the usual meteorological information and data are available but no special additional measurements. First, the uncertainty arising only from the measurement uncertainties of the meteorological data is determined. Here, simulations show that uncertainties between 0.4 and 2 K may be expected due to the uncertainty of just one of the meteorological input parameters. We then analyse the determination accuracy when not all radiation data are available and modelling of the missing data is required. Since radiative transfer models require a lot of information that is usually not available, we concentrate only on the determination accuracy achievable with empirical models. The simulations show that uncertainties in the calculation of the diffuse irradiance may lead to Tmrt uncertainties of up to ±2.9 K. If long-wave radiation is missing, we may expect an uncertainty of ±2 K. If modelling of both diffuse and long-wave radiation is used for the calculation of Tmrt, we may then expect a determination uncertainty of ±3 K. If all radiative fluxes are modelled based on synoptic observation, the uncertainty in Tmrt is ±5.9 K. Because Tmrt is only one of the four input data required in the calculation of UTCI, the uncertainty in UTCI due to the uncertainty in radiation fluxes is less than ±2 K. The UTCI uncertainties due to uncertainties of the four meteorological input values are not larger than the 6 K reference intervals of the UTCI scale, which means that UTCI may only be wrong by one UTCI scale interval. This uncertainty may, however, be critical at the two temperature extremes, i.e. under extremely hot or extremely cold conditions.

  17. Estimating discharge measurement uncertainty using the interpolated variance estimator

    USGS Publications Warehouse

    Cohn, T.; Kiang, J.; Mason, R.

    2012-01-01

    Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
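
    For context, the velocity-area measurements the IVE applies to are typically computed with the standard midsection method; the sketch below shows that computation only (the IVE's variance interpolation itself is described in the paper):

    ```python
    def midsection_discharge(b, d, v):
        """Midsection velocity-area discharge.

        b : station positions across the channel
        d : depths at each station
        v : mean point velocities at each station
        Each station contributes v_i * d_i * (b_{i+1} - b_{i-1}) / 2,
        with one-sided widths at the banks.
        """
        n = len(b)
        Q = 0.0
        for i in range(n):
            left = b[i] if i == 0 else b[i - 1]
            right = b[i] if i == n - 1 else b[i + 1]
            Q += v[i] * d[i] * (right - left) / 2.0
        return Q
    ```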

  18. Measurement uncertainty of liquid chromatographic analyses visualized by Ishikawa diagrams.

    PubMed

    Meyer, Veronika R

    2003-09-01

    Ishikawa, or cause-and-effect, diagrams help to visualize the parameters that influence a chromatographic analysis. They therefore facilitate the setup of the uncertainty budget of the analysis, which can then be expressed in mathematical form. If the uncertainty is calculated as the Gaussian sum of all uncertainty parameters, it is necessary to quantitate them all, a task that is usually not practical. The other possible approach is to use the intermediate precision as the basis for the uncertainty calculation. In this case, it is at least necessary to consider the uncertainty of the purity of the reference material in addition to the precision data. The Ishikawa diagram is then very simple, and so is the uncertainty calculation. This advantage comes at the cost of losing information about the parameters that influence the measurement uncertainty.
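
    The simplified budget described here, intermediate precision combined with the reference-material purity as a Gaussian sum, is a one-line quadrature. A sketch with illustrative values (not from the paper):

    ```python
    import math

    def top_down_uncertainty(rel_intermediate_precision, rel_purity_u):
        """Relative combined standard uncertainty from the intermediate
        precision of the method and the standard uncertainty of the
        reference-material purity, combined in quadrature."""
        return math.sqrt(rel_intermediate_precision**2 + rel_purity_u**2)

    u_rel = top_down_uncertainty(0.012, 0.003)   # ~1.24% relative
    ```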

  19. Declarative representation of uncertainty in mathematical models.

    PubMed

    Miller, Andrew K; Britten, Randall D; Nielsen, Poul M F

    2012-01-01

    An important aspect of multi-scale modelling is the ability to represent mathematical models in forms that can be exchanged between modellers and tools. While the development of languages like CellML and SBML has provided standardised declarative exchange formats for mathematical models, independent of the algorithm to be applied to the model, to date these standards have not provided a clear mechanism for describing parameter uncertainty. Parameter uncertainty is an inherent feature of many real systems. This uncertainty can result from a number of situations, such as: when measurements include inherent error; when parameters have unknown values and so are replaced by a probability distribution by the modeller; when a model is of an individual from a population, and parameters have unknown values for the individual, but the distribution for the population is known. We present and demonstrate an approach by which uncertainty can be described declaratively in CellML models, by utilising the extension mechanisms provided in CellML. Parameter uncertainty can be described declaratively in terms of either a univariate continuous probability density function or multiple realisations of one variable or several (typically non-independent) variables. We additionally present an extension to SED-ML (the Simulation Experiment Description Markup Language) to describe sampling sensitivity analysis simulation experiments. We demonstrate the usability of the approach by encoding a sample model in the uncertainty markup language, and by developing a software implementation of the uncertainty specification (including the SED-ML extension for sampling sensitivity analyses) in an existing CellML software library, the CellML API implementation. We used the software implementation to run sampling sensitivity analyses over the model to demonstrate that it is possible to run useful simulations on models with uncertainty encoded in this form.

  20. Declarative Representation of Uncertainty in Mathematical Models

    PubMed Central

    Miller, Andrew K.; Britten, Randall D.; Nielsen, Poul M. F.

    2012-01-01

    An important aspect of multi-scale modelling is the ability to represent mathematical models in forms that can be exchanged between modellers and tools. While the development of languages like CellML and SBML has provided standardised declarative exchange formats for mathematical models, independent of the algorithm to be applied to the model, to date these standards have not provided a clear mechanism for describing parameter uncertainty. Parameter uncertainty is an inherent feature of many real systems. This uncertainty can result from a number of situations, such as: when measurements include inherent error; when parameters have unknown values and so are replaced by a probability distribution by the modeller; when a model is of an individual from a population, and parameters have unknown values for the individual, but the distribution for the population is known. We present and demonstrate an approach by which uncertainty can be described declaratively in CellML models, by utilising the extension mechanisms provided in CellML. Parameter uncertainty can be described declaratively in terms of either a univariate continuous probability density function or multiple realisations of one variable or several (typically non-independent) variables. We additionally present an extension to SED-ML (the Simulation Experiment Description Markup Language) to describe sampling sensitivity analysis simulation experiments. We demonstrate the usability of the approach by encoding a sample model in the uncertainty markup language, and by developing a software implementation of the uncertainty specification (including the SED-ML extension for sampling sensitivity analyses) in an existing CellML software library, the CellML API implementation. We used the software implementation to run sampling sensitivity analyses over the model to demonstrate that it is possible to run useful simulations on models with uncertainty encoded in this form. PMID:22802941

  1. Uncertainty of Pyrometers in a Casting Facility

    SciTech Connect

    Mee, D.K.; Elkins, J.E.; Fleenor, R.M.; Morrision, J.M.; Sherrill, M.W.; Seiber, L.E.

    2001-12-07

    This work has established uncertainty limits for the EUO filament pyrometers, digital pyrometers, two-color automatic pyrometers, and the standards used to certify these instruments (Table 1). If symmetrical limits are used, filament pyrometers calibrated in Production have certification uncertainties of not more than ±20.5 °C traceable to NIST over the certification period. Uncertainties of these pyrometers were roughly ±14.7 °C before introduction of the working standard that allowed certification in the field. Digital pyrometers addressed in this report have symmetrical uncertainties of not more than ±12.7 °C or ±18.1 °C when certified on a Y-12 Standards Laboratory strip lamp or in a production area tube furnace, respectively. Uncertainty estimates for automatic two-color pyrometers certified in Production are ±16.7 °C. Additional uncertainty and bias are introduced when measuring production melt temperatures. A -19.4 °C bias was measured in a large 1987 data set, which is believed to be caused primarily by the use of Pyrex™ windows (not present in the current configuration) and window fogging. Large variability (2σ = 28.6 °C) exists in the first 10 min of the hold period. This variability is attributed to emissivity variation across the melt and reflection from hot surfaces. For runs with hold periods extending to 20 min, the uncertainty approaches the calibration uncertainty of the pyrometers. When certifying pyrometers on a strip lamp at the Y-12 Standards Laboratory, it is important to limit ambient temperature variation (23 ± 4 °C), to order calibration points from high to low temperatures, to allow 6 min for the lamp to reach thermal equilibrium (12 min for certifications below 1200 °C) to minimize pyrometer bias, and to recalibrate the pyrometer if its error exceeds vendor specifications. A procedure has been written to assure conformance.

  2. Uncertainty Modeling for Structural Control Analysis and Synthesis

    NASA Technical Reports Server (NTRS)

    Campbell, Mark E.; Crawley, Edward F.

    1996-01-01

    The development of an accurate model of uncertainties for the control of structures that undergo a change in operational environment, based solely on modeling and experimentation in the original environment is studied. The application used throughout this work is the development of an on-orbit uncertainty model based on ground modeling and experimentation. A ground based uncertainty model consisting of mean errors and bounds on critical structural parameters is developed. The uncertainty model is created using multiple data sets to observe all relevant uncertainties in the system. The Discrete Extended Kalman Filter is used as an identification/parameter estimation method for each data set, in addition to providing a covariance matrix which aids in the development of the uncertainty model. Once ground based modal uncertainties have been developed, they are localized to specific degrees of freedom in the form of mass and stiffness uncertainties. Two techniques are presented: a matrix method which develops the mass and stiffness uncertainties in a mathematical manner; and a sensitivity method which assumes a form for the mass and stiffness uncertainties in macroelements and scaling factors. This form allows the derivation of mass and stiffness uncertainties in a more physical manner. The mass and stiffness uncertainties of the ground based system are then mapped onto the on-orbit system, and projected to create an analogous on-orbit uncertainty model in the form of mean errors and bounds on critical parameters. The Middeck Active Control Experiment is introduced as experimental verification for the localization and projection methods developed. In addition, closed loop results from on-orbit operations of the experiment verify the use of the uncertainty model for control analysis and synthesis in space.

  3. Systematic Effects in Atomic Fountain Clocks

    NASA Astrophysics Data System (ADS)

    Gibble, Kurt

    2016-06-01

    We describe recent advances in the accuracies of atomic fountain clocks. New rigorous treatments of the previously large systematic uncertainties, distributed cavity phase, microwave lensing, and background gas collisions, enabled these advances. We also discuss background gas collisions of optical lattice and ion clocks and derive the smooth transition of the microwave lensing frequency shift to photon recoil shifts for large atomic wave packets.

  4. Where does the uncertainty come from? Attributing Uncertainty in Conceptual Hydrologic Modelling

    NASA Astrophysics Data System (ADS)

    Abu Shoaib, S.; Marshall, L. A.; Sharma, A.

    2015-12-01

    Defining an appropriate forecasting model is a key phase in water resources planning and design. Quantification of uncertainty is an important step in the development and application of hydrologic models. In this study, we examine the dependency of hydrologic model uncertainty on the observed model inputs, the defined model structure, parameter optimization identifiability, and the identified likelihood. We present a new uncertainty metric, the Quantile Flow Deviation (QFD), to evaluate the relative uncertainty due to each of these sources under a range of catchment conditions. Through the metric, we may identify the potential spectrum of uncertainty and variability in model simulations. The QFD assesses uncertainty by estimating the deviation in flows at a given quantile across a range of scenarios. By using a quantile-based metric, the change in uncertainty across individual percentiles can be assessed, thereby allowing uncertainty to be expressed as a function of time. The QFD method can be disaggregated to examine any part of the modelling process, including the selection of certain model subroutines or forcing data. Case study results (including catchments in Australia and the USA) suggest that model structure selection is vital irrespective of the flow percentile of interest or the catchment being studied. Examining the QFD across various quantiles additionally demonstrates that lower-yielding catchments may have greater variation due to the selected model structures. By incorporating multiple model structures, it is possible to assess (i) the relative importance of various sources of uncertainty, (ii) how these vary with a change in catchment location or hydrologic regime, and (iii) the impact of the length of available observations on uncertainty quantification.

  5. Uncertainty Quantification Techniques of SCALE/TSUNAMI

    SciTech Connect

    Rearden, Bradley T; Mueller, Don

    2011-01-01

    additional administrative margin to account for gaps in the validation data, or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.

  6. Majorization entropic uncertainty relations

    NASA Astrophysics Data System (ADS)

    Puchała, Zbigniew; Rudnicki, Łukasz; Życzkowski, Karol

    2013-07-01

    Entropic uncertainty relations in a finite-dimensional Hilbert space are investigated. Making use of the majorization technique we derive explicit lower bounds for the sum of Rényi entropies describing probability distributions associated with a given pure state expanded in eigenbases of two observables. Obtained bounds are expressed in terms of the largest singular values of submatrices of the unitary rotation matrix. Numerical simulations show that for a generic unitary matrix of size N = 5, our bound is stronger than the well-known result of Maassen and Uffink (MU) with a probability larger than 98%. We also show that the bounds investigated are invariant under the dephasing and permutation operations. Finally, we derive a classical analogue of the MU uncertainty relation, which is formulated for stochastic transition matrices. Dedicated to Iwo Białynicki-Birula on the occasion of his 80th birthday.
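
    The Maassen-Uffink (MU) benchmark the authors compare against can be stated compactly (Shannon entropies in nats; p and q are the outcome distributions of the two observables with eigenbases {|a_i⟩} and {|b_j⟩}):

    ```latex
    H(p) + H(q) \;\ge\; -2 \ln c, \qquad
    c = \max_{i,j} \bigl| \langle a_i | b_j \rangle \bigr|
    ```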

  7. Uncertainties in transpiration estimates.

    PubMed

    Coenders-Gerrits, A M J; van der Ent, R J; Bogaard, T A; Wang-Erlandsson, L; Hrachowitz, M; Savenije, H H G

    2014-02-13

    Arising from S. Jasechko et al., Nature 496, 347-350 (2013); doi:10.1038/nature11983. How best to assess the respective importance of plant transpiration over evaporation from open waters, soils and short-term storage such as tree canopies and understories (interception) has long been debated. On the basis of data from lake catchments, Jasechko et al. conclude that transpiration accounts for 80-90% of total land evaporation globally. However, another choice of input data, together with more conservative accounting of the related uncertainties, reduces and widens the transpiration ratio estimate to 35-80%. Hence, climate models do not necessarily conflict with observations, but more measurements on the catchment scale are needed to reduce the uncertainty range. There is a Reply to this Brief Communications Arising by Jasechko, S. et al., Nature 506, http://dx.doi.org/10.1038/nature12926 (2014).

  8. Radar stage uncertainty

    USGS Publications Warehouse

    Fulford, J.M.; Davies, W.J.

    2005-01-01

    The U.S. Geological Survey is investigating the performance of radars used for stage (or water-level) measurement. This paper presents a comparison of estimated uncertainties and data for radar water-level measurements with float, bubbler, and wire weight water-level measurements. The radar sensor was also temperature-tested in a laboratory. The uncertainty estimates indicate that radar measurements are more accurate than uncorrected pressure sensors at higher water stages, but are less accurate than pressure sensors at low stages. Field data at two sites indicate that radar sensors may have a small negative bias. Comparison of field radar measurements with wire weight measurements found that the radar tends to measure slightly lower values as stage increases.

  9. Uncertainties in climate stabilization

    SciTech Connect

    Wigley, T. M.; Clarke, Leon E.; Edmonds, James A.; Jacoby, H. D.; Paltsev, S.; Pitcher, Hugh M.; Reilly, J. M.; Richels, Richard G.; Sarofim, M. C.; Smith, Steven J.

    2009-11-01

    We explore the atmospheric composition, temperature and sea level implications of new reference and cost-optimized stabilization emissions scenarios produced using three different Integrated Assessment (IA) models for U.S. Climate Change Science Program (CCSP) Synthesis and Assessment Product 2.1a. We also consider an extension of one of these sets of scenarios out to 2300. Stabilization is defined in terms of radiative forcing targets for the sum of gases potentially controlled under the Kyoto Protocol. For the most stringent stabilization case (“Level 1”, with CO2 concentration stabilizing at about 450 ppm), peak CO2 emissions occur close to today, implying a need for immediate CO2 emissions abatement if we wish to stabilize at this level. In the extended reference case, CO2 stabilizes at 1000 ppm in 2200 - but even to achieve this target requires large and rapid CO2 emissions reductions over the 22nd century. Future temperature changes for the Level 1 stabilization case show considerable uncertainty even when a common set of climate model parameters is used (a result of different assumptions for non-Kyoto gases). Uncertainties are about a factor of three when climate sensitivity uncertainties are accounted for. We estimate the probability that warming from pre-industrial times will be less than 2 °C to be about 50%. For one of the IA models, warming in the Level 1 case is greater out to 2050 than in the reference case, due to the effect of decreasing SO2 emissions that occur as a side effect of the policy-driven reduction in CO2 emissions. Sea level rise uncertainties for the Level 1 case are very large, with increases ranging from 12 to 100 cm from 2000 to 2300.

  10. Calibration Under Uncertainty.

    SciTech Connect

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
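
    The deterministic formulation criticized above, choosing parameters to minimize the squared model-data misfit with no model-error term, is only a few lines. A sketch with a placeholder model and synthetic data (none of this is from the report):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def model(theta, x):
        # placeholder computer model: exponential decay
        return theta[0] * np.exp(-theta[1] * x)

    x_obs = np.linspace(0.0, 5.0, 20)
    y_obs = model([2.0, 0.7], x_obs) + 0.05 * np.random.randn(x_obs.size)

    # classic calibration: the model is treated as the "true" representation
    fit = least_squares(lambda th: model(th, x_obs) - y_obs, x0=[1.0, 1.0])
    theta_hat = fit.x
    ```

    Calibration under Uncertainty instead places error terms on both the model and the data, for example via a Bayesian posterior over the parameters.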

  11. Uncertainty quantification in nanomechanical measurements using the atomic force microscope.

    PubMed

    Wagner, Ryan; Moon, Robert; Pratt, Jon; Shaw, Gordon; Raman, Arvind

    2011-11-11

    Quantifying uncertainty in measured properties of nanomaterials is a prerequisite for the manufacture of reliable nanoengineered materials and products. Yet, rigorous uncertainty quantification (UQ) is rarely applied for material property measurements with the atomic force microscope (AFM), a widely used instrument that can measure properties at nanometer scale resolution of both inorganic and biological surfaces and nanomaterials. We present a framework to ascribe uncertainty to local nanomechanical properties of any nanoparticle or surface measured with the AFM by taking into account the main uncertainty sources inherent in such measurements. We demonstrate the framework by quantifying uncertainty in AFM-based measurements of the transverse elastic modulus of cellulose nanocrystals (CNCs), an abundant, plant-derived nanomaterial whose mechanical properties are comparable to Kevlar fibers. For a single, isolated CNC the transverse elastic modulus was found to have a mean of 8.1 GPa and a 95% confidence interval of 2.7-20 GPa. A key result is that multiple replicates of force-distance curves do not sample the important sources of uncertainty, which are systematic in nature. The dominant source of uncertainty is the nondimensional photodiode sensitivity calibration rather than the cantilever stiffness or Z-piezo calibrations. The results underscore the great need for, and open a path towards, quantifying and minimizing uncertainty in AFM-based material property measurements of nanoparticles, nanostructured surfaces, thin films, polymers and biomaterials. PMID:21992899

  12. Uncertainty quantification in nanomechanical measurements using the atomic force microscope.

    PubMed

    Wagner, Ryan; Moon, Robert; Pratt, Jon; Shaw, Gordon; Raman, Arvind

    2011-11-11

    Quantifying uncertainty in measured properties of nanomaterials is a prerequisite for the manufacture of reliable nanoengineered materials and products. Yet, rigorous uncertainty quantification (UQ) is rarely applied for material property measurements with the atomic force microscope (AFM), a widely used instrument that can measure properties at nanometer scale resolution of both inorganic and biological surfaces and nanomaterials. We present a framework to ascribe uncertainty to local nanomechanical properties of any nanoparticle or surface measured with the AFM by taking into account the main uncertainty sources inherent in such measurements. We demonstrate the framework by quantifying uncertainty in AFM-based measurements of the transverse elastic modulus of cellulose nanocrystals (CNCs), an abundant, plant-derived nanomaterial whose mechanical properties are comparable to Kevlar fibers. For a single, isolated CNC the transverse elastic modulus was found to have a mean of 8.1 GPa and a 95% confidence interval of 2.7-20 GPa. A key result is that multiple replicates of force-distance curves do not sample the important sources of uncertainty, which are systematic in nature. The dominant source of uncertainty is the nondimensional photodiode sensitivity calibration rather than the cantilever stiffness or Z-piezo calibrations. The results underscore the great need for, and open a path towards, quantifying and minimizing uncertainty in AFM-based material property measurements of nanoparticles, nanostructured surfaces, thin films, polymers and biomaterials.

  13. A generalized a priori dose uncertainty model of IMRT delivery.

    PubMed

    Jin, Hosang; Palta, Jatinder; Suh, Tae-Suk; Kim, Siyong

    2008-03-01

    Multileaf collimator-based intensity-modulated radiation therapy (IMRT) is complex because each intensity-modulated field consists of hundreds of subfields, each of which is associated with an intricate interplay of uncertainties. In this study, the authors have revised the previously introduced uncertainty model to provide an a priori accurate prediction of dose uncertainty during treatment planning in IMRT. In the previous model, the dose uncertainties were categorized into space-oriented dose uncertainty (SOU) and nonspace-oriented dose uncertainty (NOU). The revised model further divides the uncertainty sources into planning and delivery. SOU and NOU associated with a planning system were defined as inherent dose uncertainty. A convolution method with seven degrees of freedom was also newly applied to generalize the model for practical clinical cases. The model parameters were quantified through a set of measurements, accumulated routine quality assurance (QA) data, and peer-reviewed publications. The predicted uncertainty maps were compared with dose difference distributions between computations and 108 simple open-field measurements using a two-dimensional diode array detector to verify the validity of the model parameters and the robustness of the generalized model. To examine the applicability of the model to overall dose uncertainty prediction in IMRT, a retrospective analysis of QA measurements using the diode array detector for 32 clinical IM fields was also performed. A scatter diagram and a correlation coefficient were employed to investigate the correlation of the predicted dose uncertainty distribution with the dose discrepancy distribution between calculation and delivery. In addition, a gamma test was performed to correlate failed regions in dose verification with the dose uncertainty map. With the quantified model parameters, the predicted dose uncertainty correlated well with the probable dose difference between calculations and measurements. It was visually

  14. Uncertainty Propagation with Fast Monte Carlo Techniques

    NASA Astrophysics Data System (ADS)

    Rochman, D.; van der Marck, S. C.; Koning, A. J.; Sjöstrand, H.; Zwermann, W.

    2014-04-01

    Two new, faster Monte Carlo methods for the propagation of nuclear data uncertainties in Monte Carlo nuclear simulations are presented (the "Fast TMC" and "Fast GRS" methods). They address the main drawback of the original Total Monte Carlo (TMC) method, namely the large multiplication of computation time compared to a single calculation. With these new methods, Monte Carlo simulations can now be accompanied by uncertainty propagation (other than statistical) at small additional calculation cost. The new methods are presented and compared with TMC for criticality benchmarks.
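
    The variance bookkeeping behind TMC-style propagation can be sketched as follows (the function and inputs are illustrative, not the authors' implementation): the spread of an output such as k-eff across runs that each use a different randomly sampled nuclear-data file mixes nuclear-data and statistical variance, and the statistical part is subtracted off.

    ```python
    import numpy as np

    def nuclear_data_uncertainty(keff_runs, stat_sigmas):
        """Spread across random-nuclear-data runs, with the mean
        statistical variance of the individual Monte Carlo runs removed;
        keeping per-run statistics coarse and correcting this way is what
        makes the fast variants cheap."""
        var_observed = np.var(keff_runs, ddof=1)
        var_statistical = np.mean(np.square(stat_sigmas))
        return np.sqrt(max(var_observed - var_statistical, 0.0))
    ```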

  15. SU-E-T-573: The Robustness of a Combined Margin Recipe for Uncertainties During Radiotherapy

    SciTech Connect

    Stroom, J; Vieira, S; Greco, C

    2014-06-01

    Purpose: To investigate the variability of a safety margin recipe that combines CTV and PTV margins quadratically, with several tumor, treatment, and user-related factors. Methods: Margin recipes were calculated by Monte Carlo simulations in 5 steps. 1. A spherical tumor, with or without isotropic microscopic disease, was irradiated with a 5-field dose plan. 2. PTV: geometric uncertainties were introduced using systematic (Sgeo) and random (sgeo) standard deviations. CTV: the microscopic disease distribution was modelled by a semi-Gaussian (Smicro) with a varying number of islets (Ni). 3. For a specific uncertainty set (Sgeo, sgeo, Smicro(Ni)), margins were varied until a pre-defined decrease in TCP or dose coverage was reached. 4. First, margin recipes were calculated for each of the three uncertainties separately. CTV and PTV recipes were then combined quadratically to yield a final recipe M(Sgeo, sgeo, Smicro(Ni)). 5. The final M was verified by simultaneous simulation of the uncertainties. M has then been calculated for various changing parameters such as margin criteria, penumbra steepness, islet radio-sensitivity, dose conformity, and number of fractions. We subsequently investigated (a) whether the combined recipe still holds in all these situations, and (b) what the margin variation was in all these cases. Results: We found that the accuracy of the combined margin recipes remains on average within 1 mm for all situations, confirming the correctness of the quadratic addition. Depending on the specific parameter, margin factors could change such that margins change by over 50%. Margin recipes based on TCP criteria are especially sensitive to more parameters than those based on purely geometric Dmin criteria. Interestingly, measures taken to minimize treatment field sizes (e.g., by optimizing dose conformity) are counteracted by the requirement of larger margins to obtain the same tumor coverage. Conclusion: Margin recipes combining geometric and microscopic uncertainties quadratically are robust across the situations studied.
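
    A sketch of the quadratic combination under test; the van Herk-style geometric recipe (2.5 Sgeo + 0.7 sgeo) is shown only as a familiar illustration of the PTV piece and is not necessarily the recipe used in this abstract:

    ```python
    import math

    def geometric_margin(S_geo, s_geo):
        # common PTV-style recipe for systematic (S) and random (s) errors
        return 2.5 * S_geo + 0.7 * s_geo

    def combined_margin(m_geometric, m_microscopic):
        # quadratic (root-sum-square) addition of the PTV-style geometric
        # margin and the CTV-style microscopic-disease margin
        return math.sqrt(m_geometric**2 + m_microscopic**2)
    ```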

  16. SIP: Systematics-Insensitive Periodograms

    NASA Astrophysics Data System (ADS)

    Angus, Ruth

    2016-09-01

    SIP (Systematics-Insensitive Periodograms) extends the generative model behind traditional sine-fitting periodograms for finding the frequency of a sinusoid: in addition to a sum of sine and cosine functions over a grid of frequencies, the generative model includes systematic trends based on a set of eigen light curves, producing periodograms with vastly reduced systematic features. Acoustic oscillations in giant stars and stellar rotation periods can be recovered from SIP periodograms without detrending. The code can also be applied to the detection of other periodic phenomena, including eclipsing binaries and short-period exoplanet candidates.
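
    A minimal sketch of the idea (array names and the power definition are assumptions, not the SIP code): at each trial frequency, fit sine and cosine terms jointly with the eigen light curves by linear least squares and record the squared amplitude of the sinusoid.

    ```python
    import numpy as np

    def sip_power(t, y, eigen_lcs, freqs):
        """t, y: time stamps and fluxes; eigen_lcs: list of systematics
        basis vectors sampled at t; freqs: trial frequencies."""
        power = []
        for f in freqs:
            A = np.column_stack([np.sin(2 * np.pi * f * t),
                                 np.cos(2 * np.pi * f * t),
                                 np.ones_like(t)] + list(eigen_lcs))
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            power.append(coef[0]**2 + coef[1]**2)   # sinusoid amplitude^2
        return np.array(power)
    ```

    Because the systematics basis is fit simultaneously rather than removed beforehand, real periodicities are not distorted by a separate detrending step.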

  17. Uncertainties drive arsenic rule delay

    SciTech Connect

    Pontius, F.W.

    1995-04-01

    The US Environmental Protection Agency (USEPA) is under court order to sign a proposed rule for arsenic by Nov. 30, 1995. The agency recently announced that it will not meet this deadline, citing the need to gather additional information. Development of a National Interim Primary Drinking Water Regulation for arsenic has been delayed several times over the past 10 years because of uncertainties regarding health issues and costs associated with compliance. The early history of development of the arsenic rule has been reviewed. Only recent developments are reviewed here. The current maximum contaminant level (MCL) for arsenic in drinking water is 0.05 mg/L. This MCL was set in 1975, based on the 1962 US Public Health Standards. The current Safe Drinking Water Act (SDWA) requires that the revised arsenic MCL be set as close to the MCL goal (MCLG) as is feasible using best technology, treatment techniques, or other means and taking cost into consideration.

  18. Monte Carlo analysis of uncertainty propagation in a stratospheric model. 2: Uncertainties due to reaction rates

    NASA Technical Reports Server (NTRS)

    Stolarski, R. S.; Butler, D. M.; Rundel, R. D.

    1977-01-01

    A concise stratospheric model was used in a Monte Carlo analysis of the propagation of reaction rate uncertainties through the calculation of an ozone perturbation due to the addition of chlorine. Two thousand Monte Carlo cases were run with 55 reaction rates being varied. Excellent convergence was obtained in the output distributions because the model is sensitive to the uncertainties in only about 10 reactions. For a 1 ppbv chlorine perturbation added to a 1.5 ppbv chlorine background, the resultant 1-sigma uncertainty on the ozone perturbation is a factor of 1.69 on the high side and 1.80 on the low side. The corresponding 2-sigma factors are 2.86 and 3.23. Results are also given for the uncertainties, due to reaction rates, in the ambient concentrations of stratospheric species.
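
    The propagation scheme is easy to sketch: sample each rate with a multiplicative (log-normal) uncertainty factor, run the model, and collect the output distribution. The `model` callable and sampling details below are assumptions, not the original 1977 code:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def propagate(model, nominal_rates, uncertainty_factors, n=2000):
        """uncertainty_factors: 1-sigma multiplicative factors per reaction
        (e.g. 1.3 means +/-30%); returns n samples of the model output."""
        log_sigma = np.log(uncertainty_factors)
        samples = []
        for _ in range(n):
            rates = nominal_rates * np.exp(rng.normal(0.0, log_sigma))
            samples.append(model(rates))
        return np.asarray(samples)
    ```

    Reporting the 1-sigma spread of the (roughly log-normal) output as high-side and low-side factors matches the 1.69/1.80 form quoted above.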

  19. Uncertainties in global ocean surface heat flux climatologies derived from ship observations

    SciTech Connect

    Gleckler, P.J.; Weare, B.C.

    1995-08-01

    A methodology to define uncertainties associated with ocean surface heat flux calculations has been developed and applied to a revised version of the Oberhuber global climatology, which utilizes a summary of the COADS surface observations. Systematic and random uncertainties in the net oceanic heat flux and each of its four components at individual grid points and for zonal averages have been estimated for each calendar month and the annual mean. The most important uncertainties of the 2° x 2° grid cell values of each of the heat fluxes are described. Annual mean net shortwave flux random uncertainties associated with errors in estimating cloud cover in the tropics yield total uncertainties which are greater than 25 W m⁻². In the northern latitudes, where the large number of observations substantially reduces the influence of these random errors, the systematic uncertainties in the utilized parameterization are largely responsible for total uncertainties in the shortwave fluxes, which usually remain greater than 10 W m⁻². Systematic uncertainties dominate in the zonal means because spatial averaging has led to a further reduction of the random errors. The situation for the annual mean latent heat flux is somewhat different in that, even for grid point values, the contributions of the systematic uncertainties tend to be larger than those of the random uncertainties at nearly all latitudes. Latent heat flux uncertainties are greater than 20 W m⁻² nearly everywhere south of 40°N, and in excess of 30 W m⁻² over broad areas of the subtropics, even those with large numbers of observations. The resulting zonal mean latent heat flux uncertainties are largest (~30 W m⁻²) in the middle latitudes and subtropics and smallest (~10-25 W m⁻²) near the equator and over the northernmost regions.

  20. Sources of uncertainty in intuitive physics.

    PubMed

    Smith, Kevin A; Vul, Edward

    2013-01-01

    Recent work suggests that people predict how objects interact in a manner consistent with Newtonian physics, but with additional uncertainty. However, the sources of uncertainty have not been examined. In this study, we measure perceptual noise in initial conditions and stochasticity in the physical model used to make predictions. Participants predicted the trajectory of a moving object through occluded motion and bounces, and we compared their behavior to an ideal observer model. We found that human judgments cannot be captured by simple heuristics and must incorporate noisy dynamics. Moreover, these judgments are biased consistently with a prior expectation on object destinations, suggesting that people use simple expectations about outcomes to compensate for uncertainty about their physical models.

  1. Sources of uncertainty in intuitive physics.

    PubMed

    Smith, Kevin A; Vul, Edward

    2013-01-01

    Recent work suggests that people predict how objects interact in a manner consistent with Newtonian physics, but with additional uncertainty. However, the sources of uncertainty have not been examined. In this study, we measure perceptual noise in initial conditions and stochasticity in the physical model used to make predictions. Participants predicted the trajectory of a moving object through occluded motion and bounces, and we compared their behavior to an ideal observer model. We found that human judgments cannot be captured by simple heuristics and must incorporate noisy dynamics. Moreover, these judgments are biased consistently with a prior expectation on object destinations, suggesting that people use simple expectations about outcomes to compensate for uncertainty about their physical models. PMID:23335579

  2. Thyroid disrupting chemicals in plastic additives and thyroid health.

    PubMed

    Andra, Syam S; Makris, Konstantinos C

    2012-01-01

    The globally escalating thyroid nodule incidence rates may be only partially ascribed to better diagnostics, allowing for the assessment of environmental risk factors on thyroid disease. Endocrine disruptors, or thyroid-disrupting chemicals (TDC), like bisphenol A, phthalates, and polybrominated diphenyl ethers, are widely used as plastic additives in consumer products. This comprehensive review studied the magnitude and uncertainty of TDC exposures and their effects on thyroid hormones for sensitive subpopulation groups like pregnant women, infants, and children. Our findings qualitatively suggest mixed but significant (α = 0.05) associations between TDC exposure and natural thyroid hormone levels (positive or negative in sign). Future studies should undertake systematic meta-analyses to elucidate pooled TDC effect estimates on thyroid health indicators and outcomes. PMID:22690712

  3. ENHANCED UNCERTAINTY ANALYSIS FOR SRS COMPOSITE ANALYSIS

    SciTech Connect

    Smith, F.; Phifer, M.

    2011-06-30

    The Composite Analysis (CA) performed for the Savannah River Site (SRS) in 2009 (SRS CA 2009) included a simplified uncertainty analysis. The uncertainty analysis in the CA (Smith et al. 2009b) was limited to considering at most five sources in a separate uncertainty calculation performed for each POA. To perform the uncertainty calculations in a reasonable amount of time, the analysis was limited to 400 realizations and 2,000 years of simulated transport time, and the time steps used for the uncertainty analysis were increased from those used in the CA base case analysis. As part of the CA maintenance plan, the Savannah River National Laboratory (SRNL) committed to improving the CA uncertainty/sensitivity analysis. The previous uncertainty analysis was constrained by the standard GoldSim licensing, which limits the user to running at most four Monte Carlo uncertainty calculations (also called realizations) simultaneously. Some of the limitations on the number of realizations that could practically be run and on the simulation time steps were removed by building a cluster of three HP ProLiant Windows servers with a total of 36 64-bit processors and by licensing the GoldSim DP-Plus distributed processing software. This allowed running as many as 35 realizations simultaneously (one processor is reserved as a master process that controls the realizations). These enhancements to SRNL computing capabilities made an uncertainty analysis feasible that uses 1,000 realizations and the time steps employed in the base case CA calculations, includes more sources, and simulates radionuclide transport for 10,000 years. In addition, an importance screening analysis was performed to identify the class of stochastic variables that have the most significant impact on model uncertainty. This analysis ran the uncertainty model separately, testing the response to variations in the following five sets of model parameters: (a) K_d values (72 parameters for the 36 CA elements in

  4. Analysis of automated highway system risks and uncertainties. Volume 5

    SciTech Connect

    Sicherman, A.

    1994-10-01

    This volume describes a risk analysis performed to help identify important Automated Highway System (AHS) deployment uncertainties and quantify their effect on costs and benefits for a range of AHS deployment scenarios. The analysis identified a suite of key factors affecting vehicle and roadway costs, capacities and market penetrations for alternative AHS deployment scenarios. A systematic protocol was utilized for obtaining expert judgments of key factor uncertainties in the form of subjective probability percentile assessments. Based on these assessments, probability distributions on vehicle and roadway costs, capacity and market penetration were developed for the different scenarios. The cost/benefit risk methodology and analysis provide insights by showing how uncertainties in key factors translate into uncertainties in summary cost/benefit indices.

  5. Collective uncertainty entanglement test.

    PubMed

    Rudnicki, Łukasz; Horodecki, Paweł; Zyczkowski, Karol

    2011-10-01

    For a given pure state of a composite quantum system we analyze the product of its projections onto a set of locally orthogonal separable pure states. We derive a bound for this product analogous to the entropic uncertainty relations. For bipartite systems the bound is saturated for maximally entangled states, and it allows us to construct a family of entanglement measures, which we shall call collectibility. As these quantities are experimentally accessible, the approach advocated here contributes to the task of experimental quantification of quantum entanglement, while for a three-qubit system it is capable of identifying genuine three-party entanglement.

  6. Collective Uncertainty Entanglement Test

    NASA Astrophysics Data System (ADS)

    Rudnicki, Łukasz; Horodecki, Paweł; Życzkowski, Karol

    2011-10-01

    For a given pure state of a composite quantum system we analyze the product of its projections onto a set of locally orthogonal separable pure states. We derive a bound for this product analogous to the entropic uncertainty relations. For bipartite systems the bound is saturated for maximally entangled states, and it allows us to construct a family of entanglement measures, which we shall call collectibility. As these quantities are experimentally accessible, the approach advocated here contributes to the task of experimental quantification of quantum entanglement, while for a three-qubit system it is capable of identifying genuine three-party entanglement.

  7. Schwarzschild mass uncertainty

    NASA Astrophysics Data System (ADS)

    Davidson, Aharon; Yellin, Ben

    2014-02-01

    Applying Dirac's procedure to -dependent constrained systems, we derive a reduced total Hamiltonian, resembling an upside-down harmonic oscillator, which generates the Schwarzschild solution in the mini super-spacetime. Associated with the now -dependent Schrödinger equation is a tower of localized Guth-Pi-Barton wave packets, orthonormal and non-singular, admitting equally spaced average-'energy' levels. Our approach is characterized by a universal quantum mechanical uncertainty structure which enters the game already at the flat spacetime level, and accompanies the massive Schwarzschild sector for any arbitrary mean mass. The average black hole horizon surface area is linearly quantized.

  8. Fundamental "Uncertainty" in Science

    NASA Astrophysics Data System (ADS)

    Reichl, Linda E.

    The conference on "Uncertainty and Surprise" was concerned with our fundamental inability to predict future events. How can we restructure organizations to function effectively in an uncertain environment? One concern is that many large complex organizations are built on mechanical models, but mechanical models cannot always respond well to "surprises." An underlying assumption about mechanical models is that, if we give them enough information about the world, they will know the future accurately enough that there will be few or no surprises. The assumption is that the future is basically predictable and deterministic.

  9. Picturing Data With Uncertainty

    NASA Technical Reports Server (NTRS)

    Kao, David; Love, Alison; Dungan, Jennifer L.; Pang, Alex

    2004-01-01

    NASA is in the business of creating maps for scientific purposes to represent important biophysical or geophysical quantities over space and time. For example, maps of surface temperature over the globe tell scientists where and when the Earth is heating up; regional maps of the greenness of vegetation tell scientists where and when plants are photosynthesizing. There is always uncertainty associated with each value in any such map due to various factors. When uncertainty is fully modeled, instead of a single value at each map location there is a distribution expressing a set of possible outcomes at each location. We consider such distribution data as multi-valued data, since it consists of a collection of values about a single variable. Thus, multi-valued data represent both the map and its uncertainty. We have been working on ways to visualize spatial multi-valued data sets effectively for fields with regularly spaced units or grid cells, such as those in NASA's Earth science applications. A new way to display distributions at multiple grid locations is to project the distributions from an individual row, column, or other user-selectable straight transect from the 2D domain. First, at each grid cell in a given slice (row, column, or transect), we compute a smooth density estimate from the underlying data. Such a density estimate for the probability density function (PDF) is generally more useful than a histogram, which is a classic density estimate. Then the collection of PDFs along a given slice is presented vertically above the slice and forms a wall. To minimize occlusion of intersecting slices, the corresponding walls are positioned at the far edges of the boundary. The PDF wall depicts the shapes of the distributions very clearly, since peaks represent the modes (or bumps) in the PDFs. We have defined roughness as the number of peaks in the distribution; roughness is another useful summary for multimodal distributions. The uncertainty of the multi

  10. Satellite altitude determination uncertainties

    NASA Technical Reports Server (NTRS)

    Siry, J. W.

    1972-01-01

    Satellite altitude determination uncertainties are discussed from the standpoint of the GEOS-C satellite and from the longer-range viewpoint afforded by the Geopause concept. Attention is focused on methods for short-arc tracking which are essentially geometric in nature: one uses combinations of lasers and collocated cameras, while the other relies only on lasers, using three or more to obtain the position fix. Two typical locales are examined: the Caribbean area, and a region associated with tracking sites at Goddard, Bermuda, and Canada which encompasses a portion of the Gulf Stream in which meanders develop.

  11. Addressing uncertainty in adaptation planning for agriculture.

    PubMed

    Vermeulen, Sonja J; Challinor, Andrew J; Thornton, Philip K; Campbell, Bruce M; Eriyagama, Nishadi; Vervoort, Joost M; Kinyangi, James; Jarvis, Andy; Läderach, Peter; Ramirez-Villegas, Julian; Nicklin, Kathryn J; Hawkins, Ed; Smith, Daniel R

    2013-05-21

    We present a framework for prioritizing adaptation approaches at a range of timeframes. The framework is illustrated by four case studies from developing countries, each with associated characterization of uncertainty. Two cases on near-term adaptation planning in Sri Lanka and on stakeholder scenario exercises in East Africa show how the relative utility of capacity vs. impact approaches to adaptation planning differ with level of uncertainty and associated lead time. An additional two cases demonstrate that it is possible to identify uncertainties that are relevant to decision making in specific timeframes and circumstances. The case on coffee in Latin America identifies altitudinal thresholds at which incremental vs. transformative adaptation pathways are robust options. The final case uses three crop-climate simulation studies to demonstrate how uncertainty can be characterized at different time horizons to discriminate where robust adaptation options are possible. We find that impact approaches, which use predictive models, are increasingly useful over longer lead times and at higher levels of greenhouse gas emissions. We also find that extreme events are important in determining predictability across a broad range of timescales. The results demonstrate the potential for robust knowledge and actions in the face of uncertainty.

  12. Addressing uncertainty in adaptation planning for agriculture

    PubMed Central

    Vermeulen, Sonja J.; Challinor, Andrew J.; Thornton, Philip K.; Campbell, Bruce M.; Eriyagama, Nishadi; Vervoort, Joost M.; Kinyangi, James; Jarvis, Andy; Läderach, Peter; Ramirez-Villegas, Julian; Nicklin, Kathryn J.; Hawkins, Ed; Smith, Daniel R.

    2013-01-01

    We present a framework for prioritizing adaptation approaches at a range of timeframes. The framework is illustrated by four case studies from developing countries, each with associated characterization of uncertainty. Two cases on near-term adaptation planning in Sri Lanka and on stakeholder scenario exercises in East Africa show how the relative utility of capacity vs. impact approaches to adaptation planning differ with level of uncertainty and associated lead time. An additional two cases demonstrate that it is possible to identify uncertainties that are relevant to decision making in specific timeframes and circumstances. The case on coffee in Latin America identifies altitudinal thresholds at which incremental vs. transformative adaptation pathways are robust options. The final case uses three crop–climate simulation studies to demonstrate how uncertainty can be characterized at different time horizons to discriminate where robust adaptation options are possible. We find that impact approaches, which use predictive models, are increasingly useful over longer lead times and at higher levels of greenhouse gas emissions. We also find that extreme events are important in determining predictability across a broad range of timescales. The results demonstrate the potential for robust knowledge and actions in the face of uncertainty. PMID:23674681

  13. Evaluation of measurement uncertainty of glucose in clinical chemistry.

    PubMed

    Berçik Inal, B; Koldas, M; Inal, H; Coskun, C; Gümüs, A; Döventas, Y

    2007-04-01

    The International Vocabulary of Basic and General Terms in Metrology (VIM) defines uncertainty of measurement as a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty of measurement comprises many components. In addition to every reported parameter, a measurement uncertainty should be stated by all accredited institutions, since this value shows the reliability of the measurement. The GUM (the ISO Guide to the Expression of Uncertainty in Measurement) provides guidance on uncertainty evaluation, and Eurachem/CITAC Guide CG4, published by the Eurachem/CITAC Working Group in 2000, likewise offers a mathematical model with which uncertainty can be calculated. There are two types of uncertainty evaluation: type A, the evaluation of uncertainty through statistical analysis, and type B, the evaluation of uncertainty through other means, for example a certified reference material. The Eurachem Guide uses four types of distribution functions: (1) a rectangular distribution, which gives limits ±a without specifying a level of confidence (u(x) = a/√3), e.g. from a certificate; (2) a triangular distribution, for values concentrated near the same point (u(x) = a/√6); (3) a normal distribution, in which an uncertainty is given in the form of a standard deviation s, a relative standard deviation of the mean s/√n, or a coefficient of variation CV% without specifying the distribution (a = certificate value, u = standard uncertainty); and (4) a confidence interval.
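
    The distribution rules quoted above translate directly into code. A minimal GUM/Eurachem-style sketch; the numeric values are illustrative, not from the paper:

```python
# Minimal sketch of the standard-uncertainty rules quoted above.
import math

# Type B, rectangular: limits ±a with no stated confidence level.
def u_rectangular(a):
    return a / math.sqrt(3)

# Type B, triangular: limits ±a, values near the centre more likely.
def u_triangular(a):
    return a / math.sqrt(6)

# Type A: standard uncertainty of a mean from n repeated measurements.
def u_type_a(s, n):
    return s / math.sqrt(n)

# Combine independent components in quadrature (root sum of squares).
def u_combined(*components):
    return math.sqrt(sum(u * u for u in components))

u_cal = u_rectangular(0.06)    # e.g. calibrator limits ±0.06 mmol/L
u_rep = u_type_a(0.05, 10)     # e.g. s = 0.05 mmol/L over 10 replicates
print(u_combined(u_cal, u_rep))
```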

  14. Identification of severe accident uncertainties

    SciTech Connect

    Rivard, J.B.; Behr, V.L.; Easterling, R.G.; Griesmeyer, J.M.; Haskin, F.E.; Hatch, S.W.; Kolaczkowski, A.M.; Lipinski, R.J.; Sherman, M.P.; Taig, A.R.

    1984-09-01

    Understanding of severe accidents in light-water reactors is currently beset with uncertainty. Because the uncertainties that are present limit the capability to analyze the progression and possible consequences of such accidents, they restrict the technical basis for regulatory actions by the US Nuclear Regulatory Commission (NRC). It is thus necessary to attempt to identify the sources and quantify the influence of these uncertainties. As a part of ongoing NRC severe-accident programs at Sandia National Laboratories, a working group was formed to pool relevant knowledge and experience in assessing the uncertainties attending present (1983) knowledge of severe accidents. This initial report of the Severe Accident Uncertainty Analysis (SAUNA) working group has as its main goal the identification of a consolidated list of uncertainties that affect in-plant processes and systems. Many uncertainties have been identified. A set of key uncertainties summarizes many of the identified uncertainties. Quantification of the influence of these uncertainties, a necessary second step, is not attempted in the present report, although attempts are made qualitatively to demonstrate the relevance of the identified uncertainties.

  15. Quantifying uncertainty in discharge measurements: A new approach

    USGS Publications Warehouse

    Kiang, J.E.; Cohn, T.A.; Mason, R.R.

    2009-01-01

    The accuracy of discharge measurements using velocity meters and the velocity-area method is typically assessed based on empirical studies that may not correspond to conditions encountered in practice. In this paper, a statistical approach for assessing uncertainty based on interpolated variance estimation (IVE) is introduced. The IVE method quantifies all sources of random uncertainty in the measured data. This paper presents results employing data from sites where substantial over-sampling allowed for the comparison of IVE-estimated uncertainty and observed variability among repeated measurements. These results suggest that the IVE approach can provide approximate estimates of measurement uncertainty. The use of IVE to estimate the uncertainty of a discharge measurement would provide the hydrographer with an immediate determination of uncertainty and help determine whether there is a need for additional sampling in problematic river cross sections. © 2009 ASCE.
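
    For context, the velocity-area method sums depth × width × mean velocity over verticals. The sketch below shows that computation with a simple first-order uncertainty propagation for comparison; it is not the IVE estimator itself, and all numbers are illustrative:

```python
# Minimal sketch of the velocity-area method with a simple first-order
# uncertainty propagation (not the IVE estimator).
import numpy as np

width = np.array([2.0, 2.0, 2.0, 2.0, 2.0])      # m, per vertical
depth = np.array([0.8, 1.4, 1.9, 1.5, 0.9])      # m
vel   = np.array([0.30, 0.55, 0.70, 0.60, 0.35]) # m/s

Q = np.sum(width * depth * vel)                  # discharge, m^3/s

# Assume 5% independent relative standard uncertainty on each
# vertical's velocity (illustrative) and propagate in quadrature.
u_sub = 0.05 * width * depth * vel
u_Q = np.sqrt(np.sum(u_sub**2))
print(f"Q = {Q:.2f} m^3/s +/- {u_Q:.2f}")
```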

  16. Patient autonomy and the challenge of clinical uncertainty.

    PubMed

    Parascandola, Mark; Hawkins, Jennifer; Danis, Marion

    2002-09-01

    Bioethicists have articulated an ideal of shared decision making between physician and patient, but in doing so the role of clinical uncertainty has not been adequately confronted. In the face of uncertainty about the patient's prognosis and the best course of treatment, many physicians revert to a model of nondisclosure and nondiscussion, thus closing off opportunities for shared decision making. Empirical studies suggest that physicians find it more difficult to adhere to norms of disclosure in situations where there is substantial uncertainty. They may be concerned that acknowledging their own uncertainty will undermine patient trust and create additional confusion and anxiety for the patient. We argue, in contrast, that effective disclosure will protect patient trust in the long run and that patients can manage information about uncertainty. In situations where there is substantial uncertainty, extra vigilance is required to ensure that patients are given the tools and information they need to participate in cooperative decision making about their care.

  17. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2013-01-01

    This paper presents extended forward sensitivity analysis as a method to aid uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method for quantifying numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time-step and spatial-step sensitivity information reflects global numerical errors. The discretization errors can then be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of the time and space steps alongside the other physical parameters of interest, the simulation can be run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results. The time- and space-step forward sensitivity analysis can also replace the traditional time step and grid convergence study at much lower computational cost. Two well-defined benchmark problems with manufactured solutions are used to demonstrate the method.
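
    A minimal sketch of the underlying forward sensitivity idea, for a scalar decay ODE with one physical parameter. The model and values are illustrative; the paper's extension additionally treats the time step itself as a sensitivity parameter:

```python
# Forward sensitivity for dy/dt = -k*y: the sensitivity s = dy/dk obeys
# the augmented equation ds/dt = (df/dy)*s + df/dk = -k*s - y.
import numpy as np
from scipy.integrate import solve_ivp

def augmented_rhs(t, state, k):
    y, s = state
    dy = -k * y            # model equation
    ds = -k * s - y        # forward sensitivity equation
    return [dy, ds]

k, y0 = 0.5, 1.0
sol = solve_ivp(augmented_rhs, (0.0, 5.0), [y0, 0.0], args=(k,),
                dense_output=True)
t = np.linspace(0.0, 5.0, 6)
y, s = sol.sol(t)

# Check against the analytic solution y = y0*exp(-k*t), dy/dk = -t*y.
print(np.max(np.abs(s - (-t * y))))  # should be small
```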

  18. Investment, regulation, and uncertainty

    PubMed Central

    Smyth, Stuart J; McDonald, Jillian; Falck-Zepeda, Jose

    2014-01-01

    As with any technological innovation, time refines the technology, improving upon the original version of the innovative product. The initial GM crops had single traits for either herbicide tolerance or insect resistance. Current varieties have both of these traits stacked together, and in many cases other abiotic and biotic traits have been stacked as well. This innovation requires investment. While this is relatively straightforward, certain conditions need to exist such that investments can be facilitated. The principal requirement for investment is that regulatory frameworks render consistent and timely decisions. If the certainty of regulatory outcomes weakens, the potential for changes in investment patterns increases. This article provides a summary background to the leading plant breeding technologies that are either currently being used to develop new crop varieties or are in the pipeline to be applied to plant breeding within the next few years. Challenges for existing regulatory systems are highlighted. Utilizing an option value approach from the investment literature, an assessment of uncertainty regarding the regulatory approval for these varying techniques is undertaken. This research highlights which technology development options have the greatest degree of uncertainty and hence which ones might be expected to see an investment decline. PMID:24499745

  19. The maintenance of uncertainty

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    Contents: Introduction; Preliminaries; State-space dynamics; Linearized dynamics of infinitesimal uncertainties; Instantaneous infinitesimal dynamics; Finite-time evolution of infinitesimal uncertainties; Lyapunov exponents and predictability; The Baker's apprentice map; Infinitesimals and predictability; Dimensions; The Grassberger-Procaccia algorithm; Towards a better estimate from Takens' estimators; Space-time-separation diagrams; Intrinsic limits to the analysis of geometry; Takens' theorem; The method of delays; Noise; Prediction, prophecy, and pontification; Introduction; Simulations, models and physics; Ground rules; Data-based models: dynamic reconstructions; Analogue prediction; Local prediction; Global prediction; Accountable forecasts of chaotic systems; Evaluating ensemble forecasts; The annulus; Prophecies; Aids for more reliable nonlinear analysis; Significant results: surrogate data, synthetic data and self-deception; Surrogate data and the bootstrap; Surrogate predictors: Is my model any good?; Hints for the evaluation of new techniques; Avoiding simple straw men; Feasibility tests for the identification of chaos; On detecting "tiny" data sets; Building models consistent with the observations; Cost functions; ι-shadowing: Is my model any good? (reprise); Casting infinitely long shadows (out-of-sample); Distinguishing model error and system sensitivity; Forecast error and model sensitivity; Accountability; Residual predictability; Deterministic or stochastic dynamics?; Using ensembles to distinguish the expectation from the expected; Numerical Weather Prediction; Probabilistic prediction with a deterministic model; The analysis; Constructing and interpreting ensembles; The outlook(s) for today; Conclusion; Summary

  20. Uncertainty in adaptive capacity

    NASA Astrophysics Data System (ADS)

    Adger, W. Neil; Vincent, Katharine

    2005-03-01

    The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. To cite this article: W.N. Adger, K. Vincent, C. R. Geoscience 337 (2005).

  1. Antarctic Photochemistry: Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Stewart, Richard W.; McConnell, Joseph R.

    1999-01-01

    Understanding the photochemistry of the Antarctic region is important for several reasons. Analysis of ice cores provides historical information on several species such as hydrogen peroxide and sulfur-bearing compounds. The former can potentially provide information on the history of oxidants in the troposphere, and the latter may shed light on DMS-climate relationships. Extracting such information requires that we be able to model the photochemistry of the Antarctic troposphere and relate atmospheric concentrations to deposition rates and sequestration in the polar ice. This paper deals with one aspect of the uncertainty inherent in photochemical models of the high-latitude troposphere: that arising from imprecision in the kinetic data used in the calculations. Such uncertainties in Antarctic models tend to be larger than those in models of mid- to low-latitude clean air. One reason is the lower temperatures, which result in increased imprecision in kinetic data, assumed to be best characterized at 298 K. Another is the inclusion of a DMS oxidation scheme in the present model; many of the rates in this scheme are less precisely known than the rates in the standard chemistry used in many stratospheric and tropospheric models.
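
    One common way to encode the growth of kinetic imprecision away from 298 K (used, for example, in the JPL kinetics evaluations) is a temperature-dependent uncertainty factor f(T) = f(298)·exp(g·|1/T − 1/298|). A minimal sketch with illustrative Arrhenius parameters, not the paper's DMS scheme:

```python
# Minimal sketch of why kinetic imprecision grows at Antarctic
# temperatures: expand the 298 K uncertainty factor with |1/T - 1/298|.
# All rate parameters below are illustrative assumptions.
import math

def uncertainty_factor(f298, g, T):
    """Temperature-dependent uncertainty factor for a rate constant."""
    return f298 * math.exp(g * abs(1.0 / T - 1.0 / 298.0))

def rate(A, E_over_R, T):
    """Arrhenius rate constant k = A * exp(-(E/R)/T)."""
    return A * math.exp(-E_over_R / T)

A, E_over_R = 1.0e-12, 1500.0      # cm^3 molecule^-1 s^-1, K (illustrative)
for T in (298.0, 240.0):           # room temperature vs. Antarctic air
    k = rate(A, E_over_R, T)
    f = uncertainty_factor(1.2, 100.0, T)
    print(f"T={T:.0f} K  k={k:.2e}  range=({k/f:.2e}, {k*f:.2e})")
```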

  2. Uncertainty in Wildfire Behavior

    NASA Astrophysics Data System (ADS)

    Finney, M.; Cohen, J. D.

    2013-12-01

    The challenge of predicting or modeling fire behavior is well recognized by scientists and managers who attempt predictions of fire spread rate or growth. At the scale of the spreading fire, the uncertainty in winds, moisture, fuel structure, and fire location makes accurate predictions difficult, and the non-linear response of fire spread to these conditions means that average behavior is poorly represented by average environmental parameters. Even more difficult are estimations of threshold behaviors (e.g., spread/no-spread, crown fire initiation, ember generation and spotting), because the fire responds as a step function to small changes in one or more environmental variables, translating to dynamical feedbacks and unpredictability. Recent research shows that ignition of fuel particles, itself a threshold phenomenon, depends on flame contact that is neither steady nor uniform. Recent studies of flame structure in both spreading and stationary fires reveal that much of the non-steadiness of the flames as they contact fuel particles results from buoyant instabilities that produce quasi-periodic flame structures. With fuel particle ignition produced by time-varying heating and short-range flame contact, future improvements in fire behavior modeling will likely require statistical approaches to deal with the uncertainty at all scales, including the level of heat transfer, the fuel arrangement, and the weather.

  3. Probabilistic Mass Growth Uncertainties

    NASA Technical Reports Server (NTRS)

    Plumer, Eric; Elliott, Darren

    2013-01-01

    Mass has been widely used as an input parameter for Cost Estimating Relationships (CERs) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of an ongoing NASA effort to publish mass growth datasheets for adjusting single-point Technical Baseline Estimates (TBE) of the masses of space instruments as well as spacecraft, for both Earth-orbiting and deep space missions, at various stages of a project's lifecycle. The paper also discusses the long-term strategy of NASA Headquarters for publishing similar results, using a variety of cost-driving metrics, on an annual basis. The analysis provides quantitative results that show decreasing mass growth uncertainties as mass estimate maturity increases, and is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.

  4. Bicep2. III. INSTRUMENTAL SYSTEMATICS

    SciTech Connect

    Ade, P. A. R.; Aikin, R. W.; Bock, J. J.; Brevik, J. A.; Filippini, J. P.; Golwala, S. R.; Hildebrandt, S. R.; Barkats, D.; Benton, S. J.; Bischoff, C. A.; Buder, I.; Karkare, K. S.; Bullock, E.; Dowell, C. D.; Duband, L.; Fliescher, S.; Halpern, M.; Hasselfield, M.; Hilton, G. C.; Irwin, K. D.; Collaboration: Bicep2 Collaboration; and others

    2015-12-01

    In a companion paper, we have reported a >5σ detection of degree scale B-mode polarization at 150 GHz by the Bicep2 experiment. Here we provide a detailed study of potential instrumental systematic contamination to that measurement. We focus extensively on spurious polarization that can potentially arise from beam imperfections. We present a heuristic classification of beam imperfections according to their symmetries and uniformities, and discuss how resulting contamination adds or cancels in maps that combine observations made at multiple orientations of the telescope about its boresight axis. We introduce a technique, which we call “deprojection,” for filtering the leading order beam-induced contamination from time-ordered data, and show that it reduces power in Bicep2's actual and null-test BB spectra consistent with predictions using high signal-to-noise beam shape measurements. We detail the simulation pipeline that we use to directly simulate instrumental systematics and the calibration data used as input to that pipeline. Finally, we present the constraints on BB contamination from individual sources of potential systematics. We find that systematics contribute BB power that is a factor of ∼10× below Bicep2's three-year statistical uncertainty, and negligible compared to the observed BB signal. The contribution to the best-fit tensor/scalar ratio is at a level equivalent to r = (3–6) × 10⁻³.
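
    The essence of deprojection is template regression: leading-order beam-mismatch leakage enters the pair-difference data as a fixed template with an unknown amplitude, which can be fit and subtracted. A toy sketch with synthetic signals, not the Bicep2 pipeline itself:

```python
# Toy deprojection sketch: the leakage appears as a known template
# (in practice built from the temperature sky and beam derivatives)
# times an unknown coefficient; fit it by least squares and subtract.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
signal = 0.1 * np.sin(np.linspace(0, 40 * np.pi, n))  # true polarization
template = rng.normal(size=n)       # stand-in for a beam-derivative template
leak_coeff = 0.03                   # unknown leakage amplitude
noise = 0.05 * rng.normal(size=n)

data = signal + leak_coeff * template + noise

# Least-squares fit of the template coefficient, then subtract it.
fitted = np.linalg.lstsq(template[:, None], data, rcond=None)[0][0]
cleaned = data - fitted * template

print(f"fitted coefficient: {fitted:.4f} (true {leak_coeff})")
print(f"residual leakage power: {np.var(cleaned - signal - noise):.2e}")
```

    Note that the fit also absorbs whatever part of the true signal happens to correlate with the template, so deprojection filters a small amount of signal along with the leakage.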

  5. BICEP2 III: Instrumental systematics

    SciTech Connect

    Ade, P. A. R.

    2015-11-23

    In a companion paper, we have reported a >5σ detection of degree scale B-mode polarization at 150 GHz by the Bicep2 experiment. Here we provide a detailed study of potential instrumental systematic contamination to that measurement. We focus extensively on spurious polarization that can potentially arise from beam imperfections. We present a heuristic classification of beam imperfections according to their symmetries and uniformities, and discuss how resulting contamination adds or cancels in maps that combine observations made at multiple orientations of the telescope about its boresight axis. We introduce a technique, which we call "deprojection," for filtering the leading order beam-induced contamination from time-ordered data, and show that it reduces power in Bicep2's actual and null-test BB spectra consistent with predictions using high signal-to-noise beam shape measurements. We detail the simulation pipeline that we use to directly simulate instrumental systematics and the calibration data used as input to that pipeline. Finally, we present the constraints on BB contamination from individual sources of potential systematics. We find that systematics contribute BB power that is a factor of ~10× below Bicep2's three-year statistical uncertainty, and negligible compared to the observed BB signal. Lastly, the contribution to the best-fit tensor/scalar ratio is at a level equivalent to r = (3–6) × 10⁻³.

  6. Bicep2 III: Instrumental Systematics

    NASA Astrophysics Data System (ADS)

    Bicep2 Collaboration; Ade, P. A. R.; Aikin, R. W.; Barkats, D.; Benton, S. J.; Bischoff, C. A.; Bock, J. J.; Brevik, J. A.; Buder, I.; Bullock, E.; Dowell, C. D.; Duband, L.; Filippini, J. P.; Fliescher, S.; Golwala, S. R.; Halpern, M.; Hasselfield, M.; Hildebrandt, S. R.; Hilton, G. C.; Irwin, K. D.; Karkare, K. S.; Kaufman, J. P.; Keating, B. G.; Kernasovskiy, S. A.; Kovac, J. M.; Kuo, C. L.; Leitch, E. M.; Lueker, M.; Netterfield, C. B.; Nguyen, H. T.; O'Brient, R.; Ogburn, R. W., IV; Orlando, A.; Pryke, C.; Richter, S.; Schwarz, R.; Sheehy, C. D.; Staniszewski, Z. K.; Sudiwala, R. V.; Teply, G. P.; Tolan, J. E.; Turner, A. D.; Vieregg, A. G.; Wong, C. L.; Yoon, K. W.

    2015-12-01

    In a companion paper, we have reported a >5σ detection of degree scale B-mode polarization at 150 GHz by the Bicep2 experiment. Here we provide a detailed study of potential instrumental systematic contamination to that measurement. We focus extensively on spurious polarization that can potentially arise from beam imperfections. We present a heuristic classification of beam imperfections according to their symmetries and uniformities, and discuss how resulting contamination adds or cancels in maps that combine observations made at multiple orientations of the telescope about its boresight axis. We introduce a technique, which we call “deprojection,” for filtering the leading order beam-induced contamination from time-ordered data, and show that it reduces power in Bicep2's actual and null-test BB spectra consistent with predictions using high signal-to-noise beam shape measurements. We detail the simulation pipeline that we use to directly simulate instrumental systematics and the calibration data used as input to that pipeline. Finally, we present the constraints on BB contamination from individual sources of potential systematics. We find that systematics contribute BB power that is a factor of ∼10× below Bicep2's three-year statistical uncertainty, and negligible compared to the observed BB signal. The contribution to the best-fit tensor/scalar ratio is at a level equivalent to r = (3–6) × 10⁻³.

  7. BICEP2 III: Instrumental systematics

    DOE PAGES

    Ade, P. A. R.

    2015-11-23

    In a companion paper, we have reported a >5σ detection of degree scale B-mode polarization at 150 GHz by the Bicep2 experiment. Here we provide a detailed study of potential instrumental systematic contamination to that measurement. We focus extensively on spurious polarization that can potentially arise from beam imperfections. We present a heuristic classification of beam imperfections according to their symmetries and uniformities, and discuss how resulting contamination adds or cancels in maps that combine observations made at multiple orientations of the telescope about its boresight axis. We introduce a technique, which we call "deprojection," for filtering the leading order beam-induced contamination from time-ordered data, and show that it reduces power in Bicep2's actual and null-test BB spectra consistent with predictions using high signal-to-noise beam shape measurements. We detail the simulation pipeline that we use to directly simulate instrumental systematics and the calibration data used as input to that pipeline. Finally, we present the constraints on BB contamination from individual sources of potential systematics. We find that systematics contribute BB power that is a factor of ~10× below Bicep2's three-year statistical uncertainty, and negligible compared to the observed BB signal. Lastly, the contribution to the best-fit tensor/scalar ratio is at a level equivalent to r = (3–6) × 10⁻³.

  8. Flight Departure Delay and Rerouting Under Uncertainty in En Route Convective Weather

    NASA Technical Reports Server (NTRS)

    Mukherjee, Avijit; Grabbe, Shon; Sridhar, Banavar

    2011-01-01

    Delays caused by uncertainty in weather forecasts can be reduced by improving traffic flow management decisions. This paper presents a methodology for traffic flow management under uncertainty in convective weather forecasts. An algorithm for assigning departure delays and reroutes to aircraft is presented. Departure delay and route assignment are executed at multiple stages, during which, updated weather forecasts and flight schedules are used. At each stage, weather forecasts up to a certain look-ahead time are treated as deterministic and flight scheduling is done to mitigate the impact of weather on four-dimensional flight trajectories. Uncertainty in weather forecasts during departure scheduling results in tactical airborne holding of flights. The amount of airborne holding depends on the accuracy of forecasts as well as the look-ahead time included in the departure scheduling. The weather forecast look-ahead time is varied systematically within the experiments performed in this paper to analyze its effect on flight delays. Based on the results, longer look-ahead times cause higher departure delays and additional flying time due to reroutes. However, the amount of airborne holding necessary to prevent weather incursions reduces when the forecast look-ahead times are higher. For the chosen day of traffic and weather, setting the look-ahead time to 90 minutes yields the lowest total delay cost.

  9. Uncertainty and error in complex plasma chemistry models

    NASA Astrophysics Data System (ADS)

    Turner, Miles M.

    2015-06-01

    Chemistry models that include dozens of species and hundreds to thousands of reactions are common in low-temperature plasma physics. The rate constants used in such models are uncertain, because they are obtained from some combination of experiments and approximate theories. Since the predictions of these models are a function of the rate constants, these predictions must also be uncertain. However, systematic investigations of the influence of uncertain rate constants on model predictions are rare to non-existent. In this work we examine a particular chemistry model, for helium-oxygen plasmas. This chemistry is of topical interest because of its relevance to biomedical applications of atmospheric pressure plasmas. We trace the primary sources for every rate constant in the model, and hence associate an error bar (or equivalently, an uncertainty) with each. We then use a Monte Carlo procedure to quantify the uncertainty in predicted plasma species densities caused by the uncertainty in the rate constants. Under the conditions investigated, the range of uncertainty in most species densities is a factor of two to five. However, the uncertainty can vary strongly for different species, over time, and with other plasma conditions. There are extreme (pathological) cases where the uncertainty is more than a factor of ten. One should therefore be cautious in drawing any conclusion from plasma chemistry modelling, without first ensuring that the conclusion in question survives an examination of the related uncertainty.
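
    A minimal sketch of the Monte Carlo procedure described here, applied to a toy two-reaction steady-state balance rather than the actual helium-oxygen chemistry; all rate constants, error bars, and densities are illustrative assumptions:

```python
# Sample each rate constant from a lognormal defined by its error bar,
# rerun the (toy) chemistry, and look at the spread of a predicted
# density. Not the paper's chemistry set.
import numpy as np

rng = np.random.default_rng(42)

k1_nom, k2_nom = 1.0e-10, 5.0e-16   # production / loss rate constants
f1, f2 = 2.0, 3.0                    # uncertainty factors (error bars)
n_e, n_g = 1.0e17, 1.0e24            # electron and gas densities, m^-3

def steady_state_density(k1, k2):
    # production k1*n_e*n_g balanced by loss k2*n*n_g  =>  n = k1*n_e/k2
    return k1 * n_e / k2

samples = []
for _ in range(10_000):
    # sigma = ln(f)/2 puts ~95% of draws within a factor f of nominal
    k1 = k1_nom * np.exp(rng.normal(0, np.log(f1) / 2))
    k2 = k2_nom * np.exp(rng.normal(0, np.log(f2) / 2))
    samples.append(steady_state_density(k1, k2))

lo, med, hi = np.percentile(samples, [2.5, 50, 97.5])
print(f"density spread: {hi/lo:.1f}x around median {med:.2e} m^-3")
```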

  10. Earthquake Loss Estimation Uncertainties

    NASA Astrophysics Data System (ADS)

    Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Aleksander

    2013-04-01

    The paper addresses the reliability issues of strong earthquakes loss assessment following strong earthquakes with worldwide Systems' application in emergency mode. Timely and correct action just after an event can result in significant benefits in saving lives. In this case the information about possible damage and expected number of casualties is very critical for taking decision about search, rescue operations and offering humanitarian assistance. Such rough information may be provided by, first of all, global systems, in emergency mode. The experience of earthquakes disasters in different earthquake-prone countries shows that the officials who are in charge of emergency response at national and international levels are often lacking prompt and reliable information on the disaster scope. Uncertainties on the parameters used in the estimation process are numerous and large: knowledge about physical phenomena and uncertainties on the parameters used to describe them; global adequacy of modeling techniques to the actual physical phenomena; actual distribution of population at risk at the very time of the shaking (with respect to immediate threat: buildings or the like); knowledge about the source of shaking, etc. Needless to be a sharp specialist to understand, for example, that the way a given building responds to a given shaking obeys mechanical laws which are poorly known (if not out of the reach of engineers for a large portion of the building stock); if a carefully engineered modern building is approximately predictable, this is far not the case for older buildings which make up the bulk of inhabited buildings. The way population, inside the buildings at the time of shaking, is affected by the physical damage caused to the buildings is not precisely known, by far. The paper analyzes the influence of uncertainties in strong event parameters determination by Alert Seismological Surveys, of simulation models used at all stages from, estimating shaking intensity

  11. Uncertainty relation in Schwarzschild spacetime

    NASA Astrophysics Data System (ADS)

    Feng, Jun; Zhang, Yao-Zhong; Gould, Mark D.; Fan, Heng

    2015-04-01

    We explore the entropic uncertainty relation in the curved background outside a Schwarzschild black hole, and find that Hawking radiation introduces a nontrivial modification of the uncertainty bound for a particular observer; it could therefore be witnessed experimentally through a proper uncertainty game. We first investigate an uncertainty game between a freely falling observer and his static partner holding a quantum memory initially entangled with the quantum system to be measured. Due to the information loss from Hawking decoherence, we find an inevitable increase of the uncertainty on the outcome of measurements in the view of the static observer, which depends on the mass of the black hole, the distance of the observer from the event horizon, and the mode frequency of the quantum memory. To illustrate the generality of this paradigm, we relate the entropic uncertainty bound to other uncertainty probes, e.g., time-energy uncertainty. In an alternative game between two static players, we show that the quantum information of a qubit can be transferred to quantum memory through a bath of fluctuating quantum fields outside the black hole. For a particular choice of initial state, we show that the Hawking decoherence cannot counteract entanglement generation after the dynamical evolution of the system, which triggers an effectively reduced uncertainty bound that violates the intrinsic limit −log₂ c. Numerical estimation for a proper choice of initial state shows that our result is comparable with possible real experiments. Finally, a discussion of the black hole firewall paradox in the context of the entropic uncertainty relation is given.
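
    For reference, the memory-assisted entropic uncertainty relation that underlies the "intrinsic limit −log₂ c" quoted above is usually written as follows (the Berta et al. 2010 form, stated here from the general literature rather than from this paper):

```latex
% Entropic uncertainty relation with quantum memory B; X and Z are two
% measurements on system A, and c is the maximal overlap of their bases.
S(X \mid B) + S(Z \mid B) \;\geq\; \log_2 \frac{1}{c} + S(A \mid B),
\qquad
c = \max_{i,j} \,\bigl|\langle x_i \mid z_j \rangle\bigr|^2 .
```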

  12. DNA systematics. Volume II

    SciTech Connect

    Dutta, S.K.

    1986-01-01

    This book discusses the following topics: PLANTS: PLANT DNA: Contents and Systematics. Repeated DNA Sequences and Polyploidy in Cereal Crops. Homology of Nonrepeated DNA Sequences in Phylogeny of Fungal Species. Chloroplast DNA and Phylogenetic Relationships. rDNA: Evolution Over a Billion Years. 23S rRNA-derived Small Ribosomal RNAs: Their Structure and Evolution with Reference to Plant Phylogeny. Molecular Analysis of Plant DNA Genomes: Conserved and Diverged DNA Sequences. A Critical Review of Some Terminologies Used for Additional DNA in Plant Chromosomes and Index.

  13. Uncertainties in stellar ages provided by grid techniques

    NASA Astrophysics Data System (ADS)

    Prada Moroni, P. G.; Valle, G.; Dell'Omodarme, M.; Degl'Innocenti, S.

    2016-09-01

    The determination of the ages of single stars by means of grid-based techniques is a well-established method. We discuss the impact on these estimates of the uncertainties in several ingredients routinely adopted in stellar computations. The systematic bias on age determination caused by varying the assumed initial helium abundance, the mixing-length and convective core overshooting parameters, and the microscopic diffusion is quantified and compared with the statistical error owing to the current uncertainty in the observations. The typical observational uncertainty accounts for a 1σ statistical relative error in age determination ranging on average from about -35% to +42%, depending on the mass. However, the relative error on the age strongly depends on the evolutionary phase and can be higher than 120% for stars near the zero-age main sequence, while it is typically about 20% or lower in the advanced main-sequence phase. A variation of ±1 in the helium-to-metal enrichment ratio induces a quite modest systematic bias on age estimates. The maximum bias due to the presence of convective core overshooting is -7% for β = 0.2 and -13% for β = 0.4. The main sources of bias are the uncertainty in the mixing-length value and the neglect of microscopic diffusion, each of which accounts for a bias comparable to the random error uncertainty.

  14. Uncertainty as Certainty

    NASA Astrophysics Data System (ADS)

    Petzinger, Tom

    I am trying to make money in the biotech industry from complexity science. And I am doing it with inspiration that I picked up on the edge of Appalachia, spending time with June Holley and ACEnet when I was a Wall Street Journal reporter. I took some of those ideas to Pittsburgh, in biotechnology, in a completely private setting with an economic development focus, but also with a mission to return profit to private capital. And we are doing that. I submit as a hypothesis, something we are figuring out in the post-industrial era, that business evolves. It is not the definition of business, but business critically involves the design of systems in which uncertainty is treated as a certainty. That is what I have seen and what I have tried to put into practice.

  15. Living with uncertainty

    SciTech Connect

    Rau, N.; Fong, C.C.; Grigg, C.H.; Silverstein, B.

    1994-11-01

    In the electric utility industry, only one thing can be guaranteed with absolute certainty: one lives and works with many unknowns. Thus, the industry has embraced probability methods to varying degrees over the last 25 years. These techniques aid decision makers in planning, operations, and maintenance by quantifying uncertainty. Examples include power system reliability, production costing simulation, and assessment of environmental factors. A series of brainstorming sessions was conducted by the Application of Probability Methods (APM) Subcommittee of the IEEE Power Engineering Society to identify research and development needs and to ask the question, "Where should we go from here?" The subcommittee examined areas of need in data development, applications, and methods for decision making. The purpose of this article is to share the findings of the APM members with a broader audience and to invite comments and participation.

  16. Sensitivity and Uncertainty Analysis to Burnup Estimates on ADS using the ACAB Code

    NASA Astrophysics Data System (ADS)

    Cabellos, O.; Sanz, J.; Rodríguez, A.; González, E.; Embid, M.; Alvarez, F.; Reyes, S.

    2005-05-01

    Within the scope of the Accelerator Driven System (ADS) concept for nuclear waste management applications, the burnup uncertainty estimates due to uncertainty in the activation cross sections (XSs) are important regarding both the safety and the efficiency of the waste burning process. We have applied both sensitivity analysis and Monte Carlo methodology to actinides burnup calculations in a lead-bismuth cooled subcritical ADS. The sensitivity analysis is used to identify the reaction XSs and the dominant chains that contribute most significantly to the uncertainty. The Monte Carlo methodology gives the burnup uncertainty estimates due to the synergetic/global effect of the complete set of XS uncertainties. These uncertainty estimates are valuable to assess the need of any experimental or systematic re-evaluation of some uncertainty XSs for ADS.
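
    A toy sketch of the Monte Carlo side of this approach: perturb cross sections within assumed error bars and propagate a two-nuclide transmutation chain through its analytic Bateman solution. The cross sections, flux, and irradiation time are illustrative, not ADS values:

```python
# Perturb activation cross sections and propagate a two-nuclide chain
# N1 -> N2 -> (loss) analytically; report the spread in N2.
import numpy as np

rng = np.random.default_rng(7)

phi = 1.0e15 * 1.0e4      # neutron flux, n/m^2/s (illustrative)
t = 3.0e7                 # irradiation time, s (~1 year)
s1_nom, s2_nom = 2.0e-28, 5.0e-28   # capture cross sections, m^2
rel_unc = 0.10            # 10% (1-sigma) relative uncertainty per XS

def n2_fraction(s1, s2):
    # Bateman solution with N1(0)=1, N2(0)=0
    l1, l2 = s1 * phi, s2 * phi
    return l1 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))

results = [n2_fraction(s1_nom * rng.lognormal(0, rel_unc),
                       s2_nom * rng.lognormal(0, rel_unc))
           for _ in range(20_000)]

print(f"N2/N1(0) = {np.mean(results):.4f} +/- {np.std(results):.4f}")
```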

  17. Sensitivity and Uncertainty Analysis to Burn-up Estimates on ADS Using ACAB Code

    SciTech Connect

    Cabellos, O; Sanz, J; Rodriguez, A; Gonzalez, E; Embid, M; Alvarez, F; Reyes, S

    2005-02-11

    Within the scope of the Accelerator Driven System (ADS) concept for nuclear waste management applications, the burnup uncertainty estimates due to uncertainty in the activation cross sections (XSs) are important regarding both the safety and the efficiency of the waste burning process. We have applied both sensitivity analysis and Monte Carlo methodology to actinides burnup calculations in a lead-bismuth cooled subcritical ADS. The sensitivity analysis is used to identify the reaction XSs and the dominant chains that contribute most significantly to the uncertainty. The Monte Carlo methodology gives the burnup uncertainty estimates due to the synergetic/global effect of the complete set of XS uncertainties. These uncertainty estimates are valuable to assess the need of any experimental or systematic reevaluation of some uncertainty XSs for ADS.

  18. Sensitivity and Uncertainty Analysis to Burnup Estimates on ADS using the ACAB Code

    SciTech Connect

    Cabellos, O.; Sanz, J.; Rodriguez, A.; Gonzalez, E.; Embid, M.; Alvarez, F.; Reyes, S.

    2005-05-24

    Within the scope of the Accelerator Driven System (ADS) concept for nuclear waste management applications, the burnup uncertainty estimates due to uncertainty in the activation cross sections (XSs) are important regarding both the safety and the efficiency of the waste burning process. We have applied both sensitivity analysis and Monte Carlo methodology to actinides burnup calculations in a lead-bismuth cooled subcritical ADS. The sensitivity analysis is used to identify the reaction XSs and the dominant chains that contribute most significantly to the uncertainty. The Monte Carlo methodology gives the burnup uncertainty estimates due to the synergetic/global effect of the complete set of XS uncertainties. These uncertainty estimates are valuable to assess the need of any experimental or systematic re-evaluation of some uncertainty XSs for ADS.

  19. ACCOUNTING FOR CALIBRATION UNCERTAINTIES IN X-RAY ANALYSIS: EFFECTIVE AREAS IN SPECTRAL FITTING

    SciTech Connect

    Lee, Hyunsook; Kashyap, Vinay L.; Drake, Jeremy J.; Ratzlaff, Pete; Siemiginowska, Aneta E-mail: vkashyap@cfa.harvard.edu E-mail: rpete@head.cfa.harvard.edu

    2011-04-20

    While considerable advances have been made in accounting for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have generally been ignored. This can be crucial to a proper interpretation of analysis results, because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here, we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.
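
    A minimal sketch of the principal-component summary described above: build an ensemble of plausible calibration curves, subtract the mean, and keep the leading SVD components. The curve shapes here are synthetic stand-ins, not Chandra effective areas:

```python
# Summarize an ensemble of plausible calibration curves with PCA (SVD).
import numpy as np

rng = np.random.default_rng(3)
energy = np.linspace(0.3, 8.0, 200)                           # keV grid
nominal = 600.0 * np.exp(-0.5 * ((energy - 1.5) / 1.2) ** 2)  # cm^2

# Ensemble of plausible calibrations: smooth perturbations of nominal.
n_samp = 500
perturb = (rng.normal(size=(n_samp, 3)) @
           np.vstack([np.ones_like(energy),
                      energy / 8.0,
                      np.sin(energy)])) * 5.0
curves = nominal + perturb

mean_curve = curves.mean(axis=0)
U, svals, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)

# Fraction of ensemble variance captured by the leading components;
# a plausible curve is then mean_curve plus a few scaled rows of Vt.
var = svals**2 / np.sum(svals**2)
print("variance captured:", np.cumsum(var)[:4])
```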

  20. The impact of uncertainty on shape optimization of idealized bypass graft models in unsteady flow

    NASA Astrophysics Data System (ADS)

    Sankaran, Sethuraman; Marsden, Alison L.

    2010-12-01

    It is well known that the fluid mechanics of bypass grafts impacts biomechanical responses and is linked to intimal thickening and plaque deposition on the vessel wall. In spite of this, quantitative information about the fluid mechanics is not currently incorporated into surgical planning and bypass graft design. In this work, we use a derivative-free optimization technique for performing systematic design of bypass grafts. The optimization method is coupled to a three-dimensional pulsatile Navier-Stokes solver. We systematically account for inevitable uncertainties that arise in cardiovascular simulations, owing to noise in medical image data, variable physiologic conditions, and surgical implementation. Uncertainties in the simulation input parameters as well as shape design variables are accounted for using the adaptive stochastic collocation technique. The derivative-free optimization framework is coupled with a stochastic response surface technique to make the problem computationally tractable. Two idealized numerical examples, an end-to-side anastomosis, and a bypass graft around a stenosis, demonstrate that accounting for uncertainty significantly changes the optimal graft design. Results show that small changes in the design variables from their optimal values should be accounted for in surgical planning. Changes in the downstream (distal) graft angle resulted in greater sensitivity of the wall-shear stress compared to changes in the upstream (proximal) angle. The impact of cost function choice on the optimal solution was explored. Additionally, this work represents the first use of the stochastic surrogate management framework method for robust shape optimization in a fully three-dimensional unsteady Navier-Stokes design problem.
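
    A minimal sketch of the stochastic collocation ingredient: evaluate the expensive model only at Gauss-Hermite quadrature nodes of the uncertain input and recover output statistics by quadrature. The quadratic "model" stands in for a Navier-Stokes solve, and the angle statistics are illustrative assumptions:

```python
# Stochastic collocation for one Gaussian input via probabilists'
# Gauss-Hermite quadrature.
import numpy as np

def model(angle):
    # stand-in for the wall-shear-stress output of a 3-D simulation
    return (angle - 35.0) ** 2 + 10.0

mu, sigma = 40.0, 3.0                      # uncertain graft angle (deg)
nodes, weights = np.polynomial.hermite_e.hermegauss(7)

vals = np.array([model(mu + sigma * z) for z in nodes])
w = weights / np.sqrt(2.0 * np.pi)          # normalize to a unit Gaussian

mean = np.sum(w * vals)
var = np.sum(w * vals**2) - mean**2
print(f"mean = {mean:.3f}, std = {np.sqrt(var):.3f}")
```

    With seven nodes the quadrature is exact for this polynomial test model (mean 44.0), which is why collocation can replace thousands of Monte Carlo solves when the input dimension is small.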

  1. Managing uncertainty in collaborative robotics engineering projects: The influence of task structure and peer interaction

    NASA Astrophysics Data System (ADS)

    Jordan, Michelle

    Uncertainty is ubiquitous in life, and learning is an activity particularly likely to be fraught with uncertainty. Previous research suggests that students and teachers struggle in their attempts to manage the psychological experience of uncertainty, and that students often fail to experience uncertainty when uncertainty may be warranted. Yet few educational researchers have explicitly and systematically observed what students do, their behaviors and strategies, as they attempt to manage the uncertainty they experience during academic tasks. In this study I investigated how students in one fifth-grade class managed the uncertainty they experienced while engaged in collaborative robotics engineering projects, focusing particularly on how uncertainty management was influenced by task structure and students' interactions with their peer collaborators. The study was initiated at the beginning of instruction related to robotics engineering and proceeded through the completion of several long-term collaborative robotics projects, one of which was a design project. I relied primarily on naturalistic observation of group sessions, semi-structured interviews, and collection of artifacts. My data analysis was inductive and interpretive, using qualitative discourse analysis techniques and methods of grounded theory. Three theoretical frameworks influenced the conception and design of this study: community of practice, distributed cognition, and complex adaptive systems theory. Uncertainty was a pervasive experience for the students collaborating in this instructional context. Students experienced uncertainty related to the project activity and uncertainty related to the social system as they collaborated to fulfill the requirements of their robotics engineering projects. They managed their uncertainty through a diverse set of tactics for reducing, ignoring, maintaining, and increasing uncertainty. Students experienced uncertainty from more different sources and used more and

  2. Parameter uncertainty for ASP models

    SciTech Connect

    Knudsen, J.K.; Smith, C.L.

    1995-10-01

    The steps involved in incorporating parameter uncertainty into the Nuclear Regulatory Commission (NRC) accident sequence precursor (ASP) models are covered in this paper. Three different uncertainty distributions (lognormal, beta, and gamma) were evaluated to determine the most appropriate distribution; from this evaluation, it was determined that the lognormal distribution will be used for the ASP models' uncertainty parameters. Selection of the uncertainty parameters for the basic events is also discussed. The paper covers the process of determining uncertainty parameters for the supercomponent basic events (i.e., basic events that comprise more than one component, each of which can have more than one failure mode) that are utilized in the ASP models. Once this is completed, the ASP model is ready to be used to propagate parameter uncertainty for event assessments.
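
    The lognormal choice above is commonly parameterized in probabilistic risk assessment by a median and an error factor EF = p95/p50. A minimal sketch; the numbers are illustrative, not ASP model values:

```python
# Lognormal basic-event distribution from a median and an error factor.
import math

def lognormal_params(median, error_factor):
    mu = math.log(median)
    sigma = math.log(error_factor) / 1.645   # 95th-percentile z-score
    return mu, sigma

mu, sigma = lognormal_params(median=1.0e-3, error_factor=10.0)
p05 = math.exp(mu - 1.645 * sigma)
p95 = math.exp(mu + 1.645 * sigma)
mean = math.exp(mu + 0.5 * sigma**2)
print(f"5th={p05:.1e}  95th={p95:.1e}  mean={mean:.1e}")
```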

  3. Impact of discharge data uncertainty on nutrient load uncertainty

    NASA Astrophysics Data System (ADS)

    Westerberg, Ida; Gustavsson, Hanna; Sonesten, Lars

    2016-04-01

    Uncertainty in the rating-curve model of the stage-discharge relationship leads to uncertainty in discharge time series. These uncertainties in turn affect many other analyses based on discharge data, such as nutrient load estimations. It is important to understand how large the impact of discharge data uncertainty is on such analyses, since they are often used as the basis to take important environmental management decisions. In the Baltic Sea basin, nutrient load estimates from river mouths are a central information basis for managing and reducing eutrophication in the Baltic Sea. In this study we investigated rating curve uncertainty and its propagation to discharge data uncertainty and thereafter to uncertainty in the load of phosphorous and nitrogen for twelve Swedish river mouths. We estimated rating curve uncertainty using the Voting Point method, which accounts for random and epistemic errors in the stage-discharge relation and allows drawing multiple rating-curve realisations consistent with the total uncertainty. We sampled 40,000 rating curves, and for each sampled curve we calculated a discharge time series from 15-minute water level data for the period 2005-2014. Each discharge time series was then aggregated to daily scale and used to calculate the load of phosphorous and nitrogen from linearly interpolated monthly water samples, following the currently used methodology for load estimation. Finally the yearly load estimates were calculated and we thus obtained distributions with 40,000 load realisations per year - one for each rating curve. We analysed how the rating curve uncertainty propagated to the discharge time series at different temporal resolutions, and its impact on the yearly load estimates. Two shorter periods of daily water quality sampling around the spring flood peak allowed a comparison of load uncertainty magnitudes resulting from discharge data with those resulting from the monthly water quality sampling.
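
    A minimal sketch of the propagation chain described here: sample power-law rating curves Q = a(h − h0)^b, convert a stage series to discharge with each sample, and combine with a (here constant) concentration to get a load distribution. This is a simplified stand-in, not the Voting Point method itself, and all parameter values and their spreads are illustrative:

```python
# Propagate rating-curve parameter uncertainty to an annual load.
import numpy as np

rng = np.random.default_rng(11)

# One year of synthetic 15-minute stage data (m).
h = 1.0 + 0.5 * np.abs(np.sin(np.linspace(0, 8 * np.pi, 35_040)))

conc = 0.05e-3  # phosphorus concentration, kg/m^3 (assumed constant)
dt = 15 * 60    # sampling interval, s

loads = []
for _ in range(2_000):
    a = rng.normal(5.0, 0.5)       # sampled rating-curve parameters
    b = rng.normal(1.8, 0.1)
    h0 = rng.normal(0.2, 0.05)
    Q = a * np.clip(h - h0, 0.0, None) ** b     # discharge, m^3/s
    loads.append(np.sum(Q * conc * dt))         # kg/year

lo, med, hi = np.percentile(loads, [2.5, 50, 97.5])
print(f"annual P load: {med:.0f} kg (95% CI {lo:.0f}-{hi:.0f})")
```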

  4. Using data assimilation for systematic model improvement

    NASA Astrophysics Data System (ADS)

    Lang, Matthew S.; van Leeuwen, Peter Jan; Browne, Phil

    2016-04-01

    In Numerical Weather Prediction, parameterisations are used to simulate missing physics in the model. These can be due to a lack of scientific understanding or a lack of computing power available to address all the known physical processes. Parameterisations are sources of large uncertainty in a model, as the parameter values used in them cannot be measured directly and hence are often not well known, and the parameterisations themselves are approximations of the processes present in the true atmosphere. Whilst there are many efficient and effective methods for combined state/parameter estimation in data assimilation, such as state augmentation, these are not effective at estimating the structure of parameterisations. A new method of parameterisation estimation is proposed that uses sequential data assimilation methods to estimate errors in the numerical models at each space-time point for each model equation. These errors are then fitted to predetermined functional forms of missing physics or parameterisations that are based upon prior information. The method picks out the functional form, or the combination of functional forms, that best fits the error structure. The prior information typically takes the form of expert knowledge. We applied the method to a one-dimensional advection model with additive model error, and it is shown that the method can accurately estimate parameterisations, with consistent error estimates. It is also demonstrated that state augmentation is not successful. The results indicate that this new method is a powerful tool in systematic model improvement.

  5. Reviewing the literature, how systematic is systematic?

    PubMed

    MacLure, Katie; Paudyal, Vibhu; Stewart, Derek

    2016-06-01

    Introduction Professor Archibald Cochrane, after whom the Cochrane Collaboration is named, was influential in promoting evidence-based clinical practice. He called for "relevant, valid research" to underpin all aspects of healthcare. Systematic reviews of the literature are regarded as a high quality source of cumulative evidence but it is unclear how truly systematic they, or other review articles, are or 'how systematic is systematic?' Today's evidence-based review industry is a burgeoning mix of specialist terminology, collaborations and foundations, databases, portals, handbooks, tools, criteria and training courses. Aim of the review This study aims to identify uses and types of reviews, key issues in planning, conducting, reporting and critiquing reviews, and factors which limit claims to be systematic. Method A rapid review of review articles published in IJCP. Results This rapid review identified 17 review articles published in IJCP between 2010 and 2015 inclusive. It explored the use of different types of review article, the variation and widely available range of guidelines, checklists and criteria which, through systematic application, aim to promote best practice. It also identified common pitfalls in endeavouring to conduct reviews of the literature systematically. Discussion Although a limited set of IJCP reviews were identified, there is clear evidence of the variation in adoption and application of systematic methods. The burgeoning evidence industry offers the tools and guidelines required to conduct systematic reviews, and other types of review, systematically. This rapid review was limited to the database of one journal over a period of 6 years. Although this review was conducted systematically, it is not presented as a systematic review. Conclusion As a research community we have yet to fully engage with readily available guidelines and tools which would help to avoid the common pitfalls. Therefore the question remains, of not just IJCP but

  6. Scientific uncertainty and its relevance to science education

    NASA Astrophysics Data System (ADS)

    Ruggeri, Nancy Lee

    Uncertainty is inherent to scientific methods and practices, yet it is rarely explicitly discussed in science classrooms. Ironically, science is often equated with certainty in these contexts. Uncertainties that arise in science deserve special attention, as they are increasingly a part of public discussions and are susceptible to manipulation. Clarifying what is meant by scientific uncertainty would include identifying sources of uncertainty in scientific practice, and would help provide an instructional framework for understanding how scientists use methods, data, and models to justify claims about the natural world. This research introduces both a general typology of scientific uncertainty informed by a review of literature from a variety of perspectives, and two additional typologies that emerged from qualitative studies examining student essays about scientific uncertainty in two disciplinary contexts: biological evolution and global climate change. These typologies aim to provide leverage for curricular discussions about scientific knowledge and practices, and to help instructors interested in integrating scientific uncertainty into teaching these subjects. In particular, a focus on uncertainties in data and models can illustrate their integral relationship and can spark critical discussions about the methods used to collect empirical data and the models used to explain them and make predictions. This research builds a case for integrating scientific uncertainty into science teaching and emphasizing its importance for understanding the practice of science within particular disciplinary contexts.

  7. Assessing Groundwater Model Uncertainty for the Central Nevada Test Area

    SciTech Connect

    Greg Pohll; Karl Pohlmann; Ahmed Hassan; Jenny Chapman; Todd Mihevc

    2002-06-14

    The purpose of this study is to quantify the flow and transport model uncertainty for the Central Nevada Test Area (CNTA). Six parameters were identified as uncertain, including the specified head boundary conditions used in the flow model, the spatial distribution of the underlying welded tuff unit, effective porosity, sorption coefficients, matrix diffusion coefficient, and the geochemical release function which describes nuclear glass dissolution. The parameter uncertainty was described by assigning prior statistical distributions to each of these parameters. Standard Monte Carlo techniques were used to sample from the parameter distributions to determine the full prediction uncertainty. Additional analysis was performed to determine the most cost-beneficial characterization activities. The maximum radius of the tritium and strontium-90 contaminant boundary was used as the output metric for evaluation of prediction uncertainty. The results indicate that combining all of the uncertainty in the parameters listed above propagates to a prediction uncertainty in the maximum radius of the contaminant boundary of 234 to 308 m and 234 to 302 m, for tritium and strontium-90, respectively. Although the uncertainty in the input parameters is large, the prediction uncertainty in the contaminant boundary is relatively small, primarily because the transport velocities are so small that large changes in the uncertain input parameters cause only small changes in the contaminant boundary. This suggests that the model is suitable in terms of predictive capability for the contaminant boundary delineation.
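
    A schematic Python version of the sampling step, with stand-in priors and a toy algebraic response in place of the actual flow and transport model (which is what makes this a sketch rather than the CNTA calculation):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 5000

        # Stand-in prior distributions for the six uncertain inputs (ranges
        # are purely illustrative, not the calibrated CNTA priors).
        head = rng.uniform(-2.0, 2.0, n)            # boundary-head perturbation (m)
        tuff = rng.uniform(0.0, 1.0, n)             # welded-tuff extent indicator
        porosity = rng.uniform(0.01, 0.30, n)       # effective porosity
        kd = rng.lognormal(np.log(5.0), 0.5, n)     # sorption coefficient (mL/g)
        dm = rng.lognormal(np.log(1e-10), 0.3, n)   # matrix diffusion (m2/s)
        release = rng.uniform(0.5, 1.5, n)          # glass-dissolution multiplier

        # Toy response mapping inputs to a contaminant-boundary radius (m);
        # the real study runs the flow/transport model for each sample.
        radius = (250.0 * release * (1.0 + 0.05 * head) * (1.0 + 0.1 * tuff)
                  / ((1.0 + kd / 10.0) * (porosity / 0.1) ** 0.2
                     * (dm / 1e-10) ** 0.05))

        lo, hi = np.percentile(radius, [2.5, 97.5])
        print(f"predicted boundary radius, 95% interval: {lo:.0f}-{hi:.0f} m")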

  8. Uncertainty analysis of thermoreflectance measurements

    NASA Astrophysics Data System (ADS)

    Yang, Jia; Ziade, Elbara; Schmidt, Aaron J.

    2016-01-01

    We derive a generally applicable formula to calculate the precision of multi-parameter measurements that apply least squares algorithms. This formula, which accounts for experimental noise and uncertainty in the controlled model parameters, is then used to analyze the uncertainty of thermal property measurements with pump-probe thermoreflectance techniques. We compare the uncertainty of time domain thermoreflectance and frequency domain thermoreflectance (FDTR) when measuring bulk materials and thin films, considering simultaneous measurements of various combinations of thermal properties, including thermal conductivity, heat capacity, and thermal boundary conductance. We validate the uncertainty analysis using Monte Carlo simulations on data from FDTR measurements of an 80 nm gold film on fused silica.
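
    The general structure of such a precision formula can be illustrated with a toy two-parameter model: the fitted-parameter covariance combines the familiar least-squares term sigma_y^2 (J^T J)^-1 with a term propagating the uncertainty of a "controlled" (fixed but imperfectly known) model parameter. This mirrors the structure described above under stated assumptions; it is not the paper's exact expression.

        import numpy as np

        def f(x, p, c):
            # Toy model: fitted amplitude/time-constant plus a controlled slope.
            return p[0] * np.exp(-x / p[1]) + c[0] * x

        x = np.linspace(0.0, 5.0, 200)
        p = np.array([1.0, 2.0])           # fitted parameters
        c = np.array([0.1])                # controlled parameter, known to 5%
        sigma_y, sigma_c = 0.01, 0.05 * c  # noise level, controlled uncertainty

        eps = 1e-6
        J = np.column_stack([(f(x, p + eps * np.eye(2)[i], c) - f(x, p, c)) / eps
                             for i in range(2)])               # df/dp
        Jc = ((f(x, p, c + eps) - f(x, p, c)) / eps)[:, None]  # df/dc

        JTJinv = np.linalg.inv(J.T @ J)
        A = JTJinv @ J.T @ Jc              # sensitivity of the fit to c
        C = sigma_y**2 * JTJinv + A @ np.diag(sigma_c**2) @ A.T
        print("1-sigma parameter uncertainties:", np.sqrt(np.diag(C)))

    A Monte Carlo check of the same quantities - refitting synthetic noisy data with perturbed controlled parameters - is the kind of validation the abstract describes for the FDTR measurements.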

  9. Uncertainty analysis of thermoreflectance measurements.

    PubMed

    Yang, Jia; Ziade, Elbara; Schmidt, Aaron J

    2016-01-01

    We derive a generally applicable formula to calculate the precision of multi-parameter measurements that apply least squares algorithms. This formula, which accounts for experimental noise and uncertainty in the controlled model parameters, is then used to analyze the uncertainty of thermal property measurements with pump-probe thermoreflectance techniques. We compare the uncertainty of time domain thermoreflectance and frequency domain thermoreflectance (FDTR) when measuring bulk materials and thin films, considering simultaneous measurements of various combinations of thermal properties, including thermal conductivity, heat capacity, and thermal boundary conductance. We validate the uncertainty analysis using Monte Carlo simulations on data from FDTR measurements of an 80 nm gold film on fused silica.

  10. Uncertainty in Air Quality Modeling.

    NASA Astrophysics Data System (ADS)

    Fox, Douglas G.

    1984-01-01

    Under the direction of the AMS Steering Committee for the EPA Cooperative Agreement on Air Quality Modeling, a small group of scientists convened to consider the question of uncertainty in air quality modeling. Because the group was particularly concerned with the regulatory use of models, its discussion focused on modeling tall-stack, point-source emissions. The group agreed that air quality model results should be viewed as containing both reducible error and inherent uncertainty. Reducible error results from improper or inadequate meteorological and air quality data inputs, and from inadequacies in the models. Inherent uncertainty results from the basic stochastic nature of the turbulent atmospheric motions that are responsible for transport and diffusion of released materials. Modelers should acknowledge that all their predictions to date contain some associated uncertainty and should also strive to quantify it. How can the uncertainty be quantified? There was no consensus within the group as to precisely how uncertainty should be calculated. One subgroup, which addressed statistical procedures, suggested that uncertainty information could be obtained from comparisons of observations and predictions. Following recommendations from a previous AMS workshop on performance evaluation (Fox, 1981), the subgroup suggested construction of probability distribution functions from the differences between observations and predictions. Further, they recommended that relatively new computer-intensive statistical procedures be considered to improve the quality of uncertainty estimates for the extreme-value statistics of interest in regulatory applications. A second subgroup, which addressed the basic nature of uncertainty in a stochastic system, also recommended that uncertainty be quantified by consideration of the differences between observations and predictions. They suggested that the average of the difference squared was appropriate to isolate the inherent uncertainty that
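
    The statistical subgroup's suggestion - building empirical distributions from observation-prediction differences - can be sketched in a few lines of Python on synthetic paired data:

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic paired observations and model predictions of ground-level
        # concentrations (arbitrary units); real inputs would be monitoring data.
        obs = rng.lognormal(3.0, 0.5, 500)
        pred = obs * rng.lognormal(0.0, 0.3, 500)

        d = obs - pred
        print(f"mean difference (bias): {d.mean():6.1f}")
        print(f"mean squared difference: {np.mean(d**2):6.1f}")
        # Empirical distribution of differences, per the subgroup's suggestion:
        for q in (5, 25, 50, 75, 95):
            print(f"  {q:2d}th percentile: {np.percentile(d, q):7.1f}")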

  11. Uncertainties in the Deprojection of the Observed Bar Properties

    NASA Astrophysics Data System (ADS)

    Zou, Yanfei; Shen, Juntai; Li, Zhao-Yu

    2014-08-01

    In observations, it is important to deproject the two fundamental quantities characterizing a bar, i.e., its length (a) and ellipticity (e), to face-on values before any careful analyses. However, a systematic estimation of the uncertainties of the commonly used deprojection methods is still lacking. Simulated galaxies are well suited to such a study. We project two simulated barred galaxies onto a two-dimensional (2D) plane with different bar orientations and disk inclination angles (i). Bar properties are measured and deprojected with the popular deprojection methods in the literature. Generally speaking, deprojection uncertainties increase with increasing i. All of the deprojection methods behave badly when i is larger than 60°, due to the vertical thickness of the bar. Thus, future statistical studies of barred galaxies should exclude galaxies more inclined than 60°. At moderate inclination angles (i <= 60°), 2D deprojection methods (analytical and image stretching) and Fourier-based methods (Fourier decomposition and bar-interbar contrast) perform reasonably well with uncertainties ~10% in both the bar length and ellipticity, whereas the uncertainties of the one-dimensional (1D) analytical deprojection can be as high as 100% in certain extreme cases. We find that different bar measurement methods show systematic differences in the deprojection uncertainties. We further discuss the deprojection uncertainty factors with emphasis on the most important one, i.e., the three-dimensional structure of the bar itself. We construct two triaxial toy bar models that can qualitatively reproduce the results of the 1D and 2D analytical deprojections; they confirm that the vertical thickness of the bar is the main source of uncertainties.
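
    For reference, the 1D analytical deprojection discussed above has a standard closed form for an infinitesimally thin bar, obtained by stretching the component perpendicular to the line of nodes by 1/cos i (in LaTeX notation):

        a_{\mathrm{int}} \;=\; a_{\mathrm{obs}}\,
          \sqrt{\cos^{2}\Delta\phi \;+\; \frac{\sin^{2}\Delta\phi}{\cos^{2} i}}

    where \Delta\phi is the angle between the observed bar major axis and the disk line of nodes, and i is the disk inclination. The thin-bar assumption is precisely what the bar's vertical thickness violates, consistent with the large 1D errors found at high i.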

  12. Uncertainties in the deprojection of the observed bar properties

    SciTech Connect

    Zou, Yanfei; Shen, Juntai; Li, Zhao-Yu

    2014-08-10

    In observations, it is important to deproject the two fundamental quantities characterizing a bar, i.e., its length (a) and ellipticity (e), to face-on values before any careful analyses. However, a systematic estimation of the uncertainties of the commonly used deprojection methods is still lacking. Simulated galaxies are well suited to such a study. We project two simulated barred galaxies onto a two-dimensional (2D) plane with different bar orientations and disk inclination angles (i). Bar properties are measured and deprojected with the popular deprojection methods in the literature. Generally speaking, deprojection uncertainties increase with increasing i. All of the deprojection methods behave badly when i is larger than 60°, due to the vertical thickness of the bar. Thus, future statistical studies of barred galaxies should exclude galaxies more inclined than 60°. At moderate inclination angles (i ≤ 60°), 2D deprojection methods (analytical and image stretching) and Fourier-based methods (Fourier decomposition and bar-interbar contrast) perform reasonably well with uncertainties ∼10% in both the bar length and ellipticity, whereas the uncertainties of the one-dimensional (1D) analytical deprojection can be as high as 100% in certain extreme cases. We find that different bar measurement methods show systematic differences in the deprojection uncertainties. We further discuss the deprojection uncertainty factors with emphasis on the most important one, i.e., the three-dimensional structure of the bar itself. We construct two triaxial toy bar models that can qualitatively reproduce the results of the 1D and 2D analytical deprojections; they confirm that the vertical thickness of the bar is the main source of uncertainties.

  13. A review of uncertainty visualization within the IPCC reports

    NASA Astrophysics Data System (ADS)

    Nocke, Thomas; Reusser, Dominik; Wrobel, Markus

    2015-04-01

    Results derived from climate model simulations confront non-expert users with a variety of uncertainties. This creates the challenge of communicating the scientific information so that it is easily understood while the complexity of the underlying science is still conveyed. With respect to the assessment reports of the IPCC, the situation is even more complicated, because heterogeneous sources and multiple types of uncertainties need to be compiled together. Within this work, we systematically (1) analyzed the visual representation of uncertainties in the IPCC AR4 and AR5 reports, and (2) administered a questionnaire to evaluate how different user groups, such as decision-makers and teachers, understand these uncertainty visualizations. In the first step, we classified visual uncertainty metaphors for spatial, temporal and abstract representations. As a result, we clearly identified a high complexity of the IPCC visualizations compared to standard presentation graphics, sometimes even integrating two or more uncertainty classes/measures together with the "certain" (mean) information. Further, we identified complex written uncertainty explanations within image captions, even within the summary reports for policy makers. In the second step, based on these observations, we designed a questionnaire to investigate how non-climate experts understand these visual representations of uncertainties, how visual uncertainty coding might hinder the perception of the "non-uncertain" data, and whether alternatives to certain IPCC visualizations exist. Within the talk/poster, we will present first results from this questionnaire. Summarizing, we identified a clear trend towards complex images within the latest IPCC reports, with a tendency to incorporate as much information as possible into the visual representations, resulting in proprietary, non-standard graphic representations that are not necessarily easy to comprehend at a glance. We conclude that

  14. Uncertainties on lung doses from inhaled plutonium.

    PubMed

    Puncher, Matthew; Birchall, Alan; Bull, Richard K

    2011-10-01

    In a recent epidemiological study, Bayesian uncertainties on lung doses have been calculated to determine lung cancer risk from occupational exposures to plutonium. These calculations used a revised version of the Human Respiratory Tract Model (HRTM) published by the ICRP. In addition to the Bayesian analyses, which give probability distributions of doses, point estimates of doses (single estimates without uncertainty) were also provided for that study using the existing HRTM as it is described in ICRP Publication 66; these are to be used in a preliminary analysis of risk. To illustrate the differences between the point estimates and the Bayesian uncertainty analyses, this paper applies the methodology to former workers of the United Kingdom Atomic Energy Authority (UKAEA), who constituted a subset of the study cohort. The resulting probability distributions of lung doses are compared with the point estimates obtained for each worker. It is shown that mean posterior lung doses are around two- to fourfold higher than the point estimates and that uncertainties on doses vary over a wide range, greater than two orders of magnitude for some lung tissues. In addition, we demonstrate that uncertainties in the parameter values, rather than in the model structure, are largely responsible for these effects. Of these, it appears to be the parameters describing absorption from the lungs to blood that have the greatest impact on estimates of lung doses from urine bioassay. Therefore, accurate determination of the chemical form of inhaled plutonium and of the absorption parameter values for these materials is important for obtaining reliable estimates of lung doses, and hence risk, from occupational exposures to plutonium.

  15. Analysis of Infiltration Uncertainty

    SciTech Connect

    R. McCurley

    2003-10-27

    The primary objectives of this uncertainty analysis are: (1) to develop and justify a set of uncertain parameters along with associated distributions; and (2) to use the developed uncertain parameter distributions and the results from selected analog site calculations done in ''Simulation of Net Infiltration for Modern and Potential Future Climates'' (USGS 2001 [160355]) to obtain the net infiltration weighting factors for the glacial transition climate. These weighting factors are applied to unsaturated zone (UZ) flow fields in Total System Performance Assessment (TSPA), as outlined in the ''Total System Performance Assessment-License Application Methods and Approach'' (BSC 2002 [160146], Section 3.1), as a method for the treatment of uncertainty. This report is a scientific analysis because no new mathematical or physical models are developed herein, and it is based on the use of the models developed in or for ''Simulation of Net Infiltration for Modern and Potential Future Climates'' (USGS 2001 [160355]). Any use of the term model refers to those developed in the infiltration numerical model report. TSPA License Application (LA) has included three distinct climate regimes in the comprehensive repository performance analysis for Yucca Mountain: present-day, monsoon, and glacial transition. Each climate regime was characterized using three infiltration-rate maps, including a lower- and upper-bound and a mean value (equal to the average of the two boundary values). For each of these maps, which were obtained based on analog site climate data, a spatially averaged value was also calculated by the USGS. For a more detailed discussion of these infiltration-rate maps, see ''Simulation of Net Infiltration for Modern and Potential Future Climates'' (USGS 2001 [160355]). For this Scientific Analysis Report, spatially averaged values were calculated for the lower-bound, mean, and upper-bound climate analogs only for the glacial transition climate regime, within the

  16. Identifying sources of uncertainty using covariance analysis

    NASA Astrophysics Data System (ADS)

    Hyslop, N. P.; White, W. H.

    2010-12-01

    Atmospheric aerosol monitoring often includes performing multiple analyses on a collected sample. Some common analyses resolve suites of elements or compounds (e.g., spectrometry, chromatography). Concentrations are determined through multi-step processes involving sample collection, physical or chemical analysis, and data reduction. Uncertainties in the individual steps propagate into uncertainty in the calculated concentration. The assumption in most treatments of measurement uncertainty is that errors in the various species concentrations measured in a sample are random and therefore independent of each other. This assumption is often not valid in speciated aerosol data because some errors can be common to multiple species. For example, an error in the sample volume will introduce a common error into all species concentrations determined in the sample, and these errors will correlate with each other. Measurement programs often use paired (collocated) measurements to characterize the random uncertainty in their measurements. Suites of paired measurements provide an opportunity to go beyond the characterization of measurement uncertainties in individual species to examine correlations amongst the measurement uncertainties in multiple species. This additional information can be exploited to distinguish sources of uncertainty that affect all species from those that only affect certain subsets or individual species. Data from the Interagency Monitoring of Protected Visual Environments (IMPROVE) program are used to illustrate these ideas. Nine analytes commonly detected in the IMPROVE network were selected for this analysis. The errors in these analytes can be reasonably modeled as multiplicative, and the natural log of the ratio of concentrations measured on the two samplers provides an approximation of the error. Figure 1 shows the covariation of these log ratios among the different analytes for one site. Covariance is strongest amongst the dust elements (Fe, Ca, and
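
    The idea of reading error structure out of collocated pairs can be sketched in Python: a shared (volume-like) error plus independent analytical errors are planted in synthetic data, and the correlation matrix of the log ratios recovers the shared component. The error magnitudes and analyte list are invented for illustration, not IMPROVE values.

        import numpy as np

        rng = np.random.default_rng(3)
        n, analytes = 300, ["Fe", "Ca", "Si", "S", "K"]

        true = rng.lognormal(1.0, 0.8, (n, len(analytes)))

        def measure():
            # Shared multiplicative error (e.g. sample volume) affecting all
            # species, plus species-specific analytical errors.
            common = rng.normal(0.0, 0.05, n)[:, None]
            indep = rng.normal(0.0, 0.08, (n, len(analytes)))
            return true * np.exp(common + indep)

        c1, c2 = measure(), measure()      # two collocated samplers

        # The log ratio of collocated measurements approximates the error;
        # its correlation matrix separates shared from independent sources.
        log_ratio = np.log(c1 / c2)
        print(np.round(np.corrcoef(log_ratio, rowvar=False), 2))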

  17. Uncertainty and inference in deterministic and noisy chaos

    NASA Astrophysics Data System (ADS)

    Strelioff, Christopher Charles

    We study dynamical systems which exhibit chaotic dynamics with a focus on sources of real and apparent randomness including sensitivity to perturbation, dynamical noise, measurement uncertainty and finite data samples for inference. This choice of topics is motivated by a desire to adapt established theoretical tools such as the Perron-Frobenius operator and symbolic dynamics to experimental data. First, we study prediction of chaotic time series when a perfect model is available but the initial condition is measured with uncertainty. A common approach for predicting future data given these circumstances is to apply the model despite the uncertainty. In systems with fold dynamics, we find prediction is improved over this strategy by recognizing this behavior. A systematic study of the logistic map demonstrates prediction can be extended three time steps using an approximation of the relevant Perron-Frobenius operator dynamics. Next, we show how to infer kth order Markov chains from finite data by applying Bayesian methods to both parameter estimation and model-order selection. In this pursuit, we connect inference to statistical mechanics through information-theoretic (type theory) techniques and establish a direct relationship between Bayesian evidence and the partition function. This allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Also, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes. Finally, we study a binary partition of time series data from the logistic map with additive noise, inferring optimal, effectively generating partitions and kth order Markov chain models. Here we adapt Bayesian inference, developed above, for applied symbolic dynamics. We show that reconciling Kolmogorov's maximum-entropy partition with the methods of Bayesian model selection requires the use of two separate
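
    The Markov-chain piece of this programme admits a compact sketch: for a k-th order chain with Dirichlet priors on the transition probabilities, the Bayesian evidence has a closed Dirichlet-multinomial form, and comparing it across orders selects k. The Python code below is a generic illustration of that computation, not the author's implementation; the binary series is a stand-in for a symbolised noisy logistic-map orbit.

        import numpy as np
        from collections import defaultdict
        from math import lgamma

        def log_evidence(seq, k, m=2, alpha=1.0):
            # Dirichlet-multinomial log evidence of a k-th order Markov chain
            # over an m-letter alphabet: count symbols after each length-k
            # context, then accumulate the conjugate marginal likelihood.
            counts = defaultdict(lambda: np.zeros(m))
            for i in range(k, len(seq)):
                counts[tuple(seq[i - k:i])][seq[i]] += 1
            logz = 0.0
            for n in counts.values():
                logz += lgamma(m * alpha) - lgamma(m * alpha + n.sum())
                logz += sum(lgamma(alpha + ni) - lgamma(alpha) for ni in n)
            return logz

        rng = np.random.default_rng(5)
        # Binary series from a 2nd-order rule: the next bit repeats the bit
        # two steps back with probability 0.8.
        seq = [0, 1]
        for _ in range(2000):
            seq.append(seq[-2] if rng.random() < 0.8 else 1 - seq[-2])

        for k in range(4):
            print(f"order {k}: log evidence = {log_evidence(seq, k):.1f}")

    The evidence should peak at the true order k = 2, illustrating Bayesian model-order selection; expectations of conditional relative entropy and entropy rate follow from the same Dirichlet posteriors.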

  18. Impact of parton distribution function and α_s uncertainties on Higgs boson production in gluon fusion at hadron colliders

    SciTech Connect

    Demartin, Federico; Mariani, Elisa; Forte, Stefano; Vicini, Alessandro; Rojo, Juan

    2010-07-01

    We present a systematic study of uncertainties due to parton distributions (PDFs) and the strong coupling on the gluon-fusion production cross section of the standard model Higgs at the Tevatron and LHC colliders. We compare procedures and results when three recent sets of PDFs are used, CTEQ6.6, MSTW08, and NNPDF1.2, and we discuss specifically the way PDF and strong coupling uncertainties are combined. We find that results obtained from different PDF sets are in reasonable agreement if a common value of the strong coupling is adopted. We show that the addition in quadrature of PDF and α_s uncertainties provides an adequate approximation to the full result with exact error propagation. We discuss a simple recipe to determine a conservative PDF+α_s uncertainty from available global parton sets, and we use it to estimate this uncertainty on the given process to be about 10% at the Tevatron and 5% at the LHC for a light Higgs.
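
    Written out, the quadrature prescription the authors validate is (schematic notation, in LaTeX):

        \delta\sigma_{\mathrm{PDF}+\alpha_s} \;\simeq\;
          \sqrt{\left(\delta\sigma_{\mathrm{PDF}}\right)^{2}
                + \left(\delta\sigma_{\alpha_s}\right)^{2}}

    i.e. the PDF uncertainty at fixed \alpha_s and the cross-section shift induced by varying \alpha_s within its range are treated as independent and added in quadrature, which the study finds adequately approximates exact error propagation.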

  19. Asymmetric Uncertainty Expression for High Gradient Aerodynamics

    NASA Technical Reports Server (NTRS)

    Pinier, Jeremy T

    2012-01-01

    When the physics of the flow around an aircraft changes very abruptly either in time or space (e.g., flow separation/reattachment, boundary layer transition, unsteadiness, shocks, etc.), the measurements that are performed in a simulated environment like a wind tunnel test or a computational simulation will most likely incorrectly predict the exact location of where (or when) the change in physics happens. There are many reasons for this, including the error introduced by simulating a real system at a smaller scale and at non-ideal conditions, or the error due to turbulence models in a computational simulation. The uncertainty analysis principles that have been developed and are being implemented today do not fully account for uncertainty in the knowledge of the location of abrupt physics changes or sharp gradients, leading to a potentially underestimated uncertainty in those areas. To address this problem, this paper proposes a new asymmetric aerodynamic uncertainty expression containing an extra term to account for phase uncertainty, the magnitude of which is emphasized in high-gradient aerodynamic regions. Additionally, based on previous work, a method for dispersing aerodynamic data within asymmetric uncertainty bounds in a more realistic way has been developed for use within Monte Carlo-type analyses.

  20. MODIS Radiometric Calibration and Uncertainty Assessment

    NASA Technical Reports Server (NTRS)

    Xiong, Xiaoxiong; Chiang, Vincent; Sun, Junqiang; Wu, Aisheng

    2011-01-01

    Since launch, Terra and Aqua MODIS have collected more than 11 and 9 years of datasets, respectively, for comprehensive studies of the Earth's land, ocean, and atmospheric properties. MODIS observations are made in 36 spectral bands: 20 reflective solar bands (RSB) and 16 thermal emissive bands (TEB). Compared to its heritage sensors, MODIS was developed with very stringent calibration and uncertainty requirements. As a result, MODIS was designed and built with a set of state-of-the-art on-board calibrators (OBC), which allow key sensor performance parameters and on-orbit calibration coefficients to be monitored and updated if necessary. In terms of its calibration traceability, the MODIS RSB calibration is reflectance based, using an on-board solar diffuser (SD), and the TEB calibration is radiance based, using an on-board blackbody (BB). In addition to on-orbit calibration coefficients derived from the OBC, calibration parameters determined from sensor pre-launch calibration and characterization are used in both the RSB and TEB calibration and retrieval algorithms. This paper provides a brief description of the MODIS calibration methodologies and discusses details of its on-orbit calibration uncertainties. It assesses uncertainty contributions from individual components and differences between Terra and Aqua MODIS due to their design characteristics and on-orbit performance. Also discussed in this paper is the use of the MODIS L1B uncertainty index (UI) product.

  1. Pandemic influenza: certain uncertainties

    PubMed Central

    Morens, David M.; Taubenberger, Jeffery K.

    2011-01-01

    SUMMARY For at least five centuries, major epidemics and pandemics of influenza have occurred unexpectedly and at irregular intervals. Despite the modern notion that pandemic influenza is a distinct phenomenon obeying such constant (if incompletely understood) rules as dramatic genetic change, cyclicity, “wave” patterning, virus replacement, and predictable epidemic behavior, much evidence suggests the opposite. Although there is much that we know about pandemic influenza, there appears to be much more that we do not know. Pandemics arise as a result of various genetic mechanisms, have no predictable patterns of mortality among different age groups, and vary greatly in how and when they arise and recur. Some are followed by new pandemics, whereas others fade gradually or abruptly into long-term endemicity. Human influenza pandemics have been caused by viruses that evolved singly or in co-circulation with other pandemic virus descendants and often have involved significant transmission between, or establishment of, viral reservoirs within other animal hosts. In recent decades, pandemic influenza has continued to produce numerous unanticipated events that expose fundamental gaps in scientific knowledge. Influenza pandemics appear to be not a single phenomenon but a heterogeneous collection of viral evolutionary events whose similarities are overshadowed by important differences, the determinants of which remain poorly understood. These uncertainties make it difficult to predict influenza pandemics and, therefore, to adequately plan to prevent them. PMID:21706672

  2. Setup Uncertainties of Anatomical Sub-Regions in Head-and-Neck Cancer Patients After Offline CBCT Guidance

    SciTech Connect

    Kranen, Simon van; Beek, Suzanne van; Rasch, Coen; Herk, Marcel van; Sonke, Jan-Jakob

    2009-04-01

    Purpose: To quantify local geometrical uncertainties in anatomical sub-regions during radiotherapy for head-and-neck cancer patients. Methods and Materials: Local setup accuracy was analyzed for 38 patients, who had received intensity-modulated radiotherapy and were regularly scanned during treatment with cone beam computed tomography (CBCT) for offline patient setup correction. In addition to the clinically used large region of interest (ROI), we defined eight ROIs in the planning CT that contained rigid bony structures: the mandible, larynx, jugular notch, occiput bone, vertebrae C1-C3, C3-C5, and C5-C7, and the vertebrae caudal of C7. By local rigid registration to successive CBCT scans, the local setup accuracy of each ROI was determined and compared with the overall setup error assessed with the large ROI. Deformations were distinguished from rigid body movements by expressing movement relative to a reference ROI (vertebrae C1-C3). Results: The offline patient setup correction protocol using the large ROI resulted in residual systematic errors (1 SD) within 1.2 mm and random errors within 1.5 mm for each direction. Local setup errors were larger, ranging from 1.1 to 3.4 mm (systematic) and 1.3 to 2.5 mm (random). Systematic deformations ranged from 0.4 mm near the reference C1-C3 to 3.8 mm for the larynx. Random deformations ranged from 0.5 to 3.6 mm. Conclusion: Head-and-neck cancer patients show considerable local setup variations, exceeding residual global patient setup uncertainty in an offline correction protocol. Current planning target volume margins may be inadequate to account for these uncertainties. We propose registration of multiple ROIs to drive correction protocols and adaptive radiotherapy to reduce the impact of local setup variations.
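
    The residual errors quoted above follow the decomposition conventionally used in setup-error studies: the group systematic error Sigma is the spread (1 SD) of per-patient mean errors, and the random error sigma is the root mean square of the per-patient SDs (the study's exact computation may differ in detail). A Python sketch on synthetic per-fraction data, with illustrative numbers rather than the study's measurements:

        import numpy as np

        rng = np.random.default_rng(11)

        # Synthetic setup errors (mm) for one ROI and one direction:
        # 38 patients with ~12 CBCT measurements each.
        n_pat, n_meas = 38, 12
        patient_mean = rng.normal(0.0, 1.5, n_pat)     # per-patient systematic
        errors = patient_mean[:, None] + rng.normal(0.0, 2.0, (n_pat, n_meas))

        means = errors.mean(axis=1)                    # per-patient mean error
        sds = errors.std(axis=1, ddof=1)               # per-patient spread

        M = means.mean()                               # overall (group) mean
        Sigma = means.std(ddof=1)                      # systematic error (1 SD)
        sigma = np.sqrt(np.mean(sds**2))               # random error (RMS of SDs)
        print(f"M = {M:.1f} mm, Sigma = {Sigma:.1f} mm, sigma = {sigma:.1f} mm")
        # A widely quoted PTV margin recipe (van Herk) built on these values:
        print(f"margin ~ 2.5*Sigma + 0.7*sigma = {2.5*Sigma + 0.7*sigma:.1f} mm")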

  3. An integrated modeling approach to support management decisions of coupled groundwater-agricultural systems under multiple uncertainties

    NASA Astrophysics Data System (ADS)

    Hagos Subagadis, Yohannes; Schütze, Niels; Grundmann, Jens

    2015-04-01

    The planning and implementation of effective water resources management strategies require an assessment of multiple (physical, environmental, and socio-economic) issues, and often require new research in which knowledge from diverse disciplines is combined in a unified methodological and operational framework. Such integrative research to link different knowledge domains faces several practical challenges. The complexities are further compounded by multiple actors, frequently with conflicting interests, and by multiple uncertainties about the consequences of potential management decisions. A fuzzy-stochastic multiple criteria decision analysis tool was developed in this study to systematically quantify both probabilistic and fuzzy uncertainties associated with complex hydrosystems management. It integrates physical process-based models, fuzzy logic, expert involvement and stochastic simulation within a general framework. The proposed approach is then applied to a water-scarce coastal arid region water management problem in northern Oman, where saltwater intrusion into a coastal aquifer due to excessive groundwater extraction for irrigated agriculture has affected aquifer sustainability, endangering the associated socio-economic conditions as well as the traditional social structure. Results from the developed method provide key decision alternatives which can serve as a platform for negotiation and further exploration. In addition, the approach enables both probabilistic and fuzzy uncertainties associated with the decision problem to be quantified systematically. Sensitivity analysis applied within the developed tool has shown that the decision makers' risk-averse or risk-taking attitudes may yield different rankings of the decision alternatives. The developed approach can be applied to address the complexities and uncertainties inherent in water resources systems, supporting management decisions while serving as a platform for stakeholder participation.

  4. Summary of long-baseline systematics session at CETUP*2014

    SciTech Connect

    Cherdack, Daniel; Worcester, Elizabeth

    2015-10-15

    A session studying systematics in long-baseline neutrino oscillation physics was held July 14-18, 2014 as part of CETUP* 2014. Systematic effects from flux normalization and modeling, modeling of cross sections and nuclear interactions, and far detector effects were addressed. Experts presented the capabilities of existing and planned tools. A program of study to determine estimates of and requirements for the size of these effects was designed. This document summarizes the results of the CETUP* systematics workshop and the current status of systematic uncertainty studies in long-baseline neutrino oscillation measurements.

  5. Risk perception, uncertainty, and facility siting: Lessons from merchant power in California

    NASA Astrophysics Data System (ADS)

    Schively, Carissa

    This dissertation highlights the results of an investigation of the effects of uncertainty on siting decisions involving locally unwanted land uses (LULUs). Focusing specifically on the siting of natural gas-powered energy facilities in California, the analysis of data gathered from a survey of participants illustrates the effects of participants' uncertainties on siting processes and outcomes. The research focuses on four specific types of uncertainties: environmental risk uncertainty; solution uncertainty; interaction uncertainty; and commitment uncertainty. Environmental risk uncertainty is associated with perceived impacts on the environment. Solution uncertainty is tied to the process of evaluating and selecting proposed solutions or alternatives. Interaction uncertainty relates to the difficulty in determining the perceptions of others, the information that they hold, their preferences for solutions, and their likely actions. Commitment uncertainty influences participants' assessments of the credibility of commitments made by other parties in the siting process. The findings point to the presence of each of the four types of uncertainty among siting process participants. In addition, the research suggests that participants exhibited certain actions as a result of their uncertainties including questioning experts, exhibiting reduced trust, focusing on a narrow set of issues, and manipulating analyses of alternatives. Further, the findings provide insights into the influence of uncertainty on siting process outcomes such as decision optimality and conflict among participants. Overall, the research suggests the importance of understanding the underlying basis of LULU responses and the need to craft siting processes that mitigate or at least account for participants' uncertainties.

  6. Uncertainty and risk in wildland fire management: a review.

    PubMed

    Thompson, Matthew P; Calkin, Dave E

    2011-08-01

    Wildland fire management is subject to manifold sources of uncertainty. Beyond the unpredictability of wildfire behavior, uncertainty stems from inaccurate/missing data, limited resource value measures to guide prioritization across fires and resources at risk, and an incomplete scientific understanding of ecological response to fire, of fire behavior response to treatments, and of spatiotemporal dynamics involving disturbance regimes and climate change. This work attempts to systematically align sources of uncertainty with the most appropriate decision support methodologies, in order to facilitate cost-effective, risk-based wildfire planning efforts. We review the state of wildfire risk assessment and management, with a specific focus on uncertainties challenging implementation of integrated risk assessments that consider a suite of human and ecological values. Recent advances in wildfire simulation and geospatial mapping of highly valued resources have enabled robust risk-based analyses to inform planning across a variety of scales, although improvements are needed in fire behavior and ignition occurrence models. A key remaining challenge is a better characterization of non-market resources at risk, both in terms of their response to fire and how society values those resources. Our findings echo earlier literature identifying wildfire effects analysis and value uncertainty as the primary challenges to integrated wildfire risk assessment and wildfire management. We stress the importance of identifying and characterizing uncertainties in order to better quantify and manage them. Leveraging the most appropriate decision support tools can facilitate wildfire risk assessment and ideally improve decision-making. PMID:21489684

  7. Uncertainty in Simulating Wheat Yields Under Climate Change

    SciTech Connect

    Asseng, S.; Ewert, F.; Rosenzweig, C.; Jones, J.W.; Hatfield, Jerry; Ruane, Alex; Boote, K. J.; Thorburn, Peter; Rotter, R.P.; Cammarano, D.; Brisson, N.; Basso, B.; Martre, P.; Aggarwal, P.K.; Angulo, C.; Bertuzzi, P.; Biernath, C.; Challinor, AJ; Doltra, J.; Gayler, S.; Goldberg, R.; Grant, Robert; Heng, L.; Hooker, J.; Hunt, L.A.; Ingwersen, J.; Izaurralde, Roberto C.; Kersebaum, K.C.; Mueller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.O.; Olesen, JE; Osborne, T.; Palosuo, T.; Priesack, E.; Ripoche, D.; Semenov, M.A.; Shcherbak, I.; Steduto, P.; Stockle, Claudio O.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Travasso, M.; Waha, K.; Wallach, D.; White, J.W.; Williams, J.R.; Wolf, J.

    2013-09-01

    Anticipating the impacts of climate change on crop yields is critical for assessing future food security. Process-based crop simulation models are the most commonly used tools in such assessments1,2. Analysis of uncertainties in future greenhouse gas emissions and their impacts on future climate change has been increasingly described in the literature3,4, while assessments of the uncertainty in crop responses to climate change are very rare. Systematic and objective comparisons across impact studies are difficult, and thus have not been fully realized5. Here we present the largest coordinated and standardized crop model intercomparison for climate change impacts on wheat production to date. We found that several individual crop models are able to reproduce measured grain yields under current diverse environments, particularly if sufficient details are provided to execute them. However, simulated climate change impacts can vary across models due to differences in model structures and algorithms. The crop-model component of uncertainty in climate change impact assessments was considerably larger than the climate-model component from Global Climate Models (GCMs). Model responses to high temperatures and temperature-by-CO2 interactions are identified as major sources of simulated impact uncertainties. Significant reductions in impact uncertainties through model improvements in these areas and improved quantification of uncertainty through multi-model ensembles are urgently needed for a more reliable translation of climate change scenarios into agricultural impacts in order to develop adaptation strategies and aid policymaking.
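
    The headline comparison - how much of the ensemble variance comes from crop models versus climate models - can be sketched as a simple two-way decomposition over a model-by-GCM impact matrix. Everything below is synthetic and only illustrates the bookkeeping:

        import numpy as np

        rng = np.random.default_rng(13)
        n_crop, n_gcm = 27, 16            # illustrative ensemble sizes

        # Synthetic impact matrix (% yield change) for each crop-model/GCM
        # pair: a GCM signal, a larger crop-model signal, plus interaction.
        impact = (rng.normal(-5.0, 6.0, n_crop)[:, None]      # crop models
                  + rng.normal(0.0, 3.0, n_gcm)[None, :]      # GCMs
                  + rng.normal(0.0, 1.0, (n_crop, n_gcm)))    # interaction

        # Two-way decomposition of the ensemble variance.
        var_crop = np.var(impact.mean(axis=1), ddof=1)
        var_gcm = np.var(impact.mean(axis=0), ddof=1)
        share = var_crop / (var_crop + var_gcm)
        print(f"crop-model share of (main-effect) variance: {share:.0%}")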

  8. The role of stimulus uncertainty in speech perception

    NASA Astrophysics Data System (ADS)

    Kewley-Port, Diane

    2001-05-01

    Among the important experimental factors that affect psychophysical measurements of speech perception is stimulus uncertainty. Charles Watson has defined stimulus uncertainty as variation in stimulus parameters from trial to trial and demonstrated its highly degrading effects on a variety of complex auditory signals. Watson, Kelley, and Wroton showed large (×10) elevation of frequency-discrimination thresholds for ``word-length tonal patterns'' under high uncertainty conditions [J. Acoust. Soc. Am. 60, 1176-1186 (1976)]. Investigations of speech, such as the perception of VOT (voice onset time) in stops [Kewley-Port, Watson, and Foyle, J. Acoust. Soc. Am. 83, 1113-1145 (1988)] and discrimination of vowel formants [Kewley-Port, J. Acoust. Soc. Am. 110 (2001)], have also demonstrated the systematic and profound effects of higher levels of stimulus uncertainty. This presentation will discuss extensions of the concept of stimulus uncertainty that demonstrate the degrading effects of the variability in more natural speech (versus synthetic speech) and longer phonetic context (including sentences) on vowel formant discrimination. Results from normal-hearing and hearing-impaired listeners demonstrating similar detrimental effects of high stimulus uncertainty will also be presented. [Research supported by NIH-NIDCD.]

  9. Uncertainty and risk in wildland fire management: a review.

    PubMed

    Thompson, Matthew P; Calkin, Dave E

    2011-08-01

    Wildland fire management is subject to manifold sources of uncertainty. Beyond the unpredictability of wildfire behavior, uncertainty stems from inaccurate/missing data, limited resource value measures to guide prioritization across fires and resources at risk, and an incomplete scientific understanding of ecological response to fire, of fire behavior response to treatments, and of spatiotemporal dynamics involving disturbance regimes and climate change. This work attempts to systematically align sources of uncertainty with the most appropriate decision support methodologies, in order to facilitate cost-effective, risk-based wildfire planning efforts. We review the state of wildfire risk assessment and management, with a specific focus on uncertainties challenging implementation of integrated risk assessments that consider a suite of human and ecological values. Recent advances in wildfire simulation and geospatial mapping of highly valued resources have enabled robust risk-based analyses to inform planning across a variety of scales, although improvements are needed in fire behavior and ignition occurrence models. A key remaining challenge is a better characterization of non-market resources at risk, both in terms of their response to fire and how society values those resources. Our findings echo earlier literature identifying wildfire effects analysis and value uncertainty as the primary challenges to integrated wildfire risk assessment and wildfire management. We stress the importance of identifying and characterizing uncertainties in order to better quantify and manage them. Leveraging the most appropriate decision support tools can facilitate wildfire risk assessment and ideally improve decision-making.

  10. Uncertainty in Simulating Wheat Yields Under Climate Change

    NASA Technical Reports Server (NTRS)

    Asseng, S.; Ewert, F.; Rosenzweig, Cynthia; Jones, J. W.; Hatfield, J. W.; Ruane, A. C.; Boote, K. J.; Thornburn, P. J.; Rotter, R. P.; Cammarano, D.; Brisson, N.; Basso, B.; Martre, P.; Angulo, C.; Bertuzzi, P.; Biernath, C.; Challinor, A. J.; Doltra, J.; Gayler, S.; Goldberg, R.; Grant, R.; Heng, L.; Hooker, J.; Hunt, L. A.; Ingwersen, J.

    2013-01-01

    Projections of climate change impacts on crop yields are inherently uncertain1. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate2. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic and objective comparisons among process-based crop simulation models1,3 are difficult4. Here we present the largest standardized model intercomparison for climate change impacts so far. We found that individual crop models are able to simulate measured wheat grain yields accurately under a range of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi-model ensembles. Less uncertainty in describing how climate change may affect agricultural productivity will aid adaptation strategy development and policymaking.

  11. Systematic in J-PARC/Hyper-K

    SciTech Connect

    Minamino, Akihiro

    2015-05-15

    The Hyper-Kamiokande (Hyper-K) detector is a next generation underground water Cherenkov detector. The J-PARC to Hyper-K experiment has good potential for precision measurements of neutrino oscillation parameters and discovery reach for CP violation in the lepton sector. With a total exposure of 10 years to a neutrino beam produced by the 750 kW J-PARC proton synchrotron, it is expected that the CP phase δ can be determined to better than 18 degrees for all possible values of δ if sin²2θ₁₃ > 0.03 and the mass hierarchy is known. Control of systematic uncertainties is critical to make maximum use of the Hyper-K potential. Based on lessons learned from the T2K experience, a strategy to reduce systematic uncertainties in J-PARC/Hyper-K has been developed.

  12. On the Directional Dependence and Null Space Freedom in Uncertainty Bound Identification

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    1997-01-01

    In previous work, the determination of uncertainty models via minimum norm model validation is based on a single set of input and output measurement data. Since uncertainty bounds at each frequency are directionally dependent for multivariable systems, this will lead to optimistic uncertainty levels. In addition, the design freedom in the uncertainty model has not been utilized to further reduce uncertainty levels. The above issues are addressed by formulating a min-max problem. An analytical solution to the min-max problem is given to within a generalized eigenvalue problem, thus avoiding a direct numerical approach. This result will lead to less conservative and more realistic uncertainty models for use in robust control.

  13. Determination of the uncertainties of reflection coefficient measurements of a microwave network analyzer

    SciTech Connect

    Duda, L.E.; Moyer, R.D.

    1998-04-01

    A method that calculates the residual uncertainties of a microwave network analyzer over the frequency range of 300 kHz to 50 GHz is described. The method uses measurements on NIST-certified standards (such as an airline or load), plus additional measurements, to estimate the combined standard uncertainties for measurements made with the network analyzer. The uncertainties of the standards are incorporated by means of a Monte Carlo technique. The uncertainties assigned to the network analyzer then provide the basis for estimating the uncertainties of devices measured with it. The results of this method for characterizing network analyzer uncertainties are presented for several connector types.
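
    A Monte Carlo step of the kind described can be sketched in Python: plausible true values of a certified standard are drawn within its stated uncertainty, combined with connection repeatability, and the spread of the resulting residuals characterizes the analyzer. All numerical values are placeholders, not the paper's:

        import numpy as np

        rng = np.random.default_rng(17)
        n = 20000

        # Measured reflection coefficient of a certified load standard, the
        # standard's certified 1-sigma uncertainty, and the repeatability
        # from repeated connections (illustrative numbers).
        gamma_meas = 0.020 + 0.005j
        u_std, u_rep = 0.002, 0.001

        # Draw plausible true values of the standard and measurement noise;
        # the residual spread estimates the analyzer's residual uncertainty.
        true_vals = (rng.normal(gamma_meas.real, u_std, n)
                     + 1j * rng.normal(gamma_meas.imag, u_std, n))
        noise = rng.normal(0, u_rep, n) + 1j * rng.normal(0, u_rep, n)
        resid = gamma_meas + noise - true_vals

        print(f"std of residual (real part): {resid.real.std():.4f}")
        print(f"95th percentile of |residual|: {np.percentile(np.abs(resid), 95):.4f}")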

  14. Systematic Alternatives to Proposal Preparation.

    ERIC Educational Resources Information Center

    Knirk, Frederick G.; And Others

    Educators who have to develop proposals must be concerned with making effective decisions. This paper discusses a number of educational systems management tools which can be used to reduce the time and effort in developing a proposal. In addition, ways are introduced to systematically increase the quality of the proposal through the development of…

  15. Mama Software Features: Uncertainty Testing

    SciTech Connect

    Ruggiero, Christy E.; Porter, Reid B.

    2014-05-30

    This document reviews how the uncertainty in the calculations is being determined with test image data. The results of this testing give an ‘initial uncertainty’ number that can be used to estimate the ‘back end’ uncertainty in digital image quantification. Statisticians are refining these numbers as part of a UQ effort.

  16. Housing Uncertainty and Childhood Impatience

    ERIC Educational Resources Information Center

    Anil, Bulent; Jordan, Jeffrey L.; Zahirovic-Herbert, Velma

    2011-01-01

    The study demonstrates a direct link between housing uncertainty and children's time preferences, or patience. We show that students who face housing uncertainties through mortgage foreclosures and eviction learn impatient behavior and are therefore at greater risk of making poor intertemporal choices such as dropping out of school. We find that…

  17. Quantification of Emission Factor Uncertainty

    EPA Science Inventory

    Emissions factors are important for estimating and characterizing emissions from sources of air pollution. There is no quantitative indication of uncertainty for these emission factors, most factors do not have an adequate data set to compute uncertainty, and it is very difficult...

  18. Uncertainty in Integrated Assessment Scenarios

    SciTech Connect

    Mort Webster

    2005-10-17

    The determination of climate policy is a decision under uncertainty. The uncertainty in future climate change impacts is large, as is the uncertainty in the costs of potential policies. Rational and economically efficient policy choices will therefore seek to balance the expected marginal costs with the expected marginal benefits. This approach requires that the risks of future climate change be assessed. The decision process need not be formal or quantitative for descriptions of the risks to be useful. Whatever the decision procedure, a useful starting point is to have as accurate a description of climate risks as possible. Given the goal of describing uncertainty in future climate change, we need to characterize the uncertainty in the main causes of uncertainty in climate impacts. One of the major drivers of uncertainty in future climate change is the uncertainty in future emissions, both of greenhouse gases and other radiatively important species such as sulfur dioxide. In turn, the drivers of uncertainty in emissions are uncertainties in the determinants of the rate of economic growth and in the technologies of production and how those technologies will change over time. This project uses historical experience and observations from a large number of countries to construct statistical descriptions of variability and correlation in labor productivity growth and in AEEI. The observed variability then provides a basis for constructing probability distributions for these drivers. The variance of uncertainty in growth rates can be further modified by expert judgment if it is believed that future variability will differ from the past. But often, expert judgment is more readily applied to projected median or expected paths through time. Analysis of past variance and covariance provides initial assumptions about future uncertainty for quantities that are less intuitive and difficult for experts to estimate, and these variances can be normalized and then applied to mean

  19. Reformulating the Quantum Uncertainty Relation.

    PubMed

    Li, Jun-Li; Qiao, Cong-Feng

    2015-01-01

    Uncertainty principle is one of the cornerstones of quantum theory. In the literature, there are two types of uncertainty relations, the operator form concerning the variances of physical observables and the entropy form related to entropic quantities. Both these forms are inequalities involving pairwise observables, and are found to be nontrivial to incorporate multiple observables. In this work we introduce a new form of uncertainty relation which may give out complete trade-off relations for variances of observables in pure and mixed quantum systems. Unlike the prevailing uncertainty relations, which are either quantum state dependent or not directly measurable, our bounds for variances of observables are quantum state independent and immune from the "triviality" problem of having zero expectation values. Furthermore, the new uncertainty relation may provide a geometric explanation for the reason why there are limitations on the simultaneous determination of different observables in N-dimensional Hilbert space. PMID:26234197
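
    For context, the operator ("variance") form referred to above is the textbook Robertson relation, in LaTeX notation:

        \Delta A \,\Delta B \;\ge\; \tfrac{1}{2}\,\bigl|\langle [A, B] \rangle\bigr|

    whose right-hand side depends on the quantum state and vanishes whenever the state is an eigenstate of A or B; this is the "triviality" problem of zero expectation values that the state-independent bounds proposed here are designed to avoid.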

  2. Decisions on new product development under uncertainties

    NASA Astrophysics Data System (ADS)

    Huang, Yeu-Shiang; Liu, Li-Chen; Ho, Jyh-Wen

    2015-04-01

    In an intensively competitive market, developing a new product has become a valuable strategy for companies to establish their market positions and enhance their competitive advantages. It is therefore essential to manage the process of new product development (NPD) effectively. However, since various problems may arise in NPD projects, managers should set up milestones and construct evaluative mechanisms to assess their feasibility. This paper employs Bayesian decision analysis to deal with two crucial uncertainties in NPD: future market share and the responses of competitors. The proposed decision process provides a systematic analytical procedure for determining whether an NPD project should be continued, taking into account whether effective use is being made of organisational resources. Accordingly, the proposed decision model can assist managers in effectively addressing NPD issues in a competitive market.
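
    A minimal sketch, under strong simplifying assumptions, of the kind of continue/stop evaluation such a Bayesian decision analysis performs at a milestone: a Beta-binomial model of market share updated with pilot evidence, compared against fixed payoffs. All names and numbers are hypothetical, not the authors' model.

        from scipy import stats

        # Prior Beta(2, 8) belief about market share, updated with
        # hypothetical pilot-sales evidence (30 adopters out of 100).
        successes, trials = 30, 100
        posterior = stats.beta(a=2 + successes, b=8 + trials - successes)

        # Illustrative payoffs: launch profit scales with market share.
        revenue_per_share = 5_000_000   # currency units per unit of share
        remaining_cost = 600_000        # cost to finish development

        # Expected value of continuing versus stopping at this milestone.
        ev_continue = revenue_per_share * posterior.mean() - remaining_cost
        decision = "continue" if ev_continue > 0.0 else "stop"
        print(decision, round(ev_continue))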

  3. Communicating Storm Surge Forecast Uncertainty

    NASA Astrophysics Data System (ADS)

    Troutman, J. A.; Rhome, J.

    2015-12-01

    When it comes to tropical cyclones, storm surge is often the greatest threat to life and property along the coastal United States. The coastal population density has dramatically increased over the past 20 years, putting more people at risk. Informing emergency managers, decision-makers and the public about the potential for wind-driven storm surge, however, has been extremely difficult. Recently, the Storm Surge Unit at the National Hurricane Center in Miami, Florida developed a prototype experimental storm surge watch/warning graphic to communicate this threat more effectively by identifying areas most at risk for life-threatening storm surge. This prototype is the initial step in the transition toward a NWS storm surge watch/warning system and highlights the inundation levels that have a 10% chance of being exceeded. The guidance for this product is the Probabilistic Hurricane Storm Surge (P-Surge) model, which predicts the probability of various storm surge heights by statistically evaluating numerous SLOSH model simulations. Questions remain, however, about whether exceedance values in addition to the 10% level may be of equal importance to forecasters. P-Surge data from Hurricane Arthur (2014) are used to ascertain the practicality of incorporating other exceedance data into storm surge forecasts. Extracting forecast uncertainty information by analyzing P-Surge exceedances overlaid with track and wind intensity forecasts proves to be beneficial for forecasters and decision support.

  4. Uncertainty in monitoring E. coli concentrations in streams and stormwater runoff

    NASA Astrophysics Data System (ADS)

    Harmel, R. D.; Hathaway, J. M.; Wagner, K. L.; Wolfe, J. E.; Karthikeyan, R.; Francesconi, W.; McCarthy, D. T.

    2016-03-01

    Microbial contamination of surface waters, a substantial public health concern throughout the world, is typically identified by fecal indicator bacteria such as Escherichia coli. Thus, monitoring E. coli concentrations is critical to evaluate current conditions, determine restoration effectiveness, and inform model development and calibration. An often overlooked component of these monitoring and modeling activities is understanding the inherent random and systematic uncertainty present in measured data. In this research, a review and subsequent analysis were performed to identify, document, and analyze the measurement uncertainty of E. coli data collected in stream flow and stormwater runoff as individual discrete samples or throughout a single runoff event. Data on the uncertainty contributed by sample collection, sample preservation/storage, and laboratory analysis to measured E. coli concentrations were compiled and analyzed, and differences between sampling methods and data quality scenarios were compared. The analysis showed that: (1) manual integrated sampling produced the lowest random and systematic uncertainty in individual samples, but automated sampling typically produced the lowest uncertainty when sampling throughout runoff events; (2) sample collection procedures often contributed the highest amount of uncertainty, although laboratory analysis introduced substantial random uncertainty and preservation/storage introduced substantial systematic uncertainty under some scenarios; and (3) the uncertainty in measured E. coli concentrations was greater than that of sediment and nutrients, but the difference was not as great as may be assumed. This comprehensive analysis of uncertainty in E. coli concentrations measured in streamflow and runoff should provide valuable insight for designing E. coli monitoring projects, reducing uncertainty in quality assurance efforts, regulatory and policy decision making, and fate and transport modeling.

  5. An active learning approach with uncertainty, representativeness, and diversity.

    PubMed

    He, Tianxu; Zhang, Shukui; Xin, Jie; Zhao, Pengpeng; Wu, Jian; Xian, Xuefeng; Li, Chunhua; Cui, Zhiming

    2014-01-01

    Big data from the Internet of Things may create a big challenge for data classification. Most active learning approaches select either uncertain or representative unlabeled instances to query their labels. Although several active learning algorithms have been proposed to combine the two criteria for query selection, they are usually ad hoc in finding unlabeled instances that are both informative and representative, and fail to take the diversity of instances into account. We address this challenge by presenting a new active learning framework which considers uncertainty, representativeness, and diversity. The proposed approach provides a systematic way of measuring and combining the uncertainty, representativeness, and diversity of an instance. First, the instances' uncertainty and representativeness are used to constitute the most informative set. Then, the kernel k-means clustering algorithm filters out redundant samples, and the resulting samples are queried for labels. Extensive experimental results show that the proposed approach outperforms several state-of-the-art active learning approaches. PMID:25180208
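
    A compact sketch of the two-stage selection the abstract outlines, with simplified stand-in scores and plain k-means in place of kernel k-means:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics.pairwise import rbf_kernel

        rng = np.random.default_rng(0)
        X_pool = rng.normal(size=(500, 10))   # unlabeled instance pool
        proba = rng.uniform(size=500)         # stand-in classifier P(y=1|x)

        # Stage 1: informativeness = uncertainty x representativeness.
        uncertainty = 1.0 - 2.0 * np.abs(proba - 0.5)         # 1 near boundary
        representativeness = rbf_kernel(X_pool).mean(axis=1)  # density proxy
        informative = np.argsort(uncertainty * representativeness)[-50:]

        # Stage 2: cluster the informative set and query one instance per
        # cluster (diversity filter; the paper uses kernel k-means).
        km = KMeans(n_clusters=10, n_init=10, random_state=0)
        labels = km.fit_predict(X_pool[informative])
        queries = [int(informative[labels == c][0]) for c in range(10)]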

  6. A neural representation of categorization uncertainty in the human brain.

    PubMed

    Grinband, Jack; Hirsch, Joy; Ferrera, Vincent P

    2006-03-01

    The ability to classify visual objects into discrete categories ("friend" versus "foe"; "edible" versus "poisonous") is essential for survival and is a fundamental cognitive function. The cortical substrates that mediate this function, however, have not been identified in humans. To identify brain regions involved in stimulus categorization, we developed a task in which subjects classified stimuli according to a variable categorical boundary. Psychophysical functions were used to define a decision variable, categorization uncertainty, which was systematically manipulated. Using event-related functional MRI, we discovered that activity in a fronto-striatal-thalamic network, consisting of the medial frontal gyrus, anterior insula, ventral striatum, and dorsomedial thalamus, was modulated by categorization uncertainty. We found this network to be distinct from the frontoparietal attention network, consisting of the frontal and parietal eye fields, where activity was not correlated with categorization uncertainty. PMID:16504950

  7. Assessing hydrologic prediction uncertainty resulting from soft land cover classification

    NASA Astrophysics Data System (ADS)

    Loosvelt, Lien; De Baets, Bernard; Pauwels, Valentijn R. N.; Verhoest, Niko E. C.

    2014-09-01

    For predictions in ungauged basins (PUB), environmental data is generally not available and needs to be inferred by indirect means. Existing technologies such as remote sensing are valuable tools for estimating the lacking data, as these technologies become more widely available and have a high areal coverage. However, indirect estimates of the environmental characteristics are prone to uncertainty. Hence, an improved understanding of the quality of the estimates and the development of methods for dealing with their associated uncertainty are essential to evolve towards accurate PUB. In this study, the impact of the uncertainty associated with the classification of land cover based on multi-temporal SPOT imagery, resulting from the use of the Random Forests classifier, on the predictions of the hydrologic model TOPLATS is investigated through a Monte Carlo simulation. The results show that the predictions of evapotranspiration, runoff and baseflow are hardly affected by the classification uncertainty when area-averaged predictions are intended, implying that uncertainty propagation is only advisable in case a spatial distribution of the predictions is relevant for decision making or is coupled to other spatially distributed models. Based on the resulting uncertainty map, guidelines for additional data collection are formulated in order to reduce the uncertainty for future model applications. Because a Monte Carlo-based uncertainty analysis is computationally very demanding, especially when complex models are involved, we developed a fast indicative uncertainty assessment method that allows for generating proxies of the Monte Carlo-based result in terms of the mean prediction and its associated uncertainty based on a single model evaluation. These proxies are shown to perform well and provide a good indication of the impact of classification uncertainty on the prediction result.
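
    The propagation idea in miniature (a sketch, with a toy response function standing in for TOPLATS and invented per-class runoff coefficients): per-pixel class probabilities from the soft classification are sampled repeatedly, and the model output is evaluated for each realization.

        import numpy as np

        rng = np.random.default_rng(1)
        n_pixels, n_classes, n_draws = 1000, 4, 200

        # Per-pixel class probabilities (stand-in for the Random Forests
        # soft classification of the multi-temporal imagery).
        p = rng.dirichlet(alpha=[2.0] * n_classes, size=n_pixels)

        # Toy area-averaged response with invented runoff coefficients.
        runoff_coef = np.array([0.1, 0.3, 0.5, 0.8])
        def toy_model(classes):
            return runoff_coef[classes].mean()

        # Monte Carlo: draw one land-cover map per iteration, run the model.
        draws = np.array([
            toy_model(np.array([rng.choice(n_classes, p=pi) for pi in p]))
            for _ in range(n_draws)
        ])
        print(draws.mean(), draws.std())   # prediction and its uncertainty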

  8. Uncertainty quantification in nanomechanical measurements using the atomic force microscope

    NASA Astrophysics Data System (ADS)

    Wagner, Ryan; Moon, Robert; Pratt, Jon; Shaw, Gordon; Raman, Arvind

    2011-11-01

    Quantifying uncertainty in measured properties of nanomaterials is a prerequisite for the manufacture of reliable nanoengineered materials and products. Yet, rigorous uncertainty quantification (UQ) is rarely applied for material property measurements with the atomic force microscope (AFM), a widely used instrument that can measure properties at nanometer scale resolution of both inorganic and biological surfaces and nanomaterials. We present a framework to ascribe uncertainty to local nanomechanical properties of any nanoparticle or surface measured with the AFM by taking into account the main uncertainty sources inherent in such measurements. We demonstrate the framework by quantifying uncertainty in AFM-based measurements of the transverse elastic modulus of cellulose nanocrystals (CNCs), an abundant, plant-derived nanomaterial whose mechanical properties are comparable to Kevlar fibers. For a single, isolated CNC the transverse elastic modulus was found to have a mean of 8.1 GPa and a 95% confidence interval of 2.7-20 GPa. A key result is that multiple replicates of force-distance curves do not sample the important sources of uncertainty, which are systematic in nature. The dominant source of uncertainty is the nondimensional photodiode sensitivity calibration rather than the cantilever stiffness or Z-piezo calibrations. The results underscore the great need for, and open a path towards, quantifying and minimizing uncertainty in AFM-based material property measurements of nanoparticles, nanostructured surfaces, thin films, polymers and biomaterials. This work is a partial contribution of the USDA Forest Service and NIST, agencies of the US government, and is not subject to copyright.

  9. Propagation of variance uncertainty calculation for an autopsy tissue analysis

    SciTech Connect

    Bruckner, L.A.

    1994-07-01

    When a radiochemical analysis is reported, it is often accompanied by an uncertainty value that simply reflects the natural variation in the observed counts due to radioactive decay, the so-called counting statistics. However, when the assay procedure is complex or when the number of counts is large, there are usually other important contributors to the total measurement uncertainty that need to be considered. An assay value is almost useless unless it is accompanied by a measure of the uncertainty associated with that value. The uncertainty value should reflect all the major sources of variation and bias affecting the assay and should provide a specified level of confidence. An approach to uncertainty calculation that includes the uncertainty due to instrument calibration, values of the standards, and intermediate measurements as well as counting statistics is presented and applied to the analysis of an autopsy tissue. This approach, usually called propagation of variance, attempts to clearly distinguish between errors that have systematic (bias) effects and those that have random effects on the assays. The effects of these different types of errors are then propagated to the assay using formal statistical techniques. The result is an uncertainty on the assay that has a defensible level of confidence and which can be traced to individual major contributors. However, since only measurement steps are readily quantified and since all models are approximations, it is emphasized that without empirical verification, a propagation of uncertainty model may be just a fancy model with no connection to reality. 5 refs., 1 fig., 2 tab.
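
    The standard propagation-of-variance formula that underlies this approach, for an assay value A = f(x_1, ..., x_n) with independent input quantities (random and systematic components can be carried through the same expression separately):

        u_c^2(A) \;=\; \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^{\!2} u^2(x_i)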

  10. Amplification uncertainty relation for probabilistic amplifiers

    NASA Astrophysics Data System (ADS)

    Namiki, Ryo

    2015-09-01

    Traditionally, the quantum amplification limit refers to the property of inevitable noise addition to the canonical variables when the field amplitude of an unknown state is linearly transformed through a quantum channel. Recent theoretical studies have determined amplification limits for cases of probabilistic quantum channels or general quantum operations by specifying a set of input states or a state ensemble. However, it remains open how much excess noise on the canonical variables is unavoidable and whether there exists a fundamental trade-off relation between the canonical pair in a general amplification process. In this paper we present an uncertainty-product form of the amplification limits for general quantum operations, assuming an input ensemble of Gaussian-distributed coherent states. It can be derived as a straightforward consequence of canonical uncertainty relations and retrieves the basic properties of the traditional amplification limit. In addition, our amplification limit turns out to give a physical limitation on the probabilistic reduction of an Einstein-Podolsky-Rosen uncertainty. In this regard, we find a condition under which probabilistic amplifiers can be regarded as local filtering operations to distill entanglement. This condition establishes a clear benchmark to verify an advantage of non-Gaussian operations beyond Gaussian operations with a feasible input set of coherent states and standard homodyne measurements.

  11. Quantifying and Qualifying USGS ShakeMap Uncertainty

    USGS Publications Warehouse

    Wald, David J.; Lin, Kuo-Wan; Quitoriano, Vincent

    2008-01-01

    We describe algorithms for quantifying and qualifying uncertainties associated with USGS ShakeMap ground motions. The uncertainty values computed consist of latitude/longitude grid-based multiplicative factors that scale the standard deviation associated with the ground motion prediction equation (GMPE) used within the ShakeMap algorithm for estimating ground motions. The resulting grid-based 'uncertainty map' is essential for evaluation of losses derived using ShakeMaps as the hazard input. For ShakeMap, ground motion uncertainty at any point is dominated by two main factors: (i) the influence of any proximal ground motion observations, and (ii) the uncertainty of estimating ground motions from the GMPE, most notably, elevated uncertainty due to initial, unconstrained source rupture geometry. The uncertainty is highest for larger magnitude earthquakes when source finiteness is not yet constrained and, hence, the distance to rupture is also uncertain. In addition to a spatially dependent, quantitative assessment, many users may prefer a simple, qualitative grading for the entire ShakeMap. We developed a grading scale that allows one to quickly gauge the appropriate level of confidence when using rapidly produced ShakeMaps as part of the post-earthquake decision-making process or for qualitative assessments of archived or historical earthquake ShakeMaps. We describe an uncertainty letter grading ('A' through 'F', for high to poor quality, respectively) based on the uncertainty map. A middle-range ('C') grade corresponds to a ShakeMap for a moderate-magnitude earthquake suitably represented with a point-source location. Lower grades 'D' and 'F' are assigned for larger events (M>6) where finite-source dimensions are not yet constrained. The addition of ground motion observations (or observed macroseismic intensities) reduces uncertainties over data-constrained portions of the map. Higher grades ('A' and 'B') correspond to ShakeMaps with constrained fault dimensions
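
    A sketch of how a grid of multiplicative uncertainty factors might be collapsed into a single letter grade; the thresholds below are illustrative placeholders, not the published USGS scale.

        import numpy as np

        def shakemap_grade(ratio_grid):
            """Map a grid of GMPE-sigma scaling factors to a letter grade.

            Thresholds are illustrative, not the published grading scale.
            """
            mean_ratio = float(np.mean(ratio_grid))
            for grade, limit in [("A", 0.8), ("B", 1.0),
                                 ("C", 1.25), ("D", 1.5)]:
                if mean_ratio <= limit:
                    return grade
            return "F"

        # Mostly data-constrained map with one unconstrained-source region.
        grid = np.full((100, 100), 0.9)
        grid[40:60, 40:60] = 1.6          # elevated source-geometry term
        print(shakemap_grade(grid))       # -> "B"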

  12. Probabilistic accident consequence uncertainty analysis -- Uncertainty assessment for internal dosimetry. Volume 2: Appendices

    SciTech Connect

    Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Harrison, J.D.; Harper, F.T.; Hora, S.C.

    1998-04-01

    The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA internal dosimetry models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on internal dosimetry, (4) short biographies of the experts, and (5) the aggregated results of their responses.

  13. Probabilistic accident consequence uncertainty analysis -- Uncertainty assessment for deposited material and external doses. Volume 2: Appendices

    SciTech Connect

    Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Boardman, J.; Jones, J.A.; Harper, F.T.; Young, M.L.; Hora, S.C.

    1997-12-01

    The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA deposited material and external dose models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on deposited material and external doses, (4) short biographies of the experts, and (5) the aggregated results of their responses.

  14. Probabilistic accident consequence uncertainty analysis -- Early health effects uncertainty assessment. Volume 2: Appendices

    SciTech Connect

    Haskin, F.E.; Harper, F.T.; Goossens, L.H.J.; Kraan, B.C.P.

    1997-12-01

    The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA early health effects models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on early health effects, (4) short biographies of the experts, and (5) the aggregated results of their responses.

  15. A posteriori uncertainty quantification of PIV-based pressure data

    NASA Astrophysics Data System (ADS)

    Azijli, Iliass; Sciacchitano, Andrea; Ragni, Daniele; Palha, Artur; Dwight, Richard P.

    2016-05-01

    A methodology for a posteriori uncertainty quantification of pressure data retrieved from particle image velocimetry (PIV) is proposed. It relies upon the Bayesian framework, where the posterior distribution (probability distribution of the true velocity, given the PIV measurements) is obtained from the prior distribution (prior knowledge of properties of the velocity field, e.g., divergence-free) and the statistical model of PIV measurement uncertainty. Once the posterior covariance matrix of the velocity is known, it is propagated through the discretized Poisson equation for pressure. Numerical assessment of the proposed method on a steady Lamb-Oseen vortex shows excellent agreement with Monte Carlo simulations, while linear uncertainty propagation underestimates the uncertainty in the pressure by up to 30 %. The method is finally applied to an experimental test case of a turbulent boundary layer in air, obtained using time-resolved tomographic PIV. Simultaneously with the PIV measurements, microphone measurements were carried out at the wall. The pressure reconstructed from the tomographic PIV data is compared to the microphone measurements. Realizing that the uncertainty of the latter is significantly smaller than the PIV-based pressure, this allows us to obtain an estimate for the true error of the former. The comparison between true error and estimated uncertainty demonstrates the accuracy of the uncertainty estimates on the pressure. In addition, enforcing the divergence-free constraint is found to result in a significantly more accurate reconstructed pressure field. The estimated uncertainty confirms this result.
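
    The linear-Gaussian backbone of such a Bayesian step, in standard form (a generic statement, not the paper's exact operators): with prior velocity covariance \Sigma_0 encoding, e.g., the divergence-free prior, measurement operator H, and PIV noise covariance R, the posterior covariance of the velocity is

        \Sigma_{\mathrm{post}} \;=\; \bigl( \Sigma_0^{-1} + H^{\mathsf T} R^{-1} H \bigr)^{-1},

    which can then be propagated through the discretized pressure-Poisson operator as described above.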

  16. Latin hypercube approach to estimate uncertainty in ground water vulnerability.

    PubMed

    Gurdak, Jason J; McCray, John E; Thyne, Geoffrey; Qi, Sharon L

    2007-01-01

    A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. PMID:17470124
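
    A compact illustration of the sampling scheme with SciPy's Latin hypercube generator: model-coefficient (model error) and explanatory-variable (data error) distributions are sampled jointly and pushed through the logistic prediction. All coefficients are invented for illustration.

        import numpy as np
        from scipy.stats import norm, qmc

        # Hypothetical fitted logistic model: two coefficients with standard
        # errors (model error) and one explanatory variable (data error).
        beta, beta_se = np.array([-2.0, 0.8]), np.array([0.3, 0.1])
        x_mean, x_sd = 1.5, 0.4

        # Latin hypercube sample over the three uncertain inputs.
        u = qmc.LatinHypercube(d=3, seed=0).random(n=5000)
        b0 = norm.ppf(u[:, 0], beta[0], beta_se[0])
        b1 = norm.ppf(u[:, 1], beta[1], beta_se[1])
        x = norm.ppf(u[:, 2], x_mean, x_sd)

        # Propagate to the predicted probability of elevated contaminants.
        prob = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        print(np.percentile(prob, [2.5, 50, 97.5]))  # prediction interval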

  17. Uncertainties of Mayak urine data

    SciTech Connect

    Miller, Guthrie; Vostrotin, Vadim; Vvdensky, Vladimir

    2008-01-01

    For internal dose calculations for the Mayak worker epidemiological study, quantitative estimates of the uncertainty of the urine measurements are necessary. Some of the data consist of measurements of 24-h urine excretion on successive days (e.g., 3 or 4 days). In a recent publication, dose calculations were performed in which the uncertainty of the urine measurements was estimated starting from the statistical standard deviation of these replicate measurements. This approach is straightforward and accurate when the number of replicate measurements is large; however, a Monte Carlo study showed it to be problematic for the actual number of replicate measurements (median from 3 to 4). Also, it is sometimes important to characterize the uncertainty of a single urine measurement. Therefore this alternate method has been developed. A method of parameterizing the uncertainty of Mayak urine bioassay measurements is described. The Poisson lognormal model is assumed and data from 63 cases (1099 urine measurements in all) are used to empirically determine the lognormal normalization uncertainty, given the measurement uncertainties obtained from count quantities. The natural logarithm of the geometric standard deviation of the normalization uncertainty is found to be in the range 0.31 to 0.35, including a measurement component estimated to be 0.2.
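
    A small simulation of the two-component error model described: Poisson counting statistics combined with a lognormal normalization error whose log geometric standard deviation is taken from the reported 0.31-0.35 range. The true excretion value is invented.

        import numpy as np

        rng = np.random.default_rng(2)
        true_counts = 50.0    # hypothetical expected counts per sample
        sigma_ln = 0.33       # ln(GSD) of the normalization uncertainty

        # Each measurement: lognormal normalization error applied to the
        # expected value, then a Poisson count realization on top.
        norm_err = rng.lognormal(mean=0.0, sigma=sigma_ln, size=100_000)
        measured = rng.poisson(true_counts * norm_err)

        # Combined relative spread vs. the counting-statistics-only value.
        print(measured.std() / measured.mean())   # combined, ~0.37
        print(1.0 / np.sqrt(true_counts))         # counting only, ~0.14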

  18. Diagnosing and prioritizing uncertainties according to their relevance for policy: the case of transgene silencing.

    PubMed

    Krayer von Krauss, Martin Paul; Kaiser, Matthias; Almaas, Vibeke; van der Sluijs, Jeroen; Kloprogge, Penny

    2008-02-01

    Uncertainty often becomes problematic when science is used to support decision making in the policy process. Scientists can contribute to a more constructive approach to uncertainty by making their uncertainties transparent. In this article, an approach to systematic uncertainty diagnosis is demonstrated on the case study of transgene silencing and GMO risk assessment. Detailed interviews were conducted with five experts on transgene silencing to obtain quantitative and qualitative information on their perceptions of the uncertainty characterising our knowledge of the phenomena. The results indicate that there are competing interpretations of the cause-effect relationships leading to gene silencing (model structure uncertainty). In particular, the roles of post-transcriptional gene silencing, position effects, DNA-DNA interactions, direct-repeat DNA structures, recognition factors and dsRNA and aberrant zRNA are debated. The study highlights several sources of uncertainty beyond the statistical uncertainty commonly reported in risk assessment. The results also reveal a discrepancy between the way in which uncertainties would be prioritized on the basis of the uncertainty analysis conducted, and the way in which they would be prioritized on the basis of expert willingness to pay to eliminate uncertainty. The results also reveal a diversity of expert opinions on the uncertainty characterizing transgene silencing. Because the methodology used to diagnose uncertainties was successful in revealing a broad spectrum of uncertainties as well as a diversity of expert opinion, it is concluded that the methodology used could contribute to increasing transparency and fostering a critical discussion on uncertainty in the decision making process. PMID:17988720

  20. Models to estimate overall analytical measurements uncertainty: assumptions, comparisons and applications.

    PubMed

    Rozet, E; Rudaz, S; Marini, R D; Ziémons, E; Boulanger, B; Hubert, Ph

    2011-09-30

    Evaluation of the reliability of analytical results is of core importance, as crucial decisions are taken based on them. Among the various methodologies for evaluating the fitness for purpose of analytical methods, overall measurement uncertainty estimation is more and more widely applied. Overall measurement uncertainty combines the remaining systematic influences with the random sources of uncertainty and thereby allows the reliability of results generated by analytical methods to be assessed. However, there are various interpretations of how to estimate overall measurement uncertainty, and thus various models for estimating it. Each model, together with its assumptions, has a great impact on the risk of wrongly declaring that analytical methods are suitable for their intended purpose. This review paper aims at (i) summarizing the various models used to estimate overall measurement uncertainty, (ii) providing their pros and cons, (iii) reviewing the main areas of application, and (iv) in conclusion, providing some recommendations for evaluating overall measurement uncertainty.

  1. Credible Computations: Standard and Uncertainty

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B.; VanDalsem, William (Technical Monitor)

    1995-01-01

    The discipline of computational fluid dynamics (CFD) is at a crossroad. Most of the significant advances related to computational methods have taken place. The emphasis is now shifting from methods to results. Significant efforts are made in applying CFD to solve design problems. The value of CFD results in design depends on the credibility of computed results for the intended use. The process of establishing credibility requires a standard so that there is consistency and uniformity in this process and in the interpretation of its outcome. The key element for establishing credibility is the quantification of uncertainty. This paper presents salient features of a proposed standard and a procedure for determining the uncertainty. A customer of CFD products - computer codes and computed results - expects the following: a computer code, in terms of its logic, numerics, and fluid dynamics, and the results generated by this code comply with specified requirements. This expectation is fulfilled by verification and validation of these requirements. The verification process assesses whether the problem is solved correctly and the validation process determines whether the right problem is solved. Standards for these processes are recommended. There is always some uncertainty, even if one uses validated models and verified computed results. The value of this uncertainty is important in the design process. This value is obtained by conducting a sensitivity-uncertainty analysis. Sensitivity analysis is generally defined as the procedure for determining the sensitivities of output parameters to input parameters. This analysis is a necessary step in the uncertainty analysis, and its results highlight which computed quantities and integrated quantities in computations need to be determined accurately and which quantities do not require such attention. Uncertainty analysis is generally defined as the analysis of the effect of the uncertainties

  2. Probabilistic accident consequence uncertainty analysis -- Uncertainty assessment for deposited material and external doses. Volume 1: Main report

    SciTech Connect

    Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Boardman, J.; Jones, J.A.; Harper, F.T.; Young, M.L.; Hora, S.C.

    1997-12-01

    The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA deposited material and external dose models.

  3. On solar geoengineering and climate uncertainty

    NASA Astrophysics Data System (ADS)

    MacMartin, Douglas G.; Kravitz, Ben; Rasch, Philip J.

    2015-09-01

    Uncertain climate system response has been raised as a concern regarding solar geoengineering. We explore the effects of geoengineering on one source of climate system uncertainty by evaluating the intermodel spread across 12 climate models participating in the Geoengineering Model Intercomparison project. The model spread in simulations of climate change and the model spread in the response to solar geoengineering are not additive but rather partially cancel. That is, the model spread in regional temperature and precipitation changes is reduced with CO2 and a solar reduction, in comparison to the case with increased CO2 alone. Furthermore, differences between models in their efficacy (the relative global mean temperature effect of solar versus CO2 radiative forcing) explain most of the regional differences between models in their response to an increased CO2 concentration that is offset by a solar reduction. These conclusions are important for clarifying geoengineering risks regarding uncertainty.

  4. On solar geoengineering and climate uncertainty

    SciTech Connect

    MacMartin, Douglas; Kravitz, Benjamin S.; Rasch, Philip J.

    2015-09-03

    Uncertainty in the climate system response has been raised as a concern regarding solar geoengineering. Here we show that model projections of regional climate change outcomes may have greater agreement under solar geoengineering than with CO2 alone. We explore the effects of geoengineering on one source of climate system uncertainty by evaluating the inter-model spread across 12 climate models participating in the Geoengineering Model Intercomparison project (GeoMIP). The model spread in regional temperature and precipitation changes is reduced with CO2 and a solar reduction, in comparison to the case with increased CO2 alone. That is, the intermodel spread in predictions of climate change and the model spread in the response to solar geoengineering are not additive but rather partially cancel. Furthermore, differences in efficacy explain most of the differences between models in their temperature response to an increase in CO2 that is offset by a solar reduction. These conclusions are important for clarifying geoengineering risks.

  5. Uncertainty profiles for the validation of analytical methods.

    PubMed

    Saffaj, T; Ihssane, B

    2011-09-15

    This article presents a new global strategy for the validation of analytical methods and the estimation of measurement uncertainty. Our purpose is to give researchers in the field of analytical chemistry access to a powerful tool for the evaluation of quantitative analytical procedures. Indeed, the proposed strategy facilitates analytical validation by providing a decision tool based on the uncertainty profile and the β-content tolerance interval. Equally important, this approach allows a good estimate of measurement uncertainty by using validation data, without recourse to additional experiments. In the example below, we confirmed the applicability of this new strategy for the validation of a chromatographic bioanalytical method and a good estimate of the measurement uncertainty without any extra effort or additional experiments. A comparative study with the SFSTP approach showed that both strategies selected the same calibration functions. The holistic character of the measurement uncertainty, compared to the total error, was influenced by our choice of uncertainty profile. Nevertheless, we think that adopting the uncertainty profile at the validation stage controls the risk of using the analytical method in the routine phase.

  6. Uncertainty in measurement: a review of monte carlo simulation using microsoft excel for the calculation of uncertainties through functional relationships, including uncertainties in empirically derived constants.

    PubMed

    Farrance, Ian; Frenkel, Robert

    2014-02-01

    The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value. Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship
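
    The same MCS recipe in Python rather than a spreadsheet, for a made-up functional relationship y = k a / b in which the empirically derived "constant" k carries its own uncertainty:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 100_000

        # Input quantities with IQC-derived standard uncertainties (invented
        # values), plus an empirical constant with its own uncertainty.
        a = rng.normal(10.0, 0.2, n)
        b = rng.normal(4.0, 0.1, n)
        k = rng.normal(1.25, 0.05, n)   # empirically derived "constant"

        # Push every simulated input set through the functional relationship;
        # the spread of y is the measurand's uncertainty, with no need for
        # the partial derivatives of the GUM modelling approach.
        y = k * a / b
        print(y.mean(), y.std(ddof=1))
        print(np.percentile(y, [2.5, 97.5]))   # 95% coverage interval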

  7. Communication in Creative Collaborations: The Challenges of Uncertainty and Desire Related to Task, Identity, and Relational Goals

    ERIC Educational Resources Information Center

    Jordan, Michelle E.; Babrow, Austin S.

    2013-01-01

    This study offers a systematic analysis of uncertainty in communication education by examining communication goals and challenges in the context of collaborative creative problem-solving in engineering assignments. Engineering design projects are seen as having the potential to help K-12 students learn to deal with uncertainty as well as a means…

  8. Mutualism Disruption Threatens Global Plant Biodiversity: A Systematic Review

    PubMed Central

    Aslan, Clare E.; Zavaleta, Erika S.; Tershy, Bernie; Croll, Donald

    2013-01-01

    Background As global environmental change accelerates, biodiversity losses can disrupt interspecific interactions. Extinctions of mutualist partners can create “widow” species, which may face reduced ecological fitness. Hypothetically, such mutualism disruptions could have cascading effects on biodiversity by causing additional species coextinctions. However, the scope of this problem – the magnitude of biodiversity that may lose mutualist partners and the consequences of these losses – remains unknown. Methodology/Principal Findings We conducted a systematic review and synthesis of data from a broad range of sources to estimate the threat posed by vertebrate extinctions to the global biodiversity of vertebrate-dispersed and -pollinated plants. Though enormous research gaps persist, our analysis identified Africa, Asia, the Caribbean, and global oceanic islands as geographic regions at particular risk of disruption of these mutualisms; within these regions, percentages of plant species likely affected range from 2.1–4.5%. Widowed plants are likely to experience reproductive declines of 40–58%, potentially threatening their persistence in the context of other global change stresses. Conclusions Our systematic approach demonstrates that thousands of species may be impacted by disruption in one class of mutualisms, but extinctions will likely disrupt other mutualisms, as well. Although uncertainty is high, there is evidence that mutualism disruption directly threatens significant biodiversity in some geographic regions. Conservation measures with explicit focus on mutualistic functions could be necessary to bolster populations of widowed species and maintain ecosystem functions. PMID:23840571

  9. Quantification and propagation of disciplinary uncertainty via Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Mantis, George Constantine

    2002-08-01

    Several needs exist in the military, commercial, and civil sectors for new hypersonic systems. These needs remain unfulfilled, due in part to the uncertainty encountered in designing these systems. This uncertainty takes a number of forms, including disciplinary uncertainty, that which is inherent in the analytical tools utilized during the design process. Yet, few efforts to date empower the designer with the means to account for this uncertainty within the disciplinary analyses. In the current state-of-the-art in design, the effects of this unquantifiable uncertainty significantly increase the risks associated with new design efforts. Typically, the risk proves too great to allow a given design to proceed beyond the conceptual stage. To that end, the research encompasses the formulation and validation of a new design method, a systematic process for probabilistically assessing the impact of disciplinary uncertainty. The method implements Bayesian Statistics theory to quantify this source of uncertainty, and propagate its effects to the vehicle system level. Comparison of analytical and physical data for existing systems, modeled a priori in the given analysis tools, leads to quantification of uncertainty in those tools' calculation of discipline-level metrics. Then, after exploration of the new vehicle's design space, the quantified uncertainty is propagated probabilistically through the design space. This ultimately results in the assessment of the impact of disciplinary uncertainty on the confidence in the design solution: the final shape and variability of the probability functions defining the vehicle's system-level metrics. Although motivated by the hypersonic regime, the proposed treatment of uncertainty applies to any class of aerospace vehicle, just as the problem itself affects the design process of any vehicle. A number of computer programs comprise the environment constructed for the implementation of this work. Application to a single

  10. Uncertainty assessment tool for climate change impact indicators

    NASA Astrophysics Data System (ADS)

    Otto, Juliane; Keup-Thiel, Elke; Jacob, Daniela; Rechid, Diana; Lückenkötter, Johannes; Juckes, Martin

    2015-04-01

    A major difficulty in the study of climate change impact indicators is dealing with the numerous sources of uncertainty in climate and non-climate data. Assessing these uncertainties, however, is needed to communicate to users the degree of certainty of climate change impact indicators. This communication of uncertainty is an important component of the FP7 project "Climate Information Portal for Copernicus" (CLIPC). CLIPC is developing a portal to provide a central point of access for authoritative scientific information on climate change. In this project, the Climate Service Center 2.0 is in charge of developing a tool to assess the uncertainty of climate change impact indicators. The calculation of climate change impact indicators will include climate data from satellite and in-situ observations, climate models and re-analyses, and non-climate data. There is as yet no systematic classification of the uncertainties arising from the whole range of climate change impact indicators. We develop a framework that intends to clarify the potential sources of uncertainty of a given indicator and provides - if possible - solutions for how to quantify the uncertainties. To structure the sources of uncertainty of climate change impact indicators, we first classify uncertainties along a 'cascade of uncertainty' (Reyer 2013). Our cascade consists of three levels which correspond to the CLIPC meta-classification of impact indicators: Tier-1 indicators are intended to give information on the climate system. Tier-2 indicators attempt to quantify the impacts of climate change on biophysical systems (e.g., flood risk). Tier-3 indicators primarily aim at providing information on the socio-economic systems affected by climate change. At each level, the potential sources of uncertainty of the input data sets and their processing will be discussed. Reference: Reyer, C. (2013): The cascade of uncertainty in modeling forest ecosystem responses to environmental change and the challenge of sustainable

  11. Visualizing uncertainty about the future.

    PubMed

    Spiegelhalter, David; Pearson, Mike; Short, Ian

    2011-09-01

    We are all faced with uncertainty about the future, but we can get the measure of some uncertainties in terms of probabilities. Probabilities are notoriously difficult to communicate effectively to lay audiences, and in this review we examine current practice for communicating uncertainties visually, using examples drawn from sport, weather, climate, health, economics, and politics. Despite the burgeoning interest in infographics, there is limited experimental evidence on how different types of visualizations are processed and understood, although the effectiveness of some graphics clearly depends on the relative numeracy of an audience. Fortunately, it is increasingly easy to present data in the form of interactive visualizations and in multiple types of representation that can be adjusted to user needs and capabilities. Nonetheless, communicating deeper uncertainties resulting from incomplete or disputed knowledge--or from essential indeterminacy about the future--remains a challenge.

  12. Uncertainty analysis of thermoreflectance measurements.

    PubMed

    Yang, Jia; Ziade, Elbara; Schmidt, Aaron J

    2016-01-01

    We derive a generally applicable formula to calculate the precision of multi-parameter measurements that apply least squares algorithms. This formula, which accounts for experimental noise and uncertainty in the controlled model parameters, is then used to analyze the uncertainty of thermal property measurements with pump-probe thermoreflectance techniques. We compare the uncertainty of time domain thermoreflectance and frequency domain thermoreflectance (FDTR) when measuring bulk materials and thin films, considering simultaneous measurements of various combinations of thermal properties, including thermal conductivity, heat capacity, and thermal boundary conductance. We validate the uncertainty analysis using Monte Carlo simulations on data from FDTR measurements of an 80 nm gold film on fused silica. PMID:26827342
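
    For orientation, the standard least-squares result such a precision formula generalizes: with residual noise variance \sigma^2 and Jacobian J of the model with respect to the fitted parameters, the parameter covariance is

        \mathrm{Cov}(\hat{\theta}) \;\approx\; \sigma^2 \bigl( J^{\mathsf T} J \bigr)^{-1};

    the formula derived in the paper additionally carries the uncertainty of the controlled (non-fitted) model parameters into this covariance.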

  13. Estimations of uncertainties of frequencies

    NASA Astrophysics Data System (ADS)

    Eyer, Laurent; Nicoletti, Jean-Marc; Morgenthaler, Stephan

    2015-08-01

    Diverse variable phenomena in the Universe are periodic. Astonishingly, many of the periodic signals present in stars have timescales coinciding with human ones (from minutes to years). The periods of signals often have to be deduced from time series which are irregularly sampled and sparse; furthermore, correlations between the brightness measurements and their estimated uncertainties are common. The uncertainty on the frequency estimation is reviewed. We explore the astronomical and statistical literature, for both regular and irregular samplings. The frequency uncertainty depends on the signal-to-noise ratio, the frequency, and the observational timespan. The shape of the light curve should also intervene, since sharp features such as exoplanet transits, stellar eclipses, and the rising branches of pulsating stars give stringent constraints. We propose several procedures (parametric and nonparametric) to estimate the uncertainty on the frequency, which are subsequently tested against simulated data to assess their performance.
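
    For orientation, a classical parametric baseline of the kind such procedures are tested against: for a single sinusoid of amplitude A sampled at N roughly evenly spaced epochs over a timespan T with Gaussian noise of standard deviation \sigma, the frequency uncertainty is approximately

        \sigma_f \;\approx\; \sqrt{\frac{6}{N}}\,\frac{1}{\pi T}\,\frac{\sigma}{A}

    (a standard literature result, quoted here as an assumption rather than from this record).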

  15. Climate Projections and Uncertainty Communication.

    PubMed

    Joslyn, Susan L; LeClerc, Jared E

    2016-01-01

    Lingering skepticism about climate change might be due in part to the way climate projections are perceived by members of the public. Variability between scientists' estimates might give the impression that scientists disagree about the fact of climate change rather than about details concerning the extent or timing. Providing uncertainty estimates might clarify that the variability is due in part to quantifiable uncertainty inherent in the prediction process, thereby increasing people's trust in climate projections. This hypothesis was tested in two experiments. Results suggest that including uncertainty estimates along with climate projections leads to an increase in participants' trust in the information. Analyses explored the roles of time, place, demographic differences (e.g., age, gender, education level, political party affiliation), and initial belief in climate change. Implications are discussed in terms of the potential benefit of adding uncertainty estimates to public climate projections. PMID:26695995

  16. Dynamical Realism and Uncertainty Propagation

    NASA Astrophysics Data System (ADS)

    Park, Inkwan

    In recent years, Space Situational Awareness (SSA) has become increasingly important as the number of tracked Resident Space Objects (RSOs) continues to grow. One of the most significant technical discussions in SSA is how to propagate state uncertainty in a way that is consistent with the highly nonlinear dynamical environment. To keep pace with this situation, various methods have been proposed to propagate uncertainty accurately by capturing the nonlinearity of the dynamical system. We notice that all of these methods focus on describing the dynamical system as precisely as possible from a mathematical perspective. This study proposes a new perspective based on understanding the dynamics of the evolution of uncertainty itself. We expect that profound insight into the dynamical system could open the possibility of developing a new method for accurate uncertainty propagation. These considerations lead naturally to the goals of this study. First, we investigate the most dominant factors in the evolution of uncertainty in order to realize the dynamical system more rigorously. Second, we aim at developing a new method, based on the first investigation, that enables efficient orbit uncertainty propagation while maintaining accuracy. We eliminate the short-period variations from the dynamical system, called a simplified dynamical system (SDS), to investigate the most dominant factors. To achieve this goal, the Lie transformation method is introduced, since this transformation can define the solutions for each variation separately. From the first investigation, we conclude that the secular variations, including the long-period variations, are dominant for the propagation of uncertainty, i.e., short-period variations are negligible. Then, we develop the new method by combining the SDS and the higher-order nonlinear expansion method, called state transition tensors (STTs). The new method retains the advantages of the SDS and the STTs and propagates
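
    The state transition tensor expansion referred to above, written to second order in standard form (summation over repeated indices), maps an initial deviation \delta x(t_0) to a later one:

        \delta x_i(t) \;=\; \Phi_{i,a}(t, t_0)\, \delta x_a(t_0) \;+\; \tfrac{1}{2}\, \Phi_{i,ab}(t, t_0)\, \delta x_a(t_0)\, \delta x_b(t_0) \;+\; \cdots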

  17. Uncertainty of empirical correlation equations

    NASA Astrophysics Data System (ADS)

    Feistel, R.; Lovell-Smith, J. W.; Saunders, P.; Seitz, S.

    2016-08-01

    The International Association for the Properties of Water and Steam (IAPWS) has published a set of empirical reference equations of state, forming the basis of the 2010 Thermodynamic Equation of Seawater (TEOS-10), from which all thermodynamic properties of seawater, ice, and humid air can be derived in a thermodynamically consistent manner. For each of the equations of state, the parameters have been found by simultaneously fitting equations for a range of different derived quantities using large sets of measurements of these quantities. In some cases, uncertainties in these fitted equations have been assigned based on the uncertainties of the measurement results. However, because uncertainties in the parameter values have not been determined, it is not possible to estimate the uncertainty in many of the useful quantities that can be calculated using the parameters. In this paper we demonstrate how the method of generalised least squares (GLS), in which the covariance of the input data is propagated into the values calculated by the fitted equation, and in particular into the covariance matrix of the fitted parameters, can be applied to one of the TEOS-10 equations of state, namely IAPWS-95 for fluid pure water. Using the calculated parameter covariance matrix, we provide some preliminary estimates of the uncertainties in derived quantities, namely the second and third virial coefficients for water. We recommend further investigation of the GLS method for use as a standard method for calculating and propagating the uncertainties of values computed from empirical equations.
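
    The GLS estimator and the parameter covariance it yields, in the standard linear form assumed here: for design matrix X and input data y with covariance V,

        \hat{\beta} \;=\; \bigl( X^{\mathsf T} V^{-1} X \bigr)^{-1} X^{\mathsf T} V^{-1} y,
        \qquad
        \mathrm{Cov}(\hat{\beta}) \;=\; \bigl( X^{\mathsf T} V^{-1} X \bigr)^{-1},

    so the covariance of the input data propagates directly into the covariance matrix of the fitted parameters, and from there into any derived quantity via its sensitivities.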

  18. Wildfire Decision Making Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Thompson, M.

    2013-12-01

    Decisions relating to wildfire management are subject to multiple sources of uncertainty, and are made by a broad range of individuals, across a multitude of environmental and socioeconomic contexts. In this presentation I will review progress towards identification and characterization of uncertainties and how this information can support wildfire decision-making. First, I will review a typology of uncertainties common to wildfire management, highlighting some of the more salient sources of uncertainty and how they present challenges to assessing wildfire risk. This discussion will cover the expanding role of burn probability modeling, approaches for characterizing fire effects, and the role of multi-criteria decision analysis, and will provide illustrative examples of integrated wildfire risk assessment across a variety of planning scales. Second, I will describe a related uncertainty typology that focuses on the human dimensions of wildfire management, specifically addressing how social, psychological, and institutional factors may impair cost-effective risk mitigation. This discussion will encompass decision processes before, during, and after fire events, with a specific focus on active management of complex wildfire incidents. An improved ability to characterize uncertainties faced in wildfire management could lead to improved delivery of decision support, targeted communication strategies, and ultimately to improved wildfire management outcomes.

  19. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  20. Calibration and systematic error analysis for the COBE DMR 4-year sky maps

    SciTech Connect

    Kogut, A.; Banday, A.J.; Bennett, C.L.; Gorski, K.M.; Hinshaw,G.; Jackson, P.D.; Keegstra, P.; Lineweaver, C.; Smoot, G.F.; Tenorio,L.; Wright, E.L.

    1996-01-04

    The Differential Microwave Radiometers (DMR) instrument aboard the Cosmic Background Explorer (COBE) has mapped the full microwave sky to mean sensitivity 26 μK per 7 degrees field of view. The absolute calibration is determined to 0.7 percent with drifts smaller than 0.2 percent per year. We have analyzed both the raw differential data and the pixelized sky maps for evidence of contaminating sources such as solar system foregrounds, instrumental susceptibilities, and artifacts from data recovery and processing. Most systematic effects couple only weakly to the sky maps. The largest uncertainties in the maps result from the instrument susceptibility to Earth's magnetic field, microwave emission from Earth, and upper limits to potential effects at the spacecraft spin period. Systematic effects in the maps are small compared to either the noise or the celestial signal: the 95 percent confidence upper limit for the pixel-pixel rms from all identified systematics is less than 6 μK in the worst channel. A power spectrum analysis of the (A-B)/2 difference maps shows no evidence for additional undetected systematic effects.
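
    The (A-B)/2 null-test logic is easy to sketch: the common sky signal cancels in the half-difference of two channel maps, so the residual pixel-pixel rms bounds noise plus unmodeled systematics. All numbers below are invented:

      import numpy as np

      rng = np.random.default_rng(0)
      npix = 10_000
      sky = 100e-6 * rng.standard_normal(npix)          # common celestial signal (K)
      map_a = sky + 30e-6 * rng.standard_normal(npix)   # channel A: sky + noise
      map_b = sky + 30e-6 * rng.standard_normal(npix)   # channel B: sky + noise

      null = 0.5 * (map_a - map_b)                      # sky cancels, noise remains
      print(f"null-map rms: {null.std() * 1e6:.1f} microK")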

  1. Uncertainty Quantification of Calculated Temperatures for the AGR-1 Experiment

    SciTech Connect

    Binh T. Pham; Jeffrey J. Einerson; Grant L. Hawkes

    2012-04-01

    This report documents an effort to quantify the uncertainty of the calculated temperature data for the first Advanced Gas Reactor (AGR-1) fuel irradiation experiment conducted in the INL's Advanced Test Reactor (ATR) in support of the Next Generation Nuclear Plant (NGNP) R&D program. Because uncertainties are inherent in the physics and thermal simulations of the AGR-1 test, the simulation results are used in combination with statistical analysis methods to improve qualification of the measured data. Additionally, the temperature simulation data for AGR tests can be used for validation of the fuel transport and fuel performance simulation models. The crucial role of the calculated fuel temperatures in ensuring achievement of the AGR experimental program objectives requires accurate determination of the model temperature uncertainties. The report is organized into three chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program and provides overviews of the AGR-1 measured data, the AGR-1 test configuration and test procedure, and the thermal simulation. Chapter 2 describes the uncertainty quantification procedure for the temperature simulation data of the AGR-1 experiment, namely: (i) identify and quantify uncertainty sources; (ii) perform sensitivity analysis for several thermal test conditions; and (iii) use uncertainty propagation to quantify the overall response temperature uncertainty. A set of issues associated with modeling uncertainties resulting from the expert assessments is identified. This also includes the experimental design to estimate the main effects and interactions of the important thermal model parameters. Chapter 3 presents the overall uncertainty results for the six AGR-1 capsules, including uncertainties for the daily volume-average and peak fuel temperatures, daily average temperatures at TC locations, and time-average volume-average and time-average peak fuel temperatures.
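
    The propagation step of the procedure can be sketched as follows, with an invented surrogate in place of the actual AGR-1 thermal model and made-up input uncertainties:

      import numpy as np

      rng = np.random.default_rng(42)
      N = 100_000

      # Hypothetical uncertain inputs (means and spreads are invented):
      q = rng.normal(1.0, 0.05, N)      # heat-rate multiplier
      k = rng.normal(1.0, 0.10, N)      # gas-gap conductivity multiplier
      h = rng.normal(1.0, 0.03, N)      # coolant heat-transfer multiplier

      # Surrogate response temperature with simple scalings (not the real model).
      T = 700.0 + 400.0 * q / k + 50.0 * (1.0 - h)

      lo, hi = np.percentile(T, [2.5, 97.5])
      print(f"T = {T.mean():.0f} C +/- {T.std():.0f} C "
            f"(95% interval {lo:.0f}-{hi:.0f} C)")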

  2. Uncertainty Quantification of Calculated Temperatures for the AGR-1 Experiment

    SciTech Connect

    Binh T. Pham; Jeffrey J. Einerson; Grant L. Hawkes

    2013-03-01

    This report documents an effort to quantify the uncertainty of the calculated temperature data for the first Advanced Gas Reactor (AGR-1) fuel irradiation experiment conducted in the INL's Advanced Test Reactor (ATR) in support of the Next Generation Nuclear Plant (NGNP) R&D program. Because uncertainties are inherent in the physics and thermal simulations of the AGR-1 test, the simulation results are used in combination with statistical analysis methods to improve qualification of the measured data. Additionally, the temperature simulation data for AGR tests can be used for validation of the fuel transport and fuel performance simulation models. The crucial role of the calculated fuel temperatures in ensuring achievement of the AGR experimental program objectives requires accurate determination of the model temperature uncertainties. The report is organized into three chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program and provides overviews of the AGR-1 measured data, the AGR-1 test configuration and test procedure, and the thermal simulation. Chapter 2 describes the uncertainty quantification procedure for the temperature simulation data of the AGR-1 experiment, namely: (i) identify and quantify uncertainty sources; (ii) perform sensitivity analysis for several thermal test conditions; and (iii) use uncertainty propagation to quantify the overall response temperature uncertainty. A set of issues associated with modeling uncertainties resulting from the expert assessments is identified. This also includes the experimental design to estimate the main effects and interactions of the important thermal model parameters. Chapter 3 presents the overall uncertainty results for the six AGR-1 capsules, including uncertainties for the daily volume-average and peak fuel temperatures, daily average temperatures at TC locations, and time-average volume-average and time-average peak fuel temperatures.

  3. Video Scanning Hartmann Optical Tester (VSHOT) Uncertainty Analysis (Milestone Report)

    SciTech Connect

    Gray, A.; Lewandowski, A.; Wendelin, T.

    2010-10-01

    In 1997, an uncertainty analysis was conducted of the Video Scanning Hartmann Optical Tester (VSHOT). In 2010, we completed a new analysis, based primarily on the geometric optics of the system, which shows sensitivities to various design and operational parameters. We discuss sources of error with measuring devices, instrument calibrations, and operator measurements for a parabolic trough mirror panel test. These help to guide the operator in proper setup, and help end-users to understand the data they are provided. We include both the systematic (bias) and random (precision) errors for VSHOT testing and their contributions to the uncertainty. The contributing factors considered in this study are: target tilt; target face to laser output distance; instrument vertical offset; laser output angle; distance between the tool and the test piece; camera calibration; and laser scanner. These contributing factors were applied to the calculated slope error, focal length, and test article tilt that are generated by the VSHOT data processing. Results show the estimated 2-sigma uncertainty in slope error for a parabolic trough line scan test to be +/- 0.2 milliradians; the uncertainty in the focal length is +/- 0.1 mm, and the uncertainty in test article tilt is +/- 0.04 milliradians.
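
    The quadrature combination of error contributions can be sketched in a few lines; the individual 1-sigma values below are placeholders (chosen so the total roughly matches the quoted 2-sigma slope error), not the report's actual budget:

      import numpy as np

      # hypothetical 1-sigma slope-error contributions, in milliradians
      contributions = {
          "target tilt":            0.04,
          "target-to-laser range":  0.03,
          "instrument offset":      0.02,
          "laser output angle":     0.05,
          "tool-to-test distance":  0.03,
          "camera calibration":     0.05,
          "laser scanner":          0.04,
      }

      # independent contributions combine as the root sum of squares
      u = np.sqrt(sum(v ** 2 for v in contributions.values()))
      print(f"combined: {u:.3f} mrad (1-sigma), {2 * u:.2f} mrad (2-sigma)")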

  4. Structural model uncertainty in stochastic simulation

    SciTech Connect

    McKay, M.D.; Morrison, J.D.

    1997-09-01

    Prediction uncertainty in stochastic simulation models can be described by a hierarchy of components: stochastic variability at the lowest level, input and parameter uncertainty at a higher level, and structural model uncertainty at the top. It is argued that a usual paradigm for analysis of input uncertainty is not suitable for application to structural model uncertainty. An approach more likely to produce an acceptable methodology for analyzing structural model uncertainty is one that uses characteristics specific to the particular family of models.

  5. Modelling ecosystem service flows under uncertainty with stochastic SPAN

    USGS Publications Warehouse

    Johnson, Gary W.; Snapp, Robert R.; Villa, Ferdinando; Bagstad, Kenneth J.

    2012-01-01

    Ecosystem service models are increasingly in demand for decision making. However, the data required to run these models are often patchy, missing, outdated, or untrustworthy. Further, communication of data and model uncertainty to decision makers is often either absent or unintuitive. In this work, we introduce a systematic approach to addressing both the data gap and the difficulty in communicating uncertainty through a stochastic adaptation of the Service Path Attribution Networks (SPAN) framework. The SPAN formalism assesses ecosystem services through a set of up to 16 maps, which characterize the services in a study area in terms of flow pathways between ecosystems and human beneficiaries. Although the SPAN algorithms were originally defined deterministically, we present them here in a stochastic framework which combines probabilistic input data with a stochastic transport model in order to generate probabilistic spatial outputs. This enables a novel feature among ecosystem service models: the ability to spatially visualize uncertainty in the model results. The stochastic SPAN model can analyze areas where data limitations are prohibitive for deterministic models. Greater uncertainty in the model inputs (including missing data) should lead to greater uncertainty expressed in the model’s output distributions. By using Bayesian belief networks to fill data gaps and expert-provided trust assignments to augment untrustworthy or outdated information, we can account for uncertainty in input data, producing a model that is still able to run and provide information where strictly deterministic models could not. Taken together, these attributes enable more robust and intuitive modelling of ecosystem services under uncertainty.
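
    A toy sketch in the SPAN spirit (not the SPAN code): probabilistic input rasters are sampled repeatedly, a trivial transport rule is applied, and the per-cell output distributions yield both an expected-flow map and a spatially explicit uncertainty map:

      import numpy as np

      rng = np.random.default_rng(7)
      shape, n_draws = (20, 20), 500
      decay = 0.8                      # fraction of flow surviving one cell step

      outputs = np.empty((n_draws,) + shape)
      for i in range(n_draws):
          # per-cell Beta draws stand in for a probabilistic source layer
          source = rng.beta(2.0, 5.0, shape)
          # trivial "transport": each cell receives decayed flow from its
          # left neighbour (a stand-in for SPAN's flow pathways)
          flow = np.zeros(shape)
          flow[:, 1:] = decay * source[:, :-1]
          outputs[i] = flow

      mean_map = outputs.mean(axis=0)   # expected service flow per cell
      sd_map = outputs.std(axis=0)      # spatially explicit uncertainty
      print("max expected flow:", mean_map.max(), "max sd:", sd_map.max())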

  6. Uncertainty and sensitivity analysis for photovoltaic system modeling.

    SciTech Connect

    Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk

    2013-12-01

    We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each consisting of a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
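
    The residual-resampling scheme can be sketched as follows; the stage models and residual pools are invented stand-ins for the paper's fitted models:

      import numpy as np

      rng = np.random.default_rng(3)
      N = 20_000

      # Hypothetical pools of observed residuals, one per modeling step:
      poa_resid = rng.normal(0.0, 20.0, 1000)     # plane-of-array irradiance, W/m2
      eff_resid = rng.normal(0.0, 10.0, 1000)     # effective irradiance, W/m2
      temp_resid = rng.normal(0.0, 1.5, 1000)     # cell temperature, C

      ghi = 800.0                                  # a nominal irradiance input
      poa = 1.1 * ghi + rng.choice(poa_resid, N)   # step 1 + sampled residual
      eff = 0.97 * poa + rng.choice(eff_resid, N)  # step 2 + sampled residual
      t_cell = 25.0 + 0.03 * eff + rng.choice(temp_resid, N)   # step 3
      power = 0.2 * eff * (1.0 - 0.004 * (t_cell - 25.0))      # step 4: DC power

      print(f"power: {power.mean():.1f} +/- {power.std():.1f} "
            f"(relative {power.std() / power.mean():.1%})")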

  7. Uncertainty in Regional Air Quality Modeling

    NASA Astrophysics Data System (ADS)

    Digar, Antara

    concentrations (oxides of nitrogen) have been used to adjust probabilistic estimates of pollutant sensitivities based on the performance of simulations in reliably reproducing ambient measurements. Various observational metrics have been explored for better scientific understanding of how sensitivity estimates vary with measurement constraints. Future work could extend these methods to incorporate additional modeling uncertainties and alternate observational metrics, and explore the responsiveness of future air quality to projected trends in emissions and climate change.

  8. MODIS On-orbit Calibration Uncertainty Assessment

    NASA Technical Reports Server (NTRS)

    Chiang, Vincent; Sun, Junqiang; Wu, Aisheng

    2011-01-01

    MODIS has 20 reflective solar bands (RSB) and 16 thermal emissive bands (TEB). Compared to its heritage sensors, MODIS was developed with very stringent calibration uncertainty requirements. As a result, MODIS was designed and built with a set of on-board calibrators (OBC), which allow key sensor performance parameters and on-orbit calibration coefficients to be monitored and updated. In terms of its calibration traceability, MODIS RSB calibration is reflectance based using an on-board solar diffuser (SD) and the TEB calibration is radiance based using an on-board blackbody (BB). In addition to on-orbit calibration coefficients derived from its OBC, calibration parameters determined from sensor pre-launch calibration and characterization are used in both the RSB and TEB calibration and retrieval algorithms. This paper provides a brief description of MODIS calibration methodologies and an in-depth analysis of its on-orbit calibration uncertainties. Also discussed in this paper are uncertainty contributions from individual components and differences due to Terra and Aqua MODIS instrument characteristics and on-orbit performance.

  9. Reducing Uncertainties in Neutron-Induced Fission Cross Sections Using a Time Projection Chamber

    NASA Astrophysics Data System (ADS)

    Manning, Brett; Niffte Collaboration

    2015-10-01

    Neutron-induced fission cross sections for actinides have long been of great interest for nuclear energy and stockpile stewardship. Traditionally, measurements were performed using fission chambers, which provided limited information about the detected fission events. For the case of 239Pu(n,f), sensitivity studies have shown a need for more precise measurements. Recently the Neutron Induced Fission Fragment Tracking Experiment (NIFFTE) has developed the fission Time Projection Chamber (fissionTPC) to measure fission cross sections to better than 1% uncertainty by providing 3D tracking of fission fragments. The fissionTPC collected data to calculate the 239Pu(n,f) cross section at the Weapons Neutron Research facility at the Los Alamos Neutron Science Center during the 2014 run cycle. Preliminary analysis has focused on particle identification and on target and beam non-uniformities to reduce the uncertainty on the cross section. Additionally, the collaboration is investigating other systematic errors that could not be well studied with a traditional fission chamber. LA-UR-15-24906.

  10. Measuring the uncertainties of discharge measurements: interlaboratory experiments in hydrometry

    NASA Astrophysics Data System (ADS)

    Le Coz, Jérôme; Blanquart, Bertrand; Pobanz, Karine; Dramais, Guillaume; Pierrefeu, Gilles; Hauet, Alexandre; Despax, Aurélien

    2015-04-01

    Quantifying the uncertainty of streamflow data is key for the hydrological sciences. Conventional uncertainty analysis based on error propagation techniques is restricted by the absence of traceable discharge standards and by the weight of difficult-to-predict errors related to the operator, procedure and measurement environment. Field interlaboratory experiments have recently emerged as an efficient, standardized method to 'measure' the uncertainties of a given streamgauging technique in given measurement conditions. Both uncertainty approaches are compatible and should be developed jointly in the field of hydrometry. In recent years, several interlaboratory experiments have been reported by different hydrological services. They involved different streamgauging techniques, including acoustic profilers (ADCP), current-meters and handheld radars (SVR). Uncertainty analysis was not always their primary goal: most often, testing the proficiency and homogeneity of instruments, makes and models, procedures and operators was the original motivation. When interlaboratory experiments are processed for uncertainty analysis, once outliers have been discarded, all participants are assumed to be equally skilled and to apply the same streamgauging technique in equivalent conditions. A universal requirement is that all participants simultaneously measure the same discharge, which shall be kept constant to within negligible variations. To the best of our knowledge, we were the first to apply the interlaboratory method to computing the uncertainties of streamgauging techniques according to the authoritative international documents (ISO standards). Several specific issues arise due to the measurement conditions in outdoor canals and rivers. The main limitation is that the best available river discharge references are usually too uncertain to quantify the bias of the streamgauging technique, i.e. the systematic errors that are common to all participants in the experiment. A reference or a
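
    When participants are assumed equally skilled, the core interlaboratory computation reduces to the dispersion of their simultaneous discharge measurements; a minimal sketch with invented values (note that any bias common to all participants remains unresolved by this calculation):

      import numpy as np

      # hypothetical simultaneous ADCP discharges from 8 teams, m3/s
      q = np.array([52.1, 50.8, 51.5, 53.0, 51.9, 50.5, 52.4, 51.2])

      q_mean = q.mean()
      s_r = q.std(ddof=1)            # interlaboratory standard deviation
      u_rel = s_r / q_mean           # relative standard uncertainty
      print(f"Q = {q_mean:.1f} m3/s, expanded U(k=2) = {2 * 100 * u_rel:.1f} %")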

  11. The Role of Uncertainty, Awareness, and Trust in Visual Analytics.

    PubMed

    Sacha, Dominik; Senaratne, Hansi; Kwon, Bum Chul; Ellis, Geoffrey; Keim, Daniel A

    2016-01-01

    Visual analytics supports humans in generating knowledge from large and often complex datasets. Evidence is collected, collated and cross-linked with our existing knowledge. In the process, a myriad of analytical and visualisation techniques are employed to generate a visual representation of the data. These often introduce their own uncertainties, in addition to the ones inherent in the data, and these propagated and compounded uncertainties can result in impaired decision making. The user's confidence or trust in the results depends on the extent of the user's awareness of the underlying uncertainties generated on the system side. This paper unpacks the uncertainties that propagate through visual analytics systems, illustrates how humans' perceptual and cognitive biases influence the user's awareness of such uncertainties, and shows how this affects the user's trust building. The knowledge generation model for visual analytics is used to provide a terminology and framework for discussing the consequences of these aspects in knowledge construction and, through examples, machine uncertainty is compared to human trust measures with provenance. Furthermore, guidelines for the design of uncertainty-aware systems are presented that can aid the user in better decision making. PMID:26529704

  12. Cost uncertainty for different levels of technology maturity

    SciTech Connect

    DeMuth, S.F.; Franklin, A.L.

    1996-08-07

    It is difficult at best to apply a single methodology for estimating cost uncertainties related to technologies of differing maturity. While highly mature technologies may have significant performance and manufacturing cost data available, less well developed technologies may be defined in only conceptual terms. Regardless of the degree of technical maturity, a cost estimate relating to application of the technology is often required to justify continued funding for development. Yet a cost estimate without its associated uncertainty lacks the information required to assess the economic risk. For this reason, it is important for the developer to provide some type of uncertainty along with a cost estimate. This study demonstrates how different methodologies for estimating uncertainties can be applied to cost estimates for technologies of different maturities. For a less well developed technology, an uncertainty analysis of the cost estimate can be based on a sensitivity analysis, whereas an uncertainty analysis of the cost estimate for a well developed technology can be based on an error propagation technique from classical statistics. It was decided to demonstrate these uncertainty estimation techniques with (1) an investigation of the additional cost of remediation due to beyond-baseline, nearly complete, waste heel retrieval from underground storage tanks (USTs) at Hanford; and (2) the cost related to the use of crystalline silico-titanate (CST) rather than the baseline CS100 ion exchange resin for cesium separation from UST waste at Hanford.

  13. The Role of Uncertainty, Awareness, and Trust in Visual Analytics.

    PubMed

    Sacha, Dominik; Senaratne, Hansi; Kwon, Bum Chul; Ellis, Geoffrey; Keim, Daniel A

    2016-01-01

    Visual analytics supports humans in generating knowledge from large and often complex datasets. Evidence is collected, collated and cross-linked with our existing knowledge. In the process, a myriad of analytical and visualisation techniques are employed to generate a visual representation of the data. These often introduce their own uncertainties, in addition to the ones inherent in the data, and these propagated and compounded uncertainties can result in impaired decision making. The user's confidence or trust in the results depends on the extent of the user's awareness of the underlying uncertainties generated on the system side. This paper unpacks the uncertainties that propagate through visual analytics systems, illustrates how humans' perceptual and cognitive biases influence the user's awareness of such uncertainties, and shows how this affects the user's trust building. The knowledge generation model for visual analytics is used to provide a terminology and framework for discussing the consequences of these aspects in knowledge construction and, through examples, machine uncertainty is compared to human trust measures with provenance. Furthermore, guidelines for the design of uncertainty-aware systems are presented that can aid the user in better decision making.

  14. Quantifying uncertainty in stable isotope mixing models

    DOE PAGES

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
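
    A pure Monte Carlo mixing calculation in the PMC spirit can be sketched briefly: candidate fraction vectors drawn uniformly from the simplex are retained when they reproduce the sample signature within a tolerance. The source signatures, sample values, and tolerance below are invented (three sources rather than six, for brevity):

      import numpy as np

      rng = np.random.default_rng(11)

      # hypothetical source signatures: rows = sources, cols = (d15N, d18O)
      sources = np.array([[0.0, 22.0], [9.0, -2.0], [20.0, 5.0]])
      sample = np.array([10.0, 6.0])         # measured sample signature
      tol = 0.5                              # acceptance tolerance, per mil

      draws = rng.dirichlet(np.ones(len(sources)), size=200_000)  # uniform on simplex
      mixed = draws @ sources                                     # predicted signatures
      accepted = draws[np.all(np.abs(mixed - sample) < tol, axis=1)]

      print(f"accepted {len(accepted)} of {len(draws)} draws")
      for i, (m, s) in enumerate(zip(accepted.mean(axis=0), accepted.std(axis=0))):
          print(f"  source {i}: fraction {m:.2f} +/- {s:.2f}")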

  15. Quantifying uncertainty in stable isotope mixing models

    NASA Astrophysics Data System (ADS)

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-01

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: Stable Isotope Analysis in R (SIAR), a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.

  16. Quantifying uncertainty in stable isotope mixing models

    SciTech Connect

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.

  17. Uncertainty Analysis of non-point source pollution control facilities design techniques in Korea

    NASA Astrophysics Data System (ADS)

    Lee, J.; Okjeong, L.; Gyeong, C. B.; Park, M. W.; Kim, S.

    2015-12-01

    The design of non-point source pollution control facilities in Korea rests largely on three quantities: the stormwater capture ratio, the stormwater load capture ratio, and the pollutant reduction efficiency of the facility. The stormwater capture ratio is given by a design formula as a function of the water quality treatment capacity: the greater the capacity, the more stormwater is intercepted by the facility. The stormwater load capture ratio is defined as the ratio of the load entering the facility to the total pollutant load generated in the target catchment, and is given by a design formula expressed as a function of the stormwater capture ratio. Estimating the stormwater capture ratio and load capture ratio requires extensive quantitative analysis of the hydrologic processes involved in pollutant emission, yet these formulas have been applied without any verification. Since systematic monitoring programs were insufficient, verification of these formulas was fundamentally impossible. Recently, however, the Korean Ministry of Environment has conducted a long-term systematic monitoring project, and verification of the formulas has become possible. In this presentation, the stormwater capture ratio and load capture ratio are re-estimated using actual TP data obtained from the long-term monitoring program at the Noksan industrial complex located in Busan, Korea. Through the re-estimation process, the uncertainty embedded in the design process applied to date is quantified. In addition, the uncertainties in the stormwater capture ratio estimate and in the stormwater load capture ratio estimate are expressed separately to quantify their relative impacts on the overall design process for non-point pollutant control facilities. Finally, the SWMM-MATLAB coupling module for model parameter estimation is introduced. Acknowledgement: This work is supported by the Korea Ministry of Environment as "The Eco Innovation Project : Non

  18. Entropic uncertainty and measurement reversibility

    NASA Astrophysics Data System (ADS)

    Berta, Mario; Wehner, Stephanie; Wilde, Mark M.

    2016-07-01

    The entropic uncertainty relation with quantum side information (EUR-QSI) from (Berta et al 2010 Nat. Phys. 6 659) is a unifying principle relating two distinctive features of quantum mechanics: quantum uncertainty due to measurement incompatibility, and entanglement. In these relations, quantum uncertainty takes the form of preparation uncertainty where one of two incompatible measurements is applied. In particular, the ‘uncertainty witness’ lower bound in the EUR-QSI is not a function of a post-measurement state. An insightful proof of the EUR-QSI from (Coles et al 2012 Phys. Rev. Lett. 108 210405) makes use of a fundamental mathematical consequence of the postulates of quantum mechanics known as the non-increase of quantum relative entropy under quantum channels. Here, we exploit this perspective to establish a tightening of the EUR-QSI which adds a new state-dependent term in the lower bound, related to how well one can reverse the action of a quantum measurement. As such, this new term is a direct function of the post-measurement state and can be thought of as quantifying how much disturbance a given measurement causes. Our result thus quantitatively unifies this feature of quantum mechanics with the others mentioned above. We have experimentally tested our theoretical predictions on the IBM quantum experience and find reasonable agreement between our predictions and experimental outcomes.
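
    For reference, the base EUR-QSI of Berta et al can be written as below; the paper's tightened relation adds a non-negative, state-dependent term on the right-hand side, indicated here only schematically as Δ_rev (its precise form, expressed through how well the measurement can be reversed, is given in the source):

      % Base entropic uncertainty relation with quantum side information
      % (Berta et al 2010): X and Z are outcomes of two incompatible
      % measurements on system A, B is the quantum memory, and
      % c = max_{x,z} |<x|z>|^2 quantifies the measurement overlap.
      S(X|B) + S(Z|B) \;\ge\; \log_2\frac{1}{c} + S(A|B)
      % Schematic form of the tightened relation, with a non-negative
      % state-dependent recovery term (see the source for its exact
      % definition in terms of the post-measurement state):
      S(X|B) + S(Z|B) \;\ge\; \log_2\frac{1}{c} + S(A|B) + \Delta_{\mathrm{rev}},
      \qquad \Delta_{\mathrm{rev}} \ge 0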

  19. Communicating Uncertainties on Climate Change

    NASA Astrophysics Data System (ADS)

    Planton, S.

    2009-09-01

    The term 'uncertainty' is confusing in everyday language, since in one of its most common senses it refers to what cannot be known in advance or what is subject to doubt. Its definition in mathematics is unambiguous but not widely shared. It is thus difficult to communicate this notion through the media to a wide public. From its scientific basis to the impact assessment, the climate change issue is subject to a large number of sources of uncertainty. Here the definition of the term is close to its mathematical sense, but the diversity of disciplines involved in the analysis process implies a great diversity of approaches to the notion. Faced with this diversity of approaches, communicating uncertainties on climate change is thus a great challenge. It is further complicated by the diversity of the targets of communication on climate change, from stakeholders and policy makers to the wide public. We will present the process chosen by the IPCC to communicate uncertainties in its assessment reports, taking the example of the guidance note to lead authors of the fourth assessment report. Concerning the communication of uncertainties to a wide public, we will give some examples illustrating how to avoid the above-mentioned ambiguity in this kind of communication.

  20. Space radiation cancer risks and uncertainties for Mars missions.

    PubMed

    Cucinotta, F A; Schimmerling, W; Wilson, J W; Peterson, L E; Badhwar, G D; Saganti, P B; Dicello, J F

    2001-11-01

    Projecting cancer risks from exposure to space radiation is highly uncertain because of the absence of data for humans and because of the limited radiobiology data available for estimating late effects from the high-energy and charge (HZE) ions present in the galactic cosmic rays (GCR). Cancer risk projections involve many biological and physical factors, each of which has a differential range of uncertainty due to the lack of data and knowledge. We discuss an uncertainty assessment within the linear-additivity model using the approach of Monte Carlo sampling from subjective error distributions that represent the lack of knowledge in each factor to quantify the overall uncertainty in risk projections. Calculations are performed using the space radiation environment and transport codes for several Mars mission scenarios. This approach leads to estimates of the uncertainties in cancer risk projections of 400-600% for a Mars mission. The uncertainties in the quality factors are dominant. Using safety standards developed for low-Earth orbit, long-term space missions (>90 days) outside the Earth's magnetic field are currently unacceptable if the confidence levels in risk projections are considered. Because GCR exposures involve multiple particle or delta-ray tracks per cellular array, our results suggest that the shape of the dose response at low dose rates may be an additional uncertainty for estimating space radiation risks.
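
    The Monte Carlo fold-uncertainty idea can be sketched as follows; the factor distributions (shapes and widths) are invented stand-ins for the subjective error distributions used in the study:

      import numpy as np

      rng = np.random.default_rng(5)
      N = 200_000
      risk_point = 0.03                         # nominal point risk estimate

      # subjective uncertainty factors (all hypothetical):
      q_factor = rng.lognormal(0.0, 0.8, N)     # radiation quality factor (dominant)
      dose_rate = rng.lognormal(0.0, 0.3, N)    # dose-rate/protraction factor
      physics = rng.normal(1.0, 0.15, N)        # transport/environment factor

      # multiplicative combination of factors within the linear model
      risk = risk_point * q_factor * dose_rate * physics
      lo, med, hi = np.percentile(risk, [2.5, 50.0, 97.5])
      print(f"median risk {med:.3f}, 95% interval {lo:.3f}-{hi:.3f} "
            f"({hi / lo:.0f}-fold spread)")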

  1. Propagation of rating curve uncertainty in design flood estimation

    NASA Astrophysics Data System (ADS)

    Steinbakk, Gunnhildur H.; Thorarinsdottir, Thordis; Reitan, Trond; Schlichting, Lena; Hølleland, Sondre; Engeland, Kolbjørn

    2016-04-01

    Statistical flood frequency analysis is commonly performed based on a set of annual maximum discharge values which are derived from stage measurements via a stage-discharge rating curve model. However, design flood estimation techniques often ignore the uncertainty in the underlying rating curve model. Using data from seven gauging stations in Norway, we investigate both the marginal and the joint effects of curve and sample uncertainty on design flood estimation. In addition, we consider the importance of assessing the added value of large streamflow measurements at the high end of the rating curve and in the annual maximum data series. The sample uncertainty is generally the main contributor to uncertainty in design flood estimates. However, accounting for curve uncertainty may strongly influence the results if an extrapolation of the rating curve is necessary. An additional high direct streamflow measurement will reduce the extrapolation degree and the rating curve uncertainty, and most likely reduce estimation biases in the return levels. A high annual maximum flood observation might, if combined with a large extrapolation degree, introduce estimation biases for return levels since the estimation is based on combining two highly skewed distributions.
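
    A minimal sketch of propagating rating-curve uncertainty into annual maxima (the power-law curve, gaugings, and stage series are invented): the fitted log-log parameters and their covariance are sampled, and each sampled curve converts the stage maxima into discharge, with the spread widening for stages above the highest gauging:

      import numpy as np

      rng = np.random.default_rng(21)

      # hypothetical gaugings (stage h in m, discharge Q in m3/s), zero datum
      h_g = np.array([0.4, 0.7, 1.0, 1.4, 1.9, 2.5])
      Q_g = 12.0 * h_g ** 1.7 * rng.lognormal(0.0, 0.05, h_g.size)

      # fit log Q = log a + b log h by least squares, keep parameter covariance
      A = np.column_stack([np.ones_like(h_g), np.log(h_g)])
      theta, res, *_ = np.linalg.lstsq(A, np.log(Q_g), rcond=None)
      sigma2 = res[0] / (h_g.size - 2)
      C = sigma2 * np.linalg.inv(A.T @ A)

      # sample curves and convert annual-maximum stages (note the
      # extrapolation above the highest gauging at h = 2.5 m)
      h_max = np.array([1.8, 2.2, 2.9, 2.0, 3.1])
      samples = rng.multivariate_normal(theta, C, size=5000)
      Q_max = np.exp(samples[:, [0]] + samples[:, [1]] * np.log(h_max))
      print("5/50/95 percentiles for the largest stage (extrapolated):")
      print(np.percentile(Q_max[:, -1], [5, 50, 95]).round(1))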

  2. Space Radiation Cancer Risks and Uncertainties for Mars Missions

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Schimmerling, W.; Wilson, J. W.; Peterson, L. E.; Badhwar, G. D.; Saganti, P. B.; Dicello, J. F.

    2001-01-01

    Projecting cancer risks from exposure to space radiation is highly uncertain because of the absence of data for humans and because of the limited radiobiology data available for estimating late effects from the high-energy and charge (HZE) ions present in the galactic cosmic rays (GCR). Cancer risk projections involve many biological and physical factors, each of which has a differential range of uncertainty due to the lack of data and knowledge. We discuss an uncertainty assessment within the linear-additivity model using the approach of Monte Carlo sampling from subjective error distributions that represent the lack of knowledge in each factor to quantify the overall uncertainty in risk projections. Calculations are performed using the space radiation environment and transport codes for several Mars mission scenarios. This approach leads to estimates of the uncertainties in cancer risk projections of 400-600% for a Mars mission. The uncertainties in the quality factors are dominant. Using safety standards developed for low-Earth orbit, long-term space missions (>90 days) outside the Earth's magnetic field are currently unacceptable if the confidence levels in risk projections are considered. Because GCR exposures involve multiple particle or delta-ray tracks per cellular array, our results suggest that the shape of the dose response at low dose rates may be an additional uncertainty for estimating space radiation risks.

  3. Space radiation cancer risks and uncertainties for Mars missions.

    PubMed

    Cucinotta, F A; Schimmerling, W; Wilson, J W; Peterson, L E; Badhwar, G D; Saganti, P B; Dicello, J F

    2001-11-01

    Projecting cancer risks from exposure to space radiation is highly uncertain because of the absence of data for humans and because of the limited radiobiology data available for estimating late effects from the high-energy and charge (HZE) ions present in the galactic cosmic rays (GCR). Cancer risk projections involve many biological and physical factors, each of which has a differential range of uncertainty due to the lack of data and knowledge. We discuss an uncertainty assessment within the linear-additivity model using the approach of Monte Carlo sampling from subjective error distributions that represent the lack of knowledge in each factor to quantify the overall uncertainty in risk projections. Calculations are performed using the space radiation environment and transport codes for several Mars mission scenarios. This approach leads to estimates of the uncertainties in cancer risk projections of 400-600% for a Mars mission. The uncertainties in the quality factors are dominant. Using safety standards developed for low-Earth orbit, long-term space missions (>90 days) outside the Earth's magnetic field are currently unacceptable if the confidence levels in risk projections are considered. Because GCR exposures involve multiple particle or delta-ray tracks per cellular array, our results suggest that the shape of the dose response at low dose rates may be an additional uncertainty for estimating space radiation risks. PMID:11604093

  4. Capturing the complexity of uncertainty language to maximise its use.

    NASA Astrophysics Data System (ADS)

    Juanchich, Marie; Sirota, Miroslav

    2016-04-01

    Uncertainty is often communicated verbally, using uncertainty phrases such as 'there is a small risk of earthquake', 'flooding is possible' or 'it is very likely the sea level will rise'. Prior research has only examined a limited number of properties of uncertainty phrases: mainly the probability conveyed (e.g., 'a small chance' conveys a small probability whereas 'it is likely' conveys a high probability). We propose a new analytical framework that captures more of the complexity of uncertainty phrases by studying their semantic, pragmatic and syntactic properties. Further, we argue that the complexity of uncertainty phrases is functional and can be leveraged to best describe uncertain outcomes and achieve the goals of speakers. We will present findings from a corpus study and an experiment in which we assessed the following properties of uncertainty phrases: probability conveyed, subjectivity, valence, nature of the subject, grammatical category of the uncertainty quantifier, and whether the quantifier elicits a positive or a negative framing. Natural language processing techniques applied to corpus data showed that people use a very large variety of uncertainty phrases representing different configurations of these properties (e.g., phrases that convey different levels of subjectivity, phrases with different grammatical constructions). In addition, the corpus analysis uncovered that the uncertainty phrases commonly studied in psychology are not the most commonly used in real life. In the experiment we manipulated the amount of evidence indicating that a fact was true and whether the participant was required to prove that the fact was true or that it was false. Participants produced a phrase to communicate the likelihood that the fact was true (e.g., 'it is not sure…', 'I am convinced that…'). The analyses of the uncertainty phrases produced showed that participants leveraged the properties of uncertainty phrases to reflect the strength of evidence but

  5. Communicating spatial uncertainty to non-experts using R

    NASA Astrophysics Data System (ADS)

    Luzzi, Damiano; Sawicka, Kasia; Heuvelink, Gerard; de Bruin, Sytze

    2016-04-01

    Effective visualisation methods are important for the efficient use of uncertainty information by various groups of users. Uncertainty propagation analysis is often used with spatial environmental models to quantify the uncertainty within the information. A challenge arises when trying to communicate the uncertainty information effectively to non-experts (non-statisticians) across a wide range of cases. Given the growing popularity and applicability of the open source programming language R, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. The package implements Monte Carlo algorithms for uncertainty propagation, the output of which is represented by an ensemble of model outputs (i.e. a sample from a probability distribution). Numerous visualisation methods exist that aim to present such spatial uncertainty information statically, dynamically and interactively. To provide the most universal visualisation tools for non-experts, we surveyed a group of 20 university students and assessed the effectiveness of selected static and interactive methods for visualising uncertainty in spatial variables such as DEM and land cover. The static methods included adjacent maps and glyphs for continuous variables. Both allow for displaying maps with information about the ensemble mean, variance/standard deviation and prediction intervals. Adjacent maps were also used for categorical data, displaying maps of the most probable class as well as its associated probability. The interactive methods included a graphical user interface which, in addition to displaying the previously mentioned variables, also allowed for comparison of joint uncertainties at multiple locations. The survey indicated that users could understand the basics of the uncertainty information displayed in the static maps, with the interactive interface allowing for more in-depth information. Subsequently, the R

  6. Analysis of uncertainties in turbine metal temperature predictions

    NASA Technical Reports Server (NTRS)

    Stepka, F. S.

    1980-01-01

    An analysis was conducted to examine the extent to which various factors influence the accuracy of analytically predicting turbine blade metal temperatures and to determine the uncertainties in these predictions for several accuracies of the influence factors. The advanced turbofan engine gas conditions of 1700 K and 40 atmospheres were considered along with those of a highly instrumented high temperature turbine test rig and a low temperature turbine rig that simulated the engine conditions. The analysis showed that the uncertainty in analytically predicting local blade temperature was as much as 98 K, or 7.6 percent of the metal absolute temperature, with current knowledge of the influence factors. The expected reductions in uncertainties in the influence factors with additional knowledge and tests should reduce the uncertainty in predicting blade metal temperature to 28 K, or 2.1 percent of the metal absolute temperature.

  7. Uncertain LDA: Including Observation Uncertainties in Discriminative Transforms.

    PubMed

    Saeidi, Rahim; Astudillo, Ramon Fernandez; Kolossa, Dorothea

    2016-07-01

    Linear discriminant analysis (LDA) is a powerful technique in pattern recognition to reduce the dimensionality of data vectors. It maximizes discriminability by retaining only those directions that minimize the ratio of within-class and between-class variance. In this paper, using the same principles as for conventional LDA, we propose to employ uncertainties of the noisy or distorted input data in order to estimate maximally discriminant directions. We demonstrate the efficiency of the proposed uncertain LDA on two applications using state-of-the-art techniques. First, we experiment with an automatic speech recognition task, in which the uncertainty of observations is imposed by real-world additive noise. Next, we examine a full-scale speaker recognition system, considering the utterance duration as the source of uncertainty in authenticating a speaker. The experimental results show that when employing an appropriate uncertainty estimation algorithm, uncertain LDA outperforms its conventional LDA counterpart.
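
    One plausible reading of the idea, sketched below (not necessarily the paper's exact algorithm): each observation carries its own uncertainty covariance, and the expected within-class scatter includes the sum of those covariances before the usual generalized eigenproblem is solved:

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(2)

      # toy 2-class data in 3-D, with per-observation uncertainty covariances
      X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(1.5, 1, (50, 3))])
      y = np.repeat([0, 1], 50)
      obs_cov = np.array([np.diag(rng.uniform(0.1, 0.5, 3)) for _ in range(100)])

      mu = X.mean(axis=0)
      Sw = np.zeros((3, 3)); Sb = np.zeros((3, 3))
      for c in (0, 1):
          Xc, Cc = X[y == c], obs_cov[y == c]
          mc = Xc.mean(axis=0)
          # observation uncertainty enters the within-class scatter here
          Sw += (Xc - mc).T @ (Xc - mc) + Cc.sum(axis=0)
          Sb += len(Xc) * np.outer(mc - mu, mc - mu)

      # directions maximizing the between/within variance ratio
      vals, vecs = eigh(Sb, Sw)
      w = vecs[:, -1]                  # most discriminant direction
      print("discriminant direction:", np.round(w / np.linalg.norm(w), 3))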

  8. Quantification of climate change signals and uncertainty in hydrological change simulations for Denmark

    NASA Astrophysics Data System (ADS)

    Seaby, Lauren; Refsgaard, Jens Christian; Sonnenborg, Torben; Hesselbjerg Christensen, Jens

    2010-05-01

    This paper investigates some of the uncertainties related to the use of regional climate model (RCM) data in hydrological simulations at the local scale, and the significance of regional hydrological change predictions in light of climate model uncertainties. In Denmark, future changes in climate are expected to result in more extreme hydrological conditions. Higher precipitation is predicted in winter, resulting in flooding and water logging in low-lying areas, whereas reduced precipitation and higher evapotranspiration are predicted during summer, resulting in declining water tables, dry root zones and reduced low flows in streams. For a relatively small country like Denmark (approximately 43,000 km2), with a climate largely influenced by the ocean, dynamically downscaled RCM outputs are appealing for use in studies of climate change impacts on water resources at the national scale. However, RCMs are subject to systematic errors, and their outputs, especially precipitation, require further downscaling prior to use in hydrological simulations. Climate change and hindcast simulations for the period 1961 - 2100 are used from the recently completed EU project ENSEMBLES, which makes available a matrix of transient climate change scenarios for all of Europe at a 25 km grid scale. Multiple pairings of GCMs and RCMs in ENSEMBLES allow both the differences between multiple climate models and the uncertainty of the individual model predictions to be quantified in impact studies. The statistical bias correction method developed and validated by the EU Water and Global Change (WATCH) program is applied to 15 climate change simulations for Denmark, comprising pairings between three GCMs and nine RCMs from the ENSEMBLES project. The WATCH method for correcting climate model output is based on intensity distributions of daily observations and does not distinguish between seasons. Observed station data from 1961 - 2009 in addition to 10 km gridded data from 1989 - 2009 is
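
    Distribution-based bias correction can be illustrated with generic quantile mapping (a simplified stand-in, not the WATCH implementation; all distributions are invented): hindcast quantiles are mapped onto observed quantiles, and the same mapping is applied to scenario output:

      import numpy as np

      rng = np.random.default_rng(9)
      obs = rng.gamma(0.9, 6.0, 5000)        # observed daily precipitation, mm
      hind = rng.gamma(0.7, 9.0, 5000)       # biased RCM hindcast, same period
      scen = rng.gamma(0.7, 10.0, 5000)      # RCM future scenario output

      q = np.linspace(0.01, 0.99, 99)
      hind_q = np.quantile(hind, q)          # hindcast intensity distribution
      obs_q = np.quantile(obs, q)            # observed intensity distribution

      # correct each scenario day by mapping its hindcast quantile to obs space
      corrected = np.interp(scen, hind_q, obs_q)
      print(f"hindcast mean {hind.mean():.1f} -> obs mean {obs.mean():.1f} mm; "
            f"scenario mean {scen.mean():.1f} -> corrected {corrected.mean():.1f} mm")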

  9. Sub-Heisenberg phase uncertainties

    NASA Astrophysics Data System (ADS)

    Pezzé, Luca

    2013-12-01

    Phase shift estimation with uncertainty below the Heisenberg limit, Δϕ_HL ∝ 1/N̄_T, where N̄_T is the total average number of particles employed, is a mirage of linear quantum interferometry. Recently, Rivas and Luis [New J. Phys. 14, 093052 (2012)] proposed a scheme to achieve a phase uncertainty Δϕ ∝ 1/N̄_T^k, with k an arbitrary exponent. This sparked an intense debate in the literature which, ultimately, does not exclude the possibility of overcoming Δϕ_HL at specific phase values. Our numerical analysis of the Rivas and Luis proposal shows that sub-Heisenberg uncertainties are obtained only when the estimator is strongly biased. No violation of the Heisenberg limit is found after bias correction or when using a bias-free Bayesian analysis.

  10. Fertility behaviour under income uncertainty.

    PubMed

    Ranjan, P

    1999-03-01

    A two-period stochastic model of fertility behavior was developed in order to explain the staggering decrease in birth rates in the former Soviet Republics and Eastern European countries. A link between income uncertainty and fertility behavior is proposed: increased uncertainty about future income can lead people to postpone their childbearing decision. This is attributable to the irreversibility of the childbearing decision and the ease with which it may be postponed. The result is a threshold effect, whereby individuals above the threshold level of income tend to have a stronger desire to have a child immediately, while those below the threshold tend to wait until the income uncertainty has passed. This behavioral pattern could account for the recent decline in birth rates that has accompanied decreasing per capita income levels in most of the former Soviet Republics and Eastern European countries.

  11. Uncertainty formulations for multislit interferometry

    NASA Astrophysics Data System (ADS)

    Biniok, Johannes C. G.

    2014-12-01

    In the context of (far-field) multislit interferometry we investigate the utility of two formulations of uncertainty in accounting for the complementarity of spatial localization and fringe width. We begin with a characterization of the relevant observables and general considerations regarding the suitability of different types of measures. The detailed analysis shows that both of the discussed uncertainty formulations yield qualitatively similar results, confirming that they correctly capture the relevant tradeoff. One approach, based on an idea of Aharonov and co-workers, is intuitively appealing and relies on a modification of the Heisenberg uncertainty relation. The other approach, developed by Uffink and Hilgevoord for single- and double-slit experiments, is readily applied to multislits. However, it is found that one of the underlying concepts requires generalization and that the choice of parameters requires more care than previously recognized.

  12. Significant predictors of patients' uncertainty in primary brain tumors.

    PubMed

    Lin, Lin; Chien, Lung-Chang; Acquaye, Alvina A; Vera-Bolanos, Elizabeth; Gilbert, Mark R; Armstrong, Terri S

    2015-05-01

    Patients with primary brain tumors (PBT) face uncertainty related to prognosis, symptoms, and treatment response and toxicity. Uncertainty is correlated with negative mood states and with symptom severity and interference. This study identified predictors of uncertainty during different treatment stages (newly diagnosed, on treatment, followed up without active treatment). One hundred eighty-six patients with PBT were accrued at various points in the illness trajectory. Data collection tools included a clinical checklist, a demographic data sheet, and the Mishel Uncertainty in Illness Scale-Brain Tumor Form. The structured additive regression model was used to identify significant demographic and clinical predictors of illness-related uncertainty. Participants were primarily white (80 %) males (53 %). They ranged in age from 19-80 (mean = 44.2 ± 12.6). Thirty-two of the 186 patients were newly diagnosed, 64 were on treatment at the time of the clinical visit with MRI evaluation, 21 were on treatment without MRI, and 69 were not on active treatment. Three subscales (ambiguity/inconsistency; unpredictability-disease prognoses; unpredictability-symptoms and other triggers) differed amongst the treatment groups (P < .01). However, patients' uncertainty during active treatment was as high as in the newly diagnosed period. Other than treatment stage, change of employment status due to the illness was the most significant predictor of illness-related uncertainty. The illness trajectory of PBT remains ambiguous, complex, and unpredictable, leading to a high incidence of uncertainty. There was variation in the subscales of uncertainty depending on treatment status. Although patients who were newly diagnosed reported the highest scores on most of the subscales, patients on treatment felt more uncertain about the unpredictability of symptoms than other groups. Due to the complexity and impact of the disease, associated symptoms, and interference with functional status, comprehensive assessment of patients

  13. Systematic review automation technologies.

    PubMed

    Tsafnat, Guy; Glasziou, Paul; Choong, Miew Keen; Dunn, Adam; Galgani, Filippo; Coiera, Enrico

    2014-07-09

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, the availability of the requisite expertise, and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that point to the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of its tasks. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated, while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed to realize automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time.

  15. Uncertainties in global ocean surface flux climatologies derived from ship observations

    SciTech Connect

    Gleckler, P.J.; Weare, B.C.

    1997-11-01

    A methodology to define uncertainties associated with ocean surface heat flux calculations has been developed and applied to a global climatology that utilizes a summary of the Comprehensive Ocean-Atmosphere Data Set surface observations. Systematic and random uncertainties in the net oceanic heat flux and each of its four components at individual grid points and for zonal averages have been estimated for each calendar month and for the annual mean. The most important uncertainties of the 2° × 2° grid cell values of each of the heat fluxes are described. 61 refs., 15 figs., 2 tabs.

  16. Statistics, Uncertainty, and Transmitted Variation

    SciTech Connect

    Wendelberger, Joanne Roth

    2014-11-05

    The field of Statistics provides methods for modeling and understanding data and making decisions in the presence of uncertainty. When examining response functions, variation present in the input variables will be transmitted via the response function to the output variables. This phenomenon can potentially have significant impacts on the uncertainty associated with results from subsequent analysis. This presentation will examine the concept of transmitted variation, its impact on designed experiments, and a method for identifying and estimating sources of transmitted variation in certain settings.
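
    As a toy illustration of variation transmitted through a response function (the quadratic response and the input distribution below are invented for the example), the first-order "delta method" approximation sd(Y) ≈ |f'(μ)| · sd(X) can be checked against Monte Carlo:

      import numpy as np

      rng = np.random.default_rng(1)

      def response(x):
          return 3.0 * x + 0.5 * x**2   # assumed response function

      mu, sd = 2.0, 0.3                 # input mean and standard deviation
      x = rng.normal(mu, sd, 1_000_000) # variation present in the input
      y = response(x)

      # First-order (delta-method) approximation: sd_y ~ |f'(mu)| * sd_x
      dfdx = 3.0 + 1.0 * mu
      print("Monte Carlo sd of output  :", y.std())
      print("Transmitted sd (1st order):", abs(dfdx) * sd)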

  17. Awe, uncertainty, and agency detection.

    PubMed

    Valdesolo, Piercarlo; Graham, Jesse

    2014-01-01

    Across five studies, we found that awe increases both supernatural belief (Studies 1, 2, and 5) and intentional-pattern perception (Studies 3 and 4), two phenomena that have been linked to agency detection, or the tendency to interpret events as the consequence of intentional and purpose-driven agents. Effects were both directly and conceptually replicated, and mediational analyses revealed that these effects were driven by the influence of awe on tolerance for uncertainty. Experiences of awe decreased tolerance for uncertainty, which, in turn, increased the tendency to believe in nonhuman agents and to perceive human agency in random events. PMID:24247728

  18. Linear Programming Problems for Generalized Uncertainty

    ERIC Educational Resources Information Center

    Thipwiwatpotjana, Phantipa

    2010-01-01

    Uncertainty occurs when there is more than one realization that can represent a piece of information. This dissertation concerns only discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not given as a probability for each realization. A well-known model that can handle…

  19. Confronting uncertainty in flood damage predictions

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno

    2015-04-01

    Reliable flood damage models are a prerequisite for the practical usefulness of model results. Traditional univariate damage models, such as depth-damage curves, often fail to reproduce the variability of observed flood damage. Innovative multivariate probabilistic modelling approaches are promising for capturing and quantifying the uncertainty involved and thus for improving the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks. For model evaluation we use empirical damage data available from computer-aided telephone interviews compiled after the floods of 2002, 2005, and 2006 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one sub-set is used to derive the models, and the remaining records are used to evaluate predictive performance. Further, we stratify the sample by catchment, which allows us to study model performance in a spatial-transfer context. Flood damage estimation is carried out at the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), and reliability, represented by the proportion of observations that fall within the 5- to 95-quantile predictive interval. The reliability of the probabilistic predictions decreases only slightly in validation runs and achieves very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty, which is crucial for assessing the reliability of model predictions and improves the usefulness of model results.
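
    A sketch of the bagging-based part of such an analysis, using scikit-learn on synthetic damage records (the data, the depth-damage relation, and the interval choice are invented; this is not the study's model): the spread of per-tree predictions yields a predictive interval, whose empirical coverage is the reliability measure described above.

      import numpy as np
      from sklearn.ensemble import BaggingRegressor
      from sklearn.tree import DecisionTreeRegressor

      rng = np.random.default_rng(2)

      # Synthetic stand-in for building-level damage records:
      # x = water depth, y = relative damage in [0, 1]
      x = rng.uniform(0.0, 3.0, 600).reshape(-1, 1)
      y = np.clip(0.3 * x.ravel() + rng.normal(0, 0.15, 600), 0, 1)

      train, test = slice(0, 400), slice(400, 600)   # split-sample test
      model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=200,
                               random_state=0).fit(x[train], y[train])

      # Per-tree predictions give an empirical predictive distribution
      preds = np.stack([t.predict(x[test]) for t in model.estimators_])
      lo, hi = np.percentile(preds, [5, 95], axis=0)

      bias = (preds.mean(axis=0) - y[test]).mean()          # systematic deviation
      mae = np.abs(preds.mean(axis=0) - y[test]).mean()     # precision
      coverage = ((y[test] >= lo) & (y[test] <= hi)).mean() # reliability
      print(f"bias={bias:.3f}  MAE={mae:.3f}  coverage={coverage:.2f}")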

  20. Carbon cycle uncertainty in the Alaskan Arctic

    NASA Astrophysics Data System (ADS)

    Fisher, J. B.; Sikka, M.; Oechel, W. C.; Huntzinger, D. N.; Melton, J. R.; Koven, C. D.; Ahlström, A.; Arain, A. M.; Baker, I.; Chen, J. M.; Ciais, P.; Davidson, C.; Dietze, M.; El-Masri, B.; Hayes, D.; Huntingford, C.; Jain, A.; Levy, P. E.; Lomas, M. R.; Poulter, B.; Price, D.; Sahoo, A. K.; Schaefer, K.; Tian, H.; Tomelleri, E.; Verbeeck, H.; Viovy, N.; Wania, R.; Zeng, N.; Miller, C. E.

    2014-02-01

    Climate change is leading to a disproportionately large warming in the high northern latitudes, but the magnitude and sign of the future carbon balance of the Arctic are highly uncertain. Using 40 terrestrial biosphere models for Alaska, we provide a baseline of terrestrial carbon cycle structural and parametric uncertainty, defined as the multi-model standard deviation (σ) relative to the mean (x̄) for each quantity. Mean annual uncertainty (σ/x̄) was largest for net ecosystem exchange (NEE) (-0.01 ± 0.19 kg C m⁻² yr⁻¹), then net primary production (NPP) (0.14 ± 0.33 kg C m⁻² yr⁻¹), autotrophic respiration (Ra) (0.09 ± 0.20 kg C m⁻² yr⁻¹), gross primary production (GPP) (0.22 ± 0.50 kg C m⁻² yr⁻¹), ecosystem respiration (Re) (0.23 ± 0.38 kg C m⁻² yr⁻¹), CH4 flux (2.52 ± 4.02 g CH4 m⁻² yr⁻¹), heterotrophic respiration (Rh) (0.14 ± 0.20 kg C m⁻² yr⁻¹), and soil carbon (14.0 ± 9.2 kg C m⁻²). The spatial patterns in regional carbon stocks and fluxes varied widely, with some models showing NEE for Alaska as a strong carbon sink, others as a strong carbon source, and still others as carbon neutral. Additionally, a feedback (i.e., sensitivity) analysis of 20th century NEE to CO2 fertilization (β) and climate (γ) showed that the uncertainty in γ was twice as large as that in β, with neither indicating that the Alaskan Arctic is shifting towards a certain net carbon sink or source. Finally, AmeriFlux data from two sites in the Alaskan Arctic are used to evaluate the regional patterns; observed seasonal NEE was captured within multi-model uncertainty. This assessment of carbon cycle uncertainties may be used as a baseline for the improvement of experimental and modeling activities, as well as a reference for future trajectories in carbon cycling with climate change in the Alaskan Arctic.
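
    The multi-model uncertainty metric used here is simple to reproduce; a sketch with invented ensemble values:

      import numpy as np

      # Hypothetical multi-model ensemble of mean annual GPP for a region
      # (kg C m-2 yr-1); values are invented for illustration.
      gpp = np.array([0.31, 0.55, 0.72, 0.44, 0.98, 0.27, 0.61, 0.50])

      mean = gpp.mean()
      sigma = gpp.std(ddof=1)   # multi-model standard deviation
      print(f"GPP = {mean:.2f} +/- {sigma:.2f} kg C m-2 yr-1 "
            f"(relative uncertainty sigma/mean = {sigma/mean:.2f})")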

  1. TRITIUM UNCERTAINTY ANALYSIS FOR SURFACE WATER SAMPLES AT THE SAVANNAH RIVER SITE

    SciTech Connect

    Atkinson, R.

    2012-07-31

    Radiochemical analyses of surface water samples, in the framework of Environmental Monitoring, have associated uncertainties for the radioisotopic results reported. These uncertainty analyses pertain to the tritium results from surface water samples collected at five locations on the Savannah River near the U.S. Department of Energy's Savannah River Site (SRS). Uncertainties can result from the field-sampling routine, can be incurred during transport due to the physical properties of the sample, from equipment limitations, and from the measurement instrumentation used. The uncertainty reported by the SRS in their Annual Site Environmental Report currently considers only the counting uncertainty in the measurements, which is the standard reporting protocol for radioanalytical chemistry results. The focus of this work is to provide an overview of all uncertainty components associated with SRS tritium measurements, estimate the total uncertainty according to ISO 17025, and to propose additional experiments to verify some of the estimated uncertainties. The main uncertainty components discovered and investigated in this paper are tritium absorption or desorption in the sample container, HTO/H₂O isotopic effect during distillation, pipette volume, and tritium standard uncertainty. The goal is to quantify these uncertainties and to establish a combined uncertainty in order to increase the scientific depth of the SRS Annual Site Environmental Report.
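
    A minimal sketch of the combined-uncertainty calculation in the GUM/ISO 17025 style the abstract refers to, assuming independent components with invented magnitudes:

      import math

      # Relative standard uncertainties (illustrative values, not SRS data)
      components = {
          "counting statistics": 0.030,
          "container sorption": 0.010,
          "distillation isotopic effect": 0.005,
          "pipette volume": 0.004,
          "tritium standard": 0.015,
      }

      # Combined standard uncertainty: root sum of squares of independent terms
      u_c = math.sqrt(sum(u**2 for u in components.values()))
      print(f"combined relative uncertainty u_c = {u_c:.4f}")
      print(f"expanded uncertainty (k=2)        = {2*u_c:.4f}")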

  2. The uncertainty of the atmospheric integrated water vapour estimated from GNSS observations

    NASA Astrophysics Data System (ADS)

    Ning, T.; Wang, J.; Elgered, G.; Dick, G.; Wickert, J.; Bradke, M.; Sommer, M.

    2015-08-01

    Within the Global Climate Observing System (GCOS) Reference Upper Air Network (GRUAN) there is a need for an assessment of the uncertainty in the Integrated Water Vapour (IWV) in the atmosphere estimated from ground-based GNSS observations. It is therefore essential to investigate all relevant error sources in GNSS-derived IWV. We present two approaches, a statistical and a theoretical analysis, for the assessment of the IWV uncertainty. The assessment will be implemented in the GNSS IWV data stream for GRUAN in order to obtain a specific uncertainty for each data point. In addition, specific recommendations are made to GRUAN on hardware, software, and data processing practices to minimize the IWV uncertainty. By combining the uncertainties associated with the input variables in the estimation of the IWV, we calculated the IWV uncertainties for several GRUAN sites with different weather conditions. The results show a similar relative importance of the uncertainty contributions across sites, with the uncertainties in the Zenith Total Delay (ZTD) dominating the error budget and contributing over 75 % of the total IWV uncertainty. The impact of the uncertainty associated with the conversion factor between the IWV and the Zenith Wet Delay (ZWD) is proportional to the amount of water vapour and increases slightly for moist weather conditions. The GRUAN GNSS IWV uncertainty data will provide a quantified confidence to be used for the validation of other measurement techniques, taking the uncertainty into account from diurnal to decadal time scales.
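
    A sketch of the kind of per-data-point propagation involved, assuming the standard relation IWV = ZWD/Q with ZWD = ZTD - ZHD and first-order error propagation (the numerical values, including the conversion factor Q, are illustrative assumptions):

      import math

      # Illustrative values (not GRUAN site data)
      ztd, u_ztd = 2.450, 0.004    # zenith total delay and uncertainty [m]
      zhd, u_zhd = 2.280, 0.001    # zenith hydrostatic delay [m]
      Q,   u_Q   = 6.5,   0.10     # dimensionless ZWD-to-IWV conversion factor

      zwd = ztd - zhd              # zenith wet delay [m]
      iwv = zwd / Q                # integrated water vapour [m of water]

      # First-order propagation, assuming independent inputs
      u_iwv = math.sqrt((u_ztd**2 + u_zhd**2) / Q**2
                        + (zwd / Q**2)**2 * u_Q**2)
      print(f"IWV = {iwv*1000:.1f} +/- {u_iwv*1000:.2f} kg m-2")

    With these numbers the ZTD term supplies most of the variance, consistent with the dominance reported above.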

  3. Hybrid processing of stochastic and subjective uncertainty data

    SciTech Connect

    Cooper, J.A.; Ferson, S.; Ginzburg, L.

    1995-11-01

    Uncertainty analyses typically recognize separate stochastic and subjective sources of uncertainty, but do not systematically combine the two, although a large amount of the data used in analyses is partly stochastic and partly subjective. We have developed methodology for mathematically combining stochastic and subjective data uncertainty, based on new "hybrid number" approaches. The methodology can be utilized in conjunction with various traditional techniques, such as PRA (probabilistic risk assessment) and risk analysis decision support. Hybrid numbers have been previously examined as a potential method to represent combinations of stochastic and subjective information, but mathematical processing has been impeded by the requirements inherent in the structure of the numbers; e.g., there was previously no known way to multiply hybrids. In this paper, we demonstrate methods for calculating with hybrid numbers that avoid these difficulties. By formulating a hybrid number as a probability distribution that is only fuzzily known, or alternatively as a random distribution of fuzzy numbers, methods are demonstrated for the full suite of arithmetic operations, permitting complex mathematical calculations. It is shown how information about relative subjectivity (the ratio of subjective to stochastic knowledge about a particular datum) can be incorporated. Techniques are also developed for conveying uncertainty information visually, so that the stochastic and subjective constituents of the uncertainty, as well as the ratio of knowledge about the two, are readily apparent. The techniques demonstrated have the capability to process uncertainty information for independent, uncorrelated data, and for some types of dependent and correlated data. Example applications are suggested, illustrative problems are worked, and graphical results are given.
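
    One way to make a hybrid number concrete (an illustrative representation, not necessarily the authors' construction) is the "random distribution of fuzzy numbers" view: Monte Carlo over the stochastic part and interval (α-cut) arithmetic over the fuzzy part, which is already enough to multiply two hybrids:

      import numpy as np

      rng = np.random.default_rng(3)

      def alpha_cut(tri, alpha):
          """Interval at membership level alpha of a triangular fuzzy
          number tri = (low, mode, high)."""
          a, b, c = tri
          return a + (b - a) * alpha, c - (c - b) * alpha

      def multiply(cut1, cut2):
          """Product of two intervals (handles sign changes)."""
          p = [x * y for x in cut1 for y in cut2]
          return min(p), max(p)

      def sample_hybrid(mu, sd, n):
          """Hybrid number: stochastic mode with a +/-10% fuzzy spread
          (an assumed shape, purely for illustration)."""
          return [(0.9 * m, m, 1.1 * m) for m in rng.normal(mu, sd, n)]

      h1 = sample_hybrid(4.0, 0.5, 10_000)
      h2 = sample_hybrid(2.0, 0.2, 10_000)
      for alpha in (0.0, 1.0):
          cuts = [multiply(alpha_cut(a, alpha), alpha_cut(b, alpha))
                  for a, b in zip(h1, h2)]
          lo = np.mean([c[0] for c in cuts])
          hi = np.mean([c[1] for c in cuts])
          print(f"alpha={alpha}: mean product interval [{lo:.2f}, {hi:.2f}]")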

  4. PIV uncertainty quantification by image matching

    NASA Astrophysics Data System (ADS)

    Sciacchitano, Andrea; Wieneke, Bernhard; Scarano, Fulvio

    2013-04-01

    A novel method is presented to quantify the uncertainty of PIV data. The approach is a posteriori, i.e. the unknown actual error of the measured velocity field is estimated using the velocity field itself as input, along with the original images. The principle of the method relies on the concept of super-resolution: the image pair is matched according to the cross-correlation analysis, and the residual distance between matched particle image pairs (the particle disparity vector), due to incomplete match between the two exposures, is measured. The ensemble of disparity vectors within the interrogation window is analyzed statistically. The dispersion of the disparity vectors returns the estimate of the random error, whereas the mean value of the disparity indicates the occurrence of a systematic error. The validity of the working principle is first demonstrated via Monte Carlo simulations. Two different interrogation algorithms are considered, namely cross-correlation with discrete window offset and multi-pass with window deformation. In the simulated recordings, the effects of particle image displacement, its gradient, out-of-plane motion, seeding density and particle image diameter are considered. In all cases good agreement is retrieved, indicating that the error estimator is able to follow the trend of the actual error with satisfactory precision. Experiments where time-resolved PIV data are available are used to prove the concept under realistic measurement conditions. In this case the 'exact' velocity field is unknown; however, a high-accuracy estimate is obtained with an advanced interrogation algorithm that exploits the redundant information of highly temporally oversampled data (pyramid correlation, Sciacchitano et al (2012 Exp. Fluids 53 1087-105)). The image-matching estimator returns the instantaneous distribution of the estimated velocity measurement error, and its spatial distribution compares very well with that of the actual error, with maxima in the same regions.
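
    The disparity statistics reduce to a short calculation; a sketch with invented disparity values for one interrogation window (the split between systematic and random contributions follows the description above, not necessarily the paper's exact estimator):

      import numpy as np

      # Disparity components (pixels) of matched particle image pairs
      # within one interrogation window (invented values)
      disparity = np.array([0.12, -0.05, 0.20, 0.08, 0.15, -0.02, 0.10])

      bias = disparity.mean()                 # systematic error estimate
      # The window's velocity is an average over N particles, so the
      # random contribution scales as dispersion / sqrt(N)
      random_err = disparity.std(ddof=1) / np.sqrt(len(disparity))
      print(f"systematic ~ {bias:.3f} px, random ~ {random_err:.3f} px")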

  5. Erythropoietin, uncertainty principle and cancer related anaemia

    PubMed Central

    Clark, Otavio; Adams, Jared R; Bennett, Charles L; Djulbegovic, Benjamin

    2002-01-01

    Background This study was designed to evaluate whether erythropoietin (EPO) is effective in the treatment of cancer-related anemia, and whether its effect remains unchanged when data are analyzed according to various clinical and methodological characteristics of the studies. We also wanted to demonstrate that cumulative meta-analysis (CMA) can be used to resolve uncertainty regarding clinical questions. Methods Systematic review (SR) of the published literature on the role of EPO in cancer-related anemia. A cumulative meta-analysis (CMA) using a conservative approach was performed to determine the point in time when uncertainty about the effect of EPO on transfusion-related outcomes could be considered resolved. Participants: patients included in randomized studies that compared EPO versus no therapy or placebo. Main outcome measures: number of patients requiring transfusions. Results Nineteen trials were included. The pooled results indicated a significant effect of EPO in reducing the number of patients requiring transfusions [odds ratio (OR) = 0.41; 95% CI: 0.33 to 0.5; p < 0.00001; relative risk (RR) = 0.61; 95% CI: 0.54 to 0.68]. The results remained unchanged after sensitivity analyses were performed according to the various clinical and methodological characteristics of the studies. Heterogeneity was less pronounced when OR was used instead of RR as the measure of the summary point estimate: the analysis according to OR was not heterogeneous, but the pooled RR was highly heterogeneous. A stepwise metaregression analysis did point to the possibility that the treatment effect could have been exaggerated by inadequacy in allocation concealment and that larger treatment effects are seen at Hb levels > 11.5 g/dl. We identified 1995 as the point in time when a statistically significant effect of EPO was demonstrated and after which we considered that uncertainty about EPO efficacy was resolved. Conclusion EPO is effective in the treatment of anemia in cancer patients, an effect that could have been considered established by 1995.
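
    A sketch of a cumulative meta-analysis on the OR scale, pooling trials by inverse-variance weighting of log odds ratios as each trial is added in chronological order (the trial counts are invented, and the method shown is the generic Woolf/inverse-variance approach, not necessarily the authors' exact procedure):

      import math

      # (year, events_tx, n_tx, events_ctl, n_ctl) -- invented counts;
      # "events" = patients requiring transfusion
      trials = [(1991, 10, 50, 20, 50), (1993, 8, 40, 15, 42),
                (1994, 12, 60, 22, 58), (1995, 9, 55, 21, 54),
                (1997, 14, 80, 30, 82)]

      sum_w = sum_wy = 0.0
      for year, a, n1, c, n2 in sorted(trials):
          b, d = n1 - a, n2 - c
          log_or = math.log((a * d) / (b * c))
          w = 1.0 / (1/a + 1/b + 1/c + 1/d)    # inverse of Var(log OR)
          sum_w += w
          sum_wy += w * log_or
          ci = 1.96 / math.sqrt(sum_w)
          print(f"{year}: cumulative OR = {math.exp(sum_wy/sum_w):.2f} "
                f"(95% CI {math.exp(sum_wy/sum_w - ci):.2f}-"
                f"{math.exp(sum_wy/sum_w + ci):.2f})")

    Uncertainty is declared resolved at the first year after which the pooled confidence interval excludes 1 and remains stable.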

  6. Entropic uncertainty from effective anticommutators

    NASA Astrophysics Data System (ADS)

    Kaniewski, Jedrzej; Tomamichel, Marco; Wehner, Stephanie

    2014-07-01

    We investigate entropic uncertainty relations for two or more binary measurements, for example, spin-1/2 or polarization measurements. We argue that the effective anticommutators of these measurements, i.e., the anticommutators evaluated on the state prior to measuring, are an expedient measure of measurement incompatibility. Based on the knowledge of pairwise effective anticommutators we derive a class of entropic uncertainty relations in terms of conditional Rényi entropies. Our uncertainty relations are formulated in terms of effective measures of incompatibility, which can be certified in a device-independent fashion. Consequently, we discuss potential applications of our findings to device-independent quantum cryptography. Moreover, to investigate the tightness of our analysis we consider the simplest (and very well studied) scenario of two measurements on a qubit. We find that our results outperform the celebrated bound due to Maassen and Uffink [Phys. Rev. Lett. 60, 1103 (1988), 10.1103/PhysRevLett.60.1103] and provide an analytical expression for the minimum uncertainty which also outperforms some recent bounds based on majorization.
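
    For reference, the Maassen-Uffink bound that the paper compares against can be stated as follows (a standard result, where H denotes the Shannon entropy of the outcome distributions of measurements X and Z on the same state):

      H(X) + H(Z) \ge -\log_2 c, \qquad c = \max_{x,z} \lvert\langle x \mid z \rangle\rvert^2 .

    For two mutually unbiased qubit measurements such as σ_x and σ_z, c = 1/2 and the bound gives H(X) + H(Z) ≥ 1 bit.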

  7. Quantification of entanglement via uncertainties

    SciTech Connect

    Klyachko, Alexander A.; Oeztop, Baris; Shumovsky, Alexander S.

    2007-03-15

    We show that entanglement of pure multiparty states can be quantified by means of quantum uncertainties of certain basic observables through the use of a measure that was initially proposed by Klyachko et al. [Appl. Phys. Lett. 88, 124102 (2006)] for bipartite systems.

  8. Impact of orifice metering uncertainties

    SciTech Connect

    Stuart, J.W. )

    1990-12-01

    A recent utility study attributed 38% of the company's unaccounted-for (UAF) gas to orifice metering bias caused by straightening vanes. How this was determined, and how it applies to the company's orifice meters, is described. Almost all (97%) of the company's UAF gas was found to be attributable to identifiable accounting procedures, measurement problems, theft, and leakage.

  9. A Peep into the Uncertainty-Complexity-Relevance Modeling Trilemma through Global Sensitivity and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.

    2014-12-01

    Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided, or the value and science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of a model in the face of increasing complexity would be valuable, because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of model development, so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping points).

  10. Uncertainty quantification in virtual surgery predictions for single ventricle palliation

    NASA Astrophysics Data System (ADS)

    Schiavazzi, Daniele; Marsden, Alison

    2014-11-01

    Hemodynamic results from numerical simulations of physiology in patients are invariably presented as deterministic quantities without assessment of associated confidence. Recent advances in cardiovascular simulation and Uncertainty Analysis can be leveraged to challenge this paradigm and to quantify the variability of output quantities of interest, of paramount importance to complement clinical decision making. Physiological variability and errors are responsible for the uncertainty typically associated with measurements in the clinic; starting from a characterization of these quantities in probability, we present applications in the context of estimating the distributions of lumped parameters in 0D models of single-ventricle circulation. We also present results in virtual Fontan palliation surgery, where the variability of both local and systemic hemodynamic indicators is inferred from the uncertainty in pre-operative clinical measurements. Efficient numerical algorithms are required to mitigate the computational cost of propagating the uncertainty through multiscale coupled 0D-3D models of pulsatile flow at the cavopulmonary connection. This work constitutes a first step towards systematic application of robust numerical simulations to virtual surgery predictions.

  11. Differentiating intolerance of uncertainty from three related but distinct constructs.

    PubMed

    Rosen, Natalie O; Ivanova, Elena; Knäuper, Bärbel

    2014-01-01

    Individual differences in uncertainty have been associated with heightened anxiety, stress and approach-oriented coping. Intolerance of uncertainty (IU) is a trait characteristic that arises from negative beliefs about uncertainty and its consequences. Researchers have established the central role of IU in the development of problematic worry and maladaptive coping, highlighting the importance of this construct to anxiety disorders. However, there is a need to improve our understanding of the phenomenology of IU. The goal of this paper was to present hypotheses regarding the similarities and differences between IU and three related constructs--intolerance of ambiguity, uncertainty orientation, and need for cognitive closure--and to call for future empirical studies to substantiate these hypotheses. To assist with achieving this goal, we conducted a systematic review of the literature, which also served to identify current gaps in knowledge. This paper differentiates these constructs by outlining each definition and general approaches to assessment, reviewing the existing empirical relations, and proposing theoretical similarities and distinctions. Findings may assist researchers in selecting the appropriate construct to address their research questions. Future research directions for the application of these constructs, particularly within the field of clinical and health psychology, are discussed.

  12. Analysis and quantification of uncertainty for climate change decision aids for energy consumption in the southwestern US

    NASA Astrophysics Data System (ADS)

    Apling, D.; Higgins, G.; Kiley, H.; Darmenova, K.

    2010-12-01

    Outputs from current-generation General Circulation Models (GCMs) are being downscaled to produce primary adaptation products used to investigate the utility of such products for aiding end-user decisions on enduring energy infrastructure, as well as short- and long-term production and conservation policy alternatives. Effective decision support products carry representations of both objective and subjective uncertainty through to the end user, allowing them to appropriately weigh climate change factors against the myriad other relevant elements of their decisions. To that end, our methodology compares GCMs and their corresponding downscales with relevant objective historical measurements, systematically evaluates these products with respect to common baseline datasets, and then applies statistical bias correction and uncertainty analysis to establish objective confidence intervals. These primary products are then used to drive empirical energy consumption models fitted to end-user-supplied energy consumption records, and undergo additional data quality control and model-parameter uncertainty analysis. Our results show a high degree of fit for natural gas consumption involved in facility heating, a lesser degree for electrical power consumption for both heating and cooling, and illustrate a climate-change signal in Weather Research and Forecasting (WRF) runs constrained by the European Centre/Hamburg Model (ECHAM5) GCM for a current period of 1999-2009 and a future period of 2029-2039, along with confidence bounds on monthly and cumulative annual changes based on these statistical analyses, expressed as changes in millions of cubic feet of natural gas and megawatt-hours of electrical energy.
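
    A sketch of one common statistical bias-correction step, empirical quantile mapping (the abstract does not specify the exact method, so this is an assumed stand-in, with invented monthly temperature data):

      import numpy as np

      rng = np.random.default_rng(4)

      # Invented monthly temperatures: model runs are biased warm and
      # under-dispersed relative to observations for the same period.
      obs = rng.normal(12.0, 5.0, 360)          # observed baseline
      mod_hist = rng.normal(14.0, 3.5, 360)     # model, current period
      mod_fut = rng.normal(16.0, 3.5, 360)      # model, future period

      def quantile_map(x, model_ref, obs_ref, n_q=99):
          """Map each value's quantile in the model reference onto the
          observed distribution (empirical quantile mapping)."""
          q = np.linspace(1, 99, n_q)
          mq, oq = np.percentile(model_ref, q), np.percentile(obs_ref, q)
          return np.interp(x, mq, oq)

      fut_corrected = quantile_map(mod_fut, mod_hist, obs)
      print("raw future mean      :", mod_fut.mean().round(2))
      print("corrected future mean:", fut_corrected.mean().round(2))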

  13. Uncertainty in measurement of protein circular dichroism spectra

    NASA Astrophysics Data System (ADS)

    Cox, Maurice G.; Ravi, Jascindra; Rakowska, Paulina D.; Knight, Alex E.

    2014-02-01

    Circular dichroism (CD) spectroscopy of proteins is widely used to measure protein secondary structure, and to detect changes in secondary and higher orders of structure, for applications in research and in the quality control of protein products such as biopharmaceuticals. However, objective comparison of spectra is challenging because of a limited quantitative understanding of the sources of error in the measurement. Statistical methods can be used for comparisons, but do not provide a mechanism for dealing with systematic, as well as random, errors. Here we present a measurement model for CD spectroscopy of proteins, incorporating the principal sources of uncertainty, and use the model in conjunction with experimental data to derive an uncertainty budget. We show how this approach could be used in practice for the objective comparison of spectra, and discuss the benefits and limitations of this strategy.

  14. Intrinsic uncertainty on the nature of dark energy

    NASA Astrophysics Data System (ADS)

    Valkenburg, Wessel; Kunz, Martin; Marra, Valerio

    2013-12-01

    We argue that there is an intrinsic noise on measurements of the equation of state parameter w = p/ρ from large-scale structure around us. The presence of the large-scale structure leads to an ambiguity in the definition of the background universe and thus there is a maximal precision with which we can determine the equation of state of dark energy. To study the uncertainty due to local structure, we model density perturbations stemming from a standard inflationary power spectrum by means of the exact Lemaître-Tolman-Bondi solution of Einstein’s equation, and show that the usual distribution of matter inhomogeneities in a ΛCDM cosmology causes a variation of w - as inferred from distance measures - of several percent. As we observe only one universe, or equivalently because of the cosmic variance, this uncertainty is systematic in nature.

  15. Sense of control under uncertainty depends on people's childhood environment: a life history theory approach.

    PubMed

    Mittal, Chiraag; Griskevicius, Vladas

    2014-10-01

    Past research found that environmental uncertainty leads people to behave differently depending on their childhood environment. For example, economic uncertainty leads people from poor childhoods to become more impulsive while leading people from wealthy childhoods to become less impulsive. Drawing on life history theory, we examine the psychological mechanism driving such diverging responses to uncertainty. Five experiments show that uncertainty alters people's sense of control over the environment. Exposure to uncertainty led people from poorer childhoods to have a significantly lower sense of control than those from wealthier childhoods. In addition, perceptions of control statistically mediated the effect of uncertainty on impulsive behavior. These studies contribute by demonstrating that sense of control is a psychological driver of behaviors associated with fast and slow life history strategies. We discuss the implications of this for theory and future research, including that environmental uncertainty might lead people who grew up poor to quit challenging tasks sooner than people who grew up wealthy.

  17. Probabilistic uncertainty analysis of an FRF of a structure using a Gaussian process emulator

    NASA Astrophysics Data System (ADS)

    Fricker, Thomas E.; Oakley, Jeremy E.; Sims, Neil D.; Worden, Keith

    2011-11-01

    This paper introduces methods for probabilistic uncertainty analysis of a frequency response function (FRF) of a structure obtained via a finite element (FE) model. The methods are applicable to computationally expensive FE models, making use of a Bayesian metamodel known as an emulator. The emulator produces fast predictions of the FE model output, but also accounts for the additional uncertainty induced by only having a limited number of model evaluations. Two approaches to the probabilistic uncertainty analysis of FRFs are developed. The first considers the uncertainty in the response at discrete frequencies, giving pointwise uncertainty intervals. The second considers the uncertainty in an entire FRF across a frequency range, giving an uncertainty envelope function. The methods are demonstrated and compared to alternative approaches in a practical case study.
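
    A minimal sketch of the pointwise-interval approach, using an off-the-shelf Gaussian process in place of the paper's Bayesian emulator (the one-degree-of-freedom FRF stand-in and the kernel settings are assumptions):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      # Toy stand-in for an expensive FE model: log-magnitude of a 1-DOF
      # FRF as a function of the natural frequency wn
      def fe_model(wn, w=35.0, zeta=0.05):
          return -np.log10((wn**2 - w**2)**2 + (2 * zeta * wn * w)**2)

      wn_train = np.linspace(20.0, 60.0, 8).reshape(-1, 1)  # few runs only
      y_train = fe_model(wn_train.ravel())

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0),
                                    normalize_y=True).fit(wn_train, y_train)

      wn_new = np.linspace(20.0, 60.0, 200).reshape(-1, 1)
      mean, std = gp.predict(wn_new, return_std=True)   # fast prediction
      # 'std' carries the extra uncertainty from having only a limited
      # number of FE evaluations; pointwise ~95% intervals: mean +/- 2*std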

  18. Systematic study of (α,γ) reactions for stable nickel isotopes

    NASA Astrophysics Data System (ADS)

    Simon, A.; Beard, M.; Spyrou, A.; Quinn, S. J.; Bucher, B.; Couder, M.; DeYoung, P. A.; Dombos, A. C.; Görres, J.; Kontos, A.; Long, A.; Moran, M. T.; Paul, N.; Pereira, J.; Robertson, D.; Smith, K.; Stech, E.; Talwar, R.; Tan, W. P.; Wiescher, M.

    2015-08-01

    A systematic measurement of the (α,γ) reaction for all the stable nickel isotopes has been performed using the γ-summing technique. For two of the isotopes, 60Ni and 61Ni, the α-capture cross sections have been measured experimentally for the first time. For 58,62,64Ni, the current measurement is in excellent agreement with earlier results found in the literature, and additionally extends the energy range of the measured cross sections up to 8.7 MeV. The data provide a tool for testing the cross-section predictions of Hauser-Feshbach calculations. The experimental results were compared to cross sections calculated with the talys 1.6 code and the commonly used databases non-smoker and bruslib. For each of the investigated isotopes a combination of input parameters for talys was identified that best reproduces the experimental data, and a recommended reaction rate has been calculated. Additionally, a set of inputs for Hauser-Feshbach calculations is given that, simultaneously for all the isotopes under consideration, reproduces the experimental data within the experimental uncertainties.

  19. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    SciTech Connect

    Kreinovich, Vladik; Oberkampf, William Louis; Ginzburg, Lev; Ferson, Scott; Hajagos, Janos

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
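
    A small sketch of descriptive statistics for interval data: because the mean and median are monotone in each data point, sharp bounds follow from the all-lower and all-upper endpoint configurations (data invented; the variance bound is deliberately omitted, since computing its exact upper bound is NP-hard in general):

      import numpy as np

      # Interval-valued measurements [low, high] (invented)
      data = np.array([[1.8, 2.4], [2.0, 2.1], [1.5, 2.9],
                       [2.2, 2.6], [1.9, 2.3]])
      lo, hi = data[:, 0], data[:, 1]

      # Mean and median are monotone in each coordinate, so their sharp
      # bounds come from the all-lower and all-upper configurations.
      print("mean   in [%.3f, %.3f]" % (lo.mean(), hi.mean()))
      print("median in [%.3f, %.3f]" % (np.median(lo), np.median(hi)))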

  20. Methods for displaying macromolecular structural uncertainty: application to the globins.

    PubMed

    Altman, R B; Hughes, C; Gerstein, M B

    1995-06-01

    Most molecular graphics programs ignore any uncertainty in the atomic coordinates being displayed. Structures are displayed in terms of perfect points, spheres, and lines with no uncertainty. However, all experimental methods for defining structures, and many methods for predicting and comparing structures, associate uncertainties with each atomic coordinate. We have developed graphical representations that highlight these uncertainties. These representations are encapsulated in a new interactive display program, PROTEAND. PROTEAND represents structural uncertainty in three ways: (1) The traditional way: the program shows a collection of structures as superposed and overlapped stick-figure models. (2) Ellipsoids: at each atom position, the program shows an ellipsoid derived from a three-dimensional Gaussian model of uncertainty. This probabilistic model provides additional information about the relationship between atoms that can be displayed as a correlation matrix. (3) Rigid-body volumes: using clouds of dots, the program can show the range of rigid-body motion of selected substructures, such as individual alpha helices. We illustrate the utility of these display modalities by applying PROTEAND to the globin family of proteins, and show that certain types of structural variation are best illustrated with different methods of display.

  1. Structural uncertainty in watershed phosphorus modeling: Toward a stochastic framework

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Gong, Yongwei; Shen, Zhenyao

    2016-06-01

    Structural uncertainty is an important source of model predictive errors, but few studies have been conducted on the error transitivity from model structure to nonpoint source (NPS) prediction. In this study, we focused on the structural uncertainty caused by the algorithms and equations that are used to describe the phosphorus (P) cycle at the watershed scale. The sensitivity of simulated P to each algorithm/equation was quantified using the Soil and Water Assessment Tool (SWAT) in the Three Gorges Reservoir Area, China. The results indicated that the ratios of C:N and P:N for humic materials, as well as the algorithms for fertilization and P leaching, contributed the largest output uncertainties. In comparison, the initialization of inorganic P in the soil layer and the transformation algorithm between P pools are less sensitive for the NPS-P predictions. In addition, the coefficient of variation values were quantified as 0.028-0.086, indicating that the structure-induced uncertainty is minor compared to the NPS-P prediction uncertainty caused by the model inputs and parameters. Using the stochastic framework, the cumulative probability of simulated NPS-P data provided a trade-off between expenditure burden and desired risk. In this sense, this paper provides valuable information for the control of model structural uncertainty, and can be extrapolated to other model-based studies.

  2. Considering rating curve uncertainty in water level predictions

    NASA Astrophysics Data System (ADS)

    Sikorska, A. E.; Scheidegger, A.; Banasik, K.; Rieckermann, J.

    2013-11-01

    Streamflow cannot be measured directly and is typically derived with a rating curve model. Unfortunately, this causes uncertainties in the streamflow data and also influences the calibration of rainfall-runoff models if they are conditioned on such data. However, it is currently unknown to what extent these uncertainties propagate to rainfall-runoff predictions. This study therefore presents a quantitative approach to rigorously consider the impact of the rating curve on the prediction uncertainty of water levels. The uncertainty analysis is performed within a formal Bayesian framework and the contributions of rating curve versus rainfall-runoff model parameters to the total predictive uncertainty are addressed. A major benefit of the approach is its independence from the applied rainfall-runoff model and rating curve. In addition, it only requires already existing hydrometric data. The approach was successfully demonstrated on a small catchment in Poland, where a dedicated monitoring campaign was performed in 2011. The results of our case study indicate that the uncertainty in calibration data derived by the rating curve method may be of the same relevance as rainfall-runoff model parameters themselves. A conceptual limitation of the approach presented is that it is limited to water level predictions. Nevertheless, regarding flood level predictions, the Bayesian framework seems very promising because it (i) enables the modeler to incorporate informal knowledge from easily accessible information and (ii) better assesses the individual error contributions. Especially the latter is important to improve the predictive capability of hydrological models.
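
    A Monte Carlo sketch of how rating-curve parameter uncertainty propagates into the discharge used for calibration, assuming a power-law rating curve with invented parameter distributions (in the study these would come from the Bayesian fit to stage-discharge gaugings):

      import numpy as np

      rng = np.random.default_rng(5)

      # Power-law rating curve Q = a * (h - h0)^b with uncertain parameters
      n = 10_000
      a = rng.normal(12.0, 1.5, n)
      b = rng.normal(1.7, 0.1, n)
      h0 = rng.normal(0.20, 0.03, n)

      h = 1.35                                  # observed water level [m]
      Q = a * np.maximum(h - h0, 1e-6) ** b     # discharge ensemble [m3/s]

      q05, q50, q95 = np.percentile(Q, [5, 50, 95])
      print(f"Q(h={h} m): median {q50:.1f} m3/s, 90% interval "
            f"[{q05:.1f}, {q95:.1f}] m3/s")
      # This ensemble, rather than a single Q value, should condition
      # the rainfall-runoff model calibration.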

  3. Considering rating curve uncertainty in water level predictions

    NASA Astrophysics Data System (ADS)

    Sikorska, A. E.; Scheidegger, A.; Banasik, K.; Rieckermann, J.

    2013-03-01

    Streamflow cannot be measured directly and is typically derived with a rating curve model. Unfortunately, this causes uncertainties in the streamflow data and also influences the calibration of rainfall-runoff models if they are conditioned on such data. However, it is currently unknown to what extent these uncertainties propagate to rainfall-runoff predictions. This study therefore presents a quantitative approach to rigorously consider the impact of the rating curve on the prediction uncertainty of water levels. The uncertainty analysis is performed within a formal Bayesian framework and the contributions of rating curve versus rainfall-runoff model parameters to the total predictive uncertainty are addressed. A major benefit of the approach is its independence from the applied rainfall-runoff model and rating curve. In addition, it only requires already existing hydrometric data. The approach was successfully tested on a small urbanized basin in Poland, where a dedicated monitoring campaign was performed in 2011. The results of our case study indicate that the uncertainty in calibration data derived by the rating curve method may be of the same relevance as rainfall-runoff model parameters themselves. A conceptual limitation of the approach presented is that it is limited to water level predictions. Nevertheless, regarding flood level predictions, the Bayesian framework seems very promising because it (i) enables the modeler to incorporate informal knowledge from easily accessible information and (ii) better assesses the individual error contributions. Especially the latter is important to improve the predictive capability of hydrological models.

  4. Prediction uncertainty and optimal experimental design for learning dynamical systems

    NASA Astrophysics Data System (ADS)

    Letham, Benjamin; Letham, Portia A.; Rudin, Cynthia; Browne, Edward P.

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
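
    A sketch of the prediction-deviation idea on a toy exponential-decay model (the model, data, and fit tolerance are invented): find two parameter sets that each fit the observations within a tolerance while maximizing how much their predictions differ at an unobserved point.

      import numpy as np
      from scipy.optimize import minimize

      # Toy dynamical-model surrogate: y = A * exp(-k t)
      t_obs = np.array([0.0, 1.0, 2.0])
      y_obs = np.array([1.00, 0.62, 0.38])
      t_new = 6.0                               # unconstrained extrapolation

      def predict(p, t):
          A, k = p
          return A * np.exp(-k * t)

      def sse(p):
          return np.sum((predict(p, t_obs) - y_obs) ** 2)

      best = minimize(sse, [1.0, 0.5]).x
      tol = sse(best) + 0.01          # "good fit" = within an SSE slack (assumed)

      def neg_deviation(q):           # q = [A1, k1, A2, k2]
          return -abs(predict(q[:2], t_new) - predict(q[2:], t_new))

      cons = [{"type": "ineq", "fun": lambda q: tol - sse(q[:2])},
              {"type": "ineq", "fun": lambda q: tol - sse(q[2:])}]
      res = minimize(neg_deviation, [1.0, 0.4, 1.0, 0.6],
                     constraints=cons, method="SLSQP")
      print("prediction deviation at t=6:", -res.fun)

    A large deviation means the data have not constrained the prediction at t_new, flagging it as a useful target for the next experiment.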

  5. Quantification of uncertainty in geochemical reactions

    NASA Astrophysics Data System (ADS)

    Srinivasan, Gowri; Tartakovsky, Daniel M.; Robinson, Bruce A.; Aceves, Alejandro B.

    2007-12-01

    Predictions of reactive transport in the subsurface are routinely compromised by both model (structural) and parametric uncertainties. We present a set of computational tools for quantifying these two types of uncertainties. The model uncertainty is resolved at the molecular scale, where epistemic uncertainty incorporates aleatory uncertainty. The parametric uncertainty is resolved at both molecular and continuum (Darcy) scales. We use the proposed approach to quantify uncertainty in modeling the sorption of neptunium through a competitive ion exchange. This radionuclide is of major concern for various high-level waste storage projects because of its relatively long half-life and its high solubility and low sorption. We demonstrate how parametric and model uncertainties affect one's ability to estimate the distribution coefficient. The uncertainty quantification tools yield complete probabilistic descriptions of key parameters affecting the fate and migration of neptunium in the subsurface, rather than just their lower statistical moments. This is important, since these distributions are highly skewed.

  6. Toward a definition of intolerance of uncertainty: a review of factor analytical studies of the Intolerance of Uncertainty Scale.

    PubMed

    Birrell, Jane; Meares, Kevin; Wilkinson, Andrew; Freeston, Mark

    2011-11-01

    Since its emergence in the early 1990s, a narrow but concentrated body of research has developed examining the role of intolerance of uncertainty (IU) in worry, and yet we still know little about its phenomenology. In an attempt to clarify our understanding of this construct, this paper traces the way in which our understanding and definition of IU have evolved throughout the literature. This paper also aims to further our understanding of IU by exploring the latent variables measured by the Intolerance of Uncertainty Scale (IUS; Freeston, Rheaume, Letarte, Dugas & Ladouceur, 1994). A review of the literature surrounding IU confirmed that the current definitions are categorical and lack specificity. A critical review of existing factor-analytic studies was carried out in order to determine the underlying factors measured by the IUS. Systematic searches yielded 9 papers for review. Two factors with 12 consistent items emerged throughout the exploratory studies, and the stability of models containing these two factors was demonstrated in subsequent confirmatory studies. It is proposed that these factors represent (i) desire for predictability and an active engagement in seeking certainty, and (ii) paralysis of cognition and action in the face of uncertainty. It is suggested that these factors may represent approach and avoidance responses to uncertainty. Further research is required to confirm the construct validity of these factors and to determine the stability of this structure within clinical samples.

  7. Systematic Risk-Taking.

    ERIC Educational Resources Information Center

    Neihart, Maureen

    1999-01-01

    Describes systematic risk-taking, a strategy designed to develop skills and increase self-esteem, confidence, and courage in gifted youth. The steps of systematic risk-taking include understanding the benefits, initial self-assessment for risk-taking categories, identifying personal needs, determining a risk to take, taking the risk, and…

  8. Uncertainty in mapping urban air quality using crowdsourcing techniques

    NASA Astrophysics Data System (ADS)

    Schneider, Philipp; Castell, Nuria; Lahoz, William; Bartonova, Alena

    2016-04-01

    Small and low-cost sensors measuring various air pollutants have become available in recent years owing to advances in sensor technology. Such sensors have significant potential for improving high-resolution mapping of air quality in the urban environment, as they can be deployed in comparatively large numbers and are therefore able to provide information at unprecedented spatial detail. However, such sensor devices are subject to significant, and currently little understood, uncertainties that affect their usability. Not only do these devices exhibit random errors and biases of occasionally substantial magnitude, but these errors may also shift over time. In addition, there often tends to be significant inter-sensor variability even when supposedly identical sensors from the same manufacturer are used. We need to quantify these uncertainties accurately to make proper use of the information the sensors provide. Furthermore, when making use of the data and producing derived products such as maps, the measurement uncertainties that propagate throughout the analysis need to be clearly communicated to the scientific and non-scientific users of the map products. Based on recent experiences within the EU-funded projects CITI-SENSE and hackAIR, we discuss the uncertainties along the entire processing chain when using crowdsourcing techniques for mapping urban air quality. Starting with the uncertainties exhibited by the sensors themselves, we present ways of quantifying the error characteristics of a network of low-cost microsensors and show suitable statistical metrics for summarizing them. Subsequently, we briefly present a data-fusion-based method for mapping air quality in the urban environment and illustrate how we propagate the uncertainties of the individual sensors throughout the mapping system, resulting in detailed maps that document the pixel-level uncertainty for each concentration field. Finally, we present methods for communicating the resulting spatial uncertainty.

  9. Uncertainty quantification for systems of conservation laws

    SciTech Connect

    Poette, Gael; Despres, Bruno; Lucor, Didier

    2009-04-20

    Uncertainty quantification through stochastic spectral methods has been recently applied to several kinds of non-linear stochastic PDEs. In this paper, we introduce a formalism based on kinetic theory to tackle uncertain hyperbolic systems of conservation laws with Polynomial Chaos (PC) methods. The idea is to introduce a new variable, the entropic variable, in bijection with our vector of unknowns, which we develop on the polynomial basis: by performing a Galerkin projection, we obtain a deterministic system of conservation laws. We state several properties of this deterministic system in the case of a general uncertain system of conservation laws. We then apply the method to the case of the inviscid Burgers' equation with random initial conditions and present some preliminary results for the Euler system. We systematically compare results from our new approach to results from the stochastic Galerkin method. In the vicinity of discontinuities, the new method bounds the oscillations due to the Gibbs phenomenon to a certain range through the entropy of the system, without the use of any adaptive random space discretizations. It is found to be more precise than the stochastic Galerkin method for smooth cases, but above all for discontinuous cases.
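
    For context, the standard stochastic-Galerkin system that the new method is compared against can be written, for the inviscid Burgers equation with uncertain input ξ, as follows (a standard formulation; the paper's entropic-variable system is a further transformation of this idea). Expanding the solution on a PC basis,

      u(x,t,\xi) \approx \sum_{k=0}^{P} u_k(x,t)\,\Phi_k(\xi),

    and projecting \partial_t u + \partial_x (u^2/2) = 0 onto each basis function gives the deterministic coupled system

      \partial_t u_k + \frac{1}{2}\,\partial_x \sum_{i=0}^{P}\sum_{j=0}^{P} u_i u_j\,\frac{\mathbb{E}[\Phi_i \Phi_j \Phi_k]}{\mathbb{E}[\Phi_k^2]} = 0, \qquad k = 0,\dots,P.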

  10. Video Scanning Hartmann Optical Tester (VSHOT) Uncertainty Analysis: Preprint

    SciTech Connect

    Lewandowski, A.; Gray, A.

    2010-10-01

    This purely analytical work is based primarily on the geometric optics of the system and shows sensitivities to various design and operational parameters. We discuss sources of error with measuring devices, instrument calibrations, and operator measurements for a parabolic trough test. In this paper, we include both the random (precision) and systematic (bias) errors for VSHOT testing and their contributions to the uncertainty. The contributing factors that we considered in this study are target tilt, target face to laser output distance, instrument vertical offset, scanner tilt, distance between the tool and the test piece, camera calibration, and scanner/calibration.

  11. Uncertainty in Measured Data and Model Predictions: Essential Components for Mobilizing Environmental Data and Modeling

    NASA Astrophysics Data System (ADS)

    Harmel, D.

    2014-12-01

    In spite of pleas for uncertainty analysis - such as Beven's (2006) "Should it not be required that every paper in both field and modeling studies attempt to evaluate the uncertainty in the results?" - the uncertainty associated with hydrology and water quality data is rarely quantified and rarely considered in model evaluation. This oversight, justified in the past by mainly tenuous philosophical concerns, diminishes the value of measured data and ignores the environmental and socio-economic benefits of improved decisions and policies based on data with estimated uncertainty. This oversight extends to researchers, who typically fail to estimate uncertainty in measured discharge and water quality data because of the additional effort required, a lack of adequate scientific understanding on the subject, and fear of negative perception if data with "high" uncertainty are reported; however, the benefits are certain. Furthermore, researchers have a responsibility for scientific integrity in reporting what is known and what is unknown, including the quality of measured data. In response, we produced an uncertainty estimation framework and the first cumulative uncertainty estimates for measured water quality data (Harmel et al., 2006). From that framework, DUET-H/WQ was developed (Harmel et al., 2009). Application to several real-world data sets indicated that substantial uncertainty can be contributed by each data collection procedural category and that uncertainties typically increase in the order discharge < sediment < dissolved N and P < total N and P. Similarly, modelers address certain aspects of model uncertainty but ignore others, such as the impact of uncertainty in discharge and water quality data. Thus, we developed methods to incorporate both prediction uncertainty and calibration/validation data uncertainty into model goodness-of-fit evaluation (Harmel and Smith, 2007; Harmel et al., 2010). These enhance model evaluation by appropriately sharing the burden of uncertainty between model predictions and measured data.

  12. Adjoint-Based Uncertainty Quantification with MCNP

    SciTech Connect

    Seifried, Jeffrey E.

    2011-09-01

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations arising from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties with respect to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
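
    The core calculation behind such estimates is the first-order "sandwich rule", which folds a sensitivity vector into a nuclear-data covariance matrix. A minimal sketch with hypothetical sensitivities and covariances (not the LIFE blanket values):

    ```python
    import numpy as np

    # First-order "sandwich rule": var(R)/R^2 = S^T C S, with relative quantities.
    # Sensitivities and covariances below are hypothetical placeholders.
    S = np.array([0.80, -0.15, 0.30])            # dR/R per dsigma/sigma
    C = np.array([[4.0e-4, 1.0e-4, 0.0],
                  [1.0e-4, 9.0e-4, 0.0],
                  [0.0,    0.0,    1.6e-3]])     # relative covariance of the data

    rel_var = S @ C @ S
    print(f"relative uncertainty on R: {np.sqrt(rel_var):.2%}")
    ```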

  13. Accounting for uncertainty in the quantification of the environmental impacts of Canadian pig farming systems.

    PubMed

    Mackenzie, S G; Leinonen, I; Ferguson, N; Kyriazakis, I

    2015-06-01

    The objective of the study was to develop a life cycle assessment (LCA) for pig farming systems that would account for uncertainty and variability in input data and allow systematic environmental impact comparisons between production systems. The environmental impacts of commercial pig production for 2 regions in Canada (Eastern and Western) were compared using a cradle-to-farm gate LCA. These systems had important contrasting characteristics such as typical feed ingredients used, herd performance, and expected emission factors from manure management. The study used detailed production data supplied by the industry and incorporated uncertainty/variation in all major aspects of the system, including life cycle inventory data for feed ingredients, animal performance, energy inputs, and emission factors. The impacts were defined using 5 metrics - global warming potential, acidification potential, eutrophication potential (EP), abiotic resource use, and nonrenewable energy use - and were expressed per kilogram of carcass weight at farm gate. Eutrophication potential was further separated into marine EP (MEP) and freshwater EP (FEP). Uncertainties in the model inputs were separated into 2 types: uncertainty in the data used to describe the system (α uncertainties) and uncertainty in impact calculations or background data that affects all systems equally (β uncertainties). The impacts of pig production in the 2 regions were systematically compared based on the differences in the systems (α uncertainties). The method of ascribing uncertainty influenced the outcomes. In eastern systems, EP, MEP, and FEP were lower (P < 0.05) when assuming that all uncertainty in the emission factors for leaching from manure application was β. This was mainly due to increased EP resulting from field emissions for typical ingredients in western diets. When uncertainty in these emission factors was assumed to be α, only FEP was lower in eastern systems (P < 0.05). The environmental impacts for

  14. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    Evaluating the residual risk of human error in a measurement and testing laboratory - the risk remaining after error reduction by the laboratory quality system - and quantifying the consequences of this risk for the quality of the measurement/test results are discussed, based on expert judgments and Monte Carlo simulations. A procedure for evaluating the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.

  15. Credible Software and Simulation Uncertainty

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B.; Nixon, David (Technical Monitor)

    1998-01-01

    The utility of software primarily depends on its reliability and performance, whereas its significance depends solely on its credibility for the intended use. The credibility of simulations confirms the credibility of software. The levels of veracity and validity of simulations determine the degree of credibility of the simulations. The process of assessing this credibility in fields such as computational mechanics (CM) differs from that followed by the Defense Modeling and Simulation Office in operations research. Verification and validation (V&V) of CM simulations is not the same as V&V of CM software. Uncertainty is the measure of simulation credibility. Designers who use software are concerned with the management of simulation uncertainty. Terminology and concepts are presented with a few examples from computational fluid dynamics.

  16. Uncertainty versus computer response time

    SciTech Connect

    Rowe, W.D.

    1994-12-31

    Interactive on-line presentation of risk analysis results with immediate "what if" capability is now possible with available microcomputer technology. This can provide an effective means of presenting the risk results, the decision possibilities, and the underlying assumptions to decision makers, stakeholders, and the public. However, the limited computational power of microcomputers requires a trade-off between the precision of the analysis and the computing and display response time. Fortunately, the uncertainties in the risk analysis are usually so large that extreme precision is often unwarranted. Therefore, risk analyses used for this purpose must include trade-offs between precision and processing time, and the uncertainties introduced must be put into perspective.

  17. Aspects of complementarity and uncertainty

    NASA Astrophysics Data System (ADS)

    Vathsan, Radhika; Qureshi, Tabish

    2016-08-01

    The two-slit experiment with quantum particles provides many insights into the behavior of quantum mechanics, including Bohr’s complementarity principle. Here, we analyze Einstein’s recoiling slit version of the experiment and show how the inevitable entanglement between the particle and the recoiling slit as a which-way detector is responsible for complementarity. We derive the Englert-Greenberger-Yasin duality from this entanglement, which can also be thought of as a consequence of sum-uncertainty relations between certain complementary observables of the recoiling slit. Thus, entanglement is an integral part of the which-way detection process, and so is uncertainty, though in a completely different way from that envisaged by Bohr and Einstein.
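
    The duality relation derived in the paper is, in its standard form, the Englert-Greenberger-Yasin bound between which-way distinguishability $D$ and fringe visibility $V$:

    \[ D^2 + V^2 \le 1, \]

    with equality holding for pure states of the combined particle-detector system.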

  18. Uncertainty analysis and robust trajectory linearization control of a flexible air-breathing hypersonic vehicle

    NASA Astrophysics Data System (ADS)

    Pu, Zhiqiang; Tan, Xiangmin; Fan, Guoliang; Yi, Jianqiang

    2014-08-01

    Flexible air-breathing hypersonic vehicles feature significant uncertainties that pose major challenges to robust controller design. In this paper, four major categories of uncertainty are analyzed: uncertainties associated with flexible effects, aerodynamic parameter variations, external environmental disturbances, and control-oriented modeling errors. A uniform nonlinear uncertainty model is explored for the first three, which lumps all uncertainties together and is consequently beneficial for controller synthesis. The fourth uncertainty is additionally considered in the stability analysis. Based on these analyses, the starting point of the control design is to decompose the vehicle dynamics into five functional subsystems. Then a robust trajectory linearization control (TLC) scheme consisting of five robust subsystem controllers is proposed. In each subsystem controller, TLC is combined with the extended state observer (ESO) technique for uncertainty compensation. The stability of the overall closed-loop system with the four aforementioned uncertainties and additional singular perturbations is analyzed. In particular, the stability of the nonlinear ESO is also discussed from a Liénard system perspective. Finally, simulations demonstrate the control performance and the uncertainty rejection ability of the robust scheme.
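
    The compensation idea is that an ESO treats unmodeled dynamics and disturbances as one extra state (the "total disturbance") and estimates it online. A minimal linear ESO sketch on a toy second-order plant (the plant, bandwidth, and gains are illustrative, not the vehicle model of the paper):

    ```python
    import numpy as np

    # Linear ESO for y'' = f(y, y', d) + b*u: the unknown total dynamics f is
    # estimated as an extended state z[2]. Bandwidth parameterization: poles at -w.
    b, w = 1.0, 20.0
    dt, T = 1e-3, 5.0
    y, yd = 1.0, 0.0                 # plant states
    z = np.zeros(3)                  # observer states: y, y', total disturbance
    for i in range(int(T / dt)):
        u = 0.0
        f = -2.0 * y - 0.5 * yd + np.sin(i * dt)  # unknown dynamics + disturbance
        y, yd = y + yd * dt, yd + (f + b * u) * dt  # plant integration (Euler)
        e = y - z[0]
        z = z + dt * np.array([z[1] + 3 * w * e,
                               z[2] + b * u + 3 * w**2 * e,
                               w**3 * e])
    print(f"disturbance estimate error: {abs(z[2] - f):.3f}")
    ```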

  19. Ozone Uncertainties Study Algorithm (OUSA)

    NASA Technical Reports Server (NTRS)

    Bahethi, O. P.

    1982-01-01

    An algorithm to carry out sensitivity, uncertainty, and overall-imprecision studies with respect to a set of input parameters for a one-dimensional steady-state ozone photochemistry model is described. This algorithm can be used to evaluate steady-state perturbations due to point-source or distributed injection of H2O, ClX, and NOx, as well as variations in the incident solar flux. The algorithm is operational on the IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).

  1. Age models and their uncertainties

    NASA Astrophysics Data System (ADS)

    Marwan, N.; Rehfeld, K.; Goswami, B.; Breitenbach, S. F. M.; Kurths, J.

    2012-04-01

    The usefulness of a proxy record is largely dictated by the accuracy and precision of its age model, i.e., its depth-age relationship. Only if age-model uncertainties are minimized can correlations or lead-lag relations be reliably studied. Moreover, due to different dating strategies (14C, U-series, OSL dating, or counting of varves), dating errors or diverging age models lead to difficulties in comparing different palaeo proxy records. Uncertainties in the age model are even more important if exact dating is necessary in order to calculate, e.g., data series of fluxes or rates (like dust flux records or pollen deposition rates). Several statistical approaches exist to handle the dating uncertainties themselves and to estimate the age-depth relationship. Nevertheless, linear interpolation is still the most commonly used method for age modeling, and the uncertainty of a certain event at a given time due to dating errors is often completely neglected. Here we demonstrate the importance of considering dating errors and the implications for the interpretation of variations in palaeo-climate proxy records from stalagmites (U-series dated). We present a simple approach for estimating age models and their confidence levels based on Monte Carlo methods and non-linear interpolation. This novel algorithm also allows for the removal of age reversals. Our approach delivers a time series of a proxy record with a value range for each age depth and, if desired, on an equidistant time axis. The algorithm is implemented in interactive scripts for use with MATLAB®, Octave, and FreeMat.
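
    The abstract's scripts target MATLAB/Octave/FreeMat; the following Python sketch illustrates the same Monte Carlo idea under invented dates and errors: sample the dated horizons, discard realizations with age reversals, interpolate non-linearly (monotone PCHIP here), and read off a confidence band.

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    rng = np.random.default_rng(0)
    depth = np.array([0.0, 10.0, 25.0, 40.0, 60.0])    # cm, dated horizons (invented)
    age = np.array([0.0, 1.2, 2.9, 4.1, 6.5])          # ka, U-series dates (invented)
    age_sd = np.array([0.05, 0.08, 0.10, 0.15, 0.20])  # 1-sigma dating errors

    grid = np.linspace(0.0, 60.0, 121)
    ens = []
    while len(ens) < 2000:
        a = rng.normal(age, age_sd)
        if np.all(np.diff(a) > 0):                     # discard age reversals
            ens.append(PchipInterpolator(depth, a)(grid))  # monotone interpolation
    ens = np.array(ens)
    median = np.median(ens, axis=0)
    lo, hi = np.percentile(ens, [2.5, 97.5], axis=0)   # 95% age-model envelope
    print(median[-1], lo[-1], hi[-1])
    ```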

  2. Sonic Boom Pressure Signature Uncertainty Calculation and Propagation to Ground Noise

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Bretl, Katherine N.; Walker, Eric L.; Pinier, Jeremy T.

    2015-01-01

    The objective of this study was to outline an approach for the quantification of uncertainty in sonic boom measurements and to investigate the effect of various near-field uncertainty representation approaches on ground noise predictions. These approaches included a symmetric versus asymmetric uncertainty band representation and a dispersion technique based on a partial-sum Fourier series that allows for the inclusion of random error sources in the uncertainty. The near-field uncertainty was propagated to the ground level, along with additional uncertainty in the propagation modeling. Estimates of perceived loudness were obtained for the various types of uncertainty representation in the near field. Analyses were performed on three configurations of interest to the sonic boom community: the SEEB-ALR, the 69° Delta Wing, and the LM 1021-01. Results showed that representation of the near-field uncertainty plays a key role in ground noise predictions. Using a Fourier-series-based dispersion approach can double the amount of uncertainty in the ground noise compared to a pure bias representation. Compared to previous computational fluid dynamics results, uncertainty in ground noise predictions was greater when considering the near-field experimental uncertainty.
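
    A sketch of the kind of partial-sum Fourier dispersion described above, applied to a toy N-wave signature (the waveform, mode count, and band width are all invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 512)
    p = np.where(t < 0.5, 1.0 - 2.0 * t, 2.0 * (t - 0.5) - 1.0)  # toy N-wave
    band = 0.05                                # +/- near-field uncertainty band

    N = 8                                      # retained Fourier modes
    amp = rng.uniform(-1.0, 1.0, N) / np.arange(1, N + 1)
    phase = rng.uniform(0.0, 2.0 * np.pi, N)
    pert = sum(a * np.sin(2.0 * np.pi * n * t + f)
               for n, (a, f) in enumerate(zip(amp, phase), start=1))
    pert *= band / np.max(np.abs(pert))        # scale the dispersion into the band
    p_dispersed = p + pert                     # one random realization to propagate
    ```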

  3. Confronting dynamics and uncertainty in optimal decision making for conservation

    NASA Astrophysics Data System (ADS)

    Williams, Byron K.; Johnson, Fred A.

    2013-06-01

    The effectiveness of conservation efforts ultimately depends on the recognition that decision making, and the systems that it is designed to affect, are inherently dynamic and characterized by multiple sources of uncertainty. To cope with these challenges, conservation planners are increasingly turning to the tools of decision analysis, especially dynamic optimization methods. Here we provide a general framework for optimal, dynamic conservation and then explore its capacity for coping with various sources and degrees of uncertainty. In broadest terms, the dynamic optimization problem in conservation is choosing among a set of decision options at periodic intervals so as to maximize some conservation objective over the planning horizon. Planners must account for immediate objective returns, as well as the effect of current decisions on future resource conditions and, thus, on future decisions. Undermining the effectiveness of such a planning process are uncertainties concerning extant resource conditions (partial observability), the immediate consequences of decision choices (partial controllability), the outcomes of uncontrolled, environmental drivers (environmental variation), and the processes structuring resource dynamics (structural uncertainty). Where outcomes from these sources of uncertainty can be described in terms of probability distributions, a focus on maximizing the expected objective return, while taking state-specific actions, is an effective mechanism for coping with uncertainty. When such probability distributions are unavailable or deemed unreliable, a focus on maximizing robustness is likely to be the preferred approach. Here the idea is to choose an action (or state-dependent policy) that achieves at least some minimum level of performance regardless of the (uncertain) outcomes. We provide some examples of how the dynamic optimization problem can be framed for problems involving management of habitat for an imperiled species, conservation of a

  4. Confronting dynamics and uncertainty in optimal decision making for conservation

    USGS Publications Warehouse

    Williams, Byron K.; Johnson, Fred A.

    2013-01-01

    The effectiveness of conservation efforts ultimately depends on the recognition that decision making, and the systems that it is designed to affect, are inherently dynamic and characterized by multiple sources of uncertainty. To cope with these challenges, conservation planners are increasingly turning to the tools of decision analysis, especially dynamic optimization methods. Here we provide a general framework for optimal, dynamic conservation and then explore its capacity for coping with various sources and degrees of uncertainty. In broadest terms, the dynamic optimization problem in conservation is choosing among a set of decision options at periodic intervals so as to maximize some conservation objective over the planning horizon. Planners must account for immediate objective returns, as well as the effect of current decisions on future resource conditions and, thus, on future decisions. Undermining the effectiveness of such a planning process are uncertainties concerning extant resource conditions (partial observability), the immediate consequences of decision choices (partial controllability), the outcomes of uncontrolled, environmental drivers (environmental variation), and the processes structuring resource dynamics (structural uncertainty). Where outcomes from these sources of uncertainty can be described in terms of probability distributions, a focus on maximizing the expected objective return, while taking state-specific actions, is an effective mechanism for coping with uncertainty. When such probability distributions are unavailable or deemed unreliable, a focus on maximizing robustness is likely to be the preferred approach. Here the idea is to choose an action (or state-dependent policy) that achieves at least some minimum level of performance regardless of the (uncertain) outcomes. We provide some examples of how the dynamic optimization problem can be framed for problems involving management of habitat for an imperiled species, conservation of a

  5. Blade tip timing (BTT) uncertainties

    NASA Astrophysics Data System (ADS)

    Russhard, Pete

    2016-06-01

    Blade Tip Timing (BTT) is an alternative technique for characterising blade vibration, in which non-contact timing probes (e.g. capacitance or optical probes), typically mounted on the engine casing (figure 1), are used to measure the time at which a blade passes each probe. This time is compared with the time at which the blade would have passed the probe had it been undergoing no vibration. For a number of years the aerospace industry has been sponsoring research into Blade Tip Timing technologies as tools to obtain rotor blade tip deflections. These have been successful in demonstrating the potential of the technology, but have rarely produced quantitative data along with a demonstration of a traceable value for measurement uncertainty. BTT technologies have been developed under a cloak of secrecy by the gas turbine OEMs because of the competitive advantage offered if the technique could be shown to work. BTT measurements are sensitive to many variables, and there is a need to quantify the measurement uncertainty of the complete technology and to define a set of guidelines for how BTT should be applied to different vehicles. The data shown in figure 2 were developed from a US government sponsored program that brought together four different tip timing systems and a gas turbine engine test. Comparisons showed that they were just capable of obtaining measurements within a ±25% uncertainty band when compared to strain gauges, even when using the same input data sets.
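
    The basic measurement principle converts an arrival-time offset into a tip deflection via the tip speed, d = r Ω Δt. A toy calculation with invented numbers:

    ```python
    import numpy as np

    r = 0.35                        # blade tip radius, m (hypothetical)
    rpm = 12000.0
    omega = 2 * np.pi * rpm / 60.0  # shaft speed, rad/s
    dt = 1.5e-6                     # arrival-time offset vs. undeflected blade, s
    deflection = r * omega * dt     # tip deflection, m
    print(f"tip deflection = {deflection * 1e3:.3f} mm")
    ```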

  6. Uncertainty propagation in nuclear forensics.

    PubMed

    Pommé, S; Jerome, S M; Venchiarutti, C

    2014-07-01

    Uncertainty propagation formulae are presented for age dating in support of nuclear forensics. The age of radioactive material in this context refers to the time elapsed since a particular radionuclide was chemically separated from its decay product(s). The decay of the parent radionuclide and ingrowth of the daughter nuclide are governed by statistical decay laws. Mathematical equations allow calculation of the age of specific nuclear material through the atom ratio between parent and daughter nuclides, or through the activity ratio provided that the daughter nuclide is also unstable. The derivation of the uncertainty formulae of the age may present some difficulty to the user community and so the exact solutions, some approximations, a graphical representation and their interpretation are presented in this work. Typical nuclides of interest are actinides in the context of non-proliferation commitments. The uncertainty analysis is applied to a set of important parent-daughter pairs and the need for more precise half-life data is examined.
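
    As a concrete instance of the atom-ratio chronometry and its first-order uncertainty propagation, here is a sketch for the simplest case of an effectively stable daughter (all numbers are illustrative, not evaluated nuclear data):

    ```python
    import numpy as np

    # Age from the parent/daughter atom ratio R = N_d/N_p, assuming the daughter
    # is effectively stable on the timescale of interest: t = ln(1 + R) / lambda_p.
    t_half = 24110.0                       # parent half-life, years (illustrative)
    lam = np.log(2) / t_half
    u_lam = lam * 0.001                    # 0.1% relative half-life uncertainty

    R, u_R = 1.20e-3, 0.02e-3              # measured atom ratio and its uncertainty

    t = np.log1p(R) / lam                  # age since chemical separation
    dt_dR = 1.0 / (lam * (1.0 + R))        # first-order sensitivities
    dt_dlam = -t / lam
    u_t = np.hypot(dt_dR * u_R, dt_dlam * u_lam)
    print(f"age = {t:.2f} +/- {u_t:.2f} years")
    ```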

  7. Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation

    NASA Astrophysics Data System (ADS)

    Tsai, Frank T.-C.; Elshall, Ahmed S.

    2013-09-01

    Analysts are often faced with competing propositions for each uncertain model component. How can we judge whether we have selected the correct proposition(s) for an uncertain model component out of numerous possibilities? We introduce the hierarchical Bayesian model averaging (HBMA) method as a multimodel framework for uncertainty analysis. The HBMA allows for segregating, prioritizing, and evaluating different sources of uncertainty and their corresponding competing propositions through a hierarchy of BMA models that forms a BMA tree. We apply the HBMA to conduct uncertainty analysis on the reconstructed hydrostratigraphic architectures of the Baton Rouge aquifer-fault system, Louisiana. Due to uncertainty in model data, structure, and parameters, multiple possible hydrostratigraphic models are produced and calibrated as base models. The study considers four sources of uncertainty. With respect to data uncertainty, the study considers two calibration data sets. With respect to model structure, the study considers three different variogram models, two geological stationarity assumptions, and two fault conceptualizations. The base models are produced following a combinatorial design to allow for uncertainty segregation. Thus, these four uncertain model components with their corresponding competing model propositions result in 24 base models. The results show that the systematic dissection of the uncertain model components, along with their corresponding competing propositions, allows for detecting the robust model propositions and the major sources of uncertainty.
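
    At each node of such a BMA tree, predictions from competing propositions are combined with posterior model weights. A minimal flat-BMA sketch (not the full hierarchy), using hypothetical predictions and BIC-approximated weights:

    ```python
    import numpy as np

    # Flat BMA toy: combine predictions from competing calibrated models using
    # BIC-approximated posterior model weights. All numbers are hypothetical.
    preds = np.array([10.2, 11.5, 9.8, 10.9])     # each model's predicted head, m
    bic = np.array([104.2, 101.7, 108.9, 103.3])  # BIC of each calibrated model

    w = np.exp(-0.5 * (bic - bic.min()))
    w /= w.sum()                                  # approximate P(model | data)

    bma_mean = w @ preds
    between_var = w @ (preds - bma_mean) ** 2     # between-model variance component
    print(bma_mean, np.sqrt(between_var))
    ```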

  8. Intolerance of uncertainty and adult separation anxiety.

    PubMed

    Boelen, Paul A; Reijntjes, Albert; Carleton, R Nicholas

    2014-01-01

    Intolerance of uncertainty (IU)-the tendency to react negatively to situations that are uncertain-is involved in different anxiety disorders and depression. No studies have yet examined the association between IU and symptoms of adult separation anxiety disorder. However, it is possible that greater difficulties tolerating uncertainties that can occur in relationships with attachment figures inflate fears and worries about the consequences of being separated from these attachment figures. The current study examined the possible role of IU in symptoms of adult separation anxiety disorder, relative to its role in symptoms of generalized anxiety disorder (GAD), obsessive compulsive disorder (OCD), social anxiety, and depression, using self-reported data from 215 undergraduates (92% women) with elevated separation anxiety. Findings showed that IU was significantly associated with symptom levels of separation anxiety disorder, GAD, OCD, social anxiety, and depression (rs > .30). IU continued to explain variance in OCD, social anxiety, and depression (but not GAD and separation anxiety) when controlling for the association of neuroticism, attachment anxiety, and attachment avoidance with these symptoms. Additional findings indicated that IU is more strongly associated with symptoms of GAD, OCD, and social anxiety than symptoms of adult separation anxiety disorder and depression.

  9. Communicating Uncertainties for Microwave-Based ESDRs

    NASA Astrophysics Data System (ADS)

    Wentz, F. J.; Mears, C. A.; Smith, D. K.

    2011-12-01

    Currently, as part of NASA's MEaSUREs program, there is a 25-year archive of consistently-processed and carefully inter-calibrated Earth Science Data Records (ESDRs) consisting of geophysical products derived from satellite microwave radiometers. These products include ocean surface temperature and wind speed, total atmospheric water vapor and cloud water, surface rain rate, and deep-layer averages of atmospheric temperature. The product retrievals are based on a radiative transfer model (RTM) for the surface and intervening atmosphere. Thus, the accuracy of the retrieved products depends on the accuracy of the RTM, on the accuracy of the measured brightness temperatures that serve as inputs to the retrieval algorithm, and on the accuracy of any ancillary data used to adjust for unmeasured geophysical conditions. In addition, for gridded products that are averages over time or space, sampling error can become important. It is important not only to calculate the uncertainties associated with the ESDRs but also to effectively communicate these uncertainties to the users in a way that is helpful for their particular applications. This is a challenging task that will require a multi-faceted approach consisting of (1) error bars assigned to each retrieval, (2) detailed interactive validation reports, and (3) peer-reviewed scientific papers on long-term trends. All of this information needs to be linked to the ESDRs in a manner that facilitates integration into the users' applications. Our talk will discuss the progress we are making in implementing these approaches.

  10. Uncertainty of rotating shadowband irradiometers and Si-pyranometers including the spectral irradiance error

    NASA Astrophysics Data System (ADS)

    Wilbert, Stefan; Kleindiek, Stefan; Nouri, Bijan; Geuder, Norbert; Habte, Aron; Schwandt, Marko; Vignola, Frank

    2016-05-01

    Concentrating solar power projects require accurate direct normal irradiance (DNI) data, including uncertainty specifications, for plant layout and cost calculations. Ground-measured data are necessary to obtain the required level of accuracy and are often obtained with Rotating Shadowband Irradiometers (RSI) that use photodiode pyranometers and correction functions to account for systematic effects. The uncertainty of Si-pyranometers has been investigated, but so far essentially only empirical studies have been published, or decisive uncertainty contributions have had to be estimated from experience in analytical studies. One of the most crucial estimated influences is the spectral irradiance error, because Si-photodiode pyranometers only detect visible and near-infrared radiation and have a spectral response that varies strongly within this wavelength interval. Furthermore, analytic studies have not discussed the role of correction functions and the uncertainty introduced by imperfect shading. In order to further improve the bankability of RSI and Si-pyranometer data, a detailed uncertainty analysis following the Guide to the Expression of Uncertainty in Measurement (GUM) has been carried out. The study defines a method for the derivation of the spectral error and spectral uncertainties and presents quantitative values of the spectral and overall uncertainties. Data from the PSA station in southern Spain were selected for the analysis. Average standard uncertainties for corrected 10 min data of 2 % for global horizontal irradiance (GHI) and 2.9 % for DNI (for GHI and DNI over 300 W/m²) were found for the 2012 yearly dataset when separate GHI and DHI calibration constants were used. The uncertainty at 1 min resolution was also analyzed. The effect of correction functions is significant. The uncertainties found in this study are consistent with the results of previous empirical studies.
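
    The GUM machinery underlying such an analysis is the law of propagation of uncertainty: for a measurement model $y = f(x_1, \dots, x_N)$,

    \[ u_c^2(y) = \sum_{i=1}^{N} \left(\frac{\partial f}{\partial x_i}\right)^2 u^2(x_i) + 2 \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \frac{\partial f}{\partial x_i} \frac{\partial f}{\partial x_j}\, u(x_i, x_j), \]

    where $u(x_i)$ are the standard uncertainties of the inputs and $u(x_i, x_j)$ their covariances; an expanded uncertainty is then $U = k\, u_c(y)$ with coverage factor $k$.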

  11. An update on the uncertainties of water vapor measurements using cryogenic frost point hygrometers

    NASA Astrophysics Data System (ADS)

    Vömel, Holger; Naebert, Tatjana; Dirksen, Ruud; Sommer, Michael

    2016-08-01

    Long time series of observations of essential climate variables in the troposphere and stratosphere are often impacted by inconsistencies in instrumentation and ambiguities in the interpretation of the data. To reduce these problems in long-term data series, all measurements should include an estimate of their uncertainty and a description of its sources. Here we present an update of the uncertainties for tropospheric and stratospheric water vapor observations using the cryogenic frost point hygrometer (CFH). The largest source of measurement uncertainty is the controller stability, which is discussed here in detail. We describe a method to quantify this uncertainty for each profile based on the measurements. We also show the importance of a manufacturer-independent ground check, which is an essential tool for continuously monitoring the uncertainty introduced by instrument variability. A small bias, which had previously been indicated in lower tropospheric measurements, is described here in detail and has been rectified. Under good conditions, the total uncertainty from all sources for frost point or dew point measurements using the CFH can be better than 0.2 K. Systematic errors, which are most likely to impact long-term climate series, are verified to be less than 0.1 K. The impact of the radiosonde pressure uncertainty on the mixing ratio for properly processed radiosondes is considered small. The mixing ratio uncertainty may be as low as 2 to 3 %. The impact of the ambient temperature uncertainty on relative humidity (RH) is generally larger than that of the frost point uncertainty. The relative RH uncertainty may be as low as 2 % in the lower troposphere and 5 % in the tropical tropopause region.

  12. Uncertainty in homology inferences: Assessing and improving genomic sequence alignment

    PubMed Central

    Lunter, Gerton; Rocco, Andrea; Mimouni, Naila; Heger, Andreas; Caldeira, Alexandre; Hein, Jotun

    2008-01-01

    Sequence alignment underpins all of comparative genomics, yet it remains an incompletely solved problem. In particular, the statistical uncertainty within inferred alignments is often disregarded, while parametric or phylogenetic inferences are considered meaningless without confidence estimates. Here, we report on a theoretical and simulation study of pairwise alignments of genomic DNA at human–mouse divergence. We find that >15% of aligned bases are incorrect in existing whole-genome alignments, and we identify three types of alignment error, each leading to systematic biases in all algorithms considered. Careful modeling of the evolutionary process improves alignment quality; however, these improvements are modest compared with the remaining alignment errors, even with exact knowledge of the evolutionary model, emphasizing the need for statistical approaches to account for uncertainty. We develop a new algorithm, Marginalized Posterior Decoding (MPD), which explicitly accounts for uncertainties, is less biased and more accurate than other algorithms we consider, and reduces the proportion of misaligned bases by a third compared with the best existing algorithm. To our knowledge, this is the first nonheuristic algorithm for DNA sequence alignment to show robust improvements over the classic Needleman–Wunsch algorithm. Despite this, considerable uncertainty remains even in the improved alignments. We conclude that a probabilistic treatment is essential, both to improve alignment quality and to quantify the remaining uncertainty. This is becoming increasingly relevant with the growing appreciation of the importance of noncoding DNA, whose study relies heavily on alignments. Alignment errors are inevitable, and should be considered when drawing conclusions from alignments. Software and alignments to assist researchers in doing this are provided at http://genserv.anat.ox.ac.uk/grape/. PMID:18073381

  13. Measurement Issues for Energy Efficient Commercial Buildings: Productivity and Performance Uncertainties

    SciTech Connect

    Jones, D.W.

    2002-05-16

    In previous reports, we have identified two potentially important issues, solutions to which would increase the attractiveness of DOE-developed technologies in commercial buildings energy systems. One issue concerns the fact that, in addition to saving energy, many new technologies offer non-energy benefits that contribute to building productivity (firm profitability). The second issue is that new technologies are typically unproven in the eyes of decision makers and must bear risk premiums that offset cost advantages resulting from laboratory calculations. Even though a compelling case can be made for the importance of these issues, for building decision makers to incorporate them in business decisions and for DOE to use them in R&D program planning, there must be robust empirical evidence of their existence and size. This paper investigates how such measurements could be made and offers recommendations as to preferred options. There is currently little systematic information on either of these concepts in the literature. Of the two, there is somewhat more information on non-energy benefits, but little as regards office buildings. Office building productivity impacts can be observed casually, but must be estimated statistically, because buildings have many interacting attributes and observations based on direct behavior can easily confuse the process of attribution. For example, absenteeism can be easily observed. However, absenteeism may be down because a more healthy space conditioning system was put into place, because the weather was milder, or because firm policy regarding sick days had changed. There is also a general dearth of appropriate information for purposes of estimation. To overcome these difficulties, we propose developing a new data base and applying the technique of hedonic price analysis. This technique has been used extensively in the analysis of residential dwellings. There is also a literature on its application to commercial and industrial
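
    The hedonic approach regresses (log) price or rent on bundles of building attributes, so that the coefficient on an attribute estimates its implicit marginal price. In its usual semi-log form (notation mine, not taken from the report):

    \[ \ln P_i = \beta_0 + \sum_{k} \beta_k x_{ik} + \varepsilon_i, \]

    where $P_i$ is the price or rent of building $i$, $x_{ik}$ its attributes (including energy-related features), and $\beta_k$ the implicit price of attribute $k$.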

  14. Two new kinds of uncertainty relations

    NASA Technical Reports Server (NTRS)

    Uffink, Jos

    1994-01-01

    We review a statistical-geometrical and a generalized entropic approach to the uncertainty principle. Both approaches provide a strengthening and generalization of the standard Heisenberg uncertainty relations, but in different directions.
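
    The best-known generalized entropic relation of the kind reviewed here is the Maassen-Uffink bound: for observables $A$ and $B$ with eigenbases $\{|a_i\rangle\}$ and $\{|b_j\rangle\}$,

    \[ H(A) + H(B) \ge -2 \log_2 c, \qquad c = \max_{i,j} |\langle a_i | b_j \rangle|, \]

    where $H$ denotes the Shannon entropy of the measurement-outcome distribution; unlike the Heisenberg-Robertson bound, the right-hand side is state-independent.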

  15. Brief review of uncertainty quantification for particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Farias, M. H.; Teixeira, R. S.; Koiller, J.; Santos, A. M.

    2016-07-01

    Metrological studies of particle image velocimetry (PIV) are recent in the literature, and attempts to evaluate uncertainty quantification (UQ) for PIV velocity fields are emerging. We therefore present a short review of the main sources of uncertainty in PIV and of the available methodologies for their quantification. In addition, the potential of some mathematical techniques from the area of geometric mechanics and control that could interest the fluids UQ community is highlighted. "We must measure what is measurable and make measurable what cannot be measured" (Galileo)

  16. Adaptive second-order sliding mode control with uncertainty compensation

    NASA Astrophysics Data System (ADS)

    Bartolini, G.; Levant, A.; Pisano, A.; Usai, E.

    2016-09-01

    This paper endows the second-order sliding mode control (2-SMC) approach with additional capabilities of learning and control adaptation. We present a 2-SMC scheme that estimates and compensates for the uncertainties affecting the system dynamics. It also adjusts the discontinuous control effort online, so that it can be reduced to arbitrarily small values. The proposed scheme is particularly useful when the available information regarding the uncertainties is conservative, and the classical `fixed-gain' SMC would inevitably lead to largely oversized discontinuous control effort. Benefits from the viewpoint of chattering reduction are obtained, as confirmed by computer simulations.
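
    For context, the classical fixed-gain baseline that the adaptive scheme improves upon is the super-twisting algorithm, a standard 2-SMC. A minimal sketch on a scalar sliding variable with a bounded matched disturbance (gains and plant invented, not the paper's adaptive scheme):

    ```python
    import numpy as np

    # Super-twisting 2-SMC for s_dot = u + d(t), with |d| and |d_dot| bounded.
    k1, k2 = 2.0, 1.8          # fixed gains sized for the disturbance bound below
    dt, T = 1e-3, 10.0
    s, v = 1.0, 0.0            # sliding variable and integral control term
    for i in range(int(T / dt)):
        d = 0.5 * np.sin(np.pi * i * dt)               # bounded disturbance
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v     # continuous part
        v += -k2 * np.sign(s) * dt                     # integrated discontinuous part
        s += (u + d) * dt
    print(f"|s(T)| = {abs(s):.2e}")   # s converges to a small neighborhood of zero
    ```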

  17. On scale-dependent cosmic shear systematic effects

    NASA Astrophysics Data System (ADS)

    Kitching, T. D.; Taylor, A. N.; Cropper, M.; Hoekstra, H.; Hood, R. K. E.; Massey, R.; Niemi, S.

    2016-01-01

    In this paper, we investigate the impact that realistic scale-dependent systematic effects may have on cosmic shear tomography. We model spatially varying residual galaxy ellipticity and galaxy size variations in weak lensing measurements and propagate these through to predicted changes in the uncertainty and bias of cosmological parameters. We show that the survey strategy - whether it is regular or randomized - is an important factor in determining the impact of a systematic effect: a purely randomized survey strategy produces the smallest biases, at the expense of larger parameter uncertainties, and a very regularized survey strategy produces large biases, but unaffected uncertainties. However, by removing, or modelling, the affected scales (ℓ-modes) in the regular cases the biases are reduced to negligible levels. We find that the integral of the systematic power spectrum is not a good metric for dark energy performance, and we advocate that systematic effects should be modelled accurately in real space, where they enter the measurement process, and their effect subsequently propagated into power spectrum contributions.

  18. When is enough evidence enough? - Using systematic decision analysis and value-of-information analysis to determine the need for further evidence.

    PubMed

    Siebert, Uwe; Rochau, Ursula; Claxton, Karl

    2013-01-01

    Decision analysis (DA) and value-of-information (VOI) analysis provide a systematic, quantitative methodological framework that explicitly considers the uncertainty surrounding the currently available evidence to guide healthcare decisions. In medical decision making under uncertainty, there are two fundamental questions: 1) What decision should be made now given the best available evidence (and its uncertainty)?; 2) Subsequent to the current decision and given the magnitude of the remaining uncertainty, should we gather further evidence (i.e., perform additional studies), and if yes, which studies should be undertaken (e.g., efficacy, side effects, quality of life, costs), and what sample sizes are needed? Using the currently best available evidence, VoI analysis focuses on the likelihood of making a wrong decision if the new intervention is adopted. The value of performing further studies and gathering additional evidence is based on the extent to which the additional information will reduce this uncertainty. A quantitative framework allows for the valuation of the additional information that is generated by further research, and considers the decision maker's objectives and resource constraints. Claxton et al. summarise: "Value of information analysis can be used to inform a range of policy questions including whether a new technology should be approved based on existing evidence, whether it should be approved but additional research conducted or whether approval should be withheld until the additional evidence becomes available." [Claxton K. Value of information entry in Encyclopaedia of Health Economics, Elsevier, forthcoming 2014.] The purpose of this tutorial is to introduce the framework of systematic VoI analysis to guide further research. In our tutorial article, we explain the theoretical foundations and practical methods of decision analysis and value-of-information analysis. To illustrate, we use a simple case example of a foot ulcer (e.g., with
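
    The quantity at the heart of such VoI analyses is the expected value of perfect information (EVPI): the expected gain from resolving all uncertainty before deciding. A toy computation with hypothetical net-benefit distributions:

    ```python
    import numpy as np

    # Toy EVPI for a two-action decision; net-benefit samples are hypothetical
    # draws standing in for a probabilistic decision model.
    rng = np.random.default_rng(1)
    nb = np.column_stack([
        rng.normal(10_000, 2_000, 10_000),   # net benefit of standard care
        rng.normal(10_500, 3_000, 10_000),   # net benefit of new intervention
    ])
    ev_current = nb.mean(axis=0).max()       # best action on current evidence
    ev_perfect = nb.max(axis=1).mean()       # best action per resolved state
    print(f"EVPI = {ev_perfect - ev_current:.0f}")
    ```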

  19. Coping with model uncertainty in data assimilation using optimal mass transport

    NASA Astrophysics Data System (ADS)

    Ning, L.; Carli, F. P.; Ebtehaj, M.; Foufoula-Georgiou, E.; Georgiou, T.

    2013-12-01

    Most data assimilation methods address the problem of optimally combining model predictions with observations in the presence of zero-mean Gaussian random errors. However, in many hydro-meteorological applications, uncertainty in model parameters and/or model structure often results in systematic errors (bias). Examples include the prediction of precipitation or land surface fluxes at the wrong location and/or time due to a drift in the model, unknown initial conditions, or non-additive error amplification. Existing bias-aware data assimilation methods require characterization of the bias in terms of a well-defined set of parameters, or removal of the bias, which is not always feasible. Here we present a new variational data assimilation framework to cope with model bias in a non-parametric fashion via an appropriate 'regularization' of the state evolution dynamics. In the context of weak-constraint 4D-VAR, our method can be seen as enforcing a minimum nonlinear distance (regularization or correction) in the evolution of the state so as to reconcile measurements with errors in the model dynamics. While a quadratic functional is typically sufficient to quantify errors in measurements, errors in the state evolution are most naturally quantified by a transportation metric (Wasserstein metric) originating in the theory of Optimal Mass Transport (OMT). The proposed framework allows the use of additional regularization functionals, such as L1-norm regularization of the state in an appropriately chosen domain, as recently introduced by the authors for states that exhibit sparsity and non-Gaussian priors, such as precipitation and soil moisture. We demonstrate the performance of the proposed method using as an example the 1-D and 2-D advection-diffusion equations with systematic errors in the velocity and diffusivity parameters. Extension to real-world data assimilation settings is currently under way.

  1. Risk Analysis and Uncertainty: Implications for Counselling

    ERIC Educational Resources Information Center

    Hassenzahl, David

    2004-01-01

    Over the past two decades, the risk analysis community has made substantial advances in understanding and describing uncertainty. Uncertainty is ubiquitous, complex, both quantitative and qualitative in nature, and often irreducible. Uncertainty thus creates a challenge when using risk analysis to evaluate the rationality of group and individual…

  2. Regarding Uncertainty in Teachers and Teaching

    ERIC Educational Resources Information Center

    Helsing, Deborah

    2007-01-01

    The literature on teacher uncertainty suggests that it is a significant and perhaps inherent feature of teaching. Yet there is disagreement about the effects of these uncertainties on teachers as well as about the ways that teachers should regard them. Recognition of uncertainties can be viewed alternatively as a liability or an asset to effective…

  3. Numerical approach for quantification of epistemic uncertainty

    NASA Astrophysics Data System (ADS)

    Jakeman, John; Eldred, Michael; Xiu, Dongbin

    2010-06-01

    In the field of uncertainty quantification, uncertainty in the governing equations may assume two forms: aleatory uncertainty and epistemic uncertainty. Aleatory uncertainty can be characterised by known probability distributions whilst epistemic uncertainty arises from a lack of knowledge of probabilistic information. While extensive research efforts have been devoted to the numerical treatment of aleatory uncertainty, little attention has been given to the quantification of epistemic uncertainty. In this paper, we propose a numerical framework for quantification of epistemic uncertainty. The proposed methodology does not require any probabilistic information on uncertain input parameters. The method only necessitates an estimate of the range of the uncertain variables that encapsulates the true range of the input variables with overwhelming probability. To quantify the epistemic uncertainty, we solve an encapsulation problem, which is a solution to the original governing equations defined on the estimated range of the input variables. We discuss solution strategies for solving the encapsulation problem and the sufficient conditions under which the numerical solution can serve as a good estimator for capturing the effects of the epistemic uncertainty. In the case where probability distributions of the epistemic variables become known a posteriori, we can use the information to post-process the solution and evaluate solution statistics. Convergence results are also established for such cases, along with strategies for dealing with mixed aleatory and epistemic uncertainty. Several numerical examples are presented to demonstrate the procedure and properties of the proposed methodology.
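
    A minimal sketch of the encapsulation idea on a toy governing equation (the model, ranges, and grid are invented): solve over a box that conservatively encapsulates the epistemic inputs, report output bounds, and post-process with weights if a distribution later becomes known.

    ```python
    import numpy as np

    def qoi(k, a):
        """Quantity of interest for a toy governing equation u' = -k u, u(0) = a."""
        return a * np.exp(-k * 1.0)               # u at t = 1

    # estimated ranges that encapsulate the true (unknown) input ranges
    k_box = np.linspace(0.5, 2.0, 41)
    a_box = np.linspace(0.8, 1.2, 41)
    K, A = np.meshgrid(k_box, a_box)
    U = qoi(K, A)                                  # encapsulation-problem solutions
    print("epistemic output bounds:", U.min(), U.max())

    # if a distribution becomes known a posteriori, post-process the same solutions
    w_k = np.full_like(k_box, 1.0 / k_box.size)    # e.g. uniform weights on the grid
    w_a = np.full_like(a_box, 1.0 / a_box.size)
    mean = (w_a[:, None] * w_k[None, :] * U).sum()
    print("post-processed mean:", mean)
    ```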

  4. Quantum mechanics and the generalized uncertainty principle

    SciTech Connect

    Bang, Jang Young; Berger, Micheal S.

    2006-12-15

    The generalized uncertainty principle has been described as a general consequence of incorporating a minimal length from a theory of quantum gravity. We consider a simple quantum mechanical model where the operator corresponding to position has discrete eigenvalues and show how the generalized uncertainty principle results for minimum uncertainty wave packets.
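
    The generalized uncertainty principle referred to here is usually written, to leading order in the minimal-length parameter $\beta$, as

    \[ \Delta x\, \Delta p \ge \frac{\hbar}{2}\left(1 + \beta\, (\Delta p)^2\right), \]

    which implies a minimum position uncertainty $\Delta x_{\min} = \hbar \sqrt{\beta}$, attained by the minimum-uncertainty wave packets discussed in the paper.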

  5. Optimization of environmental water purchases with uncertainty

    NASA Astrophysics Data System (ADS)

    Hollinshead, Sarah P.; Lund, Jay R.

    2006-08-01

    Water managers are turning increasingly to market solutions to meet new environmental demands for water in fully allocated systems. This paper presents a three-stage probabilistic optimization model that identifies least cost strategies for staged seasonal water purchases for an environmental water acquisition program given hydrologic, operational, and biological uncertainties. Multistage linear programming is used to minimize the expected cost of long-term, spot, and option water purchases used to meet uncertain environmental demands. Results prescribe the location, timing, and type of optimal water purchases and illustrate how least cost strategies change as information becomes available during the year. Results also provide sensitivity analysis, including shadow values that estimate the expected cost of additional dedicated environmental water. The model's application to California's Environmental Water Account is presented with a discussion of its utility for planning and policy purposes. Model limitations and sensitivity analysis are discussed, as are operational and research recommendations.
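
    The staged-purchase structure can be illustrated with a tiny two-stage stochastic LP solved as its deterministic equivalent (scenario probabilities, demands, and prices are invented; the actual model is multistage and far richer):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Stage 1: buy long-term water x now; stage 2: buy spot water y_s after the
    # hydrologic scenario s is revealed. Minimize expected total cost.
    p = np.array([0.3, 0.5, 0.2])           # scenario probabilities (dry, normal, wet)
    demand = np.array([120.0, 80.0, 40.0])  # environmental demand per scenario
    c_long, c_spot = 50.0, 90.0             # long-term vs. spot price per unit

    # variables [x, y1, y2, y3]; objective c_long*x + sum_s p_s*c_spot*y_s
    c = np.concatenate(([c_long], p * c_spot))
    # coverage constraints x + y_s >= d_s, written as -x - y_s <= -d_s
    A_ub = np.zeros((3, 4))
    A_ub[:, 0] = -1.0
    A_ub[np.arange(3), 1 + np.arange(3)] = -1.0
    res = linprog(c, A_ub=A_ub, b_ub=-demand, bounds=[(0, None)] * 4)
    print(f"long-term purchase: {res.x[0]:.1f}, spot recourse: {res.x[1:].round(1)}")
    ```

    In this toy, the optimum buys long-term water up to the normal-year demand and covers the dry year with spot recourse, mirroring how least cost strategies shift purchases as information becomes available.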

  6. SPIDER. V. Measuring Systematic Effects in Early-type Galaxy Stellar Masses from Photometric Spectral Energy Distribution Fitting

    NASA Astrophysics Data System (ADS)

    Swindle, R.; Gal, R. R.; La Barbera, F.; de Carvalho, R. R.

    2011-10-01

    We present robust statistical estimates of the accuracy of early-type galaxy stellar masses derived from spectral energy distribution (SED) fitting as functions of various empirical and theoretical assumptions. Using large samples consisting of ~40,000 galaxies from the Sloan Digital Sky Survey (SDSS; ugriz), of which ~5000 are also in the UKIRT Infrared Deep Sky Survey (YJHK), with spectroscopic redshifts in the range 0.05 <= z <= 0.095, we test the reliability of some commonly used stellar population models and extinction laws for computing stellar masses. Spectroscopic ages (t), metallicities (Z), and extinctions (AV ) are also computed from fits to SDSS spectra using various population models. These external constraints are used in additional tests to estimate the systematic errors in the stellar masses derived from SED fitting, where t, Z, and AV are typically left as free parameters. We find reasonable agreement in mass estimates among stellar population models, with variation of the initial mass function and extinction law yielding systematic biases on the mass of nearly a factor of two, in agreement with other studies. Removing the near-infrared bands changes the statistical bias in mass by only ~0.06 dex, adding uncertainties of ~0.1 dex at the 95% CL. In contrast, we find that removing an ultraviolet band is more critical, introducing 2σ uncertainties of ~0.15 dex. Finally, we find that the stellar masses are less affected by the absence of metallicity and/or dust extinction knowledge. However, there is a definite systematic offset in the mass estimate when the stellar population age is unknown, up to a factor of 2.5 for very old (12 Gyr) stellar populations. We present the stellar masses for our sample, corrected for the measured systematic biases due to photometrically determined ages, finding that age errors produce lower stellar masses by ~0.15 dex, with errors of ~0.02 dex at the 95% CL for the median stellar age subsample.

  7. Writing a systematic review.

    PubMed

    Ng, K H; Peh, W C

    2010-05-01

    Evidence-based medicine (EBM) aims to combine the best available scientific evidence with clinical experience and individual judgment of patient needs. In the hierarchy of scientific evidence, systematic reviews (along with meta-analyses) occupy the highest levels in terms of the quality of evidence. A systematic review is the process of searching, selecting, appraising, synthesising and reporting clinical evidence on a particular question or topic. It is currently considered the best, least biased and most rational way to organise, gather, evaluate and integrate scientific evidence from the rapidly-changing medical and healthcare literature. Systematic reviews could be used to present current concepts or serve as review articles and replace the traditional expert opinion or narrative review. This article explains the structure and content of a systematic review.

  8. An introductory guide to uncertainty analysis in environmental and health risk assessment. Environmental Restoration Program

    SciTech Connect

    Hammonds, J.S.; Hoffman, F.O.; Bartell, S.M.

    1994-12-01

    This report presents guidelines for evaluating uncertainty in mathematical equations and computer models applied to assess human health and environmental risk. Uncertainty analyses involve the propagation of uncertainty in model parameters and model structure to obtain confidence statements for the estimate of risk and to identify the model components of dominant importance. Uncertainty analyses are required when there is no a priori knowledge about uncertainty in the risk estimate and when there is a chance that the failure to assess uncertainty may lead to the selection of wrong options for risk reduction. Uncertainty analyses are most effective when conducted in an iterative mode. When the uncertainty in the risk estimate is intolerable for decision-making, additional data are acquired for the dominant model components that contribute most to uncertainty. This process is repeated until the level of residual uncertainty can be tolerated. Analytical and numerical methods for error propagation are presented, along with methods for identifying the most important contributors to uncertainty. Monte Carlo simulation with either Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) is proposed as the most robust method for propagating uncertainty through either simple or complex models. A distinction is made between simulating a stochastically varying assessment endpoint (i.e., the distribution of individual risks in an exposed population) and quantifying uncertainty due to lack of knowledge about a fixed but unknown quantity (e.g., a specific individual, the maximally exposed individual, or the mean, median, or 95th percentile of the distribution of exposed individuals). Emphasis is placed on the need for subjective judgement to quantify uncertainty when relevant data are absent or incomplete.
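
    A minimal sketch contrasting SRS and LHS for propagating uncertainty through a toy risk model (the model and input distributions are invented):

    ```python
    import numpy as np
    from scipy.stats import qmc, norm

    def risk(dose, slope):
        return dose * slope                      # toy linear risk model

    n, rng = 1000, np.random.default_rng(0)

    # Simple Random Sampling: independent draws from the input distributions
    srs = risk(rng.lognormal(0.0, 0.5, n), rng.normal(1e-3, 1e-4, n))

    # Latin Hypercube Sampling: stratified uniforms mapped through inverse CDFs
    u = qmc.LatinHypercube(d=2, seed=0).random(n)
    lhs = risk(np.exp(norm.ppf(u[:, 0], 0.0, 0.5)), norm.ppf(u[:, 1], 1e-3, 1e-4))

    for name, s in [("SRS", srs), ("LHS", lhs)]:
        print(name, s.mean(), np.percentile(s, 95))
    ```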

  9. Uncertainty quantification in coronary blood flow simulations: Impact of geometry, boundary conditions and blood viscosity.

    PubMed

    Sankaran, Sethuraman; Kim, Hyun Jin; Choi, Gilwoo; Taylor, Charles A

    2016-08-16

    Computational fluid dynamic methods are currently being used clinically to simulate blo