Sample records for expanded uncertainties calculated

  1. Expanded uncertainty associated with determination of isotope enrichment factors: Comparison of two point calculation and Rayleigh-plot.

    PubMed

    Julien, Maxime; Gilbert, Alexis; Yamada, Keita; Robins, Richard J; Höhener, Patrick; Yoshida, Naohiro; Remaud, Gérald S

    2018-01-01

    The enrichment factor (ε) is a common way to express Isotope Effects (IEs) associated with a phenomenon. Many studies determine ε using a Rayleigh-plot, which needs multiple data points. More recent articles describe an alternative method using the Rayleigh equation that allows ε to be determined from only one experimental point, but this method is often subject to controversy. However, a calculation using two points (one experimental point and one at t = 0) should lead to the same result, because the calculation is derived from the Rayleigh equation. It is nevertheless frequently asked "what is the valid domain of use of this two point calculation?" The primary aim of the present work is a systematic comparison of results obtained with these two methodologies and the determination of the conditions required for the valid calculation of ε. In order to evaluate the efficiency of the two approaches, the expanded uncertainty (U) associated with determining ε has been calculated using experimental data from three published articles. The second objective of the present work is to describe how to determine the expanded uncertainty (U) associated with determining ε. Comparative methodologies using both the Rayleigh-plot and the two point calculation are detailed, and it is clearly demonstrated that calculation of ε using a single data point can give the same result as a Rayleigh-plot provided one strict condition is respected: the experimental value must be measured at a small fraction of unreacted substrate (f < 30%). This study will help stable isotope users to present their results in the more rigorous form ε ± U and therefore to better define the significance of an experimental result prior to interpretation. Capsule: The enrichment factor can be determined by two different methods, and calculation of the associated expanded uncertainty allows its significance to be checked. Copyright © 2017 Elsevier B.V. All rights reserved.
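
    The two approaches can be illustrated with a short sketch. The following Python snippet (not from the paper; the δ values are made up) assumes the usual per-mil δ-notation form of the Rayleigh equation, ln((1000 + δ)/(1000 + δ0)) = (ε/1000)·ln f, with f the fraction of unreacted substrate, and computes ε both from a Rayleigh-plot regression and from a single data point.

    ```python
    import numpy as np

    def epsilon_two_point(delta_0, delta_f, f):
        """Two-point enrichment factor (per mil): one experimental point at
        remaining substrate fraction f plus the initial composition."""
        return 1000.0 * np.log((1000.0 + delta_f) / (1000.0 + delta_0)) / np.log(f)

    def epsilon_rayleigh_plot(delta_0, deltas, fs):
        """Rayleigh-plot enrichment factor (per mil): slope of
        ln((1000 + delta)/(1000 + delta_0)) versus ln(f)."""
        x = np.log(np.asarray(fs))
        y = np.log((1000.0 + np.asarray(deltas)) / (1000.0 + delta_0))
        slope, _ = np.polyfit(x, y, 1)
        return 1000.0 * slope

    # Illustrative synthetic data generated from an assumed epsilon of -25 per mil
    delta_0 = -30.0
    fs = np.array([0.8, 0.6, 0.4, 0.2])
    deltas = (1000.0 + delta_0) * fs ** (-25.0 / 1000.0) - 1000.0

    print(epsilon_rayleigh_plot(delta_0, deltas, fs))      # ~ -25.0
    print(epsilon_two_point(delta_0, deltas[-1], fs[-1]))  # ~ -25.0 (f = 0.2, i.e. f < 30%)
    ```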

  2. Method for estimating effects of unknown correlations in spectral irradiance data on uncertainties of spectrally integrated colorimetric quantities

    NASA Astrophysics Data System (ADS)

    Kärhä, Petri; Vaskuri, Anna; Mäntynen, Henrik; Mikkonen, Nikke; Ikonen, Erkki

    2017-08-01

    Spectral irradiance data are often used to calculate colorimetric properties, such as color coordinates and color temperatures of light sources, by integration. The spectral data may contain unknown correlations that should be accounted for in the uncertainty estimation. We propose a new method for estimating uncertainties in such cases. The method goes through all possible scenarios of deviations using Monte Carlo analysis. Varying spectral error functions are produced by combining spectral base functions, and the distorted spectra are used to calculate the colorimetric quantities. Standard deviations of the colorimetric quantities in the different scenarios give uncertainties assuming no correlations, uncertainties assuming full correlation, and uncertainties for an unfavorable case of unknown correlations, which turn out to be a significant source of uncertainty. With 1% standard uncertainty in spectral irradiance, the expanded uncertainty of the correlated color temperature of a source corresponding to the CIE Standard Illuminant A may reach 37.2 K in unfavorable conditions, whereas calculations assuming full correlation give zero uncertainty and calculations assuming no correlations yield expanded uncertainties of 5.6 K and 12.1 K with wavelength steps of 1 nm and 5 nm used in the spectral integrations, respectively. We also show that there is an absolute limit of 60.2 K in the error of the correlated color temperature for Standard Illuminant A when assuming 1% standard uncertainty in the spectral irradiance. A comparison of our uncorrelated uncertainties with those obtained using analytical methods by other research groups shows good agreement. We re-estimated the uncertainties for the colorimetric properties of our 1 kW photometric standard lamps using the new method. The revised uncertainty of color temperature is a factor of 2.5 higher than the uncertainty assuming no correlations.
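
    The effect of the assumed correlation structure can be demonstrated with a deliberately simplified Monte Carlo sketch (not the authors' base functions and not an actual color-temperature computation): a toy ratio-type spectrally integrated quantity is evaluated under uncorrelated errors, a fully correlated scale error, and smooth error functions built from a few sinusoidal base functions, all scaled to a 1% standard uncertainty.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    wl = np.arange(380.0, 781.0, 5.0)                        # wavelength grid, nm
    spectrum = np.exp(-0.5 * ((wl - 560.0) / 80.0) ** 2)     # toy spectral irradiance
    w1 = np.exp(-0.5 * ((wl - 600.0) / 40.0) ** 2)           # toy weighting functions standing in
    w2 = np.exp(-0.5 * ((wl - 550.0) / 40.0) ** 2)           # for color-matching functions

    def ratio(s):
        # Ratio of two weighted integrals; chromaticity-like quantities have this
        # form, so a common scale error cancels exactly.
        return np.trapz(w1 * s, wl) / np.trapz(w2 * s, wl)

    u_rel, n_trials, q0 = 0.01, 5000, ratio(spectrum)
    for case in ("no correlation", "full correlation", "base functions"):
        vals = np.empty(n_trials)
        for i in range(n_trials):
            if case == "no correlation":
                err = rng.normal(0.0, u_rel, wl.size)            # independent per wavelength
            elif case == "full correlation":
                err = rng.normal(0.0, u_rel) * np.ones(wl.size)  # common scale error
            else:
                err = np.zeros(wl.size)                          # smooth, unknown-correlation errors
                for k in range(1, 4):
                    phase = rng.uniform(0.0, 2.0 * np.pi)
                    err += rng.normal(0.0, 1.0) * np.sin(
                        k * np.pi * (wl - wl[0]) / (wl[-1] - wl[0]) + phase)
                err *= u_rel / np.std(err)                       # rescale to 1% standard deviation
            vals[i] = ratio(spectrum * (1.0 + err))
        print(case, np.std(vals) / q0)   # relative uncertainty of the toy quantity
    ```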

  3. Ab Initio Calculated Results Require New Formulations for Properties in the Limit of Zero Density: The Viscosity of Methane (CH4)

    NASA Astrophysics Data System (ADS)

    Laesecke, Arno; Muzny, Chris D.

    2017-12-01

    A wide-ranging formulation for the viscosity of methane in the limit of zero density is presented. Using ab initio calculated data of Hellmann et al. (J Chem Phys 129, 064302, 2008) from 80 K to 1500 K, the functional form was developed by guided symbolic regression with the constraints of correct extrapolation to T → 0 and in the high-temperature limit. The formulation was adjusted to the recalibrated experimental data of May et al. (Int J Thermophys 28, 1085-1110, 2007) so that these are represented within their estimated expanded uncertainty of 0.053 % (k = 2) in their temperature range from 210.756 K to 391.551 K. Based on comparisons with original data and recalibrated viscosity ratio measurements, the expanded uncertainty of the new correlation outside this temperature range is estimated to be 0.2 % up to 700 K, 0.5 % up to 1100 K, and 1 % up to 1500 K, with physically correct behavior at higher temperatures. At temperatures below 210 K, the new correlation agrees with recalibrated experimental data within 0.3 % down to 150 K. Hellmann et al. estimated the expanded uncertainty of their calculated data at 1 % down to 80 K. The new formulation extrapolates without a singularity to T → 0.

  4. A review of the current state-of-the-art methodology for handling bias and uncertainty in performing criticality safety evaluations. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Disney, R.K.

    1994-10-01

    The methodology for handling bias and uncertainty when calculational methods are used in criticality safety evaluations (CSEs) is a rapidly evolving technology. The changes in the methodology are driven by a number of factors. One factor responsible for changes in the methodology for handling bias and uncertainty in CSEs within the overview of the US Department of Energy (DOE) is a shift in the overview function from a "site" perception to a more uniform or "national" perception. Other causes for change or improvement in the methodology for handling calculational bias and uncertainty are: (1) an increased demand for benchmark criticals data to expand the area (range) of applicability of existing data, (2) a demand for new data to supplement existing benchmark criticals data, (3) the increased reliance on (or need for) computational benchmarks which supplement (or replace) experimental measurements in critical assemblies, and (4) an increased demand for benchmark data applicable to the expanded range of conditions and configurations encountered in DOE site restoration and remediation.

  5. Evaluation strategies and uncertainty calculation of isotope amount ratios measured by MC ICP-MS on the example of Sr.

    PubMed

    Horsky, Monika; Irrgeher, Johanna; Prohaska, Thomas

    2016-01-01

    This paper critically reviews the state-of-the-art of isotope amount ratio measurements by solution-based multi-collector inductively coupled plasma mass spectrometry (MC ICP-MS) and presents guidelines for corresponding data reduction strategies and uncertainty assessments based on the example of n(87Sr)/n(86Sr) isotope ratios. This ratio shows variation attributable to natural radiogenic processes and mass-dependent fractionation. The applied calibration strategies can display these differences. In addition, a proper statement of uncertainty of measurement, including all relevant influence quantities, is a metrological prerequisite. A detailed instructive procedure for the calculation of combined uncertainties is presented for Sr isotope amount ratios using three different strategies of correction for instrumental isotopic fractionation (IIF): traditional internal correction, standard-sample bracketing, and a combination of both, using Zr as internal standard. Uncertainties are quantified by means of a Kragten spreadsheet approach, including the consideration of correlations between individual input parameters to the model equation. The resulting uncertainties are compared with uncertainties obtained from the partial derivatives approach and Monte Carlo propagation of distributions. We obtain relative expanded uncertainties (Urel; k = 2) of n(87Sr)/n(86Sr) of < 0.03 %, when normalization values are not propagated. A comprehensive propagation, including certified values and the internal normalization ratio in nature, increases relative expanded uncertainties by about a factor of two, and the correction for IIF becomes the major contributor.
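
    The Kragten approach named above can be sketched in a few lines: each input is shifted by its standard uncertainty, the resulting changes in the output are recorded, and the changes are combined in quadrature. The example below is illustrative only (hypothetical ratio values and uncertainties, correlations between inputs neglected) and uses an exponential-law internal normalization of a measured 87Sr/86Sr ratio as the model equation.

    ```python
    import numpy as np

    def kragten_uncertainty(model, x, u):
        """Kragten-style numerical propagation: shift each input by its standard
        uncertainty, record the change in the result, and combine the changes in
        quadrature (correlations between inputs are neglected in this sketch)."""
        y0 = model(x)
        contributions = {}
        for name in x:
            shifted = dict(x)
            shifted[name] = x[name] + u[name]
            contributions[name] = model(shifted) - y0
        u_c = np.sqrt(sum(c ** 2 for c in contributions.values()))
        return y0, u_c, contributions

    # Hypothetical model equation: exponential-law internal normalization of a
    # measured 87Sr/86Sr ratio against the 88Sr/86Sr reference ratio; the ratio
    # values and uncertainties below are illustrative only.
    def corrected_ratio(p):
        beta = np.log(p["r88_86_ref"] / p["r88_86_meas"]) / np.log(p["m88"] / p["m86"])
        return p["r87_86_meas"] * (p["m87"] / p["m86"]) ** beta

    x = {"r87_86_meas": 0.71000, "r88_86_meas": 8.3500, "r88_86_ref": 8.375209,
         "m86": 85.9092607, "m87": 86.9088775, "m88": 87.9056122}
    u = {"r87_86_meas": 0.00005, "r88_86_meas": 0.0008, "r88_86_ref": 0.00007,
         "m86": 0.0, "m87": 0.0, "m88": 0.0}

    y0, u_c, contrib = kragten_uncertainty(corrected_ratio, x, u)
    print(y0, u_c, 2.0 * u_c)   # corrected ratio, combined and expanded (k = 2) uncertainty
    ```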

  6. Determination of boron in uranium aluminum silicon alloy by spectrophotometry and estimation of expanded uncertainty in measurement

    NASA Astrophysics Data System (ADS)

    Ramanjaneyulu, P. S.; Sayi, Y. S.; Ramakumar, K. L.

    2008-08-01

    Quantification of boron in diverse materials of relevance in nuclear technology is essential in view of its high thermal neutron absorption cross section. A simple and sensitive method has been developed for the determination of boron in uranium-aluminum-silicon alloy, based on leaching of boron with 6 M HCl and H2O2, its selective separation by solvent extraction with 2-ethyl-1,3-hexanediol and quantification by spectrophotometry using curcumin. The method has been evaluated by the standard addition method and validated by inductively coupled plasma-atomic emission spectroscopy. The relative standard deviation and absolute detection limit of the method are 3.0% (at the 1σ level) and 12 ng, respectively. All possible sources of uncertainty in the methodology have been individually assessed, following the International Organization for Standardization guidelines. The combined uncertainty is calculated employing uncertainty propagation formulae. The expanded uncertainty in the measurement at the 95% confidence level (coverage factor 2) is 8.840%.

  7. [Uncertainty evaluation of the determination of toxic equivalent quantity of polychlorinated dibenzo-p-dioxins and dibenzofurans in soil by isotope dilution high resolution gas chromatography and high resolution mass spectrometry].

    PubMed

    Du, Bing; Liu, Aimin; Huang, Yeru

    2014-09-01

    Polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in soil samples were analyzed by the isotope dilution method with high resolution gas chromatography and high resolution mass spectrometry (ID-HRGC/HRMS), and the toxic equivalent quantity (TEQ) was calculated. The impacts of the major sources of measurement uncertainty are discussed, and the combined relative standard uncertainties were calculated for each 2,3,7,8-substituted congener. Furthermore, the concentration, combined uncertainty and expanded uncertainty for the TEQ of PCDD/Fs in a soil sample in the I-TEF, WHO-1998-TEF and WHO-2005-TEF schemes are provided as an example. I-TEF, WHO-1998-TEF and WHO-2005-TEF are evaluation schemes for the toxic equivalent factor (TEF), and all are currently used to describe the relative potencies of the 2,3,7,8-substituted congeners.
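
    The TEQ is the TEF-weighted sum of the congener concentrations, so its uncertainty can be propagated directly from the per-congener uncertainties. The sketch below uses hypothetical concentrations and relative standard uncertainties together with WHO-2005 TEFs, and treats the congener uncertainties as independent and the TEF values as exact, which is a simplification.

    ```python
    import numpy as np

    # Hypothetical per-congener data: concentration (pg/g), relative standard
    # uncertainty of that concentration, and WHO-2005 TEF; values are illustrative.
    congeners = {
        "2,3,7,8-TCDD":      (1.2, 0.10, 1.0),
        "1,2,3,7,8-PeCDD":   (2.5, 0.12, 1.0),
        "2,3,4,7,8-PeCDF":   (4.0, 0.11, 0.3),
        "1,2,3,4,7,8-HxCDD": (3.1, 0.13, 0.1),
        "OCDD":              (150.0, 0.15, 0.0003),
    }

    # TEQ is the TEF-weighted sum of the congener concentrations
    teq = sum(c * tef for c, _, tef in congeners.values())

    # Combined standard uncertainty, assuming independent congener uncertainties
    # and exact TEF values (a simplification)
    u_teq = np.sqrt(sum((c * tef * u_rel) ** 2 for c, u_rel, tef in congeners.values()))
    U_teq = 2.0 * u_teq   # expanded uncertainty, coverage factor k = 2

    print(f"TEQ = {teq:.2f} pg TEQ/g, u = {u_teq:.2f}, U (k = 2) = {U_teq:.2f}")
    ```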

  8. Systematic and statistical uncertainties in simulated r-process abundances due to uncertain nuclear masses

    DOE PAGES

    Surman, Rebecca; Mumpower, Matthew; McLaughlin, Gail

    2017-02-27

    Unknown nuclear masses are a major source of nuclear physics uncertainty for r-process nucleosynthesis calculations. Here we examine the systematic and statistical uncertainties that arise in r-process abundance predictions due to uncertainties in the masses of nuclear species on the neutron-rich side of stability. There is a long history of examining systematic uncertainties by the application of a variety of different mass models to r-process calculations. Here we expand upon such efforts by examining six DFT mass models, where we capture the full impact of each mass model by updating the other nuclear properties — including neutron capture rates, β-decay lifetimes, and β-delayed neutron emission probabilities — that depend on the masses. Unlike systematic effects, statistical uncertainties in the r-process pattern have just begun to be explored. Here we apply a global Monte Carlo approach, starting from the latest FRDM masses and considering random mass variations within the FRDM rms error. Here, we find in each approach that uncertain nuclear masses produce dramatic uncertainties in calculated r-process yields, which can be reduced in upcoming experimental campaigns.

  9. Systematic and statistical uncertainties in simulated r-process abundances due to uncertain nuclear masses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Surman, Rebecca; Mumpower, Matthew; McLaughlin, Gail

    Unknown nuclear masses are a major source of nuclear physics uncertainty for r-process nucleosynthesis calculations. Here we examine the systematic and statistical uncertainties that arise in r-process abundance predictions due to uncertainties in the masses of nuclear species on the neutron-rich side of stability. There is a long history of examining systematic uncertainties by the application of a variety of different mass models to r-process calculations. Here we expand upon such efforts by examining six DFT mass models, where we capture the full impact of each mass model by updating the other nuclear properties — including neutron capture rates, β-decay lifetimes, and β-delayed neutron emission probabilities — that depend on the masses. Unlike systematic effects, statistical uncertainties in the r-process pattern have just begun to be explored. Here we apply a global Monte Carlo approach, starting from the latest FRDM masses and considering random mass variations within the FRDM rms error. Here, we find in each approach that uncertain nuclear masses produce dramatic uncertainties in calculated r-process yields, which can be reduced in upcoming experimental campaigns.

  10. CALiPER Exploratory Study: Accounting for Uncertainty in Lumen Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergman, Rolf; Paget, Maria L.; Richman, Eric E.

    2011-03-31

    With a well-defined and shared understanding of uncertainty in lumen measurements, testing laboratories can better evaluate their processes, contributing to greater consistency and credibility of lighting testing, a key component of the U.S. Department of Energy (DOE) Commercially Available LED Product Evaluation and Reporting (CALiPER) program. Reliable lighting testing is a crucial underlying factor contributing toward the success of many energy-efficient lighting efforts, such as the DOE GATEWAY demonstrations, Lighting Facts Label, ENERGY STAR® energy efficient lighting programs, and many others. Uncertainty in measurements is inherent to all testing methodologies, including photometric and other lighting-related testing. Uncertainty exists for all equipment, processes, and systems of measurement in individual as well as combined ways. A major issue with testing and the resulting accuracy of the tests is the uncertainty of the complete process. Individual equipment uncertainties are typically identified, but their relative value in practice and their combined value with other equipment and processes in the same test are elusive concepts, particularly for complex types of testing such as photometry. The total combined uncertainty of a measurement result is important for repeatable and comparative measurements for light emitting diode (LED) products in comparison with other technologies as well as competing products. This study provides a detailed and step-by-step method for determining uncertainty in lumen measurements, working closely with related standards efforts and key industry experts. This report uses the structure proposed in the Guide to the Expression of Uncertainty in Measurement (GUM) for evaluating and expressing uncertainty in measurements. The steps of the procedure are described and a spreadsheet format adapted for integrating sphere and goniophotometric uncertainty measurements is provided for entering parameters, ordering the information, calculating intermediate values and, finally, obtaining expanded uncertainties. Using this basis and examining each step of the photometric measurement and calibration methods, mathematical uncertainty models are developed. Determination of estimated values of input variables is discussed. Guidance is provided for the evaluation of the standard uncertainties of each input estimate, covariances associated with input estimates and the calculation of the result measurements. With this basis, the combined uncertainty of the measurement results and, finally, the expanded uncertainty can be determined.

  11. Uncertainty Measurement for Trace Element Analysis of Uranium and Plutonium Samples by Inductively Coupled Plasma-Atomic Emission Spectrometry (ICP-AES) and Inductively Coupled Plasma-Mass Spectrometry (ICP-MS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallimore, David L.

    2012-06-13

    The measurement uncertainty estimation associated with trace element analysis of impurities in U and Pu was evaluated using the Guide to the Expression of Uncertainty in Measurement (GUM). In this evaluation the uncertainty sources were identified and the standard uncertainties for the components were categorized as either Type A or Type B. The combined standard uncertainty was calculated and a coverage factor k = 2 was applied to obtain the expanded uncertainty, U. The ICP-AES and ICP-MS methods used were developed for the multi-element analysis of U and Pu samples. A typical analytical run consists of standards, process blanks, samples, matrix spiked samples, post-digestion spiked samples and independent calibration verification standards. The uncertainty estimation was performed on U and Pu samples that have been analyzed previously as part of the U and Pu Sample Exchange Programs. Control chart results and data from the U and Pu metal exchange programs were combined with the GUM into a concentration-dependent estimate of the expanded uncertainty. Trace element uncertainties obtained using this model were compared to those obtained for trace element results as part of the exchange programs. This process was completed for all trace elements that were determined to be above the detection limit for the U and Pu samples.

  12. The Revised OB-1 Method for Metal-Water Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westfall, Robert Michael; Wright, Richard Q

    The OB-1 method for the calculation of the minimum critical mass (mcm) of fissile actinides in metal/water systems was described in a 2008 Nuclear Science and Engineering (NS&E) article. The purpose of the present work is to update and expand the application of this method with current nuclear data, including data uncertainties. The mcm and the hypothetical fissile metal density (ρF) in grams of metal/liter are obtained by a fit to values predicted with transport calculations. The input parameters required are thermal values for fission and absorption cross sections and nubar. A factor of (√π)/2 is used to convert to Maxwellian-averaged values. The uncertainties for the fission and capture cross sections and the estimated nubar uncertainties are used to determine the uncertainties in the mcm, either in percent or grams.

  13. Density, refractive index, interfacial tension, and viscosity of ionic liquids [EMIM][EtSO4], [EMIM][NTf2], [EMIM][N(CN)2], and [OMA][NTf2] in dependence on temperature at atmospheric pressure.

    PubMed

    Fröba, Andreas P; Kremer, Heiko; Leipertz, Alfred

    2008-10-02

    The density, refractive index, interfacial tension, and viscosity of ionic liquids (ILs) [EMIM][EtSO4] (1-ethyl-3-methylimidazolium ethylsulfate), [EMIM][NTf2] (1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide), [EMIM][N(CN)2] (1-ethyl-3-methylimidazolium dicyanamide), and [OMA][NTf2] (trioctylmethylammonium bis(trifluoromethylsulfonyl)imide) were studied in dependence on temperature at atmospheric pressure both by conventional techniques and by surface light scattering (SLS). A vibrating tube densimeter was used for the measurement of density at temperatures from (273.15 to 363.15) K, and the results have an expanded uncertainty (k = 2) of ±0.02%. Using an Abbe refractometer, the refractive index was measured for temperatures between (283.15 and 313.15) K with an expanded uncertainty (k = 2) of about ±0.0005. The interfacial tension was obtained from the pendant drop technique at a temperature of 293.15 K with an expanded uncertainty (k = 2) of ±1%. For higher and lower temperatures, the interfacial tension was estimated by an adequate prediction scheme based on the datum at 293.15 K and the temperature dependence of density. For the ILs studied within this work, to a first-order approximation, the quantity directly accessible by the SLS technique was the ratio of surface tension to dynamic viscosity. By combining the experimental results of the SLS technique with density and interfacial tension from conventional techniques, the dynamic viscosity could be obtained for temperatures between (273.15 and 333.15) K with an estimated expanded uncertainty (k = 2) of less than ±3%. The measured density, refractive index, and viscosity are represented by interpolating expressions with differences between the experimental and calculated values that are comparable with but always smaller than the expanded uncertainties (k = 2). Besides a comparison with the literature, the influence of structural variations on the thermophysical properties of the ILs is discussed in detail. The viscosities mostly agree with values reported in the literature within the combined estimated expanded uncertainties (k = 2) of the measurements, while our density and interfacial tension data differ from literature values by more than ±1% and ±5%, respectively.

  14. SUNPLIN: Simulation with Uncertainty for Phylogenetic Investigations

    PubMed Central

    2013-01-01

    Background: Phylogenetic comparative analyses usually rely on a single consensus phylogenetic tree in order to study evolutionary processes. However, most phylogenetic trees are incomplete with regard to species sampling, which may critically compromise analyses. Some approaches have been proposed to integrate non-molecular phylogenetic information into incomplete molecular phylogenies. An expanded tree approach consists of adding missing species to random locations within their clade. The information contained in the topology of the resulting expanded trees can be captured by the pairwise phylogenetic distance between species and stored in a matrix for further statistical analysis. Thus, the random expansion and processing of multiple phylogenetic trees can be used to estimate the phylogenetic uncertainty through a simulation procedure. Because of the computational burden required, unless this procedure is efficiently implemented, the analyses are of limited applicability. Results: In this paper, we present efficient algorithms and implementations for randomly expanding and processing phylogenetic trees so that simulations involved in comparative phylogenetic analysis with uncertainty can be conducted in a reasonable time. We propose algorithms for both randomly expanding trees and calculating distance matrices. We made available the source code, which was written in the C++ language. The code may be used as a standalone program or as a shared object in the R system. The software can also be used as a web service through the link: http://purl.oclc.org/NET/sunplin/. Conclusion: We compare our implementations to similar solutions and show that significant performance gains can be obtained. Our results open up the possibility of accounting for phylogenetic uncertainty in evolutionary and ecological analyses of large datasets. PMID:24229408

  15. SUNPLIN: simulation with uncertainty for phylogenetic investigations.

    PubMed

    Martins, Wellington S; Carmo, Welton C; Longo, Humberto J; Rosa, Thierson C; Rangel, Thiago F

    2013-11-15

    Phylogenetic comparative analyses usually rely on a single consensus phylogenetic tree in order to study evolutionary processes. However, most phylogenetic trees are incomplete with regard to species sampling, which may critically compromise analyses. Some approaches have been proposed to integrate non-molecular phylogenetic information into incomplete molecular phylogenies. An expanded tree approach consists of adding missing species to random locations within their clade. The information contained in the topology of the resulting expanded trees can be captured by the pairwise phylogenetic distance between species and stored in a matrix for further statistical analysis. Thus, the random expansion and processing of multiple phylogenetic trees can be used to estimate the phylogenetic uncertainty through a simulation procedure. Because of the computational burden required, unless this procedure is efficiently implemented, the analyses are of limited applicability. In this paper, we present efficient algorithms and implementations for randomly expanding and processing phylogenetic trees so that simulations involved in comparative phylogenetic analysis with uncertainty can be conducted in a reasonable time. We propose algorithms for both randomly expanding trees and calculating distance matrices. We made available the source code, which was written in the C++ language. The code may be used as a standalone program or as a shared object in the R system. The software can also be used as a web service through the link: http://purl.oclc.org/NET/sunplin/. We compare our implementations to similar solutions and show that significant performance gains can be obtained. Our results open up the possibility of accounting for phylogenetic uncertainty in evolutionary and ecological analyses of large datasets.

  16. Different top-down approaches to estimate measurement uncertainty of whole blood tacrolimus mass concentration values.

    PubMed

    Rigo-Bonnin, Raül; Blanco-Font, Aurora; Canalias, Francesca

    2018-05-08

    Values of the mass concentration of tacrolimus in whole blood are commonly used by clinicians for monitoring the status of a transplant patient and for checking whether the administered dose of tacrolimus is effective. Clinical laboratories must therefore provide results that are as accurate as possible, and measurement uncertainty can help ensure the reliability of these results. The aim of this study was to estimate the measurement uncertainty of whole blood tacrolimus mass concentration values obtained by UHPLC-MS/MS using two top-down approaches: the single laboratory validation approach and the proficiency testing approach. For the single laboratory validation approach, we estimated the uncertainties associated with the intermediate imprecision (using long-term internal quality control data) and the bias (using a certified reference material). Next, we combined them with the uncertainties related to the calibrator-assigned values to obtain a combined uncertainty and, finally, calculated the expanded uncertainty. For the proficiency testing approach, the uncertainty was estimated in a similar way to the single laboratory validation approach, but data from internal and external quality control schemes were used to estimate the uncertainty related to the bias. The estimated expanded uncertainties for the single laboratory validation approach and for proficiency testing using internal and external quality control schemes were 11.8%, 13.2%, and 13.0%, respectively. After performing the two top-down approaches, we observed that their uncertainty results were quite similar. This would confirm that either of the two approaches could be used to estimate the measurement uncertainty of whole blood tacrolimus mass concentration values in clinical laboratories. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
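
    A minimal sketch of the single-laboratory-validation style of calculation (all numbers hypothetical, only roughly in the range reported above): the within-laboratory reproducibility from long-term IQC data is combined in quadrature with a bias component built from the observed bias, the uncertainty of that bias, and the uncertainty of the certified reference value, and the result is expanded with k = 2.

    ```python
    import numpy as np

    def expanded_uncertainty_top_down(cv_iqc, bias_rel, u_cref_rel, n_crm, s_bias_rel, k=2.0):
        """Single-laboratory-validation estimate, in relative terms: combine the
        within-laboratory reproducibility from long-term IQC data with a bias
        component derived from repeated measurements of a certified material."""
        u_rw = cv_iqc                                            # intermediate imprecision
        u_bias = np.sqrt(bias_rel ** 2 +
                         (s_bias_rel / np.sqrt(n_crm)) ** 2 +    # uncertainty of the bias itself
                         u_cref_rel ** 2)                        # uncertainty of the certified value
        u_c = np.sqrt(u_rw ** 2 + u_bias ** 2)
        return k * u_c

    # Illustrative numbers only (percent, relative): IQC CV 4.5%, observed bias 2.0%,
    # spread of CRM results 3.0% over n = 10 runs, certified-value uncertainty 1.5%.
    print(expanded_uncertainty_top_down(4.5, 2.0, 1.5, 10, 3.0))   # ~ 10.5 % (k = 2)
    ```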

  17. Comparison of methods for measuring atmospheric deposition of arsenic, cadmium, nickel and lead.

    PubMed

    Aas, Wenche; Alleman, Laurent Y; Bieber, Elke; Gladtke, Dieter; Houdret, Jean-Luc; Karlsson, Vuokko; Monies, Christian

    2009-06-01

    A comprehensive field intercomparison at four different types of European sites (two rural, one urban and one industrial) comparing three different collectors (wet only, bulk and Bergerhoff samplers) was conducted in the framework of the European Committee for Standardization (CEN) to create a European standard for the deposition of the four elements As, Cd, Ni and Pb. The purpose was to determine whether the proposed methods lead to results within the uncertainty required by the EU's daughter directive (70%). The main conclusion is that a different sampling strategy is needed for rural and industrial sites. Thus, the conclusions on uncertainties and sampling approach are presented separately for the different approaches. The wet only and bulk collector ("bulk bottle method") are comparable at wet rural sites where the total deposition arises mainly from precipitation; the expanded uncertainties when comparing these two types of sampler are below 45% for As, Cd and Pb, and 67% for Ni. At industrial sites and possibly very dry rural and urban sites it is necessary to use Bergerhoff samplers or a "bulk bottle + funnel method". It is not possible to address the total deposition estimation with these methods, but they will give the lowest estimate of the total deposition. The expanded uncertainties when comparing the Bergerhoff and the bulk bottle + funnel methods are below 50% for As and Cd, and 63% for Pb. The uncertainty for Ni was not addressed since the bulk bottle + funnel method did not include a full digestion procedure, which is necessary for sites with high loads of undissolved metals. The lowest estimate can however be calculated by comparing parallel Bergerhoff samplers, where the expanded uncertainty for Ni was 24%. The reproducibility is comparable to the between-sampler/method uncertainties. Sampling and sample preparation proved to be the main factors in the uncertainty budget of deposition measurements.

  18. A probabilistic model for deriving soil quality criteria based on secondary poisoning of top predators. I. Model description and uncertainty analysis.

    PubMed

    Traas, T P; Luttik, R; Jongbloed, R H

    1996-08-01

    In previous studies, the risk of toxicant accumulation in food chains was used to calculate quality criteria for surface water and soil. A simple algorithm was used to calculate maximum permissible concentrations [MPC = no-observed-effect concentration/bioconcentration factor (NOEC/BCF)]. These studies were limited to simple food chains. This study presents a method to calculate MPCs for more complex food webs of predators, expanding the previous method. First, toxicity data (NOECs) for several compounds were corrected for differences between laboratory animals and animals in the wild. Second, for each compound, these NOECs were assumed to be a sample from a log-logistic distribution of mammalian and avian NOECs. Third, bioaccumulation factors (BAFs) for major food items of predators were collected and were assumed to derive from different log-logistic distributions of BAFs. Fourth, MPCs for each compound were calculated using Monte Carlo sampling from the NOEC and BAF distributions. An uncertainty analysis for cadmium was performed to identify the most uncertain parameters of the model. Model analysis indicated that most of the prediction uncertainty of the model can be ascribed to uncertainty in species sensitivity as expressed by the NOECs. A very small proportion of model uncertainty is contributed by BAFs from food webs. Correction factors for the conversion of NOECs from laboratory conditions to the field have some influence on the final value of MPC5, but the total prediction uncertainty of the MPC is quite large. It is concluded that the uncertainty in species sensitivity is quite large; to avoid unethical toxicity testing with mammalian or avian predators, the use of this uncertainty in the proposed method for calculating MPC distributions cannot be avoided. The fifth percentile of the MPC distribution (MPC5) is suggested as a safe value for top predators.
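
    The Monte Carlo step described above can be sketched compactly: draw NOECs and BAFs from log-logistic distributions, form MPC = NOEC/BAF for each draw, and take the fifth percentile as the protective value. The distribution parameters below are hypothetical, and the log-logistic draws are generated by inverting the cumulative distribution function.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    def log_logistic(scale, shape, size):
        """Log-logistic samples via the inverse CDF: x = scale * (u/(1-u))**(1/shape)."""
        u = rng.uniform(0.0, 1.0, size)
        return scale * (u / (1.0 - u)) ** (1.0 / shape)

    # Hypothetical distribution parameters; all values are illustrative only.
    noec = log_logistic(scale=5.0, shape=2.5, size=n)    # field-corrected NOECs, mg/kg food
    baf = log_logistic(scale=20.0, shape=3.0, size=n)    # BAFs, (mg/kg food)/(mg/kg soil)

    mpc = noec / baf                 # maximum permissible soil concentration per draw
    mpc5 = np.percentile(mpc, 5)     # fifth percentile, suggested as the safe value
    print(f"median MPC = {np.median(mpc):.3f} mg/kg soil, MPC5 = {mpc5:.4f} mg/kg soil")
    ```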

  19. A bottom-up approach in estimating the measurement uncertainty and other important considerations for quantitative analyses in drug testing for horses.

    PubMed

    Leung, Gary N W; Ho, Emmie N M; Kwok, W Him; Leung, David K K; Tang, Francis P W; Wan, Terence S M; Wong, April S Y; Wong, Colton H F; Wong, Jenny K Y; Yu, Nola H

    2007-09-07

    Quantitative determination, particularly for threshold substances in biological samples, is much more demanding than qualitative identification. A proper assessment of any quantitative determination is the measurement uncertainty (MU) associated with the determined value. The International Standard ISO/IEC 17025, "General requirements for the competence of testing and calibration laboratories", has more prescriptive requirements on the MU than the document it superseded, ISO/IEC Guide 25. Under the 2005 or 1999 versions of the new standard, an estimation of the MU is mandatory for all quantitative determinations. To comply with the new requirement, a protocol was established in the authors' laboratory in 2001. The protocol has since evolved based on our practical experience, and a refined version was adopted in 2004. This paper describes our approach to establishing the MU, as well as some other important considerations, for the quantification of threshold substances in biological samples as applied in the area of doping control for horses. The testing of threshold substances can be viewed as a compliance test (or testing to a specified limit). As such, it should only be necessary to establish the MU at the threshold level. The steps in the "Bottom-Up" approach adopted by us are similar to those described in the EURACHEM/CITAC guide, "Quantifying Uncertainty in Analytical Measurement". They involve first specifying the measurand, including the relationship between the measurand and the input quantities upon which it depends. This is followed by identifying all applicable uncertainty contributions using a "cause and effect" diagram. The magnitude of each uncertainty component is then calculated and converted to a standard uncertainty. A recovery study is also conducted to determine whether the method bias is significant and whether a recovery (or correction) factor needs to be applied. All standard uncertainties with values greater than 30% of the largest one are then used to derive the combined standard uncertainty. Finally, an expanded uncertainty is calculated at the 99% one-tailed confidence level by multiplying the standard uncertainty by an appropriate coverage factor (k). A sample is considered positive if the determined concentration of the threshold substance exceeds its threshold by more than the expanded uncertainty. In addition, other important considerations, which can have a significant impact on quantitative analyses, are presented.
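
    The decision rule at the end of this workflow can be sketched as follows (hypothetical threshold and uncertainty components; the screening of components above 30% of the largest one is assumed to have been done already): the relative standard uncertainties are combined in quadrature at the threshold level, expanded with a one-tailed 99% coverage factor of about 2.33, and the sample is called positive only if the measured concentration exceeds the threshold by more than that expanded uncertainty.

    ```python
    import numpy as np

    def is_positive(measured, threshold, u_components_rel, k=2.33):
        """Compliance-style decision for a threshold substance: combine the supplied
        relative standard uncertainties in quadrature at the threshold level, expand
        with a one-tailed 99% coverage factor, and flag the sample as positive only
        if the measured concentration exceeds threshold + U."""
        u_rel = np.sqrt(sum(u ** 2 for u in u_components_rel))
        U = k * u_rel * threshold          # expanded uncertainty at the threshold
        return measured > threshold + U, U

    # Illustrative numbers: threshold 2.0 ug/mL and relative standard uncertainty
    # components from, e.g., calibration, recovery, and repeatability.
    positive, U = is_positive(measured=2.4, threshold=2.0,
                              u_components_rel=[0.03, 0.04, 0.05])
    print(positive, U)   # positive only if 2.4 > 2.0 + U (here U is about 0.33)
    ```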

  20. Physics Verification Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William

    The purpose of the verification project is to establish, through rigorous convergence analysis, that each ASC computational physics code correctly implements a set of physics models and algorithms (code verification); to evaluate and analyze the uncertainties of code outputs associated with the choice of temporal and spatial discretization (solution or calculation verification); and to develop and maintain the capability to expand and update these analyses on demand. This presentation describes project milestones.

  1. Minimal position-velocity uncertainty wave packets in relativistic and non-relativistic quantum mechanics

    NASA Astrophysics Data System (ADS)

    Al-Hashimi, M. H.; Wiese, U.-J.

    2009-12-01

    We consider wave packets of free particles with a general energy-momentum dispersion relation E(p). The spreading of the wave packet is determined by the velocity v = ∂E/∂p. The position-velocity uncertainty relation Δx Δv ≥ (1/2)|⟨∂²E/∂p²⟩| is saturated by minimal uncertainty wave packets Φ(p) = A exp(-αE(p) + βp). In addition to the standard minimal Gaussian wave packets corresponding to the non-relativistic dispersion relation E(p) = p²/2m, analytic calculations are presented for the spreading of wave packets with minimal position-velocity uncertainty product for the lattice dispersion relation E(p) = -cos(pa)/(ma²) as well as for the relativistic dispersion relation E(p) = √(p² + m²). The boost properties of moving relativistic wave packets as well as the propagation of wave packets in an expanding Universe are also discussed.

  2. Combining Nordtest method and bootstrap resampling for measurement uncertainty estimation of hematology analytes in a medical laboratory.

    PubMed

    Cui, Ming; Xu, Lili; Wang, Huimin; Ju, Shaoqing; Xu, Shuizhu; Jing, Rongrong

    2017-12-01

    Measurement uncertainty (MU) is a metrological concept, which can be used for objectively estimating the quality of test results in medical laboratories. The Nordtest guide recommends an approach that uses both internal quality control (IQC) and external quality assessment (EQA) data to evaluate the MU. Bootstrap resampling is employed to simulate the unknown distribution based on the mathematical statistics method using an existing small sample of data, where the aim is to transform the small sample into a large sample. However, there have been no reports of the utilization of this method in medical laboratories. Thus, this study applied the Nordtest guide approach based on bootstrap resampling for estimating the MU. We estimated the MU for the white blood cell (WBC) count, red blood cell (RBC) count, hemoglobin (Hb), and platelets (Plt). First, we used 6 months of IQC data and 12 months of EQA data to calculate the MU according to the Nordtest method. Second, we combined the Nordtest method and bootstrap resampling with the quality control data and calculated the MU using MATLAB software. We then compared the MU results obtained using the two approaches. The expanded uncertainty results determined for WBC, RBC, Hb, and Plt using the bootstrap resampling method were 4.39%, 2.43%, 3.04%, and 5.92%, respectively, and 4.38%, 2.42%, 3.02%, and 6.00% with the existing quality control data (U, k = 2). For WBC, RBC, Hb, and Plt, the differences between the results obtained using the two methods were lower than 1.33%. The expanded uncertainty values were all less than the target uncertainties. The bootstrap resampling method allows the statistical analysis of the MU. Combining the Nordtest method and bootstrap resampling is considered a suitable alternative method for estimating the MU. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
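
    The study's calculations were done in MATLAB; the sketch below shows the same idea in Python with hypothetical IQC values and an assumed EQA-derived bias component: the IQC series is resampled with replacement, the coefficient of variation of each resample gives a bootstrap distribution for the within-laboratory component, and this is combined with the bias component and expanded with k = 2, in the Nordtest style.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical 6-month IQC series for one analyte (e.g. a WBC control, 10^9/L);
    # the values are illustrative only.
    iqc = np.array([6.02, 5.95, 6.10, 6.05, 5.88, 6.12, 5.99, 6.03, 6.08, 5.91,
                    6.00, 6.07, 5.97, 6.04, 6.09, 5.94, 6.01, 6.06, 5.92, 6.03])

    n_boot = 10_000
    cvs = np.empty(n_boot)
    for i in range(n_boot):
        sample = rng.choice(iqc, size=iqc.size, replace=True)   # resample with replacement
        cvs[i] = np.std(sample, ddof=1) / np.mean(sample)

    u_rw = np.mean(cvs)          # within-laboratory component from the bootstrap distribution
    u_bias = 0.012               # relative bias component from EQA data (assumed here)
    U = 2.0 * np.sqrt(u_rw ** 2 + u_bias ** 2)   # expanded uncertainty, k = 2
    print(f"u_Rw = {u_rw:.4f}, U (k = 2) = {100 * U:.2f} %")
    ```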

  3. A Gas Pressure Scale Based on Primary Standard Piston Gauges

    PubMed Central

    Olson, Douglas A.; Driver, R. Greg; Bowers, Walter J.

    2010-01-01

    The National Institute of Standards and Technology (NIST) has redefined its gas pressure scale, up to 17 MPa, based on two primary standard piston gauges. The primary standard piston gauges are 35.8 mm in diameter and operate from 20 kPa to 1 MPa. Ten secondary standard piston gauges, two each of five series of the Ruska 2465 type, with successively smaller diameters form the scale extending up to 17 MPa. Six of the piston gauges were directly compared to the primary standards to determine their effective area and expanded (k = 2) uncertainty. Two piston gauges operating to 7 MPa were compared to the 1.4 MPa gauges, and two piston gauges operating to 17 MPa were compared to the 7 MPa gauges. Distortion in the 7 MPa piston gauges was determined by comparing those gauges to a DH Instruments PG7601 type piston gauge, whose distortion was calculated using elasticity theory. The relative standard uncertainties achieved by the primary standards range from 3.0 × 10−6 to 3.2 × 10−6. The relative standard uncertainty of the secondary standards is as low as 4.2 × 10−6 at 300 kPa. The effective areas and uncertainties were validated by comparison to standards of other National Metrology Institutes (NMIs). Results show agreement in all cases to better than the expanded (k = 2) uncertainty of the difference between NIST and the other NMIs, and in most cases to better than the standard (k = 1) uncertainty of the difference. PMID:27134793

  4. Is it necessary to plan with safety margins for actively scanned proton therapy?

    NASA Astrophysics Data System (ADS)

    Albertini, F.; Hug, E. B.; Lomax, A. J.

    2011-07-01

    In radiation therapy, a plan is robust if the calculated and the delivered dose are in agreement, even in the presence of different uncertainties. The current practice is to use safety margins, expanding the clinical target volume sufficiently to account for treatment uncertainties. This, however, might not be ideal for proton therapy, and in particular when using intensity modulated proton therapy (IMPT) plans, as degradation in dose conformity can also be found in the middle of the target, resulting from misalignments of high in-field dose gradients. Single field uniform dose (SFUD) and IMPT plans have been calculated for different anatomical sites, and the need for margins has been assessed by analyzing plan robustness to set-up and range uncertainties. We found that the use of safety margins is a good way to improve plan robustness for SFUD and IMPT plans with low in-field dose gradients, but not necessarily for highly modulated IMPT plans, for which only a marginal improvement in plan robustness could be detected through the definition of a planning target volume.

  5. Particle image velocimetry correlation signal-to-noise ratio metrics and measurement uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Xue, Zhenyu; Charonko, John J.; Vlachos, Pavlos P.

    2014-11-01

    In particle image velocimetry (PIV) the measurement signal is contained in the recorded intensity of the particle image pattern superimposed on a variety of noise sources. The signal-to-noise ratio (SNR) strength governs the resulting PIV cross correlation and ultimately the accuracy and uncertainty of the resulting PIV measurement. Hence we posit that correlation SNR metrics calculated from the correlation plane can be used to quantify the quality of the correlation and the resulting uncertainty of an individual measurement. In this paper we extend the original work by Charonko and Vlachos and present a framework for evaluating the correlation SNR using a set of different metrics, which in turn are used to develop models for uncertainty estimation. Several corrections have been applied in this work. The SNR metrics and corresponding models presented herein are expanded to be applicable to both standard and filtered correlations by applying a subtraction of the minimum correlation value to remove the effect of the background image noise. In addition, the notion of a 'valid' measurement is redefined with respect to the correlation peak width in order to be consistent with uncertainty quantification principles and distinct from an 'outlier' measurement. Finally the type and significance of the error distribution function is investigated. These advancements lead to more robust and reliable uncertainty estimation models compared with the original work by Charonko and Vlachos. The models are tested against both synthetic benchmark data as well as experimental measurements. In this work, U68.5 uncertainties are estimated at the 68.5% confidence level while U95 uncertainties are estimated at the 95% confidence level. For all cases the resulting calculated coverage factors approximate the expected theoretical confidence intervals, thus demonstrating the applicability of these new models for estimation of uncertainty for individual PIV measurements.

  6. Collision judgment when viewing minified images through a HMD visual field expander

    NASA Astrophysics Data System (ADS)

    Luo, Gang; Lichtenstein, Lee; Peli, Eli

    2007-02-01

    Purpose: Patients with tunnel vision have great difficulties in mobility. We have developed an augmented vision head mounted device, which can provide patients with a 5x expanded field by superimposing minified edge images of a wider field, captured by a miniature video camera, over the natural view seen through the display. In the minified display, objects appear closer to the heading direction than they really are. This might cause users to overestimate collision risks, and therefore to perform unnecessary obstacle-avoidance maneuvers. A study was conducted in a virtual environment to test the impact of the minified view on collision judgment. Methods: Simulated scenes were presented to subjects as if they were walking in a shopping mall corridor. Subjects reported whether they would make any contact with stationary obstacles that appeared at variable distances from their walking path. Perceived safe passing distance (PSPD) was calculated by finding the transition point from reports of yes to no. Decision uncertainty was quantified by the sharpness of the transition. Collision envelope (CE) size was calculated by summing the PSPD for the left and right sides. Ten normally sighted subjects were tested (1) when not using the device and with one eye patched, and (2) when the see-through view of the device was blocked and only minified images were visible. Results: The use of the 5x minification device caused only an 18% increase in CE (13 cm, p=0.048). No significant impact of the device on judgment uncertainty was found (p=0.089). Conclusion: Minification had only a small impact on collision judgment. This supports the use of such a minifying device as an effective field expander for patients with tunnel vision.

  7. Enhancing the Characterization of Epistemic Uncertainties in PM2.5 Risk Analyses.

    PubMed

    Smith, Anne E; Gans, Will

    2015-03-01

    The Environmental Benefits Mapping and Analysis Program (BenMAP) is a software tool developed by the U.S. Environmental Protection Agency (EPA) that is widely used inside and outside of EPA to produce quantitative estimates of public health risks from fine particulate matter (PM2.5). This article discusses the purpose and appropriate role of a risk analysis tool to support risk management deliberations, and evaluates the functions of BenMAP in this context. It highlights the importance in quantitative risk analyses of characterization of epistemic uncertainty, or outright lack of knowledge, about the true risk relationships being quantified. This article describes and quantitatively illustrates sensitivities of PM2.5 risk estimates to several key forms of epistemic uncertainty that pervade those calculations: the risk coefficient, shape of the risk function, and the relative toxicity of individual PM2.5 constituents. It also summarizes findings from a review of U.S.-based epidemiological evidence regarding the PM2.5 risk coefficient for mortality from long-term exposure. That review shows that the set of risk coefficients embedded in BenMAP substantially understates the range in the literature. We conclude that BenMAP would more usefully fulfill its role as a risk analysis support tool if its functions were extended to better enable and prompt its users to characterize the epistemic uncertainties in their risk calculations. This requires expanded automatic sensitivity analysis functions and more recognition of the full range of uncertainty in risk coefficients. © 2014 Society for Risk Analysis.

  8. Multicenter Evaluation of Cystatin C Measurement after Assay Standardization.

    PubMed

    Bargnoux, Anne-Sophie; Piéroni, Laurence; Cristol, Jean-Paul; Kuster, Nils; Delanaye, Pierre; Carlier, Marie-Christine; Fellahi, Soraya; Boutten, Anne; Lombard, Christine; González-Antuña, Ana; Delatour, Vincent; Cavalier, Etienne

    2017-04-01

    Since 2010, a certified reference material ERM-DA471/IFCC has been available for cystatin C (CysC). This study aimed to assess the sources of uncertainty in results for clinical samples measured using standardized assays. This evaluation was performed in 2015 and involved 7 clinical laboratories located in France and Belgium. CysC was measured in a panel of 4 serum pools using 8 automated assays and a candidate isotope dilution mass spectrometry reference measurement procedure. Sources of uncertainty (imprecision and bias) were evaluated to calculate the relative expanded combined uncertainty for each CysC assay. Uncertainty was judged against the performance specifications derived from the biological variation model. Only Siemens reagents on the Siemens systems and, to a lesser extent, DiaSys reagents on the Cobas system, provided results that met the minimum performance criterion calculated according to the intraindividual and interindividual biological variations. Although the imprecision was acceptable for almost all assays, an increase in the bias with concentration was observed for Gentian reagents, and unacceptably high biases were observed for Abbott and Roche reagents on their own systems. This comprehensive picture of the market situation since the release of ERM-DA471/IFCC shows that bias remains the major component of the combined uncertainty because of possible problems associated with the implementation of traceability. Although some manufacturers have clearly improved their calibration protocols relative to ERM-DA471, most of them failed to meet the criteria for acceptable CysC measurements. © 2016 American Association for Clinical Chemistry.

  9. Angular filter refractometry analysis using simulated annealing.

    PubMed

    Angland, P; Haberberger, D; Ivancic, S T; Froula, D H

    2017-10-01

    Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas [Haberberger et al., Phys. Plasmas 21, 056304 (2014)]. A new method of analysis for AFR images was developed using an annealing algorithm to iteratively converge upon a solution. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison is optimized. The optimization and statistical uncertainty calculation is based on the minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5%-20% in the region of interest.
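
    The annealing idea can be illustrated with a generic sketch (not the AFR forward model; a toy exponential profile with three parameters stands in for the eight-parameter density profile): random parameter perturbations are accepted with a temperature-dependent Metropolis criterion while the χ² against the data is driven down.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def model(x, params):
        # Toy forward model standing in for the synthetic-AFR calculation, which in
        # the paper maps eight density-profile parameters to a synthetic image.
        a, b, c = params
        return a * np.exp(-x / b) + c

    def chi2(params, x, data, sigma):
        return np.sum(((model(x, params) - data) / sigma) ** 2)

    # Synthetic "measurement": known parameters plus Gaussian noise
    x = np.linspace(0.0, 10.0, 100)
    sigma = 0.1
    data = model(x, (5.0, 2.0, 0.5)) + rng.normal(0.0, sigma, x.size)

    # Simulated annealing: random perturbations accepted with a temperature-dependent
    # Metropolis criterion, so the search can escape local minima before it cools.
    lo, hi = np.array([0.0, 0.1, -5.0]), np.array([20.0, 20.0, 5.0])   # parameter bounds
    params = np.array([1.0, 1.0, 0.0])
    current = chi2(params, x, data, sigma)
    best, best_chi2 = params.copy(), current
    T = 10.0
    for step in range(20000):
        trial = np.clip(params + rng.normal(0.0, 0.05, 3), lo, hi)
        trial_chi2 = chi2(trial, x, data, sigma)
        if trial_chi2 < current or rng.random() < np.exp(-(trial_chi2 - current) / T):
            params, current = trial, trial_chi2
            if current < best_chi2:
                best, best_chi2 = params.copy(), current
        T *= 0.9995   # geometric cooling schedule

    print(best, best_chi2)   # should approach (5.0, 2.0, 0.5) with chi2 of order len(x)
    ```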

  10. A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report: Updated in 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisterson, Douglas

    The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams, concise expression of measurement uncertainty is quite challenging. ARM currently provides data and supporting metadata (information about the data or data quality) to its users through several sources. Because the continued success of the ARM Facility depends on the known quality of its measurements, ARM relies on Instrument Mentors and the ARM Data Quality Office to ensure, assess, and report measurement quality. Therefore, an easily accessible, well-articulated estimate of ARM measurement uncertainty is needed. This report is a continuation of the work presented by Campos and Sisterson (2015) and provides additional uncertainty information from instruments not available in their report. As before, a total measurement uncertainty has been calculated as a function of the instrument uncertainty (calibration factors), the field uncertainty (environmental factors), and the retrieval uncertainty (algorithm factors). This study will not expand on methods for computing these uncertainties. As before, it will focus on the practical identification, characterization, and inventory of the measurement uncertainties already available to the ARM community through the ARM Instrument Mentors and their ARM instrument handbooks. This study continues the first steps towards reporting ARM measurement uncertainty as: (1) identifying how the uncertainty of individual ARM measurements is currently expressed, (2) identifying a consistent approach to measurement uncertainty, and then (3) reclassifying ARM instrument measurement uncertainties in a common framework.

  11. [Evaluation of uncertainty for determination of tin and its compounds in air of workplace by flame atomic absorption spectrometry].

    PubMed

    Wei, Qiuning; Wei, Yuan; Liu, Fangfang; Ding, Yalei

    2015-10-01

    To investigate a method for evaluating the uncertainty in the determination of tin and its compounds in workplace air by flame atomic absorption spectrometry. The national occupational health standards GBZ/T 160.28-2004 and JJF 1059-1999 were used to build a mathematical model for the determination of tin and its compounds in workplace air and to calculate the components of uncertainty. In the determination of tin and its compounds in workplace air by flame atomic absorption spectrometry, the relative uncertainty for the concentration of the standard solution, the atomic absorption spectrophotometer, sample digestion, parallel determination, least-squares fitting of the calibration curve, and sample collection was 0.436%, 0.13%, 1.07%, 1.65%, 3.05%, and 2.89%, respectively. The combined uncertainty was 9.3%. The concentration of tin in the test sample was 0.132 mg/m³, and the expanded uncertainty for the measurement was 0.012 mg/m³ (k = 2). The dominant uncertainty in the determination of tin and its compounds in workplace air comes from the least-squares fitting of the calibration curve and from sample collection. Quality control should be improved in the process of calibration curve fitting and sample collection.
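
    A quick consistency check of the quoted figures, assuming the listed components are independent relative standard uncertainties combined in quadrature: the combination gives about 4.7%, the quoted 9.3% is consistent with the expanded (k = 2) value under that assumption, and 9.3% of 0.132 mg/m³ is about 0.012 mg/m³.

    ```python
    import numpy as np

    # Relative standard uncertainty components listed in the abstract (percent)
    components = [0.436, 0.13, 1.07, 1.65, 3.05, 2.89]

    u_c = np.sqrt(sum(u ** 2 for u in components))   # combined, assuming independence
    U = 2.0 * u_c                                    # expanded, coverage factor k = 2
    print(f"u_c = {u_c:.2f} %, U = {U:.1f} %")       # ~4.7 % and ~9.3 %
    print(f"U on 0.132 mg/m3: {0.132 * U / 100:.3f} mg/m3")   # ~0.012 mg/m3
    ```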

  12. Assessment of the uncertainties in the Radiological Protection Institute of Ireland (RPII) radon measurements service.

    PubMed

    Hanley, O; Gutiérrez-Villanueva, J L; Currivan, L; Pollard, D

    2008-10-01

    The RPII radon (Rn) laboratory holds accreditation to the International Standard ISO/IEC 17025. A requirement of this standard is an estimate of the uncertainty of measurement. This work shows two approaches to estimating the uncertainty. The bottom-up approach involved identifying the components that were found to contribute to the uncertainty. Estimates were made for each of these components, which were combined to give a combined uncertainty of 13.5% at a Rn concentration of approximately 2500 Bq m⁻³ at the 68% confidence level. By applying a coverage factor of k = 2, the expanded uncertainty is ±27% at the 95% confidence level. The top-down approach used information previously gathered from intercomparison exercises to estimate the uncertainty. This investigation found an expanded uncertainty of ±22% at approximately the 95% confidence level. This is good agreement between such independent estimates.

  13. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
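
    A toy numerical sketch of the idea (a die-mean constraint, chosen here for illustration rather than taken from the paper): the classic MaxEnt solution is computed for a fixed mean, and then the mean is treated as Gaussian-uncertain and that uncertainty is pushed through the MaxEnt map to give a spread on each probability.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    values = np.arange(1, 7)   # faces of a die

    def maxent_probs(mean):
        """Classic MaxEnt distribution on {1,...,6} for a fixed mean constraint:
        p_i proportional to exp(lam * i), with lam chosen so the mean matches."""
        def gap(lam):
            w = np.exp(lam * values)
            return np.sum(values * w) / np.sum(w) - mean
        lam = brentq(gap, -10.0, 10.0)
        w = np.exp(lam * values)
        return w / np.sum(w)

    # Treat the empirically estimated mean as uncertain (Gaussian) and push that
    # uncertainty through the MaxEnt map to get a density over the probabilities.
    rng = np.random.default_rng(0)
    means = np.clip(rng.normal(4.5, 0.1, 2000), 1.05, 5.95)   # keep the constraint feasible
    samples = np.array([maxent_probs(m) for m in means])

    print(maxent_probs(4.5))        # classic MaxEnt point probabilities
    print(samples.mean(axis=0))     # mean of the generalized (posterior) MaxEnt density
    print(samples.std(axis=0))      # induced uncertainty on each probability
    ```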

  14. Expanded uncertainty estimation methodology in determining the sandy soils filtration coefficient

    NASA Astrophysics Data System (ADS)

    Rusanova, A. D.; Malaja, L. D.; Ivanov, R. N.; Gruzin, A. V.; Shalaj, V. V.

    2018-04-01

    A methodology for estimating the combined standard uncertainty in determining the filtration coefficient of sandy soils has been developed. Laboratory investigations were carried out, resulting in determination of the filtration coefficient and an estimate of its combined uncertainty.

  15. A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campos, E; Sisterson, Douglas

    The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams, concise expression of measurement uncertainty is quite challenging. The ARM Facility currently provides data and supporting metadata (information about the data or data quality) to its users through a number of sources. Because the continued success of the ARM Facility depends on the known quality of its measurements, the Facility relies on instrument mentors and the ARM Data Quality Office (DQO) to ensure, assess, and report measurement quality. Therefore, an easily accessible, well-articulated estimate of ARM measurement uncertainty is needed. Note that some of the instrument observations require mathematical algorithms (retrievals) to convert a measured engineering variable into a useful geophysical measurement. While those types of retrieval measurements are identified, this study does not address particular methods for retrieval uncertainty. In addition, the ARM Facility also provides engineered data products, or value-added products (VAPs), based on multiple instrument measurements; this study does not include uncertainty estimates for those data products. We propose here that a total measurement uncertainty should be calculated as a function of the instrument uncertainty (calibration factors), the field uncertainty (environmental factors), and the retrieval uncertainty (algorithm factors). The study will not expand on methods for computing these uncertainties. Instead, it will focus on the practical identification, characterization, and inventory of the measurement uncertainties already available in the ARM community through the ARM instrument mentors and their ARM instrument handbooks. As a result, this study will address the first steps towards reporting ARM measurement uncertainty: 1) identifying how the uncertainty of individual ARM measurements is currently expressed, 2) identifying a consistent approach to measurement uncertainty, and then 3) reclassifying ARM instrument measurement uncertainties in a common framework.

  16. Angular filter refractometry analysis using simulated annealing [An improved method for characterizing plasma density profiles using angular filter refractometry]

    DOE PAGES

    Angland, P.; Haberberger, D.; Ivancic, S. T.; ...

    2017-10-30

    Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas, using an annealing algorithm to iteratively converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the agreement with the measured image is optimized. The optimization and the statistical uncertainty calculation are based on minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.
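
    A schematic of the annealing idea rather than the authors' code: a toy two-parameter exponential profile and synthetic noisy data stand in for the eight-parameter AFR model and the measured image, and a hand-rolled Metropolis-style annealer minimizes the χ² statistic.

        import numpy as np

        rng = np.random.default_rng(1)

        def profile(z, n0, L):
            """Toy exponential density profile; the real AFR model uses 8 parameters."""
            return n0 * np.exp(-z / L)

        # Synthetic "measurement" with noise, standing in for the AFR image comparison
        z = np.linspace(0.0, 1.0, 60)
        sigma = 0.02
        data = profile(z, 1.0, 0.3) + rng.normal(0.0, sigma, z.size)

        def chi2(params):
            n0, L = params
            return np.sum(((profile(z, n0, L) - data) / sigma) ** 2)

        # Simple Metropolis-style simulated annealing with geometric cooling
        params = np.array([0.5, 0.5])          # initial guess
        cost = chi2(params)
        T = 100.0
        for step in range(20000):
            trial = params + rng.normal(0.0, 0.02, size=2)
            if np.all(trial > 0):
                c = chi2(trial)
                if c < cost or rng.random() < np.exp(-(c - cost) / T):
                    params, cost = trial, c
            T *= 0.9995

        print("fitted n0, L:", np.round(params, 3), " chi2:", round(cost, 1))

    In the actual analysis the cost function would compare the synthetic and measured AFR images, and the statistical uncertainty would follow from the χ² surface near the optimum.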

  17. Angular filter refractometry analysis using simulated annealing [An improved method for characterizing plasma density profiles using angular filter refractometry]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angland, P.; Haberberger, D.; Ivancic, S. T.

    Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas, using an annealing algorithm to iteratively converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the agreement with the measured image is optimized. The optimization and the statistical uncertainty calculation are based on minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.

  18. Development of a new chlorogenic acid certified reference material for food and drug analysis.

    PubMed

    Yang, Dezhi; Jiao, LingTai; Zhang, Baoxi; Du, Guanhua; Lu, Yang

    2017-06-05

    This paper reports the preparation and characterization of a new chlorogenic acid (CHA) certified reference material (CRM), which is unavailable commercially. CHA is an active ingredient found in many geo-authentic Chinese medicinal materials and has been developed as an anti-cancer drug. In this work, trace impurities were isolated and identified through various techniques. The CHA CRM was quantified with two analytical methods, and their results were in good agreement with each other. The certified value and corresponding expanded uncertainty of the CHA CRM were 99.4% ± 0.2%, where the expanded uncertainty was calculated by multiplying the combined standard uncertainty by the coverage factor (k=2), at a confidence level of 95%. This CRM can be used to calibrate measurement systems, evaluate or validate measurement procedures, assign traceable property values to non-CRMs, and conduct quality control assays. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. SU-E-T-287: Robustness Study of Passive-Scattering Proton Therapy in Lung: Is Range and Setup Uncertainty Calculation On the Initial CT Enough to Predict the Plan Robustness?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, X; Dormer, J; Kenton, O

    Purpose: Plan robustness of passive-scattering proton therapy treatment of lung tumors has been studied previously using combined uncertainties of 3.5% in CT number and 3 mm geometric shifts. In this study, we investigate whether this method is sufficient to predict proton plan robustness by comparing to plans performed on weekly verification CT scans. Methods: Ten lung cancer patients treated with passive-scattering proton therapy were randomly selected. All plans were prescribed 6660 cGy in 37 fractions. Each initial plan was calculated using +/−3.5% range and +/−0.3 cm setup uncertainty in the x, y and z directions in the Eclipse TPS (Method-A). Throughout the treatment course, patients received weekly verification CT scans to assess the daily treatment variation (Method-B). After contours and image registrations were verified by the physician, the initial plan with the same beamline and compensator was mapped onto the verification CT. Dose volume histograms (DVH) were evaluated for the robustness study. Results: Differences are observed between Methods A and B in terms of iCTV coverage and lung dose. Method-A shows that all iCTV D95 values are within a +/−1% difference, while 20% of cases fall outside the +/−1% range in Method-B. In the worst case scenario (WCS), the iCTV D95 is reduced by 2.5%. All lung V5 and V20 are within +/−5% in Method-A, while 15% of V5 and 10% of V20 fall outside of +/−5% in Method-B. In the WCS, lung V5 increased by 15% and V20 increased by 9%. Methods A and B show good agreement with regard to cord maximum and esophagus mean dose. Conclusion: This study suggests that using the range and setup uncertainty calculation (+/−3.5% and +/−3 mm) may not be sufficient to predict the WCS. In the absence of regular verification scans, expanding the conventional uncertainty parameters (e.g., to +/−3.5% and +/−4 mm) may be needed to better reflect actual plan robustness.

  20. Impact of uncertainties in inorganic chemical rate constants on tropospheric composition and ozone radiative forcing

    NASA Astrophysics Data System (ADS)

    Newsome, Ben; Evans, Mat

    2017-12-01

    Chemical rate constants determine the composition of the atmosphere and how this composition has changed over time. They are central to our understanding of climate change and air quality degradation. Atmospheric chemistry models, whether online or offline, box, regional or global, use these rate constants. Expert panels evaluate laboratory measurements, making recommendations for the rate constants that should be used. This results in very similar or identical rate constants being used by all models. The inherent uncertainties in these recommendations are therefore, in general, ignored. We explore the impact of these uncertainties on the composition of the troposphere using the GEOS-Chem chemistry transport model. Based on the Jet Propulsion Laboratory (JPL) and International Union of Pure and Applied Chemistry (IUPAC) evaluations, we assess the influence of 50 mainly inorganic rate constants and 10 photolysis rates on tropospheric composition. We assess the impact on four standard metrics: annual mean tropospheric ozone burden, surface ozone and tropospheric OH concentrations, and tropospheric methane lifetime. Uncertainties in the rate constants for NO2 + OH + M → HNO3 + M and O3 + NO → NO2 + O2 are the two largest sources of uncertainty in these metrics. The absolute magnitude of the change in the metrics is similar if rate constants are increased or decreased by their σ values. We investigate two methods of assessing these uncertainties, addition in quadrature and a Monte Carlo approach, and conclude that they give similar outcomes. Combining the uncertainties across the 60 reactions gives overall uncertainties on the annual mean tropospheric ozone burden, surface ozone and tropospheric OH concentrations, and tropospheric methane lifetime of 10, 11, 16 and 16%, respectively. These are larger than the spread between models in recent model intercomparisons. Remote regions such as the tropics, poles and upper troposphere are the most uncertain. This chemical uncertainty is sufficiently large to suggest that rate constant uncertainty should be considered alongside other processes when model results disagree with measurement. Calculations for the pre-industrial simulation allow a tropospheric ozone radiative forcing of 0.412 ± 0.062 W m⁻² to be calculated. This uncertainty (13%) is comparable to the inter-model spread in ozone radiative forcing found in previous model-model intercomparison studies where the rate constants used in the models are all identical or very similar. Thus, the uncertainty of tropospheric ozone radiative forcing should be expanded to include this additional source of uncertainty. These rate constant uncertainties are significant and suggest that refinement of supposedly well-known chemical rate constants should be considered alongside other improvements to enhance our understanding of atmospheric processes.
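
    A small illustration of why addition in quadrature and a Monte Carlo approach agree when the response to each rate-constant perturbation is roughly linear; the per-reaction fractional sensitivities below are placeholders, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical fractional changes in a metric (e.g. the ozone burden) for a
        # +1-sigma perturbation of each rate constant; placeholder values only.
        sensitivities = np.array([0.04, -0.03, 0.02, 0.015, -0.01, 0.005])

        # Addition in quadrature (assumes independent, linearized responses)
        u_quad = np.sqrt(np.sum(sensitivities ** 2))

        # Monte Carlo: sample each rate-constant error in sigma units and push it
        # through the same linearized response
        z = rng.standard_normal((100000, sensitivities.size))
        u_mc = np.std(z @ sensitivities)

        print(f"quadrature : {u_quad:.4f}")
        print(f"Monte Carlo: {u_mc:.4f}")   # matches quadrature for a linear model

    For a linear model the two estimates coincide; differences between them in the full chemistry model point to nonlinear responses.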

  1. Is my bottom-up uncertainty estimation on metal measurement adequate?

    NASA Astrophysics Data System (ADS)

    Marques, J. R.; Faustino, M. G.; Monteiro, L. R.; Ulrich, J. C.; Pires, M. A. F.; Cotrim, M. E. B.

    2018-03-01

    Is the uncertainty estimated under the GUM recommendation for metal measurements adequate? How can one evaluate whether the measurement uncertainty really covers all of the uncertainty associated with the analytical procedure? Considering that many laboratories frequently underestimate, or less frequently overestimate, the uncertainties of their results, this paper presents an evaluation of the estimated uncertainties of seven metal measurements from two ICP-OES procedures according to the GUM approach. The Horwitz function and proficiency-test scaled standard uncertainties were used in this evaluation. Our data show that the expanded uncertainties of most elements were underestimated by a factor of two to four. Possible causes and corrections are discussed herein.
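
    A sketch of the kind of sanity check described above, assuming the classic Horwitz relation RSD_R(%) = 2^(1 − 0.5·log10 C) with C expressed as a mass fraction; the concentration and the reported uncertainty below are hypothetical.

        import math

        def horwitz_rsd(mass_fraction):
            """Predicted reproducibility RSD (%) from the Horwitz function."""
            return 2.0 ** (1.0 - 0.5 * math.log10(mass_fraction))

        # Hypothetical example: a metal at 50 ug/L in water (mass fraction ~5e-8)
        c = 5e-8
        predicted = horwitz_rsd(c)    # ~25 % at this trace level
        reported = 3.0                # laboratory's relative standard uncertainty, %

        # Reported uncertainties far below benchmark values such as the Horwitz
        # prediction or proficiency-test scaled uncertainties can flag possible
        # underestimation, as discussed in the abstract.
        print(f"Horwitz RSD = {predicted:.1f} %, reported/predicted = {reported / predicted:.2f}")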

  2. Two generators to produce SI-traceable reference gas mixtures for reactive compounds at atmospheric levels

    NASA Astrophysics Data System (ADS)

    Pascale, C.; Guillevic, M.; Ackermann, A.; Leuenberger, D.; Niederhauser, B.

    2017-12-01

    To answer the needs of air quality and climate monitoring networks, two new gas generators were developed and manufactured at METAS in order to dynamically generate SI-traceable reference gas mixtures for reactive compounds at atmospheric concentrations. The technical features of the transportable generators allow for the realization of such gas standards for reactive compounds (e.g. NO2, volatile organic compounds) in the nmol·mol⁻¹ range (ReGaS2), and fluorinated gases in the pmol·mol⁻¹ range (ReGaS3). The generation method is based on permeation and dynamic dilution. The transportable generators have multiple individual permeation chambers, allowing for the generation of mixtures containing up to five different compounds. This mixture is then diluted using mass flow controllers, thus making the production process adaptable to generate the required amount-of-substance fraction. All parts of ReGaS2 in contact with the gas mixture are coated to reduce adsorption/desorption processes. Each input parameter required to calculate the generated amount-of-substance fraction is calibrated with SI primary standards. The stability and reproducibility of the generated amount-of-substance fractions were tested with NO2 for ReGaS2 and HFC-125 for ReGaS3. They demonstrate stability over 1-4 d better than 0.4% and 0.8%, respectively, and reproducibility better than 0.7% and 1%, respectively. Finally, the relative expanded uncertainty of the generated amount-of-substance fraction is smaller than 3%, with the major contributions coming from the uncertainty of the permeation rate and/or of the purity of the matrix gas. These relative expanded uncertainties thus meet the data quality objectives set by the World Meteorological Organization.
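
    A simplified sketch of the permeation-plus-dynamic-dilution relation and its dominant uncertainty terms; the permeation rate, flows and their uncertainties are invented for illustration, purity and second-stage dilution corrections are ignored, and the molar volume assumes an ideal gas at 0 °C and 101.325 kPa.

        import math

        # Hypothetical inputs for a single permeation/dilution stage
        q_perm = 250.0     # permeation rate of NO2, ng/min
        u_q    = 2.0       # standard uncertainty of the permeation rate, ng/min
        M      = 46.0055   # molar mass of NO2, g/mol
        Q_dil  = 5.0       # dilution gas flow at standard conditions, L/min
        u_Q    = 0.02      # standard uncertainty of the flow, L/min
        V_m    = 22.414    # ideal-gas molar volume at 0 degC, 101.325 kPa, L/mol

        # Trace-level approximation: amount fraction = analyte molar flow / matrix molar flow
        y = (q_perm * 1e-9 / M) / (Q_dil / V_m)    # mol/mol
        print(f"generated fraction: {y * 1e9:.1f} nmol/mol")

        # Relative expanded uncertainty (k = 2) from the two dominant contributions
        u_rel = math.sqrt((u_q / q_perm) ** 2 + (u_Q / Q_dil) ** 2)
        print(f"U_rel (k = 2): {2 * 100 * u_rel:.1f} %")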

  3. Accurate calibration and uncertainty estimation of the normal spring constant of various AFM cantilevers.

    PubMed

    Song, Yunpeng; Wu, Sen; Xu, Linyan; Fu, Xing

    2015-03-10

    Measurement of force on a micro- or nano-Newton scale is important when exploring the mechanical properties of materials in the biophysics and nanomechanical fields. The atomic force microscope (AFM) is widely used in microforce measurement. The cantilever probe works as an AFM force sensor, and the spring constant of the cantilever is of great significance to the accuracy of the measurement results. This paper presents a normal spring constant calibration method with the combined use of an electromagnetic balance and a homemade AFM head. When the cantilever presses the balance, its deflection is detected through an optical lever integrated in the AFM head. Meanwhile, the corresponding bending force is recorded by the balance. Then the spring constant can be simply calculated using Hooke's law. During the calibration, a feedback loop is applied to control the deflection of the cantilever. Errors that may affect the stability of the cantilever could be compensated rapidly. Five types of commercial cantilevers with different shapes, stiffness, and operating modes were chosen to evaluate the performance of our system. Based on the uncertainty analysis, the expanded relative standard uncertainties of the normal spring constant of most measured cantilevers are believed to be better than 2%.
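
    A minimal sketch of the Hooke's-law evaluation described above with a quadrature uncertainty combination; the force, deflection and uncertainty values are hypothetical, and a real calibration would include further contributions such as the balance calibration and the optical-lever sensitivity.

        import math

        # Hypothetical single calibration point
        F = 120e-9      # bending force recorded by the electromagnetic balance, N
        u_F = 0.6e-9    # standard uncertainty of the force, N
        d = 400e-9      # cantilever deflection from the optical lever, m
        u_d = 3e-9      # standard uncertainty of the deflection, m

        # Hooke's law: k = F / d
        k = F / d

        # Relative standard uncertainties combined in quadrature, then expanded (k = 2)
        u_rel = math.sqrt((u_F / F) ** 2 + (u_d / d) ** 2)
        print(f"k = {k:.3f} N/m, expanded relative uncertainty = {2 * 100 * u_rel:.1f} %")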

  4. Revealing Risks in Adaptation Planning: expanding Uncertainty Treatment and dealing with Large Projection Ensembles during Planning Scenario development

    NASA Astrophysics Data System (ADS)

    Brekke, L. D.; Clark, M. P.; Gutmann, E. D.; Wood, A.; Mizukami, N.; Mendoza, P. A.; Rasmussen, R.; Ikeda, K.; Pruitt, T.; Arnold, J. R.; Rajagopalan, B.

    2015-12-01

    Adaptation planning assessments often rely on single methods for climate projection downscaling and hydrologic analysis, do not reveal uncertainties from associated method choices, and thus likely produce overly confident decision-support information. Recent work by the authors has highlighted this issue by identifying strengths and weaknesses of widely applied methods for downscaling climate projections and assessing hydrologic impacts. This work has shown that many of the methodological choices made can alter the magnitude, and even the sign of the climate change signal. Such results motivate consideration of both sources of method uncertainty within an impacts assessment. Consequently, the authors have pursued development of improved downscaling techniques spanning a range of method classes (quasi-dynamical and circulation-based statistical methods) and developed approaches to better account for hydrologic analysis uncertainty (multi-model; regional parameter estimation under forcing uncertainty). This presentation summarizes progress in the development of these methods, as well as implications of pursuing these developments. First, having access to these methods creates an opportunity to better reveal impacts uncertainty through multi-method ensembles, expanding on present-practice ensembles which are often based only on emissions scenarios and GCM choices. Second, such expansion of uncertainty treatment combined with an ever-expanding wealth of global climate projection information creates a challenge of how to use such a large ensemble for local adaptation planning. To address this challenge, the authors are evaluating methods for ensemble selection (considering the principles of fidelity, diversity and sensitivity) that is compatible with present-practice approaches for abstracting change scenarios from any "ensemble of opportunity". Early examples from this development will also be presented.

  5. Uncertainty of a hybrid surface temperature sensor for silicon wafers and comparison with an embedded thermocouple.

    PubMed

    Iuchi, Tohru; Gogami, Atsushi

    2009-12-01

    We have developed a user-friendly hybrid surface temperature sensor. The uncertainties of temperature readings associated with this sensor and a thermocouple embedded in a silicon wafer are compared. The expanded uncertainties (k=2) of the hybrid temperature sensor and the embedded thermocouple are 2.11 and 2.37 K, respectively, in the temperature range between 600 and 1000 K. In the present paper, the uncertainty evaluation and the sources of uncertainty are described.

  6. The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.

    2015-12-01

    Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate the final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate the distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, typical SDDR calculations do not consider how uncertainties in the MC neutron calculation impact the SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate the SDDR uncertainty.

  7. Spectral Radiance of a Large-Area Integrating Sphere Source

    PubMed Central

    Walker, James H.; Thompson, Ambler

    1995-01-01

    The radiance and irradiance calibration of large field-of-view scanning and imaging radiometers for remote sensing and surveillance applications has resulted in the development of novel calibration techniques. One of these techniques is the employment of large-area integrating sphere sources as radiance or irradiance secondary standards. To assist the National Aeronautics and Space Administration's space-based ozone measurement program, the spectral radiance of a commercially available large-area internally illuminated integrating sphere source was characterized in the wavelength region from 230 nm to 400 nm at the National Institute of Standards and Technology. Spectral radiance determinations and spatial mappings of the source indicate that carefully designed large-area integrating sphere sources can be measured with a 1% to 2% expanded uncertainty (two standard deviation estimate) in the near ultraviolet, with spatial nonuniformities of 0.6% or smaller across a 20 cm diameter exit aperture. A method is proposed for the calculation of the final radiance uncertainties of the source which includes the field of view of the instrument being calibrated.

  8. A likelihood method for measuring the ultrahigh energy cosmic ray composition

    NASA Astrophysics Data System (ADS)

    High Resolution Fly'S Eye Collaboration; Abu-Zayyad, T.; Amman, J. F.; Archbold, G. C.; Belov, K.; Blake, S. A.; Belz, J. W.; Benzvi, S.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Connolly, B. M.; Deng, W.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M.; Rodriguez, D.; Sasaki, M.; Schnetzer, S.; Seman, M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.

    2006-08-01

    Air fluorescence detectors traditionally determine the dominant chemical composition of the ultrahigh energy cosmic ray flux by comparing the averaged slant depth of the shower maximum, Xmax, as a function of energy to the slant depths expected for various hypothesized primaries. In this paper, we present a method to make a direct measurement of the expected mean number of protons and iron by comparing the shapes of the expected Xmax distributions to the distribution for data. The advantages of this method include the use of information from the full distribution and its ability to calculate a flux for various cosmic ray compositions. The same method can be expanded to marginalize uncertainties due to the choice of spectra, hadronic models and atmospheric parameters. We demonstrate the technique with independent simulated data samples from a parent sample of protons and iron. We accurately predict the number of protons and iron in the parent sample and show that the uncertainties are meaningful.
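
    A toy version of the shape-comparison idea, assuming Gaussian Xmax templates and a binned Poisson likelihood scanned over the proton fraction; the real analysis uses simulated shower libraries and marginalizes further nuisance parameters such as spectra, hadronic models and atmosphere.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy Xmax templates for proton-like and iron-like showers (g/cm^2)
        bins = np.linspace(600.0, 900.0, 31)
        pdf_p, _ = np.histogram(rng.normal(790.0, 60.0, 200000), bins=bins, density=True)
        pdf_fe, _ = np.histogram(rng.normal(700.0, 40.0, 200000), bins=bins, density=True)
        width = np.diff(bins)

        # "Observed" sample: 60 % protons, 40 % iron, 1000 events
        n_events = 1000
        data = np.concatenate([rng.normal(790.0, 60.0, 600),
                               rng.normal(700.0, 40.0, 400)])
        counts, _ = np.histogram(data, bins=bins)

        # Binned Poisson log-likelihood as a function of the proton fraction
        def loglike(f_p):
            mu = n_events * width * (f_p * pdf_p + (1.0 - f_p) * pdf_fe)
            mu = np.clip(mu, 1e-12, None)
            return np.sum(counts * np.log(mu) - mu)

        fractions = np.linspace(0.0, 1.0, 1001)
        ll = np.array([loglike(f) for f in fractions])
        best = fractions[np.argmax(ll)]
        # Approximate 1-sigma interval from the points within 0.5 of the maximum
        inside = fractions[ll >= ll.max() - 0.5]
        print(f"proton fraction = {best:.2f} (+/- ~{(inside.max() - inside.min()) / 2:.2f})")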

  9. Development of a primary standard for absorbed dose from unsealed radionuclide solutions

    NASA Astrophysics Data System (ADS)

    Billas, I.; Shipley, D.; Galer, S.; Bass, G.; Sander, T.; Fenwick, A.; Smyth, V.

    2016-12-01

    Currently, the determination of the internal absorbed dose to tissue from an administered radionuclide solution relies on Monte Carlo (MC) calculations based on published nuclear decay data, such as emission probabilities and energies. In order to validate these methods with measurements, it is necessary to achieve traceability of the internal absorbed dose measurements of a radionuclide solution to a primary standard of absorbed dose. The purpose of this work was to develop a suitable primary standard. A comparison between measurements and calculations of absorbed dose allows the validation of internal radiation dose assessment methods. The absorbed dose from a yttrium-90 chloride (90YCl) solution was measured with an extrapolation chamber. A phantom was developed at the National Physical Laboratory (NPL), the UK's National Measurement Institute, to position the extrapolation chamber as closely as possible to the surface of the solution. The performance of the extrapolation chamber was characterised and a full uncertainty budget for the absorbed dose determination was obtained. Absorbed dose to air in the collecting volume of the chamber was converted to absorbed dose at the centre of the radionuclide solution by applying an MC-calculated correction factor. This allowed a direct comparison of the analytically calculated and experimentally determined absorbed dose of a 90YCl solution. The relative standard uncertainty in the measurement of absorbed dose at the centre of a 90YCl solution with the extrapolation chamber was found to be 1.6% (k = 1). The calculated 90Y absorbed doses from published medical internal radiation dose (MIRD) and radiation dose assessment resource (RADAR) data agreed with measurements to within 1.5% and 1.4%, respectively. This study has shown that it is feasible to use an extrapolation chamber for performing primary standard absorbed dose measurements of an unsealed radionuclide solution. Internal radiation dose assessment methods based on MIRD and RADAR data for 90Y have been validated with experimental absorbed dose determination, and they agree within the stated expanded uncertainty (k = 2).

  10. A dominance-based approach to map risks of ecological invasions in the presence of severe uncertainty

    Treesearch

    Denys Yemshanov; Frank H. Koch; D. Barry Lyons; Mark Ducey; Klaus Koehler

    2012-01-01

    Aim: Uncertainty has been widely recognized as one of the most critical issues in predicting the expansion of ecological invasions. The uncertainty associated with the introduction and spread of invasive organisms influences how pest management decision makers respond to expanding incursions. We present a model-based approach to map risk of ecological invasions that...

  11. Overcoming computational uncertainties to reveal chemical sensitivity in single molecule conduction calculations.

    PubMed

    Solomon, Gemma C; Reimers, Jeffrey R; Hush, Noel S

    2005-06-08

    In the calculation of conduction through single molecules, approximations about the geometry and electronic structure of the system are usually made in order to simplify the problem. Previously [G. C. Solomon, J. R. Reimers, and N. S. Hush, J. Chem. Phys. 121, 6615 (2004)], we have shown that, in calculations employing cluster models for the electrodes, proper treatment of the open-shell nature of the clusters is the most important computational feature required to make the results sensitive to variations in the structural and chemical features of the system. Here, we expand this and establish a general hierarchy of requirements involving the treatment of geometrical approximations. These approximations are categorized into two classes: those associated with finite-dimensional methods for representing the semi-infinite electrodes, and those associated with the chemisorption topology. We show that ca. 100 unique atoms are required in order to properly characterize each electrode: using fewer atoms leads to nonsystematic variations in conductivity that can overwhelm the subtler changes. The choice of binding site is shown to be the next most important feature, while some effects that are difficult to control experimentally, concerning the orientations at each binding site, are actually shown to be insignificant. Verification of this result provides a general test for the precision of computational procedures for molecular conductivity. Predictions concerning the dependence of conduction on substituent and other effects on the central molecule are found to be meaningful only when they exceed the uncertainties of the effects associated with binding-site variation.

  12. Overcoming computational uncertainties to reveal chemical sensitivity in single molecule conduction calculations

    NASA Astrophysics Data System (ADS)

    Solomon, Gemma C.; Reimers, Jeffrey R.; Hush, Noel S.

    2005-06-01

    In the calculation of conduction through single molecules, approximations about the geometry and electronic structure of the system are usually made in order to simplify the problem. Previously [G. C. Solomon, J. R. Reimers, and N. S. Hush, J. Chem. Phys. 121, 6615 (2004)], we have shown that, in calculations employing cluster models for the electrodes, proper treatment of the open-shell nature of the clusters is the most important computational feature required to make the results sensitive to variations in the structural and chemical features of the system. Here, we expand this and establish a general hierarchy of requirements involving the treatment of geometrical approximations. These approximations are categorized into two classes: those associated with finite-dimensional methods for representing the semi-infinite electrodes, and those associated with the chemisorption topology. We show that ca. 100 unique atoms are required in order to properly characterize each electrode: using fewer atoms leads to nonsystematic variations in conductivity that can overwhelm the subtler changes. The choice of binding site is shown to be the next most important feature, while some effects that are difficult to control experimentally, concerning the orientations at each binding site, are actually shown to be insignificant. Verification of this result provides a general test for the precision of computational procedures for molecular conductivity. Predictions concerning the dependence of conduction on substituent and other effects on the central molecule are found to be meaningful only when they exceed the uncertainties of the effects associated with binding-site variation.

  13. Numerical characterization under uncertainties of a piston expander for exhaust heat recovery on heavy commercial vehicles

    NASA Astrophysics Data System (ADS)

    Congedo, P. M.; Melis, J.; Daccord, R.

    2017-03-01

    While nearly 30 percent of the fuel energy is lost as waste heat in the form of hot exhaust gases, exhaust heat recovery promises one of the biggest fuel-economy gains among the technologies available in the next decade. Applied to heavy commercial vehicles (HCVs), buses or off-road vehicles, a bottoming Rankine Cycle (RC) on exhaust heat shows great potential in recovering the exhaust-gas energy, even at part load. The objective of this paper is to illustrate the interest in assessing the uncertainty of this kind of system in order to obtain robust predictions from the associated numerical model. In particular, the focus here is on the simulation of a piston expander for exhaust heat recovery. Uncertainties associated with the experimental measurements are propagated through the numerical code by means of uncertainty quantification techniques. Several sources of uncertainty are taken into account at the same time, thus yielding various indications concerning the most predominant parameters and their influence on several quantities of interest, such as the mechanical power, the mass flow and the exhaust temperature.

  14. Relativistic and QED Effects in the Fundamental Vibration of T2

    NASA Astrophysics Data System (ADS)

    Trivikram, T. Madhu; Schlösser, M.; Ubachs, W.; Salumbides, E. J.

    2018-04-01

    The hydrogen molecule has become a test ground for quantum electrodynamical calculations in molecules. Expanding beyond studies on stable hydrogenic species to the heavier radioactive tritium-bearing molecules, we report on a measurement of the fundamental T2 vibrational splitting (v = 0 → 1) for J = 0-5 rotational levels. Precision frequency metrology is performed with high-resolution coherent anti-Stokes Raman spectroscopy at an experimental uncertainty of 10-12 MHz, where sub-Doppler saturation features are exploited for the strongest transition. The achieved accuracy corresponds to a 50-fold improvement over a previous measurement, and it allows for the extraction of relativistic and QED contributions to T2 transition energies.

  15. Statistically advanced, self-similar, radial probability density functions of atmospheric and under-expanded hydrogen jets

    NASA Astrophysics Data System (ADS)

    Ruggles, Adam J.

    2015-11-01

    This paper presents improved statistical insight regarding the self-similar scalar mixing process of atmospheric hydrogen jets and the downstream region of under-expanded hydrogen jets. Quantitative planar laser Rayleigh scattering imaging is used to probe both jets. The self-similarity of statistical moments up to the sixth order (beyond the literature-established second order) is documented in both cases. This is achieved using a novel self-similar normalization method that facilitated a degree of statistical convergence that is typically limited to continuous, point-based measurements. This demonstrates that image-based measurements of a limited number of samples can be used for self-similar scalar mixing studies. Both jets exhibit the same radial trends of these moments, demonstrating that advanced atmospheric self-similarity can be applied in the analysis of under-expanded jets. Self-similar histograms away from the centerline are shown to be the combination of two distributions. The first is attributed to turbulent mixing. The second, a symmetric Poisson-type distribution centered on zero mass fraction, progressively becomes the dominant and eventually sole distribution at the edge of the jet. This distribution is attributed to shot noise-affected pure air measurements, rather than a diffusive superlayer at the jet boundary. This conclusion is reached after a rigorous measurement uncertainty analysis and inspection of pure air data collected with each hydrogen data set. A threshold based upon the measurement noise analysis is used to separate the turbulent and pure air data, and thus estimate intermittency. Beta-distributions (four parameters) are used to accurately represent the turbulent distribution moments. This combination of measured intermittency and four-parameter beta-distributions constitutes a new, simple approach to model scalar mixing. Comparisons between global moments from the data and moments calculated using the proposed model show excellent agreement. This was attributed to the high quality of the measurements, which reduced the width of the correctly identified, noise-affected pure air distribution with respect to the turbulent mixing distribution. The ignitability of the atmospheric jet is determined using the flammability factor calculated from both kernel density estimated (KDE) PDFs and PDFs generated using the newly proposed model. Agreement between contours from both approaches is excellent. Ignitability of the under-expanded jet is also calculated using KDE PDFs. Contours are compared with those calculated by applying the atmospheric model to the under-expanded jet. Once again, agreement is excellent. This work demonstrates that self-similar scalar mixing statistics and ignitability of atmospheric jets can be accurately described by the proposed model. This description can be applied with confidence to under-expanded jets, which are more representative of leak and fuel injection scenarios.

  16. Development of a Certified Reference Material (NMIJ CRM 7203-a) for Elemental Analysis of Tap Water.

    PubMed

    Zhu, Yanbei; Narukawa, Tomohiro; Inagaki, Kazumi; Miyashita, Shin-Ichi; Kuroiwa, Takayoshi; Ariga, Tomoko; Kudo, Izumi; Koguchi, Masae; Heo, Sung Woo; Suh, Jung Ki; Lee, Kyoung-Seok; Yim, Yong-Hyeon; Lim, Youngran

    2017-01-01

    A certified reference material (CRM), NMIJ CRM 7203-a, was developed for the elemental analysis of tap water. At least two independent analytical methods were applied to characterize the certified value of each element. The elements certified in the present CRM were as follows: Al, As, B, Ca, Cd, Cr, Cu, Fe, K, Mg, Mn, Mo, Na, Ni, Pb, Rb, Sb, Se, Sr, and Zn. The certified value for each element was given as the property value ± expanded uncertainty, with a coverage factor of 2 for the expanded uncertainty. The expanded uncertainties were estimated considering the contributions of the analytical methods, the method-to-method variance, the sample homogeneity, the long-term stability, and the concentrations of the standard solutions used for calibration. The concentration of Hg (0.39 μg kg⁻¹) was given as an information value, since loss of Hg was observed when the sample was stored at room temperature and exposed to light. The certified values of selected elements were confirmed by a co-analysis carried out independently by the NMIJ (Japan) and the KRISS (Korea).

  17. Uncertainty of Passive Imager Cloud Optical Property Retrievals to Instrument Radiometry and Model Assumptions: Examples from MODIS

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Galina; Meyer, Kerry; Amarasinghe, Nandana; Arnold, G. Thomas; Zhang, Zhibo; King, Michael D.

    2013-01-01

    The optical and microphysical structure of clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS on the NASA EOS Terra and Aqua platforms, simultaneous global, daily 1 km retrievals of cloud optical thickness (COT) and effective particle radius (CER) are provided, as well as the derived water path (WP). The cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate retrieval datasets for various two-channel retrievals, typically a VIS/NIR channel paired with a 1.6, 2.1, and 3.7 μm spectral channel. The MOD06 forward model is based on a homogeneous plane-parallel cloud. In Collection 5 processing (completed in 2007, with a modified Collection 5.1 completed in 2010), pixel-level retrieval uncertainties were calculated for the following non-3-D error sources: radiometry, surface spectral albedo, and atmospheric corrections associated with model analysis uncertainties (water vapor only). The latter error source includes error correlation across the retrieval spectral channels. Estimates of uncertainty in 1° aggregated (Level-3) means were also provided, assuming unity correlation between error sources for all pixels in a grid for a single day, and zero correlation of error sources from one day to the next. In Collection 6 (expected to begin in late summer 2013) we expanded the uncertainty analysis to include: (a) scene-dependent calibration uncertainty that depends on new band- and detector-specific Level 1B uncertainties, (b) new model error sources derived from the look-up tables, which include sensitivities associated with wind direction over the ocean and uncertainties in liquid water and ice effective variance, (c) thermal emission uncertainties in the 3.7 μm band associated with cloud and surface temperatures that are needed to extract reflected solar radiation from the total radiance signal, (d) uncertainty in the solar spectral irradiance at 3.7 μm, and (e) addition of stratospheric ozone uncertainty in visible atmospheric corrections. A summary of the approach and example Collection 6 results will be shown.

  18. Uncertainty of passive imager cloud retrievals to instrument radiometry and model assumptions: Examples from MODIS Collection 6

    NASA Astrophysics Data System (ADS)

    Platnick, S.; Wind, G.; Amarasinghe, N.; Arnold, G. T.; Zhang, Z.; Meyer, K.; King, M. D.

    2013-12-01

    The optical and microphysical structure of clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness (COT) and effective particle radius (CER) are provided, as well as the derived water path (WP). The cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate retrieval datasets for various two-channel retrievals, typically a VIS/NIR channel paired with a 1.6, 2.1, and 3.7 μm spectral channel. The MOD06 forward model is derived from a homogeneous plane-parallel cloud. In Collection 5 processing (completed in 2007 with a modified Collection 5.1 completed in 2010), pixel-level retrieval uncertainties were calculated for the following non-3-D error sources: radiometry, surface spectral albedo, and atmospheric corrections associated with model analysis uncertainties (water vapor only). The latter error source includes error correlation across the retrieval spectral channels. Estimates of uncertainty in 1° aggregated (Level-3) means were also provided assuming unity correlation between error sources for all pixels in a grid for a single day, and zero correlation of error sources from one day to the next. In Collection 6 (expected to begin in late summer 2013) we expanded the uncertainty analysis to include: (a) scene-dependent calibration uncertainty that depends on new band and detector-specific Level 1B uncertainties, (b) new model error sources derived from the look-up tables which includes sensitivities associated with wind direction over the ocean and uncertainties in liquid water and ice effective variance, (c) thermal emission uncertainties in the 3.7 μm band associated with cloud and surface temperatures that are needed to extract reflected solar radiation from the total radiance signal, (d) uncertainty in the solar spectral irradiance at 3.7 μm, and (e) addition of stratospheric ozone uncertainty in visible atmospheric corrections. A summary of the approach and example Collection 6 results will be shown.

  19. Uncertainty Quantification of CFD Data Generated for a Model Scramjet Isolator Flowfield

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Axdahl, E. L.

    2017-01-01

    Computational fluid dynamics is now considered to be an indispensable tool for the design and development of scramjet engine components. Unfortunately, the quantification of uncertainties is rarely addressed with anything other than sensitivity studies, so the degree of confidence associated with the numerical results remains exclusively with the subject matter expert that generated them. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Given the limitations of current hypersonic ground test facilities, this expanded role is believed to be a requirement by some in the hypersonics community if scramjet engines are to be given serious consideration as a viable propulsion system. The present effort describes a simple, relatively low cost, nonintrusive approach to uncertainty quantification that includes the basic ingredients required to handle both aleatoric (random) and epistemic (lack of knowledge) sources of uncertainty. The nonintrusive nature of the approach allows the computational fluid dynamicist to perform the uncertainty quantification with the flow solver treated as a "black box". Moreover, a large fraction of the process can be automated, allowing the uncertainty assessment to be readily adapted into the engineering design and development workflow. In the present work, the approach is applied to a model scramjet isolator problem where the desire is to validate turbulence closure models in the presence of uncertainty. In this context, the relevant uncertainty sources are determined and accounted for to allow the analyst to delineate turbulence model-form errors from other sources of uncertainty associated with the simulation of the facility flow.

  20. The effect of sodium bicarbonate and validation of Beckman Coulter AU680 analyzers for measuring total carbon dioxide (TCO2) concentrations in horse serum.

    PubMed

    Dirikolu, Levent; Waller, Pamela; Waguespack, Mona Landry; Andrews, Frank Michael; Keowen, Michael Layne; Gaunt, Stephen David

    2017-11-01

    This study evaluated the use of Beckman Coulter AU680 analyzers for the measurement of TCO2 in horse serum, and the effect of sodium bicarbonate administration on serum TCO2 levels in resting horses. Treatment of horses with sodium bicarbonate did not result in any adverse events. The mean TCO2 concentration was significantly higher from 1 to 8 h in the sodium bicarbonate-treated horses compared to the untreated controls. Within an hour, administration of sodium bicarbonate increased the TCO2 level from 31.5 ± 2.5 (SD) to 34.0 ± 2.65 (SD) mmol/L, and at 2-8 h post-administration the TCO2 level was above the 36 mmol/L cut-off level. In all quality control analyses of the Australian standard by the Beckman Coulter AU680 analyzer, the instrument slightly overestimated the TCO2 level, but the values were in close agreement, with the mean TCO2 level being 38.03 ± 0.87 mmol/L (SD). Expanded uncertainty was calculated at different confidence levels. Based on a 99.5% confidence interval and an expanded uncertainty of 0.805% about the mean measured concentration of 38.05 mmol/L, it was estimated that any race sample with a TCO2 level higher than 38.5 mmol/L would be indicative of sodium bicarbonate administration using the Beckman Coulter AU680 analyzer in Louisiana.

  1. A new thorium-229 reference material

    DOE PAGES

    Essex, Richard M.; Mann, Jaqueline L.; Williams, Ross W.; ...

    2017-07-27

    A new reference material was characterized for 229Th molality and thorium isotope amount ratios. This reference material is intended for use in nuclear forensic analyses as an isotope dilution mass spectrometry spike. The reference material value and expanded uncertainty (k = 2) for the 229Th molality is (1.1498 ± 0.0016) × 10⁻¹⁰ mol g⁻¹ of solution. The value and expanded uncertainty (k = 2) for the n(230Th)/n(229Th) ratio is (5.18 ± 0.26) × 10⁻⁵ and for the n(232Th)/n(229Th) ratio is (3.815 ± 0.092) × 10⁻⁴.

  2. Implementation of the qualities of radiodiagnostic: mammography

    NASA Astrophysics Data System (ADS)

    Pacífico, L. C.; Magalhães, L. A. G.; Peixoto, J. G. P.; Fernandes, E.

    2018-03-01

    The objective of the present study was to evaluate the expanded uncertainty of the mammographic calibration process and to present the results of the internal audit performed at the Laboratory of Radiological Sciences (LCR). The mammographic beam qualities that serve as references at the LCR comprise two irradiation conditions: a non-attenuated beam and an attenuated beam. Both gave satisfactory results, with an expanded uncertainty of 2.1%. The internal audit was performed, and the degree of conformity with ISO/IEC 17025 was evaluated. The result of the internal audit was satisfactory. We conclude that the LCR can perform calibrations in mammography qualities for end users.

  3. A new thorium-229 reference material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Essex, Richard M.; Mann, Jaqueline L.; Williams, Ross W.

    A new reference material was characterized for 229Th molality and thorium isotope amount ratios. This reference material is intended for use in nuclear forensic analyses as an isotope dilution mass spectrometry spike. The reference material value and expanded uncertainty (k = 2) for the 229Th molality is (1.1498 ± 0.0016) × 10⁻¹⁰ mol g⁻¹ of solution. The value and expanded uncertainty (k = 2) for the n(230Th)/n(229Th) ratio is (5.18 ± 0.26) × 10⁻⁵ and for the n(232Th)/n(229Th) ratio is (3.815 ± 0.092) × 10⁻⁴.

  4. Uncertainty evaluation of dead zone of diagnostic ultrasound equipment

    NASA Astrophysics Data System (ADS)

    Souza, R. M.; Alvarenga, A. V.; Braz, D. S.; Petrella, L. I.; Costa-Felix, R. P. B.

    2016-07-01

    This paper presents a model for evaluating the measurement uncertainty of a feature used in the assessment of ultrasound images: the dead zone. The dead zone was measured by two technicians of INMETRO's Laboratory of Ultrasound using a phantom and following the standard IEC/TS 61390. The uncertainty model was proposed based on the Guide to the Expression of Uncertainty in Measurement. For the tested equipment, the results indicate a dead zone of 1.01 mm and, based on the proposed model, an expanded uncertainty of 0.17 mm. The proposed uncertainty model provides a novel way of metrologically evaluating diagnostic ultrasound imaging.

  5. Accurate Calibration and Uncertainty Estimation of the Normal Spring Constant of Various AFM Cantilevers

    PubMed Central

    Song, Yunpeng; Wu, Sen; Xu, Linyan; Fu, Xing

    2015-01-01

    Measurement of force on a micro- or nano-Newton scale is important when exploring the mechanical properties of materials in the biophysics and nanomechanical fields. The atomic force microscope (AFM) is widely used in microforce measurement. The cantilever probe works as an AFM force sensor, and the spring constant of the cantilever is of great significance to the accuracy of the measurement results. This paper presents a normal spring constant calibration method with the combined use of an electromagnetic balance and a homemade AFM head. When the cantilever presses the balance, its deflection is detected through an optical lever integrated in the AFM head. Meanwhile, the corresponding bending force is recorded by the balance. Then the spring constant can be simply calculated using Hooke’s law. During the calibration, a feedback loop is applied to control the deflection of the cantilever. Errors that may affect the stability of the cantilever could be compensated rapidly. Five types of commercial cantilevers with different shapes, stiffness, and operating modes were chosen to evaluate the performance of our system. Based on the uncertainty analysis, the expanded relative standard uncertainties of the normal spring constant of most measured cantilevers are believed to be better than 2%.

  6. Investigation of mass dependence effects for the accurate determination of molybdenum isotope amount ratios by MC-ICP-MS using synthetic isotope mixtures.

    PubMed

    Malinovsky, Dmitry; Dunn, Philip J H; Petrov, Panayot; Goenaga-Infante, Heidi

    2015-01-01

    Methodology for absolute Mo isotope amount ratio measurements by multicollector inductively coupled plasma-mass spectrometry (MC-ICP-MS) using calibration with synthetic isotope mixtures (SIMs) is presented. For the first time, synthetic isotope mixtures prepared from seven commercially available isotopically enriched molybdenum metal powders ((92)Mo, (94)Mo, (95)Mo, (96)Mo, (97)Mo, (98)Mo, and (100)Mo) are used to investigate whether instrumental mass discrimination of Mo isotopes in MC-ICP-MS is consistent with mass-dependent isotope distribution. The parent materials were dissolved and mixed as solutions to obtain mixtures with accurately known isotope amount ratios. The level of elemental impurities in the isotopically enriched molybdenum metal powders was quantified by ICP-MS using both high-resolution and reaction cell instruments to completely resolve spectral interferences. The Mo isotope amount ratio values with expanded uncertainty (k = 2), determined by MC-ICP-MS for a high-purity Mo rod from Johnson Matthey, were as follows: (92)Mo/(95)Mo = 0.9235(9), (94)Mo/(95)Mo = 0.5785(8), (96)Mo/(95)Mo = 1.0503(9), (97)Mo/(95)Mo = 0.6033(6), (98)Mo/(95)Mo = 1.5291(20), and (100)Mo/(95)Mo = 0.6130(7). A full uncertainty budget for the measurements is presented, which shows that the largest contribution to the uncertainty budget comes from the correction for elemental impurities (∼51%), followed by the contribution from weighing operations (∼26%). The atomic weight of molybdenum was calculated to be 95.947(2); the uncertainty in parentheses is the expanded uncertainty with a coverage factor of 2. A particular advantage of the developed method is that calibration factors for all six Mo isotope amount ratios involving the (95)Mo isotope were experimentally determined. This avoids any assumption of mass-dependent isotope fractionation in MC-ICP-MS, which is inherent to the double-spike method previously used for Mo isotope amount ratio measurements. However, data obtained in this study show that instrumental mass discrimination in MC-ICP-MS is consistent with mass-dependent Mo isotope fractionation. This was demonstrated by a good agreement between experimentally obtained and theoretically expected values of the exponent of isotope fractionation, β, for each triad of Mo isotopes.
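
    A worked check of the atomic-weight calculation from the ratios quoted above: abundances follow from normalizing the ratios to 95Mo, and the abundance-weighted mean of nominal literature isotope masses reproduces the reported value of 95.947 (uncertainty propagation is omitted, and the isotope masses are standard values not taken from this paper).

        # Isotope amount ratios n(iMo)/n(95Mo) from the abstract (95Mo itself is 1)
        ratios = {92: 0.9235, 94: 0.5785, 95: 1.0,
                  96: 1.0503, 97: 0.6033, 98: 1.5291, 100: 0.6130}

        # Relative atomic masses of the Mo isotopes (nominal literature values, u)
        masses = {92: 91.90681, 94: 93.90509, 95: 94.90584,
                  96: 95.90468, 97: 96.90602, 98: 97.90540, 100: 99.90748}

        # Convert ratios to isotope abundances and form the abundance-weighted mean
        total = sum(ratios.values())
        abundances = {i: r / total for i, r in ratios.items()}
        atomic_weight = sum(abundances[i] * masses[i] for i in ratios)

        print(f"Mo atomic weight = {atomic_weight:.3f}")   # ~95.947, as reported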

  7. Uncertainty Quantification of Evapotranspiration and Infiltration from Modeling and Historic Time Series at the Savannah River F-Area

    NASA Astrophysics Data System (ADS)

    Faybishenko, B.; Flach, G. P.

    2012-12-01

    The objectives of this presentation are: (a) to illustrate the application of Monte Carlo and fuzzy-probabilistic approaches for uncertainty quantification (UQ) in predictions of potential evapotranspiration (PET), actual evapotranspiration (ET), and infiltration (I), using uncertain hydrological or meteorological time series data, and (b) to compare the results of these calculations with those from field measurements at the U.S. Department of Energy Savannah River Site (SRS), near Aiken, South Carolina, USA. The UQ calculations include the evaluation of aleatory (parameter uncertainty) and epistemic (model) uncertainties. The effect of aleatory uncertainty is expressed by assigning the probability distributions of input parameters, using historical monthly averaged data from the meteorological station at the SRS. The combined effect of aleatory and epistemic uncertainties on the UQ of PET, ET, and I is then expressed by aggregating the results of calculations from multiple models using a p-box and fuzzy numbers. The uncertainty in PET is calculated using the Bair-Robertson, Blaney-Criddle, Caprio, Hargreaves-Samani, Hamon, Jensen-Haise, Linacre, Makkink, Priestly-Taylor, Penman, Penman-Monteith, Thornthwaite, and Turc models. Then, ET is calculated from the modified Budyko model, followed by calculations of I from the water balance equation. We show that probabilistic and fuzzy-probabilistic calculations using multiple models generate PET, ET, and I distributions which are well within the range of field measurements. We also show that a selection of a subset of models can be used to constrain the uncertainty quantification of PET, ET, and I.

  8. New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)

    NASA Astrophysics Data System (ADS)

    Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.

    2017-09-01

    Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for the neutron scattering angular distribution [2] and constrained Chi sensitivities for the prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in keff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries, e.g., ENDF/B, JEFF, JENDL and TENDL, allowing comparison of the broad range of results it is possible to obtain. The IRPhE database of reactor physics measurements is now also accessible within the tool, in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite is now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra, will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. The impact of such changes is verified through calculations which are compared to a 'direct' measurement found by adjustment of the original ENDF format file.
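
    The uncertainty propagation underlying such sensitivity tools is typically the "sandwich" rule: the variance of keff is the sensitivity profile folded with the nuclear data covariance matrix. The sketch below applies it to a hypothetical three-group sensitivity profile and relative covariance matrix; the numbers are placeholders, not JANIS or NDaST data.

        import numpy as np

        # Hypothetical relative sensitivity profile of keff to one reaction's cross
        # section in three coarse energy groups: S_g = (dk/k) / (dsigma/sigma)
        S = np.array([0.12, 0.05, 0.02])

        # Hypothetical relative covariance matrix of that cross section (same groups)
        C = np.array([[0.0025, 0.0010, 0.0002],
                      [0.0010, 0.0016, 0.0004],
                      [0.0002, 0.0004, 0.0009]])

        # Sandwich rule: relative variance of keff = S^T C S
        var_k = S @ C @ S
        print(f"relative uncertainty in keff: {np.sqrt(var_k) * 100:.2f} %")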

  9. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses: Criticality (k eff) Predictions

    DOE PAGES

    Scaglione, John M.; Mueller, Don E.; Wagner, John C.

    2014-12-01

    One of the most important remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of depletion and criticality calculations used in the safety evaluation—in particular, the availability and use of applicable measured data to support validation, especially for fission products (FPs). Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of clear technical basis or approach for use of the data. This paper describes a validation approach for commercial spent nuclear fuel (SNF) criticality safety (keff) evaluations based on best-available data and methods and applies the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The criticality validation approach utilizes not only available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion program to support validation of the principal actinides but also calculated sensitivities, nuclear data uncertainties, and limited available FP LCE data to predict and verify individual biases for relevant minor actinides and FPs. The results demonstrate that (a) sufficient critical experiment data exist to adequately validate keff calculations via conventional validation approaches for the primary actinides, (b) sensitivity-based critical experiment selection is more appropriate for generating accurate application model bias and uncertainty, and (c) calculated sensitivities and nuclear data uncertainties can be used for generating conservative estimates of bias for minor actinides and FPs. Results based on SCALE 6.1 and the ENDF/B-VII.0 cross-section libraries indicate that a conservative estimate of the bias for the minor actinides and FPs is 1.5% of their worth within the application model. Finally, this paper provides a detailed description of the approach and its technical bases, describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models, and provides reference bias results based on the prerelease SCALE 6.1 code package and ENDF/B-VII nuclear cross-section data.

  10. Measurement uncertainty of liquid chromatographic analyses visualized by Ishikawa diagrams.

    PubMed

    Meyer, Veronika R

    2003-09-01

    Ishikawa, or cause-and-effect, diagrams help to visualize the parameters that influence a chromatographic analysis. They therefore facilitate setting up the uncertainty budget of the analysis, which can then be expressed in mathematical form. If the uncertainty is calculated as the Gaussian sum of all uncertainty parameters, it is necessary to quantitate them all, a task that is usually not practical. The other possible approach is to use the intermediate precision as the basis of the uncertainty calculation. In this case, it is at least necessary to consider the uncertainty of the purity of the reference material in addition to the precision data. The Ishikawa diagram is then very simple, and so is the uncertainty calculation. This simplicity comes at the cost of losing information about the individual parameters that influence the measurement uncertainty.
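
    Because the simplified budget reduces to intermediate precision plus the reference-material purity, the corresponding calculation is short. A hedged illustration with invented numbers (not from the paper):

        import math

        s_intermediate = 0.012   # relative intermediate precision (1.2%), illustrative
        u_purity = 0.0029        # relative standard uncertainty of reference purity, illustrative
        u_c = math.sqrt(s_intermediate**2 + u_purity**2)   # combined standard uncertainty
        U = 2 * u_c                                        # expanded uncertainty, coverage factor k = 2
        print(f"u_c = {u_c:.4f} (relative), U (k = 2) = {U:.4f}")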

  11. Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code

    NASA Astrophysics Data System (ADS)

    Wemple, Charles; Zwermann, Winfried

    2017-09-01

    Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly-processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed and uncertainties in multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
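
    The sampling step behind this kind of propagation can be sketched as follows (a generic illustration, not the XSUSA or HELIOS2 implementation): correlated relative perturbation factors for a few-group cross section are drawn from an assumed relative covariance matrix and applied to the nominal data, one perturbed library per sample.

        import numpy as np

        rng = np.random.default_rng(1)
        n_groups = 4
        sigma_nominal = np.array([1.20, 0.85, 0.40, 0.10])   # barns, illustrative few-group data
        rel_std = np.array([0.02, 0.03, 0.05, 0.08])         # assumed relative standard deviations
        corr = np.full((n_groups, n_groups), 0.5) + 0.5 * np.eye(n_groups)   # assumed correlation matrix
        rel_cov = np.outer(rel_std, rel_std) * corr          # relative covariance matrix

        n_samples = 500
        perturb = rng.multivariate_normal(np.zeros(n_groups), rel_cov, n_samples)
        sampled_xs = sigma_nominal * (1.0 + perturb)         # one perturbed "library" per row
        print("relative spread per group:", sampled_xs.std(axis=0) / sigma_nominal)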

  12. Bridging groundwater models and decision support with a Bayesian network

    USGS Publications Warehouse

    Fienen, Michael N.; Masterson, John P.; Plant, Nathaniel G.; Gutierrez, Benjamin T.; Thieler, E. Robert

    2013-01-01

    Resource managers need to make decisions to plan for future environmental conditions, particularly sea level rise, in the face of substantial uncertainty. Many interacting processes factor into the decisions they face. Advances in process models and the quantification of uncertainty have made models a valuable tool for this purpose. Long simulation runtimes and, often, numerical instability make linking process models impractical in many cases. A method for emulating the important connections between model input and forecasts, while propagating uncertainty, has the potential to provide a bridge between complicated numerical process models and the efficiency and stability needed for decision making. We explore this using a Bayesian network (BN) to emulate a groundwater flow model. We expand on previous approaches to validating a BN by calculating forecasting skill using cross validation of a groundwater model of Assateague Island in Virginia and Maryland, USA. This BN emulation was shown to capture the important groundwater-flow characteristics and uncertainty of the groundwater system because of its connection to island morphology and sea level. Forecast power metrics associated with the validation of multiple alternative BN designs guided the selection of an optimal level of BN complexity. Assateague Island is an ideal test case for exploring a forecasting tool based on current conditions because the unique hydrogeomorphological variability of the island includes a range of settings indicative of past, current, and future conditions. The resulting BN is a valuable tool for exploring the response of groundwater conditions to sea level rise in decision support.

  13. Adaptive Governance, Uncertainty, and Risk: Policy Framing and Responses to Climate Change, Drought, and Flood.

    PubMed

    Hurlbert, Margot; Gupta, Joyeeta

    2016-02-01

    As climate change impacts result in more extreme events (such as droughts and floods), the need to understand which policies facilitate effective climate change adaptation becomes crucial. Hence, this article answers the question: How do governments and policymakers frame policy in relation to climate change, droughts, and floods and what governance structures facilitate adaptation? This research interrogates and analyzes through content analysis, supplemented by semi-structured qualitative interviews, the policy response to climate change, drought, and flood in relation to agricultural producers in four case studies in river basins in Chile, Argentina, and Canada. First, an epistemological explanation of risk and uncertainty underscores a brief literature review of adaptive governance, followed by policy framing in relation to risk and uncertainty, and an analytical model is developed. Pertinent findings of the four cases are recounted, followed by a comparative analysis. In conclusion, recommendations are made to improve policies and expand adaptive governance to better account for uncertainty and risk. This article is innovative in that it proposes an expanded model of adaptive governance in relation to "risk" that can help bridge the barrier of uncertainty in science and policy. © 2015 Society for Risk Analysis.

  14. Determining the nuclear data uncertainty on MONK10 and WIMS10 criticality calculations

    NASA Astrophysics Data System (ADS)

    Ware, Tim; Dobson, Geoff; Hanlon, David; Hiles, Richard; Mason, Robert; Perry, Ray

    2017-09-01

    The ANSWERS Software Service is developing a number of techniques to better understand and quantify uncertainty on calculations of the neutron multiplication factor, k-effective, in nuclear fuel and other systems containing fissile material. The uncertainty on the calculated k-effective arises from a number of sources, including nuclear data uncertainties, manufacturing tolerances, modelling approximations and, for Monte Carlo simulation, stochastic uncertainty. For determining the uncertainties due to nuclear data, a set of application libraries have been generated for use with the MONK10 Monte Carlo and the WIMS10 deterministic criticality and reactor physics codes. This paper overviews the generation of these nuclear data libraries by Latin hypercube sampling of JEFF-3.1.2 evaluated data based upon a library of covariance data taken from JEFF, ENDF/B, JENDL and TENDL evaluations. Criticality calculations have been performed with MONK10 and WIMS10 using these sampled libraries for a number of benchmark models of fissile systems. Results are presented which show the uncertainty on k-effective for these systems arising from the uncertainty on the input nuclear data.
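
    A bare-bones Latin hypercube sampler for such library generation might look like the sketch below (an illustration of the sampling idea only; the parameters, distributions, and scale factors are invented, not the ANSWERS data):

        import numpy as np
        from scipy.stats import norm

        def latin_hypercube(n_samples, n_dims, rng):
            # one stratified uniform draw per interval, columns permuted independently
            u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
            for j in range(n_dims):
                u[:, j] = u[rng.permutation(n_samples), j]
            return u

        rng = np.random.default_rng(42)
        u = latin_hypercube(200, 3, rng)   # stratified uniform samples in [0, 1)
        # map to normal perturbation factors for three hypothetical nuclear-data parameters
        factors = norm.ppf(u, loc=1.0, scale=[0.02, 0.05, 0.01])
        print("means:", factors.mean(axis=0), "std devs:", factors.std(axis=0))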

  15. The contribution of lot-to-lot variation to the measurement uncertainty of an LC-MS-based multi-mycotoxin assay.

    PubMed

    Stadler, David; Sulyok, Michael; Schuhmacher, Rainer; Berthiller, Franz; Krska, Rudolf

    2018-05-01

    Multi-mycotoxin determination by LC-MS is commonly based on external solvent-based or matrix-matched calibration and, if necessary, correction for the method bias. In everyday practice, the method bias (expressed as the apparent recovery R_A), which may be caused by losses during the recovery process and/or signal suppression/enhancement, is evaluated by replicate analysis of a single spiked lot of a matrix. However, R_A may vary between different lots of the same matrix (lot-to-lot variation), which can result in a higher relative expanded measurement uncertainty (U_r). We applied a straightforward procedure for the calculation of U_r from the within-laboratory reproducibility, also called intermediate precision, and the uncertainty of R_A (u_r,RA). To estimate the contribution of the lot-to-lot variation to U_r, the measurement results of one replicate each of seven different lots of figs and maize and of seven replicates of a single lot of these matrices were used to calculate U_r. The lot-to-lot variation contributed to u_r,RA, and thus to U_r, for the majority of the 66 evaluated analytes in both figs and maize. The major contributions of the lot-to-lot variation to u_r,RA were differences in analyte recovery in figs and relative matrix effects in maize. The U_r estimated from long-term participation in proficiency test schemes was 58%. Provided proper validation, a fit-for-purpose U_r of 50% was proposed for measurement results obtained by an LC-MS-based multi-mycotoxin assay, independent of the concentration of the analytes.
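
    The straightforward procedure amounts to a quadrature combination; a hedged numerical illustration follows (the values below are invented, not the study's figures):

        import math

        u_r_IP = 0.12    # relative within-laboratory reproducibility (intermediate precision), illustrative
        u_r_RA = 0.18    # relative standard uncertainty of the apparent recovery R_A, incl. lot-to-lot, illustrative
        u_r = math.sqrt(u_r_IP**2 + u_r_RA**2)   # combined relative standard uncertainty
        U_r = 2 * u_r                            # relative expanded uncertainty, k = 2
        print(f"U_r = {100 * U_r:.0f} %")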

  16. [Investigation on the homogeneity and stability of quality controlling cosmetic samples containing arsenic].

    PubMed

    Dong, Bing; Song, Yu; Fan, Wenjia; Zhu, Ying

    2010-11-01

    The aim was to study the homogeneity and stability of arsenic in quality controlling cosmetic samples. Arsenic was determined by an atomic fluorescence spectrophotometric method. The t-test and F-test were used to evaluate significant differences between within-bottle and between-bottle results for three batches. The RSDs of arsenic obtained at different times were compared with the relative expanded uncertainties to evaluate stability. The means and variances of within-bottle and between-bottle results for arsenic did not differ significantly, and the RSDs of arsenic were less than the relative expanded uncertainties. The quality controlling cosmetic samples containing arsenic were therefore considered homogeneous and stable.
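
    A minimal sketch of such a between-bottle homogeneity check using a one-way ANOVA F-test is given below; the data are synthetic and the acceptance logic is simplified relative to the study's t-test/F-test protocol.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        # three sub-samples from each of ten bottles; assumed As mass fractions in mg/kg
        bottles = [rng.normal(2.0, 0.05, 3) for _ in range(10)]
        f_stat, p_value = stats.f_oneway(*bottles)
        verdict = "no significant between-bottle difference" if p_value > 0.05 else "inhomogeneous"
        print(f"F = {f_stat:.2f}, p = {p_value:.3f} -> {verdict}")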

  17. Development code for sensitivity and uncertainty analysis of input on the MCNPX for neutronic calculation in PWR core

    NASA Astrophysics Data System (ADS)

    Hartini, Entin; Andiwijayakusuma, Dinan

    2014-09-01

    This research was carried out to develop a code for uncertainty analysis based on a statistical approach to assessing the uncertainty of input parameters. In the burn-up calculation of the fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code (MCNPX). The uncertainty method is based on probability density functions. The code was developed as a Python script that couples with MCNPX for the criticality and burn-up calculations. The simulation models the PWR core geometry with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculation uses the continuous-energy cross-section library ENDF/B-VI. Because MCNPX requires nuclear data in ACE format, interfaces were developed for obtaining ENDF nuclear data in ACE format through NJOY processing for temperature changes over a certain range.

  18. Development code for sensitivity and uncertainty analysis of input on the MCNPX for neutronic calculation in PWR core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartini, Entin, E-mail: entin@batan.go.id; Andiwijayakusuma, Dinan, E-mail: entin@batan.go.id

    2014-09-30

    This research was carried out to develop a code for uncertainty analysis based on a statistical approach to assessing the uncertainty of input parameters. In the burn-up calculation of the fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code (MCNPX). The uncertainty method is based on probability density functions. The code was developed as a Python script that couples with MCNPX for the criticality and burn-up calculations. The simulation models the PWR core geometry with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculation uses the continuous-energy cross-section library ENDF/B-VI. Because MCNPX requires nuclear data in ACE format, interfaces were developed for obtaining ENDF nuclear data in ACE format through NJOY processing for temperature changes over a certain range.

  19. Monte Carlo capabilities of the SCALE code system

    DOE PAGES

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  20. What is the acceptable hemolysis index for the measurements of plasma potassium, LDH and AST?

    PubMed

    Rousseau, Nathalie; Pige, Raphaëlle; Cohen, Richard; Pecquet, Matthieu

    2016-06-01

    Hemolysis is a cause of variability in test results for plasma potassium, LDH and AST and is a non-negligible part of measurement uncertainty. However, allowable levels of hemolysis provided by reagent suppliers take neither analytical variability (trueness and precision) nor the measurand into account. Using a calibration range of hemolysis, we measured the plasma concentrations of potassium, LDH and AST, and hemolysis indices with a Cobas C501 analyzer (Roche Diagnostics®, Meylan, France). Based on the allowable total error (according to Ricós et al.) and the expanded measurement uncertainty equation, we calculated the maximum allowable bias for two concentrations of each measurand. Finally, we determined the allowable hemolysis indices for all three measurands. We observed a linear relationship between the observed increases of concentration and hemolysis indices. The LDH measurement was the most sensitive to hemolysis, followed by AST and potassium measurements. The determination of the allowable hemolysis index depends on the targeted measurand, its concentration and the chosen level of requirement of allowable total error.

  1. NPSS Multidisciplinary Integration and Analysis

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Rasche, Joseph; Simons, Todd A.; Hoyniak, Daniel

    2006-01-01

    The objective of this task was to enhance the capability of the Numerical Propulsion System Simulation (NPSS) by expanding its reach into the high-fidelity multidisciplinary analysis area. This task investigated numerical techniques to convert between the cold static and hot running geometry of compressor blades. Numerical calculations of blade deformations were done iteratively with high fidelity flow simulations together with high fidelity structural analysis of the compressor blade. The flow simulations were performed with the Advanced Ducted Propfan Analysis (ADPAC) code, while structural analyses were performed with the ANSYS code. High fidelity analyses were used to evaluate the effects on performance of: variations in tip clearance, uncertainty in manufacturing tolerance, variable inlet guide vane scheduling, and the effects of rotational speed on the hot running geometry of the compressor blades.

  2. Quantification of uncertainty in first-principles predicted mechanical properties of solids: Application to solid ion conductors

    NASA Astrophysics Data System (ADS)

    Ahmad, Zeeshan; Viswanathan, Venkatasubramanian

    2016-08-01

    Computationally-guided material discovery is being increasingly employed using descriptor-based screening through the calculation of a few properties of interest. A precise understanding of the uncertainty associated with first-principles density functional theory calculated property values is important for the success of descriptor-based screening. The Bayesian error estimation approach has been built in to several recently developed exchange-correlation functionals, which allows an estimate of the uncertainty associated with properties related to the ground state energy, for example, adsorption energies. Here, we propose a robust and computationally efficient method for quantifying uncertainty in mechanical properties, which depend on the derivatives of the energy. The procedure involves calculating energies around the equilibrium cell volume with different strains and fitting the obtained energies to the corresponding energy-strain relationship. At each strain, we use, instead of a single energy, an ensemble of energies, giving us an ensemble of fits and thereby an ensemble of mechanical properties associated with each fit, whose spread can be used to quantify its uncertainty. The generation of the ensemble of energies is only a post-processing step involving a perturbation of parameters of the exchange-correlation functional and solving for the energy non-self-consistently. The proposed method is computationally efficient and provides a more robust uncertainty estimate compared to the approach of self-consistent calculations employing several different exchange-correlation functionals. We demonstrate the method by calculating the uncertainty bounds for several materials belonging to different classes and having different structures. We show that the calculated uncertainty bounds the property values obtained using three different GGA functionals: PBE, PBEsol, and RPBE. Finally, we apply the approach to calculate the uncertainty associated with the DFT-calculated elastic properties of solid state Li-ion and Na-ion conductors.
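
    The ensemble-fit idea can be sketched with synthetic data as below (not the authors' code): each ensemble member's energies are fitted to a quadratic in strain, and the spread of the fitted curvatures serves as the uncertainty estimate for the corresponding modulus-like quantity.

        import numpy as np

        rng = np.random.default_rng(7)
        strains = np.linspace(-0.02, 0.02, 9)
        E_true = 100.0 * strains**2                 # "true" energy-strain curve, arbitrary units
        n_ensemble = 200
        # ensemble of energies, here mimicked by adding random perturbations to the true curve
        energies = E_true + rng.normal(0.0, 0.002, (n_ensemble, strains.size))

        curvatures = []
        for e in energies:
            c2, c1, c0 = np.polyfit(strains, e, 2)  # quadratic fit per ensemble member
            curvatures.append(2.0 * c2)             # second derivative of energy at zero strain
        curvatures = np.array(curvatures)
        print(f"modulus-like quantity: {curvatures.mean():.1f} +/- {curvatures.std():.1f} (ensemble spread)")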

  3. Calculating Measurement Uncertainty of the “Conventional Value of the Result of Weighing in Air”

    DOE PAGES

    Flicker, Celia J.; Tran, Hy D.

    2016-04-02

    The conventional value of the result of weighing in air is frequently used in commercial calibrations of balances. The guidance in OIML D-028 for reporting uncertainty of the conventional value is too terse. When calibrating mass standards at low measurement uncertainties, it is necessary to perform a buoyancy correction before reporting the result. When calculating the conventional result after calibrating true mass, the uncertainty due to calculating the conventional result is correlated with the buoyancy correction. We show through Monte Carlo simulations that the measurement uncertainty of the conventional result is less than the measurement uncertainty when reporting true mass. The Monte Carlo simulation tool is available in the online version of this article.

  4. Adjoint-Based Implicit Uncertainty Analysis for Figures of Merit in a Laser Inertial Fusion Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seifried, J E; Fratoni, M; Kramer, K J

    A primary purpose of computational models is to inform design decisions and, in order to make those decisions reliably, the confidence in the results of such models must be estimated. Monte Carlo neutron transport models are common tools for reactor designers. These types of models contain several sources of uncertainty that propagate onto the model predictions. Two uncertainties worthy of note are (1) experimental and evaluation uncertainties of nuclear data that inform all neutron transport models and (2) statistical counting precision, which all results of Monte Carlo codes contain. Adjoint-based implicit uncertainty analyses allow for the consideration of any number of uncertain input quantities and their effects upon the confidence of figures of merit with only a handful of forward and adjoint transport calculations. When considering a rich set of uncertain inputs, adjoint-based methods remain hundreds of times more computationally efficient than direct Monte Carlo methods. The LIFE (Laser Inertial Fusion Energy) engine is a concept being developed at Lawrence Livermore National Laboratory. Various options exist for the LIFE blanket, depending on the mission of the design. The depleted uranium hybrid LIFE blanket design strives to close the fission fuel cycle without enrichment or reprocessing, while simultaneously achieving high discharge burnups with reduced proliferation concerns. Neutron transport results that are central to the operation of the design are tritium production for fusion fuel, fission of fissile isotopes for energy multiplication, and production of fissile isotopes for sustained power. In previous work, explicit cross-sectional uncertainty analyses were performed for reaction rates related to the figures of merit for the depleted uranium hybrid LIFE blanket. Counting precision was also quantified for both the figures of merit themselves and the cross-sectional uncertainty estimates to gauge the validity of the analysis. All cross-sectional uncertainties were small (0.1-0.8%), bounded counting uncertainties, and were precise with regard to counting precision. Adjoint/importance distributions were generated for the same reaction rates. The current work leverages those adjoint distributions to transition from explicit sensitivities, in which the neutron flux is constrained, to implicit sensitivities, in which the neutron flux responds to input perturbations. This treatment vastly expands the set of data that contribute to uncertainties to produce larger, more physically accurate uncertainty estimates.

  5. Impact of nuclear data uncertainty on safety calculations for spent nuclear fuel geological disposal

    NASA Astrophysics Data System (ADS)

    Herrero, J. J.; Rochman, D.; Leray, O.; Vasiliev, A.; Pecchia, M.; Ferroukhi, H.; Caruso, S.

    2017-09-01

    In the design of a spent nuclear fuel disposal system, one necessary condition is to show that the configuration remains subcritical at time of emplacement but also during long periods covering up to 1,000,000 years. In the context of criticality safety applying burn-up credit, k-eff eigenvalue calculations are affected by nuclear data uncertainty mainly in the burnup calculations simulating reactor operation and in the criticality calculation for the disposal canister loaded with the spent fuel assemblies. The impact of nuclear data uncertainty should be included in the k-eff value estimation to enforce safety. Estimations of the uncertainty in the discharge compositions from the CASMO5 burn-up calculation phase are employed in the final MCNP6 criticality computations for the intact canister configuration; in between, SERPENT2 is employed to get the spent fuel composition along the decay periods. In this paper, nuclear data uncertainty was propagated by Monte Carlo sampling in the burn-up, decay and criticality calculation phases and representative values for fuel operated in a Swiss PWR plant will be presented as an estimation of its impact.

  6. Adaptive Missile Flight Control for Complex Aerodynamic Phenomena

    DTIC Science & Technology

    2017-08-09

    ... at high maneuvering conditions motivate guidance approaches that can accommodate uncertainty. Flight control algorithms are one component ... performance, but system uncertainty is not directly addressed. Linear, parameter-varying [37, 38] approaches for munitions expand on optimal control by ... post-canard stall. We propose to model these complex aerodynamic mechanisms and use these models in formulating flight controllers within the ...

  7. Explicit tracking of uncertainty increases the power of quantitative rule-of-thumb reasoning in cell biology.

    PubMed

    Johnston, Iain G; Rickett, Benjamin C; Jones, Nick S

    2014-12-02

    Back-of-the-envelope or rule-of-thumb calculations involving rough estimates of quantities play a central scientific role in developing intuition about the structure and behavior of physical systems, for example in so-called Fermi problems in the physical sciences. Such calculations can be used to powerfully and quantitatively reason about biological systems, particularly at the interface between physics and biology. However, substantial uncertainties are often associated with values in cell biology, and performing calculations without taking this uncertainty into account may limit the extent to which results can be interpreted for a given problem. We present a means to facilitate such calculations where uncertainties are explicitly tracked through the line of reasoning, and introduce a probabilistic calculator called CALADIS, a free web tool, designed to perform this tracking. This approach allows users to perform more statistically robust calculations in cell biology despite having uncertain values, and to identify which quantities need to be measured more precisely to make confident statements, facilitating efficient experimental design. We illustrate the use of our tool for tracking uncertainty in several example biological calculations, showing that the results yield powerful and interpretable statistics on the quantities of interest. We also demonstrate that the outcomes of calculations may differ from point estimates when uncertainty is accurately tracked. An integral link between CALADIS and the BioNumbers repository of biological quantities further facilitates the straightforward location, selection, and use of a wealth of experimental data in cell biological calculations. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
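
    The core idea, propagating distributions rather than point estimates through a rule-of-thumb calculation, can be illustrated with a few lines of Monte Carlo (a generic stand-in, not the CALADIS implementation; the quantities and distributions below are invented):

        import numpy as np

        rng = np.random.default_rng(11)
        N = 100_000
        cell_volume = rng.lognormal(mean=np.log(2000.0), sigma=0.3, size=N)    # um^3, assumed
        protein_conc = rng.lognormal(mean=np.log(1.0e6), sigma=0.5, size=N)    # copies per um^3, assumed

        copies_per_cell = cell_volume * protein_conc                           # the rule-of-thumb product
        lo, med, hi = np.percentile(copies_per_cell, [2.5, 50, 97.5])
        print(f"copies per cell ~ {med:.2e} (95% interval {lo:.2e} to {hi:.2e})")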

  8. Report on the CCT Supplementary Comparison S1 of Infrared Spectral Normal Emittance/Emissivity

    PubMed Central

    Hanssen, Leonard; Wilthan, B.; Monte, Christian; Hollandt, Jörg; Hameury, Jacques; Filtz, Jean-Remy; Girard, Ferruccio; Battuello, Mauro; Ishii, Juntaro

    2016-01-01

    The National Measurement Institutes (NMIs) of the United States, Germany, France, Italy and Japan, have joined in an inter-laboratory comparison of their infrared spectral emittance scales. This action is part of a series of supplementary inter-laboratory comparisons (including thermal conductivity and thermal diffusivity) sponsored by the Consultative Committee on Thermometry (CCT) Task Group on Thermophysical Quantities (TG-ThQ). The objective of this collaborative work is to strengthen the major operative National Measurement Institutes’ infrared spectral emittance scales and consequently the consistency of radiative properties measurements carried out worldwide. The comparison has been performed over a spectral range of 2 μm to 14 μm, and a temperature range from 23 °C to 800 °C. Artefacts included in the comparison are potential standards: oxidized inconel, boron nitride, and silicon carbide. The measurement instrumentation and techniques used for emittance scales are unique for each NMI, including the temperature ranges covered as well as the artefact sizes required. For example, all three common types of spectral instruments are represented: dispersive grating monochromator, Fourier transform and filter-based spectrometers. More than 2000 data points (combinations of material, wavelength and temperature) were compared. Ninety-eight percent (98%) of the data points were in agreement, with differences to weighted mean values less than the expanded uncertainties calculated from the individual NMI uncertainties and uncertainties related to the comparison process. PMID:28239193

  9. Uncertainty in the delayed neutron fraction in fuel assembly depletion calculations

    NASA Astrophysics Data System (ADS)

    Aures, Alexander; Bostelmann, Friederike; Kodeli, Ivan A.; Velkov, Kiril; Zwermann, Winfried

    2017-09-01

    This study presents uncertainty and sensitivity analyses of the delayed neutron fraction of light water reactor and sodium-cooled fast reactor fuel assemblies. For these analyses, the sampling-based XSUSA methodology is used to propagate cross section uncertainties in neutron transport and depletion calculations. Cross section data is varied according to the SCALE 6.1 covariance library. Since this library includes nu-bar uncertainties only for the total values, it has been supplemented by delayed nu-bar uncertainties from the covariance data of the JENDL-4.0 nuclear data library. The neutron transport and depletion calculations are performed with the TRITON/NEWT sequence of the SCALE 6.1 package. The evolution of the delayed neutron fraction uncertainty over burn-up is analysed without and with the consideration of delayed nu-bar uncertainties. Moreover, the main contributors to the result uncertainty are determined. In all cases, the delayed nu-bar uncertainties increase the delayed neutron fraction uncertainty. Depending on the fuel composition, the delayed nu-bar values of uranium and plutonium in fact give the main contributions to the delayed neutron fraction uncertainty for the LWR fuel assemblies. For the SFR case, the uncertainty of the scattering cross section of U-238 is the main contributor.

  10. Ab Initio Values of the Thermophysical Properties of Helium as Standards

    PubMed Central

    Hurly, John J.; Moldover, Michael R.

    2000-01-01

    Recent quantum mechanical calculations of the interaction energy of pairs of helium atoms are accurate and some include reliable estimates of their uncertainty. We combined these ab initio results with earlier published results to obtain a helium-helium interatomic potential that includes relativistic retardation effects over all ranges of interaction. From this potential, we calculated the thermophysical properties of helium, i.e., the second virial coefficients, the dilute-gas viscosities, and the dilute-gas thermal conductivities of 3He, 4He, and their equimolar mixture from 1 K to 10^4 K. We also calculated the diffusion and thermal diffusion coefficients of mixtures of 3He and 4He. For the pure fluids, the uncertainties of the calculated values are dominated by the uncertainties of the potential; for the mixtures, the uncertainties of the transport properties also include contributions from approximations in the transport theory. In all cases, the uncertainties are smaller than the corresponding experimental uncertainties; therefore, we recommend the ab initio results be used as standards for calibrating instruments relying on these thermophysical properties. We present the calculated thermophysical properties in easy-to-use tabular form. PMID:27551630

  11. Determination of radionuclides in environmental test items at CPHR: traceability and uncertainty calculation.

    PubMed

    Carrazana González, J; Fernández, I M; Capote Ferrera, E; Rodríguez Castro, G

    2008-11-01

    Information is presented about how the laboratory of the Centro de Protección e Higiene de las Radiaciones (CPHR), Cuba, establishes its traceability to the International System of Units for the measurement of radionuclides in environmental test items. A comparison among different methodologies of uncertainty calculation, including an analysis of the feasibility of using the Kragten-spreadsheet approach, is shown. In the specific case of the gamma spectrometric assay, the influence of each parameter on the relative difference between the two methods of uncertainty calculation (Kragten and partial derivative), and the identification of the major contributor, are described. The reliability of the uncertainty calculation results reported by the commercial software Gamma 2000 from Silena is analyzed.
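
    The Kragten approach mentioned above is easy to reproduce numerically: each input is perturbed by its standard uncertainty, the result is recomputed, and the differences are combined in quadrature. The sketch below uses a simplified, invented measurement model, not the CPHR gamma-spectrometry model.

        import math

        def activity(counts, efficiency, mass):
            # simplified placeholder measurement model
            return counts / (efficiency * mass)

        x = {"counts": 15000.0, "efficiency": 0.25, "mass": 0.50}     # nominal values (illustrative)
        u = {"counts": 150.0, "efficiency": 0.005, "mass": 0.001}     # standard uncertainties (illustrative)

        y0 = activity(**x)
        contrib = {}
        for name in x:
            xp = dict(x)
            xp[name] += u[name]
            contrib[name] = activity(**xp) - y0      # Kragten sensitivity term for this input
        u_c = math.sqrt(sum(c**2 for c in contrib.values()))
        print(f"y = {y0:.1f}, u_c = {u_c:.1f}, contributions = "
              + str({k: round(v, 2) for k, v in contrib.items()}))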

  12. Computer-assisted uncertainty assessment of k0-NAA measurement results

    NASA Astrophysics Data System (ADS)

    Bučar, T.; Smodiš, B.

    2008-10-01

    In quantifying the measurement uncertainty of results obtained by k0-based neutron activation analysis (k0-NAA), a number of parameters should be considered and appropriately combined in deriving the final budget. To facilitate this process, a program ERON (ERror propagatiON) was developed, which computes uncertainty propagation factors from the relevant formulae and calculates the combined uncertainty. The program calculates uncertainty of the final result—mass fraction of an element in the measured sample—taking into account the relevant neutron flux parameters such as α and f, including their uncertainties. Nuclear parameters and their uncertainties are taken from the IUPAC database (V.P. Kolotov and F. De Corte, Compilation of k0 and related data for NAA). Furthermore, the program allows for uncertainty calculations of the measured parameters needed in k0-NAA: α (determined with either the Cd-ratio or the Cd-covered multi-monitor method), f (using the Cd-ratio or the bare method), Q0 (using the Cd-ratio or internal comparator method) and k0 (using the Cd-ratio, internal comparator or the Cd subtraction method). The results of calculations can be printed or exported to text or MS Excel format for further analysis. Special care was taken to make the calculation engine portable by allowing its incorporation into other applications (e.g., DLL and WWW server). The theoretical basis and the program are described in detail, and typical results obtained under real measurement conditions are presented.

  13. Evaluating measurement uncertainty in fluid phase equilibrium calculations

    NASA Astrophysics Data System (ADS)

    van der Veen, Adriaan M. H.

    2018-04-01

    The evaluation of measurement uncertainty in accordance with the ‘Guide to the expression of uncertainty in measurement’ (GUM) has not yet become widespread in physical chemistry. With only the law of the propagation of uncertainty from the GUM, many of these uncertainty evaluations would be cumbersome, as models are often non-linear and require iterative calculations. The methods from GUM supplements 1 and 2 enable the propagation of uncertainties under most circumstances. Experimental data in physical chemistry are used, for example, to derive reference property data and support trade—all applications where measurement uncertainty plays an important role. This paper aims to outline how the methods for evaluating and propagating uncertainty can be applied to some specific cases with a wide impact: deriving reference data from vapour pressure data, a flash calculation, and the use of an equation-of-state to predict the properties of both phases in a vapour-liquid equilibrium. The three uncertainty evaluations demonstrate that the methods of GUM and its supplements are a versatile toolbox that enable us to evaluate the measurement uncertainty of physical chemical measurements, including the derivation of reference data, such as the equilibrium thermodynamical properties of fluids.

  14. On the Calculation of Uncertainty Statistics with Error Bounds for CFD Calculations Containing Random Parameters and Fields

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2016-01-01

    This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bounds formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.

  15. Theoretical Grounds for the Propagation of Uncertainties in Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    Saracco, Paolo; Pia, Maria Grazia; Batic, Matej

    2014-04-01

    We introduce a theoretical framework for the calculation of uncertainties affecting observables produced by Monte Carlo particle transport, which derive from uncertainties in physical parameters input into simulation. The theoretical developments are complemented by a heuristic application, which illustrates the method of calculation in a streamlined simulation environment.

  16. On-Line Mu Method for Robust Flutter Prediction in Expanding a Safe Flight Envelope for an Aircraft Model Under Flight Test

    NASA Technical Reports Server (NTRS)

    Lind, Richard C. (Inventor); Brenner, Martin J.

    2001-01-01

    A structured singular value (mu) analysis method of computing flutter margins assesses the robust stability of a linear aeroelastic model with uncertainty operators (Delta). Flight data are used to update the uncertainty operators to accurately account for errors in the computed model and the observed range of dynamics of the aircraft under test caused by time-varying aircraft parameters, nonlinearities, and flight anomalies, such as test nonrepeatability. This mu-based approach computes predicted flutter margins that are worst case with respect to the modeling uncertainty, for use in determining when the aircraft is approaching a flutter condition and in defining an expanded safe flight envelope that is accepted with more confidence than envelopes from traditional methods, which do not update the analysis with flight data. Introducing mu as a flutter margin parameter presents several advantages over tracking damping trends as a measure of a tendency to instability from available flight data.

  17. Measurement uncertainty of the EU methods for microbiological examination of red meat.

    PubMed

    Corry, Janet E L; Hedges, Alan J; Jarvis, Basil

    2007-09-01

    Three parallel trials were made of EU methods proposed for the microbiological examination of red meat using two analysts in each of seven laboratories within the UK. The methods involved determination of aerobic colony count (ACC) and Enterobacteriaceae colony count (ECC) using simulated methods and a freeze-dried standardised culture preparation. Trial A was based on a simulated swab test, Trial B on a simulated meat excision test, and Trial C was a reference test on reconstituted inoculum. Statistical analysis (ANOVA) was carried out before and after rejection of outlying data. Expanded uncertainty values (relative standard deviation × 2) for repeatability and reproducibility, based on log10 cfu/ml, for the ACC ranged from ±2.1% to ±2.7% and from ±5.5% to ±10.5%, respectively, depending upon the test procedure. Similarly for the ECC, expanded uncertainty estimates for repeatability and reproducibility ranged from ±4.6% to ±16.9% and from ±21.6% to ±23.5%, respectively. The results are discussed in relation to the potential application of the methods.
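
    The repeatability and reproducibility figures in such trials typically come from one-way ANOVA mean squares; a hedged sketch with synthetic log10 counts (not the trial data) is shown below, expressing expanded uncertainty as twice the relative standard deviation as in the study.

        import numpy as np

        rng = np.random.default_rng(5)
        n_labs, n_rep = 7, 2
        lab_means = rng.normal(6.0, 0.10, n_labs)                     # log10 cfu/ml per laboratory, assumed
        data = np.array([rng.normal(m, 0.05, n_rep) for m in lab_means])

        grand = data.mean()
        ms_between = n_rep * ((data.mean(axis=1) - grand) ** 2).sum() / (n_labs - 1)
        ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n_labs * (n_rep - 1))
        s_r = np.sqrt(ms_within)                                                  # repeatability SD
        s_R = np.sqrt(s_r**2 + max(ms_between - ms_within, 0.0) / n_rep)          # reproducibility SD
        print(f"expanded repeatability = {200 * s_r / grand:.1f} %, "
              f"expanded reproducibility = {200 * s_R / grand:.1f} % (of the log10 value)")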

  18. Atomic Radius and Charge Parameter Uncertainty in Biomolecular Solvation Energy Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Gao, Peiyuan

    Atomic radii and charges are two major parameters used in implicit solvent electrostatics and energy calculations. The optimization problem for charges and radii is under-determined, leading to uncertainty in the values of these parameters and in the results of solvation energy calculations using these parameters. This paper presents a method for quantifying this uncertainty in solvation energies using surrogate models based on generalized polynomial chaos (gPC) expansions. There are relatively few atom types used to specify radii parameters in implicit solvation calculations; therefore, surrogate models for these low-dimensional spaces could be constructed using least-squares fitting. However, there are many more types of atomic charges; therefore, construction of surrogate models for the charge parameter space required compressed sensing combined with an iterative rotation method to enhance problem sparsity. We present results for the uncertainty in small molecule solvation energies based on these approaches. Additionally, we explore the correlation between uncertainties due to radii and charges which motivates the need for future work in uncertainty quantification methods for high-dimensional parameter spaces.

  19. CCQM-K126: low polarity organic in water: carbamazepine in surface water

    NASA Astrophysics Data System (ADS)

    Wai-mei Sin, Della; Wong, Yiu-chung; Lehmann, Andreas; Schneider, Rudolf J.; Kakoulides, Elias; Tang Lin, Teo; Qinde, Liu; Cabillic, Julie; Lardy-fontan, Sophie; Nammoonnoy, Jintana; Prevoo-Franzsen, Désirée; López, Eduardo Emilio; Alberti, Cecilia; Su, Fuhai

    2017-01-01

    The key comparison CCQM-K126, low polarity organic in water: carbamazepine in surface water, was coordinated by the Government Laboratory Hong Kong under the auspices of the Organic Analysis Working Group (OAWG) of the Comité Consultatif pour la Quantité de Matière (CCQM). Eight National Metrology Institutes or Designated Institutes participated, and participants were requested to report the mass fraction of carbamazepine in the surface water study material. The surface water sample was collected in Hong Kong and was gravimetrically spiked with a standard solution. This study provided the means for assessing measurement capabilities for the determination of low molecular weight (mass range 100-500), low polarity (pKOW ≤ -2) analytes in an aqueous matrix. Nine NMIs/DIs registered for the key comparison and one withdrew before results were submitted. Nine results were submitted by the eight participants. Eight results applied the LC-MS/MS method with an isotope dilution mass spectrometry approach for quantification. BAM additionally submitted a result from ELISA that was not included in the key comparison reference value (KCRV) calculation but is provided in the report for information. One further result was not included because the participant withdrew it from the calculation after further analysis. The KCRV was assigned as the median of the seven remaining results, 250.2 ng/kg, with a combined standard uncertainty of 3.6 ng/kg. The coverage factor for the estimation of the expanded uncertainty of the KCRV was chosen as k = 2. The degree of equivalence with the KCRV and its uncertainty were calculated for each result. Seven of the participants were able to demonstrate the ability to quantitatively determine a low-polarity analyte in an aqueous matrix at a very low level by applying the LC-MS/MS technique. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
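
    The KCRV and degree-of-equivalence arithmetic described above can be sketched as follows; the participant values and uncertainties here are invented for illustration and are not the CCQM-K126 data, and correlation between each result and the median is neglected.

        import numpy as np

        x = np.array([248.0, 251.5, 249.8, 252.3, 250.2, 247.5, 253.0])   # ng/kg, illustrative results
        u = np.array([4.0, 3.5, 5.0, 4.5, 3.8, 6.0, 4.2])                 # standard uncertainties, illustrative

        kcrv = np.median(x)
        # one common approximation for the standard uncertainty of a median
        u_kcrv = 1.2533 * np.std(x, ddof=1) / np.sqrt(len(x))
        d = x - kcrv                                   # degrees of equivalence
        U_d = 2.0 * np.sqrt(u**2 + u_kcrv**2)          # expanded uncertainty of d, k = 2
        for i, (di, Ui) in enumerate(zip(d, U_d), start=1):
            print(f"lab {i}: d = {di:+.1f} ng/kg, U(d) = {Ui:.1f} ng/kg, consistent = {abs(di) <= Ui}")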

  20. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses-Isotopic Composition Predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radulescu, Georgeta; Gauld, Ian C; Ilas, Germina

    2011-01-01

    The expanded use of burnup credit in the United States (U.S.) for storage and transport casks, particularly in the acceptance of credit for fission products, has been constrained by the availability of experimental fission product data to support code validation. The U.S. Nuclear Regulatory Commission (NRC) staff has noted that the rationale for restricting the Interim Staff Guidance on burnup credit for storage and transportation casks (ISG-8) to actinide-only is based largely on the lack of clear, definitive experiments that can be used to estimate the bias and uncertainty for computational analyses associated with using burnup credit. To address the issues of burnup credit criticality validation, the NRC initiated a project with the Oak Ridge National Laboratory to (1) develop and establish a technically sound validation approach for commercial spent nuclear fuel (SNF) criticality safety evaluations based on best-available data and methods and (2) apply the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The purpose of this paper is to describe the isotopic composition (depletion) validation approach and resulting observations and recommendations. Validation of the criticality calculations is addressed in a companion paper at this conference. For isotopic composition validation, the approach is to determine burnup-dependent bias and uncertainty in the effective neutron multiplication factor (keff) due to bias and uncertainty in isotopic predictions, via comparisons of isotopic composition predictions (calculated) and measured isotopic compositions from destructive radiochemical assay utilizing as much assay data as is available, and a best-estimate Monte Carlo based method. This paper (1) provides a detailed description of the burnup credit isotopic validation approach and its technical bases, (2) describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models to demonstrate its usage and applicability, and (3) provides reference bias and uncertainty results based on a quality-assurance-controlled prerelease version of the Scale 6.1 code package and the ENDF/B-VII nuclear cross section data.

  1. Uncertainties in predicting solar panel power output

    NASA Technical Reports Server (NTRS)

    Anspaugh, B.

    1974-01-01

    The problem of calculating solar panel power output at launch and during a space mission is considered. The major sources of uncertainty and error in predicting the post launch electrical performance of the panel are considered. A general discussion of error analysis is given. Examples of uncertainty calculations are included. A general method of calculating the effect on the panel of various degrading environments is presented, with references supplied for specific methods. A technique for sizing a solar panel for a required mission power profile is developed.

  2. Key comparison CCPR-K1.a as an interlaboratory comparison of correlated color temperature

    NASA Astrophysics Data System (ADS)

    Kärhä, P.; Vaskuri, A.; Pulli, T.; Ikonen, E.

    2018-02-01

    We analyze the results of spectral irradiance key comparison CCPR-K1.a for correlated color temperature (CCT). For four participants out of 13, the uncertainties of CCT, calculated using traditional methods, not accounting for correlations, would be too small. The reason for the failure of traditional uncertainty calculation is spectral correlations, producing systematic deviations of the same sign over certain wavelength regions. The results highlight the importance of accounting for such correlations when calculating uncertainties of spectrally integrated quantities.

  3. Examples of measurement uncertainty evaluations in accordance with the revised GUM

    NASA Astrophysics Data System (ADS)

    Runje, B.; Horvatic, A.; Alar, V.; Medic, S.; Bosnjakovic, A.

    2016-11-01

    The paper presents examples of the evaluation of uncertainty components in accordance with the current and revised Guide to the expression of uncertainty in measurement (GUM). In accordance with the proposed revision of the GUM, a Bayesian approach was applied to both Type A and Type B evaluations. The law of propagation of uncertainty (LPU) and the law of propagation of distributions, applied through the Monte Carlo method (MCM), were used to evaluate the associated standard uncertainties, expanded uncertainties and coverage intervals. Furthermore, the influence of a non-Gaussian dominant input quantity and an asymmetric distribution of the output quantity y on the evaluation of measurement uncertainty was analyzed. When the coverage interval is not probabilistically symmetric, the coverage interval for probability P is estimated from the experimental probability density function using the Monte Carlo method. Key highlights of the proposed revision of the GUM were analyzed through a set of examples.
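
    A compact way to see the difference between the two approaches is to apply both to the same simple non-linear model with a rectangular (non-Gaussian) dominant input; the sketch below is illustrative only and uses invented values.

        import numpy as np

        rng = np.random.default_rng(9)
        a0, u_a = 10.0, 0.2                 # Gaussian input quantity
        b0, half = 2.0, 0.3                 # dominant input: rectangular distribution, half-width 0.3
        u_b = half / np.sqrt(3.0)

        # LPU (first order) for y = a / b
        y0 = a0 / b0
        u_y_lpu = np.sqrt((u_a / b0) ** 2 + (a0 * u_b / b0**2) ** 2)

        # MCM (GUM Supplement 1 style): propagate the actual distributions
        N = 200_000
        a = rng.normal(a0, u_a, N)
        b = rng.uniform(b0 - half, b0 + half, N)
        y = a / b
        lo, hi = np.percentile(y, [2.5, 97.5])
        print(f"LPU: y = {y0:.3f} +/- {2 * u_y_lpu:.3f} (k = 2)")
        print(f"MCM: mean = {y.mean():.3f}, 95% coverage interval = ({lo:.3f}, {hi:.3f})")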

  4. Decay heat uncertainty for BWR used fuel due to modeling and nuclear data uncertainties

    DOE PAGES

    Ilas, Germina; Liljenfeldt, Henrik

    2017-05-19

    Characterization of the energy released from radionuclide decay in nuclear fuel discharged from reactors is essential for the design, safety, and licensing analyses of used nuclear fuel storage, transportation, and repository systems. There are a limited number of decay heat measurements available for commercial used fuel applications. Because decay heat measurements can be expensive or impractical for covering the multitude of existing fuel designs, operating conditions, and specific application purposes, decay heat estimation relies heavily on computer code prediction. Uncertainty evaluation for calculated decay heat is an important aspect when assessing code prediction and a key factor supporting decision making for used fuel applications. While previous studies have largely focused on uncertainties in code predictions due to nuclear data uncertainties, this study discusses uncertainties in calculated decay heat due to uncertainties in assembly modeling parameters as well as in nuclear data. Capabilities in the SCALE nuclear analysis code system were used to quantify the effect on calculated decay heat of uncertainties in nuclear data and selected manufacturing and operation parameters for a typical boiling water reactor (BWR) fuel assembly. Furthermore, the BWR fuel assembly used as the reference case for this study was selected from a set of assemblies for which high-quality decay heat measurements are available, to assess the significance of the results through comparison with calculated and measured decay heat data.

  5. Decay heat uncertainty for BWR used fuel due to modeling and nuclear data uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilas, Germina; Liljenfeldt, Henrik

    Characterization of the energy released from radionuclide decay in nuclear fuel discharged from reactors is essential for the design, safety, and licensing analyses of used nuclear fuel storage, transportation, and repository systems. There are a limited number of decay heat measurements available for commercial used fuel applications. Because decay heat measurements can be expensive or impractical for covering the multitude of existing fuel designs, operating conditions, and specific application purposes, decay heat estimation relies heavily on computer code prediction. Uncertainty evaluation for calculated decay heat is an important aspect when assessing code prediction and a key factor supporting decision making for used fuel applications. While previous studies have largely focused on uncertainties in code predictions due to nuclear data uncertainties, this study discusses uncertainties in calculated decay heat due to uncertainties in assembly modeling parameters as well as in nuclear data. Capabilities in the SCALE nuclear analysis code system were used to quantify the effect on calculated decay heat of uncertainties in nuclear data and selected manufacturing and operation parameters for a typical boiling water reactor (BWR) fuel assembly. Furthermore, the BWR fuel assembly used as the reference case for this study was selected from a set of assemblies for which high-quality decay heat measurements are available, to assess the significance of the results through comparison with calculated and measured decay heat data.

  6. Cross section measurements for neutron inelastic scattering and the (n, 2nγ) reaction on 206Pb

    DOE PAGES

    Negret, A.; Mihailescu, L. C.; Borcea, C.; ...

    2015-06-30

    We measured excitation functions for γ production associated with the neutron inelastic scattering and the (n, 2n) reactions on 206Pb from threshold up to 18 MeV for about 40 transitions. Two independent measurements were performed using different samples and acquisition systems to check consistency of the results. Moreover, the neutron flux was determined with a 235U fission chamber and a procedure that were validated against a fluence standard. For incident energy higher than the threshold for the first excited level and up to 3.5 MeV, estimates are provided for the total inelastic and level cross sections by combining the present γ production cross sections with the level and decay data of 206Pb reported in the literature. The uncertainty common to all incident energies is 3.0%, allowing overall uncertainties from 3.3% to 30% depending on transition and neutron energy. Finally, the present data agree well with earlier work, but significantly expand the experimental database, while comparisons with model calculations using the TALYS reaction code show good agreement over the full energy range.

  7. Flowing-water optical power meter for primary-standard, multi-kilowatt laser power measurements

    NASA Astrophysics Data System (ADS)

    Williams, P. A.; Hadler, J. A.; Cromer, C.; West, J.; Li, X.; Lehman, J. H.

    2018-06-01

    A primary-standard flowing-water optical power meter for measuring multi-kilowatt laser emission has been built and operated. The design and operational details of this primary standard are described, and a full uncertainty analysis is provided covering the measurement range from 1 kW to 10 kW with an expanded uncertainty of 1.2%. Validating measurements at 5 kW and 10 kW show agreement with other measurement techniques to within the measurement uncertainty. This work of the U.S. Government is not subject to U.S. copyright.
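
    A flowing-water optical power meter of this kind infers absorbed laser power calorimetrically from the water flow and its temperature rise. As a hedged illustration only (the measurement equation P = ṁ·cp·ΔT is the generic flowing-water calorimetry relation, and every number below is an invented placeholder rather than a value from the paper), a minimal uncertainty-budget sketch might look like this:

    ```python
    import math

    # Hypothetical flowing-water calorimeter readings (illustrative values only).
    m_dot = 0.050      # water mass flow rate [kg/s]
    cp = 4181.0        # specific heat of water [J/(kg*K)]
    delta_T = 23.9     # temperature rise across the absorber [K]

    # Measurement equation assumed for illustration: absorbed optical power.
    P = m_dot * cp * delta_T  # [W]

    # Assumed relative standard uncertainties of the inputs (not from the paper).
    u_rel = {"flow": 0.004, "cp": 0.001, "delta_T": 0.003}

    # Combine in quadrature (uncorrelated inputs) and expand with k = 2.
    u_c_rel = math.sqrt(sum(u**2 for u in u_rel.values()))
    U_rel = 2.0 * u_c_rel

    print(f"P = {P/1000:.2f} kW, expanded uncertainty (k=2) = {100*U_rel:.2f} %")
    ```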

  8. Approach for validating actinide and fission product compositions for burnup credit criticality safety analyses

    DOE PAGES

    Radulescu, Georgeta; Gauld, Ian C.; Ilas, Germina; ...

    2014-11-01

    This paper describes a depletion code validation approach for criticality safety analysis using burnup credit for actinide and fission product nuclides in spent nuclear fuel (SNF) compositions. The technical basis for determining the uncertainties in the calculated nuclide concentrations is comparison of calculations to available measurements obtained from destructive radiochemical assay of SNF samples. Probability distributions developed for the uncertainties in the calculated nuclide concentrations were applied to the SNF compositions of a criticality safety analysis model by the use of a Monte Carlo uncertainty sampling method to determine bias and bias uncertainty in the effective neutron multiplication factor. Application of the Monte Carlo uncertainty sampling approach is demonstrated for representative criticality safety analysis models of pressurized water reactor spent fuel pool storage racks and transportation packages using burnup-dependent nuclide concentrations calculated with SCALE 6.1 and the ENDF/B-VII nuclear data. Furthermore, the validation approach and results support a recent revision of the U.S. Nuclear Regulatory Commission Interim Staff Guidance 8.

  9. TH-A-19A-04: Latent Uncertainties and Performance of a GPU-Implemented Pre-Calculated Track Monte Carlo Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, M; Seuntjens, J; Roberge, D

    Purpose: Assessing the performance and uncertainty of a pre-calculated Monte Carlo (PMC) algorithm for proton and electron transport running on graphics processing units (GPU). While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from recycling a limited number of tracks in the pre-generated track bank is missing from the literature. With a proper uncertainty analysis, an optimal pre-generated track bank size can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pre-generated for electrons and protons using EGSnrc and GEANT4, respectively. The PMC algorithm for track transport was implemented on the CUDA programming framework. GPU-PMC dose distributions were compared to benchmark dose distributions simulated using general-purpose MC codes in the same conditions. A latent uncertainty analysis was performed by comparing GPU-PMC dose values to a “ground truth” benchmark while varying the track bank size and primary particle histories. Results: GPU-PMC dose distributions and benchmark doses were within 1% of each other in voxels with dose greater than 50% of Dmax. In proton calculations, a submillimeter distance-to-agreement error was observed at the Bragg Peak. Latent uncertainty followed a Poisson distribution with the number of tracks per energy (TPE), and a track bank of 20,000 TPE produced a latent uncertainty of approximately 1%. Efficiency analysis showed a 937× and 508× gain over a single processor core running DOSXYZnrc for 16 MeV electrons in water and bone, respectively. Conclusion: The GPU-PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty below 1%. The track bank size necessary to achieve an optimal efficiency can be tuned based on the desired uncertainty. Coupled with a model to calculate dose contributions from uncharged particles, GPU-PMC is a candidate for inverse planning of modulated electron radiotherapy and scanned proton beams. This work was supported in part by FRSQ-MSSS (Grant No. 22090), NSERC RG (Grant No. 432290), and CIHR MOP (Grant No. MOP-211360).
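
    The reported scaling of latent uncertainty with track-bank size suggests a simple way to pick the bank size for a target uncertainty. The sketch below assumes a 1/√TPE (Poisson-like) scaling anchored to the ~1% at 20,000 TPE reported in the abstract; the scaling law and the targets are illustrative assumptions, not part of the published method:

    ```python
    import math

    # Reference point reported in the abstract: ~1% latent uncertainty at 20,000 TPE.
    REF_TPE = 20_000
    REF_LATENT = 0.01

    def latent_uncertainty(tpe: int) -> float:
        """Assumed Poisson-like scaling: latent uncertainty ~ 1/sqrt(TPE)."""
        return REF_LATENT * math.sqrt(REF_TPE / tpe)

    def tpe_for_target(target: float) -> int:
        """Smallest track-bank size (TPE) giving the requested latent uncertainty."""
        return math.ceil(REF_TPE * (REF_LATENT / target) ** 2)

    for target in (0.02, 0.01, 0.005):
        print(f"target {target:.1%} -> {tpe_for_target(target):,} tracks per energy")
    ```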

  10. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.

  11. The undatables: Quantifying uncertainty in a highly expanded Late Glacial-Holocene sediment sequence recovered from the deepest Baltic Sea basin—IODP Site M0063

    NASA Astrophysics Data System (ADS)

    Obrochta, S. P.; Andrén, T.; Fazekas, S. Z.; Lougheed, B. C.; Snowball, I.; Yokoyama, Y.; Miyairi, Y.; Kondo, R.; Kotilainen, A. T.; Hyttinen, O.; Fehr, A.

    2017-03-01

    Laminated, organic-rich silts and clays with high dissolved gas content characterize sediments at IODP Site M0063 in the Landsort Deep, which at 459 m is the deepest basin in the Baltic Sea. Cores recovered from Hole M0063A experienced significant expansion as gas was released during the recovery process, resulting in high sediment loss. Therefore, during operations at subsequent holes, penetration was reduced to 2 m per 3.3 m core, permitting expansion into 1.3 m of initially empty liner. Fully filled liners were recovered from Holes B through E, indicating that the length of recovered intervals exceeded the penetrated distance by a factor of >1.5. A typical down-core logarithmic trend in gamma density profiles, with anomalously low-density values within the upper ˜1 m of each core, suggests that expansion primarily occurred in this upper interval. Thus, we suggest that a simple linear correction is inappropriate. This interpretation is supported by anisotropy of magnetic susceptibility data that indicate vertical stretching in the upper ˜1.5 m of expanded cores. Based on the mean gamma density profiles of cores from Holes M0063C and D, we obtain an expansion function that is used to adjust the depth of each core to conform to its known penetration. The variance in these profiles allows for quantification of uncertainty in the adjusted depth scale. Using a number of bulk 14C dates, we explore how the presence of multiple carbon source pathways leads to poorly constrained radiocarbon reservoir age variability that significantly affects age and sedimentation rate calculations.

  12. How much is new information worth? Evaluating the financial benefit of resolving management uncertainty

    USGS Publications Warehouse

    Maxwell, Sean L.; Rhodes, Jonathan R.; Runge, Michael C.; Possingham, Hugh P.; Ng, Chooi Fei; McDonald Madden, Eve

    2015-01-01

    Conservation decision-makers face a trade-off between spending limited funds on direct management action, or gaining new information in an attempt to improve management performance in the future. Value-of-information analysis can help to resolve this trade-off by evaluating how much management performance could improve if new information was gained. Value-of-information analysis has been used extensively in other disciplines, but there are only a few examples where it has informed conservation planning, none of which have used it to evaluate the financial value of gaining new information. We address this gap by applying value-of-information analysis to the management of a declining koala (Phascolarctos cinereus) population. Decision-makers responsible for managing this population face uncertainty about survival and fecundity rates, and how habitat cover affects mortality threats. The value of gaining new information about these uncertainties was calculated using a deterministic matrix model of the koala population to find the expected population growth rate if koala mortality threats were optimally managed under alternative model hypotheses, which represented the uncertainties faced by koala managers. Gaining new information about survival and fecundity rates and the effect of habitat cover on mortality threats will do little to improve koala management. Across a range of management budgets, no more than 1.7% of the budget should be spent on resolving these uncertainties. The value of information was low because optimal management decisions were not sensitive to the uncertainties we considered. Decisions were instead driven by a substantial difference in the cost efficiency of management actions. The value of information was up to forty times higher when the cost efficiencies of different koala management actions were similar. Synthesis and applications. This study evaluates the ecological and financial benefits of gaining new information to inform a conservation problem. We also theoretically demonstrate that the value of reducing uncertainty is highest when it is not clear which management action is the most cost efficient. This study will help expand the use of value-of-information analyses in conservation by providing a cost efficiency metric by which to evaluate research or monitoring.
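
    The expected value of perfect information (EVPI) underlying this kind of analysis can be written in a few lines. The sketch below uses an invented payoff table and model weights purely to show the calculation; it is not the koala matrix model from the study:

    ```python
    import numpy as np

    # Rows: management actions; columns: alternative model hypotheses.
    # Entries are an illustrative performance measure (e.g. expected growth rate).
    payoff = np.array([
        [0.95, 0.90, 0.88],   # action A
        [0.93, 0.94, 0.91],   # action B
        [0.80, 0.85, 0.96],   # action C
    ])
    weights = np.array([0.4, 0.3, 0.3])  # assumed credibility of each model

    # Expected value with perfect information: pick the best action per model.
    ev_perfect = np.sum(weights * payoff.max(axis=0))

    # Expected value under uncertainty: pick the single best action on average.
    ev_uncertain = (payoff @ weights).max()

    evpi = ev_perfect - ev_uncertain
    print(f"EVPI = {evpi:.4f} ({100 * evpi / ev_uncertain:.1f}% of current value)")
    ```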

  13. Smoke Flow Visualisation and Particle Image Velocimetry Measurements over a Generic Submarine Model

    DTIC Science & Technology

    2014-03-01

    The total PIV velocity uncertainty (Eu) is obtained by combining the particle-displacement uncertainty (Edisp), the scaling uncertainty (Escale), and the timing uncertainty (Etime) in quadrature. The precision uncertainty in the vorticity calculated in this study is related to the velocity uncertainty by Eω,pres = λ·Eu (C.3), and the total uncertainty in the vorticity combines the precision and bias contributions in quadrature: Eω² = Eω,pres² + Eω,bias² (C.6).
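
    These quadrature combinations are easy to check numerically. In the sketch below, all component values and the stencil factor λ are invented placeholders, not values from the report:

    ```python
    import math

    # Assumed relative velocity-uncertainty components (illustrative values only).
    E_disp, E_scale, E_time = 0.010, 0.004, 0.002

    # Total PIV velocity uncertainty, components combined in quadrature.
    E_u = math.sqrt(E_disp**2 + E_scale**2 + E_time**2)

    # Vorticity uncertainty: precision term scaled from E_u by a stencil factor
    # lambda (eq. C.3), combined with an assumed bias term (eq. C.6).
    lam = 0.6          # placeholder for the differentiation-stencil factor
    E_w_pres = lam * E_u
    E_w_bias = 0.005   # assumed bias contribution
    E_w = math.sqrt(E_w_pres**2 + E_w_bias**2)

    print(f"E_u = {E_u:.4f}, E_omega = {E_w:.4f} (relative)")
    ```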

  14. The uncertainty of reference standards--a guide to understanding factors impacting uncertainty, uncertainty calculations, and vendor certifications.

    PubMed

    Gates, Kevin; Chang, Ning; Dilek, Isil; Jian, Huahua; Pogue, Sherri; Sreenivasan, Uma

    2009-10-01

    Certified solution standards are widely used in forensic toxicological, clinical/diagnostic, and environmental testing. Typically, these standards are purchased as ampouled solutions with a certified concentration. Vendors present concentration and uncertainty differently on their Certificates of Analysis. Understanding the factors that impact uncertainty and which factors have been considered in the vendor's assignment of uncertainty are critical to understanding the accuracy of the standard and the impact on testing results. Understanding these variables is also important for laboratories seeking to comply with ISO/IEC 17025 requirements and for those preparing reference solutions from neat materials at the bench. The impact of uncertainty associated with the neat material purity (including residual water, residual solvent, and inorganic content), mass measurement (weighing techniques), and solvent addition (solution density) on the overall uncertainty of the certified concentration is described along with uncertainty calculations.
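
    The uncertainty components listed (purity, mass measurement, solvent addition) are typically combined in quadrature and then expanded with a coverage factor. A minimal sketch with invented numbers (the concentration and component uncertainties are illustrative assumptions, not vendor data):

    ```python
    import math

    # Hypothetical preparation of a certified solution standard (illustrative only).
    purity = 0.995          # mass fraction of analyte in the neat material
    mass_mg = 10.00         # weighed mass of neat material [mg]
    volume_mL = 10.00       # solvent volume [mL]
    conc = purity * mass_mg / volume_mL  # certified concentration [mg/mL]

    # Assumed relative standard uncertainties of each factor.
    u_rel = {
        "purity (water/solvent/inorganics)": 0.0030,
        "mass measurement (balance)":        0.0010,
        "solvent addition (density/volume)": 0.0015,
    }

    u_c = conc * math.sqrt(sum(u**2 for u in u_rel.values()))
    U = 2.0 * u_c  # expanded uncertainty, coverage factor k = 2

    print(f"c = {conc:.4f} mg/mL, U (k=2) = {U:.4f} mg/mL")
    ```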

  15. A contribution to the calculation of measurement uncertainty and optimization of measuring strategies in coordinate measurement

    NASA Astrophysics Data System (ADS)

    Waeldele, F.

    1983-01-01

    The influence of sample shape deviations on the measurement uncertainties and the optimization of computer-aided coordinate measurement were investigated for a circle and a cylinder. Using the complete error propagation law in matrix form, the parameter uncertainties are calculated, taking the correlation between the measurement points into account. Theoretical investigations show that the measuring points have to be equidistantly distributed and that, for a cylindrical body, a measuring point distribution along a cross section is better than along a helical line. The theoretically obtained expressions for calculating the uncertainties prove to be a good basis for estimation. The simple error theory is not satisfactory for estimation. The complete statistical data analysis theory helps to avoid aggravating measurement errors and to adjust the number of measuring points to the required measurement uncertainty.

  16. Uncertainty Calculations in the First Introductory Physics Laboratory

    NASA Astrophysics Data System (ADS)

    Rahman, Shafiqur

    2005-03-01

    Uncertainty in a measured quantity is an integral part of reporting any experimental data. Consequently, Introductory Physics laboratories at many institutions require that students report the values of the quantities being measured as well as their uncertainties. Unfortunately, given that there are three main ways of calculating uncertainty, each suitable for particular situations (which is usually not explained in the lab manual), this is also an area that students feel highly confused about. It frequently generates a large number of complaints in the end-of-semester course evaluations. Students at some institutions are not asked to calculate uncertainty at all, which gives them a false sense of the nature of experimental data. Taking advantage of the increased sophistication in the use of computers and spreadsheets that students are coming to college with, we have completely restructured our first Introductory Physics Lab to address this problem. Always in the context of a typical lab, we now systematically and sequentially introduce the various ways of calculating uncertainty, including a theoretical understanding as opposed to a cookbook approach, all within the context of six three-hour labs. Complaints about the lab in student evaluations have dropped by 80%. *Supported by a grant from the A. V. Davis Foundation.
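
    One of the standard approaches such a lab sequence introduces is quadrature propagation of independent uncertainties through a derived quantity. A short sketch for the density of a measured cylinder (all measured values are invented for illustration):

    ```python
    import math

    # Hypothetical student measurements: mass and cylinder dimensions (mean, u).
    m, u_m = 52.3, 0.1        # grams
    d, u_d = 1.250, 0.005     # cm (diameter)
    h, u_h = 4.00, 0.02       # cm (height)

    # Derived quantity: density of the cylinder.
    rho = m / (math.pi * (d / 2) ** 2 * h)

    # Quadrature propagation for a product/quotient with powers:
    # (u_rho/rho)^2 = (u_m/m)^2 + (2*u_d/d)^2 + (u_h/h)^2
    u_rho = rho * math.sqrt((u_m / m) ** 2 + (2 * u_d / d) ** 2 + (u_h / h) ** 2)

    print(f"rho = {rho:.3f} +/- {u_rho:.3f} g/cm^3")
    ```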

  17. Equation of State for the Thermodynamic Properties of trans-1,3,3,3-Tetrafluoropropene [R-1234ze(E)]

    NASA Astrophysics Data System (ADS)

    Thol, Monika; Lemmon, Eric W.

    2016-03-01

    An equation of state for the calculation of the thermodynamic properties of the hydrofluoroolefin refrigerant R-1234ze(E) is presented. The equation of state (EOS) is expressed in terms of the Helmholtz energy as a function of temperature and density. The formulation can be used for the calculation of all thermodynamic properties through the use of derivatives of the Helmholtz energy. Comparisons to experimental data are given to establish the uncertainty of the EOS. The equation of state is valid from the triple point (169 K) to 420 K, with pressures to 100 MPa. The uncertainty in density in the liquid and vapor phases is 0.1 % from 200 K to 420 K at all pressures. The uncertainty increases outside of this temperature region and in the critical region. In the gaseous phase, speeds of sound can be calculated with an uncertainty of 0.05 %. In the liquid phase, the uncertainty in speed of sound increases to 0.1 %. The estimated uncertainty for liquid heat capacities is 5 %. The uncertainty in vapor pressure is 0.1 %.

  18. Managing risk and expected financial return from selective expansion of operating room capacity: mean-variance analysis of a hospital's portfolio of surgeons.

    PubMed

    Dexter, Franklin; Ledolter, Johannes

    2003-07-01

    Surgeons using the same amount of operating room (OR) time differ in their achieved hospital contribution margins (revenue minus variable costs) by >1000%. Thus, to improve the financial return from perioperative facilities, OR strategic decisions should selectively focus additional OR capacity and capital purchasing on a few surgeons or subspecialties. These decisions use estimates of each surgeon's and/or subspecialty's contribution margin per OR hour. The estimates are subject to uncertainty (e.g., from outliers). We account for the uncertainties by using mean-variance portfolio analysis (i.e., quadratic programming). This method characterizes the problem of selectively expanding OR capacity based on the expected financial return and risk of different portfolios of surgeons. The assessment reveals whether the choices, of which surgeons have their OR capacity expanded, are sensitive to the uncertainties in the surgeons' contribution margins per OR hour. Thus, mean-variance analysis reduces the chance of making strategic decisions based on spurious information. We also assess the financial benefit of using mean-variance portfolio analysis when the planned expansion of OR capacity is well diversified over at least several surgeons or subspecialties. Our results show that, in such circumstances, there may be little benefit from further changing the portfolio to reduce its financial risk. Surgeon- and subspecialty-specific hospital financial data are uncertain, a fact that should be taken into account when making decisions about expanding operating room capacity. We show that mean-variance portfolio analysis can incorporate this uncertainty, thereby guiding operating room management decision-making and reducing the chance of a strategic decision being made based on spurious information.
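
    The mean-variance framing treats each surgeon's contribution margin per OR hour as an uncertain return with a covariance structure. The sketch below uses invented margins and covariances and a brute-force enumeration in place of quadratic programming, purely to show how expected return and risk trade off across allocations of additional OR hours:

    ```python
    import itertools
    import numpy as np

    # Hypothetical estimates: contribution margin per OR hour for three surgeons
    # (mean) and its covariance (reflecting estimation uncertainty and outliers).
    mu = np.array([1800.0, 1500.0, 1200.0])          # $/OR-hour
    cov = np.array([[9.0e4, 1.0e4, 0.0],
                    [1.0e4, 4.0e4, 0.5e4],
                    [0.0,   0.5e4, 1.0e4]])

    best = None
    # Enumerate allocations of 10 additional OR hours in whole hours (toy stand-in
    # for the quadratic-programming step in a real mean-variance analysis).
    for alloc in itertools.product(range(11), repeat=3):
        if sum(alloc) != 10:
            continue
        w = np.array(alloc, dtype=float)
        ret = w @ mu                    # expected added contribution margin
        risk = float(w @ cov @ w)       # variance of that margin
        if ret >= 16000 and (best is None or risk < best[0]):
            best = (risk, ret, alloc)

    risk, ret, alloc = best
    print(f"hours per surgeon {alloc}: E[margin] = ${ret:,.0f}, sd = ${risk**0.5:,.0f}")
    ```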

  19. [Evaluation of measurement uncertainty of welding fume in welding workplace of a shipyard].

    PubMed

    Ren, Jie; Wang, Yanrang

    2015-12-01

    To evaluate the measurement uncertainty of welding fume in the air of the welding workplace of a shipyard, and to provide quality assurance for measurement. According to GBZ/T 192.1-2007 "Determination of dust in the air of workplace-Part 1: Total dust concentration" and JJF 1059-1999 "Evaluation and expression of measurement uncertainty", the uncertainty for determination of welding fume was evaluated and the measurement results were completely described. The concentration of welding fume was 3.3 mg/m(3), and the expanded uncertainty was 0.24 mg/m(3). The repeatability for determination of dust concentration introduced an uncertainty of 1.9%, the measurement using electronic balance introduced a standard uncertainty of 0.3%, and the measurement of sample quality introduced a standard uncertainty of 3.2%. During the determination of welding fume, the standard uncertainty introduced by the measurement of sample quality is the dominant uncertainty. In the process of sampling and measurement, quality control should be focused on the collection efficiency of dust, air humidity, sample volume, and measuring instruments.
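
    The reported figures are consistent with combining the relative component uncertainties in quadrature and expanding with a coverage factor of k = 2 (the coverage factor is an assumption; the abstract does not state it explicitly):

    ```python
    import math

    conc = 3.3                          # measured welding-fume concentration [mg/m^3]
    components = (0.019, 0.003, 0.032)  # repeatability, balance, sample quality

    u_rel = math.sqrt(sum(c**2 for c in components))  # combined relative std. uncertainty
    U = 2.0 * u_rel * conc                            # expanded uncertainty, assumed k = 2

    print(f"combined u = {100*u_rel:.1f} %, U = {U:.2f} mg/m^3")  # ~0.25, close to the reported 0.24
    ```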

  20. Uncertainty analysis of absorbed dose calculations from thermoluminescence dosimeters.

    PubMed

    Kirby, T H; Hanson, W F; Johnston, D A

    1992-01-01

    Thermoluminescence dosimeters (TLD) are widely used to verify absorbed doses delivered from radiation therapy beams. Specifically, they are used by the Radiological Physics Center for mailed dosimetry for verification of therapy machine output. The effects of the random experimental uncertainties of various factors on dose calculations from TLD signals are examined, including: fading, dose response nonlinearity, and energy response corrections; reproducibility of TL signal measurements; and TLD reader calibration. Individual uncertainties are combined to estimate the total uncertainty due to random fluctuations. The Radiological Physics Center's (RPC) mail-out TLD system, utilizing throwaway LiF powder to monitor high-energy photon and electron beam outputs, is analyzed in detail. The technique may also be applicable to other TLD systems. It is shown that statements of +/- 2% dose uncertainty and +/- 5% action criterion for TLD dosimetry are reasonable when related to uncertainties in the dose calculations, provided the standard deviation (s.d.) of TL readings is 1.5% or better.

  1. Nuclear Data Uncertainty Propagation in Depletion Calculations Using Cross Section Uncertainties in One-group or Multi-group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Díez, C.J., E-mail: cj.diez@upm.es; Cabellos, O.; Instituto de Fusión Nuclear, Universidad Politécnica de Madrid, 28006 Madrid

    Several approaches have been developed in recent decades to tackle nuclear data uncertainty propagation problems in burn-up calculations. One approach proposed was the Hybrid Method, where uncertainties in nuclear data are propagated only on the depletion part of a burn-up problem. Because only depletion is addressed, only one-group cross sections are necessary and, hence, their collapsed one-group uncertainties. This approach has been applied successfully in several advanced reactor systems like EFIT (ADS-like reactor) or ESFR (sodium fast reactor) to assess uncertainties on the isotopic composition. However, a comparison with using multi-group energy structures was not carried out, and has to be performed in order to analyse the limitations of using one-group uncertainties.

  2. Nuclear Data Uncertainty Propagation in Depletion Calculations Using Cross Section Uncertainties in One-group or Multi-group

    NASA Astrophysics Data System (ADS)

    Díez, C. J.; Cabellos, O.; Martínez, J. S.

    2015-01-01

    Several approaches have been developed in recent decades to tackle nuclear data uncertainty propagation problems in burn-up calculations. One approach proposed was the Hybrid Method, where uncertainties in nuclear data are propagated only on the depletion part of a burn-up problem. Because only depletion is addressed, only one-group cross sections are necessary and, hence, their collapsed one-group uncertainties. This approach has been applied successfully in several advanced reactor systems like EFIT (ADS-like reactor) or ESFR (sodium fast reactor) to assess uncertainties on the isotopic composition. However, a comparison with using multi-group energy structures was not carried out, and has to be performed in order to analyse the limitations of using one-group uncertainties.

  3. Nuclear Data Uncertainties for Typical LWR Fuel Assemblies and a Simple Reactor Core

    NASA Astrophysics Data System (ADS)

    Rochman, D.; Leray, O.; Hursin, M.; Ferroukhi, H.; Vasiliev, A.; Aures, A.; Bostelmann, F.; Zwermann, W.; Cabellos, O.; Diez, C. J.; Dyrda, J.; Garcia-Herranz, N.; Castro, E.; van der Marck, S.; Sjöstrand, H.; Hernandez, A.; Fleming, M.; Sublet, J.-Ch.; Fiorito, L.

    2017-01-01

    The impact of the current nuclear data library covariances, such as those in ENDF/B-VII.1, JEFF-3.2, JENDL-4.0, SCALE and TENDL, on relevant current reactors is presented in this work. The uncertainties due to nuclear data are calculated for existing PWR and BWR fuel assemblies (with burn-up up to 40 GWd/tHM, followed by 10 years of cooling time) and for a simplified PWR full core model (without burn-up) for quantities such as k∞, macroscopic cross sections, pin power or isotope inventory. In this work, the method of propagation of uncertainties is based on random sampling of nuclear data, either from covariance files or directly from basic parameters. Additionally, possible biases on calculated quantities are investigated, such as the self-shielding treatment. Different calculation schemes are used, based on CASMO, SCALE, DRAGON, MCNP or FISPACT-II, thus simulating real-life assignments for technical-support organizations. The outcome of such a study is a comparison of uncertainties with two consequences. One: although this study is not expected to lead to similar results between the involved calculation schemes, it provides an insight into what can happen when calculating uncertainties and gives some perspective on the range of validity of these uncertainties. Two: it allows a picture to be drawn of the current state of knowledge, using existing nuclear data library covariances and current methods.
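
    The random-sampling approach described perturbs the nuclear data consistently with their covariances, reruns the calculation for each sample, and reads the spread of the outputs as the nuclear-data uncertainty. A generic sketch (the covariance matrix and the model function are toy stand-ins for evaluated covariance data and a lattice/depletion code):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy nominal nuclear data (e.g. a few group cross sections) and an assumed
    # relative covariance matrix for them (stand-ins, not evaluated data).
    xs_nominal = np.array([1.20, 0.35, 2.10])
    rel_cov = np.array([[4.0e-4, 1.0e-4, 0.0],
                        [1.0e-4, 9.0e-4, 2.0e-4],
                        [0.0,    2.0e-4, 2.5e-4]])
    cov = rel_cov * np.outer(xs_nominal, xs_nominal)

    def model(xs):
        """Placeholder for the transport/depletion calculation (e.g. k-infinity)."""
        return 1.0 + 0.05 * xs[0] - 0.02 * xs[1] + 0.01 * xs[2]

    samples = rng.multivariate_normal(xs_nominal, cov, size=1000)
    k_values = np.array([model(s) for s in samples])

    print(f"k = {k_values.mean():.5f} +/- {k_values.std(ddof=1):.5f} (1 sigma, nuclear data)")
    ```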

  4. Analysis of the uncertainties in the physical calculations of water-moderated power reactors of the VVER type by the parameters of models of preparing few-group constants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryukhin, V. V., E-mail: bryuhin@yandex.ru; Kurakin, K. Yu.; Uvakin, M. A.

    The article covers the uncertainty analysis of the physical calculations of the VVER reactor core for different meshes of the reference values of the feedback parameters (FBP). Various numbers of nodes of the parametric axes of FBPs and different ranges between them are investigated. The uncertainties of the dynamic calculations are analyzed using RTS RCCA ejection as an example within the framework of the model with the boundary conditions at the core inlet and outlet.

  5. Communicating climate science to a suspicious public: How best to explain what we know?

    NASA Astrophysics Data System (ADS)

    Conway, E. M.; Jackson, R.

    2014-12-01

    In 2007, the Jet Propulsion Laboratory decided to establish a climate science website aimed at explaining what scientists know about climate science, and what they don't, to the English-speaking public. Because of my prior work in the history of atmospheric and climate sciences, I was asked to help choose the data that would be displayed on the site and to write the basic text. Our site went "live" in 2008, and quickly attracted both widespread media attention and sponsorship from NASA, which funded us to expand it into the NASA Climate Change website, climate.nasa.gov. It's now generally the 3rd or 4th ranked climate change website in Google rankings. A perusal of the NASA Climate Change website will reveal that the word "uncertainty" does not appear in its explanatory essays. "Uncertainty," in science, is a calculated quantity. To calculate it, one must know quite a bit about the phenomenon in question. In vernacular use, "uncertainty" means something like "stuff we don't know." These are radically different meanings, and yet scientists and their institutions routinely use both meanings without clarification. Even without the deliberate disinformation campaigns that Oreskes and Conway have documented in Merchants of Doubt, scientists' own misuse of this one word would produce public confusion. We chose to use other words to overcome this one communications problem. But other aspects of the climate communications problem cannot be so easily overcome in the context of Federal agency communications. In this paper, we'll review recent research on ways to improve public understanding of science, and set it against the restrictions that exist on Federal agency communications—avoidance of political statements and interpretation, focusing on fact over storytelling, narrowness of context—to help illuminate the difficulty of improving public understanding of complex, policy-relevant phenomena like climate change.

  6. A radiation quality correction factor kQ for well-type ionization chambers for the measurement of the reference air kerma rate of 60Co HDR brachytherapy sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schüller, Andreas, E-mail: andreas.schueller@ptb.de; Meier, Markus; Selbach, Hans-Joachim

    Purpose: The aim of this study was to investigate whether a chamber-type-specific radiation quality correction factor k{sub Q} can be determined in order to measure the reference air kerma rate of {sup 60}Co high-dose-rate (HDR) brachytherapy sources with acceptable uncertainty by means of a well-type ionization chamber calibrated for {sup 192}Ir HDR sources. Methods: The calibration coefficients of 35 well-type ionization chambers of two different chamber types for radiation fields of {sup 60}Co and {sup 192}Ir HDR brachytherapy sources were determined experimentally. A radiation quality correction factor k{sub Q} was determined as the ratio of the calibration coefficients for {sup 60}Co and {sup 192}Ir. The dependence on chamber-to-chamber variations, source-to-source variations, and source strength was investigated. Results: For the PTW Tx33004 (Nucletron source dosimetry system (SDS)) well-type chamber, the type-specific radiation quality correction factor k{sub Q} is 1.19. Note that this value is valid only for chambers with serial number SN ≥ 315 (Nucletron SDS SN ≥ 548). For the Standard Imaging HDR 1000 Plus well-type chambers, the type-specific correction factor k{sub Q} is 1.05. Both k{sub Q} values are independent of the source strengths in the complete clinically relevant range. The relative expanded uncertainty (k = 2) of k{sub Q} is U{sub k{sub Q}} = 2.1% for both chamber types. Conclusions: The calibration coefficient of a well-type chamber for radiation fields of {sup 60}Co HDR brachytherapy sources can be calculated from a given calibration coefficient for {sup 192}Ir radiation by using a chamber-type-specific radiation quality correction factor k{sub Q}. However, the uncertainty of a {sup 60}Co calibration coefficient calculated via k{sub Q} is at least twice as large as that for a direct calibration with a {sup 60}Co source.

  7. A radiation quality correction factor k for well-type ionization chambers for the measurement of the reference air kerma rate of (60)Co HDR brachytherapy sources.

    PubMed

    Schüller, Andreas; Meier, Markus; Selbach, Hans-Joachim; Ankerhold, Ulrike

    2015-07-01

    The aim of this study was to investigate whether a chamber-type-specific radiation quality correction factor kQ can be determined in order to measure the reference air kerma rate of (60)Co high-dose-rate (HDR) brachytherapy sources with acceptable uncertainty by means of a well-type ionization chamber calibrated for (192)Ir HDR sources. The calibration coefficients of 35 well-type ionization chambers of two different chamber types for radiation fields of (60)Co and (192)Ir HDR brachytherapy sources were determined experimentally. A radiation quality correction factor kQ was determined as the ratio of the calibration coefficients for (60)Co and (192)Ir. The dependence on chamber-to-chamber variations, source-to-source variations, and source strength was investigated. For the PTW Tx33004 (Nucletron source dosimetry system (SDS)) well-type chamber, the type-specific radiation quality correction factor kQ is 1.19. Note that this value is valid only for chambers with serial number SN ≥ 315 (Nucletron SDS SN ≥ 548). For the Standard Imaging HDR 1000 Plus well-type chambers, the type-specific correction factor kQ is 1.05. Both kQ values are independent of the source strengths in the complete clinically relevant range. The relative expanded uncertainty (k = 2) of kQ is UkQ = 2.1% for both chamber types. The calibration coefficient of a well-type chamber for radiation fields of (60)Co HDR brachytherapy sources can be calculated from a given calibration coefficient for (192)Ir radiation by using a chamber-type-specific radiation quality correction factor kQ. However, the uncertainty of a (60)Co calibration coefficient calculated via kQ is at least twice as large as that for a direct calibration with a (60)Co source.
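
    The correction factor is simply the ratio of the two calibration coefficients, and the uncertainty of a 60Co coefficient obtained via kQ combines the 192Ir calibration uncertainty with that of kQ. In the sketch below, only kQ = 1.05 and U(kQ) = 2.1% are taken from the abstract; the 192Ir calibration coefficient and its uncertainty are invented placeholders:

    ```python
    import math

    # From the abstract (Standard Imaging HDR 1000 Plus): kQ and its uncertainty.
    k_Q = 1.05
    U_kQ_rel = 0.021          # expanded, k = 2
    u_kQ_rel = U_kQ_rel / 2.0

    # Assumed direct Ir-192 calibration coefficient and its relative uncertainty.
    N_Ir = 4.50e5             # arbitrary units, illustrative only
    u_NIr_rel = 0.007

    # Co-60 calibration coefficient derived via kQ, with quadrature combination.
    N_Co = k_Q * N_Ir
    u_NCo_rel = math.sqrt(u_NIr_rel**2 + u_kQ_rel**2)

    print(f"N_Co = {N_Co:.3e}, relative expanded uncertainty (k=2) = {200*u_NCo_rel:.2f} %")
    ```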

  8. Development of Argon Isotope Reference Standards for the U.S. Geological Survey

    PubMed Central

    Miiller, Archie P.

    2006-01-01

    The comparison of physical ages of geological materials measured by laboratories engaged in geochronological studies has been limited by the accuracy of mineral standards or monitors for which reported ages have differed by as much as 2 %. In order to address this problem, the U.S. Geological Survey is planning to calibrate the conventional 40Ar/40K age of a new preparation of an international hornblende standard labeled MMhb-2. The 40K concentration in MMhb-2 has already been determined by the Analytical Chemistry Division at NIST with an uncertainty of 0.2 %. The 40Ar concentration will be measured by the USGS using the argon isotope reference standards that were recently developed by NIST and are described in this paper. The isotope standards were constructed in the form of pipette/reservoir systems and calibrated by gas expansion techniques to deliver small high-precision aliquots of high-purity argon. Two of the pipette systems will deliver aliquots of 38Ar having initial molar quantities of 1.567 × 10−10 moles and 2.313 × 10−10 moles with expanded (k = 2) uncertainties of 0.058 % and 0.054 %, respectively. Three other pipette systems will deliver aliquots (nominally 4 × 10−10 moles) of 40Ar:36Ar artificial mixtures with similar accuracy and with molar ratios of 0.9974 ± 0.06 %, 29.69 ± 0.06 %, and 285.7 ± 0.08 % (k = 2). These isotope reference standards will enable the USGS to measure the 40Ar concentration in MMhb-2 with an expanded uncertainty of ≈ 0.1 %. In the process of these measurements, the USGS will re-determine the isotopic composition of atmospheric Ar and calculate a new value for its atomic weight. Upon completion of the USGS calibrations, the MMhb-2 mineral standard will be certified by NIST for its K and Ar concentrations and distributed as a Standard Reference Material (SRM). The new SRM and the NIST-calibrated transportable pipette systems have the potential for dramatically improving the accuracy of interlaboratory calibrations and thereby the measured ages of geological materials, by as much as a factor of ten. PMID:27274937

  9. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    USGS Publications Warehouse

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined in WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions adopted in the loss calculations. This is a sensitivity study aimed at future regional earthquake source modelers, so that they may be informed of the effects on loss introduced by modeling assumptions and epistemic uncertainty in the WG02 earthquake source model.

  10. Calculation of atmospheric neutrino flux using the interaction model calibrated with atmospheric muon data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Honda, M.; Kajita, T.; Kasahara, K.

    2007-02-15

    Using the 'modified DPMJET-III' model explained in the previous paper [T. Sanuki et al., preceding Article, Phys. Rev. D 75, 043005 (2007).], we calculate the atmospheric neutrino flux. The calculation scheme is almost the same as HKKM04 [M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 70, 043008 (2004).], but the usage of the 'virtual detector' is improved to reduce the error due to it. Then we study the uncertainty of the calculated atmospheric neutrino flux summarizing the uncertainties of individual components of the simulation. The uncertainty of K-production in the interaction model is estimated using other interaction models: FLUKA'97 and FRITIOF 7.02, and modifying them so that they also reproduce the atmospheric muon flux data correctly. The uncertainties of the flux ratio and zenith angle dependence of the atmospheric neutrino flux are also studied.

  11. Effective UV radiation from model calculations and measurements

    NASA Technical Reports Server (NTRS)

    Feister, Uwe; Grewe, Rolf

    1994-01-01

    Model calculations have been made to simulate the effect of atmospheric ozone and geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values as measured by Dobson spectrophotometer and Brewer spectrometer as well as turbidity were used as input to the model calculation. The performance of the model was tested by spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, by the uncertainty of input data to the model and by the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation on the biosphere and on air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space and time. The comparability of action spectra and their uncertainties are also addressed.

  12. First-Principles Calculation of the Third Virial Coefficient of Helium

    PubMed Central

    Garberoglio, Giovanni; Harvey, Allan H.

    2009-01-01

    Knowledge of the pair and three-body potential-energy surfaces of helium is now sufficient to allow calculation of the third density virial coefficient, C(T), with significantly smaller uncertainty than that of existing experimental data. In this work, we employ the best available pair and three-body potentials for helium and calculate C(T) with path-integral Monte Carlo (PIMC) calculations supplemented by semiclassical calculations. The values of C(T) presented extend from 24.5561 K to 10 000 K. In the important metrological range of temperatures near 273.16 K, our uncertainties are smaller than the best experimental results by approximately an order of magnitude, and the reduction in uncertainty at other temperatures is at least as great. For convenience in calculation of C(T) and its derivatives, a simple correlating equation is presented. PMID:27504226

  13. An algorithm for U-Pb isotope dilution data reduction and uncertainty propagation

    NASA Astrophysics Data System (ADS)

    McLean, N. M.; Bowring, J. F.; Bowring, S. A.

    2011-06-01

    High-precision U-Pb geochronology by isotope dilution-thermal ionization mass spectrometry is integral to a variety of Earth science disciplines, but its ultimate resolving power is quantified by the uncertainties of calculated U-Pb dates. As analytical techniques have advanced, formerly small sources of uncertainty are increasingly important, and thus previous simplifications for data reduction and uncertainty propagation are no longer valid. Although notable previous efforts have treated propagation of correlated uncertainties for the U-Pb system, the equations, uncertainties, and correlations have been limited in number and subject to simplification during propagation through intermediary calculations. We derive and present a transparent U-Pb data reduction algorithm that transforms raw isotopic data and measured or assumed laboratory parameters into the isotopic ratios and dates geochronologists interpret without making assumptions about the relative size of sample components. To propagate uncertainties and their correlations, we describe, in detail, a linear algebraic algorithm that incorporates all input uncertainties and correlations without limiting or simplifying covariance terms to propagate them through intermediate calculations. Finally, a weighted mean algorithm is presented that utilizes matrix elements from the uncertainty propagation algorithm to propagate random and systematic uncertainties for data comparison between other U-Pb labs and other geochronometers. The linear uncertainty propagation algorithms are verified with Monte Carlo simulations of several typical analyses. We propose that our algorithms be considered by the community for implementation to improve the collaborative science envisioned by the EARTHTIME initiative.
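
    The core of such a linear uncertainty-propagation algorithm is the covariance sandwich Σy = J·Σx·Jᵀ applied to the full input covariance matrix. The sketch below uses an invented stand-in reduction function and covariance, not the actual U-Pb equations:

    ```python
    import numpy as np

    def f(x):
        """Stand-in data-reduction function mapping inputs to output quantities."""
        ratio, blank, spike = x
        corrected = (ratio - blank) * spike
        date_like = np.log(1.0 + corrected)      # e.g. an age-like quantity
        return np.array([corrected, date_like])

    def jacobian(f, x, eps=1e-7):
        """Numerical Jacobian of f at x (forward differences)."""
        y0 = f(x)
        J = np.zeros((y0.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps * max(1.0, abs(x[j]))
            J[:, j] = (f(x + dx) - y0) / dx[j]
        return J

    x = np.array([0.0520, 0.0004, 17.2])            # measured/assumed inputs
    Sx = np.array([[2.5e-9, 1.0e-10, 0.0],           # input covariance (with a
                   [1.0e-10, 4.0e-10, 0.0],          # correlated blank term)
                   [0.0,      0.0,    1.0e-4]])

    J = jacobian(f, x)
    Sy = J @ Sx @ J.T                                # propagated output covariance
    print("output values:", f(x))
    print("output std. uncertainties:", np.sqrt(np.diag(Sy)))
    print("output correlation:", Sy[0, 1] / np.sqrt(Sy[0, 0] * Sy[1, 1]))
    ```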

  14. A systematic uncertainty analysis of an evaluative fate and exposure model.

    PubMed

    Hertwich, E G; McKone, T E; Pease, W S

    2000-08-01

    Multimedia fate and exposure models are widely used to regulate the release of toxic chemicals, to set cleanup standards for contaminated sites, and to evaluate emissions in life-cycle assessment. CalTOX, one of these models, is used to calculate the potential dose, an outcome that is combined with the toxicity of the chemical to determine the Human Toxicity Potential (HTP), used to aggregate and compare emissions. The comprehensive assessment of the uncertainty in the potential dose calculation in this article serves to provide the information necessary to evaluate the reliability of decisions based on the HTP. A framework for uncertainty analysis in multimedia risk assessment is proposed and evaluated with four types of uncertainty. Parameter uncertainty is assessed through Monte Carlo analysis. The variability in landscape parameters is assessed through a comparison of potential dose calculations for different regions in the United States. Decision rule uncertainty is explored through a comparison of the HTP values under open and closed system boundaries. Model uncertainty is evaluated through two case studies, one using alternative formulations for calculating the plant concentration and the other testing the steady state assumption for wet deposition. This investigation shows that steady state conditions for the removal of chemicals from the atmosphere are not appropriate and result in an underestimate of the potential dose for 25% of the 336 chemicals evaluated.

  15. Approach to determine measurement uncertainty in complex nanosystems with multiparametric dependencies and multivariate output quantities

    NASA Astrophysics Data System (ADS)

    Hampel, B.; Liu, B.; Nording, F.; Ostermann, J.; Struszewski, P.; Langfahl-Klabes, J.; Bieler, M.; Bosse, H.; Güttler, B.; Lemmens, P.; Schilling, M.; Tutsch, R.

    2018-03-01

    In many cases, the determination of the measurement uncertainty of complex nanosystems provides unexpected challenges. This is in particular true for complex systems with many degrees of freedom, i.e. nanosystems with multiparametric dependencies and multivariate output quantities. The aim of this paper is to address specific questions arising during the uncertainty calculation of such systems. This includes the division of the measurement system into subsystems and the distinction between systematic and statistical influences. We demonstrate that, even if the physical systems under investigation are very different, the corresponding uncertainty calculation can always be realized in a similar manner. This is exemplarily shown in detail for two experiments, namely magnetic nanosensors and ultrafast electro-optical sampling of complex time-domain signals. For these examples the approach for uncertainty calculation following the guide to the expression of uncertainty in measurement (GUM) is explained, in which correlations between multivariate output quantities are captured. To illustrate the versatility of the proposed approach, its application to other experiments, namely nanometrological instruments for terahertz microscopy, dimensional scanning probe microscopy, and measurement of concentration of molecules using surface enhanced Raman scattering, is briefly discussed in the appendix. We believe that the proposed approach provides a simple but comprehensive orientation for uncertainty calculation in the discussed measurement scenarios and can also be applied to similar or related situations.

  16. Uncertainty in hydrological signatures

    NASA Astrophysics Data System (ADS)

    McMillan, Hilary; Westerberg, Ida

    2015-04-01

    Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing responses of different catchments to understand catchment organisation and similarity, and for many other modelling and water-management applications. Such information types derived as an index value from observed data are known as hydrological signatures, and can include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), the flow variability, flow duration curve, and runoff ratio. Because the hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitude and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures. The study was made for two data-rich catchments, the 50 km2 Mahurangi catchment in New Zealand and the 135 km2 Brue catchment in the UK. For rainfall data the uncertainty sources included point measurement uncertainty, the number of gauges used in calculation of the catchment spatial average, and uncertainties relating to lack of quality control. For flow data the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve. The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty magnitude and bias, and to test how uncertainty depended on the density of the raingauge network and flow gauging station characteristics. The uncertainties were sometimes large (i.e. typical intervals of ±10-40% relative uncertainty) and highly variable between signatures. Uncertainty in the mean discharge was around ±10% for both catchments, while signatures describing the flow variability had much higher uncertainties in the Mahurangi where there was a fast rainfall-runoff response and greater high-flow rating uncertainty. Event and total runoff ratios had uncertainties from ±10% to ±15% depending on the number of rain gauges used; precipitation uncertainty was related to interpolation rather than point uncertainty. Uncertainty distributions in these signatures were skewed, and meant that differences in signature values between these catchments were often not significant. We hope that this study encourages others to use signatures in a way that is robust to data uncertainty.
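
    The Monte Carlo approach described perturbs the observed data within their estimated errors and recalculates each signature for every realization. A compact sketch (synthetic flow series and a simple 10% multiplicative rating-error model, both chosen purely for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic daily flow record (m^3/s) standing in for observed data.
    days = 3650
    q_true = 5.0 + 4.0 * np.sin(2 * np.pi * np.arange(days) / 365.0) + rng.gamma(2.0, 1.0, days)

    def signatures(q):
        """A few common hydrological signatures."""
        return {
            "mean_flow": q.mean(),
            "q95_low_flow": np.percentile(q, 5),     # flow exceeded 95% of the time
            "flow_variability": q.std() / q.mean(),  # coefficient of variation
        }

    # Monte Carlo: apply an assumed 10% (1 sigma) multiplicative rating error.
    n_mc = 2000
    results = {k: [] for k in signatures(q_true)}
    for _ in range(n_mc):
        q_pert = q_true * rng.normal(1.0, 0.10, size=days)
        for k, v in signatures(q_pert).items():
            results[k].append(v)

    for k, vals in results.items():
        vals = np.asarray(vals)
        print(f"{k}: {vals.mean():.3f} +/- {100 * vals.std() / vals.mean():.1f} %")
    ```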

  17. Monte Carlo calculations of k{sub Q}, the beam quality conversion factor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muir, B. R.; Rogers, D. W. O.

    2010-11-15

    Purpose: To use EGSnrc Monte Carlo simulations to directly calculate beam quality conversion factors, k{sub Q}, for 32 cylindrical ionization chambers over a range of beam qualities and to quantify the effect of systematic uncertainties on Monte Carlo calculations of k{sub Q}. These factors are required to use the TG-51 or TRS-398 clinical dosimetry protocols for calibrating external radiotherapy beams. Methods: Ionization chambers are modeled either from blueprints or manufacturers' user's manuals. The dose-to-air in the chamber is calculated using the EGSnrc user-code egs_chamber using 11 different tabulated clinical photon spectra for the incident beams. The dose to a small volume of water is also calculated in the absence of the chamber at the midpoint of the chamber on its central axis. Using a simple equation, k{sub Q} is calculated from these quantities under the assumption that W/e is constant with energy and compared to TG-51 protocol and measured values. Results: Polynomial fits to the Monte Carlo calculated k{sub Q} factors as a function of beam quality expressed as %dd(10){sub x} and TPR{sub 10}{sup 20} are given for each ionization chamber. Differences are explained between Monte Carlo calculated values and values from the TG-51 protocol or calculated using the computer program used for TG-51 calculations. Systematic uncertainties in calculated k{sub Q} values are analyzed and amount to a maximum of one standard deviation uncertainty of 0.99% if one assumes that photon cross-section uncertainties are uncorrelated and 0.63% if they are assumed correlated. The largest components of the uncertainty are the constancy of W/e and the uncertainty in the cross-section for photons in water. Conclusions: It is now possible to calculate k{sub Q} directly using Monte Carlo simulations. Monte Carlo calculations for most ionization chambers give results which are comparable to TG-51 values. Discrepancies can be explained using individual Monte Carlo calculations of various correction factors which are more accurate than previously used values. For small ionization chambers with central electrodes composed of high-Z materials, the effect of the central electrode is much larger than that for the aluminum electrodes in Farmer chambers.

  18. A detailed description of the uncertainty analysis for high area ratio rocket nozzle tests at the NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth J.; Dieck, Ronald H.; Chuang, Isaac

    1987-01-01

    A preliminary uncertainty analysis was performed for the High Area Ratio Rocket Nozzle test program which took place at the altitude test capsule of the Rocket Engine Test Facility at the NASA Lewis Research Center. Results from the study establish the uncertainty of measured and calculated parameters required for the calculation of rocket engine specific impulse. A generalized description of the uncertainty methodology used is provided. Specific equations and a detailed description of the analysis is presented. Verification of the uncertainty analysis model was performed by comparison with results from the experimental program's data reduction code. Final results include an uncertainty for specific impulse of 1.30 percent. The largest contributors to this uncertainty were calibration errors from the test capsule pressure and thrust measurement devices.

  19. A detailed description of the uncertainty analysis for High Area Ratio Rocket Nozzle tests at the NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth J.; Dieck, Ronald H.; Chuang, Isaac

    1987-01-01

    A preliminary uncertainty analysis has been performed for the High Area Ratio Rocket Nozzle test program which took place at the altitude test capsule of the Rocket Engine Test Facility at the NASA Lewis Research Center. Results from the study establish the uncertainty of measured and calculated parameters required for the calculation of rocket engine specific impulse. A generalized description of the uncertainty methodology used is provided. Specific equations and a detailed description of the analysis are presented. Verification of the uncertainty analysis model was performed by comparison with results from the experimental program's data reduction code. Final results include an uncertainty for specific impulse of 1.30 percent. The largest contributors to this uncertainty were calibration errors from the test capsule pressure and thrust measurement devices.

  20. The Interplay between Uncertainty Monitoring and Working Memory: Can Metacognition Become Automatic?

    PubMed Central

    Coutinho, Mariana V. C.; Redford, Joshua S.; Church, Barbara A.; Zakrzewski, Alexandria C.; Couchman, Justin J.; Smith, J. David

    2016-01-01

    The uncertainty response has grounded the study of metacognition in nonhuman animals. Recent research has explored the processes supporting uncertainty monitoring in monkeys. It revealed that uncertainty responding in contrast to perceptual responding depends on significant working memory resources. The aim of the present study was to expand this research by examining whether uncertainty monitoring is also working memory demanding in humans. To explore this issue, human participants were tested with or without a cognitive load on a psychophysical discrimination task including either an uncertainty response (allowing the decline of difficult trials) or a middle-perceptual response (labeling the same intermediate trial levels). The results demonstrated that cognitive load reduced uncertainty responding, but increased middle responding. However, this dissociation between uncertainty and middle responding was only observed when participants either lacked training or had very little training with the uncertainty response. If more training was provided, the effect of load was small. These results suggest that uncertainty responding is resource demanding, but with sufficient training, human participants can respond to uncertainty either by using minimal working memory resources or effectively sharing resources. These results are discussed in relation to the literature on animal and human metacognition. PMID:25971878

  1. Uncertainty, robustness, and the value of information in managing an expanding Arctic goose population

    USGS Publications Warehouse

    Johnson, Fred A.; Jensen, Gitte H.; Madsen, Jesper; Williams, Byron K.

    2014-01-01

    We explored the application of dynamic-optimization methods to the problem of pink-footed goose (Anser brachyrhynchus) management in western Europe. We were especially concerned with the extent to which uncertainty in population dynamics influenced an optimal management strategy, the gain in management performance that could be expected if uncertainty could be eliminated or reduced, and whether an adaptive or robust management strategy might be most appropriate in the face of uncertainty. We combined three alternative survival models with three alternative reproductive models to form a set of nine annual-cycle models for pink-footed geese. These models represent a wide range of possibilities concerning the extent to which demographic rates are density dependent or independent, and the extent to which they are influenced by spring temperatures. We calculated state-dependent harvest strategies for these models using stochastic dynamic programming and an objective function that maximized sustainable harvest, subject to a constraint on desired population size. As expected, attaining the largest mean objective value (i.e., the relative measure of management performance) depended on the ability to match a model-dependent optimal strategy with its generating model of population dynamics. The nine models suggested widely varying objective values regardless of the harvest strategy, with the density-independent models generally producing higher objective values than models with density-dependent survival. In the face of uncertainty as to which of the nine models is most appropriate, the optimal strategy assuming that both survival and reproduction were a function of goose abundance and spring temperatures maximized the expected minimum objective value (i.e., maxi–min). In contrast, the optimal strategy assuming equal model weights minimized the expected maximum loss in objective value. The expected value of eliminating model uncertainty was an increase in objective value of only 3.0%. This value represents the difference between the best that could be expected if the most appropriate model were known and the best that could be expected in the face of model uncertainty. The value of eliminating uncertainty about the survival process was substantially higher than that associated with the reproductive process, which is consistent with evidence that variation in survival is more important than variation in reproduction in relatively long-lived avian species. Comparing the expected objective value if the most appropriate model were known with that of the maxi–min robust strategy, we found the value of eliminating uncertainty to be an expected increase of 6.2% in objective value. This result underscores the conservatism of the maxi–min rule and suggests that risk-neutral managers would prefer the optimal strategy that maximizes expected value, which is also the strategy that is expected to minimize the maximum loss (i.e., a strategy based on equal model weights). The low value of information calculated for pink-footed geese suggests that a robust strategy (i.e., one in which no learning is anticipated) could be as nearly effective as an adaptive one (i.e., a strategy in which the relative credibility of models is assessed through time). Of course, an alternative explanation for the low value of information is that the set of population models we considered was too narrow to represent key uncertainties in population dynamics. 
Yet we know that questions about the presence of density dependence must be central to the development of a sustainable harvest strategy. And while there are potentially many environmental covariates that could help explain variation in survival or reproduction, our admission of models in which vital rates are drawn randomly from reasonable distributions represents a worst-case scenario for management. We suspect that much of the value of the various harvest strategies we calculated is derived from the fact that they are state dependent, such that appropriate harvest rates depend on population abundance and weather conditions, as well as our focus on an infinite time horizon for sustainability.
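
    The abstract's comparison of expected values, maxi-min strategies, and the value of eliminating model uncertainty can be condensed into a small payoff-matrix calculation. The sketch below is illustrative only: the objective values, the equal model weights, and the strategy labels are invented, not taken from the pink-footed goose analysis.

        import numpy as np

        # Hypothetical payoff matrix: rows = candidate harvest strategies, columns =
        # alternative population models, entries = long-term objective value.
        payoff = np.array([
            [10.2,  9.1,  8.7],   # strategy optimised for model 1
            [ 9.5,  9.8,  8.9],   # strategy optimised for model 2
            [ 9.0,  9.2,  9.4],   # strategy optimised for model 3
        ])
        weights = np.array([1 / 3, 1 / 3, 1 / 3])          # equal model weights

        # Expected value under model uncertainty: best strategy by weighted mean payoff.
        ev_uncertain = (payoff @ weights).max()

        # Expected value with perfect information: best strategy per model, averaged.
        ev_perfect = (payoff.max(axis=0) * weights).sum()

        evpi = ev_perfect - ev_uncertain                   # expected value of perfect information
        maximin = int(payoff.min(axis=1).argmax())         # maxi-min (most conservative) strategy

        print(f"EVPI = {evpi:.3f} ({100 * evpi / ev_uncertain:.1f}% of the expected value)")
        print(f"maxi-min strategy index: {maximin}")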

  2. Safety envelope for load tolerance of structural element design based on multi-stage testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Chanyoung; Kim, Nam H.

    Structural elements, such as stiffened panels and lap joints, are basic components of aircraft structures. For aircraft structural design, designers select predesigned elements satisfying the design load requirement based on their load-carrying capabilities. Therefore, estimation of the safety envelope of structural elements for load tolerances would be a good investment for design purposes. In this article, a method of estimating the safety envelope is presented using probabilistic classification, which can estimate a specific level of failure probability under both aleatory and epistemic uncertainties. An important contribution of this article is that the calculation uncertainty is reflected in building a safety envelope using a Gaussian process, and the effect of element test data on reducing the calculation uncertainty is incorporated by updating the Gaussian process model with the element test data. It is shown that even one element test can significantly reduce the calculation uncertainty due to lack of knowledge of the actual physics, so that conservativeness in the safety envelope is significantly reduced. The proposed approach was demonstrated with a cantilever beam example, which represents a structural element. The example shows that calculation uncertainty provides about 93% conservativeness against the uncertainty due to a few element tests. As a result, it is shown that even a single element test can increase the load tolerance modeled with the safety envelope by 20%.
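
    A minimal sketch of the Gaussian-process updating idea described above: conditioning a prior GP over the load-capacity margin on a single (hypothetical) element test shrinks the predictive uncertainty, which is what reduces the conservativeness of the envelope. The kernel, load grid, test datum, and noise level are assumptions for illustration, not the article's actual model.

        import numpy as np

        def rbf(x1, x2, length=1.0, var=1.0):
            """Squared-exponential covariance between two sets of 1D inputs."""
            d = x1[:, None] - x2[None, :]
            return var * np.exp(-0.5 * (d / length) ** 2)

        # Prior GP over the load-capacity margin as a function of applied load.
        loads = np.linspace(0.0, 2.0, 41)          # hypothetical normalised load levels
        prior_cov = rbf(loads, loads)
        prior_std = np.sqrt(np.diag(prior_cov))

        # One hypothetical element test: measured margin at load 1.2 with small noise.
        x_test = np.array([1.2])
        y_test = np.array([0.35])
        noise = 1e-4

        # Standard GP conditioning: posterior mean and covariance given the test datum.
        k_xX = rbf(loads, x_test)
        k_XX = rbf(x_test, x_test) + noise * np.eye(1)
        gain = k_xX @ np.linalg.inv(k_XX)
        post_mean = gain @ y_test
        post_cov = prior_cov - gain @ k_xX.T
        post_std = np.sqrt(np.clip(np.diag(post_cov), 0.0, None))

        print("max prior std    :", prior_std.max())
        print("max posterior std:", post_std.max())   # shrinks near the tested load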

  3. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    NASA Astrophysics Data System (ADS)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

    The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by an adopted algorithm for a VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by the variables' error. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a method is proposed to quantify the uncertainty of VCs with a confidence interval based on the truncation error (TE). In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich uncertainty modelling and analysis-related theories of geographic information science.
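
    The trapezoidal double rule (TDR) and Simpson's double rule (SDR) mentioned above can be written as weighted sums over a regular grid; a Gauss synthetic surface makes the truncation error visible because a reference volume is available in closed form. The surface parameters and grid resolution below are arbitrary choices for illustration.

        import numpy as np
        from math import erf, sqrt, pi

        def volume_trapezoidal(z, dx, dy):
            """Trapezoidal double rule over a regular grid of heights z above the datum."""
            wx = np.ones(z.shape[1]); wx[0] = wx[-1] = 0.5
            wy = np.ones(z.shape[0]); wy[0] = wy[-1] = 0.5
            return dx * dy * float((wy[:, None] * wx[None, :] * z).sum())

        def volume_simpson(z, dx, dy):
            """Simpson's double rule; needs an odd number of nodes along each axis."""
            def w(n):
                v = np.ones(n); v[1:-1:2] = 4.0; v[2:-1:2] = 2.0
                return v
            return dx * dy / 9.0 * float((w(z.shape[0])[:, None] * w(z.shape[1])[None, :] * z).sum())

        # Gauss-type synthetic surface so a reference value is available in closed form.
        x = np.linspace(-3.0, 3.0, 61)
        y = np.linspace(-3.0, 3.0, 61)
        X, Y = np.meshgrid(x, y)
        Z = np.exp(-(X ** 2 + Y ** 2) / 2.0)
        dx, dy = x[1] - x[0], y[1] - y[0]

        true_vol = (sqrt(2.0 * pi) * erf(3.0 / sqrt(2.0))) ** 2   # exact integral over the square
        for name, vol in [("trapezoidal", volume_trapezoidal(Z, dx, dy)),
                          ("Simpson", volume_simpson(Z, dx, dy))]:
            print(f"{name:12s} volume = {vol:.6f}, truncation error = {vol - true_vol:+.2e}")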

  4. Impact of Pitot tube calibration on the uncertainty of water flow rate measurement

    NASA Astrophysics Data System (ADS)

    de Oliveira Buscarini, Icaro; Costa Barsaglini, Andre; Saiz Jabardo, Paulo Jose; Massami Taira, Nilson; Nader, Gilder

    2015-10-01

    Water utility companies often use Cole type Pitot tubes to map velocity profiles and thus measure flow rate. Frequent monitoring and measurement of flow rate is an important step in identifying leaks and other types of losses. In Brazil losses as high as 42% are common and in some places even higher values are found. When using Cole type Pitot tubes to measure the flow rate, the uncertainty of the calibration coefficient (Cd) is a major component of the overall flow rate measurement uncertainty. A common practice is to employ the usual value Cd = 0.869, in use since Cole proposed his Pitot tube in 1896. Analysis of 414 calibrations of Cole type Pitot tubes shows that Cd varies considerably and values as high as 0.020 for the expanded uncertainty are common. Combined with other uncertainty sources, the overall velocity measurement uncertainty is 0.02, increasing flow rate measurement uncertainty by 1.5%, which, for the Sao Paulo metropolitan area (Brazil), corresponds to 3.5 × 10^7 m^3/year.
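
    A rough sketch of how the Cd calibration uncertainty feeds the velocity (and hence flow rate) uncertainty through v = Cd·sqrt(2Δp/ρ), combined in quadrature in the usual GUM manner. Only the Cd value and its expanded uncertainty come from the abstract; the other uncertainty contributions are assumed placeholders.

        import math

        cd, U_cd = 0.869, 0.020          # calibration coefficient and its expanded uncertainty (k = 2)
        u_cd_rel = (U_cd / 2) / cd       # relative standard uncertainty of Cd
        u_dp_rel = 0.005                 # differential-pressure measurement (assumed)
        u_rho_rel = 0.001                # water density (assumed)
        u_profile_rel = 0.008            # velocity-profile integration (assumed)

        # v = Cd * sqrt(2*dp/rho): the relative sensitivity of v to dp and rho is 1/2.
        u_v_rel = math.sqrt(u_cd_rel ** 2 + (0.5 * u_dp_rel) ** 2
                            + (0.5 * u_rho_rel) ** 2 + u_profile_rel ** 2)
        U_v_rel = 2 * u_v_rel            # expanded (k = 2)
        print(f"relative expanded uncertainty of velocity/flow rate: {100 * U_v_rel:.2f} %")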

  5. CCQM-K102: polybrominated diphenyl ethers in sediment

    NASA Astrophysics Data System (ADS)

    Ricci, Marina; Shegunova, Penka; Conneely, Patrick; Becker, Roland; Maldonado Torres, Mauricio; Arce Osuna, Mariana; On, Tang Po; Man, Lee Ho; Baek, Song-Yee; Kim, Byungjoo; Hopley, Christopher; Liscio, Camilla; Warren, John; Le Diouron, Véronique; Lardy-Fontan, Sophie; Lalere, Béatrice; Mingwu, Shao; Kucklick, John; Vamathevan, Veronica; Matsuyama, Shigetomo; Numata, Masahiko; Brits, Martin; Quinn, Laura; Fernandes-Whaley, Maria; Ceyhan Gören, Ahmet; Binici, Burcu; Konopelko, Leonid; Krylov, Anatoli; Mikheeva, Alena

    2017-01-01

    The key comparison CCQM-K102: Polybrominated diphenyl ethers in sediment was coordinated by the JRC, Directorate F - Health, Consumers & Reference Materials, Geel (Belgium) under the auspices of the Organic Analysis Working Group (OAWG) of the Comité Consultatif pour la Quantité de Matière (CCQM). Thirteen National Metrology Institutes or Designated Institutes and the JRC participated. Participants were requested to report the mass fraction (on a dry mass basis) of BDE 47, 99 and 153 in the freshwater sediment study material. The sediment originated from a river in Belgium and contained PBDEs (and other pollutants) at levels commonly found in environmental samples. The comparison was designed to demonstrate participants' capability of analysing non-polar organic molecules in abiotic dried matrices (approximate range of molecular weights: 100 to 800 g/mol, polarity corresponding to pKow < -2, range of mass fraction: 1-1000 μg/kg). All participants (except one using ultrasonic extraction) applied Pressurised Liquid Extraction or Soxhlet, while the instrumental analysis was performed with GC-MS/MS, GC-MS or GC-HRMS. An Isotope Dilution Mass Spectrometry approach was used for quantification (except in one case). The assigned Key Comparison Reference Values (KCRVs) were the medians of thirteen results for BDE 47 and eleven results for BDE 99 and 153, respectively. BDE 47 was assigned a KCRV of 15.60 μg/kg with a combined standard uncertainty of 0.41 μg/kg, BDE 99 was assigned a KCRV of 33.69 μg/kg with a combined standard uncertainty of 0.81 μg/kg and BDE 153 was assigned a KCRV of 6.28 μg/kg with a combined standard uncertainty of 0.28 μg/kg. The k-factor for the estimation of the expanded uncertainty of the KCRVs was chosen as k = 2. The degree of equivalence (with the KCRV) and its uncertainty were calculated for each result. Most of the participants in CCQM-K102 were able to demonstrate or confirm their capabilities in the analysis of non-polar organic molecules in abiotic dried matrices. Throughout the study it became clear that matrix interferences can influence the accurate quantification of the PBDEs if the analytical methodology applied is not appropriately adapted and optimised. This comparison shows that quantification of PBDEs at the μg/kg low-middle range in a challenging environmental abiotic dried matrix can be achieved with relative expanded uncertainties below 15 % (more than 70 % of participating laboratories), well in line with the best measurement performances in the environmental analysis field. The main text of this report appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org). The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
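
    The degree-of-equivalence bookkeeping described above follows a simple pattern: take the median as the KCRV, attach a standard uncertainty to it, and compare each result with the KCRV using k = 2. The sketch below uses invented participant results, a textbook 1.253·s/√n estimate for the uncertainty of a median, and ignores the correlation between a participant's result and a KCRV that includes it, so it is illustrative rather than the comparison's actual protocol.

        import math

        # Hypothetical participant results for one analyte (mass fraction in ug/kg)
        # with their reported standard uncertainties; values are illustrative only.
        results = [(15.2, 0.5), (15.9, 0.6), (15.5, 0.4), (16.1, 0.7), (15.6, 0.5)]

        values = sorted(v for v, _ in results)
        n = len(values)
        kcrv = values[n // 2] if n % 2 else 0.5 * (values[n // 2 - 1] + values[n // 2])

        # A common estimate of the standard uncertainty of a median: 1.253 * s / sqrt(n).
        mean = sum(v for v, _ in results) / n
        s = math.sqrt(sum((v - mean) ** 2 for v, _ in results) / (n - 1))
        u_kcrv = 1.253 * s / math.sqrt(n)

        # Degree of equivalence d_i and its expanded uncertainty U(d_i), k = 2.
        for v, u in results:
            d = v - kcrv
            U_d = 2.0 * math.sqrt(u ** 2 + u_kcrv ** 2)
            print(f"x = {v:5.2f}  d = {d:+5.2f}  U(d) = {U_d:4.2f}  consistent: {abs(d) <= U_d}")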

  6. Approaches to Evaluating Probability of Collision Uncertainty

    NASA Technical Reports Server (NTRS)

    Hejduk, Matthew D.; Johnson, Lauren C.

    2016-01-01

    While the two-dimensional probability of collision (Pc) calculation has served as the main input to conjunction analysis risk assessment for over a decade, it has done this mostly as a point estimate, with relatively little effort made to produce confidence intervals on the Pc value based on the uncertainties in the inputs. The present effort seeks to carry these uncertainties through the calculation in order to generate a probability density of Pc results rather than a single average value. Methods for assessing uncertainty in the primary and secondary objects' physical sizes and state estimate covariances, as well as a resampling approach to reveal the natural variability in the calculation, are presented, and an initial proposal for operationally-useful display and interpretation of these data for a particular conjunction is given.
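
    The resampling idea can be illustrated with a deliberately simplified Pc model (probability density at the nominal miss distance times the hard-body area). The encounter geometry, the object-size uncertainty, and the covariance scale-factor distribution below are all assumptions; the operational calculation is considerably more involved.

        import numpy as np

        rng = np.random.default_rng(1)

        def pc_approx(miss, sigma, hbr):
            """Simple 2D Pc approximation: constant-density integral over the hard-body disk."""
            x, y = miss
            sx, sy = sigma
            density = np.exp(-0.5 * ((x / sx) ** 2 + (y / sy) ** 2)) / (2 * np.pi * sx * sy)
            return float(np.clip(density * np.pi * hbr ** 2, 0.0, 1.0))

        # Nominal conjunction geometry (hypothetical numbers, metres in the encounter plane).
        miss0, sigma0, hbr0 = (200.0, 150.0), (300.0, 120.0), 20.0
        print("point estimate Pc:", pc_approx(miss0, sigma0, hbr0))

        # Resample the uncertain inputs: hard-body radius and covariance scale factor.
        pcs = []
        for _ in range(10000):
            hbr = rng.normal(hbr0, 3.0)                 # uncertain combined object size
            scale = rng.lognormal(mean=0.0, sigma=0.2)  # uncertain covariance realism factor
            pcs.append(pc_approx(miss0, (sigma0[0] * scale, sigma0[1] * scale), hbr))

        pcs = np.array(pcs)
        print("Pc percentiles (5th, 50th, 95th):", np.percentile(pcs, [5, 50, 95]))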

  7. Calculation of the compounded uncertainty of 14C AMS measurements

    NASA Astrophysics Data System (ADS)

    Nadeau, Marie-Josée; Grootes, Pieter M.

    2013-01-01

    The correct method to calculate conventional 14C ages from the carbon isotopic ratios was summarised 35 years ago by Stuiver and Polach (1977) and is now accepted as the only method to calculate 14C ages. There is, however, no consensus regarding the treatment of AMS data, mainly regarding the uncertainty of the final result. The estimation and treatment of machine background, process blank, and/or in situ contamination is not uniform between laboratories, leading to differences in 14C results, mainly for older ages. As Donahue (1987) and Currie (1994), among others, mentioned, some laboratories find it important to use the scatter of several measurements as uncertainty while others prefer to use Poisson statistics. The contribution of the scatter of the standards, machine background, process blank, and in situ contamination to the uncertainty of the final 14C result is also treated in different ways. In the early years of AMS, several laboratories found it important to describe their calculation process in detail. In recent years, this practice has declined. We present an overview of the calculation process for 14C AMS measurements looking at calculation practices published from the beginning of AMS until the present.
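
    The Poisson-versus-scatter question discussed above amounts to comparing two uncertainty estimates for the same mean ratio. The repeat measurements and count totals below are invented for illustration, and quoting the larger of the two estimates is only one of the conventions in use.

        import math

        # Hypothetical repeat measurements of a 14C/12C ratio for one target (arbitrary units)
        # together with the total counts behind each measurement.
        ratios = [1.052, 1.047, 1.055, 1.049, 1.051, 1.046]
        counts = [40000, 38000, 41000, 39500, 40200, 39000]

        n = len(ratios)
        mean = sum(ratios) / n

        # Poisson (counting-statistics) uncertainty of the mean ratio.
        u_poisson = mean * math.sqrt(1.0 / sum(counts))

        # External scatter: standard deviation of the mean of the repeats.
        s = math.sqrt(sum((r - mean) ** 2 for r in ratios) / (n - 1))
        u_scatter = s / math.sqrt(n)

        # One common convention is to quote whichever of the two is larger.
        print(f"u(Poisson) = {u_poisson:.4f}, u(scatter) = {u_scatter:.4f}, "
              f"quoted u = {max(u_poisson, u_scatter):.4f}")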

  8. Tissue Expanders and Proton Beam Radiotherapy: What You Need to Know

    PubMed Central

    Howarth, Ashley L.; Niska, Joshua R.; Brooks, Kenneth; Anand, Aman; Bues, Martin; Vargas, Carlos E.

    2017-01-01

    Summary: Proton beam radiotherapy (PBR) has gained acceptance for the treatment of breast cancer because of unique beam characteristics that allow superior dose distributions with optimal dose to the target and limited collateral damage to adjacent normal tissue, especially to the heart and lungs. To determine the compatibility of breast tissue expanders (TEs) with PBR, we evaluated the structural and dosimetric properties of 2 ex vivo models: 1 model with internal struts and another model without an internal structure. Although the struts appeared to have minimal impact, we found that the metal TE port alters PBR dynamics, which may increase proton beam range uncertainty. Therefore, submuscular TE placement may be preferable to subcutaneous TE placement to reduce the interaction of the TE and proton beam. This will reduce range uncertainty and allow for more ideal radiation dose distribution. PMID:28740794

  9. Latent uncertainties of the precalculated track Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, Marc-André; Seuntjens, Jan; Roberge, David

    Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed an 807 × efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508 × for 16 MeV electrons in bone. Conclusions: The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.
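
    Because the latent uncertainty is reported to behave in a Poisson-like way with the number of unique tracks, a 1/sqrt(N) scaling gives a quick way to size a track bank for a target uncertainty. The reference point below (about 1% at 60 000 tracks per energy) is taken from the abstract; the scaling law itself is an assumption of this sketch, not a result quoted from the paper.

        import math

        ref_tracks = 60000
        ref_latent = 0.01          # ~1 % latent uncertainty at the reference bank size

        def latent_uncertainty(n_tracks):
            """Assumed 1/sqrt(N) scaling of the latent uncertainty with bank size."""
            return ref_latent * math.sqrt(ref_tracks / n_tracks)

        def tracks_for_target(target):
            """Bank size needed for a target latent uncertainty under the same scaling."""
            return int(math.ceil(ref_tracks * (ref_latent / target) ** 2))

        for n in (15000, 60000, 240000):
            print(f"{n:>7d} tracks -> latent uncertainty ~ {100 * latent_uncertainty(n):.2f} %")
        print("tracks needed for 0.5 % latent uncertainty:", tracks_for_target(0.005))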

  10. Latent uncertainties of the precalculated track Monte Carlo method.

    PubMed

    Renaud, Marc-André; Roberge, David; Seuntjens, Jan

    2015-01-01

    While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Particle tracks were pregenerated for electrons and protons using EGSnrc and geant4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (cuda) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a "ground truth" benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤ 1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed an 807 × efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508 × for 16 MeV electrons in bone. The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.

  11. SU-F-T-301: Planar Dose Pass Rate Inflation Due to the MapCHECK Measurement Uncertainty Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, D; Spaans, J; Kumaraswamy, L

    Purpose: To quantify the effect of the Measurement Uncertainty function on planar dosimetry pass rates, as analyzed with Sun Nuclear Corporation analytic software (“MapCHECK” or “SNC Patient”). This optional function is toggled on by default upon software installation, and automatically increases the user-defined dose percent difference (%Diff) tolerance for each planar dose comparison. Methods: Dose planes from 109 IMRT fields and 40 VMAT arcs were measured with the MapCHECK 2 diode array, and compared to calculated planes from a commercial treatment planning system. Pass rates were calculated within the SNC analytic software using varying calculation parameters, including Measurement Uncertainty on and off. By varying the %Diff criterion for each dose comparison performed with Measurement Uncertainty turned off, an effective %Diff criterion was defined for each field/arc corresponding to the pass rate achieved with MapCHECK Uncertainty turned on. Results: For 3%/3 mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.8–1.1% average, depending on plan type and calculation technique, for an average pass rate increase of 1.0–3.5% (maximum +8.7%). For 2%/2 mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.7–1.2% average, for an average pass rate increase of 3.5–8.1% (maximum +14.2%). The largest increases in pass rate are generally seen with poorly-matched planar dose comparisons; the MapCHECK Uncertainty effect is markedly smaller as pass rates approach 100%. Conclusion: The Measurement Uncertainty function may substantially inflate planar dose comparison pass rates for typical IMRT and VMAT planes. The types of uncertainties incorporated into the function (and their associated quantitative estimates) as described in the software user’s manual may not accurately estimate realistic measurement uncertainty for the user’s measurement conditions. Pass rates listed in published reports or otherwise compared to the results of other users or vendors should clearly indicate whether the Measurement Uncertainty function is used.

  12. Unleashing Empirical Equations with "Nonlinear Fitting" and "GUM Tree Calculator"

    NASA Astrophysics Data System (ADS)

    Lovell-Smith, J. W.; Saunders, P.; Feistel, R.

    2017-10-01

    Empirical equations having large numbers of fitted parameters, such as the international standard reference equations published by the International Association for the Properties of Water and Steam (IAPWS), which form the basis of the "Thermodynamic Equation of Seawater—2010" (TEOS-10), provide the means to calculate many quantities very accurately. The parameters of these equations are found by least-squares fitting to large bodies of measurement data. However, the usefulness of these equations is limited since uncertainties are not readily available for most of the quantities able to be calculated, the covariance of the measurement data is not considered, and further propagation of the uncertainty in the calculated result is restricted since the covariance of calculated quantities is unknown. In this paper, we present two tools developed at MSL that are particularly useful in unleashing the full power of such empirical equations. "Nonlinear Fitting" enables propagation of the covariance of the measurement data into the parameters using generalized least-squares methods. The parameter covariance then may be published along with the equations. Then, when using these large, complex equations, "GUM Tree Calculator" enables the simultaneous calculation of any derived quantity and its uncertainty, by automatic propagation of the parameter covariance into the calculated quantity. We demonstrate these tools in exploratory work to determine and propagate uncertainties associated with the IAPWS-95 parameters.
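
    The propagation step described above (parameter covariance into the uncertainty of a derived quantity) is, at first order, a Jacobian sandwich. The sketch below uses a toy quadratic model and an invented covariance matrix in place of the IAPWS-95 equation and the MSL tools, purely to show the mechanics.

        import numpy as np

        def model(p, x):
            """Toy fitted equation standing in for a large empirical correlation."""
            a, b, c = p
            return a + b * x + c * x ** 2

        def propagate(q_of_p, p, cov_p, eps=1e-6):
            """First-order (GUM) propagation of a parameter covariance via a numerical Jacobian."""
            p = np.asarray(p, dtype=float)
            q0 = q_of_p(p)
            jac = np.empty_like(p)
            for i in range(p.size):
                dp = np.zeros_like(p)
                dp[i] = eps * max(abs(p[i]), 1.0)
                jac[i] = (q_of_p(p + dp) - q0) / dp[i]
            return q0, float(np.sqrt(jac @ cov_p @ jac))

        p_hat = np.array([1.0, 0.5, -0.02])            # fitted parameters (illustrative)
        cov = np.array([[4e-4, 1e-5, 0.0],             # parameter covariance from the fit
                        [1e-5, 9e-6, -2e-7],
                        [0.0, -2e-7, 4e-8]])

        q, u_q = propagate(lambda p: model(p, x=3.0), p_hat, cov)
        print(f"q = {q:.4f} +/- {u_q:.4f} (standard uncertainty)")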

  13. SU-F-T-151: Measurement Evaluation of Skin Dose in Scanning Proton Beam Therapy for Breast Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, J; Nichols, E; Strauss, D

    Purpose: To measure the skin dose and compare it with the calculated dose from a treatment planning system (TPS) for breast cancer treatment using scanning proton beam therapy (SPBT). Methods: A single en-face-beam SPBT plan was generated by a commercial TPS for two breast cancer patients. The treatment volumes were the entire breasts (218 cc and 1500 cc) prescribed to 50.4 Gy (RBE) in 28 fractions. A range shifter of 5 cm water equivalent thickness was used. The organ at risk (skin) was defined to be 5 mm thick from the surface. The skin doses were measured in water with an ADCL calibrated parallel plate (PP) chamber. The measured data were compared with the values calculated in the TPS. Skin dose calculations can be subject to uncertainties created by the definition of the external contour and the limitations of the correction based algorithms, such as proton convolution superposition. Hence, the external contours were expanded by 0, 3 mm and 1 cm to include additional pixels for dose calculation. In addition, to examine the effects of the cloth gown on the skin dose, the skin dose measurements were conducted with and without the gown. Results: On average the measured skin dose was 4% higher than the calculated values. At deeper depths, the measured and calculated doses were in better agreement (< 2%). Large discrepancies occur for the dose calculated without external expansion due to volume averaging. The addition of the gown only increased the measured skin dose by 0.4%. Conclusion: The implemented TPS underestimated the skin dose for breast treatments. Superficial dose calculation without external expansion would result in large errors for SPBT for breast cancer.

  14. Applying ISO 11929:2010 Standard to detection limit calculation in least-squares based multi-nuclide gamma-ray spectrum evaluation

    NASA Astrophysics Data System (ADS)

    Kanisch, G.

    2017-05-01

    The concepts of ISO 11929 (2010) are applied to the evaluation of radionuclide activities from more complex multi-nuclide gamma-ray spectra. From net peak areas estimated by peak fitting, activities and their standard uncertainties are calculated by a weighted linear least-squares method with an additional step, where uncertainties of the design matrix elements are taken into account. A numerical treatment of the standard's uncertainty function, based on ISO 11929 Annex C.5, leads to a procedure for deriving decision threshold and detection limit values. The methods shown allow interferences between radionuclide activities to be resolved, also in the case of detection limit calculations, where the detection limits can be improved by including more than one gamma line per radionuclide. The common single nuclide weighted mean is extended to an interference-corrected (generalized) weighted mean, which, combined with the least-squares method, allows faster detection limit calculations. In addition, a new grouped uncertainty budget was inferred, which for each radionuclide gives uncertainty budgets from seven main variables, such as net count rates, peak efficiencies, gamma emission intensities and others; grouping refers to summation over lists of peaks per radionuclide.
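
    The decision threshold and detection limit of ISO 11929 follow from the uncertainty of the estimator as a function of the assumed true activity, ũ(a): the threshold is k·ũ(0) and the detection limit solves a fixed-point equation. The uncertainty function, its coefficients, and the quantile values below are illustrative assumptions, not the paper's spectrum-specific treatment.

        import math

        def u_tilde(a, u0=0.8, w=1.0, u_rel_w=0.03):
            """Assumed uncertainty function: blank term, counting-like term, calibration term."""
            return math.sqrt(u0 ** 2 + w * a + (u_rel_w * a) ** 2)

        k_alpha = k_beta = 1.645          # alpha = beta = 0.05

        a_star = k_alpha * u_tilde(0.0)   # decision threshold

        # Detection limit: fixed-point iteration a# = a* + k_beta * u_tilde(a#).
        a_hash = a_star
        for _ in range(100):
            new = a_star + k_beta * u_tilde(a_hash)
            if abs(new - a_hash) < 1e-9:
                a_hash = new
                break
            a_hash = new

        print(f"decision threshold a* = {a_star:.3f}, detection limit a# = {a_hash:.3f}")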

  15. Optimized pulses for the control of uncertain qubits

    DOE PAGES

    Grace, Matthew D.; Dominy, Jason M.; Witzel, Wayne M.; ...

    2012-05-18

    The construction of high-fidelity control fields that are robust to control, system, and/or surrounding environment uncertainties is a crucial objective for quantum information processing. Using the two-state Landau-Zener model for illustrative simulations of a controlled qubit, we generate optimal controls for π/2 and π pulses and investigate their inherent robustness to uncertainty in the magnitude of the drift Hamiltonian. Next, we construct a quantum-control protocol to improve system-drift robustness by combining environment-decoupling pulse criteria and optimal control theory for unitary operations. By perturbatively expanding the unitary time-evolution operator for an open quantum system, previous analysis of environment-decoupling control pulses has calculated explicit control-field criteria to suppress environment-induced errors up to (but not including) third order from π/2 and π pulses. We systematically integrate these criteria with optimal control theory, incorporating an estimate of the uncertain parameter to produce improvements in gate fidelity and robustness, demonstrated via a numerical example based on double quantum dot qubits. For the qubit model used in this work, postfacto analysis of the resulting controls suggests that realistic control-field fluctuations and noise may contribute just as significantly to gate errors as system and environment fluctuations.

  16. Geometrical Characterisation of a 2D Laser System and Calibration of a Cross-Grid Encoder by Means of a Self-Calibration Methodology

    PubMed Central

    Torralba, Marta; Díaz-Pérez, Lucía C.

    2017-01-01

    This article presents a self-calibration procedure and the experimental results for the geometrical characterisation of a 2D laser system operating along a large working range (50 mm × 50 mm) with submicrometre uncertainty. Its purpose is to correct the geometric errors of the 2D laser system setup generated when positioning the two laser heads and the plane mirrors used as reflectors. The non-calibrated artefact used in this procedure is a commercial grid encoder that is also a measuring instrument. Therefore, the self-calibration procedure also allows the determination of the geometrical errors of the grid encoder, including its squareness error. The precision of the proposed algorithm is tested using virtual data. Actual measurements are subsequently registered, and the algorithm is applied. Once the laser system is characterised, the error of the grid encoder is calculated along the working range, resulting in an expanded submicrometre calibration uncertainty (k = 2) for the X and Y axes. The results of the grid encoder calibration are comparable to the errors provided by the calibration certificate for its main central axes. It is, therefore, possible to confirm the suitability of the self-calibration methodology proposed in this article. PMID:28858239

  17. Uncertainties in cylindrical anode current inferences on pulsed power drivers

    NASA Astrophysics Data System (ADS)

    Porwitzky, Andrew; Brown, Justin

    2018-06-01

    For over a decade, velocimetry based techniques have been used to infer the electrical current delivered to dynamic materials properties experiments on pulsed power drivers such as the Z Machine. Though originally developed for planar load geometries, in recent years, inferring the current delivered to cylindrical coaxial loads has become a valuable diagnostic tool for numerous platforms. Presented is a summary of uncertainties that can propagate through the current inference technique when applied to expanding cylindrical anodes. An equation representing quantitative uncertainty is developed which shows the unfold method to be accurate to a few percent above 10 MA of load current.

  18. Non-intrusive torque measurement for rotating shafts using optical sensing of zebra-tapes

    NASA Astrophysics Data System (ADS)

    Zappalá, D.; Bezziccheri, M.; Crabtree, C. J.; Paone, N.

    2018-06-01

    Non-intrusive, reliable and precise torque measurement is critical to dynamic performance monitoring, control and condition monitoring of rotating mechanical systems. This paper presents a novel, contactless torque measurement system consisting of two shaft-mounted zebra tapes and two optical sensors mounted on stationary rigid supports. Unlike conventional torque measurement methods, the proposed system does not require costly embedded sensors or shaft-mounted electronics. Moreover, its non-intrusive nature, adaptable design, simple installation and low cost make it suitable for a large variety of advanced engineering applications. Torque measurement is achieved by estimating the shaft twist angle through analysis of zebra tape pulse train time shifts. This paper presents and compares two signal processing methods for torque measurement: rising edge detection and cross-correlation. The performance of the proposed system has been proven experimentally under both static and variable conditions and both processing approaches show good agreement with reference measurements from an in-line, invasive torque transducer. Measurement uncertainty has been estimated according to the ISO GUM (Guide to the expression of uncertainty in measurement). Type A analysis of experimental data has provided an expanded uncertainty relative to the system full-scale torque of  ±0.30% and  ±0.86% for the rising edge and cross-correlation approaches, respectively. Statistical simulations performed by the Monte Carlo method have provided, in the worst case, an expanded uncertainty of  ±1.19%.
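
    The conversion from pulse-train time shift to torque sketched below uses the elastic relation T = G·J·Δθ/L, with the twist angle taken as the shifted fraction of a revolution. The shaft dimensions, shear modulus, and example operating point are hypothetical, and the real system's edge-detection and cross-correlation processing is not reproduced here.

        import math

        # Hypothetical shaft and zebra-tape parameters (not taken from the paper).
        G = 80e9                        # shear modulus of steel, Pa
        d = 0.05                        # shaft diameter, m
        L = 0.60                        # axial distance between the two zebra tapes, m
        J = math.pi * d ** 4 / 32.0     # polar second moment of area, m^4

        def torque_from_time_shift(dt, rev_period):
            """Twist angle from the relative time shift of the two pulse trains, then torque."""
            twist = 2.0 * math.pi * dt / rev_period      # rad
            return G * J * twist / L                     # N m

        # Example: 12 us shift at 1500 rpm (revolution period 40 ms).
        print(f"torque ~ {torque_from_time_shift(12e-6, 0.040):.1f} N m")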

  19. Contribution of ICP-IDMS to the certification of antimony implanted in a silicon wafer--comparison with RBS and INAA results.

    PubMed

    Pritzkow, W; Vogl, J; Berger, A; Ecker, K; Grötzschel, R; Klingbeil, P; Persson, L; Riebe, G; Wätjen, U

    2001-11-01

    A thin-layer reference material for surface and near-surface analytical methods was produced and certified. The surface density of the implanted Sb layer was determined by Rutherford backscattering spectrometry (RBS), instrumental neutron activation analysis (INAA), and inductively coupled plasma isotope dilution mass spectrometry (ICP-IDMS) equipped with a multi-collector. The isotopic abundances of Sb (121Sb and 123Sb) were determined by multi-collector ICP-MS and INAA. ICP-IDMS measurements are discussed in detail in this paper. All methods produced values traceable to the SI and are accompanied by a complete uncertainty budget. The homogeneity of the material was measured with RBS. From these measurements the standard uncertainty due to possible inhomogeneities was estimated to be less than 0.78% for fractions of the area increments down to 0.75 mm2 in size. Excellent agreement between the results of the three different methods was found. For the surface density of implanted Sb atoms the unweighted mean value of the means of four data sets is 4.81 x 10(16) cm(-2) with an expanded uncertainty (coverage factor k = 2) of 0.09 x 10(16) cm(-2). For the isotope amount ratio R (121Sb/123Sb) the unweighted mean value of the means of two data sets is 1.435 with an expanded uncertainty (coverage factor k = 2) of 0.006.

  20. Evaluation of hydrocarbon flow standard facility equipped with double-wing diverter using four types of working liquids

    NASA Astrophysics Data System (ADS)

    Doihara, R.; Shimada, T.; Cheong, K. H.; Terao, Y.

    2017-06-01

    A flow calibration facility based on the gravimetric method using a double-wing diverter for hydrocarbon flows from 0.1 m3 h-1 to 15 m3 h-1 was constructed as a national measurement standard in Japan. The original working liquids were kerosene and light oil. The calibration facility was modified to calibrate flowmeters with two additional working liquids, industrial gasoline (flash point  >  40 °C) and spindle oil, to achieve calibration over a wide viscosity range at the same calibration test rig. The kinematic viscosity range is 1.2 mm2 s-1 to 24 mm2 s-1. The contributions to the measurement uncertainty due to different types of working liquids were evaluated experimentally in this study. The evaporation error was reduced by using a seal system at the weighing tank inlet. The uncertainty due to droplets from the diverter wings was reduced by a modified diverter operation. The diverter timing errors for all types of working liquids were estimated. The expanded uncertainties for the calibration facility were estimated to be 0.020% for mass flow and 0.030% for volumetric flow for all considered types of liquids. Internal comparisons with other calibration facilities were also conducted, and the agreement was confirmed to be within the claimed expanded uncertainties.

  1. Safety envelope for load tolerance of structural element design based on multi-stage testing

    DOE PAGES

    Park, Chanyoung; Kim, Nam H.

    2016-09-06

    Structural elements, such as stiffened panels and lap joints, are basic components of aircraft structures. For aircraft structural design, designers select predesigned elements satisfying the design load requirement based on their load-carrying capabilities. Therefore, estimation of safety envelope of structural elements for load tolerances would be a good investment for design purpose. In this article, a method of estimating safety envelope is presented using probabilistic classification, which can estimate a specific level of failure probability under both aleatory and epistemic uncertainties. An important contribution of this article is that the calculation uncertainty is reflected in building a safety envelope usingmore » Gaussian process, and the effect of element test data on reducing the calculation uncertainty is incorporated by updating the Gaussian process model with the element test data. It is shown that even one element test can significantly reduce the calculation uncertainty due to lacking knowledge of actual physics, so that conservativeness in a safety envelope is significantly reduced. The proposed approach was demonstrated with a cantilever beam example, which represents a structural element. The example shows that calculation uncertainty provides about 93% conservativeness against the uncertainty due to a few element tests. As a result, it is shown that even a single element test can increase the load tolerance modeled with the safety envelope by 20%.« less

  2. QUANTIFYING UNCERTAINTY DUE TO RANDOM ERRORS FOR MOMENT ANALYSES OF BREAKTHROUGH CURVES

    EPA Science Inventory

    The uncertainty in moments calculated from breakthrough curves (BTCs) is investigated as a function of random measurement errors in the data used to define the BTCs. The method presented assumes moments are calculated by numerical integration using the trapezoidal rule, and is t...

  3. Evaluation of the combined measurement uncertainty in isotope dilution by MC-ICP-MS.

    PubMed

    Fortunato, G; Wunderli, S

    2003-09-01

    The combination of metrological weighing, the measurement of isotope amount ratios by a multicollector inductively coupled plasma mass spectrometer (MC-ICP-MS) and the use of high-purity reference materials are the cornerstones to achieve improved results for the amount content of lead in wine by the reversed isotope dilution technique. Isotope dilution mass spectrometry (IDMS) and reversed IDMS have the potential to be a so-called primary method, with which close comparability and well-stated combined measurement uncertainties can be obtained. This work describes the detailed uncertainty budget determination using the ISO-GUM approach. The traces of lead in wine were separated from the matrix by ion exchange chromatography after HNO(3)/H(2)O(2) microwave digestion. The thallium isotope amount ratio (n((205)Tl)/n((203)Tl)) was used to correct for mass discrimination using an exponential model approach. The corrected lead isotope amount ratio n((206)Pb)/n((208)Pb) for the isotopic standard SRM 981 measured in our laboratory was compared with ratio values considered to be the least uncertain. The result has been compared in a so-called pilot study "lead in wine" organised by the CCQM (Comité Consultatif pour la Quantité de Matière, BIPM, Paris; the highest measurement authority for analytical chemical measurements). The result for the lead amount content k(Pb) and the corresponding expanded uncertainty U given by our laboratory was: k(Pb) = 1.329 x 10(-10) mol g(-1) (amount content of lead in wine) and U[k(Pb)] = 1.0 x 10(-12) mol g(-1) (expanded uncertainty U = k x u(c), k = 2). The main influence parameter on the combined measurement uncertainty was determined to be the isotope amount ratio R(206,B) of the blend between the enriched spike and the sample.
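
    A hedged sketch of how such an IDMS uncertainty budget can be checked numerically: propagate assumed standard uncertainties through a simplified single-IDMS equation by Monte Carlo (in the spirit of GUM Supplement 1) and read off the combined and expanded uncertainty. The model and every numerical value below are placeholders, not the published budget.

        import numpy as np

        rng = np.random.default_rng(42)
        N = 200000

        # Simplified single-IDMS model for the amount content of the analyte in the sample:
        #   k_x = k_y * (m_y / m_x) * (R_y - R_b) / (R_b - R_x)
        # where y is the enriched spike, x the sample, and R_b the measured blend ratio.
        def draw(mean, u):
            return rng.normal(mean, u, N)

        k_y = draw(2.00e-9, 4e-12)      # amount content of spike, mol/g (illustrative)
        m_y = draw(0.5000, 0.0002)      # mass of spike, g
        m_x = draw(5.0000, 0.0005)      # mass of sample, g
        R_y = draw(10.0, 0.02)          # isotope amount ratio in the spike
        R_x = draw(0.06, 0.0005)        # isotope amount ratio in the sample (natural)
        R_b = draw(1.00, 0.004)         # measured isotope amount ratio in the blend

        k_x = k_y * (m_y / m_x) * (R_y - R_b) / (R_b - R_x)

        print(f"k_x = {k_x.mean():.4e} mol/g, u(k_x) = {k_x.std(ddof=1):.1e}, "
              f"U (k=2) = {2 * k_x.std(ddof=1):.1e}")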

  4. Polynomial chaos expansion with random and fuzzy variables

    NASA Astrophysics Data System (ADS)

    Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.

    2016-06-01

    A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.
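
    A one-dimensional sketch of the Legendre-based expansion described above: project a response onto Legendre polynomials, then read the result either probabilistically (moments for a uniform random variable) or as an interval (fuzzy/interval variable at a given membership level). The response function and expansion order are arbitrary choices for illustration.

        import numpy as np
        from numpy.polynomial import legendre

        def response(xi):
            """Placeholder model response as a function of the standardised variable xi."""
            return np.exp(0.3 * xi) + 0.5 * xi ** 2

        order = 6
        nodes, weights = legendre.leggauss(order + 1)      # Gauss-Legendre quadrature rule

        # Projection onto Legendre polynomials P_k (orthogonal for uniform xi on [-1, 1]).
        coeffs = np.array([
            (weights * response(nodes) * legendre.Legendre.basis(k)(nodes)).sum() / (2.0 / (2 * k + 1))
            for k in range(order + 1)
        ])

        # Random interpretation (xi ~ Uniform[-1, 1]): moments follow from the coefficients.
        mean = coeffs[0]
        var = sum(coeffs[k] ** 2 / (2 * k + 1) for k in range(1, order + 1))
        print(f"PCE mean = {mean:.4f}, variance = {var:.5f}")

        # Fuzzy/interval interpretation: response bounds from the same expansion.
        xi = np.linspace(-1.0, 1.0, 2001)
        y = legendre.Legendre(coeffs)(xi)
        print(f"response interval ~ [{y.min():.4f}, {y.max():.4f}]")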

  5. A taxonomy of medical uncertainties in clinical genome sequencing.

    PubMed

    Han, Paul K J; Umstead, Kendall L; Bernhardt, Barbara A; Green, Robert C; Joffe, Steven; Koenig, Barbara; Krantz, Ian; Waterston, Leo B; Biesecker, Leslie G; Biesecker, Barbara B

    2017-08-01

    Clinical next-generation sequencing (CNGS) is introducing new opportunities and challenges into the practice of medicine. Simultaneously, these technologies are generating uncertainties of an unprecedented scale that laboratories, clinicians, and patients are required to address and manage. We describe in this report the conceptual design of a new taxonomy of uncertainties around the use of CNGS in health care. Interviews to delineate the dimensions of uncertainty in CNGS were conducted with genomics experts and themes were extracted in order to expand on a previously published three-dimensional taxonomy of medical uncertainty. In parallel, we developed an interactive website to disseminate the CNGS taxonomy to researchers and engage them in its continued refinement. The proposed taxonomy divides uncertainty along three axes (source, issue, and locus) and further discriminates the uncertainties into five layers with multiple domains. Using a hypothetical clinical example, we illustrate how the taxonomy can be applied to findings from CNGS and used to guide stakeholders through interpretation and implementation of variant results. The utility of the proposed taxonomy lies in promoting consistency in describing dimensions of uncertainty in publications and presentations, to facilitate research design and management of the uncertainties inherent in the implementation of CNGS. Genet Med advance online publication 19 January 2017.

  6. A Taxonomy of Medical Uncertainties in Clinical Genome Sequencing

    PubMed Central

    Han, Paul K. J.; Umstead, Kendall L.; Bernhardt, Barbara A.; Green, Robert C.; Joffe, Steven; Koenig, Barbara; Krantz, Ian; Waterston, Leo B.; Biesecker, Leslie G.; Biesecker, Barbara B.

    2017-01-01

    Purpose Clinical next generation sequencing (CNGS) is introducing new opportunities and challenges into the practice of medicine. Simultaneously, these technologies are generating uncertainties of unprecedented scale that laboratories, clinicians, and patients are required to address and manage. We describe in this report the conceptual design of a new taxonomy of uncertainties around the use of CNGS in health care. Methods Interviews to delineate the dimensions of uncertainty in CNGS were conducted with genomics experts, and themes were extracted in order to expand upon a previously published three-dimensional taxonomy of medical uncertainty. In parallel we developed an interactive website to disseminate the CNGS taxonomy to researchers and engage them in its continued refinement. Results The proposed taxonomy divides uncertainty along three axes: source, issue, and locus, and further discriminates the uncertainties into five layers with multiple domains. Using a hypothetical clinical example, we illustrate how the taxonomy can be applied to findings from CNGS and used to guide stakeholders through interpretation and implementation of variant results. Conclusion The utility of the proposed taxonomy lies in promoting consistency in describing dimensions of uncertainty in publications and presentations, to facilitate research design and management of the uncertainties inherent in the implementation of CNGS. PMID:28102863

  7. Sampling in freshwater environments: suspended particle traps and variability in the final data.

    PubMed

    Barbizzi, Sabrina; Pati, Alessandra

    2008-11-01

    This paper reports one practical method to estimate the measurement uncertainty including sampling, derived from the approach implemented by Ramsey for soil investigations. The methodology has been applied to estimate the measurement uncertainty (sampling and analysis) of (137)Cs activity concentration (Bq kg(-1)) and total carbon content (%) in suspended particle sampling in a freshwater ecosystem. Uncertainty estimates for the between-location, sampling and analysis components have been evaluated. For the considered measurands, the relative expanded measurement uncertainties are 12.3% for (137)Cs and 4.5% for total carbon. For (137)Cs, the measurement (sampling+analysis) variance gives the major contribution to the total variance, while for total carbon the spatial variance is the dominant contributor to the total variance. The limitations and advantages of this basic method are discussed.

  8. Facility Measurement Uncertainty Analysis at NASA GRC

    NASA Technical Reports Server (NTRS)

    Stephens, Julia; Hubbard, Erin

    2016-01-01

    This presentation provides an overview of the measurement uncertainty analysis currently being implemented in various facilities at NASA GRC. It includes examples pertinent to the turbine engine community (mass flow and fan efficiency calculation uncertainties).

  9. DecisionMaker software and extracting fuzzy rules under uncertainty

    NASA Technical Reports Server (NTRS)

    Walker, Kevin B.

    1992-01-01

    Knowledge acquisition under uncertainty is examined. Theories proposed in deKorvin's paper 'Extracting Fuzzy Rules Under Uncertainty and Measuring Definability Using Rough Sets' are discussed as they relate to rule calculation algorithms. A data structure for holding an arbitrary number of data fields is described. Limitations of Pascal for-loops in the generation of combinations are also discussed. Finally, recursive algorithms for generating all possible combinations of attributes and for calculating the intersection of an arbitrary number of fuzzy sets are presented.

  10. Integrated probabilistic risk assessment for nanoparticles: the case of nanosilica in food.

    PubMed

    Jacobs, Rianne; van der Voet, Hilko; Ter Braak, Cajo J F

    Insight into risks of nanotechnology and the use of nanoparticles is an essential condition for the social acceptance and safe use of nanotechnology. One of the problems with which the risk assessment of nanoparticles is faced is the lack of data, resulting in uncertainty in the risk assessment. We attempt to quantify some of this uncertainty by expanding a previous deterministic study on nanosilica (5-200 nm) in food into a fully integrated probabilistic risk assessment. We use the integrated probabilistic risk assessment method in which statistical distributions and bootstrap methods are used to quantify uncertainty and variability in the risk assessment. Due to the large amount of uncertainty present, this probabilistic method, which separates variability from uncertainty, contributed to a more understandable risk assessment. We found that quantifying the uncertainties did not increase the perceived risk relative to the outcome of the deterministic study. We pinpointed particular aspects of the hazard characterization that contributed most to the total uncertainty in the risk assessment, suggesting that further research would benefit most from obtaining more reliable data on those aspects.
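
    The separation of variability from uncertainty in an integrated probabilistic risk assessment is commonly implemented as a nested (two-dimensional) Monte Carlo loop; the sketch below follows that pattern with entirely invented exposure and hazard distributions, so only the structure, not the numbers, reflects the method.

        import numpy as np

        rng = np.random.default_rng(7)

        # Outer loop samples *uncertain* parameters, inner loop samples *variability*
        # between individuals. All distributions and numbers are illustrative.
        n_outer, n_inner = 500, 2000
        frac_exceeding = []

        for _ in range(n_outer):
            # Uncertainty: imperfect knowledge of the exposure model and the hazard level.
            mean_log_exposure = rng.normal(0.0, 0.3)     # log10 mg/kg bw/day
            sd_log_exposure = rng.uniform(0.2, 0.5)
            hazard_level = 10 ** rng.normal(1.0, 0.2)    # uncertain "safe" dose

            # Variability: individual exposures within the population.
            exposures = 10 ** rng.normal(mean_log_exposure, sd_log_exposure, n_inner)
            frac_exceeding.append((exposures > hazard_level).mean())

        frac = np.array(frac_exceeding)
        print("median fraction of population exceeding the hazard level:", np.median(frac))
        print("90% uncertainty interval:", np.percentile(frac, [5, 95]))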

  11. Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.

  12. Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.

  13. TSUNAMI Primer: A Primer for Sensitivity/Uncertainty Calculations with SCALE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Mueller, Don; Bowman, Stephen M

    2009-01-01

    This primer presents examples in the application of the SCALE/TSUNAMI tools to generate keff sensitivity data for one- and three-dimensional models using TSUNAMI-1D and -3D and to examine uncertainties in the computed keff values due to uncertainties in the cross-section data used in their calculation. The proper use of unit cell data and the need for confirming the appropriate selection of input parameters through direct perturbations are described. The uses of sensitivity and uncertainty data to identify and rank potential sources of computational bias in an application system and TSUNAMI tools for assessment of system similarity using sensitivity and uncertainty criteria are demonstrated. Uses of these criteria in trending analyses to assess computational biases, bias uncertainties, and gap analyses are also described. Additionally, an application of the data adjustment tool TSURFER is provided, including identification of specific details of sources of computational bias.

  14. Measurements of fusion neutron yields by neutron activation technique: Uncertainty due to the uncertainty on activation cross-sections

    NASA Astrophysics Data System (ADS)

    Stankunas, Gediminas; Batistoni, Paola; Sjöstrand, Henrik; Conroy, Sean; JET Contributors

    2015-07-01

    The neutron activation technique is routinely used in fusion experiments to measure the neutron yields. This paper investigates the uncertainty in these measurements due to the uncertainties in the dosimetry and activation reaction cross-sections. For this purpose, activation cross-sections were taken from the International Reactor Dosimetry and Fusion File (IRDFF-v1.05) in 640-group ENDF-6 format for several reactions of interest for both 2.5 and 14 MeV neutrons. Activation coefficients (reaction rates) have been calculated using the neutron flux spectra at the JET vacuum vessel, both for DD and DT plasmas, calculated by MCNP in the required 640-energy group format. The related uncertainties for the JET neutron spectra are evaluated as well using the covariance data available in the library. These uncertainties are in general small, but not negligible when high accuracy is required in the determination of the fusion neutron yields.

  15. A method for reducing the largest relative errors in Monte Carlo iterated-fission-source calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, J. L.; Sutton, T. M.

    2013-07-01

    In Monte Carlo iterated-fission-source calculations, relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santamarina, A.; Bernard, D.; Dos Santos, N.

    This paper describes the method to define relevant targeted integral measurements that allow the improvement of nuclear data evaluations and the determination of corresponding reliable covariances. 235U and 56Fe examples are pointed out for the improvement of JEFF3 data. Utilizations of these covariances are shown for Sensitivity and Representativity studies, Uncertainty calculations, and Transposition of experimental results to industrial applications. S/U studies are increasingly used in Reactor Physics and Safety-Criticality. However, the reliability of study results relies strongly on the relevancy of the ND covariances. Our method derives the real uncertainty associated with each evaluation from calibration on targeted integral measurements. These realistic covariance matrices allow reliable JEFF3.1.1 calculation of prior uncertainty due to nuclear data, as well as uncertainty reduction based on representative integral experiments, in challenging design calculations such as GEN3 and RJH reactors.

  17. Kiwi: An Evaluated Library of Uncertainties in Nuclear Data and Package for Nuclear Sensitivity Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pruet, J

    2007-06-23

    This report describes Kiwi, a program developed at Livermore to enable mature studies of the relation between imperfectly known nuclear physics and uncertainties in simulations of complicated systems. Kiwi includes a library of evaluated nuclear data uncertainties, tools for modifying data according to these uncertainties, and a simple interface for generating processed data used by transport codes. As well, Kiwi provides access to calculations of k eigenvalues for critical assemblies. This allows the user to check implications of data modifications against integral experiments for multiplying systems. Kiwi is written in Python. The uncertainty library has the same format and directory structure as the native ENDL used at Livermore. Calculations for critical assemblies rely on deterministic and Monte Carlo codes developed by B division.

  18. Calibration strategies for the determination of stable carbon absolute isotope ratios in a glycine candidate reference material by elemental analyser-isotope ratio mass spectrometry.

    PubMed

    Dunn, Philip J H; Malinovsky, Dmitry; Goenaga-Infante, Heidi

    2015-04-01

    We report a methodology for the determination of the stable carbon absolute isotope ratio of a glycine candidate reference material with natural carbon isotopic composition using EA-IRMS. For the first time, stable carbon absolute isotope ratios have been reported using continuous flow rather than dual inlet isotope ratio mass spectrometry. Also for the first time, a calibration strategy based on the use of synthetic mixtures gravimetrically prepared from well characterised, highly (13)C-enriched and (13)C-depleted glycines was developed for EA-IRMS calibration and generation of absolute carbon isotope ratio values traceable to the SI through calibration standards of known purity. A second calibration strategy based on converting the more typically determined delta values on the Vienna PeeDee Belemnite (VPDB) scale using literature values for the absolute carbon isotope ratio of VPDB itself was used for comparison. Both calibration approaches provided results consistent with those previously reported for the same natural glycine using MC-ICP-MS; absolute carbon ratios of 10,649 × 10(-6) with an expanded uncertainty (k = 2) of 24 × 10(-6) and 10,646 × 10(-6) with an expanded uncertainty (k = 2) of 88 × 10(-6) were obtained, respectively. The absolute carbon isotope ratio of the VPDB standard was found to be 11,115 × 10(-6) with an expanded uncertainty (k = 2) of 27 × 10(-6), which is in excellent agreement with previously published values.

  19. Uncertainty evaluation in the chloroquine phosphate potentiometric titration: application of three different approaches.

    PubMed

    Rodomonte, Andrea Luca; Montinaro, Annalisa; Bartolomei, Monica

    2006-09-11

    A measurement result cannot be properly interpreted if not accompanied by its uncertainty. Several methods to estimate uncertainty have been developed. Three of those methods were chosen in this work to estimate the uncertainty of the Ph. Eur. chloroquine phosphate assay, a potentiometric titration commonly used in medicinal control laboratories. The well-known error-budget approach (also called bottom-up or step-by-step) described by the ISO Guide to the Expression of Uncertainty in Measurement (GUM) was the first method chosen. It is based on the combination of uncertainty contributions that have to be derived directly from the measurement process. The second method employed was the Analytical Methods Committee top-down approach, which estimates uncertainty through reproducibility obtained during inter-laboratory studies. Data for its application were collected in a proficiency testing study carried out by over 50 laboratories throughout Europe. The last method chosen was the one proposed by Barwick and Ellison. It uses a combination of precision, trueness and ruggedness data to estimate uncertainty. These data were collected from a validation process specifically designed for uncertainty estimation. All three approaches presented a distinctive set of advantages and drawbacks in their implementation. An expanded uncertainty of about 1% was assessed for the assay investigated.
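
    The bottom-up (error-budget) approach described here amounts to combining the individual uncertainty contributions in quadrature and applying a coverage factor. A minimal sketch with hypothetical contribution values for a titration-type assay, purely to illustrate the GUM arithmetic:

```python
# Sketch of a GUM-style bottom-up uncertainty budget for a titration assay.
# All contribution values are hypothetical and for illustration only.

from math import sqrt

# relative standard uncertainties of individual contributions (as fractions)
contributions = {
    "sample mass":            0.0005,
    "titrant concentration":  0.0020,
    "endpoint detection":     0.0030,
    "burette volume":         0.0010,
    "repeatability":          0.0025,
}

u_combined = sqrt(sum(u ** 2 for u in contributions.values()))
U_expanded = 2.0 * u_combined   # coverage factor k = 2

print(f"combined relative standard uncertainty: {u_combined:.3%}")
print(f"expanded relative uncertainty (k = 2):  {U_expanded:.3%}")
```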

  20. Theoretical uncertainties in the calculation of supersymmetric dark matter observables

    NASA Astrophysics Data System (ADS)

    Bergeron, Paul; Sandick, Pearl; Sinha, Kuver

    2018-05-01

    We estimate the current theoretical uncertainty in supersymmetric dark matter predictions by comparing several state-of-the-art calculations within the minimal supersymmetric standard model (MSSM). We consider standard neutralino dark matter scenarios — coannihilation, well-tempering, pseudoscalar resonance — and benchmark models both in the pMSSM framework and in frameworks with Grand Unified Theory (GUT)-scale unification of supersymmetric mass parameters. The pipelines we consider are constructed from the publicly available software packages SOFTSUSY, SPheno, FeynHiggs, SusyHD, micrOMEGAs, and DarkSUSY. We find that the theoretical uncertainty in the relic density as calculated by different pipelines, in general, far exceeds the statistical errors reported by the Planck collaboration. In GUT models, in particular, the relative discrepancies in the results reported by different pipelines can be as much as a few orders of magnitude. We find that these discrepancies are especially pronounced for cases where the dark matter physics relies critically on calculations related to electroweak symmetry breaking, which we investigate in detail, and for coannihilation models, where there is heightened sensitivity to the sparticle spectrum. The dark matter annihilation cross section today and the scattering cross section with nuclei also suffer appreciable theoretical uncertainties, which, as experiments reach the relevant sensitivities, could lead to uncertainty in conclusions regarding the viability or exclusion of particular models.

  1. A method for approximating acoustic-field-amplitude uncertainty caused by environmental uncertainties.

    PubMed

    James, Kevin R; Dowling, David R

    2008-09-01

    In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.

  2. Method to Calculate Uncertainty Estimate of Measuring Shortwave Solar Irradiance using Thermopile and Semiconductor Solar Radiometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reda, I.

    2011-07-01

    The uncertainty of measuring solar irradiance is fundamentally important for solar energy and atmospheric science applications. Without an uncertainty statement, the quality of a result, model, or testing method cannot be quantified, the chain of traceability is broken, and confidence cannot be maintained in the measurement. Measurement results are incomplete and meaningless without a statement of the estimated uncertainty with traceability to the International System of Units (SI) or to another internationally recognized standard. This report explains how to use the international Guide to the Expression of Uncertainty in Measurement (GUM) to calculate such uncertainty. The report also shows that without appropriate corrections to solar measuring instruments (solar radiometers), the uncertainty of measuring shortwave solar irradiance can exceed 4% using present state-of-the-art pyranometers and 2.7% using present state-of-the-art pyrheliometers. Finally, the report demonstrates that by applying the appropriate corrections, uncertainties may be reduced by at least 50%. The uncertainties, with or without the appropriate corrections, might not be compatible with the needs of solar energy and atmospheric science applications; yet, this report may shed some light on the sources of uncertainties and the means to reduce overall uncertainty in measuring solar irradiance.

  3. Uncertainty Propagation in OMFIT

    NASA Astrophysics Data System (ADS)

    Smith, Sterling; Meneghini, Orso; Sung, Choongki

    2017-10-01

    A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
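
    The propagation pattern described here can be reproduced on a small scale with the same python uncertainties package: build correlated parameters from a fit covariance matrix and push them through the profile function and its derivative. The sketch below uses a hypothetical modified-tanh fit result and is only an illustration of the mechanism, not the OMFIT implementation.

```python
# Sketch: covariant uncertainty propagation with the python "uncertainties"
# package for a tanh-like pedestal profile. The fitted parameters and their
# covariance matrix are hypothetical.

import numpy as np
from uncertainties import correlated_values, umath

popt = np.array([1.0, 0.05, 0.95])            # height, width, centre (arbitrary units)
pcov = np.array([[1.0e-4, 2.0e-6, 1.0e-6],    # hypothetical fit covariance
                 [2.0e-6, 4.0e-6, 1.0e-6],
                 [1.0e-6, 1.0e-6, 9.0e-6]])

height, width, center = correlated_values(popt, pcov)

def profile(x):
    """Modified-tanh pedestal profile built from correlated parameters."""
    return 0.5 * height * (1.0 - umath.tanh((x - center) / width))

x = 0.97
value = profile(x)
# finite-difference derivative; correlations are carried along automatically
slope = (profile(x + 1e-4) - profile(x - 1e-4)) / 2e-4

print(f"profile({x})   = {value}")
print(f"d(profile)/dx = {slope}")
```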

  4. Quantification of elemental area densities in multiple metal layers (Au/Ni/Cu) on a Cr-coated quartz glass substrate for certification of NMIJ CRM 5208-a.

    PubMed

    Ariga, Tomoko; Zhu, Yanbei; Ito, Mika; Takatsuka, Toshiko; Terauchi, Shinya; Kurokawa, Akira; Inagaki, Kazumi

    2018-04-01

    Area densities of Au/Ni/Cu layers on a Cr-coated quartz substrate were characterized to certify a multiple-metal-layer certified reference material (NMIJ CRM 5208-a) that is intended for use in the analysis of the layer area density and the thickness by an X-ray fluorescence spectrometer. The area densities of the Au/Ni/Cu layers were calculated from the layer mass amounts and area. The layer mass amounts were determined by using wet chemical analyses, namely inductively coupled plasma mass spectrometry (ICP-MS), isotope-dilution (ID-) ICP-MS, and inductively coupled plasma optical emission spectrometry (ICP-OES), after dissolving the layers with a diluted mixture of HCl and HNO3 (1:1, v/v). The analytical results for the layer mass amounts obtained by these methods agreed well with one another within their uncertainty ranges. The area of the layer was determined by using a high-resolution optical scanner calibrated by Japan Calibration Service System (JCSS) standard scales. The property values of area density were 1.84 ± 0.05 μg/mm² for Au, 8.69 ± 0.17 μg/mm² for Ni, and 8.80 ± 0.14 μg/mm² for Cu (mean ± expanded uncertainty, coverage factor k = 2). In order to assess the reliability of these values, the density of each metal layer, calculated from the property values of the area density and the layer thickness measured by using a scanning electron microscope, was compared with available literature values, and good agreement was found between the observed values and those obtained in previous studies.
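
    The area-density values follow directly from the dissolved layer mass and the measured area, with relative uncertainties combined in quadrature and expanded with k = 2. A minimal sketch with hypothetical input values (not the certified CRM figures):

```python
# Sketch: area density of a metal layer from the layer mass amount and area,
# with a simple quadrature combination of relative uncertainties (k = 2).
# All input values are hypothetical.

from math import sqrt

mass_ug = 1840.0        # dissolved layer mass / micrograms (hypothetical)
u_mass_rel = 0.010      # relative standard uncertainty of the mass amount
area_mm2 = 1000.0       # layer area / mm^2 (hypothetical)
u_area_rel = 0.005      # relative standard uncertainty of the area

rho_a = mass_ug / area_mm2                       # area density, ug/mm^2
u_rel = sqrt(u_mass_rel ** 2 + u_area_rel ** 2)  # combined relative uncertainty
U = 2.0 * u_rel * rho_a                          # expanded uncertainty, k = 2

print(f"area density = {rho_a:.2f} +/- {U:.2f} ug/mm^2 (k = 2)")
```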

  5. Measuring past glacier fluctuations from historic photographs geolocated using Structure from Motion

    NASA Astrophysics Data System (ADS)

    Vargo, L.; Anderson, B.; Horgan, H. J.; Mackintosh, A.; Lorrey, A.; Thornton, M.

    2017-12-01

    Quantifying glacier fluctuations is important for understanding how the cryosphere responds to climate variability and change. Photographs of past ice extents have become iconic images of climate change, but until now incorporating these images into quantitative estimates of glacier change has been problematic. We present a new method to quantitatively measure past glacier fluctuations from historic images. The method uses a large set of modern geolocated photographs and Structure from Motion (SfM) to calculate the camera parameters for the historic images, including the location from which they were taken. We initially apply this method to a small maritime New Zealand glacier (Brewster Glacier, 44°S, 2 km2), and quantify annual equilibrium line altitudes (ELAs) and length changes from historic oblique aerial photographs (1981 - 2017). Results show that Brewster has retreated 364 ± 12 m since 1981 and, using independent field measurements of terminus positions (2005 - 2014), we show that this SfM-derived length record accurately captures glacier change. We calculate the uncertainties associated with this method using known coordinates of bedrock features surrounding the glacier. Mean uncertainties in the ELA and length records are 7 m and 11 m, respectively. In addition to Brewster, 49 other New Zealand glaciers have been monitored by aerial photographs since 1978. However, the length records for these glaciers only include years of relative advance or retreat, and no length changes have been quantified. We will ultimately apply this method to all 50 glaciers, expanding the database of New Zealand glacier fluctuations that until now included only a few glaciers. This method can be further applied to any glacier with historic images, and can be used to measure past changes in glacier width, area, and surface elevation in addition to ELA and length.

  6. A case study of view-factor rectification procedures for diffuse-gray radiation enclosure computations

    NASA Technical Reports Server (NTRS)

    Taylor, Robert P.; Luck, Rogelio

    1995-01-01

    The view factors which are used in diffuse-gray radiation enclosure calculations are often computed by approximate numerical integrations. These approximately calculated view factors will usually not satisfy the important physical constraints of reciprocity and closure. In this paper several view-factor rectification algorithms are reviewed and a rectification algorithm based on a least-squares numerical filtering scheme is proposed with both weighted and unweighted classes. A Monte-Carlo investigation is undertaken to study the propagation of view-factor and surface-area uncertainties into the heat transfer results of the diffuse-gray enclosure calculations. It is found that the weighted least-squares algorithm is vastly superior to the other rectification schemes for the reduction of the heat-flux sensitivities to view-factor uncertainties. In a sample problem, which has proven to be very sensitive to uncertainties in view factor, the heat transfer calculations with weighted least-squares rectified view factors are very good with an original view-factor matrix computed to only one-digit accuracy. All of the algorithms had roughly equivalent effects on the reduction in sensitivity to area uncertainty in this case study.
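
    The core of an unweighted least-squares rectification is to find the view-factor matrix closest to the raw one that satisfies closure (each row sums to one) and reciprocity (A_i F_ij = A_j F_ji). A minimal sketch of that constrained least-squares adjustment for a hypothetical three-surface enclosure, illustrating the idea rather than the weighted scheme of the paper:

```python
# Sketch: least-squares rectification of an approximately integrated
# view-factor matrix so that it satisfies closure and reciprocity.
# Surface areas and raw view factors are hypothetical.

import numpy as np

A = np.array([1.0, 2.0, 3.0])              # surface areas
F0 = np.array([[0.00, 0.33, 0.69],         # approximate (one-digit-ish) view factors
               [0.18, 0.00, 0.84],
               [0.21, 0.55, 0.00]])
n = len(A)

x0 = F0.flatten()
rows, rhs = [], []

# closure constraints: sum_j F_ij = 1 for every surface i
for i in range(n):
    r = np.zeros(n * n)
    r[i * n:(i + 1) * n] = 1.0
    rows.append(r)
    rhs.append(1.0)

# reciprocity constraints: A_i F_ij - A_j F_ji = 0 for i < j
for i in range(n):
    for j in range(i + 1, n):
        r = np.zeros(n * n)
        r[i * n + j] = A[i]
        r[j * n + i] = -A[j]
        rows.append(r)
        rhs.append(0.0)

C, d = np.array(rows), np.array(rhs)
# minimize ||x - x0||^2 subject to C x = d (Lagrange-multiplier solution)
lam = np.linalg.solve(C @ C.T, C @ x0 - d)
F = (x0 - C.T @ lam).reshape(n, n)

print(np.round(F, 4))
print("row sums:", np.round(F.sum(axis=1), 6))
```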

  7. MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method

    USGS Publications Warehouse

    Glasgow, H.S.; Fortney, M.D.; Lee, J.; Graettinger, A.J.; Reeves, H.W.

    2003-01-01

    A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design. A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input data. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge).
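
    The FOSM step itself is a single matrix product: the head covariance is the sensitivity matrix applied to the input covariance, Cov(h) ≈ J · Cov(p) · J^T. A minimal sketch with hypothetical sensitivities standing in for MODFLOW 2000 output:

```python
# Sketch of first-order second moment (FOSM) propagation:
# Cov(head) ~= J @ Cov(p) @ J.T, where J holds the sensitivities
# d(head_i)/d(p_j). All numbers are hypothetical stand-ins.

import numpy as np

# sensitivities of head at three locations to two inputs
# (transmissivity and recharge), hypothetical values
J = np.array([[0.8, 120.0],
              [0.5,  80.0],
              [0.2,  30.0]])

# covariance of the uncertain inputs (e.g. from conditional probability
# calculations on the extrapolated geologic data)
cov_p = np.array([[0.04, 0.0],
                  [0.0,  1.0e-6]])

cov_h = J @ cov_p @ J.T            # variance-covariance matrix of computed heads
std_h = np.sqrt(np.diag(cov_h))    # standard deviation of head at each location

print("head standard deviations:", np.round(std_h, 3))
```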

  8. Evaluation of assigned-value uncertainty for complex calibrator value assignment processes: a prealbumin example.

    PubMed

    Middleton, John; Vaks, Jeffrey E

    2007-04-01

    Errors of calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for the estimation of calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of the uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed for optimizing the process, while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows for estimation of calibrator uncertainty for optimization of various value-assignment processes, with a reduced number of measurements and reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
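
    The Monte Carlo idea described here can be illustrated with a toy two-step value-transfer chain: draw each step from its assumed distribution, multiply through, and look at the spread of the simulated assigned values. In the sketch below only the 3.7% reference-material figure comes from the abstract; the transfer-step uncertainties are hypothetical.

```python
# Sketch: Monte Carlo estimation of calibrator assigned-value uncertainty for
# a simplified two-step value-transfer process (reference material -> master
# calibrator -> product calibrator). Transfer-step uncertainties are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

ref = rng.normal(1.0, 0.037, n)          # reference material, 3.7% relative uncertainty
transfer1 = rng.normal(1.0, 0.004, n)    # hypothetical first transfer step
transfer2 = rng.normal(1.0, 0.003, n)    # hypothetical second transfer step

assigned = ref * transfer1 * transfer2   # simulated assigned values

print("reference-material component: 3.70%")
print(f"total relative uncertainty:   {assigned.std(ddof=1) / assigned.mean():.2%}")
```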

  9. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization, reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.

  10. Uncertainty and Traceability for the CEESI Iowa Natural Gas Facility.

    PubMed

    Johnson, Aaron; Kegel, Tom

    2004-01-01

    This paper analyzes the uncertainty of a secondary flow measurement facility that calibrates a significant fraction of United States and foreign flow meters used for custody transfer of natural gas. The facility, owned by the Colorado Experimental Engineering Station Incorporated (CEESI), is located in Iowa. This facility measures flow with nine turbine meter standards, each of which is traceable to the NIST primary flow standard. The flow capacity of this facility ranges from 0.7 actual m³/s to 10.7 actual m³/s at nominal pressures of 7174 kPa and at ambient temperatures. Over this flow range the relative expanded flow uncertainty varies from 0.28 % to 0.30 % (depending on flow).

  11. Effects of radiobiological uncertainty on shield design for a 60-day lunar mission

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Nealy, John E.; Schimmerling, Walter

    1993-01-01

    Some consequences of uncertainties in radiobiological risk due to galactic cosmic ray exposure are analyzed to determine their effect on engineering designs for a first lunar outpost - a 60-day mission. Quantitative estimates of shield mass requirements as a function of a radiobiological uncertainty factor are given for a simplified vehicle structure. The additional shield mass required for compensation is calculated as a function of the uncertainty in galactic cosmic ray exposure, and this mass is found to be as large as a factor of 3 for a lunar transfer vehicle. The additional cost resulting from this mass is also calculated. These cost estimates are then used to exemplify the cost-effectiveness of research.

  12. Uncertainty Propagation in an Ecosystem Nutrient Budget.

    EPA Science Inventory

    New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of fr...

  13. Systematic uncertainties in the Monte Carlo calculation of ion chamber replacement correction factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, L. L. W.; La Russa, D. J.; Rogers, D. W. O.

    In a previous study [Med. Phys. 35, 1747-1755 (2008)], the authors proposed two direct methods of calculating the replacement correction factors (P_repl or p_cav p_dis) for ion chambers by Monte Carlo calculation. By "direct" we meant that the stopping-power ratio evaluation is not necessary. The two methods were named the high-density air (HDA) and low-density water (LDW) methods. Although the accuracy of these methods was briefly discussed, it turns out that the assumption made regarding the dose in an HDA slab as a function of slab thickness is not correct. This issue is reinvestigated in the current study, and the accuracy of the LDW method applied to ion chambers in a 60Co photon beam is also studied. It is found that the two direct methods are in fact not completely independent of the stopping-power ratio of the two materials involved. There is an implicit dependence of the calculated P_repl values upon the stopping-power ratio evaluation through the choice of an appropriate energy cutoff Δ, which characterizes a cavity size in the Spencer-Attix cavity theory. Since the Δ value is not accurately defined in the theory, this dependence on the stopping-power ratio results in a systematic uncertainty on the calculated P_repl values. For phantom materials of similar effective atomic number to air, such as water and graphite, this systematic uncertainty is at most 0.2% for most commonly used chambers for either electron or photon beams. This uncertainty level is good enough for current ion chamber dosimetry, and the merits of the two direct methods of calculating P_repl values are maintained, i.e., there is no need to do a separate stopping-power ratio calculation. For high-Z materials, the inherent uncertainty would make it practically impossible to calculate reliable P_repl values using the two direct methods.

  14. Modeling Radioactive Decay Chains with Branching Fraction Uncertainties

    DTIC Science & Technology

    2013-03-01

    ... moments methods with transmutation matrices. Uncertainty from both half-lives and branching fractions is carried through these calculations by Monte Carlo... moment methods, a method for sampling from normal distributions for half-life uncertainty, and the use of transmutation matrices were leveraged. This... distributions for half-life and branching fraction uncertainties, building decay chains and generating the transmutation matrix (T-matrix)...
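
    The general scheme referred to above (sampling half-lives and branching fractions from normal distributions, building a transmutation matrix, and propagating it in time) can be sketched for a short hypothetical chain. The nuclide data below are invented; the sketch only illustrates the mechanics, not the report's actual chains.

```python
# Sketch: Monte Carlo propagation of half-life and branching-fraction
# uncertainty through a short decay chain A -> (B or C), B -> C, using a
# transmutation matrix and a matrix exponential. All nuclide data are
# hypothetical.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n_samples = 2000
t = 3600.0                     # elapsed time, s
ln2 = np.log(2.0)

results = []
for _ in range(n_samples):
    # sample half-lives (s) and the A -> B branching fraction
    t_half_a = rng.normal(1200.0, 15.0)
    t_half_b = rng.normal(9000.0, 200.0)
    f_ab = np.clip(rng.normal(0.70, 0.02), 0.0, 1.0)

    lam_a, lam_b = ln2 / t_half_a, ln2 / t_half_b
    # transmutation matrix for N = [N_A, N_B, N_C]; dN/dt = T N
    T = np.array([[-lam_a,                0.0,    0.0],
                  [f_ab * lam_a,         -lam_b,  0.0],
                  [(1.0 - f_ab) * lam_a,  lam_b,  0.0]])
    results.append(expm(T * t) @ np.array([1.0, 0.0, 0.0]))

results = np.array(results)
for name, m, s in zip("ABC", results.mean(axis=0), results.std(axis=0)):
    print(f"N_{name}(t) = {m:.4f} +/- {s:.4f}")
```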

  15. Development of a low frost-point generator operating at sub-atmospheric pressure

    NASA Astrophysics Data System (ADS)

    Cuccaro, R.; Rosso, L.; Smorgon, D.; Beltramino, G.; Tabandeh, S.; Fernicola, V.

    2018-05-01

    A low frost-point generator (INRIM 03) operating at sub-atmospheric pressure has been designed and constructed at the Istituto Nazionale di Ricerca Metrologica (INRIM) as part of a calibration facility for upper-air sounding instruments. This new humidity generator covers the frost-point temperature range between -99 °C and -20 °C and works at any controlled pressure between 200 hPa and 1100 hPa, achieving a complete saturation of the carrier gas (nitrogen) in a single passage through a stainless steel isothermal saturator. The generated humid gas contains a water vapour amount fraction between 14 × 10⁻⁹ mol mol⁻¹ and 5 × 10⁻³ mol mol⁻¹. In this work the design of the generator is reported together with characterisation and performance evaluation tests. A preliminary validation of the INRIM 03 against one of the INRIM humidity standards in the common region is also included. Based on experimental test results, an initial uncertainty evaluation of the generated frost-point temperature, T_fp, and water vapour amount fraction, x_w, in the limited range down to -75 °C at atmospheric pressure is reported. For the frost-point temperature, the uncertainty budget yields a total expanded uncertainty (k = 2) of less than 0.028 °C, while for the mole fraction the budget yields a total expanded uncertainty of less than 10⁻⁶ mol mol⁻¹.

  16. Using Websites to Convey Scientific Uncertainties for Volcanic Processes and Potential Hazards

    NASA Astrophysics Data System (ADS)

    Venezky, D. Y.; Lowenstern, J. B.; Hill, D. P.

    2005-12-01

    The Yellowstone Volcano Observatory (YVO) and Long Valley Observatory (LVO) websites have greatly increased the public's awareness and access to information about scientific uncertainties for volcanic processes by communicating at multiple levels of understanding and varied levels of detail. Our websites serve a broad audience ranging from visitors unaware of the calderas, to lay volcano enthusiasts, to scientists, federal agencies, and emergency managers. Both Yellowstone and Long Valley are highly visited tourist attractions with histories of caldera-forming eruptions large enough to alter global climate temporarily. Although it is much more likely that future activity would be on a small scale at either volcano, we are constantly posed questions about low-probability, high-impact events such as the caldera-forming eruption depicted in the recent BBC/Discovery movie, "Supervolcano". YVO and LVO website objectives include: providing monitoring data, explaining the likelihood of future events, summarizing research results, helping media provide reliable information, and expanding on information presented by the media. Providing detailed current information is a crucial website component as the public often searches online to augment information gained from often cryptic pronouncements by the media. In May 2005, for example, YVO saw an order of magnitude increase in page requests on the day MSNBC ran the misleading headline, "Yellowstone eruption threat high." The headline referred not to current events but a general rating of Yellowstone as one of 37 "high threat" volcanoes in the USGS National Volcano Early Warning System report. As websites become a more dominant source of information, we continuously revise our communication plans to make the most of this evolving medium. Because the internet gives equal access to all information providers, we find ourselves competing with various "doomsday" websites that sensationalize and distort the current understanding of natural systems. For example, many sites highlight a miscalculated repose period for caldera-forming eruptions at Yellowstone and conclude that a catastrophic eruption is overdue. Recent revisions on the YVO website have discussed how intervals are calculated and why the commonly quoted values are incorrect. Our aim is to reduce confusion by providing clear, simple explanations that highlight the process by which scientists reach conclusions and calculate associated uncertainties.

  17. Recent improvements of the French liquid micro-flow reference facility

    NASA Astrophysics Data System (ADS)

    Florestan, Ogheard; Sandy, Margot; Julien, Savary

    2018-02-01

    In line with its mission as the French national reference laboratory, LNE-CETIAT completed in 2012 the construction and accreditation of a modern and innovative calibration laboratory based on the gravimetric method. The measurement capabilities cover a liquid flow rate range from 10 kg · h-1 down to 1 g · h-1 with expanded relative uncertainties from 0.1% to 0.6% (k = 2). Since 2012, several theoretical and experimental studies have allowed a better knowledge and control over uncertainty sources and have decreased calibration time. When dealing with liquid micro-flow using a reference method such as the gravimetric method, several difficulties have to be overcome. The main improvements described in this paper relate to the enhancement of the evaporation trap system, the merging of the four dedicated measurement lines into one, and the implementation of a dynamic 'flying' gravimetric method for the calculation of the reference flow rate. The evaporation-avoiding system has been replaced by an oil layer in order to remove the possibility of condensation of water on both the weighed vessel and the immersed capillary. The article describes the experimental method used to quantify the effect of surface tension of the water/oil/air interfaces on the weighed mass. The traditional static gravimetric method has been upgraded to a dynamic 'flying' gravimetric method. The article presents the newly implemented method, its validation and its advantages compared to the static method. The four dedicated weighing devices, distributed over four sub-ranges of flow rate, have been merged, leading to the use of only one weighing scale with the same uncertainties on the reference flow rate. The article discusses the new uncertainty budget over the full flow rate range capability. Finally, the article discusses the improvements still under development and the general prospects of liquid micro-flow metrology.

  18. Uncertainty quantification in (α,n) neutron source calculations for an oxide matrix

    DOE PAGES

    Pigni, M. T.; Croft, S.; Gauld, I. C.

    2016-04-25

    Here we present a methodology to propagate nuclear data covariance information in neutron source calculations from (α,n) reactions. The approach is applied to estimate the uncertainty in the neutron generation rates for uranium oxide fuel types due to uncertainties in 1) 17,18O(α,n) reaction cross sections and 2) uranium and oxygen stopping power cross sections. The procedure to generate reaction cross section covariance information is based on the Bayesian fitting method implemented in the R-matrix SAMMY code. The evaluation methodology uses the Reich-Moore approximation to fit the 17,18O(α,n) reaction cross sections in order to derive a set of resonance parameters and a related covariance matrix that is then used to calculate the energy-dependent cross section covariance matrix. The stopping power cross sections and related covariance information for uranium and oxygen were obtained by fitting stopping power data in the energy range of 1 keV up to 12 MeV. Cross section perturbation factors based on the covariance information relative to the evaluated 17,18O(α,n) reaction cross sections, as well as the uranium and oxygen stopping power cross sections, were used to generate a varied set of nuclear data libraries used in SOURCES4C and ORIGEN for inventory and source term calculations. The set of randomly perturbed output (α,n) source responses provides the mean values and standard deviations of the calculated responses, reflecting the uncertainties in the nuclear data used in the calculations. Lastly, the results and related uncertainties are compared with experimental thick-target (α,n) yields for uranium oxide.

  19. Radiological assessment. A textbook on environmental dose analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Till, J.E.; Meyer, H.R.

    1983-09-01

    Radiological assessment is the quantitative process of estimating the consequences to humans resulting from the release of radionuclides to the biosphere. It is a multidisciplinary subject requiring the expertise of a number of individuals in order to predict source terms, describe environmental transport, calculate internal and external dose, and extrapolate dose to health effects. Up to this time there has been available no comprehensive book describing, on a uniform and comprehensive level, the techniques and models used in radiological assessment. Radiological Assessment is based on material presented at the 1980 Health Physics Society Summer School held in Seattle, Washington. The material has been expanded and edited to make it comprehensive in scope and useful as a text. Topics covered include (1) source terms for nuclear facilities and medical and industrial sites; (2) transport of radionuclides in the atmosphere; (3) transport of radionuclides in surface waters; (4) transport of radionuclides in groundwater; (5) terrestrial and aquatic food chain pathways; (6) reference man: a system for internal dose calculations; (7) internal dosimetry; (8) external dosimetry; (9) models for special-case radionuclides; (10) calculation of health effects in irradiated populations; (11) evaluation of uncertainties in environmental radiological assessment models; (12) regulatory standards for environmental releases of radionuclides; (13) development of computer codes for radiological assessment; and (14) assessment of accidental releases of radionuclides.

  20. AN IMPROVEMENT TO THE MOUSE COMPUTERIZED UNCERTAINTY ANALYSIS SYSTEM

    EPA Science Inventory

    The original MOUSE (Modular Oriented Uncertainty System) system was designed to deal with the problem of uncertainties in environmental engineering calculations, such as a set of engineering cost or risk analysis equations. It was especially intended for use by individuals with l...

  1. Direct Aerosol Forcing Uncertainty

    DOE Data Explorer

    Mccomiskey, Allison

    2008-01-15

    Understanding sources of uncertainty in aerosol direct radiative forcing (DRF), the difference in a given radiative flux component with and without aerosol, is essential to quantifying changes in Earth's radiation budget. We examine the uncertainty in DRF due to measurement uncertainty in the quantities on which it depends: aerosol optical depth, single scattering albedo, asymmetry parameter, solar geometry, and surface albedo. Direct radiative forcing at the top of the atmosphere and at the surface as well as sensitivities, the changes in DRF in response to unit changes in individual aerosol or surface properties, are calculated at three locations representing distinct aerosol types and radiative environments. The uncertainty in DRF associated with a given property is computed as the product of the sensitivity and typical measurement uncertainty in the respective aerosol or surface property. Sensitivity and uncertainty values permit estimation of total uncertainty in calculated DRF and identification of properties that most limit accuracy in estimating forcing. Total uncertainties in modeled local diurnally averaged forcing range from 0.2 to 1.3 W m-2 (42 to 20%) depending on location (from tropical to polar sites), solar zenith angle, surface reflectance, aerosol type, and aerosol optical depth. The largest contributor to total uncertainty in DRF is usually single scattering albedo; however decreasing measurement uncertainties for any property would increase accuracy in DRF. Comparison of two radiative transfer models suggests the contribution of modeling error is small compared to the total uncertainty although comparable to uncertainty arising from some individual properties.
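
    The uncertainty recipe described here computes each component as sensitivity times measurement uncertainty; the sketch below combines those components in quadrature for an illustrative total. All sensitivity and uncertainty values are hypothetical.

```python
# Sketch: uncertainty in direct aerosol radiative forcing (DRF) as the product
# of a sensitivity and a measurement uncertainty per property, combined in
# quadrature. Sensitivities and uncertainties below are hypothetical.

from math import sqrt

# (sensitivity dDRF/dx in W m-2 per unit of x, measurement uncertainty of x)
properties = {
    "aerosol optical depth":    (-30.0, 0.01),
    "single scattering albedo": (-25.0, 0.03),
    "asymmetry parameter":      ( 10.0, 0.02),
    "surface albedo":           ( 15.0, 0.01),
}

components = {name: abs(s) * u for name, (s, u) in properties.items()}
total = sqrt(sum(c ** 2 for c in components.values()))

for name, c in components.items():
    print(f"{name:26s}: {c:5.2f} W m-2")
print(f"{'total uncertainty':26s}: {total:5.2f} W m-2")
```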

  2. Nuclear event zero-time calculation and uncertainty evaluation.

    PubMed

    Pan, Pujing; Ungar, R Kurt

    2012-04-01

    It is important to know the initial time, or zero-time, of a nuclear event such as a nuclear weapon's test, a nuclear power plant accident or a nuclear terrorist attack (e.g. with an improvised nuclear device, IND). Together with relevant meteorological information, the calculated zero-time is used to help locate the origin of a nuclear event. The zero-time of a nuclear event can be derived from measured activity ratios of two nuclides. The calculated zero-time of a nuclear event would not be complete without an appropriately evaluated uncertainty term. In this paper, analytical equations for zero-time and the associated uncertainty calculations are derived using a measured activity ratio of two nuclides. Application of the derived equations is illustrated in a realistic example using data from the last Chinese thermonuclear test in 1980. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
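
    For two independently decaying nuclides the activity ratio evolves as R(t) = R0 exp(-(λ1 - λ2) t), so the elapsed time follows from a logarithm and its uncertainty from first-order propagation. The sketch below uses hypothetical half-lives, a hypothetical production ratio R0 and a hypothetical measured ratio; it illustrates the form of the calculation, not the paper's nuclide pair.

```python
# Sketch: zero-time (elapsed time) of a nuclear event from a measured activity
# ratio of two nuclides, R(t) = R0 * exp(-(lam1 - lam2) * t), with a first-order
# uncertainty from the ratio measurement alone. All values are hypothetical.

from math import log

ln2 = log(2.0)
lam1 = ln2 / (8.0 * 86400.0)        # decay constant of an ~8 d nuclide, s^-1
lam2 = ln2 / (64.0 * 86400.0)       # decay constant of an ~64 d nuclide, s^-1

R0 = 12.0                           # activity ratio at zero-time (hypothetical)
R = 4.1                             # measured activity ratio (hypothetical)
u_R = 0.2                           # standard uncertainty of the measured ratio

t = log(R0 / R) / (lam1 - lam2)     # elapsed time since the event, s
u_t = u_R / (abs(lam1 - lam2) * R)  # |dt/dR| * u_R

print(f"elapsed time since event: {t / 86400:.2f} +/- {u_t / 86400:.2f} days")
```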

  3. SU-G-BRB-14: Uncertainty of Radiochromic Film Based Relative Dose Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devic, S; Tomic, N; DeBlois, F

    2016-06-15

    Purpose: Due to its inherently non-linear dose response, measurement of relative dose distributions with radiochromic film requires measurement of absolute dose using a calibration curve following a previously established reference dosimetry protocol. On the other hand, a functional form that converts the inherently non-linear dose response curve of the radiochromic film dosimetry system into a linear one has been proposed recently [Devic et al, Med. Phys. 39 4850–4857 (2012)]. However, the question is what the uncertainty of such a measured relative dose would be. Methods: If the relative dose distribution is determined by going through the reference dosimetry system (conversion of the response into absolute dose by using the calibration curve), the total uncertainty of such a determined relative dose will be calculated by summing in quadrature the total uncertainties of the doses measured at a given point and at the reference point. On the other hand, if the relative dose is determined using the linearization method, the new response variable is calculated as ζ = a(netOD)^n/ln(netOD). In this case, the total uncertainty in relative dose will be calculated by summing in quadrature the uncertainties of the new response function (σζ) for a given point and for the reference point. Results: Except at very low doses, where the measurement uncertainty dominates, the total relative dose uncertainty is less than 1% for the linear response method as compared to an almost 2% uncertainty level for the reference dosimetry method. The result is not surprising having in mind that the total uncertainty of the reference dose method is dominated by the fitting uncertainty, which is mitigated in the case of the linearization method. Conclusion: Linearization of the radiochromic film dose response provides a convenient and a more precise method for relative dose measurements as it does not require reference dosimetry and creation of a calibration curve. However, the linearity of the newly introduced function must be verified. Dave Lewis is inventor and runs a consulting company for radiochromic films.
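
    The linearization route described above can be reproduced in a few lines: evaluate ζ = a(netOD)^n/ln(netOD) at the point of interest and at the reference point, form the ratio, and add the two relative uncertainties in quadrature. The coefficients and netOD values in the sketch are hypothetical.

```python
# Sketch: relative dose from the linearized radiochromic-film response
# zeta = a * netOD**n / ln(netOD), with the relative uncertainty obtained by
# adding the relative uncertainties of zeta at the measurement point and at
# the reference point in quadrature. All numerical values are hypothetical.

from math import log, sqrt

def zeta(net_od, a=1.0, n=1.8):
    """Linearized response function of the film dosimetry system."""
    return a * net_od ** n / log(net_od)

# (netOD, relative standard uncertainty of zeta) at the point of interest
# and at the reference (normalisation) point -- hypothetical values
net_od, u_rel = 0.35, 0.006
net_od_ref, u_rel_ref = 0.55, 0.005

rel_dose = zeta(net_od) / zeta(net_od_ref)
u_rel_dose = sqrt(u_rel ** 2 + u_rel_ref ** 2)

print(f"relative dose: {rel_dose:.3f} +/- {u_rel_dose:.1%} (relative)")
```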

  4. Modelling of plasma-based dry reforming: how do uncertainties in the input data affect the calculation results?

    NASA Astrophysics Data System (ADS)

    Wang, Weizong; Berthelot, Antonin; Zhang, Quanzhi; Bogaerts, Annemie

    2018-05-01

    One of the main issues in plasma chemistry modeling is that the cross sections and rate coefficients are subject to uncertainties, which yields uncertainties in the modeling results and hence hinders the predictive capabilities. In this paper, we reveal the impact of these uncertainties on the model predictions of plasma-based dry reforming in a dielectric barrier discharge. For this purpose, we performed a detailed uncertainty analysis and sensitivity study. 2000 different combinations of rate coefficients, based on the uncertainty from a log-normal distribution, are used to predict the uncertainties in the model output. The uncertainties in the electron density and electron temperature are around 11% and 8% at the maximum of the power deposition for a 70% confidence level. Still, this can have a major effect on the electron impact rates and hence on the calculated conversions of CO2 and CH4, as well as on the selectivities of CO and H2. For the CO2 and CH4 conversion, we obtain uncertainties of 24% and 33%, respectively. For the CO and H2 selectivity, the corresponding uncertainties are 28% and 14%, respectively. We also identify which reactions contribute most to the uncertainty in the model predictions. In order to improve the accuracy and reliability of plasma chemistry models, we recommend using only verified rate coefficients, and we point out the need for dedicated verification experiments.
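
    The sampling scheme used here, drawing each rate coefficient from a log-normal distribution defined by an uncertainty factor and rerunning the model, can be sketched with a toy zero-dimensional surrogate in place of the full plasma chemistry model. The reaction set, densities and uncertainty factors below are all hypothetical.

```python
# Sketch: propagate log-normally distributed rate-coefficient uncertainty
# through a toy 0-D surrogate model by repeated sampling and report the spread
# of the predicted CO2 conversion. All reactions and numbers are hypothetical.

import numpy as np

rng = np.random.default_rng(42)
n_runs = 2000

# nominal rate coefficients (cm3 s-1) and their lognormal uncertainty factors
k_nom =  {"e + CO2 -> CO + O + e": 1.0e-9, "O + CO2 -> CO + O2": 2.0e-12}
factor = {"e + CO2 -> CO + O + e": 2.0,    "O + CO2 -> CO + O2": 1.5}

def conversion(k):
    """Toy surrogate for the plasma model: CO2 conversion after 1 s residence time."""
    loss_rate = 1e8 * k["e + CO2 -> CO + O + e"] + 1e11 * k["O + CO2 -> CO + O2"]
    return 1.0 - np.exp(-loss_rate * 1.0)

samples = []
for _ in range(n_runs):
    k = {name: k0 * np.exp(rng.normal(0.0, np.log(f)))     # log-normal sampling
         for (name, k0), f in zip(k_nom.items(), factor.values())}
    samples.append(conversion(k))

samples = np.array(samples)
print(f"CO2 conversion: {samples.mean():.2%} +/- {samples.std():.2%} (1 sigma)")
```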

  5. Statistical uncertainty analysis applied to the DRAGONv4 code lattice calculations and based on JENDL-4 covariance data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.

    2012-07-01

    In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code, in order to perform uncertainty analysis on k-infinity and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for such purposes, where cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest amount of isotopic covariance matrices among the different major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 × 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state of the art resonant self-shielding calculations such as DRAGONv4. (authors)
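
    The Latin Hypercube step itself is straightforward to illustrate: stratified samples on the unit cube are mapped through the normal quantile function onto the cross-section uncertainties, and each sample feeds one code run. In the sketch below a three-parameter toy function stands in for the lattice calculation; all nominal values and uncertainties are hypothetical.

```python
# Sketch: Latin Hypercube Sampling of a few "cross sections" followed by a toy
# stand-in for the lattice calculation. Nominal values, uncertainties and the
# toy k-infinity function are all hypothetical (this is not DRAGON).

import numpy as np
from scipy.stats import norm, qmc

n_runs = 500

# hypothetical nominal cross sections (barns) and relative standard deviations
nominal = np.array([1.20, 0.35, 0.02])
rel_sd = np.array([0.02, 0.05, 0.10])

# stratified samples on [0, 1)^3 mapped to normal distributions
sampler = qmc.LatinHypercube(d=3, seed=7)
u = sampler.random(n=n_runs)
xs = norm.ppf(u, loc=nominal, scale=rel_sd * nominal)

def k_inf(x):
    """Toy surrogate for the lattice code output."""
    return 1.30 * x[0] / (x[0] + 0.5 * x[1] + 5.0 * x[2])

k = np.array([k_inf(row) for row in xs])
print(f"k-infinity: {k.mean():.5f} +/- {k.std(ddof=1):.5f} (1 sigma, {n_runs} LHS runs)")
```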

  6. Flow Rates Measurement and Uncertainty Analysis in Multiple-Zone Water-Injection Wells from Fluid Temperature Profiles

    PubMed Central

    Reges, José E. O.; Salazar, A. O.; Maitelli, Carla W. S. P.; Carvalho, Lucas G.; Britto, Ursula J. B.

    2016-01-01

    This work is a contribution to the development of flow sensors in the oil and gas industry. It presents a methodology to measure the flow rates into multiple-zone water-injection wells from fluid temperature profiles and estimate the measurement uncertainty. First, a method to iteratively calculate the zonal flow rates using the Ramey (exponential) model was described. Next, this model was linearized to perform an uncertainty analysis. Then, a computer program to calculate the injected flow rates from experimental temperature profiles was developed. In the experimental part, a fluid temperature profile from a dual-zone water-injection well located in the Northeast Brazilian region was collected. Thus, calculated and measured flow rates were compared. The results proved that linearization error is negligible for practical purposes and the relative uncertainty increases as the flow rate decreases. The calculated values from both the Ramey and linear models were very close to the measured flow rates, presenting a difference of only 4.58 m³/d and 2.38 m³/d, respectively. Finally, the measurement uncertainties from the Ramey and linear models were equal to 1.22% and 1.40% (for injection zone 1); 10.47% and 9.88% (for injection zone 2). Therefore, the methodology was successfully validated and all objectives of this work were achieved. PMID:27420068

  7. Uncertainty Analysis in Humidity Measurements by the Psychrometer Method

    PubMed Central

    Chen, Jiunyuan; Chen, Chiachung

    2017-01-01

    The most common and least expensive indirect technique to measure relative humidity is to use a psychrometer based on a dry and a wet temperature sensor. In this study, the measurement uncertainty of relative humidity was evaluated by this indirect method with some empirical equations for calculating relative humidity. Among the six equations tested, the Penman equation had the best predictive ability for the dry bulb temperature range of 15–50 °C. At a fixed dry bulb temperature, an increase in the wet bulb depression increased the error. A new equation for the psychrometer constant was established by regression analysis. This equation can be computed by using a calculator. The average predictive error of relative humidity was <0.1% by this new equation. The effect of the accuracy of the dry and wet bulb temperatures on the measurement uncertainty of relative humidity was evaluated, and numeric values of the measurement uncertainty were computed for various conditions. The uncertainty of the wet bulb temperature was the main factor in the RH measurement uncertainty. PMID:28216599

  8. Uncertainty Analysis in Humidity Measurements by the Psychrometer Method.

    PubMed

    Chen, Jiunyuan; Chen, Chiachung

    2017-02-14

    The most common and least expensive indirect technique to measure relative humidity is to use a psychrometer based on a dry and a wet temperature sensor. In this study, the measurement uncertainty of relative humidity was evaluated by this indirect method with some empirical equations for calculating relative humidity. Among the six equations tested, the Penman equation had the best predictive ability for the dry bulb temperature range of 15-50 °C. At a fixed dry bulb temperature, an increase in the wet bulb depression increased the error. A new equation for the psychrometer constant was established by regression analysis. This equation can be computed by using a calculator. The average predictive error of relative humidity was <0.1% by this new equation. The effect of the accuracy of the dry and wet bulb temperatures on the measurement uncertainty of relative humidity was evaluated, and numeric values of the measurement uncertainty were computed for various conditions. The uncertainty of the wet bulb temperature was the main factor in the RH measurement uncertainty.
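
    The psychrometric calculation itself is compact: a saturation vapour pressure at the wet-bulb temperature, the psychrometer equation for the actual vapour pressure, and the ratio to the saturation pressure at the dry-bulb temperature. The sketch below uses a generic Magnus-type saturation formula and a textbook psychrometer constant, not the new regression equation of the paper; the temperatures and their uncertainty are illustrative.

```python
# Sketch of the psychrometer method: relative humidity from dry- and wet-bulb
# temperatures via e = es(Tw) - A * P * (Td - Tw), plus a simple sensitivity-based
# uncertainty estimate. The constant A and all inputs are illustrative values.

from math import exp, sqrt

def es(t_c):
    """Saturation vapour pressure (hPa), Magnus-type equation."""
    return 6.112 * exp(17.62 * t_c / (243.12 + t_c))

def rh(t_dry, t_wet, pressure_hpa=1013.25, a=6.6e-4):
    e = es(t_wet) - a * pressure_hpa * (t_dry - t_wet)
    return 100.0 * e / es(t_dry)

t_dry, t_wet = 30.0, 24.0          # deg C (illustrative)
u_t = 0.1                          # standard uncertainty of each temperature, K

# sensitivity coefficients by central finite differences
c_dry = (rh(t_dry + 0.01, t_wet) - rh(t_dry - 0.01, t_wet)) / 0.02
c_wet = (rh(t_dry, t_wet + 0.01) - rh(t_dry, t_wet - 0.01)) / 0.02
u_rh = sqrt((c_dry * u_t) ** 2 + (c_wet * u_t) ** 2)

print(f"RH = {rh(t_dry, t_wet):.1f} %  (u = {u_rh:.2f} % RH)")
```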

  9. Uncertainties of Mayak urine data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Guthrie; Vostrotin, Vadim; Vvdensky, Vladimir

    2008-01-01

    For internal dose calculations for the Mayak worker epidemiological study, quantitative estimates of the uncertainty of the urine measurements are necessary. Some of the data consist of measurements of 24 h urine excretion on successive days (e.g. 3 or 4 days). In a recent publication, dose calculations were done where the uncertainty of the urine measurements was estimated starting from the statistical standard deviation of these replicate measurements. This approach is straightforward and accurate when the number of replicate measurements is large; however, a Monte Carlo study showed it to be problematic for the actual number of replicate measurements (median from 3 to 4). Also, it is sometimes important to characterize the uncertainty of a single urine measurement. Therefore this alternate method has been developed. A method of parameterizing the uncertainty of Mayak urine bioassay measurements is described. The Poisson lognormal model is assumed and data from 63 cases (1099 urine measurements in all) are used to empirically determine the lognormal normalization uncertainty, given the measurement uncertainties obtained from count quantities. The natural logarithm of the geometric standard deviation of the normalization uncertainty is found to be in the range 0.31 to 0.35, including a measurement component estimated to be 0.2.

  10. From mess to mass: a methodology for calculating storm event pollutant loads with their uncertainties, from continuous raw data time series.

    PubMed

    Métadier, M; Bertrand-Krajewski, J-L

    2011-01-01

    With the increasing implementation of continuous monitoring of both discharge and water quality in sewer systems, large databases are now available. In order to manage large amounts of data and calculate various variables and indicators of interest, it is necessary to apply automated methods for data processing. This paper deals with the processing of short time step turbidity time series to estimate TSS (Total Suspended Solids) and COD (Chemical Oxygen Demand) event loads in sewer systems during storm events and their associated uncertainties. The following steps are described: (i) sensor calibration, (ii) estimation of data uncertainties, (iii) correction of raw data, (iv) data pre-validation tests, (v) final validation, and (vi) calculation of TSS and COD event loads and estimation of their uncertainties. These steps have been implemented in an integrated software tool. Examples of results are given for a set of 33 storm events monitored in a stormwater separate sewer system.

  11. Towards traceability in CO2 line strength measurements by TDLAS at 2.7 µm

    NASA Astrophysics Data System (ADS)

    Pogány, Andrea; Ott, Oliver; Werhahn, Olav; Ebert, Volker

    2013-11-01

    Direct tunable diode laser absorption spectroscopy (TDLAS) was combined in this study with metrological principles on the determination of uncertainties to measure the line strengths of the P36e and P34e lines of 12C16O2 in the ν1+ν3 band at 2.7 μm. Special emphasis was put on traceability and a concise, well-documented uncertainty assessment. We have quantitatively analyzed the uncertainty contributions of different experimental parameters to the uncertainty of the line strength. Establishment of the wavenumber axis and the gas handling procedure proved to be the two major contributors to the final uncertainty. The obtained line strengths at 296 K are 1.593 × 10⁻²⁰ cm/molecule for the P36e and 1.981 × 10⁻²⁰ cm/molecule for the P34e line, with relative expanded uncertainties of 1.1% and 1.3%, respectively (k = 2, corresponding to a 95% confidence level). The measured line strength values are in agreement with literature data (line strengths listed in the HITRAN and GEISA databases), but show an uncertainty which is at least a factor of 2 lower.

  12. Wildfire Decision Making Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Thompson, M.

    2013-12-01

    Decisions relating to wildfire management are subject to multiple sources of uncertainty, and are made by a broad range of individuals, across a multitude of environmental and socioeconomic contexts. In this presentation I will review progress towards identification and characterization of uncertainties and how this information can support wildfire decision-making. First, I will review a typology of uncertainties common to wildfire management, highlighting some of the more salient sources of uncertainty and how they present challenges to assessing wildfire risk. This discussion will cover the expanding role of burn probability modeling, approaches for characterizing fire effects, and the role of multi-criteria decision analysis, and will provide illustrative examples of integrated wildfire risk assessment across a variety of planning scales. Second, I will describe a related uncertainty typology that focuses on the human dimensions of wildfire management, specifically addressing how social, psychological, and institutional factors may impair cost-effective risk mitigation. This discussion will encompass decision processes before, during, and after fire events, with a specific focus on active management of complex wildfire incidents. An improved ability to characterize uncertainties faced in wildfire management could lead to improved delivery of decision support, targeted communication strategies, and ultimately to improved wildfire management outcomes.

  13. Uncertainty in predictions of oil spill trajectories in a coastal zone

    NASA Astrophysics Data System (ADS)

    Sebastião, P.; Guedes Soares, C.

    2006-12-01

    A method is introduced to determine the uncertainties in the predictions of oil spill trajectories using a classic oil spill model. The method considers the output of the oil spill model as a function of random variables, which are the input parameters, and calculates the standard deviation of the output results, which provides a measure of the uncertainty of the model as a result of the uncertainties of the input parameters. In addition to a single trajectory that is calculated by the oil spill model using the mean values of the parameters, a band of trajectories can be defined when various simulations are done taking into account the uncertainties of the input parameters. This band of trajectories defines envelopes of the trajectories that are likely to be followed by the spill given the uncertainties of the input. The method was applied to an oil spill that occurred in 1989 near Sines on the southwestern coast of Portugal. The model represented well the distinction between a wind-driven part that remained offshore and a tide-driven part that went ashore. For both parts, the method defined two trajectory envelopes, one calculated exclusively with the wind fields, and the other using wind and tidal currents. In both cases reasonable approximation to the observed results was obtained. The envelope of likely trajectories that is obtained with the uncertainty modelling proved to give a better interpretation of the trajectories that were simulated by the oil spill model.

  14. New Formulation for the Viscosity of n-Butane

    NASA Astrophysics Data System (ADS)

    Herrmann, Sebastian; Vogel, Eckhard

    2018-03-01

    A new viscosity formulation for n-butane, based on the residual quantity concept, uses the reference equation of state by Bücker and Wagner [J. Phys. Chem. Ref. Data 35, 929 (2006)] and is valid in the fluid region from the triple point to 650 K and to 100 MPa. The contributions for the zero-density viscosity and for the initial-density dependence were separately developed, whereas those for the critical enhancement and for the higher-density terms were pretreated. All contributions were given as a function of the reciprocal reduced temperature τ, while the last two contributions were correlated as a function of τ and of the reduced density δ. The different contributions were based on specific primary data sets, whose evaluation and choice were discussed in detail. The final formulation incorporates 13 coefficients derived employing a state-of-the-art linear optimization algorithm. The viscosity at low pressures p ≤ 0.2 MPa is described with an expanded uncertainty of 0.5% (coverage factor k = 2) for temperatures 293 ≤ T/K ≤ 626. The expanded uncertainty in the vapor phase at subcritical temperatures T ≥ 298 K as well as in the supercritical thermodynamic region T ≤ 448 K at pressures p ≤ 30 MPa is estimated to be 1.5%. It is raised to 4.0% in regions where only less reliable primary data sets are available and to 6.0% in ranges without any primary data, but in which the equation of state is valid. A weakness of the reference equation of state in the near-critical region prevents estimation of the expanded uncertainty in this region. Viscosity tables for the new formulation are presented in Appendix B for the single-phase region, for the vapor-liquid phase boundary, and for the near-critical region.

  15. New Formulation for the Viscosity of Propane

    NASA Astrophysics Data System (ADS)

    Vogel, Eckhard; Herrmann, Sebastian

    2016-12-01

    A new viscosity formulation for propane, using the reference equation of state for its thermodynamic properties by Lemmon et al. [J. Chem. Eng. Data 54, 3141 (2009)] and valid in the fluid region from the triple-point temperature to 650 K and pressures up to 100 MPa, is presented. At the beginning, a zero-density contribution and one for the critical enhancement, each based on the experimental data, were independently generated in parts. The higher-density contributions are correlated as a function of the reciprocal reduced temperature τ = Tc/T and of the reduced density δ = ρ/ρc (Tc—critical temperature, ρc—critical density). The final formulation includes 17 coefficients inferred by applying a state-of-the-art linear optimization algorithm. The evaluation and choice of the primary data sets are detailed due to its importance. The viscosity at low pressures p ≤ 0.2 MPa is represented with an expanded uncertainty of 0.5% (coverage factor k = 2) for temperatures 273 ≤ T/K ≤ 625. The expanded uncertainty in the vapor phase at subcritical temperatures T ≥ 273 K as well as in the supercritical thermodynamic region T ≤ 423 K at pressures p ≤ 30 MPa is assumed to be 1.5%. In the near-critical region (1.001 < 1/τ < 1.010 and 0.8 < δ < 1.2), the expanded uncertainty increases with decreasing temperature up to 3.0%. It is further increased to 4.0% in regions of less reliable primary data sets and to 6.0% in ranges in which no primary data are available but the equation of state is valid. Tables of viscosity computed for the new formulation are given in an Appendix for the single-phase region, for the vapor-liquid phase boundary, and for the near-critical region.

  16. Development of accurate dimethyl sulphide primary standard gas mixtures at low nanomole per mole levels in high-pressure aluminium cylinders for ambient measurements

    NASA Astrophysics Data System (ADS)

    Eon Kim, Mi; Kang, Ji Hwan; Doo Kim, Yong; Lee, Dong Soo; Lee, Sangil

    2018-04-01

    Dimethyl sulphide (DMS) plays an important role in atmospheric chemistry and climate change. Ambient DMS is monitored in a global network and reported at sub-nanomole per mole (nmol/mol) levels. Developing traceable, accurate DMS standards at ambient levels is essential for tracking the long-term trends and understanding the role of DMS in the atmosphere. Gravimetrically prepared gas standards in cylinders are widely used for calibrating instruments. Therefore, a stable primary standard gas mixture (PSM) is required for traceable ambient DMS measurement at remote sites. In this study, to evaluate adsorption loss on the internal surface of the gas cylinder, 6 nmol mol-1 DMS gas mixtures were prepared in three types of aluminium cylinders: a cylinder without a special coating on its internal surface (AL), an Aculife IV + III-treated cylinder (AC), and an Experis-treated cylinder (EX). There was little adsorption loss on the EX cylinder, whereas there was substantial adsorption loss on the other two cylinders. The EX cylinder was used to prepare 0.5, 2, 5, and 7 nmol mol-1 DMS PSMs with relative expanded uncertainties of less than 0.4%. The DMS PSMs were analytically verified and consistent within a relative expanded uncertainty of less than 1.2%. The long-term stability of the 7 nmol mol-1 DMS PSM was assessed by tracking the ratio of the DMS to the internal standard, benzene. The results showed that the DMS was stable for about seven months and it was projected to be stable for more than 60 months within a relative expanded uncertainty of 3%.
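
    The stability assessment described above tracks a response ratio over time; the sketch below shows one way such a drift check could be carried out, using simulated data and an assumed linear-drift model (none of the numbers are the study's own):

      # Fit a linear drift to DMS/benzene response ratios and project the relative
      # change over 60 months; all values are simulated placeholders.
      import numpy as np

      rng = np.random.default_rng(3)
      months = np.arange(0, 8)                               # roughly seven months of checks
      ratio = 1.000 + 0.0002 * months + rng.normal(0, 0.003, months.size)  # assumed drift and noise

      slope, intercept = np.polyfit(months, ratio, 1)        # drift per month, initial ratio
      projected_change = abs(slope) * 60 / intercept * 100   # % change projected over 60 months
      print(f"projected change over 60 months ≈ {projected_change:.1f} %  (limit: 3 %)")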

  17. Final report on supplementary comparison APMP.M.P-S7.TRI in hydraulic gauge pressure from 40 MPa to 200 MPa

    NASA Astrophysics Data System (ADS)

    Kobata, Tokihiko; Olson, Douglas A.; Eltawil, Alaaeldin A.

    2017-01-01

    This report describes the results of a supplementary comparison of hydraulic high-pressure standards at three national metrology institutes (NMIs): the National Metrology Institute of Japan, AIST (NMIJ/AIST), the National Institute of Standards and Technology (NIST), USA, and the National Institute for Standards (NIS), Egypt. The comparison was carried out at NIST from May 2001 to September 2001 within the framework of the Asia-Pacific Metrology Programme (APMP) in order to evaluate the institutes' degrees of equivalence at pressures in the range 40 MPa to 200 MPa in gauge mode. The pilot institute was NMIJ/AIST. Three working pressure standards from the institutes, in the form of piston-cylinder assemblies, were used for the comparison. The comparison and calculation methods used are discussed in this report. From the cross-float measurements, the differences between the working pressure standards of each institute were examined through an evaluation of the effective area of each piston-cylinder assembly with its uncertainty. The comparison showed that the values claimed by the participating institutes, NMIJ, NIST, and NIS, agree within the expanded (k = 2) uncertainties. The hydraulic pressure standards in the range 40 MPa to 200 MPa for gauge mode of the three participating NMIs were therefore found to be equivalent within their claimed uncertainties. The final report appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org) and has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  18. Infrared spectral normal emittance/emissivity comparison

    NASA Astrophysics Data System (ADS)

    Hanssen, L.; Wilthan, B.; Filtz, J.-R.; Hameury, J.; Girard, F.; Battuello, M.; Ishii, J.; Hollandt, J.; Monte, C.

    2016-01-01

    The National Measurement Institutes (NMIs) of the United States, Germany, France, Italy and Japan have joined in an inter-laboratory comparison of their infrared spectral emittance scales. This action is part of a series of supplementary inter-laboratory comparisons (including thermal conductivity and thermal diffusivity) sponsored by the Consultative Committee on Thermometry (CCT) Task Group on Thermophysical Quantities (TG-ThQ). The objective of this collaborative work is to strengthen the infrared spectral emittance scales of the major operating NMIs and, consequently, the consistency of radiative-property measurements carried out worldwide. The comparison was performed over a spectral range of 2 μm to 14 μm and a temperature range from 23 °C to 800 °C. The artefacts included in the comparison are potential standards: oxidized Inconel, boron nitride, and silicon carbide. The measurement instrumentation and techniques used for the emittance scales are unique to each NMI, as are the temperature ranges covered and the artefact sizes required. For example, all three common types of spectral instruments are represented: dispersive grating monochromator, Fourier transform, and filter-based spectrometers. More than 2000 data points (combinations of material, wavelength and temperature) were compared. Ninety-eight percent (98%) of the data points were in agreement, with differences from the weighted mean values smaller than the expanded uncertainties calculated from the individual NMI uncertainties and the uncertainties related to the comparison process. The final report appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org) and has been peer-reviewed and approved for publication by the CCT, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
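
    The agreement criterion above compares each NMI's difference from a weighted mean against expanded uncertainties. A minimal sketch of such a check follows, with invented emittance values and uncertainties and a simplified, conservative combination that neglects the correlation between each value and the weighted mean:

      # Check agreement of individual results with the uncertainty-weighted mean.
      # The values and uncertainties below are illustrative, not comparison data.
      import numpy as np

      values = np.array([0.871, 0.876, 0.869, 0.874])   # reported emittances (assumed)
      u_std  = np.array([0.004, 0.005, 0.003, 0.006])   # standard uncertainties (assumed)

      weights = 1.0 / u_std**2
      x_w = np.sum(weights * values) / np.sum(weights)  # weighted mean
      u_w = np.sqrt(1.0 / np.sum(weights))              # its standard uncertainty

      k = 2.0                                           # coverage factor
      for x, u in zip(values, u_std):
          U_diff = k * np.sqrt(u**2 + u_w**2)           # expanded uncertainty of the difference (simplified)
          print(f"x = {x:.3f}, d = {x - x_w:+.4f}, U = {U_diff:.4f}, agree: {abs(x - x_w) <= U_diff}")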

  19. Random uncertainty of photometric determination of hemolysis index on the Abbott Architect c16000 platform.

    PubMed

    Aloisio, Elena; Carnevale, Assunta; Pasqualetti, Sara; Birindelli, Sarah; Dolci, Alberto; Panteghini, Mauro

    2018-01-16

    Automatic photometric determination of the hemolysis index (HI) on serum and plasma samples is central to detecting potential interferences of in vitro hemolysis on laboratory tests. When HI is above an established cut-off for interference, results may suffer from a significant bias and undermine clinical reliability of the test. Despite its undeniable importance for patient safety, the analytical performance of HI estimation is not usually checked in laboratories. Here we evaluated for the first time the random source of measurement uncertainty of HI determination on the two Abbott Architect c16000 platforms in use in our laboratory. From January 2016 to September 2017, we collected data from daily photometric determination of HI on a fresh-frozen serum pool with a predetermined HI value of ~100 (corresponding to ~1 g/L of free hemoglobin). Monthly and cumulative CVs were calculated. During 21 months, 442 and 451 measurements were performed on the two platforms, respectively. Monthly CVs ranged from 0.7% to 2.7% on c16000-1 and from 0.8% to 2.5% on c16000-2, with a between-platform cumulative CV of 1.82% (corresponding to an expanded uncertainty of 3.64%). Mean HI values on the two platforms differed only slightly (101.3 vs. 103.1, a 1.76% bias), but, owing to the high precision of the measurements, this difference reached statistical significance (p < 0.0001). Even though no quality specifications are available to date, our study shows that HI measurement on the Architect c16000 platform has good reproducibility, which could be considered in establishing the state of the art of the measurement. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
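
    A minimal sketch of deriving a cumulative CV and a k = 2 expanded relative uncertainty from daily control measurements, in the spirit of the study above; the simulated values only echo the reported orders of magnitude:

      # Cumulative CV and expanded relative uncertainty from daily HI control data.
      # The data are simulated, not the study's measurements.
      import numpy as np

      rng = np.random.default_rng(0)
      daily_hi = rng.normal(loc=101.3, scale=1.8, size=442)    # assumed mean and SD

      cv = daily_hi.std(ddof=1) / daily_hi.mean() * 100        # cumulative CV in %
      U_rel = 2 * cv                                           # expanded relative uncertainty, k = 2
      print(f"CV = {cv:.2f} %, expanded uncertainty (k = 2) = {U_rel:.2f} %")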

  20. A reassessment of absolute energies of the x-ray L lines of lanthanide metals

    NASA Astrophysics Data System (ADS)

    Fowler, J. W.; Alpert, B. K.; Bennett, D. A.; Doriese, W. B.; Gard, J. D.; Hilton, G. C.; Hudson, L. T.; Joe, Y.-I.; Morgan, K. M.; O'Neil, G. C.; Reintsema, C. D.; Schmidt, D. R.; Swetz, D. S.; Szabo, C. I.; Ullom, J. N.

    2017-08-01

    We introduce a new technique for determining x-ray fluorescence line energies and widths, and we present measurements made with this technique of 22 x-ray L lines from lanthanide-series elements. The technique uses arrays of transition-edge sensors, microcalorimeters with high energy-resolving power that simultaneously observe both calibrated x-ray standards and the x-ray emission lines under study. The uncertainty in absolute line energies is generally less than 0.4 eV in the energy range of 4.5 keV to 7.5 keV. Of the seventeen line energies of neodymium, samarium, and holmium, thirteen are found to be consistent with the available x-ray reference data measured after 1990; only two of the four lines for which reference data predate 1980, however, are consistent with our results. Five lines of terbium are measured with uncertainties that improve on those of existing data by factors of two or more. These results eliminate a significant discrepancy between measured and calculated x-ray line energies for the terbium L l line (5.551 keV). The line widths are also measured, with uncertainties of 0.6 eV or less on the full-width at half-maximum in most cases. These measurements were made with an array of approximately one hundred superconducting x-ray microcalorimeters, each sensitive to an energy band from 1 keV to 8 keV. No energy-dispersive spectrometer has previously been used for absolute-energy estimation at this level of accuracy. Future spectrometers, with superior linearity and energy resolution, will allow us to improve on these results and expand the measurements to more elements and a wider range of line energies.

  1. Contribution to the certification of B, Cd, Cu, Mg and Pb in a synthetic water sample, by use of isotope-dilution ICP-MS, for Comparison 12 of the International Measurement Evaluation Programme.

    PubMed

    Diemer, J; Quétel, C R; Taylor, P D P

    2002-09-01

    The contribution of the Institute for Reference Materials and Measurements to the certification of the B, Cd, Cu, Mg, and Pb content of a synthetic water sample used in Comparison 12 of the International Measurement Evaluation Programme (IMEP-12) is described. The aim of the IMEP programme is to demonstrate objectively the degree of equivalence and quality of chemical measurements of individual laboratories on the international scene by comparing them with reference ranges traceable to the SI (Système International d'Unités). IMEP is organized in support of European Union policies and helps to improve the traceability of values produced by field chemical measurement laboratories. The analytical procedure used to establish the reference values for the B, Cd, Cu, Mg, and Pb content of the IMEP-12 sample is based on inductively coupled plasma-isotope-dilution mass spectrometry (ICP-IDMS) applied as a primary method of measurement. The measurements performed for the IMEP-12 study are described in detail. Focus is on the element boron, which is particularly difficult to analyze by ICP-MS because of potential problems of low sensitivity, high mass discrimination, memory effects, and abundance sensitivity. For each of the certified amount contents presented here a total uncertainty budget was calculated using the method of propagation of uncertainties according to ISO (International Organization for Standardization) and Eurachem guidelines. For all investigated elements with concentrations in the low µg kg⁻¹ and mg kg⁻¹ range (corresponding to pmol kg⁻¹ to the high µmol kg⁻¹ level), SI-traceable reference values with relative expanded uncertainties (k = 2) of less than 2% were obtained.

  2. Calculating surface ocean pCO2 from biogeochemical Argo floats equipped with pH: An uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Williams, N. L.; Juranek, L. W.; Feely, R. A.; Johnson, K. S.; Sarmiento, J. L.; Talley, L. D.; Dickson, A. G.; Gray, A. R.; Wanninkhof, R.; Russell, J. L.; Riser, S. C.; Takeshita, Y.

    2017-03-01

    More than 74 biogeochemical profiling floats that measure water column pH, oxygen, nitrate, fluorescence, and backscattering at 10 day intervals have been deployed throughout the Southern Ocean. Calculating the surface ocean partial pressure of carbon dioxide (pCO2sw) from float pH has uncertainty contributions from the pH sensor, the alkalinity estimate, and carbonate system equilibrium constants, resulting in a relative standard uncertainty in pCO2sw of 2.7% (or 11 µatm at pCO2sw of 400 µatm). The pCO2sw values calculated from several floats spanning a range of oceanographic regimes are compared to existing climatologies. In some locations, such as the subantarctic zone, the float data closely match the climatologies, but in the polar Antarctic zone significantly higher pCO2sw values are calculated in wintertime, implying a greater air-sea CO2 efflux estimate. Our results based on four representative floats suggest that, despite their uncertainty relative to direct measurements, the float data can be used to improve estimates for air-sea carbon flux, as well as to increase knowledge of spatial, seasonal, and interannual variability in this flux.
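
    The combination of uncertainty contributions mentioned above can be sketched as a root sum of squares; the individual percentages below are assumptions, chosen only so that the combined value is of the order reported:

      # Root-sum-of-squares combination of relative uncertainty contributions to pCO2sw.
      # The individual contributions are assumed values for illustration.
      import math

      contrib = {"pH sensor": 2.3, "alkalinity estimate": 1.0, "equilibrium constants": 1.0}  # % (assumed)
      u_rel = math.sqrt(sum(v**2 for v in contrib.values()))
      pco2 = 400.0                                     # µatm
      print(f"relative standard uncertainty ≈ {u_rel:.1f} % (≈ {u_rel / 100 * pco2:.0f} µatm at {pco2:.0f} µatm)")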

  3. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  4. Probabilistic numerics and uncertainty in computations.

    PubMed

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
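
    As a toy illustration of a numerical routine that reports an uncertainty alongside its result, in the spirit of the call above (the integrand and sample size are arbitrary choices, not anything from the paper):

      # Monte Carlo integration on [0, 1] that returns its own numerical uncertainty
      # as a standard error; a deliberately simple stand-in for the richer
      # probabilistic numerical methods discussed in the article.
      import numpy as np

      def integrate_with_uncertainty(f, n=100_000, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(0.0, 1.0, n)
          y = f(x)
          return y.mean(), y.std(ddof=1) / np.sqrt(n)   # estimate and its standard error

      est, err = integrate_with_uncertainty(lambda x: np.exp(-x**2))
      print(f"integral ≈ {est:.5f} ± {err:.5f}")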

  5. Accounting for uncertainty in DNA sequencing data.

    PubMed

    O'Rawe, Jason A; Ferson, Scott; Lyon, Gholson J

    2015-02-01

    Science is defined in part by an honest exposition of the uncertainties that arise in measurements and propagate through calculations and inferences, so that the reliabilities of its conclusions are made apparent. The recent rapid development of high-throughput DNA sequencing technologies has dramatically increased the number of measurements made at the biochemical and molecular level. These data come from many different DNA-sequencing technologies, each with its own platform-specific errors and biases, which vary widely. Several statistical studies have tried to measure error rates for basic determinations, but there are no general schemes to project these uncertainties so as to assess the surety of the conclusions drawn about genetic, epigenetic, and more general biological questions. We review here the state of uncertainty quantification in DNA sequencing applications, describe sources of error, and propose methods that can be used for accounting for and propagating these errors and their uncertainties through subsequent calculations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Interval-parameter semi-infinite fuzzy-stochastic mixed-integer programming approach for environmental management under multiple uncertainties.

    PubMed

    Guo, P; Huang, G H

    2010-03-01

    In this study, an interval-parameter semi-infinite fuzzy-chance-constrained mixed-integer linear programming (ISIFCIP) approach is developed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing interval-parameter semi-infinite programming (ISIP) and fuzzy-chance-constrained programming (FCCP) by incorporating uncertainties expressed as dual uncertainties of functional intervals and multiple uncertainties of distributions, with a fuzzy-interval admissible probability of violating constraints, within a general optimization framework. The binary-variable solutions represent the decisions of waste-management-facility expansion, and the continuous ones are related to decisions of waste-flow allocation. The interval solutions can help decision-makers to obtain multiple decision alternatives, as well as provide bases for further analyses of tradeoffs between waste-management cost and system-failure risk. In the application to the City of Regina, Canada, two scenarios are considered. In Scenario 1, the City's waste-management practices would be based on the existing policy over the next 25 years. The total diversion rate for the residential waste would be approximately 14%. Scenario 2 is associated with a policy for waste minimization and diversion, where 35% diversion of residential waste should be achieved within 15 years, and 50% diversion over 25 years. In this scenario, not only the landfill but also the CF and MRF would be expanded. Through the scenario analyses, useful decision support for the City's solid-waste managers and decision-makers has been generated. Three special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. First, it is useful for tackling multiple uncertainties expressed as intervals, functional intervals, probability distributions, fuzzy sets, and their combinations; second, it can address the temporal variations of the functional intervals; third, it facilitates dynamic analysis for decisions of facility-expansion planning and waste-flow allocation within a multi-facility, multi-period and multi-option context. Copyright 2009 Elsevier Ltd. All rights reserved.

  7. Monte Carlo Calculation of Thermal Neutron Inelastic Scattering Cross Section Uncertainties by Sampling Perturbed Phonon Spectra

    NASA Astrophysics Data System (ADS)

    Holmes, Jesse Curtis

    Nuclear data libraries provide fundamental reaction information required by nuclear system simulation codes. The inclusion of data covariances in these libraries allows the user to assess uncertainties in system response parameters as a function of uncertainties in the nuclear data. Formats and procedures are currently established for representing covariances for various types of reaction data in ENDF libraries. This covariance data is typically generated utilizing experimental measurements and empirical models, consistent with the method of parent data production. However, ENDF File 7 thermal neutron scattering library data is, by convention, produced theoretically through fundamental scattering physics model calculations. Currently, there is no published covariance data for ENDF File 7 thermal libraries. Furthermore, no accepted methodology exists for quantifying or representing uncertainty information associated with this thermal library data. The quality of thermal neutron inelastic scattering cross section data can be of high importance in reactor analysis and criticality safety applications. These cross sections depend on the material's structure and dynamics. The double-differential scattering law, S(alpha, beta), tabulated in ENDF File 7 libraries contains this information. For crystalline solids, S(alpha, beta) is primarily a function of the material's phonon density of states (DOS). Published ENDF File 7 libraries are commonly produced by calculation and processing codes, such as the LEAPR module of NJOY, which utilize the phonon DOS as the fundamental input for inelastic scattering calculations to directly output an S(alpha, beta) matrix. To determine covariances for the S(alpha, beta) data generated by this process, information about uncertainties in the DOS is required. The phonon DOS may be viewed as a probability density function of atomic vibrational energy states that exist in a material. Probable variation in the shape of this spectrum may be established that depends on uncertainties in the physics models and methodology employed to produce the DOS. Through Monte Carlo sampling of perturbations from the reference phonon spectrum, an S(alpha, beta) covariance matrix may be generated. In this work, density functional theory and lattice dynamics in the harmonic approximation are used to calculate the phonon DOS for hexagonal crystalline graphite. This form of graphite is used as an example material for the purpose of demonstrating procedures for analyzing, calculating and processing thermal neutron inelastic scattering uncertainty information. Several sources of uncertainty in thermal neutron inelastic scattering calculations are examined, including sources which cannot be directly characterized through a description of the phonon DOS uncertainty, and their impacts are evaluated. Covariances for hexagonal crystalline graphite S(alpha, beta) data are quantified by coupling the standard methodology of LEAPR with a Monte Carlo sampling process. The mechanics of efficiently representing and processing this covariance information is also examined. Finally, with appropriate sensitivity information, it is shown that an S(alpha, beta) covariance matrix can be propagated to generate covariance data for integrated cross sections, secondary energy distributions, and coupled energy-angle distributions. This approach enables a complete description of thermal neutron inelastic scattering cross section uncertainties which may be employed to improve the simulation of nuclear systems.
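
    The Monte Carlo step described above can be sketched generically: perturb a reference spectrum within an assumed uncertainty band, push each sample through a model, and estimate the covariance of the outputs. The "model" below is only a stand-in (a few spectral moments), not the LEAPR S(alpha, beta) calculation, and the DOS shape and 5% perturbation are assumptions:

      # Monte Carlo covariance estimation from perturbed spectra (illustrative only).
      import numpy as np

      rng = np.random.default_rng(42)
      energy = np.linspace(0.001, 0.2, 50)           # phonon energy grid in eV (assumed)
      de = energy[1] - energy[0]
      dos_ref = np.exp(-(energy - 0.08)**2 / (2 * 0.03**2))
      dos_ref /= dos_ref.sum() * de                  # normalise the reference DOS to unit area

      def model(dos):
          # Stand-in for the scattering-law calculation: low-order spectral moments.
          return np.array([(dos * energy**n).sum() * de for n in (1, 2, 3)])

      n_samples = 2000
      outputs = np.empty((n_samples, 3))
      for i in range(n_samples):
          sample = dos_ref * rng.normal(1.0, 0.05, size=dos_ref.size)  # 5% point-wise perturbation (assumed)
          sample = np.clip(sample, 0.0, None)
          sample /= sample.sum() * de                # re-normalise each perturbed spectrum
          outputs[i] = model(sample)

      cov = np.cov(outputs, rowvar=False)            # covariance matrix of the derived quantities
      print(cov)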

  8. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle cost...

  9. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle cost...

  10. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle cost...

  11. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle cost...

  12. MOUSE (MODULAR ORIENTED UNCERTAINTY SYSTEM): A COMPUTERIZED UNCERTAINTY ANALYSIS SYSTEM (FOR MICRO- COMPUTERS)

    EPA Science Inventory

    Environmental engineering calculations involving uncertainties; either in the model itself or in the data, are far beyond the capabilities of conventional analysis for any but the simplest of models. There exist a number of general-purpose computer simulation languages, using Mon...

  13. Estimation Of TMDLs And Margin Of Safety Under Conditions Of Uncertainty

    EPA Science Inventory

    In TMDL development, an adequate margin of safety (MOS) is required in the calculation process to provide a cushion needed because of uncertainties in the data and analysis. Current practices, however, rarely factor analysis' uncertainty in TMDL development and the MOS is largel...

  14. Applications of explicitly-incorporated/post-processing measurement uncertainty in watershed modeling

    USDA-ARS?s Scientific Manuscript database

    The importance of measurement uncertainty in terms of calculation of model evaluation error statistics has been recently stated in the literature. The impact of measurement uncertainty on calibration results indicates the potential vague zone in the field of watershed modeling where the assumption ...

  15. Propagating Mixed Uncertainties in Cyber Attacker Payoffs: Exploration of Two-Phase Monte Carlo Sampling and Probability Bounds Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.

    Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches, that model actions of strategic decision-makers, are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber-attacker’s payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework, for a notional cyber system, through: 1) representation of uncertain attacker and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
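
    A minimal sketch of two-phase (nested) Monte Carlo sampling for mixed uncertainty follows: an epistemic parameter known only as an interval is sampled in an outer loop, and an aleatory parameter with a known distribution in an inner loop. The payoff model and all numbers are illustrative assumptions, not the authors' cyber-system model:

      # Two-phase Monte Carlo: outer loop for epistemic (interval) uncertainty,
      # inner loop for aleatory (distributional) uncertainty.
      import numpy as np

      rng = np.random.default_rng(1)

      def payoff(a, b):
          return a * b + 10.0                        # hypothetical attacker-payoff model

      n_outer, n_inner = 200, 1000
      quantiles = []
      for _ in range(n_outer):
          a = rng.uniform(2.0, 4.0)                  # epistemic parameter in an assumed interval [2, 4]
          b = rng.normal(5.0, 1.0, size=n_inner)     # aleatory parameter, assumed N(5, 1)
          quantiles.append(np.quantile(payoff(a, b), 0.95))

      # The spread of the inner-loop quantiles across the outer loop expresses the
      # epistemic contribution; its envelope plays the role of probability bounds.
      print(f"95th-percentile payoff ranges from {min(quantiles):.1f} to {max(quantiles):.1f}")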

  16. Optimization and resilience in natural resources management

    USGS Publications Warehouse

    Williams, Byron K.; Johnson, Fred A.

    2015-01-01

    We consider the putative tradeoff between optimization and resilience in the management of natural resources, using a framework that incorporates different sources of uncertainty that are common in natural resources management. We address one-time decisions, and then expand the decision context to the more complex problem of iterative decision making. For both cases we focus on two key sources of uncertainty: partial observability of system state and uncertainty as to system dynamics. Optimal management strategies will vary considerably depending on the timeframe being considered and the amount and quality of information that is available to characterize system features and project the consequences of potential decisions. But in all cases an optimal decision making framework, if properly identified and focused, can be useful in recognizing sound decisions. We argue that under the conditions of deep uncertainty that characterize many resource systems, an optimal decision process that focuses on robustness does not automatically induce a loss of resilience.

  17. Operational hydrological forecasting in Bavaria. Part II: Ensemble forecasting

    NASA Astrophysics Data System (ADS)

    Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.

    2009-04-01

    In part I of this study, the operational flood forecasting system in Bavaria and an approach to identify and quantify forecast uncertainty were introduced. The approach is split into the calculation of an empirical 'overall error' from archived forecasts and the calculation of an empirical 'model error' based on hydrometeorological forecast tests, where rainfall observations were used instead of forecasts. The 'model error' can, especially in upstream catchments where forecast uncertainty is strongly dependent on the current predictability of the atmosphere, be superimposed on the spread of a hydrometeorological ensemble forecast. In Bavaria, two meteorological ensemble prediction systems are currently tested for operational use: the 16-member COSMO-LEPS forecast and a poor man's ensemble composed of DWD GME, DWD Cosmo-EU, NCEP GFS, Aladin-Austria, and MeteoSwiss Cosmo-7. The determination of the overall forecast uncertainty depends on the catchment characteristics: 1. Upstream catchments with high influence of the weather forecast: a) A hydrological ensemble forecast is calculated using each of the meteorological forecast members as forcing. b) Corresponding to the characteristics of the meteorological ensemble forecast, each resulting forecast hydrograph can be regarded as equally likely. c) The 'model error' distribution, with parameters dependent on hydrological case and lead time, is added to each forecast timestep of each ensemble member. d) For each forecast timestep, the overall error distribution (i.e. over the 'model error' distributions of all ensemble members) is calculated. e) From this distribution, the uncertainty range at a desired level (here: the 10% and 90% percentiles) is extracted and drawn as the forecast envelope. f) As the mean or median of an ensemble forecast does not necessarily exhibit a meteorologically sound temporal evolution, a single hydrological forecast termed the 'lead forecast' is chosen and shown in addition to the uncertainty bounds. This can be either an intermediate forecast between the extremes of the ensemble spread or a manually selected forecast based on a meteorologist's advice. 2. Downstream catchments with low influence of the weather forecast: In downstream catchments with strong human impact on discharge (e.g. by reservoir operation) and large influence of upstream gauge observation quality on forecast quality, the 'overall error' may in most cases be larger than the combination of the 'model error' and an ensemble spread. Therefore, the overall forecast uncertainty bounds are calculated differently: a) A hydrological ensemble forecast is calculated using each of the meteorological forecast members as forcing. Here, additionally, the corresponding inflow hydrographs from all upstream catchments must be used. b) As for an upstream catchment, the uncertainty range is determined by combination of the 'model error' and the ensemble member forecasts. c) In addition, the 'overall error' is superimposed on the 'lead forecast'. For reasons of consistency, the lead forecast must be based on the same meteorological forecast in the downstream and all upstream catchments. d) From the resulting two uncertainty ranges (one from the ensemble forecast and 'model error', one from the 'lead forecast' and 'overall error'), the envelope is taken as the most prudent uncertainty range. In sum, the uncertainty associated with each forecast run is calculated and communicated to the public in the form of the 10% and 90% percentiles.
As in part I of this study, the methodology as well as the usefulness (or otherwise) of the resulting uncertainty ranges will be presented and discussed using typical examples.

  18. Vector-Boson Fusion Higgs Production at Three Loops in QCD.

    PubMed

    Dreyer, Frédéric A; Karlberg, Alexander

    2016-08-12

    We calculate the next-to-next-to-next-to-leading-order (N^{3}LO) QCD corrections to inclusive vector-boson fusion Higgs production at proton colliders, in the limit in which there is no color exchange between the hadronic systems associated with the two colliding protons. We also provide differential cross sections for the Higgs transverse momentum and rapidity distributions. We find that the corrections are at the 1‰-2‰ level, well within the scale uncertainty of the next-to-next-to-leading-order calculation. The associated scale uncertainty of the N^{3}LO calculation is typically found to be below the 2‰ level. We also consider theoretical uncertainties due to missing higher order parton distribution functions, and provide an estimate of their importance.

  19. Calculations of Nuclear Astrophysics and Californium Fission Neutron Spectrum Averaged Cross Section Uncertainties Using ENDF/B-VII.1, JEFF-3.1.2, JENDL-4.0 and Low-fidelity Covariances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritychenko, B., E-mail: pritychenko@bnl.gov

    Nuclear astrophysics and californium fission neutron spectrum averaged cross sections and their uncertainties for ENDF materials have been calculated. Absolute values were deduced with Maxwellian and Mannhart spectra, while uncertainties are based on ENDF/B-VII.1, JEFF-3.1.2, JENDL-4.0 and Low-Fidelity covariances. These quantities are compared with available data, independent benchmarks, EXFOR library, and analyzed for a wide range of cases. Recommendations for neutron cross section covariances are given and implications are discussed.

  20. Certification by the Karl Fischer method of the water content in SRM 2890, Water Saturated 1-Octanol, and the analysis of associated interlaboratory bias in the measurement process.

    PubMed

    Margolis, S A; Levenson, M

    2000-05-01

    The calibration of Karl Fischer instruments and reagents and the compensation for instrumental bias are essential to the accurate measurement of trace levels of water in organic and inorganic chemicals. A stable, nonhygroscopic standard, Water Saturated 1-Octanol, which is compatible with the Karl Fischer reagents, has been prepared. This material, Standard Reference Material (SRM) 2890, is homogeneous and is certified to contain 39.24 +/- 0.85 mg water/mL (expanded uncertainty) of solution (47.3 +/- 1.0 mg water/g solution, expanded uncertainty) at 21.5 degrees C. The solubility of water in 1-octanol has been shown to be nearly constant between 10 degrees C and 30 degrees C (i.e., within 1% of the value at 21.5 degrees C). The results of an interlaboratory comparison exercise illustrate the utility of SRM 2890 in assessing the accuracy and bias of Karl Fischer instruments and measurements.

  1. Verification of micro-scale photogrammetry for smooth three-dimensional object measurement

    NASA Astrophysics Data System (ADS)

    Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard

    2017-05-01

    By using sub-millimetre laser speckle pattern projection we show that photogrammetry systems are able to measure smooth three-dimensional objects with surface height deviations less than 1 μm. The projection of laser speckle patterns allows correspondences on the surface of smooth spheres to be found, and as a result, verification artefacts with low surface height deviations were measured. A combination of VDI/VDE and ISO standards was also utilised to provide a complete verification method and determine the quality parameters for the system under test. Applying the proposed method to a photogrammetry system, a 5 mm radius sphere was measured with an expanded uncertainty of 8.5 μm for sizing errors and 16.6 μm for form errors, with a 95% confidence interval. Sphere spacing lengths between 6 mm and 10 mm were also measured by the photogrammetry system, and were found to have expanded uncertainties of around 20 μm with a 95% confidence interval.

  2. Calculating Remote Sensing Reflectance Uncertainties Using an Instrument Model Propagated Through Atmospheric Correction via Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Karakoylu, E.; Franz, B.

    2016-01-01

    This is a first attempt at quantifying uncertainties in satellite measurements of ocean remote sensing reflectance. The analysis is based on 1000 Monte Carlo iterations, using a SeaWiFS 4-day composite from 2003 as the data source. The uncertainty is reported for remote sensing reflectance (Rrs) at 443 nm.

  3. Quantifying Uncertainties in Mass-Dimensional Relationships Through a Comparison Between CloudSat and SPartICus Reflectivity Factors

    NASA Astrophysics Data System (ADS)

    Mascio, J.; Mace, G. G.

    2015-12-01

    CloudSat and CALIPSO, two of the satellites in the A-Train constellation, use algorithms to calculate the scattering properties of small cloud particles, such as the T-matrix method. Ice clouds (i.e. cirrus) cause problems with these cloud property retrieval algorithms because of their variability in ice mass as a function of particle size. Assumptions regarding the microphysical properties, such as mass-dimensional (m-D) relationships, are often necessary in retrieval algorithms for simplification, but these assumptions create uncertainties of their own. Therefore, ice cloud property retrieval uncertainties can be substantial and are often not well known. To investigate these uncertainties, reflectivity factors measured by CloudSat are compared to those calculated from particle size distributions (PSDs) to which different m-D relationships are applied. These PSDs are from data collected in situ during three flights of the Small Particles in Cirrus (SPartICus) campaign. We find that no specific habit emerges as preferred and instead we conclude that the microphysical characteristics of ice crystal populations tend to be distributed over a continuum and, therefore, cannot be categorized easily. To quantify the uncertainties in the mass-dimensional relationships, an optimal estimation inversion was run to retrieve the m-D relationship per SPartICus flight, as well as to calculate uncertainties of the m-D power law.

  4. Uncertainties Associated with Theoretically Calculated N2-Broadened Half-Widths of H2O Lines

    NASA Technical Reports Server (NTRS)

    Ma, Q.; Tipping, R. H.; Gamache, R. R.

    2010-01-01

    With different choices of the cut-offs used in theoretical calculations, we have carried out extensive numerical calculations of the N2-broadened Lorentzian half-widths of the H2O lines using the modified Robert-Bonamy formalism. Based on these results, we are able to thoroughly check for convergence. We find that, with the low-order cut-offs commonly used in the literature, one is able to obtain converged values only for lines with large half-widths. Conversely, for lines with small half-widths, much higher cut-offs are necessary to guarantee convergence. We also analyse the uncertainties associated with calculated half-widths, and these are correlated as above. In general, the smaller the half-widths, the poorer the convergence and the larger the uncertainty associated with them. For convenience, one can divide all H2O lines into three categories, large, intermediate, and small, according to their half-width values. One can use this division to judge whether the calculated half-widths are converged or not, based on the cut-offs used, and also to estimate how large their uncertainties are. We conclude that with the current Robert-Bonamy formalism, for lines in category 1 one can achieve the accuracy requirement set by HITRAN, whereas for lines in category 3, it is impossible to meet this goal.

  5. Uncertainties in Climatological Seawater Density Calculations

    NASA Astrophysics Data System (ADS)

    Dai, Hao; Zhang, Xining

    2018-03-01

    In most applications, with seawater conductivity, temperature, and pressure data measured in situ by various observation instruments, e.g., Conductivity-Temperature-Depth instruments (CTD), the density, which has strong ties to ocean dynamics, is computed according to equations of state for seawater. This paper, based on the density computational formulae in the Thermodynamic Equation of Seawater 2010 (TEOS-10), follows the Guide to the Expression of Uncertainty in Measurement (GUM) and assesses the main sources of uncertainty. Using the climatological decades-average temperature/Practical Salinity/pressure data sets for the global ocean provided by the National Oceanic and Atmospheric Administration (NOAA), correlation coefficients between uncertainty sources are determined and the combined standard uncertainties u_c(ρ) in seawater density calculations are evaluated. For grid points in the world ocean with 0.25° resolution, the standard deviations of u_c(ρ) in vertical profiles are of the order of 10⁻⁴ kg m⁻³. The u_c(ρ) means in vertical profiles of the Baltic Sea are about 0.028 kg m⁻³ due to the larger scatter of the Absolute Salinity anomaly. The distribution of the u_c(ρ) means in vertical profiles of the world ocean except for the Baltic Sea, which covers the range (0.004, 0.01) kg m⁻³, is related to the correlation coefficient r(S_A, p) between Absolute Salinity S_A and pressure p. The results in the paper are based on the measuring uncertainties of high-accuracy CTD sensors. Larger uncertainties in density calculations may arise if lower sensor specifications are involved. This work may provide valuable uncertainty information required for reliability considerations of ocean circulation and global climate models.
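
    A minimal GUM-style combination for a quantity depending on two correlated inputs is sketched below; the sensitivity coefficients, input uncertainties and correlation coefficient are placeholder values, not those of the TEOS-10 density analysis:

      # Combined standard uncertainty for y = f(x1, x2) with correlated inputs:
      # u_c^2 = (c1*u1)^2 + (c2*u2)^2 + 2*c1*c2*u1*u2*r12
      import math

      c1, c2 = 0.75, -0.20      # sensitivity coefficients df/dx1, df/dx2 (assumed)
      u1, u2 = 0.004, 0.010     # standard uncertainties of the inputs (assumed)
      r12 = 0.3                 # correlation coefficient between x1 and x2 (assumed)

      u_c = math.sqrt((c1 * u1)**2 + (c2 * u2)**2 + 2 * c1 * c2 * u1 * u2 * r12)
      print(f"combined standard uncertainty u_c = {u_c:.4f}")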

  6. Dependence of calculated postshock thermodynamic variables on vibrational equilibrium and input uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Matthew Frederick; Owen, Kyle G.; Davidson, David F.

    The purpose of this article is to explore the dependence of calculated postshock thermodynamic properties in shock tube experiments upon the vibrational state of the test gas and upon the uncertainties inherent to calculation inputs. This paper first offers a comparison between state variables calculated according to a Rankine–Hugoniot–equation-based algorithm, known as FROSH, and those derived from shock tube experiments on vibrationally nonequilibrated gases. It is shown that incorrect vibrational relaxation assumptions could lead to errors in temperature as large as 8% for 25% oxygen/argon mixtures at 3500 K. Following this demonstration, this article employs the algorithm to show the importance of correct vibrational equilibration assumptions, noting, for instance, that errors in temperature of up to about 2% at 3500 K may be generated for 10% nitrogen/argon mixtures if vibrational relaxation is not treated properly. Lastly, this article presents an extensive uncertainty analysis, showing that postshock temperatures can be calculated with root-of-sum-of-square errors of better than ±1% given sufficiently accurate experimentally measured input parameters.

  7. Dependence of calculated postshock thermodynamic variables on vibrational equilibrium and input uncertainty

    DOE PAGES

    Campbell, Matthew Frederick; Owen, Kyle G.; Davidson, David F.; ...

    2017-01-30

    The purpose of this article is to explore the dependence of calculated postshock thermodynamic properties in shock tube experiments upon the vibrational state of the test gas and upon the uncertainties inherent to calculation inputs. This paper first offers a comparison between state variables calculated according to a Rankine–Hugoniot–equation-based algorithm, known as FROSH, and those derived from shock tube experiments on vibrationally nonequilibrated gases. It is shown that incorrect vibrational relaxation assumptions could lead to errors in temperature as large as 8% for 25% oxygen/argon mixtures at 3500 K. Following this demonstration, this article employs the algorithm to show the importance of correct vibrational equilibration assumptions, noting, for instance, that errors in temperature of up to about 2% at 3500 K may be generated for 10% nitrogen/argon mixtures if vibrational relaxation is not treated properly. Lastly, this article presents an extensive uncertainty analysis, showing that postshock temperatures can be calculated with root-of-sum-of-square errors of better than ±1% given sufficiently accurate experimentally measured input parameters.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Z. J.; Wells, D.; Green, J.

    Photon Activation Analysis (PAA) of environmental, archaeological and industrial samples requires extensive data analysis that is susceptible to error. For the purpose of saving time, manpower and minimizing error, a computer program was designed, built and implemented using SQL, Access 2007 and asp.net technology to automate this process. Based on the peak information of the spectrum and assisted by its PAA library, the program automatically identifies elements in the samples and calculates their concentrations and respective uncertainties. The software also could be operated in browser/server mode, which gives the possibility to use it anywhere the internet is accessible. By switching the nuclide library and the related formula behind, the new software can be easily expanded to neutron activation analysis (NAA), charged particle activation analysis (CPAA) or proton-induced X-ray emission (PIXE). Implementation of this would standardize the analysis of nuclear activation data. Results from this software were compared to standard PAA analysis with excellent agreement. With minimum input from the user, the software has proven to be fast, user-friendly and reliable.

  9. Development, validation and accreditation of a method for the determination of Pb, Cd, Cu and As in seafood and fish feed samples.

    PubMed

    Psoma, A K; Pasias, I N; Rousis, N I; Barkonikos, K A; Thomaidis, N S

    2014-05-15

    A rapid, sensitive, accurate and precise method for the determination of Pb, Cd, As and Cu in seafood and fish feed samples by Simultaneous Electrothermal Atomic Absorption Spectrometry was developed in regard to Council Directive 333/2007EC and ISO/IEC 17025 (2005). Different approaches were investigated in order to shorten the analysis time, always taking into account the sensitivity. For method validation, precision (repeatability and reproducibility) and accuracy by addition recovery tests have been assessed as performance criteria. The expanded uncertainties were calculated based on the Eurachem/CITAC Guidelines. The method was accredited by the Hellenic Accreditation System and it was applied in an 8-year study of seafood (n=202) and fish feeds (n=275) from the Greek market. The annual and seasonal variations of the elemental content were determined, and correlations between the elemental content of the fish feeds and that of the respective fish samples were also assessed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. "The New Economic Reality"

    ERIC Educational Resources Information Center

    Stewart, Pearl

    2012-01-01

    Many historically Black business schools have taken a proactive stance during this period of economic uncertainty. Dr. Jessica Bailey, president of the HBCU Business Deans' Roundtable, which includes 52 of the 104 historically Black business schools, thinks the institutions are "expanding their missions" to place more emphasis on globalization,…

  11. Improved estimates of environmental copper release rates from antifouling products.

    PubMed

    Finnie, Alistair A

    2006-01-01

    The US Navy Dome method for measuring copper release rates from antifouling paint in-service on ships' hulls can be considered to be the most reliable indicator of environmental release rates. In this paper, the relationship between the apparent copper release rate and the environmental release rate is established for a number of antifouling coating types using data from a variety of available laboratory, field and calculation methods. Apart from a modified Dome method using panels, all laboratory, field and calculation methods significantly overestimate the environmental release rate of copper from antifouling coatings. The difference is greatest for self-polishing copolymer antifoulings (SPCs) and smallest for certain erodible/ablative antifoulings, where the ASTM/ISO standard and the CEPE calculation method are seen to typically overestimate environmental release rates by factors of about 10 and 4, respectively. Where ASTM/ISO or CEPE copper release rate data are used for environmental risk assessment or regulatory purposes, it is proposed that the release rate values should be divided by a correction factor to enable more reliable generic environmental risk assessments to be made. Using a conservative approach based on a realistic worst case and accounting for experimental uncertainty in the data that are currently available, proposed default correction factors for use with all paint types are 5.4 for the ASTM/ISO method and 2.9 for the CEPE calculation method. Further work is required to expand this data-set and refine the correction factors through correlation of laboratory measured and calculated copper release rates with the direct in situ environmental release rate for different antifouling paints under a range of environmental conditions.
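
    As a quick illustration of how the proposed default correction factors would be applied, the sketch below divides laboratory-derived release rates by 5.4 (ASTM/ISO) and 2.9 (CEPE); the input release rates are invented example values, not measurements from the paper:

      # Apply the proposed default correction factors to laboratory-derived copper release rates.
      astm_release_rate = 25.0   # µg cm-2 day-1, hypothetical ASTM/ISO result
      cepe_release_rate = 13.0   # µg cm-2 day-1, hypothetical CEPE calculation result

      env_from_astm = astm_release_rate / 5.4   # proposed default correction for ASTM/ISO
      env_from_cepe = cepe_release_rate / 2.9   # proposed default correction for CEPE

      print(f"estimated environmental release rate: {env_from_astm:.1f} (from ASTM/ISO), "
            f"{env_from_cepe:.1f} (from CEPE) µg cm-2 day-1")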

  12. Comparison between bottom-up and top-down approaches in the estimation of measurement uncertainty.

    PubMed

    Lee, Jun Hyung; Choi, Jee-Hye; Youn, Jae Saeng; Cha, Young Joo; Song, Woonheung; Park, Ae Ja

    2015-06-01

    Measurement uncertainty is a metrological concept that quantifies the variability of measurement results. There are two approaches to estimating measurement uncertainty. In this study, we sought to provide practical and detailed examples of, and to compare, the bottom-up and top-down approaches to estimating measurement uncertainty. We estimated the measurement uncertainty of the glucose concentration according to the CLSI EP29-A guideline. Two different approaches were used. First, we performed a bottom-up approach: we identified the sources of uncertainty, made an uncertainty budget, assessed the measurement functions, determined the uncertainties of each element, and combined them. Second, we performed a top-down approach using internal quality control (IQC) data for 6 months; we then estimated and corrected the systematic bias using a certified reference material for glucose (NIST SRM 965b). The expanded uncertainties at the low glucose concentration (5.57 mmol/L) by the bottom-up and top-down approaches were ±0.18 mmol/L and ±0.17 mmol/L, respectively (all k = 2). Those at the high glucose concentration (12.77 mmol/L) by the bottom-up and top-down approaches were ±0.34 mmol/L and ±0.36 mmol/L, respectively (all k = 2). We presented practical and detailed examples for estimating measurement uncertainty by the two approaches. The uncertainties by the bottom-up approach were quite similar to those by the top-down approach. Thus, we demonstrated that the two approaches were approximately equivalent and interchangeable and concluded that clinical laboratories could determine measurement uncertainty by the simpler top-down approach.
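
    A minimal sketch of the top-down combination described above, folding within-laboratory reproducibility and a bias term (including the uncertainty of the certified value) into an expanded uncertainty; the numbers are placeholders, and the bias treatment shown is one common convention rather than necessarily the one used in the study:

      # Top-down measurement uncertainty from IQC reproducibility plus a bias term.
      import math

      u_rw   = 0.070    # within-lab reproducibility of glucose, mmol/L (assumed, from IQC SD)
      bias   = 0.030    # observed bias versus the certified value, mmol/L (assumed)
      u_cref = 0.020    # standard uncertainty of the certified value, mmol/L (assumed)

      u_bias = math.sqrt(bias**2 + u_cref**2)      # one common way to fold bias into the budget
      u_comb = math.sqrt(u_rw**2 + u_bias**2)
      U = 2 * u_comb                               # expanded uncertainty, k = 2
      print(f"U (k = 2) = ±{U:.2f} mmol/L")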

  13. Uncertainty quantification of effective nuclear interactions

    DOE PAGES

    Pérez, R. Navarro; Amaro, J. E.; Arriola, E. Ruiz

    2016-03-02

    We give a brief review of the development of phenomenological NN interactions and the corresponding quantification of statistical uncertainties. We look into the uncertainty of effective interactions broadly used in mean field calculations through the Skyrme parameters and effective field theory counter-terms, by estimating both statistical and systematic uncertainties stemming from the NN interaction. We also comment on the role played by different fitting strategies in the light of recent developments.

  14. Uncertainty quantification of effective nuclear interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pérez, R. Navarro; Amaro, J. E.; Arriola, E. Ruiz

    We give a brief review of the development of phenomenological NN interactions and the corresponding quantification of statistical uncertainties. We look into the uncertainty of effective interactions broadly used in mean field calculations through the Skyrme parameters and effective field theory counter-terms, by estimating both statistical and systematic uncertainties stemming from the NN interaction. We also comment on the role played by different fitting strategies in the light of recent developments.

  15. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or timing of cash flows are uncertain and are not fixed under § 436.14, Federal agencies may examine the impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order...

  16. Use of meteorological information in the risk analysis of a mixed wind farm and solar

    NASA Astrophysics Data System (ADS)

    Mengelkamp, H.-T.; Bendel, D.

    2010-09-01

    The renewable energy industry has developed rapidly during the last two decades, and so have the needs for high-quality, comprehensive meteorological services. It is, however, only recently that international financial institutions bundle wind farms and solar power plants and offer shares in these aggregate portfolios. The monetary value of a mixed wind farm and solar power plant portfolio is determined by legal and technical aspects, the expected annual energy production of each wind farm and solar power plant, and the associated uncertainty of the energy yield estimation, i.e. the investment risk. Building an aggregate portfolio will reduce the overall uncertainty through diversification, in contrast to the single wind farm/solar power plant energy yield uncertainty. This is similar to equity funds based on a variety of companies or products. Meteorological aspects contribute to the diversification in various ways. There is the uncertainty in the estimation of the expected long-term mean energy production of the wind and solar power plants. Different components of uncertainty have to be considered depending on whether the power plant is already in operation or in the planning phase. The uncertainty related to a wind farm in the planning phase comprises the methodology of the wind potential estimation, the uncertainty of the site-specific wind turbine power curve, and the uncertainty of the wind farm effect calculation. The uncertainty related to a solar power plant in the pre-operational phase comprises the uncertainty of the radiation database and that of the performance curve. The long-term mean annual energy yield of operational wind farms and solar power plants is estimated on the basis of the actual energy production and its relation to a climatologically stable long-term reference period. These components of uncertainty are of a technical nature and based on subjective estimations rather than on a statistically sound data analysis. Then there is the temporal and spatial variability of the wind speed and radiation. Their influence on the overall risk is determined by the regional distribution of the power plants. These uncertainty components are calculated on the basis of wind speed observations and simulations and satellite-derived radiation data. The respective volatility (temporal variability) is calculated from the site-specific time series, and the influence on the portfolio through regional correlation. For an exemplary portfolio comprising fourteen wind farms and eight solar power plants, the expected annual mean energy production is calculated and the different components of uncertainty are estimated for each single wind farm and solar power plant and for the portfolio as a whole. The reduction in uncertainty (or risk) through bundling the wind farms and the solar power plants (the portfolio effect) is calculated by Markowitz' Modern Portfolio Theory. This theory is applied separately to the wind farm bundle, to the solar power plant bundle, and to the combination of both. The combination of wind and photovoltaic assets clearly shows potential for risk reduction. Even assets with a comparably low expected return can lead to a significant risk reduction, depending on their individual characteristics.
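
    A minimal Markowitz-style sketch of the portfolio effect invoked above, using invented volatilities, weights and correlations rather than the authors' figures:

      # Portfolio yield volatility versus the fully correlated (no diversification) case.
      import numpy as np

      sigma = np.array([0.12, 0.14, 0.08])           # relative yield volatilities: two wind farms, one PV plant (assumed)
      w     = np.array([0.4, 0.3, 0.3])              # production weights (assumed)
      corr  = np.array([[1.0, 0.6, 0.1],
                        [0.6, 1.0, 0.1],
                        [0.1, 0.1, 1.0]])            # assumed correlations of annual yields

      cov = np.outer(sigma, sigma) * corr            # covariance matrix
      sigma_portfolio = np.sqrt(w @ cov @ w)         # portfolio volatility
      sigma_fully_correlated = w @ sigma             # no portfolio effect
      print(f"portfolio: {sigma_portfolio:.3f}  vs. fully correlated: {sigma_fully_correlated:.3f}")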

  17. Equation of State for the Thermodynamic Properties of 1,1,2,2,3-Pentafluoropropane (R-245ca)

    NASA Astrophysics Data System (ADS)

    Zhou, Yong; Lemmon, Eric W.

    2016-03-01

    An equation of state for the calculation of the thermodynamic properties of 1,1,2,2,3-pentafluoropropane (R-245ca), a hydrofluorocarbon refrigerant, is presented. The equation of state (EOS) is expressed in terms of the Helmholtz energy as a function of temperature and density, and all thermodynamic properties can be calculated from derivatives of the Helmholtz energy. The equation is valid for all liquid, vapor, and supercritical states of the fluid from the triple point to 450 K, with pressures up to 10 MPa. Comparisons to experimental data are given to verify the stated uncertainties in the EOS. The estimated uncertainty for density is 0.1 % in the liquid phase between 243 K and 373 K with pressures up to 6.5 MPa; the uncertainties increase outside this range, and are unknown. The uncertainty in vapor-phase speed of sound is 0.1 %. The uncertainty in vapor pressure is 0.2 % between 270 K and 393 K. The uncertainties in other regions and properties are unknown due to a lack of experimental data.
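
    To recall how properties follow from a Helmholtz-energy formulation, the sketch below evaluates pressure as p = ρRT(1 + δ·∂αʳ/∂δ) with a numerical density derivative. The residual Helmholtz function and the critical parameters in the example are toy placeholders, not the published R-245ca correlation.

```python
import numpy as np

R = 8.314462618  # J/(mol K), universal gas constant

def alpha_r(delta, tau):
    """Toy residual Helmholtz energy (NOT the published R-245ca correlation)."""
    return -0.5 * delta * tau + 0.05 * delta**2 * tau**0.5

def pressure(rho, T, rho_c, T_c):
    """p = rho*R*T*(1 + delta * d(alpha_r)/d(delta)), derivative taken numerically."""
    delta, tau = rho / rho_c, T_c / T
    h = 1e-6
    dadd = (alpha_r(delta + h, tau) - alpha_r(delta - h, tau)) / (2 * h)
    return rho * R * T * (1.0 + delta * dadd)

# Example call with approximate/illustrative critical parameters (mol/m^3, K)
print(pressure(rho=2000.0, T=300.0, rho_c=3900.0, T_c=447.6))
```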

  18. Application of the JENDL-4.0 nuclear data set for uncertainty analysis of the prototype FBR Monju

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamagno, P.; Van Rooijen, W. F. G.; Takeda, T.

    2012-07-01

    This paper deals with uncertainty analysis of the Monju reactor using JENDL-4.0 and the ERANOS code [1]. In 2010 the Japan Atomic Energy Agency (JAEA) released the JENDL-4.0 nuclear data set. This new evaluation contains improved values of cross sections and emphasizes accurate covariance matrices. Also in 2010, JAEA restarted the sodium-cooled fast reactor prototype Monju after about 15 years of shutdown. The long shutdown time resulted in a build-up of ²⁴¹Am by natural decay from the initially loaded Pu. As well as improved covariance matrices, JENDL-4.0 is announced to contain improved data for minor actinides [2]. The choice of the Monju reactor as an application of the new evaluation therefore seems even more relevant. The uncertainty analysis requires the determination of sensitivity coefficients. The well-established ERANOS code was chosen because of its integrated modules that allow users to perform sensitivity and uncertainty analysis. A JENDL-4.0 cross-section library is not available for ERANOS, so a cross-section library had to be made from the original ENDF files for the ECCO cell code (part of ERANOS). For confirmation of the newly made library, calculations of a benchmark core were performed. These calculations used the MZA and MZB benchmarks and showed results consistent with other libraries. Calculations for the Monju reactor were performed using hexagonal 3D geometry and PN transport theory. However, the ERANOS sensitivity modules cannot use the resulting fluxes, as these modules require finite-difference-based fluxes obtained from RZ SN-transport or 3D diffusion calculations. The corresponding geometrical models have been made and the results verified with Monju restart experimental data [4]. Uncertainty analysis was performed using the RZ model. The JENDL-4.0 uncertainty analysis showed a significant reduction of the uncertainty related to the fission cross section of Pu, along with an increase of the uncertainty related to the capture cross section of ²³⁸U, compared with the previous JENDL-3.3 version. Covariance data recently added in JENDL-4.0 for ²⁴¹Am appear to have a non-negligible contribution. (authors)
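
    Propagating cross-section covariances through sensitivity coefficients of this kind is commonly done with the first-order "sandwich rule", var(k)/k² = Sᵀ M S. The sketch below shows the arithmetic with placeholder sensitivities and a placeholder relative covariance matrix; the numbers are not JENDL-4.0 data.

```python
import numpy as np

# Illustrative sensitivity coefficients of k_eff to a few cross sections
# (d k/k per d sigma/sigma) and an assumed relative covariance matrix;
# the values are placeholders, not JENDL-4.0 data.
S = np.array([0.35, -0.12, 0.08])
M = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,     0.0,    2.5e-4]])  # relative covariances

rel_var = S @ M @ S                         # "sandwich rule"
print(f"relative uncertainty of k_eff: {np.sqrt(rel_var) * 100:.3f} %")
```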

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritychenko, B.; Mughabghab, S.F.

    We present calculations of neutron thermal cross sections, Westcott factors, resonance integrals, Maxwellian-averaged cross sections and astrophysical reaction rates for 843 ENDF materials using data from the major evaluated nuclear libraries and European activation file. Extensive analysis of newly-evaluated neutron reaction cross sections, neutron covariances, and improvements in data processing techniques motivated us to calculate nuclear industry and neutron physics quantities, produce s-process Maxwellian-averaged cross sections and astrophysical reaction rates, systematically calculate uncertainties, and provide additional insights on currently available neutron-induced reaction data. Nuclear reaction calculations are discussed and new results are presented. Due to space limitations, the present paper contains only calculated Maxwellian-averaged cross sections and their uncertainties. The complete data sets for all results are published in the Brookhaven National Laboratory report.
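
    As a reminder of the quantity being tabulated, the Maxwellian-averaged cross section at thermal energy kT is ⟨σ⟩ = (2/√π)·∫σ(E)·E·exp(−E/kT)dE / ∫E·exp(−E/kT)dE. The sketch below evaluates it for a toy 1/v cross section; the grid and normalization are illustrative only.

```python
import numpy as np

def macs(energy_eV, sigma_b, kT_eV=30e3):
    """Maxwellian-averaged cross section:
    <sigma> = 2/sqrt(pi) * int(sigma(E) E exp(-E/kT) dE) / int(E exp(-E/kT) dE).
    energy_eV and sigma_b form a pointwise cross-section table (illustrative)."""
    w = energy_eV * np.exp(-energy_eV / kT_eV)
    num = np.trapz(sigma_b * w, energy_eV)
    den = np.trapz(w, energy_eV)
    return 2.0 / np.sqrt(np.pi) * num / den

# Toy 1/v cross section evaluated at kT = 30 keV (the classical s-process point)
E = np.logspace(1, 6, 2000)            # 10 eV .. 1 MeV
sigma = 1.0 / np.sqrt(E / 0.0253)      # barns, normalized to 1 b at thermal energy
print(f"MACS(30 keV) = {macs(E, sigma):.4f} b")
```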

  20. Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly

    NASA Astrophysics Data System (ADS)

    Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.

    2014-04-01

    We performed a sensitivity and uncertainty analysis, as well as a benchmark similarity assessment, of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. The material composition was defined for each assembly ring separately, allowing us to decompose the sensitivities not only by isotope and reaction but also by spatial region. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. The similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified the main contributors to the calculation bias.

  1. Correlated uncertainties in Monte Carlo reaction rate calculations

    NASA Astrophysics Data System (ADS)

    Longland, Richard

    2017-07-01

    Context. Monte Carlo methods have enabled nuclear reaction rates from uncertain inputs to be presented in a statistically meaningful manner. However, these uncertainties are currently computed assuming no correlations between the physical quantities that enter those calculations. This is not always an appropriate assumption. Astrophysically important reactions are often dominated by resonances, whose properties are normalized to a well-known reference resonance. This insight provides a basis from which to develop a flexible framework for including correlations in Monte Carlo reaction rate calculations. Aims: The aim of this work is to develop and test a method for including correlations in Monte Carlo reaction rate calculations when the input has been normalized to a common reference. Methods: A mathematical framework is developed for including correlations between input parameters in Monte Carlo reaction rate calculations. The magnitude of those correlations is calculated from the uncertainties typically reported in experimental papers, where full correlation information is not available. The method is applied to four illustrative examples: a fictional 3-resonance reaction, 27Al(p, γ)28Si, 23Na(p, α)20Ne, and 23Na(α, p)26Mg. Results: Reaction rates at low temperatures that are dominated by a few isolated resonances are found to be minimally impacted by correlation effects. However, reaction rates determined from many overlapping resonances can be significantly affected. Uncertainties in the 23Na(α, p)26Mg reaction, for example, increase by up to a factor of 5. This highlights the need to take correlation effects into account in reaction rate calculations, and provides insight into which cases are expected to be most affected by them. The impact of correlation effects on nucleosynthesis is also investigated.
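
    The correlation mechanism described here (resonance strengths tied to a common reference) can be mimicked in a Monte Carlo sampler by multiplying every strength by one shared random factor and an independent factor per resonance. The sketch below does this for an invented three-resonance reaction; energies, strengths, and uncertainty fractions are placeholders, and the rate is evaluated only up to a constant in the narrow-resonance approximation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented resonance energies (MeV) and strengths (eV), each normalized to a
# common reference resonance; values are for demonstration only.
E_r = np.array([0.10, 0.25, 0.40])
omega_gamma = np.array([1.0e-3, 5.0e-3, 2.0e-2])
f_ref = 0.10        # fractional uncertainty of the reference strength (shared)
f_rel = 0.05        # fractional uncertainty of each relative measurement (independent)

kT = 0.0862 * 0.3   # MeV, at T9 = 0.3 (kT = 86.2 keV per GK)
n_samples = 100_000

# Shared lognormal factor (induces correlation) times independent lognormal factors
shared = rng.lognormal(mean=0.0, sigma=f_ref, size=(n_samples, 1))
indep = rng.lognormal(mean=0.0, sigma=f_rel, size=(n_samples, E_r.size))
samples = omega_gamma * shared * indep

# Narrow-resonance rate up to a constant: strengths weighted by Boltzmann factors
rate = (samples * np.exp(-E_r / kT)).sum(axis=1)
print(f"rate uncertainty (1 sigma / median): {rate.std() / np.median(rate):.3f}")
```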

  2. Sensitivity Analysis of Nuclide Importance to One-Group Neutron Cross Sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sekimoto, Hiroshi; Nemoto, Atsushi; Yoshimura, Yoshikane

    The importance of nuclides is useful when investigating nuclide characteristics in a given neutron spectrum. However, it is derived using one-group microscopic cross sections, which may contain large errors or uncertainties. The sensitivity coefficient shows the effect of these errors or uncertainties on the importance. The equations for calculating sensitivity coefficients of importance to one-group nuclear constants are derived using the perturbation method. Numerical values are also evaluated for some important cases for fast and thermal reactor systems. Many characteristics of the sensitivity coefficients are derived from the derived equations and numerical results. The matrix of sensitivity coefficients appears diagonally dominant; however, this is not always satisfied in its detailed structure. The detailed structure of the matrix and the characteristics of the coefficients are given. Using the obtained sensitivity coefficients, some demonstration calculations have been performed. The effects of errors and uncertainties in nuclear data, and of changes in the one-group cross-section input caused by fuel design changes through the neutron spectrum, are investigated. These calculations show that the sensitivity coefficient is useful when evaluating the error or uncertainty of nuclide importance caused by cross-section data errors or uncertainties, and when checking the effectiveness of fuel cell or core design changes for improving neutron economy.

  3. SU-F-T-192: Study of Robustness Analysis Method of Multiple Field Optimized IMPT Plans for Head & Neck Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Wang, X; Li, H

    Purpose: Proton therapy is more sensitive to uncertainties than photon treatments because the protons’ finite range depends on the tissue density. The worst-case scenario (WCS) method, originally proposed by Lomax, has been adopted in our institute for robustness analysis of IMPT plans. This work demonstrates that the WCS method is sufficient to account for the uncertainties that could be encountered during daily clinical treatment. Methods: A fast, approximate dose calculation method was developed to calculate the dose for the IMPT plan under different setup and range uncertainties. The effects of two factors, the inverse square factor and range uncertainty, are explored. The WCS robustness analysis method was evaluated using this fast dose calculation method. The worst-case dose distribution was generated by shifting the isocenter by 3 mm along the x, y, and z directions and modifying stopping power ratios by ±3.5%. 1000 randomly perturbed cases in proton range and in the x, y, and z directions were created, and the corresponding dose distributions were calculated using the approximate method. DVHs and dosimetric indexes of all 1000 perturbed cases were calculated and compared with the worst-case scenario result. Results: The distributions of dosimetric indexes of the 1000 perturbed cases were generated and compared with the worst-case scenario results. For D95 of the CTVs, at least 97% of the 1000 perturbed cases show higher values than the worst-case scenario. For D5 of the CTVs, at least 98% of the perturbed cases have lower values than the worst-case scenario. Conclusion: By extensively calculating the dose distributions under random uncertainties, the WCS method was verified to be reliable in evaluating the robustness level of MFO IMPT plans of H&N patients. The extensive sampling approach using the fast approximate method could be used to evaluate the effects of different factors on the robustness level of IMPT plans in the future.

  4. Intergovernmental Panel on Climate Change (IPCC), Working Group 1, 1994: Modelling Results Relating Future Atmospheric CO2 Concentrations to Industrial Emissions (DB1009)

    DOE Data Explorer

    Enting, I. G.; Wigley, M. L.; Heimann, M.

    1995-01-01

    This database contains the results of various projections of the relation between future CO2 concentrations and future industrial emissions. These projections were contributed by groups from a number of countries as part of the scientific assessment for the report, "Radiative Forcing of Climate Change" (1994), issued by Working Group 1 of the Intergovernmental Panel on Climate Change. There were three types of calculations: (1) forward projections, calculating the atmospheric CO2 concentrations resulting from specified emissions scenarios; (2) inverse calculations, determining the emission rates that would be required to achieve stabilization of CO2 concentrations via specified pathways; (3) impulse response function calculations, required for determining Global Warming Potentials. The projections were extrapolations of global carbon cycle models from pre-industrial times (starting at 1765) to 2100 or 2200 A.D. There were two aspects to the exercise: (1) an assessment of the uncertainty due to uncertainties regarding the current carbon budget, and (2) an assessment of the uncertainties arising from differences between models. To separate these effects, a set of standard conditions was used to explore inter-model differences and then a series of sensitivity studies was used to explore the consequences of current uncertainties in the carbon cycle.

  5. Preliminary evaluation of the dosimetric accuracy of cone-beam computed tomography for cases with respiratory motion

    NASA Astrophysics Data System (ADS)

    Kim, Dong Wook; Bae, Sunhyun; Chung, Weon Kuu; Lee, Yoonhee

    2014-04-01

    Cone-beam computed tomography (CBCT) images are currently used for patient positioning and adaptive dose calculation; however, the degree of CBCT uncertainty in cases of respiratory motion remains an interesting issue. This study evaluated the uncertainty of CBCT-based dose calculations for a moving target. Using a phantom, we estimated differences in the geometries and the Hounsfield units (HU) between CT and CBCT. The calculated dose distributions based on CT and CBCT images were also compared using a radiation treatment planning system, and the comparison included cases with respiratory motion. The geometrical uncertainties of the CT and the CBCT images were less than 0.15 cm. The HU differences between CT and CBCT images for standard-dose-head, high-quality-head, normal-pelvis, and low-dose-thorax modes were 31, 36, 23, and 33 HU, respectively. The gamma (3%, 0.3 cm)-dose distribution between CT and CBCT was greater than 1 in 99% of the area. The gamma-dose distribution between CT and CBCT during respiratory motion was also greater than 1 in 99% of the area. The uncertainty of the CBCT-based dose calculation was evaluated for cases with respiratory motion. In conclusion, image distortion due to motion did not significantly influence dosimetric parameters.
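
    For readers unfamiliar with the gamma (3%, 0.3 cm) criterion quoted above, the sketch below computes a simple one-dimensional global gamma index for two toy dose profiles standing in for the CT- and CBCT-based calculations; the profiles and grid are invented.

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=0.3):
    """Simple 1-D global gamma index (dose criterion dd, distance criterion dta in cm).
    For each reference point, take the minimum over all evaluated points of
    sqrt((dose difference / dd)^2 + (distance / dta)^2)."""
    d_max = d_ref.max()
    gamma = np.empty_like(d_ref)
    for i, (x0, d0) in enumerate(zip(x_ref, d_ref)):
        dose_term = (d_eval - d0) / (dd * d_max)
        dist_term = (x_eval - x0) / dta
        gamma[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gamma

# Toy dose profiles (CT-based vs. CBCT-based), 0.1 cm grid
x = np.arange(0, 10, 0.1)
d_ct = np.exp(-((x - 5) / 2) ** 2)
d_cbct = 1.01 * np.exp(-((x - 5.05) / 2) ** 2)   # 1 % dose change, 0.5 mm shift
g = gamma_index_1d(x, d_ct, x, d_cbct)
print(f"gamma pass rate (gamma <= 1): {100 * (g <= 1).mean():.1f} %")
```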

  6. Mitigating voltage lead errors of an AC Josephson voltage standard by impedance matching

    NASA Astrophysics Data System (ADS)

    Zhao, Dongsheng; van den Brom, Helko E.; Houtzager, Ernest

    2017-09-01

    A pulse-driven AC Josephson voltage standard (ACJVS) generates calculable AC voltage signals at low temperatures, whereas measurements are performed with a device under test (DUT) at room temperature. The voltage leads cause the output voltage to show deviations that scale with the frequency squared. Error correction mechanisms investigated so far allow the ACJVS to be operational for frequencies up to 100 kHz. In this paper, calculations are presented to deal with these errors in terms of reflected waves. Impedance matching at the source side of the system, which is loaded with a high-impedance DUT, is proposed as an accurate method to mitigate these errors for frequencies up to 1 MHz. Simulations show that the influence of non-ideal component characteristics, such as the tolerance of the matching resistor, the capacitance of the load input impedance, losses in the voltage leads, non-homogeneity in the voltage leads, a non-ideal on-chip connection and inductors between the Josephson junction array and the voltage leads, can be corrected for using the proposed procedures. The results show that an expanded uncertainty of 12 parts in 10⁶ (k = 2) at 1 MHz and 0.5 part in 10⁶ (k = 2) at 100 kHz is within reach.

  7. Traceable Dynamic Calibration of Force Transducers by Primary Means

    PubMed Central

    Vlajic, Nicholas; Chijioke, Ako

    2018-01-01

    We describe an apparatus for traceable, dynamic calibration of force transducers using harmonic excitation, and report calibration measurements of force transducers using this apparatus. In this system, the force applied to the transducer is produced by the acceleration of an attached mass, and is determined according to Newton’s second law, F = ma. The acceleration is measured by primary means, using laser interferometry. The capabilities of this system are demonstrated by performing dynamic calibrations of two shear-web-type force transducers up to a frequency of 2 kHz, with an expanded uncertainty below 1.2 %. We give an accounting of all significant sources of uncertainty, including a detailed consideration of the effects of dynamic tilting (rocking), which is a leading source of uncertainty in such harmonic force calibration systems. PMID:29887643
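
    The core of the method, F = m·a with an interferometrically measured acceleration, lends itself to a very short uncertainty budget. The sketch below combines assumed relative uncertainties of the mass and the acceleration amplitude into an expanded uncertainty with coverage factor k = 2; the numbers are illustrative and are not the paper's budget.

```python
import numpy as np

# Illustrative values (not the paper's data): attached mass and interferometrically
# measured acceleration amplitude at one excitation frequency.
m = 0.5000          # kg
u_m = 0.0005        # kg, standard uncertainty of the mass
a = 98.1            # m/s^2, acceleration amplitude
u_a_rel = 0.004     # 0.4 % relative standard uncertainty of the acceleration

F = m * a                                            # Newton's second law
u_rel = np.sqrt((u_m / m) ** 2 + u_a_rel ** 2)       # combined relative standard uncertainty
U_rel = 2 * u_rel                                    # expanded uncertainty, coverage factor k = 2

print(f"dynamic force amplitude: {F:.2f} N")
print(f"expanded uncertainty (k=2): {100 * U_rel:.2f} %")
```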

  8. A Systems Approach to Expanding the Technology Readiness Level within Defense Acquisition

    DTIC Science & Technology

    2009-03-20

    Aeronautics and Space Administration’s (NASA) post-Apollo era as ontology for contracting support (Sadin, Povinelli & Rosen, 1989). In the last nine years...2002). On uncertainty, ambiguity, and complexity in project management. Management Science, 48(8), 1008-1023. Sadin, S. R., Povinelli , F. P

  9. Use of High-Throughput Cell-Based and Model Organism Assays for Understanding the Potential Toxicity of Engineered Nanomaterials

    EPA Science Inventory

    The rapidly expanding field of nanotechnology is introducing a large number and diversity of engineered nanomaterials into research and commerce with concordant uncertainty regarding the potential adverse health and ecological effects. With costs and time of traditional animal to...

  10. Cost-effective use of liquid nitrogen in cryogenic wind tunnels, phase 2

    NASA Technical Reports Server (NTRS)

    Mcintosh, Glen E.; Lombard, David S.; Leonard, Kenneth R.; Morhorst, Gerald D.

    1990-01-01

    Cryogenic seal tests were performed and Rulon A was selected for the subject nutating positive displacement expander. A four-chamber expander was designed and fabricated. A nitrogen reliquefier flow system was also designed and constructed for testing the cold expander. Initial tests were unsatisfactory because of high internal friction attributed to nutating Rulon inlet and outlet valve plates. Replacement of the nutating valves with cam-actuated poppet valves improved performance. However, no net nitrogen reliquefaction was achieved due to high internal friction. Computer software was developed for accurate calculation of nitrogen reliquefaction from a system such as that proposed. These calculations indicated that practical reliquefaction rates of 15 to 19 percent could be obtained. Due to mechanical problems, the nutating expander did not demonstrate its feasibility nor that of the system. It was concluded that redesign and testing of a smaller nutating expander was required to prove concept feasibility.

  11. Round Robin Test of Residual Resistance Ratio of Nb3Sn Composite Superconductors

    DOE PAGES

    Matsushita, Teruo; Otabe, Edmund Soji; Kim, Dong Ho; ...

    2017-12-07

    A round robin test of the residual resistance ratio (RRR) was performed for Nb3Sn composite superconductors prepared by the internal tin method, carried out by six institutes using the international standard test method described in IEC 61788-4. It was found that the uncertainty mainly resulted from the determination of the cryogenic resistance from the intersection of two straight lines drawn to fit the voltage vs. temperature curve around the resistive transition. As a result, the measurement clarified that RRR can be measured with an expanded uncertainty not larger than 5% (coverage factor 2) using this test method.
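
    The dominant uncertainty source named here, extracting the cryogenic resistance from the intersection of two fitted lines on the V(T) curve, can be sketched as follows. The data, split temperature, current, and room-temperature resistance are invented; the fit-and-intersect logic is the point of the example.

```python
import numpy as np

def residual_resistance(temp_K, volt_V, current_A, t_split):
    """Fit straight lines to the V(T) curve below and above t_split (around the
    resistive transition) and take their intersection as the cryogenic voltage."""
    lo = temp_K < t_split
    hi = ~lo
    a1, b1 = np.polyfit(temp_K[lo], volt_V[lo], 1)   # transition region
    a2, b2 = np.polyfit(temp_K[hi], volt_V[hi], 1)   # normal-state region
    t_x = (b2 - b1) / (a1 - a2)                      # intersection temperature
    v_x = a1 * t_x + b1
    return v_x / current_A

# Invented data: steep rise through the transition, then a gentle normal-state slope
T = np.linspace(17.0, 21.0, 41)
I = 0.1  # A
V = np.where(T < 18.5, 2e-6 * (T - 17.0), 3e-6 + 5e-8 * (T - 18.5))
R_cryo = residual_resistance(T, V, I, t_split=18.5)
R_room = 3.0e-3   # ohm, measured near room temperature (illustrative)
print(f"RRR = {R_room / R_cryo:.1f}")
```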

  12. Round Robin Test of Residual Resistance Ratio of Nb3Sn Composite Superconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsushita, Teruo; Otabe, Edmund Soji; Kim, Dong Ho

    A round robin test of the residual resistance ratio (RRR) was performed for Nb3Sn composite superconductors prepared by the internal tin method, carried out by six institutes using the international standard test method described in IEC 61788-4. It was found that the uncertainty mainly resulted from the determination of the cryogenic resistance from the intersection of two straight lines drawn to fit the voltage vs. temperature curve around the resistive transition. As a result, the measurement clarified that RRR can be measured with an expanded uncertainty not larger than 5% (coverage factor 2) using this test method.

  13. Bayesian Methods for Effective Field Theories

    NASA Astrophysics Data System (ADS)

    Wesolowski, Sarah

    Microscopic predictions of the properties of atomic nuclei have reached a high level of precision in the past decade. This progress mandates improved uncertainty quantification (UQ) for a robust comparison of experiment with theory. With the uncertainty from many-body methods under control, calculations are now sensitive to the input inter-nucleon interactions. These interactions include parameters that must be fit to experiment, inducing both uncertainty from the fit and from missing physics in the operator structure of the Hamiltonian. Furthermore, the implementation of the inter-nucleon interactions is not unique, which presents the additional problem of assessing results using different interactions. Effective field theories (EFTs) take advantage of a separation of high- and low-energy scales in the problem to form a power-counting scheme that allows the organization of terms in the Hamiltonian based on their expected contribution to observable predictions. This scheme gives a natural framework for quantification of uncertainty due to missing physics. The free parameters of the EFT, called the low-energy constants (LECs), must be fit to data, but in a properly constructed EFT these constants will be natural-sized, i.e., of order unity. The constraints provided by the EFT, namely the size of the systematic uncertainty from truncation of the theory and the natural size of the LECs, are assumed information even before a calculation is performed or a fit is done. Bayesian statistical methods provide a framework for treating uncertainties that naturally incorporates prior information as well as putting stochastic and systematic uncertainties on an equal footing. For EFT UQ Bayesian methods allow the relevant EFT properties to be incorporated quantitatively as prior probability distribution functions (pdfs). Following the logic of probability theory, observable quantities and underlying physical parameters such as the EFT breakdown scale may be expressed as pdfs that incorporate the prior pdfs. Problems of model selection, such as distinguishing between competing EFT implementations, are also natural in a Bayesian framework. In this thesis we focus on two complementary topics for EFT UQ using Bayesian methods--quantifying EFT truncation uncertainty and parameter estimation for LECs. Using the order-by-order calculations and underlying EFT constraints as prior information, we show how to estimate EFT truncation uncertainties. We then apply the result to calculating truncation uncertainties on predictions of nucleon-nucleon scattering in chiral effective field theory. We apply model-checking diagnostics to our calculations to ensure that the statistical model of truncation uncertainty produces consistent results. A framework for EFT parameter estimation based on EFT convergence properties and naturalness is developed which includes a series of diagnostics to ensure the extraction of the maximum amount of available information from data to estimate LECs with minimal bias. We develop this framework using model EFTs and apply it to the problem of extrapolating lattice quantum chromodynamics results for the nucleon mass. We then apply aspects of the parameter estimation framework to perform case studies in chiral EFT parameter estimation, investigating a possible operator redundancy at fourth order in the chiral expansion and the appropriate inclusion of truncation uncertainty in estimating LECs.

  14. Uncertainty for calculating transport on Titan: A probabilistic description of bimolecular diffusion parameters

    NASA Astrophysics Data System (ADS)

    Plessis, S.; McDougall, D.; Mandt, K.; Greathouse, T.; Luspay-Kuti, A.

    2015-11-01

    Bimolecular diffusion coefficients are important parameters used by atmospheric models to calculate altitude profiles of minor constituents in an atmosphere. Unfortunately, laboratory measurements of these coefficients were never conducted at temperature conditions relevant to the atmosphere of Titan. Here we conduct a detailed uncertainty analysis of the bimolecular diffusion coefficient parameters as applied to Titan's upper atmosphere to provide a better understanding of the impact of uncertainty for this parameter on models. Because temperature and pressure conditions are much lower than the laboratory conditions in which bimolecular diffusion parameters were measured, we apply a Bayesian framework, a problem-agnostic framework, to determine parameter estimates and associated uncertainties. We solve the Bayesian calibration problem using the open-source QUESO library which also performs a propagation of uncertainties in the calibrated parameters to temperature and pressure conditions observed in Titan's upper atmosphere. Our results show that, after propagating uncertainty through the Massman model, the uncertainty in molecular diffusion is highly correlated to temperature and we observe no noticeable correlation with pressure. We propagate the calibrated molecular diffusion estimate and associated uncertainty to obtain an estimate with uncertainty due to bimolecular diffusion for the methane molar fraction as a function of altitude. Results show that the uncertainty in methane abundance due to molecular diffusion is in general small compared to eddy diffusion and the chemical kinetics description. However, methane abundance is most sensitive to uncertainty in molecular diffusion above 1200 km where the errors are nontrivial and could have important implications for scientific research based on diffusion models in this altitude range.
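
    Propagating the calibrated diffusion parameters to Titan-like conditions amounts to pushing samples of the parameters through the diffusion law. The sketch below assumes a common power-law form D = D₀(p₀/p)(T/T₀)^s and uses made-up Gaussian parameter distributions rather than the paper's calibrated posteriors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Power-law form often used for binary diffusion coefficients: D = D0*(p0/p)*(T/T0)^s.
# The parameter values and uncertainties below are illustrative stand-ins, not the
# calibrated posteriors from the paper.
T0, p0 = 273.15, 101325.0                      # reference conditions (K, Pa)
D0_samples = rng.normal(0.20, 0.02, 100_000)   # cm^2/s at reference conditions
s_samples = rng.normal(1.75, 0.05, 100_000)    # temperature exponent

# Rough numbers standing in for conditions in Titan's upper atmosphere
T, p = 150.0, 1.0e-2 * 101325.0

D = D0_samples * (p0 / p) * (T / T0) ** s_samples
print(f"D = {D.mean():.1f} +/- {D.std():.1f} cm^2/s at T={T} K, p={p:.0f} Pa")
```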

  15. Out of the black box: expansion of a theory-based intervention to self-manage the uncertainty associated with active surveillance (AS) for prostate cancer.

    PubMed

    Kazer, Meredith Wallace; Bailey, Donald E; Whittemore, Robin

    2010-01-01

    Active surveillance (AS) (sometimes referred to as watchful waiting) is an alternative approach to managing low-risk forms of prostate cancer. This management approach allows men to avoid expensive prostate cancer treatments and their well-documented adverse events of erectile dysfunction and incontinence. However, AS is associated with illness uncertainty and reduced quality of life (QOL; Wallace, 2003). An uncertainty management intervention (UMI) was developed by Mishel et al. (2002) to manage uncertainty in women treated for breast cancer and men treated for prostate cancer. However, the UMI was not developed for men undergoing AS for prostate cancer and has not been adequately tested in this population. This article reports on the expansion of a theory-based intervention to manage the uncertainty associated with AS for prostate cancer. Intervention Theory (Sidani & Braden, 1998) is discussed as a framework for revising the UMI intervention for men undergoing AS for prostate cancer (UMI-AS). The article concludes with plans for testing of the expanded intervention and implications for the extended theory.

  16. Parton shower and NLO-matching uncertainties in Higgs boson pair production

    NASA Astrophysics Data System (ADS)

    Jones, Stephen; Kuttimalai, Silvan

    2018-02-01

    We perform a detailed study of NLO parton shower matching uncertainties in Higgs boson pair production through gluon fusion at the LHC based on a generic and process independent implementation of NLO subtraction and parton shower matching schemes for loop-induced processes in the Sherpa event generator. We take into account the full top-quark mass dependence in the two-loop virtual corrections and compare the results to an effective theory approximation. In the full calculation, our findings suggest large parton shower matching uncertainties that are absent in the effective theory approximation. We observe large uncertainties even in regions of phase space where fixed-order calculations are theoretically well motivated and parton shower effects expected to be small. We compare our results to NLO matched parton shower simulations and analytic resummation results that are available in the literature.

  17. Robust stability of fractional order polynomials with complicated uncertainty structure

    PubMed Central

    Şenol, Bilal; Pekař, Libor

    2017-01-01

    The main aim of this article is to present a graphical approach to robust stability analysis for families of fractional order (quasi-)polynomials with complicated uncertainty structure. More specifically, the work emphasizes the multilinear, polynomial and general structures of uncertainty and, moreover, the retarded quasi-polynomials with parametric uncertainty are studied. Since the families with these complex uncertainty structures suffer from the lack of analytical tools, their robust stability is investigated by numerical calculation and depiction of the value sets and subsequent application of the zero exclusion condition. PMID:28662173

  18. Results of the CCRI(II)-S12.H-3 supplementary comparison: Comparison of methods for the calculation of the activity and standard uncertainty of a tritiated-water source measured using the LSC-TDCR method.

    PubMed

    Cassette, Philippe; Altzitzoglou, Timotheos; Antohe, Andrei; Rossi, Mario; Arinc, Arzu; Capogni, Marco; Galea, Raphael; Gudelis, Arunas; Kossert, Karsten; Lee, K B; Liang, Juncheng; Nedjadi, Youcef; Oropesa Verdecia, Pilar; Shilnikova, Tanya; van Wyngaardt, Winifred; Ziemek, Tomasz; Zimmerman, Brian

    2018-04-01

    A comparison of calculations of the activity of a ³H₂O liquid scintillation source using the same experimental data set collected at the LNE-LNHB with a triple-to-double coincidence ratio (TDCR) counter was completed. A total of 17 laboratories calculated the activity and standard uncertainty of the LS source using the files with experimental data provided by the LNE-LNHB. The results as well as relevant information on the computation techniques are presented and analysed in this paper. All results are compatible, even if there is a significant dispersion between the reported uncertainties. An output of this comparison is the estimation of the dispersion of TDCR measurement results when measurement conditions are well defined. Copyright © 2017 Elsevier Ltd. All rights reserved.
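
    A greatly simplified picture of the TDCR computation each laboratory performed is sketched below: for a symmetric three-photomultiplier counter with Poisson photoelectron statistics and a single effective energy (ignoring the ³H beta spectrum and all corrections), the light yield is solved from the measured triple-to-double ratio and then used to convert the double-coincidence rate to activity. The count rate and TDCR value are invented.

```python
import numpy as np
from scipy.optimize import brentq

def efficiencies(m):
    """Triple and double coincidence efficiencies for a symmetric 3-PMT counter,
    assuming Poisson photoelectron statistics and a single effective energy."""
    p = 1.0 - np.exp(-m / 3.0)        # detection probability per PMT
    eps_t = p ** 3
    eps_d = 3.0 * p ** 2 - 2.0 * p ** 3
    return eps_t, eps_d

def activity_from_tdcr(n_double, tdcr_measured):
    """Find the light yield m that reproduces the measured TDCR ratio, then
    convert the double-coincidence count rate to activity."""
    f = lambda m: efficiencies(m)[0] / efficiencies(m)[1] - tdcr_measured
    m = brentq(f, 1e-3, 100.0)
    _, eps_d = efficiencies(m)
    return n_double / eps_d

# Illustrative double-coincidence count rate (s^-1) and TDCR value
print(f"activity ~ {activity_from_tdcr(5000.0, 0.55):.0f} Bq")
```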

  19. Evaluation of Fission Product Critical Experiments and Associated Biases for Burnup Credit Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Don; Rearden, Bradley T; Reed, Davis Allan

    2010-01-01

    One of the challenges associated with implementation of burnup credit is the validation of the criticality calculations used in the safety evaluation, in particular the availability and use of applicable critical experiment data. The purpose of the validation is to quantify the relationship between reality and calculated results. Validation and determination of bias and bias uncertainty require the identification of sets of critical experiments that are similar to the criticality safety models. A principal challenge for crediting fission products (FP) in a burnup credit safety evaluation is the limited availability of relevant FP critical experiments for bias and bias uncertainty determination. This paper provides an evaluation of the available critical experiments that include FPs, along with bounding, burnup-dependent estimates of FP biases generated by combining energy-dependent sensitivity data for a typical burnup credit application with the nuclear data uncertainty information distributed with SCALE 6. A method for determining separate bias and bias uncertainty values for individual FPs, together with illustrative results, is presented. Finally, a FP bias calculation method based on data adjustment techniques and reactivity sensitivity coefficients calculated with the SCALE sensitivity/uncertainty tools, along with some typical results, is presented. Using the methods described in this paper, the cross-section bias for a representative high-capacity spent fuel cask associated with the ENDF/B-VII nuclear data for the 16 most important stable or near-stable FPs is predicted to be no greater than 2% of the total worth of the 16 FPs, or less than 0.13% Δk/k.

  20. Assessment of Uncertainties for the NIST 1016 mm Guarded-Hot-Plate Apparatus: Extended Analysis for Low-Density Fibrous-Glass Thermal Insulation.

    PubMed

    Zarr, Robert R

    2010-01-01

    An assessment of uncertainties for the National Institute of Standards and Technology (NIST) 1016 mm Guarded-Hot-Plate apparatus is presented. The uncertainties are reported in a format consistent with current NIST policy on the expression of measurement uncertainty. The report describes a procedure for determination of component uncertainties for thermal conductivity and thermal resistance for the apparatus under operation in either the double-sided or single-sided mode of operation. An extensive example for computation of uncertainties for the single-sided mode of operation is provided for a low-density fibrous-glass blanket thermal insulation. For this material, the relative expanded uncertainty for thermal resistance increases from 1 % for a thickness of 25.4 mm to 3 % for a thickness of 228.6 mm. Although these uncertainties have been developed for a particular insulation material, the procedure and, to a lesser extent, the results are applicable to other insulation materials measured at a mean temperature close to 297 K (23.9 °C, 75 °F). The analysis identifies dominant components of uncertainty and, thus, potential areas for future improvement in the measurement process. For the NIST 1016 mm Guarded-Hot-Plate apparatus, considerable improvement, especially at higher values of thermal resistance, may be realized by developing better control strategies for guarding that include better measurement techniques for the guard gap thermopile voltage and the temperature sensors.
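
    The headline quantity, the relative expanded uncertainty of thermal resistance, combines the component uncertainties of the metered power, the plate temperature difference, and the metering area. The sketch below shows that combination for invented values; it is not the NIST uncertainty budget, which contains many more components.

```python
import numpy as np

# Illustrative single-sided measurement: metered power Q, hot/cold plate temperature
# difference dT, and metered area A, with assumed standard uncertainties.
Q, u_Q = 9.80, 0.02          # W
dT, u_dT = 22.2, 0.05        # K
A, u_A = 0.812, 0.001        # m^2

R = dT * A / Q               # thermal resistance, m^2 K / W
u_rel = np.sqrt((u_Q / Q) ** 2 + (u_dT / dT) ** 2 + (u_A / A) ** 2)
print(f"R = {R:.3f} m^2 K/W, relative expanded uncertainty (k=2) = {200 * u_rel:.2f} %")
```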

  1. Assessment of Uncertainties for the NIST 1016 mm Guarded-Hot-Plate Apparatus: Extended Analysis for Low-Density Fibrous-Glass Thermal Insulation

    PubMed Central

    Zarr, Robert R.

    2010-01-01

    An assessment of uncertainties for the National Institute of Standards and Technology (NIST) 1016 mm Guarded-Hot-Plate apparatus is presented. The uncertainties are reported in a format consistent with current NIST policy on the expression of measurement uncertainty. The report describes a procedure for determination of component uncertainties for thermal conductivity and thermal resistance for the apparatus under operation in either the double-sided or single-sided mode of operation. An extensive example for computation of uncertainties for the single-sided mode of operation is provided for a low-density fibrous-glass blanket thermal insulation. For this material, the relative expanded uncertainty for thermal resistance increases from 1 % for a thickness of 25.4 mm to 3 % for a thickness of 228.6 mm. Although these uncertainties have been developed for a particular insulation material, the procedure and, to a lesser extent, the results are applicable to other insulation materials measured at a mean temperature close to 297 K (23.9 °C, 75 °F). The analysis identifies dominant components of uncertainty and, thus, potential areas for future improvement in the measurement process. For the NIST 1016 mm Guarded-Hot-Plate apparatus, considerable improvement, especially at higher values of thermal resistance, may be realized by developing better control strategies for guarding that include better measurement techniques for the guard gap thermopile voltage and the temperature sensors. PMID:27134779

  2. Routine internal- and external-quality control data in clinical laboratories for estimating measurement and diagnostic uncertainty using GUM principles.

    PubMed

    Magnusson, Bertil; Ossowicki, Haakan; Rienitz, Olaf; Theodorsson, Elvar

    2012-05-01

    Healthcare laboratories are increasingly joining into larger laboratory organizations encompassing several physical laboratories. This caters for important new opportunities for re-defining the concept of a 'laboratory' to encompass all laboratories and measurement methods measuring the same measurand for a population of patients. In order to make measurement results comparable, bias should be minimized or eliminated and measurement uncertainty properly evaluated for all methods used for a particular patient population. The measurement uncertainty as well as the diagnostic uncertainty can be evaluated from internal and external quality control results using GUM principles. In this paper the uncertainty evaluations are described in detail using only two main components, within-laboratory reproducibility and the uncertainty of the bias component, according to a Nordtest guideline. The evaluation is exemplified for the determination of creatinine in serum for a conglomerate of laboratories, expressed both in absolute units (μmol/L) and in relative terms (%). An expanded measurement uncertainty of 12 μmol/L associated with concentrations of creatinine below 120 μmol/L and of 10% associated with concentrations above 120 μmol/L was estimated. The diagnostic uncertainty encompasses both measurement uncertainty and biological variation, and can be estimated for a single value and for a difference. This diagnostic uncertainty for the difference between two samples from the same patient was determined to be 14 μmol/L associated with concentrations of creatinine below 100 μmol/L and 14% associated with concentrations above 100 μmol/L.
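
    The Nordtest-style evaluation referred to here combines within-laboratory reproducibility from internal QC with a bias component from external QC: u_c = sqrt(u_Rw² + u_bias²) and U = 2·u_c. The sketch below runs those two steps on invented QC numbers.

```python
import numpy as np

# Illustrative QC data for serum creatinine (not the paper's values): within-lab
# reproducibility from internal QC, and bias estimates from an EQA scheme.
u_Rw = 3.5                              # %  within-laboratory reproducibility
bias = np.array([2.0, -1.5, 3.0, 1.0])  # %  bias vs. assigned EQA values
u_cref = 1.5                            # %  uncertainty of the assigned values

rms_bias = np.sqrt(np.mean(bias ** 2))
u_bias = np.sqrt(rms_bias ** 2 + u_cref ** 2)
u_c = np.sqrt(u_Rw ** 2 + u_bias ** 2)   # combined standard uncertainty
U = 2.0 * u_c                            # expanded uncertainty, k = 2
print(f"expanded measurement uncertainty: {U:.1f} %")
```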

  3. Impact of Nuclear Data Uncertainties on Calculated Spent Fuel Nuclide Inventories and Advanced NDA Instrument Response

    DOE PAGES

    Hu, Jianwei; Gauld, Ian C.

    2014-12-01

    The U.S. Department of Energy’s Next Generation Safeguards Initiative Spent Fuel (NGSI-SF) project is nearing the final phase of developing several advanced nondestructive assay (NDA) instruments designed to measure spent nuclear fuel assemblies for the purpose of improving nuclear safeguards. Current efforts are focusing on calibrating several of these instruments with spent fuel assemblies at two international spent fuel facilities. Modelling and simulation is expected to play an important role in predicting nuclide compositions, neutron and gamma source terms, and instrument responses in order to inform the instrument calibration procedures. As part of NGSI-SF project, this work was carried out to assess the impacts of uncertainties in the nuclear data used in the calculations of spent fuel content, radiation emissions and instrument responses. Nuclear data is an essential part of nuclear fuel burnup and decay codes and nuclear transport codes. Such codes are routinely used for analysis of spent fuel and NDA safeguards instruments. Hence, the uncertainties existing in the nuclear data used in these codes affect the accuracies of such analysis. In addition, nuclear data uncertainties represent the limiting (smallest) uncertainties that can be expected from nuclear code predictions, and therefore define the highest attainable accuracy of the NDA instrument. This work studies the impacts of nuclear data uncertainties on calculated spent fuel nuclide inventories and the associated NDA instrument response. Recently developed methods within the SCALE code system are applied in this study. The Californium Interrogation with Prompt Neutron instrument was selected to illustrate the impact of these uncertainties on NDA instrument response.

  4. Station Correction Uncertainty in Multiple Event Location Algorithms and the Effect on Error Ellipses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne

    Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station-specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation, as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.

  5. Impact of Nuclear Data Uncertainties on Calculated Spent Fuel Nuclide Inventories and Advanced NDA Instrument Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Jianwei; Gauld, Ian C.

    The U.S. Department of Energy’s Next Generation Safeguards Initiative Spent Fuel (NGSI-SF) project is nearing the final phase of developing several advanced nondestructive assay (NDA) instruments designed to measure spent nuclear fuel assemblies for the purpose of improving nuclear safeguards. Current efforts are focusing on calibrating several of these instruments with spent fuel assemblies at two international spent fuel facilities. Modelling and simulation is expected to play an important role in predicting nuclide compositions, neutron and gamma source terms, and instrument responses in order to inform the instrument calibration procedures. As part of NGSI-SF project, this work was carried out to assess the impacts of uncertainties in the nuclear data used in the calculations of spent fuel content, radiation emissions and instrument responses. Nuclear data is an essential part of nuclear fuel burnup and decay codes and nuclear transport codes. Such codes are routinely used for analysis of spent fuel and NDA safeguards instruments. Hence, the uncertainties existing in the nuclear data used in these codes affect the accuracies of such analysis. In addition, nuclear data uncertainties represent the limiting (smallest) uncertainties that can be expected from nuclear code predictions, and therefore define the highest attainable accuracy of the NDA instrument. This work studies the impacts of nuclear data uncertainties on calculated spent fuel nuclide inventories and the associated NDA instrument response. Recently developed methods within the SCALE code system are applied in this study. The Californium Interrogation with Prompt Neutron instrument was selected to illustrate the impact of these uncertainties on NDA instrument response.

  6. Implement Method for Automated Testing of Markov Chain Convergence into INVERSE for ORNL12-RS-108J: Advanced Multi-Dimensional Forward and Inverse Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bledsoe, Keith C.

    2015-04-01

    The DiffeRential Evolution Adaptive Metropolis (DREAM) method is a powerful optimization/uncertainty quantification tool used to solve inverse transport problems in Los Alamos National Laboratory’s INVERSE code system. The DREAM method has been shown to be adept at accurate uncertainty quantification, but it can be very computationally demanding. Previously, the DREAM method in INVERSE performed a user-defined number of particle transport calculations. This placed a burden on the user to guess the number of calculations that would be required to accurately solve any given problem. This report discusses a new approach that has been implemented into INVERSE, the Gelman-Rubin convergence metric. This metric automatically detects when an appropriate number of transport calculations have been completed and the uncertainty in the inverse problem has been accurately calculated. In a test problem with a spherical geometry, this method was found to decrease the number of transport calculations (and thus time required) to solve a problem by an average of over 90%. In a cylindrical test geometry, a 75% decrease was obtained.
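
    The Gelman-Rubin metric itself is simple to state: compare the within-chain and between-chain variances and declare convergence when their ratio approaches one. A minimal sketch, with toy chains in place of DREAM output, is shown below.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for an array of shape (n_chains, n_samples).
    Values close to 1 indicate that the chains have converged."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()            # within-chain variance
    B = n * chain_means.var(ddof=1)                  # between-chain variance
    var_hat = (n - 1) / n * W + B / n                # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(42)
chains = rng.normal(0.0, 1.0, size=(4, 2000))        # four well-mixed toy chains
print(f"R-hat = {gelman_rubin(chains):.3f}")          # ~1.00 when converged
```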

  7. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuemann, J; Grassberger, C; Paganetti, H

    2014-06-15

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencilbeam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal position of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patients sites (liver, prostate, whole brain). However, we recommend treatment plan verification using Monte Carlo simulations for patients with complex geometries.

  8. Strain Gauge Balance Uncertainty Analysis at NASA Langley: A Technical Review

    NASA Technical Reports Server (NTRS)

    Tripp, John S.

    1999-01-01

    This paper describes a method to determine the uncertainties of measured forces and moments from multi-component force balances used in wind tunnel tests. A multivariate regression technique is first employed to estimate the uncertainties of the six balance sensitivities and 156 interaction coefficients derived from established balance calibration procedures. These uncertainties are then employed to calculate the uncertainties of force-moment values computed from observed balance output readings obtained during tests. Confidence and prediction intervals are obtained for each computed force and moment as functions of the actual measurands. Techniques are discussed for separate estimation of balance bias and precision uncertainties.

  9. Calculating potential fields using microchannel spatial light modulators

    NASA Technical Reports Server (NTRS)

    Reid, Max B.

    1993-01-01

    We describe and present experimental results of the optical calculation of potential field maps suitable for mobile robot navigation. The optical computation employs two write modes of a microchannel spatial light modulator (MSLM). In one mode, written patterns expand spatially, and this characteristic is used to create an extended two dimensional function representing the influence of the goal in a robot's workspace. Distinct obstacle patterns are written in a second, non-expanding, mode. A model of the mechanisms determining MSLM write mode characteristics is developed and used to derive the optical calculation time for full potential field maps. Field calculations at a few hertz are possible with current technology, and calculation time vs. map size scales favorably in comparison to digital electronic computation.
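
    Independently of the optical hardware, the potential field being computed can be emulated digitally with an attractive term that grows with distance to the goal and repulsive terms at obstacles. The grid size, goal, and obstacle positions below are invented; the sketch only illustrates the kind of map the MSLM computes optically.

```python
import numpy as np

# Minimal grid-based potential field: an attractive well at the goal plus repulsive
# bumps at obstacles; the robot would descend the gradient toward the goal.
nx, ny = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
goal = (50, 10)
obstacles = [(30, 30), (20, 45)]

attract = np.hypot(x - goal[1], y - goal[0])                    # grows with distance to goal
repulse = sum(np.exp(-((x - ox) ** 2 + (y - oy) ** 2) / 20.0)   # localized obstacle hills
              for oy, ox in obstacles)
field = attract + 200.0 * repulse

# One gradient-descent step from a start position
gy, gx = np.gradient(field)
pos = np.array([10.0, 55.0])
step = -np.array([gy[int(pos[0]), int(pos[1])], gx[int(pos[0]), int(pos[1])]])
print("descent direction at start:", step / np.linalg.norm(step))
```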

  10. Parton shower and NLO-matching uncertainties in Higgs boson pair production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Stephen; Kuttimalai, Silvan

    We perform a detailed study of NLO parton shower matching uncertainties in Higgs boson pair production through gluon fusion at the LHC based on a generic and process independent implementation of NLO subtraction and parton shower matching schemes for loop-induced processes in the Sherpa event generator. We take into account the full top-quark mass dependence in the two-loop virtual corrections and compare the results to an effective theory approximation. In the full calculation, our findings suggest large parton shower matching uncertainties that are absent in the effective theory approximation. Here, we observe large uncertainties even in regions of phase space where fixed-order calculations are theoretically well motivated and parton shower effects expected to be small. We compare our results to NLO matched parton shower simulations and analytic resummation results that are available in the literature.

  11. Parton shower and NLO-matching uncertainties in Higgs boson pair production

    DOE PAGES

    Jones, Stephen; Kuttimalai, Silvan

    2018-02-28

    We perform a detailed study of NLO parton shower matching uncertainties in Higgs boson pair production through gluon fusion at the LHC based on a generic and process independent implementation of NLO subtraction and parton shower matching schemes for loop-induced processes in the Sherpa event generator. We take into account the full top-quark mass dependence in the two-loop virtual corrections and compare the results to an effective theory approximation. In the full calculation, our findings suggest large parton shower matching uncertainties that are absent in the effective theory approximation. Here, we observe large uncertainties even in regions of phase space where fixed-order calculations are theoretically well motivated and parton shower effects expected to be small. We compare our results to NLO matched parton shower simulations and analytic resummation results that are available in the literature.

  12. SU-F-T-316: A Model to Deal with Dosimetric and Delivery Uncertainties in Radiotherapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haering, P; Lang, C; Splinter, M

    2016-06-15

    Purpose: The conventional way of dealing with uncertainties resulting from dose calculation or beam delivery in IMRT is to perform verification measurements for the plan in question. Here we present an alternative based on recommendations given in the AAPM 142 report and on treatment-specific parameters that model the uncertainties of the plan delivery. Methods: The basis of the model is the assignment of uncertainty parameters to all segment fields or control point sequences of a plan. The given field shape is analyzed for complexity, dose rate, number of MU, and field-size-related output, as well as for factors for the in/out-field position and the penumbra regions. Together with depth-related uncertainties, a 3D matrix is generated by a projection algorithm. Patient anatomy is included as an uncertainty CT data set as well; object density is classified into 4 categories (close to water, lung, bone, and gradient regions) with additional uncertainties. The result is then exported as a DICOM dose file by the software tool (written in IDL, Exelis), having the given resolution and target point. Results: Uncertainty matrices for several patient cases have been calculated and compared side by side in the planning system. The result is not always intuitive, but it clearly indicates high and low uncertainties related to OARs and target volumes as well as to measured gamma distributions. Conclusion: The imported uncertainty data sets may help the treatment planner to understand the complexity of the treatment plan. The planner might then decide to change the plan to produce a more suitable uncertainty distribution; for example, by changing the beam angles the high-uncertainty spots can be influenced, or another treatment setup can be tried, resulting in a plan with lower uncertainties. A next step could be to include such a model in the optimization algorithm to add a new dose-uncertainty constraint.

  13. Tissue regeneration during tissue expansion and choosing an expander

    PubMed Central

    Agrawal, K.; Agrawal, S.

    2012-01-01

    This paper reviews the various aspects of tissue regeneration during the process of tissue expansion. “Creep” and mechanical and biological “stretch” are responsible for expansion. During expansion, the epidermis thickens, the dermis thins out, vascularity improves, significant angiogenesis occurs, the hair telogen phase becomes shorter, and the peripheral nerves, vessels and muscle fibres lengthen. Expansion is associated with molecular changes in the tissue. Almost all these biological changes are reversible after the removal of the expander. This study is also aimed at reviewing the difficulty in deciding the volume and dimension of the expander for a defect. Basic mathematical formulae and computer programmes for calculating the dimension of tissue expanders, although available in the literature, are not popular. A user-friendly computer programme based on the easily available Microsoft Excel spreadsheet has been introduced. When the area of the defect and the base dimension of the donor area or tissue expander are entered, this programme calculates the volume and height of the expander. The shape of the expander is decided clinically based on the availability of the donor area and the designing of the future tissue movement. Today, tissue expansion is better understood biologically and mechanically. Clinical judgement remains indispensable in choosing the size and shape of the tissue expander. PMID:22754146

  14. Accounting for Uncertainty and Time Lags in Equivalency Calculations for Offsetting in Aquatic Resources Management Programs

    NASA Astrophysics Data System (ADS)

    Bradford, Michael J.

    2017-10-01

    Biodiversity offset programs attempt to minimize unavoidable environmental impacts of anthropogenic activities by requiring offsetting measures in sufficient quantity to counterbalance losses due to the activity. Multipliers, or offsetting ratios, have been used to increase the amount of offsets to account for uncertainty, but those ratios have generally been derived from theoretical or ad hoc considerations. I analyzed uncertainty in the offsetting process in the context of offsetting for impacts to freshwater fisheries productivity. For aquatic habitats I demonstrate that an empirical, risk-based approach for evaluating prediction uncertainty is feasible, and that, if data are available, appropriate adjustments to offset requirements can be estimated. For two data-rich examples I estimate that multipliers in the range of 1.5:1 to 2.5:1 are sufficient to account for the uncertainty in the prediction of gains and losses. For aquatic habitats, adjustments for time delays in the delivery of offset benefits can also be calculated and are likely smaller than those for prediction uncertainty. However, the success of a biodiversity offsetting program will also depend on the management of the other components of risk not addressed by these adjustments.

  15. Uncertainties in Parameters Estimated with Neural Networks: Application to Strong Gravitational Lensing

    DOE PAGES

    Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.

    2017-11-15

    In Hezaveh et al. (2017) we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single hyperparameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that neural networks can be a fast alternative to Monte Carlo Markov Chains for parameter uncertainty estimation in many practical applications, allowing more than seven orders of magnitude improvement in speed.
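
    The dropout-tuning idea can be illustrated with a minimal Monte Carlo dropout sketch (a hypothetical toy network, not the authors' trained lensing CNN): dropout is kept active at inference time and the forward pass is repeated, so the spread of the predictions provides the uncertainty estimate whose width is controlled by the dropout rate.

        # Minimal Monte Carlo dropout sketch (toy regression network, illustrative only).
        import torch
        import torch.nn as nn

        class TinyRegressor(nn.Module):
            def __init__(self, n_in=8, n_out=1, p_drop=0.1):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(n_in, 64), nn.ReLU(), nn.Dropout(p_drop),
                    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p_drop),
                    nn.Linear(64, n_out),
                )
            def forward(self, x):
                return self.net(x)

        def mc_dropout_predict(model, x, n_samples=200):
            """Repeat stochastic forward passes with dropout active and return
            the predictive mean and standard deviation for each input."""
            model.train()                    # keep dropout layers stochastic at inference
            with torch.no_grad():
                draws = torch.stack([model(x) for _ in range(n_samples)])
            return draws.mean(dim=0), draws.std(dim=0)

        model = TinyRegressor(p_drop=0.1)    # the dropout rate is the hyperparameter tuned for coverage
        x = torch.randn(5, 8)                # placeholder inputs
        mean, std = mc_dropout_predict(model, x)
        print(mean.squeeze(), std.squeeze())

    In the paper's procedure the dropout rate would then be adjusted until the empirical coverage of the resulting intervals matches the nominal confidence level.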

  16. Fuel cycle cost uncertainty from nuclear fuel cycle comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J.; McNelis, D.; Yim, M.S.

    2013-07-01

    This paper examined the uncertainty in fuel cycle cost (FCC) calculation by considering both model and parameter uncertainty. Four different fuel cycle options were compared in the analysis including the once-through cycle (OT), the DUPIC cycle, the MOX cycle and a closed fuel cycle with fast reactors (FR). The model uncertainty was addressed by using three different FCC modeling approaches with and without the time value of money consideration. The relative ratios of FCC in comparison to OT did not change much by using different modeling approaches. This observation was consistent with the results of the sensitivity study for the discount rate. Two different sets of data with uncertainty range of unit costs were used to address the parameter uncertainty of the FCC calculation. The sensitivity study showed that the dominating contributor to the total variance of FCC is the uranium price. In general, the FCC of OT was found to be the lowest followed by FR, MOX, and DUPIC. But depending on the uranium price, the FR cycle was found to have lower FCC over OT. The reprocessing cost was also found to have a major impact on FCC.

  17. Uncertainties in Parameters Estimated with Neural Networks: Application to Strong Gravitational Lensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.

    In Hezaveh et al. (2017) we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single hyperparameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that neural networks can be a fast alternative to Monte Carlo Markov Chains for parameter uncertainty estimation in many practical applications, allowing more than seven orders of magnitude improvement in speed.

  18. Evaluation of Effective Sources in Uncertainty Measurements of Personal Dosimetry by a Harshaw TLD System

    PubMed Central

    Hosseini Pooya, SM; Orouji, T

    2014-01-01

    Background: Accurate individual dose results reported by personal dosimetry service providers are very important, and there are national and international criteria for acceptable dosimetry system performance. Objective: In this research, the sources of uncertainty in a TLD-based personal dosimetry system are identified, measured and calculated. Method: These sources include inhomogeneity of TLD sensitivity, variability of TLD readings due to limited sensitivity and background, energy dependence, directional dependence, non-linearity of the response, fading, dependence on ambient temperature and humidity, and calibration errors, all of which may affect the dose response. Parameters that influence these sources of uncertainty are studied for Harshaw TLD-100 card dosimeters together with the hot-gas Harshaw 6600 TLD reader system. Results: The individual uncertainty of each source was measured to be less than 6.7% at the 68% confidence level. The total uncertainty was calculated to be 17.5% at the 95% confidence level. Conclusion: The TLD-100 personal dosimeters together with the Harshaw 6600 reader system show a total uncertainty that is less than the admissible value of 42% for personal dosimetry services. PMID:25505769
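
    The combination step reported above follows the usual GUM convention: independent relative standard uncertainty components are added in quadrature and the result is multiplied by a coverage factor. A minimal sketch with made-up component values (not the paper's budget) is:

        # Combine independent relative standard uncertainty components (GUM-style)
        # and expand with a coverage factor; the component values are illustrative only.
        import math

        components = {                      # relative standard uncertainties (68% level), hypothetical
            "element_sensitivity": 0.040,
            "reader_background":   0.025,
            "energy_dependence":   0.050,
            "angular_dependence":  0.030,
            "non_linearity":       0.020,
            "fading":              0.035,
            "calibration":         0.045,
        }

        u_c = math.sqrt(sum(u**2 for u in components.values()))  # combined standard uncertainty
        k = 2.0                                                  # coverage factor (~95% level)
        U = k * u_c                                              # expanded uncertainty

        print(f"combined standard uncertainty: {u_c:.1%}")
        print(f"expanded uncertainty (k=2):    {U:.1%}")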

  19. Accounting for Uncertainty and Time Lags in Equivalency Calculations for Offsetting in Aquatic Resources Management Programs.

    PubMed

    Bradford, Michael J

    2017-10-01

    Biodiversity offset programs attempt to minimize unavoidable environmental impacts of anthropogenic activities by requiring offsetting measures in sufficient quantity to counterbalance losses due to the activity. Multipliers, or offsetting ratios, have been used to increase the amount of offsets to account for uncertainty but those ratios have generally been derived from theoretical or ad-hoc considerations. I analyzed uncertainty in the offsetting process in the context of offsetting for impacts to freshwater fisheries productivity. For aquatic habitats I demonstrate that an empirical risk-based approach for evaluating prediction uncertainty is feasible, and if data are available appropriate adjustments to offset requirements can be estimated. For two data-rich examples I estimate multipliers in the range of 1.5:1 - 2.5:1 are sufficient to account for the uncertainty in the prediction of gains and losses. For aquatic habitats adjustments for time delays in the delivery of offset benefits can also be calculated and are likely smaller than those for prediction uncertainty. However, the success of a biodiversity offsetting program will also depend on the management of the other components of risk not addressed by these adjustments.

  20. A data assimilation technique to account for the nonlinear dependence of scattering microwave observations of precipitation

    NASA Astrophysics Data System (ADS)

    Haddad, Z. S.; Steward, J. L.; Tseng, H.-C.; Vukicevic, T.; Chen, S.-H.; Hristova-Veleva, S.

    2015-06-01

    Satellite microwave observations of rain, whether from radar or passive radiometers, depend in a very crucial way on the vertical distribution of the condensed water mass and on the types and sizes of the hydrometeors in the volume resolved by the instrument. This crucial dependence is nonlinear, with different types and orders of nonlinearity that are due to differences in the absorption/emission and scattering signatures at the different instrument frequencies. Because it is not monotone as a function of the underlying condensed water mass, the nonlinearity requires great care in its representation in the observation operator, as the inevitable uncertainties in the numerous precipitation variables are not directly convertible into an additive white uncertainty in the forward calculated observations. In particular, when attempting to assimilate such data into a cloud-permitting model, special care needs to be applied to describe and quantify the expected uncertainty in the observations operator in order not to turn the implicit white additive uncertainty on the input values into complicated biases in the calculated radiances. One approach would be to calculate the means and covariances of the nonlinearly calculated radiances given an a priori joint distribution for the input variables. This would be a very resource-intensive proposal if performed in real time. We propose a representation of the observation operator based on performing this moment calculation off line, with a dimensionality reduction step to allow for the effective calculation of the observation operator and the associated covariance in real time during the assimilation. The approach is applicable to other remotely sensed observations that depend nonlinearly on model variables, including wind vector fields. The approach has been successfully applied to the case of tropical cyclones, where the organization of the system helps in identifying the dimensionality-reducing variables.
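
    A heavily simplified sketch of the proposed off-line moment calculation is given below: state vectors are drawn from an assumed a priori distribution, pushed through a stand-in nonlinear forward operator (a toy function here, not a radiative-transfer code), and the mean and covariance of the simulated observations are computed and stored, with a principal-component step reducing the dimensionality of the state description for real-time use.

        # Off-line estimation of observation-operator moments under an a priori state
        # distribution; the forward operator and all dimensions are toy stand-ins.
        import numpy as np

        rng = np.random.default_rng(0)

        def toy_forward_operator(states):
            """Nonlinear map from hydrometeor state to two toy 'radiance' channels."""
            wc = states[:, :3].sum(axis=1, keepdims=True)            # column water content
            return np.hstack([np.exp(-0.5 * wc), wc * np.exp(-0.3 * wc)])

        # 1. Sample precipitation state vectors from an assumed prior.
        n_samples, n_state = 20000, 6
        prior_mean = np.full(n_state, 1.0)
        prior_cov = 0.2 * np.eye(n_state)
        states = rng.multivariate_normal(prior_mean, prior_cov, size=n_samples)

        # 2. Dimensionality reduction of the state description (leading principal components).
        centered = states - states.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        scores = centered @ vt[:2].T

        # 3. Moments of the nonlinearly simulated observations, computed off line.
        obs = toy_forward_operator(states)
        obs_mean = obs.mean(axis=0)
        obs_cov = np.cov(obs, rowvar=False)

        # 4. Cheap surrogate for real-time assimilation: linear fit of the observations
        #    on the PC scores, with the residual covariance as operator uncertainty.
        design = np.column_stack([scores, np.ones(n_samples)])
        coeffs, *_ = np.linalg.lstsq(design, obs, rcond=None)
        operator_cov = np.cov(obs - design @ coeffs, rowvar=False)

        print("observation mean:", obs_mean)
        print("surrogate residual covariance:\n", operator_cov)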

  1. Conduct Disorder and Oppositional Defiant Disorder in a National Sample: Developmental Epidemiology

    ERIC Educational Resources Information Center

    Maughan, Barbara; Rowe, Richard; Messer, Julie; Goodman, Robert; Meltzer, Howard

    2004-01-01

    Background: Despite an expanding epidemiological evidence base, uncertainties remain over key aspects of the epidemiology of the "antisocial" disorders in childhood and adolescence. Methods: We used cross-sectional data on a nationally representative sample of 10,438 5-15-year-olds drawn from the 1999 British Child Mental Health Survey…

  2. Shiftwork: A Chaos Theory of Careers Agenda for Change in Career Counseling

    ERIC Educational Resources Information Center

    Bright, Jim E. H.; Pryor, Robert G. L.

    2008-01-01

    This paper presents the implications of the Chaos Theory of Careers for career counselling in the form of Shiftwork. Shiftwork represents an expanded paradigm of career counselling based on complexity, change and uncertainty. Eleven paradigm shifts for careers counselling are outlined to incorporate into contemporary practice pattern making, an…

  3. Bilateral key comparison on luminous flux COOMET.PR-K4.1. final report

    NASA Astrophysics Data System (ADS)

    Huriev, M.; Khlevnoy, B.; Tolstykh, G.; Ivashin, E.; Gorchkova, T.

    2017-01-01

    This report describes an international bilateral key comparison on luminous flux of tungsten lamps between the National Scientific Centre 'Institute of Metrology' (NSC 'IM', Ukraine) and All-Russian Research Institute for Optical and Physical Measurements (VNIIOFI, Russia). The comparison is intended to determine the Degrees of Equivalence (DoE) for NSC 'IM' and the associated expanded uncertainty. VNIIOFI acts as a laboratory, providing the link to the comparison CCPR-K4. The comparison used a set of tungsten incandescent lamps as a transfer standard with a luminous flux of approximately 3800 lm. The determined DoE of NSC 'IM' is -0.94% with an expanded uncertainty (k = 2) of 1.05%. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCPR, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  4. KEY COMPARISON: Report of the spectral irradiance comparison EURAMET.PR-K1.a.1 between MIKES (Finland) and NIMT (Thailand)

    NASA Astrophysics Data System (ADS)

    Ojanen, M.; Shpak, M.; Kärhä, P.; Leecharoen, R.; Ikonen, E.

    2009-01-01

    A bilateral comparison of the spectral irradiance scales between MIKES (Finland) and NIMT (Thailand) was carried out at 22 wavelengths between 290 nm and 900 nm. MIKES acted as the pilot and link to the results of the key comparison CCPR-K1.a. The spectral irradiance values measured by NIMT generally agree with the key comparison reference value within the expanded uncertainty. The only exceptions are results at wavelengths 300 nm, 450 nm and 500 nm, where the ratios between the degree of equivalence (DoE) and the expanded uncertainty of DoE (k = 2) are 1.0, 1.4 and 1.2, respectively. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCPR, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
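
    The reported ratios can be reproduced in a few lines: the degree of equivalence is the relative difference from the key comparison reference value, its expanded uncertainty combines the laboratory and reference standard uncertainties (correlations introduced by the link to CCPR-K1.a are neglected in this simplified sketch), and agreement is judged by the ratio |DoE|/U. The numbers below are illustrative placeholders, not the comparison data.

        # Simplified degree-of-equivalence check for one spectral irradiance point.
        # Illustrative values only; correlations from the comparison link are ignored.
        import math

        lab_value, u_lab = 1.012, 0.008      # lab result and its relative standard uncertainty
        kcrv, u_kcrv = 1.000, 0.005          # reference value and its standard uncertainty

        doe = (lab_value - kcrv) / kcrv                  # degree of equivalence (relative)
        u_doe = math.sqrt(u_lab**2 + u_kcrv**2)          # combined standard uncertainty of the DoE
        U_doe = 2.0 * u_doe                              # expanded uncertainty, k = 2

        ratio = abs(doe) / U_doe
        print(f"DoE = {doe:+.3%}, U(k=2) = {U_doe:.3%}, |DoE|/U = {ratio:.2f}")
        print("consistent with the KCRV" if ratio <= 1.0 else "outside the expanded uncertainty")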

  5. 1993 Intercomparison of Photometric Units Maintained at NIST (USA) and PTB (Germany)

    PubMed Central

    Ohno, Yoshihiro; Sauter, Georg

    1995-01-01

    A bilateral intercomparison of photometric units between NIST, USA and PTB, Germany has been conducted to update the knowledge of the relationship between the photometric units disseminated in each country. The luminous intensity unit (cd) and the luminous flux unit (lm) maintained at both laboratories are compared by circulating transfer standard lamps. Also, the photometric responsivity sv is compared by circulating a V(λ)-corrected detector with a built-in current-to-voltage converter. The results show that the difference of luminous intensity unit between NIST and PTB, (PTB-NIST)/NIST, is 0.2 % with a relative expanded uncertainty (coverage factor k = 2) of 0.24 %. The difference is reduced significantly from that at the 1985 CCPR intercomparison (0.9 %). The difference in luminous flux unit, (PTB – NIST)/NIST, is found to be 1.5 % with a relative expanded uncertainty (coverage factor k =2) of 0.15 %. The difference remained nearly the same as that at the 1985 intercomparison (1.6 %). These results agree with what is predicted from the history of maintaining the units at each laboratory. PMID:29151737

  6. An end-to-end assessment of range uncertainty in proton therapy using animal tissues.

    PubMed

    Zheng, Yuanshui; Kang, Yixiu; Zeidan, Omar; Schreuder, Niek

    2016-11-21

    Accurate assessment of range uncertainty is critical in proton therapy. However, there is a lack of data and consensus on how to evaluate the appropriate amount of uncertainty. The purpose of this study is to quantify the range uncertainty in various treatment conditions in proton therapy, using transmission measurements through various animal tissues. Animal tissues, including a pig head, beef steak, and lamb leg, were used in this study. For each tissue, an end-to-end test closely imitating patient treatments was performed. This included CT scan simulation, treatment planning, image-guided alignment, and beam delivery. Radio-chromic films were placed at various depths in the distal dose falloff region to measure depth dose. Comparisons between measured and calculated doses were used to evaluate range differences. The dose difference at the distal falloff between measurement and calculation depends on tissue type and treatment conditions. The estimated range difference was up to 5, 6 and 4 mm for the pig head, beef steak, and lamb leg irradiation, respectively. Our study shows that the TPS was able to calculate proton range within about 1.5% plus 1.5 mm. Accurate assessment of range uncertainty in treatment planning would allow better optimization of proton beam treatment, thus fully achieving proton beams' superior dose advantage over conventional photon-based radiation therapy.

  7. Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling

    DOE PAGES

    Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; ...

    2014-10-12

    The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for the coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.

  8. A general method for assessing the effects of uncertainty in individual-tree volume model predictions on large-area volume estimates with a subtropical forest illustration

    Treesearch

    Ronald E. McRoberts; Paolo Moser; Laio Zimermann Oliveira; Alexander C. Vibrans

    2015-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding the model predictions of volumes for individual trees at the plot level, calculating the mean over plots, and expressing the result on a per unit area basis. The uncertainty in the model predictions is generally ignored, with the result that the precision of the large-area...

  9. High-resolution Fourier transform measurements of air-induced broadening and shift coefficients in the 0002-0000 main isotopologue band of nitrous oxide

    NASA Astrophysics Data System (ADS)

    Werwein, Viktor; Li, Gang; Serdyukov, Anton; Brunzendorf, Jens; Werhahn, Olav; Ebert, Volker

    2018-06-01

    In the present study, we report highly accurate air-induced broadening and shift coefficients for the nitrous oxide (N2O) 0002-0000 band of the main isotopologue at 2.26 μm, retrieved from high-resolution Fourier transform infrared (FTIR) measurements with metrologically determined pressure, temperature, absorption path length and chemical composition. Most of our retrieved air-broadening coefficients agree with previously generated datasets within the expanded uncertainties (95% confidence interval). For the air-shift coefficients, our results suggest a different rotational dependence compared with the literature. The present study benefits from improved measurement conditions and a detailed metrological uncertainty description. Compared with the literature, the uncertainties of the broadening and shift coefficients are improved by factors of up to 39 and up to 22, respectively.

  10. SU-F-J-23: Field-Of-View Expansion in Cone-Beam CT Reconstruction by Use of Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haga, A; Magome, T; Nakano, M

    Purpose: Cone-beam CT (CBCT) has become an integral part of online patient setup in image-guided radiation therapy (IGRT). In addition, the utility of CBCT for dose calculation has been actively investigated. However, the limited size of the field of view (FOV) means that the resulting CBCT image lacks the peripheral area of the patient body, which compromises the reliability of dose calculation. In this study, we aim to develop an FOV-expanded CBCT within the IGRT system to allow dose calculation. Methods: Three lung cancer patients were selected for this study. We collected the cone-beam projection images in the CBCT-based IGRT system (X-ray volume imaging unit, ELEKTA), where the FOV of the CBCT provided with these projections was 410 × 410 mm² (normal FOV). Using these projections, a CBCT with a size of 728 × 728 mm² was reconstructed by an a posteriori estimation algorithm including prior image constrained compressed sensing (PICCS). The treatment planning CT was used as the prior image. To assess the effectiveness of the FOV expansion, a dose calculation was performed on the expanded CBCT image with a region-of-interest (ROI) density mapping method, and it was compared with that of the treatment planning CT as well as that of a CBCT reconstructed by the filtered back projection (FBP) algorithm. Results: The a posteriori estimation algorithm with PICCS clearly visualized the area outside the normal FOV, whereas the FBP algorithm yielded severe streak artifacts outside the normal FOV due to under-sampling. The dose calculation using the expanded CBCT agreed very well with that using the treatment planning CT; the maximum dose difference was 1.3% for gross tumor volumes. Conclusion: With the a posteriori estimation algorithm, the FOV in CBCT can be expanded. The dose comparison results suggest that the use of expanded CBCTs is acceptable for dose calculation in adaptive radiation therapy. This study has been supported by KAKENHI (15K08691).

  11. Assessment of Uncertainty in the Determination of Activation Energy for Polymeric Materials

    NASA Technical Reports Server (NTRS)

    Darby, Stephania P.; Landrum, D. Brian; Coleman, Hugh W.

    1998-01-01

    An assessment of the experimental uncertainty in obtaining the kinetic activation energy from thermogravimetric analysis (TGA) data is presented. A neat phenolic resin, Borden SC1O08, was heated at three heating rates to obtain weight loss vs temperature data. Activation energy was calculated by two methods: the traditional Flynn and Wall method based on the slope of log(q) versus 1/T, and a modification of this method where the ordinate and abscissa are reversed in the linear regression. The modified method produced a more accurate curve fit of the data, was more sensitive to data nonlinearity, and gave a value of activation energy 75 percent greater than the original method. An uncertainty analysis using the modified method yielded a 60 percent uncertainty in the average activation energy. Based on this result, the activation energy for a carbon-phenolic material was doubled and used to calculate the ablation rate in a typical solid rocket environment. Doubling the activation energy increased surface recession by 3 percent. Current TGA data reduction techniques that use the traditional Flynn and Wall approach to calculate activation energy should be changed to the modified method.
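
    The difference between the two regressions can be sketched directly. With the isoconversional Flynn-Wall-Ozawa approximation, the activation energy follows from the slope of log10(q) versus 1/T, and reversing which variable is treated as the ordinate in the least-squares fit changes the recovered slope, and hence Ea, whenever the data are not perfectly linear. The heating rates, temperatures, and the 0.4567 approximation constant below are standard textbook assumptions, not the SC1O08 TGA data or the paper's exact reduction.

        # Flynn-Wall-style activation energy from heating rate vs. temperature data,
        # comparing the traditional regression (log10 q on 1/T) with the reversed
        # regression (1/T on log10 q). Data are synthetic placeholders.
        import numpy as np

        R = 8.314                                # J/(mol K)
        q = np.array([5.0, 10.0, 20.0])          # heating rates, K/min (hypothetical)
        T = np.array([620.0, 635.0, 652.0])      # temperature at fixed conversion, K (hypothetical)

        x = 1.0 / T
        y = np.log10(q)

        # Traditional: regress log10(q) on 1/T; slope = -0.4567 * Ea / R (FWO approximation).
        slope_traditional = np.polyfit(x, y, 1)[0]
        Ea_traditional = -slope_traditional * R / 0.4567

        # Modified: regress 1/T on log10(q), then invert that slope before converting.
        slope_reversed = 1.0 / np.polyfit(y, x, 1)[0]
        Ea_reversed = -slope_reversed * R / 0.4567

        print(f"traditional fit: Ea = {Ea_traditional / 1000:.0f} kJ/mol")
        print(f"reversed fit:    Ea = {Ea_reversed / 1000:.0f} kJ/mol")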

  12. A comparison of water use and water-use-efficiency of maize and biomass sorghum in the rain-fed, Midwestern, US.

    NASA Astrophysics Data System (ADS)

    Roby, M.; Salas Fernandez, M.; VanLoocke, A. D.

    2014-12-01

    There is growing consensus among model projections that climate change may increase the frequency and intensity of drought in the rain-fed, maize-dominated, Midwestern US. Uncertainty in the availability of water, combined with an increased demand for non-grain ethanol feedstock, may necessitate expanding the production of more water-use-efficient and less drought sensitive crops for biomass applications. Research suggests that biomass sorghum [Sorghum bicolor (L.) Moench] is more drought tolerant and can produce more biomass than maize in water-limiting environments; however, sorghum water use data are limited for the rain-fed Midwestern US. To address this gap, a replicated (n=3) side-by-side trial was established in Ames, Iowa to determine cumulative water use and water-use-efficiency of maize and biomass sorghum. Data were collected by micrometeorological stations located in the center of each plot and used to calculate cumulative evapotranspiration throughout the 2014 growing season using the residual energy balance method. Continuous micrometeorological measurements were supplemented by periodic measurements of leaf area index (LAI) and above-ground biomass. At mid-point of the growing season, preliminary data analysis revealed similar water use for sorghum and maize. Data collection will continue for the remainder of the growing season, at which point a stronger conclusion can be drawn. This research will provide important insight on the potential hydrologic effects of expanding biomass sorghum production in the Midwestern, US.
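
    The residual energy-balance step amounts to closing the surface energy budget for each averaging interval: the latent heat flux is what remains after subtracting the sensible and ground heat fluxes from net radiation, and is then converted to a depth of water. A minimal sketch with placeholder flux values (not the Ames measurements):

        # Residual energy balance: latent heat flux as the residual of the surface
        # energy budget, converted to an evapotranspiration depth for one interval.
        LAMBDA_V = 2.45e6        # latent heat of vaporization, J/kg (approximate, ~20 C)

        def et_mm(rn, g, h, dt_seconds=1800.0):
            """Evapotranspiration depth (mm) for one averaging interval.

            rn: net radiation (W/m2), g: ground heat flux (W/m2), h: sensible heat flux (W/m2).
            """
            le = rn - g - h                                # latent heat flux by residual (W/m2)
            return max(le, 0.0) * dt_seconds / LAMBDA_V    # kg/m2 over the interval == mm of water

        # Example half-hour with hypothetical fluxes: Rn = 450, G = 40, H = 120 W/m2.
        print(f"ET this interval: {et_mm(450.0, 40.0, 120.0):.3f} mm")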

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrison, Hali, E-mail: hamorris@ualberta.ca; Meno

    Purpose: To estimate the total dosimetric uncertainty at the tumor apex for ocular brachytherapy treatments delivered using 16 mm Collaborative Ocular Melanoma Study (COMS) and Super9 plaques loaded with ¹²⁵I seeds in order to determine the size of the apex margin that would be required to ensure adequate dosimetric coverage of the tumor. Methods: The total dosimetric uncertainty was assessed for three reference tumor heights: 3, 5, and 10 mm, using the Guide to the expression of Uncertainty in Measurement/National Institute of Standards and Technology approach. Uncertainties pertaining to seed construction, source strength, plaque assembly, treatment planning calculations, tumor height measurement, plaque placement, and plaque tilt for a simple dome-shaped tumor were investigated and quantified to estimate the total dosimetric uncertainty at the tumor apex. Uncertainties in seed construction were determined using EBT3 Gafchromic film measurements around single seeds, plaque assembly uncertainties were determined using high resolution microCT scanning of loaded plaques to measure seed positions in the plaques, and all other uncertainties were determined from the previously published studies and recommended values. All dose calculations were performed using PLAQUESIMULATOR v5.7.6 ophthalmic treatment planning system with the inclusion of plaque heterogeneity corrections. Results: The total dosimetric uncertainties at 3, 5, and 10 mm tumor heights for the 16 mm COMS plaque were 17.3%, 16.1%, and 14.2%, respectively, and for the Super9 plaque were 18.2%, 14.4%, and 13.1%, respectively (all values with coverage factor k = 2). The apex margins at 3, 5, and 10 mm tumor heights required to adequately account for these uncertainties were 1.3, 1.3, and 1.4 mm, respectively, for the 16 mm COMS plaque, and 1.8, 1.4, and 1.2 mm, respectively, for the Super9 plaque. These uncertainties and associated margins are dependent on the dose gradient at the given prescription depth, thus resulting in the changing uncertainties and margins with depth. Conclusions: The margins determined in this work can be used as a guide for determining an appropriate apex margin for a given treatment, which can be chosen based on the tumor height. The required margin may need to be increased for more complex scenarios (mushroom shaped tumors, tumors close to the optic nerve, oblique muscle related tilt, etc.) than the simple dome-shaped tumor examined and should be chosen on a case-by-case basis. The sources of uncertainty contributing most significantly to the total dosimetric uncertainty are seed placement within the plaques, treatment planning calculations, tumor height measurement, and plaque tilt. This work presents an uncertainty-based, rational approach to estimating an appropriate apex margin.

  14. CCQM-K90, formaldehyde in nitrogen, 2 μmol mol-1 Final report

    NASA Astrophysics Data System (ADS)

    Viallon, Joële; Flores, Edgar; Idrees, Faraz; Moussay, Philippe; Wielgosz, Robert Ian; Kim, D.; Kim, Y. D.; Lee, S.; Persijn, S.; Konopelko, L. A.; Kustikov, Y. A.; Malginov, A. V.; Chubchenko, I. K.; Klimov, A. Y.; Efremova, O. V.; Zhou, Z.; Possolo, A.; Shimosaka, T.; Brewer, P.; Macé, T.; Ferracci, Valerio; Brown, Richard J. C.; Aoki, Nobuyuki

    2017-01-01

    The CCQM-K90 comparison is designed to evaluate the level of comparability of national metrology institutes (NMI) or designated institutes (DI) measurement capabilities for formaldehyde in nitrogen at a nominal mole fraction of 2 μmol mol-1. The comparison was organised by the BIPM using a suite of gas mixtures prepared by a producer of specialty calibration gases. The BIPM assigned the formaldehyde mole fraction in the mixtures by comparison with primary mixtures generated dynamically by permeation coupled with continuous weighing in a magnetic suspension balance. The BIPM developed two dynamic sources of formaldehyde in nitrogen that provide two independent values of the formaldehyde mole fraction: the first one based on diffusion of trioxane followed by thermal conversion to formaldehyde, the second one based on permeation of formaldehyde from paraformaldehyde contained in a permeation tube. Two independent analytical methods, based on cavity ring down spectroscopy (CRDS) and Fourier transform infrared spectroscopy (FTIR) were used for the assignment procedure. Each participating institute was provided with one transfer standard and value assigned the formaldehyde mole fraction in the standard based on its own measurement capabilities. The stability of the formaldehyde mole fraction in transfer standards was deduced from repeated measurements performed at the BIPM before and after measurements performed at participating institutes. In addition, 5 control standards were kept at the BIPM for regular measurements during the course of the comparison. Temporal trends that approximately describe the linear decrease of the amount-of-substance fraction of formaldehyde in nitrogen in the transfer standards over time were estimated by two different mathematical treatments, the outcomes of which were proposed to participants. The two treatments also differed in the way measurement uncertainties arising from measurements performed at the BIPM were propagated to the uncertainty of the trend parameters, as well as how the dispersion of the dates when measurements were made by the participants was taken into account. Upon decision of the participants, the Key Comparison Reference Values were assigned by the BIPM using the largest uncertainty for measurements performed at the BIPM, linear regression without weight to calculate the trend parameters, and not taking into account the dispersion of dates for measurements made by the participant. Each transfer standard was assigned its own reference value and associated expanded uncertainty. An expression for the degree of equivalence between each participating institute and the KCRV was calculated from the comparison results and measurement uncertainties submitted by participating laboratories. Results of the alternative mathematical treatment are presented in annex of this report. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
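
    The unweighted trend treatment chosen for the reference values amounts to an ordinary least-squares line through the pilot laboratory's repeated measurements of each transfer standard, evaluated at a chosen reference date. The sketch below uses invented numbers, not the CCQM-K90 data, and ignores the propagation of measurement uncertainty to the trend parameters discussed in the report.

        # Unweighted linear trend of the formaldehyde amount fraction in one transfer
        # standard over time, evaluated at the participant's measurement date.
        import numpy as np

        days = np.array([0.0, 30.0, 90.0, 180.0, 270.0])          # days since first pilot measurement
        x_hcho = np.array([2.010, 2.004, 1.993, 1.978, 1.962])    # umol/mol (hypothetical)

        slope, intercept = np.polyfit(days, x_hcho, 1)            # unweighted least squares
        t_participant = 140.0                                     # participant measurement date (days)
        reference_value = intercept + slope * t_participant

        print(f"trend: {slope * 1000:.3f} nmol/mol per day")
        print(f"reference value at day {t_participant:.0f}: {reference_value:.4f} umol/mol")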

  15. Uncertainty of fast biological radiation dose assessment for emergency response scenarios.

    PubMed

    Ainsbury, Elizabeth A; Higueras, Manuel; Puig, Pedro; Einbeck, Jochen; Samaga, Daniel; Barquinero, Joan Francesc; Barrios, Lleonard; Brzozowska, Beata; Fattibene, Paola; Gregoire, Eric; Jaworska, Alicja; Lloyd, David; Oestreicher, Ursula; Romm, Horst; Rothkamm, Kai; Roy, Laurence; Sommer, Sylwester; Terzoudi, Georgia; Thierens, Hubert; Trompier, Francois; Vral, Anne; Woda, Clemens

    2017-01-01

    Reliable dose estimation is an important factor in appropriate dosimetric triage categorization of exposed individuals to support radiation emergency response. Following work done under the EU FP7 MULTIBIODOSE and RENEB projects, formal methods for defining uncertainties on biological dose estimates are compared using simulated and real data from recent exercises. The results demonstrate that a Bayesian method of uncertainty assessment is the most appropriate, even in the absence of detailed prior information. The relative accuracy and relevance of techniques for calculating uncertainty and combining assay results to produce single dose and uncertainty estimates is further discussed. Finally, it is demonstrated that whatever uncertainty estimation method is employed, ignoring the uncertainty on fast dose assessments can have an important impact on rapid biodosimetric categorization.
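
    The Bayesian treatment favoured there can be sketched on a grid for the standard dicentric assay: with a linear-quadratic calibration curve and Poisson counting statistics, the posterior for dose follows directly from Bayes' rule. The calibration coefficients and counts below are illustrative assumptions, not the exercise data.

        # Grid-based Bayesian dose estimate from a dicentric chromosome count, assuming
        # a Poisson yield with a linear-quadratic calibration curve (illustrative values).
        import numpy as np
        from scipy.stats import poisson

        c, alpha, beta = 0.001, 0.02, 0.06     # assumed calibration: Y(D) = c + alpha*D + beta*D^2 per cell
        n_cells, n_dic = 500, 45               # cells scored and dicentrics observed (hypothetical)

        doses = np.linspace(0.0, 6.0, 601)                          # dose grid, Gy
        expected = n_cells * (c + alpha * doses + beta * doses**2)  # expected dicentric count

        log_like = poisson.logpmf(n_dic, expected)
        post = np.exp(log_like - log_like.max())                    # flat prior over the grid
        post /= np.trapz(post, doses)

        mean_dose = np.trapz(doses * post, doses)
        cdf = np.cumsum(post) * (doses[1] - doses[0])
        lo, hi = doses[np.searchsorted(cdf, 0.025)], doses[np.searchsorted(cdf, 0.975)]
        print(f"posterior mean dose: {mean_dose:.2f} Gy, 95% interval: [{lo:.2f}, {hi:.2f}] Gy")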

  16. Radiation Effect on Human Tissue

    NASA Technical Reports Server (NTRS)

    Richmond, Robert C.; Cruz, Angela; Bors, Karen; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    Predicting the occurrence of human cancer following exposure of an epidemiologic population to any agent causing genetic damage is a difficult task. To an approximation, this is because the uncertainty of uniform exposure to the damaging agent, and the uncertainty of uniform processing of that damage within a complex set of biological variables, degrade the confidence of predicting the delayed expression of cancer as a relatively rare event within clinically normal individuals. This situation begs the need for alternate controlled experimental models that are predictive of the development of human cancer following exposures to agents causing genetic damage. Such models historically have not been of substantial proven value. It is more recently encouraging, however, that developments in molecular and cell biology have led to an expanded knowledge of human carcinogenesis, and of molecular markers associated with that process. It is therefore appropriate to consider new laboratory models developed to accommodate that expanded knowledge in order to assess the cancer risks associated with exposures to genotoxic agents. When ionizing radiation of space is the genotoxic agent, then a series of additional considerations for human cancer risk assessment must also be applied. These include the dose of radiation absorbed by tissue at different locations in the body, the quality of the absorbed radiation, the rate at which absorbed dose accumulates in tissue, the way in which absorbed dose is measured and calculated, and the alterations in incident radiation caused by shielding materials. It is clear that human cancer risk assessment for damage caused by ionizing radiation is a multidisciplinary responsibility, and that within this responsibility no single discipline can hold disproportionate sway if a risk assessment model of radiation-induced human cancer is to be developed that has proven value. Biomolecular and cellular markers from the work reported here are considered for use in assessing human cancer risk related to exposure to space radiation. This potential use must be integrated within the specified multidisciplinary context in order to create a new tool of molecular epidemiology that can hopefully then realistically assess this cancer risk.

  17. SU-E-T-615: Plan Comparison Between Photon IMRT and Proton Plans Incorporating Uncertainty Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, C; Wessels, B; Jesseph, F

    2015-06-15

    Purpose: In this study, we investigate the effect of setup uncertainty on DVH calculations, which may impact plan comparison. Methods: Treatment plans (6 MV VMAT calculated on the Pinnacle TPS) were chosen for different disease sites (brain, prostate, H&N and spine) in this retrospective study. A proton plan (PP) using double-scattering beams was generated for each selected VMAT plan, subject to the same set of dose-volume constraints as the VMAT plan. An uncertainty analysis was incorporated into the DVH calculations in which isocenter shifts from 1 to 5 mm in each of the ±x, ±y and ±z directions were used to simulate setup uncertainty and residual positioning errors. A total of 40 different combinations of isocenter shifts were used in the recalculation of the DVH of the PTV and the various OARs for both the VMAT plan and the corresponding PP. Results: For the brain case, VMAT and PP are comparable in PTV coverage and OAR sparing, and VMAT is a clear choice for treatment due to its ease of delivery. However, when isocenter shifts are incorporated in the DVH calculations, a significant change in the dose-volume relationship emerges. For example, both plans provide adequate PTV coverage even with a ±3 mm shift, but a +3 mm shift increases V40 of the left cochlea from 7.2% in the original VMAT plan to 45% and V40 of the right cochlea from 75% to 92%. For the proton plan, V40 of the left cochlea increases from 62% in the initial plan to 75%, while V40 of the right cochlea increases from 7% to 26%. Conclusion: DVH alone may not be sufficient to allow an unequivocal decision in plan comparison, especially when two rival plans are very similar in both PTV coverage and OAR sparing. It is good practice to incorporate uncertainty analysis in photon-proton plan comparison studies to test plan robustness during plan evaluation.

  18. ISA implementation and uncertainty: a literature review and expert elicitation study.

    PubMed

    van der Pas, J W G M; Marchau, V A W J; Walker, W E; van Wee, G P; Vlassenroot, S H

    2012-09-01

    Each day, an average of over 116 people die from traffic accidents in the European Union. One out of three fatalities is estimated to be the result of speeding. The current state of technology makes it possible to make speeding more difficult, or even impossible, by placing intelligent speed limiters (so called ISA devices) in vehicles. Although the ISA technology has been available for some years now, and reducing the number of road traffic fatalities and injuries has been high on the European political agenda, implementation still seems to be far away. Experts indicate that there are still too many uncertainties surrounding ISA implementation, and dealing with these uncertainties is essential for implementing ISA. In this paper, a systematic and representative inventory of the uncertainties is made based upon the literature. Furthermore, experts in the field of ISA were surveyed and asked which uncertainties are barriers for ISA implementation, and how uncertain these uncertainties are. We found that the long-term effects and the effects of large-scale implementation of ISA are still uncertain and are the most important barriers for the implementation of the most effective types of ISA. One way to deal with these uncertainties would be to start implementation on a small scale and gradually expand the penetration, in order to learn how ISA influences the transport system over time. Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. AEROFROSH: a shock condition calculator for multi-component fuel aerosol-laden flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Matthew Frederick; Haylett, D. R.; Davidson, D. F.

    Here, this paper introduces an algorithm that determines the thermodynamic conditions behind incident and reflected shocks in aerosol-laden flows. Importantly, the algorithm accounts for the effects of droplet evaporation on post-shock properties. Additionally, this article describes an algorithm for resolving the effects of multiple-component-fuel droplets. This article presents the solution methodology and compares the results to those of another similar shock calculator. It also provides examples to show the impact of droplets on post-shock properties and the impact that multi-component fuel droplets have on shock experimental parameters. Finally, this paper presents a detailed uncertainty analysis of this algorithm’s calculations given typical experimental uncertainties.

  20. AEROFROSH: a shock condition calculator for multi-component fuel aerosol-laden flows

    DOE PAGES

    Campbell, Matthew Frederick; Haylett, D. R.; Davidson, D. F.; ...

    2015-08-18

    Here, this paper introduces an algorithm that determines the thermodynamic conditions behind incident and reflected shocks in aerosol-laden flows. Importantly, the algorithm accounts for the effects of droplet evaporation on post-shock properties. Additionally, this article describes an algorithm for resolving the effects of multiple-component-fuel droplets. This article presents the solution methodology and compares the results to those of another similar shock calculator. It also provides examples to show the impact of droplets on post-shock properties and the impact that multi-component fuel droplets have on shock experimental parameters. Finally, this paper presents a detailed uncertainty analysis of this algorithm’s calculations given typical experimental uncertainties.

  1. AtomDB: Expanding an Accessible and Accurate Atomic Database for X-ray Astronomy

    NASA Astrophysics Data System (ADS)

    Smith, Randall

    Since its inception in 2001, the AtomDB has become the standard repository of accurate and accessible atomic data for the X-ray astrophysics community, including laboratory astrophysicists, observers, and modelers. Modern calculations of collisional excitation rates now exist - and are in AtomDB - for all abundant ions in a hot plasma. AtomDB has expanded beyond providing just a collisional model, and now also contains photoionization data from XSTAR as well as a charge exchange model, amongst others. However, building and maintaining an accurate and complete database that can fully exploit the diagnostic potential of high-resolution X-ray spectra requires further work. The Hitomi results, sadly limited as they were, demonstrated the urgent need for the best possible wavelength and rate data, not merely for the strongest lines but for the diagnostic features that may have 1% or less of the flux of the strong lines. In particular, incorporation of weak but powerfully diagnostic satellite lines will be crucial to understanding the spectra expected from upcoming deep observations with Chandra and XMM-Newton, as well as the XARM and Athena satellites. Beyond incorporating this new data, a number of groups, both experimental and theoretical, have begun to produce data with errors and/or sensitivity estimates. We plan to use this to create statistically meaningful spectral errors on collisional plasmas, providing practical uncertainties together with model spectra. We propose to continue to (1) engage the X-ray astrophysics community regarding their issues and needs, notably by a critical comparison with other related databases and tools, (2) enhance AtomDB to incorporate a large number of satellite lines as well as updated wavelengths with error estimates, (3) continue to update the AtomDB with the latest calculations and laboratory measurements, in particular velocity-dependent charge exchange rates, and (4) enhance existing tools, and create new ones as needed to increase the functionality of, and access to, AtomDB.

  2. Uncertainties that flight crews and dispatchers must consider when calculating the fuel needed for a flight

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.

    1996-01-01

    In 1993, fuel accounted for approximately 15% of an airline's expenses. Fuel consumption increases as fuel reserves increase because of the added weight to the aircraft. Calculating fuel reserves is a function of Federal Aviation Regulations, airline company policy, and factors that impact or are impacted by fuel usage enroute. This research studied how pilots and dispatchers determined the fuel needed for a flight and identified areas where improvements in methods may yield measurable fuel savings by (1) listing the uncertainties that contribute to adding contingency fuel, (2) obtaining the pilots' and dispatchers' perspective on how often each uncertainty occurred, and (3) obtaining pilots' and dispatchers' perspective on the fuel used for each occurrence. This study found that for the majority of the time, pilots felt that dispatchers included enough fuel. As for the uncertainties that flight crews and dispatchers account for, air traffic control accounts for 28% and weather uncertainties account for 58%. If improvements can be made in these two areas, a great potential exists to decrease the reserve required, and therefore, fuel usage without jeopardizing safety.

  3. Experimental and modeling uncertainties in the validation of lower hybrid current drive

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poli, F. M.; Bonoli, P. T.; Chilenski, M.

    Our work discusses sources of uncertainty in the validation of lower hybrid wave current drive simulations against experiments, by evolving self-consistently the magnetic equilibrium and the heating and current drive profiles, calculated with a combined toroidal ray tracing code and 3D Fokker–Planck solver. The simulations indicate a complex interplay of elements, where uncertainties in the input plasma parameters, in the models and in the transport solver combine and compensate each other, at times. It is concluded that ray-tracing calculations should include a realistic representation of the density and temperature in the region between the confined plasma and the wall, which is especially important in regimes where the LH waves are weakly damped and undergo multiple reflections from the plasma boundary. Uncertainties introduced in the processing of diagnostic data as well as uncertainties introduced by model approximations are assessed. We show that, by comparing the evolution of the plasma parameters in self-consistent simulations with available data, inconsistencies can be identified and limitations in the models or in the experimental data assessed.

  4. Operational Implementation of a Pc Uncertainty Construct for Conjunction Assessment Risk Analysis

    NASA Technical Reports Server (NTRS)

    Newman, Lauri K.; Hejduk, Matthew D.; Johnson, Lauren C.

    2016-01-01

    Earlier this year the NASA Conjunction Assessment and Risk Analysis (CARA) project presented the theoretical and algorithmic aspects of a method to include the uncertainties in the calculation inputs when computing the probability of collision (Pc) between two space objects, principally uncertainties in the covariances and the hard-body radius. Rather than a single Pc value, this calculation approach produces an entire probability density function representing the range of possible Pc values given the uncertainties in the inputs, bringing CA risk analysis methodologies more in line with modern risk management theory. The present study provides results from the exercise of this method against an extended dataset of satellite conjunctions in order to determine the effect of its use on the evaluation of conjunction assessment (CA) event risk posture. The effects are found to be considerable: a good number of events are downgraded from or upgraded to a serious risk designation on the basis of consideration of the Pc uncertainty. The findings counsel the integration of the developed methods into NASA CA operations.
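
    The flavour of the construct can be conveyed with a small Monte Carlo sketch: instead of a single Pc, the covariance scale and hard-body radius are sampled from assumed uncertainty distributions and a Pc is computed for each draw, yielding a distribution of possible Pc values. The sketch uses the common small-object, 2D Gaussian approximation in the conjunction plane and invented numbers; it is not the CARA algorithm.

        # Monte Carlo sketch of a Pc distribution under input uncertainty, using the
        # small hard-body 2D Gaussian approximation (all numbers are placeholders).
        import numpy as np

        rng = np.random.default_rng(1)

        def pc_small_object(dx, dy, sx, sy, hbr):
            """Collision probability, small hard-body approximation, diagonal covariance."""
            return (hbr**2 / (2.0 * sx * sy)) * np.exp(-0.5 * (dx**2 / sx**2 + dy**2 / sy**2))

        n = 100_000
        dx, dy = 350.0, 120.0                                   # nominal miss components, m
        scale = rng.lognormal(mean=0.0, sigma=0.3, size=n)      # uncertainty in covariance realism
        sx, sy = 200.0 * scale, 90.0 * scale                    # scaled in-plane sigmas, m
        hbr = rng.normal(20.0, 3.0, size=n).clip(min=1.0)       # uncertain hard-body radius, m

        pc = pc_small_object(dx, dy, sx, sy, hbr)
        print(f"nominal Pc:           {pc_small_object(dx, dy, 200.0, 90.0, 20.0):.2e}")
        print(f"median / 95th pct Pc: {np.median(pc):.2e} / {np.percentile(pc, 95):.2e}")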

  5. Experimental and modeling uncertainties in the validation of lower hybrid current drive

    DOE PAGES

    Poli, F. M.; Bonoli, P. T.; Chilenski, M.; ...

    2016-07-28

    Our work discusses sources of uncertainty in the validation of lower hybrid wave current drive simulations against experiments, by evolving self-consistently the magnetic equilibrium and the heating and current drive profiles, calculated with a combined toroidal ray tracing code and 3D Fokker–Planck solver. The simulations indicate a complex interplay of elements, where uncertainties in the input plasma parameters, in the models and in the transport solver combine and compensate each other, at times. It is concluded that ray-tracing calculations should include a realistic representation of the density and temperature in the region between the confined plasma and the wall, which is especially important in regimes where the LH waves are weakly damped and undergo multiple reflections from the plasma boundary. Uncertainties introduced in the processing of diagnostic data as well as uncertainties introduced by model approximations are assessed. We show that, by comparing the evolution of the plasma parameters in self-consistent simulations with available data, inconsistencies can be identified and limitations in the models or in the experimental data assessed.

  6. The Role of Economic Uncertainty on the Block Economic Value - a New Valuation Approach / Rola Czynnika Niepewności Przy Obliczaniu Wskaźnika Rentowności - Nowe Podejście

    NASA Astrophysics Data System (ADS)

    Dehghani, H.; Ataee-Pour, M.

    2012-12-01

    The block economic value (EV) is one of the most important parameters in mine evaluation. It affects significant factors such as the mining sequence, the final pit limit and the net present value. The aim of open pit mine planning is to define optimum pit limits and an optimum life-of-mine production schedule that maximize the pit value under technical and operational constraints. It is therefore necessary to calculate the block economic value correctly at the first stage of the mine planning process. Unrealistic block economic value estimation may cause mining project managers to make wrong decisions and thus impose irreparable losses on the project. In the conventional methods of EV calculation, the effective parameters such as metal price, operating cost and grade are always assumed to be certain, although these parameters are obviously uncertain in nature; as a result, the conventional methods' results are usually far from reality. To address this problem, a new technique based on a binomial tree developed in this research is used. This method can calculate the EV and the project PV under economic uncertainty. In this paper, the EV and project PV were first determined using the Whittle formula with certain economic parameters, and then using a multivariate binomial tree based on economic uncertainties such as metal price and cost uncertainty, and the results were compared. It is concluded that accounting for metal price and cost uncertainty makes the calculated block economic value and net present value more realistic than under the assumption of certainty.
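
    A single-factor version of the idea can be sketched as follows: the metal price is evolved on a recombining binomial lattice, the block economic value is computed at each terminal node, and the expectation is discounted back to the present. The multivariate tree of the paper (joint price and cost uncertainty) is not reproduced here, and all block and market figures are hypothetical.

        # One-factor binomial-lattice estimate of the expected block economic value under
        # metal price uncertainty (Cox-Ross-Rubinstein parameters; illustrative figures).
        import math

        def block_ev(price, tonnes=100_000.0, grade=0.01, recovery=0.85, cost_per_tonne=45.0):
            """Economic value of one block for a given metal price ($ per tonne of metal)."""
            return tonnes * (grade * recovery * price - cost_per_tonne)

        S0, sigma, r = 6500.0, 0.25, 0.05      # spot price ($/t), volatility, discount rate
        T_years, n_steps = 3.0, 36
        dt = T_years / n_steps

        u = math.exp(sigma * math.sqrt(dt))
        d = 1.0 / u
        p = (math.exp(r * dt) - d) / (u - d)   # up-move probability on the lattice

        expected_ev = 0.0
        for j in range(n_steps + 1):           # terminal nodes of the recombining lattice
            prob = math.comb(n_steps, j) * p**j * (1.0 - p)**(n_steps - j)
            expected_ev += prob * block_ev(S0 * u**j * d**(n_steps - j))
        expected_ev *= math.exp(-r * T_years)  # discount back to the present

        print(f"certain-price EV:    {block_ev(S0):,.0f} $")
        print(f"lattice expected EV: {expected_ev:,.0f} $")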

  7. Uncertainty Evaluation of Residential Central Air-conditioning Test System

    NASA Astrophysics Data System (ADS)

    Li, Haoxue

    2018-04-01

    According to national standards, property tests of air-conditioning units are required. However, the test results can be influenced by the precision of the apparatus and by measurement errors, so an uncertainty evaluation of the property tests should be conducted. In this paper, the uncertainties of the property tests of a Xinfei 13.6 kW residential central air-conditioning unit are calculated. The evaluation shows that the property test results are credible.

  8. Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity

    Treesearch

    Harbin Li; Steven G. McNulty

    2007-01-01

    Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL...

  9. Solar Eclipse Monitoring for Solar Energy Applications Using the Solar and Moon Position Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reda, I.

    2010-03-01

    This report includes a procedure for implementing an algorithm (described by Jean Meeus) to calculate the moon's zenith angle with an uncertainty of ±0.001° and its azimuth angle with an uncertainty of ±0.003°. The step-by-step format presented here simplifies the complicated steps Meeus describes to calculate the Moon's position, and focuses on the Moon instead of the planets and stars. It also introduces some changes to accommodate solar radiation applications.

  10. Evaluation and Uncertainty of a New Method to Detect Suspected Nuclear and WMD Activity: Project Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurzeja, R.; Werth, D.; Buckley, R.

    The Atmospheric Technology Group at SRNL developed a new method to detect signals from Weapons of Mass Destruction (WMD) activities in a time series of chemical measurements at a downwind location. This method was tested with radioxenon measured in Russia and Japan after the 2013 underground test in North Korea. This LDRD calculated the uncertainty in the method with the measured data and also for a case with the signal reduced to 1/10 its measured value. The research showed that the uncertainty in the calculated probability of origin from the NK test site was small enough to confirm the test. The method was also well-behaved for small signal strengths.

  11. The NIST Simple Guide for Evaluating and Expressing Measurement Uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio

    2016-11-01

    NIST has recently published guidance on the evaluation and expression of the uncertainty of NIST measurement results [1, 2], supplementing but not replacing B. N. Taylor and C. E. Kuyatt's (1994) Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results (NIST Technical Note 1297) [3], which tracks closely the Guide to the expression of uncertainty in measurement (GUM) [4], originally published in 1995 by the Joint Committee for Guides in Metrology of the International Bureau of Weights and Measures (BIPM). The scope of this Simple Guide, however, is much broader than the scope of both NIST Technical Note 1297 and the GUM, because it attempts to address several of the uncertainty evaluation challenges that have arisen at NIST since the 1990s, for example to include molecular biology, greenhouse gases and climate science measurements, and forensic science. The Simple Guide also expands the scope of those two other guidance documents by recognizing observation equations (that is, statistical models) as bona fide measurement models. These models are indispensable to reduce data from interlaboratory studies, to combine measurement results for the same measurand obtained by different methods, and to characterize the uncertainty of calibration and analysis functions used in the measurement of force, temperature, or composition of gas mixtures. This presentation reviews the salient aspects of the Simple Guide, illustrates the use of models and methods for uncertainty evaluation not contemplated in the GUM, and also demonstrates the NIST Uncertainty Machine [5] and the NIST Consensus Builder, which are web-based applications accessible worldwide that facilitate evaluations of measurement uncertainty and the characterization of consensus values in interlaboratory studies.

  12. Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.

    NASA Technical Reports Server (NTRS)

    Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.

    2013-01-01

    We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic
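
    The propagation idea can also be cross-checked numerically: perturb the atomic rates within their assumed uncertainties, re-solve the statistical-equilibrium system, and examine the dispersion of the level populations. The three-level toy ion below (made-up rates, not the O III or Fe II models) uses a Monte Carlo stand-in for the paper's analytic linear equations.

        # Monte Carlo propagation of atomic-rate uncertainties to level populations
        # for a toy three-level ion (illustrative rates and uncertainties only).
        import numpy as np

        rng = np.random.default_rng(2)
        ne = 1.0e3                       # electron density, cm^-3 (assumed)

        # Nominal rates (cm^3 s^-1 for collisions, s^-1 for A-values) and relative 1-sigma uncertainties.
        q_up   = {(0, 1): 1.0e-8, (0, 2): 3.0e-9, (1, 2): 2.0e-9}
        q_down = {(1, 0): 4.0e-9, (2, 0): 1.5e-9, (2, 1): 1.0e-9}
        A      = {(1, 0): 2.0e-5, (2, 0): 3.0e-2, (2, 1): 1.0e-3}
        rel_unc = {"q": 0.20, "A": 0.10}

        def populations(qu, qd, a):
            """Solve statistical equilibrium for the fractional level populations."""
            rate = np.zeros((3, 3))
            for (i, j), q in qu.items():
                rate[i, j] += ne * q
            for (i, j), q in qd.items():
                rate[i, j] += ne * q
            for (i, j), aij in a.items():
                rate[i, j] += aij
            m = rate.T - np.diag(rate.sum(axis=1))   # dn/dt = m @ n
            m[-1, :] = 1.0                           # replace last equation by normalization
            return np.linalg.solve(m, np.array([0.0, 0.0, 1.0]))

        draws = []
        for _ in range(5000):
            qu = {k: v * rng.lognormal(0.0, rel_unc["q"]) for k, v in q_up.items()}
            qd = {k: v * rng.lognormal(0.0, rel_unc["q"]) for k, v in q_down.items()}
            a  = {k: v * rng.lognormal(0.0, rel_unc["A"]) for k, v in A.items()}
            draws.append(populations(qu, qd, a))
        draws = np.array(draws)

        print("nominal populations:", populations(q_up, q_down, A))
        print("MC std of populations:", draws.std(axis=0))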

  13. Uncertainty Analysis of the NASA Glenn 8x6 Supersonic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Stephens, Julia; Hubbard, Erin; Walter, Joel; McElroy, Tyler

    2016-01-01

    This paper presents methods and results of a detailed measurement uncertainty analysis that was performed for the 8- by 6-foot Supersonic Wind Tunnel located at the NASA Glenn Research Center. The statistical methods and engineering judgments used to estimate elemental uncertainties are described. The Monte Carlo method of propagating uncertainty was selected to determine the uncertainty of calculated variables of interest. A detailed description of the Monte Carlo method as applied for this analysis is provided. Detailed uncertainty results for the uncertainty in average free stream Mach number as well as other variables of interest are provided. All results are presented as random (variation in observed values about a true value), systematic (potential offset between observed and true value), and total (random and systematic combined) uncertainty. The largest sources contributing to uncertainty are determined and potential improvement opportunities for the facility are investigated.

  14. Calculation of the detection limit in radiation measurements with systematic uncertainties

    NASA Astrophysics Data System (ADS)

    Kirkpatrick, J. M.; Russ, W.; Venkataraman, R.; Young, B. M.

    2015-06-01

    The detection limit (LD) or Minimum Detectable Activity (MDA) is an a priori evaluation of assay sensitivity intended to quantify the suitability of an instrument or measurement arrangement for the needs of a given application. Traditional approaches as pioneered by Currie rely on Gaussian approximations to yield simple, closed-form solutions, and neglect the effects of systematic uncertainties in the instrument calibration. These approximations are applicable over a wide range of applications, but are of limited use in low-count applications, when high confidence values are required, or when systematic uncertainties are significant. One proposed modification to the Currie formulation attempts to account for systematic uncertainties within a Gaussian framework. We have previously shown that this approach results in an approximation formula that works best only for small values of the relative systematic uncertainty, for which the modification of Currie's method is the least necessary, and that it significantly overestimates the detection limit or gives infinite or otherwise non-physical results for larger systematic uncertainties where such a correction would be the most useful. We have developed an alternative approach for calculating detection limits based on realistic statistical modeling of the counting distributions which accurately represents statistical and systematic uncertainties. Instead of a closed form solution, numerical and iterative methods are used to evaluate the result. Accurate detection limits can be obtained by this method for the general case.
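
    A minimal numerical sketch of the kind of iterative detection-limit calculation described above is given below: the background is modeled as Poisson, a relative systematic uncertainty on the calibration factor is marginalized by Monte Carlo, and the net-signal detection limit is found by bisection. All numbers (background counts, error rates, systematic uncertainty) are illustrative, and the algorithm is a simplified stand-in for, not a reproduction of, the authors' method.

    ```python
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(1)

    b = 12.0        # expected background counts in the counting window (illustrative)
    alpha = 0.05    # acceptable false-positive probability
    beta = 0.05     # acceptable false-negative probability
    sys_rel = 0.15  # relative systematic uncertainty of the calibration/efficiency factor

    # Critical level: smallest integer threshold keeping the false-positive rate below alpha
    Lc = int(poisson.ppf(1.0 - alpha, b))

    # Monte Carlo sample of the calibration factor, drawn once so the search is stable
    eps = np.clip(rng.normal(1.0, sys_rel, size=5000), 0.0, None)

    def miss_probability(s_true):
        """P(observed counts <= Lc) for true net signal s_true, averaged over the
        systematic uncertainty of the calibration factor."""
        return np.mean(poisson.cdf(Lc, b + s_true * eps))

    # Bisection for the smallest net signal whose miss probability drops to beta
    lo, hi = 0.0, 500.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if miss_probability(mid) > beta:
            lo = mid
        else:
            hi = mid

    print(f"critical level Lc = {Lc} counts, detection limit Ld ~ {hi:.1f} net counts")
    ```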

  15. Final report on EURAMET.T-K6.1: Bilateral comparison of the realisations of local dew/frost-point temperature scales in the range -70 °C to +20 °C

    NASA Astrophysics Data System (ADS)

    Heinonen, Martti; Zvizdic, Davor; Sestan, Danijel

    2013-01-01

    As the European extension of the first CCT humidity key comparison, EUROMET.T-K6 was successfully completed in 2008. After this comparison, a new low dew-point generator was introduced at LPM in Croatia as a result of progress in the EUROMET P912 project. With this new facility, the LPM uncertainties decreased significantly and the operating range became significantly wider. Therefore, it was decided to arrange a bilateral comparison between LPM and MIKES in Finland providing a link to EUROMET.T-K6 and CCT-K6. This comparison was carried out in a manner similar to other K6 comparisons, but only one transfer standard was used instead of two units and the measurement point -70 °C was added to the measurement scheme. At all measurement points, the bilateral equivalence was well within the estimated expanded uncertainty at the approximately 95% confidence level. Also, the deviations of the LPM results from the EUROMET.T-K6 reference values were smaller than their expanded uncertainties. This text appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org). The final report has been peer-reviewed and approved for publication by the CCT, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  16. Evaluation of Neutron-induced Cross Sections and their Related Covariances with Physical Constraints

    NASA Astrophysics Data System (ADS)

    De Saint Jean, C.; Archier, P.; Privas, E.; Noguère, G.; Habert, B.; Tamagno, P.

    2018-02-01

    Nuclear data, along with numerical methods and the associated calculation schemes, continue to play a key role in reactor design, reactor core operating parameter calculations, fuel cycle management and criticality safety calculations. Due to the intensive use of Monte-Carlo calculations reducing numerical biases, the final accuracy of neutronic calculations increasingly depends on the quality of the nuclear data used. This paper gives a broad picture of all ingredients treated by nuclear data evaluators during their analyses. After giving an introduction to nuclear data evaluation, we present implications of using Bayesian inference to obtain evaluated cross sections and related uncertainties. In particular, focus is placed on systematic uncertainties appearing in the analysis of differential measurements as well as the advantages and drawbacks one may encounter by analyzing integral experiments. The evaluation work is in general done independently in the resonance and in the continuum energy ranges, giving rise to inconsistencies in evaluated files. For future evaluations on the whole energy range, we call attention to two innovative methods used to analyze several nuclear reaction models and impose constraints. Finally, we discuss suggestions for possible improvements in the evaluation process to master the quantification of uncertainties. These are associated with experiments (microscopic and integral), nuclear reaction theories and Bayesian inference.

  17. Estimating the uncertainty in thermochemical calculations for oxygen-hydrogen combustors

    NASA Astrophysics Data System (ADS)

    Sims, Joseph David

    The thermochemistry program CEA2 was combined with the statistical thermodynamics program PAC99 in a Monte Carlo simulation to determine the uncertainty in several CEA2 output variables due to uncertainty in thermodynamic reference values for the reactant and combustion species. In all, six typical performance parameters were examined, along with the required intermediate calculations (five gas properties and eight stoichiometric coefficients), for three hydrogen-oxygen combustors: a main combustor, an oxidizer preburner and a fuel preburner. The three combustors were analyzed in two different modes: design mode, where, for the first time, the uncertainty in thermodynamic reference values---taken from the literature---was considered (inputs to CEA2 were specified and so had no uncertainty); and data reduction mode, where inputs to CEA2 did have uncertainty. The inputs to CEA2 were contrived experimental measurements that were intended to represent the typical combustor testing facility. In design mode, uncertainties in the performance parameters were on the order of 0.1% for the main combustor, on the order of 0.05% for the oxidizer preburner and on the order of 0.01% for the fuel preburner. Thermodynamic reference values for H2O were the dominant sources of uncertainty, as was the assigned enthalpy for liquid oxygen. In data reduction mode, uncertainties in performance parameters increased significantly as a result of the uncertainties in experimental measurements compared to uncertainties in thermodynamic reference values. Main combustor and fuel preburner theoretical performance values had uncertainties of about 0.5%, while the oxidizer preburner had nearly 2%. Associated experimentally-determined performance values for all three combustors were 3% to 4%. The dominant sources of uncertainty in this mode were the propellant flowrates. These results only apply to hydrogen-oxygen combustors and should not be generalized to every propellant combination. Species for a hydrogen-oxygen system are relatively simple, thereby resulting in low thermodynamic reference value uncertainties. Hydrocarbon combustors, solid rocket motors and hybrid rocket motors have combustion gases containing complex molecules that will likely have thermodynamic reference values with large uncertainties. Thus, every chemical system should be analyzed in a similar manner as that shown in this work.

  18. Applied groundwater modeling, 2nd Edition

    USGS Publications Warehouse

    Anderson, Mary P.; Woessner, William W.; Hunt, Randall J.

    2015-01-01

    This second edition is extensively revised throughout with expanded discussion of modeling fundamentals and coverage of advances in model calibration and uncertainty analysis that are revolutionizing the science of groundwater modeling. The text is intended for undergraduate and graduate level courses in applied groundwater modeling and as a comprehensive reference for environmental consultants and scientists/engineers in industry and governmental agencies.

  19. CERT Resiliency Engineering Framework

    DTIC Science & Technology

    2007-03-01

    Heightened threat level and increasing uncertainty; shorter-lived skills. Operational risk management problems: poor planning and execution; no asset ... increasingly effective & efficient. Today's operational environment: no operational boundaries; pervasive & rapidly changing technology; dynamic & expanding risks. ... management function seen as a technical function or responsibility; searching for a magic bullet: CobiT, ITIL, ISO17799, NFP1600; poorly defined and measured.

  20. Origin and phylogeography of the wheat stem sawfly, Cephus cinctus Norton (Hymenoptera : Cephidae): implications for pest management

    USDA-ARS?s Scientific Manuscript database

    The wheat stem sawfly, Cephus cinctus Norton (Hymenoptera: Cephidae), is a key pest of wheat in the northern Great Plains of North America, and damage by this species has recently expanded southward. Current pest management practices are not very effective and uncertainties regarding its origin and i...

  1. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    NASA Astrophysics Data System (ADS)

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-08-01

    The purpose of this study was to assess the possibility of introducing site-specific range margins to replace current generic margins in proton therapy. Further, the goal was to study the potential of reducing margins with current analytical dose calculation methods. For this purpose we investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo (MC) simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for seven disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head and neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and MC algorithms to obtain the average range differences and root mean square deviation for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimates. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing MC dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments. On the other hand, current margins seem to be insufficient for some breast, lung and head and neck patients, at least if used generically. If no case-specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be needed for breast, lung and head and neck treatments. We conclude that the currently used generic range uncertainty margins in proton therapy should be redefined site-specifically and that complex geometries may require a field-specific adjustment. Routine verifications of treatment plans using MC simulations are recommended for patients with heterogeneous geometries.

  2. Computing the Entropy of Kerr-Newman Black Hole Without Brick Walls Method

    NASA Astrophysics Data System (ADS)

    Zhang, Li-Chun; Wu, Yue-Qin; Li, Huai-Fan; Ren, Zhao

    By using the entanglement entropy method, the statistical entropy of the Bose and Fermi fields in a thin film is calculated and the Bekenstein-Hawking entropy of the Kerr-Newman black hole is obtained. Here, the Bose and Fermi fields are entangled with the quantum states in the Kerr-Newman black hole and lie outside the horizon. The divergence of the brick-wall model is avoided, without any cutoff, by the new equation of state density obtained with the generalized uncertainty principle. The calculation implies that the high-density quantum states near the event horizon are strongly correlated with the quantum states in the black hole. The black hole entropy is a quantum effect. It is an intrinsic characteristic of space-time. The ultraviolet cutoff in the brick-wall model is unreasonable. The generalized uncertainty principle should be considered in the high-energy quantum field near the event horizon. From the calculation, the constant λ introduced in the generalized uncertainty principle is related to the polar angle θ in an axisymmetric space-time.

  3. Substructure Versus Property-Level Dispersed Modes Calculation

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.; Peck, Jeff A.; Bush, T. Jason; Fulcher, Clay W.

    2016-01-01

    This paper calculates the effect of perturbed finite element mass and stiffness values on the eigenvectors and eigenvalues of the finite element model. The structure is perturbed in two ways: at the "subelement" level and at the material property level. In the subelement eigenvalue uncertainty analysis the mass and stiffness of each subelement are perturbed by a factor before being assembled into the global matrices. In the property-level eigenvalue uncertainty analysis all material density and stiffness parameters of the structure are modified prior to the eigenvalue analysis. The eigenvalue and eigenvector dispersions of each analysis (subelement and property-level) are also calculated using an analytical sensitivity approximation. Two structural models are used to compare these methods: a cantilevered beam model, and a model of the Space Launch System. For each structural model it is shown how well the analytical sensitivity modes approximate the exact modes when the uncertainties are applied at the subelement level and at the property level.
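
    The subelement-level perturbation idea can be illustrated with a very small model: each element's stiffness and mass are scaled by random factors, the global matrices are reassembled, and the generalized eigenproblem is solved repeatedly to build up the dispersion of the natural frequencies. The sketch below uses a fixed-free spring-mass chain with made-up nominal values and dispersion levels, not the beam or Space Launch System models from the paper.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(2)
    n_elem = 10                 # spring-mass chain, fixed at one end (illustrative model)
    k_nom, m_nom = 1.0e4, 1.0   # nominal element stiffness and lumped mass

    def assemble(k_fac, m_fac):
        """Assemble global K and M for a fixed-free chain with per-element factors."""
        K = np.zeros((n_elem, n_elem))
        M = np.diag(m_nom * m_fac)
        for e in range(n_elem):
            ke = k_nom * k_fac[e]
            K[e, e] += ke
            if e > 0:
                K[e - 1, e - 1] += ke
                K[e - 1, e] -= ke
                K[e, e - 1] -= ke
        return K, M

    freqs = []
    for _ in range(2000):
        k_fac = rng.normal(1.0, 0.05, n_elem)     # 5 % stiffness dispersion per element
        m_fac = rng.normal(1.0, 0.02, n_elem)     # 2 % mass dispersion per element
        K, M = assemble(k_fac, m_fac)
        w2 = eigh(K, M, eigvals_only=True)
        freqs.append(np.sqrt(w2[:3]) / (2 * np.pi))  # first three natural frequencies (Hz)

    freqs = np.array(freqs)
    for i, (mu, sd) in enumerate(zip(freqs.mean(0), freqs.std(0)), 1):
        print(f"mode {i}: {mu:.2f} Hz +/- {100 * sd / mu:.2f} %")
    ```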

  4. Comparison of beam position calculation methods for application in digital acquisition systems

    NASA Astrophysics Data System (ADS)

    Reiter, A.; Singh, R.

    2018-05-01

    Different approaches to the data analysis of beam position monitors in hadron accelerators are compared adopting the perspective of an analog-to-digital converter in a sampling acquisition system. Special emphasis is given to position uncertainty and robustness against bias and interference that may be encountered in an accelerator environment. In a time-domain analysis of data in the presence of statistical noise, the position calculation based on the difference-over-sum method with algorithms like signal integral or power can be interpreted as a least-squares analysis of a corresponding fit function. This link to the least-squares method is exploited in the evaluation of analysis properties and in the calculation of position uncertainty. In an analytical model and experimental evaluations the positions derived from a straight line fit or equivalently the standard deviation are found to be the most robust and to offer the least variance. The measured position uncertainty is consistent with the model prediction in our experiment, and the results of tune measurements improve significantly.
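
    The difference-over-sum position estimate with the signal-integral algorithm, and the statistical spread it inherits from ADC noise, can be illustrated with a short Monte Carlo sketch. The pulse shape, monitor sensitivity and noise level below are all invented for illustration and do not correspond to any particular monitor.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_samp = 256
    t = np.arange(n_samp)
    pulse = np.exp(-0.5 * ((t - 128) / 20.0) ** 2)   # idealized bunch signal (arbitrary units)

    k_pos = 10.0        # monitor sensitivity, mm per unit of (A-B)/(A+B) -- illustrative
    x_true = 1.5        # true beam offset in mm
    noise_rms = 0.02    # additive white noise on each ADC sample

    a_clean = pulse * (1.0 + x_true / k_pos)   # plate A sees a slightly larger signal
    b_clean = pulse * (1.0 - x_true / k_pos)

    positions = []
    for _ in range(5000):
        a = a_clean + rng.normal(0.0, noise_rms, n_samp)
        b = b_clean + rng.normal(0.0, noise_rms, n_samp)
        A, B = a.sum(), b.sum()                 # "signal integral" algorithm
        positions.append(k_pos * (A - B) / (A + B))

    positions = np.array(positions)
    print(f"position = {positions.mean():.3f} mm, statistical spread = {positions.std()*1e3:.1f} um")
    ```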

  5. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas

    Surface reaction networks involving hydrocarbons exhibit enormous complexity with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by teaching a Gaussian process to predict adsorption energies from group-additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.
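
    A toy version of the on-the-fly surrogate idea is sketched below: a Gaussian process (here via scikit-learn, which is an assumption, not the authors' code) is trained on a few mock fingerprint vectors, and at each step the candidate with the largest predictive uncertainty is sent to a stand-in for the expensive DFT calculation. The fingerprints, the hidden "true" energies and the selection rule are all simplified placeholders for the scheme described in the abstract.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(4)

    # Mock "group additivity" fingerprints for 200 candidate intermediates (made-up)
    X = rng.uniform(0.0, 1.0, size=(200, 5))
    true_energy = X @ np.array([1.2, -0.8, 0.5, 0.3, -1.5])   # hidden ground truth (eV)

    def expensive_dft(i):
        """Stand-in for a DFT adsorption-energy calculation with small numerical noise."""
        return true_energy[i] + rng.normal(0.0, 0.02)

    trained_idx = list(rng.choice(len(X), size=5, replace=False))   # initial training set
    y = [expensive_dft(i) for i in trained_idx]

    kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-3)
    for step in range(15):
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[trained_idx], y)
        mean, std = gp.predict(X, return_std=True)
        std[trained_idx] = 0.0                       # never re-select a computed point
        nxt = int(np.argmax(std))                    # most uncertain candidate next
        trained_idx.append(nxt)
        y.append(expensive_dft(nxt))
        print(f"step {step:2d}: picked candidate {nxt:3d}, predictive sigma = {std[nxt]:.3f} eV")
    ```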

  6. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    DOE PAGES

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas; ...

    2017-03-06

    Surface reaction networks involving hydrocarbons exhibit enormous complexity with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by teaching a Gaussian process to predict adsorption energies from group-additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.

  7. Precise dielectric property measurements and E-field probe calibration for specific absorption rate measurements using a rectangular waveguide

    PubMed Central

    Hakim, B M; Beard, B B; Davis, C C

    2018-01-01

    Specific absorption rate (SAR) measurements require accurate calculations of the dielectric properties of tissue-equivalent liquids and associated calibration of E-field probes. We developed a precise tissue-equivalent dielectric measurement and E-field probe calibration system. The system consists of a rectangular waveguide, electric field probe, and data control and acquisition system. Dielectric properties are calculated using the field attenuation factor inside the tissue-equivalent liquid and power reflectance inside the waveguide at the air/dielectric-slab interface. Calibration factors were calculated using isotropicity measurements of the E-field probe. The frequencies used are 900 MHz and 1800 MHz. The uncertainties of the measured values are within ±3%, at the 95% confidence level. Using the same waveguide for dielectric measurements as well as calibrating E-field probes used in SAR assessments eliminates a source of uncertainty. Moreover, we clearly identified the system parameters that affect the overall uncertainty of the measurement system. PMID:29520129

  8. Uncertainty in Measurement: A Review of Monte Carlo Simulation Using Microsoft Excel for the Calculation of Uncertainties Through Functional Relationships, Including Uncertainties in Empirically Derived Constants

    PubMed Central

    Farrance, Ian; Frenkel, Robert

    2014-01-01

    The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more ‘constants’, each of which has an empirically derived numerical value. Such empirically derived ‘constants’ must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand. PMID:24659835
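
    For readers who prefer a scripting language to a spreadsheet, the same Monte Carlo propagation can be written in a few lines of Python. The sketch below uses the Friedewald calculation of LDL cholesterol as a convenient functional relationship that contains an empirically derived constant; the stated measurement uncertainties and the uncertainty assigned to that constant are illustrative, not recommended values.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    N = 200_000

    # Illustrative example: Friedewald calculation of LDL cholesterol (mmol/L)
    #   LDL = TC - HDL - TG / k      with the empirical constant k ~ 2.2
    # Measured values and standard uncertainties (e.g. from IQC data) -- made-up numbers
    TC  = rng.normal(5.20, 0.10, N)    # total cholesterol
    HDL = rng.normal(1.30, 0.05, N)    # HDL cholesterol
    TG  = rng.normal(1.80, 0.09, N)    # triglycerides
    k   = rng.normal(2.20, 0.10, N)    # empirically derived 'constant' with its own uncertainty

    LDL = TC - HDL - TG / k            # propagate all variations through the relationship

    print(f"LDL = {LDL.mean():.2f} mmol/L, combined standard uncertainty = {LDL.std():.3f} mmol/L")
    print(f"95% interval: {np.percentile(LDL, 2.5):.2f} to {np.percentile(LDL, 97.5):.2f} mmol/L")
    ```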

  9. Uncertainty in measurement: a review of monte carlo simulation using microsoft excel for the calculation of uncertainties through functional relationships, including uncertainties in empirically derived constants.

    PubMed

    Farrance, Ian; Frenkel, Robert

    2014-02-01

    The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value. Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand.

  10. Assessment of uncertainties in radiation-induced cancer risk predictions at clinically relevant doses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, J.; Moteabbed, M.; Paganetti, H., E-mail: hpaganetti@mgh.harvard.edu

    2015-01-15

    Purpose: Theoretical dose–response models offer the possibility to assess second cancer induction risks after external beam therapy. The parameters used in these models are determined with limited data from epidemiological studies. Risk estimations are thus associated with considerable uncertainties. This study aims at illustrating uncertainties when predicting the risk for organ-specific second cancers in the primary radiation field, using selected treatment plans for brain cancer patients as examples. Methods: A widely used risk model was considered in this study. The uncertainties of the model parameters were estimated with reported data of second cancer incidences for various organs. Standard error propagation was then applied to assess the uncertainty in the risk model. Next, second cancer risks of five pediatric patients treated for cancer in the head and neck regions were calculated. For each case, treatment plans for proton and photon therapy were designed to estimate the uncertainties (a) in the lifetime attributable risk (LAR) for a given treatment modality and (b) when comparing risks of two different treatment modalities. Results: Uncertainties in excess of 100% of the risk were found for almost all organs considered. When applied to treatment plans, the calculated LAR values have uncertainties of the same magnitude. A comparison between cancer risks of different treatment modalities, however, does allow statistically significant conclusions. In the studied cases, the patient-averaged LAR ratio of proton and photon treatments was 0.35, 0.56, and 0.59 for brain carcinoma, brain sarcoma, and bone sarcoma, respectively. Their corresponding uncertainties were estimated to be potentially below 5%, depending on uncertainties in dosimetry. Conclusions: The uncertainty in the dose–response curve in cancer risk models makes it currently impractical to predict the risk for an individual external beam treatment. On the other hand, the ratio of absolute risks between two modalities is less sensitive to the uncertainties in the risk model and can provide statistically significant estimates.
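
    The key point that a ratio of risks is far less sensitive to a shared, highly uncertain risk coefficient than either absolute risk can be shown with a toy Monte Carlo, sketched below. The linear risk model, doses, coefficient distribution and 2% dosimetric uncertainties are invented for illustration and are not the risk model used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    N = 100_000

    # Shared, highly uncertain risk coefficient (per Gy); the distribution is made up
    beta = rng.lognormal(mean=np.log(0.05), sigma=0.7, size=N)
    dose_proton, dose_photon = 4.0, 9.0          # illustrative organ doses in Gy

    dosim_p = rng.normal(1.0, 0.02, N)           # independent 2 % dosimetric uncertainty
    dosim_g = rng.normal(1.0, 0.02, N)

    risk_proton = beta * dose_proton * dosim_p   # simple linear risk model for illustration
    risk_photon = beta * dose_photon * dosim_g
    ratio = risk_proton / risk_photon            # the shared coefficient largely cancels

    for name, q in [("proton risk", risk_proton), ("photon risk", risk_photon), ("risk ratio", ratio)]:
        print(f"{name:12s}: mean = {q.mean():.3f}, relative SD = {100 * q.std() / q.mean():.1f} %")
    ```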

  11. Uncertain future, non-numeric preferences, and the fertility transition: A case study of rural Mozambique

    PubMed Central

    Hayford, Sarah R.; Agadjanian, Victor

    2012-01-01

    In many high-fertility countries, and especially in sub-Saharan Africa, substantial proportions of women give non-numeric responses when asked about desired family size. Demographic transition theory has interpreted responses of “don’t know” or “up to God” as evidence of fatalistic attitudes toward childbearing. Alternatively, these responses can be understood as meaningful reactions to uncertainty about the future. Following this latter approach, we use data from rural Mozambique to test the hypothesis that non-numeric responses are more common when uncertainty about the future is greater. We expand on previous research linking child mortality and non-numeric fertility preferences by testing the predictive power of economic conditions, marital instability, and adult mortality. Results show that uncertainty related to adult and child mortality and to economic conditions predicts non-numeric responses, while marital stability is less strongly related. PMID:26430294

  12. Hydrometer calibration by hydrostatic weighing with automated liquid surface positioning

    NASA Astrophysics Data System (ADS)

    Aguilera, Jesus; Wright, John D.; Bean, Vern E.

    2008-01-01

    We describe an automated apparatus for calibrating hydrometers by hydrostatic weighing (Cuckow's method) in tridecane, a liquid of known, stable density, and with a relatively low surface tension and contact angle against glass. The apparatus uses a laser light sheet and a laser power meter to position the tridecane surface at the hydrometer scale mark to be calibrated with an uncertainty of 0.08 mm. The calibration results have an expanded uncertainty (with a coverage factor of 2) of 100 parts in 10⁶ or less of the liquid density. We validated the apparatus by comparisons using water, toluene, tridecane and trichloroethylene, and found agreement within 40 parts in 10⁶ or less. The new calibration method is consistent with earlier, manual calibrations performed by NIST. When customers use calibrated hydrometers, they may encounter uncertainties of 370 parts in 10⁶ or larger due to surface tension, contact angle and temperature effects.

  13. Uncertain future, non-numeric preferences, and the fertility transition: A case study of rural Mozambique.

    PubMed

    Hayford, Sarah R; Agadjanian, Victor

    In many high-fertility countries, and especially in sub-Saharan Africa, substantial proportions of women give non-numeric responses when asked about desired family size. Demographic transition theory has interpreted responses of "don't know" or "up to God" as evidence of fatalistic attitudes toward childbearing. Alternatively, these responses can be understood as meaningful reactions to uncertainty about the future. Following this latter approach, we use data from rural Mozambique to test the hypothesis that non-numeric responses are more common when uncertainty about the future is greater. We expand on previous research linking child mortality and non-numeric fertility preferences by testing the predictive power of economic conditions, marital instability, and adult mortality. Results show that uncertainty related to adult and child mortality and to economic conditions predicts non-numeric responses, while marital stability is less strongly related.

  14. Dealing with Uncertainties in Initial Orbit Determination

    NASA Technical Reports Server (NTRS)

    Armellin, Roberto; Di Lizia, Pierluigi; Zanetti, Renato

    2015-01-01

    A method to deal with uncertainties in initial orbit determination (IOD) is presented. This is based on the use of Taylor differential algebra (DA) to nonlinearly map the observation uncertainties from the observation space to the state space. When a minimum set of observations is available DA is used to expand the solution of the IOD problem in Taylor series with respect to measurement errors. When more observations are available high order inversion tools are exploited to obtain full state pseudo-observations at a common epoch. The mean and covariance of these pseudo-observations are nonlinearly computed by evaluating the expectation of high order Taylor polynomials. Finally, a linear scheme is employed to update the current knowledge of the orbit. Angles-only observations are considered and simplified Keplerian dynamics adopted to ease the explanation. Three test cases of orbit determination of artificial satellites in different orbital regimes are presented to discuss the feature and performances of the proposed methodology.

  15. Development, validation, and uncertainty measurement of multi-residue analysis of organochlorine and organophosphorus pesticides using pressurized liquid extraction and dispersive-SPE techniques.

    PubMed

    Sanyal, Doyeli; Rani, Anita; Alam, Samsul; Gujral, Seema; Gupta, Ruchi

    2011-11-01

    Simple and efficient multi-residue analytical methods were developed and validated for the determination of 13 organochlorine and 17 organophosphorus pesticides from soil, spinach and eggplant. Two techniques, namely accelerated solvent extraction and dispersive SPE, were used for sample preparation. The recovery studies were carried out by spiking the samples at three concentration levels (1 limit of quantification (LOQ), 5 LOQ, and 10 LOQ). The methods were subjected to a thorough validation procedure. The mean recoveries for soil, spinach and eggplant were in the range of 70-120% with median CV (%) below 10%. The total uncertainty was evaluated taking into consideration four main independent sources, viz. weighing, purity of the standard, the GC calibration curve and repeatability. The expanded uncertainty was well below 10% for most of the pesticides, and the rest fell in the range of 10-20%.
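
    The arithmetic behind an expanded uncertainty built from a handful of independent sources is simple: combine the relative standard uncertainties in quadrature and multiply by a coverage factor (k = 2 for approximately 95% coverage). The sketch below uses made-up values for the four sources named above.

    ```python
    import math

    # Relative standard uncertainties (%) of the four main sources -- illustrative values
    sources = {
        "weighing":             0.6,
        "standard purity":      1.2,
        "GC calibration curve": 2.5,
        "repeatability":        3.0,
    }

    u_c = math.sqrt(sum(u**2 for u in sources.values()))  # combined standard uncertainty (%)
    U = 2.0 * u_c                                         # expanded uncertainty, coverage factor k = 2

    for name, u in sources.items():
        print(f"{name:22s}: {u:4.1f} %")
    print(f"combined u_c = {u_c:.1f} %, expanded U (k=2) = {U:.1f} %")
    ```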

  16. Quantifying the Value of Perfect Information in Emergency Vaccination Campaigns.

    PubMed

    Bradbury, Naomi V; Probert, William J M; Shea, Katriona; Runge, Michael C; Fonnesbeck, Christopher J; Keeling, Matt J; Ferrari, Matthew J; Tildesley, Michael J

    2017-02-01

    Foot-and-mouth disease outbreaks in non-endemic countries can lead to large economic costs and livestock losses but the use of vaccination has been contentious, partly due to uncertainty about emergency FMD vaccination. Value of information methods can be applied to disease outbreak problems such as FMD in order to investigate the performance improvement from resolving uncertainties. Here we calculate the expected value of resolving uncertainty about vaccine efficacy, time delay to immunity after vaccination and daily vaccination capacity for a hypothetical FMD outbreak in the UK. If it were possible to resolve all uncertainty prior to the introduction of control, we could expect savings of £55 million in outbreak cost, 221,900 livestock culled and 4.3 days of outbreak duration. All vaccination strategies were found to be preferable to a culling only strategy. However, the optimal vaccination radius was found to be highly dependent upon vaccination capacity for all management objectives. We calculate that by resolving the uncertainty surrounding vaccination capacity we would expect to return over 85% of the above savings, regardless of management objective. It may be possible to resolve uncertainty about daily vaccination capacity before an outbreak, and this would enable decision makers to select the optimal control action via careful contingency planning.
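
    The expected value of perfect information is just the difference between the expected outcome when the best action is chosen after the uncertainty is resolved and the expected outcome of the best action chosen now. The sketch below shows this arithmetic for a made-up table of outbreak costs under three equally likely vaccination-capacity scenarios; none of the numbers come from the study.

    ```python
    import numpy as np

    # Expected outbreak cost (million pounds) for each control action under three
    # equally likely vaccination-capacity scenarios -- all numbers are made up
    actions = ["cull only", "vaccinate 5 km", "vaccinate 10 km"]
    #                  low cap  med cap  high cap
    cost = np.array([[310.0,   310.0,   310.0],    # culling cost does not depend on capacity
                     [295.0,   240.0,   225.0],
                     [330.0,   250.0,   205.0]])
    p = np.array([1/3, 1/3, 1/3])                  # scenario probabilities

    expected_per_action = cost @ p                 # expected cost of committing now
    best_now = expected_per_action.min()           # best action under uncertainty
    best_with_info = (cost.min(axis=0) * p).sum()  # choose after learning the true scenario

    print("expected cost by action:", dict(zip(actions, np.round(expected_per_action, 1))))
    print(f"EVPI = {best_now - best_with_info:.1f} million")
    ```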

  17. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.

  18. Managing Lunar and Mars Mission Radiation Risks. Part 1; Cancer Risks, Uncertainties, and Shielding Effectiveness

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Kim, Myung-Hee Y.; Ren, Lei

    2005-01-01

    This document addresses calculations of probability distribution functions (PDFs) representing uncertainties in projecting fatal cancer risk from galactic cosmic rays (GCR) and solar particle events (SPEs). PDFs are used to test the effectiveness of potential radiation shielding approaches. Monte-Carlo techniques are used to propagate uncertainties in risk coefficients determined from epidemiology data, dose and dose-rate reduction factors, quality factors, and physics models of radiation environments. Competing mortality risks and functional correlations in radiation quality factor uncertainties are treated in the calculations. The cancer risk uncertainty is about four-fold for lunar and Mars mission risk projections. For short-stay lunar missions (<180 d), SPEs present the most significant risk, but one effectively mitigated by shielding. For long-duration (>180 d) lunar or Mars missions, GCR risks may exceed radiation risk limits. While shielding materials are marginally effective in reducing GCR cancer risks because of the penetrating nature of GCR and secondary radiation produced in tissue by relativistic particles, polyethylene or carbon composite shielding cannot be shown to significantly reduce risk compared to aluminum shielding. Therefore, improving our knowledge of space radiobiology to narrow uncertainties that lead to wide PDFs is the best approach to ensure radiation protection goals are met for space exploration.

  19. Quantifying the Value of Perfect Information in Emergency Vaccination Campaigns

    PubMed Central

    Probert, William J. M.; Shea, Katriona; Fonnesbeck, Christopher J.; Ferrari, Matthew J.; Tildesley, Michael J.

    2017-01-01

    Foot-and-mouth disease outbreaks in non-endemic countries can lead to large economic costs and livestock losses but the use of vaccination has been contentious, partly due to uncertainty about emergency FMD vaccination. Value of information methods can be applied to disease outbreak problems such as FMD in order to investigate the performance improvement from resolving uncertainties. Here we calculate the expected value of resolving uncertainty about vaccine efficacy, time delay to immunity after vaccination and daily vaccination capacity for a hypothetical FMD outbreak in the UK. If it were possible to resolve all uncertainty prior to the introduction of control, we could expect savings of £55 million in outbreak cost, 221,900 livestock culled and 4.3 days of outbreak duration. All vaccination strategies were found to be preferable to a culling only strategy. However, the optimal vaccination radius was found to be highly dependent upon vaccination capacity for all management objectives. We calculate that by resolving the uncertainty surrounding vaccination capacity we would expect to return over 85% of the above savings, regardless of management objective. It may be possible to resolve uncertainty about daily vaccination capacity before an outbreak, and this would enable decision makers to select the optimal control action via careful contingency planning. PMID:28207777

  20. Beam-specific planning volumes for scattered-proton lung radiotherapy

    NASA Astrophysics Data System (ADS)

    Flampouri, S.; Hoppe, B. S.; Slopsema, R. L.; Li, Z.

    2014-08-01

    This work describes the clinical implementation of a beam-specific planning treatment volume (bsPTV) calculation for lung cancer proton therapy and its integration into the treatment planning process. Uncertainties incorporated in the calculation of the bsPTV included setup errors, machine delivery variability, breathing effects, inherent proton range uncertainties and combinations of the above. Margins were added for translational and rotational setup errors and breathing motion variability during the course of treatment as well as for their effect on the proton range of each treatment field. The effect of breathing motion and deformation on the proton range was calculated from 4D computed tomography data. Range uncertainties were considered taking into account the individual voxel HU uncertainty along each proton beamlet. Beam-specific treatment volumes generated for 12 patients were used: a) as planning targets, b) for routine plan evaluation, c) to aid beam angle selection and d) to create beam-specific margins for organs at risk to ensure sparing. The alternative planning technique based on the bsPTVs produced similar target coverage as the conventional proton plans while better sparing the surrounding tissues. Conventional proton plans were evaluated by comparing the dose distributions per beam with the corresponding bsPTV. The bsPTV volume as a function of beam angle revealed some unexpected sources of uncertainty and could help the planner choose more robust beams. Beam-specific planning volume for the spinal cord was used for dose distribution shaping to ensure organ sparing laterally and distally to the beam.

  1. Estimating uncertainty in ambient and saturation nutrient uptake metrics from nutrient pulse releases in stream ecosystems

    DOE PAGES

    Brooks, Scott C.; Brandt, Craig C.; Griffiths, Natalie A.

    2016-10-07

    Nutrient spiraling is an important ecosystem process characterizing nutrient transport and uptake in streams. Various nutrient addition methods are used to estimate uptake metrics; however, uncertainty in the metrics is not often evaluated. A method was developed to quantify uncertainty in ambient and saturation nutrient uptake metrics estimated from saturating pulse nutrient additions (Tracer Additions for Spiraling Curve Characterization; TASCC). Using a Monte Carlo (MC) approach, the 95% confidence interval (CI) was estimated for ambient uptake lengths (S_w-amb) and maximum areal uptake rates (U_max) based on 100,000 datasets generated from each of four nitrogen and five phosphorus TASCC experiments conducted seasonally in a forest stream in eastern Tennessee, U.S.A. Uncertainty estimates from the MC approach were compared to the CIs estimated from ordinary least squares (OLS) and non-linear least squares (NLS) models used to calculate S_w-amb and U_max, respectively, from the TASCC method. The CIs for S_w-amb and U_max were large, but were not consistently larger using the MC method. Despite the large CIs, significant differences (based on nonoverlapping CIs) in nutrient metrics among seasons were found, with more significant differences using the OLS/NLS vs. the MC method. Lastly, we suggest that the MC approach is a robust way to estimate uncertainty, as the calculation of S_w-amb and U_max violates assumptions of OLS/NLS while the MC approach is free of these assumptions. The MC approach can be applied to other ecosystem metrics that are calculated from multiple parameters, providing a more robust estimate of these metrics and their associated uncertainties.
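
    One simple Monte Carlo route to a confidence interval for a fitted parameter, in the spirit of the approach described above, is a residual bootstrap: refit many synthetic datasets built from the fitted curve plus resampled residuals and take percentiles of the refitted parameter. The sketch below does this for the slope of a straight-line fit on synthetic data; it is a generic illustration, not the TASCC analysis code.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic "uptake vs. concentration" style data (made-up): y = a + b*x + noise
    x = np.linspace(1.0, 20.0, 15)
    a_true, b_true = 2.0, 0.35
    y = a_true + b_true * x + rng.normal(0.0, 0.8, x.size)

    b_fit, a_fit = np.polyfit(x, y, 1)           # ordinary least squares fit
    resid = y - (a_fit + b_fit * x)

    # Residual bootstrap: regenerate many synthetic datasets and refit each one
    slopes = []
    for _ in range(10_000):
        y_star = a_fit + b_fit * x + rng.choice(resid, size=resid.size, replace=True)
        slopes.append(np.polyfit(x, y_star, 1)[0])

    lo, hi = np.percentile(slopes, [2.5, 97.5])
    print(f"slope = {b_fit:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
    ```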

  2. Measurements of downwelling far-infrared radiance during the RHUBC-II campaign at Cerro Toco, Chile and comparisons with line-by-line radiative transfer calculations

    NASA Astrophysics Data System (ADS)

    Mast, Jeffrey C.; Mlynczak, Martin G.; Cageao, Richard P.; Kratz, David P.; Latvakoski, Harri; Johnson, David G.; Turner, David D.; Mlawer, Eli J.

    2017-09-01

    Downwelling radiances at the Earth's surface measured by the Far-Infrared Spectroscopy of the Troposphere (FIRST) instrument in an environment with integrated precipitable water (IPW) as low as 0.03 cm are compared with calculated spectra in the far-infrared and mid-infrared. FIRST (a Fourier transform spectrometer) was deployed from August through October 2009 at 5.38 km MSL on Cerro Toco, a mountain in the Atacama Desert of Chile. There FIRST took part in the Radiative Heating in Unexplored Bands Campaign Part 2 (RHUBC-II), the goal of which was the assessment of water vapor spectroscopy. Radiosonde water vapor and temperature vertical profiles are input into the Atmospheric and Environmental Research (AER) Line-by-Line Radiative Transfer Model (LBLRTM) to compute modeled radiances. The LBLRTM minus FIRST residual spectrum is calculated to assess agreement. Uncertainties (1-σ) in both the measured and modeled radiances are also determined. Measured and modeled radiances nearly all agree to within combined (total) uncertainties. Features exceeding uncertainties can be corrected into the combined uncertainty by increasing water vapor and model continuum absorption; however, this may not be necessary given the 1-σ uncertainties (68% confidence). Furthermore, the uncertainty in the measurement-model residual is very large and no additional information on the adequacy of current water vapor spectral line or continuum absorption parameters may be derived. Similar future experiments in similarly cold and dry environments will require absolute accuracy of 0.1% of a 273 K blackbody in radiance and water vapor accuracy of ∼3% in the profile layers contributing to downwelling radiance at the surface.

  3. Estimating uncertainty in ambient and saturation nutrient uptake metrics from nutrient pulse releases in stream ecosystems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooks, Scott C.; Brandt, Craig C.; Griffiths, Natalie A.

    Nutrient spiraling is an important ecosystem process characterizing nutrient transport and uptake in streams. Various nutrient addition methods are used to estimate uptake metrics; however, uncertainty in the metrics is not often evaluated. A method was developed to quantify uncertainty in ambient and saturation nutrient uptake metrics estimated from saturating pulse nutrient additions (Tracer Additions for Spiraling Curve Characterization; TASCC). Using a Monte Carlo (MC) approach, the 95% confidence interval (CI) was estimated for ambient uptake lengths (S_w-amb) and maximum areal uptake rates (U_max) based on 100,000 datasets generated from each of four nitrogen and five phosphorus TASCC experiments conducted seasonally in a forest stream in eastern Tennessee, U.S.A. Uncertainty estimates from the MC approach were compared to the CIs estimated from ordinary least squares (OLS) and non-linear least squares (NLS) models used to calculate S_w-amb and U_max, respectively, from the TASCC method. The CIs for S_w-amb and U_max were large, but were not consistently larger using the MC method. Despite the large CIs, significant differences (based on nonoverlapping CIs) in nutrient metrics among seasons were found, with more significant differences using the OLS/NLS vs. the MC method. Lastly, we suggest that the MC approach is a robust way to estimate uncertainty, as the calculation of S_w-amb and U_max violates assumptions of OLS/NLS while the MC approach is free of these assumptions. The MC approach can be applied to other ecosystem metrics that are calculated from multiple parameters, providing a more robust estimate of these metrics and their associated uncertainties.

  4. Large Uncertainty in Estimating pCO2 From Carbonate Equilibria in Lakes

    NASA Astrophysics Data System (ADS)

    Golub, Malgorzata; Desai, Ankur R.; McKinley, Galen A.; Remucal, Christina K.; Stanley, Emily H.

    2017-11-01

    Most estimates of carbon dioxide (CO2) evasion from freshwaters rely on calculating partial pressure of aquatic CO2 (pCO2) from two out of three CO2-related parameters using carbonate equilibria. However, the pCO2 uncertainty has not been systematically evaluated across multiple lake types and equilibria. We quantified random errors in pH, dissolved inorganic carbon, alkalinity, and temperature from the North Temperate Lakes Long-Term Ecological Research site in four lake groups across a broad gradient of chemical composition. These errors were propagated onto pCO2 calculated from three carbonate equilibria, and for overlapping observations, compared against uncertainties in directly measured pCO2. The empirical random errors in CO2-related parameters were mostly below 2% of their median values. Resulting random pCO2 errors ranged from ±3.7% to ±31.5% of the median depending on alkalinity group and choice of input parameter pairs. Temperature uncertainty had a negligible effect on pCO2. When compared with direct pCO2 measurements, all parameter combinations produced biased pCO2 estimates with less than one third of total uncertainty explained by random pCO2 errors, indicating that systematic uncertainty dominates over random error. Multidecadal trend of pCO2 was difficult to reconstruct from uncertain historical observations of CO2-related parameters. Given poor precision and accuracy of pCO2 estimates derived from virtually any combination of two CO2-related parameters, we recommend direct pCO2 measurements where possible. To achieve consistently robust estimates of CO2 emissions from freshwater components of terrestrial carbon balances, future efforts should focus on improving accuracy and precision of CO2-related parameters (including direct pCO2) measurements and associated pCO2 calculations.
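
    The sensitivity of calculated pCO2 to random errors in pH and alkalinity can be seen with a simplified freshwater carbonate calculation at 25 °C, sketched below. Alkalinity is approximated as bicarbonate, the equilibrium and Henry's-law constants are rough literature values for 25 °C, and the assumed measurement errors are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    N = 100_000

    # Simplified freshwater carbonate system at 25 degC (alkalinity ~ bicarbonate):
    #   [CO2*] = ALK * [H+] / K1      and      pCO2 = [CO2*] / KH
    K1 = 10**-6.35      # first dissociation constant of carbonic acid (approximate, 25 degC)
    KH = 10**-1.47      # Henry's law constant for CO2, mol L^-1 atm^-1 (approximate, 25 degC)

    # Measured values with illustrative random errors
    pH  = rng.normal(8.00, 0.02, N)        # pH with a 0.02 unit standard error
    alk = rng.normal(2.00e-3, 2.0e-5, N)   # alkalinity, mol/L, with a 1 % standard error

    h = 10.0**(-pH)
    pco2_uatm = alk * h / K1 / KH * 1.0e6  # partial pressure of CO2 in micro-atm

    print(f"pCO2 = {pco2_uatm.mean():.0f} uatm, "
          f"random uncertainty = {100 * pco2_uatm.std() / pco2_uatm.mean():.1f} %")
    ```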

  5. Building the Case: Changing Consumer Perceptions of the Value of Expanded Community Pharmacist Services.

    PubMed

    Steckowych, Kathryn; Smith, Marie; Spiggle, Susan; Stevens, Andrew; Li, Hao

    2018-01-01

    The role of the community pharmacist has traditionally been that of a medication dispenser; however, community pharmacists' responsibilities must expand to include more direct patient care services in order to transform primary care practice. The objectives were to use case-based scenarios to (1) determine factors that contribute to positive and negative consumer perceptions of expanded community pharmacist patient care roles, (2) identify facilitators and barriers that contribute to consumer perceptions of the value of expanded community pharmacist patient care services, and (3) develop a successful approach and strategies for increasing consumer advocacy for the value of expanded community pharmacist patient care services. Two consumer focus groups used scenario-based guided discussions and Likert scale questionnaires to elicit consumer reactions, facilitators, and barriers to expanded community pharmacist services. Convenience, timeliness, and accessibility were common positive reactions across all 3 scenarios. A team approach to care and trust were viewed as major facilitators. Participant concerns included uncertainty about pharmacist training and qualifications, privacy, pharmacists' limited bandwidth to accept new tasks, and potential increased patient costs. Common barriers to service uptake included a lack of insurance payment and physician preference to provide the services. Consumer unfamiliarity with non-traditional community pharmacist services likely contributes to consumers' hesitancy to utilize such services; therefore, an opportunity exists to engage consumers and advocacy organizations in supporting expanded community pharmacist roles. This study can inform consumers, advocates, community pharmacists, primary care providers, and community-based organizations on methods to shape consumer perceptions of the value of expanded community pharmacist services.

  6. Determination of layer-charge characteristics of smectites

    USGS Publications Warehouse

    Christidis, G.E.; Eberl, D.D.

    2003-01-01

    A new method for calculation of layer charge and charge distribution of smectites is proposed. The method is based on comparisons between X-ray diffraction (XRD) patterns of K-saturated, ethylene glycol-solvated, oriented samples and calculated XRD patterns for three-component, mixed-layer systems. For the calculated patterns it is assumed that the measured patterns can be modeled as random interstratifications of fully expanding 17.1 Å layers, partially expanding 13.5 Å layers and non-expanding 9.98 Å layers. The technique was tested using 29 well characterized smectites. According to their XRD patterns, smectites were classified as group 1 (low-charge smectites) and group 2 (high-charge smectites). The boundary between the two groups is at a layer charge of −0.46 equivalents per half unit-cell. Low-charge smectites are dominated by 17.1 Å layers, whereas high-charge smectites contain only 20% fully expandable layers on average. Smectite properties and industrial applications may be dictated by the proportion of 17.1 Å layers present. Non-expanding layers may control the behavior of smectites during weathering, facilitating the formation of illite layers after subsequent cycles of wetting and drying. The precision of the method is better than 3.5% at a layer charge of −0.50; therefore the method should be useful for basic research and for industrial purposes.

  7. Reference Correlation for the Viscosity of Ethane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogel, Eckhard, E-mail: eckhard.vogel@uni-rostock.de; Span, Roland; Herrmann, Sebastian

    2015-12-15

    A new representation of the viscosity for the fluid phase of ethane includes a zero-density correlation and a contribution for the critical enhancement, initially both developed separately, but based on experimental data. The higher-density contributions are correlated as a function of the reduced density δ = ρ/ρ_c and of the reciprocal reduced temperature τ = T_c/T, where ρ_c is the critical density and T_c the critical temperature. The final formulation contains 14 coefficients obtained using a state-of-the-art linear optimization algorithm. The evaluation and choice of the selected primary data sets are reviewed, in particular with respect to the assessment used in earlier viscosity correlations. The new viscosity surface correlation makes use of the reference equation of state for the thermodynamic properties of ethane by Bücker and Wagner [J. Phys. Chem. Ref. Data 35, 205 (2006)] and is valid in the fluid region from the melting line to temperatures of 675 K and pressures of 100 MPa. The viscosity in the limit of zero density is described with an expanded uncertainty of 0.5% (coverage factor k = 2) for temperatures 290 < T/K < 625, increasing to 1.0% at temperatures down to 212 K. The uncertainty of the correlated values is 1.5% in the range 290 < T/K < 430 at pressures up to 30 MPa on the basis of recent measurements judged to be very reliable, as well as 4.0% and 6.0% in further regions. The uncertainty in the near-critical region (1.001 < 1/τ < 1.010 and 0.8 < δ < 1.2) increases with decreasing temperature up to 3.0% considering the available reliable data. Tables of the viscosity calculated from the correlation are listed in an appendix for the single-phase region, for the vapor–liquid phase boundary, and for the near-critical region.

  8. All-Dimensional H2–CO Potential: Validation with Fully Quantum Second Virial Coefficients

    PubMed Central

    Garberoglio, Giovanni; Jankowski, Piotr; Szalewicz, Krzysztof; Harvey, Allan H.

    2017-01-01

    We use a new high-accuracy all-dimensional potential to compute the cross second virial coefficient B12(T) between molecular hydrogen and carbon monoxide. The path-integral method is used to fully account for quantum effects. Values are calculated from 10 K to 2000 K and the uncertainty of the potential is propagated into uncertainties of B12. Our calculated B12(T) are in excellent agreement with most of the limited experimental data available, but cover a much wider range of temperatures and have lower uncertainties. Similar to recently reported findings from scattering calculations, we find that the reduced-dimensionality potential obtained by averaging over the rovibrational motion of the monomers gives results that are a good approximation to those obtained when flexibility is fully taken into account. Also, the four-dimensional approximation with monomers taken at their vibrationally averaged bond lengths works well. This finding is important, since full-dimensional potentials are difficult to develop even for triatomic monomers and are not currently possible to obtain for larger molecules. Likewise, most types of accurate quantum mechanical calculations, e.g., spectral or scattering, are severely limited in the number of dimensions that can be handled. PMID:28178790

  9. All-dimensional H2-CO potential: Validation with fully quantum second virial coefficients.

    PubMed

    Garberoglio, Giovanni; Jankowski, Piotr; Szalewicz, Krzysztof; Harvey, Allan H

    2017-02-07

    We use a new high-accuracy all-dimensional potential to compute the cross second virial coefficient B12(T) between molecular hydrogen and carbon monoxide. The path-integral method is used to fully account for quantum effects. Values are calculated from 10 K to 2000 K and the uncertainty of the potential is propagated into uncertainties of B12. Our calculated B12(T) are in excellent agreement with most of the limited experimental data available, but cover a much wider range of temperatures and have lower uncertainties. Similar to recently reported findings from scattering calculations, we find that the reduced-dimensionality potential obtained by averaging over the rovibrational motion of the monomers gives results that are a good approximation to those obtained when flexibility is fully taken into account. Also, the four-dimensional approximation with monomers taken at their vibrationally averaged bond lengths works well. This finding is important, since full-dimensional potentials are difficult to develop even for triatomic monomers and are not currently possible to obtain for larger molecules. Likewise, most types of accurate quantum mechanical calculations, e.g., spectral or scattering, are severely limited in the number of dimensions that can be handled.

  10. Radiation Parameters of High Dose Rate Iridium -192 Sources

    NASA Astrophysics Data System (ADS)

    Podgorsak, Matthew B.

    A lack of physical data for high dose rate (HDR) Ir-192 sources has necessitated the use of basic radiation parameters measured with low dose rate (LDR) Ir-192 seeds and ribbons in HDR dosimetry calculations. A rigorous examination of the radiation parameters of several HDR Ir-192 sources has shown that this extension of physical data from LDR to HDR Ir-192 may be inaccurate. Uncertainty in any of the basic radiation parameters used in dosimetry calculations compromises the accuracy of the calculated dose distribution and the subsequent dose delivery. Dose errors of up to 0.3%, 6%, and 2% can result from the use of currently accepted values for the half-life, exposure rate constant, and dose buildup effect, respectively. Since an accuracy of 5% in the delivered dose is essential to prevent severe complications or tumor regrowth, the use of basic physical constants with uncertainties approaching 6% is unacceptable. A systematic evaluation of the pertinent radiation parameters contributes to a reduction in the overall uncertainty in HDR Ir-192 dose delivery. Moreover, the results of the studies described in this thesis contribute significantly to the establishment of standardized numerical values to be used in HDR Ir-192 dosimetry calculations.

  11. Standardizing Activation Analysis: New Software for Photon Activation Analysis

    NASA Astrophysics Data System (ADS)

    Sun, Z. J.; Wells, D.; Segebade, C.; Green, J.

    2011-06-01

    Photon Activation Analysis (PAA) of environmental, archaeological and industrial samples requires extensive data analysis that is susceptible to error. To save time and manpower and to minimize error, a computer program was designed, built and implemented using SQL, Access 2007 and ASP.NET technology to automate this process. Based on the peak information of the spectrum and assisted by its PAA library, the program automatically identifies elements in the samples and calculates their concentrations and respective uncertainties. The software can also be operated in browser/server mode, which makes it possible to use it anywhere the internet is accessible. By switching the underlying nuclide library and the associated formulas, the software can easily be extended to neutron activation analysis (NAA), charged particle activation analysis (CPAA) or proton-induced X-ray emission (PIXE). Such an implementation would standardize the analysis of nuclear activation data. Results from this software were compared to standard PAA analysis with excellent agreement. With minimal input from the user, the software has proven to be fast, user-friendly and reliable.
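
    The concentration calculation such a package performs is typically a comparator-style (relative) computation from net peak areas. The sketch below is a hedged illustration of that idea with simple counting-statistics uncertainty propagation; the function, its arguments and the formula are assumptions for illustration, not the actual PAA software's algorithm.

        import math

        def concentration(peak_sample, peak_standard, mass_sample, mass_standard,
                          conc_standard, u_rel_extra=0.0):
            """Element concentration in a sample relative to a co-irradiated standard.

            peak_*        : net peak areas (counts), assumed already decay/geometry corrected
            mass_*        : sample and standard masses (g)
            conc_standard : known concentration in the standard (e.g. mg/kg)
            u_rel_extra   : additional relative standard uncertainty (flux gradient, timing, ...)
            Returns (concentration, combined standard uncertainty).
            """
            conc = conc_standard * (peak_sample / peak_standard) * (mass_standard / mass_sample)
            u_rel = math.sqrt(1.0 / peak_sample + 1.0 / peak_standard + u_rel_extra**2)
            return conc, conc * u_rel

        c, u = concentration(peak_sample=12500, peak_standard=48000,
                             mass_sample=0.50, mass_standard=0.25, conc_standard=100.0)
        print(f"{c:.1f} +/- {u:.1f} mg/kg")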

  12. Real Option in Capital Budgeting for SMEs: Insight from Steel Company

    NASA Astrophysics Data System (ADS)

    Muharam, F. M.; Tarrazon, M. A.

    2017-06-01

    Complex components of investment projects can only be analysed accurately if flexibility and a comprehensive consideration of uncertainty are incorporated into the valuation. Discounted cash flow (DCF) analysis fails to cope with the strategic future alternatives that affect the true value of investment projects. Real option valuation (ROV) proves to be the right tool for this purpose, since it enables calculation of the enlarged or strategic net present value (ENPV). This study provides insight into the use of ROV in the capital budgeting and investment decision-making processes of SMEs. Exploring the first-stage processing of the steel industry, an analysis of the alternatives to cancel, expand, defer or abandon is performed. Complemented by multiple-option interaction and a sensitivity analysis, our findings show that the application of ROV is beneficial for complex investment projects independently of the size of the company, and that it is particularly suitable in scenarios with scarce resources. The application of ROV is thus plausible and beneficial for SMEs and can be incorporated into their strategic decision-making processes.

  13. Simultaneous determination of fumonisins B1 and B2 in different types of maize by matrix solid phase dispersion and HPLC-MS/MS.

    PubMed

    de Oliveira, Gabriel Barros; de Castro Gomes Vieira, Carolyne Menezes; Orlando, Ricardo Mathias; Faria, Adriana Ferreira

    2017-10-15

    This work involved the optimization and validation of a method, according to Directive 2002/657/EC and the Analytical Quality Assurance Manual of Ministério da Agricultura, Pecuária e Abastecimento, Brazil, for simultaneous extraction and determination of fumonisins B1 and B2 in maize. The extraction procedure was based on a matrix solid phase dispersion approach, the optimization of which employed a sequence of different factorial designs. A liquid chromatography-tandem mass spectrometry method was developed for determining these analytes using the selected reaction monitoring mode. The optimized method employed only 1 g of silica gel for dispersion and elution with 70% ammonium formate aqueous buffer (50 mmol L-1, pH 9), representing a simple, cheap and chemically friendly sample preparation method. Trueness (recoveries: 86-106%), precision (RSD ≤ 19%), decision limits, detection capabilities and measurement uncertainties were calculated for the validated method. The method scope was expanded to popcorn kernels, white maize kernels and yellow maize grits. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Fisheye camera method for spatial non-uniformity corrections in luminous flux measurements with integrating spheres

    NASA Astrophysics Data System (ADS)

    Kokka, Alexander; Pulli, Tomi; Poikonen, Tuomas; Askola, Janne; Ikonen, Erkki

    2017-08-01

    This paper presents a fisheye camera method for determining spatial non-uniformity corrections in luminous flux measurements with integrating spheres. Using a fisheye camera installed into a port of an integrating sphere, the relative angular intensity distribution of the lamp under test is determined. This angular distribution is used for calculating the spatial non-uniformity correction for the lamp when combined with the spatial responsivity data of the sphere. The method was validated by comparing it to a traditional goniophotometric approach when determining spatial correction factors for 13 LED lamps with different angular spreads. The deviations between the spatial correction factors obtained using the two methods ranged from -0.15% to +0.15%. The mean magnitude of the deviations was 0.06%. For a typical LED lamp, the expanded uncertainty (k = 2) for the spatial non-uniformity correction factor was evaluated to be 0.28%. The fisheye camera method removes the need for goniophotometric measurements in determining spatial non-uniformity corrections, thus resulting in considerable system simplification. Generally, no permanent modifications to existing integrating spheres are required.
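
    A hedged sketch of the combination step described above: the lamp's relative angular intensity distribution is weighted by the sphere's spatial responsivity to form a correction factor. The normalization chosen here (responsivity-weighted average for the lamp relative to an isotropic reference) is an assumption for illustration; the published method may define the factor differently.

        import numpy as np

        def spatial_correction_factor(theta, responsivity, intensity):
            """theta        : polar angles of an azimuthally averaged grid (rad)
               responsivity : relative spatial responsivity of the sphere at those angles
               intensity    : relative angular intensity distribution of the lamp under test
            The sin(theta) factor is the solid-angle weight of each ring."""
            w = np.sin(theta)
            weighted_lamp = np.sum(responsivity * intensity * w) / np.sum(intensity * w)
            weighted_iso = np.sum(responsivity * w) / np.sum(w)  # isotropic reference source
            return weighted_iso / weighted_lamp

        theta = np.linspace(0.0, np.pi, 181)
        responsivity = 1.0 + 0.02 * np.cos(theta)    # made-up 2% spatial non-uniformity
        intensity = np.maximum(np.cos(theta), 0.0)   # made-up forward-directed LED lamp
        print(spatial_correction_factor(theta, responsivity, intensity))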

  15. Measurement of methane emissions from ruminant livestock using an SF6 tracer technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, K.; Huyler, M.; Westberg, H.

    1994-02-01

    The purpose of this paper is to describe a method for determining methane emission factors for cattle. The technique involves the direct measurement of methane emissions from livestock in their natural environment. A small permeation tube containing SF6 is placed in the cow's rumen, and SF6 and CH4 concentrations are measured near the mouth and nostrils of the cow. The SF6 release provides a way to account for the dilution of gases near the animal's mouth. The CH4 emission rate can be calculated from the known SF6 emission rate and the measured SF6 and CH4 concentrations. The tracer method described provides an easy means for acquiring a large methane emissions data base from domestic livestock. The low cost and simplicity should make it possible to monitor a large number of animals in countries throughout the world. An expanded data base of this type helps to reduce uncertainty in the ruminant contribution to the global methane budget. 18 refs., 3 figs., 3 tabs.
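
    The tracer-ratio calculation itself is compact: the CH4 emission rate follows from the known SF6 permeation rate and the background-corrected concentration ratio measured near the animal's mouth. A minimal sketch, with all numerical values purely illustrative:

        def ch4_emission_rate(q_sf6_mg_day, ch4_ppm, ch4_bg_ppm, sf6_ppt, sf6_bg_ppt):
            """Return the CH4 emission rate in g/day.

            q_sf6_mg_day : known SF6 release rate of the permeation tube (mg/day)
            ch4_*        : CH4 mixing ratios near the mouth and in background air (ppm)
            sf6_*        : SF6 mixing ratios near the mouth and in background air (ppt)
            """
            M_CH4, M_SF6 = 16.04, 146.06   # g/mol
            mole_ratio = ((ch4_ppm - ch4_bg_ppm) * 1e-6) / ((sf6_ppt - sf6_bg_ppt) * 1e-12)
            return (q_sf6_mg_day / 1000.0) * mole_ratio * (M_CH4 / M_SF6)

        # Illustrative numbers: 1.5 mg/day tube, 50 ppm CH4 (2 ppm background),
        # 35 ppt SF6 (5 ppt background) near the muzzle
        print(ch4_emission_rate(1.5, 50.0, 2.0, 35.0, 5.0))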

  16. Covariance propagation in spectral indices

    DOE PAGES

    Griffin, P. J.

    2015-01-09

    The dosimetry community has a history of using spectral indices (SIs) to support neutron spectrum characterization and cross section validation efforts. An important aspect of this type of analysis is the proper consideration of the contribution of the spectrum uncertainty to the total uncertainty in calculated SIs. This study identifies deficiencies in the traditional treatment of the SI uncertainty, provides simple bounds for the spectral component of the SI uncertainty estimates, verifies that these estimates are reflected in actual applications, details a methodology that rigorously captures the spectral contribution to the uncertainty in the SI, and provides quantified examples that demonstrate the importance of the proper treatment of the spectral contribution to the uncertainty in the SI.

  17. Deriving proper measurement uncertainty from Internal Quality Control data: An impossible mission?

    PubMed

    Ceriotti, Ferruccio

    2018-03-30

    Measurement uncertainty (MU) is a "non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information used". In the clinical laboratory, the most convenient way to calculate MU is the "top down" approach based on the use of Internal Quality Control data. As indicated in the definition, MU depends on the information used for its calculation, so different estimates of MU can be obtained. The most problematic aspect is how to deal with bias. Bias is difficult to detect and quantify; ideally it should be corrected, with only the uncertainty derived from that correction included in MU. Several approaches to calculating MU starting from Internal Quality Control data are presented. The minimum requirement is to use only intermediate precision data, provided that they include at least 6 months of results obtained with a commutable quality control material at a concentration close to the clinical decision limit. This minimal approach is suitable for all those measurands that are mainly used for monitoring, or for which a reference measurement system does not exist and a reference for calculating the bias is therefore lacking. Other formulas are presented and commented on, including the uncertainty of the value of the calibrator, the bias derived from a commutable certified reference material or from a material specifically prepared for trueness verification, and the bias derived from External Quality Assessment schemes or from the historical mean of the laboratory. MU is an important parameter, but a single, agreed upon way to calculate it in a clinical laboratory is not yet available. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
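
    A minimal sketch of the simplest "top down" estimate discussed above: long-term intermediate precision from IQC results, optionally combined in quadrature with a bias-related term. The quadrature combination shown is only one of the several formulas the article compares, and the data are synthetic.

        import math
        import statistics

        def measurement_uncertainty(iqc_results, u_bias=0.0, k=2.0):
            """iqc_results : at least ~6 months of IQC results for one commutable control level
               u_bias      : standard uncertainty associated with bias correction or calibrator value
               k           : coverage factor for the expanded uncertainty
            Returns (combined standard uncertainty, expanded uncertainty)."""
            u_rw = statistics.stdev(iqc_results)        # intermediate (within-laboratory) precision
            u_c = math.sqrt(u_rw**2 + u_bias**2)
            return u_c, k * u_c

        # Synthetic glucose IQC results (mmol/L) at a level near the decision limit
        iqc = [6.1, 6.0, 6.2, 5.9, 6.1, 6.3, 6.0, 6.1, 5.8, 6.2, 6.1, 6.0]
        u, U = measurement_uncertainty(iqc, u_bias=0.05)
        print(f"u = {u:.3f} mmol/L, U (k=2) = {U:.3f} mmol/L")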

  18. Petroleum refinery operational planning using robust optimization

    NASA Astrophysics Data System (ADS)

    Leiras, A.; Hamacher, S.; Elkamel, A.

    2010-12-01

    In this article, the robust optimization methodology is applied to deal with uncertainties in the prices of saleable products, operating costs, product demand, and product yield in the context of refinery operational planning. A numerical study demonstrates the effectiveness of the proposed robust approach. The benefits of incorporating uncertainty in the different model parameters were evaluated in terms of the cost of ignoring uncertainty in the problem. The calculations suggest that this benefit is equivalent to 7.47% of the deterministic solution value, which indicates that the robust model may offer advantages to those involved with refinery operational planning. In addition, the probability bounds of constraint violation are calculated to help the decision-maker adopt a more appropriate parameter to control robustness and judge the tradeoff between conservatism and total profit.

  19. Uncertainties in nuclear transition matrix elements for neutrinoless ββ decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rath, P. K.

    Uncertainties in the nuclear transition matrix elements M^(0ν) and M_N^(0ν), due to the exchange of light and heavy Majorana neutrinos respectively, have been estimated by calculating sets of twelve nuclear transition matrix elements for the neutrinoless ββ decay of the ⁹⁴,⁹⁶Zr, ⁹⁸,¹⁰⁰Mo, ¹⁰⁴Ru, ¹¹⁰Pd, ¹²⁸,¹³⁰Te and ¹⁵⁰Nd isotopes in the case of the 0⁺ → 0⁺ transition, by considering four different parameterizations of a Hamiltonian with pairing plus multipolar effective two-body interaction and three different parameterizations of Jastrow short-range correlations. Exclusion of the nuclear transition matrix elements calculated with the Miller-Spencer parametrization reduces the uncertainties by 10%-15%.

  20. Sensitivity analysis of TRX-2 lattice parameters with emphasis on epithermal ²³⁸U capture. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomlinson, E.T.; deSaussure, G.; Weisbin, C.R.

    1977-03-01

    The main purpose of the study is the determination of the sensitivity of TRX-2 thermal lattice performance parameters to nuclear cross section data, particularly the epithermal resonance capture cross section of ²³⁸U. An energy-dependent sensitivity profile was generated for each of the performance parameters, to the most important cross sections of the various isotopes in the lattice. Uncertainties in the calculated values of the performance parameters due to estimated uncertainties in the basic nuclear data, deduced in this study, were shown to be small compared to the uncertainties in the measured values of the performance parameters and compared to differences among calculations based upon the same data but with different methodologies.

  1. Sensitivity of a radiative transfer model to the uncertainty in the aerosol optical depth used as input

    NASA Astrophysics Data System (ADS)

    Román, Roberto; Bilbao, Julia; de Miguel, Argimiro; Pérez-Burgos, Ana

    2014-05-01

    Radiative transfer models can be used to obtain solar radiative quantities at the Earth's surface, such as the erythemal ultraviolet (UVER) irradiance, which is the spectral irradiance weighted with the erythemal (sunburn) action spectrum, and the total shortwave irradiance (SW; 305-2800 nm). Aerosol and atmospheric properties are necessary as inputs to the model in order to calculate the UVER and SW irradiances under cloudless conditions; however, the uncertainty in these inputs propagates into the simulations. The objective of this work is to quantify the uncertainty in UVER and SW simulations generated by the aerosol optical depth (AOD) uncertainty. Data from different satellite retrievals were downloaded for nine Spanish sites located in the Iberian Peninsula: total ozone column from different databases, spectral surface albedo and water vapour column from the MODIS instrument, AOD at 443 nm and Angström exponent (between 443 nm and 670 nm) from the MISR instrument onboard the Terra satellite, and single scattering albedo from the OMI instrument onboard the Aura satellite. The AOD at 443 nm from MISR was compared with AERONET measurements at six Spanish sites, giving an uncertainty in the MISR AOD of 0.074. The radiative transfer model UVSPEC/libRadtran (version 1.7) was used to obtain the SW and UVER irradiance under cloudless conditions for each month and for different solar zenith angles (SZA) at the nine locations. The inputs used for these simulations were monthly climatology tables obtained from the available data at each location. The UVER and SW simulations were then repeated twice, changing the monthly AOD values to the same AOD plus or minus its uncertainty. The maximum difference between the irradiance obtained with the AOD and the irradiance obtained with the AOD plus/minus its uncertainty was calculated for each month, SZA, and location. This difference was taken as the uncertainty in the model caused by the AOD uncertainty. The uncertainty in the simulated global SW and UVER varies with location, but the behaviour is similar: high uncertainty in specific months. The averages of the uncertainty at the nine locations were calculated. The uncertainty in global SW is lower than 5% for SZA values lower than 70°, and the uncertainty in global UVER is between 2 and 6%. The uncertainty in the direct and diffuse components is higher than in the global case for both SW and UVER irradiances, but a balance between the AOD-driven changes in the direct and diffuse components provides a lower uncertainty in the global SW and UVER irradiance.

  2. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article deals with determining the statistical characteristics of variable parameters (the variation range and distribution law) for analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular in the closing correlations of the loop thermal-hydraulics block, is shown. Such a method should involve a minimal degree of subjectivity and must be based on objective, quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in that range, provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated using, as an example, the problem of estimating the uncertainty of a parameter appearing in the model describing the transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in that range by a Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in expert estimates of the uncertainties pertinent to the model parameters used in computer codes.

  3. Evaluation of the ²³⁹Pu prompt fission neutron spectrum induced by neutrons of 500 keV and associated covariances

    DOE PAGES

    Neudecker, D.; Talou, P.; Kawano, T.; ...

    2015-08-01

    We present evaluations of the prompt fission neutron spectrum (PFNS) of ²³⁹Pu induced by 500 keV neutrons, and associated covariances. In a previous evaluation by Talou et al. (2010), surprisingly low evaluated uncertainties were obtained, partly due to simplifying assumptions in the quantification of uncertainties from experiment and model. Therefore, special emphasis is placed here on a thorough uncertainty quantification of experimental data and of the Los Alamos model predicted values entering the evaluation. In addition, the Los Alamos model was extended and an evaluation technique was employed that takes into account the qualitative differences between normalized model predicted values and experimental shape data. These improvements lead to changes in the evaluated PFNS and overall larger evaluated uncertainties than in the previous work. However, these evaluated uncertainties are still smaller than those obtained in a statistical analysis using experimental information only, due to strong model correlations. Hence, suggestions to estimate model defect uncertainties are presented, which lead to more reasonable evaluated uncertainties. The calculated k_eff of selected criticality benchmarks obtained with these new evaluations agree with each other within their uncertainties despite the different approaches to estimate model defect uncertainties. The one-standard-deviation intervals of k_eff overlap with some of those obtained using ENDF/B-VII.1, albeit their mean values are further away from unity. Spectral indexes for the Jezebel critical assembly calculated with the newly evaluated PFNS agree with the experimental data for selected (n,γ) and (n,f) reactions, and show improvements for high-energy threshold (n,2n) reactions compared to ENDF/B-VII.1.

  4. Evaluation of the 239 Pu prompt fission neutron spectrum induced by neutrons of 500 keV and associated covariances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neudecker, D.; Talou, P.; Kawano, T.

    2015-08-01

    We present evaluations of the prompt fission neutron spectrum (PFNS) of ²³⁹Pu induced by 500 keV neutrons, and associated covariances. In a previous evaluation by Talou et al. (2010), surprisingly low evaluated uncertainties were obtained, partly due to simplifying assumptions in the quantification of uncertainties from experiment and model. Therefore, special emphasis is placed here on a thorough uncertainty quantification of experimental data and of the Los Alamos model predicted values entering the evaluation. In addition, the Los Alamos model was extended and an evaluation technique was employed that takes into account the qualitative differences between normalized model predicted values and experimental shape data. These improvements lead to changes in the evaluated PFNS and overall larger evaluated uncertainties than in the previous work. However, these evaluated uncertainties are still smaller than those obtained in a statistical analysis using experimental information only, due to strong model correlations. Hence, suggestions to estimate model defect uncertainties are presented, which lead to more reasonable evaluated uncertainties. The calculated k_eff of selected criticality benchmarks obtained with these new evaluations agree with each other within their uncertainties despite the different approaches to estimate model defect uncertainties. The one-standard-deviation intervals of k_eff overlap with some of those obtained using ENDF/B-VII.1, albeit their mean values are further away from unity. Spectral indexes for the Jezebel critical assembly calculated with the newly evaluated PFNS agree with the experimental data for selected (n,γ) and (n,f) reactions, and show improvements for high-energy threshold (n,2n) reactions compared to ENDF/B-VII.1. © 2015 Elsevier B.V. All rights reserved.

  5. Monte Carlo uncertainty analysis of dose estimates in radiochromic film dosimetry with single-channel and multichannel algorithms.

    PubMed

    Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio

    2018-03-01

    To provide a multi-stage model for calculating uncertainty in radiochromic film dosimetry with Monte Carlo techniques, applied to both single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 film were exposed in two different Varian linacs and read with an EPSON V800 flatbed scanner. Monte Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis, such as the standard deviation and bias, are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. In addition, another calibration film was read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis was carried out with the four images. The dose estimates of single-channel and multichannel algorithms show Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. A multi-stage model has been presented. With the aid of this model and the use of Monte Carlo techniques, the uncertainties of dose estimates for single-channel and multichannel algorithms are estimated. The application of the model together with Monte Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
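
    The core Monte Carlo idea can be sketched briefly: sample the input quantities from assumed probability distributions, push each sample through a calibration function, and characterize the resulting dose distribution numerically. The netOD-to-dose calibration form and all numbers below are placeholders for illustration, not the paper's multi-stage single-/multichannel model.

        import numpy as np

        rng = np.random.default_rng(seed=1)

        def dose_from_net_od(net_od, a=10.0, b=35.0):
            """Placeholder single-channel calibration: dose = a*netOD + b*netOD**2 (Gy)."""
            return a * net_od + b * net_od**2

        def mc_dose_distribution(pv_irradiated, pv_unirradiated, u_pv, n=100_000):
            """Propagate scanner pixel-value noise (assumed Gaussian with std u_pv) into dose."""
            pv_i = rng.normal(pv_irradiated, u_pv, n)
            pv_0 = rng.normal(pv_unirradiated, u_pv, n)
            net_od = np.log10(pv_0 / pv_i)
            return dose_from_net_od(net_od)     # numerical representation of the dose PDF

        doses = mc_dose_distribution(pv_irradiated=32000, pv_unirradiated=45000, u_pv=150)
        print(f"mean = {doses.mean():.2f} Gy, std = {doses.std():.3f} Gy")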

  6. Calculation of the detection limits for radionuclides identified in gamma-ray spectra based on post-processing peak analysis results.

    PubMed

    Korun, M; Vodenik, B; Zorko, B

    2018-03-01

    A new method for calculating the detection limits of gamma-ray spectrometry measurements is presented. The method is applicable to gamma-ray emitters irrespective of the influence of a peaked background, the origin of the background, or overlap with other peaks. For multi-gamma-ray emitters it offers the possibility of calculating a common detection limit corresponding to several peaks. The detection limit is calculated by approximating the dependence of the uncertainty of the indication on its value with a second-order polynomial. In this approach the relation between the input quantities and the detection limit is described by an explicit expression and can be easily investigated. The detection limit is calculated from the data usually provided in the reports of peak-analyzing programs: the peak areas and their uncertainties. As a result, the need to use individual channel contents for calculating the detection limit is bypassed. Copyright © 2017 Elsevier Ltd. All rights reserved.
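
    In the spirit of the approach described (and of ISO 11929-style formulations), the detection limit can be found by a short fixed-point iteration once the uncertainty of the indication is approximated as a second-order polynomial of its value. The polynomial coefficients, coverage factor and detection-limit condition below are illustrative assumptions, not the paper's exact expressions.

        def detection_limit(a, b, c, k=1.645, tol=1e-9, max_iter=100):
            """Uncertainty of the indication approximated as u(y) = a + b*y + c*y**2.
            Decision threshold y* = k*u(0); the detection limit solves y# = y* + k*u(y#)."""
            u = lambda y: a + b * y + c * y * y
            y_star = k * u(0.0)
            y_sharp = 2.0 * y_star          # starting guess
            for _ in range(max_iter):
                y_new = y_star + k * u(y_sharp)
                if abs(y_new - y_sharp) < tol:
                    break
                y_sharp = y_new
            return y_star, y_sharp

        # Illustrative polynomial coefficients (counts)
        print(detection_limit(a=20.0, b=0.02, c=1e-5))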

  7. Neutron Thermal Cross Sections, Westcott Factors, Resonance Integrals, Maxwellian Averaged Cross Sections and Astrophysical Reaction Rates Calculated from the ENDF/B-VII.1, JEFF-3.1.2, JENDL-4.0, ROSFOND-2010, CENDL-3.1 and EAF-2010 Evaluated Data Libraries

    NASA Astrophysics Data System (ADS)

    Pritychenko, B.; Mughabghab, S. F.

    2012-12-01

    We present calculations of neutron thermal cross sections, Westcott factors, resonance integrals, Maxwellian-averaged cross sections and astrophysical reaction rates for 843 ENDF materials using data from the major evaluated nuclear libraries and European activation file. Extensive analysis of newly-evaluated neutron reaction cross sections, neutron covariances, and improvements in data processing techniques motivated us to calculate nuclear industry and neutron physics quantities, produce s-process Maxwellian-averaged cross sections and astrophysical reaction rates, systematically calculate uncertainties, and provide additional insights on currently available neutron-induced reaction data. Nuclear reaction calculations are discussed and new results are presented. Due to space limitations, the present paper contains only calculated Maxwellian-averaged cross sections and their uncertainties. The complete data sets for all results are published in the Brookhaven National Laboratory report.
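
    The Maxwellian-averaged cross section itself is a standard integral, MACS(kT) = (2/sqrt(pi)) (kT)^-2 ∫ σ(E) E exp(-E/kT) dE. The sketch below evaluates it for a toy 1/v cross section so the result can be checked against the analytic value σ_th·sqrt(E_th/kT); real evaluations of course use pointwise cross sections from the evaluated libraries, which are not reproduced here.

        import numpy as np
        from scipy.integrate import quad

        E_TH = 0.0253  # eV, thermal reference energy

        def macs(sigma, kT):
            """Maxwellian-averaged cross section for sigma(E) in barn, kT in eV."""
            integrand = lambda E: sigma(E) * E * np.exp(-E / kT)
            integral, _ = quad(integrand, 0.0, 50.0 * kT, limit=200)
            return 2.0 / np.sqrt(np.pi) * integral / kT**2

        sigma_1_over_v = lambda E, sigma_th=1.0: sigma_th * np.sqrt(E_TH / E)

        kT = 30e3  # eV, the customary 30 keV s-process comparison point
        print(macs(sigma_1_over_v, kT), np.sqrt(E_TH / kT))  # the two values should agree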

  8. Prediction uncertainty and data worth assessment for groundwater transport times in an agricultural catchment

    NASA Astrophysics Data System (ADS)

    Zell, Wesley O.; Culver, Teresa B.; Sanford, Ward E.

    2018-06-01

    Uncertainties about the age of base-flow discharge can have serious implications for the management of degraded environmental systems where subsurface pathways, and the ongoing release of pollutants that accumulated in the subsurface during past decades, dominate the water quality signal. Numerical groundwater models may be used to estimate groundwater return times and base-flow ages and thus predict the time required for stakeholders to see the results of improved agricultural management practices. However, the uncertainty inherent in the relationship between (i) the observations of atmospherically-derived tracers that are required to calibrate such models and (ii) the predictions of system age that the observations inform has not been investigated. For example, few if any studies have assessed the uncertainty of numerically-simulated system ages or evaluated the uncertainty reductions that may result from the expense of collecting additional subsurface tracer data. In this study we combine numerical flow and transport modeling of atmospherically-derived tracers with prediction uncertainty methods to accomplish four objectives. First, we show the relative importance of head, discharge, and tracer information for characterizing response times in a uniquely data-rich catchment that includes 266 age-tracer measurements (SF6, CFCs, and 3H) in addition to long-term monitoring of water levels and stream discharge. Second, we calculate uncertainty intervals for model-simulated base-flow ages using both linear and non-linear methods, and find that the prediction sensitivity vector used by linear first-order second-moment methods results in much larger uncertainties than non-linear Monte Carlo methods operating on the same parameter uncertainty. Third, by combining prediction uncertainty analysis with multiple models of the system, we show that data-worth calculations and monitoring network design are sensitive to variations in the amount of water leaving the system via stream discharge and irrigation withdrawals. Finally, we demonstrate a novel model-averaged computation of potential data worth that can account for these uncertainties in model structure.

  9. Uncertainty Estimate of Surface Irradiances Computed with MODIS-, CALIPSO-, and CloudSat-Derived Cloud and Aerosol Properties

    NASA Astrophysics Data System (ADS)

    Kato, Seiji; Loeb, Norman G.; Rutan, David A.; Rose, Fred G.; Sun-Mack, Sunny; Miller, Walter F.; Chen, Yan

    2012-07-01

    Differences of modeled surface upward and downward longwave and shortwave irradiances are calculated using modeled irradiances computed with active-sensor-derived and passive-sensor-derived cloud and aerosol properties. The irradiance differences are calculated for various temporal and spatial scales: monthly gridded, monthly zonal, monthly global, and annual global. Using the irradiance differences, the uncertainty of surface irradiances is estimated. The uncertainty (1σ) of the annual global surface downward longwave and shortwave is, respectively, 7 W m-2 (out of 345 W m-2) and 4 W m-2 (out of 192 W m-2), after known bias errors are removed. Similarly, the uncertainty of the annual global surface upward longwave and shortwave is, respectively, 3 W m-2 (out of 398 W m-2) and 3 W m-2 (out of 23 W m-2). The uncertainty applies to modeled irradiances computed using cloud properties derived from imagers on a sun-synchronous orbit that covers the globe every day (e.g., the Moderate Resolution Imaging Spectroradiometer) or to modeled irradiances computed for nadir-view-only active sensors on a sun-synchronous orbit, such as the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) and CloudSat. If we assume that longwave and shortwave uncertainties are independent of each other, but that up- and downward components are correlated with each other, the uncertainty in the global annual mean net surface irradiance is 12 W m-2. One-sigma uncertainty bounds of the satellite-based net surface irradiance are 106 W m-2 and 130 W m-2.

  10. Calculating weighted estimates of peak streamflow statistics

    USGS Publications Warehouse

    Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.

    2012-01-01

    According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
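
    The weighting procedure itself is the familiar inverse-variance combination of two independent estimates. A minimal sketch (in practice the weighting is normally applied to the logarithms of the flow quantiles; plain values and illustrative numbers are used here):

        def weighted_estimate(x_site, var_site, x_regr, var_regr):
            """Combine an at-site estimate with a regional-regression estimate of the same statistic."""
            w_site, w_regr = 1.0 / var_site, 1.0 / var_regr
            x_w = (w_site * x_site + w_regr * x_regr) / (w_site + w_regr)
            var_w = 1.0 / (w_site + w_regr)       # variance of the weighted estimate
            return x_w, var_w

        # Illustrative 1-percent AEP flow estimates with standard errors of 150 and 250 m^3/s
        x_w, var_w = weighted_estimate(850.0, 150.0**2, 700.0, 250.0**2)
        print(f"weighted estimate = {x_w:.0f} m^3/s, standard error = {var_w**0.5:.0f} m^3/s")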

  11. Uncertainties in Parameters Estimated with Neural Networks: Application to Strong Gravitational Lensing

    NASA Astrophysics Data System (ADS)

    Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.

    2017-11-01

    In Hezaveh et al. we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational-lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single variational parameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that the application of approximate Bayesian neural networks to astrophysical modeling problems can be a fast alternative to Monte Carlo Markov Chains, allowing orders of magnitude improvement in speed.

  12. Development of a Portable Torque Wrench Tester

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Zhang, Q.; Gou, C.; Su, D.

    2018-03-01

    A portable torque wrench tester (PTWT) with a calibration range from 0.5 Nm to 60 Nm has been developed and evaluated for periodic or on-site calibration of setting-type torque wrenches, indicating-type torque wrenches and hand torque screwdrivers. The PTWT is easy to carry (weighing about 10 kg), simple and efficient to operate, and energy saving thanks to an automatic loading and calibrating system. The relative expanded uncertainty of the torque realized by the PTWT was estimated to be 0.8%, with a coverage factor k = 2. A comparison experiment was performed between the PTWT and a reference torque standard at our laboratory. The consistency between the two devices within the claimed uncertainties was verified.

  13. Improved MICROBASE Product with Uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Meng

    The data set contains four primary microphysical quantities: liquid water content, ice water content, liquid effective radius, and ice effective radius. Bit QC and data-quality QC flags are also calculated. Quantification of uncertainties (incorporating the work of Zhao et al. 2013) is included for all four quantities.

  14. Measurement-based climatology of aerosol direct radiative effect, its sensitivities, and uncertainties from a background southeast US site

    NASA Astrophysics Data System (ADS)

    Sherman, James P.; McComiskey, Allison

    2018-03-01

    Aerosol optical properties measured at Appalachian State University's co-located NASA AERONET and NOAA ESRL aerosol network monitoring sites over a nearly four-year period (June 2012-Feb 2016) are used, along with satellite-based surface reflectance measurements, to study the seasonal variability of diurnally averaged clear sky aerosol direct radiative effect (DRE) and radiative efficiency (RE) at the top-of-atmosphere (TOA) and at the surface. Aerosol chemistry and loading at the Appalachian State site are likely representative of the background southeast US (SE US), home to high summertime aerosol loading and one of only a few regions not to have warmed during the 20th century. This study is the first multi-year ground truth DRE study in the SE US, using aerosol network data products that are often used to validate satellite-based aerosol retrievals. The study is also the first in the SE US to quantify DRE uncertainties and sensitivities to aerosol optical properties and surface reflectance, including their seasonal dependence. Median DRE for the study period is -2.9 W m-2 at the TOA and -6.1 W m-2 at the surface. Monthly median and monthly mean DRE at the TOA (surface) are -1 to -2 W m-2 (-2 to -3 W m-2) during winter months and -5 to -6 W m-2 (-10 W m-2) during summer months. The DRE cycles follow the annual cycle of aerosol optical depth (AOD), which is 9 to 10 times larger in summer than in winter. Aerosol RE is anti-correlated with DRE, with winter values 1.5 to 2 times more negative than summer values. Due to the large seasonal dependence of aerosol DRE and RE, we quantify the sensitivity of DRE to aerosol optical properties and surface reflectance using a calendar day representative of each season (21 December for winter, 21 March for spring, 21 June for summer, and 21 September for fall). We use these sensitivities along with measurement uncertainties of aerosol optical properties and surface reflectance to calculate DRE uncertainties. We also estimate the uncertainty in calculated diurnally averaged DRE due to diurnal aerosol variability. Aerosol DRE at both the TOA and the surface is most sensitive to changes in AOD, followed by single-scattering albedo (ω0). One exception is under the high summertime aerosol loading conditions (AOD ≥ 0.15 at 550 nm), when the sensitivity of TOA DRE to ω0 is comparable to that of AOD. Aerosol DRE is less sensitive to changes in the scattering asymmetry parameter (g) and surface reflectance (R). While DRE sensitivity to AOD varies by only ~25 to 30% with season, DRE sensitivity to ω0, g, and R largely follows the annual AOD cycle at APP, varying by factors of 8 to 15 with season. Since the measurement uncertainties of AOD, ω0, g, and R are comparable at Appalachian State, their relative contributions to DRE uncertainty are largely influenced by their (seasonally dependent) DRE sensitivity values, which suggests that the seasonal dependence of DRE uncertainty must be accounted for. Clear sky aerosol DRE uncertainty at the TOA (surface) due to measurement uncertainties ranges from 0.45 W m-2 (0.75 W m-2) for December to 1.1 W m-2 (1.6 W m-2) for June. Expressed as a fraction of DRE computed using monthly median aerosol optical properties and surface reflectance, the DRE uncertainties at the TOA (surface) are 20 to 24% (15 to 22%) for March, June, and September and 49% (50%) for December. The relatively low DRE uncertainties are largely due to the low uncertainty in AOD measured by AERONET.
Use of satellite-based AOD measurements by MODIS in the DRE calculations increases the DRE uncertainties by a factor of 2 to 5, and the DRE uncertainties are dominated by the AOD uncertainty for all seasons. Diurnal variability in AOD (and to a lesser extent g) contributes to uncertainties in DRE calculated using daily-averaged aerosol optical properties that are slightly larger (by ~20 to 30%) than the DRE uncertainties due to measurement uncertainties during summer and fall, with comparable uncertainties during winter and spring.

  15. Potential uncertainty reduction in model-averaged benchmark dose estimates informed by an additional dose study.

    PubMed

    Shao, Kan; Small, Mitchell J

    2011-10-01

    A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose-response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose-response models (logistic and quantal-linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5-10%. The results demonstrate that dose selection for studies that subsequently inform dose-response models can benefit from consideration of how these models will be fit, combined, and interpreted. © 2011 Society for Risk Analysis.

  16. Evaluation and attribution of OCO-2 XCO2 uncertainties

    NASA Astrophysics Data System (ADS)

    Worden, John R.; Doran, Gary; Kulawik, Susan; Eldering, Annmarie; Crisp, David; Frankenberg, Christian; O'Dell, Chris; Bowman, Kevin

    2017-07-01

    Evaluating and attributing uncertainties in total column atmospheric CO2 measurements (XCO2) from the OCO-2 instrument is critical for testing hypotheses related to the underlying processes controlling XCO2 and for developing the quality flags needed to choose those measurements that are usable for carbon cycle science. Here we test the reported uncertainties of version 7 OCO-2 XCO2 measurements by examining variations of the XCO2 measurements and their calculated uncertainties within small regions (~100 km × 10.5 km) in which natural CO2 variability is expected to be small relative to variations imparted by noise or interferences. Over 39 000 of these small neighborhoods, comprising approximately 190 observations each, are used for this analysis. We find that a typical ocean measurement has a precision and accuracy of 0.35 and 0.24 ppm, respectively, for calculated precisions larger than ~0.25 ppm. These values are approximately consistent with the calculated errors of 0.33 and 0.14 ppm for the noise and interference error, assuming that the accuracy is bounded by the calculated interference error. The actual precision for ocean data becomes worse as the signal-to-noise increases or the calculated precision decreases below 0.25 ppm, for reasons that are not well understood. A typical land measurement, both nadir and glint, is found to have a precision and accuracy of approximately 0.75 and 0.65 ppm, respectively, as compared to the calculated precision and accuracy of approximately 0.36 and 0.2 ppm. The difference in accuracy between ocean and land suggests that the accuracy of XCO2 data is likely related to interferences such as aerosols or surface albedo, which vary less over ocean than over land. The accuracy as derived here is also likely a lower bound, as it does not account for possible systematic biases between the regions used in this analysis.

  17. Cloud fraction at the ARM SGP site: reducing uncertainty with self-organizing maps

    NASA Astrophysics Data System (ADS)

    Kennedy, Aaron D.; Dong, Xiquan; Xi, Baike

    2016-04-01

    Instrument downtime leads to uncertainty in the monthly and annual record of cloud fraction (CF), making it difficult to perform time series analyses of cloud properties and perform detailed evaluations of model simulations. As cloud occurrence is partially controlled by the large-scale atmospheric environment, this knowledge is used to reduce uncertainties in the instrument record. Synoptic patterns diagnosed from the North American Regional Reanalysis (NARR) during the period 1997-2010 are classified using a competitive neural network known as the self-organizing map (SOM). The classified synoptic states are then compared to the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) instrument record to determine the expected CF. A number of SOMs are tested to understand how the number of classes and the period of classifications impact the relationship between classified states and CFs. Bootstrapping is utilized to quantify the uncertainty of the instrument record when statistical information from the SOM is included. Although all SOMs significantly reduce the uncertainty of the CF record calculated in Kennedy et al. (Theor Appl Climatol 115:91-105, 2014), SOMs with a large number of classes and separated by month are required to produce the lowest uncertainty and best agreement with the annual cycle of CF. This result may be due to a manifestation of seasonally dependent biases in NARR. With use of the SOMs, the average uncertainty in monthly CF is reduced in half from the values calculated in Kennedy et al. (Theor Appl Climatol 115:91-105, 2014).

  18. Decision making from economic and signal detection perspectives: development of an integrated framework

    PubMed Central

    Lynn, Spencer K.; Wormwood, Jolie B.; Barrett, Lisa F.; Quigley, Karen S.

    2015-01-01

    Behavior is comprised of decisions made from moment to moment (i.e., to respond one way or another). Often, the decision maker cannot be certain of the value to be accrued from the decision (i.e., the outcome value). Decisions made under outcome value uncertainty form the basis of the economic framework of decision making. Behavior is also based on perception—perception of the external physical world and of the internal bodily milieu, which both provide cues that guide decision making. These perceptual signals are also often uncertain: another person's scowling facial expression may indicate threat or intense concentration, alternatives that require different responses from the perceiver. Decisions made under perceptual uncertainty form the basis of the signals framework of decision making. Traditional behavioral economic approaches to decision making focus on the uncertainty that comes from variability in possible outcome values, and typically ignore the influence of perceptual uncertainty. Conversely, traditional signal detection approaches to decision making focus on the uncertainty that arises from variability in perceptual signals and typically ignore the influence of outcome value uncertainty. Here, we compare and contrast the economic and signals frameworks that guide research in decision making, with the aim of promoting their integration. We show that an integrated framework can expand our ability to understand a wider variety of decision-making behaviors, in particular the complexly determined real-world decisions we all make every day. PMID:26217275

  19. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.

  20. Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements

    NASA Astrophysics Data System (ADS)

    Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.

    2012-12-01

    To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
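
    Two of the averaging techniques compared above can be sketched directly: area averaging weights each probe reading by its associated annulus area, and mass averaging weights it by the local mass flow through that area. Work averaging is omitted for brevity, and all values below are illustrative.

        import numpy as np

        def area_average(values, areas):
            return np.sum(values * areas) / np.sum(areas)

        def mass_average(values, mass_fluxes, areas):
            mdot = mass_fluxes * areas                      # local mass flow per probe element
            return np.sum(values * mdot) / np.sum(mdot)

        p_total = np.array([201.0, 204.0, 206.0, 205.0, 199.0])   # kPa, spanwise rake probes
        areas = np.array([0.8, 1.0, 1.1, 1.0, 0.9])               # relative element areas
        rho_u = np.array([95.0, 110.0, 118.0, 112.0, 90.0])       # kg/(m^2 s), local mass flux

        print(area_average(p_total, areas), mass_average(p_total, rho_u, areas))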

  1. Uncertainty quantification of surface-water/groundwater exchange estimates in large wetland systems using Python

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; Metz, P. A.

    2014-12-01

    Most watershed studies include observation-based water budget analyses to develop first-order estimates of significant flow terms. Surface-water/groundwater (SWGW) exchange is typically assumed to be equal to the residual of the sum of inflows and outflows in a watershed. These estimates of SWGW exchange, however, are highly uncertain as a result of the propagation of uncertainty inherent in the calculation or processing of the other terms of the water budget, such as stage-area-volume relations, and uncertainties associated with land-cover based evapotranspiration (ET) rate estimates. Furthermore, the uncertainty of estimated SWGW exchanges can be magnified in large wetland systems that transition from dry to wet during wet periods. Although it is well understood that observation-based estimates of SWGW exchange are uncertain it is uncommon for the uncertainty of these estimates to be directly quantified. High-level programming languages like Python can greatly reduce the effort required to (1) quantify the uncertainty of estimated SWGW exchange in large wetland systems and (2) evaluate how different approaches for partitioning land-cover data in a watershed may affect the water-budget uncertainty. We have used Python with the Numpy, Scipy.stats, and pyDOE packages to implement an unconstrained Monte Carlo approach with Latin Hypercube sampling to quantify the uncertainty of monthly estimates of SWGW exchange in the Floral City watershed of the Tsala Apopka wetland system in west-central Florida, USA. Possible sources of uncertainty in the water budget analysis include rainfall, ET, canal discharge, and land/bathymetric surface elevations. Each of these input variables was assigned a probability distribution based on observation error or spanning the range of probable values. The Monte Carlo integration process exposes the uncertainties in land-cover based ET rate estimates as the dominant contributor to the uncertainty in SWGW exchange estimates. We will discuss the uncertainty of SWGW exchange estimates using an ET model that partitions the watershed into open water and wetland land-cover types. We will also discuss the uncertainty of SWGW exchange estimates calculated using ET models partitioned into additional land-cover types.
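
    A minimal sketch of the general approach: treat the monthly SWGW exchange as the residual of a simplified water budget and propagate input uncertainty with Latin Hypercube sampling. The study used the pyDOE package; SciPy's qmc module is used below for the same purpose, and the budget terms, units and distribution bounds are simplified placeholders rather than the Floral City values.

        import numpy as np
        from scipy.stats import qmc

        n_samples = 10_000

        # (low, high) bounds spanning an assumed probable range for each input, mm/month
        bounds = {
            "rain":        (80.0, 120.0),
            "et":          (60.0, 110.0),   # land-cover-based ET dominates the uncertainty
            "canal_out":   (10.0, 20.0),
            "storage_chg": (-15.0, 15.0),
        }

        sampler = qmc.LatinHypercube(d=len(bounds), seed=42)
        unit = sampler.random(n_samples)                           # samples in [0, 1)^d
        lows = np.array([b[0] for b in bounds.values()])
        highs = np.array([b[1] for b in bounds.values()])
        rain, et, canal_out, storage_chg = qmc.scale(unit, lows, highs).T

        # Residual budget: positive values = net groundwater inflow to the surface-water system
        swgw = storage_chg + et + canal_out - rain

        print(f"median {np.median(swgw):.1f} mm/month, "
              f"95% interval [{np.percentile(swgw, 2.5):.1f}, {np.percentile(swgw, 97.5):.1f}]")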

  2. A novel approach to evaluate soil heat flux calculation: An analytical review of nine methods: Soil Heat Flux Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Zhongming; Russell, Eric S.; Missik, Justine E. C.

    We evaluated nine methods of soil heat flux calculation using field observations. All nine methods underestimated the soil heat flux by at least 19%. This large underestimation is mainly caused by uncertainties in soil thermal properties.

  3. Mathematical modeling of a survey-meter used to measure radioactivity in human thyroids: Monte Carlo calculations of the device response and uncertainties

    PubMed Central

    Khrutchinsky, Arkady; Drozdovitch, Vladimir; Kutsen, Semion; Minenko, Victor; Khrouch, Valeri; Luckyanov, Nickolas; Voillequé, Paul; Bouville, André

    2012-01-01

    This paper presents results of Monte Carlo modeling of the SRP-68-01 survey meter used to measure exposure rates near the thyroid glands of persons exposed to radioactivity following the Chernobyl accident. This device was not designed to measure radioactivity in humans. To estimate the uncertainty associated with the measurement results, a mathematical model of the SRP-68-01 survey meter was developed and verified. A Monte Carlo method of numerical simulation of radiation transport has been used to calculate the calibration factor for the device and evaluate its uncertainty. The SRP-68-01 survey meter scale coefficient, an important characteristic of the device, was also estimated in this study. The calibration factors of the survey meter were calculated for 131I, 132I, 133I, and 135I content in the thyroid gland for six age groups of population: newborns; children aged 1 yr, 5 yr, 10 yr, 15 yr; and adults. A realistic scenario of direct thyroid measurements with an “extended” neck was used to calculate the calibration factors for newborns and one-year-olds. Uncertainties in the device calibration factors due to variability of the device scale coefficient, variability in thyroid mass and statistical uncertainty of Monte Carlo method were evaluated. Relative uncertainties in the calibration factor estimates were found to be from 0.06 for children aged 1 yr to 0.1 for 10-yr and 15-yr children. The positioning errors of the detector during measurements deviate mainly in one direction from the estimated calibration factors. Deviations of the device position from the proper geometry of measurements were found to lead to overestimation of the calibration factor by up to 24 percent for adults and up to 60 percent for 1-yr children. The results of this study improve the estimates of 131I thyroidal content and, consequently, thyroid dose estimates that are derived from direct thyroid measurements performed in Belarus shortly after the Chernobyl accident. PMID:22245289

  4. Detailed Uncertainty Analysis of the ZEM-3 Measurement System

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    The measurement of Seebeck coefficient and electrical resistivity is critical to the investigation of all thermoelectric systems. It follows that the measurement uncertainty must be well understood in order to report ZT values that are accurate and trustworthy. A detailed uncertainty analysis of the ZEM-3 measurement system has been performed. The uncertainty analysis calculates the error in the electrical resistivity measurement arising from sample geometry tolerance, probe geometry tolerance, statistical error, and multi-meter uncertainty. The uncertainty of the Seebeck coefficient includes probe wire correction factors, statistical error, multi-meter uncertainty, and, most importantly, the cold-finger effect. The cold-finger effect plagues all potentiometric (four-probe) Seebeck measurement systems, as heat parasitically transfers through the thermocouple probes. The effect leads to an asymmetric over-estimation of the Seebeck coefficient. A thermal finite element analysis allows the phenomenon to be quantified and provides an estimate of the uncertainty of the Seebeck coefficient. The thermoelectric power factor was found to have an uncertainty of +9/-14% at high temperature and ±9% near room temperature.

  5. Initial Risk Analysis and Decision Making Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel, David W.

    2012-02-01

    Commercialization of new carbon capture simulation initiative (CCSI) technology will include two key elements of risk management, namely, technical risk (will process and plant performance be effective, safe, and reliable) and enterprise risk (can project losses and costs be controlled within the constraints of market demand to maintain profitability and investor confidence). Both of these elements of risk are incorporated into the risk analysis subtask of Task 7. Thus far, this subtask has developed a prototype demonstration tool that quantifies risk based on the expected profitability of expenditures when retrofitting carbon capture technology on a stylized 650 MW pulverized coal electric power generator. The prototype is based on the selection of specific technical and financial factors believed to be important determinants of the expected profitability of carbon capture, subject to uncertainty. The uncertainty surrounding the technical performance and financial variables selected thus far is propagated in a model that calculates the expected profitability of investments in carbon capture and measures risk in terms of variability in expected net returns from these investments. Given the preliminary nature of the results of this prototype, additional work is required to expand the scope of the model to include additional risk factors, additional information on extant and proposed risk factors, the results of a qualitative risk factor elicitation process, and feedback from utilities and other interested parties involved in the carbon capture project. Additional information on proposed distributions of these risk factors will be integrated into a commercial implementation framework for the purpose of a comparative technology investment analysis.

  6. Experimental validation of photon-heating calculation for the Jules Horowitz Reactor

    NASA Astrophysics Data System (ADS)

    Lemaire, M.; Vaglio-Gaudard, C.; Lyoussi, A.; Reynard-Carette, C.; Di Salvo, J.; Gruel, A.

    2015-04-01

    The Jules Horowitz Reactor (JHR) is the next Material-Testing Reactor (MTR) under construction at CEA Cadarache. High values of photon heating (up to 20 W/g) are expected in this MTR. As temperature is a key parameter for material behavior, the accuracy of photon-heating calculation in the different JHR structures is critical for JHR safety and performance. In order to experimentally validate the calculation of photon heating in the JHR, an integral experiment called AMMON was carried out in the critical mock-up EOLE at CEA Cadarache to help ascertain the calculation bias and its associated uncertainty. Nuclear heating was measured in different JHR-representative AMMON core configurations using ThermoLuminescent Detectors (TLDs) and Optically Stimulated Luminescent Detectors (OSLDs). This article presents the interpretation methodology and the calculation/experiment (C/E) ratios for all the TLD and OSLD measurements conducted in AMMON. It then deals with the representativeness of the AMMON experiment with regard to the JHR and establishes the calculation biases (and their associated uncertainties) applicable to photon-heating calculation for the JHR.

  7. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    USDA-ARS?s Scientific Manuscript database

    Cumulative nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. This study used an agroecosystems simulation model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2...

  8. Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs

    NASA Astrophysics Data System (ADS)

    Chitsazan, N.; Tsai, F. T.

    2012-12-01

    Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of these uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer framework, where each layer targets a source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models in each level. To account for uncertainty, we employ chance-constrained (CC) programming for stochastic remediation design. Chance-constrained programming has traditionally been used to account for parameter uncertainty. Recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance-constrained programming along with HBMA can provide a rigorous tool for groundwater remediation designs under uncertainty. In this research, the HBMA-CC approach was applied to a remediation design in a synthetic aquifer. The design was to develop a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation, and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances, for two reasons. First, with the single best model, variances that stem from uncertainty in the model structure are ignored. Second, using the best model when its model weight is not dominant may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow a remediation design to be developed with a desired reliability. However, with the single best model, the calculated reliability will differ from the desired reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed chance-constrained optimization along with the HBMA framework to find the optimal location and pumpage for the scavenger well. The results showed that, using models at different levels in the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate was altered. Thus, we concluded that the optimal pumping rate was sensitive to the prediction variance. Also, the prediction variance changed with the extraction rate: a very high extraction rate causes the prediction variances of chloride concentration at the production wells to approach zero regardless of which HBMA model is used.

  9. Limitations of analytical dose calculations for small field proton radiosurgery.

    PubMed

    Geng, Changran; Daartz, Juliane; Lam-Tin-Cheung, Kimberley; Bussiere, Marc; Shih, Helen A; Paganetti, Harald; Schuemann, Jan

    2017-01-07

    The purpose of the work was to evaluate the dosimetric uncertainties of an analytical dose calculation engine and the impact on treatment plans using small fields in intracranial proton stereotactic radiosurgery (PSRS) for a gantry-based double scattering system. Fifty patients were evaluated, 10 for each of 5 diagnostic indications: arteriovenous malformation (AVM), acoustic neuroma (AN), meningioma (MGM), metastasis (METS), and pituitary adenoma (PIT). Treatment plans followed standard prescription and optimization procedures for PSRS. We performed comparisons between delivered dose distributions, determined by Monte Carlo (MC) simulations, and those calculated with the analytical dose calculation algorithm (ADC) used in our current treatment planning system, in terms of dose volume histogram parameters and beam range distributions. Results show that the difference in the dose to 95% of the target (D95) is within 6% when applying measured field size output corrections for AN, MGM, and PIT. However, for AVM and METS, the differences can be as great as 10% and 12%, respectively. Normalizing the MC dose to the ADC dose based on the dose of voxels in a central area of the target reduces the difference in D95 to within 6% for all sites. The generally applied margin to cover uncertainties in range (3.5% of the prescribed range + 1 mm) is not sufficient to cover the range uncertainty for ADC in all cases, especially for patients with high tissue heterogeneity. The root mean square of the R90 difference, the difference in the position of the distal falloff to 90% of the prescribed dose, is affected by several factors, especially patient geometry heterogeneity, modulation, and field diameter. In conclusion, implementation of Monte Carlo dose calculation techniques in the clinic can reduce the uncertainty of the target dose for proton stereotactic radiosurgery. If MC is not available for treatment planning, using MC dose distributions to adjust the delivered dose levels can also reduce uncertainties below 3% for mean target dose and 6% for the D95.

  10. VAVUQ, Python and Matlab freeware for Verification and Validation, Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Courtney, J. E.; Zamani, K.; Bombardelli, F. A.; Fleenor, W. E.

    2015-12-01

    A package of scripts is presented for automated Verification and Validation (V&V) and Uncertainty Quantification (UQ) for engineering codes that approximate Partial Differential Equations (PDEs). The code post-processes model results to produce V&V and UQ information. This information can be used to assess model performance. Automated information on code performance can allow for a systematic methodology to assess the quality of model approximations. The software implements common and accepted code verification schemes. The software uses the Method of Manufactured Solutions (MMS), the Method of Exact Solutions (MES), Cross-Code Verification, and Richardson Extrapolation (RE) for solution (calculation) verification. It also includes common statistical measures that can be used for model skill assessment. Complete RE can be conducted for complex geometries by implementing high-order non-oscillating numerical interpolation schemes within the software. Model approximation uncertainty is quantified by calculating lower and upper bounds of numerical error from the RE results. The software is also able to calculate the Grid Convergence Index (GCI), and to handle adaptive meshes and models that implement mixed-order schemes. Four examples are provided to demonstrate the use of the software for code and solution verification, model validation, and uncertainty quantification. The software is used for code verification of a mixed-order compact difference heat transport solver; the solution verification of a 2D shallow-water-wave solver for tidal flow modeling in estuaries; the model validation of a two-phase flow computation in a hydraulic jump compared to experimental data; and numerical uncertainty quantification for 3D CFD modeling of the flow patterns in a Gust erosion chamber.
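
    A minimal sketch of the Richardson extrapolation and Grid Convergence Index calculations mentioned above, assuming three systematically refined grids with a constant refinement ratio (illustrative values; not the VAVUQ implementation):

        import numpy as np

        def observed_order(f_coarse, f_medium, f_fine, r):
            """Observed order of accuracy p for a constant grid-refinement ratio r."""
            return np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)

        def gci_fine(f_medium, f_fine, r, p, fs=1.25):
            """Grid Convergence Index on the fine grid (Roache's safety factor fs = 1.25 for three grids)."""
            return fs * abs((f_medium - f_fine) / f_fine) / (r**p - 1.0)

        # Illustrative values of a scalar quantity of interest on coarse/medium/fine grids
        f3, f2, f1, r = 0.9500, 0.9700, 0.9750, 2.0
        p = observed_order(f3, f2, f1, r)
        f_re = f1 + (f1 - f2) / (r**p - 1.0)  # Richardson-extrapolated estimate
        print("p = %.2f, extrapolated = %.4f, GCI_fine = %.2f%%" % (p, f_re, 100 * gci_fine(f2, f1, r, p)))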

  11. A Probabilistic Approach to Quantify the Impact of Uncertainty Propagation in Musculoskeletal Simulations

    PubMed Central

    Myers, Casey A.; Laz, Peter J.; Shelburne, Kevin B.; Davidson, Bradley S.

    2015-01-01

    Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations was performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by using the output distributions from one stage as the input distributions to subsequent stages. Confidence bounds (5–95%) and the sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than any other propagated source. Sensitivity to specific body segment parameters and muscle parameters was linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions. PMID:25404535

  12. An approach for estimating measurement uncertainty in medical laboratories using data from long-term quality control and external quality assessment schemes.

    PubMed

    Padoan, Andrea; Antonelli, Giorgia; Aita, Ada; Sciacovelli, Laura; Plebani, Mario

    2017-10-26

    The present study was prompted by the ISO 15189 requirement that medical laboratories should estimate measurement uncertainty (MU). The method used to estimate MU included: (a) identification of quantitative tests, (b) classification of tests in relation to their clinical purpose, and (c) identification of criteria to estimate the different MU components. Imprecision was estimated using long-term internal quality control (IQC) results of the year 2016, while external quality assessment scheme (EQA) results obtained in the period 2015-2016 were used to estimate bias and bias uncertainty. A total of 263 measurement procedures (MPs) were analyzed. On the basis of test purpose, in 51 MPs imprecision alone was used to estimate MU; among the remaining MPs, the bias component was not estimable for 22 MPs because EQA results did not provide reliable statistics. For a total of 28 MPs, two or more MU values were calculated on the basis of analyte concentration levels. Overall, the results showed that the uncertainty of bias is a minor factor contributing to MU, the bias component itself being the most relevant contributor for all the studied sample matrices. The model chosen for MU estimation allowed us to derive a standardized approach for bias calculation, with respect to the fitness-for-purpose of test results. Measurement uncertainty estimation could readily be implemented in medical laboratories as a useful tool in monitoring the analytical quality of test results, since MU is calculated by combining long-term IQC imprecision with bias derived from EQA results.
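
    The approach described above can be illustrated with a common top-down formulation that combines long-term IQC imprecision with bias derived from EQA results (e.g., the Nordtest model; the exact equations used in this study may differ):

        import math

        def expanded_uncertainty(cv_iqc, eqa_biases, u_ref, k=2.0):
            """Top-down MU sketch: combine within-lab imprecision (long-term IQC CV, %)
            with bias estimated from EQA results (%); u_ref is the uncertainty of the
            EQA assigned values (%); k is the coverage factor."""
            rms_bias = math.sqrt(sum(b * b for b in eqa_biases) / len(eqa_biases))
            u_bias = math.sqrt(rms_bias**2 + u_ref**2)
            u_combined = math.sqrt(cv_iqc**2 + u_bias**2)
            return k * u_combined

        # Hypothetical numbers for one measurement procedure
        print(expanded_uncertainty(cv_iqc=2.1, eqa_biases=[1.0, -1.5, 0.8], u_ref=0.9))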

  13. Quantum corrections to newtonian potential and generalized uncertainty principle

    NASA Astrophysics Data System (ADS)

    Scardigli, Fabio; Lambiase, Gaetano; Vagenas, Elias

    2017-08-01

    We use the leading quantum corrections to the Newtonian potential to compute the deformation parameter of the generalized uncertainty principle. By assuming only General Relativity as the theory of gravitation, and the thermal nature of the GUP corrections to the Hawking spectrum, our calculation gives, to first order, a specific numerical result. We briefly discuss the physical meaning of this value and compare it with previously obtained bounds on the generalized uncertainty principle deformation parameter.

  14. Optical Model and Cross Section Uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman,M.W.; Pigni, M.T.; Dietrich, F.S.

    2009-10-05

    Distinct minima and maxima in the neutron total cross section uncertainties were observed in model calculations using a spherical optical potential. We found this oscillating structure to be a general feature of quantum mechanical wave scattering. Specifically, we analyzed neutron interaction with 56Fe from 1 keV up to 65 MeV and investigated the physical origin of the minima. We discuss their potential importance for practical applications as well as the implications for the uncertainties in total and absorption cross sections.

  15. Independent Qualification of the CIAU Tool Based on the Uncertainty Estimate in the Prediction of Angra 1 NPP Inadvertent Load Rejection Transient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borges, Ronaldo C.; D'Auria, Francesco; Alvim, Antonio Carlos M.

    2002-07-01

    The Code with - the capability of - Internal Assessment of Uncertainty (CIAU) is a tool proposed by the 'Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione (DIMNP)' of the University of Pisa. Other institutions, including the nuclear regulatory body of Brazil, 'Comissao Nacional de Energia Nuclear', contributed to the development of the tool. The CIAU aims at providing the currently available Relap5/Mod3.2 system code with the integrated capability of performing not only relevant transient calculations but also the related estimates of uncertainty bands. The Uncertainty Methodology based on Accuracy Extrapolation (UMAE) is used to characterize the uncertainty in the prediction of system code calculations for light water reactors and is internally coupled with the above system code. Following an overview of the CIAU development, the present paper deals with the independent qualification of the tool. The qualification test is performed by estimating the uncertainty bands that should envelope the prediction of the Angra 1 NPP transient RES-11.99, originated by an inadvertent complete load rejection that caused the reactor scram when the unit was operating at 99% of nominal power. The current limitation of the 'error' database implemented in the CIAU prevented a final demonstration of the qualification. However, all the steps of the qualification process are demonstrated. (authors)

  16. Improvement of Modeling HTGR Neutron Physics by Uncertainty Analysis with the Use of Cross-Section Covariance Information

    NASA Astrophysics Data System (ADS)

    Boyarinov, V. F.; Grol, A. V.; Fomichenko, P. A.; Ternovykh, M. Yu

    2017-01-01

    This work is aimed at improving HTGR neutron physics design calculations by applying uncertainty analysis with the use of cross-section covariance information. Methodology and codes for the preparation of multigroup libraries of covariance information for individual isotopes from the basic 44-group library of the SCALE-6 code system were developed. A 69-group library of covariance information in a special format for the main isotopes and elements typical of high temperature gas cooled reactors (HTGR) was generated. This library can be used for estimating uncertainties associated with nuclear data in the analysis of HTGR neutron physics with design codes. As an example, calculations of one-group cross-section uncertainties for fission and capture reactions for the main isotopes of the MHTGR-350 benchmark, as well as uncertainties of the multiplication factor (k∞) for the MHTGR-350 fuel compact cell model and fuel block model, were performed. These uncertainties were estimated by the developed technology with the use of the WIMS-D code and modules of the SCALE-6 code system, namely TSUNAMI, KENO-VI and SAMS. The eight most important reactions on isotopes for the MHTGR-350 benchmark were identified, namely: 10B(capt), 238U(n,γ), ν5, 235U(n,γ), 238U(el), natC(el), 235U(fiss)-235U(n,γ), 235U(fiss).

  17. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses--Criticality (keff) Predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scaglione, John M; Mueller, Don; Wagner, John C

    2011-01-01

    One of the most significant remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of the depletion and criticality calculations used in the safety evaluation - in particular, the availability and use of applicable measured data to support validation, especially for fission products. Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of a clear technical basis or approach for use of the data. U.S. Nuclear Regulatory Commission (NRC) staff have noted that the rationale for restricting their Interim Staff Guidance on burnup credit (ISG-8) to actinide-only is based largely on the lack of clear, definitive experiments that can be used to estimate the bias and uncertainty for computational analyses associated with using burnup credit. To address the issue of validation, the NRC initiated a project with Oak Ridge National Laboratory to (1) develop and establish a technically sound validation approach (both depletion and criticality) for commercial spent nuclear fuel (SNF) criticality safety evaluations based on best-available data and methods and (2) apply the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The purpose of this paper is to describe the criticality (k_eff) validation approach and the resulting observations and recommendations. Validation of the isotopic composition (depletion) calculations is addressed in a companion paper at this conference. For criticality validation, the approach is to utilize (1) available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion (HTC) program to support validation of the principal actinides and (2) calculated sensitivities, nuclear data uncertainties, and the limited available fission product LCE data to predict and verify individual biases for relevant minor actinides and fission products. This paper (1) provides a detailed description of the approach and its technical bases, (2) describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models to demonstrate its usage and applicability, (3) provides reference bias results based on the prerelease SCALE 6.1 code package and ENDF/B-VII nuclear cross-section data, and (4) provides recommendations for application of the results and methods to other code and data packages.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, M; Lee, V; Wong, M

    Purpose: Following the method of in-phantom measurement of the reference air kerma rate (Ka) at 100 cm and the absorbed dose rate to water (Dw1) at 1 cm of a high-dose-rate 192Ir brachytherapy source using a 60Co absorbed-dose-to-water calibrated (ND,w,60Co) ionization chamber (IC), we experimentally determined the in-phantom correction factors (kglob) of the PTW30013 (PTW, Freiburg, Germany) IC by comparison with the Monte Carlo (MC)-calculated kglob of the PTW30016 IC. Methods: The Dw1 formalism of the in-phantom measurement is Dw1 = M*ND,w,60Co*(kglob)Dw1, where M is the collected charge and (kglob)Dw1 is the in-phantom Dw1 correction factor. Similarly, Ka is determined by Ka = M*ND,w,60Co*(kglob)Ka, where (kglob)Ka is the in-phantom Ka correction factor. Two PTW30013 thimble ICs and a PTW30016 IC having an ND,w,60Co from the German primary standard laboratory (PTB) were simultaneously exposed to the microSelectron 192Ir v2 source at 8 cm in a PMMA phantom. A reference well chamber (PTW33004) with a PTB transfer Ka calibration (Nka) was used for comparing the in-phantom measurements to derive the experimental (kglob)Ka factors. We determined the experimental (kglob)Dw1 of the PTW30013 by comparing the PTW30016 measurements with the MC-calculated (kglob)Dw1. Results: Ka results of the PTW30016 based on ND,w,60Co and the MC-calculated (kglob)Ka differ from the well chamber results based on Nka by 1.6% and from the manufacturer by 1.0%. Experimental (kglob)Ka factors for the PTW30016 and the two PTW30013 are 0.00683, 0.00681 and 0.00679, and vary <0.5% with 1 mm source positioning uncertainty. Experimental (kglob)Dw1 of the PTW30013 ICs are 75.3 and 75.6, and differ by 1.6% from the conversion by dose rate constant from AAPM Report 229. Conclusion: The 1.7% difference between MC and experimental (kglob)Ka for the PTW30016 IC is within the PTB 2.5% expanded uncertainty of the Ka calibration standard. Using a single IC with ND,w,60Co to calibrate the brachytherapy source and the dose output in external radiotherapy is feasible. MC validation of the PTW30013 (kglob)Dw1 is warranted.
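
    The in-phantom formalism quoted above loses its sub- and superscripts in plain text; restated in explicit notation (the exact typesetting of the symbols is our assumption), it reads:

        D_{w,1} = M \cdot N_{D,w}^{^{60}\mathrm{Co}} \cdot \bigl(k_{\mathrm{glob}}\bigr)_{D_{w,1}},
        \qquad
        K_a = M \cdot N_{D,w}^{^{60}\mathrm{Co}} \cdot \bigl(k_{\mathrm{glob}}\bigr)_{K_a}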

  19. Re-approaching global iodine emissions: A novel parameterisation for sea-surface iodide concentrations using a machine learning approach

    NASA Astrophysics Data System (ADS)

    Sherwen, T.; Evans, M. J.; Chance, R.; Tinel, L.; Carpenter, L.

    2017-12-01

    Halogens (Cl, Br, I) in the troposphere have been shown to play a profound role in determining the concentrations of ozone and OH. Iodine, which is essentially oceanic in source, exerts its largest impacts on composition in both the marine boundary layer and the upper troposphere. This chemistry has only recently been implemented into global models, and significant uncertainties remain, particularly regarding the magnitude of iodine emissions. Iodine emissions are dominated by the inorganic oxidation of iodide in the sea surface by ozone, which leads to the release of gaseous inorganic iodine (HOI, I2). Critical for the calculation of these fluxes is the sea-surface concentration of iodide, which is poorly constrained by observations. Previous parameterizations of sea-surface iodide concentration have focused on simple regressive relationships with sea surface temperature or another single oceanographic variable. This leads to differences in iodine fluxes of approximately a factor of two and to substantial differences in the modelled impact of iodine on atmospheric composition. Here we use an expanded dataset of oceanic iodide observations, which incorporates new data targeted at areas with previously poor coverage. A novel approach using multivariate machine learning techniques is applied to this expanded dataset to generate a model that yields improved estimates of the global sea-surface iodide distribution. We then use a global chemical transport model (GEOS-Chem) to explore the impact of this new parameterisation on the atmospheric budget of iodine and its impact on tropospheric composition.

  20. Taking Control: The Efficacy and Durability of a Peer-Led Uncertainty Management Intervention for People Recently Diagnosed With HIV.

    PubMed

    Brashers, Dale E; Basinger, Erin D; Rintamaki, Lance S; Caughlin, John P; Para, Michael

    2017-01-01

    HIV creates substantial uncertainty for people infected with the virus, which subsequently affects a host of psychosocial outcomes critical to successful management of the disease. This study assessed the efficacy and durability of a theoretically driven, one-on-one peer support intervention designed to facilitate uncertainty management and enhance psychosocial functioning for patients newly diagnosed with HIV. Using a pretest-posttest control group design, 98 participants received information and training in specific communication strategies (e.g., disclosing to friends and family, eliciting social support, talking to health care providers, using the Internet to gather information, and building social networks through AIDS service organizations). Participants in the experimental group attended six 1-hour sessions, whereas control participants received standard of care for 12 months (after which they received the intervention). Over time, participants in the intervention fared significantly better regarding (a) illness uncertainty, (b) depression, and (c) satisfaction with social support than did those in the control group. Given the utility and cost-effectiveness of this intervention and the uncertainty of a multitude of medical diagnoses and disease experiences, further work is indicated to determine how this program could be expanded to other illnesses and to address related factors, such as treatment adherence and clinical outcomes.

  1. Estimation of the quantification uncertainty from flow injection and liquid chromatography transient signals in inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Laborda, Francisco; Medrano, Jesús; Castillo, Juan R.

    2004-06-01

    The quality of the quantitative results obtained from transient signals in high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) and flow injection-inductively coupled plasma mass spectrometry (FI-ICPMS) was investigated under multielement conditions. Quantification methods were based on multiple-point calibration by simple and weighted linear regression, and on double-point calibration (measurement of the baseline and one standard). An uncertainty model, which includes the main sources of uncertainty in FI-ICPMS and HPLC-ICPMS (signal measurement, sample flow rate, and injection volume), was developed to estimate peak area uncertainties and the statistical weights used in weighted linear regression. The behaviour of the ICPMS instrument was characterized so that it could be considered in the model; it was concluded that the instrument works as a concentration detector when used to monitor transient signals from flow injection or chromatographic separations. Proper quantification by the three calibration methods was achieved when compared to reference materials, and the double-point calibration gave results of the same quality as the multiple-point calibration while shortening the calibration time. Relative expanded uncertainties ranged from 10-20% for concentrations around the LOQ to 5% for concentrations higher than 100 times the LOQ.
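
    A minimal sketch of a weighted linear calibration of the kind described above, with statistical weights taken from a peak-area uncertainty model (all numbers are hypothetical; this is not the authors' code):

        import numpy as np

        conc = np.array([0.0, 1.0, 5.0, 10.0, 50.0])            # standard concentrations (ug/L)
        area = np.array([12.0, 118.0, 530.0, 1045.0, 5150.0])   # measured peak areas
        u_area = np.array([5.0, 8.0, 15.0, 25.0, 110.0])        # standard uncertainties of the areas

        # np.polyfit expects weights of 1/sigma for Gaussian uncertainties
        slope, intercept = np.polyfit(conc, area, 1, w=1.0 / u_area)

        # Quantify an unknown sample from its measured peak area
        x_sample = (2300.0 - intercept) / slope
        print("slope = %.1f, intercept = %.1f, sample = %.2f ug/L" % (slope, intercept, x_sample))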

  2. Uncertainty Evaluation of the New Setup for Measurement of Water-Vapor Permeation Rate by a Dew-Point Sensor

    NASA Astrophysics Data System (ADS)

    Hudoklin, D.; Šetina, J.; Drnovšek, J.

    2012-09-01

    The measurement of the water-vapor permeation rate (WVPR) through materials is very important in many industrial applications such as the development of new fabrics and construction materials, the semiconductor industry, packaging, and vacuum techniques. The demand for this kind of measurement has grown considerably, and many different methods for measuring the WVPR have been developed and standardized within numerous national and international standards. However, comparison of existing methods shows a low level of mutual agreement. The objective of this paper is to demonstrate the necessary uncertainty evaluation for WVPR measurements, so as to provide a basis for the development of a corresponding reference measurement standard. This paper presents a specially developed measurement setup, which employs a precision dew-point sensor for WVPR measurements on specimens of different shapes. The paper also presents a physical model which tries to account for both dynamic and quasi-static methods, the common types of WVPR measurements referred to in standards and scientific publications. An uncertainty evaluation carried out according to the ISO/IEC guide to the expression of uncertainty in measurement (GUM) shows the relative expanded (k = 2) uncertainty to be 3.0% for a WVPR of 6.71 mg·h⁻¹ (corresponding to a permeance of 30.4 mg·m⁻²·day⁻¹·hPa⁻¹).

  3. Validation of a particle tracking analysis method for the size determination of nano- and microparticles

    NASA Astrophysics Data System (ADS)

    Kestens, Vikram; Bozatzidis, Vassili; De Temmerman, Pieter-Jan; Ramaye, Yannic; Roebben, Gert

    2017-08-01

    Particle tracking analysis (PTA) is an emerging technique suitable for size analysis of particles with external dimensions in the nano- and sub-micrometre scale range. Only limited attempts have so far been made to investigate and quantify the performance of the PTA method for particle size analysis. This article presents the results of a validation study in which selected colloidal silica and polystyrene latex reference materials with particle sizes in the range of 20 nm to 200 nm were analysed with NS500 and LM10-HSBF NanoSight instruments and the video analysis software NTA 2.3 and NTA 3.0. Key performance characteristics such as working range, linearity, limit of detection, limit of quantification, sensitivity, robustness, precision and trueness were examined according to recommendations proposed by EURACHEM. A model for measurement uncertainty estimation following the principles described in ISO/IEC Guide 98-3 was used for quantifying random and systematic variations. For nominal 50 nm and 100 nm polystyrene and nominal 80 nm silica reference materials, the relative expanded measurement uncertainties for the three measurands of interest, being the mode, median and arithmetic mean of the number-weighted particle size distribution, varied from about 10% to 12%. For the nominal 50 nm polystyrene material, the relative expanded uncertainty of the arithmetic mean of the particle size distribution increased up to 18%, which was due to the presence of agglomerates. Of the two software versions, NTA 3.0 proved superior in terms of sensitivity and resolution.

  4. SU-E-T-622: Planning Technique for Passively-Scattered Involved-Node Proton Therapy of Mediastinal Lymphoma with Consideration of Cardiac Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flampouri, S; Li, Z; Hoppe, B

    2015-06-15

    Purpose: To develop a treatment planning method for passively-scattered involved-node proton therapy of mediastinal lymphoma robust to breathing and cardiac motions. Methods: Beam-specific planning treatment volumes (bsPTV) are calculated for each proton field to incorporate pertinent uncertainties. Geometric margins are added laterally to each beam while margins for range uncertainty due to setup errors, breathing, and calibration curve uncertainties are added along each beam. The calculation of breathing motion and deformation effects on proton range includes all 4DCT phases. The anisotropic water equivalent margins are translated to distances on the average 4DCT. Treatment plans are designed so each beam adequately covers the corresponding bsPTV. For targets close to the heart, cardiac motion effects on dose maps are estimated by using a library of anonymous ECG-gated cardiac CTs (cCT). The cCT, originally contrast-enhanced, are partially overridden to allow meaningful proton dose calculations. Targets similar to the treatment targets are drawn on one or more cCT sets matching the anatomy of the patient. Plans based on the average cCT are calculated on individual phases, then deformed to the average and accumulated. When clinically significant dose discrepancies occur between planned and accumulated doses, the patient plan is modified to reduce the cardiac motion effects. Results: We found that bsPTVs as planning targets create dose distributions similar to the conventional proton planning distributions, while they are a valuable tool for visualization of the uncertainties. For large targets with variability in motion and depth, integral dose was reduced because of the anisotropic margins. In most cases, heart motion has a clinically insignificant effect on target coverage. Conclusion: A treatment planning method was developed and used for proton therapy of mediastinal lymphoma. The technique incorporates bsPTVs compensating for all common sources of uncertainties and estimation of the effects of cardiac motion not commonly performed.

  5. SU-F-T-185: Study of the Robustness of a Proton Arc Technique Based On PBS Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Z; Zheng, Y

    Purpose: One potential technique to realize proton arc is to use PBS beams from many directions to form overlaid Bragg peak (OBP) spots and place these OBP spots throughout the target volume to achieve the desired dose distribution. In this study, we analyzed the robustness of this proton arc technique. Methods: We used a cylindrical water phantom of 20 cm in radius in our robustness analysis. To study the range uncertainty effect, we changed the density of the phantom by ±3%. To study the setup uncertainty effect, we shifted the phantom by 3 and 5 mm. We also combined the range and setup uncertainties (3 mm/±3%). For each test plan, we performed dose calculation for the nominal and 6 disturbed scenarios. Two test plans were used, one with a single OBP spot and the other consisting of 121 OBP spots covering a 10×10 cm² area. We compared the dose profiles between the nominal and disturbed scenarios to estimate the impact of the uncertainties. Dose calculation was performed with Gate/GEANT based Monte Carlo software in a cloud computing environment. Results: For each of the 7 scenarios, we simulated 100k and 10M events for the plans consisting of a single OBP spot and 121 OBP spots, respectively. For the single OBP spot, the setup uncertainty had minimal impact on the spot's dose profile while the range uncertainty had a significant impact. For the plan consisting of 121 OBP spots, a similar effect was observed but the extent of disturbance was much less compared to the single OBP spot. Conclusion: For the PBS arc technique, range uncertainty has significantly more impact than setup uncertainty. Although a single OBP spot can be severely disturbed by range uncertainty, the overall effect is much less when a large number of OBP spots are used. Robustness optimization for the PBS arc technique should consider range uncertainty with priority.

  6. Prediction of the Effective Thermal Conductivity of Powder Insulation

    NASA Astrophysics Data System (ADS)

    Jin, Lingxue; Park, Jiho; Lee, Cheonkyu; Jeong, Sangkwon

    The powder insulation method is widely used in structural and cryogenic systems such as transportation and storage tanks for cryogenic fluids. The powder insulation layer is constructed from small-particle, lightweight powder and residual gas, and has high porosity. So far, many experiments have been carried out to test the thermal performance of various kinds of powder, including expanded perlite, glass microspheres, and expanded polystyrene (EPS). However, it is still difficult to predict the thermal performance of powder insulation by calculation due to the complicated geometries involved, including various particle shapes, wide powder diameter distributions, and various pore sizes. In this paper, the effective thermal conductivity of powder insulation has been predicted based on an effective thermal conductivity calculation model for porous packed beds. The calculation methodology was applied to insulation systems with expanded perlite, glass microspheres and EPS beads at cryogenic temperature and various vacuum pressures. The calculation results were compared with previous experimental data. Moreover, additional tests were carried out at cryogenic temperature in this research. Fitting equations for the deformation factor of the area-contact model are presented for various powders. The calculation results show good agreement with the experimental results.

  7. Sonic Boom Pressure Signature Uncertainty Calculation and Propagation to Ground Noise

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Bretl, Katherine N.; Walker, Eric L.; Pinier, Jeremy T.

    2015-01-01

    The objective of this study was to outline an approach for the quantification of uncertainty in sonic boom measurements and to investigate the effect of various near-field uncertainty representation approaches on ground noise predictions. These approaches included a symmetric versus asymmetric uncertainty band representation and a dispersion technique based on a partial-sum Fourier series that allows for the inclusion of random error sources in the uncertainty. The near-field uncertainty was propagated to the ground level, along with additional uncertainty in the propagation modeling. Estimates of perceived loudness were obtained for the various types of uncertainty representation in the near field. Analyses were performed on three configurations of interest to the sonic boom community: the SEEB-ALR, the 69° Delta Wing, and the LM 1021-01. Results showed that the representation of the near-field uncertainty plays a key role in ground noise predictions. Using a Fourier series based dispersion approach can double the amount of uncertainty in the ground noise compared to a pure bias representation. Compared to previous computational fluid dynamics results, uncertainties in ground noise predictions were greater when considering the near-field experimental uncertainty.

  8. Uncertainties in scaling factors for ab initio vibrational zero-point energies

    NASA Astrophysics Data System (ADS)

    Irikura, Karl K.; Johnson, Russell D.; Kacker, Raghu N.; Kessel, Rüdiger

    2009-03-01

    Vibrational zero-point energies (ZPEs) determined from ab initio calculations are often scaled by empirical factors. An empirical scaling factor partially compensates for the effects arising from vibrational anharmonicity and incomplete treatment of electron correlation. These effects are not random but are systematic. We report scaling factors for 32 combinations of theory and basis set, intended for predicting ZPEs from computed harmonic frequencies. An empirical scaling factor carries uncertainty. We quantify and report, for the first time, the uncertainties associated with scaling factors for ZPE. The uncertainties are larger than generally acknowledged; the scaling factors have only two significant digits. For example, the scaling factor for B3LYP/6-31G(d) is 0.9757±0.0224 (standard uncertainty). The uncertainties in the scaling factors lead to corresponding uncertainties in predicted ZPEs. The proposed method for quantifying the uncertainties associated with scaling factors is based upon the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. We also present a new reference set of 60 diatomic and 15 polyatomic "experimental" ZPEs that includes estimated uncertainties.
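
    A minimal sketch of how the reported scaling-factor uncertainty propagates into a predicted ZPE (the B3LYP/6-31G(d) factor is taken from the abstract; the harmonic ZPE value is hypothetical):

        c, u_c = 0.9757, 0.0224   # B3LYP/6-31G(d) scaling factor and its standard uncertainty (from the abstract)
        zpe_harm = 50.0           # hypothetical harmonic ZPE from an ab initio calculation, kcal/mol

        zpe_scaled = c * zpe_harm
        u_zpe = abs(zpe_harm) * u_c   # scaling-factor uncertainty dominates the predicted-ZPE uncertainty
        print("ZPE = %.1f +/- %.1f kcal/mol (expanded, k = 2)" % (zpe_scaled, 2.0 * u_zpe))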

  9. The Variable Grid Method, an Approach for the Simultaneous Visualization and Assessment of Spatial Trends and Uncertainty

    NASA Astrophysics Data System (ADS)

    Rose, K.; Glosser, D.; Bauer, J. R.; Barkhurst, A.

    2015-12-01

    The products of spatial analyses that leverage the interpolation of sparse point data to represent continuous phenomena are often presented without clear explanations of the uncertainty associated with the interpolated values. As a result, there is frequently insufficient information provided to effectively support advanced computational analyses and individual research and policy decisions utilizing these results. This highlights the need for a reliable approach capable of quantitatively producing and communicating spatial data analyses and their inherent uncertainties for a broad range of uses. To address this need, we have developed the Variable Grid Method (VGM), and an associated Python tool, which is a flexible approach that can be applied to a variety of analyses and use-case scenarios where users need a method to effectively study, evaluate, and analyze spatial trends and patterns while communicating the uncertainty in the underlying spatial datasets. The VGM outputs a simultaneous visualization of the spatial data analyses and a quantification of the underlying uncertainties, which can be calculated using data related to sample density, sample variance, interpolation error, uncertainty calculated from multiple simulations, etc. We will present examples of our research utilizing the VGM to quantify key spatial trends and patterns for subsurface data interpolations and their uncertainties and to leverage these results to evaluate storage estimates and potential impacts associated with underground injection for CO2 storage and unconventional resource production and development. The insights provided by these examples identify how the VGM can provide critical information about the relationship between uncertainty and spatial data that is necessary to better support their use in advanced computational analyses and to inform research, management and policy decisions.

  10. Cross calibration of GF-1 satellite wide field of view sensor with Landsat 8 OLI and HJ-1A HSI

    NASA Astrophysics Data System (ADS)

    Liu, Li; Gao, Hailiang; Pan, Zhiqiang; Gu, Xingfa; Han, Qijin; Zhang, Xuewen

    2018-01-01

    This paper focuses on cross calibrating the GaoFen (GF-1) satellite wide field of view (WFV) sensor using the Landsat 8 Operational Land Imager (OLI) and the HuanJing-1A (HJ-1A) hyperspectral imager (HSI) as reference sensors. Two methods are proposed to calculate the spectral band adjustment factor (SBAF): one based on the HJ-1A HSI image and the other based on ground-measured reflectance. However, the HSI image and the ground-measured reflectance were acquired on different dates, as the WFV and OLI imagers passed overhead. Three groups of regions of interest (ROIs) were chosen for cross calibration, based on different selection criteria. Cross-calibration gains with nonzero and zero offsets were both calculated. The results confirmed that the gains with zero offset were better, as they were more consistent over the different groups of ROIs and SBAF calculation methods. The uncertainty of this cross calibration was analyzed, and the influence of the SBAF was calculated based on different HSI images and ground reflectance spectra. The results showed that the uncertainty of the SBAF was <3% for bands 1 to 3. Two other large uncertainties in this cross calibration were atmospheric variation and low ground reflectance.
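
    A minimal sketch of an SBAF computed from a hyperspectral reflectance profile and two relative spectral response curves, in the spirit of the first method described above (array contents and function names are assumptions; this is not the authors' code):

        import numpy as np

        def band_reflectance(wl, rho, rsr_wl, rsr):
            """Band-average reflectance weighted by a relative spectral response (RSR)."""
            rsr_i = np.interp(wl, rsr_wl, rsr, left=0.0, right=0.0)
            return np.trapz(rho * rsr_i, wl) / np.trapz(rsr_i, wl)

        def sbaf(wl, rho, ref_wl, ref_rsr, tgt_wl, tgt_rsr):
            """SBAF = band reflectance in the target (WFV) band divided by that in the
            reference (OLI) band; wl and rho come from an HSI or ground-measured spectrum."""
            return band_reflectance(wl, rho, tgt_wl, tgt_rsr) / band_reflectance(wl, rho, ref_wl, ref_rsr)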

  11. Sci-Thur PM – Brachytherapy 03: Identifying the impact of seroma visualization on permanent breast seed implant brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, Daniel; Batchelar, Deidre; Hilts, Michelle

    Purpose: Uncertainties in target identification can reduce treatment accuracy in permanent breast seed implant (PBSI) brachytherapy. This study evaluates the relationship between seroma visualization and seed placement accuracy. Methods: Spatially co-registered CT and 3D ultrasound (US) images were acquired for 10 patients receiving PBSI. Seromas were retrospectively contoured independently by 3 radiation oncologists on both CT and US, and respective consensus volumes were defined, CTV_CT and CTV_US. The seroma clarity and inter-user conformity index (CI), as well as inter-modality CI, volume, and positional differences, were evaluated. Correlations with seed placement accuracy were then assessed. CTVs were expanded by 1.25 cm to create PTV_CT and PTV_US and to evaluate the conformity with the PTV_Clinical (CTV_Clinical + 1.25 cm) used in treatment planning. The change in PTV coincidence from expanding PTV_Clinical by 0.25 cm was determined. Results: CTV_US were a mean 68 ± 12% smaller than CTV_CT and generally had improved clarity and inter-user conformity. No correlations between seed displacement and CTV_US-CTV_CT positional difference or CI were observed. Greater seed displacements were associated with larger CTV_US-CTV_CT volume differences (r=−0.65) and inter-user CT CI (r=−0.74). A median (range) 88% (71-99%) of PTV_CT and 83% (69-100%) of PTV_US were contained within PTV_Clinical. Expanding treatment margins to 1.5 cm increased coincidence to 98% (86-100%) and 94% (82-100%), respectively. Conclusions: Differences in seroma visualization impact seed displacement in PBSI. Reducing dependence on CT by incorporating 3D US into target identification, or expanding CT-based treatment margins to 1.5 cm, may reduce or mitigate uncertainties related to seroma visualization.

  12. EURAMET.M.P-S9: comparison in the negative gauge pressure range -950 to 0 hPa

    NASA Astrophysics Data System (ADS)

    Saxholm, S.; Otal, P.; Altintas, A.; Bermanec, L. G.; Durgut, Y.; Hanrahan, R.; Kocas, I.; Lefkopoulos, A.; Pražák, D.; Sandu, I.; Šetina, J.; Spohr, I.; Steindl, D.; Tammik, K.; Testa, N.

    2016-01-01

    A comparison in the negative gauge pressure range was arranged in the period 2011-2012. A total of 14 laboratories participated in this comparison: BEV (Austria), CMI (Czech Republic), DANIAmet-FORCE (Denmark), EIM (Greece), HMI/FSB-LPM (Croatia), INM (Romania), IPQ (Portugal), LNE (France), MCCAA (Malta), METROSERT (Estonia), MIKES (Finland), MIRS/IMT/LMT (Slovenia), NSAI (Ireland) and UME (Turkey). The project was divided into two loops: Loop 1, piloted by MIKES, and Loop 2, piloted by LNE. The results of the two loops are reported separately; Loop 1 results are presented in this paper. The transfer standard was a Beamex MC5 no. 25516865 with internal pressure module INT1C, resolution 0.01 hPa. The nominal pressure range of the INT1C is -1000 hPa to +1000 hPa. The nominal pressure points for the comparison were 0 hPa, -200 hPa, -400 hPa, -600 hPa, -800 hPa and -950 hPa. The reference values and their uncertainties, as well as the uncertainty of the difference between the laboratory results and the reference values, were determined from the measurement data by Monte Carlo simulations. The stability uncertainty of the transfer standard was included in the final difference uncertainty. Degrees of equivalence and mutual equivalences between the laboratories were calculated. Each laboratory reported results for all twelve measurement points, which means that there were 168 reported values in total. Some 163 of the 168 values (97%) agree with the reference values within the expanded uncertainties, with a coverage factor k = 2. Among the laboratories, four different methods were used to determine negative gauge pressure. It is concluded that special attention must be paid to the measurements and methods when measuring negative gauge pressures. There might be a need for a technical guide or a workshop that provides information about details and practices related to the measurement of negative gauge pressure, as well as differences between the methods. The comparison is registered as EURAMET project no. 1170 and as supplementary comparison EURAMET.M.P-S9 in the BIPM key comparison database.
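
    A minimal sketch of a degree-of-equivalence check of the kind reported above (all values are hypothetical; in the actual comparison the reference values and difference uncertainties were obtained by Monte Carlo simulation):

        import math

        def degree_of_equivalence(x_lab, u_lab, x_ref, u_ref, u_corr=0.0, k=2.0):
            """Return (d, U_d): the difference from the comparison reference value and its
            expanded uncertainty; u_corr is a covariance term accounting for correlation
            between the laboratory result and the reference value."""
            d = x_lab - x_ref
            u_d = k * math.sqrt(u_lab**2 + u_ref**2 - 2.0 * u_corr)
            return d, u_d

        # Hypothetical laboratory and reference values at the -400 hPa nominal point
        d, u_d = degree_of_equivalence(x_lab=-400.12, u_lab=0.10, x_ref=-400.05, u_ref=0.06)
        print("d = %.2f hPa, U(d) = %.2f hPa, consistent: %s" % (d, u_d, abs(d) <= u_d))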

  13. Energy levels of one-dimensional systems satisfying the minimal length uncertainty relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph

    2016-10-15

    The standard approach to calculating the energy levels for quantum systems satisfying the minimal length uncertainty relation is to solve an eigenvalue problem involving a fourth- or higher-order differential equation in quasiposition space. It is shown that the problem can be reformulated so that the energy levels of these systems can be obtained by solving only a second-order quasiposition eigenvalue equation. Through this formulation the energy levels are calculated for the following potentials: particle in a box, harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well. For the particle in a box, the second-order quasiposition eigenvalue equation is a second-order differential equation with constant coefficients. For the harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well, a method that involves using Wronskians has been used to solve the second-order quasiposition eigenvalue equation. It is observed for all of these quantum systems that the introduction of a nonzero minimal length uncertainty induces a positive shift in the energy levels. It is shown that the calculation of energy levels in systems satisfying the minimal length uncertainty relation is not limited to a small number of problems, such as the particle in a box and the harmonic oscillator, but can be extended to a wider class of problems involving potentials such as the Pöschl–Teller and Gaussian wells.
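
    For reference, the minimal length uncertainty relation usually meant in this context has the standard GUP form shown below (the convention and normalization of the deformation parameter may differ from those used in the paper):

        \Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^2\right),
        \qquad
        (\Delta x)_{\min} = \hbar\sqrt{\beta}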

  14. Uncertainties in atmospheric muon-neutrino fluxes arising from cosmic-ray primaries

    NASA Astrophysics Data System (ADS)

    Evans, Justin; Garcia Gamez, Diego; Porzio, Salvatore Davide; Söldner-Rembold, Stefan; Wren, Steven

    2017-01-01

    We present an updated calculation of the uncertainties on the atmospheric muon-neutrino flux arising from cosmic-ray primaries. For the first time, we include recent measurements of the cosmic-ray primaries collected since 2005. We apply a statistical technique that allows the determination of correlations between the parameters of the Gaisser, Stanev, Honda, and Lipari primary-flux parametrization and the incorporation of these correlations into the uncertainty on the muon-neutrino flux. We obtain an uncertainty related to the primary cosmic rays of around (5-15)%, depending on energy, which is about a factor of 2 smaller than the previously determined uncertainty. The hadron production uncertainty is added in quadrature to obtain the total uncertainty on the neutrino flux, which is reduced by ≈5 % . To take into account an unexpected hardening of the spectrum of primaries above energies of 100 GeV observed in recent measurements, we propose an alternative parametrization and discuss its impact on the neutrino flux uncertainties.

  15. Remaining Useful Life Estimation in Prognosis: An Uncertainty Propagation Problem

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Goebel, Kai

    2013-01-01

    The estimation of remaining useful life is significant in the context of prognostics and health monitoring, and the prediction of remaining useful life is essential for online operations and decision-making. However, it is challenging to accurately predict the remaining useful life in practical aerospace applications due to the presence of various uncertainties that affect prognostic calculations, and in turn, render the remaining useful life prediction uncertain. It is challenging to identify and characterize the various sources of uncertainty in prognosis, understand how each of these sources of uncertainty affect the uncertainty in the remaining useful life prediction, and thereby compute the overall uncertainty in the remaining useful life prediction. In order to achieve these goals, this paper proposes that the task of estimating the remaining useful life must be approached as an uncertainty propagation problem. In this context, uncertainty propagation methods which are available in the literature are reviewed, and their applicability to prognostics and health monitoring are discussed.

  16. Exploring the Impact of Individualism and Uncertainty Avoidance in Web-Based Electronic Learning: An Empirical Analysis in European Higher Education

    ERIC Educational Resources Information Center

    Sanchez-Franco, Manuel J.; Martinez-Lopez, Francisco J.; Martin-Velicia, Felix A.

    2009-01-01

    Our research specifically focuses on the effects of the national cultural background of educators on the acceptance and usage of ICT, particularly the Web as an extensive and expanding information base that provides the ultimate in resource-rich learning. Most research has used North Americans as subjects. For this reason, we interviewed…

  17. A Tool for Estimating Variability in Wood Preservative Treatment Retention

    Treesearch

    Patricia K. Lebow; Adam M. Taylor; Timothy M. Young

    2015-01-01

    Composite sampling is standard practice for evaluation of preservative retention levels in preservative-treated wood. Current protocols provide an average retention value but no estimate of uncertainty. Here we describe a statistical method for calculating uncertainty estimates using the standard sampling regime with minimal additional chemical analysis. This tool can...

  18. Introducing Risk Analysis and Calculation of Profitability under Uncertainty in Engineering Design

    ERIC Educational Resources Information Center

    Kosmopoulou, Georgia; Freeman, Margaret; Papavassiliou, Dimitrios V.

    2011-01-01

    A major challenge that chemical engineering graduates face at the modern workplace is the management and operation of plants under conditions of uncertainty. Developments in the fields of industrial organization and microeconomics offer tools to address this challenge with rather well developed concepts, such as decision theory and financial risk…

  19. Uncertainty in nutrient loads from tile drained landscapes: Effect of sampling frequency, calculation algorithm, and compositing strategies

    USDA-ARS?s Scientific Manuscript database

    Accurate estimates of annual nutrient loads are required to evaluate trends in water quality following changes in land use or management and to calibrate and validate water quality models. While much emphasis has been placed on understanding the uncertainty of watershed-scale nutrient load estimates...

  20. Uncertainties of simulated aerosol optical properties induced by assumptions on aerosol physical and chemical properties: an AQMEII-2 perspective

    EPA Science Inventory

    The calculation of aerosol optical properties from aerosol mass is a process subject to uncertainty related to necessary assumptions on the treatment of the chemical species mixing state, density, refractive index, and hygroscopic growth. In the framework of the AQMEII-2 model in...

  1. Assessing Uncertainties in a Simple and Cheap Experiment

    ERIC Educational Resources Information Center

    de Souza, Paulo A., Jr.; Brasil, Gutemberg Hespanha

    2009-01-01

    This paper describes how to calculate measurement uncertainties using as a practical example the assessment of the thickness of ping-pong balls and their material density. The advantages of a randomized experiment are also discussed. This experiment can be reproduced in the physics laboratory for undergraduate students. (Contains 7 tables, 1…

  2. A bayesian approach for determining velocity and uncertainty estimates from seismic cone penetrometer testing or vertical seismic profiling data

    USGS Publications Warehouse

    Pidlisecky, Adam; Haines, S.S.

    2011-01-01

    Conventional processing methods for seismic cone penetrometer data present several shortcomings, most notably the absence of a robust velocity model uncertainty estimate. We propose a new seismic cone penetrometer testing (SCPT) data-processing approach that employs Bayesian methods to map measured data errors into quantitative estimates of model uncertainty. We first calculate travel-time differences for all permutations of seismic trace pairs. That is, we cross-correlate each trace at each measurement location with every trace at every other measurement location to determine travel-time differences that are not biased by the choice of any particular reference trace and to thoroughly characterize data error. We calculate a forward operator that accounts for the different ray paths for each measurement location, including refraction at layer boundaries. We then use a Bayesian inversion scheme to obtain the most likely slowness (the reciprocal of velocity) and a distribution of probable slowness values for each model layer. The result is a velocity model that is based on correct ray paths, with uncertainty bounds that are based on the data error. © NRC Research Press 2011.
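    The pairwise cross-correlation step that yields travel-time differences can be illustrated with a minimal sketch. The traces, sampling interval, and arrival times below are synthetic placeholders; the point is only that every pair of measurement locations is correlated, so no single reference trace biases the result.

```python
import numpy as np
from itertools import combinations

def lag_between(trace_a, trace_b, dt):
    """Travel-time difference (s) between two traces from the peak of their cross-correlation."""
    n = len(trace_a)
    xcorr = np.correlate(trace_a, trace_b, mode="full")
    lag_samples = np.argmax(xcorr) - (n - 1)   # positive: trace_a arrives later than trace_b
    return lag_samples * dt

# Synthetic example: the same wavelet recorded at three depths with increasing delay.
dt = 1e-4                                      # 0.1 ms sampling interval (assumed)
t = np.arange(0.0, 0.2, dt)
wavelet = lambda t0: np.exp(-((t - t0) / 0.005) ** 2)
traces = {0: wavelet(0.050), 1: wavelet(0.058), 2: wavelet(0.067)}

# Cross-correlate every pair of measurement locations, as the abstract describes.
for i, j in combinations(traces, 2):
    print(f"locations {i}-{j}: travel-time difference = {lag_between(traces[j], traces[i], dt) * 1e3:.2f} ms")
```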

  3. Addressing uncertainty in atomistic machine learning.

    PubMed

    Peterson, Andrew A; Christensen, Rune; Khorshidi, Alireza

    2017-05-10

    Machine-learning regression has been demonstrated to precisely emulate the potential energy and forces that are output from more expensive electronic-structure calculations. However, to predict new regions of the potential energy surface, an assessment must be made of the credibility of the predictions. In this perspective, we address the types of errors that might arise in atomistic machine learning, the unique aspects of atomistic simulations that make machine-learning challenging, and highlight how uncertainty analysis can be used to assess the validity of machine-learning predictions. We suggest this will allow researchers to more fully use machine learning for the routine acceleration of large, high-accuracy, or extended-time simulations. In our demonstrations, we use a bootstrap ensemble of neural network-based calculators, and show that the width of the ensemble can provide an estimate of the uncertainty when the width is comparable to that in the training data. Intriguingly, we also show that the uncertainty can be localized to specific atoms in the simulation, which may offer hints for the generation of training data to strategically improve the machine-learned representation.
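    A minimal sketch of the bootstrap-ensemble idea follows: each ensemble member is trained on a resampled copy of the training data, the ensemble mean is the prediction, and the ensemble spread serves as the uncertainty estimate. A scikit-learn MLPRegressor on a toy one-dimensional "potential energy surface" stands in for the neural-network calculators used in the paper; both are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy 1-D target function standing in for expensive electronic-structure data.
x_train = rng.uniform(-2.0, 2.0, size=(60, 1))
y_train = np.sin(3 * x_train[:, 0]) + 0.05 * rng.normal(size=60)

# Bootstrap ensemble: each member is trained on a resampled copy of the training set.
ensemble = []
for seed in range(10):
    idx = rng.integers(0, len(x_train), size=len(x_train))   # sample with replacement
    model = MLPRegressor(hidden_layer_sizes=(30, 30), max_iter=5000, random_state=seed)
    model.fit(x_train[idx], y_train[idx])
    ensemble.append(model)

# Ensemble mean is the prediction; ensemble spread is the uncertainty estimate.
x_test = np.linspace(-3.0, 3.0, 7).reshape(-1, 1)   # includes extrapolation beyond training data
preds = np.stack([m.predict(x_test) for m in ensemble])
mean, spread = preds.mean(axis=0), preds.std(axis=0)
for xi, mu, s in zip(x_test[:, 0], mean, spread):
    print(f"x = {xi:5.2f}   prediction = {mu:6.3f} ± {s:.3f}")
```

    As in the abstract, the spread typically grows where the test points lie outside the region covered by the training data.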

  4. EFFECTS OF LASER RADIATION ON MATTER: Calculation of the gain of a C VI laser plasma expanding as a cylinder and a cylindrical layer

    NASA Astrophysics Data System (ADS)

    Gulov, A. V.; Derzhiev, V. I.; Zhidkov, A. G.; Pritula, A. G.; Chekmezov, A. N.; Yakovlenko, Sergei I.

    1990-08-01

    Calculations are reported of the gain due to the 3-2 transition in the C VI ion in an expanding plasma cylinder or a cylindrical layer. Under the conditions in the experiments at the Rutherford Appleton Laboratory (Chilton, England) amplification was observed as a result of evaporation of a fairly thin (~ 0.1 μm) cylindrical layer. A peak of the gain was reached in a relatively short time (~ 0.1 ns).

  5. A New Framework for Quantifying Lidar Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer, F.; Clifton, Andrew; Bonin, Timothy A.

    2017-03-24

    As wind turbine sizes increase and wind energy expands to more complex and remote sites, remote sensing devices such as lidars are expected to play a key role in wind resource assessment and power performance testing. The switch to remote sensing devices represents a paradigm shift in the way the wind industry typically obtains and interprets measurement data for wind energy. For example, the measurement techniques and sources of uncertainty for a remote sensing device are vastly different from those associated with a cup anemometer on a meteorological tower. Current IEC standards discuss uncertainty due to mounting, calibration, and classification of the remote sensing device, among other parameters. Values of the uncertainty are typically given as a function of the mean wind speed measured by a reference device. However, real-world experience has shown that lidar performance is highly dependent on atmospheric conditions, such as wind shear, turbulence, and aerosol content. At present, these conditions are not directly incorporated into the estimated uncertainty of a lidar device. In this presentation, we propose the development of a new lidar uncertainty framework that adapts to current flow conditions and more accurately represents the actual uncertainty inherent in lidar measurements under different conditions. In this new framework, sources of uncertainty are identified for estimation of the line-of-sight wind speed and reconstruction of the three-dimensional wind field. These sources are then related to physical processes caused by the atmosphere and lidar operating conditions. The framework is applied to lidar data from an operational wind farm to assess the ability of the framework to predict errors in lidar-measured wind speed.

  6. Estimating the uncertainty from sampling in pollution crime investigation: The importance of metrology in the forensic interpretation of environmental data.

    PubMed

    Barazzetti Barbieri, Cristina; de Souza Sarkis, Jorge Eduardo

    2018-07-01

    The forensic interpretation of environmental analytical data is usually challenging due to the high geospatial variability of these data. Measurement uncertainty includes contributions from sampling and from the sample handling and preparation processes. These contributions are often disregarded in the quality assurance of analytical results. A pollution crime investigation case was used to apply a methodology able to address these uncertainties in two different environmental compartments, freshwater sediments and landfill leachate. The methodology used to estimate the uncertainty was the duplicate method (which replicates predefined steps of the measurement procedure in order to assess its precision), and the parameters used to investigate the pollution were metals (Cr, Cu, Ni, and Zn) in the leachate, the suspected source, and in the sediment, the possible sink. The metal analysis results were compared to statutory limits, and it was demonstrated that Cr and Ni concentrations in sediment samples exceeded the threshold levels at all sites downstream of the pollution sources, considering the expanded uncertainty U of the measurements, with a probability of contamination >0.975 at most sites. Cu and Zn concentrations were above the statutory limits at two sites, but the classification was inconclusive considering the uncertainties of the measurements. Metal analyses in leachate revealed that Cr concentrations were above the statutory limits with a probability of contamination >0.975 in all leachate ponds, while the probability of contamination for Cu, Ni and Zn was below 0.025. The results demonstrated that the estimation of the sampling uncertainty, which was the dominant component of the combined uncertainty, is required for a comprehensive interpretation of environmental analysis results, particularly in forensic cases. Copyright © 2018 Elsevier B.V. All rights reserved.
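    The duplicate method partitions measurement variance into sampling and analytical components using a balanced design: two field samples per target, each analysed twice. The sketch below applies a simple range-based variance-component calculation to synthetic data; the design, formulas, and numbers are a generic illustration of the scheme, not values from this case study.

```python
import numpy as np

# data[target, sample, analysis]: 8 targets x 2 field duplicates x 2 analytical duplicates
rng = np.random.default_rng(1)
true_conc = rng.uniform(40, 120, size=8)                    # synthetic concentrations (e.g. mg/kg)
s_samp_true, s_anal_true = 8.0, 2.0
data = (true_conc[:, None, None]
        + rng.normal(0, s_samp_true, size=(8, 2, 1))        # sampling deviation, one per sample
        + rng.normal(0, s_anal_true, size=(8, 2, 2)))       # analytical deviation, one per analysis

# Analytical variance from within-sample duplicate analyses: var = mean(d^2) / 2.
d_anal = data[:, :, 0] - data[:, :, 1]
var_anal = np.mean(d_anal ** 2) / 2.0

# Between-sample differences contain the sampling variance plus half the analytical variance
# (each sample mean averages two analyses), so subtract that share.
d_samp = data[:, 0, :].mean(axis=1) - data[:, 1, :].mean(axis=1)
var_samp = np.mean(d_samp ** 2) / 2.0 - var_anal / 2.0

u_meas = np.sqrt(var_samp + var_anal)   # combined standard uncertainty of a single measurement
print(f"s_sampling = {np.sqrt(var_samp):.2f}, s_analysis = {np.sqrt(var_anal):.2f}, "
      f"expanded U (k=2) = {2 * u_meas:.2f}")
```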

  7. Equifinality and process-based modelling

    NASA Astrophysics Data System (ADS)

    Khatami, S.; Peel, M. C.; Peterson, T. J.; Western, A. W.

    2017-12-01

    Equifinality is understood as one of the fundamental difficulties in the study of open complex systems, including catchment hydrology. A review of the hydrologic literature reveals that the term equifinality has been widely used, but in many cases inconsistently and without coherent recognition of the various facets of equifinality, which can lead to ambiguity but also to methodological fallacies. Therefore, in this study we first characterise the term equifinality within the context of hydrological modelling by reviewing the genesis of the concept of equifinality and then presenting a theoretical framework. During past decades, equifinality has mainly been studied as a subset of aleatory (arising due to randomness) uncertainty and for the assessment of model parameter uncertainty. Although the connection between parameter uncertainty and equifinality is undeniable, we argue there is more to equifinality than just aleatory parameter uncertainty. That is, the importance of equifinality and epistemic uncertainty (arising due to lack of knowledge), and their implications, are overlooked in our current practice of model evaluation. Equifinality and epistemic uncertainty in studying, modelling, and evaluating hydrologic processes are treated as if they can be simply discussed in (or often reduced to) probabilistic terms (as for aleatory uncertainty). The deficiencies of this approach to conceptual rainfall-runoff modelling are demonstrated for selected Australian catchments by examination of parameter and internal flux distributions and interactions within SIMHYD. On this basis, we present a new approach that expands the equifinality concept beyond model parameters to inform epistemic uncertainty. The new approach potentially facilitates the identification and development of more physically plausible models and model evaluation schemes, particularly within the multiple working hypotheses framework, and is generalisable to other fields of environmental modelling as well.

  8. Monte Carlo analysis of uncertainty propagation in a stratospheric model. 2: Uncertainties due to reaction rates

    NASA Technical Reports Server (NTRS)

    Stolarski, R. S.; Butler, D. M.; Rundel, R. D.

    1977-01-01

    A concise stratospheric model was used in a Monte Carlo analysis of the propagation of reaction rate uncertainties through the calculation of an ozone perturbation due to the addition of chlorine. Two thousand Monte Carlo cases were run with 55 reaction rates being varied. Excellent convergence was obtained in the output distributions because the model is sensitive to the uncertainties in only about 10 reactions. For a 1 ppbv chlorine perturbation added to a 1.5 ppbv chlorine background, the resultant 1 sigma uncertainty on the ozone perturbation is a factor of 1.69 on the high side and 1.80 on the low side. The corresponding 2 sigma factors are 2.86 and 3.23. Results are also given for the uncertainties, due to reaction rates, in the ambient concentrations of stratospheric species.
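    The multiplicative 1-sigma and 2-sigma factors quoted in the abstract suggest lognormally distributed rate uncertainties. A minimal sketch of that kind of propagation follows; the surrogate "model" and the rate constants and uncertainty factors are hypothetical stand-ins for the stratospheric chemistry, chosen only to illustrate the sampling and reporting scheme.

```python
import numpy as np

rng = np.random.default_rng(42)

def ozone_perturbation(rates):
    """Toy surrogate for the stratospheric model: a nonlinear function of a few rate constants.
    The real model integrates the full chemistry; this placeholder only illustrates propagation."""
    k1, k2, k3 = rates
    return 0.05 * (k1 * k3) / (k2 + k1 * k3)

k_nominal = np.array([1.0, 2.0, 0.5])           # hypothetical nominal rate constants
k_factor_1sigma = np.array([1.3, 1.5, 2.0])     # hypothetical 1-sigma multiplicative uncertainties

n_cases = 2000                                  # same order as the 2000 cases in the abstract
samples = np.empty(n_cases)
for i in range(n_cases):
    # Lognormal sampling: each rate is multiplied by its uncertainty factor raised to a N(0,1) draw.
    k = k_nominal * k_factor_1sigma ** rng.normal(size=3)
    samples[i] = ozone_perturbation(k)

# Report the result as multiplicative 1-sigma and 2-sigma factors about the median.
median = np.median(samples)
lo1, hi1 = np.percentile(samples, [15.87, 84.13])
lo2, hi2 = np.percentile(samples, [2.28, 97.72])
print(f"1-sigma factors: x{hi1 / median:.2f} high, x{median / lo1:.2f} low")
print(f"2-sigma factors: x{hi2 / median:.2f} high, x{median / lo2:.2f} low")
```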

  9. Beam quality corrections for parallel-plate ion chambers in electron reference dosimetry

    NASA Astrophysics Data System (ADS)

    Zink, K.; Wulff, J.

    2012-04-01

    Current dosimetry protocols (AAPM, IAEA, IPEM, DIN) recommend parallel-plate ionization chambers for dose measurements in clinical electron beams. This study presents detailed Monte Carlo simulations of beam quality correction factors for four different types of parallel-plate chambers: NACP-02, Markus, Advanced Markus and Roos. These chambers differ in construction details, which should have a notable impact on the resulting perturbation corrections, hence on the beam quality corrections. The results reveal deviations from the recommended beam quality corrections given in the IAEA TRS-398 protocol in the range of 0%-2%, depending on energy and chamber type. For well-guarded chambers, these deviations could be traced back to a non-unity and energy-dependent wall perturbation correction. In the case of the guardless Markus chamber, a nearly energy-independent beam quality correction results, as the effects of wall and cavity perturbation compensate each other. For this chamber, the deviations from the recommended values are the largest and may exceed 2%. From calculations of type-B uncertainties, including effects due to uncertainties of the underlying cross-sectional data as well as uncertainties due to the chamber material composition and chamber geometry, the overall uncertainty of calculated beam quality correction factors was estimated to be <0.7%. Due to different chamber positioning recommendations given in the national and international dosimetry protocols, an additional uncertainty in the range of 0.2%-0.6% is present. According to the IAEA TRS-398 protocol, the uncertainty in clinical electron dosimetry using parallel-plate ion chambers is 1.7%. This study may help to reduce this uncertainty significantly.

  10. [Uncertainty of cross calibration-applied beam quality conversion factor for the Japan Society of Medical Physics 12].

    PubMed

    Kinoshita, Naoki; Kita, Akinobu; Takemura, Akihiro; Nishimoto, Yasuhiro; Adachi, Toshiki

    2014-09-01

    The uncertainty of the beam quality conversion factor (k(Q,Q0)) in standard dosimetry of absorbed dose to water in external beam radiotherapy 12 (JSMP12) is determined by combining the uncertainty of each beam quality conversion factor calculated for each type of ionization chamber. However, there is no guarantee that ionization chambers of the same type have the same structure and thickness, so there may be individual variations. We evaluated the uncertainty of k(Q,Q0) for JSMP12 using an ionization chamber dosimeter and a linear accelerator, without a specific device or technique, taking into account the individual variation of ionization chambers in a clinical radiation field. The cross calibration formula was modified and the beam quality conversion factor for the experimental values [(k(Q,Q0))field] was determined using the modified formula. Its uncertainty was calculated to be 1.9%. The differences between the experimental (k(Q,Q0))field values and k(Q,Q0) for the Japan Society of Medical Physics 12 (JSMP12) were 0.73% and 0.88% for 6- and 10-MV photon beams, respectively, remaining within ±1.9%. This showed k(Q,Q0) for JSMP12 to be consistent with the experimental (k(Q,Q0))field values within the estimated uncertainty range. Although individual differences may arise even when the same type of ionization chamber is used, k(Q,Q0) for JSMP12 appears to be consistent within the estimated uncertainty range of (k(Q,Q0))field.

  11. Parameter uncertainty and variability in evaluative fate and exposure models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertwich, E.G.; McKone, T.E.; Pease, W.S.

    The human toxicity potential, a weighting scheme used to evaluate toxic emissions for life cycle assessment and toxics release inventories, is based on potential dose calculations and toxicity factors. This paper evaluates the variance in potential dose calculations that can be attributed to the uncertainty in chemical-specific input parameters as well as the variability in exposure factors and landscape parameters. A knowledge of the uncertainty allows us to assess the robustness of a decision based on the toxicity potential; a knowledge of the sources of uncertainty allows one to focus resources if the uncertainty is to be reduced. The potential dose of 236 chemicals was assessed. The chemicals were grouped by dominant exposure route, and a Monte Carlo analysis was conducted for one representative chemical in each group. The variance is typically one to two orders of magnitude. For comparison, the point estimates in potential dose for 236 chemicals span ten orders of magnitude. Most of the variance in the potential dose is due to chemical-specific input parameters, especially half-lives, although exposure factors such as fish intake and the source of drinking water can be important for chemicals whose dominant exposure is through indirect routes. Landscape characteristics are generally of minor importance.

  12. Robust Bayesian Experimental Design for Conceptual Model Discrimination

    NASA Astrophysics Data System (ADS)

    Pham, H. V.; Tsai, F. T. C.

    2015-12-01

    A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination, given the least number of pumping wells and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) before and after the experiment and on the Bayesian model averaging (BMA) framework. A max-min program is introduced to choose the robust design that maximizes the minimal Box-Hill EED, subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify the future observation uncertainty arising from conceptual and parametric uncertainties when calculating the EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic 5-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed to reflect uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of scedasticity in future observation data, as well as of uncertainty sources, on potential pumping and observation locations.
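    The Gauss-Hermite step mentioned above can be illustrated with a minimal sketch: an expectation over a Gaussian-distributed quantity is evaluated from a handful of quadrature nodes after a change of variables. The integrand below is illustrative only, not the Box-Hill entropy expression itself.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gaussian_expectation(f, mu, sigma, order=20):
    """E[f(X)] for X ~ N(mu, sigma^2) via Gauss-Hermite quadrature.
    The substitution x = mu + sqrt(2)*sigma*t maps the Gaussian weight onto exp(-t^2)."""
    t, w = hermgauss(order)
    return np.sum(w * f(mu + np.sqrt(2.0) * sigma * t)) / np.sqrt(np.pi)

# Illustrative integrand: an entropy-like function of a predicted observation.
f = lambda x: -np.log(1.0 + np.exp(-x))
approx = gaussian_expectation(f, mu=1.5, sigma=0.8)

# Brute-force Monte Carlo check of the same expectation.
rng = np.random.default_rng(0)
mc = np.mean(f(rng.normal(1.5, 0.8, size=200_000)))
print(f"Gauss-Hermite: {approx:.5f}   Monte Carlo: {mc:.5f}")
```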

  13. Best Practices of Uncertainty Estimation for the National Solar Radiation Database (NSRDB 1998-2015): Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron M; Sengupta, Manajit

    It is essential to apply a traceable and standard approach to determine the uncertainty of solar resource data. Solar resource data are used for all phases of solar energy conversion projects, from the conceptual phase to routine solar power plant operation, and to determine performance guarantees of solar energy conversion systems. These guarantees are based on the available solar resource derived from a measurement station or a modeled data set such as the National Solar Radiation Database (NSRDB). Therefore, quantifying the uncertainty of these data sets provides confidence to financiers, developers, and site operators of solar energy conversion systems and ultimately reduces deployment costs. In this study, we implemented the Guide to the Expression of Uncertainty in Measurement (GUM) to quantify the overall uncertainty of the NSRDB data. First, we quantify the measurement uncertainty, then we determine each uncertainty statistic of the NSRDB data, and we combine them using the root-sum-of-the-squares method. The statistics were derived by comparing the NSRDB data to seven measurement stations from the National Oceanic and Atmospheric Administration's Surface Radiation Budget Network, the National Renewable Energy Laboratory's Solar Radiation Research Laboratory, and the Atmospheric Radiation Measurement program's Southern Great Plains Central Facility in Billings, Oklahoma. The evaluation was conducted for hourly values, daily totals, monthly mean daily totals, and annual mean monthly mean daily totals. Varying the averaging period helps capture the temporal uncertainty of the specific modeled solar resource data required for each phase of a solar energy project; some phases require higher temporal resolution than others. Overall, by including the uncertainty of ground-based solar radiation measurements, bias, and root mean square error, the NSRDB data demonstrated an expanded uncertainty of 17%-29% on an hourly basis and approximately 5%-8% on an annual basis.
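    The root-sum-of-squares combination described above can be sketched in a few lines. The component values below are placeholders for one averaging period, not the NSRDB statistics themselves.

```python
import numpy as np

def expanded_uncertainty(components_percent, k=2.0):
    """Combine independent standard-uncertainty components (in %) by root-sum-of-squares
    and expand with coverage factor k, following the GUM-style approach in the abstract."""
    u_combined = np.sqrt(np.sum(np.asarray(list(components_percent), dtype=float) ** 2))
    return k * u_combined

# Placeholder component values (illustrative only):
components = {
    "ground measurement uncertainty": 2.5,   # %
    "model bias":                     3.0,   # %
    "random error (RMSE-based)":      8.0,   # %
}
U = expanded_uncertainty(components.values())
print(f"Expanded uncertainty (k=2): {U:.1f}%")
```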

  14. Probabilistic accounting of uncertainty in forecasts of species distributions under climate change

    USGS Publications Warehouse

    Wenger, Seth J.; Som, Nicholas A.; Dauwalter, Daniel C.; Isaak, Daniel J.; Neville, Helen M.; Luce, Charles H.; Dunham, Jason B.; Young, Michael K.; Fausch, Kurt D.; Rieman, Bruce E.

    2013-01-01

    Forecasts of species distributions under future climates are inherently uncertain, but there have been few attempts to describe this uncertainty comprehensively in a probabilistic manner. We developed a Monte Carlo approach that accounts for uncertainty within generalized linear regression models (parameter uncertainty and residual error), uncertainty among competing models (model uncertainty), and uncertainty in future climate conditions (climate uncertainty) to produce site-specific frequency distributions of occurrence probabilities across a species’ range. We illustrated the method by forecasting suitable habitat for bull trout (Salvelinus confluentus) in the Interior Columbia River Basin, USA, under recent and projected 2040s and 2080s climate conditions. The 95% interval of total suitable habitat under recent conditions was estimated at 30.1–42.5 thousand km; this was predicted to decline to 0.5–7.9 thousand km by the 2080s. Projections for the 2080s showed that the great majority of stream segments would be unsuitable with high certainty, regardless of the climate data set or bull trout model employed. The largest contributor to uncertainty in total suitable habitat was climate uncertainty, followed by parameter uncertainty and model uncertainty. Our approach makes it possible to calculate a full distribution of possible outcomes for a species, and permits ready graphical display of uncertainty for individual locations and of total habitat.
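    A compact sketch of nesting the three uncertainty sources listed above (model, parameter, and climate uncertainty) is given below. The logistic GLMs, coefficient covariances, model weights, and climate scenarios are all hypothetical placeholders; only the nested Monte Carlo structure mirrors the approach described.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical fitted logistic GLMs (intercept + temperature slope) for two competing models,
# their coefficient covariances, and posterior model weights -- placeholder values only.
models = [
    {"beta": np.array([4.0, -0.8]), "cov": np.diag([0.25, 0.01]), "weight": 0.6},
    {"beta": np.array([3.2, -0.6]), "cov": np.diag([0.36, 0.02]), "weight": 0.4},
]
weights = [m["weight"] for m in models]
climate_scenarios = [2.0, 3.5, 5.0]          # projected warming (deg C) from different GCMs (assumed)

def occurrence_probability(baseline_temp=3.0, n_draws=5000):
    """Distribution of occurrence probability at one site, combining model, parameter,
    and climate uncertainty by nested Monte Carlo sampling."""
    draws = np.empty(n_draws)
    for i in range(n_draws):
        mod = models[rng.choice(len(models), p=weights)]            # model uncertainty
        beta = rng.multivariate_normal(mod["beta"], mod["cov"])     # parameter uncertainty
        dT = rng.choice(climate_scenarios)                          # climate uncertainty
        eta = beta[0] + beta[1] * (baseline_temp + dT)
        draws[i] = 1.0 / (1.0 + np.exp(-eta))
    return draws

p = occurrence_probability()
print(f"median p = {np.median(p):.2f}, 95% interval = "
      f"({np.percentile(p, 2.5):.2f}, {np.percentile(p, 97.5):.2f})")
```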

  15. Uncertainty propagation in q and current profiles derived from motional Stark effect polarimetry on TFTR (abstract)a)

    NASA Astrophysics Data System (ADS)

    Batha, S. H.; Levinton, F. M.; Bell, M. G.; Wieland, R. M.; Hirschman, S. P.

    1995-01-01

    The magnetic-field pitch-angle profile, γp(R)≡arctan(Bpol/Btor), is measured on the TFTR tokamak using a motional Stark effect (MSE) polarimeter. Measured profiles are converted to q profiles with the equilibrium code vmec. Uncertainties in the q profile due to uncertainties in the γp(R), magnetics, and kinetic measurements are quantified. Subsequent uncertainties in the vmec-calculated profiles of current density and shear, both of which are important for stability and transport analyses, are also quantified. Examples of circular plasmas under various confinement modes, including the supershot and L mode, will be given.

  16. Considering the ranges of uncertainties in the New Probabilistic Seismic Hazard Assessment of Germany - Version 2016

    NASA Astrophysics Data System (ADS)

    Grunthal, Gottfried; Stromeyer, Dietrich; Bosse, Christian; Cotton, Fabrice; Bindi, Dino

    2017-04-01

    The seismic load parameters for the upcoming National Annex to Eurocode 8 result from the reassessment of the seismic hazard supported by the German Institution for Civil Engineering. This 2016 hazard assessment for Germany was based on a comprehensive treatment of all accessible uncertainties in models and parameters and on a rational framework for handling these uncertainties in a transparent way. The developed seismic hazard model represents significant improvements: it is based on updated and extended databases, comprehensive ranges of models, robust methods, and a selection of a set of ground motion prediction equations of the latest generation. The output specifications were designed according to user-oriented needs as suggested by the two review teams supervising the entire project. In particular, seismic load parameters were calculated for rock conditions with a vS30 of 800 m s-1 for three hazard levels (10%, 5% and 2% probability of occurrence or exceedance within 50 years) in the form of, e.g., uniform hazard spectra (UHS) based on 19 spectral periods in the range of 0.01-3 s, and seismic hazard maps for spectral response accelerations at different spectral periods or for macroseismic intensities. The developed hazard model consists of a logic tree with 4040 end branches and essential innovations employed to capture epistemic uncertainties and aleatory variabilities. The computation scheme enables the sound calculation of the mean and any quantile of the required seismic load parameters. Mean, median and 84th percentiles of the load parameters were provided together with the full calculation model to clearly illustrate the uncertainties of such a probabilistic assessment for a region with a low-to-moderate level of seismicity. The regional variations of these uncertainties (e.g. ratios between the mean and median hazard estimates) were analyzed and discussed.

  17. SU-E-T-573: The Robustness of a Combined Margin Recipe for Uncertainties During Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stroom, J; Vieira, S; Greco, C

    2014-06-01

    Purpose: To investigate the variability of a safety margin recipe that combines CTV and PTV margins quadratically with several tumor, treatment, and user related factors. Methods: Margin recipes were calculated by Monte Carlo simulations in 5 steps. 1. A spherical tumor, with or without isotropic microscopic disease, was irradiated with a 5-field dose plan. 2. PTV: Geometric uncertainties were introduced using systematic (Sgeo) and random (sgeo) standard deviations. CTV: Microscopic disease distribution was modelled by a semi-Gaussian (Smicro) with a varying number of islets (Ni). 3. For a specific uncertainty set (Sgeo, sgeo, Smicro(Ni)), margins were varied until a pre-defined decrease in TCP or dose coverage was reached. 4. First, margin recipes were calculated for each of the three uncertainties separately. CTV and PTV recipes were then combined quadratically to yield a final recipe M(Sgeo, sgeo, Smicro(Ni)). 5. The final M was verified by simultaneous simulations of the uncertainties. M was then calculated for various changing parameters such as margin criteria, penumbra steepness, islet radio-sensitivity, dose conformity, and number of fractions. We subsequently investigated A: whether the combined recipe still holds in all these situations, and B: what the margin variation was in all these cases. Results: We found that the accuracy of the combined margin recipes remains on average within 1 mm for all situations, confirming the correctness of the quadratic addition. Depending on the specific parameter, margin factors could change such that margins change by over 50%. Margin recipes based on TCP criteria, in particular, are more sensitive to more parameters than those based on purely geometric Dmin criteria. Interestingly, measures taken to minimize treatment field sizes (e.g. by optimizing dose conformity) are counteracted by the requirement of larger margins to obtain the same tumor coverage. Conclusion: Margin recipes combining geometric and microscopic uncertainties quadratically are accurate under varying circumstances. However, margins can change by up to 50% for different situations.

  18. Nationwide tsunami hazard assessment project in Japan

    NASA Astrophysics Data System (ADS)

    Hirata, K.; Fujiwara, H.; Nakamura, H.; Osada, M.; Ohsumi, T.; Morikawa, N.; Kawai, S.; Aoi, S.; Yamamoto, N.; Matsuyama, H.; Toyama, N.; Kito, T.; Murashima, Y.; Murata, Y.; Inoue, T.; Saito, R.; Akiyama, S.; Korenaga, M.; Abe, Y.; Hashimoto, N.

    2014-12-01

    In 2012, we began a project of nationwide Probabilistic Tsunami Hazard Assessment (PTHA) in Japan to support various measures (Fujiwara et al., 2013, JpGU; Hirata et al., 2014, AOGS). The most important strategy in the nationwide PTHA is the predominance of aleatory uncertainty in the assessment, with the use of epistemic uncertainty limited to a minimum, because the number of possible combinations of epistemic uncertainties diverges quickly as the number of epistemic uncertainties in the assessment increases; we consider only the type of earthquake occurrence probability distribution as epistemic uncertainty. We briefly outline the nationwide PTHA as follows: (i) we consider all possible earthquakes in the future, including those that the Headquarters for Earthquake Research Promotion (HERP) of the Japanese Government has already assessed. (ii) We construct a set of simplified earthquake fault models, called "Characterized Earthquake Fault Models (CEFMs)", for all of these earthquakes by following prescribed rules (Toyama et al., 2014, JpGU; Korenaga et al., 2014, JpGU). (iii) For all initial water surface distributions caused by the CEFMs, we calculate tsunamis by solving a nonlinear long wave equation, using FDM and including runup calculation, over a nesting grid system with a minimum grid size of 50 meters. (iv) Finally, we integrate the information about the tsunamis calculated from the numerous CEFMs to obtain nationwide tsunami hazard assessments. One of the most popular representations of the integrated information is a tsunami hazard curve for coastal tsunami heights, incorporating uncertainties inherent in tsunami simulation and earthquake fault slip heterogeneity (Abe et al., 2014, JpGU). We will show a PTHA along the eastern coast of Honshu, Japan, based on approximately 1,800 tsunami sources located within the subduction zone along the Japan Trench, as a prototype of the nationwide PTHA. This study is supported in part by the research project on the evaluation of hazard and risk of natural disasters, under the direction of the HERP of the Japanese Government.

  19. Quantifying uncertainty in carbon and nutrient pools of coarse woody debris

    NASA Astrophysics Data System (ADS)

    See, C. R.; Campbell, J. L.; Fraver, S.; Domke, G. M.; Harmon, M. E.; Knoepp, J. D.; Woodall, C. W.

    2016-12-01

    Woody detritus constitutes a major pool of both carbon and nutrients in forested ecosystems. Estimating coarse wood stocks relies on many assumptions, even when full surveys are conducted. Researchers rarely report error in coarse wood pool estimates, despite the importance to ecosystem budgets and modelling efforts. To date, no study has attempted a comprehensive assessment of error rates and uncertainty inherent in the estimation of this pool. Here, we use Monte Carlo analysis to propagate the error associated with the major sources of uncertainty present in the calculation of coarse wood carbon and nutrient (i.e., N, P, K, Ca, Mg, Na) pools. We also evaluate individual sources of error to identify the importance of each source of uncertainty in our estimates. We quantify sampling error by comparing the three most common field methods used to survey coarse wood (two transect methods and a whole-plot survey). We quantify the measurement error associated with length and diameter measurement, and technician error in species identification and decay class using plots surveyed by multiple technicians. We use previously published values of model error for the four most common methods of volume estimation: Smalian's, conical frustum, conic paraboloid, and average-of-ends. We also use previously published values for error in the collapse ratio (cross-sectional height/width) of decayed logs that serves as a surrogate for the volume remaining. We consider sampling error in chemical concentration and density for all decay classes, using distributions from both published and unpublished studies. Analytical uncertainty is calculated using standard reference plant material from the National Institute of Standards. Our results suggest that technician error in decay classification can have a large effect on uncertainty, since many of the error distributions included in the calculation (e.g. density, chemical concentration, volume-model selection, collapse ratio) are decay-class specific.
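    The Monte Carlo propagation described above can be sketched for a single log using Smalian's formula, one of the volume models named in the abstract. The measurement errors, density, and carbon-concentration distributions below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000   # Monte Carlo draws

# Nominal field measurements of one log and illustrative uncertainty assumptions.
d1 = rng.normal(30.0, 0.5, n) / 100.0        # large-end diameter (m), +/- 0.5 cm measurement error
d2 = rng.normal(22.0, 0.5, n) / 100.0        # small-end diameter (m)
length = rng.normal(4.0, 0.02, n)            # log length (m)
density = rng.normal(380.0, 60.0, n)         # decayed-wood density (kg m^-3), decay-class specific
carbon_frac = rng.normal(0.48, 0.02, n)      # carbon concentration (fraction of dry mass)

# Smalian's formula: volume = average of the two end cross-sectional areas times length.
area1, area2 = np.pi * d1 ** 2 / 4.0, np.pi * d2 ** 2 / 4.0
volume = 0.5 * (area1 + area2) * length      # m^3
carbon = volume * density * carbon_frac      # kg C per log

print(f"carbon pool: {carbon.mean():.2f} kg "
      f"(95% interval {np.percentile(carbon, 2.5):.2f}-{np.percentile(carbon, 97.5):.2f} kg)")
```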

  20. SU-E-T-381: Evaluation of Calculated Dose Accuracy for Organs-At-Risk Located at Out-Of-Field in a Commercial Treatment Planning System for High Energy Photon Beams Produced From TrueBeam Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, L; Ding, G

    Purpose: Dose calculation accuracy for the out-of-field dose is important for predicting the dose to organs-at-risk when they are located outside the primary beams. Published investigations evaluating the out-of-field dose calculation accuracy of treatment planning systems (TPS) have focused on low-energy (6 MV) photons. This study evaluates the out-of-field dose calculation accuracy of the AAA algorithm for 15 MV high-energy photon beams. Methods: We used the EGSnrc Monte Carlo (MC) codes to evaluate the AAA algorithm in the Varian Eclipse TPS (v.11). The incident beams start with validated Varian phase-space sources for a TrueBeam linac equipped with a Millennium 120 MLC. Dose comparisons between AAA and MC for CT-based realistic patient treatment plans using VMAT techniques for prostate and lung were performed, and the uncertainties of organ doses predicted by AAA at out-of-field locations were evaluated. Results: The results show that AAA calculations under-estimate doses at the dose level of 1% (or less) of the prescribed dose for CT-based patient treatment plans using VMAT techniques. In regions where the dose is only 1% of the prescribed dose, although AAA under-estimates the out-of-field dose by 30% relative to the local dose, this is only about 0.3% of the prescribed dose. For example, the uncertainty of the calculated organ dose to the liver or kidney located out-of-field is <0.3% of the prescribed dose. Conclusion: For 15 MV high-energy photon beams, very good agreement (<1%) in calculated dose distributions was obtained between AAA and MC. The uncertainty of out-of-field dose calculations predicted by the AAA algorithm for realistic patient VMAT plans is <0.3% of the prescribed dose in regions where the dose relative to the prescribed dose is <1%, although the uncertainties can be much larger relative to local doses. For organs-at-risk located out-of-field, the error in the dose predicted by Eclipse using AAA is negligible. This work was conducted in part using the resources of Varian research grant VUMC40590-R.

  1. Bright gamma-ray Galactic Center excess and dark dwarfs: Strong tension for dark matter annihilation despite Milky Way halo profile and diffuse emission uncertainties

    NASA Astrophysics Data System (ADS)

    Abazajian, Kevork N.; Keeley, Ryan E.

    2016-04-01

    We incorporate Milky Way dark matter halo profile uncertainties, as well as an accounting of diffuse gamma-ray emission uncertainties, in dark matter annihilation models for the Galactic Center Extended gamma-ray excess (GCE) detected by the Fermi Gamma Ray Space Telescope. The ranges of particle annihilation rates and masses expand when these unknowns are included. However, two of the most precise empirical determinations of the Milky Way halo's local density and density profile leave the signal region in considerable tension with dark matter annihilation searches from combined dwarf galaxy analyses for single-channel dark matter annihilation models. The GCE and dwarf tension can be alleviated if: one, the halo is very highly concentrated or strongly contracted; two, the dark matter annihilation signal differentiates between dwarfs and the GC; or, three, local stellar density measures are found to be significantly lower, like that from recent stellar counts, increasing the local dark matter density.

  2. Position uncertainty distribution for articulated arm coordinate measuring machine based on simplified definite integration

    NASA Astrophysics Data System (ADS)

    You, Xu; Zhi-jian, Zong; Qun, Gao

    2018-07-01

    This paper describes a methodology for the position uncertainty distribution of an articulated arm coordinate measuring machine (AACMM). First, a model of the structural parameter uncertainties was established by a statistical method. Second, the position uncertainty space volume of the AACMM in a certain configuration was expressed using a simplified definite integration method based on the structural parameter uncertainties; it was then used to evaluate the position accuracy of the AACMM in that configuration. Third, the configurations of a certain working point were calculated by an inverse solution, and the position uncertainty distribution of that working point was determined; the working point uncertainty can be evaluated by the weighting method. Lastly, the position uncertainty distribution in the workspace of the AACMM was described by a map. A single-point contrast test of a 6-joint AACMM was carried out to verify the effectiveness of the proposed method, and it was shown that the method can describe the position uncertainty of the AACMM and can be used to guide the calibration of the AACMM and the choice of the AACMM's accuracy area.

  3. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.

  4. Aerosol direct radiative effects over the northwest Atlantic, northwest Pacific, and North Indian Oceans: estimates based on in-situ chemical and optical measurements and chemical transport modeling

    NASA Astrophysics Data System (ADS)

    Bates, T. S.; Anderson, T. L.; Baynard, T.; Bond, T.; Boucher, O.; Carmichael, G.; Clarke, A.; Erlick, C.; Guo, H.; Horowitz, L.; Howell, S.; Kulkarni, S.; Maring, H.; McComiskey, A.; Middlebrook, A.; Noone, K.; O'Dowd, C. D.; Ogren, J.; Penner, J.; Quinn, P. K.; Ravishankara, A. R.; Savoie, D. L.; Schwartz, S. E.; Shinozuka, Y.; Tang, Y.; Weber, R. J.; Wu, Y.

    2006-05-01

    The largest uncertainty in the radiative forcing of climate change over the industrial era is that due to aerosols, a substantial fraction of which is the uncertainty associated with scattering and absorption of shortwave (solar) radiation by anthropogenic aerosols in cloud-free conditions (IPCC, 2001). Quantifying and reducing the uncertainty in aerosol influences on climate is critical to understanding climate change over the industrial period and to improving predictions of future climate change for assumed emission scenarios. Measurements of aerosol properties during major field campaigns in several regions of the globe during the past decade are contributing to an enhanced understanding of atmospheric aerosols and their effects on light scattering and climate. The present study, which focuses on three regions downwind of major urban/population centers (North Indian Ocean (NIO) during INDOEX, the Northwest Pacific Ocean (NWP) during ACE-Asia, and the Northwest Atlantic Ocean (NWA) during ICARTT), incorporates understanding gained from field observations of aerosol distributions and properties into calculations of perturbations in radiative fluxes due to these aerosols. This study evaluates the current state of observations and of two chemical transport models (STEM and MOZART). Measurements of burdens, extinction optical depth (AOD), and direct radiative effect of aerosols (DRE - change in radiative flux due to total aerosols) are used as measurement-model check points to assess uncertainties. In-situ measured and remotely sensed aerosol properties for each region (mixing state, mass scattering efficiency, single scattering albedo, and angular scattering properties and their dependences on relative humidity) are used as input parameters to two radiative transfer models (GFDL and University of Michigan) to constrain estimates of aerosol radiative effects, with uncertainties in each step propagated through the analysis. Constraining the radiative transfer calculations by observational inputs increases the clear-sky, 24-h averaged AOD (34±8%), top of atmosphere (TOA) DRE (32±12%), and TOA direct climate forcing of aerosols (DCF - change in radiative flux due to anthropogenic aerosols) (37±7%) relative to values obtained with "a priori" parameterizations of aerosol loadings and properties (GFDL RTM). The resulting constrained clear-sky TOA DCF is -3.3±0.47, -14±2.6, -6.4±2.1 Wm-2 for the NIO, NWP, and NWA, respectively. With the use of constrained quantities (extensive and intensive parameters) the calculated uncertainty in DCF was 25% less than the "structural uncertainties" used in the IPCC-2001 global estimates of direct aerosol climate forcing. Such comparisons with observations and resultant reductions in uncertainties are essential for improving and developing confidence in climate model calculations incorporating aerosol forcing.

  5. Hot and cold body reference noise generators from 0 to 40 GHz

    NASA Technical Reports Server (NTRS)

    Hornbostel, D. H.

    1974-01-01

    This article describes the design, development, and analysis of exceptionally accurate radiometric noise generators from 0-40 GHz to serve as standard references. Size, weight, power, and reliability are optimized to meet the requirements of NASA air- and space-borne radiometers. The radiometric noise temperature of these noise generators is, unavoidably, calculated from measured values rather than measured directly. The absolute accuracy and stability are equal to or better than those of reliable standards available for comparison. A noise generator has been developed whose measurable properties (VSWR, line loss, thermometric temperatures) have been optimized in order to minimize the effects of the uncertainty in the calculated radiometric noise temperatures. Each measurable property is evaluated and analyzed to determine the effects of the uncertainty of the measured value. Unmeasurable properties (primarily temperature gradients) are analyzed, and reasonable precautions are designed into the noise generator to guarantee that the uncertainty of the value remains within tolerable limits.

  6. Bs and Ds decay constants in three-flavor lattice QCD.

    PubMed

    Wingate, Matthew; Davies, Christine T H; Gray, Alan; Lepage, G Peter; Shigemitsu, Junko

    2004-04-23

    Capitalizing on recent advances in lattice QCD, we present a calculation of the leptonic decay constants f(B(s)) and f(D(s)) that includes effects of one strange sea quark and two light sea quarks via an improved staggered action. By shedding the quenched approximation and the associated lattice scale uncertainty, lattice QCD greatly increases its predictive power. Nonrelativistic QCD is used to simulate heavy quarks with masses between 1.5m(c) and m(b). We arrive at the following results: f(B(s))=260±7±26±8±5 MeV and f(D(s))=290±20±29±29±6 MeV. The first quoted error is the statistical uncertainty, and the rest estimate the sizes of higher order terms neglected in this calculation. All of these uncertainties are systematically improvable by including another order in the weak coupling expansion, the nonrelativistic expansion, or the Symanzik improvement program.

  7. Space Shuttle Orbiter flight heating rate measurement sensitivity to thermal protection system uncertainties

    NASA Technical Reports Server (NTRS)

    Bradley, P. F.; Throckmorton, D. A.

    1981-01-01

    A study was completed to determine the sensitivity of computed convective heating rates to uncertainties in the thermal protection system thermal model. The parameters considered were: density, thermal conductivity, and specific heat of both the reusable surface insulation and its coating; coating thickness and emittance; and temperature measurement uncertainty. The assessment used a modified version of the computer program that calculates heating rates from temperature time histories. The original version of the program solves the direct one-dimensional heating problem, and the modified version is set up to solve the inverse problem. The modified program was used in thermocouple data reduction for shuttle flight data. Both nominal and altered thermal models were used to determine the necessity for accurate knowledge of the thermal protection system's material thermal properties. For many thermal properties, the sensitivity (inaccuracies created in the calculation of convective heating rate by an altered property) was very low.

  8. Estimating the uncertainty of calculated out-of-field organ dose from a commercial treatment planning system.

    PubMed

    Wang, Lilie; Ding, George X

    2018-06-12

    Therapeutic radiation to cancer patients is accompanied by unintended radiation to organs outside the treatment field. It is known that model-based dose algorithms have limitations in calculating out-of-field doses. This study evaluated the out-of-field dose calculated by the Varian Eclipse treatment planning system (v.11 with the AAA algorithm) in realistic treatment plans, with the goal of estimating the uncertainties of calculated organ doses. Photon beam phase-space files for the TrueBeam linear accelerator were provided by Varian. These were used as incident sources in EGSnrc Monte Carlo simulations of radiation transport through the downstream jaws and MLC. Dynamic movements of the MLC leaves were fully modeled based on treatment plans using IMRT or VMAT techniques. The Monte Carlo calculated out-of-field doses were then compared with those calculated by Eclipse. The dose comparisons were performed for different beam energies and treatment sites, including head-and-neck, lung, and pelvis. For 6 MV (FF/FFF), 10 MV (FF/FFF), and 15 MV (FF) beams, Eclipse underestimated out-of-field local doses by 30%-50% compared with Monte Carlo calculations when the local dose was <1% of the prescribed dose. The accuracy of out-of-field dose calculations using Eclipse is improved when the collimator jaws are set at the smallest possible aperture for the MLC openings. The Eclipse system consistently underestimates the out-of-field dose by a factor of 2 for all beam energies studied at local dose levels of less than 1% of the prescribed dose. These findings are useful in providing information on the uncertainties of out-of-field organ doses calculated by the Eclipse treatment planning system. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  9. Validation and Uncertainty Estimates for MODIS Collection 6 "Deep Blue" Aerosol Data

    NASA Technical Reports Server (NTRS)

    Sayer, A. M.; Hsu, N. C.; Bettenhausen, C.; Jeong, M.-J.

    2013-01-01

    The "Deep Blue" aerosol optical depth (AOD) retrieval algorithm was introduced in Collection 5 of the Moderate Resolution Imaging Spectroradiometer (MODIS) product suite, and complemented the existing "Dark Target" land and ocean algorithms by retrieving AOD over bright arid land surfaces, such as deserts. The forthcoming Collection 6 of MODIS products will include a "second generation" Deep Blue algorithm, expanding coverage to all cloud-free and snow-free land surfaces. The Deep Blue dataset will also provide an estimate of the absolute uncertainty on AOD at 550 nm for each retrieval. This study describes the validation of Deep Blue Collection 6 AOD at 550 nm (τM) from MODIS Aqua against Aerosol Robotic Network (AERONET) data from 60 sites to quantify these uncertainties. The highest quality (denoted quality assurance flag value 3) data are shown to have an absolute uncertainty of approximately (0.086+0.56τM)/AMF, where AMF is the geometric air mass factor. For a typical AMF of 2.8, this is approximately 0.03+0.20τM, comparable in quality to other satellite AOD datasets. Regional variability of retrieval performance and comparisons against Collection 5 results are also discussed.
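    The per-retrieval uncertainty envelope quoted above can be evaluated directly. The short sketch below simply applies that expression for a few illustrative AOD values and air mass factors; the specific values chosen are arbitrary.

```python
def deep_blue_aod_uncertainty(tau, amf):
    """Absolute 1-sigma AOD uncertainty from the envelope quoted in the abstract."""
    return (0.086 + 0.56 * tau) / amf

for amf in (2.0, 2.8, 4.0):
    for tau in (0.1, 0.5, 1.0):
        u = deep_blue_aod_uncertainty(tau, amf)
        print(f"AMF = {amf:.1f}, tau = {tau:.1f}: +/- {u:.3f}")
```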

  10. Problems encountered when defining Arctic amplification as a ratio

    PubMed Central

    Hind, Alistair; Zhang, Qiong; Brattström, Gudrun

    2016-01-01

    In climate change science the term ‘Arctic amplification’ has become synonymous with an estimation of the ratio of a change in Arctic temperatures compared with a broader reference change under the same period, usually in global temperatures. Here, it is shown that this definition of Arctic amplification comes with a suite of difficulties related to the statistical properties of the ratio estimator itself. Most problematic is the complexity of categorizing uncertainty in Arctic amplification when the global, or reference, change in temperature is close to 0 over a period of interest, in which case it may be impossible to set bounds on this uncertainty. An important conceptual distinction is made between the ‘Ratio of Means’ and ‘Mean Ratio’ approaches to defining a ratio estimate of Arctic amplification, as they do not only possess different uncertainty properties regarding the amplification factor, but are also demonstrated to ask different scientific questions. Uncertainty in the estimated range of the Arctic amplification factor using the latest global climate models and climate forcing scenarios is expanded upon and shown to be greater than previously demonstrated for future climate projections, particularly using forcing scenarios with lower concentrations of greenhouse gases. PMID:27461918

  11. Problems encountered when defining Arctic amplification as a ratio.

    PubMed

    Hind, Alistair; Zhang, Qiong; Brattström, Gudrun

    2016-07-27

    In climate change science the term 'Arctic amplification' has become synonymous with an estimation of the ratio of a change in Arctic temperatures compared with a broader reference change under the same period, usually in global temperatures. Here, it is shown that this definition of Arctic amplification comes with a suite of difficulties related to the statistical properties of the ratio estimator itself. Most problematic is the complexity of categorizing uncertainty in Arctic amplification when the global, or reference, change in temperature is close to 0 over a period of interest, in which case it may be impossible to set bounds on this uncertainty. An important conceptual distinction is made between the 'Ratio of Means' and 'Mean Ratio' approaches to defining a ratio estimate of Arctic amplification, as they do not only possess different uncertainty properties regarding the amplification factor, but are also demonstrated to ask different scientific questions. Uncertainty in the estimated range of the Arctic amplification factor using the latest global climate models and climate forcing scenarios is expanded upon and shown to be greater than previously demonstrated for future climate projections, particularly using forcing scenarios with lower concentrations of greenhouse gases.
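    The distinction drawn above between the 'Ratio of Means' and 'Mean Ratio' estimators can be made concrete with synthetic temperature-change series. The series below are placeholders; the sketch only illustrates how the two estimators answer different questions and how the ratio becomes unstable when the reference change is near zero.

```python
import numpy as np

rng = np.random.default_rng(11)
n_years = 30
arctic_change = rng.normal(0.9, 0.6, n_years)   # synthetic annual Arctic temperature changes (K)
global_change = rng.normal(0.3, 0.3, n_years)   # synthetic global (reference) changes (K)

# 'Ratio of Means': one ratio of the two average changes over the period.
ratio_of_means = arctic_change.mean() / global_change.mean()

# 'Mean Ratio': average of the year-by-year ratios -- a different scientific question,
# and unstable whenever the reference change in a given year is close to zero.
mean_ratio = np.mean(arctic_change / global_change)

print(f"Ratio of Means: {ratio_of_means:.2f}")
print(f"Mean Ratio:     {mean_ratio:.2f}")
```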

  12. Uncertainty analysis of gas flow measurements using clearance-sealed piston provers in the range from 0.0012 g min-1 to 60 g min-1

    NASA Astrophysics Data System (ADS)

    Bobovnik, G.; Kutin, J.; Bajsić, I.

    2016-08-01

    This paper deals with an uncertainty analysis of gas flow measurements using a compact, high-speed, clearance-sealed realization of a piston prover. A detailed methodology for the uncertainty analysis, covering the components due to the gas density, dimensional and time measurements, the leakage flow, the density correction factor and the repeatability, is presented. The paper also deals with the selection of the isothermal and adiabatic measurement models, the treatment of the leakage flow and discusses the need for averaging multiple consecutive readings of the piston prover. The analysis is prepared for the flow range (50 000:1) covered by the three interchangeable flow cells. The results show that using the adiabatic measurement model and averaging the multiple readings, the estimated expanded measurement uncertainty of the gas mass flow rate is less than 0.15% in the flow range above 0.012 g min-1, whereas it increases for lower mass flow rates due to the leakage flow related effects. At the upper end of the measuring range, using the adiabatic instead of the isothermal measurement model, as well as averaging multiple readings, proves important.

  13. Carbon accounting and economic model uncertainty of emissions from biofuels-induced land use change.

    PubMed

    Plevin, Richard J; Beckman, Jayson; Golub, Alla A; Witcover, Julie; O'Hare, Michael

    2015-03-03

    Few of the numerous published studies of the emissions from biofuels-induced "indirect" land use change (ILUC) attempt to propagate and quantify uncertainty, and those that have done so have restricted their analysis to a portion of the modeling systems used. In this study, we pair a global, computable general equilibrium model with a model of greenhouse gas emissions from land-use change to quantify the parametric uncertainty in the paired modeling system's estimates of greenhouse gas emissions from ILUC induced by expanded production of three biofuels. We find that for the three fuel systems examined (US corn ethanol, Brazilian sugar cane ethanol, and US soybean biodiesel), 95% of the results occurred within ±20 g CO2e MJ⁻¹ of the mean (coefficient of variation of 20-45%), with economic model parameters related to crop yield and the productivity of newly converted cropland (from forestry and pasture) contributing most of the variance in estimated ILUC emissions intensity. Although the experiments performed here allow us to characterize parametric uncertainty, changes to the model structure have the potential to shift the mean by tens of grams of CO2e per megajoule and further broaden distributions for ILUC emission intensities.
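
    The kind of Monte Carlo summary reported above (mean, coefficient of variation, central 95% interval) can be sketched as follows; for brevity the sample is drawn from an assumed normal distribution of emission intensities rather than by propagating the economic-model parameters the study actually varies.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical draw of ILUC emission intensities (g CO2e/MJ); the real study
    # samples economic-model parameters, not the result directly.
    samples = rng.normal(loc=30.0, scale=10.0, size=100_000)

    mean = samples.mean()
    cv = samples.std(ddof=1) / mean                  # coefficient of variation
    lo, hi = np.percentile(samples, [2.5, 97.5])     # central 95 % interval
    print(f"mean = {mean:.1f} g CO2e/MJ, CV = {cv:.0%}")
    print(f"95% of results within {lo - mean:+.1f} / {hi - mean:+.1f} of the mean")
    ```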

  14. Application of stochastic multiattribute analysis to assessment of single walled carbon nanotube synthesis processes.

    PubMed

    Canis, Laure; Linkov, Igor; Seager, Thomas P

    2010-11-15

    The unprecedented uncertainty associated with engineered nanomaterials greatly expands the need for research regarding their potential environmental consequences. However, decision-makers such as regulatory agencies, product developers, or other nanotechnology stakeholders may not find the results of such research directly informative of decisions intended to mitigate environmental risks. To help interpret research findings and prioritize new research needs, there is an acute need for structured decision-analytic aids that are operable in a context of extraordinary uncertainty. Whereas existing stochastic decision-analytic techniques explore uncertainty only in decision-maker preference information, this paper extends model uncertainty to technology performance. As an illustrative example, the framework is applied to the case of single-wall carbon nanotubes. Four different synthesis processes (arc, high pressure carbon monoxide, chemical vapor deposition, and laser) are compared based on five salient performance criteria. A probabilistic rank ordering of preferred processes is determined using outranking normalization and a linear-weighted sum for different weighting scenarios including completely unknown weights and four fixed-weight sets representing hypothetical stakeholder views. No single process pathway dominates under all weight scenarios, but it is likely that some inferior process technologies could be identified as low priorities for further research.
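
    A minimal sketch of the stochastic weighting idea described above: alternatives are scored with a linear-weighted sum of normalised criteria under weight vectors sampled uniformly on the simplex, giving a probabilistic rank ordering. The performance matrix and the min-max normalisation are illustrative stand-ins, not the study's data or its outranking procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    processes = ["arc", "HiPco", "CVD", "laser"]
    # Hypothetical performance scores on five criteria (rows = processes);
    # the study's actual criteria values are not reproduced here.
    perf = np.array([
        [0.6, 0.4, 0.7, 0.5, 0.3],
        [0.5, 0.7, 0.6, 0.4, 0.6],
        [0.7, 0.5, 0.4, 0.6, 0.5],
        [0.4, 0.6, 0.5, 0.7, 0.4],
    ])

    # Min-max normalisation per criterion (a simple stand-in for the
    # outranking normalisation used in the paper).
    norm = (perf - perf.min(axis=0)) / (perf.max(axis=0) - perf.min(axis=0))

    # "Completely unknown weights": sample weight vectors uniformly on the simplex.
    weights = rng.dirichlet(np.ones(perf.shape[1]), size=10_000)
    scores = weights @ norm.T              # linear-weighted sums
    first = scores.argmax(axis=1)          # winner under each weight draw

    for i, name in enumerate(processes):
        print(f"P({name} ranked first) ≈ {(first == i).mean():.2f}")
    ```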

  15. Reducing uncertainty in wind turbine blade health inspection with image processing techniques

    NASA Astrophysics Data System (ADS)

    Zhang, Huiyi

    Structural health inspection has been widely applied in the operation of wind farms to find early cracks in wind turbine blades (WTBs). Increased numbers of turbines and expanded rotor diameters are driving up the workloads and safety risks for site employees. Therefore, it is important to automate the inspection process as well as minimize the uncertainties involved in routine blade health inspection. In addition, crack documentation and trending are vital for assessing rotor blade and turbine reliability over the 20-year design life span. A new crack recognition and classification algorithm is described that can support automated structural health inspection of the surface of large composite WTBs. The first part of the study investigated the feasibility of digital image processing in WTB health inspection and defined the capability of numerically detecting cracks as small as hairline thickness. The second part of the study identified and analyzed the uncertainty of the digital image processing method. A self-learning algorithm was proposed to recognize and classify cracks without comparing a blade image to a library of crack images. The last part of the research quantified the uncertainty in the field conditions and the image processing methods.

  16. Chronic beryllium disease and cancer risk estimates with uncertainty for beryllium released to the air from the Rocky Flats Plant.

    PubMed Central

    McGavran, P D; Rood, A S; Till, J E

    1999-01-01

    Beryllium was released into the air from routine operations and three accidental fires at the Rocky Flats Plant (RFP) in Colorado from 1958 to 1989. We evaluated environmental monitoring data and developed estimates of airborne concentrations and their uncertainties and calculated lifetime cancer risks and risks of chronic beryllium disease to hypothetical receptors. This article discusses exposure-response relationships for lung cancer and chronic beryllium disease. We assigned a distribution to cancer slope factor values based on the relative risk estimates from an occupational epidemiologic study used by the U.S. Environmental Protection Agency (EPA) to determine the slope factors. We used the regional atmospheric transport code for Hanford emission tracking atmospheric transport model for exposure calculations because it is particularly well suited for long-term annual-average dispersion estimates and it incorporates spatially varying meteorologic and environmental parameters. We accounted for model prediction uncertainty by using several multiplicative stochastic correction factors that accounted for uncertainty in the dispersion estimate, the meteorology, deposition, and plume depletion. We used Monte Carlo techniques to propagate model prediction uncertainty through to the final risk calculations. We developed nine exposure scenarios of hypothetical but typical residents of the RFP area to consider the lifestyle, time spent outdoors, location, age, and sex of people who may have been exposed. We determined geometric mean incremental lifetime cancer incidence risk estimates for beryllium inhalation for each scenario. The risk estimates were < 10(-6). Predicted air concentrations were well below the current reference concentration derived by the EPA for beryllium sensitization. PMID:10464074
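
    The general pattern of propagating multiplicative stochastic correction factors through to a geometric-mean risk estimate with Monte Carlo sampling can be sketched as below; the model concentration, geometric standard deviations, and slope-factor distribution are all hypothetical values chosen only for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50_000

    # Deterministic air-concentration estimate (arbitrary units) from the
    # transport model; the value is illustrative only.
    c_model = 1.0e-3

    # Multiplicative stochastic correction factors (lognormal, median 1) for
    # dispersion, meteorology, deposition and plume depletion, as in the abstract.
    gsd = {"dispersion": 1.5, "meteorology": 1.3, "deposition": 1.2, "depletion": 1.2}
    correction = np.ones(n)
    for g in gsd.values():
        correction *= rng.lognormal(mean=0.0, sigma=np.log(g), size=n)

    conc = c_model * correction
    slope_factor = rng.lognormal(np.log(2.0e-3), np.log(1.8), size=n)  # hypothetical
    risk = conc * slope_factor

    gm = np.exp(np.log(risk).mean())       # geometric mean incremental risk
    print(f"geometric mean risk ≈ {gm:.2e}")
    ```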

  17. Size exclusion deep bed filtration: Experimental and modelling uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badalyan, Alexander, E-mail: alexander.badalyan@adelaide.edu.au; You, Zhenjiang; Aji, Kaiser

    A detailed uncertainty analysis associated with carboxyl-modified latex particle capture in glass bead-formed porous media enabled verification of the two theoretical stochastic models for prediction of particle retention due to size exclusion. At the beginning of this analysis it is established that size exclusion is a dominant particle capture mechanism in the present study: calculated significant repulsive Derjaguin-Landau-Verwey-Overbeek potential between latex particles and glass beads is an indication of their mutual repulsion, thus, fulfilling the necessary condition for size exclusion. Applying linear uncertainty propagation method in the form of truncated Taylor's series expansion, combined standard uncertainties (CSUs) in normalised suspended particle concentrations are calculated using CSUs in experimentally determined parameters such as: an inlet volumetric flowrate of suspension, particle number in suspensions, particle concentrations in inlet and outlet streams, particle and pore throat size distributions. Weathering of glass beads in high alkaline solutions does not appreciably change particle size distribution, and, therefore, is not considered as an additional contributor to the weighted mean particle radius and corresponded weighted mean standard deviation. Weighted mean particle radius and LogNormal mean pore throat radius are characterised by the highest CSUs among all experimental parameters translating to high CSU in the jamming ratio factor (dimensionless particle size). Normalised suspended particle concentrations calculated via two theoretical models are characterised by higher CSUs than those for experimental data. The model accounting the fraction of inaccessible flow as a function of latex particle radius excellently predicts normalised suspended particle concentrations for the whole range of jamming ratios. The presented uncertainty analysis can be also used for comparison of intra- and inter-laboratory particle size exclusion data.
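
    A generic sketch of the linear (truncated Taylor series) propagation mentioned above, with sensitivity coefficients evaluated numerically for uncorrelated inputs; the toy measurand and input uncertainties are illustrative, not the experiment's.

    ```python
    import numpy as np

    def combined_standard_uncertainty(f, x, u, eps=1e-6):
        """First-order (truncated Taylor series) propagation for uncorrelated inputs:
        u_c(y)^2 = sum_i (df/dx_i)^2 * u_i^2, with derivatives taken numerically."""
        x = np.asarray(x, dtype=float)
        u = np.asarray(u, dtype=float)
        y0 = f(x)
        sens = np.empty_like(x)
        for i in range(x.size):
            dx = np.zeros_like(x)
            dx[i] = eps * max(abs(x[i]), 1.0)
            sens[i] = (f(x + dx) - y0) / dx[i]
        return y0, np.sqrt(np.sum((sens * u) ** 2))

    # Toy example: a normalised concentration C/C0 from two measured counts.
    f = lambda p: p[0] / p[1]
    y, uc = combined_standard_uncertainty(f, x=[950.0, 1000.0], u=[30.0, 30.0])
    print(f"C/C0 = {y:.3f} ± {uc:.3f} (combined standard uncertainty)")
    ```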

  18. From voxel to curvature

    NASA Astrophysics Data System (ADS)

    Monga, Olivier; Ayache, Nicholas; Sander, Peter T.

    1991-09-01

    Modern medical imaging techniques, such as magnetic resonance imaging (MRI) or x-ray computed tomography, provide three dimensional images of internal structures of the body, usually by means of a stack of tomographic images. The first stage in the automatic analysis of such data is 3-D edge detection1,2 which provides points corresponding to the boundaries of the surfaces forming the 3-D structure. The next stage is to characterize the local geometry of these surfaces in order to extract points or lines on which registration and/or tracking procedures can rely.3,4,5,6 This paper presents a pipeline of processes which define a hierarchical description of the second order differential characteristics of the surfaces. The focus is on the theoretical coherence of these levels of representation. A link is established between edge detection and local surface approximation by addressing the uncertainties inherent to edge detection in 2-D or 3-D images, and how to incorporate these uncertainties into the computation of local geometric models. In particular, the uncertainty of edge location, direction, and magnitude is calculated for the 3-D Deriche operator.1,2 Statistical results are then used as a solid theoretical foundation on which to base subsequent computations, such as the determination of local surface curvature using local geometric models for surface segmentation. From the local fitting, for each edge point the mean and Gaussian curvature, principal curvatures and directions, curvature singularities, lines of curvature singularities, and covariance matrices defining the uncertainties are calculated. Experimental results for real data using two 3-D scanner images of the same organ taken at different positions demonstrate the stability of the mean and Gaussian curvatures. Experimental results for real data showing the determination of local curvature extremes of surfaces extracted from MR images are presented.
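
    To make the local-surface-fitting step concrete, the sketch below fits a quadratic (Monge) patch to synthetic edge points and evaluates mean and Gaussian curvature from the standard differential-geometry formulas; it omits the uncertainty propagation and uses points sampled from a sphere so the expected curvatures are known in advance.

    ```python
    import numpy as np

    def local_curvatures(points):
        """Fit z = a x^2 + b xy + c y^2 + d x + e y + f to neighbouring surface
        points and return mean and Gaussian curvature at (0, 0)."""
        x, y, z = points.T
        A = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
        a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
        hx, hy, hxx, hxy, hyy = d, e, 2*a, b, 2*c
        w = 1.0 + hx**2 + hy**2
        K = (hxx*hyy - hxy**2) / w**2                                    # Gaussian
        H = ((1+hy**2)*hxx - 2*hx*hy*hxy + (1+hx**2)*hyy) / (2*w**1.5)   # mean
        return H, K

    # Synthetic "edge points" sampled near the pole of a sphere of radius 10,
    # so the expected values are H ≈ 0.1 and K ≈ 0.01.
    rng = np.random.default_rng(3)
    R = 10.0
    xy = rng.uniform(-1.0, 1.0, size=(200, 2))
    z = R - np.sqrt(R**2 - (xy**2).sum(axis=1))
    H, K = local_curvatures(np.column_stack([xy, z]))
    print(f"mean curvature ≈ {H:.3f}, Gaussian curvature ≈ {K:.4f}")
    ```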

  19. Configuration and validation of an analytical model predicting secondary neutron radiation in proton therapy using Monte Carlo simulations and experimental measurements.

    PubMed

    Farah, J; Bonfrate, A; De Marzi, L; De Oliveira, A; Delacroix, S; Martinetti, F; Trompier, F; Clairand, I

    2015-05-01

    This study focuses on the configuration and validation of an analytical model predicting leakage neutron doses in proton therapy. Using Monte Carlo (MC) calculations, a facility-specific analytical model was built to reproduce out-of-field neutron doses while separately accounting for the contribution of intra-nuclear cascade, evaporation, epithermal and thermal neutrons. This model was first trained to reproduce in-water neutron absorbed doses and in-air neutron ambient dose equivalents, H*(10), calculated using MCNPX. Its capacity in predicting out-of-field doses at any position not involved in the training phase was also checked. The model was next expanded to enable a full 3D mapping of H*(10) inside the treatment room, tested in a clinically relevant configuration and finally consolidated with experimental measurements. Following the literature approach, the work first proved that it is possible to build a facility-specific analytical model that efficiently reproduces in-water neutron doses and in-air H*(10) values with a maximum difference less than 25%. In addition, the analytical model succeeded in predicting out-of-field neutron doses in the lateral and vertical direction. Testing the analytical model in clinical configurations proved the need to separate the contribution of internal and external neutrons. The impact of modulation width on stray neutrons was found to be easily adjustable while beam collimation remains a challenging issue. Finally, the model performance agreed with experimental measurements with satisfactory results considering measurement and simulation uncertainties. Analytical models represent a promising solution that substitutes for time-consuming MC calculations when assessing doses to healthy organs. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  20. Visualizing Coastal Erosion, Overwash and Coastal Flooding in New England

    NASA Astrophysics Data System (ADS)

    Young Morse, R.; Shyka, T.

    2017-12-01

    Powerful East Coast storms and their associated storm tides and large, battering waves can lead to severe coastal change through erosion and re-deposition of beach sediment. The United States Geological Survey (USGS) has modeled such potential for geological response using a storm-impact scale that compares predicted elevations of hurricane-induced water levels and associated wave action to known elevations of coastal topography. The resulting storm surge and wave run-up hindcasts calculate dynamic surf zone collisions with dune structures using the discrete regime categories “collision” (dune erosion), “overwash”, and “inundation”. The National Weather Service (NWS) recently began prototyping this empirical technique under the auspices of the North Atlantic Regional Team (NART). Real-time erosion and inundation forecasts were expanded to include both tropical and extra-tropical cyclones along vulnerable beaches (hotspots) on the New England coast. Preliminary results showed successful predictions of impact during Hurricane Sandy and several intense Nor'easters. The forecasts were verified using observational datasets, including “ground truth” reports from Emergency Managers and storm-based dune profile measurements organized through a Maine Sea Grant partnership. In an effort to produce real-time visualizations of this forecast output, the Northeastern Regional Association of Coastal Ocean Observing Systems (NERACOOS) and the Gulf of Maine Research Institute (GMRI) partnered with NART to create graphical products of wave run-up levels for each New England “hotspot”. The resulting prototype system updates the forecasts twice daily and allows users to adjust the atmospheric and sea state input into the calculations to account for model errors and forecast uncertainty. This talk will provide an overview of the empirical wave run-up calculations, the system used to produce forecast output, and a demonstration of the new web-based tool.

  1. Modeling sugar cane yield with a process-based model from site to continental scale: uncertainties arising from model structure and parameter values

    NASA Astrophysics Data System (ADS)

    Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Huth, N.; Marin, F.; Martiné, J.-F.

    2014-01-01

    Agro-Land Surface Models (agro-LSM) have been developed from the integration of specific crop processes into large-scale generic land surface models that allow calculating the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum. When developing agro-LSM models, particular attention must be given to the effects of crop phenology and management on the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty of Agro-LSM models is related to their usually large number of parameters. In this study, we quantify the parameter-values uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS, using a multi-regional approach with data from sites in Australia, La Réunion and Brazil. In ORCHIDEE-STICS, two models are chained: STICS, an agronomy model that calculates phenology and management, and ORCHIDEE, a land surface model that calculates biomass and other ecosystem variables forced by STICS' phenology. First, the parameters that dominate the uncertainty of simulated biomass at harvest date are determined through a screening of 67 different parameters of both STICS and ORCHIDEE on a multi-site basis. Secondly, the uncertainty of harvested biomass attributable to those most sensitive parameters is quantified and specifically attributed to either STICS (phenology, management) or to ORCHIDEE (other ecosystem variables including biomass) through distinct Monte Carlo runs. The uncertainty on parameter values is constrained using observations by calibrating the model independently at seven sites. In a third step, a sensitivity analysis is carried out by varying the most sensitive parameters to investigate their effects at continental scale. A Monte Carlo sampling method associated with the calculation of Partial Ranked Correlation Coefficients is used to quantify the sensitivity of harvested biomass to input parameters on a continental scale across the large regions of intensive sugar cane cultivation in Australia and Brazil. Ten parameters driving most of the uncertainty in the ORCHIDEE-STICS modeled biomass at the 7 sites are identified by the screening procedure. We found that the 10 most sensitive parameters control phenology (maximum rate of increase of LAI) and root uptake of water and nitrogen (root profile and root growth rate, nitrogen stress threshold) in STICS, and photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), and transpiration and respiration (stomatal conductance, growth and maintenance respiration coefficients) in ORCHIDEE. We find that the optimal carboxylation rate and photosynthesis temperature parameters contribute most to the uncertainty in harvested biomass simulations at site scale. The spatial variation of the ranked correlation between input parameters and modeled biomass at harvest is well explained by rain and temperature drivers, suggesting different climate-mediated sensitivities of modeled sugar cane yield to the model parameters, for Australia and Brazil. This study reveals the spatial and temporal patterns of uncertainty variability for a highly parameterized agro-LSM and calls for more systematic uncertainty analyses of such models.

  2. Modeling sugarcane yield with a process-based model from site to continental scale: uncertainties arising from model structure and parameter values

    NASA Astrophysics Data System (ADS)

    Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Caubel, A.; Huth, N.; Marin, F.; Martiné, J.-F.

    2014-06-01

    Agro-land surface models (agro-LSM) have been developed from the integration of specific crop processes into large-scale generic land surface models that allow calculating the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum. When developing agro-LSM models, particular attention must be given to the effects of crop phenology and management on the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty of agro-LSM models is related to their usually large number of parameters. In this study, we quantify the parameter-values uncertainty in the simulation of sugarcane biomass production with the agro-LSM ORCHIDEE-STICS, using a multi-regional approach with data from sites in Australia, La Réunion and Brazil. In ORCHIDEE-STICS, two models are chained: STICS, an agronomy model that calculates phenology and management, and ORCHIDEE, a land surface model that calculates biomass and other ecosystem variables forced by STICS phenology. First, the parameters that dominate the uncertainty of simulated biomass at harvest date are determined through a screening of 67 different parameters of both STICS and ORCHIDEE on a multi-site basis. Secondly, the uncertainty of harvested biomass attributable to those most sensitive parameters is quantified and specifically attributed to either STICS (phenology, management) or to ORCHIDEE (other ecosystem variables including biomass) through distinct Monte Carlo runs. The uncertainty on parameter values is constrained using observations by calibrating the model independently at seven sites. In a third step, a sensitivity analysis is carried out by varying the most sensitive parameters to investigate their effects at continental scale. A Monte Carlo sampling method associated with the calculation of partial ranked correlation coefficients is used to quantify the sensitivity of harvested biomass to input parameters on a continental scale across the large regions of intensive sugarcane cultivation in Australia and Brazil. The ten parameters driving most of the uncertainty in the ORCHIDEE-STICS modeled biomass at the 7 sites are identified by the screening procedure. We found that the 10 most sensitive parameters control phenology (maximum rate of increase of LAI) and root uptake of water and nitrogen (root profile and root growth rate, nitrogen stress threshold) in STICS, and photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), and transpiration and respiration (stomatal conductance, growth and maintenance respiration coefficients) in ORCHIDEE. We find that the optimal carboxylation rate and photosynthesis temperature parameters contribute most to the uncertainty in harvested biomass simulations at site scale. The spatial variation of the ranked correlation between input parameters and modeled biomass at harvest is well explained by rain and temperature drivers, suggesting different climate-mediated sensitivities of modeled sugarcane yield to the model parameters, for Australia and Brazil. This study reveals the spatial and temporal patterns of uncertainty variability for a highly parameterized agro-LSM and calls for more systematic uncertainty analyses of such models.
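
    The partial ranked correlation coefficient (PRCC) step used in both versions of this study can be sketched as follows: inputs and output are rank-transformed, and each parameter's partial correlation with the output is computed from regression residuals. The parameter names and the toy yield relationship below are purely illustrative.

    ```python
    import numpy as np

    def rank(v):
        """Rank-transform a 1-D array (no tie handling; fine for continuous draws)."""
        return v.argsort().argsort().astype(float)

    def prcc(X, y):
        """Partial ranked correlation coefficient of each column of X with y,
        controlling for the other columns via regression residuals on the ranks."""
        Xr = np.column_stack([rank(X[:, j]) for j in range(X.shape[1])])
        yr = rank(y)
        n, k = Xr.shape
        out = np.empty(k)
        for j in range(k):
            others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
            bx, *_ = np.linalg.lstsq(others, Xr[:, j], rcond=None)
            by, *_ = np.linalg.lstsq(others, yr, rcond=None)
            out[j] = np.corrcoef(Xr[:, j] - others @ bx, yr - others @ by)[0, 1]
        return out

    # Toy Monte Carlo sample: three "parameters" and a yield driven mainly by the
    # first two; parameter names and the relationship are purely illustrative.
    rng = np.random.default_rng(4)
    X = rng.uniform(0.0, 1.0, size=(2000, 3))
    harvest = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=2000)
    names = ["Vcmax_opt", "T_opt_photosynthesis", "extinction_k"]
    print(dict(zip(names, np.round(prcc(X, harvest), 2))))
    ```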

  3. Error of the modelled peak flow of the hydraulically reconstructed 1907 flood of the Ebro River in Xerta (NE Iberian Peninsula)

    NASA Astrophysics Data System (ADS)

    Lluís Ruiz-Bellet, Josep; Castelltort, Xavier; Carles Balasch, J.; Tuset, Jordi

    2016-04-01

    The estimation of the uncertainty of hydraulic modelling results has been deeply analysed, but no clear methodological procedures for its determination have been formulated when applied to historical hydrology. The main objective of this study was to calculate the uncertainty of the resulting peak flow of a typical historical flood reconstruction. The secondary objective was to identify the input variables that influenced the result the most and their contribution to peak flow total error. The uncertainty of the 21-23 October 1907 flood of the Ebro River (NE Iberian Peninsula) in the town of Xerta (83,000 km²) was calculated with a series of local sensitivity analyses of the main variables affecting the resulting peak flow. Besides, in order to see to what degree the result depended on the chosen model, the HEC-RAS resulting peak flow was compared to the ones obtained with the 2D model Iber and with Manning's equation. The peak flow of the 1907 flood of the Ebro River in Xerta, reconstructed with HEC-RAS, was 11500 m³·s⁻¹ and its total error was ±31%. The most influential input variable on the HEC-RAS peak flow results was water height; however, the one that contributed the most to peak flow error was Manning's n, because its uncertainty was far greater than water height's. The main conclusion is that, to ensure the lowest peak flow error, the reliability and precision of the flood mark should be thoroughly assessed. The peak flow was 12000 m³·s⁻¹ when calculated with the 2D model Iber and 11500 m³·s⁻¹ when calculated with the Manning equation.
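
    A minimal sketch of a Manning-equation peak-flow estimate with a simple local sensitivity analysis of the kind described above; the channel geometry, slope, roughness, and perturbation sizes are assumed values, not those of the Xerta reconstruction.

    ```python
    import numpy as np

    def manning_q(n, h, b=150.0, S=0.001):
        """Peak discharge from Manning's equation for a (hypothetical) rectangular
        section of width b, flow depth h, slope S and roughness n."""
        A = b * h                      # flow area
        P = b + 2.0 * h                # wetted perimeter
        R = A / P                      # hydraulic radius
        return A * R ** (2.0 / 3.0) * np.sqrt(S) / n

    n0, h0 = 0.035, 12.0               # illustrative roughness and stage
    Q0 = manning_q(n0, h0)

    # Local sensitivity analysis: perturb each input by its assumed uncertainty.
    dQ_n = manning_q(n0 * 1.25, h0) - Q0     # +25 % in Manning's n
    dQ_h = manning_q(n0, h0 + 0.5) - Q0      # +0.5 m in water height
    rel_err = np.sqrt((dQ_n / Q0) ** 2 + (dQ_h / Q0) ** 2)
    print(f"Q ≈ {Q0:.0f} m3/s, combined relative error ≈ {rel_err:.0%}")
    ```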

  4. Evaluation of the Eclipse eMC algorithm for bolus electron conformal therapy using a standard verification dataset.

    PubMed

    Carver, Robert L; Sprunger, Conrad P; Hogstrom, Kenneth R; Popple, Richard A; Antolak, John A

    2016-05-08

    The purpose of this study was to evaluate the accuracy and calculation speed of electron dose distributions calculated by the Eclipse electron Monte Carlo (eMC) algorithm for use with bolus electron conformal therapy (ECT). The recent commercial availability of bolus ECT technology requires further validation of the eMC dose calculation algorithm. eMC-calculated electron dose distributions for bolus ECT have been compared to previously measured TLD-dose points throughout patient-based cylindrical phantoms (retromolar trigone and nose), whose axial cross sections were based on the mid-PTV (planning treatment volume) CT anatomy. The phantoms consisted of SR4 muscle substitute, SR4 bone substitute, and air. The treatment plans were imported into the Eclipse treatment planning system, and electron dose distributions calculated using 1% and < 0.2% statistical uncertainties. The accuracy of the dose calculations using moderate smoothing and no smoothing was evaluated. Dose differences (eMC-calculated less measured dose) were evaluated in terms of absolute dose difference, where 100% equals the given dose, as well as distance to agreement (DTA). Dose calculations were also evaluated for calculation speed. Results from the eMC for the retromolar trigone phantom using 1% statistical uncertainty without smoothing showed calculated dose at 89% (41/46) of the measured TLD-dose points was within 3% dose difference or 3 mm DTA of the measured value. The average dose difference was -0.21%, and the net standard deviation was 2.32%. Differences as large as 3.7% occurred immediately distal to the mandible bone. Results for the nose phantom, using 1% statistical uncertainty without smoothing, showed calculated dose at 93% (53/57) of the measured TLD-dose points within 3% dose difference or 3 mm DTA. The average dose difference was 1.08%, and the net standard deviation was 3.17%. Differences as large as 10% occurred lateral to the nasal air cavities. Including smoothing had insignificant effects on the accuracy of the retromolar trigone phantom calculations, but reduced the accuracy of the nose phantom calculations in the high-gradient dose areas. Dose calculation times with 1% statistical uncertainty for the retromolar trigone and nose treatment plans were 30 s and 24 s, respectively, using 16 processors (Intel Xeon E5-2690, 2.9 GHz) on a framework agent server (FAS). In comparison, the eMC was significantly more accurate than the pencil beam algorithm (PBA). The eMC has comparable accuracy to the pencil beam redefinition algorithm (PBRA) used for bolus ECT planning and has acceptably low dose calculation times. The eMC accuracy decreased when smoothing was used in high-gradient dose regions. The eMC accuracy was consistent with that previously reported for the eMC electron dose algorithm and shows that the algorithm is suitable for clinical implementation of bolus ECT.
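
    The acceptance criterion quoted above (within 3% dose difference or 3 mm distance to agreement) can be sketched for a 1-D profile as follows; the profile, the dose-matching tolerance used for the DTA search, and the test point are illustrative only.

    ```python
    import numpy as np

    def passes_3pct_3mm(x_meas, d_meas, x_calc, d_calc, dd=3.0, dta=3.0):
        """A measured point passes if the local dose difference is within dd (%)
        OR the distance-to-agreement is within dta (mm). Doses in % of given dose."""
        d_at_point = np.interp(x_meas, x_calc, d_calc)
        if abs(d_at_point - d_meas) <= dd:
            return True
        # DTA: nearest calculated position whose dose matches the measured dose.
        close = np.abs(d_calc - d_meas) <= 0.1     # crude dose-matching tolerance
        return close.any() and np.min(np.abs(x_calc[close] - x_meas)) <= dta

    # Toy 1-D depth-dose profile (hypothetical numbers, not the study's data).
    x = np.linspace(0.0, 60.0, 601)                    # mm
    calc = 100.0 * np.exp(-((x - 25.0) / 12.0) ** 2)   # calculated profile
    print(passes_3pct_3mm(x_meas=30.0, d_meas=86.0, x_calc=x, d_calc=calc))
    ```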

  5. Quantifying uncertainties in radar forward models through a comparison between CloudSat and SPartICus reflectivity factors

    NASA Astrophysics Data System (ADS)

    Mascio, Jeana; Mace, Gerald G.

    2017-02-01

    Interpretations of remote sensing measurements collected in sample volumes containing ice-phase hydrometeors are very sensitive to assumptions regarding the distributions of mass with ice crystal dimension, otherwise known as mass-dimensional or m-D relationships. How these microphysical characteristics vary in nature is highly uncertain, resulting in significant uncertainty in algorithms that attempt to derive bulk microphysical properties from remote sensing measurements. This uncertainty extends to radar reflectivity factors forward calculated from model output because the statistics of the actual m-D relationships in nature are not known. To investigate the variability in m-D relationships in cirrus clouds, reflectivity factors measured by CloudSat are combined with particle size distributions (PSDs) collected by coincident in situ aircraft by using an optimal estimation-based (OE) retrieval of the m-D power law. The PSDs were collected by 12 flights of the Stratton Park Engineering Company Learjet during the Small Particles in Cirrus campaign. We find that no specific habit emerges as preferred, and instead, we find that the microphysical characteristics of ice crystal populations tend to be distributed over a continuum, defying simple categorization. With the uncertainties derived from the OE algorithm, the uncertainties in the forward-modeled backscatter cross section and, in turn, radar reflectivity are calculated using a bootstrapping technique, allowing us to infer the uncertainties in forward-modeled radar reflectivity that would be appropriately applied to remote sensing simulator algorithms.
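
    The bootstrapping step can be sketched as below: retrieved m-D parameter pairs are resampled with replacement and pushed through a crude Rayleigh-like proxy for the forward model to estimate the spread of the forward-modelled reflectivity. The parameter distributions, the fixed size distribution, and the proxy itself are stand-ins, not the paper's forward model.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical retrieved m-D power-law parameters m = a * D**b (SI units) from
    # an optimal-estimation retrieval; the values here are illustrative only.
    a = rng.lognormal(np.log(0.005), 0.4, size=300)
    b = rng.normal(2.1, 0.15, size=300)

    D = np.geomspace(50e-6, 3e-3, 40)      # particle dimensions (m)
    n_D = D ** -2.5                        # fixed, arbitrary PSD shape

    def reflectivity_proxy(a_s, b_s):
        """Crude stand-in for the radar forward model: in the Rayleigh regime the
        backscatter of an ice particle scales roughly with its mass squared."""
        masses = a_s[:, None] * D[None, :] ** b_s[:, None]
        return np.sum(n_D * masses ** 2, axis=1)

    # Bootstrap: resample the retrieved parameter pairs and redo the forward step.
    means = []
    for _ in range(1000):
        idx = rng.integers(0, a.size, a.size)
        means.append(reflectivity_proxy(a[idx], b[idx]).mean())
    means = np.array(means)
    print(f"relative spread of forward-modelled reflectivity ≈ {means.std() / means.mean():.1%}")
    ```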

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer; Clifton, Andrew; Bonin, Timothy

    As wind turbine sizes increase and wind energy expands to more complex and remote sites, remote-sensing devices such as lidars are expected to play a key role in wind resource assessment and power performance testing. The switch to remote-sensing devices represents a paradigm shift in the way the wind industry typically obtains and interprets measurement data for wind energy. For example, the measurement techniques and sources of uncertainty for a remote-sensing device are vastly different from those associated with a cup anemometer on a meteorological tower. Current IEC standards for quantifying remote sensing device uncertainty for power performance testing consider uncertainty due to mounting, calibration, and classification of the remote sensing device, among other parameters. Values of the uncertainty are typically given as a function of the mean wind speed measured by a reference device and are generally fixed, leading to climatic uncertainty values that apply to the entire measurement campaign. However, real-world experience and a consideration of the fundamentals of the measurement process have shown that lidar performance is highly dependent on atmospheric conditions, such as wind shear, turbulence, and aerosol content. At present, these conditions are not directly incorporated into the estimated uncertainty of a lidar device. In this presentation, we describe the development of a new dynamic lidar uncertainty framework that adapts to current flow conditions and more accurately represents the actual uncertainty inherent in lidar measurements under different conditions. In this new framework, sources of uncertainty are identified for estimation of the line-of-sight wind speed and reconstruction of the three-dimensional wind field. These sources are then related to physical processes caused by the atmosphere and lidar operating conditions. The framework is applied to lidar data from a field measurement site to assess the ability of the framework to predict errors in lidar-measured wind speed. The results show how uncertainty varies over time and can be used to help select data with different levels of uncertainty for different applications, for example, low uncertainty data for power performance testing versus all data for plant performance monitoring.

  7. Thermodynamic analyses and the experimental validation of the Pulse Tube Expander system

    NASA Astrophysics Data System (ADS)

    Jia, Qiming; Gong, Linghui; Feng, Guochao; Zou, Longhui

    2018-04-01

    A Pulse Tube Expander (PTE) for small and medium capacity cryogenic refrigeration systems is described in this paper. An analysis of the Pulse Tube Expander is developed based on the thermodynamic analyses of the system. It is shown that the gas expansion is isentropic in the cold end of the pulse tube. The temperature variation at the outlet of Pulse Tube Expander is measured and the isentropic efficiency is calculated to be 0.455 at 2 Hz. The pressure oscillations in the pulse tube are obtained at different frequencies. The limitations and advantages of this system are also discussed.
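
    For reference, the isentropic efficiency of an expander is commonly evaluated from measured inlet and outlet conditions as in the sketch below; the working gas, temperatures, and pressures are assumed values and do not reproduce the 0.455 figure quoted above.

    ```python
    # Generic textbook relation for expander isentropic efficiency (the paper's
    # exact measurement procedure is not reproduced here).
    gamma = 1.667                                # helium, assumed working gas
    T_in, p_in, p_out = 300.0, 1.6e6, 0.8e6      # illustrative values (K, Pa)
    T_out_meas = 260.0                           # illustrative measured outlet temperature (K)

    # Outlet temperature of an ideal (isentropic) expansion between the same pressures.
    T_out_isentropic = T_in * (p_out / p_in) ** ((gamma - 1.0) / gamma)

    # Ratio of actual to ideal temperature (enthalpy) drop.
    eta = (T_in - T_out_meas) / (T_in - T_out_isentropic)
    print(f"isentropic efficiency = {eta:.3f}")
    ```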

  8. Uncertainties in s -process nucleosynthesis in low mass stars determined from Monte Carlo variations

    NASA Astrophysics Data System (ADS)

    Cescutti, G.; Hirschi, R.; Nishimura, N.; den Hartogh, J. W.; Rauscher, T.; Murphy, A. St J.; Cristallo, S.

    2018-05-01

    The main s-process taking place in low mass stars produces about half of the elements heavier than iron. It is therefore very important to determine the impact of nuclear physics uncertainties on this process. We have performed extensive nuclear reaction network calculations using individual and temperature-dependent uncertainties for reactions involving elements heavier than iron, within a Monte Carlo framework. Using this technique, we determined the uncertainty in the main s-process abundance predictions due to nuclear uncertainties linked to weak interactions and neutron captures on elements heavier than iron. We also identified the key nuclear reactions dominating these uncertainties. We found that β-decay rate uncertainties affect only a few nuclides near s-process branchings, whereas most of the uncertainty in the final abundances is caused by uncertainties in neutron capture rates, either directly producing or destroying the nuclide of interest. Combined total nuclear uncertainties due to reactions on heavy elements are in general small (less than 50%). Three key reactions, nevertheless, stand out because they significantly affect the uncertainties of a large number of nuclides. These are 56Fe(n,γ), 64Ni(n,γ), and 138Ba(n,γ). We discuss the prospect of reducing uncertainties in the key reactions identified in this study with future experiments.

  9. Final report of the supplementary comparison EURAMET.EM-S31 comparison of capacitance and capacitance ratio

    NASA Astrophysics Data System (ADS)

    Schurr, J.; Fletcher, N.; Gournay, P.; Thévenot, O.; Overney, F.; Johnson, L.; Xie, R.; Dierikx, E.

    2017-01-01

    Within the framework of the supplementary comparison EURAMET.EM-S31, 'Comparison of capacitance and capacitance ratio', five participants (the BIPM, METAS, LNE, PTB, and VSL) inter-compared their capacitance realisations traced to the quantum Hall resistance measured at either ac or dc. The measurands were the capacitance values of three 10 pF standards and one 100 pF standard, and optionally their voltage and frequency dependences. Because the results were not fully satisfying, the circulation was repeated, augmented by a link to the NMIA calculable capacitor. Also, two ac-dc resistors were circulated and their frequency dependences were measured in terms of the ac-dc resistance standards involved in the particular capacitance realisations, to allow inter-comparison of these resistance standards. In the end, good agreement is achieved within the expanded uncertainties at a coverage factor k = 2. Furthermore, the comparison led to new insight regarding the stability and travelling behaviour of the capacitance standards and, by virtue of the link to the NMIA calculable capacitor, to a determination of the von Klitzing constant in agreement with the 2014 CODATA value. This text appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  10. A crater and its ejecta: An interpretation of Deep Impact

    NASA Astrophysics Data System (ADS)

    Holsapple, Keith A.; Housen, Kevin R.

    2007-03-01

    We apply recently updated scaling laws for impact cratering and ejecta to interpret observations of the Deep Impact event. An important question is whether the cratering event was gravity or strength-dominated; the answer gives important clues about the properties of the surface material of Tempel 1. Gravity scaling was assumed in pre-event calculations and has been asserted in initial studies of the mission results. Because the gravity field of Tempel 1 is extremely weak, a gravity-dominated event necessarily implies a surface with essentially zero strength. The conclusion of gravity scaling was based mainly on the interpretation that the impact ejecta plume remained attached to the comet during its evolution. We address that feature here, and conclude that even strength-dominated craters would result in a plume that appeared to remain attached to the surface. We then calculate the plume characteristics from scaling laws for a variety of material types, and for gravity and strength-dominated cases. We find that no model of cratering alone can match the reported observation of plume mass and brightness history. Instead, comet-like acceleration mechanisms such as expanding vapor clouds are required to move the ejected mass to the far field in a few-hour time frame. With such mechanisms, and to within the large uncertainties, either gravity or strength craters can provide the levels of estimated observed mass. Thus, the observations are unlikely to answer the questions about the mechanical nature of the Tempel 1 surface.

  12. SU-F-T-593: Technical Treatment Accuracy in a Clinic of Fractionated Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bisht, R; Kale, S; Natanasabapathi, G

    2016-06-15

    Purpose: The purpose of this study is to estimate technical treatment accuracy in fractionated stereotactic radiosurgery (fSRS) using the extend system (ES) of Gamma Knife (GK). Methods: fSRS with GK relies on a patient-specific re-locatable immobilization system. The reference treatment position is estimated using a digital probe and a repositioning check tool (RCT). The “calibration values” of the RCT apertures were compared with values measured on the RCT-QA tool to evaluate the standard error (SE) associated with RCT measurements. A treatment plan with a single “4 mm collimator shot” was created to deliver a radiation dose of 5 Gy at the predefined plane of a newly designed in-house head-neck phantom. The plan was investigated using radiochromic EBT3 films. Stereotactic CT imaging of a designed mini CT phantom and a distortion study of MR imaging were combined to calculate the imaging SE. The focal precision check for GK machine tolerance was performed using a central diode test tool. Results: Twenty observations of the RCT and the digital probe showed SEs of ±0.0186 mm and ±0.0002 mm, respectively. A mean positional shift of 0.2752 mm (σ = 0.0696 mm) was observed for twenty similar treatment settings of the head-neck phantom. The difference between the radiological and predefined exposure points was 0.4650 mm and 0.4270 mm for two independent experiments. The imaging studies showed a combined SE of ±0.1055 mm. Twenty frequent runs of a diode test tool showed a tolerance SE of ±0.0096 mm. If the measurements are considered at a 95% confidence level, the expanded uncertainty is evaluated as ±0.2371 mm with our system. When the positional shift was combined with the expanded uncertainty, a trivial variation of 0.07 mm (max) was observed in comparing the resultant radiological precision through film investigations. Conclusion: The study proposes that expressing “technical treatment accuracy” within “known uncertainties” is rational in the estimation of routine fSRS quality. The research work is supported by the research section of “All India Institute of Medical Sciences” - New Delhi, India under project no A-247.

  13. Black hole complementarity with the generalized uncertainty principle in Gravity's Rainbow

    NASA Astrophysics Data System (ADS)

    Gim, Yongwan; Um, Hwajin; Kim, Wontae

    2018-02-01

    When gravitation is combined with quantum theory, the Heisenberg uncertainty principle could be extended to the generalized uncertainty principle accompanying a minimal length. To see how the generalized uncertainty principle works in the context of black hole complementarity, we calculate the required energy to duplicate information for the Schwarzschild black hole. It shows that the duplication of information is not allowed and black hole complementarity is still valid even assuming the generalized uncertainty principle. On the other hand, the generalized uncertainty principle with the minimal length could lead to a modification of the conventional dispersion relation in light of Gravity's Rainbow, where the minimal length is also invariant as well as the speed of light. Revisiting the gedanken experiment, we show that the no-cloning theorem for black hole complementarity can be made valid in the regime of Gravity's Rainbow on a certain combination of parameters.

  14. Decay heat uncertainty quantification of MYRRHA

    NASA Astrophysics Data System (ADS)

    Fiorito, Luca; Buss, Oliver; Hoefer, Axel; Stankovskiy, Alexey; Eynde, Gert Van den

    2017-09-01

    MYRRHA is a lead-bismuth cooled MOX-fueled accelerator driven system (ADS) currently in the design phase at SCK·CEN in Belgium. The correct evaluation of the decay heat and of its uncertainty level is very important for the safety demonstration of the reactor. In the first part of this work we assessed the decay heat released by the MYRRHA core using the ALEPH-2 burnup code. The second part of the study focused on the nuclear data uncertainty and covariance propagation to the MYRRHA decay heat. Radioactive decay data, independent fission yield and cross section uncertainties/covariances were propagated using two nuclear data sampling codes, namely NUDUNA and SANDY. According to the results, 238U cross sections and fission yield data are the largest contributors to the MYRRHA decay heat uncertainty. The calculated uncertainty values are deemed acceptable from the safety point of view as they are well within the available regulatory limits.

  15. Using a Meniscus to Teach Uncertainty in Measurement

    NASA Astrophysics Data System (ADS)

    Backman, Philip

    2008-02-01

    I have found that students easily understand that a measurement cannot be exact, but they often seem to lack an understanding of why it is important to know something about the magnitude of the uncertainty. This tends to promote an attitude that almost any uncertainty value will do. Such indifference may exist because once an uncertainty is determined or calculated, it remains as only a number without a concrete physical connection back to the experiment. For the activity described here—presented as a challenge—groups of students are given a container and asked to make certain measurements and to estimate the uncertainty in each of those measurements. They are then challenged to complete a particular task involving the container and a volume of water. Whether the assigned task is actually achievable, however, slowly comes into question once the magnitude of the uncertainties in the original measurements is compared to the specific requirements of the challenge.

  16. Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity.

    PubMed

    Li, Harbin; McNulty, Steven G

    2007-10-01

    Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BC(w); 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were BC(w) base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL.

  17. Number-phase minimum-uncertainty state with reduced number uncertainty in a Kerr nonlinear interferometer

    NASA Astrophysics Data System (ADS)

    Kitagawa, M.; Yamamoto, Y.

    1987-11-01

    An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.

  18. Aerosol Measurements in the Mid-Atlantic: Trends and Uncertainty

    NASA Astrophysics Data System (ADS)

    Hains, J. C.; Chen, L. A.; Taubman, B. F.; Dickerson, R. R.

    2006-05-01

    Elevated levels of PM2.5 are associated with cardiovascular and respiratory problems and even increased mortality rates. In 2002 we ran two commonly used PM2.5 speciation samplers (an IMPROVE sampler and an EPA sampler) in parallel at Fort Meade, Maryland (a suburban site located in the Baltimore- Washington urban corridor). The filters were analyzed at different labs. This experiment allowed us to calculate the 'real world' uncertainties associated with these instruments. The EPA method retrieved a January average PM2.5 mass of 9.3 μg/m3 with a standard deviation of 2.8 μg/m3, while the IMPROVE method retrieved an average mass of 7.3 μg/m3 with a standard deviation of 2.1 μg/m3. The EPA method retrieved a July average PM2.5 mass of 26.4 μg/m3 with a standard deviation of 14.6 μg/m3, while the IMPROVE method retrieved an average mass of 23.3 μg/m3 with a standard deviation of 13.0 μg/m3. We calculated a 5% uncertainty associated with the EPA and IMPROVE methods that accounts for uncertainties in flow control strategies and laboratory analysis. The RMS difference between the two methods in January was 2.1 μg/m3, which is about 25% of the monthly average mass and greater than the uncertainty we calculated. In July the RMS difference between the two methods was 5.2 μg/m3, about 20% of the monthly average mass, and greater than the uncertainty we calculated. The EPA methods retrieve consistently higher concentrations of PM2.5 than the IMPROVE methods on a daily basis in January and July. This suggests a systematic bias possibly resulting from contamination of either of the sampling methods. We reconstructed the mass and found that both samplers have good correlation between reconstructed and gravimetric mass, though the IMPROVE method has slightly better correlation than the EPA method. In January, organic carbon is the largest contributor to PM2.5 mass, and in July both sulfate and organic matter contribute substantially to PM2.5. Source apportionment models suggest that regional and local power plants are the major sources of sulfate, while mobile and vegetative burning factors are the major sources of organic carbon.
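
    A small sketch of the comparison logic described above: the RMS difference between paired daily sampler values is computed and set against the combined 5% method uncertainty. The paired values below are synthetic, generated only to mimic the reported July statistics.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Hypothetical paired daily PM2.5 masses (ug/m3) from the two samplers in July;
    # the real campaign data are not reproduced here.
    epa = rng.normal(26.4, 14.6, size=31).clip(min=1.0)
    improve = epa * 0.9 + rng.normal(0.0, 3.0, size=31)   # biased-low counterpart

    rms_diff = np.sqrt(np.mean((epa - improve) ** 2))
    monthly_mean = 0.5 * (epa.mean() + improve.mean())

    # 5 % method uncertainty on each sampler, combined in quadrature.
    u_combined = np.hypot(0.05 * epa.mean(), 0.05 * improve.mean())
    print(f"RMS difference = {rms_diff:.1f} ug/m3 ({rms_diff / monthly_mean:.0%} of mean)")
    print(f"combined method uncertainty ≈ {u_combined:.1f} ug/m3")
    ```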

  19. Absolute Measurements of Field Enhanced Dielectronic Recombination and Electron Impact Excitation

    NASA Astrophysics Data System (ADS)

    Savin, Daniel Wolf

    Absolute measurements have been made of the dielectronic recombination (DR) rate coefficient for C^{3+}, via the 2s-2p core excitation, in an external electric field of 11.4 ± 0.9 (1σ) V cm^{-1}, and of the electron impact excitation (EIE) rate coefficient for C^{3+}(2s-2p) at energies near threshold. The ion-rest-frame FWHM of the electron energy spread was 1.74 ± 0.22 (1σ) eV. The measured DR rate, at a mean electron energy of 8.26 ± 0.07 (1σ) eV, was (2.76 ± 0.75) × 10^{-10} cm^3 s^{-1}. The uncertainty quoted for the DR rate is the total experimental uncertainty at a 1σ level. The present DR result appears to agree with an intermediate coupling calculation which uses the isolated-resonance, single-configuration approximation. In comparing with theory, a semi-classical formula was used to determine which recombined ions were field-ionized by the 4.65 kV cm^{-1} fields in the final-charge-state analyzer and not detected. A more precise treatment of field ionization, which includes the lifetime of the high Rydberg C^{2+} ions in the external field and the time evolution and rotation of the fields experienced by the recombined ions, is needed before a definitive comparison between experiment and theory can be made. For the EIE results, at an ion-rest-frame energy of 10.10 eV, the measured rate coefficient was (7.79 ± 2.10) × 10^{-8} cm^3 s^{-1}. The measured cross section was (4.15 ± 1.12) × 10^{-16} cm^2. The uncertainties quoted here represent the total experimental uncertainty at a 90 percent confidence level. Good agreement is found with other measurements. Agreement is not good with Coulomb-Born with exchange and two-state close-coupling calculations, which fall outside the 90-percent-confidence uncertainty limits. Agreement is better with a nine-state close-coupling calculation which lies at the extreme of the uncertainty limits. Taking into account previous measurements in C^{3+} and also a measurement of EIE in Be^+ which lies 19 percent below close-coupling calculations, there is a suggestion that the C^{3+}(2s-2p) EIE rate coefficient may fall slightly below presently accepted values.

  20. SU-C-9A-04: Alternative Analytic Solution to the Paralyzable Detector Model to Calculate Deadtime and Deadtime Loss

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siman, W; Kappadath, S

    2014-06-01

    Purpose: Some common methods to solve for deadtime are (1) dual-source method, which assumes two equal activities; (2) model fitting, which requires multiple acquisitions as the source decays; and (3) lossless model, which assumes no deadtime loss at low count rates. We propose a new analytic alternative solution to calculate deadtime for a paralyzable gamma camera. Methods: Deadtime T can be calculated analytically from two distinct observed count rates M1 and M2 when the ratio of the true count rates alpha=N2/N1 is known. Alpha can be measured as a ratio of two measured activities using dose calibrators or via radioactive decay. Knowledge of alpha creates a system with 2 equations and 2 unknowns, i.e., T and N1. To verify the validity of the proposed method, projections of a non-uniform phantom (4 GBq 99mTc) were acquired using a Siemens Symbia S multiple times over 48 hours. Each projection has >100 kcts. The deadtime for each projection was calculated by fitting the data to a paralyzable model and also by using the proposed 2-acquisition method. The two estimates of deadtime were compared using the Bland-Altman method. In addition, the dependency of uncertainty in T on uncertainty in alpha was investigated for several imaging conditions. Results: The results strongly suggest that the 2-acquisition method is equivalent to the fitting method. The Bland-Altman analysis yielded a mean difference in deadtime estimates of ∼0.076 µs (95% CI: -0.049 µs, 0.103 µs) between the 2-acquisition and model fitting methods. The 95% limits of agreement were calculated to be -0.104 to 0.256 µs. The uncertainty in deadtime calculated using the proposed method is highly dependent on the uncertainty in the ratio alpha. Conclusion: The 2-acquisition method was found to be equivalent to the parameter fitting method. The proposed method offers a simpler and more practical way to analytically solve for a paralyzable detector deadtime, especially during physics testing.
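
    One way to realise the described 2-acquisition solution under the paralyzable model m = n·exp(−nτ) is sketched below; the algebra is reconstructed from the abstract's description and may differ in detail from the authors' formulation.

    ```python
    import math

    def deadtime_two_acquisitions(m1, m2, alpha):
        """Solve the paralyzable model m = n * exp(-n * tau) for tau, given two
        observed rates m1, m2 and the known true-rate ratio alpha = n2 / n1.
        Derivation (a sketch from the abstract, not the authors' exact algebra):
            m2 / m1 = alpha * exp(-(alpha - 1) * n1 * tau)
            =>  x = n1 * tau = ln(alpha * m1 / m2) / (alpha - 1)
            =>  n1 = m1 * exp(x),  tau = x / n1
        """
        x = math.log(alpha * m1 / m2) / (alpha - 1.0)
        n1 = m1 * math.exp(x)
        return x / n1, n1

    # Consistency check with synthetic data: tau = 1 us, n1 = 100 kcps, alpha = 2.
    tau_true, n1_true, alpha = 1e-6, 1e5, 2.0
    m1 = n1_true * math.exp(-n1_true * tau_true)
    m2 = alpha * n1_true * math.exp(-alpha * n1_true * tau_true)
    tau, n1 = deadtime_two_acquisitions(m1, m2, alpha)
    print(f"recovered tau = {tau * 1e6:.3f} us, n1 = {n1:.0f} cps")
    ```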
