Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Yuan, Zibing; Russell, Armistead G; Ou, Jiamin; Zhong, Zhuangmin
2017-04-04
The traditional reduced-form model (RFM) based on the high-order decoupled direct method (HDDM) is an efficient uncertainty analysis approach for air quality models, but it has large biases in uncertainty propagation due to the limitation of the HDDM in predicting nonlinear responses to large perturbations of model inputs. To overcome this limitation, a new stepwise RFM that combines several sets of local sensitivity coefficients under different conditions is proposed. Evaluations reveal that the new RFM improves the prediction of nonlinear responses. The new method is applied to quantify uncertainties in simulated PM2.5 concentrations in the Pearl River Delta (PRD) region of China as a case study. Results show that the average uncertainty range of hourly PM2.5 concentrations is -28% to 57%, which covers approximately 70% of the observed PM2.5 concentrations, while the traditional RFM underestimates the upper bound of the uncertainty range by 1-6%. Using a variance-based method, the PM2.5 boundary conditions and primary PM2.5 emissions are found to be the two major uncertainty sources in PM2.5 simulations. The new RFM better quantifies the uncertainty range in model simulations and can be applied to improve applications that rely on uncertainty information.
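The variance-based apportionment of uncertainty sources mentioned in this abstract can be sketched with a toy Monte Carlo estimate of first-order sensitivity indices. Everything below is assumed for illustration (the lognormal input factors, the toy PM2.5 response, and its coefficients); it is not the paper's CMAQ/HDDM setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Hypothetical lognormal uncertainty factors for two model inputs
# (toy stand-ins for PM2.5 boundary conditions and primary emissions).
bc = rng.lognormal(mean=0.0, sigma=0.4, size=n)
em = rng.lognormal(mean=0.0, sigma=0.3, size=n)

# Toy reduced-form response: "simulated PM2.5" as a mildly nonlinear
# function of the two perturbed inputs (coefficients invented).
y = 20.0 * bc + 15.0 * em + 2.0 * bc * em

def first_order_index(x, y, bins=50):
    """Crude first-order sensitivity index: variance of the
    conditional means E[y | x in bin], over total variance."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

s_bc = first_order_index(bc, y)  # variance share from boundary conditions
s_em = first_order_index(em, y)  # variance share from emissions
```

With these invented inputs, the wider boundary-condition factor claims the larger variance share, mirroring how a variance-based method ranks uncertainty sources.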
Chasing Perfection: Should We Reduce Model Uncertainty in Carbon Cycle-Climate Feedbacks
NASA Astrophysics Data System (ADS)
Bonan, G. B.; Lombardozzi, D.; Wieder, W. R.; Lindsay, K. T.; Thomas, R. Q.
2015-12-01
Earth system model simulations of the terrestrial carbon (C) cycle show large multi-model spread in the carbon-concentration and carbon-climate feedback parameters. Large differences among models are also seen in their simulation of global vegetation and soil C stocks and other aspects of the C cycle, prompting concern about model uncertainty and our ability to faithfully represent fundamental aspects of the terrestrial C cycle in Earth system models. Benchmarking analyses that compare model simulations with common datasets have been proposed as a means to assess model fidelity with observations, and various model-data fusion techniques have been used to reduce model biases. While such efforts will reduce multi-model spread, they may not help reduce uncertainty (and increase confidence) in projections of the C cycle over the twenty-first century. Many ecological and biogeochemical processes represented in Earth system models are poorly understood at both the site scale and across large regions, where biotic and edaphic heterogeneity are important. Our experience with the Community Land Model (CLM) suggests that large uncertainty in the terrestrial C cycle and its feedback with climate change is an inherent property of biological systems. The challenge of representing life in Earth system models, with the rich diversity of lifeforms and complexity of biological systems, may necessitate a multitude of modeling approaches to capture the range of possible outcomes. Such models should encompass a range of plausible model structures. We distinguish between model parameter uncertainty and model structural uncertainty. Focusing on improved parameter estimates may, in fact, limit progress in assessing model structural uncertainty associated with realistically representing biological processes. Moreover, higher confidence may be achieved through better process representation, but this does not necessarily reduce uncertainty.
How predictable is the timing of a summer ice-free Arctic?
NASA Astrophysics Data System (ADS)
Jahn, Alexandra; Kay, Jennifer E.; Holland, Marika M.; Hall, David M.
2016-09-01
Climate model simulations give a large range of over 100 years for predictions of when the Arctic could first become ice free in the summer, and many studies have attempted to narrow this uncertainty range. However, given the chaotic nature of the climate system, what amount of spread in the prediction of an ice-free summer Arctic is inevitable? Based on results from large ensemble simulations with the Community Earth System Model, we show that internal variability alone leads to a prediction uncertainty of about two decades, while scenario uncertainty between the strong (Representative Concentration Pathway (RCP) 8.5) and medium (RCP4.5) forcing scenarios adds at least another 5 years. Common metrics of the past and present mean sea ice state (such as ice extent, volume, and thickness) as well as global mean temperatures do not allow a reduction of the prediction uncertainty from internal variability.
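The irreducible spread from internal variability described above can be illustrated with a toy ensemble: a prescribed forced decline of September sea-ice extent plus AR(1) noise for each member. The forcing slope, noise parameters, ensemble size, and 1.0 million km² threshold are all invented stand-ins, not CESM output:

```python
import numpy as np

rng = np.random.default_rng(1)

years = np.arange(2006, 2121)
n_members = 40  # illustrative ensemble size

# Assumed forced decline of September sea-ice extent (10^6 km^2).
forced = np.clip(6.0 - 0.06 * (years - 2006), 0.0, None)

def first_ice_free_year(noise_sd=0.5, ar1=0.7):
    """First year a member's extent drops below 1.0 million km^2."""
    eps = 0.0
    for year, f in zip(years, forced):
        eps = ar1 * eps + rng.normal(0.0, noise_sd)
        if max(f + eps, 0.0) < 1.0:
            return int(year)
    return None  # not reached under this forcing

ice_free = [first_ice_free_year() for _ in range(n_members)]
spread = max(ice_free) - min(ice_free)  # prediction spread, years
```

Even with an identical forced trajectory, the noise sequences alone separate the members' first ice-free years by decades, which is the point the abstract makes about internal variability.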
Kovilakam, Mahesh; Mahajan, Salil
2016-06-28
While black carbon aerosols (BC) are believed to modulate the Indian monsoons, the radiative forcing estimate of BC suffers from large uncertainties globally. In this paper, we analyze a suite of idealized experiments forced with a range of BC concentrations that span a large swath of the latest estimates of its global radiative forcing. Within those bounds of uncertainty, summer precipitation over the Indian region increases nearly linearly with the increase in BC burden. The linearity holds even as the BC concentration is increased to levels resembling those hypothesized in nuclear winter scenarios, despite large surface cooling over India and adjoining regions. The enhanced monsoonal circulation is associated with a linear increase in the large-scale meridional tropospheric temperature gradient. The precipitable water over the region also increases linearly with an increase in BC burden, due to increased moisture transport from the Arabian sea to the land areas. The wide range of Indian monsoon response elicited in these experiments emphasizes the need to reduce the uncertainty in BC estimates to accurately quantify their role in modulating the Indian monsoons. Finally, the increase in monsoonal circulation in response to large BC concentrations contrasts earlier findings that the Indian summer monsoon may break down following a nuclear war.
Nuttens, V E; Nahum, A E; Lucas, S
2011-01-01
Urethral NTCP has been determined for three prostates implanted with seeds based on (125)I (145 Gy), (103)Pd (125 Gy), (131)Cs (115 Gy), (103)Pd-(125)I (145 Gy), or (103)Pd-(131)Cs (115 Gy or 130 Gy). First, DU(20), the dose received by at least 20% of the urethral volume, is converted into an I-125 LDR-equivalent DU(20) in order to use the urethral NTCP model. Second, the propagation of uncertainties through the steps of the NTCP calculation was assessed in order to identify the parameters responsible for large data uncertainties. Two sets of radiobiological parameters were studied. The NTCP results all fall in the 19%-23% range and are associated with large uncertainties, making comparison difficult. Depending on the dataset chosen, the ranking of NTCP values among the six seed implants studied changes. Moreover, the large uncertainties on the fitting parameters of the urethral NTCP model result in large uncertainty on the NTCP value. In conclusion, the use of an NTCP model for permanent brachytherapy is feasible, but it is essential that the uncertainties on the parameters of the model be reduced.
Predicting long-range transport: a systematic evaluation of two multimedia transport models.
Bennett, D H; Scheringer, M; McKone, T E; Hungerbühler, K
2001-03-15
The United Nations Environment Program has recently developed criteria to identify and restrict chemicals with a potential for persistence and long-range transport (persistent organic pollutants or POPs). There are many stakeholders involved, and the issues are not only scientific but also include social, economic, and political factors. This work focuses on one aspect of the POPs debate, the criteria for determining the potential for long-range transport (LRT). Our goal is to determine if current models are reliable enough to support decisions that classify a chemical based on the LRT potential. We examine the robustness of two multimedia fate models for determining the relative ranking and absolute spatial range of various chemicals in the environment. We also consider the effect of parameter uncertainties and the model uncertainty associated with the selection of an algorithm for gas-particle partitioning on the model results. Given the same chemical properties, both models give virtually the same ranking. However, when chemical parameter uncertainties and model uncertainties such as particle partitioning are considered, the spatial range distributions obtained for the individual chemicals overlap, preventing a distinct rank order. The absolute values obtained for the predicted spatial range or travel distance differ significantly between the two models for the uncertainties evaluated. We find that to evaluate a chemical when large and unresolved uncertainties exist, it is more informative to use two or more models and include multiple types of uncertainty. Model differences and uncertainties must be explicitly confronted to determine how the limitations of scientific knowledge impact predictions in the decision-making process.
NASA Astrophysics Data System (ADS)
Di Vittorio, A. V.; Mao, J.; Shi, X.; Chini, L.; Hurtt, G.; Collins, W. D.
2018-01-01
Previous studies have examined land use change as a driver of global change, but the translation of land use change into land cover conversion has been largely unconstrained. Here we quantify the effects of land cover conversion uncertainty on the global carbon and climate system using the integrated Earth System Model. Our experiments use identical land use change data and vary land cover conversions to quantify associated uncertainty in carbon and climate estimates. Land cover conversion uncertainty is large, constitutes a 5 ppmv range in estimated atmospheric CO2 in 2004, and generates carbon uncertainty that is equivalent to 80% of the net effects of CO2 and climate and 124% of the effects of nitrogen deposition during 1850-2004. Additionally, land cover uncertainty generates differences in local surface temperature of over 1°C. We conclude that future studies addressing land use, carbon, and climate need to constrain and reduce land cover conversion uncertainties.
Application of Linearized Kalman Filter-Smoother to Aircraft Trajectory Estimation.
1988-06-01
the report). The kinematic relationships between wind-axis Euler angles and angular rates are given below (Etkin, 1972: 150): θ̇ = q_w cos φ − r_w sin φ (4… …I values, and those for RP-2 were chosen in order to explore less accurate range measurements combined with more accurate angular measurements. This was of interest because of the uncertainty in position introduced by large angular measurement uncertainties at long ranges. Finally, radar models RR…
Sensitivity of Asteroid Impact Risk to Uncertainty in Asteroid Properties and Entry Parameters
NASA Astrophysics Data System (ADS)
Wheeler, Lorien; Mathias, Donovan; Dotson, Jessie L.; NASA Asteroid Threat Assessment Project
2017-10-01
A central challenge in assessing the threat posed by asteroids striking Earth is the large amount of uncertainty inherent throughout all aspects of the problem. Many asteroid properties are not well characterized and can range widely from strong, dense, monolithic irons to loosely bound, highly porous rubble piles. Even for an object of known properties, the specific entry velocity, angle, and impact location can swing the potential consequence from no damage to causing millions of casualties. Due to the extreme rarity of large asteroid strikes, there are also large uncertainties in how different types of asteroids will interact with the atmosphere during entry, how readily they may break up or ablate, and how much surface damage will be caused by the resulting airbursts or impacts. In this work, we use our Probabilistic Asteroid Impact Risk (PAIR) model to investigate the sensitivity of asteroid impact damage to uncertainties in key asteroid properties, entry parameters, or modeling assumptions. The PAIR model combines physics-based analytic models of asteroid entry and damage in a probabilistic Monte Carlo framework to assess the risk posed by a wide range of potential impacts. The model samples from uncertainty distributions of asteroid properties and entry parameters to generate millions of specific impact cases, and models the atmospheric entry and damage for each case, including blast overpressure, thermal radiation, tsunami inundation, and global effects. To assess the risk sensitivity, we alternately fix and vary the different input parameters and compare the effect on the resulting range of damage produced. The goal of these studies is to help guide future efforts in asteroid characterization and model refinement by determining which properties most significantly affect the potential risk.
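The fix-and-vary sensitivity study described above can be mimicked in miniature: sample asteroid properties from assumed ranges, compute impact energy, and compare the output spread with one property pinned versus varied. The distributions and constants below are invented, not the PAIR model's actual inputs:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

def sample_energy_mt(fixed_density=None):
    """Sample impact energies (megatons TNT) from assumed property
    distributions; optionally hold density fixed to gauge sensitivity."""
    d = rng.uniform(20.0, 300.0, n)                    # diameter, m
    rho = (np.full(n, fixed_density) if fixed_density is not None
           else rng.uniform(1000.0, 7000.0, n))        # density, kg/m^3
    v = rng.normal(20_000.0, 3_000.0, n)               # entry speed, m/s
    mass = rho * (np.pi / 6.0) * d ** 3                # spherical mass, kg
    return 0.5 * mass * v ** 2 / 4.184e15              # J -> Mt TNT

e_varied = sample_energy_mt()                     # density uncertain
e_fixed = sample_energy_mt(fixed_density=3000.0)  # density pinned

# Sensitivity proxy: order-of-magnitude spread of the impact energy.
spread_varied = np.std(np.log10(e_varied))
spread_fixed = np.std(np.log10(e_fixed))
```

Comparing the two spreads indicates how much of the damage uncertainty a better-characterized density would remove, which is the kind of question the fix-and-vary runs answer.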
Sensitivity Analysis of Expected Wind Extremes over the Northwestern Sahara and High Atlas Region.
NASA Astrophysics Data System (ADS)
Garcia-Bustamante, E.; González-Rouco, F. J.; Navarro, J.
2017-12-01
A robust statistical framework in the scientific literature allows for the estimation of probabilities of occurrence of severe wind speeds and wind gusts; it does not, however, eliminate the large uncertainties associated with the particular numerical estimates. An analysis of such uncertainties is thus required. A large portion of this uncertainty arises from the fact that historical observations are inherently shorter than the timescales of interest for the analysis of return periods. Additional uncertainties stem from the different choices of probability distributions and other aspects related to methodological issues or physical processes involved. The present study is based on historical observations over the Ouarzazate Valley (Morocco) and on a high-resolution regional simulation of the wind in the area of interest. The aim is to provide extreme wind speed and wind gust return values and confidence ranges based on a systematic sampling of the uncertainty space for return periods up to 120 years.
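One common way to obtain return values with confidence ranges, as the abstract describes, is an extreme-value fit plus bootstrap resampling of the short record. The sketch below uses a method-of-moments Gumbel fit on synthetic annual maxima; all numbers are assumed, not Ouarzazate data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "observed" annual wind-speed maxima (m/s): a 40-year
# Gumbel sample standing in for a real station record.
obs = rng.gumbel(loc=25.0, scale=4.0, size=40)

def gumbel_return_level(x, T):
    """Method-of-moments Gumbel fit, then the T-year return level."""
    scale = x.std(ddof=1) * np.sqrt(6.0) / np.pi
    loc = x.mean() - 0.5772 * scale
    return loc - scale * np.log(-np.log(1.0 - 1.0 / T))

best = gumbel_return_level(obs, 120)

# Bootstrap the short record: the dominant uncertainty for long return
# periods is simply that 40 years of data << a 120-year return period.
boot = np.array([gumbel_return_level(rng.choice(obs, size=obs.size), 120)
                 for _ in range(2000)])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
```

The width of `(ci_lo, ci_hi)` is the record-length part of the uncertainty; distribution choice would add a further, structural contribution on top of it.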
Raben, Jaime S; Hariharan, Prasanna; Robinson, Ronald; Malinauskas, Richard; Vlachos, Pavlos P
2016-03-01
We present advanced particle image velocimetry (PIV) processing, post-processing, and uncertainty estimation techniques to support the validation of computational fluid dynamics analyses of medical devices. This work is an extension of a previous FDA-sponsored multi-laboratory study, which used a medical device mimicking geometry referred to as the FDA benchmark nozzle model. Experimental measurements were performed using time-resolved PIV at five overlapping regions of the model for Reynolds numbers in the nozzle throat of 500, 2000, 5000, and 8000. Images included a twofold increase in spatial resolution in comparison to the previous study. Data was processed using ensemble correlation, dynamic range enhancement, and phase correlations to increase signal-to-noise ratios and measurement accuracy, and to resolve flow regions with large velocity ranges and gradients, which is typical of many blood-contacting medical devices. Parameters relevant to device safety, including shear stress at the wall and in bulk flow, were computed using radial basis functions. In addition, in-field spatially resolved pressure distributions, Reynolds stresses, and energy dissipation rates were computed from PIV measurements. Velocity measurement uncertainty was estimated directly from the PIV correlation plane, and uncertainty analysis for wall shear stress at each measurement location was performed using a Monte Carlo model. Local velocity uncertainty varied greatly and depended largely on local conditions such as particle seeding, velocity gradients, and particle displacements. Uncertainty in low velocity regions in the sudden expansion section of the nozzle was greatly reduced by over an order of magnitude when dynamic range enhancement was applied. Wall shear stress uncertainty was dominated by uncertainty contributions from velocity estimations, which were shown to account for 90-99% of the total uncertainty. 
This study provides advancements in the PIV processing methodologies over the previous work through increased PIV image resolution, use of robust image processing algorithms for near-wall velocity measurements and wall shear stress calculations, and uncertainty analyses for both velocity and wall shear stress measurements. The velocity and shear stress analysis, with spatially distributed uncertainty estimates, highlights the challenges of flow quantification in medical devices and provides potential methods to overcome such challenges.
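The Monte Carlo propagation of velocity uncertainty into wall shear stress can be sketched for a single near-wall point with a one-sided finite difference. Viscosity, grid spacing, and uncertainty values are assumed placeholders, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000

# All values below are assumed placeholders, not the study's data.
mu = 3.5e-3                 # dynamic viscosity of the working fluid, Pa*s
dy = 1.0e-4                 # wall-normal spacing of the PIV grid, m
u1, sigma_u = 0.05, 0.004   # near-wall velocity and its PIV uncertainty, m/s

# Monte Carlo: perturb the velocity by its measurement uncertainty and
# recompute the one-sided finite-difference wall shear stress (no-slip wall).
u_samples = rng.normal(u1, sigma_u, n)
wss = mu * u_samples / dy

wss_mean = wss.mean()
wss_sd = wss.std()
rel_u = sigma_u / u1         # relative velocity uncertainty
rel_wss = wss_sd / wss_mean  # relative WSS uncertainty
```

Because this finite-difference map is linear in the velocity, the relative WSS uncertainty tracks the relative velocity uncertainty, consistent with the study's finding that velocity estimation dominates the WSS uncertainty budget.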
Communicating uncertainty in circulation aspects of climate change
NASA Astrophysics Data System (ADS)
Shepherd, Ted
2017-04-01
The usual way of representing uncertainty in climate change is to define a likelihood range of possible futures, conditioned on a particular pathway of greenhouse gas concentrations (RCPs). Typically these likelihood ranges are derived from multi-model ensembles. However, there is no obvious basis for treating such ensembles as probability distributions. Moreover, for aspects of climate related to atmospheric circulation, such an approach generally leads to large uncertainty and low confidence in projections. Yet this does not mean that the associated climate risks are small. We therefore need to develop suitable ways of communicating climate risk whilst acknowledging the uncertainties. This talk will outline an approach based on conditioning the purely thermodynamic aspects of climate change, concerning which there is comparatively high confidence, on circulation-related aspects, and treating the latter through non-probabilistic storylines.
Cheaib, Alissar; Badeau, Vincent; Boe, Julien; Chuine, Isabelle; Delire, Christine; Dufrêne, Eric; François, Christophe; Gritti, Emmanuel S; Legay, Myriam; Pagé, Christian; Thuiller, Wilfried; Viovy, Nicolas; Leadley, Paul
2012-06-01
Model-based projections of shifts in tree species ranges due to climate change are becoming an important decision-support tool for forest management. However, poorly evaluated sources of uncertainty require more scrutiny before relying heavily on models for decision-making. We evaluated uncertainty arising from differences in model formulations of tree response to climate change based on a rigorous intercomparison of projections of tree distributions in France. We compared eight models ranging from niche-based to process-based models. On average, models project large range contractions of temperate tree species in lowlands due to climate change. There was substantial disagreement between models for temperate broadleaf deciduous tree species, but differences in the capacity of models to account for rising CO2 impacts explained much of the disagreement. There was good quantitative agreement among models concerning the range contractions for Scots pine. For the dominant Mediterranean tree species, Holm oak, all models foresee substantial range expansion.
SU-F-T-185: Study of the Robustness of a Proton Arc Technique Based On PBS Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z; Zheng, Y
Purpose: One potential technique to realize proton arc is to use PBS beams from many directions to form overlaid Bragg peak (OBP) spots and to place these OBP spots throughout the target volume to achieve the desired dose distribution. In this study, we analyzed the robustness of this proton arc technique. Methods: We used a cylindrical water phantom of 20 cm in radius in our robustness analysis. To study the range uncertainty effect, we changed the density of the phantom by ±3%. To study the setup uncertainty effect, we shifted the phantom by 3 and 5 mm. We also combined the range and setup uncertainties (3 mm/±3%). For each test plan, we performed dose calculation for the nominal and 6 disturbed scenarios. Two test plans were used, one with a single OBP spot and the other consisting of 121 OBP spots covering a 10×10 cm² area. We compared the dose profiles between the nominal and disturbed scenarios to estimate the impact of the uncertainties. Dose calculation was performed with Gate/GEANT-based Monte Carlo software in a cloud computing environment. Results: For each of the 7 scenarios, we simulated 100k and 10M events for the plans consisting of a single OBP spot and 121 OBP spots, respectively. For the single OBP spot, the setup uncertainty had minimal impact on the spot's dose profile, while the range uncertainty had significant impact. For the plan consisting of 121 OBP spots, a similar effect was observed, but the extent of the disturbance was much smaller than for the single OBP spot. Conclusion: For the PBS arc technique, range uncertainty has significantly more impact than setup uncertainty. Although a single OBP spot can be severely disturbed by range uncertainty, the overall effect is much smaller when a large number of OBP spots are used. Robustness optimization for the PBS arc technique should consider range uncertainty with priority.
NASA Astrophysics Data System (ADS)
Poppick, A. N.; McKinnon, K. A.; Dunn-Sigouin, E.; Deser, C.
2017-12-01
Initial condition climate model ensembles suggest that regional temperature trends can be highly variable on decadal timescales due to characteristics of internal climate variability. Accounting for trend uncertainty due to internal variability is therefore necessary to contextualize recent observed temperature changes. However, while the variability of trends in a climate model ensemble can be evaluated directly (as the spread across ensemble members), internal variability simulated by a climate model may be inconsistent with observations. Observation-based methods for assessing the role of internal variability on trend uncertainty are therefore required. Here, we use a statistical resampling approach to assess trend uncertainty due to internal variability in historical 50-year (1966-2015) winter near-surface air temperature trends over North America. We compare this estimate of trend uncertainty to simulated trend variability in the NCAR CESM1 Large Ensemble (LENS), finding that uncertainty in wintertime temperature trends over North America due to internal variability is overestimated by CESM1, on average by about 32%. Our observation-based resampling approach is combined with the forced signal from LENS to produce an 'Observational Large Ensemble' (OLENS). The members of OLENS indicate a range of spatially coherent fields of temperature trends resulting from different sequences of internal variability consistent with observations. The smaller trend variability in OLENS suggests that uncertainty in the historical climate change signal in observations due to internal variability is less than suggested by LENS.
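A residual-resampling approach of the kind described can be sketched with a moving-block bootstrap, which preserves short-range autocorrelation while generating alternative noise sequences around the fitted 50-year trend. The synthetic series below is illustrative, not the observational record:

```python
import numpy as np

rng = np.random.default_rng(6)

years = np.arange(1966, 2016)
t = years - years.mean()

# Synthetic winter-temperature series: linear forced trend plus AR(1)
# internal variability (all parameters invented for illustration).
noise = np.empty(t.size)
eps = 0.0
for i in range(t.size):
    eps = 0.6 * eps + rng.normal(0.0, 0.5)
    noise[i] = eps
obs_series = 0.03 * t + noise  # degC

slope, intercept = np.polyfit(t, obs_series, 1)
resid = obs_series - (slope * t + intercept)

def block_resample(r, block=5):
    """Moving-block bootstrap: keeps short-range autocorrelation."""
    starts = rng.integers(0, r.size - block + 1, size=r.size // block)
    return np.concatenate([r[s:s + block] for s in starts])[:r.size]

boot_trends = np.array([
    np.polyfit(t, slope * t + intercept + block_resample(resid), 1)[0]
    for _ in range(1000)])
trend_sd = boot_trends.std()  # observation-based trend uncertainty
```

Comparing `trend_sd` against the spread of trends across a model large ensemble is the basic test for whether the model's internal variability is too strong or too weak.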
Multi-model approach to assess the impact of climate change on runoff
NASA Astrophysics Data System (ADS)
Dams, J.; Nossent, J.; Senbeta, T. B.; Willems, P.; Batelaan, O.
2015-10-01
The assessment of climate change impacts on hydrology is subject to uncertainties related to the climate change scenarios, stochastic uncertainties of the hydrological model and structural uncertainties of the hydrological model. This paper focuses on the contribution of structural uncertainty of hydrological models to the overall uncertainty of the climate change impact assessment. To quantify the structural uncertainty of hydrological models, four physically based hydrological models (SWAT, PRMS and a semi- and fully distributed version of the WetSpa model) are set up for a catchment in Belgium. Each model is calibrated using four different objective functions. Three climate change scenarios with a high, mean and low hydrological impact are statistically perturbed from a large ensemble of climate change scenarios and are used to force the hydrological models. This methodology allows assessing and comparing the uncertainty introduced by the climate change scenarios with the uncertainty introduced by the hydrological model structure. Results show that the hydrological model structure introduces a large uncertainty on both the average monthly discharge and the extreme peak and low flow predictions under the climate change scenarios. For the low impact climate change scenario, the uncertainty range of the mean monthly runoff is comparable to the range of these runoff values in the reference period. However, for the mean and high impact scenarios, this range is significantly larger. The uncertainty introduced by the climate change scenarios is larger than the uncertainty due to the hydrological model structure for the low and mean hydrological impact scenarios, but the reverse is true for the high impact climate change scenario. The mean and high impact scenarios project increasing peak discharges, while the low impact scenario projects increasing peak discharges only for peak events with return periods larger than 1.6 years. 
For all scenarios, all models suggest a decrease in the lowest flows, except the SWAT model under the mean hydrological impact climate change scenario. The results of this study indicate that, besides the uncertainty introduced by the climate change scenarios, the hydrological model structure uncertainty should also be taken into account in the assessment of climate change impacts on hydrology. To make it more straightforward and transparent to include model structural uncertainty in hydrological impact studies, there is a need for hydrological modelling tools that allow flexible structures, and for methods to validate model structures in their ability to assess impacts under unobserved future climatic conditions.
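The comparison of scenario uncertainty against model-structure uncertainty can be illustrated with a simple two-way variance split over a models-by-scenarios table. The discharge-change numbers below are invented to mimic the experimental design, not taken from the study:

```python
import numpy as np

# Hypothetical peak-discharge changes (%) for four model structures
# (rows) under three climate scenarios (columns); values invented.
change = np.array([
    [2.0, 10.0, 25.0],   # SWAT
    [5.0, 14.0, 33.0],   # PRMS
    [1.0,  9.0, 28.0],   # WetSpa (semi-distributed)
    [4.0, 12.0, 36.0],   # WetSpa (fully distributed)
])

# Two-way variance split: spread attributable to the scenario axis
# versus the model-structure axis.
grand = change.mean()
var_scenario = ((change.mean(axis=0) - grand) ** 2).mean()
var_model = ((change.mean(axis=1) - grand) ** 2).mean()
share_scenario = var_scenario / (var_scenario + var_model)
```

With these invented numbers the scenario axis dominates; the study's point is that the split can flip (model structure dominating) for the high-impact scenario, so both axes must be sampled.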
Climate change adaptation and Integrated Water Resource Management in the water sector
NASA Astrophysics Data System (ADS)
Ludwig, Fulco; van Slobbe, Erik; Cofino, Wim
2014-10-01
Integrated Water Resources Management (IWRM) was introduced in the 1980s to better optimise water use between different water-demanding sectors. However, since it was introduced, water systems have become more complicated due to changes in the global water cycle as a result of climate change. The realization that climate change will have a significant impact on water availability and flood risks has driven research and policy making on adaptation. This paper discusses the main similarities and differences between climate change adaptation and IWRM. The main difference between the two is the focus of IWRM on current and historic issues compared to the (long-term) future focus of adaptation. One of the main problems of implementing climate change adaptation is the large uncertainty in future projections. Two completely different approaches to adaptation have been developed in response to these large uncertainties. A top-down approach based on large-scale biophysical impact analyses focusses on quantifying and minimizing uncertainty by using a large range of scenarios and different climate and impact models. The main problem with this approach is the propagation of uncertainties within the modelling chain. The opposite is the bottom-up approach, which basically ignores uncertainty. It focusses on reducing vulnerabilities, often at local scale, by developing resilient water systems. Both of these approaches, however, are unsuitable for integration into water management. The bottom-up approach focusses too much on socio-economic vulnerability and too little on developing (technical) solutions. The top-down approach often results in an "explosion" of uncertainty and therefore complicates decision making. A more promising direction for adaptation would be a risk-based approach. Future research should further develop and test an approach which starts with developing adaptation strategies based on current and future risks.
These strategies should then be evaluated using a range of future scenarios in order to develop robust adaptation measures and strategies.
Transfer Standard Uncertainty Can Cause Inconclusive Inter-Laboratory Comparisons
Wright, John; Toman, Blaza; Mickan, Bodo; Wübbeler, Gerd; Bodnar, Olha; Elster, Clemens
2016-01-01
Inter-laboratory comparisons use the best available transfer standards to check the participants’ uncertainty analyses, identify underestimated uncertainty claims or unknown measurement biases, and improve the global measurement system. For some measurands, instability of the transfer standard can lead to an inconclusive comparison result. If the transfer standard uncertainty is large relative to a participating laboratory’s uncertainty, the commonly used standardized degree of equivalence ≤ 1 criterion does not always correctly assess whether a participant is working within their uncertainty claims. We show comparison results that demonstrate this issue and propose several criteria for assessing a comparison result as passing, failing, or inconclusive. We investigate the behavior of the standardized degree of equivalence and alternative comparison measures for a range of values of the transfer standard uncertainty relative to the individual laboratory uncertainty values. The proposed alternative criteria successfully discerned between passing, failing, and inconclusive comparison results for the cases we examined.
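The standardized degree of equivalence discussed above, En = |x_lab − x_ref| / √(u_lab² + u_ref²) ≤ 1, can be computed directly; the toy numbers below show how a large transfer-standard uncertainty masks a bias that a tight reference would flag:

```python
import math

def en_number(x_lab, u_lab, x_ref, u_ref):
    """Standardized degree of equivalence: |difference| over the
    root-sum-square of the two expanded uncertainties."""
    return abs(x_lab - x_ref) / math.sqrt(u_lab ** 2 + u_ref ** 2)

# The same laboratory bias (+1%) judged against two transfer standards
# (all numbers illustrative).
en_tight = en_number(x_lab=1.010, u_lab=0.002, x_ref=1.000, u_ref=0.001)
en_loose = en_number(x_lab=1.010, u_lab=0.002, x_ref=1.000, u_ref=0.020)
# en_tight > 1: the bias is flagged.  en_loose < 1: the identical bias is
# hidden by the transfer standard's uncertainty - an inconclusive result.
```

This is the failure mode the paper's alternative criteria are designed to separate out: a "pass" driven by the transfer standard's own uncertainty rather than by the laboratory's performance.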
Space station control moment gyro control
NASA Technical Reports Server (NTRS)
Bordano, Aldo
1987-01-01
The potential large center-of-pressure to center-of-gravity offset of the space station makes short-term (within an orbit) variations in density of primary importance. The large range of uncertainty in the prediction of solar activity will penalize the design, development, and operation of the space station.
International Intercomparison of Regular Transmittance Scales
NASA Astrophysics Data System (ADS)
Eckerle, K. L.; Sutter, E.; Freeman, G. H. C.; Andor, G.; Fillinger, L.
1990-01-01
An intercomparison of the regular spectral transmittance scales of NIST, Gaithersburg, MD (USA); PTB, Braunschweig (FRG); NPL, Teddington, Middlesex (UK); and OMH, Budapest (H) was accomplished using three sets of neutral glass filters with transmittances ranging from approximately 0.92 to 0.001. The difference between the results from the reference spectrophotometers of the laboratories was generally smaller than the total uncertainty of the interchange. The relative total uncertainty ranges from 0.05% to 0.75% for transmittances from 0.92 to 0.001. The sample-induced error was large, contributing 40% or more of the total uncertainty except in a few cases.
Uncertainty estimation of water levels for the Mitch flood event in Tegucigalpa
NASA Astrophysics Data System (ADS)
Fuentes Andino, D. C.; Halldin, S.; Lundin, L.; Xu, C.
2012-12-01
Hurricane Mitch in 1998 left a devastating flood in Tegucigalpa, the capital city of Honduras. Simulation of elevated water surfaces provides a good way to understand the hydraulic mechanism of large flood events. In this study the one-dimensional HEC-RAS model for steady flow conditions together with the two-dimensional Lisflood-fp model were used to estimate the water level for the Mitch event in the river reaches at Tegucigalpa. Parameter uncertainty of the models was investigated using the generalized likelihood uncertainty estimation (GLUE) framework. Because of the extremely large magnitude of the Mitch flood, no hydrometric measurements were taken during the event. However, post-event indirect measurements of discharge and observed water levels were obtained in previous work by JICA and the USGS. To overcome the lack of direct hydrometric measurement data, uncertainty in the discharge was estimated. Both models could constrain the value for channel roughness well, though more dispersion resulted for the floodplain value. Analysis of the data interaction showed that there was a tradeoff between discharge at the outlet and floodplain roughness for the 1D model. The estimated discharge range at the outlet of the study area encompassed the value indirectly estimated by JICA; however, the indirect method used by the USGS overestimated the value. If behavioral parameter sets can reproduce water surface levels well for past events such as Mitch, more reliable predictions for future events can be expected. The results acquired in this research will provide guidelines for dealing with the problem of modeling past floods when no direct data were measured during the event, and for predicting future large events taking uncertainty into account. The obtained range of the uncertain flood extent will be an outcome useful for decision makers.
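The GLUE procedure described above can be sketched as: sample the uncertain inputs, score each simulation with a likelihood measure, retain "behavioral" parameter sets above a threshold, and take quantiles of the behavioral simulations as uncertainty bounds. The toy water-level model, parameter ranges, likelihood measure (Nash-Sutcliffe efficiency), and threshold below are assumptions for illustration, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(roughness, discharge, x):
    # Stand-in for the hydraulic model: water level rises with discharge
    # and channel roughness (purely illustrative, not HEC-RAS/Lisflood-fp)
    return (discharge * roughness) ** 0.5 + 0.1 * x

x = np.linspace(0.0, 10.0, 20)                    # positions along the reach
observed = toy_model(0.035, 1500.0, x) + rng.normal(0.0, 0.05, x.size)

# 1. Monte Carlo sample of uncertain inputs (including uncertain discharge)
n = 5000
rough = rng.uniform(0.02, 0.06, n)
disch = rng.uniform(1000.0, 2000.0, n)

# 2. Likelihood measure: Nash-Sutcliffe efficiency of each simulation
sims = np.array([toy_model(r, q, x) for r, q in zip(rough, disch)])
ss_res = ((sims - observed) ** 2).sum(axis=1)
ss_tot = ((observed - observed.mean()) ** 2).sum()
nse = 1.0 - ss_res / ss_tot

# 3. Behavioral sets exceed a likelihood threshold; their spread gives
#    the 5-95% prediction bounds at each location
behavioral = nse > 0.8
lo = np.quantile(sims[behavioral], 0.05, axis=0)
hi = np.quantile(sims[behavioral], 0.95, axis=0)
print(f"{behavioral.sum()} behavioral sets of {n}")
```

The tradeoff noted in the abstract (discharge vs. floodplain roughness) shows up in such an analysis as correlated parameter values within the behavioral sets.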
NASA Technical Reports Server (NTRS)
Dungan, Jennifer L.; Brass, Jim (Technical Monitor)
2001-01-01
A fundamental strategy in NASA's Earth Observing System's (EOS) monitoring of vegetation and its contribution to the global carbon cycle is to rely on deterministic, process-based ecosystem models to make predictions of carbon flux over large regions. These models are parameterized (that is, the input variables are derived) using remotely sensed images such as those from the Moderate Resolution Imaging Spectroradiometer (MODIS), ground measurements and interpolated maps. Since early applications of these models, investigators have noted that results depend partly on the spatial support of the input variables. In general, the larger the support of the input data, the greater the chance that the effects of important components of the ecosystem will be averaged out. A review of previous work shows that using large supports can cause either positive or negative bias in carbon flux predictions. To put the magnitude and direction of these biases in perspective, we must quantify the range of uncertainty on our best measurements of carbon-related variables made on equivalent areas. In other words, support-effect bias should be placed in the context of prediction uncertainty from other sources. If the range of uncertainty at the smallest support is less than the support-effect bias, more research emphasis should probably be placed on support sizes that are intermediate between those of field measurements and MODIS. If the uncertainty range at the smallest support is larger than the support-effect bias, the accuracy of MODIS-based predictions will be difficult to quantify and more emphasis should be placed on field-scale characterization and sampling. This talk will describe methods to address these issues using a field measurement campaign in North America and "upscaling" using geostatistical estimation and simulation.
Assessing uncertainties in land cover projections.
Alexander, Peter; Prestele, Reinhard; Verburg, Peter H; Arneth, Almut; Baranzelli, Claudia; Batista E Silva, Filipe; Brown, Calum; Butler, Adam; Calvin, Katherine; Dendoncker, Nicolas; Doelman, Jonathan C; Dunford, Robert; Engström, Kerstin; Eitelberg, David; Fujimori, Shinichiro; Harrison, Paula A; Hasegawa, Tomoko; Havlik, Petr; Holzhauer, Sascha; Humpenöder, Florian; Jacobs-Crisioni, Chris; Jain, Atul K; Krisztin, Tamás; Kyle, Page; Lavalle, Carlo; Lenton, Tim; Liu, Jiayi; Meiyappan, Prasanth; Popp, Alexander; Powell, Tom; Sands, Ronald D; Schaldach, Rüdiger; Stehfest, Elke; Steinbuks, Jevgenijs; Tabeau, Andrzej; van Meijl, Hans; Wise, Marshall A; Rounsevell, Mark D A
2017-02-01
Understanding uncertainties in land cover projections is critical to investigating land-based climate mitigation policies, assessing the potential of climate adaptation strategies and quantifying the impacts of land cover change on the climate system. Here, we identify and quantify uncertainties in global and European land cover projections over a diverse range of model types and scenarios, extending the analysis beyond the agro-economic models included in previous comparisons. The results from 75 simulations over 18 models are analysed and show a large range in land cover area projections, with the highest variability occurring in future cropland areas. We demonstrate systematic differences in land cover areas associated with the characteristics of the modelling approach, which are at least as great as the differences attributed to the scenario variations. The results lead us to conclude that a higher degree of uncertainty exists in land use projections than currently included in climate or earth system projections. To account for land use uncertainty, it is recommended to use a diverse set of models and approaches when assessing the potential impacts of land cover change on future climate. Additionally, further work is needed to better understand the assumptions driving land use model results and reveal the causes of uncertainty in more depth, to help reduce model uncertainty and improve the projections of land cover. © 2016 John Wiley & Sons Ltd.
Development of an automatic test equipment for nano gauging displacement transducers
NASA Astrophysics Data System (ADS)
Wang, Yung-Chen; Jywe, Wen-Yuh; Liu, Chien-Hung
2005-01-01
In order to satisfy the increasing demands on precision in manufacturing technology, nanometrology is gradually becoming more important in the manufacturing process. To ensure manufacturing precision, precise measuring instruments and sensors play a decisive role in the accurate characterization and inspection of products. For linear length inspection, high-precision gauging displacement transducers, i.e. nano gauging displacement transducers (NGDT), have often been utilized, which have a resolution in the nanometer range and can achieve an accuracy of less than 100 nm. Such measurement instruments include transducers based on electronic as well as optical measurement principles, e.g. inductive, incremental-optical or interference-optical. To guarantee accuracy and traceability to the definition of the meter, calibration and testing of NGDT are essential. Currently, there are some methods and machines for testing NGDT, but they suffer from various disadvantages. Some of them permit only manual test procedures, which are time-consuming, e.g. with highly accurate gauge blocks as material measures. Other tests can reach higher accuracy only in the micrometer range or result in uncertainties of more than 100 nm over large measuring ranges. To realize the testing of NGDT with a high resolution as well as a large measuring range, an automatic test equipment was constructed that has a resolution of 1.24 nm, a measuring range of up to 20 mm (60 mm) and a measuring uncertainty of approximately ±10 nm; it can fulfil the requirements of high resolution within the nanometer range while simultaneously covering a large measuring range on the order of millimeters. The test system includes a stable frame, a polarization interferometer, an angle sensor, an angular control, a drive system and piezo translators. During the test procedure, the angular control and piezo translators minimize the Abbe error.
For the automation of the test procedure, a measuring program adhering to the measurement principle outlined in the VDI/VDE 2617 guidelines was designed. With this program, NGDT can be tested in less than thirty minutes with eleven measuring points and five repetitions. By means of theoretical and experimental investigations it can be proved that the automatic test system achieves a test uncertainty of approx. ±10 nm over a measuring range of 18 mm, which corresponds to a relative uncertainty of approximately ±5 × 10-7. With its small uncertainty, minimization of the Abbe error and short test time, this system can be regarded as a universal and efficient precision test equipment, suitable for the accurate testing of arbitrary high-precision gauging displacement transducers.
Climate change and growth scenarios for California wildfire
A.L. Westerling; B.P. Bryant; H.K. Preisler; T.P. Holmes; H.G. Hildalgo; T. Das; S.R. Shrestha
2011-01-01
Large wildfire occurrence and burned area are modeled using hydroclimate and land-surface characteristics under a range of future climate and development scenarios. The range of uncertainty for future wildfire regimes is analyzed over two emissions pathways (the Special Report on Emissions Scenarios [SRES] A2 and B1 scenarios); three global climate models (Centre...
Bias and uncertainty of δ13CO2 isotopic mixing models
Zachary E. Kayler; Lisa Ganio; Mark Hauck; Thomas G. Pypker; Elizabeth W. Sulzman; Alan C. Mix; Barbara J. Bond
2009-01-01
The goal of this study was to evaluate how factorial combinations of two mixing models and two regression approaches (Keeling-OLS, Miller-Tans-OLS, Keeling-GMR, Miller-Tans-GMR) compare in small [CO2] range versus large [CO2] range regimes, with different combinations of...
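The Keeling mixing model regresses δ13C against 1/[CO2], and the intercept estimates the source isotopic signature; OLS and geometric mean regression (GMR) differ only in the fitted slope (GMR divides the OLS slope by the correlation coefficient). A sketch with synthetic data, showing the Keeling variant only (the Miller-Tans form, which works with δ13C·[CO2] instead, is analogous); all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-member mixing: background air plus a respired-CO2 source
delta_src, delta_bg, c_bg = -26.0, -8.0, 400.0
co2 = rng.uniform(420.0, 500.0, 30)               # small [CO2] range regime
delta = delta_src + c_bg * (delta_bg - delta_src) / co2
delta += rng.normal(0.0, 0.05, co2.size)          # measurement noise

x = 1.0 / co2
# Keeling-OLS: intercept of delta vs. 1/[CO2]
slope_ols, icpt_ols = np.polyfit(x, delta, 1)
# Keeling-GMR: geometric-mean slope (valid as written here since the
# correlation is positive), intercept forced through the centroid
r = np.corrcoef(x, delta)[0, 1]
slope_gmr = slope_ols / r
icpt_gmr = delta.mean() - slope_gmr * x.mean()
print(icpt_ols, icpt_gmr)  # both estimate the source signature (-26.0)
```

Because GMR inflates the slope by 1/r, the two intercepts diverge as r drops, which is why the small-[CO2]-range regime (low signal-to-noise, low r) is where the choice of regression matters most.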
Uncertainty in temperature response of current consumption-based emissions estimates
NASA Astrophysics Data System (ADS)
Karstensen, J.; Peters, G. P.; Andrew, R. M.
2014-09-01
Several studies have connected emissions of greenhouse gases to economic and trade data to quantify the causal chain from consumption to emissions and climate change. These studies usually combine data and models originating from different sources, making it difficult to estimate uncertainties in the end results. We estimate uncertainties in economic data, multi-pollutant emission statistics and metric parameters, and use Monte Carlo analysis to quantify contributions to uncertainty and to determine how uncertainty propagates to estimates of global temperature change from regional and sectoral territorial- and consumption-based emissions for the year 2007. We find that the uncertainties are sensitive to the emission allocations, mix of pollutants included, the metric and its time horizon, and the level of aggregation of the results. Uncertainties in the final results are largely dominated by the climate sensitivity and the parameters associated with the warming effects of CO2. The economic data have a relatively small impact on uncertainty at the global and national level, while much higher uncertainties are found at the sectoral level. Our results suggest that consumption-based national emissions are not significantly more uncertain than the corresponding production-based emissions, since the largest uncertainties are due to metric and emissions, which affect both perspectives equally. The two perspectives exhibit different sectoral uncertainties, due to differences in pollutant composition. We find global sectoral consumption uncertainties in the range of ±9-±27% using the global temperature potential with a 50-year time horizon, with metric uncertainties dominating. National level uncertainties are similar in both perspectives due to the dominance of CO2 over other pollutants. The consumption emissions of the top 10 emitting regions have a broad uncertainty range of ±9-±25%, with metric and emissions uncertainties contributing similarly.
The absolute global temperature potential with a 50-year time horizon has much higher uncertainties, with considerable uncertainty overlap for regions and sectors, indicating that the ranking of countries is uncertain.
NASA Astrophysics Data System (ADS)
Oosthuizen, Nadia; Hughes, Denis A.; Kapangaziwiri, Evison; Mwenge Kahinda, Jean-Marc; Mvandaba, Vuyelwa
2018-05-01
The demand for water resources is rapidly growing, placing more strain on access to water and its management. In order to appropriately manage water resources, there is a need to accurately quantify available water resources. Unfortunately, the data required for such assessment are frequently far from sufficient in terms of availability and quality, especially in southern Africa. In this study, the uncertainty related to the estimation of water resources of two sub-basins of the Limpopo River Basin - the Mogalakwena in South Africa and the Shashe shared between Botswana and Zimbabwe - is assessed. Input data (and model parameters) are significant sources of uncertainty that should be quantified. In southern Africa water use data are among the most unreliable sources of model input data because available databases generally consist of only licensed information and actual use is generally unknown. The study assesses how these uncertainties impact the estimation of surface water resources of the sub-basins. Data on farm reservoirs and irrigated areas from various sources were collected and used to run the model. Many farm dams and large irrigation areas are located in the upper parts of the Mogalakwena sub-basin. Results indicate that water use uncertainty is small. Nevertheless, the medium to low flows are clearly impacted. The simulated mean monthly flows at the outlet of the Mogalakwena sub-basin were between 22.62 and 24.68 Mm3 per month when incorporating only the uncertainty related to the main physical runoff generating parameters. The range of total predictive uncertainty of the model increased to between 22.15 and 24.99 Mm3 when water use data such as small farm dams, large reservoirs and irrigation were included. For the Shashe sub-basin incorporating only uncertainty related to the main runoff parameters resulted in mean monthly flows between 11.66 and 14.54 Mm3.
The range of predictive uncertainty changed to between 11.66 and 17.72 Mm3 after the uncertainty in water use information was added.
Bouska, Kristen; Whitledge, Gregory W.; Lant, Christopher; Schoof, Justin
2018-01-01
Land cover is an important determinant of aquatic habitat and is projected to shift with climate changes, yet climate-driven land cover changes are rarely factored into climate assessments. To quantify impacts and uncertainty of coupled climate and land cover change on warm-water fish species’ distributions, we used an ensemble model approach to project distributions of 14 species. For each species, current range projections were compared to 27 scenario-based projections and aggregated to visualize uncertainty. Multiple regression and model selection techniques were used to identify drivers of range change. Novel, or no-analogue, climates were assessed to evaluate transferability of models. Changes in total probability of occurrence ranged widely across species, from a 63% increase to a 65% decrease. Distributional gains and losses were largely driven by temperature and flow variables and underscore the importance of habitat heterogeneity and connectivity to facilitate adaptation to changing conditions. Finally, novel climate conditions were driven by mean annual maximum temperature, which stresses the importance of understanding the role of temperature on fish physiology and the role of temperature-mitigating management practices.
Caresana, Marco; Helmecke, Manuela; Kubancak, Jan; Manessi, Giacomo Paolo; Ott, Klaus; Scherpelz, Robert; Silari, Marco
2014-10-01
This paper discusses an intercomparison campaign performed in the mixed radiation field at the CERN-EU high-energy Reference Field (CERF) facility. Various instruments were employed: conventional and extended-range rem counters including a novel instrument called LUPIN, a bubble detector using an active counting system (ABC 1260) and two tissue-equivalent proportional counters (TEPCs). The results show that the extended-range instruments agree well within their uncertainties and within 1σ with the H*(10) FLUKA value. The conventional rem counters are in good agreement within their uncertainties and underestimate H*(10) as measured by the extended-range instruments and as predicted by FLUKA. The TEPCs slightly overestimate the FLUKA value but are nonetheless consistent with it when taking the comparatively large total uncertainties into account, and indicate that the non-neutron part of the stray field accounts for ∼30 % of the total H*(10). © The Author 2013. Published by Oxford University Press. All rights reserved.
Exact results for the finite time thermodynamic uncertainty relation
NASA Astrophysics Data System (ADS)
Manikandan, Sreekanth K.; Krishnamurthy, Supriya
2018-03-01
We obtain exact results for the recently discovered finite-time thermodynamic uncertainty relation for the dissipated work W_d in a stochastically driven system with non-Gaussian work statistics, both in the steady-state and transient regimes, by obtaining exact expressions for any moment of W_d at arbitrary times. The uncertainty function (the Fano factor of W_d) is bounded from below by 2k_BT as expected, for all times τ, in both steady-state and transient regimes. The lower bound is reached at τ = 0 as well as when certain system parameters vanish (corresponding to an equilibrium state). Surprisingly, we find that the uncertainty function also reaches a constant value at large τ for all the cases we have looked at. For a system starting and remaining in steady state, the uncertainty function increases monotonically, as a function of τ as well as other system parameters, implying that the large-τ value is also an upper bound. For the same system in the transient regime, however, we find that the uncertainty function can have a local minimum at an accessible time τ_m for a range of parameter values. The large-τ value for the uncertainty function is hence not a bound in this case. The non-monotonicity suggests, rather counter-intuitively, that there might be an optimal time for the working of microscopic machines, as well as an optimal configuration in the phase space of parameter values. Our solutions show that the ratios of higher moments of the dissipated work are also bounded from below by 2k_BT. For another model, also solvable by our methods, which never reaches a steady state, the uncertainty function is, in some cases, bounded from below by a value less than 2k_BT.
How uncertain are climate model projections of water availability indicators across the Middle East?
Hemming, Debbie; Buontempo, Carlo; Burke, Eleanor; Collins, Mat; Kaye, Neil
2010-11-28
The projection of robust regional climate changes over the next 50 years presents a considerable challenge for the current generation of climate models. Water cycle changes are particularly difficult to model in this area because major uncertainties exist in the representation of processes such as large-scale and convective rainfall and their feedback with surface conditions. We present climate model projections and uncertainties in water availability indicators (precipitation, run-off and drought index) for the 1961-1990 and 2021-2050 periods. Ensembles from two global climate models (GCMs) and one regional climate model (RCM) are used to examine different elements of uncertainty. Although all three ensembles capture the general distribution of observed annual precipitation across the Middle East, the RCM is consistently wetter than observations, especially over the mountainous areas. All future projections show decreasing precipitation (ensemble median between -5 and -25%) in coastal Turkey and parts of Lebanon, Syria and Israel and consistent run-off and drought index changes. The Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) GCM ensemble exhibits drying across the north of the region, whereas the Met Office Hadley Centre Quantifying Uncertainties in Model Projections-Atmospheric (QUMP-A) GCM and RCM ensembles show slight drying in the north and significant wetting in the south. RCM projections also show greater sensitivity (both wetter and drier) and a wider uncertainty range than QUMP-A. The nature of these uncertainties suggests that both large-scale circulation patterns, which influence region-wide drying/wetting patterns, and regional-scale processes, which affect localized water availability, are important sources of uncertainty in these projections.
To reduce large uncertainties in water availability projections, it is suggested that efforts would be well placed to focus on the understanding and modelling of both large-scale processes and their teleconnections with Middle East climate and localized processes involved in orographic precipitation.
Determination of output factors for small proton therapy fields.
Fontenot, Jonas D; Newhauser, Wayne D; Bloch, Charles; White, R Allen; Titt, Uwe; Starkschall, George
2007-02-01
Current protocols for the measurement of proton dose focus on measurements under reference conditions; methods for measuring dose under patient-specific conditions have not been standardized. In particular, it is unclear whether dose in patient-specific fields can be determined more reliably with or without the presence of the patient-specific range compensator. The aim of this study was to quantitatively assess the reliability of two methods for measuring dose per monitor unit (D/MU) values for small-field treatment portals: one with the range compensator and one without the range compensator. A Monte Carlo model of the Proton Therapy Center-Houston double-scattering nozzle was created, and estimates of D/MU values were obtained from 14 simulated treatments of a simple geometric patient model. Field-specific D/MU calibration measurements were simulated with a dosimeter in a water phantom with and without the range compensator. D/MU values from the simulated calibration measurements were compared with D/MU values from the corresponding treatment simulation in the patient model. To evaluate the reliability of the calibration measurements, six metrics and four figures of merit were defined to characterize accuracy, uncertainty, the standard deviations of accuracy and uncertainty, worst agreement, and maximum uncertainty. Measuring D/MU without the range compensator provided superior results for five of the six metrics and for all four figures of merit. The two techniques yielded different results primarily because of high-dose gradient regions introduced into the water phantom when the range compensator was present. Estimated uncertainties (approximately 1 mm) in the position of the dosimeter in these regions resulted in large uncertainties and high variability in D/MU values. When the range compensator was absent, these gradients were minimized and D/MU values were less sensitive to dosimeter positioning errors.
We conclude that measuring D/MU without the range compensator present provides more reliable results than measuring it with the range compensator in place.
NASA Astrophysics Data System (ADS)
Hughes, J. D.; Metz, P. A.
2014-12-01
Most watershed studies include observation-based water budget analyses to develop first-order estimates of significant flow terms. Surface-water/groundwater (SWGW) exchange is typically assumed to be equal to the residual of the sum of inflows and outflows in a watershed. These estimates of SWGW exchange, however, are highly uncertain as a result of the propagation of uncertainty inherent in the calculation or processing of the other terms of the water budget, such as stage-area-volume relations, and uncertainties associated with land-cover based evapotranspiration (ET) rate estimates. Furthermore, the uncertainty of estimated SWGW exchanges can be magnified in large wetland systems that transition from dry to wet during wet periods. Although it is well understood that observation-based estimates of SWGW exchange are uncertain, it is uncommon for the uncertainty of these estimates to be directly quantified. High-level programming languages like Python can greatly reduce the effort required to (1) quantify the uncertainty of estimated SWGW exchange in large wetland systems and (2) evaluate how different approaches for partitioning land-cover data in a watershed may affect the water-budget uncertainty. We have used Python with the Numpy, Scipy.stats, and pyDOE packages to implement an unconstrained Monte Carlo approach with Latin Hypercube sampling to quantify the uncertainty of monthly estimates of SWGW exchange in the Floral City watershed of the Tsala Apopka wetland system in west-central Florida, USA. Possible sources of uncertainty in the water budget analysis include rainfall, ET, canal discharge, and land/bathymetric surface elevations. Each of these input variables was assigned a probability distribution based on observation error or spanning the range of probable values. The Monte Carlo integration process exposes the uncertainties in land-cover based ET rate estimates as the dominant contributor to the uncertainty in SWGW exchange estimates.
We will discuss the uncertainty of SWGW exchange estimates using an ET model that partitions the watershed into open water and wetland land-cover types. We will also discuss the uncertainty of SWGW exchange estimates calculated using ET models partitioned into additional land-cover types.
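The Monte Carlo / Latin Hypercube workflow described above maps naturally onto scipy's `qmc.LatinHypercube` sampler (the study used the pyDOE package, whose `lhs()` function plays the same role). The budget terms, distributions, and magnitudes below are placeholders for illustration, not the study's data:

```python
import numpy as np
from scipy.stats import norm, qmc, uniform

# Latin Hypercube sample over four uncertain monthly budget terms
sampler = qmc.LatinHypercube(d=4, seed=0)
u = sampler.random(10000)              # stratified uniforms in [0, 1)^4

# Placeholder distributions (illustrative magnitudes, not the study's data);
# the ET term is given the widest spread, echoing the abstract's finding
rain  = norm(loc=120.0, scale=15.0).ppf(u[:, 0])    # rainfall, mm/month
et    = norm(loc=90.0,  scale=25.0).ppf(u[:, 1])    # land-cover-based ET
canal = uniform(loc=10.0, scale=10.0).ppf(u[:, 2])  # canal discharge, 10-20
dstor = norm(loc=5.0,   scale=5.0).ppf(u[:, 3])     # storage change

# SWGW exchange as the residual of the monthly water budget
swgw = rain - et - canal - dstor
lo, hi = np.percentile(swgw, [2.5, 97.5])
print(f"SWGW exchange 95% interval: [{lo:.1f}, {hi:.1f}] mm/month")
```

With these (assumed) spreads, the ET term dominates the variance of the residual, which is the kind of attribution the abstract reports for land-cover-based ET estimates.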
On the Monte Carlo simulation of electron transport in the sub-1 keV energy range.
Thomson, Rowan M; Kawrakow, Iwan
2011-08-01
The validity of "classical" Monte Carlo (MC) simulations of electron and positron transport at sub-1 keV energies is investigated in the context of quantum theory. Quantum theory dictates that uncertainties on the position and energy-momentum four-vectors of radiation quanta obey Heisenberg's uncertainty relation; however, these uncertainties are neglected in "classical" MC simulations of radiation transport in which position and momentum are known precisely. Using the quantum uncertainty relation and electron mean free path, the magnitudes of uncertainties on electron position and momentum are calculated for different kinetic energies; a validity bound on the classical simulation of electron transport is derived. In order to satisfy the Heisenberg uncertainty principle, uncertainties of 5% must be assigned to position and momentum for 1 keV electrons in water; at 100 eV, these uncertainties are 17 to 20% and are even larger at lower energies. In gaseous media such as air, these uncertainties are much smaller (less than 1% for electrons with energy 20 eV or greater). The classical Monte Carlo transport treatment is questionable for sub-1 keV electrons in condensed water as uncertainties on position and momentum must be large (relative to electron momentum and mean free path) to satisfy the quantum uncertainty principle. Simulations which do not account for these uncertainties are not faithful representations of the physical processes, calling into question the results of MC track structure codes simulating sub-1 keV electron transport. Further, the large difference in the scale at which quantum effects are important in gaseous and condensed media suggests that track structure measurements in gases are not necessarily representative of track structure in condensed materials on a micrometer or a nanometer scale.
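The validity bound described can be reproduced from the Heisenberg relation: demanding equal fractional uncertainties f on position (relative to the mean free path λ) and on momentum p requires (fλ)(fp) ≥ ħ/2, i.e. f ≥ sqrt(ħ/(2λp)). A sketch with an assumed, order-of-magnitude mean free path for liquid water, not the paper's cross-section data:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
ME = 9.1093837015e-31    # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def min_fractional_uncertainty(energy_ev, mfp_m):
    """Smallest equal fractional uncertainty f on position (relative to
    the mean free path) and momentum compatible with Heisenberg's
    relation, (f * mfp) * (f * p) >= hbar / 2."""
    p = math.sqrt(2.0 * ME * energy_ev * EV)  # non-relativistic, fine < few keV
    return math.sqrt(HBAR / (2.0 * mfp_m * p))

# Assumed ~1 nm mean free path in liquid water (illustrative only)
f_1kev = min_fractional_uncertainty(1000.0, 1.0e-9)
f_100ev = min_fractional_uncertainty(100.0, 1.0e-9)
print(f_1kev, f_100ev)  # the bound grows as the energy drops
```

With the ~1 nm assumption, the 1 keV bound lands near the few-percent level quoted in the abstract; in a gas the mean free path is orders of magnitude longer, which is why the bound there is far below 1%.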
Effects of Parameter Uncertainty on Long-Term Simulations of Lake Alkalinity
NASA Astrophysics Data System (ADS)
Lee, Sijin; Georgakakos, Konstantine P.; Schnoor, Jerald L.
1990-03-01
A first-order second-moment uncertainty analysis has been applied to two lakes in the Adirondack Park, New York, to assess the long-term response of lakes to acid deposition. Uncertainty due to parameter error and initial condition error was considered. Because the enhanced trickle-down (ETD) model is calibrated with only 3 years of field data and is used to simulate a 50-year period, the uncertainty in the lake alkalinity prediction is relatively large. When a best estimate of parameter uncertainty is used, the annual average alkalinity is predicted to be -11 ±28 μeq/L for Lake Woods and 142 ± 139 μeq/L for Lake Panther after 50 years. Hydrologic parameters and chemical weathering rate constants contributed most to the uncertainty of the simulations. Results indicate that the uncertainty in long-range predictions of lake alkalinity increased significantly over a 5- to 10-year period and then reached a steady state.
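A first-order second-moment analysis, as applied above, propagates parameter variances through local sensitivities: var(y) ≈ Σ_i (∂y/∂θ_i)² var(θ_i) for independent parameters. A generic numerical sketch with a toy stand-in for the ETD model (all names and numbers illustrative):

```python
import numpy as np

def fosm_variance(model, theta, theta_var, eps=1e-6):
    """First-order second-moment: var(y) ~ sum_i (dy/dtheta_i)^2 * var_i,
    with sensitivities from central finite differences. Assumes the
    parameters are independent (no covariance terms)."""
    grad = np.zeros(len(theta))
    for i in range(len(theta)):
        up, dn = theta.copy(), theta.copy()
        up[i] += eps
        dn[i] -= eps
        grad[i] = (model(up) - model(dn)) / (2.0 * eps)
    return float((grad**2 * theta_var).sum())

# Toy alkalinity-style model: weathering supply minus deposition-driven loss
def alkalinity(theta):
    weathering_rate, runoff, deposition = theta
    return weathering_rate / runoff - 0.8 * deposition

theta = np.array([50.0, 0.5, 30.0])       # best-estimate parameters
theta_var = np.array([100.0, 0.01, 25.0]) # parameter variances
var_y = fosm_variance(alkalinity, theta, theta_var)
print(var_y ** 0.5)  # standard deviation of the predicted alkalinity
```

In this toy case the weathering-rate and hydrologic (runoff) terms contribute equally and dominate the variance, echoing the abstract's attribution of uncertainty to hydrologic parameters and chemical weathering rate constants.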
Uncertainty estimation of Intensity-Duration-Frequency relationships: A regional analysis
NASA Astrophysics Data System (ADS)
Mélèse, Victor; Blanchet, Juliette; Molinié, Gilles
2018-03-01
We propose in this article a regional study of uncertainties in IDF curves derived from point-rainfall maxima. We develop two generalized extreme value models based on the simple scaling assumption, first in the frequentist framework and second in the Bayesian framework. Within the frequentist framework, uncertainties are obtained i) from the Gaussian density stemming from the asymptotic normality theorem of the maximum likelihood and ii) with a bootstrap procedure. Within the Bayesian framework, uncertainties are obtained from the posterior densities. We confront these two frameworks on the same database covering a large region of 100,000 km2 in southern France with contrasted rainfall regimes, in order to be able to draw conclusions that are not specific to the data. The two frameworks are applied to 405 hourly stations with data back to the 1980s, accumulated in the range 3 h-120 h. We show that i) the Bayesian framework is more robust than the frequentist one to the starting point of the estimation procedure, ii) the posterior and the bootstrap densities are able to better adjust uncertainty estimation to the data than the Gaussian density, and iii) the bootstrap density gives unreasonable confidence intervals, in particular for return levels associated with large return periods. Therefore our recommendation goes towards the use of the Bayesian framework to compute uncertainty.
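The frequentist bootstrap route in ii) can be sketched with scipy's GEV implementation: fit annual maxima by maximum likelihood, then resample to get a confidence interval on a return level. The data below are synthetic and the article's simple-scaling IDF structure is omitted; note that scipy's shape parameter `c` uses the opposite sign convention to the hydrological ξ:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)

# Synthetic "annual maximum rainfall" sample (mm), GEV-distributed
true = genextreme(c=-0.1, loc=40.0, scale=10.0)
maxima = true.rvs(size=60, random_state=rng)

def return_level(sample, period=100.0):
    # MLE fit, then the quantile with exceedance probability 1/period
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(1.0 - 1.0 / period, c, loc=loc, scale=scale)

rl = return_level(maxima)

# Nonparametric bootstrap for a 90% confidence interval
boot = [return_level(rng.choice(maxima, maxima.size, replace=True))
        for _ in range(100)]
lo, hi = np.nanpercentile(boot, [5.0, 95.0])
print(f"100-yr return level {rl:.1f} mm, 90% CI [{lo:.1f}, {hi:.1f}]")
```

The article's caution about bootstrap intervals for large return periods shows up here as occasional resamples whose fitted shape parameter produces extreme upper quantiles, inflating the interval.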
Chen, Jianjun; Frey, H Christopher
2004-12-15
Methods for optimization of process technologies that distinguish between variability and uncertainty are developed and applied to case studies of NOx control for Integrated Gasification Combined Cycle systems. Existing methods of stochastic optimization (SO) and stochastic programming (SP) are demonstrated. Comparing SO and SP results yields the value of collecting additional information to reduce uncertainty: for example, an expected annual benefit of $240,000 is estimated if uncertainty can be reduced before a final design is chosen. SO and SP are typically applied to uncertainty; when applied to variability instead, they yield the benefit of dynamic process control. For example, an annual savings of $1 million could be achieved if the system is adjusted to changes in process conditions. When variability and uncertainty are treated distinctly, a coupled stochastic optimization and programming method and a two-dimensional stochastic programming method are demonstrated via a case study. For the case study, the mean annual benefit of dynamic process control is estimated to be $700,000, with a 95% confidence range of $500,000 to $940,000. These methods are expected to be of greatest utility for problems involving a large commitment of resources, for which small differences in designs can produce large cost savings.
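The value of reducing uncertainty before committing to a design (the SO-versus-SP comparison) can be sketched with a two-design toy problem: committing to the design with the best expected cost versus choosing the best design per realization once the uncertainty is resolved. The designs, cost functions, and dollar figures below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical annualized cost ($/yr) of two NOx-control designs under one
# uncertain input theta (e.g., a degradation rate); numbers are illustrative
theta = rng.normal(1.0, 0.3, size=10_000)
cost = {"design_A": 2.0e6 + 0.5e6 * theta,
        "design_B": 1.8e6 + 0.9e6 * theta}

# Commit before uncertainty resolves: pick the design with best expected cost
so_cost = min(np.mean(c) for c in cost.values())
# With resolved uncertainty: pick the cheaper design in each realization
sp_cost = np.mean(np.minimum(cost["design_A"], cost["design_B"]))
# Expected value of resolving the uncertainty before the final design choice
info_value = so_cost - sp_cost
```

Because the per-realization minimum can never exceed the cost of any fixed design, `info_value` is non-negative by construction; it is the quantity the abstract reports as the $240,000 expected annual benefit.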
Climate data induced uncertainty in model based estimations of terrestrial primary productivity
NASA Astrophysics Data System (ADS)
Wu, Z.; Ahlström, A.; Smith, B.; Ardö, J.; Eklundh, L.; Fensholt, R.; Lehsten, V.
2016-12-01
Models used to project the global vegetation and carbon cycle differ in their estimates of historical fluxes and pools. These differences arise not only from differences between models but also from differences in the environmental and climatic data that force the models. Here we investigate the role of uncertainties in historical climate data, encapsulated by a set of six historical climate datasets. We focus on terrestrial gross primary productivity (GPP) and analyze results from a dynamic process-based vegetation model (LPJ-GUESS) forced by the six climate datasets, alongside two empirical datasets of GPP (derived from flux towers and remote sensing). We find that the climate-induced uncertainty, defined as the spread among historical GPP simulations forced with the different climate datasets, can be as high as 33 Pg C yr-1 globally (19% of mean GPP). The uncertainty is partitioned into the three main climatic drivers: temperature, precipitation, and shortwave radiation. Additionally, we illustrate how the uncertainty due to a given climate driver depends both on the magnitude of the forcing data uncertainty (the data range) and on the sensitivity of the modeled GPP to the driver (the ecosystem sensitivity). The analysis is performed globally and stratified into five land cover classes. We find that the dynamic vegetation model overestimates GPP compared to empirically based GPP data over most areas, except for the tropical region. Both the simulations and the empirical estimates agree that the tropical region is a disproportionate source of uncertainty in GPP estimation. This is mainly caused by uncertainties in shortwave radiation forcing, for which the climate data range contributes slightly more uncertainty than the ecosystem sensitivity to shortwave radiation.
We also find that precipitation dominates the climate-induced uncertainty over nearly half of the terrestrial vegetated surface, mainly because of large ecosystem sensitivity to precipitation. Overall, climate data ranges are found to contribute more to the climate-induced uncertainty than ecosystem sensitivity does. Our study highlights the need to better constrain tropical climate and demonstrates that uncertainty caused by climatic forcing data must be considered when comparing and evaluating model results and empirical datasets.
Uncertainty in temperature response of current consumption-based emissions estimates
NASA Astrophysics Data System (ADS)
Karstensen, J.; Peters, G. P.; Andrew, R. M.
2015-05-01
Several studies have connected emissions of greenhouse gases to economic and trade data to quantify the causal chain from consumption to emissions and climate change. These studies usually combine data and models originating from different sources, making it difficult to estimate uncertainties along the entire causal chain. We estimate uncertainties in economic data, multi-pollutant emission statistics, and metric parameters, and use Monte Carlo analysis to quantify contributions to uncertainty and to determine how uncertainty propagates to estimates of global temperature change from regional and sectoral territorial- and consumption-based emissions for the year 2007. We find that the uncertainties are sensitive to the emission allocations, the mix of pollutants included, the metric and its time horizon, and the level of aggregation of the results. Uncertainties in the final results are largely dominated by the climate sensitivity and the parameters associated with the warming effects of CO2. Based on our assumptions, which exclude correlations in the economic data, the uncertainty in the economic data appears to have a relatively small impact on uncertainty at the national level in comparison to emissions and metric uncertainty. Much higher uncertainties are found at the sectoral level. Our results suggest that consumption-based national emissions are not significantly more uncertain than the corresponding production-based emissions, since the largest uncertainties are due to the metric and emissions, which affect both perspectives equally. The two perspectives exhibit different sectoral uncertainties, owing to differences in pollutant composition. We find global sectoral consumption uncertainties in the range of ±10 to ±27% using the Global Temperature Potential with a 50-year time horizon, with metric uncertainties dominating. National-level uncertainties are similar in both perspectives due to the dominance of CO2 over other pollutants.
The consumption emissions of the top 10 emitting regions have a broad uncertainty range of ±9 to ±25 %, with metric and emission uncertainties contributing similarly. The absolute global temperature potential (AGTP) with a 50-year time horizon has much higher uncertainties, with considerable uncertainty overlap for regions and sectors, indicating that the ranking of countries is uncertain.
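The Monte Carlo propagation used in such studies can be sketched as follows: sample each uncertain input along the causal chain from an assumed distribution, multiply the samples through, and read uncertainty ranges off the percentiles of the result. The lognormal spreads and magnitudes below are placeholders, not the study's estimated uncertainties.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 50_000
# Illustrative lognormal uncertainty assumptions (invented for the sketch):
emissions = rng.lognormal(mean=np.log(10.0), sigma=0.1, size=N)  # emissions
metric = rng.lognormal(mean=np.log(1.0), sigma=0.2, size=N)      # GTP-like factor
warming = emissions * metric          # temperature-change proxy per sample

# Summarize the propagated uncertainty as half the 5-95% range relative
# to the median, expressed as a percent
lo, med, hi = np.percentile(warming, [5, 50, 95])
rel_unc = 100 * (hi - lo) / (2 * med)
```

Because the inputs multiply, the relative spreads combine roughly in quadrature in log space, so the metric term (sigma = 0.2) dominates the propagated uncertainty here, mirroring the abstract's finding that metric uncertainties dominate.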
Snyder, Daniel T.; Brownell, Dorie L.
1996-01-01
Suggestions for further study include (1) evaluation of the surface-runoff component of inflow to the lake; (2) use of a cross-sectional ground-water flow model to estimate ground-water inflow, outflow, and storage; (3) additional data collection to reduce the uncertainties of the hydrologic components that have large relative uncertainties; and (4) determination of long-term trends for a wide range of climatic and hydrologic conditions.
NASA Astrophysics Data System (ADS)
Zhang, Bowen; Tian, Hanqin; Lu, Chaoqun; Chen, Guangsheng; Pan, Shufen; Anderson, Christopher; Poulter, Benjamin
2017-09-01
A wide range of estimates of global wetland methane (CH4) fluxes has been reported during the past two decades, giving rise to an urgent need to identify the uncertainty sources and arrive at a reconciled estimate of global CH4 fluxes from wetlands. Most bottom-up estimates rely on wetland data sets, but these data sets are largely inconsistent in terms of both wetland extent and spatiotemporal distribution, and the uncertainties associated with these discrepancies have not yet been quantitatively assessed. By comparing five widely used global wetland data sets (GISS, GLWD, Kaplan, GIEMS and SWAMPS-GLWD) in this study, we found large differences in wetland extent, ranging from 5.3 to 10.2 million km2, as well as in spatial and temporal distribution among the five data sets. These discrepancies resulted in large biases in model-estimated global wetland CH4 emissions as simulated with the Dynamic Land Ecosystem Model (DLEM). The model simulations indicated mean global wetland CH4 emissions during 2000-2007 of 177.2 ± 49.7 Tg CH4 yr-1 across the five data sets. The tropical regions contributed the largest portion of estimated CH4 emissions from global wetlands, but also had the largest discrepancy. Among the six continents, the largest uncertainty was found in South America. Improved estimates of wetland extent and CH4 emissions in the tropical regions and South America would therefore be a critical step toward an accurate estimate of global CH4 emissions. This uncertainty analysis also reveals an important need for the scientific community to generate a global wetland data set with higher spatial resolution and shorter time intervals, by integrating multiple sources of field and satellite data with modeling approaches, for cross-scale extrapolation.
Examples of Communicating Uncertainty Applied to Earthquake Hazard and Risk Products
NASA Astrophysics Data System (ADS)
Wald, D. J.
2013-12-01
When is communicating scientific modeling uncertainty effective? One viewpoint is that the answer depends on whether one is communicating hazard or risk: hazards have quantifiable uncertainties (which, granted, are often ignored), yet risk uncertainties compound uncertainties inherent in the hazard with those of the risk calculations, and are thus often larger. Larger, yet more meaningful: since risk entails societal impact of some form, consumers of such information tend to have a better grasp of the potential uncertainty ranges for loss information than they do for less-tangible hazard values (like magnitude, peak acceleration, or stream flow). I present two examples that compare and contrast communicating uncertainty for earthquake hazard and risk products. The first example is the U.S. Geological Survey's (USGS) ShakeMap system, which portrays the uncertain, best estimate of the distribution and intensity of shaking over the potentially impacted region. The shaking intensity is well constrained at seismograph locations yet is uncertain elsewhere, so shaking uncertainties are quantified and presented spatially. However, with ShakeMap, it seems that users tend to believe what they see is accurate in part because (1) considering the shaking uncertainty complicates the picture, and (2) it would not necessarily alter their decision-making. In contrast, when it comes to making earthquake-response decisions based on uncertain loss estimates, actions tend to be made only after analysis of the confidence in (or source of) such estimates. Uncertain ranges of loss estimates instill tangible images for users, and when such uncertainties become large, intuitive reality-check alarms go off, for example, when the range of losses presented become too wide to be useful. 
The USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system, which in near-real time alerts users to the likelihood of ranges of potential fatalities and economic impact, is aimed at facilitating rapid and proportionate earthquake response. For uncertainty representation, PAGER employs an Earthquake Impact Scale (EIS) that provides simple alerting thresholds, derived from systematic analyses of past earthquake impact and response levels. The alert levels are characterized by alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (major disaster, necessitating international response). We made a conscious attempt at simple, intuitive color-coded alerting criteria; yet we preserve the necessary uncertainty measures (with simple histograms) by which one can gauge the likelihood for the alert to be over- or underestimated. In these hazard and loss modeling examples, both products are widely used across a range of technical as well as general audiences. Ironically, ShakeMap uncertainties--rigorously reported and portrayed for the primarily scientific portion of the audience--are rarely employed and are routinely misunderstood; for PAGER, uncertainties aimed at a wider user audience seem to be more easily digested. We discuss how differences in the way these uncertainties are portrayed may play into their acceptance and uptake, or lack thereof.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
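The contrast behind criteria (1) and (2) can be illustrated with synthetic data: when measurement error scales with rain rate, additive residuals have rate-dependent variance, while log-space (multiplicative) residuals do not. All distributions and parameters below are invented for the sketch, not taken from the letter.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic "true" daily rain (mm), heavy-tailed, kept strictly positive
truth = rng.gamma(shape=0.5, scale=10.0, size=5000) + 0.1
# Synthetic "measurements" whose error scales with rain rate
measured = truth * np.exp(rng.normal(-0.1, 0.3, size=truth.size))

# Additive model:       measured = truth + e
add_err = measured - truth
# Multiplicative model: measured = truth * exp(e), i.e., e in log space
mul_err = np.log(measured / truth)

# Compare residual spread for heavy vs. light rain
heavy = truth > np.median(truth)
add_ratio = add_err[heavy].std() / add_err[~heavy].std()   # grows with rain
mul_ratio = mul_err[heavy].std() / mul_err[~heavy].std()   # roughly constant
```

The additive residuals violate the constant-variance assumption (`add_ratio` well above 1), while the multiplicative residuals keep a rate-independent spread, which is the non-constant-variance weakness the letter attributes to the additive model.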
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Gang
Mid-latitude extreme weather events are responsible for a large part of climate-related damage. Yet large uncertainties remain in climate model projections of heat waves, droughts, and heavy rain/snow events on regional scales, limiting our ability to effectively use these projections for climate adaptation and mitigation. These uncertainties can be attributed to both the lack of spatial resolution in the models, and to the lack of a dynamical understanding of these extremes. The approach of this project is to relate the fine-scale features to the large scales in current climate simulations, seasonal re-forecasts, and climate change projections in a very wide range of models, including the atmospheric and coupled models of ECMWF over a range of horizontal resolutions (125 to 10 km), aqua-planet configuration of the Model for Prediction Across Scales and High Order Method Modeling Environments (resolutions ranging from 240 km – 7.5 km) with various physics suites, and selected CMIP5 model simulations. The large scale circulation will be quantified both on the basis of the well-tested preferred circulation regime approach, and very recently developed measures, the finite amplitude Wave Activity (FAWA) and its spectrum. The fine scale structures related to extremes will be diagnosed following the latest approaches in the literature. The goal is to use the large scale measures as indicators of the probability of occurrence of the finer scale structures, and hence extreme events. These indicators will then be applied to the CMIP5 models and time-slice projections of a future climate.
Wallace, Jack
2010-05-01
While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
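Approach (ii) from the abstract can be sketched numerically: the laboratory's relative deviations from the consensus mean across many proficiency rounds yield a bias (systematic) component and a standard deviation (random) component, which is then expanded with a coverage factor. The round data below are fabricated for illustration and do not come from any real proficiency program.

```python
import statistics

# Hypothetical blood-alcohol proficiency results (g/100 mL): each pair is
# (this laboratory's result, mean of all participants) for one test round
rounds = [(0.081, 0.080), (0.102, 0.100), (0.149, 0.152),
          (0.079, 0.082), (0.121, 0.118), (0.098, 0.101)]

# Relative deviation of the lab from the consensus mean, per round
rel_diffs = [(lab - ref) / ref for lab, ref in rounds]
bias = statistics.mean(rel_diffs)     # systematic component (relative)
u_rel = statistics.stdev(rel_diffs)   # random component, 1 standard uncertainty
U_rel = 2 * u_rel                     # expanded uncertainty, coverage factor k=2
```

A reported quantitation x would then carry an expanded uncertainty of roughly `x * U_rel` (after correcting for, or folding in, the bias), applicable across the concentration range covered by the proficiency tests.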
Uncertainty information in climate data records from Earth observation
NASA Astrophysics Data System (ADS)
Merchant, C. J.
2017-12-01
How to derive and present uncertainty in climate data records (CDRs) has been debated within the European Space Agency Climate Change Initiative, in search of common principles applicable across a range of essential climate variables. Various points of consensus have been reached, including the importance of improving provision of uncertainty information and the benefit of adopting international norms of metrology for language around the distinct concepts of uncertainty and error. Providing an estimate of standard uncertainty per datum (or the means to readily calculate it) emerged as baseline good practice, and should be highly relevant to users of CDRs when the uncertainty in data is variable (the usual case). Given this baseline, the role of quality flags is clarified as being complementary to and not repetitive of uncertainty information. Data with high uncertainty are not poor quality if a valid estimate of the uncertainty is available. For CDRs and their applications, the error correlation properties across spatio-temporal scales present important challenges that are not fully solved. Error effects that are negligible in the uncertainty of a single pixel may dominate uncertainty in the large-scale and long-term. A further principle is that uncertainty estimates should themselves be validated. The concepts of estimating and propagating uncertainty are generally acknowledged in geophysical sciences, but less widely practised in Earth observation and development of CDRs. Uncertainty in a CDR depends in part (and usually significantly) on the error covariance of the radiances and auxiliary data used in the retrieval. Typically, error covariance information is not available in the fundamental CDR (FCDR) (i.e., with the level-1 radiances), since provision of adequate level-1 uncertainty information is not yet standard practice. Those deriving CDRs thus cannot propagate the radiance uncertainty to their geophysical products. 
The FIDUCEO project (www.fiduceo.eu) is demonstrating metrologically sound methodologies addressing this problem for four key historical CDRs. FIDUCEO methods of uncertainty analysis (which also tend to lead to improved FCDRs and CDRs) could support coherent treatment of uncertainty across FCDRs to CDRs and higher level products for a wide range of essential climate variables.
Cascetta, Furio; Palombo, Adolfo; Scalabrini, Gianfranco
2003-04-01
In this paper the metrological behavior of two different insertion flowmeters (magnetic and turbine types) in large water pipes is described. A master-slave calibration was carried out to estimate the overall uncertainty of the tested meters. The experimental results show that (i) the magnetic insertion flowmeter achieves the claimed accuracy (±2%) over the entire flow range (20:1); (ii) the insertion turbine meter, by contrast, reaches the claimed accuracy only in the upper zone of the flow range.
NASA Astrophysics Data System (ADS)
Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.
2017-11-01
Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly-correlated stochastic noise are more insidious, and less attention is drawn to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors, thanks to averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and shown not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and the range continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
Kolstad, Erik W; Johansson, Kjell Arne
2011-03-01
Climate change is expected to have large impacts on health at low latitudes, where droughts and malnutrition, diarrhea, and malaria are projected to increase. The main objective of this study was to present a method for assessing a range of plausible health impacts of climate change while handling uncertainties in an unambiguous manner. We illustrate this method by quantifying the impacts of projected regional warming on diarrhea in this century. We combined a range of linear regression coefficients to compute projections of future climate change-induced increases in diarrhea, using the results from five empirical studies and a 19-member climate model ensemble for which future greenhouse gas emissions were prescribed. Six geographical regions were analyzed. The model ensemble projected temperature increases of up to 4°C over land in the tropics and subtropics by the end of this century. The associated mean projected increases in the relative risk of diarrhea in the six study regions were 8-11% (with SDs of 3-5%) by 2010-2039 and 22-29% (SDs of 9-12%) by 2070-2099. Even our most conservative estimates indicate substantial impacts from climate change on the incidence of diarrhea. Nevertheless, our main conclusion is that large uncertainties are associated with future projections of diarrhea and climate change. We believe that these uncertainties can be attributed primarily to the sparsity of empirical climate-health data. Our results therefore highlight the need for empirical data in the cross section between climate and human health.
Ku-band antenna acquisition and tracking performance study, volume 4
NASA Technical Reports Server (NTRS)
Huang, T. C.; Lindsey, W. C.
1977-01-01
The results pertaining to the tradeoff analysis and performance of the Ku-band shuttle antenna pointing and signal acquisition system are presented. The square, hexagonal and spiral antenna trajectories were investigated assuming the TDRS postulated uncertainty region and a flexible statistical model for the location of the TDRS within the uncertainty volume. The scanning trajectories, shuttle/TDRS signal parameters and dynamics, and three signal acquisition algorithms were integrated into a hardware simulation. The hardware simulation is quite flexible in that it allows for the evaluation of signal acquisition performance for an arbitrary (programmable) antenna pattern, a large range of C/N0 values, various TDRS/shuttle a priori uncertainty distributions, and three distinct signal search algorithms.
Influence of internal variability on population exposure to hydroclimatic changes
NASA Astrophysics Data System (ADS)
Mankin, Justin S.; Viviroli, Daniel; Mekonnen, Mesfin M.; Hoekstra, Arjen Y.; Horton, Radley M.; Smerdon, Jason E.; Diffenbaugh, Noah S.
2017-04-01
Future freshwater supply, human water demand, and people's exposure to water stress are subject to multiple sources of uncertainty, including unknown future pathways of fossil fuel and water consumption, and 'irreducible' uncertainty arising from internal climate system variability. Such internal variability can conceal forced hydroclimatic changes on multi-decadal timescales and near-continental spatial-scales. Using three projections of population growth, a large ensemble from a single Earth system model, and assuming stationary per capita water consumption, we quantify the likelihoods of future population exposure to increased hydroclimatic deficits, which we define as the average duration and magnitude by which evapotranspiration exceeds precipitation in a basin. We calculate that by 2060, ~31%-35% of the global population will be exposed to >50% probability of hydroclimatic deficit increases that exceed existing hydrological storage, with up to 9% of people exposed to >90% probability. However, internal variability, which is an irreducible uncertainty in climate model predictions that is under-sampled in water resource projections, creates substantial uncertainty in predicted exposure: ~86%-91% of people will reside where irreducible uncertainty spans the potential for both increases and decreases in sub-annual water deficits. In one population scenario, changes in exposure to large hydroclimate deficits vary from -3% to +6% of global population, a range arising entirely from internal variability. The uncertainty in risk arising from irreducible uncertainty in the precise pattern of hydroclimatic change, which is typically conflated with other uncertainties in projections, is critical for climate risk management that seeks to optimize adaptations that are robust to the full set of potential real-world outcomes.
Absolute, SI-traceable lunar irradiance tie-points for the USGS Lunar Model
NASA Astrophysics Data System (ADS)
Brown, Steven W.; Eplee, Robert E.; Xiong, Xiaoxiong J.
2017-10-01
The United States Geological Survey (USGS) has developed an empirical model, known as the Robotic Lunar Observatory (ROLO) Model, that predicts the reflectance of the Moon for any Sun-sensor-Moon configuration over the spectral range from 350 nm to 2500 nm. The lunar irradiance can be predicted from the modeled lunar reflectance using a spectrum of the incident solar irradiance. While extremely successful as a relative exo-atmospheric calibration target, the ROLO Model is not SI-traceable and has estimated uncertainties too large for the Moon to be used as an absolute celestial calibration target. In this work, two recent absolute, low-uncertainty, SI-traceable top-of-the-atmosphere (TOA) lunar irradiances, measured over the spectral range from 380 nm to 1040 nm at lunar phase angles of 6.6° and 16.9°, are used as tie-points to the output of the ROLO Model. Combined with empirically derived phase and libration corrections to the output of the ROLO Model, and uncertainty estimates in those corrections, the measurements enable development of a corrected TOA lunar irradiance model and its uncertainty budget for phase angles between ±80° and libration angles from 7° to 51°. The uncertainties in the empirically corrected output from the ROLO Model are approximately 1% from 440 nm to 865 nm and increase to almost 3% at 412 nm. The dominant components in the uncertainty budget are the uncertainty in the absolute TOA lunar irradiance and the uncertainty in the fit to the phase correction from the output of the ROLO Model.
NASA Astrophysics Data System (ADS)
Plag, H.-P.
2009-04-01
Local Sea Level (LSL) rise is one of the major anticipated impacts of future global warming. In many low-lying and often subsiding coastal areas, an increase of local sea-surface height is likely to increase the hazards of storm surges and hurricanes and to lead to major inundation. Single major disasters due to storm surges and hurricanes hitting densely populated urban areas are estimated to inflict losses in excess of $100 billion. Decision makers face a trade-off between imposing the very high costs of coastal protection, mitigation and adaptation upon today's national economies and leaving the costs of potential major disasters to future generations. Risk and vulnerability assessments in support of informed decisions require as input predictions of the range of future LSL rise with reliable estimates of uncertainties. Secular changes in LSL are the result of a mix of location-dependent factors including ocean temperature and salinity changes, ocean and atmospheric circulation changes, mass exchange of the ocean with terrestrial water storage and the cryosphere, and vertical land motion. Current aleatory uncertainties in observations relevant to past and current LSL changes, combined with epistemic uncertainties in some of the forcing functions for LSL changes, produce a large range of plausible future LSL trajectories. This large range hampers the development of reasonable mitigation and adaptation strategies in the coastal zone. A detailed analysis of the uncertainties helps to answer the question of which observations could help to reduce the uncertainties, and how. The analysis shows that the Global Geodetic Observing System (GGOS) provides valuable observations and products towards this goal. Observations of the large ice sheets can improve the constraints on the current mass balance of the cryosphere and support cryosphere model validation.
Vertical land motion close to melting ice sheets is highly relevant to validating models of the elastic response of the Earth to glacial unloading. Combining satellite gravity missions with ground-based observations of gravity and vertical land motion in areas with significant mass changes (in the cryosphere, land water storage, and ocean) could help to improve models of the global water and energy cycle, which ultimately improves the understanding of current LSL changes. For LSL projections, local vertical land motion expressed in a reference frame tied to the center of mass is an important input, which currently contributes significantly to the error budget of LSL predictions. Improvements to the terrestrial reference frame would reduce this error contribution.
NASA Astrophysics Data System (ADS)
Doytchinov, I.; Tonnellier, X.; Shore, P.; Nicquevert, B.; Modena, M.; Mainaud Durand, H.
2018-05-01
Micrometric assembly and alignment requirements for future particle accelerators, and especially for large assemblies, create the need for accurate uncertainty budgeting of alignment measurements. Measurements and uncertainties have to be accurately stated and traceable to international standards, at the level of tens of µm over metre-long assemblies. Indeed, hundreds of these assemblies will be produced and measured by several suppliers around the world, and will have to be integrated into a single machine. As part of the PACMAN project at CERN, we proposed and studied a practical application of probabilistic modelling of task-specific alignment uncertainty by applying a simulation-by-constraints calibration method. Using this method, we calibrated our measurement model using available data from ISO standardised tests (the 10360 series) for the metrology equipment. We combined this model with reference measurements and analysis of the measured data to quantify the actual task-specific uncertainty of each alignment measurement procedure. Our methodology was successfully validated against a calibrated and traceable 3D artefact as part of an international inter-laboratory study. The validated models were used to study the expected alignment uncertainty and important sensitivity factors in measuring the shortest and longest of the Compact Linear Collider (CLIC) study assemblies, 0.54 m and 2.1 m long respectively. In both cases, the laboratory alignment uncertainty was within the targeted uncertainty budget of 12 µm (68% confidence level). It was found that the remaining uncertainty budget for any additional alignment error compensations, such as thermal drift due to variation in machine operation heat load conditions, must be within 8.9 µm and 9.8 µm (68% confidence level), respectively.
Monitoring Top-of-Atmosphere Radiative Energy Imbalance for Climate Prediction
NASA Technical Reports Server (NTRS)
Lin, Bing; Chambers, Lin H.; Stackhouse, Paul W., Jr.; Minnis, Patrick
2009-01-01
Large climate feedback uncertainties limit the prediction accuracy of the Earth's future climate with an increased-CO2 atmosphere. One potential approach to reducing the feedback uncertainties, using satellite observations of top-of-atmosphere (TOA) radiative energy imbalance, is explored. Instead of solving the initial condition problem as in previous energy balance analyses, the current study focuses on the boundary condition problem, with further consideration of climate system memory and deep-ocean heat transport, which is more applicable to the climate. Along with surface temperature measurements of the present climate, the climate feedbacks are obtained based on the constraints of the TOA radiation imbalance. Compared to the feedback factor of 3.3 W/sq m/K for a neutral climate system, the estimated feedback factor for the current climate system ranges from -1.3 to -1.0 W/sq m/K with an uncertainty of +/-0.26 W/sq m/K. That is, a positive climate feedback is found because of the measured TOA net radiative heating (0.85 W/sq m) of the climate system. The uncertainty is caused by uncertainties in the climate memory length. The estimated time constant of the climate is large (70 to approx. 120 years), implying that the climate was not in an equilibrium state under the increasing CO2 forcing of the last century.
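The boundary-condition feedback estimate rests on a zero-dimensional energy balance, N = F - λΔT. A minimal sketch with purely illustrative input numbers (the study's memory-resolved analysis is much richer):

```python
def feedback_parameter(forcing_wm2, toa_imbalance_wm2, delta_t_k):
    """Zero-dimensional energy balance: N = F - lambda * dT, so the
    net feedback parameter is lambda = (F - N) / dT, in W m^-2 K^-1.
    Values below the ~3.3 W m^-2 K^-1 neutral (Planck-only) response
    imply net positive feedbacks."""
    return (forcing_wm2 - toa_imbalance_wm2) / delta_t_k

# Illustrative numbers, not those of the study: 1.6 W/m^2 forcing,
# 0.85 W/m^2 measured TOA imbalance, 0.6 K observed warming.
lam = feedback_parameter(1.6, 0.85, 0.6)
```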
NASA Astrophysics Data System (ADS)
Dittes, Beatrice; Kaiser, Maria; Špačková, Olga; Rieger, Wolfgang; Disse, Markus; Straub, Daniel
2018-05-01
Planning authorities are faced with a range of questions when planning flood protection measures: is the existing protection adequate for current and future demands or should it be extended? How will flood patterns change in the future? How should the uncertainty pertaining to this influence the planning decision, e.g., by delaying planning or including a safety margin? Is it sufficient to follow a protection criterion (e.g., to protect from the 100-year flood) or should the planning be conducted in a risk-based way? How important is it for flood protection planning to accurately estimate flood frequency (changes), costs and damage? These are questions that we address for a medium-sized pre-alpine catchment in southern Germany, using a sequential Bayesian decision-making framework that quantitatively addresses the full spectrum of uncertainty. We evaluate different flood protection systems considered by local agencies in a test study catchment. Despite large uncertainties in damage, cost and climate, the recommendation for the most conservative approach is robust. This demonstrates the feasibility of making robust decisions under large uncertainty. Furthermore, by comparison to a previous study, it highlights the benefits of risk-based planning over planning flood protection to a prescribed return period.
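In its simplest form, a risk-based comparison of protection options reduces to minimizing expected total cost. A minimal sketch with hypothetical options and numbers (the study's sequential Bayesian framework additionally handles climate and damage uncertainty):

```python
def expected_total_cost(investment, scenarios):
    """Expected total cost of one protection option: upfront investment
    plus probability-weighted residual damages over the flood scenarios
    considered (probabilities here fold in the planning horizon)."""
    return investment + sum(p * damage for p, damage in scenarios)

# Hypothetical options: (name, investment, [(probability, damage), ...])
options = [
    ("status quo", 0.0, [(0.05, 3000.0)]),
    ("dike upgrade", 100.0, [(0.01, 1000.0), (0.001, 10000.0)]),
]
best = min(options, key=lambda o: expected_total_cost(o[1], o[2]))
```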
Global assessment of water policy vulnerability under uncertainty in water scarcity projections
NASA Astrophysics Data System (ADS)
Greve, Peter; Kahil, Taher; Satoh, Yusuke; Burek, Peter; Fischer, Günther; Tramberend, Sylvia; Byers, Edward; Flörke, Martina; Eisner, Stephanie; Hanasaki, Naota; Langan, Simon; Wada, Yoshihide
2017-04-01
Water scarcity is a critical environmental issue worldwide, driven by the significant increase in water extractions during the last century. In the coming decades, climate change is projected to further exacerbate water scarcity conditions in many regions around the world. At present, one important question for policy debate is the identification of water policy interventions that could address the mounting water scarcity problems. Main interventions include investing in water storage infrastructure, water transfer canals, efficient irrigation systems, and desalination plants, among many others. These interventions involve long-term planning, long-lived investments, and some irreversibility in choices that can shape the development of countries for decades. Making decisions on these water infrastructures requires anticipating the long-term environmental conditions, needs and constraints under which they will function. This brings large uncertainty into the decision-making process, for instance from demographic or economic projections. But today, climate change is bringing another layer of uncertainty that makes decisions even more complex. In this study, we assess in a probabilistic approach the uncertainty in global water scarcity projections following different socioeconomic pathways (SSPs) and climate scenarios (RCPs) within the first half of the 21st century. By utilizing an ensemble of 45 future water scarcity projections based on (i) three state-of-the-art global hydrological models (PCR-GLOBWB, H08, and WaterGAP), (ii) five climate models, and (iii) three water scenarios, we have assessed changes in water scarcity and the associated uncertainty distribution worldwide. The water scenarios used here were developed by IIASA's Water Futures and Solutions (WFaS) Initiative.
The main objective of this study is to improve the contribution of hydro-climatic information to effective policymaking by identifying spatial and temporal policy vulnerabilities under large uncertainty about the future socio-economic and climatic changes and to guide policymakers in charting a more sustainable pathway and avoiding maladaptive development pathways. The results show that water scarcity is increasing in up to 83% of all land area under a high-emission scenario (RCP 6.0-SSP3). Importantly, the range of uncertainty in projected water scarcity is increasing; in some regions by several orders of magnitude (e.g. sub-Saharan Africa, eastern Europe, Central Asia). This is further illustrated by focusing on a set of large river basins that will be subject both to substantial changes in basin-wide water scarcity and to strong increases in the overall range of uncertainty (e.g. the Niger, Indus, Yangtze). These conditions pose a significant challenge for water management options in those vulnerable basins, complicating decisions on needed investments in water supply infrastructure and other system improvements, and leading to the degradation of valuable resources such as non-renewable groundwater resources and water-dependent ecosystems. The results of this study call for careful and deliberative design of water policy interventions under a wide range of socio-economic and climate conditions.
Uncertainties of Mayak urine data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Guthrie; Vostrotin, Vadim; Vvdensky, Vladimir
2008-01-01
For internal dose calculations for the Mayak worker epidemiological study, quantitative estimates of the uncertainty of the urine measurements are necessary. Some of the data consist of measurements of 24-h urine excretion on successive days (e.g. 3 or 4 days). In a recent publication, dose calculations were done where the uncertainty of the urine measurements was estimated starting from the statistical standard deviation of these replicate measurements. This approach is straightforward and accurate when the number of replicate measurements is large; however, a Monte Carlo study showed it to be problematic for the actual number of replicate measurements (median from 3 to 4). Also, it is sometimes important to characterize the uncertainty of a single urine measurement. Therefore this alternate method has been developed. A method of parameterizing the uncertainty of Mayak urine bioassay measurements is described. The Poisson-lognormal model is assumed and data from 63 cases (1099 urine measurements in all) are used to empirically determine the lognormal normalization uncertainty, given the measurement uncertainties obtained from count quantities. The natural logarithm of the geometric standard deviation of the normalization uncertainty is found to be in the range 0.31 to 0.35, including a measurement component estimated to be 0.2.
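A simplified stand-in for estimating the lognormal spread from replicate 24-h measurements is to pool the within-case variance of log-values across cases (the paper fits a full Poisson-lognormal model; this sketch ignores the counting-statistics component):

```python
import math

def pooled_log_sd(cases):
    """Estimate sigma = ln(GSD) of a lognormal measurement uncertainty
    by pooling the within-case spread of log-measurements across many
    cases, each with a few replicate 24-h urine measurements."""
    ss, dof = 0.0, 0
    for reps in cases:
        logs = [math.log(x) for x in reps]
        mean = sum(logs) / len(logs)
        ss += sum((v - mean) ** 2 for v in logs)
        dof += len(reps) - 1
    return math.sqrt(ss / dof)
```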
New Directions: Understanding Interactions of Air Quality and Climate Change at Regional Scales
The estimates of the short-lived climate forcers’ (SLCFs) impacts and mitigation effects on the radiation balance have large uncertainty because the current global model set-ups and simulations contain simplified parameterizations and do not completely cover the full range of air...
High-precision Ru isotopic measurements by multi-collector ICP-MS.
Becker, Harry; Dalpe, Claude; Walker, Richard J
2002-06-01
Ruthenium isotopic data for a pure Aldrich ruthenium nitrate solution obtained using a Nu Plasma multi-collector inductively coupled plasma-mass spectrometer (MC-ICP-MS) show excellent agreement (better than 1 epsilon unit = 1 part in 10^4) with data obtained by other techniques for the mass range between 96 and 101 amu. External precisions are at the 0.5-1.7 epsilon level (2σ). The higher sensitivity of MC-ICP-MS compared to negative thermal ionization mass spectrometry (N-TIMS) is offset by the uncertainties introduced by the relatively large mass discrimination and by instabilities in the plasma source-ion extraction region that affect the long-term reproducibility. The large mass bias correction in ICP mass spectrometry demands that particular attention be paid to the choice of normalizing isotopes. Because of its position in the mass spectrum and the large mass bias correction, obtaining precise and accurate abundance data for 104Ru by MC-ICP-MS remains difficult. Internal and external mass bias correction schemes in this mass range may show similar shortcomings if the isotope of interest does not lie within the mass range covered by the masses used for normalization. Analyses of meteorite samples show that if isobaric interferences from Mo are sufficiently large (Ru/Mo < 10^4), uncertainties in the Mo interference correction propagate through the mass bias correction and yield inaccurate results for Ru isotopic compositions. Second-order linear corrections may be used to correct for these inaccuracies, but such results are generally less precise than N-TIMS data.
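The mass bias correction commonly used in MC-ICP-MS follows the exponential law: a fractionation exponent is derived from a normalizing isotope pair of known ratio and applied to the ratio of interest. A sketch; the masses and ratios in the example are algebraic placeholders, not recommended Ru normalization values:

```python
import math

def beta_from_norm(r_meas_norm, r_true_norm, m_num, m_den):
    """Exponential-law fractionation exponent from the normalizing
    isotope pair: R_meas = R_true * (m_num / m_den) ** beta."""
    return math.log(r_meas_norm / r_true_norm) / math.log(m_num / m_den)

def correct_ratio(r_meas, m_num, m_den, beta):
    """Apply the same beta to correct a measured isotope ratio whose
    numerator/denominator isotopes have masses m_num and m_den."""
    return r_meas / (m_num / m_den) ** beta
```

Note that beta is assumed constant across the mass range; as the abstract points out, this assumption degrades when the isotope of interest lies outside the mass range of the normalizing pair.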
1994-04-07
detector mated to wide-angle optics to continuously view a large conical volume of space in the vicinity of the orbiting spacecraft. When a debris... large uncertainties. This lack of reliable data for debris particles in the millimeter/centimeter size range presents a problem to spacecraft designers... by smaller particles (<1 mm) can be negated by the use of meteor bumpers covering the critical parts of a spacecraft, without incurring too large a
Temperature Rise and Allowable Carbon Emissions for the RCP2.6 Scenario
NASA Astrophysics Data System (ADS)
Tachiiri, K.; Hargreaves, J. C.; Annan, J. D.; Huntingford, C.; Kawamiya, M.
2012-12-01
Climate research centres are running Earth System Models (ESMs) forced by Representative Concentration Pathway (RCP) scenarios. While these GCM studies increase process-based knowledge, the number of simulations is small, making it difficult to interpret the resulting distribution of responses in a probabilistic way. We use a probabilistic framework to estimate the range of future temperature change and allowable emissions for a low-mitigation CO2 concentration pathway, RCP 2.6. Uncertainty is initially estimated by allowing modelled equilibrium climate sensitivity, aerosol forcing, and intrinsic physical and biogeochemical processes to vary within widely accepted ranges. Results are then further constrained by extensive use of contemporary measurements. Despite this, the resulting range of temperatures for RCP 2.6 remains large. The predicted peak global temperature increase from pre-industrial, reached around 2100, is 0.8-1.9 K and 1.0-1.9 K (95% range) for the unconstrained and constrained cases, respectively. Allowable emissions during the peak emission period are projected as 6.0-10.8 PgC yr-1 and 7.4-10.2 PgC yr-1 for each case. After year 2100, negative net emissions are required with a probability of some 84%, and the related uncertainty in cumulative emissions is large.
Hallifax, D; Houston, J B
2009-03-01
Mechanistic prediction of unbound drug clearance from human hepatic microsomes and hepatocytes correlates with in vivo clearance but is both systematically low (10-20% of in vivo clearance) and highly variable, based on detailed assessments of published studies. Metabolic capacity (Vmax) of commercially available human hepatic microsomes and cryopreserved hepatocytes is log-normally distributed within wide (30-150-fold) ranges; Km is also log-normally distributed and effectively independent of Vmax, implying considerable variability in intrinsic clearance. Despite wide overlap, average capacity is 2-20-fold (dependent on P450 enzyme) greater in microsomes than hepatocytes, when both are normalised (scaled to whole liver). The in vitro ranges contrast with relatively narrow ranges of clearance among clinical studies. The high in vitro variation probably reflects unresolved phenotypical variability among liver donors and practicalities in processing of human liver into in vitro systems. A significant contribution from the latter is supported by evidence of low reproducibility (several fold) of activity in cryopreserved hepatocytes and microsomes prepared from the same cells, between separate occasions of thawing of cells from the same liver. The large uncertainty which exists in human hepatic in vitro systems appears to dominate the overall uncertainty of in vitro-in vivo extrapolation, including uncertainties within scaling, modelling and drug-dependent effects. As such, any notion of quantitative prediction of clearance appears severely challenged.
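Scaling microsomal intrinsic clearance to whole-body hepatic clearance is conventionally done through the well-stirred liver model. A sketch using typical literature scaling factors (the 40 mg protein/g liver, 1500 g liver, and 1500 mL/min hepatic blood flow values are common textbook assumptions, not figures from this review):

```python
def scaled_clint(vmax, km, mg_protein_per_g_liver=40.0, g_liver=1500.0):
    """Scale microsomal intrinsic clearance CLint = Vmax / Km
    (e.g. in uL/min/mg microsomal protein) up to the whole liver."""
    clint = vmax / km                      # per mg microsomal protein
    return clint * mg_protein_per_g_liver * g_liver

def well_stirred_hepatic_cl(clint_ul_min, fu=1.0, q_h_ml_min=1500.0):
    """Well-stirred liver model: CL_H = Q * fu * CLint / (Q + fu * CLint),
    with CLint converted from uL/min to mL/min and fu the unbound
    fraction in blood."""
    clint_ml = clint_ul_min / 1000.0
    return q_h_ml_min * fu * clint_ml / (q_h_ml_min + fu * clint_ml)
```

The log-normal spread in Vmax and Km described above propagates directly into CLint, which is one way the in vitro variability dominates the extrapolated clearance.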
Seinfeld, John H; Bretherton, Christopher; Carslaw, Kenneth S; Coe, Hugh; DeMott, Paul J; Dunlea, Edward J; Feingold, Graham; Ghan, Steven; Guenther, Alex B; Kahn, Ralph; Kraucunas, Ian; Kreidenweis, Sonia M; Molina, Mario J; Nenes, Athanasios; Penner, Joyce E; Prather, Kimberly A; Ramanathan, V; Ramaswamy, Venkatachalam; Rasch, Philip J; Ravishankara, A R; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert
2016-05-24
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
Quantifying Uncertainties in Land Surface Microwave Emissivity Retrievals
NASA Technical Reports Server (NTRS)
Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko
2012-01-01
Uncertainties in the retrievals of microwave land surface emissivities were quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including SSM/I, TMI and AMSR-E, were studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both sites throughout the study period, the differences are largely attributed to systematic and random errors in the retrievals. Generally these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging 1-4% (3-12 K) over desert and 1-7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5-2% (2-6 K). In particular, at 85.0/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors of up to 10-17 K under the most severe conditions.
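With the true emissivity assumed constant, the difference between two retrieval datasets over the same site separates into a systematic part (the mean difference) and a random part (the spread of the residuals). A minimal sketch of that decomposition:

```python
def sys_and_random_diff(series_a, series_b):
    """Split the difference between two co-located retrieval time
    series into a systematic component (mean difference) and a random
    component (sample standard deviation of the residual differences)."""
    n = len(series_a)
    d = [a - b for a, b in zip(series_a, series_b)]
    mean_d = sum(d) / n
    var = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    return mean_d, var ** 0.5
```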
NASA Astrophysics Data System (ADS)
Parhi, P.; Giannini, A.; Lall, U.; Gentine, P.
2016-12-01
Assessing and managing risks posed by climate variability and change is challenging in the tropics, from both a socio-economic and a scientific perspective. Most of the vulnerable countries with limited climate adaptation capability are in the tropics. However, climate projections, particularly of extreme precipitation, are highly uncertain there. The CMIP5 (Coupled Model Intercomparison Project Phase 5) inter-model range of the sensitivity of extreme precipitation to global temperature under climate change is much larger in the tropics than in the extra-tropics, ranging from nearly 0% to greater than 30% across models (O'Gorman 2012). The uncertainty is also large in historical gauge- or satellite-based observational records. These large uncertainties in the sensitivity of tropical precipitation extremes highlight the need to better understand how tropical precipitation extremes respond to warming. We hypothesize that one factor explaining the large uncertainty is differing sensitivities during different phases of warming. We consider the 'growth' and 'mature' phases of warming in the climate variability case, typically associated with an El Niño event. In the remote tropics (away from the tropical Pacific Ocean), the response of precipitation extremes during the two phases can proceed through different pathways: i) a direct and fast-changing radiative forcing in an atmospheric column, acting top-down due to tropospheric warming, and/or ii) an indirect effect via changes in surface temperatures, acting bottom-up through surface water and energy fluxes. We also speculate that the insights gained here might be useful in interpreting the large sensitivity under climate change scenarios, since the physical mechanisms during the two warming phases of the climate variability case have some correspondence with increasing and stabilized greenhouse gas emission scenarios.
NASA Astrophysics Data System (ADS)
Montecinos, Alejandra; Davis, Sergio; Peralta, Joaquín
2018-07-01
The kinematics and dynamics of deterministic physical systems have been a foundation of our understanding of the world since Galileo and Newton. For real systems, however, uncertainty is largely present, via external forces such as friction or via a lack of precise knowledge of the initial conditions of the system. In this work we focus on the latter case and describe the use of inference methodologies in solving for the statistical properties of classical systems subject to uncertain initial conditions. In particular we describe the application of the formalism of maximum entropy (MaxEnt) inference to the problem of projectile motion, given information about the average horizontal range over many realizations. By using MaxEnt we can invert the problem and use the provided information on the average range to reduce the original uncertainty in the initial conditions. Additional insight into the initial conditions' probabilities, and the projectile path distribution itself, can also be achieved based on the value of the average horizontal range. The wide applicability of this procedure, as well as its ease of use, reveals a useful tool with which to revisit a large number of physics problems, from classrooms to frontier research.
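A discrete numerical version of this MaxEnt inversion assigns each candidate range (from sampled initial conditions) a weight proportional to exp(-λR) and tunes λ so the expected range matches the observed average. A sketch; the bisection bounds are arbitrary illustrative choices:

```python
import math

def maxent_weights(ranges, mean_range, lam_lo=-50.0, lam_hi=50.0):
    """Discrete MaxEnt inversion: given candidate horizontal ranges R_i,
    find weights p_i proportional to exp(-lam * R_i) whose expected
    range matches the observed average. lam is located by bisection,
    since the expectation is monotonically decreasing in lam."""
    def expectation(lam):
        w = [math.exp(-lam * r) for r in ranges]
        z = sum(w)
        return sum(wi * r for wi, r in zip(w, ranges)) / z

    for _ in range(200):
        mid = 0.5 * (lam_lo + lam_hi)
        if expectation(mid) > mean_range:
            lam_lo = mid
        else:
            lam_hi = mid
    lam = 0.5 * (lam_lo + lam_hi)
    w = [math.exp(-lam * r) for r in ranges]
    z = sum(w)
    return [wi / z for wi in w]
```

When the constraint equals the unweighted mean, λ converges to zero and the uniform (maximum-entropy, no-information) distribution is recovered, as expected.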
NASA Astrophysics Data System (ADS)
Ridley, D. A.; Cain, M.; Methven, J.; Arnold, S. R.
2017-07-01
We use a Lagrangian chemical transport model with a Monte Carlo approach to determine impacts of kinetic rate uncertainties on simulated concentrations of ozone, NOy and OH in a high-altitude biomass burning plume and a low-level industrial pollution plume undergoing long-range transport. Uncertainties in kinetic rate constants yield 10-12 ppbv (5th to 95th percentile) uncertainty in the ozone concentration, dominated by reactions that cycle NO and NO2, control NOx conversion to NOy reservoir species, and key reactions contributing to O3 loss (O(1D) + H2O, HO2 + O3). Our results imply that better understanding of the peroxyacetylnitrate (PAN) thermal decomposition constant is key to predicting large-scale O3 production from fire emissions and uncertainty in the reaction of NO + O3 at low temperatures is particularly important for both the anthropogenic and biomass burning plumes. The highlighted reactions serve as a useful template for targeting new laboratory experiments aimed at reducing uncertainties in our understanding of tropospheric O3 photochemistry.
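The Monte Carlo propagation of rate-constant uncertainties can be sketched generically: each rate constant is perturbed lognormally and the model output distribution is summarized by its 5th-95th percentile range. The toy steady-state ozone surrogate below is illustrative only and is not the paper's chemistry scheme:

```python
import math
import random

def mc_rate_uncertainty(base_rates, log_uncertainty, model, n=5000, seed=1):
    """Propagate kinetic rate uncertainties by Monte Carlo: each rate
    constant is multiplied by a lognormal factor (sigma = ln of its
    uncertainty factor) and the model is re-evaluated; returns the
    5th and 95th percentiles of the output."""
    random.seed(seed)
    outputs = []
    for _ in range(n):
        rates = {k: v * math.exp(random.gauss(0.0, log_uncertainty[k]))
                 for k, v in base_rates.items()}
        outputs.append(model(rates))
    outputs.sort()
    return outputs[int(0.05 * n)], outputs[int(0.95 * n)]

# Toy photostationary-state surrogate: O3 ~ j_NO2 * [NO2] / (k_NO+O3 * [NO]).
toy = lambda r: r["j_no2"] * 10.0 / (r["k_no_o3"] * 1.0)
lo, hi = mc_rate_uncertainty({"j_no2": 8e-3, "k_no_o3": 1.8e-14},
                             {"j_no2": 0.1, "k_no_o3": 0.2}, toy)
```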
ERIC Educational Resources Information Center
Kenyon, Peter
Over the last 2 decades, the loss of population and businesses in many small, inland, and remote Australian rural communities has intensified, largely because of the stress and uncertainty of volatile world commodity markets. This manual presents a range of survival and revival strategies that some communities have used to build resilient…
Quantifying chemical uncertainties in simulations of the ISM
NASA Astrophysics Data System (ADS)
Glover, Simon
2018-06-01
The ever-increasing power of large parallel computers now makes it possible to include increasingly sophisticated chemical models in three-dimensional simulations of the interstellar medium (ISM). This allows us to study the role that chemistry plays in the thermal balance of a realistically structured, turbulent ISM, as well as enabling us to generate detailed synthetic observations of important atomic or molecular tracers. However, one major constraint on the accuracy of these models is the accuracy with which the input chemical rate coefficients are known. Uncertainties in these chemical rate coefficients inevitably introduce uncertainties into the model predictions. In this talk, I will review some of the methods we can use to quantify these uncertainties and to identify the key reactions where improved chemical data are most urgently required. I will also discuss a few examples, ranging from the local ISM to the high-redshift universe.
Kolstad, Erik W.; Johansson, Kjell Arne
2011-01-01
Background: Climate change is expected to have large impacts on health at low latitudes, where droughts and malnutrition, diarrhea, and malaria are projected to increase. Objectives: The main objective of this study was to present a method for assessing a range of plausible health impacts of climate change while handling uncertainties in an unambiguous manner. We illustrate this method by quantifying the impacts of projected regional warming on diarrhea in this century. Methods: We combined a range of linear regression coefficients to compute projections of future climate change-induced increases in diarrhea, using the results from five empirical studies and a 19-member climate model ensemble for which future greenhouse gas emissions were prescribed. Six geographical regions were analyzed. Results: The model ensemble projected temperature increases of up to 4°C over land in the tropics and subtropics by the end of this century. The associated mean projected increases of relative risk of diarrhea in the six study regions were 8-11% (with SDs of 3-5%) by 2010-2039 and 22-29% (SDs of 9-12%) by 2070-2099. Conclusions: Even our most conservative estimates indicate substantial impacts from climate change on the incidence of diarrhea. Nevertheless, our main conclusion is that large uncertainties are associated with future projections of diarrhea and climate change. We believe that these uncertainties can be attributed primarily to the sparsity of empirical climate-health data. Our results therefore highlight the need for empirical data in the cross section between climate and human health. PMID:20929684
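Combining an exposure-response coefficient with ensemble temperature projections can be sketched as follows; the log-linear response form and all numbers in the example are illustrative, not values taken from the five underlying studies:

```python
def projected_relative_risk(pct_per_degc, delta_t):
    """Relative risk from a compounding exposure-response relation:
    a coefficient of b% additional risk per degree C of warming,
    applied to a projected temperature change delta_t."""
    return (1.0 + pct_per_degc / 100.0) ** delta_t

def ensemble_stats(values):
    """Mean and sample standard deviation across ensemble members,
    the summary reported per region and period in the abstract."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return mean, sd
```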
Validity of Willingness to Pay Measures under Preference Uncertainty.
Braun, Carola; Rehdanz, Katrin; Schmidt, Ulrich
2016-01-01
Recent studies in the marketing literature developed a new method for eliciting willingness to pay (WTP) with an open-ended elicitation format: the Range-WTP method. In contrast to the traditional approach of eliciting WTP as a single value (Point-WTP), Range-WTP explicitly allows for preference uncertainty in responses. The aim of this paper is to apply Range-WTP to the domain of contingent valuation and to test for its theoretical validity and robustness in comparison to the Point-WTP. Using data from two novel large-scale surveys on the perception of solar radiation management (SRM), a little-known technique for counteracting climate change, we compare the performance of both methods in the field. In addition to the theoretical validity (i.e. the degree to which WTP values are consistent with theoretical expectations), we analyse the test-retest reliability and stability of our results over time. Our evidence suggests that the Range-WTP method clearly outperforms the Point-WTP method.
Research on effect of rough surface on FMCW laser radar range accuracy
NASA Astrophysics Data System (ADS)
Tao, Huirong
2018-03-01
Large-scale measurement systems for non-cooperative targets based on frequency-modulated continuous-wave (FMCW) laser detection and ranging technology have broad application prospects, since they allow automated measurement without cooperative targets. However, the complexity and diversity of the characteristics of the measured surface directly affect the measurement accuracy. First, the range accuracy of an FMCW laser radar was analyzed theoretically, and the relationship between surface reflectivity and accuracy was obtained. Then, to verify the effect of surface reflectance on ranging accuracy, a standard tool ball and three standard roughness samples were measured at distances of 7 m to 24 m, and the uncertainty of each target was obtained. The results show that the measurement accuracy increases with surface reflectivity, and good agreement was obtained between the theoretical analysis and measurements from rough surfaces. In addition, when the laser spot diameter is smaller than the surface correlation length, a multi-point averaged measurement can reduce the measurement uncertainty. The experimental results show that this method is feasible.
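The reflectivity-accuracy relationship and the benefit of multi-point averaging follow familiar scaling laws. A sketch, assuming a Cramer-Rao-type SNR scaling for the single-shot precision and independent speckle between spots (neither formula is taken from the paper itself):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_sigma(bandwidth_hz, snr_linear):
    """Approximate single-shot FMCW range precision:
    sigma_R ~ c / (2 * B * sqrt(SNR)). Lower surface reflectivity
    means lower SNR and hence a larger sigma_R."""
    return C / (2.0 * bandwidth_hz * math.sqrt(snr_linear))

def averaged_sigma(sigma_single, n_points):
    """Averaging N independent spots on a rough surface reduces the
    speckle-dominated uncertainty by a factor of sqrt(N)."""
    return sigma_single / math.sqrt(n_points)
```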
Sensitivity to Uncertainty in Asteroid Impact Risk Assessment
NASA Astrophysics Data System (ADS)
Mathias, D.; Wheeler, L.; Prabhu, D. K.; Aftosmis, M.; Dotson, J.; Robertson, D. K.
2015-12-01
The Engineering Risk Assessment (ERA) team at NASA Ames Research Center is developing a physics-based impact risk model for probabilistically assessing threats from potential asteroid impacts on Earth. The model integrates probabilistic sampling of asteroid parameter ranges with physics-based analyses of entry, breakup, and impact to estimate damage areas and casualties from various impact scenarios. Assessing these threats is a highly coupled, dynamic problem involving significant uncertainties in the range of expected asteroid characteristics, how those characteristics may affect the level of damage, and the fidelity of various modeling approaches and assumptions. The presented model is used to explore the sensitivity of impact risk estimates to these uncertainties in order to gain insight into what additional data or modeling refinements are most important for producing effective, meaningful risk assessments. In the extreme cases of very small or very large impacts, the results are generally insensitive to many of the characterization and modeling assumptions. However, the nature of the sensitivity can change across moderate-sized impacts. Results will focus on the value of additional information in this critical, mid-size range, and how this additional data can support more robust mitigation decisions.
Automation and uncertainty analysis of a method for in-vivo range verification in particle therapy.
Frey, K; Unholtz, D; Bauer, J; Debus, J; Min, C H; Bortfeld, T; Paganetti, H; Parodi, K
2014-10-07
We introduce the automation of the range difference calculation deduced from particle-irradiation induced β(+)-activity distributions with the so-called most-likely-shift approach, and evaluate its reliability via the monitoring of algorithm- and patient-specific uncertainty factors. The calculation of the range deviation is based on the minimization of the absolute profile differences in the distal part of two activity depth profiles shifted against each other. Depending on the workflow of positron emission tomography (PET)-based range verification, the two profiles under evaluation can correspond to measured and simulated distributions, or only to measured data from different treatment sessions. In comparison to previous work, the proposed approach includes an automated identification of the distal region of interest for each pair of PET depth profiles, taking the planned dose distribution into account, and results in an optimal shift distance. Moreover, it introduces an estimate of the uncertainty associated with the identified shift, which is then used as a weighting factor to 'red flag' problematic large range differences. Furthermore, additional patient-specific uncertainty factors are calculated using available computed tomography (CT) data to support the range analysis. The performance of the new method for in-vivo treatment verification in clinical routine is investigated with in-room PET images for proton therapy as well as with offline PET images for proton and carbon ion therapy. Measured PET activity distributions are compared with predictions obtained by Monte Carlo simulations or with measurements from previous treatment fractions. For this purpose, a total of 15 patient datasets were analyzed, acquired at Massachusetts General Hospital and the Heidelberg Ion-Beam Therapy Center with in-room PET and offline PET/CT scanners, respectively.
Calculated range differences between the compared activity distributions are reported in a 2D map in beam's-eye view. In comparison to previously proposed approaches, the new most-likely-shift method gives more robust results for in-vivo range assessment from strongly varying PET distributions caused by differing patient geometries, ion beam species, beam delivery techniques, PET imaging concepts, and counting statistics. The additional visualization of the uncertainties and the dedicated weighting strategy aid understanding of the reliability of observed range differences and of the complexity in predicting activity distributions. The proposed method promises to be a feasible technique for routine clinical PET-based range verification.
Uncertainties in global aerosols and climate effects due to biofuel emissions
NASA Astrophysics Data System (ADS)
Kodros, J. K.; Scott, C. E.; Farina, S. C.; Lee, Y. H.; L'Orange, C.; Volckens, J.; Pierce, J. R.
2015-04-01
Aerosol emissions from biofuel combustion impact both health and climate; however, while reducing emissions through improvements to combustion technologies will improve health, the net effect on climate is largely unconstrained. In this study, we examine sensitivities in global aerosol concentration, direct radiative climate effect, and cloud-albedo aerosol indirect climate effect to uncertainties in biofuel emission factors, optical mixing state, and model nucleation and background SOA. We use the Goddard Earth Observing System global chemical-transport model (GEOS-Chem) with TwO Moment Aerosol Sectional (TOMAS) microphysics. The emission factors considered include amount, composition, size, and hygroscopicity, as well as optical mixing-state properties. We also evaluate emissions from domestic coal use, which is not a biofuel but is also commonly burned in homes. We estimate the direct radiative effect assuming different mixing states (internal, core-shell, and external) with and without absorptive organic aerosol (brown carbon). We find the global-mean direct radiative effect of biofuel emissions ranges from -0.02 to +0.06 W m-2 across all simulation/mixing-state combinations, with regional effects in source regions ranging from -0.2 to +1.2 W m-2. The global-mean cloud-albedo aerosol indirect effect ranges from +0.01 to -0.02 W m-2, with regional effects in source regions ranging from -1.0 to -0.05 W m-2. The direct radiative effect is strongly dependent on uncertainties in emissions mass, composition, emissions aerosol size distributions, and assumed optical mixing state, while the indirect effect is dependent on the emissions mass, emissions aerosol size distribution, and the choice of model nucleation and secondary organic aerosol schemes. The sign and magnitude of these effects have a strong regional dependence.
We conclude that the climate effects of biofuel aerosols are largely unconstrained, and the overall sign of the aerosol effects is unclear due to uncertainties in model inputs. This uncertainty limits our ability to introduce mitigation strategies aimed at reducing biofuel black carbon emissions in order to counter warming effects from greenhouse-gases. To better understand the climate impact of particle emissions from biofuel combustion, we recommend field/laboratory measurements to narrow constraints on: (1) emissions mass, (2) emission size distribution, (3) mixing state, and (4) ratio of black carbon to organic aerosol.
Uncertainties in global aerosols and climate effects due to biofuel emissions
NASA Astrophysics Data System (ADS)
Kodros, J. K.; Scott, C. E.; Farina, S. C.; Lee, Y. H.; L'Orange, C.; Volckens, J.; Pierce, J. R.
2015-08-01
Aerosol emissions from biofuel combustion impact both health and climate; however, while reducing emissions through improvements to combustion technologies will improve health, the net effect on climate is largely unconstrained. In this study, we examine sensitivities in global aerosol concentration, direct radiative climate effect, and cloud-albedo aerosol indirect climate effect to uncertainties in biofuel emission factors, optical mixing state, and model nucleation and background secondary organic aerosol (SOA). We use the Goddard Earth Observing System global chemical-transport model (GEOS-Chem) with TwO Moment Aerosol Sectional (TOMAS) microphysics. The emission factors include amount, composition, size, and hygroscopicity, as well as optical mixing-state properties. We also evaluate emissions from domestic coal use, which is not a biofuel but is also commonly burned in homes. We estimate the direct radiative effect assuming different mixing states (homogeneous, core-shell, and external) with and without absorptive organic aerosol (brown carbon). We find the global-mean direct radiative effect of biofuel emissions ranges from -0.02 to +0.06 W m-2 across all simulation/mixing-state combinations, with regional effects in source regions ranging from -0.2 to +0.8 W m-2. The global-mean cloud-albedo aerosol indirect effect (AIE) ranges from +0.01 to -0.02 W m-2, with regional effects in source regions ranging from -1.0 to -0.05 W m-2. The direct radiative effect is strongly dependent on uncertainties in emissions mass, composition, emissions aerosol size distributions, and assumed optical mixing state, while the indirect effect is dependent on the emissions mass, emissions aerosol size distribution, and the choice of model nucleation and secondary organic aerosol schemes. The sign and magnitude of these effects have a strong regional dependence.
We conclude that the climate effects of biofuel aerosols are largely unconstrained, and the overall sign of the aerosol effects is unclear due to uncertainties in model inputs. This uncertainty limits our ability to introduce mitigation strategies aimed at reducing biofuel black carbon emissions in order to counter warming effects from greenhouse gases. To better understand the climate impact of particle emissions from biofuel combustion, we recommend field/laboratory measurements to narrow constraints on (1) emissions mass, (2) emission size distribution, (3) mixing state, and (4) ratio of black carbon to organic aerosol.
The effect of short-range spatial variability on soil sampling uncertainty.
Van der Perk, Marcel; de Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Fajgelj, Ales; Sansone, Umberto; Jeran, Zvonka; Jaćimović, Radojko
2008-11-01
This paper quantifies the soil sampling uncertainty arising from the short-range spatial variability of elemental concentrations in the topsoils of agricultural, semi-natural, and contaminated environments. For the agricultural site, the relative standard sampling uncertainty ranges between 1% and 5.5%. For the semi-natural area, the sampling uncertainties are 2-4 times larger than in the agricultural area. The contaminated site exhibited significant short-range spatial variability in elemental composition, which resulted in sampling uncertainties of 20-30%.
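Relative standard sampling uncertainty of this kind is commonly estimated from duplicate field samples with the classical duplicate-difference estimator; a generic stdlib sketch (an illustration of the standard approach, not necessarily the exact estimator used in the paper):

```python
import math
import statistics

def relative_sampling_uncertainty(duplicate_pairs):
    """Relative standard sampling uncertainty (%) from duplicate field
    samples, via the duplicate-difference estimator
        s = sqrt( sum (x1 - x2)^2 / (2 n) ),
    expressed relative to the grand mean of all measurements."""
    n = len(duplicate_pairs)
    s = math.sqrt(sum((a - b) ** 2 for a, b in duplicate_pairs) / (2 * n))
    grand_mean = statistics.fmean(v for pair in duplicate_pairs for v in pair)
    return 100.0 * s / grand_mean
```

Three duplicate pairs of a concentration near 100 mg/kg differing by ~2 mg/kg, for instance, yield a relative sampling uncertainty of about 1.4%, at the low end of the agricultural-site range reported above.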
Impact of inherent meteorology uncertainty on air quality ...
It is well established that there are a number of different classifications and sources of uncertainties in environmental modeling systems. Air quality models rely on two key inputs, namely, meteorology and emissions. When using air quality models for decision making, it is important to understand how uncertainties in these inputs affect the simulated concentrations. Ensembles are one method to explore how uncertainty in meteorology affects air pollution concentrations. Most studies explore this uncertainty by running different meteorological models or the same model with different physics options, and in some cases combinations of different meteorological and air quality models. While these have been shown to be useful techniques in some cases, we present a technique that leverages the initial condition perturbations of a weather forecast ensemble, namely, the Short-Range Ensemble Forecast system, to drive four-dimensional data assimilation in the Weather Research and Forecasting (WRF)-Community Multiscale Air Quality (CMAQ) model, with a key focus on the response of ozone chemistry and transport. Results confirm that a sizable spread in WRF solutions, including common weather variables of temperature, wind, boundary layer depth, clouds, and radiation, can cause a relatively large range of ozone-mixing ratios. Pollutant transport can be altered by hundreds of kilometers over several days, and ozone-mixing ratios of the ensemble can vary by as much as 10–20 ppb.
KINEMATIC DISTANCES OF GALACTIC PLANETARY NEBULAE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, A. Y.; Tian, W. W.; Zhu, H.
2016-03-15
We construct H i absorption spectra for 18 planetary nebulae (PNs) and their background sources using data from the International Galactic Plane Survey. We estimate the kinematic distances of these PNs, 15 of which have kinematic distances obtained for the first time. The distance uncertainties of 13 PNs range from 10% to 50%, a significant improvement, with uncertainties a factor of two to three smaller than most previous distance measurements. We confirm that PN G030.2−00.1 is not a PN because of the large distance found here.
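Kinematic distances of this sort follow from a Galactic rotation model applied to the measured radial velocity. A textbook sketch assuming a flat rotation curve with IAU-like constants (the paper's actual rotation model and constants may differ):

```python
import math

def kinematic_distances(v_lsr_kms, gal_lon_deg, r0_kpc=8.5, v0_kms=220.0):
    """Near and far kinematic distances (kpc) for a source in the Galactic
    plane, assuming a flat rotation curve V(R) = V0, so that
        v_lsr = V0 * sin(l) * (R0 / R - 1).
    Returns None if the velocity exceeds the tangent-point velocity."""
    l = math.radians(gal_lon_deg)
    # Galactocentric radius from the radial-velocity equation above.
    r = r0_kpc * v0_kms * math.sin(l) / (v_lsr_kms + v0_kms * math.sin(l))
    disc = r * r - (r0_kpc * math.sin(l)) ** 2
    if disc < 0:
        return None
    root = math.sqrt(disc)
    # Both distances along the line of sight intersect the ring of radius r.
    return r0_kpc * math.cos(l) - root, r0_kpc * math.cos(l) + root
```

The two-valued solution inside the solar circle is the well-known near/far distance ambiguity that H i absorption spectra (as in the paper) are used to resolve.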
Satellite Re-entry Modeling and Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Horsley, M.
2012-09-01
LEO trajectory modeling is a fundamental aerospace capability with applications in many areas, such as maneuver planning, sensor scheduling, re-entry prediction, collision avoidance, risk analysis, and formation flying. Somewhat surprisingly, modeling the trajectory of an object in low Earth orbit is still a challenging task. This is primarily due to the large uncertainty in upper atmospheric density, about 15-20% (1-sigma) for most thermosphere models. Other contributions come from our inability to precisely model future solar and geomagnetic activity, the potentially unknown shape, material construction, and attitude history of the satellite, and intermittent, noisy tracking data. Current methods to predict a satellite's re-entry trajectory typically involve making a single prediction, with the uncertainty dealt with in an ad hoc manner, usually based on past experience. However, because of the extreme speed of a LEO satellite, even small uncertainties in the re-entry time translate into a very large uncertainty in the location of the re-entry event. Currently, most methods simply update the re-entry estimate on a regular basis. This results in a wide range of estimates that are literally spread over the entire globe. With no understanding of the underlying distribution of potential impact points, the sequence of impact points predicted by the current methodology is largely useless until just a few hours before re-entry. This paper will discuss the development of a set of High Performance Computing (HPC)-based capabilities to support near real-time quantification of the uncertainty inherent in uncontrolled satellite re-entries. Appropriate management of the uncertainties is essential for a rigorous treatment of the re-entry/LEO trajectory problem. The development of HPC-based tools for re-entry analysis is important because it will allow a rigorous and robust approach to risk assessment by decision makers in an operational setting.
Uncertainty quantification results from the recent uncontrolled re-entry of the Phobos-Grunt satellite will be presented and discussed. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Uncertainty information in climate data records from Earth observation
NASA Astrophysics Data System (ADS)
Merchant, Christopher J.; Paul, Frank; Popp, Thomas; Ablain, Michael; Bontemps, Sophie; Defourny, Pierre; Hollmann, Rainer; Lavergne, Thomas; Laeng, Alexandra; de Leeuw, Gerrit; Mittaz, Jonathan; Poulsen, Caroline; Povey, Adam C.; Reuter, Max; Sathyendranath, Shubha; Sandven, Stein; Sofieva, Viktoria F.; Wagner, Wolfgang
2017-07-01
The question of how to derive and present uncertainty information in climate data records (CDRs) has received sustained attention within the European Space Agency Climate Change Initiative (CCI), a programme to generate CDRs addressing a range of essential climate variables (ECVs) from satellite data. Here, we review the nature, mathematics, practicalities, and communication of uncertainty information in CDRs from Earth observations. This review paper argues that CDRs derived from satellite-based Earth observation (EO) should include rigorous uncertainty information to support the application of the data in contexts such as policy, climate modelling, and numerical weather prediction reanalysis. Uncertainty, error, and quality are distinct concepts, and the case is made that CDR products should follow international metrological norms for presenting quantified uncertainty. As a baseline for good practice, total standard uncertainty should be quantified per datum in a CDR, meaning that uncertainty estimates should clearly discriminate more and less certain data. In this case, flags for data quality should not duplicate uncertainty information, but instead describe complementary information (such as the confidence in the uncertainty estimate provided or indicators of conditions violating the retrieval assumptions). The paper discusses the many sources of error in CDRs, noting that different errors may be correlated across a wide range of timescales and space scales. Error effects that contribute negligibly to the total uncertainty in a single-satellite measurement can be the dominant sources of uncertainty in a CDR on the large space scales and long timescales that are highly relevant for some climate applications. For this reason, identifying and characterizing the relevant sources of uncertainty for CDRs is particularly challenging. 
The characterization of uncertainty caused by a given error effect involves assessing the magnitude of the effect, the shape of the error distribution, and the propagation of the uncertainty to the geophysical variable in the CDR accounting for its error correlation properties. Uncertainty estimates can and should be validated as part of CDR validation when possible. These principles are quite general, but the approach to providing uncertainty information appropriate to different ECVs is varied, as confirmed by a brief review across different ECVs in the CCI. User requirements for uncertainty information can conflict with each other, and a variety of solutions and compromises are possible. The concept of an ensemble CDR as a simple means of communicating rigorous uncertainty information to users is discussed. Our review concludes by providing eight concrete recommendations for good practice in providing and communicating uncertainty in EO-based climate data records.
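One reason correlated errors dominate CDR uncertainty on large space scales and long timescales can be seen from the standard uncertainty of an average of n measurements with a common pairwise error correlation rho: it floors at sigma*sqrt(rho) rather than shrinking toward zero. A minimal illustration (the equal-correlation model is a simplifying assumption, not a CCI formula):

```python
import math

def mean_uncertainty(sigma, n, rho):
    """Standard uncertainty of the mean of n measurements, each with
    standard uncertainty sigma and a common pairwise error correlation rho:
        u = sigma * sqrt((1 + (n - 1) * rho) / n).
    For rho = 0 this is the usual sigma / sqrt(n); for rho > 0 it
    asymptotes to sigma * sqrt(rho) as n grows."""
    return sigma * math.sqrt((1 + (n - 1) * rho) / n)
```

So an error effect that is negligible in a single measurement but correlated across a million data points (rho = 0.25, say) leaves the large-scale average with half the single-measurement uncertainty, no matter how many data are averaged.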
NASA Astrophysics Data System (ADS)
Su, X.; Takahashi, K.; Fujimori, S.; Hasegawa, T.; Tanaka, K.; Shiogama, H.; Emori, S.; LIU, J.; Hanasaki, N.; Hijioka, Y.; Masui, T.
2017-12-01
Large uncertainty exists in temperature projections, with contributions from the carbon cycle, the climate system, and aerosols. For integrated assessment models (IAMs) such as DICE, FUND, and PAGE, however, the scientific uncertainties rely mainly on the distribution of (equilibrium) climate sensitivity. This study aims at evaluating emission pathways that limit temperature increase below 2.0 °C or 1.5 °C after 2100 considering scientific uncertainties, and at exploring how socioeconomic indicators are affected by such uncertainties. We use a stochastic version of SCM4OPT, with uncertainty measured by considering alternative ranges of key parameters. Three climate cases, namely (i) the base case of SSP2, (ii) limiting temperature increase below 2.0 °C after 2100, and (iii) limiting temperature increase below 1.5 °C after 2100, and three types of probabilities, (i) >66% probability or likely, (ii) >50% probability or more likely than not, and (iii) the mean of the probability distribution, are considered in the study. The results show that (i) for the 2.0 °C case, the likely CO2 reduction rate in 2100 ranges from 75.5% to 102.4%, with a mean value of 88.1%, and from 93.0% to 113.1% (mean 102.5%) for the 1.5 °C case; (ii) a likely forcing range of 2.7-3.9 W m-2 is found for the 2.0 °C case due to scientific uncertainty, and 1.9-3.1 W m-2 for the 1.5 °C case; (iii) carbon prices within the 50% confidence interval may differ by a factor of 3 for both the 2.0 °C and 1.5 °C cases; and (iv) abatement costs within the 50% confidence interval may differ by a factor of 4 for both cases. Nine C4MIP carbon cycle models and nineteen CMIP3 AOGCMs are used to account for the scientific uncertainties, following MAGICC 6.0. These uncertainties result in a likely radiative forcing range of 6.1-7.5 W m-2 and a likely temperature increase of 3.1-4.5 °C in 2100 for the base case of SSP2.
If we evaluate the 2 ºC target by limiting the temperature increase, a likely difference of up to 20.7 GtCO2-eq greenhouse gases (GHGs) in 2100 will occur in the assessment, or 14.4 GtCO2-eq GHGs difference for the 1.5 ºC case. The scientific uncertainties have significant impacts on evaluating costs of climate change and an appropriate representation of such uncertainties is important in the socioeconomic assessment.
Quantifying and Qualifying USGS ShakeMap Uncertainty
Wald, David J.; Lin, Kuo-Wan; Quitoriano, Vincent
2008-01-01
We describe algorithms for quantifying and qualifying uncertainties associated with USGS ShakeMap ground motions. The uncertainty values computed consist of latitude/longitude grid-based multiplicative factors that scale the standard deviation associated with the ground motion prediction equation (GMPE) used within the ShakeMap algorithm for estimating ground motions. The resulting grid-based 'uncertainty map' is essential for evaluation of losses derived using ShakeMaps as the hazard input. For ShakeMap, ground motion uncertainty at any point is dominated by two main factors: (i) the influence of any proximal ground motion observations, and (ii) the uncertainty of estimating ground motions from the GMPE, most notably, elevated uncertainty due to initial, unconstrained source rupture geometry. The uncertainty is highest for larger magnitude earthquakes when source finiteness is not yet constrained and, hence, the distance to rupture is also uncertain. In addition to a spatially dependent, quantitative assessment, many users may prefer a simple, qualitative grading for the entire ShakeMap. We developed a grading scale that allows one to quickly gauge the appropriate level of confidence when using rapidly produced ShakeMaps as part of the post-earthquake decision-making process or for qualitative assessments of archived or historical earthquake ShakeMaps. We describe an uncertainty letter grading ('A' through 'F', for high to poor quality, respectively) based on the uncertainty map. A middle-range ('C') grade corresponds to a ShakeMap for a moderate-magnitude earthquake suitably represented with a point-source location. Lower grades 'D' and 'F' are assigned for larger events (M>6) where finite-source dimensions are not yet constrained. The addition of ground motion observations (or observed macroseismic intensities) reduces uncertainties over data-constrained portions of the map.
Higher grades ('A' and 'B') correspond to ShakeMaps with constrained fault dimensions and numerous stations, depending on the density of station/data coverage. Due to these dependencies, the letter grade can change with subsequent ShakeMap revisions if more data are added or when finite-faulting dimensions are added. We emphasize that the greatest uncertainties are associated with unconstrained source dimensions for large earthquakes where the distance term in the GMPE is most uncertain; this uncertainty thus scales with magnitude (and consequently rupture dimension). Since this distance uncertainty produces potentially large uncertainties in ShakeMap ground-motion estimates, this factor dominates over compensating constraints for all but the most dense station distributions.
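A letter grading of this kind reduces to a threshold lookup on a map-mean uncertainty factor. The sketch below is purely illustrative: the threshold values are invented and do not reproduce the actual USGS calibration:

```python
def shakemap_grade(mean_sigma_factor):
    """Map a ShakeMap-mean multiplicative uncertainty factor (scaling the
    GMPE standard deviation) to a letter grade 'A' (high quality) through
    'F' (poor quality). Thresholds here are hypothetical placeholders."""
    thresholds = [(0.8, "A"), (0.95, "B"), (1.1, "C"), (1.25, "D")]
    for limit, grade in thresholds:
        if mean_sigma_factor <= limit:
            return grade
    return "F"
```

As in the scheme described above, adding stations or constrained finite-fault dimensions lowers the mean factor and can move a map up a grade on subsequent revisions.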
USDA-ARS?s Scientific Manuscript database
The parameters used for passive soil moisture retrieval algorithms reported in the literature encompass a wide range, leading to a large uncertainty in the applicability of those values. This paper presents an evaluation of the proposed parameterizations of the tau-omega model from 1) SMAP ATBD for ...
Cloud Feedbacks on Climate: A Challenging Scientific Problem
Norris, Joe
2017-12-22
One reason it has been difficult to develop suitable social and economic policies to address global climate change is that projected global warming during the coming century has a large uncertainty range. The primary physical cause of this large uncertainty range is lack of understanding of the magnitude and even sign of cloud feedbacks on the climate system. If Earth's cloudiness responded to global warming by reflecting more solar radiation back to space or allowing more terrestrial radiation to be emitted to space, this would mitigate the warming produced by increased anthropogenic greenhouse gases. Contrastingly, a cloud response that reduced solar reflection or terrestrial emission would exacerbate anthropogenic greenhouse warming. It is likely that a mixture of responses will occur depending on cloud type and meteorological regime, and at present, we do not know what the net effect will be. This presentation will explain why cloud feedbacks have been a challenging scientific problem from the perspective of theory, modeling, and observations. Recent research results on observed multidecadal cloud-atmosphere-ocean variability over the Pacific Ocean will also be shown, along with suggestions for future research.
NASA Astrophysics Data System (ADS)
Brekke, L. D.; Clark, M. P.; Gutmann, E. D.; Wood, A.; Mizukami, N.; Mendoza, P. A.; Rasmussen, R.; Ikeda, K.; Pruitt, T.; Arnold, J. R.; Rajagopalan, B.
2015-12-01
Adaptation planning assessments often rely on single methods for climate projection downscaling and hydrologic analysis, do not reveal uncertainties from associated method choices, and thus likely produce overly confident decision-support information. Recent work by the authors has highlighted this issue by identifying strengths and weaknesses of widely applied methods for downscaling climate projections and assessing hydrologic impacts. This work has shown that many of the methodological choices made can alter the magnitude, and even the sign, of the climate change signal. Such results motivate consideration of both sources of method uncertainty within an impacts assessment. Consequently, the authors have pursued development of improved downscaling techniques spanning a range of method classes (quasi-dynamical and circulation-based statistical methods) and developed approaches to better account for hydrologic analysis uncertainty (multi-model; regional parameter estimation under forcing uncertainty). This presentation summarizes progress in the development of these methods, as well as implications of pursuing these developments. First, having access to these methods creates an opportunity to better reveal impacts uncertainty through multi-method ensembles, expanding on present-practice ensembles which are often based only on emissions scenarios and GCM choices. Second, such expansion of uncertainty treatment combined with an ever-expanding wealth of global climate projection information creates a challenge of how to use such a large ensemble for local adaptation planning. To address this challenge, the authors are evaluating methods for ensemble selection (considering the principles of fidelity, diversity, and sensitivity) that are compatible with present-practice approaches for abstracting change scenarios from any "ensemble of opportunity". Early examples from this development will also be presented.
Chang, Xiaofeng; Wang, Shiping; Cui, Shujuan; Zhu, Xiaoxue; Luo, Caiyun; Zhang, Zhenhua; Wilkes, Andreas
2014-01-01
Alpine grassland of the Tibetan Plateau is an important component of global soil organic carbon (SOC) stocks, but insufficient field observations and large spatial heterogeneity lead to great uncertainty in their estimation. In the Three Rivers Source Region (TRSR), alpine grasslands account for more than 75% of the total area. However, the regional carbon (C) stock estimate and its uncertainty have seldom been tested. Here we quantified the regional SOC stock and its uncertainty using 298 soil profiles surveyed from 35 sites across the TRSR during 2006–2008. We show that the upper soil (0–30 cm depth) in alpine grasslands of the TRSR stores 2.03 Pg C, with a 95% confidence interval ranging from 1.25 to 2.81 Pg C. Alpine meadow soils comprised 73% (i.e. 1.48 Pg C) of the regional SOC estimate, but had the greatest uncertainty, at 51%. The statistical power to detect a deviation of 10% uncertainty in the grassland C stock was less than 0.50, and the sample size required to detect this deviation at a power of 90% was about 6–7 times the number of sites surveyed. Comparison of our observed SOC density with the corresponding values from the dataset of Yang et al. indicates that the two datasets are comparable. The combined dataset did not reduce the uncertainty in the estimate of the regional grassland soil C stock, mainly because sampling sites underrepresent large areas with poor accessibility. Further research to improve the regional SOC stock estimate should optimize the sampling strategy by considering both the number of samples and their spatial distribution. PMID:24819054
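A confidence interval for a regional stock estimated as the mean of site-level values takes the familiar mean ± z·SE form (under approximate normality of the mean); a generic sketch with made-up inputs, not the paper's data or exact procedure:

```python
import math
import statistics

def stock_confidence_interval(site_stocks_pg, z=1.96):
    """Approximate 95% confidence interval (Pg C) for a regional stock
    estimated as the mean of site-level values, assuming the sampling
    distribution of the mean is approximately normal. z = 1.96 gives the
    two-sided 95% level; small samples would call for a t-multiplier."""
    mean = statistics.fmean(site_stocks_pg)
    half = z * statistics.stdev(site_stocks_pg) / math.sqrt(len(site_stocks_pg))
    return mean - half, mean + half
```

The width of such an interval shrinks with sqrt of the number of sites, which is why the abstract finds that detecting a 10% deviation would require roughly 6-7 times more sampling sites.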
Zhang, Y.; McMillan, S.; Dixon, E. R.; Stringfellow, A.; Bateman, S.; Sear, D. A.
2017-01-01
Oxygen demand in river substrates providing important habitats for the early life stages of aquatic organisms, including lithophilous fish, can arise from the oxidation of sediment‐associated organic matter. Oxygen depletion associated with this component of river biogeochemical cycling will, in part, depend on the sources of such material. A reconnaissance survey was therefore undertaken to assess the relative contributions from bed sediment‐associated organic matter sources potentially impacting the River Axe Special Area of Conservation (SAC) in SW England. Source fingerprinting, including Monte Carlo uncertainty analysis, suggested that the relative frequency‐weighted average median source contributions ranged between 19% (uncertainty range 0–82%) and 64% (uncertainty range 0–99%) for farmyard manures or slurries, 4% (uncertainty range 0–49%) and 35% (uncertainty range 0–100%) for damaged road verges, 2% (uncertainty range 0–100%) and 68% (uncertainty range 0–100%) for decaying instream vegetation, and 2% (full uncertainty range 0–15%) and 6% (uncertainty range 0–48%) for human septic waste. A reconnaissance survey of sediment oxygen demand (SOD) along the channel designated as a SAC yielded a mean SOD5 of 4 mg O2 g−1 dry sediment and a corresponding SOD20 of 7 mg O2 g−1 dry sediment, compared with respective ranges of 1–15 and 2–30 mg O2 g−1 dry sediment measured by the authors for a range of river types across the UK. The findings of the reconnaissance survey were used in an agency (SW region) catchment appraisal exercise for informing targeted management to help protect the SAC. PMID:29527135
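Monte Carlo source fingerprinting of the kind referenced here can be sketched as a simple rejection sampler: draw random source proportions, and keep those whose predicted tracer signature matches the mixture within measurement uncertainty. The tracer values, source set, and tolerances below are invented for illustration and are not the River Axe data:

```python
import random

random.seed(1)

# Hypothetical tracer signatures (tracer A, tracer B) for three sediment
# sources and the channel-bed mixture; all numbers are illustrative only.
sources = {
    "farmyard_manure": (12.0, -28.0),
    "road_verge": (9.0, -27.0),
    "instream_vegetation": (20.0, -30.0),
}
mixture = (13.0, -28.2)
sigma = (1.0, 0.5)  # assumed tracer measurement uncertainties

accepted = []
for _ in range(50000):
    # Random source proportions on the simplex
    w = [random.expovariate(1.0) for _ in sources]
    total = sum(w)
    w = [x / total for x in w]
    # Keep mixtures whose predicted signature matches within 2 sigma
    if all(
        abs(sum(wi * sig[j] for wi, sig in zip(w, sources.values())) - mixture[j])
        <= 2 * sigma[j]
        for j in range(2)
    ):
        accepted.append(w)

# Median and 95% range of the manure contribution over accepted mixtures
manure = sorted(w[0] for w in accepted)
n = len(manure)
print(n, manure[n // 2], manure[int(0.025 * n)], manure[int(0.975 * n) - 1])
```

The wide uncertainty ranges quoted in the abstract (often 0–100%) are exactly what such a sampler produces when the tracer signatures only weakly discriminate between sources.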
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eifler, Tim; Krause, Elisabeth; Dodelson, Scott
2014-05-28
Systematic uncertainties that have been subdominant in past large-scale structure (LSS) surveys are likely to exceed statistical uncertainties of current and future LSS data sets, potentially limiting the extraction of cosmological information. Here we present a general framework (PCA marginalization) to consistently incorporate systematic effects into a likelihood analysis. This technique naturally accounts for degeneracies between nuisance parameters and can substantially reduce the dimension of the parameter space that needs to be sampled. As a practical application, we apply PCA marginalization to account for baryonic physics as an uncertainty in cosmic shear tomography. Specifically, we use CosmoLike to run simulated likelihood analyses on three independent sets of numerical simulations, each covering a wide range of baryonic scenarios differing in cooling, star formation, and feedback mechanisms. We simulate a Stage III (Dark Energy Survey) and Stage IV (Large Synoptic Survey Telescope/Euclid) survey and find a substantial bias in cosmological constraints if baryonic physics is not accounted for. We then show that PCA marginalization (employing at most 3 to 4 nuisance parameters) removes this bias. Our study demonstrates that it is possible to obtain robust, precise constraints on the dark energy equation of state even in the presence of large levels of systematic uncertainty in astrophysical processes. We conclude that the PCA marginalization technique is a powerful, general tool for addressing many of the challenges facing the precision cosmology program.
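The core of PCA marginalization — finding the principal modes of the baryonic contamination across simulated scenarios and projecting them out of the data vector — can be illustrated with a toy example. The data vectors, contamination shape, and noise level below are synthetic, not the CosmoLike analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_scen = 20, 6

# Hypothetical tomographic data vectors: a fiducial (baryon-free) signal
# plus scenario-dependent baryonic contamination; illustrative only.
fiducial = 1.0 + 0.05 * np.arange(n_data)
amps = 0.1 * (np.arange(n_scen) + 1)
contamination = np.outer(amps, np.sin(0.3 * np.arange(n_data)))
scenarios = fiducial + contamination + 1e-3 * rng.standard_normal((n_scen, n_data))

# Principal components of the baryonic difference vectors
diffs = scenarios - fiducial
_, s, vt = np.linalg.svd(diffs, full_matrices=False)

# Marginalize: project the dominant contamination mode(s) out of the data
n_modes = 1
proj = np.eye(n_data) - vt[:n_modes].T @ vt[:n_modes]
residual = proj @ diffs[3]
print(np.linalg.norm(diffs[3]), np.linalg.norm(residual))
```

Because the toy contamination is essentially rank one, a single projected-out mode removes nearly all of it — mirroring the abstract's finding that 3–4 nuisance parameters suffice for realistic baryonic scenarios.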
2010-01-01
Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service. PMID:21034504
NASA Astrophysics Data System (ADS)
Silverman, N. L.; Maneta, M. P.
2016-06-01
Detecting long-term change in seasonal precipitation using ground observations is dependent on the representativity of the point measurement to the surrounding landscape. In mountainous regions, representativity can be poor and lead to large uncertainties in precipitation estimates at high elevations or in areas where observations are sparse. If the uncertainty in the estimate is large compared to the long-term shifts in precipitation, then the change will likely go undetected. In this analysis, we examine the minimum detectable change across mountainous terrain in western Montana, USA. We ask the question: What is the minimum amount of change that is necessary to be detected using our best estimates of precipitation in complex terrain? We evaluate the spatial uncertainty in the precipitation estimates by conditioning historic regional climate model simulations to ground observations using Bayesian inference. By using this uncertainty as a null hypothesis, we test for detectability across the study region. To provide context for the detectability calculations, we look at a range of future scenarios from the Coupled Model Intercomparison Project 5 (CMIP5) multimodel ensemble downscaled to 4 km resolution using the MACAv2-METDATA data set. When using the ensemble averages we find that approximately 65% of the significant increases in winter precipitation go undetected at midelevations. At high elevation, approximately 75% of significant increases in winter precipitation are undetectable. Areas where change can be detected are largely controlled by topographic features. Elevation and aspect are key characteristics that determine whether or not changes in winter precipitation can be detected. Furthermore, we find that undetected increases in winter precipitation at high elevation will likely remain as snow under climate change scenarios. 
Therefore, there is potential for these areas to offset snowpack loss at lower elevations and confound the effects of climate change on water resources.
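The detectability argument above reduces to comparing a projected change against the spread of the precipitation estimate. A minimal two-sided z-test version, with hypothetical numbers standing in for the Bayesian posterior and the CMIP5 projections:

```python
from statistics import NormalDist

def minimum_detectable_change(sigma, alpha=0.05):
    """Smallest precipitation shift distinguishable from estimation
    uncertainty in a two-sided z-test at significance level alpha."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return z * sigma

# Hypothetical numbers: posterior std of winter precipitation (mm) at a
# high-elevation grid cell, and a projected ensemble-mean increase (mm).
sigma = 80.0
projected_increase = 120.0
mdc = minimum_detectable_change(sigma)
print(round(mdc, 1), projected_increase > mdc)  # the change goes undetected
```

When the estimate's uncertainty is large relative to the projected shift, as in this invented case, a real increase falls inside the null distribution and cannot be distinguished from estimation error.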
Optimal configurations of spatial scale for grid cell firing under noise and uncertainty
Towse, Benjamin W.; Barry, Caswell; Bush, Daniel; Burgess, Neil
2014-01-01
We examined the accuracy with which the location of an agent moving within an environment could be decoded from the simulated firing of systems of grid cells. Grid cells were modelled with Poisson spiking dynamics and organized into multiple ‘modules’ of cells, with firing patterns of similar spatial scale within modules and a wide range of spatial scales across modules. The number of grid cells per module, the spatial scaling factor between modules and the size of the environment were varied. Errors in decoded location can take two forms: small errors of precision and larger errors resulting from ambiguity in decoding periodic firing patterns. With enough cells per module (e.g. eight modules of 100 cells each) grid systems are highly robust to ambiguity errors, even over ranges much larger than the largest grid scale (e.g. over a 500 m range when the maximum grid scale is 264 cm). Results did not depend strongly on the precise organization of scales across modules (geometric, co-prime or random). However, independent spatial noise across modules, which would occur if modules receive independent spatial inputs and might increase with spatial uncertainty, dramatically degrades the performance of the grid system. This effect of spatial uncertainty can be mitigated by uniform expansion of grid scales. Thus, in the realistic regimes simulated here, the optimal overall scale for a grid system represents a trade-off between minimizing spatial uncertainty (requiring large scales) and maximizing precision (requiring small scales). Within this view, the temporary expansion of grid scales observed in novel environments may be an optimal response to increased spatial uncertainty induced by the unfamiliarity of the available spatial cues. PMID:24366144
Controlled Ascent From the Surface of an Asteroid
NASA Technical Reports Server (NTRS)
Shen, Haijun; Roithmayr, Carlos M.; Cornelius, David M.
2014-01-01
The National Aeronautics and Space Administration (NASA) is currently investigating a conceptual robotic mission to collect a small boulder up to 4 m in diameter resting on the surface of a large Near Earth Asteroid (NEA). Because most NEAs are not well characterized, a great range of uncertainties in boulder mass properties and NEA surface characteristics must be considered in the design of this mission. These uncertainties are especially significant when the spacecraft ascends with the boulder in tow. The most important requirement during ascent is to keep the spacecraft in an upright posture to maintain healthy ground clearances for the two large solar arrays. This paper focuses on the initial stage (the first 50 m) of ascent from the surface. Specifically, it presents a sensitivity study of the solar array ground clearance, control authority, and accelerations at the array tips in the presence of a variety of uncertainties including various boulder sizes, densities, shapes and orientations, locations of the true center of mass, and push-off force distributions. Results are presented, and appropriate operations are recommended in the event some of the off-nominal cases occur.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendoza, D.; Gurney, Kevin R.; Geethakumar, Sarath
2013-04-01
In this study we present onroad fossil fuel CO2 emissions estimated by the Vulcan Project, an effort quantifying fossil fuel CO2 emissions for the U.S. in high spatial and temporal resolution. This high-resolution data, aggregated at the state-level and classified in broad road and vehicle type categories, is compared to a commonly used national-average approach. We find that the use of national averages incurs state-level biases for road groupings that are almost twice as large as for vehicle groupings. The uncertainty for all groups exceeds the bias, and both quantities are positively correlated with total state emissions. States with the largest emissions totals are typically similar to one another in terms of emissions fraction distribution across road and vehicle groups, while smaller-emitting states have a wider range of variation in all groups. Errors in reduction estimates as large as ±60% corresponding to ±0.2 MtC are found for a national-average emissions mitigation strategy focused on a 10% emissions reduction from a single vehicle class, such as passenger gas vehicles or heavy diesel trucks. Recommendations are made for reducing CO2 emissions uncertainty by addressing its main drivers: VMT and fuel efficiency uncertainty.
NASA Astrophysics Data System (ADS)
Kountouris, Panagiotis; Gerbig, Christoph; Rödenbeck, Christian; Karstens, Ute; Koch, Thomas F.; Heimann, Martin
2018-03-01
Optimized biogenic carbon fluxes for Europe were estimated from high-resolution regional-scale inversions, utilizing atmospheric CO2 measurements at 16 stations for the year 2007. Additional sensitivity tests with different data-driven error structures were performed. As the atmospheric network is rather sparse and consequently contains large spatial gaps, we use a priori biospheric fluxes to further constrain the inversions. The biospheric fluxes were simulated by the Vegetation Photosynthesis and Respiration Model (VPRM) at a resolution of 0.1° and optimized against eddy covariance data. Overall we estimate an a priori uncertainty of 0.54 GtC yr-1 related to the poor spatial representation between the biospheric model and the ecosystem sites. The sink estimated from the atmospheric inversions for the area of Europe (as represented in the model domain) ranges between 0.23 and 0.38 GtC yr-1 (0.39 and 0.71 GtC yr-1 up-scaled to geographical Europe). This is within the range of posterior flux uncertainty estimates of previous studies using ground-based observations.
Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts
NASA Astrophysics Data System (ADS)
Gingrich, Mark
Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100 member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalence, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.
NASA Astrophysics Data System (ADS)
Mazzoleni, Paolo; Matta, Fabio; Zappa, Emanuele; Sutton, Michael A.; Cigada, Alfredo
2015-03-01
This paper discusses the effect of pre-processing image blurring on the uncertainty of two-dimensional digital image correlation (DIC) measurements for the specific case of numerically-designed speckle patterns having particles with well-defined and consistent shape, size and spacing. Such patterns are more suitable for large measurement surfaces on large-scale specimens than traditional spray-painted random patterns without well-defined particles. The methodology consists of numerical simulations where Gaussian digital filters with varying standard deviation are applied to a reference speckle pattern. To simplify the pattern application process for large areas and increase contrast to reduce measurement uncertainty, the speckle shape, mean size and on-center spacing were selected to be representative of numerically-designed patterns that can be applied on large surfaces through different techniques (e.g., spray-painting through stencils). Such 'designer patterns' are characterized by well-defined regions of non-zero frequency content and non-zero peaks, and are fundamentally different from typical spray-painted patterns whose frequency content exhibits near-zero peaks. The effect of blurring filters is examined for constant, linear, quadratic and cubic displacement fields. Maximum strains between ±250 and ±20,000 με are simulated, thus covering a relevant range for structural materials subjected to service and ultimate stresses. The robustness of the simulation procedure is verified experimentally using a physical speckle pattern subjected to constant displacements. The stability of the relation between standard deviation of the Gaussian filter and measurement uncertainty is assessed for linear displacement fields at varying image noise levels, subset size, and frequency content of the speckle pattern. It is shown that bias error as well as measurement uncertainty are minimized through Gaussian pre-filtering. 
This finding does not apply to typical spray-painted patterns without well-defined particles, for which image blurring is beneficial only in reducing bias errors.
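Gaussian pre-filtering of a periodic "designer" speckle pattern, the operation studied here, can be sketched with a separable blur. The pattern geometry, kernel width, and gradient diagnostic below are illustrative only, not the paper's simulation procedure:

```python
import numpy as np

def gaussian_kernel1d(sigma):
    """1-D Gaussian kernel truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian pre-filter, in the spirit of the paper's
    pre-processing step (parameters here are illustrative)."""
    k = gaussian_kernel1d(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

# Synthetic 'designer' speckle: regular particles with well-defined
# size and on-center spacing
y, x = np.mgrid[0:128, 0:128]
pattern = ((np.sin(2 * np.pi * x / 8.0) * np.sin(2 * np.pi * y / 8.0)) > 0.5).astype(float)

blurred = blur(pattern, sigma=1.0)

# Blurring suppresses the sharp intensity steps that drive interpolation bias
def grad(im):
    return float(np.abs(np.diff(im, axis=1)).max())

print(grad(pattern), grad(blurred))
```

The diagnostic printed here is only the maximum pixel-to-pixel step; the paper's actual metric is DIC displacement uncertainty, which requires a full subset-correlation pipeline.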
Bayesian focalization: quantifying source localization with environmental uncertainty.
Dosso, Stan E; Wilmut, Michael J
2007-05-01
This paper applies a Bayesian formulation to study ocean acoustic source localization as a function of uncertainty in environmental properties (water column and seabed) and of data information content [signal-to-noise ratio (SNR) and number of frequencies]. The approach follows that of the optimum uncertain field processor [A. M. Richardson and L. W. Nolte, J. Acoust. Soc. Am. 89, 2280-2284 (1991)], in that localization uncertainty is quantified by joint marginal probability distributions for source range and depth integrated over uncertain environmental properties. The integration is carried out here using Metropolis Gibbs' sampling for environmental parameters and heat-bath Gibbs' sampling for source location to provide efficient sampling over complicated parameter spaces. The approach is applied to acoustic data from a shallow-water site in the Mediterranean Sea where previous geoacoustic studies have been carried out. It is found that reliable localization requires a sufficient combination of prior (environmental) information and data information. For example, sources can be localized reliably for single-frequency data at low SNR (-3 dB) only with small environmental uncertainties, whereas successful localization with large environmental uncertainties requires higher SNR and/or multifrequency data.
Frey, H Christopher; Zhao, Yuchao
2004-11-15
Probabilistic emission inventories were developed for urban air toxic emissions of benzene, formaldehyde, chromium, and arsenic for the example of Houston. Variability and uncertainty in emission factors were quantified for 71-97% of total emissions, depending upon the pollutant and data availability. Parametric distributions for interunit variability were fit using maximum likelihood estimation (MLE), and uncertainty in mean emission factors was estimated using parametric bootstrap simulation. For data sets containing one or more nondetected values, empirical bootstrap simulation was used to randomly sample detection limits for nondetected values and observations for sample values, and parametric distributions for variability were fit using MLE estimators for censored data. The goodness-of-fit for censored data was evaluated by comparing cumulative distributions of bootstrap confidence intervals with the empirical data. The emission inventory 95% uncertainty ranges vary from as small as -25% to +42% for chromium to as large as -75% to +224% for arsenic with correlated surrogates. Uncertainty was dominated by only a few source categories. Recommendations are made for future improvements to the analysis.
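The censored-data bootstrap described here can be sketched by resampling observations and drawing nondetects uniformly below their detection limits (a simplification of the paper's MLE-for-censored-data fitting; all values are hypothetical):

```python
import random
import statistics

random.seed(7)

# Hypothetical emission-factor sample as (value, is_detect); for nondetects
# the stored value is the detection limit. Units are arbitrary.
data = [(5.2, True), (3.1, True), (8.7, True), (0.5, False), (4.4, True),
        (0.8, False), (6.9, True), (2.3, True), (0.5, False), (7.5, True)]

def boot_mean_ci(data, n_boot=5000):
    """Empirical bootstrap 95% CI for the mean emission factor,
    sampling nondetects uniformly below their detection limits."""
    means = []
    for _ in range(n_boot):
        sample = [random.choice(data) for _ in data]
        vals = [v if det else random.uniform(0.0, v) for v, det in sample]
        means.append(statistics.fmean(vals))
    means.sort()
    return means[int(0.025 * n_boot)], means[int(0.975 * n_boot) - 1]

lo, hi = boot_mean_ci(data)
print(f"95% CI on mean emission factor: ({lo:.2f}, {hi:.2f})")
```

Asymmetric intervals like the abstract's -75% to +224% arise naturally from this procedure when the underlying variability distribution is strongly right-skewed.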
Uncertainty estimation of long-range ensemble forecasts of snowmelt flood characteristics
NASA Astrophysics Data System (ADS)
Kuchment, L.
2012-04-01
Long-range forecasts of snowmelt flood characteristics with lead times of 2-3 months have important significance for regulating flood runoff and mitigating flood damage on almost all large Russian rivers. At the same time, current forecasting techniques based on regression relationships between runoff volume and indexes of river basin conditions can lead to serious forecast errors, resulting in large economic losses caused by wrong flood regulation. Forecast errors can be caused by complicated processes of soil freezing and soil moisture redistribution, too high a rate of snowmelt, large liquid precipitation before snowmelt, or large differences between meteorological conditions during the lead-time period and climatological ones. Analysis of economic losses has shown that the largest damages could, to a significant extent, be avoided if decision makers had an opportunity to take predictive uncertainty into account and could use more cautious strategies in runoff regulation. The development of a methodology for long-range ensemble forecasting of spring/summer floods based on distributed, physically-based runoff generation models has created, in principle, a new basis for improving hydrological predictions as well as for estimating their uncertainty. This approach is illustrated by forecasting of spring-summer floods in the Vyatka River and Seim River basins. The application of physically-based models of snowmelt runoff generation gives an essential improvement in the statistical estimates of the deterministic forecasts of flood volume in comparison with forecasts obtained from regression relationships. These models were also used for probabilistic forecasts, assigning meteorological inputs during the lead-time period from the available historical daily series and from series simulated using a weather generator and a Monte Carlo procedure.
The weather generator consists of stochastic models of daily temperature and precipitation. The performance of the probabilistic forecasts was evaluated using ranked probability skill scores. Monte Carlo simulations using the weather generator gave better results than using the historical meteorological series.
Quantifying Uncertainties in Land-Surface Microwave Emissivity Retrievals
NASA Technical Reports Server (NTRS)
Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko
2013-01-01
Uncertainties in the retrievals of microwave land-surface emissivities are quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including the Special Sensor Microwave Imager, the Tropical Rainfall Measuring Mission Microwave Imager, and the Advanced Microwave Scanning Radiometer for Earth Observing System, are studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land-surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both sites throughout the study period, the differences are largely attributed to systematic and random errors in the retrievals. Generally, these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging from 1% to 4% (3-12 K) over desert and 1% to 7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors of up to 10-17 K under the most severe conditions.
NASA Astrophysics Data System (ADS)
Subramanian, Aneesh C.; Palmer, Tim N.
2017-06-01
Stochastic schemes to represent model uncertainty in the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system have helped improve its probabilistic forecast skill over the past decade, both by improving its reliability and by reducing the ensemble mean error. The largest uncertainties in the model arise from the model physics parameterizations. In the tropics, the parameterization of moist convection presents a major challenge for the accurate prediction of weather and climate. Superparameterization is a promising alternative strategy for including the effects of moist convection through explicit turbulent fluxes calculated from a cloud-resolving model (CRM) embedded within a global climate model (GCM). In this paper, we compare the impact of initial random perturbations in embedded CRMs, within the ECMWF ensemble prediction system, with the stochastically perturbed physical tendency (SPPT) scheme as a way to represent model uncertainty in medium-range tropical weather forecasts. We especially focus on forecasts of tropical convection and dynamics during MJO events in October-November 2011. These are well-studied events for MJO dynamics, as they were also heavily observed during the DYNAMO field campaign. We show that a multiscale ensemble modeling approach helps improve forecasts of certain aspects of tropical convection during the MJO events, while it also tends to deteriorate certain large-scale dynamic fields relative to the SPPT approach used operationally at ECMWF.
NASA Astrophysics Data System (ADS)
Murphy, Conor; Bastola, Satish; Sweeney, John
2013-04-01
Climate change impact and adaptation assessments have traditionally adopted a 'top-down' scenario-based approach, where information from different Global Climate Models (GCMs) and emission scenarios is employed to develop impacts-led adaptation strategies. Due to tradeoffs between computational cost and the need to include a wide range of GCMs for a fuller characterization of uncertainties, scenarios are better used for sensitivity testing and adaptation options appraisal. One common approach to adaptation that has been defined as robust is the use of safety margins. In this work, the sensitivity of the safety margins adopted by the agency responsible for flood risk management in Ireland to uncertainty in future projections is examined. The sensitivity of fluvial flood risk to climate change is assessed for four Irish catchments using a large number of GCMs (17) forced with three emissions scenarios (SRES A1B, A2, B1) as input to four hydrological models. Both uncertainty within and between hydrological models is assessed using the GLUE framework. Regionalisation is achieved using a change factor method to infer changes in the parameters of a weather generator from monthly GCM output, while flood frequency analysis is conducted using the method of probability weighted moments to fit the Generalised Extreme Value distribution to ~20,000 annual maxima series. The sensitivity of design margins to the uncertainty space considered is visualised using risk response surfaces. The hydrological sensitivity is measured as the percentage change in flood peak for specified recurrence intervals.
Results indicate that there is a considerable residual risk associated with allowances of +20% when uncertainties are accounted for, and that the risk of exceedance of design allowances is greatest for more extreme, low-frequency events, with considerable implications for critical infrastructure, e.g., culverts, bridges, and flood defences, whose designs are normally associated with such return periods. Sensitivity results show that the impact of climate change is not as great for flood peaks with higher return periods. The average width of the uncertainty range and the size of the range for each catchment reveal that the uncertainties in low-frequency events are greater than in high-frequency events. In addition, the uncertainty interval, estimated as the average width of the uncertainty range of flow for the five return periods, grows wider with a decrease in the runoff coefficient and wetness index of each catchment, both of which tend to increase the nonlinearity in the rainfall response. A key management question that emerges is the acceptability of residual risk where high exposure of vulnerable populations and/or critical infrastructure coincides with high costs of additional capacity in safety margins.
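The flood-frequency step — fitting a GEV distribution by probability weighted moments and reading off design quantiles — can be sketched as follows, using Hosking's approximation for the shape parameter and a synthetic annual-maximum series rather than the study catchments:

```python
import math
import random

random.seed(11)

# Synthetic annual-maximum flood series (m^3/s) drawn from a Gumbel
# distribution; illustrative only, not the Irish catchment data.
def gumbel(mu, beta):
    return mu - beta * math.log(-math.log(random.random()))

amax = sorted(gumbel(100.0, 25.0) for _ in range(200))
n = len(amax)

# Probability weighted moments b0, b1, b2 from the ordered sample
b0 = sum(amax) / n
b1 = sum(j * x for j, x in enumerate(amax)) / (n * (n - 1))
b2 = sum(j * (j - 1) * x for j, x in enumerate(amax)) / (n * (n - 1) * (n - 2))

# L-moments and L-skewness
l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
t3 = l3 / l2

# GEV parameters (shape k, scale alpha, location xi), Hosking's approximation
c = 2.0 / (3.0 + t3) - math.log(2.0) / math.log(3.0)
k = 7.8590 * c + 2.9554 * c * c
alpha = l2 * k / ((1.0 - 2.0 ** (-k)) * math.gamma(1.0 + k))
xi = l1 - alpha * (1.0 - math.gamma(1.0 + k)) / k

def quantile(T):
    """Flood magnitude for return period T (years)."""
    y = -math.log(1.0 - 1.0 / T)
    return xi + alpha * (1.0 - y ** k) / k

print(round(quantile(20), 1), round(quantile(100), 1))
```

In the study's workflow this fit would be repeated across the ~20,000 simulated annual-maxima series, and the spread of `quantile(T)` across those fits gives the uncertainty range for each return period.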
Robustness for slope stability modelling under deep uncertainty
NASA Astrophysics Data System (ADS)
Almeida, Susana; Holcombe, Liz; Pianosi, Francesca; Wagener, Thorsten
2015-04-01
Landslides can have large negative societal and economic impacts, such as loss of life and damage to infrastructure. However, the ability of slope stability assessment to guide management is limited by high levels of uncertainty in model predictions. Many of these uncertainties cannot be easily quantified, such as those linked to climate change and other future socio-economic conditions, restricting the usefulness of traditional decision analysis tools. Deep uncertainty can be managed more effectively by developing robust, but not necessarily optimal, policies that are expected to perform adequately under a wide range of future conditions. Robust strategies are particularly valuable when the consequences of taking a wrong decision are high, as is often the case when managing natural hazard risks such as landslides. In our work, a physically based numerical model of hydrologically induced slope instability (the Combined Hydrology and Stability Model - CHASM) is applied together with robust decision making to evaluate the most important uncertainties (storm events, groundwater conditions, surface cover, slope geometry, material strata and geotechnical properties) affecting slope stability. Specifically, impacts of climate change on long-term slope stability are incorporated, accounting for the deep uncertainty in future climate projections. Our findings highlight the potential of robust decision making to aid decision support for landslide hazard reduction and risk management under conditions of deep uncertainty.
Hannouche, Ali; Chebbo, Ghassan; Joannis, Claude; Gasperi, Johnny; Gromaire, Marie-Christine; Moilleron, Régis; Barraud, Sylvie; Ruban, Véronique
2017-12-01
This article describes a stochastic method to calculate annual pollutant loads and its application over several years at the outlet of three catchments drained by separate storm sewers. A stochastic methodology using Monte Carlo simulations is proposed for assessing annual pollutant loads, as well as the associated uncertainties, from a few event sampling campaigns and/or continuous turbidity measurements (representative of the total suspended solids concentration (TSS)). Indeed, in the latter case, the proposed method takes into account the correlation between pollutants and TSS. The developed method was applied to data acquired within the French research project "INOGEV" (innovations for a sustainable management of urban water) at the outlet of three urban catchments drained by separate storm sewers. Ten or so event sampling campaigns for a large range of pollutants (46 pollutants and 2 conventional water quality parameters: TSS and total organic carbon (TOC)) are combined with hundreds of rainfall events for which at least one of three continuously monitored parameters (rainfall intensity, flow rate, and turbidity) is available. Results obtained for the three catchments show that the annual pollutant loads can be estimated with uncertainties ranging from 10 to 60%, and the added value of turbidity monitoring for lowering the uncertainty is demonstrated. A low inter-annual and inter-site variability of pollutant loads, for many of the studied pollutants, is observed with respect to the estimated uncertainties, and can be explained mainly by annual precipitation.
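The Monte Carlo load estimator can be sketched by propagating an uncertain pollutant-to-TSS ratio through an event series of turbidity-derived TSS and runoff volumes. The event database and the ratio's distribution below are invented for illustration and are not the INOGEV data:

```python
import random

random.seed(5)

# Hypothetical event database for one year: turbidity-derived TSS (mg/L)
# and runoff volume (m^3) for each storm event.
events = [(random.lognormvariate(4.5, 0.5), random.lognormvariate(7.0, 0.6))
          for _ in range(80)]

# Assumed pollutant-to-TSS ratio (g pollutant per g TSS) inferred from a
# few sampling campaigns; mean and spread are illustrative.
ratio_mean, ratio_sd = 1.2e-3, 0.3e-3

def annual_load():
    """One Monte Carlo realisation of the annual pollutant load (kg)."""
    total = 0.0
    for tss, vol in events:
        ratio = max(random.gauss(ratio_mean, ratio_sd), 0.0)
        # mg/L * m^3 = g; divide by 1000 to get kg
        total += ratio * tss * vol / 1000.0
    return total

loads = sorted(annual_load() for _ in range(2000))
median = loads[1000]
half_range = 100.0 * (loads[1949] - loads[49]) / (2.0 * median)
print(round(median, 1), round(half_range, 1))
```

The half-range printed here plays the role of the 10-60% relative uncertainties quoted in the abstract; tightening the ratio's distribution with continuous turbidity data is what shrinks it.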
Effect of monthly areal rainfall uncertainty on streamflow simulation
NASA Astrophysics Data System (ADS)
Ndiritu, J. G.; Mkhize, N.
2017-08-01
Areal rainfall is mostly obtained from point rainfall measurements that are sparsely located, and several studies have shown that this results in large areal rainfall uncertainties at the daily time step. However, water resources assessment is often carried out at a monthly time step, and streamflow simulation is usually an essential component of this assessment. This study set out to quantify monthly areal rainfall uncertainties and assess their effect on streamflow simulation. This was achieved by: (i) quantifying areal rainfall uncertainties and using these to generate stochastic monthly areal rainfalls, and (ii) finding out how the quality of monthly streamflow simulation and streamflow variability change if stochastic areal rainfalls are used instead of historic areal rainfalls. Tests on monthly rainfall uncertainty were carried out using data from two South African catchments, while streamflow simulation was confined to one of them. A non-parametric model that had been applied at a daily time step was used for stochastic areal rainfall generation, and the Pitman catchment model calibrated using the SCE-UA optimizer was used for streamflow simulation. 100 randomly-initialised calibration-validation runs using 100 stochastic areal rainfalls were compared with 100 runs obtained using the single historic areal rainfall series. When 4 rain gauges were used alternately to obtain areal rainfall, the resulting differences in areal rainfall averaged 20% of the mean monthly areal rainfall; rainfall uncertainty was therefore highly significant. Pitman model simulations obtained coefficients of efficiency averaging 0.66 and 0.64 in calibration and validation using historic rainfalls, while the respective values using stochastic areal rainfalls were 0.59 and 0.57. Average bias was less than 5% in all cases.
The streamflow ranges using historic rainfalls averaged 29% of the mean naturalised flow in calibration and validation, while the respective average ranges using stochastic monthly rainfalls were 86 and 90% of the mean naturalised streamflow. In calibration, 33% of the naturalised flows fell within the streamflow ranges from historic rainfall simulations, and using stochastic rainfalls increased this to 66%. In validation, the respective percentages of naturalised flows falling within the simulated streamflow ranges were 32 and 72%. The analysis reveals that monthly areal rainfall uncertainty is significant, and incorporating it into streamflow simulation would add validity to the results.
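For reference, the coefficient of efficiency (Nash-Sutcliffe) and bias statistics quoted above can be computed as follows; the sample flow series are hypothetical:

```python
def nash_sutcliffe(observed, simulated):
    """Coefficient of efficiency: 1 - SSE / sum of squares about the
    observed mean. 1.0 is a perfect fit; 0.0 is no better than the mean."""
    obs, sim = list(observed), list(simulated)
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_about_mean = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_about_mean

def percent_bias(observed, simulated):
    """Bias of the simulated total relative to the observed total, in %."""
    return 100.0 * (sum(simulated) - sum(observed)) / sum(observed)

obs = [10.0, 12.0, 9.0, 30.0, 25.0]   # hypothetical monthly flows
sim = [11.0, 11.0, 10.0, 28.0, 26.0]
print(nash_sutcliffe(obs, sim), percent_bias(obs, sim))
```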
Jean-Christophe Domec; John S. King; Eric Ward; A. Christopher Oishi; Sari Palmroth; Andrew Radecki; Dave M. Bell; Guofang Miao; Michael Gavazzi; Daniel M. Johnson; Steve G. McNulty; Ge Sun; Asko Noormets
2015-01-01
Throughout the southern US, past forest management practices have replaced large areas of native forests with loblolly pine plantations and have resulted in changes in forest response to extreme weather conditions. However, uncertainty remains about the response of planted versus natural species to drought across the geographical range of these forests. Taking...
Uncertainty of exploitation estimates made from tag returns
Miranda, L.E.; Brock, R.E.; Dorr, B.S.
2002-01-01
Over 6,000 crappies Pomoxis spp. were tagged in five water bodies to estimate exploitation rates by anglers. Exploitation rates were computed as the percentage of tags returned after adjustment for three sources of uncertainty: postrelease mortality due to the tagging process, tag loss, and the reporting rate of tagged fish. Confidence intervals around exploitation rates were estimated by resampling from the probability distributions of tagging mortality, tag loss, and reporting rate. Estimates of exploitation rates ranged from 17% to 54% among the five study systems. Uncertainty around estimates of tagging mortality, tag loss, and reporting resulted in 90% confidence intervals around the median exploitation rate as narrow as 15 percentage points and as broad as 46 percentage points. The greatest source of estimation error was uncertainty about tag reporting. Because the large investments required by tagging and reward operations produce imprecise estimates of the exploitation rate, it may be worth considering other approaches to estimating it or simply circumventing the exploitation question altogether.
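The resampling approach described above can be sketched as follows. The beta distributions for tagging mortality, tag loss and reporting rate are illustrative assumptions, not the study's fitted distributions, and the tag counts are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

def exploitation_ci(n_tagged, n_returned, n_sims=20_000):
    """Resampled confidence bounds for the exploitation rate: tag returns
    are adjusted for tagging mortality, tag loss and reporting rate, each
    drawn from a probability distribution."""
    tag_mort = rng.beta(5, 45, n_sims)    # ~10% post-release tagging mortality
    tag_loss = rng.beta(3, 57, n_sims)    # ~5% tag loss
    reporting = rng.beta(30, 20, n_sims)  # ~60% angler reporting rate
    tags_at_large = n_tagged * (1.0 - tag_mort) * (1.0 - tag_loss)
    u = n_returned / (tags_at_large * reporting)
    lo, med, hi = np.percentile(u, [5, 50, 95])
    return med, lo, hi

med, lo, hi = exploitation_ci(1200, 250)  # hypothetical tagging study
print(f"exploitation ~{med:.2f}, 90% CI [{lo:.2f}, {hi:.2f}]")
```

Widening the reporting-rate distribution widens the interval far more than the other two adjustments, mirroring the abstract's finding that reporting is the dominant source of estimation error.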
Essential information: Uncertainty and optimal control of Ebola outbreaks
Li, Shou-Li; Bjornstad, Ottar; Ferrari, Matthew J.; Mummah, Riley; Runge, Michael C.; Fonnesbeck, Christopher J.; Tildesley, Michael J.; Probert, William J. M.; Shea, Katriona
2017-01-01
Early resolution of uncertainty during an epidemic outbreak can lead to rapid and efficient decision making, provided that the uncertainty affects prioritization of actions. The wide range in caseload projections for the 2014 Ebola outbreak caused great concern and debate about the utility of models. By coding and running 37 published Ebola models with five candidate interventions, we found that, despite this large variation in caseload projection, the ranking of management options was relatively consistent. Reducing funeral transmission and reducing community transmission were generally ranked as the two best options. Value of information (VoI) analyses show that caseloads could be reduced by 11% by resolving all model-specific uncertainties, with information about model structure accounting for 82% of this reduction and uncertainty about caseload only accounting for 12%. Our study shows that the uncertainty that is of most interest epidemiologically may not be the same as the uncertainty that is most relevant for management. If the goal is to improve management outcomes, then the focus of study should be to identify and resolve those uncertainties that most hinder the choice of an optimal intervention. Our study further shows that simplifying multiple alternative models into a smaller number of relevant groups (here, with shared structure) could streamline the decision-making process and may allow for a better integration of epidemiological modeling and decision making for policy.
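The value-of-information logic in this abstract can be illustrated with a minimal expected-value-of-perfect-information (EVPI) calculation over a set of candidate models; the caseload matrix and the equal model weights below are invented for illustration:

```python
import numpy as np

# Hypothetical expected caseloads (rows: alternative models, columns:
# candidate interventions), with equal weight on each model.
caseloads = np.array([
    [9000.0, 4000.0, 5000.0, 7000.0],
    [12000.0, 6000.0, 5500.0, 9000.0],
    [7000.0, 3500.0, 4500.0, 6000.0],
])
weights = np.full(3, 1.0 / 3.0)

# Acting under model uncertainty: choose the intervention with the lowest
# weighted-average caseload across models.
expected_per_action = weights @ caseloads
caseload_under_uncertainty = expected_per_action.min()

# With perfect information about which model is right: choose the best
# intervention separately under each model, then average.
caseload_with_info = weights @ caseloads.min(axis=1)

evpi = caseload_under_uncertainty - caseload_with_info  # cases avoided
print(f"EVPI = {evpi:.0f} expected cases avoided by resolving model uncertainty")
```

When the same intervention ranks best under every model, as the abstract found for the Ebola ensemble, the EVPI is small even if the caseload projections themselves diverge widely.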
Stability of some epoxy-encapsulated diode thermometers
NASA Technical Reports Server (NTRS)
Mangum, B. W.; Evans, G. A., Jr.
1986-01-01
The stability upon thermal cycling and handling of ten small, epoxy-encapsulated silicon diode thermometers was investigated at six temperatures in the range from liquid nitrogen temperature to about 60 C. The nominal temperatures of measurement were -196, -78, 0, 20, 40, and 60 C, as measured on the International Practical Temperature Scale of 1968. Diodes were thermally cycled 15 to 20 times. Since NASA anticipates that the uncertainty in its temperature measurements will be ±50 mK, uncertainties as large as ±10 mK in the measurements of the evaluation can be accommodated without deleteriously affecting the value of the results of the investigation.
Operational hydrological forecasting in Bavaria. Part I: Forecast uncertainty
NASA Astrophysics Data System (ADS)
Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.
2009-04-01
In Bavaria, operational flood forecasting has been established since the disastrous flood of 1999. Nowadays, forecasts based on rainfall information from about 700 raingauges and 600 rivergauges are calculated and issued for nearly 100 rivergauges. With the added experience of the 2002 and 2005 floods, awareness grew that the standard deterministic forecast, neglecting the uncertainty associated with each forecast, is misleading, creating a false feeling of unambiguousness. As a consequence, a system to identify, quantify and communicate the sources and magnitude of forecast uncertainty has been developed, which will be presented in part I of this study. In this system, the use of ensemble meteorological forecasts plays a key role, which will be presented in part II. In developing the system, several constraints stemming from the range of hydrological regimes and operational requirements had to be met: Firstly, operational time constraints obviate the variation of all components of the modeling chain as would be done in a full Monte Carlo simulation. Therefore, an approach was chosen where only the most relevant sources of uncertainty were dynamically considered while the others were jointly accounted for by static error distributions from offline analysis. Secondly, the dominant sources of uncertainty vary over the wide range of forecasted catchments: In alpine headwater catchments, typically a few hundred square kilometers in size, rainfall forecast uncertainty is the key factor for forecast uncertainty, with a magnitude dynamically changing with the prevailing predictability of the atmosphere. In lowland catchments encompassing several thousands of square kilometers, forecast uncertainty in the desired range (usually up to two days) is mainly dependent on upstream gauge observation quality, routing and unpredictable human impact such as reservoir operation.
The determination of forecast uncertainty comprised the following steps: a) From comparison of gauge observations and several years of archived forecasts, overall empirical error distributions, termed 'overall error', were derived for each gauge for a range of relevant forecast lead times. b) The error distributions vary strongly with the hydrometeorological situation, therefore a subdivision into the hydrological cases 'low flow', 'rising flood', 'flood' and 'flood recession' was introduced. c) For the sake of numerical compression, theoretical distributions were fitted to the empirical distributions using the method of moments. Here, the normal distribution was generally best suited. d) Further data compression was achieved by representing the distribution parameters as a function (second-order polynomial) of lead time. In general, the 'overall error' obtained from the above procedure is most useful in regions where large human impact occurs and where the influence of the meteorological forecast is limited. In upstream regions, however, forecast uncertainty is strongly dependent on the current predictability of the atmosphere, which is contained in the spread of an ensemble forecast. Including this dynamically in the hydrological forecast uncertainty estimation requires prior elimination of the contribution of the weather forecast to the 'overall error'. This was achieved by calculating long series of hydrometeorological forecast tests, where rainfall observations were used instead of forecasts. The resulting error distribution is termed 'model error' and can be applied to hydrological ensemble forecasts, where ensemble rainfall forecasts are used as forcing. The concept will be illustrated by examples (good and bad ones) covering a wide range of catchment sizes, hydrometeorological regimes and qualities of hydrological model calibration. The methodology to combine the static and dynamic shares of uncertainty will be presented in part II of this study.
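Steps c) and d) above, fitting a normal distribution by the method of moments and compressing its parameters into a second-order polynomial of lead time, can be sketched as follows; the lead times and error spreads are hypothetical:

```python
import numpy as np

def fit_normal_moments(errors):
    """Method-of-moments fit of a normal distribution to forecast errors:
    the first two sample moments are the distribution parameters."""
    e = np.asarray(errors, dtype=float)
    return e.mean(), e.std(ddof=1)

def compress_over_lead_time(lead_times_h, sigmas):
    """Represent a distribution parameter as a second-order polynomial
    of forecast lead time (coefficients, highest order first)."""
    return np.polyfit(lead_times_h, sigmas, deg=2)

# Hypothetical stage-forecast error spread growing with lead time
lead = np.array([6.0, 12.0, 24.0, 48.0])     # hours
sigma = np.array([0.10, 0.18, 0.35, 0.80])   # m
coeffs = compress_over_lead_time(lead, sigma)
sigma_36h = np.polyval(coeffs, 36.0)         # interpolated spread at 36 h
```

Three polynomial coefficients per hydrological case then replace a full empirical distribution per lead time, which is the data compression the abstract describes.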
Yang, M; Zhu, X R; Park, PC; Titt, Uwe; Mohan, R; Virshup, G; Clayton, J; Dong, L
2012-01-01
The purpose of this study was to analyze factors affecting proton stopping-power-ratio (SPR) estimations and range uncertainties in proton therapy planning using the standard stoichiometric calibration. The SPR uncertainties were grouped into five categories according to their origins and then estimated based on previously published reports or measurements. For the first time, the impact of tissue composition variations on SPR estimation was assessed and the uncertainty estimates of each category were determined for low-density (lung), soft, and high-density (bone) tissues. A composite, 95th percentile water-equivalent-thickness uncertainty was calculated from multiple beam directions in 15 patients with various types of cancer undergoing proton therapy. The SPR uncertainties (1σ) were quite different (ranging from 1.6% to 5.0%) in different tissue groups, although the final combined uncertainty (95th percentile) for different treatment sites was fairly consistent at 3.0–3.4%, primarily because soft tissue is the dominant tissue type in human body. The dominant contributing factor for uncertainties in soft tissues was the degeneracy of Hounsfield Numbers in the presence of tissue composition variations. To reduce the overall uncertainties in SPR estimation, the use of dual-energy computed tomography is suggested. The values recommended in this study based on typical treatment sites and a small group of patients roughly agree with the commonly referenced value (3.5%) used for margin design. By using tissue-specific range uncertainties, one could estimate the beam-specific range margin by accounting for different types and amounts of tissues along a beam, which may allow for customization of range uncertainty for each beam direction. PMID:22678123
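As a sketch of how per-category 1σ contributions might be combined into a composite bound: the category values below are invented, and the study's actual composite is a 95th-percentile water-equivalent-thickness calculation over beam directions rather than this simple quadrature sum:

```python
import math

def composite_bound(sigmas_percent, coverage=1.96):
    """Combine independent 1-sigma uncertainty contributions (in %) in
    quadrature and scale to an approximate 95% bound."""
    total = math.sqrt(sum(s * s for s in sigmas_percent))
    return coverage * total

# Invented per-category 1-sigma SPR contributions for soft tissue (%)
categories = [1.2, 0.8, 0.5, 0.3, 0.6]
bound_95 = composite_bound(categories)
print(f"~95% composite SPR uncertainty: {bound_95:.2f}%")
```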
Forecasting eruption size: what we know, what we don't know
NASA Astrophysics Data System (ADS)
Papale, Paolo
2017-04-01
Any eruption forecast includes an evaluation of the expected size of the forthcoming eruption, usually expressed as the probability associated with given size classes. Such evaluation is mostly based on the previous volcanic history of the specific volcano, or it refers to a broader class of volcanoes constituting "analogues" of the one under consideration. In any case, use of knowledge from past eruptions implies considering the completeness of the reference catalogue and, most importantly, the existence of systematic biases in the catalogue that may affect probability estimates and translate into biased volcanic hazard forecasts. An analysis of existing catalogues, with major reference to the catalogue from the Smithsonian Global Volcanism Program, suggests that systematic biases largely dominate at global, regional and local scales: volcanic histories reconstructed at individual volcanoes, often used as a reference for volcanic hazard forecasts, are the result of systematic loss of information with time and poor sample representativeness. That situation strictly requires the use of techniques to complete existing catalogues, as well as careful consideration of the uncertainties deriving from inadequate knowledge and model-dependent data elaboration. A reconstructed global eruption size distribution, obtained by merging information from different existing catalogues, shows a mode in the VEI 1-2 range, <0.1% incidence of eruptions with VEI 7 or larger, and substantial uncertainties associated with individual VEI frequencies. Even larger uncertainties are expected to derive from application to individual volcanoes or classes of analogue volcanoes, suggesting large to very large uncertainties associated with volcanic hazard forecasts at virtually any individual volcano worldwide.
NASA Astrophysics Data System (ADS)
Loibl, Wolfgang; Peters-Anders, Jan; Züger, Johann
2010-05-01
To achieve public awareness and thorough understanding about expected climate changes and their future implications, ways have to be found to communicate model outputs to the public in a scientifically sound and easily understandable way. The newly developed Climate Twins tool tries to fulfil these requirements via an intuitively usable web application, which compares spatial patterns of current climate with future climate patterns, derived from regional climate model results. To get a picture of the implications of future climate in an area of interest, users may click on a certain location within an interactive map with underlying future climate information. A second map depicts the matching Climate Twin areas according to current climate conditions. In this way scientific output can be communicated to the public, which allows for experiencing climate change through comparison with well-known real world conditions. To identify climatic coincidence seems to be a simple exercise, but the accuracy and applicability of the similarity identification depends very much on the selection of climate indicators, similarity conditions and uncertainty ranges. Too many indicators representing various climate characteristics and too narrow uncertainty ranges will judge little or no area as having similar climate, while too few indicators and too wide uncertainty ranges will identify overly large regions as climatically similar, which may not be correct. Similarity cannot be explored just by comparing mean values or by calculating correlation coefficients. As climate change triggers an alteration of various indicators, like maxima, minima, variation magnitude, frequency of extreme events etc., the identification of appropriate similarity conditions is a crucial question to be solved.
For Climate Twins identification, it is necessary to find the right balance of indicators, similarity conditions and uncertainty ranges; otherwise the results will be too vague to conduct a useful search for Climate Twin regions. The Climate Twins tool currently compares future climate conditions of a source area in the Greater Alpine Region with current climate conditions of entire Europe and the neighbouring southern and south-eastern areas as target regions. A next version will integrate web-crawling features to search for information about climate-related local adaptations observed today in the target region, which may turn out to be appropriate solutions for the source region under future climate conditions. The contribution will present the current tool functionality and will discuss which indicator sets, similarity conditions and uncertainty ranges work best to deliver scientifically sound climate comparisons and distinct mapping results.
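A minimal sketch of the similarity test at the core of such a tool, assuming a small set of indicators with symmetric tolerance bands; the indicator names, values and tolerances below are all invented:

```python
def is_climate_twin(source_future, target_current, tolerances):
    """A target location qualifies as a 'Climate Twin' if every indicator
    of its current climate lies within a tolerance band around the source
    area's projected future climate."""
    return all(
        abs(source_future[k] - target_current[k]) <= tolerances[k]
        for k in tolerances
    )

# Invented indicator values and tolerance bands
source_future = {"t_mean_c": 12.5, "t_max_summer_c": 32.0, "precip_mm": 620.0}
candidate = {"t_mean_c": 12.1, "t_max_summer_c": 31.2, "precip_mm": 660.0}
tolerances = {"t_mean_c": 1.0, "t_max_summer_c": 1.5, "precip_mm": 100.0}
print(is_climate_twin(source_future, candidate, tolerances))
```

Shrinking the tolerance bands or adding indicators makes the test stricter and the matched area smaller, which is exactly the balance the abstract discusses.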
Robustness Analysis and Optimally Robust Control Design via Sum-of-Squares
NASA Technical Reports Server (NTRS)
Dorobantu, Andrei; Crespo, Luis G.; Seiler, Peter J.
2012-01-01
A control analysis and design framework is proposed for systems subject to parametric uncertainty. The underlying strategies are based on sum-of-squares (SOS) polynomial analysis and nonlinear optimization to design an optimally robust controller. The approach determines a maximum uncertainty range for which the closed-loop system satisfies a set of stability and performance requirements. These requirements, defined as inequality constraints on several metrics, are restricted to polynomial functions of the uncertainty. To quantify robustness, SOS analysis is used to prove that the closed-loop system complies with the requirements for a given uncertainty range. The maximum uncertainty range, calculated by assessing a sequence of increasingly larger ranges, serves as a robustness metric for the closed-loop system. To optimize the control design, nonlinear optimization is used to enlarge the maximum uncertainty range by tuning the controller gains. Hence, the resulting controller is optimally robust to parametric uncertainty. This approach balances the robustness margins corresponding to each requirement in order to maximize the aggregate system robustness. The proposed framework is applied to a simple linear short-period aircraft model with uncertain aerodynamic coefficients.
NASA Astrophysics Data System (ADS)
Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.
2016-12-01
Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance of individual variation in predicting the mean as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty at scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
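The contrast between the two error sources can be shown with a toy calculation: residual (individual) variation is independent per tree and adds in quadrature, while uncertainty in the fitted mean is shared by every tree, so their relative importance flips as plot size grows. The SD and SE values below are hypothetical:

```python
import math

def plot_total_uncertainty(n_trees, sd_individual_kg, se_mean_kg):
    """Two error contributions to a plot-level total:
    - residual (individual) variation adds in quadrature: sd * sqrt(n)
    - uncertainty in the fitted mean applies to all trees: se * n
    """
    return sd_individual_kg * math.sqrt(n_trees), se_mean_kg * n_trees

# Hypothetical sugar maple foliage mass: per-tree residual SD 2.0 kg,
# standard error of the regression mean 0.37 kg
for n in (5, 30, 100):
    from_individuals, from_mean = plot_total_uncertainty(n, 2.0, 0.37)
    print(n, round(from_individuals, 1), round(from_mean, 1))
```

With these numbers the individual term dominates for small plots and the mean term dominates around 30 trees and beyond, mirroring the abstract's finding.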
Gedamke, Jason; Gales, Nick; Frydman, Sascha
2011-01-01
The potential for seismic airgun "shots" to cause acoustic trauma in marine mammals is poorly understood. There are just two empirical measurements of temporary threshold shift (TTS) onset levels from airgun-like sounds in odontocetes. Considering these limited data, a model was developed examining the impact of individual variability and uncertainty on risk assessment of baleen whale TTS from seismic surveys. In each of 100 simulations: 10000 "whales" are assigned TTS onset levels accounting for: inter-individual variation; uncertainty over the population's mean; and uncertainty over weighting of odontocete data to obtain baleen whale onset levels. Randomly distributed whales are exposed to one seismic survey passage with cumulative exposure level calculated. In the base scenario, 29% of whales (5th/95th percentiles of 10%/62%) that approached to within 1-1.2 km were exposed to levels sufficient for TTS onset. By comparison, no whales are at risk outside 0.6 km when uncertainty and variability are not considered. Potentially "exposure altering" parameters (movement, avoidance, surfacing, and effective quiet) were also simulated. Until more research refines model inputs, the results suggest a reasonable likelihood that whales at a kilometer or more from seismic surveys could potentially be susceptible to TTS and demonstrate that the large impact uncertainty and variability can have on risk assessment.
Uncertainty characterization of HOAPS 3.3 latent heat-flux-related parameters
NASA Astrophysics Data System (ADS)
Liman, Julian; Schröder, Marc; Fennig, Karsten; Andersson, Axel; Hollmann, Rainer
2018-03-01
Latent heat flux (LHF) is one of the main contributors to the global energy budget. As the density of in situ LHF measurements over the global oceans is generally poor, the potential of remotely sensed LHF for meteorological applications is enormous. However, to date none of the available satellite products have included estimates of systematic, random, and sampling uncertainties, all of which are essential for assessing their quality. Here, the challenge is taken on by matching LHF-related pixel-level data of the Hamburg Ocean Atmosphere Parameters and Fluxes from Satellite (HOAPS) climatology (version 3.3) to in situ measurements originating from a high-quality data archive of buoys and selected ships. Assuming the ground reference to be bias-free, this allows for deriving instantaneous systematic uncertainties as a function of four atmospheric predictor variables. The approach is regionally independent and therefore overcomes the issue of sparse in situ data densities over large oceanic areas. Likewise, random uncertainties are derived, which include not only a retrieval component but also contributions from in situ measurement noise and the collocation procedure. A recently published random uncertainty decomposition approach is applied to isolate the random retrieval uncertainty of all LHF-related HOAPS parameters. It makes use of two combinations of independent data triplets of both satellite and in situ data, which are analysed in terms of their pairwise variances of differences. Instantaneous uncertainties are finally aggregated, allowing for uncertainty characterizations on monthly to multi-annual timescales. Results show that systematic LHF uncertainties range between 15 and 50 W m-2 with a global mean of 25 W m-2. Local maxima are mainly found over the subtropical ocean basins as well as along the western boundary currents. Investigations indicate that contributions from the near-surface humidity qa (wind speed U) to the overall LHF uncertainty are on the order of 60 % (25 %).
From an instantaneous point of view, random retrieval uncertainties are specifically large over the subtropics with a global average of 37 W m-2. In a climatological sense, their magnitudes become negligible, as do respective sampling uncertainties. Regional and seasonal analyses suggest that largest total LHF uncertainties are seen over the Gulf Stream and the Indian monsoon region during boreal winter. In light of the uncertainty measures, the observed continuous global mean LHF increase up to 2009 needs to be treated with caution. The demonstrated approach can easily be transferred to other satellite retrievals, which increases the significance of the present work.
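The pairwise variances-of-differences decomposition mentioned above is, in essence, a triple-collocation analysis. A sketch with synthetic data of known error magnitudes (the noise levels and the 'truth' distribution are invented, and the cross-calibration steps of the real method are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def triple_collocation_sigma(x, y, z):
    """Random-error standard deviation of three collocated, independent
    estimates of the same quantity, derived from the three pairwise
    variances of differences."""
    v_xy, v_xz, v_yz = (np.var(x - y, ddof=1),
                        np.var(x - z, ddof=1),
                        np.var(y - z, ddof=1))
    sx = np.sqrt(max((v_xy + v_xz - v_yz) / 2.0, 0.0))
    sy = np.sqrt(max((v_xy + v_yz - v_xz) / 2.0, 0.0))
    sz = np.sqrt(max((v_xz + v_yz - v_xy) / 2.0, 0.0))
    return sx, sy, sz

# Synthetic LHF-like data: a common truth plus independent noise of known size
truth = rng.normal(100.0, 30.0, 50_000)                # W/m2
satellite = truth + rng.normal(0.0, 25.0, truth.size)
buoy = truth + rng.normal(0.0, 5.0, truth.size)
model = truth + rng.normal(0.0, 15.0, truth.size)
sx, sy, sz = triple_collocation_sigma(satellite, buoy, model)
print(sx, sy, sz)  # recovers ~25, ~5 and ~15 W/m2
```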
The Uncertainty of Local Background Magnetic Field Orientation in Anisotropic Plasma Turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerick, F.; Saur, J.; Papen, M. von, E-mail: felix.gerick@uni-koeln.de
In order to resolve and characterize anisotropy in turbulent plasma flows, a proper estimation of the background magnetic field is crucially important. Various approaches to calculating the background magnetic field, ranging from local to globally averaged fields, are commonly used in the analysis of turbulent data. We investigate how the uncertainty in the orientation of a scale-dependent background magnetic field influences the ability to resolve anisotropy. Therefore, we introduce a quantitative measure, the angle uncertainty, that characterizes the uncertainty of the orientation of the background magnetic field that turbulent structures are exposed to. The angle uncertainty can be used as a condition to estimate the ability to resolve anisotropy with certain accuracy. We apply our description to resolve the spectral anisotropy in fast solar wind data. We show that, if the angle uncertainty grows too large, the power of the turbulent fluctuations is attributed to false local magnetic field angles, which may lead to an incorrect estimation of the spectral indices. In our results, an apparent robustness of the spectral anisotropy to false local magnetic field angles is observed, which can be explained by a stronger increase of power for lower frequencies when the scale of the local magnetic field is increased. The frequency-dependent angle uncertainty is a measure that can be applied to any turbulent system.
How uncertain is model-based prediction of copper loads in stormwater runoff?
Lindblom, E; Ahlman, S; Mikkelsen, P S
2007-01-01
In this paper, we conduct a systematic analysis of the uncertainty related to estimating the total load of a pollutant (copper) from a separate stormwater drainage system, conditioned on a specific combination of input data, a dynamic conceptual pollutant accumulation-washout model and measurements (runoff volumes and pollutant masses). We use the generalized likelihood uncertainty estimation (GLUE) methodology and generate posterior parameter distributions that result in model outputs encompassing a significant number of the highly variable measurements. Given the applied pollution accumulation-washout model and a total of 57 measurements during one month, the total copper mass can be predicted within a range of ±50% of the median value. The message is that this relatively large uncertainty should be acknowledged in connection with posting statements about micropollutant loads as estimated from dynamic models, even when calibrated with on-site concentration data.
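A compact sketch of GLUE with a toy accumulation-washout model; the model structure, prior range and informal likelihood below are illustrative, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(3)

def washout_model(k, rainfall_mm):
    """Toy accumulation-washout model: surface mass builds up between
    events and a rainfall-dependent fraction washes off each event."""
    buildup, washed = 10.0, []
    for r in rainfall_mm:
        buildup += 1.0                          # dry-weather accumulation
        w = buildup * (1.0 - np.exp(-k * r))    # washout this event
        buildup -= w
        washed.append(w)
    return np.array(washed)

def glue_interval(observed, rainfall_mm, n_samples=5000, keep_frac=0.1):
    """GLUE: sample the washout parameter from a uniform prior, keep the
    'behavioural' runs with the best informal likelihoods, and report the
    5/50/95 percentiles of the predicted total load."""
    ks = rng.uniform(0.01, 1.0, n_samples)
    sims = np.array([washout_model(k, rainfall_mm) for k in ks])
    likelihood = -((sims - observed) ** 2).sum(axis=1)
    behavioural = np.argsort(likelihood)[-int(keep_frac * n_samples):]
    totals = sims[behavioural].sum(axis=1)
    return np.percentile(totals, [5, 50, 95])

rainfall = rng.uniform(1.0, 10.0, 20)       # 20 synthetic events
observed = washout_model(0.3, rainfall)     # synthetic 'measurements'
lo, med, hi = glue_interval(observed, rainfall)
```

The spread of the behavioural totals, relative to the median, is the kind of ±50% figure the abstract reports for the calibrated copper loads.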
Conflict in a changing climate
NASA Astrophysics Data System (ADS)
Carleton, T.; Hsiang, S. M.; Burke, M.
2016-05-01
A growing body of research illuminates the role that changes in climate have had on violent conflict and social instability in the recent past. Across a diversity of contexts, high temperatures and irregular rainfall have been causally linked to a range of conflict outcomes. These findings can be paired with climate model output to generate projections of the impact future climate change may have on conflicts such as crime and civil war. However, there are large degrees of uncertainty in such projections, arising from (i) the statistical uncertainty involved in regression analysis, (ii) divergent climate model predictions, and (iii) the unknown ability of human societies to adapt to future climate change. In this article, we review the empirical evidence of the climate-conflict relationship, provide insight into the likely extent and feasibility of adaptation to climate change as it pertains to human conflict, and discuss new methods that can be used to provide projections that capture these three sources of uncertainty.
Development of probabilistic emission inventories of air toxics for Jacksonville, Florida, USA.
Zhao, Yuchao; Frey, H Christopher
2004-11-01
Probabilistic emission inventories were developed for 1,3-butadiene, mercury (Hg), arsenic (As), benzene, formaldehyde, and lead for Jacksonville, FL. To quantify inter-unit variability in empirical emission factor data, the Maximum Likelihood Estimation (MLE) method or the Method of Matching Moments was used to fit parametric distributions. For data sets that contain nondetected measurements, a method based upon MLE was used for parameter estimation. To quantify the uncertainty in urban air toxic emission factors, parametric bootstrap simulation and empirical bootstrap simulation were applied to uncensored and censored data, respectively. The probabilistic emission inventories were developed based on the product of the uncertainties in the emission factors and in the activity factors. The uncertainties in the urban air toxics emission inventories range from as small as -25 to +30% for Hg to as large as -83 to +243% for As. The key sources of uncertainty in the emission inventory for each toxic are identified based upon sensitivity analysis. Typically, uncertainty in the inventory of a given pollutant can be attributed primarily to a small number of source categories. Priorities for improving the inventories and for refining the probabilistic analysis are discussed.
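The bootstrap component of such an inventory can be sketched as follows; the emission-factor sample, activity value and activity CV are invented, and the handling of censored (nondetect) data is omitted:

```python
import numpy as np

rng = np.random.default_rng(11)

def inventory_interval(emission_factors, activity, activity_cv=0.1,
                       n_boot=10_000):
    """Bootstrap the uncertainty in the mean emission factor, combine it
    with a perturbed activity factor, and express the inventory's 95%
    interval as percentages of its mean."""
    ef = np.asarray(emission_factors, dtype=float)
    resampled = rng.choice(ef, size=(n_boot, ef.size), replace=True)
    ef_means = resampled.mean(axis=1)
    act = activity * rng.normal(1.0, activity_cv, n_boot)
    inventory = ef_means * act
    mean = inventory.mean()
    lo, hi = np.percentile(inventory, [2.5, 97.5])
    return 100.0 * (lo - mean) / mean, 100.0 * (hi - mean) / mean

# Invented, right-skewed emission-factor sample (g per unit activity)
ef_sample = [0.5, 0.8, 1.1, 0.6, 3.2, 0.9, 0.7, 2.5, 0.4, 1.0]
lo_pct, hi_pct = inventory_interval(ef_sample, activity=1.0e6)
print(f"inventory uncertainty: {lo_pct:.0f}% to +{hi_pct:.0f}%")
```

A skewed emission-factor sample yields an asymmetric interval, which is why the abstract's ranges (e.g. -83 to +243% for As) are not symmetric about the mean.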
A parallel calibration utility for WRF-Hydro on high performance computers
NASA Astrophysics Data System (ADS)
Wang, J.; Wang, C.; Kotamarthi, V. R.
2017-12-01
A successful modeling of complex hydrological processes comprises establishing an integrated hydrological model that simulates the hydrological processes in each water regime, calibrating and validating the model performance against observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files (GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL) and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. A parameter calibration tool built specifically for automated calibration and uncertainty estimation of the WRF-Hydro model provides significant convenience for the modeling community. In this study, we developed a customized tool based on the parallel version of the model-independent parameter estimation and uncertainty analysis tool PEST, enabling it to run on HPCs with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates tailored to WRF-Hydro model calibration and uncertainty analysis. Here we present a case study of a flood that occurred in April 2013 over the Midwest. The sensitivities and uncertainties are analyzed using the customized PEST tool we developed.
NASA Astrophysics Data System (ADS)
Kirchengast, Gottfried; Li, Ying; Scherllin-Pirscher, Barbara; Schwärz, Marc; Schwarz, Jakob; Nielsen, Johannes K.
2017-04-01
The GNSS radio occultation (RO) technique is an important remote sensing technique for obtaining thermodynamic profiles of temperature, humidity, and pressure in the Earth's troposphere. However, due to refraction effects of both dry ambient air and water vapor in the troposphere, retrieval of accurate thermodynamic profiles at these lower altitudes is challenging and requires suitable background information in addition to the RO refractivity information. Here we introduce a new moist air retrieval algorithm aiming to improve the quality and robustness of retrieving temperature, humidity and pressure profiles in moist air tropospheric conditions. The new algorithm consists of four steps: (1) use of prescribed specific humidity and its uncertainty to retrieve temperature and its associated uncertainty; (2) use of prescribed temperature and its uncertainty to retrieve specific humidity and its associated uncertainty; (3) use of the previous results to estimate final temperature and specific humidity profiles through optimal estimation; (4) determination of air pressure and density profiles from the preceding results. The new algorithm does not require elaborate matrix inversions, which are otherwise widely used in 1D-Var retrieval algorithms, and it allows a transparent uncertainty propagation, whereby the uncertainties of prescribed variables are dynamically estimated accounting for their spatial and temporal variations. Estimated random uncertainties are calculated by constructing error covariance matrices from co-located ECMWF short-range forecast and corresponding analysis profiles. Systematic uncertainties are estimated by empirical modeling. The influence of regarding or disregarding vertical error correlations is quantified. The new scheme is implemented with static input uncertainty profiles in WEGC's current OPSv5.6 processing system and with full scope in WEGC's next-generation system, the Reference Occultation Processing System (rOPS).
Results from both WEGC systems, current OPSv5.6 and next-generation rOPS, are shown and discussed, based on both insights from individual profiles and statistical ensembles, and compared to moist air retrieval results from the UCAR Boulder and ROM-SAF Copenhagen centers. The results show that the new algorithmic scheme improves the temperature, humidity and pressure retrieval performance, in particular also the robustness including for integrated uncertainty estimation for large-scale applications, over the previous algorithms. The new rOPS-implemented algorithm will therefore be used in the first large-scale reprocessing towards a tropospheric climate data record 2001-2016 by the rOPS, including its integrated uncertainty propagation.
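The optimal-estimation combination in step (3) of the algorithm can be illustrated for a single scalar quantity. This is a hedged sketch under strong simplifying assumptions (independent Gaussian errors, one level, no vertical correlations, invented numbers), not the actual rOPS implementation:

```python
def optimal_estimate(x_a, sigma_a, x_b, sigma_b):
    """Combine two independent estimates of the same quantity by
    inverse-variance weighting (scalar optimal estimation)."""
    w_a = 1.0 / sigma_a ** 2
    w_b = 1.0 / sigma_b ** 2
    x = (w_a * x_a + w_b * x_b) / (w_a + w_b)
    sigma = (w_a + w_b) ** -0.5  # combined uncertainty is always smaller
    return x, sigma

# Hypothetical temperature (K) at one level: the step-1 retrieval with
# its uncertainty versus the prescribed background with its uncertainty.
t, s = optimal_estimate(262.0, 1.5, 260.0, 1.0)
```

The combined estimate lies between the two inputs, pulled toward the one with the smaller uncertainty, and its uncertainty is smaller than either input's; the same weighting generalizes to full profiles via error covariance matrices.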
Operational hydrological forecasting in Bavaria. Part II: Ensemble forecasting
NASA Astrophysics Data System (ADS)
Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.
2009-04-01
In part I of this study, the operational flood forecasting system in Bavaria and an approach to identify and quantify forecast uncertainty were introduced. The approach is split into the calculation of an empirical 'overall error' from archived forecasts and the calculation of an empirical 'model error' based on hydrometeorological forecast tests, where rainfall observations were used instead of forecasts. The 'model error' can be superimposed on the spread of a hydrometeorological ensemble forecast, especially in upstream catchments where forecast uncertainty is strongly dependent on the current predictability of the atmosphere. In Bavaria, two meteorological ensemble prediction systems are currently being tested for operational use: the 16-member COSMO-LEPS forecast and a poor man's ensemble composed of DWD GME, DWD Cosmo-EU, NCEP GFS, Aladin-Austria and MeteoSwiss Cosmo-7. The determination of the overall forecast uncertainty depends on the catchment characteristics: 1. Upstream catchments with high influence of weather forecast: a) A hydrological ensemble forecast is calculated using each of the meteorological forecast members as forcing. b) Corresponding to the characteristics of the meteorological ensemble forecast, each resulting forecast hydrograph can be regarded as equally likely. c) The 'model error' distribution, with parameters dependent on hydrological case and lead time, is added to each forecast timestep of each ensemble member. d) For each forecast timestep, the overall error distribution (i.e., over the 'model error' distributions of all ensemble members) is calculated. e) From this distribution, the uncertainty range at a desired level (here: the 10% and 90% percentiles) is extracted and drawn as a forecast envelope. f) As the mean or median of an ensemble forecast does not necessarily exhibit a meteorologically sound temporal evolution, a single hydrological forecast termed 'lead forecast' is chosen and shown in addition to the uncertainty bounds.
This can be either an intermediate forecast between the extremes of the ensemble spread or a manually selected forecast based on a meteorologist's advice. 2. Downstream catchments with low influence of weather forecast: In downstream catchments with strong human impact on discharge (e.g. by reservoir operation) and large influence of upstream gauge observation quality on forecast quality, the 'overall error' may in most cases be larger than the combination of the 'model error' and the ensemble spread. Therefore, the overall forecast uncertainty bounds are calculated differently: a) A hydrological ensemble forecast is calculated using each of the meteorological forecast members as forcing. Here, additionally, the corresponding inflow hydrographs from all upstream catchments must be used. b) As for an upstream catchment, the uncertainty range is determined by combining the 'model error' and the ensemble member forecasts. c) In addition, the 'overall error' is superimposed on the 'lead forecast'. For reasons of consistency, the lead forecast must be based on the same meteorological forecast in the downstream and all upstream catchments. d) From the resulting two uncertainty ranges (one from the ensemble forecast and 'model error', one from the 'lead forecast' and 'overall error'), the envelope is taken as the most prudent uncertainty range. In sum, the uncertainty associated with each forecast run is calculated and communicated to the public in the form of 10% and 90% percentiles. As in part I of this study, the methodology as well as the usefulness (or otherwise) of the resulting uncertainty ranges is presented and discussed using typical examples.
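The superposition of a 'model error' distribution on the ensemble members (steps c-e for upstream catchments) can be sketched as a Monte Carlo sample at one forecast timestep. The discharge values, the Gaussian error model, and its spread are all hypothetical, chosen only to show the mechanics:

```python
import random

def forecast_envelope(ensemble, model_error_sigma, n_samples=5000, seed=2):
    """Sample the combined ensemble / model-error distribution at one
    forecast timestep and return its 10% and 90% percentiles."""
    rng = random.Random(seed)
    samples = sorted(
        rng.choice(ensemble) + rng.gauss(0.0, model_error_sigma)
        for _ in range(n_samples)
    )
    return samples[int(0.10 * n_samples)], samples[int(0.90 * n_samples) - 1]

# Hypothetical member forecasts of discharge (m3/s) at one timestep.
members = [120.0, 135.0, 128.0, 150.0, 142.0]
p10, p90 = forecast_envelope(members, model_error_sigma=8.0)
```

Repeating this at every timestep and joining the percentiles yields the forecast envelope; note that the envelope is wider than the raw ensemble spread because the 'model error' is convolved onto each member.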
Sources of Uncertainty in Predicting Land Surface Fluxes Using Diverse Data and Models
NASA Technical Reports Server (NTRS)
Dungan, Jennifer L.; Wang, Weile; Michaelis, Andrew; Votava, Petr; Nemani, Ramakrishna
2010-01-01
In the domain of predicting land surface fluxes, models are used to bring data from large observation networks and satellite remote sensing together to make predictions about present and future states of the Earth. Characterizing the uncertainty about such predictions is a complex process and one that is not yet fully understood. Uncertainty exists about initialization, measurement and interpolation of input variables; model parameters; model structure; and mixed spatial and temporal supports. Multiple models or structures often exist to describe the same processes. Uncertainty about structure is currently addressed by running an ensemble of different models and examining the distribution of model outputs. To illustrate structural uncertainty, a multi-model ensemble experiment we have been conducting using the Terrestrial Observation and Prediction System (TOPS) will be discussed. TOPS uses public versions of process-based ecosystem models that use satellite-derived inputs along with surface climate data and land surface characterization to produce predictions of ecosystem fluxes including gross and net primary production and net ecosystem exchange. Using the TOPS framework, we have explored the uncertainty arising from the application of models with different assumptions, structures, parameters, and variable definitions. With a small number of models, this only begins to capture the range of possible spatial fields of ecosystem fluxes. Few attempts have been made to systematically address the components of uncertainty in such a framework. We discuss the characterization of uncertainty for this approach including both quantifiable and poorly known aspects.
NASA Astrophysics Data System (ADS)
Rogelj, J.; McCollum, D. L.; Reisinger, A.; Knutti, R.; Riahi, K.; Meinshausen, M.
2013-12-01
The field of integrated assessment draws from a large body of knowledge across a range of disciplines to gain robust insights about possible interactions, trade-offs, and synergies. Integrated assessment of climate change, for example, uses knowledge from the fields of energy system science, economics, geophysics, demography, climate change impacts, and many others. Each of these fields comes with its associated caveats and uncertainties, which should be taken into account when assessing any results. The geophysical system and its associated uncertainties are often represented by models of reduced complexity in integrated assessment modelling frameworks. Such models include simple representations of the carbon cycle and climate system, and are often based on the global energy balance equation. A prominent example of such a model is the 'Model for the Assessment of Greenhouse Gas Induced Climate Change', MAGICC. Here we show how a model like MAGICC can be used for the representation of geophysical uncertainties. Its strengths, weaknesses, and limitations are discussed and illustrated by means of an analysis which attempts to integrate socio-economic and geophysical uncertainties. These uncertainties in the geophysical response of the Earth system to greenhouse gases remain key for estimating the cost of greenhouse gas emission mitigation scenarios. We look at uncertainties in four dimensions: geophysical, technological, social and political. Our results indicate that while geophysical uncertainties are an important factor influencing projections of mitigation costs, political choices that delay mitigation by one or two decades have a much more pronounced effect.
NASA Technical Reports Server (NTRS)
Li, Tao; Hasegawa, Toshihiro; Yin, Xinyou; Zhu, Yan; Boote, Kenneth; Adam, Myriam; Bregaglio, Simone; Buis, Samuel; Confalonieri, Roberto; Fumoto, Tamon;
2014-01-01
Predicting rice (Oryza sativa) productivity under future climates is important for global food security. Ecophysiological crop models in combination with climate model outputs are commonly used in yield prediction, but uncertainties associated with crop models remain largely unquantified. We evaluated 13 rice models against multi-year experimental yield data at four sites with diverse climatic conditions in Asia and examined whether different modeling approaches to major physiological processes contribute to the uncertainty of predictions of field-measured yields and of sensitivity to changes in temperature and CO2 concentration [CO2]. We also examined whether using an ensemble of crop models can reduce the uncertainties. Individual models did not consistently reproduce both experimental and regional yields well, and uncertainty was larger at the warmest and coolest sites. The variation in yield projections was larger among crop models than the variation resulting from 16 global climate model-based scenarios. However, the mean of the predictions of all crop models reproduced the experimental data, with an uncertainty of less than 10 percent of measured yields. Using an ensemble of eight models calibrated only for phenology, or of five models calibrated in detail, resulted in uncertainty equivalent to that of the measured yield in well-controlled agronomic field experiments. Sensitivity analysis indicates the necessity of improving the accuracy of predicting both biomass and harvest index in response to increasing [CO2] and temperature.
Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data
Bakun, W.H.; Gomez, Capera A.; Stucchi, M.
2011-01-01
Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location and the magnitude and the epistemic uncertainties among them. The estimates are calculated using bootstrap resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. 
The instrumental magnitudes for large and small earthquakes are generally consistent with the confidence intervals inferred from the distribution of bootstrap resampled magnitudes.
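The bootstrap magnitude estimate described above can be illustrated with a toy intensity-magnitude relation. The real techniques (Bakun and Wentworth, Boxer, MEEP) and the decision-tree weighting are far more involved; the linear relation 0.5*I + 2 and the intensity data below are invented purely to show how the median and 68% bounds emerge from resampled data sets:

```python
import random
import statistics

def bootstrap_magnitude(intensities, n_boot=1000, seed=3):
    """Bootstrap-resample intensity observations; each resample yields a
    magnitude via a toy (hypothetical) linear intensity-magnitude relation.
    The median of the bootstrap magnitudes is the preferred estimate and
    the 16%-84% range gives the 68% uncertainty bounds."""
    rng = random.Random(seed)
    mags = []
    for _ in range(n_boot):
        resample = [rng.choice(intensities) for _ in intensities]
        mags.append(0.5 * statistics.fmean(resample) + 2.0)  # toy relation
    mags.sort()
    median = mags[n_boot // 2]
    lo, hi = mags[int(0.16 * n_boot)], mags[int(0.84 * n_boot) - 1]
    return median, (lo, hi)

obs = [6, 7, 5, 6, 8, 7, 6, 5, 7, 6]  # hypothetical site intensities
m, (lo, hi) = bootstrap_magnitude(obs)
```

The same percentile logic, applied to the spatial density of bootstrap locations instead of a scalar, yields the location confidence regions described in the abstract.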
Doppler centroid estimation ambiguity for synthetic aperture radars
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Curlander, J. C.
1989-01-01
A technique for estimation of the Doppler centroid of an SAR in the presence of large uncertainty in antenna boresight pointing is described. Also investigated is the image degradation resulting from data processing that uses an ambiguous centroid. Two approaches for resolving ambiguities in Doppler centroid estimation (DCE) are presented: the range cross-correlation technique and the multiple-PRF (pulse repetition frequency) technique. Because other design factors control the PRF selection for SAR, a generalized algorithm is derived for PRFs not containing a common divisor. An example using the SIR-C parameters illustrates that this algorithm is capable of resolving the C-band DCE ambiguities for antenna pointing uncertainties of about 2-3 deg.
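The multiple-PRF idea can be sketched as a brute-force search: each PRF yields the centroid only modulo that PRF, and requiring consistency between the two unwrapped estimates singles out the absolute value. The PRFs and frequencies below are hypothetical illustrations, not SIR-C parameters:

```python
def resolve_doppler(f1_meas, prf1, f2_meas, prf2, f_max, tol=1.0):
    """Find the absolute Doppler centroid (Hz) consistent with two
    ambiguous (mod-PRF) estimates, by searching over ambiguity numbers
    within +/- f_max. Returns None if no consistent pair exists."""
    n_max = int(f_max / min(prf1, prf2)) + 1
    for n1 in range(-n_max, n_max + 1):
        f1 = f1_meas + n1 * prf1  # candidate unwrapped centroid from PRF 1
        for n2 in range(-n_max, n_max + 1):
            f2 = f2_meas + n2 * prf2  # candidate from PRF 2
            if abs(f1 - f2) < tol:
                return 0.5 * (f1 + f2)
    return None

# Hypothetical case: true centroid 3350 Hz observed with PRFs of
# 1400 Hz and 1500 Hz gives wrapped estimates of 550 Hz and 350 Hz.
fd = resolve_doppler(550.0, 1400.0, 350.0, 1500.0, 5000.0)
```

The solution is unique only within the least common multiple of the PRFs, which is why the generalized algorithm in the paper addresses PRFs not containing a common divisor.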
Verification of an ensemble prediction system for storm surge forecast in the Adriatic Sea
NASA Astrophysics Data System (ADS)
Mel, Riccardo; Lionello, Piero
2014-12-01
In the Adriatic Sea, storm surges present a significant threat to Venice and to the flat coastal areas of the northern coast of the basin. Sea level forecast is of paramount importance for the management of daily activities and for operating the movable barriers that are presently being built for the protection of the city. In this paper, an EPS (ensemble prediction system) for operational forecasting of storm surge in the northern Adriatic Sea is presented and applied to a 3-month-long period (October-December 2010). The sea level EPS is based on the HYPSE (hydrostatic Padua Sea elevation) model, which is a standard single-layer nonlinear shallow water model, whose forcings (mean sea level pressure and surface wind fields) are provided by the ensemble members of the ECMWF (European Center for Medium-Range Weather Forecasts) EPS. Results are verified against observations at five tide gauges located along the Croatian and Italian coasts of the Adriatic Sea. Forecast uncertainty increases with the predicted value of the storm surge and with the forecast lead time. The EMF (ensemble mean forecast) provided by the EPS has a rms (root mean square) error lower than the DF (deterministic forecast), especially for short (up to 3 days) lead times. Uncertainty for short lead times of the forecast and for small storm surges is mainly caused by uncertainty of the initial condition of the hydrodynamical model. Uncertainty for large lead times and large storm surges is mainly caused by uncertainty in the meteorological forcings. The EPS spread increases with the rms error of the forecast. For large lead times the EPS spread and the forecast error substantially coincide. However, the EPS spread in this study, which does not account for uncertainty in the initial condition, underestimates the error during the early part of the forecast and for small storm surge values. On the contrary, it overestimates the rms error for large surge values. 
The PF (probability forecast) of the EPS has a clear skill in predicting the actual probability distribution of sea level, and it outperforms simple "dressed" PF methods. A probability estimate based on the single DF is shown to be inadequate. However, a PF obtained with a prescribed Gaussian distribution and centered on the DF value performs very similarly to the EPS-based PF.
NASA Astrophysics Data System (ADS)
Chen, X.; Huang, G.
2017-12-01
In recent years, distributed hydrological models have been widely used in storm water management, water resources protection, and related applications. How to evaluate model uncertainty reasonably and efficiently has therefore become an active research topic. In this paper, the Soil and Water Assessment Tool (SWAT) model is constructed for the study area of China's Feilaixia watershed, and the uncertainty of the runoff simulation is analyzed in depth with the GLUE method. Focusing on the initial parameter range of the GLUE method, the influence of different initial parameter ranges on model uncertainty is studied. Two sets of parameter ranges are chosen as the object of study: the first (range 1) is recommended by SWAT-CUP and the second (range 2) is calibrated by SUFI-2. The results show that, for the same number of simulations (10,000), the overall uncertainty obtained with range 2 is smaller than with range 1. Specifically, the number of "behavioral" parameter sets is 10,000 for range 2 and 4,448 for range 1. In the calibration and the validation, the ratio of P-factor to R-factor is 1.387 and 1.391 for range 1, and 1.405 and 1.462 for range 2, respectively. In addition, the simulation results for range 2 are better, with NS and R2 slightly higher than for range 1. It can therefore be concluded that using the parameter range calibrated by SUFI-2 as the initial parameter range for GLUE is an effective way to capture and evaluate the simulation uncertainty.
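The acceptance step at the heart of GLUE, keeping only "behavioral" parameter sets whose likelihood measure exceeds a threshold, can be sketched with a toy one-parameter model. The Nash-Sutcliffe threshold of 0.5 and the synthetic data are illustrative assumptions, not the SWAT setup of the study:

```python
def glue_behavioral(param_sets, simulate, observed, threshold=0.5):
    """Keep parameter sets whose Nash-Sutcliffe efficiency (here used as
    the GLUE likelihood measure) meets a behavioral threshold."""
    mean_obs = sum(observed) / len(observed)
    denom = sum((o - mean_obs) ** 2 for o in observed)
    behavioral = []
    for p in param_sets:
        sim = simulate(p)
        ns = 1.0 - sum((s - o) ** 2 for s, o in zip(sim, observed)) / denom
        if ns >= threshold:
            behavioral.append((p, ns))
    return behavioral

# Toy linear "model" and synthetic observations for illustration.
obs = [2.0, 4.0, 6.0, 8.0]

def model(a):
    return [a * x for x in (1, 2, 3, 4)]

kept = glue_behavioral([1.5, 1.9, 2.0, 2.1, 3.0], model, obs)
```

Narrowing the initial parameter range (as with the SUFI-2-calibrated range in the abstract) raises the fraction of sampled sets that pass this test, which is exactly the 4,448 versus 10,000 behavioral-set contrast reported.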
Uncertainty, ensembles and air quality dispersion modeling: applications and challenges
NASA Astrophysics Data System (ADS)
Dabberdt, Walter F.; Miller, Erik
The past two decades have seen significant advances in mesoscale meteorological modeling research and applications, such as the development of sophisticated and now widely used advanced mesoscale prognostic models, large eddy simulation models, four-dimensional data assimilation, adjoint models, adaptive and targeted observational strategies, and ensemble and probabilistic forecasts. Some of these advances are now being applied to urban air quality modeling and applications. Looking forward, it is anticipated that the high-priority air quality issues for the near-to-intermediate future will likely include: (1) routine operational forecasting of adverse air quality episodes; (2) real-time high-level support to emergency response activities; and (3) quantification of model uncertainty. Special attention is focused here on the quantification of model uncertainty through the use of ensemble simulations. Application to emergency-response dispersion modeling is illustrated using an actual event that involved the accidental release of the toxic chemical oleum. Both surface footprints of mass concentration and the associated probability distributions at individual receptors are seen to provide valuable quantitative indicators of the range of expected concentrations and their associated uncertainty.
NASA Astrophysics Data System (ADS)
Ménesguen, Y.; Gerlach, M.; Pollakowski, B.; Unterumsberger, R.; Haschke, M.; Beckhoff, B.; Lépy, M.-C.
2016-02-01
The knowledge of atomic fundamental parameters, such as mass attenuation coefficients, with low uncertainties is of decisive importance in elemental quantification using x-ray fluorescence analysis techniques. Several databases are accessible and frequently used within a large community of users. These compilations are most often in good agreement for photon energies in the hard x-ray range. However, they differ significantly for low photon energies and around the absorption edges of any element. In a joint cooperation of the metrology institutes of France and Germany, mass attenuation coefficients of copper and zinc were determined experimentally in the photon energy range from 100 eV to 30 keV by independent approaches using monochromatized synchrotron radiation at SOLEIL (France) and BESSY II (Germany), respectively. The application of high-accuracy experimental techniques resulted in mass attenuation coefficient datasets determined with low uncertainties that are directly compared to existing databases. The novel datasets are expected to enhance the reliability of mass attenuation coefficients.
Uncertainties in climate change projections for viticulture in Portugal
NASA Astrophysics Data System (ADS)
Fraga, Helder; Malheiro, Aureliano C.; Moutinho-Pereira, José; Pinto, Joaquim G.; Santos, João A.
2013-04-01
The assessment of climate change impacts on viticulture is often carried out using regional climate model (RCM) outputs. These studies rely on either multi-model ensembles or on single-model approaches. The RCM ensembles account for uncertainties inherent to the different models. In this study, using a 16-RCM ensemble under the IPCC A1B scenario, the climate change signal (future minus recent past: 2041-2070 minus 1961-2000) of four bioclimatic indices (Huglin Index - HI, Dryness Index - DI, Hydrothermal Index - HyI and Composite Index - CompI) over mainland Portugal is analysed. A normalized interquartile range (NIQR) of the 16-member ensemble for each bioclimatic index is assessed in order to quantify the ensemble uncertainty. The results show significant increases in the HI index over most of Portugal, with higher values in the Alentejo, Trás-os-Montes and Douro/Porto wine regions, also depicting very low uncertainty. Conversely, the decreases in the DI pattern throughout the country show large uncertainties, except in Minho (northwestern Portugal), where precipitation reaches the highest amounts in Portugal. The HyI shows significant decreases in northwestern Portugal, with relatively low uncertainty all across the country. The CompI depicts significant decreases over Alentejo and increases over Minho, though the decreases over Alentejo reveal high uncertainty, while the increases over Minho show low uncertainty. The assessment of the uncertainty in climate change projections is of great relevance for the wine industry. Quantifying this uncertainty is crucial, since different models may lead to quite different outcomes and may thereby be as crucial as climate change itself to the winemaking sector. This work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project FCOMP-01-0124-FEDER-022692.
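A normalized interquartile range of an ensemble signal can be computed as below. The abstract does not spell out the normalization used, so dividing the IQR by the ensemble median is an assumption made purely for illustration, as are the 16 member values:

```python
def normalized_iqr(members):
    """Interquartile range of an ensemble, normalized by the absolute
    value of its median, as a simple relative spread measure
    (hypothetical normalization; the paper's exact definition may differ)."""
    xs = sorted(members)
    n = len(xs)
    q1, q3 = xs[n // 4], xs[(3 * n) // 4]
    med = xs[n // 2]
    return (q3 - q1) / abs(med)

# Hypothetical 16-member ensemble of a bioclimatic index signal.
signal = [210, 220, 225, 228, 230, 231, 233, 235,
          236, 238, 240, 242, 245, 250, 255, 265]
niqr = normalized_iqr(signal)
```

A small NIQR means the models agree on the sign and size of the change (low uncertainty, as for HI); a large NIQR flags regions where the projected change is not robust across the ensemble (as for DI outside Minho).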
Xie, Shaocheng; Klein, Stephen A.; Zhang, Minghua; ...
2006-10-05
This study represents an effort to develop Single-Column Model (SCM) and Cloud-Resolving Model large-scale forcing data from a sounding array in the high latitudes. An objective variational analysis approach is used to process data collected from the Atmospheric Radiation Measurement Program (ARM) Mixed-Phase Arctic Cloud Experiment (M-PACE), which was conducted over the North Slope of Alaska in October 2004. In this method the observed surface and top of atmosphere measurements are used as constraints to adjust the sounding data from M-PACE in order to conserve column-integrated mass, heat, moisture, and momentum. Several important technical and scientific issues related to the data analysis are discussed. It is shown that the analyzed data reasonably describe the dynamic and thermodynamic features of the Arctic cloud systems observed during M-PACE. Uncertainties in the analyzed forcing fields are roughly estimated by examining the sensitivity of those fields to uncertainties in the upper-air data and surface constraints that are used in the analysis. Impacts of the uncertainties in the analyzed forcing data on SCM simulations are discussed. Results from the SCM tests indicate that the bulk features of the observed Arctic cloud systems can be captured qualitatively well using the forcing data derived in this study, and major model errors can be detected despite the uncertainties that exist in the forcing data as illustrated by the sensitivity tests. Lastly, the possibility of using the European Center for Medium-Range Weather Forecasts analysis data to derive the large-scale forcing over the Arctic region is explored.
Zhao, Wei; Ji, Songbai
2017-04-01
Head angular velocity, instead of acceleration, is more predictive of brain strains. Surprisingly, no study exists that investigates how shape variation in angular velocity profiles affects brain strains, beyond characteristics such as peak magnitude and impulse duration. In this study, we evaluated brain strain uncertainty due to variation in angular velocity profiles and further compared with that resulting from simplifying the profiles into idealized shapes. To do so, we used reconstructed head impacts from American National Football League for shape extraction and simulated head uniaxial coronal rotations from onset to full stop. The velocity profiles were scaled to maintain an identical peak velocity magnitude and duration in order to isolate the shape for investigation. Element-wise peak maximum principal strains from 44 selected impacts were obtained. We found that the shape of angular velocity profile could significantly affect brain strain magnitude (e.g., percentage difference of 4.29-17.89 % in the whole brain relative to the group average, with cumulative strain damage measure (CSDM) uncertainty range of 23.9 %) but not pattern (correlation coefficient of 0.94-0.99). Strain differences resulting from simplifying angular velocity profiles into idealized shapes were largely within the range due to shape variation, in both percentage difference and CSDM (signed difference of 3.91 % on average, with a typical range of 0-6 %). These findings provide important insight into the uncertainty or confidence in the performance of kinematics-based injury metrics. More importantly, they suggest the feasibility to simplify head angular velocity profiles into idealized shapes, at least within the confinements of the profiles evaluated, to enable real-time strain estimation via pre-computation in the future.
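The cumulative strain damage measure (CSDM) reported above can be sketched as the volume fraction of brain elements whose peak strain exceeds a threshold. Equal element volumes and a 0.15 threshold are simplifying assumptions for illustration; the study's finite-element model weights elements by their actual volumes:

```python
def csdm(peak_strains, threshold=0.15):
    """Cumulative strain damage measure: fraction of brain elements whose
    peak maximum principal strain exceeds a threshold (equal element
    volumes assumed here for simplicity)."""
    exceed = sum(1 for s in peak_strains if s > threshold)
    return exceed / len(peak_strains)

# Hypothetical element-wise peak strains for one simulated impact.
strains = [0.05, 0.12, 0.18, 0.22, 0.09, 0.16, 0.30, 0.11]
frac = csdm(strains)
```

Comparing this scalar across velocity-profile shapes (while holding peak magnitude and duration fixed, as in the study) isolates the shape's contribution to strain uncertainty.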
NASA Astrophysics Data System (ADS)
Branger, Flora; Dramais, Guillaume; Horner, Ivan; Le Boursicaud, Raphaël; Le Coz, Jérôme; Renard, Benjamin
2015-04-01
Continuous river discharge data are crucial for the study and management of floods. In most river discharge monitoring networks, these data are obtained at gauging stations, where the stage-discharge relation is modelled with a rating curve to derive discharge from the measurement of water level in the river. Rating curves are usually established using individual ratings (or gaugings). However, using traditional gauging methods during flash floods is challenging for many reasons, including hazardous flow conditions (for both equipment and people), the short duration of flood events, and transient flows during the time needed to perform the gauging. The lack of gaugings implies that the rating curve is often extrapolated well beyond the gauged range for the highest floods, inducing large uncertainties in the computed discharges. We deployed two remote techniques for gauging floods and improving stage-discharge relations for high flow conditions at several hydrometric stations throughout the Ardèche river catchment in France: (1) permanent video-recording stations enabling the implementation of the LS-PIV (Large-Scale Particle Image Velocimetry) image analysis technique, and (2) mobile gaugings using handheld Surface Velocity Radars (SVR). These gaugings were used to estimate the rating curve and its uncertainty using the Bayesian method BaRatin (Le Coz et al., 2014). Importantly, this method explicitly accounts for the uncertainty of individual gaugings, which is especially relevant for remote gaugings since their uncertainty is generally much higher than that of standard intrusive gauging methods. Then, the uncertainty of streamflow records was derived by combining the uncertainty of the rating curve and the uncertainty of stage records. We assessed the impact of these methodological developments for peak flow estimation and for flood descriptors at various time steps.
The combination of field measurement innovation and statistical developments makes it possible to efficiently quantify and reduce the uncertainties of flood peak estimates and flood descriptors at gauging stations. The noncontact streamgauging techniques used in our field campaign strategy are complementary. Permanent LS-PIV stations, once installed and calibrated, can monitor floods automatically and perform many gaugings during a single event, thus documenting the rise, peak and recession of floods. SVR gaugings are more "one-shot" gaugings but can be deployed quickly and at minimal cost over a large territory. Both of these noncontact techniques contribute to a significant reduction of uncertainty in peak hydrographs and flood descriptors at different time steps for a given catchment. Le Coz, J.; Renard, B.; Bonnifait, L.; Branger, F. & Le Boursicaud, R. (2014), 'Combining hydraulic knowledge and uncertain gaugings in the estimation of hydrometric rating curves: A Bayesian approach', Journal of Hydrology 509, 573-587.
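As a rough illustration of how rating-curve and stage uncertainties propagate into discharge, the sketch below pushes Monte Carlo samples through the standard power-law rating curve Q = a*(h - b)^c. The parameter values and 1-sigma uncertainties are hypothetical placeholders; BaRatin infers the actual posterior from hydraulic knowledge and the gaugings themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

def rating_curve(h, a, b, c):
    """Standard power-law stage-discharge relation Q = a * (h - b)**c."""
    return a * np.maximum(h - b, 0.0) ** c

# Hypothetical parameter values and 1-sigma uncertainties (illustration only)
n = 10_000
a = rng.normal(25.0, 2.0, n)   # coefficient
b = rng.normal(0.30, 0.05, n)  # cease-to-flow offset (m)
c = rng.normal(1.60, 0.10, n)  # exponent
h = rng.normal(2.00, 0.02, n)  # stage record with gauge uncertainty (m)

q = rating_curve(h, a, b, c)
lo, mid, hi = np.percentile(q, [2.5, 50.0, 97.5])
print(f"Q ~ {mid:.0f} m3/s, 95% interval [{lo:.0f}, {hi:.0f}]")
```

The interval widens sharply when h lies above the gauged range, which is the extrapolation problem the remote gaugings are meant to constrain.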
Shock Layer Radiation Modeling and Uncertainty for Mars Entry
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.; Brandis, Aaron M.; Sutton, Kenneth
2012-01-01
A model for simulating nonequilibrium radiation from Mars entry shock layers is presented. A new chemical kinetic rate model is developed that provides good agreement with recent EAST and X2 shock tube radiation measurements. This model includes a CO dissociation rate that is a factor of 13 larger than the rate used widely in previous models. Uncertainties in the proposed rates are assessed along with uncertainties in translational-vibrational relaxation modeling parameters. The stagnation point radiative flux uncertainty due to these flowfield modeling parameter uncertainties is computed to vary from 50 to 200% for a range of free-stream conditions, with densities ranging from 5e-5 to 5e-4 kg/m3 and velocities ranging from 6.3 to 7.7 km/s. These conditions cover the range of anticipated peak radiative heating conditions for proposed hypersonic inflatable aerodynamic decelerators (HIADs). Modeling parameters for the radiative spectrum are compiled along with a non-Boltzmann rate model for the dominant radiating molecules, CO, CN, and C2. A method for treating non-local absorption in the non-Boltzmann model is developed, which is shown to result in up to a 50% increase in the radiative flux through absorption by the CO 4th Positive band. The sensitivity of the radiative flux to the radiation modeling parameters is presented and the uncertainty for each parameter is assessed. The stagnation point radiative flux uncertainty due to these radiation modeling parameter uncertainties is computed to vary from 18 to 167% for the considered range of free-stream conditions. The total radiative flux uncertainty is computed as the root sum square of the flowfield and radiation parametric uncertainties, which results in total uncertainties ranging from 50 to 260%. The main contributors to these significant uncertainties are the CO dissociation rate and the CO heavy-particle excitation rates.
Applying the baseline flowfield and radiation models developed in this work, the radiative heating for the Mars Pathfinder probe is predicted to be nearly 20 W/cm2. In contrast to previous studies, this value is shown to be significant relative to the convective heating.
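The root-sum-square combination used above to merge the flowfield and radiation parametric uncertainties can be checked directly with the quoted range end points; the upper end reproduces the quoted ~260%.

```python
import math

def rss(*components):
    """Root-sum-square of independent relative uncertainties (in percent)."""
    return math.sqrt(sum(c * c for c in components))

# End points of the ranges quoted above: flowfield 50-200%, radiation 18-167%
total_lo = rss(50.0, 18.0)
total_hi = rss(200.0, 167.0)
print(f"total uncertainty ~ {total_lo:.0f}% to {total_hi:.0f}%")
```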
A Commercialization Roadmap for Carbon-Negative Energy Systems
NASA Astrophysics Data System (ADS)
Sanchez, D.
2016-12-01
The Intergovernmental Panel on Climate Change (IPCC) envisages the need for large-scale deployment of net-negative CO2 emissions technologies by mid-century to meet stringent climate mitigation goals and yield a net drawdown of atmospheric carbon. Yet there are few commercial deployments of bioenergy with carbon capture and storage (BECCS) outside of niche markets, creating uncertainty about commercialization pathways and sustainability impacts at scale. This uncertainty is exacerbated by the absence of a strong policy framework, such as high carbon prices and research coordination. Here, we propose a strategy for the commercial deployment of BECCS. This roadmap proceeds in three steps: (1) capture and utilization of biogenic CO2 from existing bioenergy facilities, notably ethanol fermentation; (2) thermochemical co-conversion of biomass and fossil fuels, particularly coal; and (3) dedicated, large-scale BECCS. Although biochemical conversion is a proven first market for BECCS, this trajectory alone is unlikely to drive commercialization of BECCS at the gigatonne scale. In contrast to biochemical conversion, thermochemical conversion of coal and biomass enables large-scale production of fuels and electricity with a wide range of carbon intensities, process efficiencies and process scales. Aside from systems integration, the primary technical barriers involve large-scale biomass logistics, gasification and gas cleaning. Key uncertainties around large-scale BECCS deployment are not limited to commercialization pathways; they also include physical constraints on biomass cultivation and CO2 storage, as well as social barriers, including public acceptance of new technologies and conceptions of renewable and fossil energy, which co-conversion systems confound. Despite sustainability risks, this commercialization strategy presents a pathway by which energy suppliers, manufacturers and governments could transition from laggards to leaders in climate change mitigation efforts.
NASA Astrophysics Data System (ADS)
Zhang, H. F.; Chen, B. Z.; Machida, T.; Matsueda, H.; Sawa, Y.; Fukuyama, Y.; Langenfelds, R.; van der Schoot, M.; Xu, G.; Yan, J. W.; Cheng, M. L.; Zhou, L. X.; Tans, P. P.; Peters, W.
2014-06-01
Current estimates of the terrestrial carbon fluxes in Asia show large uncertainties, particularly in the boreal and mid-latitudes and in China. In this paper, we present an updated carbon flux estimate for Asia ("Asia" refers to lands as far west as the Urals and is divided into boreal Eurasia, temperate Eurasia and tropical Asia based on TransCom regions) by introducing aircraft CO2 measurements from the CONTRAIL (Comprehensive Observation Network for Trace gases by Airline) program into an inversion modeling system based on the CarbonTracker framework. We estimate that the average annual total Asian terrestrial land CO2 sink was about -1.56 Pg C yr-1 over the period 2006-2010, which offsets about one-third of the fossil fuel emissions from Asia (+4.15 Pg C yr-1). The uncertainty of the terrestrial uptake estimate was derived from a set of sensitivity tests and ranged from -1.07 to -1.80 Pg C yr-1, comparable to the formal Gaussian error of ±1.18 Pg C yr-1 (1-sigma). The largest sink was found in forests, predominantly in coniferous forests (-0.64 ± 0.70 Pg C yr-1) and mixed forests (-0.14 ± 0.27 Pg C yr-1); the second- and third-largest carbon sinks were found in grass/shrub lands and croplands, accounting for -0.44 ± 0.48 Pg C yr-1 and -0.20 ± 0.48 Pg C yr-1, respectively. The carbon fluxes per ecosystem type have large a priori Gaussian uncertainties, and the reduction of uncertainty based on assimilation of sparse observations over Asia is modest (8.7-25.5%) for most individual ecosystems. The ecosystem flux adjustments follow the detailed a priori spatial patterns by design, which further increases the reliance on the a priori biosphere exchange model. The peak-to-peak amplitude of inter-annual variability (IAV) was 0.57 Pg C yr-1, with annual sinks ranging from -1.71 Pg C yr-1 to -2.28 Pg C yr-1.
The IAV analysis reveals that the Asian CO2 sink was sensitive to climate variations, with the lowest uptake in 2010 concurrent with a summer flood and autumn drought and the largest CO2 sink in 2009 owing to favorable temperature and plentiful precipitation conditions. We also found the inclusion of the CONTRAIL data in the inversion modeling system reduced the uncertainty by 11% over the whole Asian region, with a large reduction in the southeast of boreal Eurasia, southeast of temperate Eurasia and most tropical Asian areas.
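Two back-of-envelope quantities in this abstract, the fraction of fossil emissions offset by the sink and the assimilation-driven uncertainty reduction, can be reproduced directly:

```python
def uncertainty_reduction(sigma_prior, sigma_posterior):
    """Fractional reduction of a 1-sigma uncertainty by data assimilation."""
    return 1.0 - sigma_posterior / sigma_prior

# Sink vs. fossil-fuel source, values from the abstract (Pg C per year):
sink, fossil = 1.56, 4.15
print(f"fraction of fossil emissions offset: {sink / fossil:.2f}")  # about one-third

# An 11% reduction (whole-Asia, from adding CONTRAIL data) corresponds to
# the posterior spread being 89% of the prior spread:
print(f"{uncertainty_reduction(1.00, 0.89):.2f}")
```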
Medieval warming initiated exceptionally large wildfire outbreaks in the Rocky Mountains
Calder, W. John; Parker, Dusty; Stopka, Cody J.; Jiménez-Moreno, Gonzalo; Shuman, Bryan N.
2015-01-01
Many of the largest wildfires in US history burned in recent decades, and climate change explains much of the increase in area burned. The frequency of extreme wildfire weather will increase with continued warming, but many uncertainties still exist about future fire regimes, including how the risk of large fires will persist as vegetation changes. Past fire-climate relationships provide an opportunity to constrain the related uncertainties, and reveal widespread burning across large regions of western North America during past warm intervals. Whether such episodes also burned large portions of individual landscapes has been difficult to determine, however, because uncertainties with the ages of past fires and limited spatial resolution often prohibit specific estimates of past area burned. Accounting for these challenges in a subalpine landscape in Colorado, we estimated century-scale fire synchroneity across 12 lake-sediment charcoal records spanning the past 2,000 y. The percentage of sites burned only deviated from the historic range of variability during the Medieval Climate Anomaly (MCA) between 1,200 and 850 y B.P., when temperatures were similar to recent decades. Between 1,130 and 1,030 y B.P., 83% (median estimate) of our sites burned when temperatures increased ∼0.5 °C relative to the preceding centuries. Lake-based fire rotation during the MCA decreased to an estimated 120 y, representing a 260% higher rate of burning than during the period of dendroecological sampling (360 to −60 y B.P.). Increased burning, however, did not persist throughout the MCA. Burning declined abruptly before temperatures cooled, indicating possible fuel limitations to continued burning.
Development and Testing of Neutron Cross Section Covariance Data for SCALE 6.2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Williams, Mark L; Wiarda, Dorothea
2015-01-01
Neutron cross-section covariance data are essential for many sensitivity/uncertainty and uncertainty quantification assessments performed both within the TSUNAMI suite and more broadly throughout the SCALE code system. The release of ENDF/B-VII.1 included a more complete set of neutron cross-section covariance data: these data form the basis for a new cross-section covariance library to be released in SCALE 6.2. A range of testing is conducted to investigate the properties of these covariance data and ensure that the data are reasonable. These tests include examination of the uncertainty in critical experiment benchmark model keff values due to nuclear data uncertainties, as well as similarity assessments of irradiated pressurized water reactor (PWR) and boiling water reactor (BWR) fuel with suites of critical experiments. The contents of the new covariance library, the testing performed, and the behavior of the new covariance data are described in this paper. The neutron cross-section covariances can be combined with a sensitivity data file generated using the TSUNAMI suite of codes within SCALE to determine the uncertainty in system keff caused by nuclear data uncertainties. The Verified, Archived Library of Inputs and Data (VALID) maintained at Oak Ridge National Laboratory (ORNL) contains over 400 critical experiment benchmark models, and sensitivity data are generated for each of these models. The nuclear data uncertainty in keff is generated for each experiment, and the resulting uncertainties are tabulated and compared to the differences in measured and calculated results. The magnitude of the uncertainty for categories of nuclides (such as actinides, fission products, and structural materials) is calculated for irradiated PWR and BWR fuel to quantify the effect of covariance library changes between the SCALE 6.1 and 6.2 libraries.
One of the primary applications of sensitivity/uncertainty methods within SCALE is the assessment of similarities between benchmark experiments and safety applications. This is described by a c_k value for each experiment with each application. Several studies have analyzed typical c_k values for a range of critical experiments compared with hypothetical irradiated fuel applications. The c_k value is sensitive to the cross-section covariance data because the contribution of each nuclide is influenced by its uncertainty; large uncertainties indicate more likely bias sources and are thus given more weight. Changes in c_k values resulting from different covariance data can be used to examine and assess underlying data changes. These comparisons are performed for PWR and BWR fuel in storage and transportation systems.
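A minimal sketch of the similarity-coefficient idea: c_k is the correlation between the nuclear-data-induced keff uncertainties of two systems, computed from their sensitivity vectors and the shared covariance matrix (the form used by TSUNAMI-IP). The three-parameter sensitivities and covariances below are hypothetical toys, not SCALE data:

```python
import numpy as np

def c_k(s_a, s_e, cov):
    """Correlation of data-induced keff uncertainties between an application
    (sensitivities s_a) and an experiment (s_e), given a shared nuclear-data
    covariance matrix."""
    num = s_a @ cov @ s_e
    den = np.sqrt((s_a @ cov @ s_a) * (s_e @ cov @ s_e))
    return num / den

# Hypothetical sensitivities for three nuclide-reaction pairs, with a
# diagonal relative-covariance matrix (no cross correlations):
cov = np.diag([0.04, 0.01, 0.09])
s_app = np.array([0.50, 0.20, 0.10])
s_exp = np.array([0.45, 0.25, 0.05])
print(f"c_k = {c_k(s_app, s_exp, cov):.3f}")
```

Inflating one nuclide's covariance entry shifts the weight c_k gives that nuclide, which is why covariance library changes between SCALE 6.1 and 6.2 move the c_k values.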
NASA Astrophysics Data System (ADS)
Sherman, James P.; McComiskey, Allison
2018-03-01
Aerosol optical properties measured at Appalachian State University's co-located NASA AERONET and NOAA ESRL aerosol network monitoring sites over a nearly four-year period (June 2012-February 2016) are used, along with satellite-based surface reflectance measurements, to study the seasonal variability of diurnally averaged clear-sky aerosol direct radiative effect (DRE) and radiative efficiency (RE) at the top-of-atmosphere (TOA) and at the surface. Aerosol chemistry and loading at the Appalachian State site are likely representative of the background southeast US (SE US), home to high summertime aerosol loading and one of only a few regions not to have warmed during the 20th century. This study is the first multi-year ground-truth DRE study in the SE US, using aerosol network data products that are often used to validate satellite-based aerosol retrievals. The study is also the first in the SE US to quantify DRE uncertainties and sensitivities to aerosol optical properties and surface reflectance, including their seasonal dependence. Median DRE for the study period is -2.9 W m-2 at the TOA and -6.1 W m-2 at the surface. Monthly median and monthly mean DRE at the TOA (surface) are -1 to -2 W m-2 (-2 to -3 W m-2) during winter months and -5 to -6 W m-2 (-10 W m-2) during summer months. The DRE cycles follow the annual cycle of aerosol optical depth (AOD), which is 9 to 10 times larger in summer than in winter. Aerosol RE is anti-correlated with DRE, with winter values 1.5 to 2 times more negative than summer values. Due to the large seasonal dependence of aerosol DRE and RE, we quantify the sensitivity of DRE to aerosol optical properties and surface reflectance using a calendar day representative of each season (21 December for winter, 21 March for spring, 21 June for summer, and 21 September for fall). We use these sensitivities, along with measurement uncertainties of aerosol optical properties and surface reflectance, to calculate DRE uncertainties. We also estimate uncertainty in calculated diurnally averaged DRE due to diurnal aerosol variability. Aerosol DRE at both the TOA and surface is most sensitive to changes in AOD, followed by single-scattering albedo (ω0). One exception is under high summertime aerosol loading conditions (AOD ≥ 0.15 at 550 nm), when the sensitivity of TOA DRE to ω0 is comparable to that of AOD. Aerosol DRE is less sensitive to changes in the scattering asymmetry parameter (g) and surface reflectance (R). While DRE sensitivity to AOD varies by only ~25 to 30% with season, DRE sensitivities to ω0, g, and R largely follow the annual AOD cycle at APP, varying by factors of 8 to 15 with season.
Since the measurement uncertainties of AOD, ω0, g, and R are comparable at Appalachian State, their relative contributions to DRE uncertainty are largely determined by their (seasonally dependent) DRE sensitivity values, which suggests that the seasonal dependence of DRE uncertainty must be accounted for. Clear-sky aerosol DRE uncertainty at the TOA (surface) due to measurement uncertainties ranges from 0.45 (0.75) W m-2 for December to 1.1 (1.6) W m-2 for June. Expressed as a fraction of DRE computed using monthly median aerosol optical properties and surface reflectance, the DRE uncertainties at the TOA (surface) are 20 to 24% (15 to 22%) for March, June, and September and 49% (50%) for December. The relatively low DRE uncertainties are largely due to the low uncertainty in AOD measured by AERONET. Use of satellite-based AOD measurements by MODIS in the DRE calculations increases DRE uncertainties by a factor of 2 to 5, and DRE uncertainties are then dominated by AOD uncertainty for all seasons. Diurnal variability in AOD (and to a lesser extent g) contributes uncertainties in DRE calculated using daily-averaged aerosol optical properties that are slightly larger (by ~20 to 30%) than the DRE uncertainties due to measurement uncertainties during summer and fall, with comparable uncertainties during winter and spring.
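To first order, propagating measurement uncertainties into a DRE uncertainty amounts to a quadrature sum of sensitivity-weighted input uncertainties. The sketch below uses hypothetical sensitivity and uncertainty values, not the paper's numbers:

```python
import math

def dre_uncertainty(sens, sigma):
    """First-order propagation of independent input uncertainties:
    sigma_DRE = sqrt(sum_i (dDRE/dx_i * sigma_x_i)**2)."""
    return math.sqrt(sum((sens[k] * sigma[k]) ** 2 for k in sens))

# Hypothetical sensitivities (W m-2 per unit input) and 1-sigma input errors:
sens = {"aod": -30.0, "omega0": 20.0, "g": 8.0, "R": 5.0}
sigma = {"aod": 0.01, "omega0": 0.03, "g": 0.02, "R": 0.02}
print(f"sigma_DRE ~ {dre_uncertainty(sens, sigma):.2f} W m-2")
```

With comparable input uncertainties, the terms with the largest sensitivities dominate the sum, which is the paper's point about seasonally varying sensitivity controlling the uncertainty budget.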
Effects of Ensemble Configuration on Estimates of Regional Climate Uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldenson, N.; Mauger, G.; Leung, L. R.
Internal variability in the climate system can contribute substantial uncertainty in climate projections, particularly at regional scales. Internal variability can be quantified using large ensembles of simulations that are identical but for perturbed initial conditions. Here we compare methods for quantifying internal variability. Our study region spans the west coast of North America, which is strongly influenced by El Niño and other large-scale dynamics through their contribution to large-scale internal variability. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find that internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter also produces estimates of uncertainty due to model differences. We conclude that projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible, which has implications for ensemble design in large modeling efforts.
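A toy decomposition of the kind discussed above, assuming a synthetic multi-model "ensemble of opportunity" in which each model contributes a small initial-condition ensemble (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic projections: a few models, each with a small initial-condition
# ensemble; rows share a model, columns differ only in internal variability.
n_models, n_members = 4, 10
model_mean = rng.normal(2.0, 0.5, size=(n_models, 1))     # model differences
noise = rng.normal(0.0, 0.3, size=(n_models, n_members))  # internal variability
proj = model_mean + noise                                 # projected warming, K

# Internal variability: member spread within each model, pooled across models
sigma_internal = proj.std(axis=1, ddof=1).mean()
# Model uncertainty: spread of ensemble-mean projections across models
sigma_model = proj.mean(axis=1).std(ddof=1)
print(f"internal ~ {sigma_internal:.2f} K, model ~ {sigma_model:.2f} K")
```

The same pooled within-model spread can be recovered either from one large single-model ensemble or from several small ensembles, which is the consistency result the abstract reports.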
Precision and accuracy of decay constants and age standards
NASA Astrophysics Data System (ADS)
Villa, I. M.
2011-12-01
40 years of round-robin experiments with age standards teach us that systematic errors must be present in at least N-1 labs if participants provide N mutually incompatible data. In EarthTime, the U-Pb community has produced and distributed synthetic solutions with full metrological traceability. Collector linearity is routinely calibrated under variable conditions (e.g. [1]). Instrumental mass fractionation is measured in-run with double spikes (e.g. 233U-236U). Parent-daughter ratios are metrologically traceable, so the full uncertainty budget of a U-Pb age should coincide with interlaboratory uncertainty. TIMS round-robin experiments indeed show a decrease of N towards the ideal value of 1. Comparing 235U-207Pb with 238U-206Pb ages (e.g. [2]) has resulted in a credible re-evaluation of the 235U decay constant, with lower uncertainty than gamma counting. U-Pb microbeam techniques reveal the link petrology-microtextures-microchemistry-isotope record but do not achieve the low uncertainty of TIMS. In the K-Ar community, N is large; interlaboratory bias is > 10 times self-assessed uncertainty. Systematic errors may have analytical and petrological reasons. Metrological traceability is not yet implemented (substantial advance may come from work in progress, e.g. [7]). One of the worst problems is collector stability and linearity. Using electron multipliers (EM) instead of Faraday buckets (FB) reduces both dynamic range and collector linearity. Mass spectrometer backgrounds are never zero; the extent as well as the predictability of their variability must be propagated into the uncertainty evaluation. The high isotope ratio of the atmospheric Ar requires a large dynamic range over which linearity must be demonstrated under all analytical conditions to correctly estimate mass fractionation. The only assessment of EM linearity in Ar analyses [3] points out many fundamental problems; the onus of proof is on every laboratory claiming low uncertainties. 
Finally, sample size reduction is often associated with reduced clean-up times, intended to increase the sample/blank ratio; this may be self-defeating, as "dry blanks" [4] do not represent either the isotopic composition or the amount of Ar released by the sample chamber when exposed to unpurified sample gas. Single grains enhance background and purification problems relative to large sample sizes measured on FB. Petrologically, many natural "standards" are not ideal (e.g. MMhb1 [5], B4M [6]), as their original distributors never conceived of petrology as the decisive control on isotope retention. Comparing ever smaller aliquots of unequilibrated minerals causes ever larger age variations. Metrologically traceable synthetic isotope mixtures still lie in the future. The petrological non-ideality of natural standards does not allow a metrological uncertainty budget. Collector behavior, on the contrary, does. Its quantification will, by definition, make true intralaboratory uncertainty greater than or equal to interlaboratory bias. [1] Chen J, Wasserburg GJ, 1981. Analyt Chem 53, 2060-2067 [2] Mattinson JM, 2010. Chem Geol 275, 186-198 [3] Turrin B et al, 2010. G-cubed, 11, Q0AA09 [4] Baur H, 1975. PhD thesis, ETH Zürich, No. 6596 [5] Villa IM et al, 1996. Contrib Mineral Petrol 126, 67-80 [6] Villa IM, Heri AR, 2010. AGU abstract V31A-2296 [7] Morgan LE et al, in press. G-cubed, 2011GC003719
Nanothermometer Based on Resonant Tunneling Diodes: From Cryogenic to Room Temperatures.
Pfenning, Andreas; Hartmann, Fabian; Rebello Sousa Dias, Mariama; Castelano, Leonardo Kleber; Süßmeier, Christoph; Langer, Fabian; Höfling, Sven; Kamp, Martin; Marques, Gilmar Eugenio; Worschech, Lukas; Lopez-Richard, Victor
2015-06-23
Sensor miniaturization together with broadening of the temperature sensing range are fundamental challenges in nanothermometry. By exploiting a large temperature-dependent screening effect observed in a resonant tunneling diode in sequence with a GaInNAs/GaAs quantum well, we present a low-dimensional, wide-range, and highly sensitive nanothermometer. This sensor shows a large threshold voltage shift of the bistable switching of more than 4.5 V for a temperature rise from 4.5 to 295 K, with a linear voltage-temperature response of 19.2 mV K(-1) and a temperature uncertainty in the millikelvin (mK) range. Monitoring the electroluminescence emission spectrum additionally provides an optical read-out of the thermometer. The combination of electrical and optical read-outs, together with the sensor architecture, makes the device excel as a thermometer capable of noninvasive temperature sensing with high local resolution and sensitivity.
Uncertainty in aerosol hygroscopicity resulting from semi-volatile organic compounds
NASA Astrophysics Data System (ADS)
Goulden, Olivia; Crooks, Matthew; Connolly, Paul
2018-01-01
We present a novel method of exploring the effect of uncertainties in aerosol properties on cloud droplet number using existing cloud droplet activation parameterisations. Aerosol properties of a single involatile particle mode are randomly sampled within an uncertainty range and resulting maximum supersaturations and critical diameters calculated using the cloud droplet activation scheme. Hygroscopicity parameters are subsequently derived and the values of the mean and uncertainty are found to be comparable to experimental observations. A recently proposed cloud droplet activation scheme that includes the effects of co-condensation of semi-volatile organic compounds (SVOCs) onto a single lognormal mode of involatile particles is also considered. In addition to the uncertainties associated with the involatile particles, concentrations, volatility distributions and chemical composition of the SVOCs are randomly sampled and hygroscopicity parameters are derived using the cloud droplet activation scheme. The inclusion of SVOCs is found to have a significant effect on the hygroscopicity and contributes a large uncertainty. For non-volatile particles that are effective cloud condensation nuclei, the co-condensation of SVOCs reduces their actual hygroscopicity by approximately 25 %. A new concept of an effective hygroscopicity parameter is introduced that can computationally efficiently simulate the effect of SVOCs on cloud droplet number concentration without direct modelling of the organic compounds. These effective hygroscopicities can be as much as a factor of 2 higher than those of the non-volatile particles onto which the volatile organic compounds condense.
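The hygroscopicity parameters derived from critical diameters and supersaturations follow the standard kappa-Koehler relation (Petters and Kreidenweis, 2007); the snippet below shows the inversion for a single hypothetical activation point, with textbook values for the water constants:

```python
import math

def kappa_from_activation(d_dry, s_crit, T=298.15):
    """kappa-Koehler relation: recover the hygroscopicity parameter kappa
    from a dry diameter d_dry (m) and critical supersaturation s_crit
    (fractional, e.g. 0.002 for 0.2%)."""
    sigma_w = 0.072    # surface tension of water, J m-2
    M_w = 0.018015     # molar mass of water, kg mol-1
    rho_w = 997.0      # density of water, kg m-3
    R = 8.314          # gas constant, J mol-1 K-1
    A = 4.0 * sigma_w * M_w / (R * T * rho_w)  # Kelvin-term coefficient, m
    return 4.0 * A**3 / (27.0 * d_dry**3 * math.log(1.0 + s_crit)**2)

# A hypothetical 100 nm particle activating at 0.2% supersaturation:
print(f"kappa ~ {kappa_from_activation(100e-9, 0.002):.2f}")
```

Randomly sampling d_dry and s_crit within their uncertainty ranges and pushing each draw through this inversion yields the spread of derived kappa values the abstract describes.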
Gasche, Loïc; Mahévas, Stéphanie; Marchal, Paul
2013-01-01
Ecosystems are usually complex, nonlinear and strongly influenced by poorly known environmental variables. Among these systems, marine ecosystems have high uncertainties: marine populations in general are known to exhibit large levels of natural variability and the intensity of fishing efforts can change rapidly. These uncertainties are a source of risks that threaten the sustainability of both fish populations and fishing fleets targeting them. Appropriate management measures have to be found in order to reduce these risks and decrease sensitivity to uncertainties. Methods have been developed within decision theory that aim at allowing decision making under severe uncertainty. One of these methods is the information-gap decision theory. The info-gap method has started to permeate ecological modelling, with recent applications to conservation. However, these practical applications have so far been restricted to simple models with analytical solutions. Here we implement a deterministic approach based on decision theory in a complex model of the Eastern English Channel. Using the ISIS-Fish modelling platform, we model populations of sole and plaice in this area. We test a wide range of values for ecosystem, fleet and management parameters. From these simulations, we identify management rules controlling fish harvesting that allow reaching management goals recommended by ICES (International Council for the Exploration of the Sea) working groups while providing the highest robustness to uncertainties on ecosystem parameters.
Sensitivity of Polar Stratospheric Ozone Loss to Uncertainties in Chemical Reaction Kinetics
NASA Technical Reports Server (NTRS)
Kawa, S. Randolph; Stolarksi, Richard S.; Douglass, Anne R.; Newman, Paul A.
2008-01-01
Several recent observational and laboratory studies of processes involved in polar stratospheric ozone loss have prompted a reexamination of aspects of our understanding of this key indicator of global change. To a large extent, our confidence in understanding and projecting changes in polar and global ozone is based on our ability to simulate these processes in numerical models of chemistry and transport. The fidelity of the models is assessed in comparison with a wide range of observations. These models depend on laboratory-measured kinetic reaction rates and photolysis cross sections to simulate molecular interactions. A typical stratospheric chemistry mechanism has on the order of 50-100 species undergoing over a hundred intermolecular reactions and several tens of photolysis reactions. The rates of all of these reactions are subject to uncertainty, some substantial. Given the complexity of the models, however, it is difficult to quantify uncertainties in many aspects of the system. In this study we use a simple box-model scenario for Antarctic ozone to estimate the uncertainty in loss attributable to known reaction kinetic uncertainties. Following the method of earlier work, rates and uncertainties from the latest laboratory evaluations are applied in random combinations. We determine the key reactions and rates contributing the largest potential errors and compare the results to observations to evaluate which combinations are consistent with atmospheric data. Implications for our theoretical and practical understanding of polar ozone loss will be assessed.
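The "random combinations" approach can be sketched as lognormal sampling of each rate coefficient within its evaluated uncertainty factor; the nominal rates and factors below are hypothetical placeholders, not evaluation values:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_rates(k_nominal, f_uncert, n):
    """Draw random rate-coefficient combinations: each rate is scaled by its
    uncertainty factor f raised to an independent standard-normal deviate
    (a common lognormal convention in kinetics evaluations), so roughly 68%
    of draws fall within k0/f ... k0*f."""
    z = rng.standard_normal((n, len(k_nominal)))
    return np.asarray(k_nominal) * np.asarray(f_uncert) ** z

# Two hypothetical rates with uncertainty factors of 1.2 and 2.0:
samples = sample_rates([1.0e-11, 3.0e-15], [1.2, 2.0], 5000)
print(samples.shape, np.median(samples, axis=0))
```

Each sampled rate set drives one box-model run; comparing the resulting ozone-loss distribution against observations identifies which rate combinations are atmospherically plausible.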
Melnychuk, O.; Grassellino, A.; Romanenko, A.
2014-12-19
In this paper, we discuss error analysis for intrinsic quality factor (Q₀) and accelerating gradient (Eacc) measurements in superconducting radio frequency (SRF) resonators. The analysis is applicable to cavity performance tests that are routinely performed at SRF facilities worldwide. We review the sources of uncertainties along with the assumptions on their correlations and present uncertainty calculations with a more complete procedure for the treatment of correlations than in previous publications [T. Powers, in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24–27]. Applying this approach to cavity data collected at the Vertical Test Stand facility at Fermilab, we estimated the total uncertainty for both Q₀ and Eacc to be at the level of approximately 4% for input coupler coupling parameter β₁ in the [0.5, 2.5] range. Above 2.5 (below 0.5), the Q₀ uncertainty increases (decreases) with β₁, whereas the Eacc uncertainty, in contrast with results in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24–27], is independent of β₁. Overall, our estimated Q₀ uncertainty is approximately half as large as that in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24–27].
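The core of the treatment-of-correlations question is generic first-order error propagation with a full input covariance matrix; the sketch below uses toy gradients, uncertainties and a correlation, not the cavity-specific formulas:

```python
import numpy as np

def propagated_variance(grad, cov):
    """General first-order error propagation with correlated inputs:
    var(f) = g^T C g, where g holds the partial derivatives df/dx_i and
    C is the covariance matrix of the measured quantities."""
    g = np.asarray(grad)
    return g @ np.asarray(cov) @ g

# Toy example: two inputs with 2% relative errors and correlation 0.5;
# f ~ x1/x2, so in relative terms the gradient is (+1, -1):
sig = np.array([0.02, 0.02])
rho = 0.5
cov = np.outer(sig, sig) * np.array([[1.0, rho], [rho, 1.0]])
g = np.array([1.0, -1.0])
print(f"relative sigma_f = {np.sqrt(propagated_variance(g, cov)):.3f}")
```

Here the positive correlation partially cancels in the ratio, giving a smaller uncertainty than the uncorrelated quadrature sum (0.028): ignoring correlations can bias the quoted error either way.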
Uncertainties in estimates of the risks of late effects from space radiation
NASA Astrophysics Data System (ADS)
Cucinotta, F. A.; Schimmerling, W.; Wilson, J. W.; Peterson, L. E.; Saganti, P. B.; Dicello, J. F.
2004-01-01
Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because of the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions, which causes estimates of the risk of late effects to be highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Using the linear-additivity model for radiation risks, we use Monte-Carlo sampling from subjective uncertainty distributions in each factor to obtain an estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios including a deep space outpost and Mars missions of duration of 360, 660, and 1000 days. The major results are the quantification of the uncertainties in current risk estimates, the identification of factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction using the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in terms of quantitative metrics, e.g., the number of days in space without exceeding a given risk level within well-defined confidence limits.
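The Monte Carlo sampling over subjective factor distributions can be sketched as follows; under the linear-additivity model the risk is a product of uncertain factors, and all distributions below are illustrative stand-ins, not the authors' actual inputs:

```python
import numpy as np

rng = np.random.default_rng(7)

# Each factor gets a subjective (here lognormal) uncertainty distribution;
# the means and spreads are hypothetical, for illustration only.
n = 100_000
quality = rng.lognormal(np.log(3.0), 0.5, n)  # radiation quality factor
dose = rng.lognormal(np.log(0.4), 0.2, n)     # mission dose, Gy
coeff = rng.lognormal(np.log(0.05), 0.4, n)   # risk per unit weighted dose

risk = quality * dose * coeff
lo, med, hi = np.percentile(risk, [2.5, 50.0, 97.5])
print(f"median risk {med:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Because each factor's spread multiplies through, the resulting confidence interval is far wider than any single factor's, which is why broad GCR risk distributions can mask modest shielding gains.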
Uncertainty in projected climate change arising from uncertain fossil-fuel emission factors
NASA Astrophysics Data System (ADS)
Quilcaille, Y.; Gasser, T.; Ciais, P.; Lecocq, F.; Janssens-Maenhout, G.; Mohr, S.
2018-04-01
Emission inventories are widely used by the climate community, but their uncertainties are rarely accounted for. In this study, we evaluate the uncertainty in projected climate change induced by uncertainties in fossil-fuel emissions, accounting for non-CO2 species co-emitted with the combustion of fossil fuels and their use in industrial processes. Using consistent historical reconstructions and three contrasting future projections of fossil-fuel extraction from Mohr et al., we calculate CO2 emissions and their uncertainties stemming from estimates of fuel carbon content, net calorific value, and oxidation fraction. Our historical reconstructions of fossil-fuel CO2 emissions are consistent with other inventories in terms of average and range. The uncertainties sum to a ±15% relative uncertainty in cumulative CO2 emissions by 2300. Uncertainties in the emissions of non-CO2 species associated with the use of fossil fuels are estimated using co-emission ratios varying with time. Using these inputs, we run the compact Earth system model OSCAR v2.2 in a Monte Carlo setup to attribute the uncertainty in projected global surface temperature change (ΔT) to three sources: the Earth system's response, fossil-fuel CO2 emissions, and non-CO2 co-emissions. Under the three future fuel-extraction scenarios, we simulate the median ΔT to be 1.9, 2.7, or 4.0 °C in 2300, with associated 90% confidence intervals of about 65%, 52%, and 42%. We show that virtually all of the total uncertainty is attributable to the uncertainty in the future Earth system's response to the anthropogenic perturbation. We conclude that, for global temperature projections, the uncertainty in emission estimates can be neglected in the face of the large uncertainty in the Earth system's response to the forcing of emissions.
However, this result does not hold for all variables of the climate system: for the atmospheric partial pressure of CO2 and for the radiative forcing of tropospheric ozone, the emissions-induced uncertainty represents more than 40% of the uncertainty due to the Earth system's response.
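The attribution of projected-ΔT uncertainty to its sources can be illustrated by re-running a Monte Carlo ensemble with the emission-related sources frozen at their central values; the toy response function, distributions, and spreads below are assumptions for illustration, not OSCAR v2.2.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 20_000

def delta_t(ecs, co2_scale, nonco2_scale):
    # Toy temperature response: warming grows with the Earth-system
    # sensitivity and the (scaled) cumulative forcing. Illustrative only.
    return ecs * np.log2(1.0 + 1.2 * co2_scale + 0.3 * nonco2_scale)

ecs = rng.lognormal(np.log(1.1), 0.30, N)   # Earth-system response
co2 = rng.normal(1.0, 0.075, N)             # ~ +/-15% (2 sigma) cumulative CO2
nonco2 = rng.normal(1.0, 0.20, N)           # non-CO2 co-emissions

full = delta_t(ecs, co2, nonco2)
# Freeze the emission sources at their central values: the remaining spread
# is the share of variance attributable to the Earth-system response alone.
response_only = delta_t(ecs, 1.0, 1.0)
share = np.var(response_only) / np.var(full)
print(f"share of variance from Earth-system response: {share:.0%}")
```

With these toy spreads the Earth-system response dominates, qualitatively matching the paper's conclusion for ΔT.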
Wang, Yi-Ya; Zhan, Xiu-Chun
2014-04-01
The evaluation of uncertainty in analytical results for 165 geological samples measured by polarized energy-dispersive X-ray fluorescence spectrometry (P-EDXRF) is reported, following internationally accepted guidelines. One hundred sixty-five pressed pellets of geological samples with similar matrices and reliable reference values were analyzed by P-EDXRF. The samples were divided into several concentration sections within the concentration range of each component, and the relative uncertainties arising from precision and from accuracy were evaluated for 27 components. For a given element, the relative uncertainty due to precision in a concentration section was calculated as the average relative standard deviation over the concentration levels in that section, with n = 6 replicate results per level. The relative uncertainty due to accuracy in a concentration section was evaluated as the relative standard deviation of the relative deviations of the concentration levels in that section. Following error-propagation theory, the precision and accuracy uncertainties were combined into a global uncertainty, which serves as the method uncertainty. This evaluation model resolves a series of difficulties in uncertainty evaluation, such as uncertainties caused by the complex matrices of geological samples, the calibration procedure, standard and unknown samples, matrix correction, overlap correction, sample preparation, instrument condition, and the mathematical model. The uncertainty of analytical results obtained with this method can serve as the uncertainty for unknown samples of similar matrix within the same concentration section. The evaluation model is a basic statistical method of practical value and provides a foundation for building a subsequent uncertainty-evaluation function.
However, the model requires a large number of samples and therefore cannot be applied directly to sample types with different matrices. We aim to use this study as a basis for establishing a mathematically sound statistical function model that can be applied to different types of samples.
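A minimal sketch of the error-propagation step, assuming the precision and accuracy components are independent relative standard uncertainties combined in quadrature (the example percentages are hypothetical):

```python
import math

def combined_relative_uncertainty(u_precision, u_accuracy):
    # Root-sum-square combination of independent relative uncertainties,
    # per standard error-propagation theory.
    return math.sqrt(u_precision**2 + u_accuracy**2)

# Hypothetical values for one element in one concentration section:
u_prec = 0.012  # mean relative standard deviation of n = 6 replicates
u_acc = 0.020   # relative standard deviation of the relative deviations
u_method = combined_relative_uncertainty(u_prec, u_acc)
print(f"method uncertainty: {u_method:.1%}")
```

The combined value is dominated by the larger component, so improving only the smaller one gains little.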
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakos, James Thomas
2004-04-01
It would not be possible to confidently qualify weapon-system performance or validate computer codes without knowing the uncertainty of the experimental data used. This report provides uncertainty estimates associated with thermocouple data for temperature measurements from two of Sandia's large-scale thermal facilities. These two facilities, the Radiant Heat Facility (RHF) and the Lurance Canyon Burn Site (LCBS), routinely gather data from normal and abnormal thermal environment experiments. They are managed by Fire Science & Technology Department 09132. Uncertainty analyses were performed for several thermocouple (TC) data acquisition systems (DASs) used at the RHF and LCBS. These analyses apply to Type K, chromel-alumel thermocouples in various constructions (fiberglass-sheathed TC wire and mineral-insulated, metal-sheathed (MIMS) TC assemblies) and are easily extended to other TC materials (e.g., copper-constantan). Several DASs were analyzed: (1) a Hewlett-Packard (HP) 3852A system and (2) several National Instruments (NI) systems. The uncertainty analyses covered the entire system from the TC to the DAS output file. Uncertainty sources include TC mounting errors, ANSI standard calibration uncertainty for Type K TC wire, potential errors due to temperature gradients inside connectors, extension-wire uncertainty, and DAS hardware uncertainties including noise, common-mode rejection ratio, digital voltmeter accuracy, mV-to-temperature conversion, analog-to-digital conversion, and other possible sources. Typical results for 'normal' environments (e.g., maximum of 300-400 K) showed the total uncertainty to be about ±1% of the reading in absolute temperature. In high-temperature or high-heat-flux ('abnormal') thermal environments, total uncertainties range up to ±2-3% of the reading (maximum of 1300 K). The higher uncertainties in abnormal thermal environments are caused by increased errors due to imperfect TC attachment to the test item.
'Best practices' are provided in Section 9 to help the user obtain the best measurements possible.
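Combining independent component uncertainties by root-sum-square, as in such analyses, can be sketched as follows; the component values are placeholders, not the report's tabulated numbers.

```python
import math

# Placeholder component uncertainties, as % of reading (absolute temperature):
sources_pct = {
    "TC mounting": 0.6,
    "Type K wire calibration (ANSI)": 0.4,
    "extension wire": 0.3,
    "DAS hardware (noise, DVM, A/D)": 0.5,
    "mV-to-temperature conversion": 0.2,
}

# Independent sources combine in root-sum-square.
total_pct = math.sqrt(sum(u ** 2 for u in sources_pct.values()))
reading_K = 350.0
print(f"total: +/-{total_pct:.2f}% of reading "
      f"= +/-{reading_K * total_pct / 100:.1f} K at {reading_K:.0f} K")
```

With these placeholder components the total lands near ±1% of reading, the order of magnitude reported for 'normal' environments.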
Adams, Vanessa M.; Segan, Daniel B.; Pressey, Robert L.
2011-01-01
Many governments have recently gone on record promising large-scale expansions of protected areas to meet global commitments such as the Convention on Biological Diversity. As systems of protected areas are expanded to be more comprehensive, they are more likely to be implemented if planners have realistic budget estimates so that appropriate funding can be requested. Estimating financial budgets a priori must acknowledge the inherent uncertainties and assumptions associated with key parameters, so planners should recognize these uncertainties by estimating ranges of potential costs. We explore the challenge of budgeting a priori for protected area expansion in the face of uncertainty, specifically considering the future expansion of protected areas in Queensland, Australia. The government has committed to adding ∼12 million ha to the reserve system, bringing the total area protected to 20 million ha by 2020. We used Marxan to estimate the costs of potential reserve designs with data on actual land value, market value, transaction costs, and land tenure. With scenarios, we explored three sources of budget variability: size of biodiversity objectives; subdivision of properties; and legal acquisition routes varying with tenure. Depending on the assumptions made, our budget estimates ranged from $214 million to $2.9 billion. Estimates were most sensitive to assumptions made about legal acquisition routes for leasehold land. Unexpected costs (costs encountered by planners when real-world costs deviate from assumed costs) responded non-linearly to inability to subdivide and percentage purchase of private land. A financially conservative approach - one that safeguards against large cost increases while allowing for potential financial windfalls - would involve less optimistic assumptions about acquisition and subdivision to allow Marxan to avoid expensive properties where possible while meeting conservation objectives. 
We demonstrate how a rigorous analysis can inform discussions about the expansion of systems of protected areas, including the identification of factors that influence budget variability. PMID:21980459
An Oil-Bath-Based 293 K to 473 K Blackbody Source
Fowler, Joel B.
1996-01-01
A high-temperature oil-bath-based blackbody source has been designed and constructed in the Radiometric Physics Division at the National Institute of Standards and Technology, Gaithersburg, MD. The goal of this work was to design a large-aperture blackbody source with highly uniform radiance across the aperture, good temporal stability, and good reproducibility. This blackbody source operates in the 293 K to 473 K range with blackbody temperature combined standard uncertainties of 7.2 mK to 30.9 mK. The calculated emissivity of this source is 0.9997 with a standard uncertainty of 0.0003. With a 50 mm limiting aperture at the cavity entrance, the emissivity increases to 0.99996. PMID:27805082
A Third Generation Water Bath Based Blackbody Source
Fowler, Joel B.
1995-01-01
A third-generation water-bath-based blackbody source has been designed and constructed in the Radiometric Physics Division at the National Institute of Standards and Technology, Gaithersburg, MD. The goal of this work was to design a large-aperture blackbody source with improved temporal stability and reproducibility compared with earlier designs, as well as improved ease of use. These blackbody sources operate in the 278 K to 353 K range with water temperature combined standard uncertainties of 3.5 mK to 7.8 mK. The calculated emissivity of these sources is 0.9997 with a relative standard uncertainty of 0.0003. With a 50 mm limiting aperture at the cavity entrance, the emissivity increases to 0.99997. PMID:29151763
Decorrelated jet substructure tagging using adversarial neural networks
NASA Astrophysics Data System (ADS)
Shimmin, Chase; Sadowski, Peter; Baldi, Pierre; Weik, Edison; Whiteson, Daniel; Goul, Edward; Søgaard, Andreas
2017-10-01
We describe a strategy for constructing a neural network jet substructure tagger which powerfully discriminates boosted decay signals while remaining largely uncorrelated with the jet mass. This reduces the impact of systematic uncertainties in background modeling while enhancing signal purity, resulting in improved discovery significance relative to existing taggers. The network is trained using an adversarial strategy, resulting in a tagger that learns to balance classification accuracy with decorrelation. As a benchmark scenario, we consider the case where large-radius jets originating from a boosted resonance decay are discriminated from a background of nonresonant quark and gluon jets. We show that in the presence of systematic uncertainties on the background rate, our adversarially trained, decorrelated tagger considerably outperforms a conventionally trained neural network, despite having a slightly worse signal-background separation power. We generalize the adversarial training technique to include a parametric dependence on the signal hypothesis, training a single network that provides optimized, interpolatable decorrelated jet tagging across a continuous range of hypothetical resonance masses, after training on discrete choices of the signal mass.
Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Riel, Bryan; Owen, Susan E; Moore, Angelyn W; Samsonov, Sergey V; Ortega Culaciati, Francisco; Minson, Sarah E.
2016-01-01
The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two-week-long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Neither the main shock nor the Mw=7.7 aftershock ruptured to the trench; most of the seismic gap was left unbroken, leaving open the possibility of a future large earthquake in the region.
A model for the kinetics of a solar-pumped long path laser experiment
NASA Technical Reports Server (NTRS)
Stock, L. V.; Wilson, J. W.; Deyoung, R. J.
1986-01-01
A kinetic model for a solar-simulator pumped iodine laser system is developed and compared to an experiment in which the solar simulator output is dispersed over a large active volume (150 cu cm) with low simulator light intensity (approx. 200 solar constants). A trace foreign gas which quenches the upper level is introduced into the model. Furthermore, a constant representing optical absorption of the stimulated emission is introduced, in addition to a constant representing the scattering at each of the mirrors, via the optical cavity time constant. The non-uniform heating of the gas is treated as well as the pressure change as a function of time within the cavity. With these new phenomena introduced into the kinetic model, a best reasonable fit to the experimental data is found by adjusting the reaction rate coefficients within the range of known uncertainty by numerical methods giving a new bound within this range of uncertainty. The experimental parameters modeled are the lasing time, laser pulse energy, and time to laser threshold.
NASA Astrophysics Data System (ADS)
Xu, Ruirui; Ma, Zhongyu; Muether, Herbert; van Dalen, E. N. E.; Liu, Tinjin; Zhang, Yue; Zhang, Zhi; Tian, Yuan
2017-09-01
A relativistic microscopic optical model potential for nucleon-nucleus scattering, named CTOM, is investigated in the framework of the Dirac-Brueckner-Hartree-Fock (DBHF) approach. The microscopic character of CTOM is guaranteed by rigorously adopting the isospin-dependent DBHF calculation within the subtracted T-matrix scheme. To verify its predictive power, a global study of n, p + A scattering is carried out. The predicted scattering observables reproduce experimental data with good accuracy over a broad range of targets and a large region of energies, with only two free elements: the free-range factor t in the applied improved local density approximation and minor adjustments of the scalar and vector potentials in the low-density region. In addition, to estimate the uncertainty of the theoretical results, a deterministic simple least-squares approach is employed to derive the covariance of the predicted angular distributions, which is also briefly presented in this paper.
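The least-squares covariance propagation mentioned above can be sketched via the linearized relation V_pred = J V_param Jᵀ, where J is the sensitivity (Jacobian) matrix of the predictions with respect to the fitted parameters; the numbers below are illustrative, not CTOM's actual sensitivities.

```python
import numpy as np

# Sensitivities of three predicted observables to two model parameters
# (illustrative placeholder values):
J = np.array([[1.0, 0.2],
              [0.8, 0.5],
              [0.3, 0.9]])
sigma_obs = 0.05  # assumed uniform data uncertainty used in the fit

# Parameter covariance from the least-squares normal equations.
V_param = sigma_obs**2 * np.linalg.inv(J.T @ J)

# Propagate to the predictions (e.g. angular-distribution points).
V_pred = J @ V_param @ J.T
error_bands = np.sqrt(np.diag(V_pred))
```

The diagonal of V_pred gives pointwise error bands on the angular distribution; the off-diagonal terms carry the correlations induced by the shared parameters.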
Life cycle analysis of fuel production from fast pyrolysis of biomass.
Han, Jeongwoo; Elgowainy, Amgad; Dunn, Jennifer B; Wang, Michael Q
2013-04-01
A well-to-wheels (WTW) analysis of pyrolysis-based gasoline was conducted and compared with petroleum gasoline. To address the variation and uncertainty in the pyrolysis pathways, probability distributions for key parameters were developed with data from literature. The impacts of two different hydrogen sources for pyrolysis oil upgrading and of two bio-char co-product applications were investigated. Reforming fuel gas/natural gas for H2 reduces WTW GHG emissions by 60% (range of 55-64%) compared to the mean of petroleum fuels. Reforming pyrolysis oil for H2 increases the WTW GHG emissions reduction up to 112% (range of 97-126%), but reduces petroleum savings per unit of biomass used due to the dramatic decline in the liquid fuel yield. Thus, the hydrogen source causes a trade-off between GHG reduction per unit fuel output and petroleum displacement per unit biomass used. Soil application of biochar could provide significant carbon sequestration with large uncertainty. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, B.; Lee, H. C.; Duan, X.; Shen, C.; Zhou, L.; Jia, X.; Yang, M.
2017-09-01
The dual-energy CT-based (DECT) approach holds promise in reducing the overall uncertainty in proton stopping-power-ratio (SPR) estimation as compared to the conventional stoichiometric calibration approach. The objective of this study was to analyze the factors contributing to uncertainty in SPR estimation using the DECT-based approach and to derive a comprehensive estimate of the range uncertainty associated with SPR estimation in treatment planning. Two state-of-the-art DECT-based methods were selected and implemented on a Siemens SOMATOM Force DECT scanner. The uncertainties were first divided into five independent categories. The uncertainty associated with each category was estimated for lung, soft and bone tissues separately. A single composite uncertainty estimate was eventually determined for three tumor sites (lung, prostate and head-and-neck) by weighting the relative proportion of each tissue group for that specific site. The uncertainties associated with the two selected DECT methods were found to be similar, therefore the following results applied to both methods. The overall uncertainty (1σ) in SPR estimation with the DECT-based approach was estimated to be 3.8%, 1.2% and 2.0% for lung, soft and bone tissues, respectively. The dominant factor contributing to uncertainty in the DECT approach was the imaging uncertainties, followed by the DECT modeling uncertainties. Our study showed that the DECT approach can reduce the overall range uncertainty to approximately 2.2% (2σ) in clinical scenarios, in contrast to the previously reported 1%.
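A composite, site-level uncertainty can be sketched by weighting the per-tissue SPR uncertainties by the proportion of each tissue group along the beam path. The weights below are hypothetical, and simple linear weighting is only one plausible scheme; the study's exact weighting may differ.

```python
# Per-tissue SPR uncertainties (1 sigma) from the study:
u_tissue = {"lung": 0.038, "soft": 0.012, "bone": 0.020}

# Hypothetical proportions of each tissue group along the beam path
# for one treatment site (these weights are assumptions):
w_site = {"lung": 0.50, "soft": 0.45, "bone": 0.05}

# Linear weighting: range errors accumulate along the ray.
u_composite = sum(w_site[t] * u_tissue[t] for t in u_tissue)
print(f"composite SPR uncertainty (1 sigma): {u_composite:.1%}")
```

The composite always falls between the best-constrained and worst-constrained tissue groups, which is why lung-heavy sites carry the largest range margins.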
Impact of spatially correlated pore-scale heterogeneity on drying porous media
NASA Astrophysics Data System (ADS)
Borgman, Oshri; Fantinel, Paolo; Lühder, Wieland; Goehring, Lucas; Holtzman, Ran
2017-07-01
We study the effect of spatially correlated heterogeneity on the isothermal drying of porous media, combining a minimal pore-scale model with microfluidic experiments that share the same pore geometry. Our simulated drying behavior compares favorably with the experiments, considering the large sensitivity of the emergent behavior to the uncertainty associated with even small manufacturing errors. We show that increasing the correlation length in particle sizes promotes preferential drying of clusters of large pores, prolonging liquid connectivity and surface wetness and thus sustaining higher drying rates for longer periods. Our findings improve the quantitative understanding of how pore-scale heterogeneity affects drying, which plays a role in processes ranging from fuel cells to the curing of paints and cements to the global budgets of energy, water, and solutes in soils.
Critical discussion on the "observed" water balances of five sub-basins in the Everest region
NASA Astrophysics Data System (ADS)
Chevallier, P.; Eeckman, J.; Nepal, S.; Delclaux, F.; Wagnon, P.; Brun, F.; Koirala, D.
2017-12-01
The hydrometeorological components of five Dudh Koshi River sub-basins on the Nepalese side of Mount Everest have been monitored during four hydrological years (2013-2017), with altitudes ranging from 2000 m to the Everest summit, areas between 4.65 and 1207 km², and proportions of glaciated area between nil and 45%. This data set is completed with glacier mass-balance observations. The analysis of the observed data and the resulting water balances shows large uncertainties of different types: aleatory, epistemic, or semantic, following the classification proposed by Beven (2016). The discussion is illustrated using results from two modeling approaches, physical (ISBA, Noilhan and Planton, 1989) and conceptual (J2000, Krause, 2001), as well as large-scale glacier mass balances obtained by means of a recent remote sensing processing method. References: Beven, K., 2016. Facets of uncertainty: epistemic uncertainty, non-stationarity, likelihood, hypothesis testing, and communication. Hydrological Sciences Journal 61, 1652-1665. doi:10.1080/02626667.2015.1031761. Krause, P., 2001. Das hydrologische Modellsystem J2000: Beschreibung und Anwendung in großen Flusseinzugsgebieten. Schriften des Forschungszentrums Jülich, Reihe Umwelt/Environment, Band 29. Noilhan, J., Planton, S., 1989. A simple parameterization of land surface processes for meteorological models. Monthly Weather Review 117, 536-549.
Methane Leak Detection and Emissions Quantification with UAVs
NASA Astrophysics Data System (ADS)
Barchyn, T.; Fox, T. A.; Hugenholtz, C.
2016-12-01
Robust leak-detection and emissions-quantification algorithms are required to accurately monitor greenhouse gas emissions. Unmanned aerial vehicles (UAVs, 'drones') could both reduce the cost and increase the accuracy of monitoring programs. However, aspects of the platform create unique challenges: UAVs typically collect large volumes of data that are close to the source (due to limited range) and often of lower quality (due to weight restrictions on sensors). Here we discuss algorithm development for (i) finding sources of unknown position ('leak detection') and (ii) quantifying emissions from a source of known position. We use data from a simulated leak and field study in Alberta, Canada. First, we detail a method for localizing a leak of unknown spatial location using iterative fits against a forward Gaussian plume model. We explore sources of uncertainty, both inherent to the method and operational. Results suggest this method is primarily constrained by accurate wind-direction data, distance downwind from the source, and the non-Gaussian shape of close-range plumes. Second, we examine sources of uncertainty in quantifying emissions with the mass-balance method. Results suggest precision is constrained by flux-plane interpolation errors and time offsets between spatially adjacent measurements. Drones can provide data closer to the ground than piloted aircraft, but large portions of the plume are still unquantified. Together, we find that despite larger volumes of data, working with the close-range plumes measured by UAVs is inherently difficult. We describe future efforts to mitigate these challenges and work toward more robust benchmarking for application in industrial and regulatory settings.
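Leak localization by iterative fitting against a forward Gaussian plume model can be sketched as a search minimizing the misfit between modeled and observed concentrations. The dispersion coefficients and the assumption of a known emission rate are simplifications for illustration, not the authors' operational algorithm.

```python
import numpy as np

def plume(q, xs, ys, rx, ry, u):
    # Ground-level Gaussian plume from a source of strength q at (xs, ys);
    # wind blows along +x at speed u. The power-law dispersion coefficients
    # below are illustrative, not a specific stability class.
    dx = np.where(rx - xs > 0, rx - xs, np.nan)  # only downwind receptors
    dy = ry - ys
    sy = 0.08 * dx**0.9
    sz = 0.06 * dx**0.85
    c = q / (2.0 * np.pi * u * sy * sz) * np.exp(-dy**2 / (2.0 * sy**2))
    return np.nan_to_num(c)

# Synthetic observations from a "true" leak at (0, 5), emission rate known:
rx = np.array([40.0, 60.0, 80.0, 100.0])
ry = np.array([0.0, 4.0, 8.0, 12.0])
obs = plume(1.0, 0.0, 5.0, rx, ry, u=3.0)

# Grid search for the source location minimizing the squared misfit:
candidates = [(xs, ys)
              for xs in np.linspace(-20, 20, 41)
              for ys in np.linspace(-10, 20, 61)]
best = min(candidates,
           key=lambda s: np.sum((plume(1.0, s[0], s[1], rx, ry, 3.0) - obs) ** 2))
```

In practice the misfit surface is flattened by wind-direction error and by the non-Gaussian shape of close-range plumes, which is exactly the sensitivity the abstract highlights.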
Wegge, Robin; McLinden, Mark O; Perkins, Richard A; Richter, Markus; Span, Roland
2016-08-01
The speed of sound of two (argon + carbon dioxide) mixtures was measured over the temperature range from (275 to 500) K with pressures up to 8 MPa utilizing a spherical acoustic resonator. The compositions of the gravimetrically prepared mixtures were (0.50104 and 0.74981) mole fraction carbon dioxide. The vibrational relaxation of pure carbon dioxide led to high sound absorption, which significantly impeded the sound-speed measurements on carbon dioxide and its mixtures; pre-condensation may have also affected the results for some measurements near the dew line. Thus, in contrast to the standard operating procedure for speed-of-sound measurements with a spherical resonator, non-radial resonances at lower frequencies were taken into account. Still, the data show a comparatively large scatter, and the usual repeatability of this general type of instrument could not be realized with the present measurements. Nonetheless, the average relative combined expanded uncertainty ( k = 2) in speed of sound ranged from (0.042 to 0.056)% for both mixtures, with individual state-point uncertainties increasing to 0.1%. These uncertainties are adequate for our intended purpose of evaluating thermodynamic models. The results are compared to a Helmholtz energy equation of state for carbon capture and storage applications; relative deviations of (-0.64 to 0.08)% for the (0.49896 argon + 0.50104 carbon dioxide) mixture, and of (-1.52 to 0.77)% for the (0.25019 argon + 0.74981 carbon dioxide) mixture were observed.
Sapphire Whispering Gallery Thermometer
NASA Astrophysics Data System (ADS)
Strouse, G. F.
2007-12-01
An innovative sapphire whispering gallery thermometer (SWGT) is being explored at the National Institute of Standards and Technology (NIST) as a potential replacement for a standard platinum resistance thermometer (SPRT) for industrial applications that require measurement uncertainties of ≤ 10 mK. The NIST SWGT uses a synthetic sapphire monocrystalline disk configured as a uniaxial, dielectric resonator with whispering gallery modes between 14 GHz and 20 GHz and with Q-factors as large as 90,000. The prototype SWGT stability at the ice melting point (0°C) is ≤ 1 mK with a frequency resolution equivalent to 0.05 mK. The prototype SWGT measurement uncertainty (k = 1) is 10 mK from 0°C to 100°C for all five resonance modes studied. These results for the SWGT approach the capabilities of industrial resistance thermometers. The SWGT promises greatly increased resistance to mechanical shock relative to SPRTs over the range from -196°C to 500°C while retaining the low uncertainties needed by secondary calibration laboratories. The temperature sensitivity of the SWGT depends upon a well-defined property (the refractive index at microwave frequencies) and the thermal expansion of a pure material. Therefore, it is expected that SWGTs can be calibrated over a wide temperature range using a reference function, along with deviations measured at a few fixed points. This article reports the prototype SWGT stability, resolution, repeatability, and the temperature dependence of five whispering gallery resonance frequencies in the range from 0°C to 100°C.
NASA Astrophysics Data System (ADS)
Zhao, Y.; Nielsen, C. P.; Lei, Y.; McElroy, M. B.; Hao, J.
2010-11-01
The uncertainties of a national, bottom-up inventory of Chinese emissions of anthropogenic SO2, NOx, and particulate matter (PM) of different size classes and carbonaceous species are comprehensively quantified, for the first time, using Monte Carlo simulation. The inventory is structured by seven dominant sectors: coal-fired electric power, cement, iron and steel, other industry (boiler combustion), other industry (non-combustion processes), transportation, and residential. For each parameter related to emission factors or activity-level calculations, the uncertainties, represented as probability distributions, are either statistically fitted using results of domestic field tests or, when these are lacking, estimated based on foreign or other domestic data. The uncertainties (i.e., 95% confidence intervals around the central estimates) of Chinese emissions of SO2, NOx, total PM, PM10, PM2.5, black carbon (BC), and organic carbon (OC) in 2005 are estimated to be -14%~12%, -10%~36%, -10%~36%, -12%~42%, -16%~52%, -23%~130%, and -37%~117%, respectively. Variations at activity levels (e.g., energy consumption or industrial production) are not the main source of emission uncertainties. Due to narrow classification of source types, large sample sizes, and relatively high data quality, the coal-fired power sector is estimated to have the smallest emission uncertainties for all species except BC and OC. Due to poorer source classifications and a wider range of estimated emission factors, considerable uncertainties of NOx and PM emissions from cement production and boiler combustion in other industries are found. The probability distributions of emission factors for biomass burning, the largest source of BC and OC, are fitted based on very limited domestic field measurements, and special caution should thus be taken interpreting these emission uncertainties.
Although Monte Carlo simulation yields narrowed estimates of uncertainties compared to previous bottom-up emission studies, the results are not always consistent with those derived from satellite observations. The results thus represent an incremental research advance; while the analysis provides current estimates of uncertainty to researchers investigating Chinese and global atmospheric transport and chemistry, it also identifies specific needs in data collection and analysis to improve on them. Strengthened quantification of emissions of the included species and other, closely associated ones - notably CO2, generated largely by the same processes and thus subject to many of the same parameter uncertainties - is essential not only for science but for the design of policies to redress critical atmospheric environmental hazards at local, regional, and global scales.
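A Monte Carlo inventory uncertainty of the kind described (emissions = activity level × emission factor, with fitted probability distributions per parameter) can be sketched as follows; the sectors, distribution shapes, and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50_000

# Hypothetical two-sector inventory: emissions = activity level x emission
# factor. Distribution choices and parameters are illustrative.
activity = {
    "power": rng.normal(1.0, 0.05, N),             # well-constrained statistics
    "cement": rng.normal(1.0, 0.10, N),
}
emission_factor = {
    "power": rng.lognormal(np.log(8.0), 0.15, N),  # narrow: many field tests
    "cement": rng.lognormal(np.log(3.0), 0.40, N), # wide: poor classification
}

total = sum(activity[s] * emission_factor[s] for s in activity)
central = float(np.median(total))
lo, hi = np.percentile(total, [2.5, 97.5])
print(f"95% interval: {lo/central - 1:+.0%} ~ {hi/central - 1:+.0%}")
```

The lognormal emission factors naturally produce the asymmetric intervals (upper bound farther from the central estimate than the lower) reported in the abstract.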
NASA Astrophysics Data System (ADS)
Zhao, Y.; Nielsen, C. P.; Lei, Y.; McElroy, M. B.; Hao, J.
2011-03-01
The uncertainties of a national, bottom-up inventory of Chinese emissions of anthropogenic SO2, NOx, and particulate matter (PM) of different size classes and carbonaceous species are comprehensively quantified, for the first time, using Monte Carlo simulation. The inventory is structured by seven dominant sectors: coal-fired electric power, cement, iron and steel, other industry (boiler combustion), other industry (non-combustion processes), transportation, and residential. For each parameter related to emission factors or activity-level calculations, the uncertainties, represented as probability distributions, are either statistically fitted using results of domestic field tests or, when these are lacking, estimated based on foreign or other domestic data. The uncertainties (i.e., 95% confidence intervals around the central estimates) of Chinese emissions of SO2, NOx, total PM, PM10, PM2.5, black carbon (BC), and organic carbon (OC) in 2005 are estimated to be -14%~13%, -13%~37%, -11%~38%, -14%~45%, -17%~54%, -25%~136%, and -40%~121%, respectively. Variations at activity levels (e.g., energy consumption or industrial production) are not the main source of emission uncertainties. Due to narrow classification of source types, large sample sizes, and relatively high data quality, the coal-fired power sector is estimated to have the smallest emission uncertainties for all species except BC and OC. Due to poorer source classifications and a wider range of estimated emission factors, considerable uncertainties of NOx and PM emissions from cement production and boiler combustion in other industries are found. The probability distributions of emission factors for biomass burning, the largest source of BC and OC, are fitted based on very limited domestic field measurements, and special caution should thus be taken interpreting these emission uncertainties. 
Although Monte Carlo simulation yields narrowed estimates of uncertainties compared to previous bottom-up emission studies, the results are not always consistent with those derived from satellite observations. The results thus represent an incremental research advance; while the analysis provides current estimates of uncertainty to researchers investigating Chinese and global atmospheric transport and chemistry, it also identifies specific needs in data collection and analysis to improve on them. Strengthened quantification of emissions of the included species and other, closely associated ones - notably CO2, generated largely by the same processes and thus subject to many of the same parameter uncertainties - is essential not only for science but for the design of policies to redress critical atmospheric environmental hazards at local, regional, and global scales.
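The two-stage calculation the abstract describes (sample parameter distributions, multiply through to emissions, read off the 95% interval) can be sketched in a few lines; the distributions, parameter values, and the `mc_emission_ci` helper below are illustrative assumptions, not the study's fitted inputs:

```python
import random
import statistics

def mc_emission_ci(activity_mu, activity_sd, ef_mu, ef_sd, n=20000, seed=1):
    """Monte Carlo 95% interval for emissions = activity level x emission factor.
    Normal distributions truncated at zero stand in for the study's fitted laws."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        activity = max(rng.gauss(activity_mu, activity_sd), 0.0)
        factor = max(rng.gauss(ef_mu, ef_sd), 0.0)
        draws.append(activity * factor)
    draws.sort()
    central = statistics.median(draws)
    lo, hi = draws[int(0.025 * n)], draws[int(0.975 * n)]
    # Report bounds relative to the central estimate, as in "-14%~13%"
    return (lo / central - 1.0) * 100.0, (hi / central - 1.0) * 100.0

# Hypothetical sector: 5% uncertainty in activity, 20% in the emission factor
low_pct, high_pct = mc_emission_ci(100.0, 5.0, 2.0, 0.4)
```

Because the emission factor dominates here, the interval comes out at roughly ±40%, illustrating why sectors with poorly constrained factors (such as biomass-burning BC and OC) carry the widest bounds.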
Optimization of Geothermal Well Placement under Geological Uncertainty
NASA Astrophysics Data System (ADS)
Schulte, Daniel O.; Arnold, Dan; Demyanov, Vasily; Sass, Ingo; Geiger, Sebastian
2017-04-01
Well placement optimization is critical to commercial success of geothermal projects. However, uncertainties of geological parameters prohibit optimization based on a single scenario of the subsurface, particularly when few expensive wells are to be drilled. The optimization of borehole locations is usually based on numerical reservoir models to predict reservoir performance and entails the choice of objectives to optimize (total enthalpy, minimum enthalpy rate, production temperature) and the development options to adjust (well location, pump rate, difference in production and injection temperature). Optimization traditionally requires trying different development options on a single geological realization, yet many different interpretations of the subsurface are possible. Therefore, we aim to optimize across a range of representative geological models to account for geological uncertainty in geothermal optimization. We present an approach that uses a response surface methodology based on a large number of geological realizations selected by experimental design to optimize the placement of geothermal wells in a realistic field example. A large number of geological scenarios and design options were simulated and the response surfaces were constructed using polynomial proxy models, which consider both geological uncertainties and design parameters. The polynomial proxies were validated against additional simulation runs and shown to provide an adequate representation of the model response for the cases tested. The resulting proxy models allow for the identification of the optimal borehole locations given the mean response of the geological scenarios from the proxy (i.e. maximizing or minimizing the mean response). The approach is demonstrated on the realistic Watt field example by optimizing the borehole locations to maximize the mean heat extraction from the reservoir under geological uncertainty. 
The training simulations are based on a comprehensive semi-synthetic data set of a hierarchical benchmark case study for a hydrocarbon reservoir, which specifically considers the interpretational uncertainty in the modeling work flow. The optimal choice of boreholes prolongs the time to cold water breakthrough and allows for higher pump rates and increased water production temperatures.
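The decision rule itself (choose the well location that maximizes the mean response across geological realizations) is simple to sketch once a simulator exists; the toy `simulate_heat` function and all numbers below are assumptions standing in for the reservoir model and polynomial proxies:

```python
import random

def simulate_heat(x, sweet_spot):
    """Toy stand-in for a reservoir simulation: heat extracted by a well at
    location x, for a geological realization summarized by its sweet spot."""
    return 100.0 - (x - sweet_spot) ** 2

def robust_well_placement(realizations, candidates):
    """Pick the candidate maximizing the MEAN response over realizations,
    the decision rule described in the abstract."""
    def mean_response(x):
        return sum(simulate_heat(x, r) for r in realizations) / len(realizations)
    return max(candidates, key=mean_response)

rng = random.Random(0)
realizations = [rng.gauss(5.0, 1.0) for _ in range(200)]  # uncertain geology
candidates = [i * 0.5 for i in range(21)]                 # wells at 0.0 .. 10.0
best = robust_well_placement(realizations, candidates)
```

A single-realization optimum can sit far from `best`; optimizing the mean response is what buys robustness to the interpretational uncertainty.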
Uncertainty in accounting for carbon accumulation following forest harvesting
NASA Astrophysics Data System (ADS)
Lilly, P.; Yanai, R. D.; Arthur, M. A.; Bae, K.; Hamburg, S.; Levine, C. R.; Vadeboncoeur, M. A.
2014-12-01
Tree biomass and forest soils are both difficult to quantify with confidence, for different reasons. Forest biomass is estimated non-destructively using allometric equations, often from other sites; these equations are difficult to validate. Forest soils are destructively sampled, resulting in little measurement error at a point, but with large sampling error in heterogeneous soil environments, such as in soils developed on glacial till. In this study, we report C contents of biomass and soil pools in northern hardwood stands in replicate plots within replicate stands in 3 age classes following clearcut harvesting (14-19 yr, 26-29 yr, and > 100 yr) at the Bartlett Experimental Forest, USA. The rate of C accumulation in aboveground biomass was ~3 Mg/ha/yr between the young and mid-aged stands and <1 Mg/ha/yr between the mid-aged and mature stands. We propagated model uncertainty through allometric equations, and found errors ranging from 3-7%, depending on the stand. The variation in biomass among plots within stands (6-19%) was always larger than the allometric uncertainties. Soils were described by quantitative soil pits in three plots per stand, excavated by depth increment to the C horizon. Variation in soil mass among pits within stands averaged 28% (coefficient of variation); variation among stands within an age class ranged from 9-25%. Variation in carbon concentrations averaged 27%, mainly because the depth increments contained varying proportions of genetic horizons, in the upper part of the soil profile. Differences across age classes in soil C were not significant, because of the high variability. Uncertainty analysis can help direct the design of monitoring schemes to achieve the greatest confidence in C stores per unit of sampling effort. In the system we studied, more extensive sampling would be the best approach to reducing uncertainty, as natural spatial variation was higher than model or measurement uncertainties.
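Propagating allometric model uncertainty to a stand total can be sketched as below; the equation form `B = a * D**b`, the parameter values, and the assumption that allometric error is shared across trees are illustrative, not the study's actual equations:

```python
import random

def stand_biomass_cv(diameters, a_mu, a_sd, b_mu, b_sd, n=5000, seed=7):
    """Monte Carlo propagation of allometric parameter uncertainty
    (biomass B = a * D**b) to a stand-level total; returns the CV in percent."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        a = rng.gauss(a_mu, a_sd)  # one draw per stand: allometric error
        b = rng.gauss(b_mu, b_sd)  # is shared (correlated) across its trees
        totals.append(sum(a * d ** b for d in diameters))
    mean = sum(totals) / n
    sd = (sum((t - mean) ** 2 for t in totals) / (n - 1)) ** 0.5
    return 100.0 * sd / mean

diameters = [12.0, 18.0, 25.0, 31.0, 40.0]  # tree diameters (cm), hypothetical
cv_pct = stand_biomass_cv(diameters, a_mu=0.1, a_sd=0.005, b_mu=2.4, b_sd=0.02)
```

Comparing such a model-driven CV against the plot-to-plot CV is exactly the test that showed spatial variation (6-19%) exceeding allometric uncertainty (3-7%) here.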
Assessment of uncertainties of the models used in thermal-hydraulic computer codes
NASA Astrophysics Data System (ADS)
Gricay, A. S.; Migrov, Yu. A.
2015-09-01
The article deals with matters concerned with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method should involve a minimal degree of subjectivity and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.
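The method's final stage (statistical processing of case-calculation results into a range and a distribution law) might look like the following sketch; `estimate_parameter_law`, the best-fit values, and the prior range are invented for illustration and are not tied to KORSAR:

```python
import random
import statistics

def estimate_parameter_law(best_fit_values, prior_lo, prior_hi):
    """Statistical processing stage: from per-experiment best-fit values of a
    closure-model parameter, estimate a Gaussian law and its 95% range, and
    check whether the prior uniform range can be narrowed."""
    mu = statistics.mean(best_fit_values)
    sd = statistics.stdev(best_fit_values)
    lo, hi = mu - 1.96 * sd, mu + 1.96 * sd
    return {"mean": mu, "sd": sd, "range95": (lo, hi),
            "narrower_than_prior": (hi - lo) < (prior_hi - prior_lo)}

rng = random.Random(3)
# Hypothetical best-fit values recovered from the selected experiments
values = [rng.gauss(1.0, 0.1) for _ in range(40)]
law = estimate_parameter_law(values, prior_lo=0.5, prior_hi=1.5)
```

When `narrower_than_prior` is true, the expert-assigned uniform law can be replaced by the fitted Gaussian, mirroring the conclusion reached for the post-burnout transition parameter.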
Computational nuclear quantum many-body problem: The UNEDF project
NASA Astrophysics Data System (ADS)
Bogner, S.; Bulgac, A.; Carlson, J.; Engel, J.; Fann, G.; Furnstahl, R. J.; Gandolfi, S.; Hagen, G.; Horoi, M.; Johnson, C.; Kortelainen, M.; Lusk, E.; Maris, P.; Nam, H.; Navratil, P.; Nazarewicz, W.; Ng, E.; Nobre, G. P. A.; Ormand, E.; Papenbrock, T.; Pei, J.; Pieper, S. C.; Quaglioni, S.; Roche, K. J.; Sarich, J.; Schunck, N.; Sosonkina, M.; Terasaki, J.; Thompson, I.; Vary, J. P.; Wild, S. M.
2013-10-01
The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. The primary focus of the project was on constructing, validating, and applying an optimized nuclear energy density functional, which entailed a wide range of pioneering developments in microscopic nuclear structure and reactions, algorithms, high-performance computing, and uncertainty quantification. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.
NASA Technical Reports Server (NTRS)
Liu, Jianbo; Kummerow, Christian D.; Elsaesser, Gregory S.
2016-01-01
Despite continuous improvements in microwave sensors and retrieval algorithms, our understanding of precipitation uncertainty is quite limited, due primarily to inconsistent findings in studies that compare satellite estimates to in situ observations over different parts of the world. This study seeks to characterize the temporal and spatial properties of uncertainty in the Tropical Rainfall Measuring Mission Microwave Imager surface rainfall product over tropical ocean basins. Two uncertainty analysis frameworks are introduced to qualitatively evaluate the properties of uncertainty under a hierarchy of spatiotemporal data resolutions. The first framework (i.e. 'climate method') demonstrates that, apart from random errors and regionally dependent biases, a large component of the overall precipitation uncertainty is manifested in cyclical patterns that are closely related to large-scale atmospheric modes of variability. By estimating the magnitudes of major uncertainty sources independently, the climate method is able to explain 45-88% of the monthly uncertainty variability. The percentage is largely resolution dependent (with the lowest percentage explained associated with a 1 deg x 1 deg spatial/1 month temporal resolution, and highest associated with a 3 deg x 3 deg spatial/3 month temporal resolution). The second framework (i.e. 'weather method') explains regional mean precipitation uncertainty as a summation of uncertainties associated with individual precipitation systems. By further assuming that self-similar recurring precipitation systems yield qualitatively comparable precipitation uncertainties, the weather method can consistently resolve about 50 % of the daily uncertainty variability, with only limited dependence on the regions of interest.
Estimation of Pre-industrial Nitrous Oxide Emission from the Terrestrial Biosphere
NASA Astrophysics Data System (ADS)
Xu, R.; Tian, H.; Lu, C.; Zhang, B.; Pan, S.; Yang, J.
2015-12-01
Nitrous oxide (N2O) is currently the third most important greenhouse gas (GHG) after methane (CH4) and carbon dioxide (CO2). Global N2O emissions have increased substantially, primarily due to reactive nitrogen (N) enrichment through fossil fuel combustion, fertilizer production, and legume crop cultivation. In order to understand how the climate system is perturbed by anthropogenic N2O emissions from the terrestrial biosphere, it is necessary to better estimate pre-industrial N2O emissions. Previous estimates of natural N2O emissions from the terrestrial biosphere range from 3.3 to 9.0 Tg N2O-N yr-1. This large uncertainty in the estimation of pre-industrial N2O emissions from the terrestrial biosphere may be caused by uncertainty associated with key parameters such as maximum nitrification and denitrification rates, half-saturation coefficients of soil ammonium and nitrate, N fixation rate, and maximum N uptake rate. In addition to the large estimation range, previous studies did not provide estimates of pre-industrial N2O emissions at regional and biome levels. In this study, we applied a process-based coupled biogeochemical model to estimate the magnitude and spatial patterns of pre-industrial N2O fluxes at biome and continental scales as driven by multiple input data, including pre-industrial climate data, atmospheric CO2 concentration, N deposition, N fixation, and land cover types and distributions. Uncertainty associated with key parameters is also evaluated. Finally, we generate sector-based estimates of pre-industrial N2O emissions, which provide a reference for assessing the climate forcing of anthropogenic N2O emissions from the land biosphere.
Climate data induced uncertainty in model-based estimations of terrestrial primary productivity
NASA Astrophysics Data System (ADS)
Wu, Zhendong; Ahlström, Anders; Smith, Benjamin; Ardö, Jonas; Eklundh, Lars; Fensholt, Rasmus; Lehsten, Veiko
2017-06-01
Model-based estimations of historical fluxes and pools of the terrestrial biosphere differ substantially. These differences arise not only from differences between models but also from differences in the environmental and climatic data used as input to the models. Here we investigate the role of uncertainties in historical climate data by performing simulations of terrestrial gross primary productivity (GPP) using a process-based dynamic vegetation model (LPJ-GUESS) forced by six different climate datasets. We find that the climate induced uncertainty, defined as the range among historical simulations in GPP when forcing the model with the different climate datasets, can be as high as 11 Pg C yr-1 globally (9% of mean GPP). We also assessed a hypothetical maximum climate data induced uncertainty by combining climate variables from different datasets, which resulted in significantly larger uncertainties of 41 Pg C yr-1 globally, or 32% of mean GPP. The uncertainty is partitioned into components associated with the three main climatic drivers: temperature, precipitation, and shortwave radiation. Additionally, we illustrate how the uncertainty due to a given climate driver depends both on the magnitude of the forcing data uncertainty (climate data range) and the apparent sensitivity of the modeled GPP to the driver (apparent model sensitivity). We find that LPJ-GUESS overestimates GPP compared to an empirically based GPP data product in all land cover classes except tropical forests. Tropical forests emerge as a disproportionate source of uncertainty in GPP estimation in both the simulations and the empirical data products. The tropical forest uncertainty is most strongly associated with shortwave radiation and precipitation forcing, of which the climate data range contributes more to the overall uncertainty than the apparent model sensitivity to forcing. 
Globally, precipitation dominates the climate induced uncertainty over nearly half of the vegetated land area, which is mainly due to climate data range and less so due to the apparent model sensitivity. Overall, climate data ranges are found to contribute more to the climate induced uncertainty than apparent model sensitivity to forcing. Our study highlights the need to better constrain tropical climate, and demonstrates that uncertainty caused by climatic forcing data must be considered when comparing and evaluating carbon cycle model results and empirical datasets.
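The headline metric above (the range in simulated GPP across runs differing only in climate forcing, expressed as a share of the mean) reduces to a one-liner; the dataset names and GPP values below are hypothetical:

```python
def climate_induced_uncertainty(gpp_by_dataset):
    """Climate-data-induced uncertainty: the range of simulated GPP across runs
    differing only in their climate forcing, absolute and as % of the mean."""
    values = list(gpp_by_dataset.values())
    mean = sum(values) / len(values)
    spread = max(values) - min(values)
    return spread, 100.0 * spread / mean

# Hypothetical global GPP (Pg C / yr) from one model under six climate datasets
gpp = {"CRU": 118.0, "ERA": 124.0, "WFDEI": 121.0,
       "GSWP3": 126.0, "PGF": 117.0, "MERRA": 123.0}
abs_range, pct_range = climate_induced_uncertainty(gpp)
```

The same range statistic can be computed per grid cell and per driver to reproduce the partitioning described in the abstract.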
Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.
Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer
2017-08-16
Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: a) compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.
Uncertainty quantification in fission cross section measurements at LANSCE
Tovesson, F.
2015-01-09
Neutron-induced fission cross sections have been measured for several isotopes of uranium and plutonium at the Los Alamos Neutron Science Center (LANSCE) over a wide range of incident neutron energies. The total uncertainties in these measurements are in the range 3–5% above 100 keV of incident neutron energy, which results from uncertainties in the target, neutron source, and detector system. The individual sources of uncertainty are assumed to be uncorrelated; however, correlations in the cross section across neutron energy bins are considered. The quantification of the uncertainty contributions is described here.
Temporal variability in stage-discharge relationships
NASA Astrophysics Data System (ADS)
Guerrero, José-Luis; Westerberg, Ida K.; Halldin, Sven; Xu, Chong-Yu; Lundin, Lars-Christer
2012-06-01
Although discharge estimations are central to water management and hydropower, there are few studies on the variability and uncertainty of their basis: deriving discharge from stage heights through a rating curve that depends on riverbed geometry. A large fraction of the world's river-discharge stations are presumably located in alluvial channels where riverbed characteristics may change over time because of erosion and sedimentation. This study was conducted to analyse and quantify the dynamic relationship between stage and discharge and to determine to what degree currently used methods are able to account for such variability. The study was carried out for six hydrometric stations in the upper Choluteca River basin, Honduras, where a set of unusually frequent stage-discharge data are available. The temporal variability and the uncertainty of the rating curve and its parameters were analysed through a Monte Carlo (MC) analysis on a moving window of data using the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. Acceptable ranges for the values of the rating-curve parameters were determined from riverbed surveys at the six stations, and the sampling space was constrained according to those ranges, using three-dimensional alpha shapes. Temporal variability was analysed in three ways: (i) with annually updated rating curves (simulating Honduran practices), (ii) a rating curve for each time window, and (iii) a smoothed, continuous dynamic rating curve derived from the MC analysis. The temporal variability of the rating parameters translated into a high rating-curve variability. The variability could turn out as increasing or decreasing trends and/or cyclic behaviour. There was a tendency at all stations to a seasonal variability. The discharge at a given stage could vary by a factor of two or more. The quotient in discharge volumes estimated from dynamic and static rating curves varied between 0.5 and 1.5. 
The difference between discharge volumes derived from static and dynamic curves was largest for sub-daily ratings but stayed large also for monthly and yearly totals. The relative uncertainty was largest for low flows but it was considerable also for intermediate and large flows. The standard procedure of adjusting rating curves when calculated and observed discharge differ by more than 5% would have required continuously updated rating curves at the studied locations. We believe that these findings can be applicable to many other discharge stations around the globe.
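A compact sketch of the GLUE idea applied to a rating curve: sample parameter sets for `Q = a * (h - h0) ** b`, keep those that fit the gaugings within a behavioral limit, and report the resulting discharge spread at a given stage. The parameter ranges, threshold, and synthetic gaugings are assumptions, not the study's settings:

```python
import random

def rating_curve(h, a, b, h0):
    """Standard power-law rating curve: Q = a * (h - h0) ** b."""
    return a * max(h - h0, 0.0) ** b

def glue_discharge_bounds(gaugings, h_query, n=5000, seed=11):
    """GLUE-style sketch: sample rating-curve parameters, keep 'behavioral'
    sets whose RMSE against the gaugings is under a limit, and return the
    spread of discharge those sets imply at stage h_query."""
    rng = random.Random(seed)
    behavioral = []
    for _ in range(n):
        a = rng.uniform(0.5, 10.0)
        b = rng.uniform(1.0, 3.0)
        h0 = rng.uniform(-0.5, 0.5)
        mse = sum((rating_curve(h, a, b, h0) - q) ** 2
                  for h, q in gaugings) / len(gaugings)
        if mse < 1.0:  # behavioral threshold (assumed: RMSE < 1 m3/s)
            behavioral.append(rating_curve(h_query, a, b, h0))
    return min(behavioral), max(behavioral)

# Synthetic gaugings from a 'true' curve a=2.0, b=1.8, h0=0.0 (no noise)
gaugings = [(h, 2.0 * h ** 1.8) for h in (0.5, 1.0, 1.5, 2.0, 2.5)]
q_lo, q_hi = glue_discharge_bounds(gaugings, h_query=2.0)
```

With noisy real gaugings the threshold and likelihood measure matter a great deal; the factor-of-two stage-discharge spread reported above corresponds to a far wider behavioral set than this noise-free toy.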
Doppler Global Velocimeter Development for the Large Wind Tunnels at Ames Research Center
NASA Technical Reports Server (NTRS)
Reinath, Michael S.
1997-01-01
Development of an optical, laser-based flow-field measurement technique for large wind tunnels is described. The technique uses laser sheet illumination and charge-coupled device detectors to rapidly measure flow-field velocity distributions over large planar regions of the flow. Sample measurements are presented that illustrate the capability of the technique. An analysis of measurement uncertainty, which focuses on the random component of uncertainty, shows that precision uncertainty is not dependent on the measured velocity magnitude. For a single-image measurement, the analysis predicts a precision uncertainty of +/-5 m/s. When multiple images are averaged, this uncertainty is shown to decrease. For an average of 100 images, for example, the analysis shows that a precision uncertainty of +/-0.5 m/s can be expected. Sample applications show that vectors aligned with an orthogonal coordinate system are difficult to measure directly. An algebraic transformation is presented which converts measured vectors to the desired orthogonal components. Uncertainty propagation is then used to show how the uncertainty propagates from the direct measurements to the orthogonal components. For a typical forward-scatter viewing geometry, the propagation analysis predicts precision uncertainties of +/-4, +/-7, and +/-6 m/s, respectively, for the U, V, and W components at 68% confidence.
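The quoted precision figures follow the standard averaging rule for uncorrelated random errors, uncertainty proportional to 1/sqrt(N):

```python
import math

def averaged_precision(sigma_single, n_images):
    """Precision (random) uncertainty of an N-image average: for uncorrelated
    errors it falls as 1 / sqrt(N)."""
    return sigma_single / math.sqrt(n_images)

u1 = averaged_precision(5.0, 1)      # single image
u100 = averaged_precision(5.0, 100)  # 100-image average
```

Starting from the single-image +/-5 m/s, a 100-image average recovers the +/-0.5 m/s figure quoted in the abstract; correlated errors (e.g. a common calibration bias) would not average down this way.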
Issues and recent advances in optimal experimental design for site investigation (Invited)
NASA Astrophysics Data System (ADS)
Nowak, W.
2013-12-01
This presentation provides an overview over issues and recent advances in model-based experimental design for site exploration. The addressed issues and advances are (1) how to provide an adequate envelope to prior uncertainty, (2) how to define the information needs in a task-oriented manner, (3) how to measure the expected impact of a data set that is not yet available but only planned to be collected, and (4) how to perform best the optimization of the data collection plan. Among other shortcomings of the state of the art, there is a lack of demonstrator studies where exploration schemes based on expert judgment are compared to exploration schemes obtained by optimal experimental design. Such studies will be necessary to address the often-voiced concern that experimental design is an academic exercise with little improvement potential over the well-trained gut feeling of field experts. When addressing this concern, a specific focus has to be given to uncertainty in model structure, parameterizations and parameter values, and to related surprises that data often bring about in field studies, but never in synthetic-data based studies. The background of this concern is that, initially, conceptual uncertainty may be so large that surprises are the rule rather than the exception. In such situations, field experts have a large body of experience in handling the surprises, and expert judgment may be good enough compared to meticulous optimization based on a model that is about to be falsified by the incoming data. In order to meet surprises and adapt to them, there needs to be a sufficient representation of conceptual uncertainty within the models used. Also, it is useless to optimize an entire design under this initial range of uncertainty. Thus, the goal setting of the optimization should include the objective to reduce conceptual uncertainty. 
A possible way out is to upgrade experimental design theory towards real-time interaction with the ongoing site investigation, such that surprises in the data are immediately accounted for to restrict the conceptual uncertainty and update the optimization of the plan.
Wildhaber, Mark L.; Wikle, Christopher K.; Anderson, Christopher J.; Franz, Kristie J.; Moran, Edward H.; Dey, Rima; Mader, Helmut; Kraml, Julia
2012-01-01
Climate change operates over a broad range of spatial and temporal scales. Understanding its effects on ecosystems requires multi-scale models. For understanding effects on fish populations of riverine ecosystems, climate predicted by coarse-resolution Global Climate Models must be downscaled to Regional Climate Models to watersheds to river hydrology to population response. An additional challenge is quantifying sources of uncertainty given the highly nonlinear nature of interactions between climate variables and community level processes. We present a modeling approach for understanding and accommodating uncertainty by applying multi-scale climate models and a hierarchical Bayesian modeling framework to Midwest fish population dynamics and by linking models for system components together by formal rules of probability. The proposed hierarchical modeling approach will account for sources of uncertainty in forecasts of community or population response. The goal is to evaluate the potential distributional changes in an ecological system, given distributional changes implied by a series of linked climate and system models under various emissions/use scenarios. This understanding will aid evaluation of management options for coping with global climate change. In our initial analyses, we found that predicted pallid sturgeon population responses were dependent on the climate scenario considered.
Probabilistic projections of 21st century climate change over Northern Eurasia
NASA Astrophysics Data System (ADS)
Monier, E.; Sokolov, A. P.; Schlosser, C. A.; Scott, J. R.; Gao, X.
2013-12-01
We present probabilistic projections of 21st century climate change over Northern Eurasia using the Massachusetts Institute of Technology (MIT) Integrated Global System Model (IGSM), an integrated assessment model that couples an earth system model of intermediate complexity, with a two-dimensional zonal-mean atmosphere, to a human activity model. Regional climate change is obtained by two downscaling methods: a dynamical downscaling, where the IGSM is linked to a three dimensional atmospheric model; and a statistical downscaling, where a pattern scaling algorithm uses climate-change patterns from 17 climate models. This framework allows for key sources of uncertainty in future projections of regional climate change to be accounted for: emissions projections; climate system parameters (climate sensitivity, strength of aerosol forcing and ocean heat uptake rate); natural variability; and structural uncertainty. Results show that the choice of climate policy and the climate parameters are the largest drivers of uncertainty. We also find that different initial conditions lead to differences in patterns of change as large as when using different climate models. Finally, this analysis reveals the wide range of possible climate change over Northern Eurasia, emphasizing the need to consider all sources of uncertainty when modeling climate impacts over Northern Eurasia.
Unrealized Global Temperature Increase: Implications of Current Uncertainties
NASA Astrophysics Data System (ADS)
Schwartz, Stephen E.
2018-04-01
Unrealized increase in global mean surface air temperature (GMST) may result from the climate system not being in steady state with forcings and/or from cessation of negative aerosol forcing that would result from decreases in emissions. An observation-constrained method is applied to infer the dependence of Earth's climate sensitivity on forcing by anthropogenic aerosols within the uncertainty on that forcing given by the Fifth (2013) Assessment Report of the Intergovernmental Panel on Climate Change. Within these uncertainty ranges the increase in GMST due to temperature lag for future forcings held constant is slight (0.09-0.19 K over 20 years; 0.12-0.26 K over 100 years). However, the incremental increase in GMST that would result from a hypothetical abrupt cessation of sources of aerosols could be quite large but is highly uncertain, 0.1-1.3 K over 20 years. Decrease in CO2 abundance and forcing following abrupt cessation of emissions would offset these increases in GMST over 100 years by as little as 0.09 K to as much as 0.8 K. The uncertainties quantified here greatly limit confidence in projections of change in GMST that would result from any strategy for future reduction of emissions.
Low order models for uncertainty quantification in acoustic propagation problems
NASA Astrophysics Data System (ADS)
Millet, Christophe
2016-11-01
Long-range sound propagation problems are characterized by both a large number of length scales and a large number of normal modes. In the atmosphere, these modes are confined within waveguides causing the sound to propagate through multiple paths to the receiver. For uncertain atmospheres, the modes are described as random variables. Concise mathematical models and analysis reveal fundamental limitations in classical projection techniques due to different manifestations of the fact that modes that carry small variance can have important effects on the large variance modes. In the present study, we propose a systematic strategy for obtaining statistically accurate low order models. The normal modes are sorted in decreasing Sobol indices using asymptotic expansions, and the relevant modes are extracted using a modified iterative Krylov-based method. The statistics of acoustic signals are computed by decomposing the original pulse into a truncated sum of modal pulses that can be described by a stationary phase method. As the low-order acoustic model preserves the overall structure of waveforms under perturbations of the atmosphere, it can be applied to uncertainty quantification. The result of this study is a new algorithm that applies to the entire phase space of acoustic fields.
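Sorting modes by Sobol indices presupposes a variance-based sensitivity measure; a crude nested-loop estimator of a first-order index is sketched below. The model `f`, the input distributions, and the sample sizes are illustrative assumptions, unrelated to the paper's asymptotic-expansion approach:

```python
import random
import statistics

def first_order_sobol(f, n_outer=200, n_inner=200, seed=5):
    """Crude first-order Sobol index for f(x1, x2) with independent standard
    normal inputs: Var_x1(E[f | x1]) / Var(f), by nested Monte Carlo."""
    rng = random.Random(seed)
    cond_means, all_vals = [], []
    for _ in range(n_outer):
        x1 = rng.gauss(0.0, 1.0)
        vals = [f(x1, rng.gauss(0.0, 1.0)) for _ in range(n_inner)]
        cond_means.append(sum(vals) / n_inner)
        all_vals.extend(vals)
    return statistics.pvariance(cond_means) / statistics.pvariance(all_vals)

# Toy 'modal amplitude' dominated by its first input (true index is 9/10)
s1 = first_order_sobol(lambda x1, x2: 3.0 * x1 + x2)
```

Ranking modes by such indices and truncating at a variance budget is the generic version of the mode-selection step described above.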
Comparison of Decadal Water Storage Trends from Global Hydrological Models and GRACE Satellite Data
NASA Astrophysics Data System (ADS)
Scanlon, B. R.; Zhang, Z. Z.; Save, H.; Sun, A. Y.; Mueller Schmied, H.; Van Beek, L. P.; Wiese, D. N.; Wada, Y.; Long, D.; Reedy, R. C.; Doll, P. M.; Longuevergne, L.
2017-12-01
Global hydrology is increasingly being evaluated using models; however, the reliability of these global models is not well known. In this study we compared decadal trends (2002-2014) in land water storage from 7 global models (WGHM, PCR-GLOBWB, and GLDAS: NOAH, MOSAIC, VIC, CLM, and CLSM) to storage trends from new GRACE satellite mascon solutions (CSR-M and JPL-M). The analysis was conducted over 186 river basins, representing about 60% of the global land area. Modeled total water storage trends agree with GRACE-derived trends within ±0.5 km3/yr but greatly underestimate large declining and rising trends outside this range. Large declining trends are found mostly in intensively irrigated basins and in some basins in northern latitudes. Rising trends are found in basins with little or no irrigation and are generally related to increasing trends in precipitation. The largest decline is found in the Ganges (-12 km3/yr) and the largest rise in the Amazon (43 km3/yr). Differences between models and GRACE are greatest in large basins (>0.5x10^6 km^2), mostly in humid regions. There is very little agreement in storage trends between models and GRACE and among the models, with values of r^2 mostly <0.1. Various factors can contribute to discrepancies in water storage trends between models and GRACE, including uncertainties in precipitation, model calibration, storage capacity, and water use in models and uncertainties in GRACE data related to processing, glacier leakage, and glacial isostatic adjustment. The GRACE data indicate that land has a large capacity to store water over decadal timescales that is underrepresented by the models. The storage capacity in the modeled soil and groundwater compartments may be insufficient to accommodate the range in water storage variations shown by GRACE data.
The inability of the models to capture the large storage trends indicates that model projections of climate- and human-induced changes in water storage are likely underestimated. Future GRACE and model studies should try to reduce the various sources of uncertainty in water storage trends and should consider expanding the modeled storage capacity of the soil profiles and their interaction with groundwater.
Prediction and assimilation of surf-zone processes using a Bayesian network: Part I: Forward models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
Prediction of coastal processes, including waves, currents, and sediment transport, can be obtained from a variety of detailed geophysical-process models with many simulations showing significant skill. This capability supports a wide range of research and applied efforts that can benefit from accurate numerical predictions. However, the predictions are only as accurate as the data used to drive the models and, given the large temporal and spatial variability of the surf zone, inaccuracies in data are unavoidable such that useful predictions require corresponding estimates of uncertainty. We demonstrate how a Bayesian-network model can be used to provide accurate predictions of wave-height evolution in the surf zone given very sparse and/or inaccurate boundary-condition data. The approach is based on a formal treatment of a data-assimilation problem that takes advantage of significant reduction of the dimensionality of the model system. We demonstrate that predictions of a detailed geophysical model of the wave evolution are reproduced accurately using a Bayesian approach. In this surf-zone application, forward prediction skill was 83%, and uncertainties in the model inputs were accurately transferred to uncertainty in output variables. We also demonstrate that if modeling uncertainties were not conveyed to the Bayesian network (i.e., perfect data or model were assumed), then overly optimistic prediction uncertainties were computed. More consistent predictions and uncertainties were obtained by including model-parameter errors as a source of input uncertainty. Improved predictions (skill of 90%) were achieved because the Bayesian network simultaneously estimated optimal parameters while predicting wave heights.
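A minimal sketch of the forward-propagation idea in a discrete Bayesian network, assuming coarse two-state bins and an illustrative conditional probability table (not the authors' trained network). It also shows the overconfidence effect described above when input uncertainty is ignored:

```python
# Toy discrete Bayesian-network forward pass: an uncertain offshore wave-height
# input is propagated through a conditional table to surf-zone height bins.
# States are coarse bins {low, high}; all probabilities are illustrative only.
p_input = {"low": 0.7, "high": 0.3}   # uncertain boundary-condition data
cpt = {  # hypothetical P(surf-zone height | offshore height)
    "low":  {"low": 0.9, "high": 0.1},
    "high": {"low": 0.3, "high": 0.7},
}
# Marginalize over the uncertain input to get the predictive distribution.
p_out = {s: sum(p_input[o] * cpt[o][s] for o in p_input) for s in ("low", "high")}
# Treating the input as perfectly known ("low") yields a sharper, overly
# optimistic predictive distribution than marginalizing over the input.
p_certain = cpt["low"]
```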
Uncertainties in estimates of the risks of late effects from space radiation
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Schimmerling, W.; Wilson, J. W.; Peterson, L. E.; Saganti, P. B.; Dicello, J. F.
2004-01-01
Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because of the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions, which cause estimates of the risk of late effects to be highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Using the linear-additivity model for radiation risks, we use Monte-Carlo sampling from subjective uncertainty distributions in each factor to obtain an estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios including a deep space outpost and Mars missions with durations of 360, 660, and 1000 days. The major results are the quantification of the uncertainties in current risk estimates, the identification of factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction using the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in terms of quantitative metrics, e.g., the number of days in space without exceeding a given risk level within well-defined confidence limits. Published by Elsevier Ltd on behalf of COSPAR.
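A minimal sketch of the Monte-Carlo uncertainty propagation described above, assuming a multiplicative linear-additivity form and hypothetical log-normal factor distributions; the factor names and numbers are illustrative, not the study's values:

```python
import random

def sample_risk(point_estimate, factor_dists, rng):
    """One Monte-Carlo draw: risk = point estimate times a product of
    uncertain scaling factors, one per biological/physical factor."""
    scale = 1.0
    for mu, sigma in factor_dists:
        scale *= rng.lognormvariate(mu, sigma)  # subjective log-normal factor
    return point_estimate * scale

rng = random.Random(42)
# Hypothetical factors: (log-mean, log-sd) for, e.g., radiation quality,
# dose-rate effectiveness, and transfer of animal data to humans.
factors = [(0.0, 0.3), (0.0, 0.5), (0.0, 0.4)]
draws = sorted(sample_risk(0.032, factors, rng) for _ in range(20000))
lo = draws[int(0.025 * len(draws))]   # 2.5th percentile of projected risk
hi = draws[int(0.975 * len(draws))]   # 97.5th percentile of projected risk
```

The wide interval relative to the point estimate mirrors the paper's finding that factor uncertainties can mask modest risk reductions from shielding changes.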
Uncertainties in Estimates of the Risks of Late Effects from Space Radiation
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Schimmerling, W.; Wilson, J. W.; Peterson, L. E.; Saganti, P.; Dicelli, J. F.
2002-01-01
The health risks faced by astronauts from space radiation include cancer, cataracts, hereditary effects, and non-cancer morbidity and mortality risks related to the diseases of old age. Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because of the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions, which cause estimates of the risk of late effects to be highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Within the linear-additivity model, we use Monte-Carlo sampling from subjective uncertainty distributions in each factor to obtain a maximum likelihood estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios including ISS, lunar station, deep space outpost, and Mars missions with durations of 360, 660, and 1000 days. The major results are the quantification of the uncertainties in current risk estimates, the identification of factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction using the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time, and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in terms of quantitative objectives, e.g., the number of days in space without exceeding a given risk level within well-defined confidence limits.
Uncertainties in Projecting Risks of Late Effects from Space Radiation
NASA Astrophysics Data System (ADS)
Cucinotta, F.; Schimmerling, W.; Peterson, L.; Wilson, J.; Saganti, P.; Dicello, J.
The health risks faced by astronauts from space radiation include cancer, cataracts, hereditary effects, CNS risks, and non-cancer morbidity and mortality risks related to the diseases of old age. Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because of the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions, which cause estimates of the risk of late effects to be highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Within the linear-additivity model, we use Monte-Carlo sampling from subjective uncertainty distributions in each factor to obtain a maximum likelihood estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios including ISS, lunar station, deep space outpost, and Mars missions with durations of 360, 660, and 1000 days. The major results are the quantification of the uncertainties in current risk estimates, the identification of the primary factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction using the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time, and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in terms of quantitative objectives, e.g., the number of days in space without exceeding a given risk level within well-defined confidence limits.
NASA Astrophysics Data System (ADS)
Monier, E.; Scott, J. R.; Sokolov, A. P.; Forest, C. E.; Schlosser, C. A.
2013-12-01
This paper describes a computationally efficient framework for uncertainty studies in global and regional climate change. In this framework, the Massachusetts Institute of Technology (MIT) Integrated Global System Model (IGSM), an integrated assessment model that couples an Earth system model of intermediate complexity to a human activity model, is linked to the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). Since the MIT IGSM-CAM framework (version 1.0) incorporates a human activity model, it is possible to analyze uncertainties in emissions resulting from both uncertainties in the underlying socio-economic characteristics of the economic model and in the choice of climate-related policies. Another major feature is the flexibility to vary key climate parameters controlling the climate system response to changes in greenhouse gas and aerosol concentrations, e.g., climate sensitivity, ocean heat uptake rate, and strength of the aerosol forcing. The IGSM-CAM is not only able to realistically simulate the present-day mean climate and the observed trends at the global and continental scale, but it also simulates ENSO variability with realistic time scales, seasonality and patterns of SST anomalies, albeit with stronger magnitudes than observed. The IGSM-CAM shares the same general strengths and limitations as the Coupled Model Intercomparison Project Phase 3 (CMIP3) models in simulating present-day annual mean surface temperature and precipitation. Over land, the IGSM-CAM shows similar biases to the NCAR Community Climate System Model (CCSM) version 3, which shares the same atmospheric model. This study also presents 21st century simulations based on two emissions scenarios (an unconstrained scenario and a stabilization scenario at 660 ppm CO2-equivalent) similar to, respectively, the Representative Concentration Pathways RCP8.5 and RCP4.5 scenarios, and three sets of climate parameters.
Results of the simulations with the chosen climate parameters provide a good approximation for the median, and the 5th and 95th percentiles of the probability distribution of 21st century changes in global mean surface air temperature from previous work with the IGSM. Because the IGSM-CAM framework only considers one particular climate model, it cannot be used to assess the structural modeling uncertainty arising from differences in the parameterization suites of climate models. However, comparison of the IGSM-CAM projections with simulations of 31 CMIP5 models under the RCP4.5 and RCP8.5 scenarios shows very good agreement in the range of continental-scale warming between the two ensembles, except over Antarctica, where the IGSM-CAM overestimates the warming. This demonstrates that by sampling the climate system response, the IGSM-CAM, even though it relies on one single climate model, can essentially reproduce the range of future continental warming simulated by more than 30 different models. Precipitation changes projected in the IGSM-CAM simulations and the CMIP5 multi-model ensemble both display a large uncertainty at the continental scale. The two ensembles show good agreement over Asia and Europe. However, the ranges of precipitation changes do not overlap, though they are of similar size, over Africa and South America, two continents where models generally show little agreement in the sign of precipitation changes and where CCSM3 tends to be an outlier. Overall, the IGSM-CAM provides an efficient and consistent framework to explore the large uncertainty in future projections of global and regional climate change associated with uncertainty in the climate response and projected emissions.
NASA Astrophysics Data System (ADS)
Badawy, B.; Fletcher, C. G.
2017-12-01
The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=2^20) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that the snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable reduction of the plausible ranges of the parameter values, and hence of their uncertainties, which can lead to a significant reduction of the model uncertainty. The implementation and results of this study will be presented and discussed in detail.
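The random-forest permutation-importance idea can be sketched as follows; the "snow model" here is a hypothetical stand-in for the SVR emulator, with made-up parameter sensitivities (parameter 0 dominant, parameter 2 inert):

```python
import random

def permutation_importance(model, X, y, rng, n_repeats=10):
    """Mean increase in squared error when one input column is shuffled,
    mimicking the permutation-importance idea used with random forests."""
    def mse(Xm):
        return sum((model(row) - t) ** 2 for row, t in zip(Xm, y)) / len(y)
    base = mse(X)
    importances = []
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the association between input j and output
            Xp = [row[:j] + [c] + row[j + 1:] for row, c in zip(X, col)]
            deltas.append(mse(Xp) - base)
        importances.append(sum(deltas) / n_repeats)
    return importances

rng = random.Random(0)
# Hypothetical emulated "snow model": output dominated by parameter 0 (say, an
# albedo threshold), weakly affected by parameter 1, independent of parameter 2.
model = lambda p: 5.0 * p[0] + 0.5 * p[1]
X = [[rng.random(), rng.random(), rng.random()] for _ in range(200)]
y = [model(row) for row in X]
imp = permutation_importance(model, X, y, rng)
```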
Large uncertainty in permafrost carbon stocks due to hillslope soil deposits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shelef, Eitan; Rowland, Joel C.; Wilson, Cathy J.
Here, northern circumpolar permafrost soils contain more than a third of the global Soil Organic Carbon (SOC) pool. The sensitivity of this carbon pool to a changing climate is a primary source of uncertainty in simulation-based climate projections. These projections, however, do not account for the accumulation of soil deposits at the base of hillslopes (hill-toes), and the influence of this accumulation on the distribution, sequestration, and decomposition of SOC in landscapes affected by permafrost. Here we combine topographic models with soil-profile data and topographic analysis to evaluate the quantity and uncertainty of SOC mass stored in perennially frozen hill-toe soil deposits. We show that in Alaska this SOC mass introduces an uncertainty that is >200% of state-wide estimates of SOC stocks (77 PgC), and that a similarly large uncertainty may also apply at the circumpolar scale. Soil sampling and geophysical-imaging efforts that target hill-toe deposits can help constrain this large uncertainty.
Shope, Christopher L.; Angeroth, Cory E.
2015-01-01
Effective management of surface waters requires a robust understanding of spatiotemporal constituent loadings from upstream sources and the uncertainty associated with these estimates. We compared the total dissolved solids (TDS) loading into the Great Salt Lake (GSL) for water year 2013 with estimates of previously sampled periods in the early 1960s. We also provide updated results on GSL loading, quantitatively bounded by sampling uncertainties, which are useful for current and future management efforts. Our statistical loading results were more accurate than those from simple regression models. Our results indicate that TDS loading to the GSL in water year 2013 was 14.6 million metric tons with uncertainty ranging from 2.8 to 46.3 million metric tons, which varies greatly from previous regression estimates for water year 1964 of 2.7 million metric tons. Results also indicate that locations with increased sampling frequency are correlated with decreasing confidence intervals. Because time is incorporated into the LOADEST models, discrepancies are largely expected to be a function of temporally lagged salt storage delivery to the GSL associated with terrestrial and in-stream processes. By incorporating temporally variable estimates and statistically derived uncertainty of these estimates, we have provided quantifiable variability in the annual estimates of dissolved solids loading into the GSL. Further, our results support the need for increased monitoring of dissolved solids loading into saline lakes like the GSL by demonstrating the uncertainty associated with different levels of sampling frequency.
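A minimal sketch of how sampling frequency affects loading-estimate uncertainty, using a percentile bootstrap on hypothetical daily TDS loads; this illustrates the sampling-frequency point above, not the LOADEST methodology itself:

```python
import random
import statistics

def bootstrap_ci(samples, n_boot, rng, alpha=0.05):
    """Percentile-bootstrap confidence interval for the mean daily load."""
    means = sorted(
        statistics.fmean(rng.choices(samples, k=len(samples)))
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

rng = random.Random(1)
# Hypothetical daily TDS loads (thousand metric tons/day), right-skewed.
daily = [rng.lognormvariate(3.6, 0.6) for _ in range(365)]
sparse = daily[::30]                       # roughly monthly sampling
lo_s, hi_s = bootstrap_ci(sparse, 2000, rng)
lo_d, hi_d = bootstrap_ci(daily, 2000, rng)
# Denser sampling of the same regime yields a much narrower interval,
# consistent with confidence intervals shrinking at well-sampled sites.
```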
NASA Astrophysics Data System (ADS)
Berthet, Lionel; Marty, Renaud; Bourgin, François; Viatgé, Julie; Piotte, Olivier; Perrin, Charles
2017-04-01
An increasing number of operational flood forecasting centres assess the predictive uncertainty associated with their forecasts and communicate it to the end users. This information can match the end users' needs (i.e., prove useful for efficient crisis management) only if it is reliable: reliability is therefore a key quality for operational flood forecasts. In 2015, the French flood forecasting national and regional services (Vigicrues network; www.vigicrues.gouv.fr) implemented a framework to compute quantitative discharge and water level forecasts and to assess the predictive uncertainty. Among the possible technical options to achieve this goal, a statistical analysis of past forecasting errors of deterministic models has been selected (QUOIQUE method, Bourgin, 2014). It is a data-based and non-parametric approach based on as few assumptions as possible about the mathematical structure of the forecasting error. In particular, a very simple assumption is made regarding the predictive uncertainty distributions for large events outside the range of the calibration data: the multiplicative error distribution is assumed to be constant, whatever the magnitude of the flood. Indeed, the predictive distributions may not be reliable in extrapolation. However, estimating the predictive uncertainty for these rare events is crucial when major floods are of concern. In order to improve the forecasts' reliability for major floods, an attempt is made to combine the operational strength of the empirical statistical analysis with a simple error model. Since the heteroscedasticity of forecast errors can considerably weaken the predictive reliability for large floods, this error model is based on the log-sinh transformation, which has been shown to significantly reduce the heteroscedasticity of the transformed error in a simulation context, even for flood peaks (Wang et al., 2012).
Exploratory tests on some operational forecasts issued during the recent floods experienced in France (major spring floods in June 2016 on the Loire river tributaries and flash floods in fall 2016) will be shown and discussed. References: Bourgin, F. (2014). How to assess the predictive uncertainty in hydrological modelling? An exploratory work on a large sample of watersheds, AgroParisTech. Wang, Q. J., Shrestha, D. L., Robertson, D. E. and Pokhrel, P. (2012). A log-sinh transformation for data normalization and variance stabilization. Water Resources Research, W05514, doi:10.1029/2011WR010973.
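A minimal sketch of the log-sinh transformation of Wang et al. (2012) and its inverse; the parameters a and b are hypothetical here and would be fitted to the forecast errors in practice:

```python
import math

def log_sinh(q, a, b):
    """Wang et al. (2012) log-sinh transform: z = log(sinh(a + b*q)) / b.
    For small a + b*q it behaves like log(q); for large values it is nearly
    linear in q, which is what tames heteroscedastic multiplicative errors."""
    return math.log(math.sinh(a + b * q)) / b

def log_sinh_inv(z, a, b):
    """Inverse transform, mapping back to discharge space."""
    return (math.asinh(math.exp(b * z)) - a) / b

a, b = 0.01, 0.001          # hypothetical parameters
z = log_sinh(250.0, a, b)   # transform a discharge of 250 m3/s
q = log_sinh_inv(z, a, b)   # round-trips back to 250 m3/s
```

In an uncertainty framework, errors would be modeled as roughly homoscedastic and Gaussian in z-space, and predictive quantiles mapped back to discharge with the inverse transform.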
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahoo, N; Zhu, X; Zhang, X
Purpose: To quantify the impact of range and setup uncertainties on various dosimetric indices that are used to assess normal tissue toxicities of patients receiving passive scattering proton beam therapy (PSPBT). Methods: Robust analysis of sample treatment plans of six brain cancer patients treated with PSPBT at our facility, for whom the maximum brain stem dose exceeded 5800 CcGE, was performed. The DVH of each plan was calculated in an Eclipse treatment planning system (TPS) version 11 applying ±3.5% range uncertainty and ±3 mm shift of the isocenter in x, y and z directions to account for setup uncertainties. Worst-case dose indices for brain stem and whole brain were compared to their values in the nominal plan to determine the average change in their values. For the brain stem, maximum dose to 1 cc of volume, dose to 10%, 50%, 90% of volume (D10, D50, D90) and volume receiving 6000, 5400, 5000, 4500, 4000 CcGE (V60, V54, V50, V45, V40) were evaluated. For the whole brain, maximum dose to 1 cc of volume, and volume receiving 5400, 5000, 4500, 4000, 3000 CcGE (V54, V50, V45, V40 and V30) were assessed. Results: The average changes in the values of these indices in the worst-case scenarios from the nominal plan were as follows. Brain stem; Maximum dose to 1 cc of volume: 1.1%, D10: 1.4%, D50: 8.0%, D90: 73.3%, V60: 116.9%, V54: 27.7%, V50: 21.2%, V45: 16.2%, V40: 13.6%. Whole brain; Maximum dose to 1 cc of volume: 0.3%, V54: 11.4%, V50: 13.0%, V45: 13.6%, V40: 14.1%, V30: 13.5%. Conclusion: Large to modest changes in the dosimetric indices for brain stem and whole brain compared to the nominal plan were observed due to range and setup uncertainties. Such potential changes should be taken into account while using any dosimetric parameters for outcome evaluation of patients receiving proton therapy.
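The worst-case robust analysis can be sketched as an enumeration over range and setup perturbations; the dose-response function below is a hypothetical surrogate for the TPS dose recalculation, not clinical data:

```python
from itertools import product

# Hypothetical surrogate for a recomputed DVH metric: brain-stem dose to 1 cc
# as a function of range error (fraction) and isocenter shift (mm). A real
# robust analysis recomputes the dose in the TPS for each perturbed scenario.
def d1cc(range_err, dx, dy, dz):
    range_term = 5800.0 * (1.0 + 0.4 * range_err)       # range-driven change
    return range_term + 2.0 * (abs(dx) + abs(dy) + abs(dz))  # shift penalty

# ±3.5% range uncertainty and ±3 mm shifts in x, y, z (plus nominal values).
scenarios = product((-0.035, 0.0, 0.035), (-3, 0, 3), (-3, 0, 3), (-3, 0, 3))
doses = [d1cc(*s) for s in scenarios]
nominal = d1cc(0.0, 0, 0, 0)
worst = max(doses)
pct_change = 100.0 * (worst - nominal) / nominal   # worst-case change in %
```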
NASA Astrophysics Data System (ADS)
Spanu, Antonio; Weinzierl, Bernadett; Freudenthaler, Volker; Sauer, Daniel; Gasteiger, Josef
2016-04-01
Explosive volcanic eruptions inject large amounts of gas and particles into the atmosphere, resulting in strong impacts on human systems and climate. Fine ash particles in suspension, even at low concentrations, are a serious aviation safety hazard. A key point in predicting the dispersion and deposition of volcanic ash is knowledge of the emitted mass and its particle size distribution. Usually the deposit is used to characterize the source, but a large uncertainty is present for fine and very fine ash particles, which are usually not well preserved. Conversely, satellite observations provide only column-integrated information and are strongly sensitive to cloud conditions above the ash plumes. Consequently, in situ measurements are fundamental to extend our knowledge of ash clouds, their properties, and interactions over the vertical extent of the atmosphere. Different in-situ instruments are available, covering different particle size ranges using a variety of measurement techniques. Depending on the measurement technique, artefacts due to instrument setup and ambient conditions can strongly modify the measured number concentration and size distribution of the airborne particles. It is fundamental to correct for those effects to quantify the uncertainty associated with the measurement. Here we evaluate the potential of our optical light-scattering spectrometer CAS-DPOL to detect airborne mineral dust and volcanic ash (in the size range between 0.7μm and 50μm) and to provide a reliable estimation of the mass concentration, investigating the associated uncertainty. The CAS-DPOL instrument sizes particles by detecting the light scattered off the particle into a defined angle. The associated uncertainty depends on the optical instrument design and on unknown particle characteristics such as shape and material. Indirect measurements of mass concentrations are statistically reconstructed using the air flow velocity.
Therefore, the detected concentration is strongly sensitive to the sample flow and to the mechanical instrument design. Using a fluid dynamics model coupled with an optical model, we analyze the effects of instrument design on the measurement, identify measurement uncertainties, and recommend strategies to reduce them. The two main results are, first, that the optical design of the CAS-DPOL aerosol spectrometer can lead to an under-counting bias of up to 40% for larger particles and an over-counting bias of 20%-30% for smaller particles; and secondly, that depending on how the instrument is mounted on the plane, the sampling can be subject to a significantly larger size selection bias than typically recognized, especially if the mounting leads to irregular sampling conditions. To correct for both problems, a new correction algorithm is described, generalizing the results to other optical particle counters as well. Finally, a comparison study is presented showing the effects on mass estimation and radiative forcing for uncorrected and corrected data, along with the resulting uncertainty.
Multimodel Uncertainty Changes in Simulated River Flows Induced by Human Impact Parameterizations
NASA Technical Reports Server (NTRS)
Liu, Xingcai; Tang, Qiuhong; Cui, Huijuan; Mu, Mengfei; Gerten, Dieter; Gosling, Simon; Masaki, Yoshimitsu; Satoh, Yusuke; Wada, Yoshihide
2017-01-01
Human impacts increasingly affect the global hydrological cycle and indeed dominate hydrological changes in some regions. Hydrologists have sought to identify the human-impact-induced hydrological variations via parameterizing anthropogenic water uses in global hydrological models (GHMs). The consequently increased model complexity is likely to introduce additional uncertainty among GHMs. Here, using four GHMs, between-model uncertainties are quantified in terms of the ratio of signal to noise (SNR) for average river flow during 1971-2000 simulated in two experiments, with representation of human impacts (VARSOC) and without (NOSOC). It is the first quantitative investigation of between-model uncertainty resulting from the inclusion of human impact parameterizations. Results show that the between-model uncertainties in terms of SNRs in the VARSOC annual flow are larger (about 2 for global and varied magnitude for different basins) than those in the NOSOC, which are particularly significant in most areas of Asia and northern areas to the Mediterranean Sea. The SNR differences are mostly negative (-20 to 5, indicating higher uncertainty) for basin-averaged annual flow. The VARSOC high flow shows slightly lower uncertainties than NOSOC simulations, with SNR differences mostly ranging from -20 to 20. The uncertainty differences between the two experiments are significantly related to the fraction of irrigated area in the basins. The large additional uncertainties in VARSOC simulations introduced by the inclusion of parameterizations of human impacts raise the urgent need for GHM development toward a better understanding of human impacts. Differences in the parameterizations of irrigation, reservoir regulation, and water withdrawals are discussed as potential directions of improvement for future GHM development.
We also discuss the advantages of statistical approaches to reduce the between-model uncertainties, and the importance of calibrating GHMs not only for better performance in historical simulations but also for more robust and credible future projections of hydrological changes under a changing environment.
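A minimal sketch of the signal-to-noise metric used to compare the two experiments, with hypothetical basin-mean flows from four models (not the study's data):

```python
import statistics

def snr(values):
    """Signal-to-noise ratio across models: |ensemble mean| / ensemble sd."""
    return abs(statistics.fmean(values)) / statistics.stdev(values)

# Hypothetical mean annual flow (km3/yr) for one basin from four GHMs,
# without (NOSOC) and with (VARSOC) human-impact parameterizations.
nosoc = [105.0, 98.0, 102.0, 99.0]
varsoc = [95.0, 70.0, 88.0, 60.0]
delta = snr(varsoc) - snr(nosoc)   # negative => human impacts add uncertainty
```

A negative SNR difference, as in this toy basin, corresponds to the mostly negative basin-averaged differences reported above.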
The role of correlations in uncertainty quantification of transportation relevant fuel models
Fridlyand, Aleksandr; Johnson, Matthew S.; Goldsborough, S. Scott; ...
2017-02-03
Large reaction mechanisms are often used to describe the combustion behavior of transportation-relevant fuels like gasoline, where these are typically represented by surrogate blends, e.g., n-heptane/iso-octane/toluene. We describe efforts to quantify the uncertainty in the predictions of such mechanisms at realistic engine conditions, seeking to better understand the robustness of the model as well as the important reaction pathways and their impacts on combustion behavior. In this work, we examine the importance of taking into account correlations among reactions that utilize the same rate rules and those with multiple product channels on forward propagation of uncertainty by Monte Carlo simulations. Automated means are developed to generate the uncertainty factor assignment for a detailed chemical kinetic mechanism, by first uniquely identifying each reacting species, then sorting each of the reactions based on the rate rule utilized. Simulation results reveal that in the low temperature combustion regime for iso-octane, the majority of the uncertainty in the model predictions can be attributed to low temperature reactions of the fuel sub-mechanism. The foundational, or small-molecule chemistry (C0-C4) only contributes significantly to uncertainties in the predictions at the highest temperatures (Tc = 900 K). Accounting for correlations between important reactions is shown to produce non-negligible differences in the estimates of uncertainty. Including correlations among reactions that use the same rate rules increases uncertainty in the model predictions, while accounting for correlations among reactions with multiple branches decreases uncertainty in some cases.
Significant non-linear response is observed in the model predictions depending on how the probability distributions of the uncertain rate constants are defined. Finally, we conclude that care must be exercised in defining these probability distributions in order to avoid bias and physically unrealistic estimates in the forward propagation of uncertainty across a range of UQ activities.
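The correlated-versus-independent sampling effect described above can be illustrated with a minimal Monte Carlo sketch. All rate values and uncertainty factors below are hypothetical: reactions that share a rate rule receive one common log-normal multiplier, while the independent treatment draws a separate multiplier for each, and the shared multiplier widens the spread of the summed rate.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50_000
k_nominal = np.array([1.0, 1.0])   # hypothetical nominal rate constants
f = 2.0                            # uncertainty factor: k within [k/f, k*f] at ~2 sigma
sigma = np.log(f) / 2.0            # log-normal spread implied by that factor

# Independent treatment: each reaction gets its own multiplier
mult_ind = rng.lognormal(0.0, sigma, size=(N, 2))
total_ind = (k_nominal * mult_ind).sum(axis=1)

# Correlated treatment (same rate rule): one shared multiplier for both reactions
mult_cor = rng.lognormal(0.0, sigma, size=(N, 1))
total_cor = (k_nominal * mult_cor).sum(axis=1)

print(f"independent std: {total_ind.std():.3f}, correlated std: {total_cor.std():.3f}")
```

The means agree, but the correlated case has a larger standard deviation, mirroring the finding that shared-rate-rule correlations increase predicted uncertainty.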
NASA Astrophysics Data System (ADS)
Westerberg, Ida
2017-04-01
Understanding and quantifying how hydrological response behaviour varies across catchments, or how catchments change with time requires reliable discharge data. For reliable estimation of spatial and temporal change, the change in the response behaviour needs to be larger than the uncertainty in the response behaviour estimates that are compared. Understanding how discharge data uncertainty varies between catchments and over time, and how these uncertainties propagate to information derived from the data, is therefore key to drawing the right conclusions in comparative analyses. Uncertainty in discharge data is often highly place-specific and reliable estimation depends on detailed analyses of the rating curve model and stage-discharge measurements used to calculate discharge time series from stage (water level) at the gauging station. This underlying information is often not available when discharge data is provided by monitoring agencies. However, even without detailed analyses, the chance that the discharge data would be uncertain at particular flow ranges can be assessed based on information about the gauging station, the flow regime, and the catchment. This type of information is often available for most catchments even if the rating curve data are not. Such 'soft information' on discharge uncertainty may aid interpretation of results from regional and temporal change analyses. In particular, it can help reduce the risk of wrongly interpreting differences in response behaviour caused by discharge uncertainty as real changes. In this presentation I draw on several previous studies to discuss some of the factors that affect discharge data uncertainty and give examples from catchments worldwide. 
I aim to 1) illustrate the consequences of discharge data uncertainty on comparisons of different types of hydrological response behaviour across catchments and when analysing temporal change, and 2) give practical advice as to what factors may help identify catchments with potentially large discharge uncertainty.
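As a rough illustration of why discharge uncertainty is flow-range dependent, the sketch below propagates assumed rating-curve parameter uncertainty through the standard power-law form Q = a(h - h0)^b. All parameter values and standard errors are hypothetical; the point is that the relative uncertainty grows when the curve is evaluated above the well-gauged stage range.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20_000
# Hypothetical fitted rating-curve parameters and their standard errors
a = rng.normal(5.0, 0.5, N)     # coefficient
b = rng.normal(1.6, 0.1, N)     # exponent
h0 = 0.2                        # cease-to-flow stage (assumed known)

rel_width = {}
for h in (1.0, 4.0):            # a stage within vs. above the gauged range
    Q = a * (h - h0) ** b
    lo, hi = np.percentile(Q, [5, 95])
    rel_width[h] = (hi - lo) / np.median(Q)
    print(f"h = {h}: median Q = {np.median(Q):.1f}, 90% range = {lo:.1f}-{hi:.1f}")
```

Because uncertainty in the exponent b is amplified by ln(h - h0), the relative width of the 90% band is larger at the higher, extrapolated stage.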
Urbazaev, Mikhail; Thiel, Christian; Cremer, Felix; Dubayah, Ralph; Migliavacca, Mirco; Reichstein, Markus; Schmullius, Christiane
2018-02-21
Information on the spatial distribution of aboveground biomass (AGB) over large areas is needed for understanding and managing processes involved in the carbon cycle and supporting international policies for climate change mitigation and adaption. Furthermore, these products provide important baseline data for the development of sustainable management strategies to local stakeholders. The use of remote sensing data can provide spatially explicit information of AGB from local to global scales. In this study, we mapped national Mexican forest AGB using satellite remote sensing data and a machine learning approach. We modelled AGB using two scenarios: (1) extensive national forest inventory (NFI), and (2) airborne Light Detection and Ranging (LiDAR) as reference data. Finally, we propagated uncertainties from field measurements to LiDAR-derived AGB and to the national wall-to-wall forest AGB map. The estimated AGB maps (NFI- and LiDAR-calibrated) showed similar goodness-of-fit statistics (R 2 , Root Mean Square Error (RMSE)) at three different scales compared to the independent validation data set. We observed different spatial patterns of AGB in tropical dense forests, where no or limited number of NFI data were available, with higher AGB values in the LiDAR-calibrated map. We estimated much higher uncertainties in the AGB maps based on two-stage up-scaling method (i.e., from field measurements to LiDAR and from LiDAR-based estimates to satellite imagery) compared to the traditional field to satellite up-scaling. By removing LiDAR-based AGB pixels with high uncertainties, it was possible to estimate national forest AGB with similar uncertainties as calibrated with NFI data only. Since LiDAR data can be acquired much faster and for much larger areas compared to field inventory data, LiDAR is attractive for repetitive large scale AGB mapping. 
In this study, we showed that two-stage up-scaling methods for AGB estimation over large areas need to be analyzed and validated with great care. The uncertainties in the LiDAR-estimated AGB propagate further in the wall-to-wall map and can be up to 150%. Thus, when a two-stage up-scaling method is applied, it is crucial to characterize the uncertainties at all stages in order to generate robust results. Considering the findings mentioned above LiDAR can be used as an extension to NFI for example for areas that are difficult or not possible to access.
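The inflation of uncertainty under two-stage up-scaling can be sketched by combining stage-wise relative errors in quadrature, assuming the stage errors are independent. The percentages below are illustrative, not the study's values:

```python
import math

# Hypothetical relative standard errors (fractions) at each up-scaling stage
field_to_lidar = 0.30    # field plots -> LiDAR AGB model
lidar_to_map = 0.40      # LiDAR AGB -> satellite wall-to-wall map

# Assuming independent errors, relative uncertainties combine in quadrature
two_stage = math.sqrt(field_to_lidar**2 + lidar_to_map**2)
one_stage = 0.40         # direct field -> satellite calibration (assumed)
print(f"two-stage: {two_stage:.0%}, one-stage: {one_stage:.0%}")
```

Even when the second stage is no worse than a direct calibration, carrying the first-stage error forward makes the two-stage total larger, which is why the study stresses characterizing uncertainty at every stage.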
Low energy scattering cross section ratios of 14N(p,p)14N
NASA Astrophysics Data System (ADS)
deBoer, R. J.; Bardayan, D. W.; Görres, J.; LeBlanc, P. J.; Manukyan, K. V.; Moran, M. T.; Smith, K.; Tan, W.; Uberseder, E.; Wiescher, M.; Bertone, P. F.; Champagne, A. E.; Islam, M. S.
2015-04-01
Background: The slowest reaction in the first CNO cycle is 14N(p,γ)15O; its rate therefore determines the overall energy production efficiency of the entire cycle. The cross section presents several strong resonance contributions, especially for the ground-state transition. Some of the properties of the corresponding levels in the 15O compound nucleus remain uncertain, which affects the uncertainty in extrapolating the capture cross section to the low energy range of astrophysical interest. Purpose: The 14N(p,γ)15O cross section can be described by using the phenomenological R-matrix. Over the energy range of interest, only the proton and γ-ray channels are open. Since resonance capture makes significant contributions to the 14N(p,γ)15O cross section, resonant proton scattering data can be used to provide additional constraints on the R-matrix fit of the capture data. Methods: A 4 MV KN Van de Graaff accelerator was used to bombard protons onto a windowless gas target containing enriched 14N gas over the proton energy range from Ep = 1.0 to 3.0 MeV. Scattered protons were detected at θlab = 90∘, 120∘, 135∘, 150∘, and 160∘ using ruggedized silicon detectors. In addition, a 10 MV FN Tandem Van de Graaff accelerator was used to accelerate protons onto a solid adenine (C5H5N5) target, of natural isotopic abundance, evaporated onto a thin self-supporting carbon backing, over the energy range from Ep = 1.8 to 4.0 MeV. Scattered protons were detected at 28 angles between θlab = 30.4∘ and 167.7∘ by using silicon photodiode detectors. Results: Relative cross sections were extracted from both measurements. While the relative cross sections do not provide as much constraint as absolute measurements, they greatly reduce the dependence of the data on otherwise significant systematic uncertainties, which are more difficult to quantify. The data are fit simultaneously using an R-matrix analysis, and level energies and proton widths are extracted.
Even with relative measurements, the statistics and large angular coverage of the measurements result in more confident values for the energies and proton widths of several levels, in particular the broad resonance at Ec.m. = 2.21 MeV, which corresponds to the 3/2+ level at Ex = 9.51 MeV in 15O. Notably, the s- and d-wave angular-momentum channels are separated. Conclusion: The relative cross sections provide a consistent set of data that can be used to better constrain a full multichannel R-matrix extrapolation of the capture data. A preliminary Monte Carlo uncertainty analysis demonstrates how the scattering data reduce the uncertainty, but several other issues that make large contributions to the uncertainty remain, and these must be addressed by further capture and lifetime measurements.
NASA Astrophysics Data System (ADS)
Pierre Auger Collaboration; Abreu, P.; Aglietta, M.; Ahlers, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Alves Batista, R.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Antiči'c, T.; Aramo, C.; Arganda, E.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Badescu, A. M.; Balzer, M.; Barber, K. B.; Barbosa, A. F.; Bardenet, R.; Barroso, S. L. C.; Baughman, B.; Bäuml, J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellétoile, A.; Bellido, J. A.; BenZvi, S.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Brogueira, P.; Brown, W. C.; Bruijn, R.; Buchholz, P.; Bueno, A.; Buroker, L.; Burton, R. E.; Caballero-Mora, K. S.; Caccianiga, B.; Caramete, L.; Caruso, R.; Castellina, A.; Catalano, O.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Cheng, S. H.; Chiavassa, A.; Chinellato, J. A.; Chirinos Diaz, J.; Chudoba, J.; Cilmo, M.; Clay, R. W.; Cocciolo, G.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cook, H.; Cooper, M. J.; Coppens, J.; Cordier, A.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Dallier, R.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; De Donato, C.; de Jong, S. J.; De La Vega, G.; de Mello Junior, W. J. M.; de Mello Neto, J. R. T.; De Mitri, I.; de Souza, V.; de Vries, K. D.; del Peral, L.; del Río, M.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Díaz Castro, M. L.; Diep, P. N.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; Ebr, J.; Engel, R.; Erdmann, M.; Escobar, C. 
O.; Espadanal, J.; Etchegoyen, A.; Facal San Luis, P.; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipčič, A.; Fliescher, S.; Fracchiolla, C. E.; Fraenkel, E. D.; Fratu, O.; Fröhlich, U.; Fuchs, B.; Gaior, R.; Gamarra, R. F.; Gambetta, S.; García, B.; Garcia Roca, S. T.; Garcia-Gamez, D.; Garcia-Pinto, D.; Garilli, G.; Gascon Bravo, A.; Gemmeke, H.; Ghia, P. L.; Giller, M.; Gitto, J.; Glass, H.; Gold, M. S.; Golup, G.; Gomez Albarracin, F.; Gómez Berisso, M.; Gómez Vitale, P. F.; Gonçalves, P.; Gonzalez, J. G.; Gookin, B.; Gorgi, A.; Gouffon, P.; Grashorn, E.; Grebe, S.; Griffith, N.; Grillo, A. F.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Hollon, N.; Holmes, V. C.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huber, D.; Huege, T.; Insolia, A.; Ionita, F.; Italiano, A.; Jansen, S.; Jarne, C.; Jiraskova, S.; Josebachuili, M.; Kadija, K.; Kampert, K. H.; Karhan, P.; Kasper, P.; Katkov, I.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kelley, J. L.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapp, J.; Koang, D.-H.; Kotera, K.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuempel, D.; Kulbartz, J. K.; Kunka, N.; La Rosa, G.; Lachaud, C.; LaHurd, D.; Latronico, L.; Lauer, R.; Lautridou, P.; Le Coz, S.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Lyberis, H.; Maccarone, M. C.; Macolino, C.; Maldera, S.; Maller, J.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, J.; Marin, V.; Maris, I. C.; Marquez Falcon, H. R.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Martraire, D.; Masías Meza, J. J.; Mathes, H. J.; Matthews, J.; Matthews, J. 
A. J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mazur, P. O.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Mertsch, P.; Messina, S.; Meurer, C.; Meyhandan, R.; Mi'canovi'c, S.; Micheletti, M. I.; Minaya, I. A.; Miramonti, L.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morales, B.; Morello, C.; Moreno, E.; Moreno, J. C.; Mostafá, M.; Moura, C. A.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Nhung, P. T.; Niechciol, M.; Niemietz, L.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nožka, L.; Oehlschläger, J.; Olinto, A.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Parente, G.; Parizot, E.; Parra, A.; Pastor, S.; Paul, T.; Pech, M.; Peķala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Petrera, S.; Petrolini, A.; Petrov, Y.; Pfendner, C.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Ponce, V. H.; Pontz, M.; Porcelli, A.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rivera, H.; Rizi, V.; Roberts, J.; Rodrigues de Carvalho, W.; Rodriguez, G.; Rodriguez Cabo, I.; Rodriguez Martino, J.; Rodriguez Rojo, J.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Rouillé-d'Orfeuil, B.; Roulet, E.; Rovero, A. C.; Rühle, C.; Saftoiu, A.; Salamida, F.; Salazar, H.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarkar, S.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Scholten, O.; Schoorlemmer, H.; Schovancova, J.; Schovánek, P.; Schröder, F.; Schuster, D.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Silva Lopez, H. 
H.; Sima, O.; 'Smiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Srivastava, Y. N.; Stanic, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Šuša, T.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Tapia, A.; Tartare, M.; Taşcău, O.; Tcaciuc, R.; Thao, N. T.; Thomas, D.; Tiffenberg, J.; Timmermans, C.; Tkaczyk, W.; Todero Peixoto, C. J.; Toma, G.; Tomankova, L.; Tomé, B.; Tonachini, A.; Torralba Elipe, G.; Travnicek, P.; Tridapalli, D. B.; Tristram, G.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van den Berg, A. M.; van Velzen, S.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Vicha, J.; Videla, M.; Villaseñor, L.; Wahlberg, H.; Wahrlich, P.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Westerhoff, S.; Whelan, B. J.; Widom, A.; Wieczorek, G.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Wommer, M.; Wundheiler, B.; Yamamoto, T.; Yapici, T.; Younk, P.; Yuan, G.; Yushkov, A.; Zamorano Garcia, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Zhou, J.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.
2012-12-01
A thorough search for large-scale anisotropies in the distribution of arrival directions of cosmic rays detected above 10^18 eV at the Pierre Auger Observatory is presented. This search is performed as a function of both declination and right ascension in several energy ranges above 10^18 eV, and reported in terms of dipolar and quadrupolar coefficients. Within the systematic uncertainties, no significant deviation from isotropy is revealed. Assuming that any cosmic-ray anisotropy is dominated by dipole and quadrupole moments in this energy range, upper limits on their amplitudes are derived. These upper limits allow us to test the origin of cosmic rays above 10^18 eV from stationary Galactic sources densely distributed in the Galactic disk and predominantly emitting light particles in all directions.
Bird-landscape relations in the Chihuahuan Desert: Coping with uncertainties about predictive models
Gutzwiller, K.J.; Barrow, W.C.
2001-01-01
During the springs of 1995-1997, we studied birds and landscapes in the Chihuahuan Desert along part of the Texas-Mexico border. Our objectives were to assess bird-landscape relations and their interannual consistency and to identify ways to cope with associated uncertainties that undermine confidence in using such relations in conservation decision processes. Bird distributions were often significantly associated with landscape features, and many bird-landscape models were valid and useful for predictive purposes. Differences in early spring rainfall appeared to influence bird abundance, but there was no evidence that annual differences in bird abundance affected model consistency. Model consistency for richness (42%) was higher than mean model consistency for 26 focal species (mean 30%, range 0-67%), suggesting that relations involving individual species are, on average, more subject to factors that cause variation than are richness-landscape relations. Consistency of bird-landscape relations may be influenced by such factors as plant succession, exotic species invasion, bird species' tolerances for environmental variation, habitat occupancy patterns, and variation in food density or weather. The low model consistency that we observed for most species indicates the high variation in bird-landscape relations that managers and other decision makers may encounter. The uncertainty of interannual variation in bird-landscape relations can be reduced by using projections of bird distributions from different annual models to determine the likely range of temporal and spatial variation in a species' distribution. 
Stochastic simulation models can be used to incorporate the uncertainty of random environmental variation into predictions of bird distributions based on bird-landscape relations and to provide probabilistic projections with which managers can weigh the costs and benefits of various decisions. Uncertainty about the true structure of bird-landscape relations (structural uncertainty) can be reduced by ensuring that models meet important statistical assumptions, designing studies with sufficient statistical power, validating the predictive ability of models, and improving model accuracy through continued field sampling and model fitting. Uncertainty associated with sampling variation (partial observability) can be reduced by ensuring that sample sizes are large enough to provide precise estimates of both bird and landscape parameters. By decreasing the uncertainty due to partial observability, managers will improve their ability to reduce structural uncertainty.
Reduction of uncertainty in global black carbon direct radiative forcing constrained by observations
NASA Astrophysics Data System (ADS)
Wang, R.; Balkanski, Y.; Boucher, O.; Ciais, P.; Schuster, G. L.; Chevallier, F.; Samset, B. H.; Valari, M.; Liu, J.; Tao, S.
2017-12-01
Black carbon (BC) absorbs sunlight and contributes to global warming. However, the size of this effect, namely the direct radiative forcing (DRF), ranges from +0.1 to +1.0 W m-2, largely due to discrepancies between modeled and observed BC radiation absorption. Studies that adjusted emissions to correct model biases resulted in a revised upward estimate of the BC DRF. However, the observation-based BC DRF was not optimized against observations in a rigorous mathematical manner, because uncertainties in emissions and the representativeness errors due to the use of coarse-resolution models were not fully assessed. Here we simulated the absorption of solar radiation by BC from all sources at 10-km resolution by combining a nested aerosol model with a downscaling method. The normalized mean bias in BC radiation absorption was reduced from -51% to -24% in Asia and from -57% to -50% elsewhere. We applied a Bayesian method that accounts for model, representativeness, and observational uncertainties to estimate the BC DRF and its uncertainty. Using the high-resolution model reduces the uncertainty in BC DRF from -101%/+152% to -70%/+71% over Asia and from -83%/+108% to -64%/+68% over other continental regions. We derived an observation-based BC DRF of 0.61 W m-2 (0.16 to 1.40 at 90% confidence) as our best estimate.
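A toy version of such a Bayesian constraint can be sketched on a one-dimensional grid: a broad prior on a scaling factor for modelled BC absorption is updated with a single pseudo-observation whose error term lumps together observational and representativeness uncertainty. All numbers are illustrative, not the study's values.

```python
import numpy as np

# Grid over a hypothetical scaling factor on modelled BC absorption
s = np.linspace(0.2, 3.0, 1401)
ds = s[1] - s[0]

# Broad log-normal prior (spread chosen only for illustration)
prior = np.exp(-0.5 * (np.log(s) / 0.6) ** 2) / s
prior /= prior.sum() * ds

# One pseudo-observation of the measured/modelled absorption ratio, with a
# combined observational + representativeness error
obs_ratio, obs_sigma = 1.3, 0.25
likelihood = np.exp(-0.5 * ((s - obs_ratio) / obs_sigma) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum() * ds

def interval90(pdf):
    """5th-95th percentile interval of a gridded density."""
    cdf = np.cumsum(pdf) * ds
    return s[np.searchsorted(cdf, 0.05)], s[np.searchsorted(cdf, 0.95)]

p_lo, p_hi = interval90(prior)
q_lo, q_hi = interval90(posterior)
print(f"prior 90%: {p_lo:.2f}-{p_hi:.2f}, posterior 90%: {q_lo:.2f}-{q_hi:.2f}")
```

Shrinking the observational or representativeness error narrows the posterior, which is the mechanism by which the high-resolution model reduces the DRF uncertainty range.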
NASA Astrophysics Data System (ADS)
Lu, Shasha; Guan, Xingliang; Zhou, Min; Wang, Yang
2014-05-01
A large number of mathematical models have been developed to support land resource allocation decisions and land management needs; however, few of them can address the various uncertainties associated with the many factors involved in such decisions (e.g., land resource availabilities, land demands, land-use patterns, and social demands, as well as ecological requirements). In this study, a multi-objective interval-stochastic land resource allocation model (MOISLAM) was developed for tackling uncertainty that presents as discrete intervals and/or probability distributions. The developed model improves upon the existing multi-objective programming and inexact optimization approaches. The MOISLAM not only considers economic factors, but also involves food security and eco-environmental constraints; it can, therefore, effectively reflect various interrelations among different aspects in a land resource management system. Moreover, the model can also help examine the reliability of satisfying (or the risk of violating) system constraints under uncertainty. In this study, the MOISLAM was applied to a real case of long-term urban land resource allocation planning in Suzhou, in the Yangtze River Delta of China. Interval solutions associated with different risk levels of constraint violation were obtained. The results are considered useful for generating a range of decision alternatives under various system conditions, and thus helping decision makers to identify a desirable land resource allocation strategy under uncertainty.
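Interval-parameter programming of the kind MOISLAM builds on can be illustrated with minimal interval arithmetic: each uncertain quantity is carried as a [lower, upper] pair, so solutions come out as intervals rather than point values. The demands and benefits below are hypothetical, not from the Suzhou case.

```python
# Minimal interval-arithmetic sketch for interval-parameter programming
def iv_add(a, b):
    """Sum of two intervals."""
    return (a[0] + b[0], a[1] + b[1])

def iv_mul(a, b):
    """Product of two intervals: take min/max over all endpoint products."""
    p = [a[i] * b[j] for i in (0, 1) for j in (0, 1)]
    return (min(p), max(p))

urban = (120.0, 150.0)          # km^2 of urban land demand (hypothetical)
agricultural = (300.0, 340.0)   # km^2 of agricultural land demand (hypothetical)
benefit = (0.5, 1.5)            # net benefit per km^2 (arbitrary units)

total_area = iv_add(urban, agricultural)
total_benefit = iv_mul(total_area, benefit)
print(total_area, total_benefit)  # (420.0, 490.0) (210.0, 735.0)
```

Real interval programming then optimizes over such intervals subject to interval constraints; this sketch only shows how interval inputs propagate to interval outputs.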
NASA Astrophysics Data System (ADS)
Bartlett, Rachel E.; Bollasina, Massimo A.; Booth, Ben B. B.; Dunstone, Nick J.; Marenco, Franco; Messori, Gabriele; Bernie, Dan J.
2018-03-01
Anthropogenic aerosols could dominate over greenhouse gases in driving near-term hydroclimate change, especially in regions with high present-day aerosol loading such as Asia. Uncertainties in near-future aerosol emissions represent a potentially large, yet unexplored, source of ambiguity in climate projections for the coming decades. We investigated the near-term sensitivity of the Asian summer monsoon to aerosols by means of transient modelling experiments using HadGEM2-ES under two existing climate change mitigation scenarios selected to have similar greenhouse gas forcing, but to span a wide range of plausible global sulfur dioxide emissions. Increased sulfate aerosols, predominantly from East Asian sources, lead to large regional dimming through aerosol-radiation and aerosol-cloud interactions. This results in surface cooling and anomalous anticyclonic flow over land, while abating the western Pacific subtropical high. The East Asian monsoon circulation weakens and precipitation stagnates over Indochina, resembling the observed southern-flood-northern-drought pattern over China. Large-scale circulation adjustments drive suppression of the South Asian monsoon and a westward extension of the Maritime Continent convective region. Remote impacts across the Northern Hemisphere are also generated, including a northwestward shift of West African monsoon rainfall induced by the westward displacement of the Indian Ocean Walker cell, and temperature anomalies in northern midlatitudes linked to propagation of Rossby waves from East Asia. These results indicate that aerosol emissions are a key source of uncertainty in near-term projection of regional and global climate; a careful examination of the uncertainties associated with aerosol pathways in future climate assessments must be highly prioritised.
Uncertainty in gridded CO2 emissions estimates
Hogue, Susannah; Marland, Eric; Andres, Robert J.; ...
2016-05-19
We are interested in the spatial distribution of fossil-fuel-related emissions of CO2 for both geochemical and geopolitical reasons, but it is important to understand the uncertainty that exists in spatially explicit emissions estimates. Working from one of the widely used gridded data sets of CO2 emissions, we examine the elements of uncertainty, focusing on gridded data for the United States at the scale of 1° latitude by 1° longitude. Uncertainty is introduced in the magnitude of total United States emissions, the magnitude and location of large point sources, the magnitude and distribution of non-point sources, and from the use of proxy data to characterize emissions. For the United States, we develop estimates of the contribution of each component of uncertainty. At 1° resolution, in most grid cells, the largest contribution to uncertainty comes from how well the distribution of the proxy (in this case population density) represents the distribution of emissions. In other grid cells, the magnitude and location of large point sources make the major contribution to uncertainty. Uncertainty in population density can be important where a large gradient in population density occurs near a grid cell boundary. Uncertainty is strongly scale-dependent, increasing as grid size decreases. In conclusion, uncertainty for our data set with 1° grid cells for the United States is typically on the order of ±150%, but this is perhaps not excessive in a data set where emissions per grid cell vary over 8 orders of magnitude.
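The quadrature combination of independent per-cell uncertainty components can be sketched as follows. The component percentages are illustrative, chosen only to show how a dominant proxy-mismatch term can drive a cell's total toward the ±150% order of magnitude reported above:

```python
import math

# Illustrative relative uncertainties (fractions) for one 1-degree grid cell
components = {
    "national total": 0.05,
    "large point sources": 0.40,
    "non-point distribution": 0.30,
    "proxy (population) mismatch": 1.40,
}

# Assuming independent components, relative uncertainties combine in quadrature
total = math.sqrt(sum(v**2 for v in components.values()))
print(f"combined relative uncertainty: {total:.0%}")  # dominated by the proxy term
```

Because the terms enter squared, the largest component (here the proxy mismatch) almost entirely determines the combined value, matching the finding that proxy representativeness dominates in most cells.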
A framework for modeling uncertainty in regional climate change
In this study, we present a new modeling framework and a large ensemble of climate projections to investigate the uncertainty in regional climate change over the United States associated with four dimensions of uncertainty. The sources of uncertainty considered in this framework ...
Wegge, Robin; McLinden, Mark O.; Perkins, Richard A.; Richter, Markus; Span, Roland
2016-01-01
The speed of sound of two (argon + carbon dioxide) mixtures was measured over the temperature range from (275 to 500) K with pressures up to 8 MPa utilizing a spherical acoustic resonator. The compositions of the gravimetrically prepared mixtures were (0.50104 and 0.74981) mole fraction carbon dioxide. The vibrational relaxation of pure carbon dioxide led to high sound absorption, which significantly impeded the sound-speed measurements on carbon dioxide and its mixtures; pre-condensation may have also affected the results for some measurements near the dew line. Thus, in contrast to the standard operating procedure for speed-of-sound measurements with a spherical resonator, non-radial resonances at lower frequencies were taken into account. Still, the data show a comparatively large scatter, and the usual repeatability of this general type of instrument could not be realized with the present measurements. Nonetheless, the average relative combined expanded uncertainty (k = 2) in speed of sound ranged from (0.042 to 0.056)% for both mixtures, with individual state-point uncertainties increasing to 0.1%. These uncertainties are adequate for our intended purpose of evaluating thermodynamic models. The results are compared to a Helmholtz energy equation of state for carbon capture and storage applications; relative deviations of (−0.64 to 0.08)% for the (0.49896 argon + 0.50104 carbon dioxide) mixture, and of (−1.52 to 0.77)% for the (0.25019 argon + 0.74981 carbon dioxide) mixture were observed. PMID:27458321
An end-to-end assessment of range uncertainty in proton therapy using animal tissues.
Zheng, Yuanshui; Kang, Yixiu; Zeidan, Omar; Schreuder, Niek
2016-11-21
Accurate assessment of range uncertainty is critical in proton therapy. However, there is a lack of data and consensus on how to evaluate the appropriate amount of uncertainty. The purpose of this study is to quantify the range uncertainty in various treatment conditions in proton therapy, using transmission measurements through various animal tissues. Animal tissues, including a pig head, beef steak, and lamb leg, were used in this study. For each tissue, an end-to-end test closely imitating patient treatments was performed. This included CT scan simulation, treatment planning, image-guided alignment, and beam delivery. Radio-chromic films were placed at various depths in the distal dose falloff region to measure depth dose. Comparisons between measured and calculated doses were used to evaluate range differences. The dose difference at the distal falloff between measurement and calculation depends on tissue type and treatment conditions. The estimated range difference was up to 5, 6 and 4 mm for the pig head, beef steak, and lamb leg irradiation, respectively. Our study shows that the TPS was able to calculate proton range within about 1.5% plus 1.5 mm. Accurate assessment of range uncertainty in treatment planning would allow better optimization of proton beam treatment, thus fully achieving proton beams' superior dose advantage over conventional photon-based radiation therapy.
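The "1.5% plus 1.5 mm" result quoted above has the common relative-plus-absolute form used for proton range margins; a minimal sketch of how such a margin scales with beam range (the function name is ours, the coefficients are the study's reported values):

```python
def range_margin_mm(range_mm, rel=0.015, abs_mm=1.5):
    """Range uncertainty margin of the 'X% + Y mm' form (1.5% + 1.5 mm here)."""
    return rel * range_mm + abs_mm

for r in (100.0, 200.0, 300.0):
    print(f"beam range {r:.0f} mm -> margin {range_margin_mm(r):.1f} mm")
```

For a 200 mm range this gives a 4.5 mm margin, consistent in magnitude with the 4-6 mm differences measured for the animal tissues.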
Agnan, Yannick; Le Dantec, Théo; Moore, Christopher W; Edwards, Grant C; Obrist, Daniel
2016-01-19
Despite 30 years of study, gaseous elemental mercury (Hg(0)) exchange magnitude and controls between terrestrial surfaces and the atmosphere still remain uncertain. We compiled data from 132 studies, including 1290 reported fluxes from more than 200,000 individual measurements, into a database to statistically examine flux magnitudes and controls. We found that fluxes were unevenly distributed, both spatially and temporally, with strong biases toward Hg-enriched sites, daytime and summertime measurements. Fluxes at Hg-enriched sites were positively correlated with substrate concentrations, but this was absent at background sites. Median fluxes over litter- and snow-covered soils were lower than over bare soils, and chamber measurements showed higher emission compared to micrometeorological measurements. Due to low spatial extent, estimated emissions from Hg-enriched areas (217 Mg a-1) were lower than previous estimates. Globally, areas with enhanced atmospheric Hg(0) levels (particularly East Asia) showed an emerging importance of Hg(0) emissions, accounting for half of the total global emissions estimated at 607 Mg a-1, although with a large uncertainty range (-513 to 1353 Mg a-1 [range of 37.5th and 62.5th percentiles]). The largest uncertainties in Hg(0) fluxes stem from forests (-513 to 1353 Mg a-1 [range of 37.5th and 62.5th percentiles]), largely driven by a shortage of whole-ecosystem fluxes and uncertain contributions of leaf-atmosphere exchanges, questioning to what degree ecosystems are net sinks or sources of atmospheric Hg(0).
NASA Astrophysics Data System (ADS)
Stello, Dennis; Chaplin, William J.; Bruntt, Hans; Creevey, Orlagh L.; García-Hernández, Antonio; Monteiro, Mario J. P. F. G.; Moya, Andrés; Quirion, Pierre-Olivier; Sousa, Sergio G.; Suárez, Juan-Carlos; Appourchaux, Thierry; Arentoft, Torben; Ballot, Jerome; Bedding, Timothy R.; Christensen-Dalsgaard, Jørgen; Elsworth, Yvonne; Fletcher, Stephen T.; García, Rafael A.; Houdek, Günter; Jiménez-Reyes, Sebastian J.; Kjeldsen, Hans; New, Roger; Régulo, Clara; Salabert, David; Toutain, Thierry
2009-08-01
For distant stars, as observed by the NASA Kepler satellite, parallax information is currently of fairly low quality and is not complete. This limits the precision with which the absolute sizes of the stars and their potential transiting planets can be determined by traditional methods. Asteroseismology will be used to aid the radius determination of stars observed during NASA's Kepler mission. We report on the recent asteroFLAG hare-and-hounds Exercise #2, where a group of "hares" simulated data of F-K main-sequence stars that a group of "hounds" sought to analyze, aimed at determining the stellar radii. We investigated stars in the range 9 < V < 15, both with and without parallaxes. We further test different uncertainties in Teff, and compare results with and without using asteroseismic constraints. Based on the asteroseismic large frequency spacing, obtained from simulations of 4 yr time series data from the Kepler mission, we demonstrate that the stellar radii can be correctly and precisely determined when combined with traditional stellar parameters from the Kepler Input Catalogue. The radii found by the various methods used by each independent hound generally agree with the true values of the artificial stars to within 3% when the large frequency spacing is used. This is 5-10 times better than the results where seismology is not applied. These results give strong confidence that radius estimation can be performed to better than 3% for solar-like stars using automatic pipeline reduction. Even when the stellar distance and luminosity are unknown we can obtain the same level of agreement. Given the uncertainties used for this exercise we find that the input log g and parallax do not help to constrain the radius, and that Teff and metallicity are the only parameters we need in addition to the large frequency spacing. It is the uncertainty in the metallicity that dominates the uncertainty in the radius.
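The link between the large frequency spacing and stellar radius rests on the standard density scaling Δν ∝ √(M/R³); a minimal sketch, assuming the commonly used solar reference value of 135.1 μHz and noting that the mass must come from other constraints (such as the Teff- and metallicity-based modelling described above):

```python
DNU_SUN = 135.1  # solar large frequency spacing in microhertz (commonly used value)

def radius_from_dnu(dnu_uhz, mass_msun):
    """Stellar radius (solar radii) from the density scaling dnu ~ sqrt(M / R^3)."""
    return (mass_msun / (dnu_uhz / DNU_SUN) ** 2) ** (1.0 / 3.0)

print(radius_from_dnu(135.1, 1.0))  # solar check -> 1.0
print(radius_from_dnu(70.0, 1.0))   # lower spacing -> larger, less dense star
```

The cube-root dependence is why a few-percent radius precision is achievable: fractional errors in the mass estimate enter the radius only at one third of their size.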
NASA Astrophysics Data System (ADS)
Habib, Gazala; Venkataraman, Chandra; Shrivastava, Manish; Banerjee, Rangan; Stehr, J. W.; Dickerson, Russell R.
2004-09-01
The dominance of biofuel combustion emissions in the Indian region, and the inherently large uncertainty in biofuel use estimates based on cooking energy surveys, prompted the current work, which develops a new methodology for estimating biofuel consumption for cooking. It is based on food consumption statistics and the specific energy for food cooking. Estimated biofuel consumption in India was 379 (247-584) Tg yr-1. New information on the user population of different biofuels was compiled at a state level, to derive the biofuel mix, which varied regionally and was 74:16:10%, respectively, of fuelwood, dung cake and crop waste, at a national level. Importantly, the uncertainty in biofuel use from quantitative error assessment using the new methodology is around 50%, giving a narrower bound than in previous works. From this new activity data and currently used black carbon emission factors, the black carbon (BC) emissions from biofuel combustion were estimated as 220 (65-760) Gg yr-1. The largest BC emissions were from fuelwood (75%), with lower contributions from dung cake (16%) and crop waste (9%). The uncertainty of 245% in the BC emissions estimate is now governed by the large spread in BC emission factors from biofuel combustion (122%), implying the need for reducing this uncertainty through measurements. Emission factors of SO2 from combustion of biofuels widely used in India were measured, and ranged from 0.03 to 0.08 g kg-1 for two wood species, from 0.05 to 0.20 g kg-1 for 10 crop waste types, and 0.88 g kg-1 for dung cake, significantly lower than currently used emission factors for wood and crop waste. Estimated SO2 emissions from biofuels of 75 (36-160) Gg yr-1 were about a factor of 3 lower than those in recent studies, with a large contribution from dung cake (73%), followed by fuelwood (21%) and crop waste (6%).
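The structure of such an estimate is emissions = activity × emission factor, with the quoted central value and bounds obtained by propagating log-normal uncertainties through a Monte Carlo. The sketch below uses made-up central values and uncertainty factors that are only loosely consistent with the abstract, not the paper's actual inputs.

```python
import math
import random

random.seed(0)

# Illustrative central values with 95% uncertainty factors (assumed numbers,
# not the paper's inputs): activity A in Tg/yr, BC emission factor EF in g/kg,
# so each product A * EF is in Gg/yr.
FUELS = {
    "fuelwood":   {"A": 281.0, "fA": 1.5, "EF": 0.6, "fEF": 2.2},
    "dung_cake":  {"A": 62.0,  "fA": 1.5, "EF": 0.6, "fEF": 2.2},
    "crop_waste": {"A": 36.0,  "fA": 1.5, "EF": 0.6, "fEF": 2.2},
}

def sample_lognormal(central, factor95):
    """Draw log-normally so the 2.5th/97.5th percentiles fall at
    central/factor95 and central*factor95."""
    sigma = math.log(factor95) / 1.96
    return central * math.exp(random.gauss(0.0, sigma))

def total_bc_draws(n=20000):
    """Monte Carlo distribution of total BC emissions, E = sum_i A_i * EF_i."""
    return sorted(
        sum(sample_lognormal(f["A"], f["fA"]) * sample_lognormal(f["EF"], f["fEF"])
            for f in FUELS.values())
        for _ in range(n))

draws = total_bc_draws()
low = draws[int(0.025 * len(draws))]
mid = draws[len(draws) // 2]
high = draws[int(0.975 * len(draws))]
# The (low, mid, high) triple mirrors the "220 (65-760) Gg/yr" reporting style.
```

Because the emission-factor spread dominates the activity spread, tightening the EF measurements narrows the total uncertainty far more than improving the fuel-use survey would.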
Uncertainty in flood damage estimates and its potential effect on investment decisions
NASA Astrophysics Data System (ADS)
Wagenaar, Dennis; de Bruijn, Karin; Bouwer, Laurens; de Moel, Hans
2015-04-01
This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage models can lead to large uncertainties in flood damage estimates, and it quantifies this uncertainty with a Monte Carlo analysis based on a damage function library containing 272 functions from 7 different flood damage models. The resulting uncertainties are on the order of a factor of 2 to 5, and are typically larger for small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economically optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investments.
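The Monte Carlo idea can be sketched with a toy stand-in for the damage function library: sample a damage function at random per iteration and read off percentile bounds. The functions below are invented illustrations, not the 272 functions from the 7 models used in the paper.

```python
import random

random.seed(1)

# Toy damage function library: each function maps water depth (m) to a damage
# fraction of the maximum building value.  Real library entries differ mainly
# in the depth at which damage saturates and in curvature.
def make_damage_fn(depth_at_full, exponent):
    return lambda depth: min(1.0, (depth / depth_at_full) ** exponent)

LIBRARY = [make_damage_fn(d, e)
           for d in (2.0, 3.0, 4.0, 5.0)
           for e in (0.5, 1.0, 1.5)]

def damage_bounds(depth, max_value, n=10000):
    """Monte Carlo over the library: each iteration draws one damage function
    at random, mimicking uncertainty about the models' implicit assumptions;
    returns the 5th and 95th percentile damage estimates."""
    draws = sorted(random.choice(LIBRARY)(depth) * max_value for _ in range(n))
    return draws[int(0.05 * n)], draws[int(0.95 * n)]

low, high = damage_bounds(depth=0.5, max_value=200_000)
# At shallow depths the high/low ratio easily reaches (and for this toy
# library exceeds) the factor 2-5 spread reported in the paper.
```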
Uncertainty in flood damage estimates and its potential effect on investment decisions
NASA Astrophysics Data System (ADS)
Wagenaar, D. J.; de Bruijn, K. M.; Bouwer, L. M.; De Moel, H.
2015-01-01
This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage models can lead to large uncertainties in flood damage estimates. This explanation is used to quantify this uncertainty with a Monte Carlo analysis. As input, the Monte Carlo analysis uses a damage function library with 272 functions from 7 different flood damage models. This results in uncertainties on the order of a factor of 2 to 5. The resulting uncertainty is typically larger for small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economically optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investments.
Functional variability of habitats within the Sacramento-San Joaquin Delta: Restoration implications
Lucas, L.V.; Cloern, J.E.; Thompson, J.K.; Monsen, N.E.
2002-01-01
We have now entered an era of large-scale attempts to restore ecological functions and biological communities in impaired ecosystems. Our knowledge base of complex ecosystems and interrelated functions is limited, so the outcomes of specific restoration actions are highly uncertain. One approach for exploring that uncertainty and anticipating the range of possible restoration outcomes is comparative study of existing habitats similar to future habitats slated for construction. Here we compare two examples of one habitat type targeted for restoration in the Sacramento-San Joaquin River Delta. We compare one critical ecological function provided by these shallow tidal habitats - production and distribution of phytoplankton biomass as the food supply to pelagic consumers. We measured spatial and short-term temporal variability of phytoplankton biomass and growth rate and quantified the hydrodynamic and biological processes governing that variability. Results show that the production and distribution of phytoplankton biomass can be highly variable within and between nearby habitats of the same type, due to variations in phytoplankton sources, sinks, and transport. Therefore, superficially similar, geographically proximate habitats can function very differently, and that functional variability introduces large uncertainties into the restoration process. Comparative study of existing habitats is one way ecosystem science can elucidate and potentially minimize restoration uncertainties, by identifying processes shaping habitat functionality, including those that can be controlled in the restoration design.
Calibration of Radiation Thermometers up to : Effective Emissivity of the Source
NASA Astrophysics Data System (ADS)
Kozlova, O.; Briaudeau, S.; Rongione, L.; Bourson, F.; Guimier, S.; Kosmalski, S.; Sadli, M.
2015-08-01
The growing demand of industry for traceable temperature measurements up to encourages improvement of calibration techniques for industrial-type radiation thermometers in this temperature range. High-temperature fixed points can be used at such high temperatures, but due to the small aperture diameter of their cavities (3 mm), they are not adapted to the large fields of view commonly featured by this kind of radiation thermometer. At LNE-Cnam, a Thermo Gauge furnace with a 25.4 mm source aperture diameter is used as a comparison source to calibrate customers' instruments against a reference radiation thermometer calibrated according to the ITS-90 with the lowest uncertainties achievable in the laboratory. However, the furnace blackbody radiator exhibits a large temperature gradient that degrades its effective emissivity and increases the calibration uncertainty, owing to the lack of information on the working spectral band of the industrial radiation thermometer. In order to estimate the corrections to apply, the temperature distribution (radial and on-axis) of the Thermo Gauge furnace blackbody radiator was characterized and the effective emissivity of the Thermo Gauge cavity was determined by three different methods. As a result of this investigation, the corrections due to the different fields of view and the different spectral bands of the reference pyrometer and the customer's pyrometer were obtained, and the uncertainties on these corrections were evaluated.
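As a minimal illustration of why a small aperture raises effective emissivity, the textbook closed form for an isothermal, diffusely reflecting spherical cavity is sketched below. The actual Thermo Gauge radiator is cylindrical and non-isothermal, so this formula is only a qualitative guide, not one of the three methods used in the paper.

```python
def eff_emissivity_sphere(eps_wall, f_aperture):
    """Effective emissivity of an isothermal diffuse spherical cavity with
    wall emissivity eps_wall and aperture fraction f = A_aperture / A_sphere.
    Limits: f -> 0 gives a blackbody (eps_eff -> 1); f -> 1 gives the bare
    wall (eps_eff -> eps_wall)."""
    return eps_wall / (eps_wall * (1.0 - f_aperture) + f_aperture)

# A graphite-like wall (eps ~ 0.85) behind a small aperture is already very
# close to a blackbody, which is why cavity sources are preferred:
print(eff_emissivity_sphere(0.85, 0.01))
```

Temperature gradients along real cavity walls break the isothermal assumption, which is exactly why the paper characterizes the radial and on-axis temperature distribution before computing corrections.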
Carbon-climate-human interactions in an integrated human-Earth system model
NASA Astrophysics Data System (ADS)
Calvin, K. V.; Bond-Lamberty, B. P.; Jones, A. D.; Shi, X.
2016-12-01
The C4MIP and CMIP5 results highlighted large uncertainties in climate projections, driven to a large extent by limited understanding of the interactions between terrestrial carbon-cycle and climate feedbacks, and their associated uncertainties. These feedbacks are dominated by uncertainties in soil processes, disturbance dynamics, ecosystem response to climate change, agricultural productivity, and land-use change. This research addresses three questions: (1) how do terrestrial feedbacks vary across different levels of climate change, (2) what is the relative contribution of CO2 fertilization and climate change, and (3) how robust are the results across different models and methods? We used a coupled modeling framework that integrates an Integrated Assessment Model (modeling economic and energy activity) with an Earth System Model (modeling the natural Earth system) to examine how business-as-usual (RCP 8.5) climate change will affect ecosystem productivity, cropland extent, and other aspects of the human-Earth system. We find that higher levels of radiative forcing result in higher productivity growth, that increases in CO2 concentrations are the dominant contributors to that growth, and that our productivity increases fall in the middle of the range when compared to other CMIP5 models and the AgMIP models. These results emphasize the importance of examining both the anthropogenic and natural components of the Earth system, and their long-term interactive feedbacks.
NASA Astrophysics Data System (ADS)
Guimberteau, Matthieu; Ciais, Philippe; Ducharne, Agnès; Boisier, Juan Pablo; Dutra Aguiar, Ana Paula; Biemans, Hester; De Deurwaerder, Hannes; Galbraith, David; Kruijt, Bart; Langerwisch, Fanny; Poveda, German; Rammig, Anja; Andres Rodriguez, Daniel; Tejada, Graciela; Thonicke, Kirsten; Von Randow, Celso; Von Randow, Rita C. S.; Zhang, Ke; Verbeeck, Hans
2017-03-01
Deforestation in Amazonia is expected to decrease evapotranspiration (ET) and to increase soil moisture and river discharge under prevailing energy-limited conditions. The magnitude and sign of the response of ET to deforestation depend both on the magnitude and regional patterns of land-cover change (LCC), as well as on climate change and CO2 levels. Elevated CO2 decreases leaf-scale transpiration, but this effect could be offset by increased foliar area density. Using three regional LCC scenarios specifically established for the Brazilian and Bolivian Amazon, we investigate the impacts of climate change and deforestation on the surface hydrology of the Amazon Basin for this century, taking 2009 as a reference. For each LCC scenario, three land surface models (LSMs), LPJmL-DGVM, INLAND-DGVM and ORCHIDEE, are forced by bias-corrected climate simulated by three general circulation models (GCMs) of the IPCC 4th Assessment Report (AR4). On average over the Amazon Basin with no deforestation, the GCM results indicate a temperature increase of 3.3 °C by 2100, which drives up the evaporative demand, while precipitation increases by 8.5 %, with a large uncertainty across GCMs. In the case of no deforestation, we found that ET and runoff increase by 5.0 and 14 %, respectively. However, in south-east Amazonia, precipitation decreases by 10 % at the end of the dry season and the three LSMs produce a 6 % decrease of ET, a smaller decrease than that of precipitation, so that runoff decreases by 22 %. For instance, the minimum river discharge of the Rio Tapajós is reduced by 31 % in 2100. To study the additional effect of deforestation, we prescribed to the LSMs three contrasting LCC scenarios, with a forest decline ranging from 7 to 34 % over this century. All three scenarios partly offset the climate-induced increase of ET, and runoff increases over the entire Amazon.
In the south-east, however, deforestation amplifies the decrease of ET at the end of dry season, leading to a large increase of runoff (up to +27 % in the extreme deforestation case), offsetting the negative effect of climate change, thus balancing the decrease of low flows in the Rio Tapajós. These projections are associated with large uncertainties, which we attribute separately to the differences in LSMs, GCMs and to the uncertain range of deforestation. At the subcatchment scale, the uncertainty range on ET changes is shown to first depend on GCMs, while the uncertainty of runoff projections is predominantly induced by LSM structural differences. By contrast, we found that the uncertainty in both ET and runoff changes attributable to uncertain future deforestation is low.
NASA Astrophysics Data System (ADS)
Roobaert, Alizee; Laruelle, Goulven; Landschützer, Peter; Regnier, Pierre
2017-04-01
In lakes, rivers, estuaries and the ocean, the quantification of air-water CO2 exchange (FCO2) is still characterized by large uncertainties, partly due to the lack of agreement over the parameterization of the gas exchange velocity (k). Although the ocean is generally regarded as the best-constrained system because k is only controlled by the wind speed, numerous formulations are still currently used, leading to potentially large differences in FCO2. Here, a quantitative global spatial analysis of FCO2 is presented using several k-wind speed formulations in order to compare the effect of the choice of parameterization of k on FCO2. This analysis is performed at a 1 degree resolution using a sea surface pCO2 product generated with a two-step artificial neural network by Landschützer et al. (2015) over the 1991-2011 period. Four different global wind speed datasets (CCMP, ERA, NCEP 1 and NCEP 2) are also used to assess the effect of the choice of one wind speed product over another when calculating the global and regional oceanic FCO2. Results indicate that this choice of wind speed product leads to only small discrepancies globally (6 %), except with NCEP 2, which produces a more intense global FCO2 than the other wind products. Regionally, these differences are even more pronounced. For a given wind speed product, the choice of parameterization of k yields global FCO2 differences ranging from 7 % to 16 % depending on the wind product used. We also provide latitudinal profiles of FCO2 and its uncertainty, calculated over all combinations of the different k-relationships and the four wind speed products. Wind speeds >14 m s-1, which account for only 7 % of all observations, contribute disproportionately to the global oceanic FCO2 and, for this range of wind speeds, the uncertainty induced by the choice of formulation for k is maximum (~50 %).
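The flux is computed as F = k · K0 · (pCO2,sea − pCO2,air), so at a fixed wind speed the spread across k formulations maps directly onto the flux. The sketch below uses three widely cited quadratic fits as an illustration of that spread; they are not necessarily the exact set of formulations compared in the study, and the input values are invented.

```python
# Quadratic gas transfer velocity parameterizations k(U) in cm/hr at Schmidt
# number 660, with U the 10 m wind speed in m/s.  Coefficients are widely
# cited literature fits, used here only to illustrate the k-formulation spread.
K_FORMULATIONS = {
    "Wanninkhof_1992": lambda u: 0.31 * u ** 2,
    "Ho_2006":         lambda u: 0.266 * u ** 2,
    "Wanninkhof_2014": lambda u: 0.251 * u ** 2,
}

def fco2(k_cm_hr, k0_mol_L_atm, dpco2_uatm):
    """Bulk flux F = k * K0 * (pCO2_sea - pCO2_air) in mol m^-2 yr^-1.
    Negative dpCO2 (undersaturated ocean) gives a negative flux, i.e. uptake."""
    k_m_yr = k_cm_hr * 0.01 * 24.0 * 365.0             # cm/hr -> m/yr
    k0_mol_m3_atm = k0_mol_L_atm * 1000.0              # mol/(L atm) -> mol/(m^3 atm)
    return k_m_yr * k0_mol_m3_atm * dpco2_uatm * 1e-6  # uatm -> atm

u, k0, dp = 10.0, 0.035, -30.0  # illustrative mid-latitude values (assumed)
fluxes = {name: fco2(k(u), k0, dp) for name, k in K_FORMULATIONS.items()}
# Because all three k laws are quadratic in U, the flux ratio at fixed wind
# speed is wind-independent: 0.31/0.251, i.e. a spread of roughly 24 %.
```

Since the largest k values occur at high winds, the disproportionate contribution of winds above 14 m/s noted in the abstract follows directly from the quadratic (or steeper) dependence of k on U.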
NASA Astrophysics Data System (ADS)
Fakhari, Vahid; Choi, Seung-Bok; Cho, Chang-Hyun
2015-04-01
This work presents a new robust model reference adaptive control (MRAC) scheme for suppressing vehicle engine vibration using an electromagnetic active engine mount. Vibration isolation performance of the active mount with the robust controller is evaluated in the presence of large uncertainties. As a first step, an active mount with a linear solenoid actuator is prepared and its dynamic model is identified via experimental tests. Subsequently, a new robust MRAC based on the gradient method with σ-modification is designed by selecting a proper reference model. In designing the robust adaptive controller, structured (parametric) uncertainties in the stiffness of the passive part of the mount and in the damping ratio of the active part are considered to investigate the robustness of the proposed controller. Experimental and simulation results are presented to evaluate performance, focusing on the robustness of the controller in the face of large uncertainties. The results show that the proposed controller provides robust vibration control, achieving effective vibration isolation even in the presence of large uncertainties.
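The σ-modified gradient adaptation law can be sketched on a scalar first-order plant; all numbers below are illustrative and unrelated to the identified engine-mount model in the paper.

```python
# Gradient-based MRAC with sigma-modification on a scalar plant
# xdot = a*x + b*u (stable pole, b > 0 assumed known in sign), tracking the
# reference model xmdot = -am*xm + am*r.  Parameters are illustrative only.
def simulate(a=-1.0, b=2.0, am=4.0, gamma=5.0, sigma=0.01,
             r=1.0, dt=1e-3, steps=30000):
    x = xm = 0.0
    kx = kr = 0.0                  # adaptive feedback/feedforward gains
    for _ in range(steps):
        u = kx * x + kr * r
        e = x - xm                 # model-following error
        # Gradient update; the -gamma*sigma*k leakage term is the
        # sigma-modification that keeps the gains bounded under disturbances.
        kx += dt * (-gamma * e * x - gamma * sigma * kx)
        kr += dt * (-gamma * e * r - gamma * sigma * kr)
        x += dt * (a * x + b * u)            # plant (forward Euler)
        xm += dt * (-am * xm + am * r)       # reference model
    return x, xm, kx, kr

x, xm, kx, kr = simulate()
# Note: with a constant reference the regressor is not persistently exciting,
# so (kx, kr) need not converge to the matching gains kx* = -(a + am)/b and
# kr* = am/b; only the combined control kx*x + kr*r converges, and x tracks xm.
```

The leakage term trades a small steady-state tracking bias (proportional to σ) for boundedness of the gains, which is the robustness property the paper exploits under large parametric uncertainty.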
Uncertainty-accounting environmental policy and management of water systems.
Baresel, Christian; Destouni, Georgia
2007-05-15
Environmental policies for water quality and ecosystem management do not commonly require explicit stochastic accounts of uncertainty and risk associated with the quantification and prediction of waterborne pollutant loads and abatement effects. In this study, we formulate and investigate a possible environmental policy that does require an explicit stochastic uncertainty account. We compare both the environmental and economic resource allocation performance of such an uncertainty-accounting environmental policy with that of deterministic, risk-prone and risk-averse environmental policies under a range of different hypothetical, yet still possible, scenarios. The comparison indicates that a stochastic uncertainty-accounting policy may perform better than deterministic policies over a range of different scenarios. Even in the absence of reliable site-specific data, reported literature values appear to be useful for such a stochastic account of uncertainty.
Using demography and movement behavior to predict range expansion of the southern sea otter.
Tinker, M.T.; Doak, D.F.; Estes, J.A.
2008-01-01
In addition to forecasting population growth, basic demographic data combined with movement data provide a means for predicting rates of range expansion. Quantitative models of range expansion have rarely been applied to large vertebrates, although such tools could be useful for restoration and management of many threatened but recovering populations. Using the southern sea otter (Enhydra lutris nereis) as a case study, we utilized integro-difference equations in combination with a stage-structured projection matrix that incorporated spatial variation in dispersal and demography to make forecasts of population recovery and range recolonization. In addition to these basic predictions, we emphasize how to make these modeling predictions useful in a management context through the inclusion of parameter uncertainty and sensitivity analysis. Our models resulted in hind-cast (1989–2003) predictions of net population growth and range expansion that closely matched observed patterns. We next made projections of future range expansion and population growth, incorporating uncertainty in all model parameters, and explored the sensitivity of model predictions to variation in spatially explicit survival and dispersal rates. The predicted rate of southward range expansion (median = 5.2 km/yr) was sensitive to both dispersal and survival rates; elasticity analysis indicated that changes in adult survival would have the greatest potential effect on the rate of range expansion, while perturbation analysis showed that variation in subadult dispersal contributed most to variance in model predictions. Variation in survival and dispersal of females at the south end of the range contributed most of the variance in predicted southward range expansion. Our approach provides guidance for the acquisition of further data and a means of forecasting the consequence of specific management actions. Similar methods could aid in the management of other recovering populations.
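The mechanics of an integrodifference forecast can be shown in one dimension: grow the population in place, then redistribute it through a dispersal kernel, and track how fast the occupied front advances. The sketch below uses scalar logistic growth and a Gaussian kernel with invented parameters; the paper's model is stage-structured (a projection matrix) with spatially varying demography and dispersal.

```python
import math

# Minimal 1-D integrodifference model: n_{t+1}(x) = integral K(x - y) f(n_t(y)) dy
# with logistic growth f and a Gaussian dispersal kernel K.
def step(n, dx, r=0.5, capacity=1.0, disp_sd=3.0):
    grown = [ni + r * ni * (1.0 - ni / capacity) for ni in n]   # growth phase
    norm = 1.0 / (disp_sd * math.sqrt(2.0 * math.pi))
    out = []
    for i in range(len(n)):                                     # dispersal phase
        acc = 0.0
        for j, g in enumerate(grown):
            d = (i - j) * dx
            acc += norm * math.exp(-d * d / (2.0 * disp_sd ** 2)) * g * dx
        out.append(acc)
    return out

def front_position(n, thresh=0.05):
    """Right edge of the occupied range (index of last cell above threshold)."""
    return max(i for i, v in enumerate(n) if v > thresh)

dx = 1.0
n = [1.0 if i < 10 else 0.0 for i in range(120)]  # occupied core at the left
positions = []
for _ in range(20):
    n = step(n, dx)
    positions.append(front_position(n))
# The front advances at a roughly constant speed (cells per time step); that
# asymptotic speed is the quantity such models use to forecast range expansion.
```

Sensitivity of the front speed to the growth rate versus the kernel width is the 1-D analogue of the elasticity and perturbation analyses the paper performs on survival and dispersal rates.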
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebl, Jakob, E-mail: jakob.liebl@medaustron.at; Francis H. Burr Proton Therapy Center, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114; Department of Therapeutic Radiology and Oncology, Medical University of Graz, 8036 Graz
2014-09-15
Purpose: Proton radiotherapy allows radiation treatment delivery with high dose gradients. The nature of such dose distributions increases the influence of patient positioning uncertainties on their fidelity when compared to photon radiotherapy. The present work quantitatively analyzes the influence of setup uncertainties on proton range and dose distributions. Methods: Thirty-eight clinical passive scattering treatment fields for small lesions in the head were studied. Dose distributions for shifted and rotated patient positions were Monte Carlo-simulated. Proton range uncertainties at the 50%- and 90%-dose falloff position were calculated considering 18 arbitrary combinations of maximal patient position shifts and rotations for two patient positioning methods. Normal tissue complication probabilities (NTCPs), equivalent uniform doses (EUDs), and tumor control probabilities (TCPs) were studied for organs at risk (OARs) and target volumes of eight patients. Results: The authors identified a median 1σ proton range uncertainty at the 50%-dose falloff of 2.8 mm for anatomy-based patient positioning and 1.6 mm for fiducial-based patient positioning as well as 7.2 and 5.8 mm for the 90%-dose falloff position, respectively. These range uncertainties were correlated to heterogeneity indices (HIs) calculated for each treatment field (38% < R^2 < 50%). A NTCP increase of more than 10% (absolute) was observed for less than 2.9% (anatomy-based positioning) and 1.2% (fiducial-based positioning) of the studied OARs and patient shifts. For target volumes TCP decreases by more than 10% (absolute) occurred in less than 2.2% of the considered treatment scenarios for anatomy-based patient positioning and were nonexistent for fiducial-based patient positioning. EUD changes for target volumes were up to 35% (anatomy-based positioning) and 16% (fiducial-based positioning). 
Conclusions: The influence of patient positioning uncertainties on proton range in therapy of small lesions in the human brain as well as target and OAR dosimetry were studied. Observed range uncertainties were correlated with HIs. The clinical practice of using multiple fields with smeared compensators while avoiding distal OAR sparing is considered to be safe.
Liebl, Jakob; Paganetti, Harald; Zhu, Mingyao; Winey, Brian A.
2014-01-01
Purpose: Proton radiotherapy allows radiation treatment delivery with high dose gradients. The nature of such dose distributions increases the influence of patient positioning uncertainties on their fidelity when compared to photon radiotherapy. The present work quantitatively analyzes the influence of setup uncertainties on proton range and dose distributions. Methods: Thirty-eight clinical passive scattering treatment fields for small lesions in the head were studied. Dose distributions for shifted and rotated patient positions were Monte Carlo-simulated. Proton range uncertainties at the 50%- and 90%-dose falloff position were calculated considering 18 arbitrary combinations of maximal patient position shifts and rotations for two patient positioning methods. Normal tissue complication probabilities (NTCPs), equivalent uniform doses (EUDs), and tumor control probabilities (TCPs) were studied for organs at risk (OARs) and target volumes of eight patients. Results: The authors identified a median 1σ proton range uncertainty at the 50%-dose falloff of 2.8 mm for anatomy-based patient positioning and 1.6 mm for fiducial-based patient positioning as well as 7.2 and 5.8 mm for the 90%-dose falloff position, respectively. These range uncertainties were correlated to heterogeneity indices (HIs) calculated for each treatment field (38% < R2 < 50%). A NTCP increase of more than 10% (absolute) was observed for less than 2.9% (anatomy-based positioning) and 1.2% (fiducial-based positioning) of the studied OARs and patient shifts. For target volumes TCP decreases by more than 10% (absolute) occurred in less than 2.2% of the considered treatment scenarios for anatomy-based patient positioning and were nonexistent for fiducial-based patient positioning. EUD changes for target volumes were up to 35% (anatomy-based positioning) and 16% (fiducial-based positioning). 
Conclusions: The influence of patient positioning uncertainties on proton range in therapy of small lesions in the human brain as well as target and OAR dosimetry were studied. Observed range uncertainties were correlated with HIs. The clinical practice of using multiple fields with smeared compensators while avoiding distal OAR sparing is considered to be safe. PMID:25186386
Operationalising uncertainty in data and models for integrated water resources management.
Blind, M W; Refsgaard, J C
2007-01-01
Key sources of uncertainty of importance for water resources management are (1) uncertainty in data; (2) uncertainty related to hydrological models (parameter values, model technique, model structure); and (3) uncertainty related to the context and the framing of the decision-making process. The European funded project 'Harmonised techniques and representative river basin data for assessment and use of uncertainty information in integrated water management (HarmoniRiB)' has resulted in a range of tools and methods to assess such uncertainties, focusing on items (1) and (2). The project also engaged in a number of discussions surrounding uncertainty and risk assessment in support of decision-making in water management. Based on the project's results and experiences, and on the subsequent discussions, a number of conclusions can be drawn on the future needs for successful adoption of uncertainty analysis in decision support. These conclusions range from additional scientific research on specific uncertainties and dedicated guidelines for operational use to capacity building at all levels. The purpose of this paper is to elaborate on these conclusions and to anchor them in the broad objective of making uncertainty and risk assessment an essential and natural part of future decision-making processes.
NASA Astrophysics Data System (ADS)
Ritchie, W. J.; Dowlatabadi, H.
2017-12-01
Climate change modeling relies on projections of future greenhouse gas emissions and other phenomena leading to changes in planetary radiative forcing (RF). Pathways for long-run fossil energy use that map to total forcing outcomes are commonly depicted with integrated assessment models (IAMs). IAMs structure outlooks for 21st-century emissions with various theories for developments in demographics, economics, land-use, energy markets and energy service demands. These concepts are applied to understand global changes in two key factors relevant for scenarios of carbon emissions: total energy use (E) this century and the carbon intensity of that energy (F/E). A simple analytical and graphical approach can also illustrate the full range of outcomes for these variables to determine if IAMs provide sufficient coverage of the uncertainty space for future energy use. In this talk, we present a method for understanding uncertainties relevant to RF scenario components in a phase space. The phase space of a dynamic system represents significant factors as axes to capture the full range of physically possible states. A two-dimensional phase space of E and F/E presents the possible system states that can lead to various levels of total 21st-century carbon emissions. Once defined in this way, a phase space of these energy system coordinates allows for rapid characterization of large IAM scenario sets with machine learning techniques. This phase space method is applied to the levels of RF described by the Representative Concentration Pathways (RCPs). The resulting RCP phase space identifies characteristics of the baseline energy system outlooks provided by IAMs for IPCC Working Group III. We conduct a k-means cluster analysis to distinguish the major features of IAM scenarios for each RCP range. Cluster analysis finds the IAM scenarios in AR5 illustrate RCPs with consistent combinations of energy resources. 
This suggests that IAM scenarios understate the uncertainty range for future fossil energy combustion and are overly constrained, implying that a 1.5 °C climate policy goal is likely easier to achieve than previously demonstrated.
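The clustering step described above can be sketched with plain k-means on points in the (E, F/E) phase space. The scenario set below is synthetic, constructed to form two families; it is a stand-in for the AR5 IAM scenarios, not actual data from the talk.

```python
import random

random.seed(3)

# Synthetic scenario set in the (E, F/E) phase space: cumulative 21st-century
# primary energy E (ZJ) and mean carbon intensity F/E (kgC/GJ).  Made-up
# stand-ins for IAM baseline scenarios, chosen to form two families.
scenarios = (
    [(random.gauss(60.0, 5.0), random.gauss(25.0, 2.0)) for _ in range(40)]    # carbon-intensive
    + [(random.gauss(45.0, 5.0), random.gauss(14.0, 2.0)) for _ in range(40)]  # lower-carbon
)

def kmeans(points, k=2, iters=50):
    """Plain k-means: assign each point to its nearest centroid, then move
    each centroid to the mean of its cluster."""
    cents = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - cents[c][0]) ** 2
                                        + (p[1] - cents[c][1]) ** 2)
            clusters[nearest].append(p)
        cents = [(sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
                 if cl else cents[i]
                 for i, cl in enumerate(clusters)]
    return cents

centroids = sorted(kmeans(scenarios))  # sorted by E
# The recovered centroids characterize the scenario families, the kind of
# grouping used to summarize how IAMs populate each RCP range.
```

If the clusters occupy only a small region of the physically possible (E, F/E) phase space, that is exactly the under-coverage of the uncertainty space the talk argues for.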
Validation of the CrIS fast physical NH3 retrieval with ground-based FTIR
NASA Astrophysics Data System (ADS)
Dammers, Enrico; Shephard, Mark W.; Palm, Mathias; Cady-Pereira, Karen; Capps, Shannon; Lutsch, Erik; Strong, Kim; Hannigan, James W.; Ortega, Ivan; Toon, Geoffrey C.; Stremme, Wolfgang; Grutter, Michel; Jones, Nicholas; Smale, Dan; Siemons, Jacob; Hrpcek, Kevin; Tremblay, Denis; Schaap, Martijn; Notholt, Justus; Erisman, Jan Willem
2017-07-01
Presented here is the validation of the CrIS (Cross-track Infrared Sounder) fast physical NH3 retrieval (CFPR) column and profile measurements using ground-based Fourier transform infrared (FTIR) observations. We use the total columns and profiles from seven FTIR sites in the Network for the Detection of Atmospheric Composition Change (NDACC) to validate the satellite data products. The overall FTIR and CrIS total columns have a positive correlation of r = 0.77 (N = 218) with very little bias (a slope of 1.02). Binning the comparisons by total column amount, for concentrations larger than 1.0 × 10^16 molecules cm-2, i.e. ranging from moderate to polluted conditions, the relative difference is on average ~0-5 % with a standard deviation of 25-50 %, which is comparable to the estimated retrieval uncertainties in both CrIS and the FTIR. For the smallest total column range (< 1.0 × 10^16 molecules cm-2), where there are a large number of observations at or near the CrIS noise level (detection limit), the absolute differences between CrIS and the FTIR total columns show a slight positive column bias. The CrIS and FTIR profile comparison differences are mostly within the range of the single-level retrieved profile values from estimated retrieval uncertainties, showing average differences in the range of ~20 to 40 %. The CrIS retrievals typically show good vertical sensitivity down into the boundary layer, with sensitivity typically peaking at ~850 hPa (~1.5 km). At this level the median absolute difference is 0.87 (std = ±0.08) ppb, corresponding to a median relative difference of 39 % (std = ±2 %). Most of the absolute and relative profile comparison differences are within the range of the estimated retrieval uncertainties. At the surface, where CrIS typically has lower sensitivity, it tends to overestimate in low-concentration conditions and underestimate in higher-concentration conditions.
Preparing Teachers for Uncertainty.
ERIC Educational Resources Information Center
Floden, Robert E.; Clark, Christopher M.
An examination of the various ways in which teaching is uncertain and how uncertainty pervades teachers' lives points out that teachers face uncertainties in their instructional content, ranging from difficult concepts to a lack of clarity about how teaching might be improved. These forms of uncertainty undermine teachers' authority, creating situations…
NASA Technical Reports Server (NTRS)
Ackermann, M.; Ajello, M.; Albert, A.; Allafort, A.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.;
2012-01-01
The Fermi Large Area Telescope (Fermi-LAT, hereafter LAT), the primary instrument on the Fermi Gamma-ray Space Telescope (Fermi) mission, is an imaging, wide field-of-view, high-energy γ-ray telescope, covering the energy range from 20 MeV to more than 300 GeV. During the first years of the mission the LAT team has gained considerable insight into the in-flight performance of the instrument. Accordingly, we have updated the analysis used to reduce LAT data for public release as well as the Instrument Response Functions (IRFs), the description of the instrument performance provided for data analysis. In this paper we describe the effects that motivated these updates. Furthermore, we discuss how we originally derived IRFs from Monte Carlo simulations and later corrected those IRFs for discrepancies observed between flight and simulated data. We also give details of the validations performed using flight data and quantify the residual uncertainties in the IRFs. Finally, we describe techniques the LAT team has developed to propagate those uncertainties into estimates of the systematic errors on common measurements such as fluxes and spectra of astrophysical sources.
Observational uncertainty and regional climate model evaluation: A pan-European perspective
NASA Astrophysics Data System (ADS)
Kotlarski, Sven; Szabó, Péter; Herrera, Sixto; Räty, Olle; Keuler, Klaus; Soares, Pedro M.; Cardoso, Rita M.; Bosshard, Thomas; Pagé, Christian; Boberg, Fredrik; Gutiérrez, José M.; Jaczewski, Adam; Kreienkamp, Frank; Liniger, Mark A.; Lussana, Cristian; Szepszo, Gabriella
2017-04-01
Local and regional climate change assessments based on downscaling methods crucially depend on the existence of accurate and reliable observational reference data. In dynamical downscaling via regional climate models (RCMs), observational data can influence model development itself and, later on, model evaluation, parameter calibration and added value assessment. In empirical-statistical downscaling, observations serve as predictand data and directly influence model calibration, with corresponding effects on downscaled climate change projections. Focusing on the evaluation of RCMs, we here analyze the influence of uncertainties in observational reference data on evaluation results in a well-defined performance assessment framework and on a European scale. For this purpose we employ three different gridded observational reference datasets, namely (1) the well-established EOBS dataset, (2) the recently developed EURO4M-MESAN regional re-analysis, and (3) several national high-resolution and quality-controlled gridded datasets that recently became available. In terms of climate models, five reanalysis-driven experiments carried out by five different RCMs within the EURO-CORDEX framework are used. Two variables (temperature and precipitation) and a range of evaluation metrics that reflect different aspects of RCM performance are considered. We furthermore include an illustrative model ranking exercise and relate observational spread to RCM spread. The results obtained indicate a varying influence of observational uncertainty on model evaluation depending on the variable, the season, the region and the specific performance metric considered. Over most parts of the continent, the influence of the choice of the reference dataset for temperature is rather small for seasonal mean values and inter-annual variability. Here, model uncertainty (as measured by the spread between the five RCM simulations considered) is typically much larger than reference data uncertainty.
For parameters of the daily temperature distribution and for the spatial pattern correlation, however, important dependencies on the reference dataset can arise. The related evaluation uncertainties can be as large as or even larger than model uncertainty. For precipitation, the influence of observational uncertainty is, in general, larger than for temperature. It often dominates model uncertainty, especially for the evaluation of the wet day frequency, the spatial correlation and the shape and location of the distribution of daily values. Even the evaluation of large-scale seasonal mean values can be considerably affected by the choice of the reference. When a simple and illustrative model ranking scheme is applied to these results, it is found that the RCM ranking in many cases depends on the choice of reference dataset.
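The ranking exercise described above can be sketched in a few lines (a minimal stand-in assuming mean absolute error as the performance metric; the model names and grids are invented). The point it illustrates is that the ranking of models can flip depending on which reference dataset is used:

```python
import numpy as np

def rank_models(model_fields, reference_field):
    """Rank models by mean absolute error (MAE) against a reference grid.

    model_fields: dict of model name -> 1-D array of grid-cell values
    reference_field: 1-D array on the same grid
    Returns model names ordered best (lowest MAE) to worst.
    """
    mae = {name: float(np.mean(np.abs(field - reference_field)))
           for name, field in model_fields.items()}
    return sorted(mae, key=mae.get)

# Two toy RCMs evaluated against two plausible reference datasets.
models = {"RCM-A": np.array([1.0, 2.0, 3.0]),
          "RCM-B": np.array([1.2, 2.2, 2.8])}
ref1 = np.array([1.0, 2.0, 3.0])   # e.g. a continental-scale gridded product
ref2 = np.array([1.3, 2.3, 2.7])   # e.g. a national high-resolution grid
print(rank_models(models, ref1))   # ranking under reference 1
print(rank_models(models, ref2))   # ranking can flip under reference 2
```
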
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmiotti, Giuseppe; Salvatores, Massimo; Hursin, Mathieu
2016-11-01
A critical examination of the role of uncertainty assessment, target accuracies, and integral experiments for validation, and consequently of data adjustment methods, has been underway for several years at the OECD-NEA. The objective is to provide criteria and practical approaches to effectively use the results of sensitivity analyses and cross section adjustments as feedback to evaluators and experimentalists, in order to improve without ambiguity the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications and to meet new requirements and constraints for innovative reactor and fuel cycle system design. An approach is described that expands as much as possible the use in the adjustment procedure of selected integral experiments that provide information on “elementary” phenomena, on separated individual physics effects related to specific isotopes, or on specific energy ranges. An application to a large experimental database has been performed, and the results are discussed in the perspective of new evaluation projects such as the CIELO initiative.
NASA Technical Reports Server (NTRS)
Morton, Douglas; Souza, Carlos, Jr.; Souza, Carlos, Jr.; Keller, Michael
2012-01-01
Large-scale tropical forest monitoring efforts in support of REDD+ (Reducing Emissions from Deforestation and forest Degradation plus enhancing forest carbon stocks) confront a range of challenges. REDD+ activities typically have short reporting time scales, diverse data needs, and low tolerance for uncertainties. Meeting these challenges will require innovative use of remote sensing data, including integrating data at different spatial and temporal resolutions. The global scientific community is engaged in developing, evaluating, and applying new methods for regional to global scale forest monitoring. Pilot REDD+ activities are underway across the tropics with support from a range of national and international groups, including SilvaCarbon, an interagency effort to coordinate US expertise on forest monitoring and resource management. Early actions on REDD+ have exposed some of the inherent tradeoffs that arise from the use of incomplete or inaccurate data to quantify forest area changes and related carbon emissions. Here, we summarize recent advances in forest monitoring to identify and target the main sources of uncertainty in estimates of forest area changes, aboveground carbon stocks, and Amazon forest carbon emissions.
NASA Astrophysics Data System (ADS)
Palmiotti, Giuseppe; Salvatores, Massimo; Hursin, Mathieu; Kodeli, Ivo; Gabrielli, Fabrizio; Hummel, Andrew
2017-09-01
A critical examination of the role of uncertainty assessment, target accuracies, and integral experiments for validation, and consequently of data adjustment methods, has been underway for several years at the OECD-NEA. The objective is to provide criteria and practical approaches to effectively use the results of sensitivity analyses and cross section adjustments as feedback to evaluators and experimentalists, in order to improve without ambiguity the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications and to meet new requirements and constraints for innovative reactor and fuel cycle system design. An approach is described that expands as much as possible the use in the adjustment procedure of selected integral experiments that provide information on "elementary" phenomena, on separated individual physics effects related to specific isotopes, or on specific energy ranges. An application to a large experimental database has been performed, and the results are discussed in the perspective of new evaluation projects such as the CIELO initiative.
Moral Distress in PICU and Neonatal ICU Practitioners: A Cross-Sectional Evaluation.
Larson, Charles Philip; Dryden-Palmer, Karen D; Gibbons, Cathy; Parshuram, Christopher S
2017-08-01
To measure the level of moral distress in PICU and neonatal ICU health practitioners, and to describe the relationship of moral distress with demographic factors, burnout, and uncertainty. Cross-sectional survey. A large pediatric tertiary care center. Neonatal ICU and PICU health practitioners with at least 3 months of ICU experience. A 41-item questionnaire examining moral distress, burnout, and uncertainty. The main outcome was moral distress measured with the Revised Moral Distress Scale. Secondary outcomes were the frequency and intensity subscores of the Revised Moral Distress Scale, burnout measured with the Maslach Burnout Inventory depersonalization subscale, and uncertainty measured with questions adapted from Mishel's Parent Perception of Uncertainty Scale. Linear regression models were used to examine associations between participant characteristics and the measures of moral distress, burnout, and uncertainty. Two hundred six analyzable surveys were returned. The median Revised Moral Distress Scale score was 96.5 (interquartile range, 69-133), and 58% of respondents reported significant work-related moral distress. Revised Moral Distress Scale items involving end-of-life care and communication scored highest. Moral distress was positively associated with burnout (r = 0.27; p < 0.001) and uncertainty (r = 0.04; p = 0.008) and inversely associated with perceived hospital supportiveness (r = 0.18; p < 0.001). Nurses reported higher moral distress intensity than physicians (Revised Moral Distress Scale intensity subscores: 57.3 vs 44.7; p = 0.002). In nurses only, moral distress was positively associated with increasing years of ICU experience (p = 0.02) and uncertainty about whether their care was of benefit (r = 0.11; p < 0.001) and inversely associated with uncertainty about a child's prognosis (r = 0.03; p = 0.03).
In this single-center, cross-sectional study, we found that moral distress is present in PICU and neonatal ICU health practitioners and is correlated with burnout, uncertainty, and feeling unsupported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, L; Soldner, A; Kirk, M
Purpose: The beam range uncertainty presents a special challenge for proton therapy. Novel technologies currently under development offer strategies to reduce the range uncertainty [1,2]. This work quantifies the potential advantages that could be realized by such a reduction for dosimetrically challenging chordomas at the base of skull. Therapeutic improvement was assessed by evaluating tumor control probabilities (TCP) and normal tissue complication probabilities (NTCP). Methods: Treatment plans were made for a modulated-scanned proton delivery technique using the Eclipse treatment planning system. The prescription dose was 7920 cGy to the CTV. Three different range uncertainty scenarios were considered: 5 mm (3.5% of the beam range + 1 mm, representing current clinical practice, "Curr"), 2 mm (1.3%), and 1 mm (0.7%). For each of 4 patients, 3 different PTVs were defined via uniform expansion of the CTV by the value of the range uncertainty. Tumor control probability (TCP) and normal tissue complication probabilities (NTCPs) for organs-at-risk (OARs) were calculated using the Lyman-Kutcher-Burman formalism [3] and published model parameters (Terahara [4], the QUANTEC reports, and Burman et al.). Our plan optimization strategy was to achieve PTV coverage close to prescription while maintaining OAR NTCP values at or better than those of the Curr plan. Results: The average TCP values for the 5, 2, and 1 mm range uncertainty scenarios are 51%, 55% and 65%. The improvement in TCP for patients was between 4 and 30%, depending primarily on the proximity of the GTV to OARs. The average NTCPs for the brainstem and cord were about 4% and 1%, respectively, for all target margins. Conclusion: For base of skull chordomas, reduced target margins can substantially increase the TCP without increasing the NTCP. This work demonstrates the potential significance of a reduction in the range uncertainty for proton beams.
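The Lyman-Kutcher-Burman formalism used above reduces a dose-volume histogram to a generalized equivalent uniform dose (gEUD) and maps it through a probit function. A minimal sketch follows; the `gEUD` helper and all parameter values are illustrative assumptions, not the study's clinical data:

```python
import math

def gEUD(doses, volumes, n):
    """Generalized equivalent uniform dose for a (dose, fractional-volume) DVH."""
    return sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n

def lkb_ntcp(doses, volumes, TD50, m, n):
    """LKB NTCP: probit of the gEUD-reduced dose relative to TD50."""
    t = (gEUD(doses, volumes, n) - TD50) / (m * TD50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Illustrative (non-clinical) DVH and parameters for a brainstem-like organ.
dvh_dose = [50.0, 55.0, 60.0]   # Gy
dvh_vol = [0.5, 0.3, 0.2]       # fractional volumes, summing to 1
p = lkb_ntcp(dvh_dose, dvh_vol, TD50=65.0, m=0.14, n=0.16)
```

A sanity check on the model: a uniform dose exactly at TD50 yields an NTCP of 0.5 by construction.
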
Affective decision making under uncertainty during a plausible aviation task: an fMRI study.
Causse, Mickaël; Péran, Patrice; Dehais, Frédéric; Caravasso, Chiara Falletta; Zeffiro, Thomas; Sabatini, Umberto; Pastor, Josette
2013-05-01
In aeronautics, plan continuation error (PCE) represents failure to revise a flight plan despite emerging evidence suggesting that it is no longer safe. Assuming that PCE may be associated with a shift from cold to hot reasoning, we hypothesized that this transition may result from a large range of strong negative emotional influences linked with the decision to abort a landing and circle for a repeat attempt, referred to as a "go-around". We investigated this hypothesis by combining functional neuroimaging with an ecologically valid aviation task performed under contextual variation in incentive and situational uncertainty. Our goal was to identify regional brain activity related to the sorts of conservative or liberal decision-making strategies engaged when participants were both exposed to a financial payoff matrix constructed to bias responses in favor of landing acceptance, while they were simultaneously experiencing maximum levels of uncertainty related to high levels of stimulus ambiguity. Combined with the observed behavioral outcomes, our neuroimaging results revealed a shift from cold to hot decision making in response to high uncertainty when participants were exposed to the financial incentive. Most notably, while we observed activity increases in response to uncertainty in many frontal regions such as dorsolateral prefrontal cortex (DLPFC) and anterior cingulate cortex (ACC), less overall activity was observed when the reward was combined with uncertainty. Moreover, participants with poor decision making, quantified as a lower discriminability index d', exhibited riskier behavior coupled with lower activity in the right DLPFC. These outcomes suggest a disruptive effect of biased financial incentive and high uncertainty on the rational decision-making neural network, and consequently, on decision relevance. Copyright © 2013 Elsevier Inc. All rights reserved.
GCR Environmental Models III: GCR Model Validation and Propagated Uncertainties in Effective Dose
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Xu, Xiaojing; Blattnig, Steve R.; Norman, Ryan B.
2014-01-01
This is the last of three papers focused on quantifying the uncertainty associated with galactic cosmic ray (GCR) models used for space radiation shielding applications. In the first paper, it was found that GCR ions with Z > 2 and boundary energy below 500 MeV/nucleon induce less than 5% of the total effective dose behind shielding. This is an important finding since GCR model development and validation have been heavily biased toward Advanced Composition Explorer/Cosmic Ray Isotope Spectrometer measurements below 500 MeV/nucleon. Weights were also developed that quantify the relative contribution of defined GCR energy and charge groups to effective dose behind shielding. In the second paper, it was shown that these weights could be used to efficiently propagate GCR model uncertainties into effective dose behind shielding. In this work, uncertainties are quantified for a few commonly used GCR models. A validation metric is developed that accounts for measurement uncertainty, and the metric is coupled to the fast uncertainty propagation method. For this work, the Badhwar-O'Neill (BON) 2010 and 2011 models and the Matthia GCR model are compared to an extensive measurement database. It is shown that BON2011 systematically overestimates heavy ion fluxes in the range 0.5-4 GeV/nucleon. BON2010 and BON2011 also show moderate and large errors in reproducing past solar activity near the 2000 solar maximum and 2010 solar minimum. It is found that all three models induce relative errors in effective dose in the interval [-20%, 20%] at a 68% confidence level. The BON2010 and Matthia models are found to have similar overall uncertainty estimates and are preferred for space radiation shielding applications.
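The weight-based fast propagation described above can be sketched as a first-order weighted sum: each (charge, energy) group's relative flux error contributes to effective-dose error in proportion to its precomputed weight. The weights and per-group errors below are invented, not the papers' values:

```python
import numpy as np

def propagate_relative_error(weights, flux_rel_errors):
    """Propagate per-group GCR flux errors into effective dose.

    weights: contribution of each (charge, energy) group to effective
             dose behind shielding (should sum to 1)
    flux_rel_errors: relative model-vs-measurement flux error per group
    Returns the induced relative error in effective dose.
    """
    weights = np.asarray(weights, dtype=float)
    errors = np.asarray(flux_rel_errors, dtype=float)
    return float(np.sum(weights * errors))

# Toy groups: heavy ions at 0.5-4 GeV/nucleon dominate the weighted sum,
# while low-energy groups (< 500 MeV/nucleon) contribute little.
w = [0.05, 0.60, 0.35]    # invented group weights
e = [0.10, 0.15, -0.05]   # invented per-group relative flux errors
dose_err = propagate_relative_error(w, e)
```
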
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aristophanous, M; Court, L
Purpose: Despite daily image guidance, setup uncertainties can be high when treating large areas of the body. The aim of this study was to measure local uncertainties inside the PTV for patients receiving IMRT to the mediastinum region. Methods: Eleven lymphoma patients who received radiotherapy (breath-hold) to the mediastinum were included in this study. The treated region could range all the way from the neck to the diaphragm. Each patient had a CT scan with a CT-on-rails system prior to every treatment. The entire PTV region was matched to the planning CT using automatic rigid registration. The PTV was then split into 5 regions: neck, supraclavicular, superior mediastinum, upper heart, lower heart. Additional auto-registrations for each of the 5 local PTV regions were performed. The residual local setup errors were calculated as the difference between the final global PTV position and the individual final local PTV positions for the AP, SI and RL directions. For each patient 4 CT scans were analyzed (1 per week of treatment). Results: The residual mean group error (M) and standard deviation of the inter-patient (or systematic) error (Σ) were lowest in the RL direction of the superior mediastinum (0.0 mm and 0.5 mm) and highest in the RL direction of the lower heart (3.5 mm and 2.9 mm). The standard deviation of the inter-fraction (or random) error (σ) was lowest in the RL direction of the superior mediastinum (0.5 mm) and highest in the SI direction of the lower heart (3.9 mm). The directionality of local uncertainties is important; a superior residual error in the lower heart, for example, keeps it in the global PTV. Conclusion: There is a complex relationship between breath-holding and positioning uncertainties that needs further investigation. Residual setup uncertainties can be significant even under daily CT image guidance when treating large regions of the body.
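The group statistics reported above (mean error M, systematic SD Σ, random SD σ) follow the standard decomposition of per-fraction residual errors into between-patient and within-patient components; a minimal sketch with invented inputs:

```python
import numpy as np

def setup_error_stats(errors_per_patient):
    """Population setup-error statistics from per-fraction residual errors.

    errors_per_patient: list of 1-D arrays, one array of residual
    errors (mm, one direction) per patient.
    Returns (M, Sigma, sigma): group mean, systematic SD (spread of the
    patient means), and random SD (RMS of the within-patient SDs).
    """
    means = np.array([np.mean(e) for e in errors_per_patient])
    sds = np.array([np.std(e, ddof=1) for e in errors_per_patient])
    M = float(np.mean(means))
    Sigma = float(np.std(means, ddof=1))
    sigma = float(np.sqrt(np.mean(sds ** 2)))
    return M, Sigma, sigma

# Two invented patients, four weekly scans each (residuals in mm).
demo = [np.array([0.5, 1.5, 1.0, 1.0]), np.array([3.0, 2.0, 2.5, 2.5])]
M, Sigma, sigma = setup_error_stats(demo)
```
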
Signal detection in global mean temperatures after "Paris": an uncertainty and sensitivity analysis
NASA Astrophysics Data System (ADS)
Visser, Hans; Dangendorf, Sönke; van Vuuren, Detlef P.; Bregman, Bram; Petersen, Arthur C.
2018-02-01
In December 2015, 195 countries agreed in Paris to hold the increase in global mean surface temperature (GMST) well below 2.0 °C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5 °C. Since large financial flows will be needed to keep GMSTs below these targets, it is important to know how GMST has progressed since pre-industrial times. However, the Paris Agreement is not conclusive as regards methods to calculate it. Should trend progression be deduced from GCM simulations or from instrumental records by (statistical) trend methods? Which simulations or GMST datasets should be chosen, and which trend models? What is "pre-industrial", and, finally, are the Paris targets formulated for total warming, originating from both natural and anthropogenic forcing, or do they refer to anthropogenic warming only? To find answers to these questions we performed an uncertainty and sensitivity analysis in which datasets and model choices were varied. For all cases we evaluated trend progression along with uncertainty information. To do so, we analysed four trend approaches and applied these to the five leading observational GMST products. We find GMST progression to be largely independent of the trend model approach. However, GMST progression is significantly influenced by the choice of GMST dataset. Uncertainties due to natural variability are largest in size. As a parallel path, we calculated GMST progression from an ensemble of 42 GCM simulations. Mean progression derived from GCM-based GMSTs appears to lie within the range of trend-dataset combinations. A difference between the two approaches is the width of the uncertainty bands: GCM simulations show a much wider spread. Finally, we discuss various choices for pre-industrial baselines and the role of warming definitions. Based on these findings we propose an estimate for signal progression in GMSTs since pre-industrial times.
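One of the simplest statistical trend methods of the kind compared above, an OLS trend with its standard error, can be sketched generically (this is not the authors' implementation, and the data below are synthetic):

```python
import numpy as np

def linear_trend(years, gmst):
    """OLS trend of GMST in °C per decade, with its standard error."""
    x = np.asarray(years, dtype=float)
    y = np.asarray(gmst, dtype=float)
    X = np.column_stack([np.ones_like(x), x])        # design matrix [1, year]
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = float(resid @ resid) / (len(x) - 2)          # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)                 # parameter covariance
    return 10.0 * beta[1], 10.0 * float(np.sqrt(cov[1, 1]))  # per decade

# Synthetic record warming at exactly 0.02 °C/yr (0.2 °C/decade).
years = list(range(2000, 2010))
gmst = [0.02 * y - 39.0 for y in years]
trend, se = linear_trend(years, gmst)
```
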
Beekhuizen, Johan; Heuvelink, Gerard B M; Huss, Anke; Bürgi, Alfred; Kromhout, Hans; Vermeulen, Roel
2014-11-01
With the increased availability of spatial data and computing power, spatial prediction approaches have become a standard tool for exposure assessment in environmental epidemiology. However, such models are largely dependent on accurate input data. Uncertainties in the input data can therefore have a large effect on model predictions, but are rarely quantified. With Monte Carlo simulation we assessed the effect of input uncertainty on the prediction of radio-frequency electromagnetic fields (RF-EMF) from mobile phone base stations at 252 receptor sites in Amsterdam, The Netherlands. The impact on ranking and classification was determined by computing the Spearman correlations and weighted Cohen's kappas (based on tertiles of the RF-EMF exposure distribution) between modelled values and RF-EMF measurements performed at the receptor sites. The uncertainty in modelled RF-EMF levels was large, with a median coefficient of variation of 1.5. Uncertainty in receptor site height, building damping and building height contributed most to model output uncertainty. For exposure ranking and classification, the heights of buildings and receptor sites were the most important sources of uncertainty, followed by building damping and antenna and site locations. Uncertainty in antenna power, tilt, height and direction had a smaller impact on model performance. We quantified the effect of input data uncertainty on the prediction accuracy of an RF-EMF environmental exposure model, thereby identifying the most important sources of uncertainty and estimating the total uncertainty stemming from potential errors in the input data. This approach can be used to optimize the model and better interpret model output. Copyright © 2014 Elsevier Inc. All rights reserved.
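The Monte Carlo propagation of input uncertainty can be sketched generically: sample each uncertain input from an assumed error distribution, push the samples through the exposure model, and summarize the spread of the outputs. The toy propagation model and all distributions below are assumptions, not the study's RF-EMF model:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(distance_m, antenna_power_w, building_damping_db):
    """Toy free-space field model in dB (stand-in for the real exposure model)."""
    path_loss = 20.0 * np.log10(distance_m)
    return 10.0 * np.log10(antenna_power_w * 1e3) - path_loss - building_damping_db

def monte_carlo(n=10_000):
    # Assumed input error distributions (all invented for illustration).
    distance = rng.normal(100.0, 5.0, n)    # site-location error
    power = rng.normal(20.0, 2.0, n)        # antenna-power error
    damping = rng.normal(10.0, 3.0, n)      # building-damping error
    out = model(distance, power, damping)
    return float(out.mean()), float(out.std())

mean_db, sd_db = monte_carlo()
```

The output standard deviation quantifies how input errors propagate into predicted exposure; varying one input at a time (holding the others fixed) gives the per-input contributions ranked in the abstract.
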
The effects of He I λ10830 on helium abundance determinations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aver, Erik; Olive, Keith A.; Skillman, Evan D., E-mail: aver@gonzaga.edu, E-mail: olive@umn.edu, E-mail: skillman@astro.umn.edu
2015-07-01
Observations of helium and hydrogen emission lines from metal-poor extragalactic H II regions, combined with estimates of metallicity, provide an independent method for determining the primordial helium abundance, Y_p. Traditionally, the emission lines employed are in the visible wavelength range, and the number of suitable lines is limited. Furthermore, when using these lines, large systematic uncertainties in helium abundance determinations arise due to the degeneracy of physical parameters such as temperature and density. Recently, Izotov, Thuan, and Guseva (2014) have pioneered adding the He I λ10830 infrared emission line to helium abundance determinations. The strong electron density dependence of He I λ10830 makes it ideal for better constraining density, potentially breaking the degeneracy with temperature. We revisit our analysis of the dataset published by Izotov, Thuan, and Stasińska (2007) and incorporate the newly available observations of He I λ10830 by scaling them using the observed-to-theoretical Paschen-gamma ratio. The solutions are better constrained, in particular for electron density, temperature, and the neutral hydrogen fraction, improving the model fit to the data, with the result that more spectra now pass screening for quality and reliability, in addition to a standard 95% confidence level cut. Furthermore, the addition of He I λ10830 decreases the uncertainty on the helium abundance for all galaxies, with reductions in the uncertainty ranging from 10-80%. Overall, we find a reduction in the uncertainty on Y_p of over 50%. From a regression to zero metallicity, we determine Y_p = 0.2449 ± 0.0040, consistent with the BBN result, Y_p = 0.2470 ± 0.0002, based on the Planck determination of the baryon density. The dramatic improvement in the uncertainty from incorporating He I λ10830 strongly supports the case for simultaneous (thus not requiring scaling) observations of visible and infrared helium emission line spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flampouri, S; Li, Z; Hoppe, B
2015-06-15
Purpose: To develop a treatment planning method for passively-scattered involved-node proton therapy of mediastinal lymphoma that is robust to breathing and cardiac motion. Methods: Beam-specific planning treatment volumes (bsPTVs) are calculated for each proton field to incorporate the pertinent uncertainties. Geometric margins are added laterally to each beam, while margins for range uncertainty due to setup errors, breathing, and calibration curve uncertainties are added along each beam. The calculation of breathing motion and deformation effects on proton range includes all 4DCT phases. The anisotropic water equivalent margins are translated to distances on the average 4DCT. Treatment plans are designed so that each beam adequately covers the corresponding bsPTV. For targets close to the heart, cardiac motion effects on dose maps are estimated by using a library of anonymized ECG-gated cardiac CTs (cCT). The cCTs, originally contrast-enhanced, are partially overridden to allow meaningful proton dose calculations. Targets similar to the treatment targets are drawn on one or more cCT sets matching the anatomy of the patient. Plans based on the average cCT are calculated on individual phases, then deformed to the average and accumulated. When clinically significant dose discrepancies occur between planned and accumulated doses, the patient plan is modified to reduce the cardiac motion effects. Results: We found that bsPTVs as planning targets create dose distributions similar to conventional proton planning distributions, while being a valuable tool for visualization of the uncertainties. For large targets with variability in motion and depth, integral dose was reduced because of the anisotropic margins. In most cases, heart motion has a clinically insignificant effect on target coverage. Conclusion: A treatment planning method was developed and used for proton therapy of mediastinal lymphoma. The technique incorporates bsPTVs compensating for all common sources of uncertainty and an estimation of the effects of cardiac motion, which is not commonly performed.
Managing geological uncertainty in CO2-EOR reservoir assessments
NASA Astrophysics Data System (ADS)
Welkenhuysen, Kris; Piessens, Kris
2014-05-01
Recently the European Parliament has agreed that an atlas of the CO2 storage potential is of high importance for a successful commercial introduction of CCS (CO2 capture and geological storage) technology in Europe. CO2-enhanced oil recovery (CO2-EOR) is often proposed as a promising business case for CCS, and likely has a high potential in the North Sea region. Traditional economic assessments for CO2-EOR largely neglect the geological reality of reservoir uncertainties because these are difficult to introduce realistically into such calculations. There is indeed a gap between the outcome of a reservoir simulation and the input values for e.g. cost-benefit evaluations, especially where it concerns uncertainty. The approach outlined here is to turn the procedure around, and to start from the geological data that is typically (or minimally) requested for an economic assessment. Thereafter it is evaluated how this data can realistically be provided by geologists and reservoir engineers. For the storage of CO2 these parameters are the total and yearly CO2 injection capacity, and containment or the potential for leakage. Specifically for the EOR operation, two additional parameters can be defined: the EOR ratio, i.e. the ratio of recovered oil over injected CO2, and the recycling ratio of CO2 that is reproduced after breakthrough at the production well. A critical but typically estimated parameter for CO2-EOR projects is the EOR ratio, taken in this brief outline as an example. The EOR ratio depends mainly on local geology (e.g. injection per well), field design (e.g. number of wells), and time. Costs related to engineering can be estimated fairly well, within some uncertainty range. The problem is usually to reliably estimate the geological parameters that define the EOR ratio. Reliable data are only available from (onshore) CO2-EOR projects in the US.
Published studies for the North Sea generally refer to these data in a simplified form, without uncertainty ranges, and are therefore not suited for cost-benefit analysis. They likely give overly optimistic results because onshore configurations are cheaper and differ from offshore ones. We propose to translate the detailed US data to the North Sea, retaining their uncertainty ranges. In a first step, a general cost correction can be applied to account for costs specific to the EU and the offshore setting. In a second step, site-specific data, including laboratory tests and reservoir modelling, are used to further adapt the EOR ratio values, taking into account all available geological reservoir-specific knowledge. Lastly, an evaluation of the field configuration will influence both the cost and local geology dimensions, because e.g. horizontal drilling is needed (cost) to improve injectivity (geology). As such, a dataset of the EOR field is obtained which contains all aspects and their uncertainty ranges. With these, a geologically realistic basis is obtained for further cost-benefit analysis of a specific field, where the uncertainties are accounted for using a stochastic evaluation. Such ad-hoc evaluation of geological parameters will provide a better assessment of the CO2-EOR potential of the North Sea oil fields.
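The stochastic evaluation proposed above, propagating an uncertain EOR ratio (with its retained uncertainty range) into project value, might be sketched as follows. All distributions, prices, and the simple value formula are invented placeholders, not data from the US projects:

```python
import random

def value_percentiles(n=5000, seed=42):
    """Monte Carlo sketch: uncertain EOR ratio and prices -> project value spread."""
    rng = random.Random(seed)
    values = []
    for _ in range(n):
        # EOR ratio with an assumed uncertainty range (bbl oil per t CO2 injected).
        eor_ratio = rng.triangular(0.1, 0.5, 0.3)
        co2_cost = rng.uniform(30.0, 60.0)      # $/t CO2 delivered (assumed)
        oil_price = rng.uniform(60.0, 100.0)    # $/bbl (assumed)
        injected = 1e6                          # t CO2, a fixed design choice
        values.append(injected * (eor_ratio * oil_price - co2_cost))
    values.sort()
    # P5 / P50 / P95 of the simulated project value distribution.
    return values[int(0.05 * n)], values[n // 2], values[int(0.95 * n)]

p5, p50, p95 = value_percentiles()
```

The spread between P5 and P95 is the direct translation of the geological uncertainty ranges into economic terms, which is the basis the abstract argues a cost-benefit analysis should use.
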
New Primary Standards for Establishing SI Traceability for Moisture Measurements in Solid Materials
NASA Astrophysics Data System (ADS)
Heinonen, M.; Bell, S.; Choi, B. Il; Cortellessa, G.; Fernicola, V.; Georgin, E.; Hudoklin, D.; Ionescu, G. V.; Ismail, N.; Keawprasert, T.; Krasheninina, M.; Aro, R.; Nielsen, J.; Oğuz Aytekin, S.; Österberg, P.; Skabar, J.; Strnad, R.
2018-01-01
A European research project METefnet addresses a fundamental obstacle to improving energy-intensive drying process control: due to ambiguous reference analysis methods and insufficient methods for estimating uncertainty in moisture measurements, the achievable accuracy in the past was limited and measurement uncertainties were largely unknown. This paper reports the developments in METefnet that provide a sound basis for the SI traceability: four new primary standards for realizing the water mass fraction were set up, analyzed and compared to each other. The operation of these standards is based on combining sample weighing with different water vapor detection techniques: cold trap, chilled mirror, electrolytic and coulometric Karl Fischer titration. The results show that an equivalence of 0.2 % has been achieved between the water mass fraction realizations and that the developed methods are applicable to a wide range of materials.
Performance of the ATLAS muon trigger in pp collisions at √s = 8 TeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aad, G.
2015-03-13
The performance of the ATLAS muon trigger system is evaluated with proton–proton collision data collected in 2012 at the Large Hadron Collider at a centre-of-mass energy of 8 TeV. It is primarily evaluated using events containing a pair of muons from the decay of Z bosons. The efficiency of the single-muon trigger is measured for muons with transverse momentum 25 < pT < 100 GeV, with a statistical uncertainty of less than 0.01 % and a systematic uncertainty of 0.6 %. The pT range for efficiency determination is extended by using muons from decays of J/ψ mesons, W bosons, and top quarks. The muon trigger shows highly uniform and stable performance. The performance is compared to the prediction of a detailed simulation.
Conroy, M.J.; Runge, M.C.; Nichols, J.D.; Stodola, K.W.; Cooper, R.J.
2011-01-01
The broad physical and biological principles behind climate change and its potential large-scale ecological impacts on biota are fairly well understood, although likely responses of biotic communities at fine spatio-temporal scales are not, limiting the ability of conservation programs to respond effectively to climate change outside the range of human experience. Much of the climate debate has focused on attempts to resolve key uncertainties in a hypothesis-testing framework. However, conservation decisions cannot await resolution of these scientific issues and instead must proceed in the face of uncertainty. We suggest that conservation should proceed in an adaptive management framework, in which decisions are guided by predictions under multiple, plausible hypotheses about climate impacts. Under this plan, monitoring is used to evaluate the response of the system to climate drivers, and management actions (perhaps experimental) are used to confront testable predictions with data, in turn providing feedback for future decision making. We illustrate these principles with the problem of mitigating the effects of climate change on terrestrial bird communities in the southern Appalachian Mountains, USA. © 2010 Elsevier Ltd.
Sensitivities of seismic velocities to temperature, pressure and composition in the lower mantle
NASA Astrophysics Data System (ADS)
Trampert, Jeannot; Vacher, Pierre; Vlaar, Nico
2001-08-01
We calculated temperature, pressure and compositional sensitivities of seismic velocities in the lower mantle using the latest mineral physics data. The compositional variable refers to the volume proportion of perovskite in a simplified perovskite-magnesiowüstite mantle assemblage. The novelty of our approach is the exploration of a reasonable range of the input parameters that enter the lower mantle extrapolations, which leads to realistic error bars on the sensitivities. Temperature variations can be inferred throughout the lower mantle with a good degree of precision. Contrary to the uppermost mantle, modest compositional changes in the lower mantle can be detected by seismic tomography, albeit with a larger uncertainty. A likely trade-off between temperature and composition will be largely determined by uncertainties in tomography itself. Given current sources of uncertainty in recent data, anelastic contributions to the temperature sensitivities (calculated using Karato's approach) appear less significant than previously thought. Recent seismological determinations of the ratio of relative S to P velocity heterogeneity can be entirely explained by thermal effects, although isolated spots beneath Africa and the Central Pacific in the lowermost mantle may call for a compositional origin.
The efficiency of asset management strategies to reduce urban flood risk.
ten Veldhuis, J A E; Clemens, F H L R
2011-01-01
In this study, three asset management strategies were compared with respect to their efficiency in reducing flood risk. Data from call centres at two municipalities were used to quantify urban flood risks associated with three causes of urban flooding: gully pot blockage, sewer pipe blockage and sewer overloading. The efficiency of three flood reduction strategies was assessed based on their effect on the causes contributing to flood risk. The sensitivity of the results to uncertainty in the data source, citizens' calls, was analysed by incorporating uncertainty ranges taken from the customer complaint literature. Based on the available data, it could be shown that reducing gully pot blockage is the most efficient action to reduce flood risk, given data uncertainty. If differences between cause incidences are large, as in the presented case study, call data are sufficient to decide how flood risk can be most efficiently reduced. According to the results of this analysis, enlargement of sewer pipes is not an efficient strategy to reduce flood risk, because the flood risk associated with sewer overloading is small compared to that of the other failure mechanisms.
Milanović, Jovica V
2017-08-13
Future power systems will be significantly different compared with their present states. They will be characterized by an unprecedented mix of a wide range of electricity generation and transmission technologies, as well as responsive and highly flexible demand and storage devices with significant temporal and spatial uncertainty. The importance of probabilistic approaches towards power system stability analysis, as a subsection of power system studies routinely carried out by power system operators, has been highlighted in previous research. However, it may not be feasible (or even possible) to accurately model all of the uncertainties that exist within a power system. This paper describes for the first time an integral approach to probabilistic stability analysis of power systems, including small and large angular stability and frequency stability. It provides guidance for handling uncertainties in power system stability studies and some illustrative examples of the most recent results of probabilistic stability analysis of uncertain power systems. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
Isotope-abundance variations of selected elements (IUPAC technical report)
Coplen, T.B.; Böhlke, J.K.; De Bievre, P.; Ding, T.; Holden, N.E.; Hopple, J.A.; Krouse, H.R.; Lamberty, A.; Peiser, H.S.; Revesz, K.; Rieder, S.E.; Rosman, K.J.R.; Roth, E.; Taylor, P.D.P.; Vocke, R.D.; Xiao, Y.K.
2002-01-01
Documented variations in the isotopic compositions of some chemical elements are responsible for expanded uncertainties in the standard atomic weights published by the Commission on Atomic Weights and Isotopic Abundances of the International Union of Pure and Applied Chemistry. This report summarizes reported variations in the isotopic compositions of 20 elements that are due to physical and chemical fractionation processes (not due to radioactive decay) and their effects on the standard atomic-weight uncertainties. For 11 of those elements (hydrogen, lithium, boron, carbon, nitrogen, oxygen, silicon, sulfur, chlorine, copper, and selenium), standard atomic-weight uncertainties have been assigned values that are substantially larger than analytical uncertainties because of common isotope-abundance variations in materials of natural terrestrial origin. For 2 elements (chromium and thallium), recently reported isotope-abundance variations potentially are large enough to result in future expansion of their atomic-weight uncertainties. For 7 elements (magnesium, calcium, iron, zinc, molybdenum, palladium, and tellurium), documented isotope variations in materials of natural terrestrial origin are too small to have a significant effect on their standard atomic-weight uncertainties. This compilation indicates the extent to which the atomic weight of an element in a given material may differ from the standard atomic weight of the element. For most elements given above, data are graphically illustrated by a diagram in which the materials are specified in the ordinate and the compositional ranges are plotted along the abscissa in scales of (1) atomic weight, (2) mole fraction of a selected isotope, and (3) delta value of a selected isotope ratio.
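The atomic-weight effect of the abundance variations described above reduces to a mole-fraction-weighted mean of isotope masses. A minimal sketch, using illustrative 10B/11B masses and two plausible natural abundance patterns (not values from the report), shows how an abundance shift moves the atomic weight:

```python
# Standard atomic weight as a mole-fraction-weighted mean of isotope masses.
# Isotope masses and abundance patterns below are illustrative, for boron.

def atomic_weight(masses, fractions):
    """Mole-fraction-weighted mean isotope mass; fractions must sum to 1."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return sum(m * f for m, f in zip(masses, fractions))

# Two plausible natural 10B/11B abundance patterns (illustrative only)
w1 = atomic_weight([10.0129, 11.0093], [0.199, 0.801])
w2 = atomic_weight([10.0129, 11.0093], [0.189, 0.811])
print(round(w1, 4), round(w2, 4))  # the spread mirrors an expanded uncertainty
```

A 1 % shift in the mole fractions moves the computed atomic weight by roughly 0.01, which is the kind of material-dependent spread that motivates the expanded standard atomic-weight uncertainties.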
NASA Astrophysics Data System (ADS)
Coquelin, L.; Le Brusquet, L.; Fischer, N.; Gensdarmes, F.; Motzkus, C.; Mace, T.; Fleury, G.
2018-05-01
A scanning mobility particle sizer (SMPS) is a high-resolution nanoparticle sizing system that is widely used as the standard method to measure airborne particle size distributions (PSD) in the size range 1 nm–1 μm. This paper addresses the problem of assessing the uncertainty associated with the PSD when a differential mobility analyzer (DMA) operates in scanning mode. The sources of uncertainty are described and then modeled, either through experiments or through knowledge extracted from the literature. Special care is taken to model the physics and to account for competing theories. Indeed, the modeling errors resulting from approximations of the physics can largely affect the final estimate of this indirect measurement, especially for quantities that are not measured during day-to-day experiments. The Monte Carlo method is used to compute the uncertainty associated with the PSD. The method is tested against real data sets of monosize polystyrene latex (PSL) spheres with nominal diameters of 100 nm, 200 nm and 450 nm. The median diameters and associated standard uncertainties of the aerosol particles are estimated as 101.22 nm ± 0.18 nm, 204.39 nm ± 1.71 nm and 443.87 nm ± 1.52 nm with the new approach. Other statistical parameters, such as the mean diameter, the mode and the geometric mean and their associated standard uncertainties, are also computed. These results are then compared with those obtained by the SMPS embedded software.
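The Monte Carlo propagation used above can be sketched as follows. The forward model here is a toy assumption (independent multiplicative errors applied to measured diameters, then the median re-estimated per draw), not the paper's SMPS inversion physics:

```python
# Minimal Monte Carlo sketch of uncertainty propagation for a median diameter.
# Each draw perturbs the measured diameters by assumed independent relative
# errors (standing in for flow, voltage, charging uncertainties), then the
# median is recomputed; the spread of draws gives the standard uncertainty.
import random
import statistics

def perturbed_median(diams, rel_sigmas, rng):
    """One MC draw: apply multiplicative errors, return the median diameter."""
    scale = 1.0
    for s in rel_sigmas:
        scale *= rng.gauss(1.0, s)
    return statistics.median(d * scale for d in diams)

rng = random.Random(0)
diams = [98.0, 100.0, 101.0, 102.0, 104.0]   # nm, illustrative PSL-like data
rel_sigmas = [0.002, 0.003, 0.004]           # assumed relative uncertainties
draws = [perturbed_median(diams, rel_sigmas, rng) for _ in range(5000)]
print(f"{statistics.mean(draws):.2f} nm +/- {statistics.stdev(draws):.2f} nm")
```

The real calculation replaces the toy forward model with the DMA transfer function and competing charging theories, but the propagation step is the same.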
NASA Astrophysics Data System (ADS)
Alfieri, J. G.
2012-12-01
In-situ observations are essential to a broad range of applications, including the development, calibration, and validation of both numerical and remote sensing-based models. For example, observational data are required to evaluate the skill of these models both in representing the complex biogeophysical processes regulating evapotranspiration (ET) and in predicting the magnitude of the moisture flux. As such, by propagating into these subsequent activities, any uncertainty or errors associated with the observational data have the potential to adversely impact the accuracy and utility of these models. It is therefore critical that the factors driving measurement uncertainty are fully understood so that steps can be taken to account for its effects and mitigate its impact on subsequent analyses. Field measurements of ET can be collected using a variety of techniques, including eddy covariance (EC), lysimetry (LY), and scintillometry (SC). Each of these methods is underpinned by a unique set of theoretical considerations and practical constraints, and, as a result, each method is susceptible to differing types of systematic and random error. Since the uncertainty associated with field measurements depends on how well numerous factors - for example, environmental conditions - adhere to the underlying assumptions, the quality of in-situ observations collected via the differing methods can vary significantly both over time and from site to site. Using data from both site studies and large field campaigns, such as IHOP_2002 and BEAREX08, the sources of uncertainty in field observations will be discussed. The impact of measurement uncertainty on model validation will also be illustrated.
Measures of GCM Performance as Functions of Model Parameters Affecting Clouds and Radiation
NASA Astrophysics Data System (ADS)
Jackson, C.; Mu, Q.; Sen, M.; Stoffa, P.
2002-05-01
This abstract is one of three related presentations at this meeting dealing with several issues surrounding optimal parameter and uncertainty estimation of model predictions of climate. Uncertainty in model predictions of climate depends in part on the uncertainty produced by model approximations or parameterizations of unresolved physics. Evaluating these uncertainties is computationally expensive because one needs to evaluate how arbitrary choices for any given combination of model parameters affect model performance. Because the computational effort grows exponentially with the number of parameters being investigated, it is important to choose parameters carefully. Evaluating whether a parameter is worth investigating depends on two considerations: 1) do reasonable choices of parameter values produce a large range in model response relative to observational uncertainty? and 2) does the model response depend non-linearly on various combinations of model parameters? We have decided to narrow our attention to parameters that affect clouds and radiation, as it is likely that these parameters will dominate uncertainties in model predictions of future climate. We present preliminary results of ~20 to 30 AMIP II-style climate model integrations using NCAR's CCM3.10 that show model performance as a function of individual parameters controlling 1) the critical relative humidity for cloud formation (RHMIN) and 2) the boundary layer critical Richardson number (RICR). We also explore various definitions of model performance that include some or all observational data sources (surface air temperature and pressure, meridional and zonal winds, clouds, long- and short-wave cloud forcings, etc.) and evaluate in a few select cases whether the model's response depends non-linearly on the parameter values we have selected.
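The screening question in consideration 1) amounts to scanning a small parameter grid and recording a performance metric per combination. A sketch, with a made-up quadratic misfit standing in for actual CCM3.10 output (the RHMIN and RICR names follow the abstract; all values are illustrative):

```python
# Scan a coarse 2-D grid over two cloud/radiation parameters and record a
# skill score for each combination. skill() is a toy stand-in for a real
# model-minus-observation misfit; lower is better.
import itertools

def skill(rhmin, ricr):
    """Toy misfit surface with a minimum at (0.85, 0.3) (illustrative)."""
    return (rhmin - 0.85) ** 2 + 2.0 * (ricr - 0.3) ** 2

grid = list(itertools.product([0.75, 0.85, 0.95], [0.1, 0.3, 0.5]))
scores = {p: skill(*p) for p in grid}
best = min(scores, key=scores.get)
print(best)  # -> (0.85, 0.3)
```

With a real model, each `skill()` call is one full integration, which is why the exponential growth of the grid with parameter count forces a careful parameter selection.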
NASA Astrophysics Data System (ADS)
Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert
2016-05-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~20,000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian-process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with those of the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
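The "simple averaging weighted by the aggregate score" method can be sketched in a few lines: each run's sea-level contribution is weighted by its model-data fit score (higher score meaning better fit). The numbers below are illustrative, not results from the paper:

```python
# Score-weighted ensemble average: runs that fit the calibration data better
# contribute more to the central estimate. Values are illustrative only.

def weighted_mean(values, scores):
    """Mean of values weighted by (non-negative) fit scores."""
    total = sum(scores)
    return sum(v * s for v, s in zip(values, scores)) / total

esl = [3.0, 3.4, 3.1, 4.0]      # equivalent sea-level rise per run (m)
score = [0.9, 0.5, 0.8, 0.1]    # aggregate model-data fit score per run
print(round(weighted_mean(esl, score), 3))
```

The abstract's caveat applies here too: this weighting is only robust when the parameter space is sampled full-factorially, since uneven sampling biases the weighted sum.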
Optimization Under Uncertainty for Electronics Cooling Design
NASA Astrophysics Data System (ADS)
Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.
Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such a methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that will result in satisfactory thermal solutions. Apart from providing basic statistical information such as the mean and standard deviation of the output quantities, auxiliary data from an uncertainty-based optimization, such as local and global sensitivities, help the designer decide which input parameter(s) the output quantity of interest is most sensitive to. This supports the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
Unrealized Global Temperature Increase: Implications of Current Uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz, Stephen E.
Unrealized increase in global mean surface air temperature (GMST) may result from the climate system not being in steady state with forcings and/or from cessation of negative aerosol forcing that would result from decreases in emissions. An observation-constrained method is applied to infer the dependence of Earth's climate sensitivity on forcing by anthropogenic aerosols within the uncertainty on that forcing given by the Fifth (2013) Assessment Report of the Intergovernmental Panel on Climate Change. Within these uncertainty ranges the increase in GMST due to temperature lag for future forcings held constant is slight (0.09–0.19 K over 20 years; 0.12–0.26 K over 100 years). However, the incremental increase in GMST that would result from a hypothetical abrupt cessation of sources of aerosols could be quite large, but is highly uncertain, 0.1–1.3 K over 20 years. Decrease in CO2 abundance and forcing following abrupt cessation of emissions would offset these increases in GMST over 100 years by as little as 0.09 K to as much as 0.8 K. The uncertainties quantified here greatly limit confidence in projections of change in GMST that would result from any strategy for future reduction of emissions.
Understanding the origin of Paris Agreement emission uncertainties
NASA Astrophysics Data System (ADS)
Rogelj, Joeri; Fricko, Oliver; Meinshausen, Malte; Krey, Volker; Zilliacus, Johanna J. J.; Riahi, Keywan
2017-06-01
The UN Paris Agreement puts in place a legally binding mechanism to increase mitigation action over time. Countries put forward pledges called nationally determined contributions (NDC) whose impact is assessed in global stocktaking exercises. Subsequently, actions can then be strengthened in light of the Paris climate objective: limiting global mean temperature increase to well below 2 °C and pursuing efforts to limit it further to 1.5 °C. However, pledged actions are currently described ambiguously and this complicates the global stocktaking exercise. Here, we systematically explore possible interpretations of NDC assumptions, and show that this results in estimated emissions for 2030 ranging from 47 to 63 GtCO2e yr-1. We show that this uncertainty has critical implications for the feasibility and cost to limit warming well below 2 °C and further to 1.5 °C. Countries are currently working towards clarifying the modalities of future NDCs. We identify salient avenues to reduce the overall uncertainty by about 10 percentage points through simple, technical clarifications regarding energy accounting rules. Remaining uncertainties depend to a large extent on politically valid choices about how NDCs are expressed, and therefore raise the importance of a thorough and robust process that keeps track of where emissions are heading over time.
NASA Astrophysics Data System (ADS)
Tompkins, Lauren Alexandra
The first measurement of the inelastic cross-section for proton-proton collisions at a center-of-mass energy of 7 TeV using the ATLAS detector at the Large Hadron Collider is presented. From a dataset corresponding to an integrated luminosity of 20 inverse microbarns, events are selected by requiring activity in scintillation counters mounted in the forward region of the ATLAS detector. An inelastic cross-section of 60.1 ± 2.1 millibarns is measured for the subset of events visible to the scintillation counters. The uncertainty includes the statistical and systematic uncertainty on the measurement. The visible events satisfy ξ > 5 × 10^-6, where ξ = M_X^2/s is calculated from the invariant mass, M_X, of hadrons selected using the largest rapidity gap in the event. For diffractive events this corresponds to requiring at least one of the dissociation masses to be larger than 15.7 GeV. Using an extrapolation dependent on the model for the differential diffractive mass distribution, an inelastic cross-section of 69.1 ± 2.4 (exp) ± 6.9 (extr) millibarns is determined, where (exp) indicates the experimental uncertainties and (extr) indicates the uncertainty due to the extrapolation from the limited ξ range to the full inelastic cross-section.
Tsao, C-C; Campbell, J E; Mena-Carrasco, M; Spak, S N; Carmichael, G R; Chen, Y
2012-10-02
Although biofuels present an opportunity for renewable energy production, significant land-use change resulting from biofuels may contribute to negative environmental, economic, and social impacts. Here we examined non-GHG air pollution impacts from both indirect and direct land-use change caused by the anticipated expansion of Brazilian biofuels production. We synthesized information on fuel loading, combustion completeness, and emission factors, and developed a spatially explicit approach with uncertainty and sensitivity analyses to estimate air pollution emissions. The land-use change emissions, ranging from 6.7 to 26.4 Tg PM2.5, were dominated by deforestation burning practices associated with indirect land-use change. We also found that Brazilian sugar cane ethanol and soybean biodiesel, including direct and indirect land-use change effects, have much larger life-cycle emissions than conventional fossil fuels for six regulated air pollutants. The emissions magnitude and uncertainty decrease with longer life-cycle integration periods. Results are conditional on the single LUC scenario employed here. After the LUC scenario uncertainty itself, the largest source of uncertainty in LUC emissions stems from the combustion completeness during deforestation. While current biofuels cropland burning policies in Brazil seek to reduce life-cycle emissions, these policies do not address the large emissions caused by indirect land-use change.
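The emission calculation implied above follows the standard fire-emissions form, E = burned area × fuel loading × combustion completeness × emission factor, with the dominant non-scenario uncertainty in the completeness term. A hedged sketch with Monte Carlo sampling of that term (all numbers are illustrative, not the paper's):

```python
# E = area x fuel loading x combustion completeness x emission factor,
# with combustion completeness sampled over an assumed uniform range to show
# how strongly it drives the emission spread. All inputs are illustrative.
import random

def pm25_emission(area_ha, fuel_t_per_ha, completeness, ef_g_per_kg):
    """PM2.5 emission in tonnes for one burned-area scenario."""
    burned_kg = area_ha * fuel_t_per_ha * 1000.0 * completeness  # kg fuel burned
    return burned_kg * ef_g_per_kg / 1e6                         # g -> tonnes

rng = random.Random(1)
draws = [pm25_emission(1e6, 150.0, rng.uniform(0.3, 0.6), 9.1)
         for _ in range(2000)]
lo, hi = min(draws), max(draws)
print(f"PM2.5: {lo / 1e6:.2f}-{hi / 1e6:.2f} Tg")  # range driven by completeness
```

Because the four factors multiply, a factor-of-two uncertainty in combustion completeness maps directly into a factor-of-two range in emissions, which is why that term dominates.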
A probabilistic framework for single-station location of seismicity on Earth and Mars
NASA Astrophysics Data System (ADS)
Böse, M.; Clinton, J. F.; Ceylan, S.; Euchner, F.; van Driel, M.; Khan, A.; Giardini, D.; Lognonné, P.; Banerdt, W. B.
2017-01-01
Locating the source of seismic energy from a single three-component seismic station is associated with large uncertainties, originating from challenges in identifying seismic phases, as well as inevitable pick and model uncertainties. The challenge is even higher for planets such as Mars, where interior structure is a priori largely unknown. In this study, we address the single-station location problem by developing a probabilistic framework that combines location estimates from multiple algorithms to estimate the probability density function (PDF) for epicentral distance, back azimuth, and origin time. Each algorithm uses independent and complementary information in the seismic signals. Together, the algorithms allow locating seismicity ranging from local to teleseismic quakes. Distances and origin times of large regional and teleseismic events (M > 5.5) are estimated from observed and theoretical body- and multi-orbit surface-wave travel times. The latter are picked from the maxima in the waveform envelopes in various frequency bands. For smaller events at local and regional distances, only first arrival picks of body waves are used, possibly in combination with fundamental Rayleigh R1 waveform maxima where detectable; depth phases, such as pP or PmP, help constrain source depth and improve distance estimates. Back azimuth is determined from the polarization of the Rayleigh- and/or P-wave phases. When seismic signals are good enough for multiple approaches to be used, estimates from the various methods are combined through the product of their PDFs, resulting in an improved event location and reduced uncertainty range estimate compared to the results obtained from each algorithm independently. To verify our approach, we use both earthquake recordings from existing Earth stations and synthetic Martian seismograms. 
The Mars synthetics are generated with a full-waveform scheme (AxiSEM) using spherically-symmetric seismic velocity, density and attenuation models of Mars that incorporate existing knowledge of Mars internal structure, and include expected ambient and instrumental noise. While our probabilistic framework is developed mainly for application to Mars in the context of the upcoming InSight mission, it is also relevant for locating seismic events on Earth in regions with sparse instrumentation.
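The PDF-combination step in the framework above (multiplying independent per-algorithm estimates and renormalizing) can be sketched with two Gaussian distance estimates; the distributions and numbers here are illustrative, not the framework's actual PDFs:

```python
# Combine independent epicentral-distance PDFs from two algorithms by
# pointwise product and renormalization; the posterior is narrower than
# either input and its mode lies between the single-method estimates.
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

distances = [float(i) for i in range(0, 181)]           # epicentral distance, deg
pdfs = [[gaussian(d, 60.0, 10.0) for d in distances],   # e.g. body-wave estimate
        [gaussian(d, 65.0, 15.0) for d in distances]]   # e.g. surface-wave estimate

combined = [a * b for a, b in zip(*pdfs)]
norm = sum(combined)
combined = [c / norm for c in combined]                 # renormalize on the grid
best = distances[max(range(len(combined)), key=combined.__getitem__)]
print(best)  # posterior mode, between the two single-method estimates
```

For products of Gaussians the posterior mode is the precision-weighted mean of the inputs, which is why the combined estimate is pulled toward the tighter of the two.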
NASA Astrophysics Data System (ADS)
Zhang, H. F.; Chen, B. Z.; van der Laan-Luijkx, I. T.; Machida, T.; Matsueda, H.; Sawa, Y.; Fukuyama, Y.; Labuschagne, C.; Langenfelds, R.; van der Schoot, M.; Xu, G.; Yan, J. W.; Zhou, L. X.; Tans, P. P.; Peters, W.
2013-10-01
Current estimates of the terrestrial carbon fluxes in Asia ("Asia" refers to lands as far west as the Urals and is divided into Boreal Eurasia, Temperate Eurasia and Tropical Asia based on TransCom regions) show large uncertainties, particularly in the boreal and mid-latitudes and in China. In this paper, we present an updated carbon flux estimate for Asia by introducing aircraft CO2 measurements from the CONTRAIL (Comprehensive Observation Network for Trace gases by Airline) program into an inversion modeling system based on the CarbonTracker framework. We estimate that the average annual total Asian terrestrial land CO2 sink was about -1.56 Pg C yr-1 over the period 2006-2010, which offsets about one-third of the fossil fuel emissions from Asia (+4.15 Pg C yr-1). The uncertainty of the terrestrial uptake estimate was derived from a set of sensitivity tests and ranged from -1.07 to -1.80 Pg C yr-1, comparable to the formal Gaussian error of ±1.18 Pg C yr-1 (1-sigma). The largest sink was found in forests, predominantly in coniferous forests (-0.64 Pg C yr-1) and mixed forests (-0.14 Pg C yr-1); the second- and third-largest carbon sinks were found in grass/shrub lands and croplands, accounting for -0.44 Pg C yr-1 and -0.20 Pg C yr-1, respectively. The peak-to-peak amplitude of inter-annual variability (IAV) was 0.57 Pg C yr-1, with annual sinks ranging from -1.71 Pg C yr-1 to -2.28 Pg C yr-1. The IAV analysis reveals that the Asian CO2 sink was sensitive to climate variations, with the lowest uptake in 2010, concurrent with a summer flood and autumn drought, and the largest CO2 sink in 2009, owing to favorable temperature and plentiful precipitation. We also found that the inclusion of the CONTRAIL data in the inversion modeling system reduced the uncertainty by 11% over the whole Asian region, with large reductions in the southeast of Boreal Eurasia, the southeast of Temperate Eurasia and most Tropical Asian areas.
SU-E-T-98: An Analysis of TG-51 Electron Beam Calibration Correction Factor Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, P; Alvarez, P; Taylor, P
Purpose: To analyze the uncertainty of the TG-51 electron beam calibration correction factors for Farmer-type ion chambers currently used by institutions visited by IROC Houston. Methods: TG-51 calibration data were collected from 181 institutions visited by IROC Houston physicists, covering 1174 and 197 distinct electron beams from modern Varian and Elekta accelerators, respectively. Data collected and analyzed included ion chamber make and model, nominal energy, N_D,w, I_50, R_50, k'_R50, d_ref, P_gr and pdd(d_ref). k'_R50 data for parallel-plate chambers were excluded from the analysis. Results: Unlike photon beams, electron nominal energy is a poor indicator of the actual energy, as evidenced by the range of R_50 values for each electron beam energy (6–22 MeV). The large range in R_50 values resulted in k'_R50 values with a small standard deviation but a large range between the maximum and minimum values used (0.001–0.029) for a specific Varian nominal energy. Varian data showed more variability in k'_R50 values than the Elekta data (0.001–0.014). Using the observed range of R_50 values, the maximum spread in k'_R50 values was determined by IROC Houston and compared to the spread of k'_R50 values used in the community. For Elekta linacs the spreads were equivalent, but for Varian energies of 6 to 16 MeV, the community spread was 2 to 6 times larger. Community P_gr values had a much larger range for 6 and 9 MeV than predicted. The range in Varian pdd(d_ref) used by the community for low energies was large (1.4–4.9 percent), when it should have been very close to unity. Exradin, PTW Roos and PTW Farmer chambers' N_D,w values showed the largest spread, ≥11 percent. Conclusion: While the vast majority of electron beam calibration correction factors used are accurate, there is a surprising spread in some of the values used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly
Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.
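The error budget described above follows from treating the component errors as independent, so their variances add in quadrature. A minimal sketch with hypothetical error magnitudes (illustrative placeholders, not the study's estimates):

```python
import math

# Illustrative standard errors (as fractions of plot biomass) for independent
# error sources; these numbers are hypothetical, not the study's estimates.
errors = {
    "measurement": 0.05,
    "allometric": 0.10,
    "co_location": 0.15,
    "temporal": 0.12,
}

# Independent errors combine in quadrature: total variance = sum of variances.
var_total = sum(e ** 2 for e in errors.values())
sigma_total = math.sqrt(var_total)

for name, e in errors.items():
    share = e ** 2 / var_total
    print(f"{name:12s} sigma = {e:.2f}  share of variance = {share:.1%}")
print(f"combined sigma = {sigma_total:.3f}")
```

With numbers of this kind, the co-location and temporal terms dominate the variance even though no single source looks alarming on its own.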
Lott, Casey A; Wiley, Robert L; Fischer, Richard A; Hartfield, Paul D; Scott, J Michael
2013-01-01
Interior Least Terns (Sternula antillarum) (ILT) are colonial, fish-eating birds that breed within active channels of large sand bed rivers of the Great Plains and in the Lower Mississippi Valley. Multipurpose dams, irrigation structures, and engineered navigation systems have been present on these rivers for many decades. Despite severe alteration of channels and flow regimes, regulation era floods have remained effective at maintaining bare sandbar nesting habitat on many river segments and ILT populations have been stable or expanding since they were listed as endangered in 1985. We used ILT breeding colony locations from 2002 to 2012 and dispersal information to identify 16 populations and 48 subpopulations. More than 90% of ILT and >83% of river km with suitable nesting habitat occur within the two largest populations. However, replicate populations remain throughout the entire historical, geophysical, and ecological range of ILT. Rapid colonization of anthropogenic habitats in areas that were not historically occupied suggests metapopulation dynamics. The highest likelihood of demographic connectivity among ILT populations occurs across the Southern Plains and the Lower Mississippi River, which may be demographically connected with Least Tern populations on the Gulf Coast. Paired ecological and bird population models are needed to test whether previously articulated threats limit ILT population growth and to determine if management intervention is necessary and where. Given current knowledge, the largest sources of model uncertainty will be: (1) uncertainty in relationships between high flow events and subsequent sandbar characteristics and (2) uncertainty regarding the frequency of dispersal among population subunits. We recommend research strategies to reduce these uncertainties. PMID:24223295
Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly; ...
2017-01-07
NASA Technical Reports Server (NTRS)
Liu, Junjie; Bowman, Kevin W.; Lee, Meemong; Henze, David K.; Bousserez, Nicolas; Brix, Holger; Collatz, G. James; Menemenlis, Dimitris; Ott, Lesley; Pawson, Steven;
2014-01-01
Using an Observing System Simulation Experiment (OSSE), we investigate the impact of JAXA Greenhouse gases Observing SATellite 'IBUKI' (GOSAT) sampling on the estimation of terrestrial biospheric flux with the NASA Carbon Monitoring System Flux (CMS-Flux) estimation and attribution strategy. The simulated observations in the OSSE use the actual column carbon dioxide (X(CO2)) b2.9 retrieval sensitivity and quality control for the year 2010 processed through the Atmospheric CO2 Observations from Space algorithm. CMS-Flux is a variational inversion system that uses the GEOS-Chem forward and adjoint model forced by a suite of observationally constrained fluxes from ocean, land and anthropogenic models. We investigate the impact of GOSAT sampling on flux estimation in two aspects: 1) random error uncertainty reduction and 2) the global and regional bias in posterior flux resulting from the spatiotemporally biased GOSAT sampling. Based on Monte Carlo calculations, we find that global average flux uncertainty reduction ranges from 25% in September to 60% in July. When aggregated to the 11 land regions designated by phase 3 of the Atmospheric Tracer Transport Model Intercomparison Project, the annual mean uncertainty reduction ranges from 10% over North American boreal to 38% over South American temperate, which is driven by observational coverage and the magnitude of prior flux uncertainty. The uncertainty reduction over the South American tropical region is 30%, even with sparse observation coverage. We show that this reduction results from the large prior flux uncertainty and the impact of non-local observations. Given the assumed prior error statistics, the degrees of freedom for signal is approximately 1132 for 1 yr of the 74,055 GOSAT X(CO2) observations, which indicates that GOSAT provides approximately 1132 independent pieces of information about surface fluxes.
We quantify the impact of GOSAT's spatiotemporal sampling on the posterior flux, and find that a 0.7 gigaton carbon bias in the global annual posterior flux resulted from the seasonally and diurnally biased sampling when using a diagonal prior flux error covariance.
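The Monte Carlo estimate of uncertainty reduction described above amounts to the fractional shrinkage of the flux standard deviation from prior to posterior. A toy version with synthetic ensembles (the distributions are hypothetical; a real system would obtain them from repeated inversions of perturbed data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior and posterior flux ensembles for one region (GtC/yr).
prior_flux = rng.normal(loc=0.0, scale=1.0, size=500)
posterior_flux = rng.normal(loc=0.1, scale=0.5, size=500)

def uncertainty_reduction(prior, posterior):
    """Fractional shrinkage of the flux spread: 1 - sigma_post / sigma_prior."""
    return 1.0 - np.std(posterior, ddof=1) / np.std(prior, ddof=1)

ur = uncertainty_reduction(prior_flux, posterior_flux)
print(f"uncertainty reduction: {ur:.0%}")  # ~50% for these synthetic ensembles
```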
NASA Astrophysics Data System (ADS)
Reusch, D. B.
2016-12-01
Any analysis that wants to use a GCM-based scenario of future climate benefits from knowing how much uncertainty the GCM's inherent variability adds to the development of climate change predictions. This is especially relevant in the polar regions due to the potential of global impacts (e.g., sea level rise) from local (ice sheet) climate changes such as more frequent/intense surface melting. High-resolution, regional-scale models using GCMs for boundary/initial conditions in future scenarios inherit a measure of GCM-derived externally-driven uncertainty. We investigate these uncertainties for the Greenland ice sheet using the 30-member CESM1.0-CAM5-BGC Large Ensemble (CESMLE) for recent (1981-2000) and future (2081-2100, RCP 8.5) decades. Recent simulations are skill-tested against the ERA-Interim reanalysis and AWS observations with results informing future scenarios. We focus on key variables influencing surface melting through decadal climatologies, nonlinear analysis of variability with self-organizing maps (SOMs), regional-scale modeling (Polar WRF), and simple melt models. Relative to the ensemble average, spatially averaged climatological July temperature anomalies over a Greenland ice-sheet/ocean domain are mostly between +/- 0.2 °C. The spatial average hides larger local anomalies of up to +/- 2 °C. The ensemble average itself is 2 °C cooler than ERA-Interim. SOMs extend our diagnostics by providing a concise, objective summary of model variability as a set of generalized patterns. For CESMLE, the SOM patterns summarize the variability of multiple realizations of climate. Changes in pattern frequency by ensemble member show the influence of initial conditions. For example, basic statistical analysis of pattern frequency yields interquartile ranges of 2-4% for individual patterns across the ensemble. In climate terms, this tells us about climate state variability through the range of the ensemble, a potentially significant source of melt-prediction uncertainty. 
SOMs can also capture the different trajectories of climate due to intramodel variability over time. Polar WRF provides higher resolution regional modeling with improved, polar-centric model physics. Simple melt models allow us to characterize impacts of the upstream uncertainties on estimates of surface melting.
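The pattern-frequency statistics described above can be sketched once each daily field has been assigned to a SOM node. A minimal example with hypothetical assignments (a real analysis would first train the SOM on the ensemble fields; the node count and day count below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_members, n_days, n_patterns = 30, 620, 12  # 30-member ensemble, 12 SOM nodes

# Hypothetical best-matching-unit (BMU) assignments per member and day; in
# practice these come from mapping each daily field onto a trained SOM.
bmus = rng.integers(0, n_patterns, size=(n_members, n_days))

# Frequency (%) of each pattern within each ensemble member.
freq = np.stack([
    np.bincount(bmus[m], minlength=n_patterns) / n_days * 100.0
    for m in range(n_members)
])

# Interquartile range of each pattern's frequency across the ensemble:
# a simple measure of initial-condition-driven spread.
iqr = np.percentile(freq, 75, axis=0) - np.percentile(freq, 25, axis=0)
print("IQR of pattern frequency (percentage points):", np.round(iqr, 2))
```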
How good a clock is rotation? The stellar rotation-mass-age relationship for old field stars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epstein, Courtney R.; Pinsonneault, Marc H., E-mail: epstein@astronomy.ohio-state.edu, E-mail: pinsono@astronomy.ohio-state.edu
2014-01-10
The rotation-mass-age relationship offers a promising avenue for measuring the ages of field stars, assuming the uncertainties attendant to this technique can be well characterized. We model stellar angular momentum evolution starting with a rotation distribution from the open cluster M37. Our predicted rotation-mass-age relationship shows significant zero-point offsets compared to an alternative angular momentum loss law and published gyrochronology relations. Systematic errors at the 30% level are permitted by current data, highlighting the need for empirical guidance. We identify two fundamental sources of uncertainty that limit the precision of rotation-based ages and quantify their impact. Stars are born with a range of rotation rates, which leads to an age range at fixed rotation period. We find that the inherent ambiguity from the initial conditions is important for all young stars, and remains large for old stars below 0.6 M☉. Latitudinal surface differential rotation also introduces a minimum uncertainty into rotation period measurements and, by extension, rotation-based ages. Both models and the data from the binary star systems 61 Cyg and α Cen demonstrate that latitudinal differential rotation is the limiting factor for rotation-based age precision among old field stars, inducing uncertainties at the ∼2 Gyr level. We also examine the relationship between variability amplitude, rotation period, and age. Existing ground-based surveys can detect field populations with ages as old as 1-2 Gyr, while space missions can detect stars as old as the Galactic disk. In comparison with other techniques for measuring the ages of lower main-sequence stars, including geometric parallax and asteroseismology, rotation-based ages have the potential to be the most precise chronometer for 0.6-1.0 M☉ stars.
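To illustrate how a floor on period precision propagates into a rotation-based age, one can use the classic Skumanich spin-down law P ∝ t^(1/2). This is an illustrative stand-in, not the paper's calibrated loss law, and the solar normalization values are round numbers:

```python
# Under the Skumanich law P ~ t**0.5 (illustrative, not the paper's model),
# age scales as t ~ P**2, so a fractional period uncertainty doubles when
# propagated to age.
def age_from_period(period_days, p_sun=25.4, t_sun_gyr=4.57):
    """Age in Gyr from rotation period, normalized to assumed solar values."""
    return t_sun_gyr * (period_days / p_sun) ** 2

def age_uncertainty(period_days, sigma_period_days, **kw):
    """First-order propagation: sigma_t = |dt/dP| * sigma_P = 2 t sigma_P / P."""
    t = age_from_period(period_days, **kw)
    return 2.0 * t * sigma_period_days / period_days

# A 5% period ambiguity (e.g., from latitudinal differential rotation)
# becomes a ~10% age uncertainty for an old field star.
t = age_from_period(30.0)
sigma_t = age_uncertainty(30.0, 1.5)
print(f"age = {t:.2f} Gyr +/- {sigma_t:.2f} Gyr")
```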
Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality
Hondula, David M.; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer
2017-01-01
Background: Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to “adaptation uncertainty” (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. Objectives: This study had three aims: a) Compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. Methods: We estimated impacts for 2070–2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. Results: The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Conclusions: Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634 PMID:28885979
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Bixler, Nathan E.; Wagner, Kenneth Charles
2014-03-01
A methodology for using the MELCOR code with the Latin Hypercube Sampling method was developed to estimate uncertainty in various predicted quantities such as hydrogen generation or release of fission products under severe accident conditions. In this case, the emphasis was on estimating the range of hydrogen sources in station blackout conditions in the Sequoyah Ice Condenser plant, taking into account uncertainties in the modeled physics known to affect hydrogen generation. The method uses user-specified likelihood distributions for uncertain model parameters, which may include uncertainties of a stochastic nature, to produce a collection of code calculations, or realizations, characterizing the range of possible outcomes. Forty MELCOR code realizations of Sequoyah were conducted that included 10 uncertain parameters, producing a range of in-vessel hydrogen quantities. The total hydrogen produced was approximately 583 kg ± 131 kg. Sensitivity analyses revealed expected trends with respect to the parameters of greatest importance; however, considerable scatter in results was observed when plotted against any of the uncertain parameters, with no parameter manifesting dominant effects on hydrogen generation. It is concluded that, with respect to the physics parameters investigated, in order to further reduce predicted hydrogen uncertainty, it would be necessary to reduce all physics parameter uncertainties similarly, bearing in mind that some parameters are inherently uncertain within a range. It is suspected that some residual uncertainty associated with modeling complex, coupled and synergistic phenomena is an inherent aspect of complex systems and cannot be reduced to point value estimates. Probabilistic analyses such as the one demonstrated in this work are important to properly characterize the response of complex systems such as severe accident progression in nuclear power plants.
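The sampling step described above, generating 40 stratified input vectors over the uncertain parameters, can be sketched with SciPy's Latin Hypercube sampler. The parameter names and bounds below are hypothetical placeholders, not actual MELCOR input identifiers:

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical uncertain-parameter bounds (names are illustrative only).
bounds = {
    "zr_oxidation_multiplier": (0.5, 2.0),
    "debris_porosity": (0.3, 0.5),
    "candling_heat_transfer": (500.0, 2000.0),
}

n_realizations = 40
sampler = qmc.LatinHypercube(d=len(bounds), seed=42)
unit_sample = sampler.random(n=n_realizations)   # stratified points in [0, 1)^d
lo = [b[0] for b in bounds.values()]
hi = [b[1] for b in bounds.values()]
sample = qmc.scale(unit_sample, lo, hi)          # rescaled to parameter ranges

# Each row is one code realization's input vector.
for i, row in enumerate(sample[:3]):
    print(i, dict(zip(bounds, np.round(row, 3))))
```

Each column is stratified: exactly one sample falls in each of the 40 equal-probability bins, which is what distinguishes Latin Hypercube Sampling from simple random sampling.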
NASA Technical Reports Server (NTRS)
Boothroyd, Arnold I.; Sackmann, I.-Juliana
2001-01-01
Helioseismic frequency observations provide an extremely accurate window into the solar interior; frequencies from the Michelson Doppler Imager (MDI) on the Solar and Heliospheric Observatory (SOHO) spacecraft enable the adiabatic sound speed and adiabatic index to be inferred with an accuracy of a few parts in 10^4 and the density with an accuracy of a few parts in 10^3. This has become a serious challenge to theoretical models of the Sun. Therefore, we have undertaken a self-consistent, systematic study of the sources of uncertainties in the standard solar models. We found that the largest effect on the interior structure arises from the observational uncertainties in the photospheric abundances of the elements, which affect the sound speed profile at the level of 3 parts in 10^3. The estimated 4% uncertainty in the OPAL opacities could lead to effects of 1 part in 10^3; the approximately 5% uncertainty in the basic pp nuclear reaction rate would have a similar effect, as would uncertainties of approximately 15% in the diffusion constants for the gravitational settling of helium. The approximately 50% uncertainties in diffusion constants for the heavier elements would have nearly as large an effect. Different observational methods for determining the solar radius yield results differing by as much as 7 parts in 10^4; we found that this leads to uncertainties of a few parts in 10^3 in the sound speed in the solar convective envelope, but has negligible effect on the interior. Our reference standard solar model yielded a convective envelope position of 0.7135 solar radius, in excellent agreement with the observed value of 0.713 +/- 0.001 solar radius, and was significantly affected only by Z/X, the pp rate, and the uncertainties in helium diffusion constants. 
Our reference model also yielded an envelope helium abundance of 0.2424, in good agreement with the approximate range of 0.24 to 0.25 inferred from helioseismic observations; only extreme Z/X values yielded an envelope helium abundance outside this range. We found that other current uncertainties, namely, in the solar age and luminosity, in nuclear rates other than the pp reaction, in the low-temperature molecular opacities, and in the low-density equation of state, have no significant effect on the quantities that can be inferred from helioseismic observations. The predicted pre-main-sequence lithium depletion is uncertain by a factor of 2. The predicted neutrino capture rate is uncertain by approximately 30% for the Cl-37 experiment and by approximately 3% for Ga-71 experiments, while the B-8 neutrino flux is uncertain by approximately 30%.
NASA Astrophysics Data System (ADS)
Sadegh, Mojtaba; Ragno, Elisa; AghaKouchak, Amir
2017-06-01
We present a newly developed Multivariate Copula Analysis Toolbox (MvCAT) which includes a wide range of copula families with different levels of complexity. MvCAT employs a Bayesian framework with a residual-based Gaussian likelihood function for inferring copula parameters and estimating the underlying uncertainties. The contribution of this paper is threefold: (a) providing a Bayesian framework to approximate the predictive uncertainties of fitted copulas, (b) introducing a hybrid-evolution Markov Chain Monte Carlo (MCMC) approach designed for numerical estimation of the posterior distribution of copula parameters, and (c) enabling the community to explore a wide range of copulas and evaluate them relative to the fitting uncertainties. We show that the commonly used local optimization methods for copula parameter estimation often get trapped in local minima. The proposed method, however, addresses this limitation and improves the description of the dependence structure. MvCAT also enables evaluation of uncertainties relative to the length of record, which is fundamental to a wide range of applications such as multivariate frequency analysis.
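As a lightweight counterpart to MvCAT's Bayesian MCMC fit, a Gaussian copula can be estimated by the rank-based moment relation ρ = sin(πτ/2), where τ is Kendall's tau. This sketch uses synthetic data and is not MvCAT's algorithm; the marginals are arbitrary monotone transforms, which leave the copula unchanged:

```python
import numpy as np
from scipy.stats import kendalltau, norm

rng = np.random.default_rng(7)

# Synthetic dependent pair standing in for, e.g., flood peak and volume.
z = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=2000)
x, y = np.exp(z[:, 0]), np.exp(0.5 * z[:, 1])  # arbitrary marginal transforms

# Method-of-moments fit for the Gaussian copula: rho = sin(pi * tau / 2).
# Kendall's tau is rank-based, so it is invariant to the marginal transforms.
tau, _ = kendalltau(x, y)
rho_hat = np.sin(np.pi * tau / 2.0)
print(f"Kendall tau = {tau:.3f}, implied Gaussian-copula rho = {rho_hat:.3f}")

# Simulate from the fitted copula: correlated normals -> uniforms via the CDF.
chol = np.linalg.cholesky(np.array([[1.0, rho_hat], [rho_hat, 1.0]]))
u = norm.cdf(rng.standard_normal((5000, 2)) @ chol.T)
```

A full analysis in the spirit of MvCAT would instead place a prior on the copula parameter, evaluate a likelihood over many families, and sample the posterior with MCMC to obtain fitting uncertainties rather than a point estimate.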
NASA Astrophysics Data System (ADS)
Samaniego, Luis; Kumar, Rohini; Pechlivanidis, Illias; Breuer, Lutz; Wortmann, Michel; Vetter, Tobias; Flörke, Martina; Chamorro, Alejandro; Schäfer, David; Shah, Harsh; Zeng, Xiaofan
2016-04-01
The quantification of the predictive uncertainty in hydrologic models and its attribution to its main sources is of particular interest in climate change studies. In recent years, a number of studies have been aimed at assessing the ability of hydrologic models (HMs) to reproduce extreme hydrologic events. Disentangling the overall uncertainty of streamflow (including its derived low-flow characteristics) into individual contributions, stemming from forcings and model structure, has also been studied. Based on recent literature, it can be stated that there is a controversy with respect to which source is the largest (e.g., Teng et al. 2012, Bosshard et al. 2013, Prudhomme et al. 2014). Little has been done to estimate the relative impact of the parametric uncertainty of the HMs with respect to the overall uncertainty of low-flow characteristics. The ISI-MIP2 project provides a unique opportunity to understand the propagation of forcing and model structure uncertainties into century-long time series of drought characteristics. This project defines a consistent framework to deal with compatible initial conditions for the HMs and a set of standardized historical and future forcings. Moreover, the ensemble of hydrologic model predictions varies across a broad range of climate scenarios and regions. To achieve this goal, we use six preconditioned hydrologic models (HYPE, HBV, mHM, SWIM, VIC, and WaterGAP3) set up in seven large continental river basins: Amazon, Blue Nile, Ganges, Niger, Mississippi, Rhine, Yellow. These models are forced with bias-corrected outputs of five CMIP5 general circulation models (GCM) under four representative concentration pathway (RCP) scenarios (i.e., 2.6, 4.5, 6.0, and 8.5 W m-2) for the period 1971-2099. Simulated streamflow is transformed into a monthly runoff index (RI) to analyze the attribution of the GCM and HM uncertainty into drought magnitude and duration over time. 
Uncertainty contributions are investigated during three periods: 1) 2006-2035, 2) 2036-2065, and 3) 2070-2099. Results presented in Samaniego et al. 2015 (submitted) indicate that GCM uncertainty mostly dominates over HM uncertainty for predictions of runoff drought characteristics, irrespective of the selected RCP and region. For the mHM model, in particular, GCM uncertainty always dominates over parametric uncertainty. In general, the overall uncertainty increases with time. The larger the radiative forcing of the RCP, the larger the uncertainty in drought characteristics; however, the propagation of the GCM uncertainty onto a drought characteristic depends largely upon the hydro-climatic regime. While our study emphasizes the need for multi-model ensembles for the assessment of future drought projections, the agreement between GCM forcings is still too weak to draw conclusive recommendations. References: L. Samaniego, R. Kumar, I. G. Pechlivanidis, L. Breuer, M. Wortmann, T. Vetter, M. Flörke, A. Chamorro, D. Schäfer, H. Shah, X. Zeng: Propagation of forcing and model uncertainty into hydrological drought characteristics in a multi-model century-long experiment in continental river basins. Submitted to Climatic Change, Dec 2015. Bosshard et al. 2013, doi:10.1029/2011WR011533. Prudhomme et al. 2014, doi:10.1073/pnas.1222473110. Teng et al. 2012, doi:10.1175/JHM-D-11-058.1.
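The attribution of ensemble spread to GCM versus HM choice can be sketched as an ANOVA-style decomposition over the GCM × HM matrix of a simulated drought characteristic. The numbers below are synthetic, constructed so that the GCM signal dominates, as the study reports; they are not ISI-MIP2 results:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical drought-duration anomalies (months) for each GCM x HM pair:
# a large GCM effect, a smaller HM effect, plus interaction noise.
gcm_effect = np.array([[-4.0], [-2.0], [0.0], [2.0], [4.0]])          # 5 GCMs
hm_effect = np.array([[-1.0, -0.6, -0.2, 0.2, 0.6, 1.0]])             # 6 HMs
duration = gcm_effect + hm_effect + rng.normal(0, 0.5, size=(5, 6))

grand = duration.mean()
var_gcm = np.var(duration.mean(axis=1) - grand)  # GCM main-effect variance
var_hm = np.var(duration.mean(axis=0) - grand)   # HM main-effect variance
var_tot = np.var(duration)

print(f"GCM share of variance: {var_gcm / var_tot:.1%}")
print(f"HM share of variance:  {var_hm / var_tot:.1%}")
```

The residual (total minus main effects) captures GCM-HM interaction, which is one reason the dominant source can differ by region and hydro-climatic regime.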
Model Sensitivity Studies of the Decrease in Atmospheric Carbon Tetrachloride
NASA Technical Reports Server (NTRS)
Chipperfield, Martyn P.; Liang, Qing; Rigby, Matt; Hossaini, Ryan; Montzka, Stephen A.; Dhomse, Sandip; Feng, Wuhu; Prinn, Ronald G.; Weiss, Ray F.; Harth, Christina M.;
2016-01-01
Carbon tetrachloride (CCl4) is an ozone-depleting substance, which is controlled by the Montreal Protocol and for which the atmospheric abundance is decreasing. However, the current observed rate of this decrease is known to be slower than expected based on reported CCl4 emissions and its estimated overall atmospheric lifetime. Here we use a three-dimensional (3-D) chemical transport model to investigate the impact on its predicted decay of uncertainties in the rates at which CCl4 is removed from the atmosphere by photolysis, by ocean uptake and by degradation in soils. The largest sink is atmospheric photolysis (74% of total), but a reported 10% uncertainty in its combined photolysis cross section and quantum yield has only a modest impact on the modelled rate of CCl4 decay. This is partly due to the limiting effect of the rate of transport of CCl4 from the main tropospheric reservoir to the stratosphere, where photolytic loss occurs. The model suggests large interannual variability in the magnitude of this stratospheric photolysis sink caused by variations in transport. The impact of uncertainty in the minor soil sink (9% of total) is also relatively small. In contrast, the model shows that uncertainty in ocean loss (17% of total) has the largest impact on modelled CCl4 decay due to its sizeable contribution to CCl4 loss and large lifetime uncertainty range (147 to 241 years). With an assumed CCl4 emission rate of 39 Gg year-1, the reference simulation with the best estimate of loss processes still underestimates the observed CCl4 (overestimates the decay) over the past 2 decades but to a smaller extent than previous studies. Changes to the rate of CCl4 loss processes, in line with known uncertainties, could bring the model into agreement with in situ surface and remote-sensing measurements, as could an increase in emissions to around 47 Gg year-1. 
Further progress in constraining the CCl4 budget is partly limited by systematic biases between observational datasets. For example, surface observations from the National Oceanic and Atmospheric Administration (NOAA) network are larger than from the Advanced Global Atmospheric Gases Experiment (AGAGE) network but have shown a steeper decreasing trend over the past 2 decades. These differences imply a difference in emissions which is significant relative to uncertainties in the magnitudes of the CCl4 sinks.
Model sensitivity studies of the decrease in atmospheric carbon tetrachloride
NASA Astrophysics Data System (ADS)
Chipperfield, Martyn P.; Liang, Qing; Rigby, Matthew; Hossaini, Ryan; Montzka, Stephen A.; Dhomse, Sandip; Feng, Wuhu; Prinn, Ronald G.; Weiss, Ray F.; Harth, Christina M.; Salameh, Peter K.; Mühle, Jens; O'Doherty, Simon; Young, Dickon; Simmonds, Peter G.; Krummel, Paul B.; Fraser, Paul J.; Steele, L. Paul; Happell, James D.; Rhew, Robert C.; Butler, James; Yvon-Lewis, Shari A.; Hall, Bradley; Nance, David; Moore, Fred; Miller, Ben R.; Elkins, James W.; Harrison, Jeremy J.; Boone, Chris D.; Atlas, Elliot L.; Mahieu, Emmanuel
2016-12-01
Carbon tetrachloride (CCl4) is an ozone-depleting substance, which is controlled by the Montreal Protocol and for which the atmospheric abundance is decreasing. However, the current observed rate of this decrease is known to be slower than expected based on reported CCl4 emissions and its estimated overall atmospheric lifetime. Here we use a three-dimensional (3-D) chemical transport model to investigate the impact on its predicted decay of uncertainties in the rates at which CCl4 is removed from the atmosphere by photolysis, by ocean uptake and by degradation in soils. The largest sink is atmospheric photolysis (74 % of total), but a reported 10 % uncertainty in its combined photolysis cross section and quantum yield has only a modest impact on the modelled rate of CCl4 decay. This is partly due to the limiting effect of the rate of transport of CCl4 from the main tropospheric reservoir to the stratosphere, where photolytic loss occurs. The model suggests large interannual variability in the magnitude of this stratospheric photolysis sink caused by variations in transport. The impact of uncertainty in the minor soil sink (9 % of total) is also relatively small. In contrast, the model shows that uncertainty in ocean loss (17 % of total) has the largest impact on modelled CCl4 decay due to its sizeable contribution to CCl4 loss and large lifetime uncertainty range (147 to 241 years). With an assumed CCl4 emission rate of 39 Gg year-1, the reference simulation with the best estimate of loss processes still underestimates the observed CCl4 (overestimates the decay) over the past 2 decades but to a smaller extent than previous studies. Changes to the rate of CCl4 loss processes, in line with known uncertainties, could bring the model into agreement with in situ surface and remote-sensing measurements, as could an increase in emissions to around 47 Gg year-1. 
Further progress in constraining the CCl4 budget is partly limited by systematic biases between observational datasets. For example, surface observations from the National Oceanic and Atmospheric Administration (NOAA) network are larger than from the Advanced Global Atmospheric Gases Experiment (AGAGE) network but have shown a steeper decreasing trend over the past 2 decades. These differences imply a difference in emissions which is significant relative to uncertainties in the magnitudes of the CCl4 sinks.
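The interplay between assumed emissions, sink strengths, and the decay rate can be illustrated with a zero-dimensional budget, dB/dt = E - B/τ. The sink fractions and the ocean partial-lifetime range are taken from the abstract; the initial burden is a hypothetical round number, and the box model is a sketch, not the 3-D transport model used in the study:

```python
# Zero-dimensional CCl4 budget: dB/dt = E - B / tau_total.
# If the ocean sink removes a fraction f_ocean of total loss with partial
# lifetime tau_ocean, then B/tau_ocean = f_ocean * B/tau_total, so
# tau_total = f_ocean * tau_ocean.
f_ocean = 0.17                                  # ocean share of total loss (abstract)
E = 39.0                                        # Gg/yr, assumed emission rate (abstract)
B0 = 2500.0                                     # Gg, hypothetical initial burden

for tau_ocean in (147.0, 241.0):                # years, reported uncertainty range
    tau_total = f_ocean * tau_ocean
    B, dt, years = B0, 0.1, 20
    for _ in range(int(years / dt)):            # simple forward-Euler integration
        B += dt * (E - B / tau_total)
    print(f"tau_ocean = {tau_ocean:.0f} yr -> tau_total = {tau_total:.1f} yr, "
          f"burden after {years} yr = {B:.0f} Gg")
```

The two lifetime endpoints produce visibly different decay trajectories from the same emissions, which is the mechanism behind the abstract's finding that ocean-loss uncertainty has the largest impact on the modelled decay.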
A method for acquiring random range uncertainty probability distributions in proton therapy
NASA Astrophysics Data System (ADS)
Holloway, S. M.; Holloway, M. D.; Thomas, S. J.
2018-01-01
In treatment planning we depend upon accurate knowledge of geometric and range uncertainties. If the uncertainty model is inaccurate, then the plan will produce under-dosing of the target and/or overdosing of organs at risk (OARs). We aim to provide a method by which centre- and site-specific population range uncertainty due to inter-fraction motion can be quantified to improve the uncertainty model in proton treatment planning. Daily volumetric MVCT data from previously treated radiotherapy patients have been used to investigate inter-fraction changes to water equivalent path-length (WEPL). Daily image-guidance scans were carried out for each patient and corrected for changes in CTV position (using rigid transformations). An effective depth algorithm was used to determine residual range changes, after corrections had been applied, throughout the treatment by comparing WEPL within the CTV at each fraction for several beam angles. As a proof of principle this method was used to quantify uncertainties for inter-fraction range changes for a sample of head and neck patients: Σ = 3.39 mm, σ = 4.72 mm, and overall mean = -1.82 mm. For prostate: Σ = 5.64 mm, σ = 5.91 mm, and overall mean = 0.98 mm. The choice of beam angle for head and neck did not affect the inter-fraction range error significantly; however, this was not the case for prostate. Greater range changes were seen using a lateral beam compared to an anterior beam for prostate due to relative motion of the prostate and femoral heads. A method has been developed to quantify population range changes due to inter-fraction motion that can be adapted for the clinic. The results of this work highlight the importance of robust planning and analysis in proton therapy. Such information could be used in robust optimisation algorithms or treatment plan robustness analysis. Such knowledge will aid in establishing beam start conditions at planning and in establishing adaptive planning protocols.
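Population statistics of this kind are commonly summarized with the convention M = mean of per-patient means, Σ = standard deviation of per-patient means (the systematic component), and σ = root-mean-square of per-patient standard deviations (the random component). Assuming that convention applies here, a sketch with synthetic per-fraction residual WEPL changes (the cohort and its values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical per-fraction residual WEPL changes (mm) for a small cohort;
# real values would come from daily image-guidance scans.
patients = [rng.normal(loc=mu, scale=sd, size=30)
            for mu, sd in [(-2.0, 4.5), (-1.5, 5.0), (-2.1, 4.6), (-1.7, 4.8)]]

means = np.array([p.mean() for p in patients])
sds = np.array([p.std(ddof=1) for p in patients])

M = means.mean()                     # overall mean range change
Sigma = means.std(ddof=1)            # systematic: spread of per-patient means
sigma = np.sqrt((sds ** 2).mean())   # random: RMS of per-patient SDs

print(f"M = {M:.2f} mm, Sigma = {Sigma:.2f} mm, sigma = {sigma:.2f} mm")
```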
A Regional CO2 Observing System Simulation Experiment for the ASCENDS Satellite Mission
NASA Technical Reports Server (NTRS)
Wang, J. S.; Kawa, S. R.; Eluszkiewicz, J.; Baker, D. F.; Mountain, M.; Henderson, J.; Nehrkorn, T.; Zaccheo, T. S.
2014-01-01
Top-down estimates of the spatiotemporal variations in emissions and uptake of CO2 will benefit from the increasing measurement density brought by recent and future additions to the suite of in situ and remote CO2 measurement platforms. In particular, the planned NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) satellite mission will provide greater coverage in cloudy regions, at high latitudes, and at night than passive satellite systems, as well as high precision and accuracy. In a novel approach to quantifying the ability of satellite column measurements to constrain CO2 fluxes, we use a portable library of footprints (surface influence functions) generated by the WRF-STILT Lagrangian transport model in a regional Bayesian synthesis inversion. The regional Lagrangian framework is well suited to make use of ASCENDS observations to constrain fluxes at high resolution, in this case at 1 degree latitude x 1 degree longitude and weekly for North America. We consider random measurement errors only, modeled as a function of mission and instrument design specifications along with realistic atmospheric and surface conditions. We find that the ASCENDS observations could potentially reduce flux uncertainties substantially at biome and finer scales. At the 1 degree x 1 degree, weekly scale, the largest uncertainty reductions, on the order of 50 percent, occur where and when there is good coverage by observations with low measurement errors and the a priori uncertainties are large. Uncertainty reductions are smaller for a 1.57 micron candidate wavelength than for a 2.05 micron wavelength, and are smaller for the higher of the two measurement error levels that we consider (1.0 ppm vs. 0.5 ppm clear-sky error at Railroad Valley, Nevada). Uncertainty reductions at the annual, biome scale range from 40 percent to 75 percent across our four instrument design cases, and from 65 percent to 85 percent for the continent as a whole. 
Our uncertainty reductions at various scales are substantially smaller than those from a global ASCENDS inversion on a coarser grid, demonstrating how quantitative results can depend on inversion methodology. The a posteriori flux uncertainties we obtain, ranging from 0.01 to 0.06 Pg C yr-1 across the biomes, would meet requirements for improved understanding of long-term carbon sinks suggested by a previous study.
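A scalar sketch of the Bayesian update that produces uncertainty-reduction figures of this kind; the sensitivity h, observation count, and error levels below are illustrative stand-ins, not mission values, but they reproduce the qualitative dependence on measurement error that the abstract reports:

```python
import math

def uncertainty_reduction(sigma_prior, sigma_obs, h, n_obs):
    """Scalar Bayesian synthesis update: n_obs independent column
    observations, each linking the flux to the data with sensitivity h.
    Returns 1 - sigma_posterior / sigma_prior."""
    post_var = 1.0 / (1.0 / sigma_prior**2 + n_obs * h**2 / sigma_obs**2)
    return 1.0 - math.sqrt(post_var) / sigma_prior

# Good coverage with the lower (0.5 ppm) measurement error level
print(uncertainty_reduction(sigma_prior=1.0, sigma_obs=0.5, h=0.1, n_obs=100))
# Same coverage with the higher (1.0 ppm) error level -> smaller reduction
print(uncertainty_reduction(sigma_prior=1.0, sigma_obs=1.0, h=0.1, n_obs=100))
```

The full inversion replaces these scalars with flux and observation vectors and covariance matrices, but the trade-off between a priori uncertainty, coverage, and measurement error is the same.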
Are You Sure? The Role of Uncertainty in Career
ERIC Educational Resources Information Center
Trevor-Roberts, Edwin
2006-01-01
Although uncertainty is a fundamental human experience, professionals in the career field have largely overlooked the role that it plays in people's careers. The changed nature of careers has resulted in people experiencing increased uncertainty in their career that is beyond the uncertainty experienced in their job. The author explores the role…
NASA Astrophysics Data System (ADS)
De Lucas, Javier; Segovia, José Juan
2018-05-01
Blackbody cavities are the standard radiation sources widely used in the fields of radiometry and radiation thermometry. Their effective emissivity and its uncertainty depend to a large extent on the temperature gradient. An experimental procedure based on the radiometric method for measuring the gradient is followed. Results are applied to particular blackbody configurations where gradients can be thermometrically estimated by contact thermometers and where the relationship between both basic methods can be established. The proposed procedure may be applied to commercial blackbodies if they are modified to allow a secondary contact temperature measurement. In addition, the established systematic procedure may be incorporated as part of the actions for quality assurance in routine calibrations of radiation thermometers, by using the secondary contact temperature measurement to detect departures from the real, radiometrically obtained gradient and their effect on the uncertainty. On the other hand, a theoretical model is proposed to evaluate the effect of temperature variations on effective emissivity and its associated uncertainty. This model is based on a gradient sample chosen following plausible criteria. The model is consistent with the Monte Carlo method for calculating the uncertainty of effective emissivity and complements others published in the literature, where uncertainty is calculated taking into account only geometrical variables and intrinsic emissivity. The mathematical model and experimental procedure are applied and validated using a commercial three-zone furnace, with a blackbody cavity modified to enable a secondary contact temperature measurement, in the range between 400 °C and 1000 °C.
Protein flexibility: coordinate uncertainties and interpretation of structural differences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rashin, Alexander A., E-mail: alexander-rashin@hotmail.com; LH Baker Center for Bioinformatics and Department of Biochemistry, Biophysics and Molecular Biology, 112 Office and Lab Building, Iowa State University, Ames, IA 50011-3020; Rashin, Abraham H. L.
2009-11-01
Criteria for the interpretability of coordinate differences and a new method for identifying rigid-body motions and nonrigid deformations in protein conformational changes are developed and applied to functionally induced and crystallization-induced conformational changes. Valid interpretations of conformational movements in protein structures determined by X-ray crystallography require that the movement magnitudes exceed their uncertainty threshold. Here, it is shown that such thresholds can be obtained from the distance difference matrices (DDMs) of 1014 pairs of independently determined structures of bovine ribonuclease A and sperm whale myoglobin, with no explanations provided for reportedly minor coordinate differences. The smallest magnitudes of reportedly functional motions are just above these thresholds. Uncertainty thresholds can provide objective criteria that distinguish between true conformational changes and apparent ‘noise’, showing that some previous interpretations of protein coordinate changes attributed to external conditions or mutations may be doubtful or erroneous. The use of uncertainty thresholds, DDMs, the newly introduced CDDMs (contact distance difference matrices) and a novel simple rotation algorithm allows a more meaningful classification and description of protein motions, distinguishing between various rigid-fragment motions and nonrigid conformational deformations. It is also shown that half of 75 pairs of identical molecules, each from the same asymmetric crystallographic cell, exhibit coordinate differences that range from just outside the coordinate uncertainty threshold to the full magnitude of large functional movements. Thus, crystallization might often induce protein conformational changes that are comparable to those related to or induced by the protein function.
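A minimal sketch of the DDM idea: build the distance difference matrix for two coordinate sets and keep only the atom pairs whose distance change exceeds an uncertainty threshold. The toy coordinates and the threshold value are hypothetical, chosen only to illustrate the test:

```python
import math

def ddm(coords_a, coords_b):
    """Distance difference matrix D_ij(A) - D_ij(B) for two sets of
    equivalent atomic coordinates (lists of (x, y, z) tuples)."""
    n = len(coords_a)
    return [[math.dist(coords_a[i], coords_a[j]) - math.dist(coords_b[i], coords_b[j])
             for j in range(n)] for i in range(n)]

# Toy 3-atom "structures": atom 1 moves 1.0 along x in structure B.
a = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 4.0, 0.0)]
b = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (0.0, 4.0, 0.0)]

threshold = 0.5  # hypothetical coordinate-uncertainty threshold (Angstrom)
d = ddm(a, b)
significant = [(i, j) for i in range(3) for j in range(i)
               if abs(d[i][j]) > threshold]
print(significant)  # pairs whose distance change exceeds the threshold
```

Because DDMs compare internal distances, they need no superposition; the rigid-versus-nonrigid classification in the abstract builds further machinery (CDDMs, a rotation algorithm) on top of this basic test.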
NASA Technical Reports Server (NTRS)
DeLannoy, Gabrielle J. M.; Reichle, Rolf H.; Vrugt, Jasper A.
2013-01-01
Uncertainties in L-band (1.4 GHz) radiative transfer modeling (RTM) affect the simulation of brightness temperatures (Tb) over land and the inversion of satellite-observed Tb into soil moisture retrievals. In particular, accurate estimates of the microwave soil roughness, vegetation opacity and scattering albedo for large-scale applications are difficult to obtain from field studies and often lack an uncertainty estimate. Here, a Markov Chain Monte Carlo (MCMC) simulation method is used to determine satellite-scale estimates of RTM parameters and their posterior uncertainty by minimizing the misfit between long-term averages and standard deviations of simulated and observed Tb at a range of incidence angles, at horizontal and vertical polarization, and for morning and evening overpasses. Tb simulations are generated with the Goddard Earth Observing System (GEOS-5) and confronted with Tb observations from the Soil Moisture Ocean Salinity (SMOS) mission. The MCMC algorithm suggests that the relative uncertainty of the RTM parameter estimates is typically less than 25% of the maximum a posteriori density (MAP) parameter value. Furthermore, the actual root-mean-square differences in long-term Tb averages and standard deviations are found to be consistent with the respective estimated total simulation and observation error standard deviations of 3.1 K and 2.4 K. It is also shown that the MAP parameter values estimated through MCMC simulation are in close agreement with those obtained with Particle Swarm Optimization (PSO).
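The parameter estimation described above can be sketched as a simple Metropolis MCMC on one hypothetical RTM parameter; the forward model, prior bounds, and "observed" values below are invented for illustration, not GEOS-5 or SMOS quantities:

```python
import math
import random

random.seed(42)

# Hypothetical forward model: long-term mean Tb (K) as a function of a
# single roughness-like parameter h (all numbers illustrative).
def tb_model(h):
    return 250.0 + 20.0 * h

obs_tb, obs_sigma = 262.0, 2.0  # "observed" Tb mean and its error (K)

def log_post(h):
    if not 0.0 <= h <= 2.0:  # uniform prior bounds
        return -math.inf
    return -0.5 * ((tb_model(h) - obs_tb) / obs_sigma) ** 2

# Metropolis random walk
h, lp = 1.0, log_post(1.0)
chain = []
for _ in range(20000):
    h_new = h + random.gauss(0.0, 0.1)
    lp_new = log_post(h_new)
    if math.log(random.random()) < lp_new - lp:
        h, lp = h_new, lp_new
    chain.append(h)

burned = chain[5000:]  # discard burn-in
map_like = sum(burned) / len(burned)  # posterior mean as a MAP proxy
spread = math.sqrt(sum((x - map_like) ** 2 for x in burned) / len(burned))
print(f"h = {map_like:.2f} +/- {spread:.2f}")
```

The posterior spread relative to the estimate plays the role of the "relative uncertainty" quoted in the abstract; the real study does this jointly for several parameters against multi-angle, dual-polarization Tb statistics.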
Nowcasting of rainfall and of combined sewage flow in urban drainage systems.
Achleitner, Stefan; Fach, Stefan; Einfalt, Thomas; Rauch, Wolfgang
2009-01-01
Nowcasting of rainfall may be used in addition to online rain measurements to optimize the operation of urban drainage systems. Uncertainties quoted for the rain volume are in the range of 5% to 10% mean square error (MSE), while for rain intensities 45% to 75% MSE is noted. For longer forecast periods of up to 3 hours, the uncertainties increase to several hundred percent. Combined with the growing number of real-time control concepts in sewer systems, rainfall forecasts are used more and more in urban drainage systems. It is therefore of interest how the uncertainties influence the final evaluation of a defined objective function. Uncertainty levels associated with the forecast itself are not necessarily transferable to resulting uncertainties in the catchment's flow dynamics. The aim of this paper is to analyse forecasts of rainfall and specific sewer output variables. For this study the combined sewer system of the city of Linz, in the northern part of Austria on the Danube, has been selected. The city covers a total area of 96 km² with 39 connected municipalities. It was found that the available weather radar data lead to large deviations in the precipitation forecast at forecast horizons larger than 90 minutes. The same is true for sewer variables such as CSO overflow for small sub-catchments. Although the results improve for larger spatial scales, acceptable levels at forecast horizons larger than 90 minutes are not reached.
Length and Dimensional Measurements at NIST
Swyt, Dennis A.
2001-01-01
This paper discusses the past, present, and future of length and dimensional measurements at NIST. It covers the evolution of the SI unit of length through its three definitions and the evolution of NBS-NIST dimensional measurement from early linescales and gage blocks to a future of atom-based dimensional standards. Current capabilities include dimensional measurements over a range of fourteen orders of magnitude. Uncertainties of measurements on different types of material artifacts range down to 7×10⁻⁸ m at 1 m and 8 picometers (pm) at 300 pm. Current work deals with a broad range of areas of dimensional metrology. These include: large-scale coordinate systems; complex form; microform; surface finish; two-dimensional grids; optical, scanning-electron, atomic-force, and scanning-tunneling microscopies; atomic-scale displacement; and atom-based artifacts.
NASA Astrophysics Data System (ADS)
Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare
In the last few years, the use of mathematical models of WasteWater Treatment Plant (WWTP) processes has become a common way to predict WWTP behaviour. However, mathematical models generally demand extensive input data for their implementation, which must be evaluated by an extensive data-gathering campaign that cannot always be carried out. This fact, together with the intrinsic complexity of the model structure, leads to model results that may be very uncertain. Quantification of the uncertainty is therefore imperative. However, despite its importance, only a few uncertainty studies have been carried out in the wastewater treatment field, and those studies included only some of the sources of model uncertainty. To help develop the area, this paper presents the uncertainty assessment of a mathematical model simulating biological nitrogen and phosphorus removal. The uncertainty assessment was conducted according to the Generalised Likelihood Uncertainty Estimation (GLUE) methodology, which has scarcely been applied in the wastewater field. The model was based on activated-sludge models ASM1 and ASM2. Different approaches can be used for uncertainty analysis. The GLUE methodology requires a large number of Monte Carlo simulations in which a random sampling of individual parameters drawn from probability distributions is used to determine a set of parameter values. Using this approach, model reliability was evaluated based on its capacity to globally limit the uncertainty. The method was applied to a large full-scale WWTP for which quantity and quality data were gathered. The analysis provided useful insights for WWTP modelling, identifying the crucial aspects where uncertainty is highest and where, therefore, more effort should be invested in both data gathering and modelling practice.
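A compact sketch of the GLUE recipe the abstract describes: Monte Carlo sampling from parameter priors, an informal likelihood, a behavioural cutoff, and prediction bounds from the retained parameter sets. The toy model, observations, and prior ranges are placeholders, not the ASM equations:

```python
import random

random.seed(1)

def model(k, s, t):
    """Toy stand-in for the WWTP model: one output as a function of
    two uncertain parameters (illustrative only)."""
    return k * t / (s + t)

observed = [(1.0, 0.45), (2.0, 0.62), (4.0, 0.78)]  # (t, value) pairs

def likelihood(k, s):
    sse = sum((model(k, s, t) - y) ** 2 for t, y in observed)
    return 1.0 / sse  # informal inverse-error GLUE likelihood

# Monte Carlo sampling from uniform prior ranges
samples = []
for _ in range(5000):
    k = random.uniform(0.5, 1.5)
    s = random.uniform(0.1, 2.0)
    samples.append((likelihood(k, s), k, s))

# Keep only "behavioural" parameter sets (here: the top 10% by likelihood)
samples.sort(reverse=True)
behavioural = samples[: len(samples) // 10]

# 5-95% uncertainty band for a prediction at t = 3 (unweighted quantiles
# over the behavioural set, a simplification of likelihood weighting)
preds = sorted(model(k, s, 3.0) for _, k, s in behavioural)
lo, hi = preds[int(0.05 * len(preds))], preds[int(0.95 * len(preds))]
print(f"5-95% band at t=3: {lo:.2f} .. {hi:.2f}")
```

The width of such bands, and how many observations they bracket, is what the reliability assessment in the paper evaluates.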
Practical problems in aggregating expert opinions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booker, J.M.; Picard, R.R.; Meyer, M.A.
1993-11-01
Expert opinion is data given by a qualified person in response to a technical question. In these analyses, expert opinion provides information where other data are either sparse or non-existent. Improvements in forecasting result from the advantageous addition of expert opinion to observed data in many areas, such as meteorology and econometrics. More generally, analyses of large, complex systems often involve experts on various components of the system supplying input to a decision process; applications include such wide-ranging areas as nuclear reactor safety, management science, and seismology. For large or complex applications, no single expert may be knowledgeable enough about the entire application. In other problems, decision makers may find it comforting that a consensus or aggregation of opinions is usually better than a single opinion. Many risk and reliability studies require a single estimate for modeling, analysis, reporting, and decision making purposes. For problems with large uncertainties, the strategy of combining as diverse a set of experts as possible hedges against underestimation of that uncertainty. Decision makers are frequently faced with the task of selecting the experts and combining their opinions. However, the aggregation is often the responsibility of an analyst. Whether the decision maker or the analyst does the aggregation, the input for it, such as providing weights for experts or estimating other parameters, is imperfect owing to a lack of omniscience. Aggregation methods for expert opinions have existed for over thirty years; yet many of the difficulties with their use remain unresolved. The bulk of these problem areas are summarized in the sections that follow: sensitivities of results to assumptions, weights for experts, correlation of experts, and handling uncertainties. The purpose of this paper is to discuss the sources of these problems and describe their effects on aggregation.
NASA Astrophysics Data System (ADS)
Owens, Mathew J.; Riley, Pete
2017-11-01
Long lead-time space-weather forecasting requires accurate prediction of the near-Earth solar wind. The current state of the art uses a coronal model to extrapolate the observed photospheric magnetic field to the upper corona, where it is related to solar wind speed through empirical relations. These near-Sun solar wind and magnetic field conditions provide the inner boundary condition to three-dimensional numerical magnetohydrodynamic (MHD) models of the heliosphere out to 1 AU. This physics-based approach can capture dynamic processes within the solar wind, which affect the resulting conditions in near-Earth space. However, this deterministic approach lacks a quantification of forecast uncertainty. Here we describe a complementary method to exploit the near-Sun solar wind information produced by coronal models and provide a quantitative estimate of forecast uncertainty. By sampling the near-Sun solar wind speed at a range of latitudes about the sub-Earth point, we produce a large ensemble (N = 576) of time series at the base of the Sun-Earth line. Propagating these conditions to Earth by a three-dimensional MHD model would be computationally prohibitive; thus, a computationally efficient one-dimensional "upwind" scheme is used. The variance in the resulting near-Earth solar wind speed ensemble is shown to provide an accurate measure of the forecast uncertainty. Applying this technique over 1996-2016, the upwind ensemble is found to provide a more "actionable" forecast than a single deterministic forecast; potential economic value is increased for all operational scenarios, but particularly when false alarms are important (i.e., where the cost of taking mitigating action is relatively large).
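The one-dimensional "upwind" propagation step can be sketched as below; the inner-boundary speed profile, grid resolution, and boundary radius are illustrative choices, and a full forecast would repeat this mapping for each member of the latitudinal ensemble:

```python
import math

# Minimal 1-D "upwind" mapping of an inner-boundary solar wind speed
# profile out to ~1 AU (steady, purely radial flow; illustrative setup).
OMEGA = 2.0 * math.pi / (25.38 * 86400.0)  # solar rotation rate (rad/s)
R_SUN = 6.96e8                             # solar radius (m)
R_IN, R_OUT = 30 * R_SUN, 215 * R_SUN      # 30 solar radii -> ~1 AU
NPHI = 128
dphi = 2.0 * math.pi / NPHI
dr = (R_OUT - R_IN) / 400

# Inner-boundary speed vs. longitude: slow wind with one fast stream
# (hypothetical profile standing in for a coronal model output), in m/s.
v = [400e3 + 300e3 * math.exp(-((i * dphi - math.pi) / 0.5) ** 2)
     for i in range(NPHI)]

for _ in range(400):
    # First-order upwind step in longitude at each radial increment;
    # the local CFL number dr*OMEGA/(v*dphi) is well below 1 here.
    v = [v[i] + dr * OMEGA / v[i] * (v[(i + 1) % NPHI] - v[i]) / dphi
         for i in range(NPHI)]

print(f"1 AU speed range: {min(v)/1e3:.0f} - {max(v)/1e3:.0f} km/s")
```

This scheme captures the kinematic steepening and smearing of streams at a tiny fraction of the cost of a 3-D MHD run, which is what makes the 576-member ensemble computationally feasible.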
NASA Astrophysics Data System (ADS)
Bennett, K. E.; Schnorbus, M.; Werner, A. T.; Music, B.; Caya, D.; Rodenhuis, D. R.
2009-12-01
Uncertainties in the projections of future hydrologic change can be assessed using a suite of tools, thereby allowing researchers to focus on improvement to identifiable sources of uncertainty. A pareto set of optimal hydrologic parameterizations was run for three BC watersheds (Fraser, Peace and Columbia) for a range of downscaled Global Climate Model (GCM) emission scenarios to illustrate the uncertainty in hydrologic response to climate change. Results show varying responses of hydrologic regimes across geographic landscapes. Uncertainties in streamflow and water balance (runoff, evapo-transpiration, snow water equivalent, soil moisture) were analysed by forcing the Variable Infiltration Capacity (VIC) hydrologic model, run under twenty-five optimal parameter solution sets using six Bias-Corrected Statistically Downscaled (BCSD) GCM emission scenario projections for the 2050s and the 2080s. Projected changes by the 2050s include increased winter flows, increases and decreases in freshet magnitude depending on the scenario, and decreases in summer flows persisting until September. Winter runoff had the greatest range between GCM emission scenarios, while the hydrologic parameters within individual GCM emission scenarios had a winter runoff range an order of magnitude smaller. Evapo-transpiration, snow water equivalent and soil moisture exhibited a spread of ~10% or less. Streamflow changes by the 2080s lie outside the natural range of historic variability over the winter and spring. Results indicate that the changes projected between GCM emission scenarios are greater than the differences between the hydrologic model parameterizations. An alternate tool, the Canadian Regional Climate Model (CRCM) has been set up for these watersheds and various runs have been analysed to determine the range and variability present and to examine these results in comparison to the hydrologic model projections. 
The CRCM range and variability are an improvement over the Canadian GCM and thus require less bias correction. However, without downscaling, the CRCM results are still coarser than what is required to drive macroscale hydrologic models such as VIC. Applying these tools has illustrated the importance of focusing on improved downscaling efforts, including downscaling CRCM results rather than CGCM data. Tools for decision-making in the face of uncertainty are emerging as a priority for the climate change impacts community, and there is a need to focus on incorporating uncertainty information along with the projection of impacts. Assessing uncertainty across a range of regimes and geographic regions can help identify the main sources of uncertainty and allow researchers to focus on improving those sources using more robust methodological approaches and tools.
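The comparison between scenario spread and parameterization spread can be illustrated numerically; all runoff anomalies below are invented, chosen only to mimic the order-of-magnitude gap the abstract reports between GCM-scenario differences and hydrologic-parameter differences:

```python
import statistics

# Hypothetical winter-runoff anomalies (%) for 3 GCM emission scenarios
# x 4 optimal hydrologic parameter sets (illustrative numbers only).
runoff = {
    "gcm_a": [22.0, 23.5, 21.0, 22.8],
    "gcm_b": [35.0, 36.2, 34.1, 35.5],
    "gcm_c": [12.0, 13.1, 11.4, 12.6],
}

# Spread across scenarios (of scenario-mean responses) vs. spread across
# parameter sets within each scenario.
scenario_means = [statistics.fmean(v) for v in runoff.values()]
scenario_range = max(scenario_means) - min(scenario_means)
param_ranges = [max(v) - min(v) for v in runoff.values()]

print(f"range across GCM scenarios: {scenario_range:.1f}%")
print(f"mean range across parameter sets: {statistics.fmean(param_ranges):.1f}%")
```

When the first number dwarfs the second, as in the study, effort is better spent on downscaling and scenario selection than on further tuning of the hydrologic parameterization.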
Uncertainties in Past and Future Global Water Availability
NASA Astrophysics Data System (ADS)
Sheffield, J.; Kam, J.
2014-12-01
Understanding how water availability changes on inter-annual to decadal time scales and how it may change in the future under climate change are a key part of understanding future stresses on water and food security. Historic evaluations of water availability on regional to global scales are generally based on large-scale model simulations with their associated uncertainties, in particular for long-term changes. Uncertainties are due to model errors and missing processes, parameter uncertainty, and errors in meteorological forcing data. Recent multi-model inter-comparisons and impact studies have highlighted large differences for past reconstructions, due to different simplifying assumptions in the models or the inclusion of physical processes such as CO2 fertilization. Modeling of direct anthropogenic factors such as water and land management also carry large uncertainties in their physical representation and from lack of socio-economic data. Furthermore, there is little understanding of the impact of uncertainties in the meteorological forcings that underpin these historic simulations. Similarly, future changes in water availability are highly uncertain due to climate model diversity, natural variability and scenario uncertainty, each of which dominates at different time scales. In particular, natural climate variability is expected to dominate any externally forced signal over the next several decades. We present results from multi-land surface model simulations of the historic global availability of water in the context of natural variability (droughts) and long-term changes (drying). The simulations take into account the impact of uncertainties in the meteorological forcings and the incorporation of water management in the form of reservoirs and irrigation. 
The results indicate that model uncertainty is important for short-term drought events, and forcing uncertainty is particularly important for long-term changes, especially uncertainty in precipitation due to reduced gauge density in recent years. We also discuss uncertainties in future projections from these models as driven by bias-corrected and downscaled CMIP5 climate projections, in the context of the balance between climate model robustness and climate model diversity.
Planck 2013 results. VIII. HFI photometric calibration and mapmaking
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bertincourt, B.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chen, X.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Filliard, C.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Le Jeune, M.; Lellouch, E.; Leonardi, R.; Leroy, C.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Maurin, L.; Mazzotta, P.; McGehee, P.; Meinhold, P. 
R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Moreno, R.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rusholme, B.; Santos, D.; Savini, G.; Scott, D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Techene, S.; Terenzi, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-11-01
This paper describes the methods used to produce photometrically calibrated maps from the Planck High Frequency Instrument (HFI) cleaned, time-ordered information. HFI observes the sky over a broad range of frequencies, from 100 to 857 GHz. To obtain the best calibration accuracy over such a large range, two different photometric calibration schemes have to be used. The 545 and 857 GHz data are calibrated by comparing flux-density measurements of Uranus and Neptune with models of their atmospheric emission. The lower frequencies (below 353 GHz) are calibrated using the solar dipole. A component of this anisotropy is time-variable, owing to the orbital motion of the satellite in the solar system. Photometric calibration is thus tightly linked to mapmaking, which also addresses low-frequency noise removal. By comparing observations taken more than one year apart in the same configuration, we have identified apparent gain variations with time. These variations are induced by non-linearities in the read-out electronics chain. We have developed an effective correction to limit their effect on calibration. We present several methods to estimate the precision of the photometric calibration. We distinguish relative uncertainties (between detectors, or between frequencies) and absolute uncertainties. Absolute uncertainties lie in the range from 0.54% to 10% from 100 to 857 GHz. We describe the pipeline used to produce the maps from the HFI timelines, based on the photometric calibration parameters, and the scheme used to set the zero level of the maps a posteriori. We also discuss the cross-calibration between HFI and the SPIRE instrument on board Herschel. Finally we summarize the basic characteristics of the set of HFI maps included in the 2013 Planck data release.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagener, Thorsten; Mann, Michael; Crane, Robert
2014-04-29
This project focuses on uncertainty in streamflow forecasting under climate change conditions. The objective is to develop easy-to-use methodologies that can be applied across a range of river basins to estimate changes in water availability for realistic projections of climate change. There are three major components to the project: empirical downscaling of regional climate change projections from a range of Global Climate Models; developing a methodology to use present-day information on the climate controls on the parameterizations in streamflow models to adjust the parameterizations under future climate conditions (a trading-space-for-time approach); and demonstrating a bottom-up approach to establishing streamflow vulnerabilities to climate change. The results reinforce the need for downscaling of climate data for regional applications, and further demonstrate the challenges of using raw GCM data to make local projections. In addition, they reinforce the need to make projections across a range of global climate models. The project demonstrates the potential for improving streamflow forecasts by using model parameters that are adjusted for future climate conditions, but suggests that even with improved streamflow models and reduced climate uncertainty through the use of downscaled data, there is still large uncertainty in the streamflow projections. The most useful output from the project is the bottom-up, vulnerability-driven approach to examining possible climate and land use change impacts on streamflow. Here, we demonstrate an inexpensive and easy-to-apply methodology that uses Classification and Regression Trees (CART) to define the climate and environmental parameter space that can produce vulnerabilities in the system, and then feeds in the downscaled projections to determine the probability of transitioning to a vulnerable state. Vulnerabilities, in this case, are defined by the end user.
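The final, bottom-up step, feeding downscaled projections through a vulnerability rule of the kind a CART analysis identifies, might look like the sketch below; the rule thresholds and projection distributions are entirely hypothetical:

```python
import random

random.seed(7)

# Hypothetical "vulnerable" region of climate-parameter space, as might
# be identified by a CART-style analysis: low flow occurs when warming
# exceeds 2 C AND precipitation drops more than 5% (rule is invented).
def vulnerable(dT, dP):
    return dT > 2.0 and dP < -5.0

# Stand-in ensemble of downscaled (dT in C, dP in %) projections for one
# basin, drawn from illustrative distributions.
projections = [(random.gauss(2.5, 0.8), random.gauss(-3.0, 6.0))
               for _ in range(1000)]

p_vuln = sum(vulnerable(dT, dP) for dT, dP in projections) / len(projections)
print(f"probability of transitioning to a vulnerable state: {p_vuln:.2f}")
```

The appeal of this framing is that the expensive step (defining the vulnerable region) is done once from the system's own sensitivities, after which any new projection ensemble can be scored cheaply.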
Performance Assessment Uncertainty Analysis for Japan's HLW Program Feasibility Study (H12)
DOE Office of Scientific and Technical Information (OSTI.GOV)
BABA,T.; ISHIGURO,K.; ISHIHARA,Y.
1999-08-30
Most HLW programs in the world recognize that any estimate of long-term radiological performance must be couched in terms of the uncertainties derived from natural variation, changes through time and lack of knowledge about the essential processes. The Japan Nuclear Cycle Development Institute followed a relatively standard procedure to address two major categories of uncertainty. First, a Features, Events and Processes (FEPs) listing, screening and grouping activity was pursued in order to define the range of uncertainty in system processes as well as possible variations in engineering design. A reference and many alternative cases representing various groups of FEPs were defined and individual numerical simulations performed for each to quantify the range of conceptual uncertainty. Second, parameter distributions were developed for the reference case to represent the uncertainty in the strength of these processes, the sequencing of activities and geometric variations. Both point estimates using high and low values for individual parameters as well as a probabilistic analysis were performed to estimate parameter uncertainty. A brief description of the conceptual model uncertainty analysis is presented. This paper focuses on presenting the details of the probabilistic parameter uncertainty assessment.
A Liquid Density Standard Over Wide Ranges of Temperature and Pressure Based on Toluene
McLinden, Mark O.; Splett, Jolene D.
2008-01-01
The density of liquid toluene has been measured over the temperature range −60 °C to 200 °C with pressures up to 35 MPa. A two-sinker hydrostatic-balance densimeter utilizing a magnetic suspension coupling provided an absolute determination of the density with low uncertainties. These data are the basis of NIST Standard Reference Material® 211d for liquid density over the temperature range −50 °C to 150 °C and pressure range 0.1 MPa to 30 MPa. A thorough uncertainty analysis is presented; this includes effects resulting from the experimental density determination, possible degradation of the sample due to time and exposure to high temperatures, dissolved air, uncertainties in the empirical density model, and the sample-to-sample variations in the SRM vials. Also considered is the effect of uncertainty in the temperature and pressure measurements. This SRM is intended for the calibration of industrial densimeters. PMID:27096111
NASA Astrophysics Data System (ADS)
Anderson, C. J.; Wildhaber, M. L.; Wikle, C. K.; Moran, E. H.; Franz, K. J.; Dey, R.
2012-12-01
Climate change operates over a broad range of spatial and temporal scales. Understanding the effects of change on ecosystems requires accounting for the propagation of information and uncertainty across these scales. For example, to understand potential climate change effects on fish populations in riverine ecosystems, climate conditions predicted by coarse-resolution atmosphere-ocean global climate models must first be translated to the regional climate scale. In turn, this regional information is used to force watershed models, which are used to force river condition models, which impact the population response. A critical challenge in such a multiscale modeling environment is to quantify sources of uncertainty given the highly nonlinear nature of interactions between climate variables and the individual organism. We use a hierarchical modeling approach for accommodating uncertainty in multiscale ecological impact studies. This framework allows for uncertainty due to system models, model parameter settings, and stochastic parameterizations. This approach is a hybrid between physical (deterministic) downscaling and statistical downscaling, recognizing that there is uncertainty in both. We use NARCCAP data to determine confidence in the capability of climate models to simulate relevant processes and to quantify regional climate variability within the context of the hierarchical model of uncertainty quantification. By confidence, we mean the ability of the regional climate model to replicate observed mechanisms. We use the NCEP-driven simulations for this analysis. This provides a base from which regional change can be categorized as either a modification of previously observed mechanisms or emergence of new processes.
The management implications for these categories of change are significantly different, in that procedures to address impacts from existing processes may already be known and need only adjustment, whereas emergent processes may require new management strategies. The results from the hierarchical analysis of uncertainty are used to study the relative change in weights of the endangered Missouri River pallid sturgeon (Scaphirhynchus albus) under a 21st century climate scenario.
NASA Astrophysics Data System (ADS)
Goodwin, Philip; Brown, Sally; Haigh, Ivan David; Nicholls, Robert James; Matter, Juerg M.
2018-03-01
To avoid the most dangerous consequences of anthropogenic climate change, the Paris Agreement provides a clear and agreed climate mitigation target of stabilizing global surface warming to under 2.0°C above preindustrial, and preferably closer to 1.5°C. However, policy makers do not currently know exactly what carbon emissions pathways to follow to stabilize warming below these agreed targets, because there is large uncertainty in future temperature rise for any given pathway. This large uncertainty makes it difficult for a cautious policy maker to avoid either: (1) allowing warming to exceed the agreed target or (2) cutting global emissions more than is required to satisfy the agreed target, and their associated societal costs. This study presents a novel Adjusting Mitigation Pathway (AMP) approach to restrict future warming to policy-driven targets, in which future emissions reductions are not fully determined now but respond to future surface warming each decade in a self-adjusting manner. A large ensemble of Earth system model simulations, constrained by geological and historical observations of past climate change, demonstrates our self-adjusting mitigation approach for a range of climate stabilization targets ranging from 1.5°C to 4.5°C, and generates AMP scenarios up to year 2300 for surface warming, carbon emissions, atmospheric CO2, global mean sea level, and surface ocean acidification. We find that lower 21st century warming targets will significantly reduce ocean acidification this century, and will avoid up to 4 m of sea-level rise by year 2300 relative to a high-end scenario.
SU-E-J-125: Classification of CBCT Noises in Terms of Their Contribution to Proton Range Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brousmiche, S; Orban de Xivry, J; Macq, B
2014-06-01
Purpose: This study assesses the potential use of CBCT images in adaptive protontherapy by estimating the contribution of the main sources of noise and calibration errors to the proton range uncertainty. Methods: Measurements intended to highlight each particular source have been achieved by adapting either the testbench configuration, e.g. use of filtration, fan-beam collimation, beam stop arrays, phantoms and detector reset light, or the sequence of correction algorithms including water precorrection. Additional Monte-Carlo simulations have been performed to complement these measurements, especially for the beam hardening and the scatter cases. Simulations of proton beam penetration through the resulting images have then been carried out to quantify the range change due to these effects. The particular case of a brain irradiation is considered mainly because of the multiple effects that the skull bones have on the internal soft tissues. Results: Foremost among the range error sources is the undercorrection of scatter. Its influence has been analyzed from a comparison of fan-beam and full axial FOV acquisitions. In this case, large range errors of about 12 mm can be reached if the assumption is made that the scatter has only a constant contribution over the projection images. Even the detector lag, which a priori induces a much smaller effect, has been shown to contribute up to 2 mm to the overall error if its correction only aims at reducing the skin artefact. This last result can partially be understood by the larger interface between tissues and bones inside the skull. Conclusion: This study has laid the basis for a more systematic analysis of the effect of CBCT noise on range uncertainties based on a combination of measurements, simulations and theoretical results. With our method, even more subtle effects such as the cone-beam artifact or the detector lag can be assessed. 
SBR and JOR are financed by iMagX, a public-private partnership between the Walloon Region of Belgium and IBA under convention #1217662.
NASA Astrophysics Data System (ADS)
Hawkins, L. R.; Rupp, D. E.; Li, S.; Sarah, S.; McNeall, D. J.; Mote, P.; Betts, R. A.; Wallom, D.
2017-12-01
Changing regional patterns of surface temperature, precipitation, and humidity may cause ecosystem-scale changes in vegetation, altering the distribution of trees, shrubs, and grasses. A changing vegetation distribution, in turn, alters the albedo, latent heat flux, and carbon exchanged with the atmosphere, with resulting feedbacks onto the regional climate. However, a wide range of earth-system processes that affect the carbon, energy, and hydrologic cycles occur at subgrid scales in climate models and must be parameterized. The appropriate parameter values in such parameterizations are often poorly constrained, leading to uncertainty in predictions of how the ecosystem will respond to changes in forcing. To better understand the sensitivity of regional climate to parameter selection and to improve regional climate and vegetation simulations, we used a large perturbed-physics ensemble and a suite of statistical emulators. We dynamically downscaled a super-ensemble (multiple parameter sets and multiple initial conditions) of global climate simulations using the 25-km resolution regional climate model HadRM3p with the land-surface scheme MOSES2 and dynamic vegetation module TRIFFID. We simultaneously perturbed land surface parameters relating to the exchange of carbon, water, and energy between the land surface and atmosphere in a large super-ensemble of regional climate simulations over the western US. Statistical emulation was used as a computationally cost-effective tool to explore uncertainties in interactions. Regions of parameter space that did not satisfy observational constraints were eliminated, and an ensemble of parameter sets that reduce regional biases and span a range of plausible interactions among earth-system processes was selected. 
This study demonstrated that by combining super-ensemble simulations with statistical emulation, simulations of regional climate could be improved while simultaneously accounting for a range of plausible land-atmosphere feedback strengths.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colucci, Janet E.; Bernstein, Rebecca A.; Cameron, Scott A.
2011-07-01
In this paper, we refine our method for the abundance analysis of high-resolution spectroscopy of the integrated light of unresolved globular clusters (GCs). This method was previously demonstrated for the analysis of old (>10 Gyr) Milky Way (MW) GCs. Here, we extend the technique to young clusters using a training set of nine GCs in the Large Magellanic Cloud. Depending on the signal-to-noise ratio of the data, we use 20-100 Fe lines per cluster to successfully constrain the ages of old clusters to within a ~5 Gyr range, the ages of ~2 Gyr clusters to a 1-2 Gyr range, and the ages of the youngest clusters (0.05-1 Gyr) to a ~200 Myr range. We also demonstrate that we can measure [Fe/H] in clusters with any age less than 12 Gyr with similar or only slightly larger uncertainties (0.1-0.25 dex) than those obtained for old MW GCs (0.1 dex); the slightly larger uncertainties are due to the rapid evolution in stellar populations at these ages. In this paper, we present only Fe abundances and ages. In the next paper in this series, we present our complete analysis of ~20 elements for which we are able to measure abundances. For several of the clusters in this sample, there are no high-resolution abundances in the literature from individual member stars; our results are the first detailed chemical abundances available. The spectra used in this paper were obtained at Las Campanas with the echelle on the du Pont Telescope and with the MIKE spectrograph on the Magellan Clay Telescope.
Neutralizer Hollow Cathode Simulations and Comparisons with Ground Test Data
NASA Technical Reports Server (NTRS)
Mikellides, Ioannis G.; Snyder, John S.; Goebel, Dan M.; Katz, Ira; Herman, Daniel A.
2009-01-01
The fidelity of electric propulsion physics-based models depends largely on the validity of their predictions over a range of operating conditions and geometries. In general, increased complexity of the physics requires more extensive comparisons with laboratory data to identify the region(s) that lie outside the validity of the model assumptions and to quantify the uncertainties within its range of application. This paper presents numerical simulations of neutralizer hollow cathodes at various operating conditions and orifice sizes. The simulations were performed using a two-dimensional axisymmetric model that solves numerically a relatively extensive system of conservation laws for the partially ionized gas in these devices. A summary of the comparisons between simulation results and Langmuir probe measurements is provided. The model has also been employed to provide insight into recent ground test observations of the neutralizer cathode in NEXT. It is found that a likely cause of the observed keeper voltage drop is cathode orifice erosion. However, due to the small magnitude of this change, approximately 0.5 V (less than 5% of the beginning-of-life value) over 10,000 hours, and in light of the large uncertainties of the cathode material sputtering yield at low ion energies, other causes cannot be excluded. Preliminary simulations to understand transition to plume mode suggest that in the range of 3-5 sccm the existing 2-D model reproduces fairly well the rise of the keeper voltage in the NEXT neutralizer as observed in the laboratory. At lower flow rates the simulation produces oscillations in the keeper current and voltage that require prohibitively small time-steps to resolve with the existing algorithms.
Confronting the Uncertainty in Aerosol Forcing Using Comprehensive Observational Data
NASA Astrophysics Data System (ADS)
Johnson, J. S.; Regayre, L. A.; Yoshioka, M.; Pringle, K.; Sexton, D.; Lee, L.; Carslaw, K. S.
2017-12-01
The effect of aerosols on cloud droplet concentrations and radiative properties is the largest uncertainty in the overall radiative forcing of climate over the industrial period. In this study, we take advantage of a large perturbed parameter ensemble of simulations from the UK Met Office HadGEM-UKCA model (the aerosol component of the UK Earth System Model) to comprehensively sample uncertainty in aerosol forcing. Uncertain aerosol and atmospheric parameters cause substantial aerosol forcing uncertainty in climatically important regions. As the aerosol radiative forcing itself is unobservable, we investigate the potential for observations of aerosol and radiative properties to act as constraints on the large forcing uncertainty. We test how eight different theoretically perfect aerosol and radiation observations can constrain the forcing uncertainty over Europe. We find that the achievable constraint is weak unless many diverse observations are used simultaneously. This is due to the complex relationships between model output responses and the multiple interacting parameter uncertainties: compensating model errors mean there are many ways to produce the same model output (known as model equifinality), which impacts the achievable constraint. However, using all eight observable quantities together we show that the aerosol forcing uncertainty can potentially be reduced by around 50%. This reduction occurs as we reduce a large sample of model variants (over 1 million) that cover the full parametric uncertainty to the roughly 1% that are observationally plausible. Constraining the forcing uncertainty using real observations is a more complex undertaking, in which we must account for multiple further uncertainties, including measurement uncertainties, structural model uncertainties and the model discrepancy from reality. 
Here, we make a first attempt to determine the true potential constraint on the forcing uncertainty from our model that is achievable using a comprehensive set of real aerosol and radiation observations taken from ground stations, flight campaigns and satellite. This research has been supported by the UK-China Research & Innovation Partnership Fund through the Met Office Climate Science for Service Partnership (CSSP) China as part of the Newton Fund, and by the NERC funded GASSP project.
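The ensemble-reduction step described above (retaining only the observationally plausible fraction of model variants) can be sketched as a simple implausibility screen. The emulator function, observation value, and tolerance below are invented for illustration and do not come from HadGEM-UKCA:

```python
import random

random.seed(2)

# Hypothetical emulator of one observable (say, regional aerosol optical
# depth) as a function of two uncertain parameters; the functional form,
# observation value and tolerance below are all invented for illustration.
def emulate_aod(p_emission, p_scavenging):
    return 0.12 + 0.3 * p_emission - 0.2 * p_scavenging

# Large sample of model variants covering the full parametric uncertainty.
variants = [(random.random(), random.random()) for _ in range(100_000)]

# Observational screen: retain variants whose simulated observable lies
# within the combined observation + model-discrepancy uncertainty.
obs, sigma = 0.17, 0.01
plausible = [v for v in variants if abs(emulate_aod(*v) - obs) < 2 * sigma]

print(f"retained {len(plausible) / len(variants):.1%} of model variants")
```

In practice, each additional observable adds another screen of this kind, which is why using many diverse observations simultaneously tightens the constraint far more than any single one.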
Arctic sea ice albedo from AVHRR
NASA Technical Reports Server (NTRS)
Lindsay, R. W.; Rothrock, D. A.
1994-01-01
The seasonal cycle of surface albedo of sea ice in the Arctic is estimated from measurements made with the Advanced Very High Resolution Radiometer (AVHRR) on the polar-orbiting satellites NOAA-10 and NOAA-11. The albedos of 145 cells, each 200 km square, are analyzed. The cells are from March through September 1989 and include only those for which the sun is more than 10 deg above the horizon. Cloud masking is performed manually. Corrections are applied for instrument calibration, nonisotropic reflection, atmospheric interference, narrowband-to-broadband conversion, and normalization to a common solar zenith angle. The estimated albedos are relative, with the instrument gain set to give an albedo of 0.80 for ice floes in March and April. The mean values for the cloud-free portions of individual cells range from 0.18 to 0.91. Monthly averages of cells in the central Arctic range from 0.76 in April to 0.47 in August. The monthly averages of the within-cell standard deviations in the central Arctic are 0.04 in April and 0.06 in September. The surface albedo and surface temperature are correlated most strongly in March (R = -0.77) with little correlation in the summer. The monthly average lead fraction is determined from the mean potential open water, a scaled representation of the temperature or albedo between 0.0 (for ice) and 1.0 (for water); in the central Arctic it rises from an average 0.025 in the spring to 0.06 in September. Sparse data on aerosols, ozone, and water vapor in the atmospheric column contribute uncertainties of 0.13, 0.04, and 0.08, respectively, to instantaneous, area-average albedos. Uncertainties in monthly average albedos are not this large. Contemporaneous estimation of these variables could reduce the uncertainty in the estimated albedo considerably. The poor calibration of AVHRR channels 1 and 2 is another large impediment to making accurate albedo estimates.
NASA Astrophysics Data System (ADS)
Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef
2016-12-01
Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally extensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in getting a better understanding of the sensitivity, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications and 4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First, we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges to fix insensitive parameters while minimally affecting uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity to spatial variability in the models. A large oil/gas reservoir case is used as an illustration of these ideas.
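The RSA idea of a conditional effect (the sensitivity of one parameter given a level of another) can be sketched with a small Monte Carlo example. This is a toy illustration, not the paper's Matlab code: sensitivity is measured here as the Kolmogorov-Smirnov separation between the behavioral and non-behavioral marginals of a parameter, and the model, threshold, and conditioning level are all invented.

```python
import random

random.seed(1)

# Hypothetical two-parameter model with an interaction, so that the
# sensitivity of x1 depends on the level of x2. Names are illustrative.
def model(x1, x2):
    return x1 * x2 + 0.05 * random.gauss(0, 1)

samples = [(random.random(), random.random()) for _ in range(2000)]
behavioral = [model(x1, x2) > 0.4 for x1, x2 in samples]  # RSA classification

def ks_distance(a, b):
    """Maximum vertical distance between two empirical CDFs."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        elif a[i] > b[j]:
            j += 1
        else:
            i += 1
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

def sensitivity(param, subset):
    """RSA sensitivity: separation of behavioral vs non-behavioral marginals."""
    beh = [s[param] for s, ok in subset if ok]
    non = [s[param] for s, ok in subset if not ok]
    return ks_distance(beh, non)

pairs = list(zip(samples, behavioral))
overall = sensitivity(0, pairs)

# Conditional effect: sensitivity of x1 given that x2 is in its upper third.
conditional = sensitivity(0, [(s, ok) for s, ok in pairs if s[1] > 2 / 3])
print(f"sensitivity of x1: overall {overall:.2f}, "
      f"conditional on high x2 {conditional:.2f}")
```

Because the toy model multiplies the two parameters, x1 separates the behavioral set much more sharply once x2 is fixed at a high level, which is exactly the kind of interaction a conditional effect is meant to expose.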
NASA Astrophysics Data System (ADS)
Allstadt, K. E.; Shean, D. E.; Campbell, A.; Fahnestock, M.; Malone, S. D.
2015-12-01
We present surface velocity maps derived from repeat terrestrial radar interferometry (TRI) measurements and use these time series to examine seasonal and diurnal dynamics of alpine glaciers at Mount Rainier, Washington. We show that the Nisqually and Emmons glaciers have small slope-parallel velocities near the summit (< 0.2 m day-1), high velocities over their upper and central regions (1.0-1.5 m day-1), and stagnant debris-covered regions near the terminus (< 0.05 m day-1). Velocity uncertainties are as low as ±0.02-0.08 m day-1. We document a large seasonal velocity decrease of 0.2-0.7 m day-1 (-25 to -50 %) from July to November for most of the Nisqually Glacier, excluding the icefall, suggesting significant seasonal subglacial water storage under most of the glacier. We did not detect diurnal variability above the noise level. Simple 2-D ice flow modeling using TRI velocities suggests that sliding accounts for 91 and 99 % of the July velocity field for the Emmons and Nisqually glaciers with possible ranges of 60-97 and 93-99.5 %, respectively, when considering model uncertainty. We validate our observations against recent in situ velocity measurements and examine the long-term evolution of Nisqually Glacier dynamics through comparisons with historical velocity data. This study shows that repeat TRI measurements with > 10 km range can be used to investigate spatial and temporal variability of alpine glacier dynamics over large areas, including hazardous and inaccessible areas.
Uncertainty in Climate Change Research: An Integrated Approach
NASA Astrophysics Data System (ADS)
Mearns, L.
2017-12-01
Uncertainty has been a major theme in climate change research from virtually the very beginning, and appropriately characterizing and quantifying uncertainty has been an important aspect of this work. Initially, uncertainties were explored regarding the climate system and how it would react to future forcing. A concomitant area of concern was the future emissions and concentrations of important forcing agents such as greenhouse gases and aerosols. But of course there are important uncertainties in all aspects of climate change research, not just the climate system and emissions. And as climate change research has become more important and of pragmatic concern, as possible solutions to the climate change problem are addressed, exploring all the relevant uncertainties has become more relevant and urgent. More recently, over the past five years or so, uncertainties in impacts models, such as agricultural and hydrological models, have received much more attention, through programs such as AgMIP, and some research in this arena has indicated that the uncertainty in the impacts models can be as great as or greater than that in the climate system. Still, other areas of uncertainty remain underexplored and/or undervalued, including uncertainty in vulnerability and governance. Without more thoroughly exploring these last uncertainties, we will likely underestimate important uncertainties, particularly regarding how well different systems can adapt to climate change. In this talk I will discuss these different uncertainties and how to combine them to give a complete picture of the total uncertainty individual systems are facing. As part of this, I will discuss how the uncertainty can be successfully managed even if it is fairly large and deep. Part of my argument will be that large uncertainty is not the enemy; rather, false certainty is the true danger.
Deep uncertainty and broad heterogeneity in country-level social cost of carbon
NASA Astrophysics Data System (ADS)
Ricke, K.; Drouet, L.; Caldeira, K.; Tavoni, M.
2017-12-01
The social cost of carbon (SCC) is a commonly employed metric of the expected economic damages from carbon dioxide (CO2) emissions. Recent estimates of the SCC range from approximately $10 per tonne of CO2 (tCO2) to as much as $1000/tCO2, but these have been computed at the global level. While useful in an optimal policy context, a world-level approach obscures the heterogeneous geography of climate damages and vast differences in country-level contributions to the global SCC, as well as climate and socio-economic uncertainties, which are much larger at the regional level. For the first time, we estimate country-level contributions to the SCC using recent climate and carbon-cycle model projections, empirical climate-driven economic damage estimations, and information from the Shared Socio-economic Pathways. Central specifications show high global SCC values (median: $417/tCO2, 66% confidence interval: $168-793/tCO2) with country-level contributions ranging from -$11 (-$8 to -$14)/tCO2 to $86 ($50-158)/tCO2. We quantify the climate-, scenario- and economic-damage-driven uncertainties associated with the calculated values of the SCC. We find that while the magnitude of the country-level social cost of carbon is highly uncertain, the relative positioning among countries is consistent. Countries incurring large fractions of the global cost include India, China, and the United States. The share of the SCC distributed among countries is robust, indicating climate change winners and losers from a geopolitical perspective.
Data Analysis and Statistical Methods for the Assessment and Interpretation of Geochronologic Data
NASA Astrophysics Data System (ADS)
Reno, B. L.; Brown, M.; Piccoli, P. M.
2007-12-01
Ages are traditionally reported as a weighted mean with an uncertainty based on least-squares analysis of analytical error on individual dates. This method does not take into account geological uncertainties, and cannot accommodate asymmetries in the data. In most instances, this method will understate uncertainty on a given age, which may lead to overinterpretation of age data. Geologic uncertainty is difficult to quantify, but is typically greater than analytical uncertainty. These factors make traditional statistical approaches inadequate to fully evaluate geochronologic data. We propose a protocol to assess populations within multi-event datasets and to calculate age and uncertainty from each population of dates interpreted to represent a single geologic event using robust and resistant statistical methods. To assess whether populations thought to represent different events are statistically separate, exploratory data analysis is undertaken using a box plot, where the range of the data is represented by a 'box' of length given by the interquartile range, divided at the median of the data, with 'whiskers' that extend to the furthest datapoint that lies within 1.5 times the interquartile range beyond the box. If the boxes representing the populations do not overlap, they are interpreted to represent statistically different sets of dates. Ages are calculated from statistically distinct populations using a robust tool such as the tanh method of Kelsey et al. (2003, CMP, 146, 326-340), which is insensitive to any assumptions about the underlying probability distribution from which the data are drawn. Therefore, this method takes into account the full range of data, and is not drastically affected by outliers. The interquartile range of each population of dates gives a first pass at expressing uncertainty, which accommodates asymmetry in the dataset; outliers have a minor effect on the uncertainty. 
To better quantify the uncertainty, a resistant tool that is insensitive to local misbehavior of data is preferred, such as the normalized median absolute deviations proposed by Powell et al. (2002, Chem Geol, 185, 191-204). We illustrate the method using a dataset of 152 monazite dates determined using EPMA chemical data from a single sample from the Neoproterozoic Brasília Belt, Brazil. Results are compared with ages and uncertainties calculated using traditional methods to demonstrate the differences. The dataset was manually culled into three populations representing discrete compositional domains within chemically-zoned monazite grains. The weighted mean ages and least squares uncertainties for these populations are 633±6 (2σ) Ma for a core domain, 614±5 (2σ) Ma for an intermediate domain and 595±6 (2σ) Ma for a rim domain. Probability distribution plots indicate asymmetric distributions of all populations, which cannot be accounted for with traditional statistical tools. These three domains record distinct ages outside the interquartile range for each population of dates, with the core domain lying in the subrange 642-624 Ma, the intermediate domain 617-609 Ma and the rim domain 606-589 Ma. The tanh estimator yields ages of 631±7 (2σ) for the core domain, 616±7 (2σ) for the intermediate domain and 601±8 (2σ) for the rim domain. Whereas the uncertainties derived using a resistant statistical tool are larger than those derived from traditional statistical tools, the method yields more realistic uncertainties that better address the spread in the dataset and account for asymmetry in the data.
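The box-plot screening and resistant-spread steps described above are straightforward to implement. The sketch below uses synthetic stand-in dates (not the monazite data from the Brasília Belt sample) and Python's `statistics` module; the whisker rule and the 1.4826 scaling of the normalized median absolute deviation follow the standard definitions.

```python
import statistics as st

# Synthetic stand-ins for two date populations (core and rim domains);
# these are illustrative values, not the measured monazite dates.
core = [642, 639, 636, 634, 633, 631, 630, 628, 626, 624]
rim = [606, 604, 602, 601, 600, 598, 596, 594, 591, 589]

def box(dates):
    """Quartiles and whiskers (1.5 x IQR rule) for a box plot."""
    q1, med, q3 = st.quantiles(dates, n=4)
    iqr = q3 - q1
    lo = min(d for d in dates if d >= q1 - 1.5 * iqr)
    hi = max(d for d in dates if d <= q3 + 1.5 * iqr)
    return q1, med, q3, lo, hi

def boxes_overlap(a, b):
    a1, _, a3, _, _ = box(a)
    b1, _, b3, _, _ = box(b)
    return a1 <= b3 and b1 <= a3

# Normalized median absolute deviation: a resistant spread estimate,
# scaled to be consistent with the standard deviation for normal data.
def nmad(dates):
    med = st.median(dates)
    return 1.4826 * st.median([abs(d - med) for d in dates])

print("boxes overlap:", boxes_overlap(core, rim))
print(f"core: {st.median(core):.0f} +/- {nmad(core):.1f} Ma (nMAD)")
print(f"rim:  {st.median(rim):.0f} +/- {nmad(rim):.1f} Ma (nMAD)")
```

Non-overlapping boxes flag the two populations as statistically separate, and the nMAD gives an outlier-resistant spread that, unlike a least-squares uncertainty, is not inflated or deflated by a few aberrant dates.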
NASA Astrophysics Data System (ADS)
Gorbunov, Michael E.; Kirchengast, Gottfried
2018-01-01
A new reference occultation processing system (rOPS) will include a Global Navigation Satellite System (GNSS) radio occultation (RO) retrieval chain with integrated uncertainty propagation. In this paper, we focus on wave-optics bending angle (BA) retrieval in the lower troposphere and introduce (1) an empirically estimated boundary layer bias (BLB) model then employed to reduce the systematic uncertainty of excess phases and bending angles in about the lowest 2 km of the troposphere and (2) the estimation of (residual) systematic uncertainties and their propagation together with random uncertainties from excess phase to bending angle profiles. Our BLB model describes the estimated bias of the excess phase transferred from the estimated bias of the bending angle, for which the model is built, informed by analyzing refractivity fluctuation statistics shown to induce such biases. The model is derived from regression analysis using a large ensemble of Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) RO observations and concurrent European Centre for Medium-Range Weather Forecasts (ECMWF) analysis fields. It is formulated in terms of predictors and adaptive functions (powers and cross products of predictors), where we use six main predictors derived from observations: impact altitude, latitude, bending angle and its standard deviation, canonical transform (CT) amplitude, and its fluctuation index. Based on an ensemble of test days, independent of the days of data used for the regression analysis to establish the BLB model, we find the model very effective for bias reduction and capable of reducing bending angle and corresponding refractivity biases by about a factor of 5. The estimated residual systematic uncertainty, after the BLB profile subtraction, is lower bounded by the uncertainty from the (indirect) use of ECMWF analysis fields but is significantly lower than the systematic uncertainty without BLB correction. 
The systematic and random uncertainties are propagated from excess phase to bending angle profiles, using a perturbation approach and the wave-optical method recently introduced by Gorbunov and Kirchengast (2015), starting with estimated excess phase uncertainties. The results are encouraging and this uncertainty propagation approach combined with BLB correction enables a robust reduction and quantification of the uncertainties of excess phases and bending angles in the lower troposphere.
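The regression setup described in the abstract (a bias model built from predictors and adaptive functions, i.e. powers and cross products of the predictors, fitted by least squares and then subtracted from observations) can be sketched as follows. This is a minimal illustration under stated assumptions: the predictor names, synthetic data, and least-squares fit are placeholders, not the actual rOPS/BLB implementation or its coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Six illustrative predictors (stand-ins for impact altitude, latitude,
# bending angle, its standard deviation, CT amplitude, fluctuation index).
X = rng.normal(size=(n, 6))

# Synthetic "observed bias" with a nonlinear dependence on the predictors.
y = (0.5 * X[:, 0] - 0.3 * X[:, 2] ** 2 + 0.2 * X[:, 0] * X[:, 4]
     + 0.05 * rng.normal(size=n))

def design_matrix(X):
    """Expand predictors into adaptive functions: an intercept,
    linear terms, squares, and pairwise cross products."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]       # linear terms
    cols += [X[:, i] ** 2 for i in range(X.shape[1])]  # powers
    cols += [X[:, i] * X[:, j]                         # cross products
             for i in range(X.shape[1]) for j in range(i + 1, X.shape[1])]
    return np.column_stack(cols)

# Fit the bias model by ordinary least squares.
A = design_matrix(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Bias correction: subtract the fitted bias profile from the observations.
y_hat = design_matrix(X) @ coef
residual = y - y_hat
print(f"bias before: {y.mean():+.3f}, residual after correction: {residual.mean():+.3f}")
```

Because the synthetic bias here is itself a quadratic form of the predictors, the fit removes nearly all of it; the real BLB model analogously subtracts the regressed bias profile from each retrieved profile before uncertainty propagation.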
Measurement of the Muon Content of Air Showers with IceTop
NASA Astrophysics Data System (ADS)
Gonzalez, JG;
2016-05-01
IceTop, the surface component of the IceCube detector, has measured the energy spectrum of cosmic ray primaries in the range between 1.6 PeV and 1.3 EeV. IceTop can also be used to measure the average density of GeV muons in the shower front at large radial distances (> 300 m) from the shower axis. We present the measurement of the muon lateral distribution function for primary cosmic rays with energies between 1.6 PeV and about 0.1 EeV, and compare it to proton and iron simulations. We also discuss how this information can be exploited in the reconstruction of single air shower events. By combining the information on the muon component with that of the electromagnetic component of the air shower, we expect to reduce systematic uncertainties in the inferred mass composition of cosmic rays arising from theoretical uncertainties in hadronic interaction models.
Performance of the ATLAS muon trigger in pp collisions at [Formula: see text] TeV.
Aad, G; Abbott, B; Abdallah, J; Abdel Khalek, S; Abdinov, O; Aben, R; Abi, B; Abolins, M; AbouZeid, O S; Abramowicz, H; Abreu, H; Abreu, R; Abulaiti, Y; Acharya, B S; Adamczyk, L; Adams, D L; Adelman, J; Adomeit, S; Adye, T; Agatonovic-Jovin, T; Aguilar-Saavedra, J A; Agustoni, M; Ahlen, S P; Ahmadov, F; Aielli, G; Akerstedt, H; Åkesson, T P A; Akimoto, G; Akimov, A V; Alberghi, G L; Albert, J; Albrand, S; Alconada Verzini, M J; Aleksa, M; Aleksandrov, I N; Alexa, C; Alexander, G; Alexandre, G; Alexopoulos, T; Alhroob, M; Alimonti, G; Alio, L; Alison, J; Allbrooke, B M M; Allison, L J; Allport, P P; Almond, J; Aloisio, A; Alonso, A; Alonso, F; Alpigiani, C; Altheimer, A; Alvarez Gonzalez, B; Alviggi, M G; Amako, K; Amaral Coutinho, Y; Amelung, C; Amidei, D; Amor Dos Santos, S P; Amorim, A; Amoroso, S; Amram, N; Amundsen, G; Anastopoulos, C; Ancu, L S; Andari, N; Andeen, T; Anders, C F; Anders, G; Anderson, K J; Andreazza, A; Andrei, V; Anduaga, X S; Angelidakis, S; Angelozzi, I; Anger, P; Angerami, A; Anghinolfi, F; Anisenkov, A V; Anjos, N; Annovi, A; Antonaki, A; Antonelli, M; Antonov, A; Antos, J; Anulli, F; Aoki, M; Aperio Bella, L; Apolle, R; Arabidze, G; Aracena, I; Arai, Y; Araque, J P; Arce, A T H; Arguin, J-F; Argyropoulos, S; Arik, M; Armbruster, A J; Arnaez, O; Arnal, V; Arnold, H; Arratia, M; Arslan, O; Artamonov, A; Artoni, G; Asai, S; Asbah, N; Ashkenazi, A; Åsman, B; Asquith, L; Assamagan, K; Astalos, R; Atkinson, M; Atlay, N B; Auerbach, B; Augsten, K; Aurousseau, M; Avolio, G; Azuelos, G; Azuma, Y; Baak, M A; Baas, A E; Bacci, C; Bachacou, H; Bachas, K; Backes, M; Backhaus, M; Backus Mayes, J; Badescu, E; Bagiacchi, P; Bagnaia, P; Bai, Y; Bain, T; Baines, J T; Baker, O K; Balek, P; Balli, F; Banas, E; Banerjee, Sw; Bannoura, A A E; Bansal, V; Bansil, H S; Barak, L; Baranov, S P; Barberio, E L; Barberis, D; Barbero, M; Barillari, T; Barisonzi, M; Barklow, T; Barlow, N; Barnett, B M; Barnett, R M; Barnovska, Z; Baroncelli, A; Barone, G; Barr, A J; 
Barreiro, F; Barreiro Guimarães da Costa, J; Bartoldus, R; Barton, A E; Bartos, P; Bartsch, V; Bassalat, A; Basye, A; Bates, R L; Batley, J R; Battaglia, M; Battistin, M; Bauer, F; Bawa, H S; Beattie, M D; Beau, T; Beauchemin, P H; Beccherle, R; Bechtle, P; Beck, H P; Becker, K; Becker, S; Beckingham, M; Becot, C; Beddall, A J; Beddall, A; Bedikian, S; Bednyakov, V A; Bee, C P; Beemster, L J; Beermann, T A; Begel, M; Behr, K; Belanger-Champagne, C; Bell, P J; Bell, W H; Bella, G; Bellagamba, L; Bellerive, A; Bellomo, M; Belotskiy, K; Beltramello, O; Benary, O; Benchekroun, D; Bendtz, K; Benekos, N; Benhammou, Y; Benhar Noccioli, E; Benitez Garcia, J A; Benjamin, D P; Bensinger, J R; Benslama, K; Bentvelsen, S; Berge, D; Bergeaas Kuutmann, E; Berger, N; Berghaus, F; Beringer, J; Bernard, C; Bernat, P; Bernius, C; Bernlochner, F U; Berry, T; Berta, P; Bertella, C; Bertoli, G; Bertolucci, F; Bertsche, C; Bertsche, D; Besana, M I; Besjes, G J; Bessidskaia, O; Bessner, M; Besson, N; Betancourt, C; Bethke, S; Bhimji, W; Bianchi, R M; Bianchini, L; Bianco, M; Biebel, O; Bieniek, S P; Bierwagen, K; Biesiada, J; Biglietti, M; Bilbao De Mendizabal, J; Bilokon, H; Bindi, M; Binet, S; Bingul, A; Bini, C; Black, C W; Black, J E; Black, K M; Blackburn, D; Blair, R E; Blanchard, J-B; Blazek, T; Bloch, I; Blocker, C; Blum, W; Blumenschein, U; Bobbink, G J; Bobrovnikov, V S; Bocchetta, S S; Bocci, A; Bock, C; Boddy, C R; Boehler, M; Boek, T T; Bogaerts, J A; Bogdanchikov, A G; Bogouch, A; Bohm, C; Bohm, J; Boisvert, V; Bold, T; Boldea, V; Boldyrev, A S; Bomben, M; Bona, M; Boonekamp, M; Borisov, A; Borissov, G; Borri, M; Borroni, S; Bortfeldt, J; Bortolotto, V; Bos, K; Boscherini, D; Bosman, M; Boterenbrood, H; Boudreau, J; Bouffard, J; Bouhova-Thacker, E V; Boumediene, D; Bourdarios, C; Bousson, N; Boutouil, S; Boveia, A; Boyd, J; Boyko, I R; Bozic, I; Bracinik, J; Brandt, A; Brandt, G; Brandt, O; Bratzler, U; Brau, B; Brau, J E; Braun, H M; Brazzale, S F; Brelier, B; Brendlinger, 
K; Brennan, A J; Brenner, R; Bressler, S; Bristow, K; Bristow, T M; Britton, D; Brochu, F M; Brock, I; Brock, R; Bromberg, C; Bronner, J; Brooijmans, G; Brooks, T; Brooks, W K; Brosamer, J; Brost, E; Brown, J; Bruckman de Renstrom, P A; Bruncko, D; Bruneliere, R; Brunet, S; Bruni, A; Bruni, G; Bruschi, M; Bryngemark, L; Buanes, T; Buat, Q; Bucci, F; Buchholz, P; Buckingham, R M; Buckley, A G; Buda, S I; Budagov, I A; Buehrer, F; Bugge, L; Bugge, M K; Bulekov, O; Bundock, A C; Burckhart, H; Burdin, S; Burghgrave, B; Burke, S; Burmeister, I; Busato, E; Büscher, D; Büscher, V; Bussey, P; Buszello, C P; Butler, B; Butler, J M; Butt, A I; Buttar, C M; Butterworth, J M; Butti, P; Buttinger, W; Buzatu, A; Byszewski, M; Cabrera Urbán, S; Caforio, D; Cakir, O; Calace, N; Calafiura, P; Calandri, A; Calderini, G; Calfayan, P; Calkins, R; Caloba, L P; Calvet, D; Calvet, S; Camacho Toro, R; Camarda, S; Cameron, D; Caminada, L M; Caminal Armadans, R; Campana, S; Campanelli, M; Campoverde, A; Canale, V; Canepa, A; Cano Bret, M; Cantero, J; Cantrill, R; Cao, T; Capeans Garrido, M D M; Caprini, I; Caprini, M; Capua, M; Caputo, R; Cardarelli, R; Carli, T; Carlino, G; Carminati, L; Caron, S; Carquin, E; Carrillo-Montoya, G D; Carter, J R; Carvalho, J; Casadei, D; Casado, M P; Casolino, M; Castaneda-Miranda, E; Castelli, A; Castillo Gimenez, V; Castro, N F; Catastini, P; Catinaccio, A; Catmore, J R; Cattai, A; Cattani, G; Caudron, J; Cavaliere, V; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Ceradini, F; Cerio, B C; Cerny, K; Cerqueira, A S; Cerri, A; Cerrito, L; Cerutti, F; Cerv, M; Cervelli, A; Cetin, S A; Chafaq, A; Chakraborty, D; Chalupkova, I; Chang, P; Chapleau, B; Chapman, J D; Charfeddine, D; Charlton, D G; Chau, C C; Chavez Barajas, C A; Cheatham, S; Chegwidden, A; Chekanov, S; Chekulaev, S V; Chelkov, G A; Chelstowska, M A; Chen, C; Chen, H; Chen, K; Chen, L; Chen, S; Chen, X; Chen, Y; Chen, Y; Cheng, H C; Cheng, Y; Cheplakov, A; Cherkaoui El Moursli, R; Chernyatin, V; 
Cheu, E; Chevalier, L; Chiarella, V; Chiefari, G; Childers, J T; Chilingarov, A; Chiodini, G; Chisholm, A S; Chislett, R T; Chitan, A; Chizhov, M V; Chouridou, S; Chow, B K B; Chromek-Burckhart, D; Chu, M L; Chudoba, J; Chwastowski, J J; Chytka, L; Ciapetti, G; Ciftci, A K; Ciftci, R; Cinca, D; Cindro, V; Ciocio, A; Cirkovic, P; Citron, Z H; Citterio, M; Ciubancan, M; Clark, A; Clark, P J; Clarke, R N; Cleland, W; Clemens, J C; Clement, C; Coadou, Y; Cobal, M; Coccaro, A; Cochran, J; Coffey, L; Cogan, J G; Coggeshall, J; Cole, B; Cole, S; Colijn, A P; Collot, J; Colombo, T; Colon, G; Compostella, G; Conde Muiño, P; Coniavitis, E; Conidi, M C; Connell, S H; Connelly, I A; Consonni, S M; Consorti, V; Constantinescu, S; Conta, C; Conti, G; Conventi, F; Cooke, M; Cooper, B D; Cooper-Sarkar, A M; Cooper-Smith, N J; Copic, K; Cornelissen, T; Corradi, M; Corriveau, F; Corso-Radu, A; Cortes-Gonzalez, A; Cortiana, G; Costa, G; Costa, M J; Costanzo, D; Côté, D; Cottin, G; Cowan, G; Cox, B E; Cranmer, K; Cree, G; Crépé-Renaudin, S; Crescioli, F; Cribbs, W A; Crispin Ortuzar, M; Cristinziani, M; Croft, V; Crosetti, G; Cuciuc, C-M; Cuhadar Donszelmann, T; Cummings, J; Curatolo, M; Cuthbert, C; Czirr, H; Czodrowski, P; Czyczula, Z; D'Auria, S; D'Onofrio, M; Cunha Sargedas De Sousa, M J Da; Via, C Da; Dabrowski, W; Dafinca, A; Dai, T; Dale, O; Dallaire, F; Dallapiccola, C; Dam, M; Daniells, A C; Dano Hoffmann, M; Dao, V; Darbo, G; Darmora, S; Dassoulas, J A; Dattagupta, A; Davey, W; David, C; Davidek, T; Davies, E; Davies, M; Davignon, O; Davison, A R; Davison, P; Davygora, Y; Dawe, E; Dawson, I; Daya-Ishmukhametova, R K; De, K; de Asmundis, R; De Castro, S; De Cecco, S; De Groot, N; de Jong, P; De la Torre, H; De Lorenzi, F; De Nooij, L; De Pedis, D; De Salvo, A; De Sanctis, U; De Santo, A; De Vivie De Regie, J B; Dearnaley, W J; Debbe, R; Debenedetti, C; Dechenaux, B; Dedovich, D V; Deigaard, I; Del Peso, J; Del Prete, T; Deliot, F; Delitzsch, C M; Deliyergiyev, M; Dell'Acqua, 
A; Dell'Asta, L; Dell'Orso, M; Della Pietra, M; Della Volpe, D; Delmastro, M; Delsart, P A; Deluca, C; Demers, S; Demichev, M; Demilly, A; Denisov, S P; Derendarz, D; Derkaoui, J E; Derue, F; Dervan, P; Desch, K; Deterre, C; Deviveiros, P O; Dewhurst, A; Dhaliwal, S; Di Ciaccio, A; Di Ciaccio, L; Di Domenico, A; Di Donato, C; Di Girolamo, A; Di Girolamo, B; Di Mattia, A; Di Micco, B; Di Nardo, R; Di Simone, A; Di Sipio, R; Di Valentino, D; Dias, F A; Diaz, M A; Diehl, E B; Dietrich, J; Dietzsch, T A; Diglio, S; Dimitrievska, A; Dingfelder, J; Dionisi, C; Dita, P; Dita, S; Dittus, F; Djama, F; Djobava, T; do Vale, M A B; Do Valle Wemans, A; Dobos, D; Doglioni, C; Doherty, T; Dohmae, T; Dolejsi, J; Dolezal, Z; Dolgoshein, B A; Donadelli, M; Donati, S; Dondero, P; Donini, J; Dopke, J; Doria, A; Dova, M T; Doyle, A T; Dris, M; Dubbert, J; Dube, S; Dubreuil, E; Duchovni, E; Duckeck, G; Ducu, O A; Duda, D; Dudarev, A; Dudziak, F; Duflot, L; Duguid, L; Dührssen, M; Dunford, M; Duran Yildiz, H; Düren, M; Durglishvili, A; Dwuznik, M; Dyndal, M; Ebke, J; Edson, W; Edwards, N C; Ehrenfeld, W; Eifert, T; Eigen, G; Einsweiler, K; Ekelof, T; El Kacimi, M; Ellert, M; Elles, S; Ellinghaus, F; Ellis, N; Elmsheuser, J; Elsing, M; Emeliyanov, D; Enari, Y; Endner, O C; Endo, M; Engelmann, R; Erdmann, J; Ereditato, A; Eriksson, D; Ernis, G; Ernst, J; Ernst, M; Ernwein, J; Errede, D; Errede, S; Ertel, E; Escalier, M; Esch, H; Escobar, C; Esposito, B; Etienvre, A I; Etzion, E; Evans, H; Ezhilov, A; Fabbri, L; Facini, G; Fakhrutdinov, R M; Falciano, S; Falla, R J; Faltova, J; Fang, Y; Fanti, M; Farbin, A; Farilla, A; Farooque, T; Farrell, S; Farrington, S M; Farthouat, P; Fassi, F; Fassnacht, P; Fassouliotis, D; Favareto, A; Fayard, L; Federic, P; Fedin, O L; Fedorko, W; Fehling-Kaschek, M; Feigl, S; Feligioni, L; Feng, C; Feng, E J; Feng, H; Fenyuk, A B; Fernandez Perez, S; Ferrag, S; Ferrando, J; Ferrari, A; Ferrari, P; Ferrari, R; Ferreira de Lima, D E; Ferrer, A; Ferrere, D; Ferretti, 
C; Ferretto Parodi, A; Fiascaris, M; Fiedler, F; Filipčič, A; Filipuzzi, M; Filthaut, F; Fincke-Keeler, M; Finelli, K D; Fiolhais, M C N; Fiorini, L; Firan, A; Fischer, A; Fischer, J; Fisher, W C; Fitzgerald, E A; Flechl, M; Fleck, I; Fleischmann, P; Fleischmann, S; Fletcher, G T; Fletcher, G; Flick, T; Floderus, A; Flores Castillo, L R; Florez Bustos, A C; Flowerdew, M J; Formica, A; Forti, A; Fortin, D; Fournier, D; Fox, H; Fracchia, S; Francavilla, P; Franchini, M; Franchino, S; Francis, D; Franconi, L; Franklin, M; Franz, S; Fraternali, M; French, S T; Friedrich, C; Friedrich, F; Froidevaux, D; Frost, J A; Fukunaga, C; Fullana Torregrosa, E; Fulsom, B G; Fuster, J; Gabaldon, C; Gabizon, O; Gabrielli, A; Gabrielli, A; Gadatsch, S; Gadomski, S; Gagliardi, G; Gagnon, P; Galea, C; Galhardo, B; Gallas, E J; Gallo, V; Gallop, B J; Gallus, P; Galster, G; Gan, K K; Gao, J; Gao, Y S; Garay Walls, F M; Garberson, F; García, C; García Navarro, J E; Garcia-Sciveres, M; Gardner, R W; Garelli, N; Garonne, V; Gatti, C; Gaudio, G; Gaur, B; Gauthier, L; Gauzzi, P; Gavrilenko, I L; Gay, C; Gaycken, G; Gazis, E N; Ge, P; Gecse, Z; Gee, C N P; Geerts, D A A; Geich-Gimbel, Ch; Gellerstedt, K; Gemme, C; Gemmell, A; Genest, M H; Gentile, S; George, M; George, S; Gerbaudo, D; Gershon, A; Ghazlane, H; Ghodbane, N; Giacobbe, B; Giagu, S; Giangiobbe, V; Giannetti, P; Gianotti, F; Gibbard, B; Gibson, S M; Gilchriese, M; Gillam, T P S; Gillberg, D; Gilles, G; Gingrich, D M; Giokaris, N; Giordani, M P; Giordano, R; Giorgi, F M; Giorgi, F M; Giraud, P F; Giugni, D; Giuliani, C; Giulini, M; Gjelsten, B K; Gkaitatzis, S; Gkialas, I; Gladilin, L K; Glasman, C; Glatzer, J; Glaysher, P C F; Glazov, A; Glonti, G L; Goblirsch-Kolb, M; Goddard, J R; Godlewski, J; Goeringer, C; Goldfarb, S; Golling, T; Golubkov, D; Gomes, A; Gomez Fajardo, L S; Gonçalo, R; Goncalves Pinto Firmino Da Costa, J; Gonella, L; González de la Hoz, S; Gonzalez Parra, G; Gonzalez-Sevilla, S; Goossens, L; Gorbounov, P A; 
Gordon, H A; Gorelov, I; Gorini, B; Gorini, E; Gorišek, A; Gornicki, E; Goshaw, A T; Gössling, C; Gostkin, M I; Gouighri, M; Goujdami, D; Goulette, M P; Goussiou, A G; Goy, C; Gozpinar, S; Grabas, H M X; Graber, L; Grabowska-Bold, I; Grafström, P; Grahn, K-J; Gramling, J; Gramstad, E; Grancagnolo, S; Grassi, V; Gratchev, V; Gray, H M; Graziani, E; Grebenyuk, O G; Greenwood, Z D; Gregersen, K; Gregor, I M; Grenier, P; Griffiths, J; Grillo, A A; Grimm, K; Grinstein, S; Gris, Ph; Grishkevich, Y V; Grivaz, J-F; Grohs, J P; Grohsjean, A; Gross, E; Grosse-Knetter, J; Grossi, G C; Groth-Jensen, J; Grout, Z J; Guan, L; Guescini, F; Guest, D; Gueta, O; Guicheney, C; Guido, E; Guillemin, T; Guindon, S; Gul, U; Gumpert, C; Gunther, J; Guo, J; Gupta, S; Gutierrez, P; Gutierrez Ortiz, N G; Gutschow, C; Guttman, N; Guyot, C; Gwenlan, C; Gwilliam, C B; Haas, A; Haber, C; Hadavand, H K; Haddad, N; Haefner, P; Hageböeck, S; Hajduk, Z; Hakobyan, H; Haleem, M; Hall, D; Halladjian, G; Hamacher, K; Hamal, P; Hamano, K; Hamer, M; Hamilton, A; Hamilton, S; Hamity, G N; Hamnett, P G; Han, L; Hanagaki, K; Hanawa, K; Hance, M; Hanke, P; Hanna, R; Hansen, J B; Hansen, J D; Hansen, P H; Hara, K; Hard, A S; Harenberg, T; Hariri, F; Harkusha, S; Harper, D; Harrington, R D; Harris, O M; Harrison, P F; Hartjes, F; Hasegawa, M; Hasegawa, S; Hasegawa, Y; Hasib, A; Hassani, S; Haug, S; Hauschild, M; Hauser, R; Havranek, M; Hawkes, C M; Hawkings, R J; Hawkins, A D; Hayashi, T; Hayden, D; Hays, C P; Hayward, H S; Haywood, S J; Head, S J; Heck, T; Hedberg, V; Heelan, L; Heim, S; Heim, T; Heinemann, B; Heinrich, L; Hejbal, J; Helary, L; Heller, C; Heller, M; Hellman, S; Hellmich, D; Helsens, C; Henderson, J; Henderson, R C W; Heng, Y; Hengler, C; Henrichs, A; Henriques Correia, A M; Henrot-Versille, S; Hensel, C; Herbert, G H; Hernández Jiménez, Y; Herrberg-Schubert, R; Herten, G; Hertenberger, R; Hervas, L; Hesketh, G G; Hessey, N P; Hickling, R; Higón-Rodriguez, E; Hill, E; Hill, J C; Hiller, K H; 
Hillert, S; Hillier, S J; Hinchliffe, I; Hines, E; Hirose, M; Hirschbuehl, D; Hobbs, J; Hod, N; Hodgkinson, M C; Hodgson, P; Hoecker, A; Hoeferkamp, M R; Hoenig, F; Hoffman, J; Hoffmann, D; Hofmann, J I; Hohlfeld, M; Holmes, T R; Hong, T M; Hooft van Huysduynen, L; Hopkins, W H; Horii, Y; Hostachy, J-Y; Hou, S; Hoummada, A; Howard, J; Howarth, J; Hrabovsky, M; Hristova, I; Hrivnac, J; Hryn'ova, T; Hsu, C; Hsu, P J; Hsu, S-C; Hu, D; Hu, X; Huang, Y; Hubacek, Z; Hubaut, F; Huegging, F; Huffman, T B; Hughes, E W; Hughes, G; Huhtinen, M; Hülsing, T A; Hurwitz, M; Huseynov, N; Huston, J; Huth, J; Iacobucci, G; Iakovidis, G; Ibragimov, I; Iconomidou-Fayard, L; Ideal, E; Iengo, P; Igonkina, O; Iizawa, T; Ikegami, Y; Ikematsu, K; Ikeno, M; Ilchenko, Y; Iliadis, D; Ilic, N; Inamaru, Y; Ince, T; Ioannou, P; Iodice, M; Iordanidou, K; Ippolito, V; Irles Quiles, A; Isaksson, C; Ishino, M; Ishitsuka, M; Ishmukhametov, R; Issever, C; Istin, S; Iturbe Ponce, J M; Iuppa, R; Ivarsson, J; Iwanski, W; Iwasaki, H; Izen, J M; Izzo, V; Jackson, B; Jackson, M; Jackson, P; Jaekel, M R; Jain, V; Jakobs, K; Jakobsen, S; Jakoubek, T; Jakubek, J; Jamin, D O; Jana, D K; Jansen, E; Jansen, H; Janssen, J; Janus, M; Jarlskog, G; Javadov, N; Javůrek, T; Jeanty, L; Jejelava, J; Jeng, G-Y; Jennens, D; Jenni, P; Jentzsch, J; Jeske, C; Jézéquel, S; Ji, H; Jia, J; Jiang, Y; Jimenez Belenguer, M; Jin, S; Jinaru, A; Jinnouchi, O; Joergensen, M D; Johansson, K E; Johansson, P; Johns, K A; Jon-And, K; Jones, G; Jones, R W L; Jones, T J; Jongmanns, J; Jorge, P M; Joshi, K D; Jovicevic, J; Ju, X; Jung, C A; Jungst, R M; Jussel, P; Juste Rozas, A; Kaci, M; Kaczmarska, A; Kado, M; Kagan, H; Kagan, M; Kajomovitz, E; Kalderon, C W; Kama, S; Kamenshchikov, A; Kanaya, N; Kaneda, M; Kaneti, S; Kantserov, V A; Kanzaki, J; Kaplan, B; Kapliy, A; Kar, D; Karakostas, K; Karastathis, N; Kareem, M J; Karnevskiy, M; Karpov, S N; Karpova, Z M; Karthik, K; Kartvelishvili, V; Karyukhin, A N; Kashif, L; Kasieczka, G; Kass, R D; 
Kastanas, A; Kataoka, Y; Katre, A; Katzy, J; Kaushik, V; Kawagoe, K; Kawamoto, T; Kawamura, G; Kazama, S; Kazanin, V F; Kazarinov, M Y; Keeler, R; Kehoe, R; Keil, M; Keller, J S; Kempster, J J; Keoshkerian, H; Kepka, O; Kerševan, B P; Kersten, S; Kessoku, K; Keung, J; Khalil-Zada, F; Khandanyan, H; Khanov, A; Khodinov, A; Khomich, A; Khoo, T J; Khoriauli, G; Khoroshilov, A; Khovanskiy, V; Khramov, E; Khubua, J; Kim, H Y; Kim, H; Kim, S H; Kimura, N; Kind, O; King, B T; King, M; King, R S B; King, S B; Kirk, J; Kiryunin, A E; Kishimoto, T; Kisielewska, D; Kiss, F; Kittelmann, T; Kiuchi, K; Kladiva, E; Klein, M; Klein, U; Kleinknecht, K; Klimek, P; Klimentov, A; Klingenberg, R; Klinger, J A; Klioutchnikova, T; Klok, P F; Kluge, E-E; Kluit, P; Kluth, S; Kneringer, E; Knoops, E B F G; Knue, A; Kobayashi, D; Kobayashi, T; Kobel, M; Kocian, M; Kodys, P; Koevesarki, P; Koffas, T; Koffeman, E; Kogan, L A; Kohlmann, S; Kohout, Z; Kohriki, T; Koi, T; Kolanoski, H; Koletsou, I; Koll, J; Komar, A A; Komori, Y; Kondo, T; Kondrashova, N; Köneke, K; König, A C; König, S; Kono, T; Konoplich, R; Konstantinidis, N; Kopeliansky, R; Koperny, S; Köpke, L; Kopp, A K; Korcyl, K; Kordas, K; Korn, A; Korol, A A; Korolkov, I; Korolkova, E V; Korotkov, V A; Kortner, O; Kortner, S; Kostyukhin, V V; Kotov, V M; Kotwal, A; Kourkoumelis, C; Kouskoura, V; Koutsman, A; Kowalewski, R; Kowalski, T Z; Kozanecki, W; Kozhin, A S; Kral, V; Kramarenko, V A; Kramberger, G; Krasnopevtsev, D; Krasny, M W; Krasznahorkay, A; Kraus, J K; Kravchenko, A; Kreiss, S; Kretz, M; Kretzschmar, J; Kreutzfeldt, K; Krieger, P; Kroeninger, K; Kroha, H; Kroll, J; Kroseberg, J; Krstic, J; Kruchonak, U; Krüger, H; Kruker, T; Krumnack, N; Krumshteyn, Z V; Kruse, A; Kruse, M C; Kruskal, M; Kubota, T; Kuday, S; Kuehn, S; Kugel, A; Kuhl, A; Kuhl, T; Kukhtin, V; Kulchitsky, Y; Kuleshov, S; Kuna, M; Kunkle, J; Kupco, A; Kurashige, H; Kurochkin, Y A; Kurumida, R; Kus, V; Kuwertz, E S; Kuze, M; Kvita, J; La Rosa, A; La Rotonda, L; 
Lacasta, C; Lacava, F; Lacey, J; Lacker, H; Lacour, D; Lacuesta, V R; Ladygin, E; Lafaye, R; Laforge, B; Lagouri, T; Lai, S; Laier, H; Lambourne, L; Lammers, S; Lampen, C L; Lampl, W; Lançon, E; Landgraf, U; Landon, M P J; Lang, V S; Lankford, A J; Lanni, F; Lantzsch, K; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Lasagni Manghi, F; Lassnig, M; Laurelli, P; Lavrijsen, W; Law, A T; Laycock, P; Le Dortz, O; Le Guirriec, E; Le Menedeu, E; LeCompte, T; Ledroit-Guillon, F; Lee, C A; Lee, H; Lee, J S H; Lee, S C; Lee, L; Lefebvre, G; Lefebvre, M; Legger, F; Leggett, C; Lehan, A; Lehmacher, M; Lehmann Miotto, G; Lei, X; Leight, W A; Leisos, A; Leister, A G; Leite, M A L; Leitner, R; Lellouch, D; Lemmer, B; Leney, K J C; Lenz, T; Lenzen, G; Lenzi, B; Leone, R; Leone, S; Leonidopoulos, C; Leontsinis, S; Leroy, C; Lester, C G; Lester, C M; Levchenko, M; Levêque, J; Levin, D; Levinson, L J; Levy, M; Lewis, A; Lewis, G H; Leyko, A M; Leyton, M; Li, B; Li, B; Li, H; Li, H L; Li, L; Li, L; Li, S; Li, Y; Liang, Z; Liao, H; Liberti, B; Lichard, P; Lie, K; Liebal, J; Liebig, W; Limbach, C; Limosani, A; Lin, S C; Lin, T H; Linde, F; Lindquist, B E; Linnemann, J T; Lipeles, E; Lipniacka, A; Lisovyi, M; Liss, T M; Lissauer, D; Lister, A; Litke, A M; Liu, B; Liu, D; Liu, J B; Liu, K; Liu, L; Liu, M; Liu, M; Liu, Y; Livan, M; Livermore, S S A; Lleres, A; Llorente Merino, J; Lloyd, S L; Lo Sterzo, F; Lobodzinska, E; Loch, P; Lockman, W S; Loddenkoetter, T; Loebinger, F K; Loevschall-Jensen, A E; Loginov, A; Lohse, T; Lohwasser, K; Lokajicek, M; Lombardo, V P; Long, B A; Long, J D; Long, R E; Lopes, L; Lopez Mateos, D; Lopez Paredes, B; Lopez Paz, I; Lorenz, J; Lorenzo Martinez, N; Losada, M; Loscutoff, P; Lou, X; Lounis, A; Love, J; Love, P A; Lowe, A J; Lu, F; Lu, N; Lubatti, H J; Luci, C; Lucotte, A; Luehring, F; Lukas, W; Luminari, L; Lundberg, O; Lund-Jensen, B; Lungwitz, M; Lynn, D; Lysak, R; Lytken, E; Ma, H; Ma, L L; Maccarrone, G; Macchiolo, A; Machado Miguens, J; Macina, D; 
Madaffari, D; Madar, R; Maddocks, H J; Mader, W F; Madsen, A; Maeno, M; Maeno, T; Maevskiy, A; Magradze, E; Mahboubi, K; Mahlstedt, J; Mahmoud, S; Maiani, C; Maidantchik, C; Maier, A A; Maio, A; Majewski, S; Makida, Y; Makovec, N; Mal, P; Malaescu, B; Malecki, Pa; Maleev, V P; Malek, F; Mallik, U; Malon, D; Malone, C; Maltezos, S; Malyshev, V M; Malyukov, S; Mamuzic, J; Mandelli, B; Mandelli, L; Mandić, I; Mandrysch, R; Maneira, J; Manfredini, A; Manhaes de Andrade Filho, L; Manjarres Ramos, J A; Mann, A; Manning, P M; Manousakis-Katsikakis, A; Mansoulie, B; Mantifel, R; Mapelli, L; March, L; Marchand, J F; Marchiori, G; Marcisovsky, M; Marino, C P; Marjanovic, M; Marques, C N; Marroquim, F; Marsden, S P; Marshall, Z; Marti, L F; Marti-Garcia, S; Martin, B; Martin, B; Martin, T A; Martin, V J; Martin Dit Latour, B; Martinez, H; Martinez, M; Martin-Haugh, S; Martyniuk, A C; Marx, M; Marzano, F; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Massa, I; Massa, L; Massol, N; Mastrandrea, P; Mastroberardino, A; Masubuchi, T; Mättig, P; Mattmann, J; Maurer, J; Maxfield, S J; Maximov, D A; Mazini, R; Mazzaferro, L; Mc Goldrick, G; Mc Kee, S P; McCarn, A; McCarthy, R L; McCarthy, T G; McCubbin, N A; McFarlane, K W; Mcfayden, J A; Mchedlidze, G; McMahon, S J; McPherson, R A; Mechnich, J; Medinnis, M; Meehan, S; Mehlhase, S; Mehta, A; Meier, K; Meineck, C; Meirose, B; Melachrinos, C; Mellado Garcia, B R; Meloni, F; Mengarelli, A; Menke, S; Meoni, E; Mercurio, K M; Mergelmeyer, S; Meric, N; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Merritt, H; Messina, A; Metcalfe, J; Mete, A S; Meyer, C; Meyer, C; Meyer, J-P; Meyer, J; Middleton, R P; Migas, S; Mijović, L; Mikenberg, G; Mikestikova, M; Mikuž, M; Milic, A; Miller, D W; Mills, C; Milov, A; Milstead, D A; Milstein, D; Minaenko, A A; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Ming, Y; Mir, L M; Mirabelli, G; Mitani, T; Mitrevski, J; Mitsou, V A; Mitsui, S; Miucci, A; Miyagawa, P S; 
Mjörnmark, J U; Moa, T; Mochizuki, K; Mohapatra, S; Mohr, W; Molander, S; Moles-Valls, R; Mönig, K; Monini, C; Monk, J; Monnier, E; Montejo Berlingen, J; Monticelli, F; Monzani, S; Moore, R W; Morange, N; Moreno, D; Moreno Llácer, M; Morettini, P; Morgenstern, M; Morii, M; Moritz, S; Morley, A K; Mornacchi, G; Morris, J D; Morvaj, L; Moser, H G; Mosidze, M; Moss, J; Motohashi, K; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Muanza, S; Mudd, R D; Mueller, F; Mueller, J; Mueller, K; Mueller, T; Mueller, T; Muenstermann, D; Munwes, Y; Murillo Quijada, J A; Murray, W J; Musheghyan, H; Musto, E; Myagkov, A G; Myska, M; Nackenhorst, O; Nadal, J; Nagai, K; Nagai, R; Nagai, Y; Nagano, K; Nagarkar, A; Nagasaka, Y; Nagel, M; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Namasivayam, H; Nanava, G; Narayan, R; Nattermann, T; Naumann, T; Navarro, G; Nayyar, R; Neal, H A; Nechaeva, P Yu; Neep, T J; Nef, P D; Negri, A; Negri, G; Negrini, M; Nektarijevic, S; Nellist, C; Nelson, A; Nelson, T K; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neves, R M; Nevski, P; Newman, P R; Nguyen, D H; Nickerson, R B; Nicolaidou, R; Nicquevert, B; Nielsen, J; Nikiforou, N; Nikiforov, A; Nikolaenko, V; Nikolic-Audit, I; Nikolics, K; Nikolopoulos, K; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nodulman, L; Nomachi, M; Nomidis, I; Norberg, S; Nordberg, M; Novgorodova, O; Nowak, S; Nozaki, M; Nozka, L; Ntekas, K; Nunes Hanninger, G; Nunnemann, T; Nurse, E; Nuti, F; O'Brien, B J; O'grady, F; O'Neil, D C; O'Shea, V; Oakham, F G; Oberlack, H; Obermann, T; Ocariz, J; Ochi, A; Ochoa, M I; Oda, S; Odaka, S; Ogren, H; Oh, A; Oh, S H; Ohm, C C; Ohman, H; Okamura, W; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Olchevski, A G; Olivares Pino, S A; Oliveira Damazio, D; Oliver Garcia, E; Olszewski, A; Olszowska, J; Onofre, A; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Oropeza Barrera, C; Orr, R S; Osculati, B; 
Ospanov, R; Otero Y Garzon, G; Otono, H; Ouchrif, M; Ouellette, E A; Ould-Saada, F; Ouraou, A; Oussoren, K P; Ouyang, Q; Ovcharova, A; Owen, M; Ozcan, V E; Ozturk, N; Pachal, K; Pacheco Pages, A; Padilla Aranda, C; Pagáčová, M; Pagan Griso, S; Paganis, E; Pahl, C; Paige, F; Pais, P; Pajchel, K; Palacino, G; Palestini, S; Palka, M; Pallin, D; Palma, A; Palmer, J D; Pan, Y B; Panagiotopoulou, E; Panduro Vazquez, J G; Pani, P; Panikashvili, N; Panitkin, S; Pantea, D; Paolozzi, L; Papadopoulou, Th D; Papageorgiou, K; Paramonov, A; Paredes Hernandez, D; Parker, M A; Parodi, F; Parsons, J A; Parzefall, U; Pasqualucci, E; Passaggio, S; Passeri, A; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N D; Pater, J R; Patricelli, S; Pauly, T; Pearce, J; Pedersen, L E; Pedersen, M; Pedraza Lopez, S; Pedro, R; Peleganchuk, S V; Pelikan, D; Peng, H; Penning, B; Penwell, J; Perepelitsa, D V; Perez Codina, E; Pérez García-Estañ, M T; Perez Reale, V; Perini, L; Pernegger, H; Perrella, S; Perrino, R; Peschke, R; Peshekhonov, V D; Peters, K; Peters, R F Y; Petersen, B A; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petrolo, E; Petrucci, F; Pettersson, N E; Pezoa, R; Phillips, P W; Piacquadio, G; Pianori, E; Picazio, A; Piccaro, E; Piccinini, M; Piegaia, R; Pignotti, D T; Pilcher, J E; Pilkington, A D; Pina, J; Pinamonti, M; Pinder, A; Pinfold, J L; Pingel, A; Pinto, B; Pires, S; Pitt, M; Pizio, C; Plazak, L; Pleier, M-A; Pleskot, V; Plotnikova, E; Plucinski, P; Poddar, S; Podlyski, F; Poettgen, R; Poggioli, L; Pohl, D; Pohl, M; Polesello, G; Policicchio, A; Polifka, R; Polini, A; Pollard, C S; Polychronakos, V; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Portell Bueso, X; Pospisil, S; Potamianos, K; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Pralavorio, P; Pranko, A; Prasad, S; Pravahan, R; Prell, S; Price, D; Price, J; Price, L E; Prieur, D; Primavera, M; Proissl, M; Prokofiev, K; Prokoshin, F; 
Protopapadaki, E; Protopopescu, S; Proudfoot, J; Przybycien, M; Przysiezniak, H; Ptacek, E; Puddu, D; Pueschel, E; Puldon, D; Purohit, M; Puzo, P; Qian, J; Qin, G; Qin, Y; Quadt, A; Quarrie, D R; Quayle, W B; Queitsch-Maitland, M; Quilty, D; Qureshi, A; Radeka, V; Radescu, V; Radhakrishnan, S K; Radloff, P; Rados, P; Ragusa, F; Rahal, G; Rajagopalan, S; Rammensee, M; Randle-Conde, A S; Rangel-Smith, C; Rao, K; Rauscher, F; Rave, T C; Ravenscroft, T; Raymond, M; Read, A L; Readioff, N P; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Rehnisch, L; Reisin, H; Relich, M; Rembser, C; Ren, H; Ren, Z L; Renaud, A; Rescigno, M; Resconi, S; Rezanova, O L; Reznicek, P; Rezvani, R; Richter, R; Ridel, M; Rieck, P; Rieger, J; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Ritsch, E; Riu, I; Rizatdinova, F; Rizvi, E; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Roda, C; Rodrigues, L; Roe, S; Røhne, O; Rolli, S; Romaniouk, A; Romano, M; Romero Adam, E; Rompotis, N; Ronzani, M; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, M; Rose, P; Rosendahl, P L; Rosenthal, O; Rossetti, V; Rossi, E; Rossi, L P; Rosten, R; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rubinskiy, I; Rud, V I; Rudolph, C; Rudolph, M S; Rühr, F; Ruiz-Martinez, A; Rurikova, Z; Rusakovich, N A; Ruschke, A; Rutherfoord, J P; Ruthmann, N; Ryabov, Y F; Rybar, M; Rybkin, G; Ryder, N C; Saavedra, A F; Sacerdoti, S; Saddique, A; Sadeh, I; Sadrozinski, H F-W; Sadykov, R; Safai Tehrani, F; Sakamoto, H; Sakurai, Y; Salamanna, G; Salamon, A; Saleem, M; Salek, D; Sales De Bruin, P H; Salihagic, D; Salnikov, A; Salt, J; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sampsonidis, D; Sanchez, A; Sánchez, J; Sanchez Martinez, V; Sandaker, H; Sandbach, R L; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, T; Sandoval, C; Sandstroem, R; Sankey, D P C; Sansoni, A; Santoni, C; Santonico, R; Santos, H; Santoyo Castillo, I; Sapp, 
K; Sapronov, A; Saraiva, J G; Sarrazin, B; Sartisohn, G; Sasaki, O; Sasaki, Y; Sauvage, G; Sauvan, E; Savard, P; Savu, D O; Sawyer, C; Sawyer, L; Saxon, D H; Saxon, J; Sbarra, C; Sbrizzi, A; Scanlon, T; Scannicchio, D A; Scarcella, M; Scarfone, V; Schaarschmidt, J; Schacht, P; Schaefer, D; Schaefer, R; Schaepe, S; Schaetzel, S; Schäfer, U; Schaffer, A C; Schaile, D; Schamberger, R D; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Scherzer, M I; Schiavi, C; Schieck, J; Schillo, C; Schioppa, M; Schlenker, S; Schmidt, E; Schmieden, K; Schmitt, C; Schmitt, S; Schneider, B; Schnellbach, Y J; Schnoor, U; Schoeffel, L; Schoening, A; Schoenrock, B D; Schorlemmer, A L S; Schott, M; Schouten, D; Schovancova, J; Schramm, S; Schreyer, M; Schroeder, C; Schuh, N; Schultens, M J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwanenberger, C; Schwartzman, A; Schwarz, T A; Schwegler, Ph; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Schwoerer, M; Sciacca, F G; Scifo, E; Sciolla, G; Scott, W G; Scuri, F; Scutti, F; Searcy, J; Sedov, G; Sedykh, E; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekula, S J; Selbach, K E; Seliverstov, D M; Sellers, G; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Serre, T; Seuster, R; Severini, H; Sfiligoj, T; Sforza, F; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shang, R; Shank, J T; Shapiro, M; Shatalov, P B; Shaw, K; Shehu, C Y; Sherwood, P; Shi, L; Shimizu, S; Shimmin, C O; Shimojima, M; Shiyakova, M; Shmeleva, A; Shochet, M J; Short, D; Shrestha, S; Shulga, E; Shupe, M A; Shushkevich, S; Sicho, P; Sidiropoulou, O; Sidorov, D; Sidoti, A; Siegert, F; Sijacki, Dj; Silva, J; Silver, Y; Silverstein, D; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simoniello, R; Simonyan, M; Sinervo, P; Sinev, N B; Sipica, V; Siragusa, G; Sircar, A; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skottowe, H P; Skovpen, K Yu; Skubic, P; 
Slater, M; Slavicek, T; Sliwa, K; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, K M; Smizanska, M; Smolek, K; Snesarev, A A; Snidero, G; Snyder, S; Sobie, R; Socher, F; Soffer, A; Soh, D A; Solans, C A; Solar, M; Solc, J; Soldatov, E Yu; Soldevila, U; Solodkov, A A; Soloshenko, A; Solovyanov, O V; Solovyev, V; Sommer, P; Song, H Y; Soni, N; Sood, A; Sopczak, A; Sopko, B; Sopko, V; Sorin, V; Sosebee, M; Soualah, R; Soueid, P; Soukharev, A M; South, D; Spagnolo, S; Spanò, F; Spearman, W R; Spettel, F; Spighi, R; Spigo, G; Spiller, L A; Spousta, M; Spreitzer, T; Spurlock, B; Denis, R D St; Staerz, S; Stahlman, J; Stamen, R; Stamm, S; Stanecka, E; Stanek, R W; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Stavina, P; Steinberg, P; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stern, S; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoicea, G; Stolte, P; Stonjek, S; Stradling, A R; Straessner, A; Stramaglia, M E; Strandberg, J; Strandberg, S; Strandlie, A; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Stroynowski, R; Strubig, A; Stucci, S A; Stugu, B; Styles, N A; Su, D; Su, J; Subramaniam, R; Succurro, A; Sugaya, Y; Suhr, C; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, S; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, Y; Svatos, M; Swedish, S; Swiatlowski, M; Sykora, I; Sykora, T; Ta, D; Taccini, C; Tackmann, K; Taenzer, J; Taffard, A; Tafirout, R; Taiblum, N; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A A; Tam, J Y C; Tan, K G; Tanaka, J; Tanaka, R; Tanaka, S; Tanaka, S; Tanasijczuk, A J; Tannenwald, B B; Tannoury, N; Tapprogge, S; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tashiro, T; Tassi, E; Tavares Delgado, A; Tayalati, Y; Taylor, F E; Taylor, G N; Taylor, W; Teischinger, F A; Teixeira Dias Castanheira, M; 
Teixeira-Dias, P; Temming, K K; Ten Kate, H; Teng, P K; Teoh, J J; Terada, S; Terashi, K; Terron, J; Terzo, S; Testa, M; Teuscher, R J; Therhaag, J; Theveneaux-Pelzer, T; Thomas, J P; Thomas-Wilsker, J; Thompson, E N; Thompson, P D; Thompson, P D; Thompson, R J; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thong, W M; Thun, R P; Tian, F; Tibbetts, M J; Tikhomirov, V O; Tikhonov, Yu A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todorov, T; Todorova-Nova, S; Toggerson, B; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tolley, E; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Topilin, N D; Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Tran, H L; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trovatelli, M; True, P; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsirintanis, N; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsuno, S; Tsybychev, D; Tudorache, A; Tudorache, V; Tuna, A N; Tupputi, S A; Turchikhin, S; Turecek, D; Turk Cakir, I; Turra, R; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Uchida, K; Ueda, I; Ueno, R; Ughetto, M; Ugland, M; Uhlenbrock, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Unverdorben, C; Urbaniec, D; Urquijo, P; Usai, G; Usanova, A; Vacavant, L; Vacek, V; Vachon, B; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; Valls Ferrer, J A; Van Den Wollenberg, W; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; Van Der Leeuw, R; van der Ster, D; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; van Woerden, M C; Vanadia, M; Vandelli, W; Vanguri, R; Vaniachine, A; Vankov, P; Vannucci, F; Vardanyan, G; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vazeille, F; Vazquez Schroeder, T; Veatch, J; Veloso, 
F; Veneziano, S; Ventura, A; Ventura, D; Venturi, M; Venturi, N; Venturini, A; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Viazlo, O; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Vigne, R; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinogradov, V B; Virzi, J; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, A; Vogel, M; Vokac, P; Volpi, G; Volpi, M; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorobev, K; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vu Anh, T; Vuillermet, R; Vukotic, I; Vykydal, Z; Wagner, P; Wagner, W; Wahlberg, H; Wahrmund, S; Wakabayashi, J; Walder, J; Walker, R; Walkowiak, W; Wall, R; Waller, P; Walsh, B; Wang, C; Wang, C; Wang, F; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, X; Wanotayaroj, C; Warburton, A; Ward, C P; Wardrope, D R; Warsinsky, M; Washbrook, A; Wasicki, C; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, B M; Webb, S; Weber, M S; Weber, S W; Webster, J S; Weidberg, A R; Weigell, P; Weinert, B; Weingarten, J; Weiser, C; Weits, H; Wells, P S; Wenaus, T; Wendland, D; Weng, Z; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Wessels, M; Wetter, J; Whalen, K; White, A; White, M J; White, R; White, S; Whiteson, D; Wicke, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wijeratne, P A; Wildauer, A; Wildt, M A; Wilkens, H G; Will, J Z; Williams, H H; Williams, S; Willis, C; Willocq, S; Wilson, A; Wilson, J A; Wingerter-Seez, I; Winklmeier, F; Winter, B T; Wittgen, M; Wittig, T; Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wright, M; Wu, M; Wu, S L; Wu, X; Wu, Y; Wulf, E; Wyatt, T R; Wynne, B M; Xella, S; Xiao, M; Xu, D; Xu, L; Yabsley, B; Yacoob, S; Yakabe, R; Yamada, M; Yamaguchi, H; 
Yamaguchi, Y; Yamamoto, A; Yamamoto, K; Yamamoto, S; Yamamura, T; Yamanaka, T; Yamauchi, K; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, U K; Yang, Y; Yanush, S; Yao, L; Yao, W-M; Yasu, Y; Yatsenko, E; Yau Wong, K H; Ye, J; Ye, S; Yeletskikh, I; Yen, A L; Yildirim, E; Yilmaz, M; Yoosoofmiya, R; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D R; Yu, J; Yu, J M; Yu, J; Yuan, L; Yurkewicz, A; Yusuff, I; Zabinski, B; Zaidan, R; Zaitsev, A M; Zaman, A; Zambito, S; Zanello, L; Zanzi, D; Zeitnitz, C; Zeman, M; Zemla, A; Zengel, K; Zenin, O; Ženiš, T; Zerwas, D; Zevi Della Porta, G; Zhang, D; Zhang, F; Zhang, H; Zhang, J; Zhang, L; Zhang, X; Zhang, Z; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, L; Zhou, N; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhukov, K; Zibell, A; Zieminska, D; Zimine, N I; Zimmermann, C; Zimmermann, R; Zimmermann, S; Zimmermann, S; Zinonos, Z; Ziolkowski, M; Zobernig, G; Zoccoli, A; Zur Nedden, M; Zurzolo, G; Zutshi, V; Zwalinski, L
The performance of the ATLAS muon trigger system is evaluated with proton-proton collision data collected in 2012 at the Large Hadron Collider at a centre-of-mass energy of 8 TeV. It is primarily evaluated using events containing a pair of muons from the decay of Z bosons. The efficiency of the single-muon trigger is measured for muons with transverse momentum pT > 25 GeV, with a statistical uncertainty of less than 0.01% and a systematic uncertainty of 0.6%. The pT range for efficiency determination is extended by using muons from decays of J/ψ mesons, W bosons, and top quarks. The muon trigger shows highly uniform and stable performance. The performance is compared to the prediction of a detailed simulation.
Climate Sensitivity Controls Uncertainty in Future Terrestrial Carbon Sink
NASA Astrophysics Data System (ADS)
Schurgers, Guy; Ahlström, Anders; Arneth, Almut; Pugh, Thomas A. M.; Smith, Benjamin
2018-05-01
For the 21st century, carbon cycle models typically project an increase of terrestrial carbon with increasing atmospheric CO2 and a decrease with the accompanying climate change. However, these estimates are poorly constrained, primarily because they typically rely on a limited number of emission and climate scenarios. Here we explore a wide range of combinations of CO2 rise and climate change and assess their likelihood with the climate change responses obtained from climate models. Our results demonstrate that the terrestrial carbon uptake depends critically on the climate sensitivity of the individual climate models, which represents a large source of uncertainty in model estimates. In our simulations, the terrestrial biosphere is unlikely to become a strong source of carbon with any likely combination of CO2 and climate change in the absence of land use change, but the fraction of the emissions taken up by the terrestrial biosphere will decrease drastically with higher emissions.
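The carbon-concentration and carbon-climate feedbacks underlying this result are often summarized linearly as ΔC_land = β·ΔCO2 + γ·ΔT. A minimal sketch of how climate sensitivity drives the spread in projected uptake; the β, γ, and sensitivity values are illustrative assumptions, not numbers from the study:

```python
import math

# Linearized feedback framework often used to summarize terrestrial carbon
# responses: dC_land = beta*dCO2 + gamma*dT.  The beta, gamma, and
# climate-sensitivity values below are illustrative, not results from the study.

def land_carbon_change(d_co2_ppm, climate_sensitivity_k, beta=1.2, gamma=-80.0):
    """Land carbon change (PgC) for a CO2 rise and the warming it implies."""
    # Warming implied by the CO2 rise under this model's climate sensitivity.
    d_temp = climate_sensitivity_k * math.log((280.0 + d_co2_ppm) / 280.0) / math.log(2.0)
    return beta * d_co2_ppm + gamma * d_temp

# Same CO2 pathway, different climate sensitivities: the spread in projected
# uptake is driven by the sensitivity entering through the gamma*dT term.
uptake_low_sens = land_carbon_change(300.0, climate_sensitivity_k=2.0)
uptake_high_sens = land_carbon_change(300.0, climate_sensitivity_k=4.5)
```

In this toy setup the low- and high-sensitivity cases differ by roughly 200 PgC for the same CO2 pathway, echoing the point that climate sensitivity, not the emission pathway alone, dominates the spread.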
Theoretical and Experimental K+ + Nucleus Total and Reaction Cross Sections from the KDP-RIA Model
NASA Astrophysics Data System (ADS)
Kerr, L. K.; Clark, B. C.; Hama, S.; Ray, L.; Hoffmann, G. W.
2000-02-01
The 5-dimensional spin-0 form of the Kemmer-Duffin-Petiau (KDP) equation is used to calculate scattering observables [elastic differential cross sections (dσ/dΩ), total cross sections (σTot), and total reaction cross sections (σReac)] and to deduce σTot and σReac from transmission data for K+ + 6Li, 12C, 28Si and 40Ca at several momenta in the range 488-714 MeV/c. Realistic uncertainties are generated for the theoretical predictions. These errors, mainly due to uncertainties associated with the elementary K+ + nucleon amplitudes, are large, which may account for some of the disagreement between experimental and theoretical σTot and σReac. The results suggest that the K+ + nucleon amplitudes need to be much better determined before further improvement in the understanding of these data can occur.
Effect of Refractive Index Variation on Two-Wavelength Interferometry for Fluid Measurements
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R.
1998-01-01
Two-wavelength interferometry can in principle be used to measure changes in both temperature and concentration in a fluid, but measurement errors may be large if the fluid dispersion is small. This paper quantifies the effects of uncertainties in dn/dT and dn/dC on the measured temperature and concentration when using the simple expression dn = (dn/dT)dT + (dn/dC)dC. For the data analyzed here, ammonium chloride in water from -5 to 10 °C over a concentration range of 2-14% and for wavelengths 514.5 and 633 nm, it is shown that the gradients must be known to within 0.015% to produce a modest 10% uncertainty in the measured temperature and concentration. These results show that real care must be taken to ensure the accuracy of refractive index gradients when using two-wavelength interferometry for the simultaneous measurement of temperature and concentration.
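The inversion behind the expression above is a 2×2 linear solve whose determinant is proportional to the fluid's dispersion. A short sketch with invented coefficients (not the paper's ammonium chloride values) shows how a 0.1% coefficient error is strongly amplified when dispersion is weak:

```python
# The two-wavelength inversion solves the 2x2 system
#   dn_i = (dn/dT)_i * dT + (dn/dC)_i * dC   (i = 1, 2)
# for dT and dC.  Coefficient values are invented to mimic a weakly
# dispersive fluid; they are not the paper's ammonium chloride data.

def invert_two_wavelength(dn1, dn2, dndT1, dndC1, dndT2, dndC2):
    det = dndT1 * dndC2 - dndC1 * dndT2   # nearly zero when dispersion is weak
    dT = (dn1 * dndC2 - dn2 * dndC1) / det
    dC = (dndT1 * dn2 - dndT2 * dn1) / det
    return dT, dC

coef = dict(dndT1=-1.00e-4, dndC1=1.800e-3, dndT2=-1.01e-4, dndC2=1.830e-3)
dT_true, dC_true = 2.0, 0.01   # 2 K and a 1% concentration change

# Synthesize exact measurements from the true state, then invert.
dn1 = coef["dndT1"] * dT_true + coef["dndC1"] * dC_true
dn2 = coef["dndT2"] * dT_true + coef["dndC2"] * dC_true
dT_exact, dC_exact = invert_two_wavelength(dn1, dn2, **coef)

# A 0.1% error in one coefficient is amplified by the small determinant.
bad = dict(coef, dndC2=coef["dndC2"] * 1.001)
dT_bad, dC_bad = invert_two_wavelength(dn1, dn2, **bad)
```

With these numbers, the 0.1% coefficient perturbation produces an error of order 10% in the recovered concentration change, which is the amplification mechanism the paper quantifies.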
Stone, M; Collins, A L; Silins, U; Emelko, M B; Zhang, Y S
2014-03-01
There is increasing global concern regarding the impacts of large-scale land disturbance by wildfire on a wide range of water and related ecological services. This study explores the impact of the 2003 Lost Creek wildfire in the Crowsnest River basin, Alberta, Canada on regional-scale sediment sources using a tracing approach. A composite geochemical fingerprinting procedure was used to apportion the sediment efflux among three key spatial sediment sources: 1) unburned (reference), 2) burned, and 3) burned sub-basins that were subsequently salvage logged. Spatial sediment sources were characterized by collecting time-integrated suspended sediment samples using passive devices during the entire ice-free periods in 2009 and 2010. The tracing procedure combines the Kruskal-Wallis H-test, principal component analysis and genetic-algorithm-driven discriminant function analysis for source discrimination. Source apportionment was based on a numerical mass balance model deployed within a Monte Carlo framework incorporating both local optimization and global (genetic algorithm) optimization. The mean relative frequency-weighted average median inputs from the three spatial source units were estimated to be 17% (inter-quartile uncertainty range 0-32%) from the reference areas, 45% (inter-quartile uncertainty range 25-65%) from the burned areas and 38% (inter-quartile uncertainty range 14-59%) from the burned-salvage logged areas. High sediment inputs from the burned and burned-salvage logged areas, representing spatial source units 2 and 3, reflect the lasting effects of forest canopy and forest floor organic matter disturbance during the 2003 wildfire, including increased runoff and sediment availability related to high terrestrial erosion, streamside mass wasting and river bank collapse.
The results demonstrate the impact of wildfire and incremental pressures associated with salvage logging on catchment spatial sediment sources in higher elevation Montane regions where forest growth and vegetation recovery are relatively slow.
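A stripped-down sketch of the mass-balance unmixing step, with invented tracer values and a random search over the simplex standing in for the genetic-algorithm optimization used in the study:

```python
# Hedged sketch of a composite-fingerprinting mass balance: mixture tracer
# concentrations are modelled as a proportion-weighted sum of source means,
# and source proportions are found by minimising the misfit.  A crude random
# simplex search stands in for the study's genetic-algorithm optimization.
# All tracer values below are invented for illustration.
import random

SOURCES = {                      # mean tracer concentrations per source
    "reference": (10.0, 50.0),
    "burned": (30.0, 20.0),
    "burned_logged": (22.0, 35.0),
}
MIXTURE = (22.9, 32.0)           # tracer concentrations in suspended sediment

def misfit(props):
    """Sum of squared relative deviations, modelled vs observed mixture."""
    total = 0.0
    for t in range(2):
        modelled = sum(p * SOURCES[s][t] for p, s in zip(props, SOURCES))
        total += ((modelled - MIXTURE[t]) / MIXTURE[t]) ** 2
    return total

def best_props(n_draws=20000, seed=1):
    """Random proportions uniform on the 3-simplex; keep the best fit."""
    rng = random.Random(seed)
    best, best_f = None, float("inf")
    for _ in range(n_draws):
        a, b = sorted((rng.random(), rng.random()))
        cand = (a, b - a, 1.0 - b)
        f = misfit(cand)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f

props, f = best_props()
```

In a full analysis this search would sit inside a Monte Carlo loop over tracer uncertainty, yielding the inter-quartile ranges quoted in the abstract rather than a single proportion vector.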
Uncertainty estimation of simulated water levels for the Mitch flood event in Tegucigalpa
NASA Astrophysics Data System (ADS)
Fuentes Andino, Diana Carolina; Halldin, Sven; Beven, Keith; Xu, Chong-Yu
2013-04-01
Hurricane Mitch in 1998 left a devastating flood in Tegucigalpa, the capital city of Honduras. Due to the extremely large magnitude of the Mitch flood, hydrometric measurements were not taken during the event. However, post-event indirect measurements of the discharge were obtained by the U.S. Geological Survey (USGS), and post-event surveyed high water marks were obtained by the Japan International Cooperation Agency (JICA). This work proposes a methodology to simulate the water level during the Mitch event when the available data are associated with large uncertainty. The results of the two-dimensional hydrodynamic model LISFLOOD-FP will be evaluated using the Generalized Likelihood Uncertainty Estimation (GLUE) framework. The main challenge in the proposed methodology is to formulate an approach to evaluate the model results when there are large uncertainties coming from both the model parameters and the evaluation data.
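The GLUE framework can be sketched in a few lines: sample parameter sets, score each against uncertain observations with an informal likelihood, keep the "behavioural" sets, and report prediction bounds. The toy stage model and all numbers below are illustrative, not the LISFLOOD-FP setup:

```python
# Minimal GLUE-style sketch (toy model; all numbers illustrative).
import random

def model(roughness):
    """Toy stand-in for a hydrodynamic model: water level (m) vs roughness."""
    return 2.0 + 8.0 * roughness

OBSERVED, OBS_SD = 2.4, 0.3   # e.g. a surveyed high-water mark and its uncertainty

def glue(n=5000, threshold=0.5, seed=7):
    rng = random.Random(seed)
    behavioural = []
    for _ in range(n):
        p = rng.uniform(0.01, 0.10)                 # prior range for the parameter
        sim = model(p)
        # Informal likelihood: triangular weight around the uncertain observation.
        likelihood = max(0.0, 1.0 - abs(sim - OBSERVED) / (2.0 * OBS_SD))
        if likelihood > threshold:                  # behavioural threshold
            behavioural.append(sim)
    behavioural.sort()
    lo = behavioural[int(0.05 * len(behavioural))]
    hi = behavioural[int(0.95 * len(behavioural))]
    return lo, hi, len(behavioural)

lo, hi, n_beh = glue()
```

The key design choice GLUE makes is to treat the likelihood informally, so uncertain evaluation data (indirect discharges, surveyed marks) can still constrain which parameter sets are retained.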
Uncertainty of Polarized Parton Distributions
NASA Astrophysics Data System (ADS)
Hirai, M.; Goto, Y.; Horaguchi, T.; Kobayashi, H.; Kumano, S.; Miyama, M.; Saito, N.; Shibata, T.-A.
Polarized parton distribution functions are determined by a χ2 analysis of polarized deep inelastic experimental data. In this paper, the uncertainty of the obtained distribution functions is investigated by a Hessian method. We find that the uncertainty of the polarized gluon distribution is fairly large. We then estimate the gluon uncertainty by including fake data generated from the prompt photon process at RHIC. We find that the uncertainty could be reduced with these data.
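The Hessian method propagates the χ² curvature around the best fit onto any observable. A two-parameter toy version, assuming the standard prescription [Δf]² = Δχ² ∇f·H⁻¹·∇f (real polarized-PDF fits have many parameters and a correlated Hessian):

```python
# Hessian uncertainty propagation: near the chi^2 minimum,
#   Delta_chi2 = sum_ij H_ij da_i da_j,  H_ij = (1/2) d2chi2/da_i da_j,
# and an observable f(a) gets [df]^2 = Delta_chi2 * grad_f . H^-1 . grad_f.
# The 2-parameter diagonal toy chi^2 below is purely illustrative.

def hessian_uncertainty(grad_f, hess_inv, delta_chi2=1.0):
    """Propagate chi^2-Hessian parameter uncertainty onto observable f."""
    n = len(grad_f)
    quad = sum(grad_f[i] * hess_inv[i][j] * grad_f[j]
               for i in range(n) for j in range(n))
    return (delta_chi2 * quad) ** 0.5

# Toy fit chi2(a) = (a1/s1)^2 + (a2/s2)^2, so H = diag(1/s1^2, 1/s2^2).
s1, s2 = 0.1, 0.4
hess_inv = [[s1 * s1, 0.0], [0.0, s2 * s2]]
df = hessian_uncertainty(grad_f=[1.0, 1.0], hess_inv=hess_inv)  # f = a1 + a2
```

For this diagonal case the result reduces to the familiar quadrature sum sqrt(s1² + s2²); adding new data (such as the RHIC fake data above) shrinks H⁻¹ and hence every propagated uncertainty.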
Uncertainties in Climate Change, Following the Causal Chain from Human Activities
NASA Astrophysics Data System (ADS)
Prather, M. J.; MATCH Group
2009-12-01
As part of a UNFCCC initiative to attribute climate change to individual countries, a research group (MATCH) examined the quantifiable link between emissions and climate change. A constrained propagation of errors was developed that tracks uncertainties from reporting human activities to greenhouse gas emissions, to increasing abundances of greenhouse gases, to radiative forcing of climate, and finally to climate change. As a case study, we consider the causal chain for greenhouse gases emitted by developed nations since national reporting began in 1990. We combine uncertainties in the forward modeling at each step with top-down constraints on the observed changes in greenhouse gases and temperatures, although the propagation of uncertainties remains problematic. In this study, we find that global surface temperature increased by +0.11 °C in 2003 due to the developed nations' emissions of Kyoto greenhouse gases from 1990 to 2002, with a 68%-confidence uncertainty range of +0.08 °C to +0.14 °C. Uncertainties in climate response dominate this overall range, but uncertainties in emissions, particularly for land-use change and forestry and the non-CO2 greenhouse gases, are responsible for almost half. (Figure captions: bar chart of RF components and 68%-confidence intervals averaged over the first and last halves of the 20th century, showing the importance of volcanoes; reduction in atmospheric CO2 (ppm), relative to the observed increase, calculated without Annex-I (reporting) emissions, showing the 16%-to-84%-confidence range.)
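The propagation of errors along the causal chain can be caricatured as a Monte Carlo pipeline. Every number below (means, spreads, and the linearized conversion steps) is an illustrative assumption, not the MATCH group's calibration:

```python
# Monte Carlo sketch of uncertainty propagation along the causal chain
# emissions -> abundance -> radiative forcing -> temperature.
# All distributions and conversion factors are illustrative only.
import random

def sample_chain(rng):
    emissions = rng.gauss(100.0, 10.0)       # PgC, reported with uncertainty
    airborne = rng.gauss(0.45, 0.05)         # airborne fraction
    d_co2 = emissions * airborne / 2.12      # ppm (approx. 2.12 PgC per ppm)
    forcing = 5.35 * (d_co2 / 280.0)         # W/m2, linearized log-forcing term
    climate_resp = rng.gauss(0.5, 0.15)      # K per (W/m2); dominant uncertainty
    return forcing * climate_resp            # warming, K

rng = random.Random(42)
samples = sorted(sample_chain(rng) for _ in range(20000))
median = samples[len(samples) // 2]
lo68 = samples[int(0.16 * len(samples))]    # 68%-confidence range, as in the study
hi68 = samples[int(0.84 * len(samples))]
```

Because the climate-response term carries the largest relative spread, it dominates the width of the final range, mirroring the abstract's finding that climate response dominates while emission uncertainties contribute almost half.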
Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef; Conrad, Patrick; Bigoni, Daniele
QUEST (www.quest-scidac.org) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. 
A key software product of the MIT QUEST effort is the MIT Uncertainty Quantification library, called MUQ (muq.mit.edu).
2011-01-01
Background: Historic carbon emissions are an important foundation for proposed efforts to Reduce Emissions from Deforestation and forest Degradation and enhance forest carbon stocks through conservation and sustainable forest management (REDD+). The level of uncertainty in historic carbon emission estimates is also critical for REDD+, since high uncertainties could limit climate benefits from credited mitigation actions. Here, we analyzed source data uncertainties based on the range of available deforestation, forest degradation, and forest carbon stock estimates for the Brazilian state of Mato Grosso during 1990-2008. Results: Deforestation estimates showed good agreement for multi-year periods of increasing and decreasing deforestation during the study period. However, annual deforestation rates differed by >20% in more than half of the years between 1997 and 2008, even for products based on similar input data. Tier 2 estimates of average forest carbon stocks varied between 99 and 192 Mg C ha-1, with the greatest differences in northwest Mato Grosso. Carbon stocks in deforested areas increased over the study period, yet this increasing trend in deforested biomass was smaller than the difference among carbon stock datasets for these areas. Conclusions: Estimates of source data uncertainties are essential for REDD+. Patterns of spatial and temporal disagreement among available data products provide a roadmap for future efforts to reduce source data uncertainties in estimates of historic forest carbon emissions. Specifically, regions with large discrepancies in available estimates of both deforestation and forest carbon stocks are priority areas for evaluating and improving existing estimates. Full carbon accounting for REDD+ will also require filling data gaps, including forest degradation and secondary forest, with annual data on all forest transitions.
CFHTLenS revisited: assessing concordance with Planck including astrophysical systematics
NASA Astrophysics Data System (ADS)
Joudaki, Shahab; Blake, Chris; Heymans, Catherine; Choi, Ami; Harnois-Deraps, Joachim; Hildebrandt, Hendrik; Joachimi, Benjamin; Johnson, Andrew; Mead, Alexander; Parkinson, David; Viola, Massimo; van Waerbeke, Ludovic
2017-02-01
We investigate the impact of astrophysical systematics on cosmic shear cosmological parameter constraints from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) and the concordance with cosmic microwave background measurements by Planck. We present updated CFHTLenS cosmic shear tomography measurements extended to degree scales using a covariance calibrated by a new suite of N-body simulations. We analyse these measurements with a new model fitting pipeline, accounting for key systematic uncertainties arising from intrinsic galaxy alignments, baryonic effects in the non-linear matter power spectrum, and photometric redshift uncertainties. We examine the impact of the systematic degrees of freedom on the cosmological parameter constraints, both independently and jointly. When the systematic uncertainties are considered independently, the intrinsic alignment amplitude is the only degree of freedom that is substantially preferred by the data. When the systematic uncertainties are considered jointly, there is no consistently strong preference in favour of the more complex models. We quantify the level of concordance between the CFHTLenS and Planck data sets by employing two distinct data concordance tests, grounded in Bayesian evidence and information theory. We find that the two data concordance tests largely agree with one another and that the level of concordance between the CFHTLenS and Planck data sets is sensitive to the exact details of the systematic uncertainties included in our analysis, ranging from decisive discordance to substantial concordance as the treatment of the systematic uncertainties becomes more conservative. The least conservative scenario is the one most favoured by the cosmic shear data, but it is also the one that shows the greatest degree of discordance with Planck. The data and analysis code are publicly available at https://github.com/sjoudaki/cfhtlens_revisited.
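One common evidence-based concordance statistic compares the joint evidence with the product of the individual evidences, ln C = ln Z(D1, D2) − ln Z(D1) − ln Z(D2), with ln C > 0 favouring concordance. A toy version with one Gaussian measurement of a shared parameter per dataset (not the CFHTLenS/Planck likelihoods):

```python
# Evidence-ratio concordance sketch: each dataset is summarized as one
# Gaussian measurement (value, sd) of a shared parameter mu, with a broad
# Gaussian prior on mu.  Evidences are computed by brute-force grid
# marginalization; all numbers are illustrative.
import math

def gauss(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def evidence(data, prior_sd=5.0, grid_n=4001, half_width=25.0):
    """Marginalize the likelihood over a Gaussian prior on mu (grid integral)."""
    step = 2 * half_width / (grid_n - 1)
    total = 0.0
    for i in range(grid_n):
        mu = -half_width + i * step
        like = 1.0
        for d, sd in data:
            like *= gauss(d, mu, sd)
        total += like * gauss(mu, 0.0, prior_sd) * step
    return total

def log_concordance(d1, d2):
    return math.log(evidence(d1 + d2)) - math.log(evidence(d1)) - math.log(evidence(d2))

concordant = log_concordance([(1.0, 0.5)], [(1.2, 0.5)])   # overlapping measurements
discordant = log_concordance([(1.0, 0.5)], [(4.0, 0.5)])   # strongly separated
```

The statistic rewards datasets that prefer the same parameter region and penalizes separated ones, which is the behaviour exploited in the abstract's discordance-to-concordance assessment.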
NASA Astrophysics Data System (ADS)
Bakker, Alexander; Louchard, Domitille; Keller, Klaus
2016-04-01
Sea-level rise threatens many coastal areas around the world. The integrated assessment of potential adaptation and mitigation strategies requires a sound understanding of the upper tails and the major drivers of the uncertainties. Global warming causes sea-level to rise, primarily due to thermal expansion of the oceans and mass loss of the major ice sheets, smaller ice caps and glaciers. These components show distinctly different responses to temperature changes with respect to response time, threshold behavior, and local fingerprints. Projections of these different components are deeply uncertain. Projected uncertainty ranges strongly depend on (necessary) pragmatic choices and assumptions; e.g. on the applied climate scenarios, which processes to include and how to parameterize them, and on error structure of the observations. Competing assumptions are very hard to objectively weigh. Hence, uncertainties of sea-level response are hard to grasp in a single distribution function. The deep uncertainty can be better understood by making clear the key assumptions. Here we demonstrate this approach using a relatively simple model framework. We present a mechanistically motivated, but simple model framework that is intended to efficiently explore the deeply uncertain sea-level response to anthropogenic climate change. The model consists of 'building blocks' that represent the major components of sea-level response and its uncertainties, including threshold behavior. The framework's simplicity enables the simulation of large ensembles allowing for an efficient exploration of parameter uncertainty and for the simulation of multiple combined adaptation and mitigation strategies. The model framework can skilfully reproduce earlier major sea level assessments, but due to the modular setup it can also be easily utilized to explore high-end scenarios and the effect of competing assumptions and parameterizations.
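The 'building blocks' idea can be caricatured as independent first-order components, each responding to warming with its own sensitivity and response time. All values below are invented for illustration:

```python
# Toy version of the 'building blocks' framework: each sea-level component
# follows ds/dt = (alpha*T - s)/tau with its own sensitivity alpha (m/K) and
# response time tau (yr).  Parameter values are invented, not calibrated.

def project_component(temps, alpha, tau, s0=0.0):
    """Integrate one component with yearly time steps; returns sea level (m)."""
    s = s0
    out = []
    for t in temps:
        s += (alpha * t - s) / tau
        out.append(s)
    return out

# Warming ramp: 0 -> 2 K over 100 years.
temps = [2.0 * i / 99 for i in range(100)]
thermal = project_component(temps, alpha=0.4, tau=80.0)    # slow, large equilibrium
glaciers = project_component(temps, alpha=0.15, tau=20.0)  # fast, limited reservoir
total = [a + b for a, b in zip(thermal, glaciers)]
```

The modular structure is the point: swapping in a different parameterization (or a threshold-crossing ice-sheet block) changes one component without touching the rest, which is what makes large ensembles over competing assumptions cheap to run.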
Uncertainty, robustness, and the value of information in managing a population of northern bobwhites
Johnson, Fred A.; Hagan, Greg; Palmer, William E.; Kemmerer, Michael
2014-01-01
The abundance of northern bobwhites (Colinus virginianus) has decreased throughout their range. Managers often respond by considering improvements in harvest and habitat management practices, but this can be challenging if substantial uncertainty exists concerning the cause(s) of the decline. We were interested in how application of decision science could be used to help managers on a large, public management area in southwestern Florida where the bobwhite is a featured species and where abundance has severely declined. We conducted a workshop with managers and scientists to elicit management objectives, alternative hypotheses concerning population limitation in bobwhites, potential management actions, and predicted management outcomes. Using standard and robust approaches to decision making, we determined that improved water management and perhaps some changes in hunting practices would be expected to produce the best management outcomes in the face of uncertainty about what is limiting bobwhite abundance. We used a criterion called the expected value of perfect information to determine that a robust management strategy may perform nearly as well as an optimal management strategy (i.e., a strategy that is expected to perform best, given the relative importance of different management objectives) with all uncertainty resolved. We used the expected value of partial information to determine that management performance could be increased most by eliminating uncertainty over excessive-harvest and human-disturbance hypotheses. Beyond learning about the factors limiting bobwhites, adoption of a dynamic management strategy, which recognizes temporal changes in resource and environmental conditions, might produce the greatest management benefit. 
Our research demonstrates that robust approaches to decision making, combined with estimates of the value of information, can offer considerable insight into preferred management approaches when great uncertainty exists about system dynamics and the effects of management.
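The expected value of perfect information compares the best expected utility achievable under uncertainty with the expectation of acting optimally under each hypothesis. A toy decision table, with utilities and hypothesis weights invented to loosely mirror the alternatives named above:

```python
# Expected value of perfect information (EVPI) sketch.  The utility table and
# hypothesis probabilities are invented for illustration.
UTILITY = {                      # rows: actions; columns: limiting-factor hypotheses
    "water_mgmt":    {"harvest": 6.0, "disturbance": 7.0, "habitat": 5.0},
    "close_hunting": {"harvest": 9.0, "disturbance": 3.0, "habitat": 2.0},
    "do_nothing":    {"harvest": 2.0, "disturbance": 2.0, "habitat": 4.0},
}
WEIGHTS = {"harvest": 0.3, "disturbance": 0.4, "habitat": 0.3}  # hypothesis probabilities

def expected_utility(action):
    return sum(WEIGHTS[h] * UTILITY[action][h] for h in WEIGHTS)

# Best action in the face of uncertainty (maximize expected utility).
best_under_uncertainty = max(UTILITY, key=expected_utility)
eu_uncertain = expected_utility(best_under_uncertainty)

# With perfect information we would pick the best action per hypothesis first.
eu_perfect = sum(WEIGHTS[h] * max(UTILITY[a][h] for a in UTILITY) for h in WEIGHTS)
evpi = eu_perfect - eu_uncertain
```

A small EVPI, as in this toy table, is exactly the situation the abstract describes: the robust action already performs nearly as well as the optimal action would with all uncertainty resolved.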
Large Uncertainty in Estimating pCO2 From Carbonate Equilibria in Lakes
NASA Astrophysics Data System (ADS)
Golub, Malgorzata; Desai, Ankur R.; McKinley, Galen A.; Remucal, Christina K.; Stanley, Emily H.
2017-11-01
Most estimates of carbon dioxide (CO2) evasion from freshwaters rely on calculating partial pressure of aquatic CO2 (pCO2) from two out of three CO2-related parameters using carbonate equilibria. However, the pCO2 uncertainty has not been systematically evaluated across multiple lake types and equilibria. We quantified random errors in pH, dissolved inorganic carbon, alkalinity, and temperature from the North Temperate Lakes Long-Term Ecological Research site in four lake groups across a broad gradient of chemical composition. These errors were propagated onto pCO2 calculated from three carbonate equilibria, and for overlapping observations, compared against uncertainties in directly measured pCO2. The empirical random errors in CO2-related parameters were mostly below 2% of their median values. Resulting random pCO2 errors ranged from ±3.7% to ±31.5% of the median depending on alkalinity group and choice of input parameter pairs. Temperature uncertainty had a negligible effect on pCO2. When compared with direct pCO2 measurements, all parameter combinations produced biased pCO2 estimates with less than one third of total uncertainty explained by random pCO2 errors, indicating that systematic uncertainty dominates over random error. Multidecadal trend of pCO2 was difficult to reconstruct from uncertain historical observations of CO2-related parameters. Given poor precision and accuracy of pCO2 estimates derived from virtually any combination of two CO2-related parameters, we recommend direct pCO2 measurements where possible. To achieve consistently robust estimates of CO2 emissions from freshwater components of terrestrial carbon balances, future efforts should focus on improving accuracy and precision of CO2-related parameters (including direct pCO2) measurements and associated pCO2 calculations.
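Calculating pCO2 from the pH/DIC pair and propagating random measurement errors can be sketched as follows. The equilibrium constants are rough freshwater values at about 25 °C and are assumptions for illustration; real calculations need temperature- and composition-corrected constants:

```python
# pCO2 from the pH/DIC pair via carbonate equilibria, with Monte Carlo
# propagation of random measurement errors.  K1, K2, KH are rough
# freshwater values near 25 C, used here purely for illustration.
import math, random

K1, K2, KH = 10 ** -6.35, 10 ** -10.33, 10 ** -1.47   # mol/L, mol/L, mol/(L*atm)

def pco2_from_ph_dic(ph, dic_mol):
    """CO2* = DIC / (1 + K1/h + K1*K2/h^2); pCO2 = CO2*/KH, in uatm."""
    h = 10 ** -ph
    co2_star = dic_mol / (1.0 + K1 / h + K1 * K2 / h ** 2)
    return 1e6 * co2_star / KH

def monte_carlo(ph, dic, ph_sd=0.02, dic_rel_sd=0.01, n=5000, seed=3):
    """Median and 95% bounds of pCO2 under random pH and DIC errors."""
    rng = random.Random(seed)
    draws = [pco2_from_ph_dic(rng.gauss(ph, ph_sd), dic * rng.gauss(1.0, dic_rel_sd))
             for _ in range(n)]
    draws.sort()
    return draws[n // 2], draws[int(0.025 * n)], draws[int(0.975 * n)]

median, lo, hi = monte_carlo(ph=8.0, dic=2.0e-3)
```

Even sub-percent random errors in pH and DIC translate into several-percent spread in the calculated pCO2, because CO2* is a small residual fraction of DIC at high pH; this is the random-error component the abstract quantifies before noting that systematic error dominates it.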
Radiation health for a Mars mission
NASA Technical Reports Server (NTRS)
Robbins, Donald E.
1992-01-01
Uncertainties in risk assessments for exposure of a Mars mission crew to space radiation place limitations on mission design and operation. Large shielding penalties are imposed in order to obtain acceptable safety margins. Galactic cosmic rays (GCR) and solar particle events (SPE) are the major concern. A warning system and 'safe haven' are needed to protect the crew from large SPEs, which produce lethal doses. A model developed at NASA Johnson Space Center (JSC) to describe solar modulation of GCR intensities reduces that uncertainty to less than 10 percent. Radiation transport models used to design spacecraft shielding have large uncertainties in nuclear fragmentation cross sections for GCR which interact with spacecraft materials. Planned space measurements of linear energy transfer (LET) spectra behind various shielding thicknesses will reduce uncertainties in dose-versus-shielding-thickness relationships to 5-10 percent. The largest remaining uncertainty is in the biological effects of space radiation. Data on the effects of energetic ions in humans are nonexistent. Experimental research on effects in animals and cells is needed to allow extrapolation to the risk of carcinogenesis in humans.
Assessment of Uncertainty-Infused Scientific Argumentation
ERIC Educational Resources Information Center
Lee, Hee-Sun; Liu, Ou Lydia; Pallant, Amy; Roohr, Katrina Crotts; Pryputniewicz, Sarah; Buck, Zoë E.
2014-01-01
Though addressing sources of uncertainty is an important part of doing science, it has largely been neglected in assessing students' scientific argumentation. In this study, we initially defined a scientific argumentation construct in four structural elements consisting of claim, justification, uncertainty qualifier, and uncertainty…
NASA Technical Reports Server (NTRS)
Martin, M. W.; Kubiak, E. T.
1982-01-01
A new design was developed for the Space Shuttle Transition Phase Digital Autopilot to reduce the impact of large measurement uncertainties in the rate signal during attitude control. The signal source, which was dictated by early computer constraints, is characterized by large quantization, noise, bias, and transport lag, which produce a measurement uncertainty larger than the minimum-impulse rate change. To ensure convergence to a minimum-impulse limit cycle, the design employed bias and transport lag compensation and a switching logic with hysteresis, a rate deadzone, and a 'walking' switching line. The design background, the rate measurement uncertainties, and the design solution are documented.
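A generic phase-plane switching logic with hysteresis and a rate deadzone, in the spirit of (but not reproducing) the Shuttle design described above; all constants and the switching-line form are invented for illustration:

```python
# Illustrative attitude-control switching logic with hysteresis and a rate
# deadzone.  The switching line attitude_err + slope*rate_err = +/-deadband,
# and all constants, are invented; this is not the Shuttle autopilot logic.

def thruster_command(attitude_err, rate_err, firing, deadband=1.0,
                     hysteresis=0.2, rate_deadzone=0.05, slope=2.0):
    """Return a -1/0/+1 jet command from attitude error (deg) and rate (deg/s).

    Hysteresis keeps a jet firing until the state re-crosses an inner line,
    preventing chatter; the rate deadzone ignores rate measurements smaller
    than the expected noise so the controller does not chase noise.
    """
    if abs(rate_err) < rate_deadzone:
        rate_err = 0.0                  # suppress rate-signal noise
    switch = attitude_err + slope * rate_err
    limit = deadband - (hysteresis if firing else 0.0)
    if switch > limit:
        return -1                       # fire to reduce a positive error
    if switch < -limit:
        return +1
    return 0

cmd_on = thruster_command(1.5, 0.0, firing=False)      # outside deadband: fire
cmd_stay_on = thruster_command(0.9, 0.0, firing=True)  # hysteresis holds the jet on
cmd_off = thruster_command(0.5, 0.0, firing=False)     # inside deadband: coast
```

The hysteresis band is what allows convergence to a clean limit cycle despite rate-measurement uncertainty: a jet that has started firing is not shut off the instant the noisy state dips back inside the outer switching line.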
Uncertainties in future-proof decision-making: the Dutch Delta Model
NASA Astrophysics Data System (ADS)
IJmker, Janneke; Snippen, Edwin; Ruijgh, Erik
2013-04-01
In 1953, a number of European countries experienced flooding after a major storm event coming from the northwest. Over 2100 people died in the resulting floods, 1800 of them Dutch. This gave rise to the development of the so-called Delta Works and Zuiderzee Works that strongly reduced the flood risk in the Netherlands. These measures were a response to a large flooding event. As boundary conditions have changed (increasing population, increasing urban development, etc.), the flood risk should be evaluated continuously, and measures should be taken if necessary. The Delta Programme was designed to prepare for future changes and to limit the flood risk, taking into account economics, nature, landscape, residence and recreation. To support decisions in the Delta Programme, the Delta Model was developed. By using four different input scenarios (extremes in climate and economics) and variations in system setup, the outcomes of the Delta Model represent a range of possible outcomes for the hydrological situation in 2050 and 2100. These results flow into effect models that give insight into the integrated effects on freshwater supply (including navigation, industry and ecology) and flood risk. As the long-term water management policy of the Netherlands for the next decades will be based on these results, they have to be reliable. Therefore, a study was carried out to investigate the impact of uncertainties on the model outcomes. The study focused on "known unknowns": uncertainties in the boundary conditions, in the parameterization and in the model itself. This showed that for different parts of the Netherlands, the total uncertainty is in the order of meters! Nevertheless, (1) the total uncertainty is dominated by uncertainties in boundary conditions. Internal model uncertainties are subordinate to that. 
Furthermore, (2) the model responses develop in a logical way: the exact model outcomes may be uncertain, but the outcomes of different model runs are reliable relative to each other. The Delta Model is therefore a reliable instrument for finding the optimal water management policy for the future. Because the exact model outcomes show a high degree of uncertainty, the model analysis will be based on a large number of model runs to gain insight into the sensitivity of the model to different setups and boundary conditions. The results allow fast investigation of the (relative) effects of measures and help to identify bottlenecks in the system. To summarize, the Delta Model is a tool that lets policy makers base their strategies on quantitative rather than qualitative information. It can be applied to the current and future situation, and it feeds the political discussion. The uncertainty of the model has no determinative effect on the analyses that can be done with the Delta Model.
The state of the art of the impact of sampling uncertainty on measurement uncertainty
NASA Astrophysics Data System (ADS)
Leite, V. J.; Oliveira, E. C.
2018-03-01
Measurement uncertainty is a parameter that characterizes the reliability of a result and can be divided into two large groups: sampling and analytical variations. Analytical uncertainty arises from a controlled process performed in the laboratory. The same does not hold for sampling uncertainty, which has been neglected because it faces several practical obstacles and there is little clarity on how to perform the procedures, although it is admittedly indispensable to the measurement process. This paper describes the state of the art of sampling uncertainty and assesses its relevance to measurement uncertainty.
Can hydraulic-modelled rating curves reduce uncertainty in high flow data?
NASA Astrophysics Data System (ADS)
Westerberg, Ida; Lam, Norris; Lyon, Steve W.
2017-04-01
Flood risk assessments rely on accurate discharge data records. Establishing a reliable rating curve for calculating discharge from stage at a gauging station normally takes years of data collection efforts. Estimation of high flows is particularly difficult as high flows occur rarely and are often practically difficult to gauge. Hydraulically-modelled rating curves can be derived from as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions. This means that a reliable rating curve can, potentially, be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. In this study we compared the uncertainty in discharge data that resulted from these two rating curve modelling approaches. We applied both methods to a Swedish catchment, accounting for uncertainties in the stage-discharge gauging and water-surface slope data for the hydraulic model, and in the stage-discharge gauging data and rating-curve parameters for the traditional method. We focused our analyses on high-flow uncertainty and the factors that could reduce this uncertainty. In particular, we investigated which data uncertainties were most important, and at what flow conditions the gaugings should preferably be taken. First results show that the hydraulically-modelled rating curves were more sensitive to uncertainties in the calibration measurements of discharge than in water-surface slope. The uncertainty of the hydraulically-modelled rating curves was lowest within the range of the three calibration stage-discharge gaugings (i.e. between median and two-times-median flow), whereas uncertainties were higher outside of this range. For instance, at the highest observed stage of the 24-year stage record, the 90% uncertainty band was -15% to +40% of the official rating curve. Additional gaugings at high flows (i.e. four to five times median flow) would likely substantially reduce those uncertainties.
These first results show the potential of the hydraulically-modelled curves, particularly where the calibration gaugings are of high quality and cover a wide range of flow conditions.
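The traditional rating-curve method discussed above propagates parameter uncertainty into discharge estimates. A minimal Monte Carlo sketch of that idea, using a hypothetical power-law rating curve Q = a(h - h0)^b with made-up parameter distributions (none of these numbers come from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical power-law rating curve Q = a * (h - h0)^b with uncertain
# parameters; all values below are illustrative, not from the study.
n = 10_000
a  = rng.normal(1.8, 0.2, n)    # scale coefficient
b  = rng.normal(2.1, 0.15, n)   # exponent
h0 = rng.normal(0.10, 0.02, n)  # cease-to-flow stage [m]

h = 2.5  # a high-flow stage [m]
Q = a * (h - h0) ** b  # discharge ensemble [m^3/s]

lo, mid, hi = np.percentile(Q, [5, 50, 95])
print(f"median {mid:.1f} m^3/s, 90% band "
      f"{100*(lo/mid-1):+.0f}% to {100*(hi/mid-1):+.0f}%")
```

The asymmetric percentage band mirrors the kind of uncertainty bounds reported for the official rating curve above.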
Accounting for range uncertainties in the optimization of intensity modulated proton therapy.
Unkelbach, Jan; Chan, Timothy C Y; Bortfeld, Thomas
2007-05-21
Treatment plans optimized for intensity modulated proton therapy (IMPT) may be sensitive to range variations. The dose distribution may deteriorate substantially when the actual range of a pencil beam does not match the assumed range. We present two treatment planning concepts for IMPT which incorporate range uncertainties into the optimization. The first method is a probabilistic approach. The range of a pencil beam is assumed to be a random variable, which makes the delivered dose and the value of the objective function random variables too. We then propose to optimize the expectation value of the objective function. The second approach is a robust formulation that applies methods developed in the field of robust linear programming. This approach optimizes the worst-case dose distribution that may occur, assuming that the ranges of the pencil beams may vary within some interval. Both methods yield treatment plans that are considerably less sensitive to range variations than conventional treatment plans optimized without accounting for range uncertainties. In addition, both approaches--although conceptually different--yield very similar results on a qualitative level.
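The two planning concepts can be contrasted on a toy problem: minimizing the expected objective over range scenarios versus minimizing the worst-case objective. The dose-influence matrices and scenario set below are invented for illustration, not clinical data:

```python
import numpy as np

# Toy setup (illustrative numbers, not clinical): two pencil beams and
# three equiprobable range scenarios. D[s] maps beam weights w to dose
# in two voxels; each scenario has its own dose-influence matrix.
D = np.array([[[1.0, 0.2], [0.1, 1.0]],     # nominal range
              [[0.8, 0.4], [0.3, 0.9]],     # undershoot
              [[1.1, 0.1], [0.05, 1.05]]])  # overshoot
d_presc = np.array([1.0, 1.0])              # prescribed dose

def objective(w, Ds):
    return ((Ds @ w - d_presc) ** 2).sum()

def worst_case(w):
    return max(objective(w, Ds) for Ds in D)

# Crude grid search over non-negative beam weights.
ws = [np.array([a, b]) for a in np.linspace(0, 2, 41)
                       for b in np.linspace(0, 2, 41)]
w_exp = min(ws, key=lambda w: np.mean([objective(w, Ds) for Ds in D]))
w_rob = min(ws, key=worst_case)

print(f"worst-case objective: expected-value plan {worst_case(w_exp):.3f}, "
      f"robust plan {worst_case(w_rob):.3f}")
```

By construction, the robust plan's worst-case objective can be no larger than the expected-value plan's over the same candidate set; the paper's observation is that in practice the two plans also look qualitatively similar.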
Embracing uncertainty in applied ecology.
Milner-Gulland, E J; Shea, K
2017-12-01
Applied ecologists often face uncertainty that hinders effective decision-making. Common traps that may catch the unwary are: ignoring uncertainty, acknowledging uncertainty but ploughing on, focussing on trivial uncertainties, believing your models, and unclear objectives. We integrate research insights and examples from a wide range of applied ecological fields to illustrate advances that are generally underused, but could facilitate ecologists' ability to plan and execute research to support management. Recommended approaches to avoid uncertainty traps are: embracing models, using decision theory, using models more effectively, thinking experimentally, and being realistic about uncertainty. Synthesis and applications. Applied ecologists can become more effective at informing management by using approaches that explicitly take account of uncertainty.
NASA Astrophysics Data System (ADS)
Swallow, B.; Rigby, M. L.; Rougier, J.; Manning, A.; Thomson, D.; Webster, H. N.; Lunt, M. F.; O'Doherty, S.
2016-12-01
In order to understand the underlying processes governing environmental and physical phenomena, a complex mathematical model is usually required. However, there is inherent uncertainty in the parameterisation of unresolved processes in these simulators. Here, we focus on the specific problem of accounting for uncertainty in parameter values in an atmospheric chemical transport model. Systematic errors introduced by failing to account for these uncertainties can have a large effect on the resulting estimates of unknown quantities of interest. One approach increasingly used to address this issue is emulation, in which a large number of forward runs of the simulator are carried out in order to approximate the response of the output to changes in parameters. However, due to the complexity of some models, it is often unfeasible to perform the large number of training runs usually required for full statistical emulators of the environmental processes. We therefore present a simplified model reduction method for approximating uncertainties in complex environmental simulators without the need for very large numbers of training runs. We illustrate the method through an application to the Met Office's atmospheric transport model NAME. We show how our parameter estimation framework can be incorporated into a hierarchical Bayesian inversion, and demonstrate the impact on estimates of UK methane emissions using atmospheric mole fraction data. We conclude that accounting for uncertainties in the parameterisation of complex atmospheric models is vital if systematic errors are to be minimized and all relevant uncertainties accounted for. We also note that investigations of this nature can prove extremely useful in highlighting deficiencies in the simulator that might otherwise be missed.
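The core idea of emulation can be shown with a deliberately simple stand-in: fit a cheap surrogate to a handful of simulator runs, then query the surrogate instead of the simulator. The one-parameter "simulator" and the quadratic surrogate below are illustrative assumptions, far simpler than a full statistical emulator of a model like NAME:

```python
import numpy as np

# Stand-in "simulator": an expensive model reduced to one uncertain
# parameter theta (purely illustrative).
def simulator(theta):
    return np.sin(theta) + 0.5 * theta

# A small budget of training runs at design points, as in emulation.
theta_train = np.linspace(0.0, 2.0, 5)
y_train = simulator(theta_train)

# Cheap quadratic surrogate fitted to the training runs.
emulate = np.poly1d(np.polyfit(theta_train, y_train, deg=2))

# Check the surrogate against the simulator across the parameter range.
theta_test = np.linspace(0.0, 2.0, 50)
err = np.max(np.abs(emulate(theta_test) - simulator(theta_test)))
print(f"max emulator error on [0, 2]: {err:.3f}")
```

In a real application the surrogate would then stand in for the simulator inside the Bayesian inversion, making many evaluations affordable.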
Van Uffelen, Lora J; Nosal, Eva-Marie; Howe, Bruce M; Carter, Glenn S; Worcester, Peter F; Dzieciuch, Matthew A; Heaney, Kevin D; Campbell, Richard L; Cross, Patrick S
2013-10-01
Four acoustic Seagliders were deployed in the Philippine Sea November 2010 to April 2011 in the vicinity of an acoustic tomography array. The gliders recorded over 2000 broadband transmissions at ranges up to 700 km from moored acoustic sources as they transited between mooring sites. The precision of glider positioning at the time of acoustic reception is important to resolve the fundamental ambiguity between position and sound speed. The Seagliders utilized GPS at the surface and a kinematic model below for positioning. The gliders were typically underwater for about 6.4 h, diving to depths of 1000 m and traveling on average 3.6 km during a dive. Measured acoustic arrival peaks were unambiguously associated with predicted ray arrivals. Statistics of travel-time offsets between received arrivals and acoustic predictions were used to estimate range uncertainty. Range (travel time) uncertainty between the source and the glider position from the kinematic model is estimated to be 639 m (426 ms) rms. Least-squares solutions for glider position estimated from acoustically derived ranges from 5 sources differed by 914 m rms from modeled positions, with estimated uncertainty of 106 m rms in horizontal position. Error analysis included 70 ms rms of uncertainty due to oceanic sound-speed variability.
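The quoted pairing of 639 m with 426 ms is consistent with converting travel time to range at a nominal sound speed, assumed here to be 1500 m/s:

```python
# Consistency check: an rms travel-time uncertainty of 426 ms maps to a
# range uncertainty via a nominal sound speed (~1500 m/s is an assumed
# typical value, not a figure taken from the paper).
c_nominal = 1500.0           # m/s, assumed nominal sound speed
dt_rms = 0.426               # s, rms travel-time offset from the abstract
dr_rms = c_nominal * dt_rms  # m
print(f"{dr_rms:.0f} m")     # 639 m, matching the quoted range uncertainty
```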
CHEERS: Chemical enrichment of clusters of galaxies measured using a large XMM-Newton sample
NASA Astrophysics Data System (ADS)
de Plaa, J.; Mernier, F.; Kaastra, J.; Pinto, C.
2017-10-01
The Chemical Enrichment RGS Sample (CHEERS) aims to be a sample of the clusters of galaxies most suitable for observation with the Reflection Grating Spectrometer (RGS) aboard XMM-Newton. It consists of 5 Ms of deep cluster observations of 44 objects obtained through a very large programme and archival observations. The main goal is to measure chemical abundances in the hot intra-cluster medium (ICM) of clusters to provide constraints on chemical evolution models. In particular, the origin and evolution of type Ia supernovae are still poorly known, and X-ray observations could help constrain models of the SNIa explosion mechanism. Due to the high quality of the data, the uncertainties on the abundances are dominated by systematic effects. By carefully treating each systematic effect, we increase the accuracy or estimate the remaining uncertainty on the measurement. The resulting abundances are then compared to supernova models. In addition, radial abundance profiles are also derived. In this talk, we present an overview of the results that the CHEERS collaboration has obtained from the CHEERS data, with a focus on the abundance measurements. The other topics range from turbulence measurements through line broadening to cool gas in groups.
Ackermann, M.; Ajello, M.; Albert, A.; ...
2012-10-12
The Fermi Large Area Telescope (Fermi-LAT, hereafter LAT), the primary instrument on the Fermi Gamma-ray Space Telescope (Fermi) mission, is an imaging, wide field-of-view, high-energy γ-ray telescope, covering the energy range from 20 MeV to more than 300 GeV. During the first years of the mission, the LAT team has gained considerable insight into the in-flight performance of the instrument. Accordingly, we have updated the analysis used to reduce LAT data for public release, as well as the instrument response functions (IRFs), the description of the instrument performance provided for data analysis. In this study, we describe the effects that motivated these updates. Furthermore, we discuss how we originally derived IRFs from Monte Carlo simulations and later corrected those IRFs for discrepancies observed between flight and simulated data. We also give details of the validations performed using flight data and quantify the residual uncertainties in the IRFs. In conclusion, we describe techniques the LAT team has developed to propagate those uncertainties into estimates of the systematic errors on common measurements such as fluxes and spectra of astrophysical sources.
Moore, C.T.; Lonsdorf, E.V.; Knutson, M.G.; Laskowski, H.P.; Lor, S.K.
2011-01-01
Adaptive management is an approach to recurrent decision making in which uncertainty about the decision is reduced over time through comparison of outcomes predicted by competing models against observed values of those outcomes. The National Wildlife Refuge System (NWRS) of the U.S. Fish and Wildlife Service is a large land management program charged with making natural resource management decisions, which often are made under considerable uncertainty, severe operational constraints, and conditions that limit ability to precisely carry out actions as intended. The NWRS presents outstanding opportunities for the application of adaptive management, but also difficult challenges. We describe two cooperative programs between the Fish and Wildlife Service and the U.S. Geological Survey to implement adaptive management at scales ranging from small, single refuge applications to large, multi-refuge, multi-region projects. Our experience to date suggests three important attributes common to successful implementation: a vigorous multi-partner collaboration, practical and informative decision framework components, and a sustained commitment to the process. Administrators in both agencies should consider these attributes when developing programs to promote the use and acceptance of adaptive management in the NWRS.
Climate change impacts on extreme events in the United States: an uncertainty analysis
Extreme weather and climate events, such as heat waves, droughts and severe precipitation events, have substantial impacts on ecosystems and the economy. However, future climate simulations display large uncertainty in mean changes. As a result, the uncertainty in future changes ...
Subaru Telescope limits on cosmological variations in the fine-structure constant
NASA Astrophysics Data System (ADS)
Murphy, Michael T.; Cooksey, Kathy L.
2017-11-01
Previous, large samples of quasar absorption spectra have indicated some evidence for relative variations in the fine-structure constant (Δα/α) across the sky. However, they were likely affected by long-range distortions of the wavelength calibration, so it is important to establish a statistical sample of more reliable results from multiple telescopes. Here we triple the sample of Δα/α measurements from the Subaru Telescope which have been `supercalibrated' to correct for long-range distortions. A blinded analysis of the metallic ions in six intervening absorption systems in two Subaru quasar spectra provides no evidence for α variation, with a weighted mean of Δα/α = 3.0 ± 2.8stat ± 2.0sys parts per million (1σ statistical and systematic uncertainties). The main remaining systematic effects are uncertainties in the long-range distortion corrections, absorption profile models, and errors from redispersing multiple quasar exposures on to a common wavelength grid. The results also assume that terrestrial isotopic abundances prevail in the absorbers; assuming only the dominant terrestrial isotope is present significantly lowers Δα/α, though it is still consistent with zero. Given the location of the two quasars on the sky, our results do not support the evidence for spatial α variation, especially when combined with the 21 other recent measurements which were corrected for, or resistant to, long-range distortions. Our spectra and absorption profile fits are publicly available.
Uncertainty in flood damage estimates and its potential effect on investment decisions
NASA Astrophysics Data System (ADS)
Wagenaar, D. J.; de Bruijn, K. M.; Bouwer, L. M.; de Moel, H.
2016-01-01
This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage functions and maximum damages can have large effects on flood damage estimates. This explanation is then used to quantify the uncertainty in the damage estimates with a Monte Carlo analysis. The Monte Carlo analysis uses a damage function library with 272 functions from seven different flood damage models. The paper shows that the resulting uncertainties in estimated damages are on the order of a factor of 2 to 5. The uncertainty is typically larger for flood events with small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economically optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investments.
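A minimal sketch of such a Monte Carlo analysis: sample a depth-damage curve and a maximum damage per draw, evaluate damage at a fixed water depth, and report the spread. The saturating curve shape and all parameter values below are invented for illustration, not taken from the seven models in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "library": each entry is a depth-damage curve, here a
# saturating function frac(d) = d / (d + k) with its own shape k and its
# own maximum damage per building (all values illustrative).
n_funcs = 272
k = rng.uniform(0.5, 3.0, n_funcs)                     # curve shape [m]
max_damage = rng.lognormal(np.log(2e5), 0.4, n_funcs)  # EUR per building

depth = 1.0  # water depth [m]
damage = max_damage * depth / (depth + k)

p5, p95 = np.percentile(damage, [5, 95])
print(f"damage spread factor at {depth} m depth: {p95 / p5:.1f}")
```

The spread factor between low and high percentiles plays the role of the factor-2-to-5 uncertainty reported in the abstract.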
Koornneef, Joris; Spruijt, Mark; Molag, Menso; Ramírez, Andrea; Turkenburg, Wim; Faaij, André
2010-05-15
A systematic assessment, based on an extensive literature review, of the impact of gaps and uncertainties on the results of quantitative risk assessments (QRAs) for CO(2) pipelines is presented. Sources of uncertainties that have been assessed are: failure rates, pipeline pressure, temperature, section length, diameter, orifice size, type and direction of release, meteorological conditions, jet diameter, vapour mass fraction in the release and the dose-effect relationship for CO(2). A sensitivity analysis with these parameters is performed using release, dispersion and impact models. The results show that the knowledge gaps and uncertainties have a large effect on the accuracy of the assessed risks of CO(2) pipelines. In this study it is found that the individual risk contour can vary between 0 and 204 m from the pipeline depending on assumptions made. In existing studies this range is found to be between <1m and 7.2 km. Mitigating the relevant risks is part of current practice, making them controllable. It is concluded that QRA for CO(2) pipelines can be improved by validation of release and dispersion models for high-pressure CO(2) releases, definition and adoption of a universal dose-effect relationship and development of a good practice guide for QRAs for CO(2) pipelines. Copyright (c) 2009 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lee, Han Soo; Shimoyama, Tomohisa; Popinet, Stéphane
2015-10-01
The impacts of tides on extreme tsunami propagation due to potential Nankai Trough earthquakes in the Seto Inland Sea (SIS), Japan, are investigated through numerical experiments. Tsunami experiments are conducted based on five scenarios that consider tides at four different phases: flood, high, ebb, and low tides. The probes selected arbitrarily in the Bungo and Kii Channels show less significant effects of tides on tsunami heights and the arrival times of the first waves than those that experience large tidal ranges in the inner basins and bays of the SIS. For instance, the maximum tsunami height and the arrival time at Toyomaesi differ by more than 0.5 m and nearly 1 h, respectively, depending on the tidal phase. The uncertainties, defined in terms of calculated maximum tsunami heights due to tides, illustrate that the calculated maximum tsunami heights in the inner SIS with standing tides have much larger uncertainties than those of the two channels with propagating tides. Particularly in Harima Nada, the uncertainties due to the impacts of tides are greater than 50% of the tsunami heights without tidal interaction. The results recommend simulating tsunamis together with tides in shallow-water environments to reduce the uncertainties involved in tsunami modeling and predictions for tsunami hazard preparedness. This article was corrected on 26 OCT 2015. See the end of the full text for details.
Multiscale contact mechanics model for RF-MEMS switches with quantified uncertainties
NASA Astrophysics Data System (ADS)
Kim, Hojin; Huda Shaik, Nurul; Xu, Xin; Raman, Arvind; Strachan, Alejandro
2013-12-01
We introduce a multiscale model for contact mechanics between rough surfaces and apply it to characterize the force-displacement relationship for a metal-dielectric contact relevant for radio frequency micro-electromechanical system (MEMS) switches. We propose a mesoscale model to describe the history-dependent force-displacement relationships in terms of the surface roughness, the long-range attractive interaction between the two surfaces, and the repulsive interaction between contacting asperities (including elastic and plastic deformation). The inputs to this model are the experimentally determined surface topography and the Hamaker constant, as well as the mechanical response of individual asperities obtained from density functional theory calculations and large-scale molecular dynamics simulations. The model captures non-trivial processes including the hysteresis during loading and unloading due to plastic deformation, yet it is computationally efficient enough to enable extensive uncertainty quantification and sensitivity analysis. We quantify how uncertainties and variability in the input parameters, both experimental and theoretical, affect the force-displacement curves during approach and retraction. In addition, a sensitivity analysis quantifies the relative importance of the various input quantities for the prediction of force-displacement during contact closing and opening. The resulting force-displacement curves with quantified uncertainties can be directly used in device-level simulations of micro-switches and enable the incorporation of atomic and mesoscale phenomena in predictive device-scale simulations.
Watling, James I.; Brandt, Laura A.; Bucklin, David N.; Fujisaki, Ikuko; Mazzotti, Frank J.; Romañach, Stephanie; Speroterra, Carolina
2015-01-01
Species distribution models (SDMs) are widely used in basic and applied ecology, making it important to understand sources and magnitudes of uncertainty in SDM performance and predictions. We analyzed SDM performance and partitioned variance among prediction maps for 15 rare vertebrate species in the southeastern USA using all possible combinations of seven potential sources of uncertainty in SDMs: algorithms, climate datasets, model domain, species presences, variable collinearity, CO2 emissions scenarios, and general circulation models. The choice of modeling algorithm was the greatest source of uncertainty in SDM performance and prediction maps, with some additional variation in performance associated with the comprehensiveness of the species presences used for modeling. Other sources of uncertainty that have received attention in the SDM literature such as variable collinearity and model domain contributed little to differences in SDM performance or predictions in this study. Predictions from different algorithms tended to be more variable at northern range margins for species with more northern distributions, which may complicate conservation planning at the leading edge of species' geographic ranges. The clear message emerging from this work is that researchers should use multiple algorithms for modeling rather than relying on predictions from a single algorithm, invest resources in compiling a comprehensive set of species presences, and explicitly evaluate uncertainty in SDM predictions at leading range margins.
NASA Astrophysics Data System (ADS)
Van Uytven, Els; Willems, Patrick
2017-04-01
Current trends in hydro-meteorological variables indicate the potential impact of climate change on hydrological extremes, and they therefore trigger an increased importance of climate adaptation strategies in water management. The impact of climate change on hydro-meteorological and hydrological extremes is, however, highly uncertain. This is due to uncertainties introduced by the climate models, the internal variability inherent to the climate system, the greenhouse gas scenarios and the statistical downscaling methods. In view of the need to define sustainable climate adaptation strategies, these uncertainties need to be assessed. This is commonly done by means of ensemble approaches. Because more and more climate models and statistical downscaling methods become available, there is a need to facilitate climate impact and uncertainty analysis. A Climate Perturbation Tool has been developed for that purpose, which combines a set of statistical downscaling methods including weather typing, weather generator, transfer function and advanced perturbation-based approaches. By use of an interactive interface, climate impact modellers can apply these statistical downscaling methods in a semi-automatic way to an ensemble of climate model runs. The tool is applicable to any region, but has so far been demonstrated on cases in Belgium, Suriname, Vietnam and Bangladesh. Time series representing future local-scale precipitation, temperature and potential evapotranspiration (PET) conditions were obtained, starting from time series of historical observations. Uncertainties in the future meteorological conditions are represented in two different ways: through an ensemble of time series, and through a reduced set of synthetic scenarios. Both aim to span the full uncertainty range as assessed from the ensemble of climate model runs and downscaling methods.
For Belgium, for instance, use was made of 100-year time series of 10-minutes precipitation observations and daily temperature and PET observations at Uccle and a large ensemble of 160 global climate model runs (CMIP5). They cover all four representative concentration pathway based greenhouse gas scenarios. While evaluating the downscaled meteorological series, particular attention was given to the performance of extreme value metrics (e.g. for precipitation, by means of intensity-duration-frequency statistics). Moreover, the total uncertainty was decomposed in the fractional uncertainties for each of the uncertainty sources considered. Research assessing the additional uncertainty due to parameter and structural uncertainties of the hydrological impact model is ongoing.
Exploration–exploitation trade-off features a saltatory search behaviour
Volchenkov, Dimitri; Helbach, Jonathan; Tscherepanow, Marko; Kühnel, Sina
2013-01-01
Searching experiments conducted in different virtual environments over a gender-balanced group of people revealed a gender irrelevant scale-free spread of searching activity on large spatio-temporal scales. We have suggested and solved analytically a simple statistical model of the coherent-noise type describing the exploration–exploitation trade-off in humans (‘should I stay’ or ‘should I go’). The model exhibits a variety of saltatory behaviours, ranging from Lévy flights occurring under uncertainty to Brownian walks performed by a treasure hunter confident of the eventual success. PMID:23782535
The large area crop inventory experiment: A major demonstration of space remote sensing
NASA Technical Reports Server (NTRS)
Macdonald, R. B.; Hall, F. G.
1977-01-01
Strategies are presented in agricultural technology to increase the resistance of crops to a wider range of meteorological conditions in order to reduce year-to-year variations in crop production. Uncertainties in agricultural production, together with the consumer demands of an increasing world population, have greatly intensified the need for early and accurate annual global crop production forecasts. These forecasts must predict fluctuations with an accuracy, timeliness and known reliability sufficient to permit necessary social and economic adjustments, with as much advance warning as possible.
Gething, Peter W; Patil, Anand P; Hay, Simon I
2010-04-01
Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhances their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty--the fidelity of predictions at each mapped pixel--but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new efficient approximating algorithm that can generate for the first time the necessary joint simulation of prevalence values across the very large prediction spaces needed for global scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.
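The key point above, that aggregate uncertainty requires joint (map-wide) simulation rather than per-pixel intervals, can be sketched with a toy correlated field. The exponential covariance, logit-scale mean, and sizes below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy region: 50 pixels on a line with exponentially decaying spatial
# correlation (a stand-in for a geostatistical posterior; illustrative).
x = np.arange(50.0)
cov = 0.5 * np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)
mean = np.full(50, -1.0)  # posterior mean on the logit scale

# Joint simulation: each draw is a whole map, so between-pixel
# correlations are preserved when aggregating, unlike per-pixel
# (marginal) prediction intervals.
draws = rng.multivariate_normal(mean, cov, size=2000)
prev = 1.0 / (1.0 + np.exp(-draws))  # prevalence per pixel per draw
regional_mean = prev.mean(axis=1)    # one regional value per joint draw

lo, hi = np.percentile(regional_mean, [2.5, 97.5])
print(f"regional mean prevalence: 95% interval [{lo:.3f}, {hi:.3f}]")
```

Averaging per-pixel intervals instead of aggregating joint draws would misstate this interval, which is why the authors needed an efficient joint-simulation algorithm at global scale.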
Bender, Christopher M; Ballard, Megan S; Wilson, Preston S
2014-06-01
The overall goal of this work is to quantify the effects of environmental variability and spatial sampling on the accuracy and uncertainty of estimates of the three-dimensional ocean sound-speed field. In this work, ocean sound speed estimates are obtained with acoustic data measured by a sparse autonomous observing system using a perturbative inversion scheme [Rajan, Lynch, and Frisk, J. Acoust. Soc. Am. 82, 998-1017 (1987)]. The vertical and horizontal resolution of the solution depends on the bandwidth of acoustic data and on the quantity of sources and receivers, respectively. Thus, for a simple, range-independent ocean sound speed profile, a single source-receiver pair is sufficient to estimate the water-column sound-speed field. On the other hand, an environment with significant variability may not be fully characterized by a large number of sources and receivers, resulting in uncertainty in the solution. This work explores the interrelated effects of environmental variability and spatial sampling on the accuracy and uncertainty of the inversion solution though a set of case studies. Synthetic data representative of the ocean variability on the New Jersey shelf are used.
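The dependence of solution quality on the quantity of sources and receivers can be illustrated with a toy damped least-squares inversion of a linearised problem t = Gm, in the spirit of the perturbative scheme cited above. The kernels, noise level, and damping below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear inverse problem: travel-time perturbations t relate to
# layer-wise sound-speed perturbations m through a kernel G (t = G m).
n_layers = 20
m_true = 5.0 * np.sin(np.linspace(0, np.pi, n_layers))  # m/s perturbation

def invert(n_pairs):
    """Damped least squares with n_pairs source-receiver paths."""
    G = rng.random((n_pairs, n_layers)) / n_layers   # averaging kernels
    t = G @ m_true + rng.normal(0, 1e-3, n_pairs)    # noisy travel times
    # Tikhonov damping stabilises the underdetermined case.
    m_est = np.linalg.solve(G.T @ G + 1e-4 * np.eye(n_layers), G.T @ t)
    return np.linalg.norm(m_est - m_true)

err_few, err_many = invert(5), invert(200)
print(f"misfit with 5 pairs: {err_few:.1f}, with 200 pairs: {err_many:.1f}")
```

With few paths the solution is poorly resolved; adding paths sharpens it, mirroring the abstract's point that horizontal resolution depends on the number of sources and receivers.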
Invited Review Article: Measurements of the Newtonian constant of gravitation, G.
Rothleitner, C; Schlamminger, S
2017-11-01
By many accounts, the Newtonian constant of gravitation G is the fundamental constant that is most difficult to measure accurately. Over the past three decades, more than a dozen precision measurements of this constant have been performed. However, the scatter of the data points is much larger than the uncertainties assigned to each individual measurement, yielding a Birge ratio of about five. Today, G is known with a relative standard uncertainty of 4.7 × 10⁻⁵, which is several orders of magnitude larger than the relative uncertainties of other fundamental constants. In this article, various methods to measure G are discussed. A large array of different instruments, ranging from the simple torsion balance to the sophisticated atom interferometer, can be used to determine G. Some instruments, such as the torsion balance, can be used in several different ways. In this article, the advantages and disadvantages of different instruments as well as different methods are discussed. A narrative arc from the historical beginnings of the different methods to their modern implementation is given. Finally, the article ends with a brief overview of the current state of the art and an outlook.
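The Birge ratio mentioned above is the square root of the reduced chi-squared of the weighted mean; a value well above one means the scatter exceeds the quoted uncertainties. A short sketch with simulated measurement values (illustrative, not the actual G record):

```python
import numpy as np

# Simulated G measurements (in units of 1e-11 m^3 kg^-1 s^-2) whose
# scatter exceeds their quoted uncertainties; values are illustrative.
G = np.array([6.6740, 6.6756, 6.6742, 6.6719, 6.6749, 6.6726])
u = np.array([0.0010, 0.0008, 0.0012, 0.0009, 0.0011, 0.0010])

w = 1.0 / u**2
G_bar = np.sum(w * G) / np.sum(w)       # inverse-variance weighted mean
chi2 = np.sum(w * (G - G_bar) ** 2)
birge = np.sqrt(chi2 / (len(G) - 1))    # Birge ratio
print(f"weighted mean {G_bar:.4f}, Birge ratio {birge:.2f}")
```

A Birge ratio near one would indicate that the assigned uncertainties explain the scatter; the real G record's ratio of about five is what makes the disagreement so striking.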
Sensitivity of water resources in the Delaware River basin to climate variability and change
Ayers, Mark A.; Wolock, David M.; McCabe, Gregory J.; Hay, Lauren E.; Tasker, Gary D.
1994-01-01
Because of the greenhouse effect, projected increases in atmospheric carbon dioxide levels might cause global warming, which in turn could result in changes in precipitation patterns and evapotranspiration and in increases in sea level. This report describes the greenhouse effect; discusses the problems and uncertainties associated with the detection, prediction, and effects of climate change; and presents the results of sensitivity analyses of how climate change might affect water resources in the Delaware River basin. Sensitivity analyses suggest that potentially serious shortfalls of certain water resources in the basin could result if some scenarios for climate change come true. The results of model simulations of the basin streamflow demonstrate the difficulty in distinguishing the effects that climate change versus natural climate variability have on streamflow and water supply. The future direction of basin changes in most water resources, furthermore, cannot be precisely determined because of uncertainty in current projections of regional temperature and precipitation. This large uncertainty indicates that, for resource planning, information defining the sensitivities of water resources to a range of climate change is most relevant. The sensitivity analyses could be useful in developing contingency plans for evaluating and responding to changes, should they occur.
In situ determination of Earth matter density in a neutrino factory
NASA Astrophysics Data System (ADS)
Minakata, Hisakazu; Uchinami, Shoichi
2007-04-01
We point out that an accurate in situ determination of the earth matter density ρ is possible at a neutrino factory by placing a detector at the magic baseline, L = 2π/(√2 GFNe), where Ne denotes the electron number density. The accuracy of the matter density determination is excellent in the region of relatively large θ13, with a fractional uncertainty δρ/ρ of about 0.43%, 1.3%, and ≲3% at 1σ CL at sin²2θ13 = 0.1, 10-2, and 3×10-3, respectively. At smaller θ13 the uncertainty depends upon the CP phase δ, but it remains small, 3%-7%, in more than 3/4 of the entire region of δ at sin²2θ13 = 10-4. The results would allow us to solve the problem of obscured CP violation due to the uncertainty of the earth matter density in a wide range of θ13 and δ. It may provide a test for the geophysical model of the earth, or it may serve as a method for a stringent test of the Mikheyev-Smirnov-Wolfenstein theory of neutrino propagation in matter once an accurate geophysical estimation of the matter density is available.
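The magic-baseline condition can be evaluated numerically. A sketch assuming a constant average matter density and electron fraction along the baseline (the density and Ye values below are illustrative mantle averages, not values from the paper; unit conversion is via ħc in natural units):

```python
import numpy as np

# Magic baseline sketch: L = 2*pi / (sqrt(2) * G_F * N_e) in natural units.
G_F   = 1.1664e-5        # Fermi constant, GeV^-2
HBARC = 1.9733e-14       # hbar*c, GeV*cm
N_A   = 6.0221e23        # Avogadro constant, 1/mol

def magic_baseline_km(rho_g_cm3, Y_e=0.494):
    n_e = Y_e * rho_g_cm3 * N_A          # electrons per cm^3
    n_e_nat = n_e * HBARC**3             # convert to GeV^3
    V = np.sqrt(2.0) * G_F * n_e_nat     # matter potential, GeV
    L_cm = (2.0 * np.pi / V) * HBARC     # GeV^-1 -> cm
    return L_cm * 1e-5                   # cm -> km

L = magic_baseline_km(4.5)               # illustrative average mantle density
print(L)                                 # on the order of 7000-7500 km
```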
Assessing uncertainties in surface water security: An empirical multimodel approach
NASA Astrophysics Data System (ADS)
Rodrigues, Dulce B. B.; Gupta, Hoshin V.; Mendiondo, Eduardo M.; Oliveira, Paulo Tarso S.
2015-11-01
Various uncertainties are involved in the representation of processes that characterize interactions among societal needs, ecosystem functioning, and hydrological conditions. Here we develop an empirical uncertainty assessment of water security indicators that characterize scarcity and vulnerability, based on a multimodel and resampling framework. We consider several uncertainty sources including those related to (i) observed streamflow data; (ii) hydrological model structure; (iii) residual analysis; (iv) the method for defining Environmental Flow Requirement; (v) the definition of critical conditions for water provision; and (vi) the critical demand imposed by human activities. We estimate the overall hydrological model uncertainty by means of a residual bootstrap resampling approach, and by uncertainty propagation through different methodological arrangements applied to a 291 km² agricultural basin within the Cantareira water supply system in Brazil. Together, the two-component hydrograph residual analysis and the block bootstrap resampling approach result in a more accurate and precise estimate of the uncertainty (95% confidence intervals) in the simulated time series. We then compare the uncertainty estimates associated with water security indicators using a multimodel framework and the uncertainty estimates provided by each model uncertainty estimation approach. The range of values obtained for the water security indicators suggests that the models/methods are robust and perform well in a range of plausible situations. The method is general and can be easily extended, thereby forming the basis for meaningful support to end-users facing water resource challenges by enabling them to incorporate a viable uncertainty analysis into a robust decision-making process.
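A block bootstrap of model residuals, as used above, resamples contiguous residual blocks so that autocorrelation is preserved in the resampled series, then forms percentile intervals around the simulation. A minimal sketch (the residual series, block length, and interval level are illustrative assumptions, not the study's configuration):

```python
import numpy as np

# Block bootstrap sketch: resample contiguous residual blocks to preserve
# autocorrelation, then build 95% percentile prediction intervals.
def block_bootstrap_interval(sim, residuals, block=24, n_boot=500,
                             alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    n = len(residuals)
    reps = []
    for _ in range(n_boot):
        starts = rng.integers(0, n - block, size=n // block + 1)
        r = np.concatenate([residuals[i:i + block] for i in starts])[:n]
        reps.append(sim + r)
    reps = np.array(reps)
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2], axis=0)

rng = np.random.default_rng(0)
sim = 10 + np.sin(np.linspace(0, 20, 480))     # synthetic simulated flows
res = rng.normal(scale=0.5, size=480)          # synthetic model residuals
lo, hi = block_bootstrap_interval(sim, res)
print(float(np.mean(hi - lo)))                 # average interval width
```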
The mean density and two-point correlation function for the CfA redshift survey slices
NASA Technical Reports Server (NTRS)
De Lapparent, Valerie; Geller, Margaret J.; Huchra, John P.
1988-01-01
The effect of large-scale inhomogeneities on the determination of the mean number density and the two-point spatial correlation function were investigated for two complete slices of the extension of the Center for Astrophysics (CfA) redshift survey (de Lapparent et al., 1986). It was found that the mean galaxy number density for the two strips is uncertain by 25 percent, more so than previously estimated. The large uncertainty in the mean density introduces substantial uncertainty in the determination of the two-point correlation function, particularly at large scale; thus, for the 12-deg slice of the CfA redshift survey, the amplitude of the correlation function at intermediate scales is uncertain by a factor of 2. The large uncertainties in the correlation functions might reflect the lack of a fair sample.
Wang, Shusen; Pan, Ming; Mu, Qiaozhen; ...
2015-07-29
Here, this study compares six evapotranspiration ET products for Canada's landmass, namely, eddy covariance EC measurements; surface water budget ET; remote sensing ET from MODIS; and land surface model (LSM) ET from the Community Land Model (CLM), the Ecological Assimilation of Land and Climate Observations (EALCO) model, and the Variable Infiltration Capacity model (VIC). The ET climatology over the Canadian landmass is characterized and the advantages and limitations of the datasets are discussed. The EC measurements have limited spatial coverage, making it difficult for model validations at the national scale. Water budget ET has the largest uncertainty because of data quality issues with precipitation in mountainous regions and in the north. MODIS ET shows relatively large uncertainty in cold seasons and sparsely vegetated regions. The LSM products cover the entire landmass and exhibit small differences in ET among them. Annual ET from the LSMs ranges from small negative values to over 600 mm across the landmass, with a countrywide average of 256 ± 15 mm. Seasonally, the countrywide average monthly ET varies from a low of about 3 mm in four winter months (November-February) to 67 ± 7 mm in July. The ET uncertainty is scale dependent. Larger regions tend to have smaller uncertainties because of the offset of positive and negative biases within the region. More observation networks and better quality controls are critical to improving ET estimates. Future techniques should also consider a hybrid approach that integrates strengths of the various ET products to help reduce uncertainties in ET estimation.
Lutz, James A.; Matchett, John R.; Tarnay, Leland W.; Smith, Douglas F.; Becker, Kendall M.L.; Furniss, Tucker J.; Brooks, Matthew L.
2017-01-01
Fire is one of the principal agents changing forest carbon stocks and landscape level distributions of carbon, but few studies have addressed how accurate carbon accounting of fire-killed trees is or can be. We used a large number of forested plots (1646), detailed selection of species-specific and location-specific allometric equations, vegetation type maps with high levels of accuracy, and Monte Carlo simulation to model the amount and uncertainty of aboveground tree carbon present in tree species (hereafter, carbon) within Yosemite and Sequoia & Kings Canyon National Parks. We estimated aboveground carbon in trees within Yosemite National Park to be 25 Tg of carbon (C) (confidence interval (CI): 23–27 Tg C), and in Sequoia & Kings Canyon National Park to be 20 Tg C (CI: 18–21 Tg C). Low-severity and moderate-severity fire had little or no effect on the amount of carbon sequestered in trees at the landscape scale, and high-severity fire did not immediately consume much carbon. Although many of our data inputs were more accurate than those used in similar studies in other locations, the total uncertainty of carbon estimates was still greater than ±10%, mostly due to potential uncertainties in landscape-scale vegetation type mismatches and trees larger than the ranges of existing allometric equations. If carbon inventories are to be meaningfully used in policy, there is an urgent need for more accurate landscape classification methods, improvement in allometric equations for tree species, and better understanding of the uncertainties inherent in existing carbon accounting methods.
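A Monte Carlo propagation of allometric uncertainty like the one described can be sketched by perturbing per-tree biomass on each draw and summarizing the plot total. The 10% relative allometric error, the 0.5 carbon fraction, and the tree list below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Monte Carlo sketch of plot-level aboveground carbon uncertainty:
# each draw perturbs every tree's allometric biomass by a relative error,
# then the plot total is converted to carbon (~50% of dry biomass).
def carbon_ci(biomass_kg, rel_err=0.10, n_draws=5000, seed=0):
    rng = np.random.default_rng(seed)
    b = np.asarray(biomass_kg, float)
    totals = []
    for _ in range(n_draws):
        factors = rng.normal(1.0, rel_err, size=len(b))
        totals.append(np.sum(b * factors) * 0.5)   # biomass -> carbon
    return np.percentile(totals, [2.5, 50, 97.5])

lo, med, hi = carbon_ci([850.0, 1200.0, 430.0, 2100.0])  # hypothetical trees
print(lo, med, hi)   # 95% CI brackets the central carbon estimate
```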
Uncertainty in Citizen Science observations: from measurement to user perception
NASA Astrophysics Data System (ADS)
Lahoz, William; Schneider, Philipp; Castell, Nuria
2016-04-01
Citizen Science activities concern general public engagement in scientific research activities when citizens actively contribute to science either with their intellectual effort or surrounding knowledge or with their tools and resources. The advent of technologies such as the Internet and smartphones, and the growth in their usage, has significantly increased the potential benefits from Citizen Science activities. Citizen Science observations from low-cost sensors, smartphones and Citizen Observatories, provide a novel and recent development in platforms for observing the Earth System, with the opportunity to extend the range of observational platforms available to society to spatio-temporal scales (10-100s m; 1 hr or less) highly relevant to citizen needs. The potential value of Citizen Science is high, with applications in science, education, social aspects, and policy aspects, but this potential, particularly for citizens and policymakers, remains largely untapped. Key areas where Citizen Science data start to have demonstrable benefits include GEOSS Societal Benefit Areas such as Health and Weather. Citizen Science observations have many challenges, including simulation of smaller spatial scales, noisy data, combination with traditional observational methods (satellite and in situ data), and assessment, representation and visualization of uncertainty. Within these challenges, that of the assessment and representation of uncertainty and its communication to users is fundamental, as it provides qualitative and/or quantitative information that influences the belief users will have in environmental information. This presentation will discuss the challenges in assessment and representation of uncertainty in Citizen Science observations, its communication to users, including the use of visualization, and the perception of this uncertainty information by users of Citizen Science observations.
NASA Technical Reports Server (NTRS)
Morton, Douglas C.; Sales, Marcio H.; Souza, Carlos M., Jr.; Griscom, Bronson
2011-01-01
Historic carbon emissions are an important foundation for proposed efforts to Reduce Emissions from Deforestation and forest Degradation and enhance forest carbon stocks through conservation and sustainable forest management (REDD+). The level of uncertainty in historic carbon emissions estimates is also critical for REDD+, since high uncertainties could limit climate benefits from mitigation actions. Here, we analyzed source data uncertainties based on the range of available deforestation, forest degradation, and forest carbon stock estimates for the Brazilian state of Mato Grosso during 1990-2008. Results: Deforestation estimates showed good agreement for multi-year trends of increasing and decreasing deforestation during the study period. However, annual deforestation rates differed by >20% in more than half of the years between 1997-2008, even for products based on similar input data. Tier 2 estimates of average forest carbon stocks varied between 99-192 Mg C/ha, with greatest differences in northwest Mato Grosso. Carbon stocks in deforested areas increased over the study period, yet this increasing trend in deforested biomass was smaller than the difference among carbon stock datasets for these areas. Conclusions: Patterns of spatial and temporal disagreement among available data products provide a roadmap for future efforts to reduce source data uncertainties for estimates of historic forest carbon emissions. Specifically, regions with large discrepancies in available estimates of both deforestation and forest carbon stocks are priority areas for evaluating and improving existing estimates. Full carbon accounting for REDD+ will also require filling data gaps, including forest degradation and secondary forest, with annual data on all forest transitions.
Wells, R.E.; Simpson, R.W.
2001-01-01
Geologic and paleomagnetic data from the Cascadia forearc indicate long-term northward migration and clockwise rotation of an Oregon coastal block with respect to North America. Paleomagnetic rotation of coastal Oregon is linked by a Klamath Mountains pole to geodetically and geologically determined motion of the Sierra Nevada block to derive a new Oregon Coast-North America (OC-NA) pole of rotation and velocity field. This long-term velocity field, which is independent of Pacific Northwest GPS data, is interpreted to be the result of Basin-Range extension and Pacific-North America dextral shear. The resulting Oregon Coast pole compares favorably to those derived solely from GPS data, although uncertainties are large. Subtracting the long-term motion from forearc GPS velocities reveals ENE motion with respect to an OC reference frame that is parallel to the direction of Juan de Fuca-OC convergence and decreases inland. We interpret this to be largely the result of subduction-related deformation. The adjusted mean GPS velocities are generally subparallel to those predicted from elastic dislocation models for Cascadia, but more definitive interpretations await refinement of the present large uncertainty in the Sierra Nevada block motion. Copyright © The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences.
Balancing the stochastic description of uncertainties as a function of hydrologic model complexity
NASA Astrophysics Data System (ADS)
Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.
2016-12-01
Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties could be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and of the probabilistic error description. 
The latter can include output uncertainty only, if the model is computationally-expensive, or, with simpler models, it can separately account for different sources of errors like in the inputs and the structure of the model.
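Describing output errors as a first-order autoregressive process, as above, yields stationary prediction intervals that are wider than the white-noise case because persistence inflates the marginal variance. A minimal sketch with illustrative parameters (phi and the innovation scale are assumptions, not fitted values):

```python
import numpy as np

# AR(1) output-error sketch: residuals r_t = phi * r_{t-1} + eta_t capture
# the "downstream" persistence of inaccurate inputs and model structure.
# The stationary standard deviation sigma_eta / sqrt(1 - phi^2) sets the
# width of the resulting prediction intervals.
def ar1_interval(sim, phi=0.8, sigma_eta=0.3, z=1.96):
    sd = sigma_eta / np.sqrt(1.0 - phi**2)   # stationary AR(1) std. dev.
    return sim - z * sd, sim + z * sd

sim = np.full(10, 5.0)                       # synthetic simulated output
lo, hi = ar1_interval(sim)
print(float(hi[0] - lo[0]))                  # interval width
```

With phi = 0 this reduces to ordinary white-noise intervals; larger phi widens the bands, which is how persistent error sources show up in the interval reliability check.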
Steen, Valerie; Sofaer, Helen R.; Skagen, Susan K.; Ray, Andrea J.; Noon, Barry R.
2017-01-01
Species distribution models (SDMs) are commonly used to assess potential climate change impacts on biodiversity, but several critical methodological decisions are often made arbitrarily. We compare variability arising from these decisions to the uncertainty in future climate change itself. We also test whether certain choices offer improved skill for extrapolating to a changed climate and whether internal cross-validation skill indicates extrapolative skill. We compared projected vulnerability for 29 wetland-dependent bird species breeding in the climatically dynamic Prairie Pothole Region, USA. For each species we built 1,080 SDMs to represent a unique combination of: future climate, class of climate covariates, collinearity level, and thresholding procedure. We examined the variation in projected vulnerability attributed to each uncertainty source. To assess extrapolation skill under a changed climate, we compared model predictions with observations from historic drought years. Uncertainty in projected vulnerability was substantial, and the largest source was that of future climate change. Large uncertainty was also attributed to climate covariate class with hydrological covariates projecting half the range loss of bioclimatic covariates or other summaries of temperature and precipitation. We found that choices based on performance in cross-validation improved skill in extrapolation. Qualitative rankings were also highly uncertain. Given uncertainty in projected vulnerability and resulting uncertainty in rankings used for conservation prioritization, a number of considerations appear critical for using bioclimatic SDMs to inform climate change mitigation strategies. Our results emphasize explicitly selecting climate summaries that most closely represent processes likely to underlie ecological response to climate change. 
For example, hydrological covariates projected substantially reduced vulnerability, highlighting the importance of considering whether water availability may be a more proximal driver than precipitation. However, because cross-validation results were correlated with extrapolation results, the use of cross-validation performance metrics to guide modeling choices where knowledge is limited was supported.
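Attributing ensemble spread to one methodological decision, as done above, amounts to a main-effect variance decomposition: the share of total variance explained by grouping projections by that factor. A toy sketch with synthetic projections (the factor labels and values are invented for illustration):

```python
import numpy as np

# ANOVA-style main-effect sketch: fraction of total ensemble variance
# explained by grouping projections by one modeling decision.
def variance_share(values, labels):
    values, labels = np.asarray(values, float), np.asarray(labels)
    groups = np.unique(labels)
    group_means = np.array([values[labels == g].mean() for g in groups])
    counts = np.array([np.sum(labels == g) for g in groups])
    between = np.sum(counts * (group_means - values.mean())**2) / len(values)
    return between / values.var()

vals = [10, 12, 11, 30, 32, 31]          # synthetic projected range loss (%)
labs = ["GCM_A"] * 3 + ["GCM_B"] * 3     # hypothetical future-climate factor
share = variance_share(vals, labs)
print(share)   # close to 1: this factor dominates the ensemble spread
```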
Identifying acne treatment uncertainties via a James Lind Alliance Priority Setting Partnership
Layton, Alison; Eady, E Anne; Peat, Maggie; Whitehouse, Heather; Levell, Nick; Ridd, Matthew; Cowdell, Fiona; Patel, Mahenda; Andrews, Stephen; Oxnard, Christine; Fenton, Mark; Firkins, Lester
2015-01-01
Objectives The Acne Priority Setting Partnership (PSP) was set up to identify and rank treatment uncertainties by bringing together people with acne, and professionals providing care within and beyond the National Health Service (NHS). Setting The UK with international participation. Participants Teenagers and adults with acne, parents, partners, nurses, clinicians, pharmacists, private practitioners. Methods Treatment uncertainties were collected via separate online harvesting surveys, embedded within the PSP website, for patients and professionals. A wide variety of approaches were used to promote the surveys to stakeholder groups with a particular emphasis on teenagers and young adults. Survey submissions were collated using keywords and verified as uncertainties by appraising existing evidence. The 30 most popular themes were ranked via weighted scores from an online vote. At a priority setting workshop, patients and professionals discussed the 18 highest-scoring questions from the vote, and reached consensus on the top 10. Results In the harvesting survey, 2310 people, including 652 professionals and 1456 patients (58% aged 24 y or younger), made submissions containing at least one research question. After checking for relevance and rephrasing, a total of 6255 questions were collated into themes. Valid votes ranking the 30 most common themes were obtained from 2807 participants. The top 10 uncertainties prioritised at the workshop were largely focused on management strategies, optimum use of common prescription medications and the role of non-drug based interventions. More female than male patients took part in the harvesting surveys and vote. A wider range of uncertainties were provided by patients compared to professionals. Conclusions Engaging teenagers and young adults in priority setting is achievable using a variety of promotional methods. 
The top 10 uncertainties reveal an extensive knowledge gap about widely used interventions and the relative merits of drug versus non-drug based treatments in acne management. PMID:26187120
Addressing Climate Change in Long-Term Water Planning Using Robust Decisionmaking
NASA Astrophysics Data System (ADS)
Groves, D. G.; Lempert, R.
2008-12-01
Addressing climate change in long-term natural resource planning is difficult because future management conditions are deeply uncertain and the range of possible adaptation options are so extensive. These conditions pose challenges to standard optimization decision-support techniques. This talk will describe a methodology called Robust Decisionmaking (RDM) that can complement more traditional analytic approaches by utilizing screening-level water management models to evaluate large numbers of strategies against a wide range of plausible future scenarios. The presentation will describe a recent application of the methodology to evaluate climate adaptation strategies for the Inland Empire Utilities Agency in Southern California. This project found that RDM can provide a useful way for addressing climate change uncertainty and identify robust adaptation strategies.
NASA Astrophysics Data System (ADS)
Kuschmierz, R.; Czarske, J.; Fischer, A.
2014-08-01
Optical measurement techniques offer great opportunities in diverse applications, such as lathe monitoring and microfluidics. Doppler-based interferometric techniques enable simultaneous measurement of the lateral velocity and axial distance of a moving object. However, there is a complementarity between the unambiguous axial measurement range and the uncertainty of the distance. Therefore, we present an extended sensor setup, which provides an unambiguous axial measurement range of 1 mm while achieving uncertainties below 100 nm. Measurements at a calibration system are performed. When using a pinhole for emulating a single scattering particle, the tumbling motion of the rotating object is resolved with a distance uncertainty of 50 nm. For measurements at the rough surface, the distance uncertainty amounts to 280 nm due to a lower signal-to-noise ratio. Both experimental results are close to the respective Cramér-Rao bound, which is derived analytically for both surface and single particle measurements.
NASA Astrophysics Data System (ADS)
Alkhorayef, M.; Mansour, A.; Sulieman, A.; Alnaaimi, M.; Alduaij, M.; Babikir, E.; Bradley, D. A.
2017-12-01
Butylated hydroxytoluene (BHT) rods represent a potential dosimeter in radiation processing, with readout via electron paramagnetic resonance (EPR) spectroscopy. Among the possible sources of uncertainty are those associated with the performance of the dosimetric medium and the conditions under which measurements are made, including sampling and environmental conditions. The present study estimates these uncertainties, investigating the physical response in different resonance regions. BHT, a white crystalline solid with a melting point between 70 and 73 °C, was investigated using 60Co gamma irradiation over the dose range 0.1-100 kGy. The intensity of the EPR signal increases linearly in the range 0.1-35 kGy, the uncertainty budget for high doses being 3.3% at the 2σ confidence level. The rod form represents an excellent alternative dosimeter for high level dosimetry, of small uncertainty compared to powder form.
Iuchi, Tohru; Gogami, Atsushi
2009-12-01
We have developed a user-friendly hybrid surface temperature sensor. The uncertainties of temperature readings associated with this sensor and a thermocouple embedded in a silicon wafer are compared. The expanded uncertainties (k=2) of the hybrid temperature sensor and the embedded thermocouple are 2.11 and 2.37 K, respectively, in the temperature range between 600 and 1000 K. In the present paper, the uncertainty evaluation and the sources of uncertainty are described.
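Expanded uncertainties such as the k = 2 values above are conventionally built by combining independent standard-uncertainty components in quadrature and multiplying the combined value by the coverage factor (the GUM approach). A sketch with illustrative component values, not the sensor's actual budget:

```python
import numpy as np

# GUM-style sketch: combine independent standard uncertainty components in
# quadrature, then report the expanded uncertainty U = k * u_c with k = 2
# (approximately 95% coverage for a normal distribution).
components_K = {                 # illustrative temperature-uncertainty budget
    "sensor noise": 0.6,
    "calibration":  0.7,
    "drift":        0.4,
}
u_c = np.sqrt(sum(u**2 for u in components_K.values()))   # combined std. unc.
U = 2.0 * u_c                                             # expanded, k = 2
print(round(U, 2))                                        # in kelvin
```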
Sources of Uncertainty in Modelling mid-Pliocene Arctic Amplification
NASA Astrophysics Data System (ADS)
Dolan, A. M.; Haywood, A.; Howell, F.; Prescott, C.; Pope, J. O.; Hill, D. J.; Voss, J.
2016-12-01
The mid-Pliocene Warm Period (mPWP) is an interval between 3.264 and 3.205 million years ago with globally warmer temperatures (Haywood et al., 2013) accompanied by levels of CO2 above pre-industrial (∼400 ppmv; e.g. Bartoli et al. 2011; Badger et al., 2013). Arctic amplification of temperatures is a major characteristic of all proxy-based reconstructions of the mPWP in terms of both oceanic (Dowsett et al., 2010) and land warming (Salzmann et al., 2013). For example, evidence from fossilised forests in the Canadian high Arctic shows summer temperatures up to 16°C warmer than present (Csank et al., 2010). Also, summer temperature estimates based on pollen reconstructions at Lake El'gygytgyn in North East Russia are up to 6°C warmer than present day (Brigham-Grette et al., 2013). Nevertheless, results from the first phase of the Pliocene Model Intercomparison Project (PlioMIP) suggest that climate models may underestimate the degree of Arctic amplification suggested by proxy records (Haywood et al., 2013). Here we use a large ensemble of experiments performed with the HadCM3 climate model to explore relative sources of uncertainty in the simulations of Arctic amplification. Within this suite of over 150 simulations, we consider; (i) a range of mPWP-specific orbital configurations to quantify the influence of temporal variability, (ii) a range of CO2 scenarios to take into account uncertainties in this particular greenhouse gas forcing, (iii) a perturbed physics ensemble to investigate parametric uncertainty within the HadCM3 climate model, and also (iv) a number of experiments with altered palaeogeographies (including changes to topography and ice sheets) to assess the impact of different boundary condition realisations on our simulation of Arctic amplification. We also incorporate results from the PlioMIP project to allude to the effect of structural uncertainty on Arctic warming. Following methods used in Yoshimori et al. (2013) and Laine et al. 
(2016), we identify the largest sources of uncertainty over both the land and the ocean in simulating the degree of amplification suggested by available proxy data. We also relate predictions of Arctic amplification to key features within the model, for example, sea ice extent and seasonality.
Understanding extreme sea levels for coastal impact and adaptation analysis
NASA Astrophysics Data System (ADS)
Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Hinkel, J.; Dangendorf, S.; Slangen, A.
2016-12-01
Coastal impact and adaptation assessments require detailed knowledge of extreme sea levels, because increasing damage from extreme events, such as storm surges and tropical cyclones, is one of the major consequences of sea level rise and climate change. In fact, the IPCC highlighted in its AR4 report that "societal impacts of sea level change primarily occur via the extreme levels rather than as a direct consequence of mean sea level changes". Over the last few decades, substantial research efforts have been directed towards improved understanding of past and future mean sea level; different scenarios were developed with process-based or semi-empirical models and used for coastal impact assessments at various spatial scales to guide coastal management and adaptation efforts. The uncertainties in future sea level rise are typically accounted for by analyzing the impacts associated with a range of scenarios, each leading to a vertical displacement of the distribution of extreme sea levels. Indeed, most regional and global studies find little or no evidence for changes in storminess with climate change, although there is still low confidence in these results. More importantly, however, there is still a limited understanding of present-day extreme sea levels, and this is largely ignored in most impact and adaptation analyses. The two key uncertainties stem from: (1) the numerical models used to generate long time series of extreme sea levels, whose bias varies spatially and can reach values much larger than the expected sea level rise, although it can be accounted for in most regions using in-situ measurements; and (2) the statistical models used to determine present-day extreme sea-level exceedance probabilities. 
There is no universally accepted approach to obtaining such values for flood risk assessments, and while substantial research has explored inter-model uncertainties for mean sea level, we explore here, for the first time, inter-model uncertainties for extreme sea levels at large spatial scales and compare them to the uncertainties in mean sea level projections.
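The second uncertainty source above, the statistical model for exceedance probabilities, can be illustrated with a short sketch: annual-maximum sea levels are commonly fitted with a generalized extreme value (GEV) distribution, and the return level for a given return period follows from its quantile function. The GEV parameters below are illustrative, not fitted values from any study.

```python
import math

def gev_return_level(mu, sigma, xi, return_period_years):
    """Return level z_T of a GEV(mu, sigma, xi) fitted to annual maxima."""
    p = 1.0 / return_period_years            # annual exceedance probability
    if abs(xi) < 1e-9:                       # Gumbel limit as xi -> 0
        return mu - sigma * math.log(-math.log(1.0 - p))
    return mu + (sigma / xi) * ((-math.log(1.0 - p)) ** (-xi) - 1.0)

# Illustrative GEV fit in metres above a local datum (not from any study).
mu, sigma, xi = 1.5, 0.2, 0.1
for T in (10, 50, 100):
    print(f"{T:4d}-year return level: {gev_return_level(mu, sigma, xi, T):.2f} m")
```

The choice of distribution family and fitting method is exactly where the inter-model spread discussed above enters: a GPD peaks-over-threshold fit, or a different shape parameter, yields different exceedance probabilities from the same data.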
Uncertainty in the Future of Seasonal Snowpack over North America.
NASA Astrophysics Data System (ADS)
McCrary, R. R.; Mearns, L.
2017-12-01
The uncertainties in future changes in seasonal snowpack (snow water equivalent, SWE) and snow cover extent (SCE) for North America are explored using the North American Regional Climate Change Assessment Program (NARCCAP) suite of regional climate models (RCMs) and their driving CMIP3 general circulation models (GCMs). The higher resolution of the NARCCAP RCMs is found to add significant value to the details of future projections of SWE in topographically complex regions such as the Pacific Northwest and the Rocky Mountains. The NARCCAP models also add detailed information regarding changes in the southernmost extent of snow cover. Eleven of the 12 NARCCAP ensemble members contributed SWE output, which we use to explore the uncertainty in future snowpack at higher resolution. In this study, we quantify the uncertainty in future projections using the spread of the interquartile range across the different models. By mid-century the RCMs consistently predict that winter SWE amounts will decrease over most of North America. The only exception is in northern Canada, where increased moisture supply leads to increases in SWE in all but one of the RCMs. While the models generally agree on the sign of the change in SWE, there is considerable spread in the magnitude (absolute and percent) of the change. The RCMs also agree that the number of days with measurable snow on the ground is projected to decrease, with snow accumulation occurring later in the fall/winter and melting starting earlier in the spring/summer. As with SWE amount, spread across the models is large for changes in the timing of the snow season, which can vary by over a month between models. While most of the NARCCAP models project a total loss of measurable snow along the southernmost edge of its historical range, there is considerable uncertainty within the ensemble about where this will occur, due to biases in snow cover extent in the historical simulations. 
We explore methods to increase our confidence about the regions that will lose any seasonal snow.
NASA Astrophysics Data System (ADS)
Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.
2015-12-01
Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which in turn affect atmospheric and biogeochemical processes. Quantification of such parameter uncertainties using a land surface model is therefore a first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (the slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in Rubisco) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, although with varying importance rankings. For surface and sub-surface runoff, on the other hand, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the four PFT-dependent parameters above. Further analysis conditioning the results on different seasons and years is being conducted to provide guidance on how climate variability and change might affect such sensitivities. 
This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes or both to determine the full range of sensitivity of Earth system modeling to land-surface parameters. This can facilitate sampling strategies in measurement campaigns targeted at reduction of climate modeling uncertainties and can also provide guidance on land parameter calibration for simulation optimization.
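The quasi-Monte Carlo sampling step described above can be sketched as follows. The study uses a Sobol sequence; this minimal stand-in uses a Halton sequence (another low-discrepancy sequence), and the parameter names and prior ranges below are illustrative inventions, not the actual CLM4.5 priors.

```python
def van_der_corput(n, base):
    """n-th term of the van der Corput sequence in the given base."""
    q, denom = 0.0, 1.0
    while n > 0:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def halton(n_samples, bases):
    """Low-discrepancy points in the unit hypercube (one base per dimension)."""
    return [[van_der_corput(i + 1, b) for b in bases] for i in range(n_samples)]

# Illustrative uniform prior ranges for the four PFT-dependent parameters
# named in the abstract (the ranges here are invented for the sketch).
priors = {
    "slope_conductance_photosynthesis": (4.0, 12.0),
    "specific_leaf_area_top": (0.01, 0.04),
    "leaf_CN_ratio": (20.0, 60.0),
    "frac_leafN_rubisco": (0.05, 0.25),
}

unit_points = halton(1024, bases=[2, 3, 5, 7])  # first four primes
samples = [
    {name: lo + u * (hi - lo) for u, (name, (lo, hi)) in zip(pt, priors.items())}
    for pt in unit_points
]
print(len(samples), "parameter sets generated")
```

Each of the 1024 dictionaries would drive one ensemble member; low-discrepancy points cover the prior hypercube far more evenly than plain random draws at this sample size.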
Estimation of organ and effective doses from newborn radiography of the chest and abdomen.
Ma, Hillgan; Elbakri, Idris A; Reed, Martin
2013-09-01
Neonatal intensive care patients undergo frequent chest and abdomen radiographic imaging. In this study, the organ doses and the effective dose resulting from combined chest-abdomen radiography of the newborn child are determined. These values are calculated using the Monte Carlo simulation software PCXMC 2.0 and compared with direct dose measurements obtained from thermoluminescent detectors (TLDs) in a physical phantom. The effective dose obtained from PCXMC is 21.2 ± 0.7 μSv and that obtained from TLD measurements is 22.0 ± 0.5 μSv. While the two methods are in close agreement on the effective dose, organ doses vary widely, from an 85 % difference for the testes to 1.4 % for the lungs. Large organ dose variations are attributed to organs at the edge of the field of view, or organs with large experimental error or simulation uncertainty. This study suggests that PCXMC can be used to estimate organ and effective doses for newborn patients.
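For readers unfamiliar with how an effective dose such as 21.2 μSv is assembled from organ doses: it is a tissue-weighted sum, E = Σ w_T · H_T, using ICRP tissue weighting factors. The sketch below uses a subset of the ICRP Publication 103 weights; the organ doses are illustrative, not the study's measured values.

```python
# Subset of ICRP 103 tissue weighting factors (remainder tissues omitted).
tissue_weights = {
    "lung": 0.12, "stomach": 0.12, "colon": 0.12, "bone_marrow": 0.12,
    "breast": 0.12, "gonads": 0.08, "bladder": 0.04, "liver": 0.04,
    "thyroid": 0.04, "oesophagus": 0.04,
}
# Illustrative organ equivalent doses in microsievert (invented values,
# not the study's PCXMC or TLD results).
organ_dose_uSv = {
    "lung": 30.0, "stomach": 25.0, "colon": 15.0, "bone_marrow": 20.0,
    "breast": 28.0, "gonads": 5.0, "bladder": 12.0, "liver": 24.0,
    "thyroid": 18.0, "oesophagus": 22.0,
}
effective = sum(tissue_weights[t] * organ_dose_uSv[t] for t in tissue_weights)
print(f"E ~= {effective:.1f} uSv (partial sum; remainder tissues omitted)")
```

The weighting explains the paper's observation: a large percentage error in one organ (e.g. the testes, w_T within the 0.08 gonad weight) can still leave the weighted sum nearly unchanged.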
Model sensitivity studies of the decrease in atmospheric carbon tetrachloride
Chipperfield, Martyn P.; Liang, Qing; Rigby, Matthew; ...
2016-12-20
Carbon tetrachloride (CCl4) is an ozone-depleting substance which is controlled by the Montreal Protocol and whose atmospheric abundance is decreasing. However, the currently observed rate of this decrease is known to be slower than expected based on reported CCl4 emissions and its estimated overall atmospheric lifetime. Here we use a three-dimensional (3-D) chemical transport model to investigate the impact on its predicted decay of uncertainties in the rates at which CCl4 is removed from the atmosphere by photolysis, by ocean uptake and by degradation in soils. The largest sink is atmospheric photolysis (74 % of the total), but a reported 10 % uncertainty in its combined photolysis cross section and quantum yield has only a modest impact on the modelled rate of CCl4 decay. This is partly due to the limiting effect of the rate of transport of CCl4 from the main tropospheric reservoir to the stratosphere, where photolytic loss occurs. The model suggests large interannual variability in the magnitude of this stratospheric photolysis sink caused by variations in transport. The impact of uncertainty in the minor soil sink (9 % of the total) is also relatively small. In contrast, the model shows that uncertainty in ocean loss (17 % of the total) has the largest impact on modelled CCl4 decay, due to its sizeable contribution to CCl4 loss and its large lifetime uncertainty range (147 to 241 years). Furthermore, with an assumed CCl4 emission rate of 39 Gg year-1, the reference simulation with the best estimate of loss processes still underestimates the observed CCl4 (overestimates the decay) over the past two decades, although to a smaller extent than previous studies. Changes to the rates of CCl4 loss processes, in line with known uncertainties, could bring the model into agreement with in situ surface and remote-sensing measurements, as could an increase in emissions to around 47 Gg year-1. 
Further progress in constraining the CCl4 budget is partly limited by systematic biases between observational datasets. For example, surface observations from the National Oceanic and Atmospheric Administration (NOAA) network are larger than those from the Advanced Global Atmospheric Gases Experiment (AGAGE) network but have shown a steeper decreasing trend over the past two decades. The observed differences imply a difference in emissions which is significant relative to the uncertainties in the magnitudes of the CCl4 sinks.
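The budget arithmetic behind the quoted sink fractions can be sketched as a one-box model: partial lifetimes combine in reciprocal, and the burden relaxes towards the equilibrium value E·τ. The partial lifetimes and initial burden below are illustrative choices, picked only to be roughly consistent with the quoted 74/17/9 % sink split, not the paper's numbers.

```python
# Partial lifetimes for the three sinks (years); illustrative values whose
# reciprocals split roughly 74/17/9 % between stratosphere, ocean and soil.
tau_strat, tau_ocean, tau_soil = 44.0, 194.0, 367.0
tau_total = 1.0 / (1.0 / tau_strat + 1.0 / tau_ocean + 1.0 / tau_soil)

emission = 39.0   # Gg/year, the assumed emission rate from the abstract
burden = 2200.0   # Gg, illustrative initial atmospheric burden

# Forward-Euler integration of dB/dt = E - B / tau_total over two decades.
dt = 0.1
b = burden
for _ in range(int(20 / dt)):
    b += dt * (emission - b / tau_total)
print(f"tau_total = {tau_total:.1f} yr, burden after 20 yr = {b:.0f} Gg")
```

With these illustrative numbers the burden declines towards E·τ ≈ 1270 Gg, reproducing the qualitative point of the abstract: the decay rate is controlled jointly by the emission estimate and the combined lifetime, so an error in either can explain the too-slow observed decrease.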
Knotts, Thomas A.
2017-01-01
Molecular simulation can predict physical properties that are difficult to obtain experimentally. For example, we use molecular simulation to predict the critical constants (critical temperature, critical density, critical pressure, and critical compressibility factor) for large n-alkanes (as large as C48) that thermally decompose in experiments. Historically, molecular simulation has been viewed as a tool limited to providing qualitative insight. One key reason for this perceived weakness is the difficulty of quantifying the uncertainty in the results, because molecular simulations have many sources of uncertainty that propagate and are difficult to quantify. We investigate one of the most important sources of uncertainty, namely, the intermolecular force field parameters. Specifically, we quantify the uncertainty in the Lennard-Jones (LJ) 12-6 parameters for the CH4, CH3, and CH2 united-atom interaction sites. We then demonstrate how the uncertainties in the parameters lead to uncertainties in the saturated liquid density and critical constant values obtained from Gibbs Ensemble Monte Carlo simulation. Our results suggest that the uncertainties attributed to the LJ 12-6 parameters are small enough that quantitatively useful estimates of the saturated liquid density and the critical constants can be obtained from molecular simulation. PMID:28527455
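A minimal sketch of this kind of parameter-uncertainty propagation: for the LJ fluid the reduced critical temperature is approximately Tc* ≈ 1.31, so Tc ≈ 1.31·ε/kB, and sampling ε from an assumed Gaussian uncertainty yields a distribution of Tc. The ε value and its standard uncertainty below are illustrative, and the closed-form map stands in for the paper's actual Gibbs Ensemble Monte Carlo workflow.

```python
import random
import statistics

random.seed(42)
TC_STAR = 1.31        # reduced critical temperature of the LJ fluid (approx.)
eps_over_kB = 148.0   # illustrative well depth eps/kB in kelvin (CH4-like)
sigma_eps = 2.0       # illustrative standard uncertainty in kelvin

# Monte Carlo propagation: each sampled eps maps to a critical temperature.
tc_samples = [TC_STAR * random.gauss(eps_over_kB, sigma_eps)
              for _ in range(20000)]
tc_mean = statistics.mean(tc_samples)
tc_sd = statistics.stdev(tc_samples)
print(f"Tc = {tc_mean:.1f} +/- {tc_sd:.1f} K")
```

Because the map is linear in ε here, the relative uncertainty in Tc equals that in ε (about 1.4 %), illustrating the paper's conclusion that modest parameter uncertainties translate into quantitatively useful property estimates.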
Synergistic Measurement of Ice Cloud Microphysics using C- and Ka-Band Radars
NASA Astrophysics Data System (ADS)
Ewald, F.; Gross, S.; Hagen, M.; Li, Q.; Zinner, T.
2017-12-01
Ice clouds play an essential role in the climate system since they have a large effect on the Earth's radiation budget. Uncertainties in their spatial and temporal distribution, as well as in their optical and microphysical properties, still account for large uncertainties in climate change predictions. Substantial improvement in our understanding of ice clouds came with the advent of cloud radars in ice cloud remote sensing. Here, highly variable ice crystal size distributions remain one of the key unresolved issues. With radar reflectivity scaling with the sixth moment of the particle size, the assumed ice crystal size distribution has a large impact on the results of microphysical retrievals. Different ice crystal size distributions can, however, be distinguished when cloud radars of different wavelengths are used simultaneously. For this study, synchronous RHI scans were performed over a common measurement range of about 30 km between two radar instruments operating at different wavelengths: the dual-polarization C-band radar POLDIRAD operated at DLR and the Mira-36 Ka-band cloud radar operated at the University of Munich. Over a measurement period of several months, the overlapping region for ice clouds turned out to be quite large. This gives evidence of the presence of moderate-sized ice crystals whose backscatter is sufficiently high to be visible in the C-band as well. In the range between -10 and +10 dBZ, reflectivity measurements from both radars agreed quite well, indicating the absence of large ice crystals. For reflectivities above +10 dBZ, we observed differences, with smaller values at the Ka-band due to Mie scattering effects at larger ice crystals. In this presentation, we will show how this differential reflectivity can be used to gain insight into ice cloud microphysics on the basis of electromagnetic scattering calculations. 
We will further explore ice cloud microphysics using the full polarization agility of the C-band radar and compare the results to simultaneous linear depolarization measurements with the Ka-band radar. In summary, we will explore if the scientific understanding of ice cloud microphysics can be advanced by the combination of C- and Ka-band radars.
Coplen, T.B.; Hopple, J.A.; Böhlke, J.K.; Peiser, H.S.; Rieder, S.E.; Krouse, H.R.; Rosman, K.J.R.; Ding, T.; Vocke, R.D.; Revesz, K.M.; Lamberty, A.; Taylor, P.; De Bievre, P.
2002-01-01
Documented variations in the isotopic compositions of some chemical elements are responsible for expanded uncertainties in the standard atomic weights published by the Commission on Atomic Weights and Isotopic Abundances of the International Union of Pure and Applied Chemistry. This report summarizes reported variations in the isotopic compositions of 20 elements that are due to physical and chemical fractionation processes (not due to radioactive decay) and their effects on the standard atomic weight uncertainties. For 11 of those elements (hydrogen, lithium, boron, carbon, nitrogen, oxygen, silicon, sulfur, chlorine, copper, and selenium), standard atomic weight uncertainties have been assigned values that are substantially larger than analytical uncertainties because of common isotope abundance variations in materials of natural terrestrial origin. For 2 elements (chromium and thallium), recently reported isotope abundance variations potentially are large enough to result in future expansion of their atomic weight uncertainties. For 7 elements (magnesium, calcium, iron, zinc, molybdenum, palladium, and tellurium), documented isotope-abundance variations in materials of natural terrestrial origin are too small to have a significant effect on their standard atomic weight uncertainties. This compilation indicates the extent to which the atomic weight of an element in a given material may differ from the standard atomic weight of the element. For most elements given above, data are graphically illustrated by a diagram in which the materials are specified in the ordinate and the compositional ranges are plotted along the abscissa in scales of (1) atomic weight, (2) mole fraction of a selected isotope, and (3) delta value of a selected isotope ratio. There are no internationally distributed isotopic reference materials for the elements zinc, selenium, molybdenum, palladium, and tellurium. 
Preparation of such materials will help to make isotope ratio measurements among laboratories comparable. The minimum and maximum concentrations of a selected isotope in naturally occurring terrestrial materials for selected chemical elements reviewed in this report are given below:

Isotope   Minimum mole fraction   Maximum mole fraction
2H        0.000 0255              0.000 1838
7Li       0.9227                  0.9278
11B       0.7961                  0.8107
13C       0.009 629               0.011 466
15N       0.003 462               0.004 210
18O       0.001 875               0.002 218
26Mg      0.1099                  0.1103
30Si      0.030 816               0.031 023
34S       0.0398                  0.0473
37Cl      0.240 77                0.243 56
44Ca      0.020 82                0.020 92
53Cr      0.095 01                0.095 53
56Fe      0.917 42                0.917 60
65Cu      0.3066                  0.3102
205Tl     0.704 72                0.705 06

The numerical values above have uncertainties that depend upon the uncertainties of the determinations of the absolute isotope-abundance variations of reference materials of the elements. Because reference materials used for absolute isotope-abundance measurements have not been included in relative isotope abundance investigations of zinc, selenium, molybdenum, palladium, and tellurium, ranges in isotopic composition are not listed for these elements, although such ranges may be measurable with state-of-the-art mass spectrometry. This report is available at the url: http://pubs.water.usgs.gov/wri014222.
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically quantified via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. Determining the uncertainty of the efficiency gain arising from random effects, however, is not straightforward, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and of standardized uncertainty propagation methods (widely used in metrology to estimate the uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. The corresponding relative uncertainty was found to be as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; its main disadvantage, however, was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by a few photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism by which high statistical weights arise in the fixed-collision correlated sampling method is explained and a mitigation strategy is proposed.
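The bootstrap step described above (shortest 95% interval from the simulated distribution of a gain-like ratio) can be sketched as follows. The data are synthetic stand-ins for scored per-history quantities, with a heavy tail mimicking the few high-weight photons; the reference variance is an assumed constant, not a second Monte Carlo run.

```python
import random

random.seed(1)
# Synthetic per-history scores: mostly small, with rare large statistical
# weights mimicking the high-weight photons described in the abstract.
scores = [100.0 * random.expovariate(1.0) if random.random() < 0.01
          else random.expovariate(1.0) for _ in range(500)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

ref_var = 5.0  # assumed variance of the conventional estimator (illustrative)

# Bootstrap distribution of the gain-like ratio ref_var / var(resample).
boot = sorted(ref_var / variance([random.choice(scores) for _ in scores])
              for _ in range(2000))

# Shortest interval containing 95% of the bootstrap distribution.
k = int(0.95 * len(boot))
start = min(range(len(boot) - k + 1), key=lambda i: boot[i + k - 1] - boot[i])
ci = (boot[start], boot[start + k - 1])
print(f"efficiency-gain-like ratio, 95% CI: [{ci[0]:.3f}, {ci[1]:.3f}]")
```

The heavy tail makes the resampled variance, and hence the ratio, strongly skewed, which is exactly why the shortest interval is preferred over symmetric percentile bounds and why an F-distribution approximation can fail.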
Ronald E. McRoberts; Paolo Moser; Laio Zimermann Oliveira; Alexander C. Vibrans
2015-01-01
Forest inventory estimates of tree volume for large areas are typically calculated by adding the model predictions of volumes for individual trees at the plot level, calculating the mean over plots, and expressing the result on a per unit area basis. The uncertainty in the model predictions is generally ignored, with the result that the precision of the large-area...
NASA Astrophysics Data System (ADS)
Gong, L.
2013-12-01
Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, and model uncertainties. A new, purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well, and these multiple sets consistently estimate annual discharge for the gauged area with a 5% average error. The scale-extrapolation method is completely data-based and therefore does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied to both un-gauged basins and un-gauged periods, with uncertainty estimation.
Sensitivity of collective action to uncertainty about climate tipping points
NASA Astrophysics Data System (ADS)
Barrett, Scott; Dannenberg, Astrid
2014-01-01
Despite more than two decades of diplomatic effort, concentrations of greenhouse gases continue to trend upwards, creating the risk that we may someday cross a threshold for 'dangerous' climate change. Although climate thresholds are very uncertain, new research is trying to devise 'early warning signals' of an approaching tipping point. This research offers a tantalizing promise: whereas collective action fails when threshold uncertainty is large, reductions in this uncertainty may bring about the behavioural change needed to avert a climate 'catastrophe'. Here we present the results of an experiment, rooted in a game-theoretic model, showing that behaviour differs markedly on either side of a dividing line for threshold uncertainty. On one side of the dividing line, where threshold uncertainty is relatively large, free riding proves irresistible and trust elusive, making it virtually inevitable that the tipping point will be crossed. On the other side, where threshold uncertainty is small, the incentive to coordinate is strong and trust more robust, often leading the players to avoid crossing the tipping point. Our results show that uncertainty must be reduced to this 'good' side of the dividing line to stimulate the behavioural shift needed to avoid 'dangerous' climate change.
On the apparent insignificance of the randomness of flexible joints on large space truss dynamics
NASA Technical Reports Server (NTRS)
Koch, R. M.; Klosner, J. M.
1993-01-01
Deployable periodic large space structures have been shown to exhibit high dynamic sensitivity to period-breaking imperfections and uncertainties. These can be brought on by manufacturing or assembly errors and structural imperfections, as well as by nonlinear and/or nonconservative joint behavior. In addition, the need for precise pointing and positioning capability can require consideration of these usually negligible and unknown parametric uncertainties and their effect on the overall dynamic response of large space structures. This work describes a new design approach for the global dynamic solution of beam-like periodic space structures possessing parametric uncertainties. Specifically, the effect of random flexible joints on the free vibrations of simply supported periodic large space trusses is considered. The formulation is a hybrid approach combining an extended Timoshenko beam continuum model, a Monte Carlo simulation scheme, and first-order perturbation methods. The mean and mean-square response statistics for a variety of free random vibration problems are derived for various input random joint stiffness probability distributions. The results of this effort show that, although joint flexibility has a substantial effect on the modal dynamic response of periodic large space trusses, the effect of any reasonable uncertainty or randomness associated with these joint flexibilities is insignificant.
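The Monte Carlo part of such a study can be sketched in miniature: sample a joint stiffness from an input probability distribution and collect statistics of the resulting natural frequency. A single spring-mass mode (ω = √(k/m)) stands in for the Timoshenko-beam continuum model; all values are illustrative.

```python
import math
import random
import statistics

random.seed(0)
m = 10.0                       # modal mass (illustrative)
k_mean, k_cov = 5.0e4, 0.10    # joint stiffness mean, 10% coefficient of variation

# Monte Carlo sampling of the stiffness, mapped to the natural frequency.
freqs = [math.sqrt(random.gauss(k_mean, k_cov * k_mean) / m)
         for _ in range(50000)]
mean_f = statistics.mean(freqs)
cov_f = statistics.stdev(freqs) / mean_f
print(f"mean = {mean_f:.2f} rad/s, cov = {cov_f:.3f}")
```

Because ω scales as the square root of k, the relative spread in frequency comes out near half the relative spread in stiffness, one simple way to see why reasonable joint randomness perturbs the modes only weakly even when mean joint flexibility matters a great deal.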
Micro-Pulse Lidar Signals: Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Welton, Ellsworth J.; Campbell, James R.; Starr, David OC. (Technical Monitor)
2002-01-01
Micro-pulse lidar (MPL) systems are small, autonomous, eye-safe lidars used for continuous observations of the vertical distribution of cloud and aerosol layers. Since the construction of the first MPL in 1993, procedures have been developed to correct for various instrument effects present in MPL signals. The primary instrument effects are afterpulse (laser-detector cross-talk) and overlap (poor focusing at near range, less than 6 km). Accurate correction of both afterpulse and overlap effects is required to study both clouds and aerosols. Furthermore, the outgoing energy of the laser pulses and the statistical uncertainty of the MPL detector must also be correctly determined in order to assess the accuracy of MPL observations. The uncertainties associated with the afterpulse, overlap, pulse energy, detector noise, and all remaining quantities affecting measured MPL signals are determined in this study, and propagated through the entire MPL correction process to give a net uncertainty on the final corrected MPL signal. The results show that in the near range the overlap uncertainty dominates. At altitudes above the overlap region, the dominant source of uncertainty is the uncertainty in the pulse energy. However, if the laser energy is low, high mid-day solar background levels can significantly reduce the signal-to-noise ratio of the detector; in that case, the statistical uncertainty of the detector count rate becomes dominant at altitudes above the overlap region.
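The propagation step can be sketched for the simplest case: if the corrections act multiplicatively on the signal and their errors are independent, the relative uncertainties add in quadrature. The component values below are illustrative, not the study's budget.

```python
import math

# Illustrative relative (fractional) uncertainties of each correction term.
rel_unc = {
    "afterpulse": 0.01,
    "overlap": 0.05,        # dominates in the near range in this sketch
    "pulse_energy": 0.03,
    "detector_noise": 0.02,
}

# Quadrature sum for independent multiplicative corrections.
total = math.sqrt(sum(u ** 2 for u in rel_unc.values()))
dominant = max(rel_unc, key=rel_unc.get)
print(f"total relative uncertainty = {total:.3f}, dominated by {dominant}")
```

In a real budget the components are range-dependent (overlap only matters below about 6 km), so the quadrature sum, and the identity of the dominant term, changes with altitude, which is exactly the behaviour the abstract describes.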
NASA Astrophysics Data System (ADS)
Croke, Jacky; Todd, Peter; Thompson, Chris; Watson, Fiona; Denham, Robert; Khanal, Giri
2013-02-01
Advances in remote sensing and digital terrain processing now allow sophisticated analysis of spatial and temporal changes in erosion and deposition. Digital elevation models (DEMs) can be constructed and differenced to produce DEMs of Difference (DoD), which are used to assess net landscape change for morphological budgeting. To date this has been achieved most effectively in gravel-bed rivers over relatively small spatial scales. If the full potential of the technology is to be realised, additional studies are required at larger scales and across a wider range of geomorphic features. This study presents an assessment of the basin-scale spatial patterns of erosion, deposition, and net morphological change that resulted from a catastrophic flood event in the Lockyer Creek catchment of SE Queensland (SEQ) in January 2011. Multitemporal Light Detection and Ranging (LiDAR) DEMs were used to construct a DoD that was then combined with the one-dimensional hydraulic model HEC-RAS to delineate five major geomorphic landforms, including the inner-channel area, within-channel benches, macrochannel banks, and floodplain. The LiDAR uncertainties were quantified and applied together with a probabilistic representation of uncertainty thresholded at a conservative 95% confidence interval. The elevation change distribution (ECD) for the 100-km2 study area spans almost 10 m of elevation change, but the mean elevation change of 0.04 m confirms that a large part of the landscape was characterised by relatively low-magnitude changes over a large spatial area. Mean elevation changes varied by geomorphic feature, and only two, the within-channel benches and macrochannel banks, were net erosional, with an estimated combined loss of 1,815,149 m3 of sediment. The floodplain was the zone of major net deposition, but mean elevation changes approached the defined critical limit of uncertainty. 
Areal and volumetric ECDs for this extreme event provide a representative expression of the balance between erosion and deposition, and importantly sediment redistribution, which is extremely difficult to quantify using more traditional channel planform or cross-sectional surveys. The ability of LiDAR to make a rapid and accurate assessment of key geomorphic processes over large spatial scales contributes to our understanding of key processes and, as demonstrated here, to the assessment of major geomorphological hazards such as extreme flood events.
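The 95% thresholding step used in DoD analysis can be sketched per cell: an elevation change is retained only if it exceeds the propagated uncertainty of the two LiDAR DEMs, U_crit = t·√(σ₁² + σ₂²) with t = 1.96. The per-cell DEM errors and the elevation changes below are illustrative values, not the study's data.

```python
import math

T_CRIT = 1.96     # two-sided 95% confidence threshold
sd_dem1 = 0.10    # illustrative per-cell DEM error, survey 1 (m)
sd_dem2 = 0.12    # illustrative per-cell DEM error, survey 2 (m)

# Propagated minimum level of detection for the differenced surface.
u_crit = T_CRIT * math.sqrt(sd_dem1 ** 2 + sd_dem2 ** 2)

dod = [0.05, -0.40, 0.31, 1.20, -0.10]   # illustrative elevation changes (m)
significant = [dz if abs(dz) >= u_crit else 0.0 for dz in dod]
net_change = sum(significant)
print(f"U_crit = {u_crit:.3f} m; net significant change = {net_change:.2f} m")
```

Changes smaller than U_crit (here the 0.05 m and -0.10 m cells) are treated as indistinguishable from survey noise, which is why a mean change of 0.04 m over the study area sits below the detection limit even though the net volumes over many cells are robust.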
NASA Astrophysics Data System (ADS)
Singh, R.; Wagener, T.; Crane, R.; Mann, M. E.; Ning, L.
2014-04-01
Large uncertainties in streamflow projections derived from downscaled climate projections of precipitation and temperature can render such simulations of limited value for decision making in the context of water resources management. New approaches are being sought to provide decision makers with robust information in the face of such large uncertainties. We present an alternative approach that starts with the stakeholder's definition of vulnerable ranges for relevant hydrologic indicators. Then the modeled system is analyzed to assess under what conditions these thresholds are exceeded. The space of possible climates and land use combinations for a watershed is explored to isolate subspaces that lead to vulnerability, while considering model parameter uncertainty in the analysis. We implement this concept using classification and regression trees (CART) that separate the input space of climate and land use change into those combinations that lead to vulnerability and those that do not. We test our method in a Pennsylvania watershed for nine ecological and water resources related streamflow indicators for which an increase in temperature between 3°C and 6°C and a change in precipitation between -17% and 19% is projected. Our approach provides several new insights; for example, we show that even small decreases in precipitation (~5%) combined with temperature increases greater than 2.5°C can push the mean annual runoff into a slightly vulnerable regime. Using this impact and stakeholder driven strategy, we explore the decision-relevant space more fully and provide information to the decision maker even if climate change projections are ambiguous.
NOTE: Ranges of ions in metals for use in particle treatment planning
NASA Astrophysics Data System (ADS)
Jäkel, Oliver
2006-05-01
In proton and ion radiotherapy, the range of particles is calculated from x-ray computed tomography (CT) numbers. Due to the strong absorption of x-rays in a metal and a cut-off for large Hounsfield units (HU) in the software of most CT-scanners, a range calculation in metals cannot be based on the measured HU. This is of special importance when metal implants such as gold fillings or hip prostheses are close to the treatment volume. In order to overcome this problem in treatment planning for heavy charged particles, the correct ranges of ions in the metal relative to water have to be assigned in the CT data. Measurements and calculations of carbon ion ranges in various metals are presented that can be used in treatment planning to allow for a more accurate range calculation of carbon ion beams in titanium, steel, tungsten and gold. The suggested values for the relative water-equivalent range and their uncertainties are 3.13 (±3%) for titanium, 5.59 (±3%) for stainless steel and 10.25 (±4%) for gold.
The Austrian radiation monitoring network ARAD - best practice and added value
NASA Astrophysics Data System (ADS)
Olefs, Marc; Baumgartner, Dietmar J.; Obleitner, Friedrich; Bichler, Christoph; Foelsche, Ulrich; Pietsch, Helga; Rieder, Harald E.; Weihs, Philipp; Geyer, Florian; Haiden, Thomas; Schöner, Wolfgang
2016-04-01
The Austrian RADiation monitoring network (ARAD) has been established to advance national climate monitoring and to support satellite retrieval, atmospheric modeling and the development of solar energy techniques. Measurements cover the downward solar and thermal infrared radiation using instruments according to Baseline Surface Radiation Network (BSRN) standards. A unique feature of ARAD is its vertical dimension of five stations, covering an altitude range between about 200 m a.s.l. (Vienna) and 3100 m a.s.l. (BSRN site Sonnblick). The paper outlines the aims and scope of ARAD, its measurement and calibration standards, methods, strategies and station locations. ARAD network operation uses innovative data processing for quality assurance and quality control, utilizing manual and automated control algorithms. A combined uncertainty estimate for the broadband shortwave radiation fluxes at all five ARAD stations, using the methodology specified by the Guide to the Expression of Uncertainty in Measurement, indicates that relative accuracies range from 1.5 to 2.9 % for large signals (global, direct: 1000 W m-2, diffuse: 500 W m-2) and from 1.7 to 23 % (or 0.9 to 11.5 W m-2) for small signals (50 W m-2) (expanded uncertainties corresponding to the 95 % confidence level). If the directional response error of the pyranometers and the temperature response of the instruments and the data acquisition system (DAQ) are corrected, this expanded uncertainty reduces to 1.4 to 2.8 % for large signals and to 1.7 to 5.2 % (or 0.9-2.6 W m-2) for small signals. Thus, for large signals of global and diffuse radiation, BSRN target accuracies are met or nearly met (missed by less than 0.2 percentage points, pps) for 70 % of the ARAD measurements after this correction. For small signals of direct radiation, BSRN targets are achieved at two sites and nearly met (also missed by less than 0.2 pps) at the other sites. 
For small signals of global and diffuse radiation, targets are achieved at all stations. Additional accuracy gains can be achieved in the future through additional measurements, corrections and a further upgrade of the DAQ. However, to improve the accuracy of measurements of direct solar radiation, improved instrument accuracy is needed. ARAD could serve as a useful example for establishing state-of-the-art radiation monitoring at the national level with a multiple-purpose approach. Instrumentation, guidelines and tools (such as the data quality control) developed within ARAD are intended to increase monitoring capabilities of global radiation and thus designed to allow straightforward adoption in other regions, without high development costs.
Assessment of Radiative Heating Uncertainty for Hyperbolic Earth Entry
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.; Mazaheri, Alireza; Gnoffo, Peter A.; Kleb, W. L.; Sutton, Kenneth; Prabhu, Dinesh K.; Brandis, Aaron M.; Bose, Deepak
2011-01-01
This paper investigates the shock-layer radiative heating uncertainty for hyperbolic Earth entry, with the main focus being a Mars return. In Part I of this work, a baseline simulation approach involving the LAURA Navier-Stokes code with coupled ablation and radiation is presented, with the HARA radiation code being used for the radiation predictions. Flight cases representative of peak-heating Mars or asteroid return are defined and the strong influence of coupled ablation and radiation on their aerothermodynamic environments is shown. Structural uncertainties inherent in the baseline simulations are identified, with turbulence modeling, precursor absorption, grid convergence, and radiation transport uncertainties combining for a +34% and -24% structural uncertainty on the radiative heating. A parametric uncertainty analysis, which assumes interval uncertainties, is presented. This analysis accounts for uncertainties in the radiation models as well as heat of formation uncertainties in the flow field model. Discussions and references are provided to support the uncertainty range chosen for each parameter. A parametric uncertainty of +47.3% and -28.3% is computed for the stagnation-point radiative heating for the 15 km/s Mars-return case. A breakdown of the largest individual uncertainty contributors is presented, which includes the C3 Swings cross-section, photoionization edge shift, and Opacity Project atomic lines. Combining the structural and parametric uncertainty components results in a total uncertainty of +81.3% and -52.3% for the Mars-return case. In Part II, the computational technique and uncertainty analysis presented in Part I are applied to 1960s era shock-tube and constricted-arc experimental cases. It is shown that experiments contain shock layer temperatures and radiative flux values relevant to the Mars-return cases of present interest. 
Comparisons between the predictions and measurements, accounting for the uncertainty in both, are made for a range of experiments. A measure of comparison quality is defined, which consists of the percent overlap of the predicted uncertainty bar with the corresponding measurement uncertainty bar. For nearly all cases, this percent overlap is greater than zero, and for most of the higher temperature cases (T >13,000 K) it is greater than 50%. These favorable comparisons provide evidence that the baseline computational technique and uncertainty analysis presented in Part I are adequate for Mars-return simulations. In Part III, the computational technique and uncertainty analysis presented in Part I are applied to EAST shock-tube cases. These experimental cases contain wavelength dependent intensity measurements in a wavelength range that covers 60% of the radiative intensity for the 11 km/s, 5 m radius flight case studied in Part I. Comparisons between the predictions and EAST measurements are made for a range of experiments. The uncertainty analysis presented in Part I is applied to each prediction, and comparisons are made using the metrics defined in Part II. The agreement between predictions and measurements is excellent for velocities greater than 10.5 km/s. Both the wavelength dependent and wavelength integrated intensities agree within 30% for nearly all cases considered. This agreement provides confidence in the computational technique and uncertainty analysis presented in Part I, and provides further evidence that this approach is adequate for Mars-return simulations. Part IV of this paper reviews existing experimental data that include the influence of massive ablation on radiative heating. It is concluded that this existing data is not sufficient for the present uncertainty analysis. 
Experiments to capture the influence of massive ablation on radiation are suggested as future work, along with further studies of the radiative precursor and improvements in the radiation properties of ablation products.
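The quoted totals are consistent with interval (worst-case) combination, in which the upper and lower bounds of each component are summed separately rather than root-sum-squared: +34% + 47.3% = +81.3% and -24% - 28.3% = -52.3%. A sketch of that arithmetic; the function name is illustrative.

```python
def combine_intervals(*bounds):
    """Combine asymmetric interval uncertainties by adding the upper and
    lower bounds separately (worst-case interval arithmetic), consistent
    with the Mars-return totals quoted above."""
    upper = sum(up for up, low in bounds)
    lower = sum(low for up, low in bounds)
    return upper, lower

structural = (34.0, -24.0)    # percent, structural uncertainty
parametric = (47.3, -28.3)    # percent, parametric uncertainty
# upper: 34 + 47.3 = 81.3; lower: -24 - 28.3 = -52.3
total = combine_intervals(structural, parametric)
```

Interval addition is deliberately conservative: unlike a root-sum-square combination, it does not assume the component uncertainties are independent random errors that can partially cancel.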
Dark Energy Survey Year 1 Results: galaxy mock catalogues for BAO
NASA Astrophysics Data System (ADS)
Avila, S.; Crocce, M.; Ross, A. J.; García-Bellido, J.; Percival, W. J.; Banik, N.; Camacho, H.; Kokron, N.; Chan, K. C.; Andrade-Oliveira, F.; Gomes, R.; Gomes, D.; Lima, M.; Rosenfeld, R.; Salvador, A. I.; Friedrich, O.; Abdalla, F. B.; Annis, J.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Carrasco Kind, M.; Carretero, J.; Castander, F. J.; Cunha, C. E.; da Costa, L. N.; Davis, C.; De Vicente, J.; Doel, P.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Hartley, W. G.; Hollowood, D.; Honscheid, K.; James, D. J.; Kuehn, K.; Kuropatkin, N.; Miquel, R.; Plazas, A. A.; Sanchez, E.; Scarpine, V.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Walker, A. R.; Dark Energy Survey Collaboration
2018-05-01
Mock catalogues are a crucial tool in the analysis of galaxy survey data, both for the accurate computation of covariance matrices, and for the optimisation of analysis methodology and validation of data sets. In this paper, we present a set of 1800 galaxy mock catalogues designed to match the Dark Energy Survey Year-1 BAO sample (Crocce et al. 2017) in abundance, observational volume, redshift distribution and uncertainty, and redshift dependent clustering. The simulated samples were built upon HALOGEN (Avila et al. 2015) halo catalogues, based on a 2LPT density field with an empirical halo bias. For each of them, a lightcone is constructed by the superposition of snapshots in the redshift range 0.45 < z < 1.4. Uncertainties introduced by photometric redshift estimators were modelled with a double-skewed-Gaussian curve fitted to the data. We populate halos with galaxies by introducing a hybrid Halo Occupation Distribution - Halo Abundance Matching model with two free parameters. These are adjusted to achieve a galaxy bias evolution b(zph) that matches the data at the 1-σ level in the range 0.6 < zph < 1.0. We further analyse the galaxy mock catalogues and compare their clustering to the data using the angular correlation function w(θ), the comoving transverse separation clustering ξμ<0.8(s⊥) and the angular power spectrum Cℓ, finding them in agreement. This is the first large set of three-dimensional {ra,dec,z} galaxy mock catalogues able to simultaneously and accurately reproduce the photometric redshift uncertainties and the galaxy clustering.
Which climate change path are we following? Bad news from Scots pine
Bombi, Pierluigi; D'Andrea, Ettore; Rezaie, Negar; Cammarano, Mario; Matteucci, Giorgio
2017-01-01
Current expectations on future climate derive from coordinated experiments, which compile many climate models to sample the entire uncertainty related to emission scenarios, initial conditions, and modelling processes. Quantifying this uncertainty is important for taking decisions that are robust under a wide range of possible future conditions. Nevertheless, if uncertainty is too large, it can prevent the planning of specific and effective measures. For this reason, reducing the spectrum of possible scenarios to one or a few models that actually represent the climate pathway influencing natural ecosystems would substantially increase our planning capacity. Here we adopt a multidisciplinary approach based on the comparison of observed and expected spatial patterns of response to climate change in order to identify which specific models, among those included in the CMIP5, capture the real climate variation driving the response of natural ecosystems. We used dendrochronological analyses to determine the geographic pattern of recent growth trends for three European tree species. At the same time, we modelled the climatic niche for the same species and forecasted the suitability variation expected across Europe under each different GCM. Finally, we estimated how well each GCM explains the real response of ecosystems by comparing the expected variation with the observed growth trends. In doing so, we identified four climatic models that are coherent with the observed trends. These models are close to the upper limit of the climatic variations expected by the ensemble of the CMIP5 models, suggesting that current predictions of climate change impacts on ecosystems could be underestimated. PMID:29252985
NASA Astrophysics Data System (ADS)
Heinonen, Martti; Anagnostou, Miltiadis; Bell, Stephanie; Stevens, Mark; Benyon, Robert; Bergerud, Reidun Anita; Bojkovski, Jovan; Bosma, Rien; Nielsen, Jan; Böse, Norbert; Cromwell, Plunkett; Kartal Dogan, Aliye; Aytekin, Seda; Uytun, Ali; Fernicola, Vito; Flakiewicz, Krzysztof; Blanquart, Bertrand; Hudoklin, Domen; Jacobson, Per; Kentved, Anders; Lóio, Isabel; Mamontov, George; Masarykova, Alexandra; Mitter, Helmut; Mnguni, Regina; Otych, Jan; Steiner, Anton; Szilágyi Zsófia, Nagyné; Zvizdic, Davor
2012-09-01
In the field of humidity quantities, the first CIPM key comparison, CCT-K6, is at its end. The corresponding European regional key comparison, EUROMET.T-K6, was completed in early 2008, about 4 years after the start of the initial measurements in the project. In total, 24 NMIs from different countries took part in the comparison. This number includes 22 EURAMET countries, plus Russia and South Africa. The comparison covered the dew-point temperature range from -50 °C to +20 °C. It was carried out in three parallel loops, each with two chilled-mirror hygrometers as transfer standards. The comparison scheme was designed to ensure high-quality results with an evenly spread workload for the participants. It is shown that the standard uncertainty due to long-term instability was smaller than 0.008 °C in all loops. The standard uncertainties due to links between the loops were found to be smaller than 0.025 °C at -50 °C and 0.010 °C elsewhere. Conclusions on the equivalence of the dew-point temperature standards are drawn on the basis of calculated bilateral degrees of equivalence and deviations from the EURAMET comparison reference values (ERV). Taking into account 16 different primary dew-point realizations and 8 secondary realizations, the results demonstrate the equivalence of a large number of laboratories at an uncertainty level better than achieved in other multilateral comparisons so far in the humidity field.
Regional scaling of annual mean precipitation and water availability with global temperature change
NASA Astrophysics Data System (ADS)
Greve, Peter; Gudmundsson, Lukas; Seneviratne, Sonia I.
2018-03-01
Changes in regional water availability belong to the most crucial potential impacts of anthropogenic climate change, but are highly uncertain. It is thus of key importance for stakeholders to assess the possible implications of different global temperature thresholds on these quantities. Using a subset of climate model simulations from the fifth phase of the Coupled Model Intercomparison Project (CMIP5), we derive here the sensitivity of regional changes in precipitation and in precipitation minus evapotranspiration to global temperature changes. The simulations span the full range of available emission scenarios, and the sensitivities are derived using a modified pattern scaling approach. The applied approach assumes linear relationships on global temperature changes while thoroughly addressing associated uncertainties via resampling methods. This allows us to assess the full distribution of the simulations in a probabilistic sense. Northern high-latitude regions display robust responses towards wetting, while subtropical regions display a tendency towards drying but with a large range of responses. Even though both internal variability and the scenario choice play an important role in the overall spread of the simulations, the uncertainty stemming from the climate model choice usually accounts for about half of the total uncertainty in most regions. We additionally assess the implications of limiting global mean temperature warming to values below (i) 2 K or (ii) 1.5 K (as stated within the 2015 Paris Agreement). We show that opting for the 1.5 K target might just slightly influence the mean response, but could substantially reduce the risk of experiencing extreme changes in regional water availability.
An estimated cost of lost climate regulation services caused by thawing of the Arctic cryosphere.
Euskirchen, Eugénie S; Goodstein, Eban S; Huntington, Henry P
2013-12-01
Recent and expected changes in Arctic sea ice cover, snow cover, and methane emissions from permafrost thaw are likely to result in large positive feedbacks to climate warming. There is little recognition of the significant loss in economic value that the disappearance of Arctic sea ice, snow, and permafrost will impose on humans. Here, we examine how sea ice and snow cover, as well as methane emissions due to changes in permafrost, may potentially change in the future, to year 2100, and how these changes may feed back to influence the climate. Between 2010 and 2100, the annual costs from the extra warming due to a decline in albedo related to losses of sea ice and snow, plus each year's methane emissions, cumulate to a present value cost to society ranging from US$7.5 trillion to US$91.3 trillion. The estimated range reflects uncertainty associated with (1) the extent of warming-driven positive climate feedbacks from the thawing cryosphere and (2) the expected economic damages per metric ton of CO2 equivalents that will be imposed by added warming, which depend, especially, on the choice of discount rate. The economic uncertainty is much larger than the uncertainty in possible future feedback effects. Nonetheless, the frozen Arctic provides immense services to all nations by cooling the earth's temperature: the cryosphere is an air conditioner for the planet. As the Arctic thaws, this critical, climate-stabilizing ecosystem service is being lost. This paper provides a first attempt to monetize the cost of some of those lost services.
Bias and robustness of uncertainty components estimates in transient climate projections
NASA Astrophysics Data System (ADS)
Hingray, Benoit; Blanchet, Juliette; Jean-Philippe, Vidal
2016-04-01
A critical issue in climate change studies is the estimation of uncertainties in projections, along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty, and internal variability. Quantifying the different uncertainty sources actually raises different problems. For instance, and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are however biased. Another difficulty arises from the limited number of members that are classically available for most modeling chains. In this case, the climate response of a given chain and the effect of its internal variability may actually be difficult, if not impossible, to separate. The estimates of the scenario uncertainty, model uncertainty and internal variability components are thus unlikely to be robust. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based on the only data available for the considered projection lead time, and a time series based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two sources: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with Monte Carlo simulations. 
For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias is always positive. It can be especially high with STANOVA. In the most critical configurations, when the number of members available for each modeling chain is small (< 3) and when internal variability explains most of the total uncertainty variance (75% or more), the overestimation is higher than 100% of the true model uncertainty variance. The bias can be considerably reduced with a time series ANOVA approach, owing to the multiple time steps accounted for. The longer the transient time period used for the analysis, the larger the reduction. When a quasi-ergodic ANOVA approach is applied to decadal data for the whole 1980-2100 period, the bias is reduced by a factor of 2.5 to 20, depending on the projection lead time. In all cases, the bias is likely to be non-negligible for a large number of climate impact studies, resulting in a likely large overestimation of the contribution of model uncertainty to total variance. For both approaches, the robustness of all uncertainty estimates is higher when more members are available, when internal variability is smaller and/or when the response-to-uncertainty ratio is higher. QEANOVA estimates are much more robust than STANOVA ones: QEANOVA simulated confidence intervals are roughly 3 to 5 times smaller than STANOVA ones. Except for STANOVA when fewer than 3 members are available, the robustness is rather high for total uncertainty and moderate for internal variability estimates. For model uncertainty or response-to-uncertainty ratio estimates, the robustness is conversely low for QEANOVA and very low for STANOVA. In the most critical configurations (small number of members, large internal variability), large over- or underestimation of uncertainty components is thus very likely. 
To propose relevant uncertainty analyses and avoid misleading interpretations, estimates of uncertainty components should therefore be bias-corrected and ideally come with estimates of their robustness. This work is part of the COMPLEX Project (European Collaborative Project FP7-ENV-2012 number: 308601; http://www.complex.ac.uk/). Hingray, B., Saïd, M., 2014. Partitioning internal variability and model uncertainty components in a multimodel multireplicate ensemble of climate projections. J. Climate. doi:10.1175/JCLI-D-13-00629.1 Hingray, B., Blanchet, J. (revision) Unbiased estimators for uncertainty components in transient climate projections. J. Climate. Hingray, B., Blanchet, J., Vidal, J.P. (revision) Robustness of uncertainty components estimates in climate projections. J. Climate.
Mapping (dis)agreement in hydrologic projections
NASA Astrophysics Data System (ADS)
Melsen, Lieke A.; Addor, Nans; Mizukami, Naoki; Newman, Andrew J.; Torfs, Paul J. J. F.; Clark, Martyn P.; Uijlenhoet, Remko; Teuling, Adriaan J.
2018-03-01
Hydrologic projections are of vital socio-economic importance. However, they are also prone to uncertainty. In order to establish a meaningful range of storylines to support water managers in decision making, we need to reveal the relevant sources of uncertainty. Here, we systematically and extensively investigate uncertainty in hydrologic projections for 605 basins throughout the contiguous US. We show that in the majority of the basins, the sign of change in average annual runoff and discharge timing for the period 2070-2100 compared to 1985-2008 differs among combinations of climate models, hydrologic models, and parameters. Mapping the results revealed that different sources of uncertainty dominate in different regions. Hydrologic model induced uncertainty in the sign of change in mean runoff was related to snow processes and aridity, whereas uncertainty in both mean runoff and discharge timing induced by the climate models was related to disagreement among the models regarding the change in precipitation. Overall, disagreement on the sign of change was more widespread for the mean runoff than for the discharge timing. The results demonstrate the need to define a wide range of quantitative hydrologic storylines, including parameter, hydrologic model, and climate model forcing uncertainty, to support water resource planning.
Multivariate Meta-Analysis of Preference-Based Quality of Life Values in Coronary Heart Disease.
Stevanović, Jelena; Pechlivanoglou, Petros; Kampinga, Marthe A; Krabbe, Paul F M; Postma, Maarten J
2016-01-01
There are numerous health-related quality of life (HRQoL) measurements used in coronary heart disease (CHD) in the literature. However, only values assessed with preference-based instruments can be directly applied in a cost-utility analysis (CUA). To summarize and synthesize instrument-specific preference-based values in CHD and the underlying disease subgroups, stable angina and post-acute coronary syndrome (post-ACS), for developed countries, while accounting for study-level characteristics and within- and between-study correlation. A systematic review was conducted to identify studies reporting preference-based values in CHD. A multivariate meta-analysis was applied to synthesize the HRQoL values. Meta-regression analyses examined the effect of the study-level covariates age, publication year, prevalence of diabetes and gender. A total of 40 studies providing preference-based values were detected. Synthesized estimates of HRQoL in post-ACS ranged from 0.64 (Quality of Well-Being) to 0.92 (EuroQol European "tariff"), while in stable angina they ranged from 0.64 (Short Form 6D) to 0.89 (Standard Gamble). Similar findings were observed in estimates applying to general CHD. No significant improvement in model fit was found after adjusting for study-level covariates. Large between-study heterogeneity was observed in all the models investigated. The main finding of our study is the presence of large heterogeneity both within and between instrument-specific HRQoL values. Current economic models in CHD ignore this between-study heterogeneity. Multivariate meta-analysis can quantify this heterogeneity and offers the means for uncertainty around HRQoL values to be translated to uncertainty in CUAs.
Advanced Small Modular Reactor Economics Status Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Thomas J.
2014-10-01
This report describes the data collection work performed for an advanced small modular reactor (AdvSMR) economics analysis activity at the Oak Ridge National Laboratory. The methodology development and analytical results are described in separate, stand-alone documents as listed in the references. The economics analysis effort for the AdvSMR program combines the technical and fuel cycle aspects of advanced (non-light water reactor [LWR]) reactors with the market and production aspects of SMRs. This requires the collection, analysis, and synthesis of multiple unrelated and potentially high-uncertainty data sets from a wide range of data sources. Further, the nature of both economic and nuclear technology analysis requires at least a minor attempt at prediction and prognostication, and the far-term horizon for deployment of advanced nuclear systems introduces more uncertainty. Energy market uncertainty, especially the electricity market, is the result of the integration of commodity prices, demand fluctuation, and generation competition, as easily seen in deregulated markets. Depending on current or projected values for any of these factors, the economic attractiveness of any power plant construction project can change yearly or quarterly. For long-lead construction projects such as nuclear power plants, this uncertainty generates an implied and inherent risk for potential nuclear power plant owners and operators. The uncertainty in nuclear reactor and fuel cycle costs is in some respects better understood and quantified than the energy market uncertainty. The LWR-based fuel cycle has a long commercial history to use as its basis for cost estimation, and the current activities in LWR construction provide a reliable baseline for estimates for similar efforts. 
However, for advanced systems, the estimates and their associated uncertainties are based on forward-looking assumptions for performance after the system has been built and has achieved commercial operation. Advanced fuel materials and fabrication costs have large uncertainties based on complexities of operation, such as contact-handled fuel fabrication versus remote handling, or commodity availability. Thus, this analytical work makes a good faith effort to quantify uncertainties and provide qualifiers, caveats, and explanations for the sources of these uncertainties. The overall result is that this work assembles the necessary information and establishes the foundation for future analyses using more precise data as nuclear technology advances.
A comparative experimental evaluation of uncertainty estimation methods for two-component PIV
NASA Astrophysics Data System (ADS)
Boomsma, Aaron; Bhattacharya, Sayantan; Troolin, Dan; Pothos, Stamatios; Vlachos, Pavlos
2016-09-01
Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have recently been introduced, generating interest in their applicability and utility. The present study compares and contrasts current methods across two separate experiments and three software packages in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods: primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using in-house open source PIV processing software (PRANA, Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high- and low-resolution measurements and a laser Doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that qualitatively, each method responded to spatially varying error (i.e. higher error regions resulted in higher uncertainty predictions in that region). However, the PPR and MI methods demonstrated reduced uncertainty dynamic range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges. 
The standard coverages (68% confidence interval) ranged from approximately 65% to 77% for the PPR and MI methods, 40% to 50% for IM, and near 50% for CS. These observations illustrate some of the strengths and weaknesses of the methods considered herein and identify future directions for development and improvement.
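The standard-coverage diagnostic used to score these methods is simple to state in code: for a well-calibrated method, the true error should fall within the quoted 1-sigma uncertainty about 68% of the time. A sketch with synthetic Gaussian errors (the error scale is an assumption for illustration, not data from the experiments):

```python
# Sketch of the "standard coverage" check used to score PIV uncertainty
# methods. Synthetic errors and uncertainties are illustrative only.
import random

def standard_coverage(errors, uncertainties):
    """Fraction of samples with |error| <= quoted 1-sigma uncertainty."""
    inside = sum(1 for e, u in zip(errors, uncertainties) if abs(e) <= u)
    return inside / len(errors)

random.seed(0)
true_sigma = 0.1                       # px, assumed error scale
errors = [random.gauss(0.0, true_sigma) for _ in range(10_000)]
# a perfectly calibrated method would report the true sigma everywhere
coverage = standard_coverage(errors, [true_sigma] * len(errors))
print(round(coverage, 2))              # close to 0.68 for Gaussian errors
```

An under-predicting method (like IM or CS in the study) would report uncertainties smaller than the true error scale, pulling this fraction well below 0.68.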
Yao, Shuai-Lei; Luo, Jing-Jia; Huang, Gang
2016-01-01
Regional climate projections are challenging because of large uncertainty, particularly that stemming from unpredictable, internal variability of the climate system. Here, we examine the internal variability-induced uncertainty in precipitation and surface air temperature (SAT) trends during 2005-2055 over East Asia based on 40-member ensemble projections of the Community Climate System Model Version 3 (CCSM3). The model ensembles are generated from a suite of different atmospheric initial conditions using the same SRES A1B greenhouse gas scenario. We find that projected precipitation trends are subject to considerably larger internal uncertainty, and hence have lower confidence, compared to the projected SAT trends in both the boreal winter and summer. Projected SAT trends in winter have relatively higher uncertainty than those in summer. In addition, the lower-level atmospheric circulation has larger uncertainty than the mid-level circulation. Based on k-means cluster analysis, we demonstrate that a substantial portion of internally-induced precipitation and SAT trends arises from internal large-scale atmospheric circulation variability. These results highlight the importance of internal climate variability in affecting regional climate projections on multi-decadal timescales.
The cost of getting CCS wrong: Uncertainty, infrastructure design, and stranded CO 2
Middleton, Richard Stephen; Yaw, Sean Patrick
2018-01-11
Carbon capture and storage (CCS) infrastructure will require industry—such as fossil-fuel power, ethanol production, and oil and gas extraction—to make massive investment in infrastructure. The cost of getting these investments wrong will be substantial and will impact the success of CCS technology. Multiple factors can and will impact the success of commercial-scale CCS, including significant uncertainties regarding capture, transport, and injection-storage decisions. Uncertainties throughout the CCS supply chain include policy, technology, engineering performance, economics, and market forces. In particular, large uncertainties exist for the injection and storage of CO 2. Even taking into account upfront investment in site characterization, the final performance of the storage phase is largely unknown until commercial-scale injection has started. We explore and quantify the impact of getting CCS infrastructure decisions wrong based on uncertain injection rates and uncertain CO 2 storage capacities using a case study managing CO 2 emissions from the Canadian oil sands industry in Alberta. We use SimCCS, a widely used CCS infrastructure design framework, to develop multiple CCS infrastructure scenarios. Each scenario consists of a CCS infrastructure network that connects CO 2 sources (oil sands extraction and processing) with CO 2 storage reservoirs (acid gas storage reservoirs) using a dedicated CO 2 pipeline network. Each scenario is analyzed under a range of uncertain storage estimates, and infrastructure performance is assessed and quantified in terms of the cost to build additional infrastructure to store all CO 2. We also include the role of stranded CO 2, CO 2 that a source was expecting to capture but cannot due to substandard performance in the transport and storage infrastructure. 
Results show that the cost of getting the original infrastructure design wrong is significant and that comprehensive planning will be required to ensure that CCS becomes a successful climate mitigation technology. Here, we show that the concept of stranded CO 2 can transform a seemingly high-performing infrastructure design into the worst-case scenario.
Implementation uncertainty when using recreational hunting to manage carnivores
Bischof, Richard; Nilsen, Erlend B; Brøseth, Henrik; Männil, Peep; Ozoliņš, Jānis; Linnell, John D C; Bode, Michael
2012-01-01
1. Wildlife managers often rely on resource users, such as recreational or commercial hunters, to achieve management goals. The use of hunters to control wildlife populations is especially common for predators and ungulates, but managers cannot assume that hunters will always fill annual quotas set by the authorities. It has been advocated that resource management models should account for uncertainty in how harvest rules are realized, requiring that this implementation uncertainty be estimated. 2. We used a survival analysis framework and long-term harvest data from large carnivore management systems in three countries (Estonia, Latvia and Norway) involving four species (brown bear, grey wolf, Eurasian lynx and wolverine) to estimate the performance of hunters with respect to harvest goals set by managers. 3. Variation in hunter quota-filling performance was substantial, ranging from 40% for wolverine in Norway to nearly 100% for lynx in Latvia. Seasonal and regional variation was also high within country–species pairs. We detected a positive relationship between the instantaneous potential to fill a quota slot and the relative availability of the target species for both wolverine and lynx in Norway. 4. Survivor curves and hazards – with survival time measured as the time from the start of a season until a quota slot is filled – can indicate the extent to which managers can influence harvest through adjustments of season duration and quota limits. 5. Synthesis and applications. We investigated seven systems where authorities use recreational hunting to manage large carnivore populations. The variation and magnitude of deviation from harvest goals was substantial, underlining the need to incorporate implementation uncertainty into resource management models and decision-making. We illustrate how survival analysis can be used by managers to estimate the performance of resource users with respect to achieving harvest goals set by managers. 
The findings in this study come at an opportune time given the growing popularity of management strategy evaluation (MSE) models in fisheries and a push towards incorporating MSE into terrestrial harvest management. PMID:23197878
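The survival-analysis framing (time from season start until a quota slot is filled, with unfilled slots censored at season end) can be sketched with a hand-rolled Kaplan-Meier estimator; the quota data below are hypothetical, not from the study:

```python
# Kaplan-Meier sketch of "quota survival": time from season start until
# a quota slot is filled; unfilled slots are censored at season end.
def kaplan_meier(times, filled):
    """Return [(t, S(t))] at event times; filled=False marks censoring."""
    s, out = 1.0, []
    at_risk = len(times)
    for t, ev in sorted(zip(times, filled)):
        if ev:                        # quota slot filled at time t
            s *= (at_risk - 1) / at_risk
            out.append((t, s))
        at_risk -= 1                  # filled or censored both leave risk set
    return out

# days until each of 8 quota slots was filled; two unfilled at day 90
times  = [12, 20, 33, 41, 55, 70, 90, 90]
filled = [True, True, True, True, True, True, False, False]
curve = kaplan_meier(times, filled)
print(curve[-1])                      # survival after the last fill event
```

The hazard implied by such a curve is what lets a manager gauge how much extending the season or raising the quota would change realized harvest.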
Using heat as a tracer to estimate spatially distributed mean residence times in the hyporheic zone
NASA Astrophysics Data System (ADS)
Naranjo, R. C.; Pohll, G. M.; Stone, M. C.; Niswonger, R. G.; McKay, W. A.
2013-12-01
Biogeochemical reactions that occur in the hyporheic zone are highly dependent on the time solutes are in contact with riverbed sediments. In this investigation, we developed a two-dimensional longitudinal flow and solute transport model to estimate the spatial distribution of mean residence time in the hyporheic zone along a riffle-pool sequence to gain a better understanding of nitrogen reactions. A flow and transport model was developed to estimate spatially distributed mean residence times and was calibrated using observations of temperature and pressure. The approach used in this investigation accounts for the mixing of ages given advection and dispersion. Uncertainty of flow and transport parameters was evaluated using standard Monte Carlo analysis and the generalized likelihood uncertainty estimation method. Results of parameter estimation indicate the presence of a low-permeability zone in the riffle area that induced horizontal flow at shallow depth within the riffle area. This establishes shallow and localized flow paths and limits deep vertical exchange. From the optimal model, mean residence times were found to be relatively long (9-40 days). The uncertainty of hydraulic conductivity resulted in a mean interquartile range of 13 days across all piezometers and was reduced by 24% with the inclusion of temperature and pressure observations. To a lesser extent, uncertainty in streambed porosity and dispersivity resulted in mean interquartile ranges of 2.2 and 4.7 days, respectively. Alternative conceptual models demonstrate the importance of accounting for the spatial distribution of hydraulic conductivity in simulating mean residence times in a riffle-pool sequence. It is demonstrated that spatially variable mean residence time beneath a riffle-pool system does not conform to simple conceptual models of hyporheic flow through a riffle-pool sequence. 
Rather, the mixing behavior between the river and the hyporheic flow are largely controlled by layered heterogeneity and anisotropy of the subsurface.
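The generalized likelihood uncertainty estimation (GLUE) step can be sketched as follows. The toy forward model linking hydraulic conductivity to residence time and temperature, the prior range, and the informal likelihood are all illustrative assumptions, not the study's calibrated model:

```python
# GLUE-style sketch: Monte Carlo sampling of hydraulic conductivity,
# an informal likelihood from fit to a synthetic temperature
# observation, and likelihood-weighted quantiles of residence time.
import random

random.seed(1)
obs_temp = 14.0                               # synthetic observation (deg C)

def model(k):
    """Toy forward model: residence time (days) and simulated temperature."""
    residence = 400.0 / k                     # assumed inverse relation
    sim_temp = 12.0 + 0.2 * residence         # assumed linear response
    return residence, sim_temp

samples = []
for _ in range(5000):
    k = random.uniform(10.0, 80.0)            # m/day, assumed prior range
    residence, sim = model(k)
    like = 1.0 / (1e-6 + (sim - obs_temp) ** 2)   # informal likelihood
    samples.append((like, residence))

# likelihood-weighted interquartile range of predicted residence time
samples.sort(key=lambda s: s[1])
total = sum(l for l, _ in samples)
cum, q25, q75 = 0.0, None, None
for like, res in samples:
    cum += like / total
    if q25 is None and cum >= 0.25:
        q25 = res
    if q75 is None and cum >= 0.75:
        q75 = res
print(round(q25, 1), round(q75, 1))
```

Adding more observation types (temperature and pressure, as in the study) sharpens the likelihood and narrows the weighted interquartile range.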
Chapter 8: Uncertainty assessment for quantifying greenhouse gas sources and sinks
Jay Breidt; Stephen M. Ogle; Wendy Powers; Coeli Hoover
2014-01-01
Quantifying the uncertainty of greenhouse gas (GHG) emissions and reductions from agriculture and forestry practices is an important aspect of decision-making for farmers, ranchers and forest landowners as the uncertainty range for each GHG estimate communicates our level of confidence that the estimate reflects the actual balance of GHG exchange between...
Uncertainties in key elements of emissions and meteorology inputs to air quality models (AQMs) can range from 50 to 100% with some areas of emissions uncertainty even higher (Russell and Dennis, 2000). Uncertainties in the chemical mechanisms are thought to be smaller (Russell an...
Analyzing extreme sea levels for broad-scale impact and adaptation studies
NASA Astrophysics Data System (ADS)
Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Dangendorf, S.; Hinkel, J.; Slangen, A.
2017-12-01
Coastal impact and adaptation assessments require detailed knowledge on extreme sea levels (ESL), because increasing damage due to extreme events is one of the major consequences of sea-level rise (SLR) and climate change. Over the last few decades, substantial research efforts have been directed towards improved understanding of past and future SLR; different scenarios were developed with process-based or semi-empirical models and used for coastal impact studies at various temporal and spatial scales to guide coastal management and adaptation efforts. Uncertainties in future SLR are typically accounted for by analyzing the impacts associated with a range of scenarios and model ensembles. ESL distributions are then displaced vertically according to the SLR scenarios under the inherent assumption that we have perfect knowledge on the statistics of extremes. However, there is still a limited understanding of present-day ESL, which is largely ignored in most impact and adaptation analyses. The two key uncertainties stem from: (1) numerical models that are used to generate long time series of storm surge water levels, and (2) statistical models used for determining present-day ESL exceedance probabilities. There is no universally accepted approach to obtain such values for broad-scale flood risk assessments, and while substantial research has explored SLR uncertainties, we quantify, for the first time globally, key uncertainties in ESL estimates. We find that contemporary ESL uncertainties exceed those from SLR projections and, assuming that we meet the Paris agreement, the projected SLR itself by the end of the century. Our results highlight the necessity to further improve our understanding of uncertainties in ESL estimates through (1) continued improvement of numerical and statistical models to simulate and analyze coastal water levels and (2) exploitation of the rich observational database and continued data archeology to obtain longer time series and remove model bias. 
Finally, ESL uncertainties need to be integrated with SLR uncertainties. Otherwise, important improvements in providing more robust SLR projections are of less benefit for broad-scale impact and adaptation studies and decision processes.
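The statistical-model uncertainty discussed above enters at the extreme-value fitting stage. A minimal sketch, assuming a Gumbel model fitted by the method of moments to synthetic annual maxima (real ESL analyses typically use GEV or peaks-over-threshold fits):

```python
# Gumbel fit to annual maximum sea levels and a 100-year return level.
# Data are synthetic; method of moments is used for simplicity.
import math, random, statistics

def gumbel_fit(annual_maxima):
    """Method-of-moments Gumbel parameters (location mu, scale beta)."""
    mean = statistics.fmean(annual_maxima)
    std = statistics.stdev(annual_maxima)
    beta = std * math.sqrt(6.0) / math.pi
    mu = mean - 0.5772 * beta        # Euler-Mascheroni constant
    return mu, beta

def return_level(mu, beta, T):
    """Level exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

random.seed(7)
# synthetic annual maxima (m), drawn from Gumbel(mu=2.0, beta=0.3)
maxima = [2.0 - 0.3 * math.log(-math.log(random.random())) for _ in range(60)]
mu, beta = gumbel_fit(maxima)
rl100 = return_level(mu, beta, 100)
print(rl100 > mu)                    # 100-yr level exceeds the location
```

The spread in `rl100` across resampled records of this length is one concrete face of the "statistical model" uncertainty the abstract refers to.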
NASA Astrophysics Data System (ADS)
Leblanc, Thierry; Sica, Robert J.; van Gijsel, Joanna A. E.; Haefele, Alexander; Payen, Guillaume; Liberti, Gianluigi
2016-08-01
A standardized approach for the definition, propagation, and reporting of uncertainty in the temperature lidar data products contributing to the Network for the Detection for Atmospheric Composition Change (NDACC) database is proposed. One important aspect of the proposed approach is the ability to propagate all independent uncertainty components in parallel through the data processing chain. The individual uncertainty components are then combined together at the very last stage of processing to form the temperature combined standard uncertainty. The identified uncertainty sources comprise major components such as signal detection, saturation correction, background noise extraction, temperature tie-on at the top of the profile, and absorption by ozone if working in the visible spectrum, as well as other components such as molecular extinction, the acceleration of gravity, and the molecular mass of air, whose magnitudes depend on the instrument, data processing algorithm, and altitude range of interest. The expression of the individual uncertainty components and their step-by-step propagation through the temperature data processing chain are described in detail, taking into account the effect of vertical filtering and the merging of multiple channels. All sources of uncertainty except detection noise imply correlated terms in the vertical dimension, which means that covariance terms must be taken into account when vertical filtering is applied and when temperature is integrated from the top of the profile. Quantitatively, the uncertainty budget is presented in a generic form (i.e., as a function of instrument performance and wavelength), so that any NDACC temperature lidar investigator can easily estimate the expected impact of individual uncertainty components in the case of their own instrument. 
Using this standardized approach, an example of uncertainty budget is provided for the Jet Propulsion Laboratory (JPL) lidar at Mauna Loa Observatory, Hawai'i, which is typical of the NDACC temperature lidars transmitting at 355 nm. The combined temperature uncertainty ranges between 0.1 and 1 K below 60 km, with detection noise, saturation correction, and molecular extinction correction being the three dominant sources of uncertainty. Above 60 km and up to 10 km below the top of the profile, the total uncertainty increases exponentially from 1 to 10 K due to the combined effect of random noise and temperature tie-on. In the top 10 km of the profile, the accuracy of the profile mainly depends on that of the tie-on temperature. All other uncertainty components remain below 0.1 K throughout the entire profile (15-90 km), except the background noise correction uncertainty, which peaks around 0.3-0.5 K. It should be kept in mind that these quantitative estimates may be very different for other lidar instruments, depending on their altitude range and the wavelengths used.
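The central design choice, propagating components independently and combining them in quadrature only at the last stage, and the differing effect of vertical filtering on correlated versus uncorrelated components, can be sketched as follows (component magnitudes are illustrative, not the JPL budget):

```python
# Independent uncertainty components combined in quadrature at the end;
# vertical smoothing reduces uncorrelated (detection-noise) uncertainty
# but not a fully correlated component. Numbers are illustrative.
import math

def combine(components):
    """Combined standard uncertainty from independent components."""
    return math.sqrt(sum(u * u for u in components))

def smoothed_uncertainty(u, n, correlated):
    """1-sigma uncertainty of an n-point boxcar average of one component."""
    if correlated:
        return u                # fully correlated: no reduction
    return u / math.sqrt(n)     # uncorrelated: reduces as 1/sqrt(n)

u_noise = 0.8   # K, detection noise (uncorrelated point to point)
u_tieon = 0.5   # K, tie-on temperature (correlated in the vertical)
raw = combine([u_noise, u_tieon])
filt = combine([smoothed_uncertainty(u_noise, 9, False),
                smoothed_uncertainty(u_tieon, 9, True)])
print(round(raw, 3), round(filt, 3))   # → 0.943 0.567
```

Treating the tie-on term as uncorrelated would wrongly shrink it by 1/sqrt(n) too, which is exactly the covariance bookkeeping the standardized approach makes explicit.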
NASA Astrophysics Data System (ADS)
Yoo, Seung Hoon; Son, Jae Man; Yoon, Myonggeun; Park, Sung Yong; Shin, Dongho; Min, Byung Jun
2018-06-01
A moving phantom is manufactured to mimic a lung model and study the dose uncertainty arising from CT number-stopping power conversion and dose calculation in the soft tissue, light lung tissue and bone regions during passive proton irradiation with different compensator smearing values. The phantom is scanned with a CT system, and a proton beam irradiation plan is carried out with the use of a treatment planning system (Eclipse). In the case of the moving phantom, an RPM system is used for respiratory gating. The uncertainties in the dose distribution between the measured data and the planned data are investigated by a gamma analysis with 3%-3 mm acceptance criteria. To investigate the smearing effect, three smearing values (0.3 cm, 0.7 cm, 1.2 cm) are used for both the fixed and moving phantom systems. For both fixed and moving phantoms, uncertainties in the light lung tissue are more severe than those in the soft tissue region, where the dose uncertainties are within clinically tolerable ranges. As the smearing value increases, the uncertainty in the proton dose distribution decreases.
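The gamma analysis with 3%/3 mm criteria can be sketched in one dimension; the dose profiles below are synthetic, and the brute-force global-normalization search is an illustrative assumption rather than the clinical software's exact algorithm:

```python
# 1-D gamma analysis sketch (3% dose difference, 3 mm distance-to-
# agreement) comparing measured and planned dose profiles.
import math

def gamma_index(measured, planned, spacing_mm, dd=0.03, dta_mm=3.0):
    """Per-point gamma; a point passes when gamma <= 1."""
    d_max = max(planned)                       # global normalization
    gammas = []
    for i, dm in enumerate(measured):
        best = float("inf")
        for j, dp in enumerate(planned):
            dist = (i - j) * spacing_mm
            dose_diff = (dm - dp) / (dd * d_max)
            best = min(best, (dist / dta_mm) ** 2 + dose_diff ** 2)
        gammas.append(math.sqrt(best))
    return gammas

planned  = [0.2, 0.5, 1.0, 1.0, 0.5, 0.2]     # relative dose, synthetic
measured = [0.21, 0.52, 0.99, 1.01, 0.49, 0.2]
g = gamma_index(measured, planned, spacing_mm=1.0)
pass_rate = sum(1 for x in g if x <= 1.0) / len(g)
print(pass_rate)                               # → 1.0
```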
Brzozek, Christopher; Benke, Kurt K; Zeleke, Berihun M; Abramson, Michael J; Benke, Geza
2018-03-26
Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship.
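The suggested Monte Carlo quantification of epistemic error can be sketched by propagating an assumed systematic (reducible) exposure-recording bias alongside random error; all error magnitudes here are hypothetical:

```python
# Monte Carlo sketch: an assumed epistemic (reducible) multiplicative
# bias in recorded exposure propagated alongside random error.
import random, statistics

random.seed(2)
true_exposure = 100.0                    # call-minutes/week, illustrative

def simulate(n, epistemic_bias_range=(0.8, 1.2), random_sd=10.0):
    draws = []
    for _ in range(n):
        bias = random.uniform(*epistemic_bias_range)   # reducible error
        noise = random.gauss(0.0, random_sd)           # statistical error
        draws.append(true_exposure * bias + noise)
    return draws

draws = simulate(20_000)
mean = statistics.fmean(draws)
sd = statistics.stdev(draws)
print(round(mean), round(sd))
```

Comparing the spread with and without the bias term separates the epistemic contribution from purely statistical variability, which is the kind of decomposition the review recommends.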
NASA Astrophysics Data System (ADS)
Pierre Auger Collaboration; Abreu, P.; Aglietta, M.; Ahlers, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Alves Batista, R.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Antiči'c, T.; Aramo, C.; Arganda, E.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Badescu, A. M.; Balzer, M.; Barber, K. B.; Barbosa, A. F.; Bardenet, R.; Barroso, S. L. C.; Baughman, B.; Bäuml, J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellétoile, A.; Bellido, J. A.; BenZvi, S.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Brogueira, P.; Brown, W. C.; Bruijn, R.; Buchholz, P.; Bueno, A.; Buroker, L.; Burton, R. E.; Caballero-Mora, K. S.; Caccianiga, B.; Caramete, L.; Caruso, R.; Castellina, A.; Catalano, O.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Cheng, S. H.; Chiavassa, A.; Chinellato, J. A.; Chirinos Diaz, J.; Chudoba, J.; Cilmo, M.; Clay, R. W.; Cocciolo, G.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cook, H.; Cooper, M. J.; Coppens, J.; Cordier, A.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Dallier, R.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; De Donato, C.; de Jong, S. J.; De La Vega, G.; de Mello Junior, W. J. M.; de Mello Neto, J. R. T.; De Mitri, I.; de Souza, V.; de Vries, K. D.; del Peral, L.; del Río, M.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Díaz Castro, M. L.; Diep, P. N.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; Ebr, J.; Engel, R.; Erdmann, M.; Escobar, C. 
O.; Espadanal, J.; Etchegoyen, A.; Facal San Luis, P.; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipčič, A.; Fliescher, S.; Fracchiolla, C. E.; Fraenkel, E. D.; Fratu, O.; Fröhlich, U.; Fuchs, B.; Gaior, R.; Gamarra, R. F.; Gambetta, S.; García, B.; Garcia Roca, S. T.; Garcia-Gamez, D.; Garcia-Pinto, D.; Garilli, G.; Gascon Bravo, A.; Gemmeke, H.; Ghia, P. L.; Giller, M.; Gitto, J.; Glass, H.; Gold, M. S.; Golup, G.; Gomez Albarracin, F.; Gómez Berisso, M.; Gómez Vitale, P. F.; Gonçalves, P.; Gonzalez, J. G.; Gookin, B.; Gorgi, A.; Gouffon, P.; Grashorn, E.; Grebe, S.; Griffith, N.; Grillo, A. F.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Hollon, N.; Holmes, V. C.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huber, D.; Huege, T.; Insolia, A.; Ionita, F.; Italiano, A.; Jansen, S.; Jarne, C.; Jiraskova, S.; Josebachuili, M.; Kadija, K.; Kampert, K. H.; Karhan, P.; Kasper, P.; Katkov, I.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kelley, J. L.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapp, J.; Koang, D.-H.; Kotera, K.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuempel, D.; Kulbartz, J. K.; Kunka, N.; La Rosa, G.; Lachaud, C.; LaHurd, D.; Latronico, L.; Lauer, R.; Lautridou, P.; Le Coz, S.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Lyberis, H.; Maccarone, M. C.; Macolino, C.; Maldera, S.; Maller, J.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, J.; Marin, V.; Maris, I. C.; Marquez Falcon, H. R.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Martraire, D.; Masías Meza, J. J.; Mathes, H. J.; Matthews, J.; Matthews, J. 
A. J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mazur, P. O.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Mertsch, P.; Messina, S.; Meurer, C.; Meyhandan, R.; Mi'canovi'c, S.; Micheletti, M. I.; Minaya, I. A.; Miramonti, L.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morales, B.; Morello, C.; Moreno, E.; Moreno, J. C.; Mostafá, M.; Moura, C. A.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Nhung, P. T.; Niechciol, M.; Niemietz, L.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nožka, L.; Oehlschläger, J.; Olinto, A.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Parente, G.; Parizot, E.; Parra, A.; Pastor, S.; Paul, T.; Pech, M.; Peķala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Petrera, S.; Petrolini, A.; Petrov, Y.; Pfendner, C.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Ponce, V. H.; Pontz, M.; Porcelli, A.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rivera, H.; Rizi, V.; Roberts, J.; Rodrigues de Carvalho, W.; Rodriguez, G.; Rodriguez Cabo, I.; Rodriguez Martino, J.; Rodriguez Rojo, J.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Rouillé-d'Orfeuil, B.; Roulet, E.; Rovero, A. C.; Rühle, C.; Saftoiu, A.; Salamida, F.; Salazar, H.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarkar, S.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Scholten, O.; Schoorlemmer, H.; Schovancova, J.; Schovánek, P.; Schröder, F.; Schuster, D.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Silva Lopez, H. 
H.; Sima, O.; 'Smiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Srivastava, Y. N.; Stanic, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Šuša, T.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Tapia, A.; Tartare, M.; Taşcău, O.; Tcaciuc, R.; Thao, N. T.; Thomas, D.; Tiffenberg, J.; Timmermans, C.; Tkaczyk, W.; Todero Peixoto, C. J.; Toma, G.; Tomankova, L.; Tomé, B.; Tonachini, A.; Torralba Elipe, G.; Travnicek, P.; Tridapalli, D. B.; Tristram, G.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van den Berg, A. M.; van Velzen, S.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Vicha, J.; Videla, M.; Villaseñor, L.; Wahlberg, H.; Wahrlich, P.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Westerhoff, S.; Whelan, B. J.; Widom, A.; Wieczorek, G.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Wommer, M.; Wundheiler, B.; Yamamoto, T.; Yapici, T.; Younk, P.; Yuan, G.; Yushkov, A.; Zamorano Garcia, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Zhou, J.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.
2013-01-01
A thorough search for large-scale anisotropies in the distribution of arrival directions of cosmic rays detected above 10¹⁸ eV at the Pierre Auger Observatory is reported. For the first time, these large-scale anisotropy searches are performed as a function of both the right ascension and the declination and expressed in terms of dipole and quadrupole moments. Within the systematic uncertainties, no significant deviation from isotropy is revealed. Upper limits on dipole and quadrupole amplitudes are derived under the hypothesis that any cosmic ray anisotropy is dominated by such moments in this energy range. These upper limits provide constraints on the production of cosmic rays above 10¹⁸ eV, since they allow us to challenge an origin from stationary galactic sources densely distributed in the galactic disk and emitting predominantly light particles in all directions.
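A first-harmonic (Rayleigh) analysis in right ascension is the classic ingredient of such dipole searches and is easy to sketch; the events below are drawn isotropically, so the recovered amplitude is consistent with zero (the full Auger analysis additionally uses declination and corrects for exposure):

```python
# Rayleigh first-harmonic sketch: amplitude and phase of a dipole
# modulation in right ascension from event arrival directions.
import math, random

def first_harmonic(ras_rad):
    """Rayleigh amplitude r and phase (radians) of the first harmonic."""
    n = len(ras_rad)
    a = (2.0 / n) * sum(math.cos(ra) for ra in ras_rad)
    b = (2.0 / n) * sum(math.sin(ra) for ra in ras_rad)
    return math.hypot(a, b), math.atan2(b, a)

random.seed(4)
# isotropic synthetic events: uniform in right ascension
events = [random.uniform(0.0, 2.0 * math.pi) for _ in range(20_000)]
r, phase = first_harmonic(events)
# under isotropy the expected amplitude is of order 2/sqrt(n)
print(r < 5.0 * 2.0 / math.sqrt(len(events)))   # expected True here
```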
NASA Astrophysics Data System (ADS)
Bobovnik, G.; Kutin, J.; Bajsić, I.
2016-08-01
This paper deals with an uncertainty analysis of gas flow measurements using a compact, high-speed, clearance-sealed realization of a piston prover. A detailed methodology for the uncertainty analysis, covering the components due to the gas density, dimensional and time measurements, the leakage flow, the density correction factor and the repeatability, is presented. The paper also deals with the selection of the isothermal and adiabatic measurement models, the treatment of the leakage flow and discusses the need for averaging multiple consecutive readings of the piston prover. The analysis is prepared for the flow range (50 000:1) covered by the three interchangeable flow cells. The results show that using the adiabatic measurement model and averaging the multiple readings, the estimated expanded measurement uncertainty of the gas mass flow rate is less than 0.15% in the flow range above 0.012 g min⁻¹, whereas it increases for lower mass flow rates due to the leakage flow related effects. At the upper end of the measuring range, using the adiabatic instead of the isothermal measurement model, as well as averaging multiple readings, proves important.
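The combination step of such an uncertainty analysis can be sketched GUM-style: independent relative standard uncertainties added in quadrature and expanded with coverage factor k = 2 (about 95% coverage). The component values below are illustrative, not the paper's actual budget:

```python
# GUM-style budget sketch for a piston prover: independent relative
# standard uncertainties combined in quadrature, expanded with k = 2.
import math

def expanded_uncertainty(rel_components, k=2.0):
    """Relative expanded uncertainty (k=2, ~95% coverage)."""
    return k * math.sqrt(sum(u * u for u in rel_components))

budget = {                         # illustrative relative components
    "collected volume": 0.04e-2,   # dimensional measurement
    "collection time":  0.01e-2,
    "gas density":      0.03e-2,
    "leakage flow":     0.02e-2,
    "repeatability":    0.04e-2,
}
u_rel = expanded_uncertainty(budget.values())
print(round(100 * u_rel, 3), u_rel < 0.15e-2)   # → 0.136 True
```

At low flow rates the leakage component would grow and dominate the quadrature sum, reproducing the behavior the abstract describes.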
Extending medium-range predictability of extreme hydrological events in Europe
Lavers, David A.; Pappenberger, Florian; Zsoter, Ervin
2014-01-01
Widespread flooding occurred across northwest Europe during the winter of 2013/14, resulting in large socioeconomic damages. In the historical record, extreme hydrological events have been connected with intense water vapour transport. Here we show that water vapour transport has higher medium-range predictability compared with precipitation in the winter 2013/14 forecasts from the European Centre for Medium-Range Weather Forecasts. Applying the concept of potential predictability, the transport is found to extend the forecast horizon by 3 days in some European regions. Our results suggest that the breakdown in precipitation predictability is due to uncertainty in the horizontal mass convergence location, an essential mechanism for precipitation generation. Furthermore, the predictability increases with larger spatial averages. Given the strong association between precipitation and water vapour transport, especially for extreme events, we conclude that the higher transport predictability could be used as a model diagnostic to increase preparedness for extreme hydrological events. PMID:25387309
NASA Astrophysics Data System (ADS)
Anderegg, L. D. L.; Hillerislambers, J.
2016-12-01
Accurate prediction of climatically-driven range shifts requires knowledge of the dominant forces constraining species ranges, because climatically controlled range boundaries will likely behave differently from biotically controlled range boundaries in a changing climate. Yet the roles of climatic constraints (due to species physiological tolerance) versus biotic constraints (caused by species interactions) on geographic ranges are largely unknown, infusing large uncertainty into projections of future range shifts. Plant species ranges across strong climatic gradients such as elevation gradients are often assumed to represent a tradeoff between climatic constraints on the harsh side of the range and biotic constraints (often competitive constraints) on the climatically benign side. To test this assumption, we collected tree cores from across the elevational range of the three dominant tree species inhabiting each of three climatically disparate mountain slopes and assessed climatic versus competitive constraints on growth at each species' range margins. Across all species and mountains, we found mixed evidence for a tradeoff between climatic and competitive growth constraints. Some individual species did show an apparent trade-off between a climatic constraint at one range margin and a competitive constraint at the other. However, even these simple elevation gradients resulted in complex interactions between temperature, moisture, and competitive constraints, such that a climate-competition tradeoff did not explain range constraints for many species. Our results suggest that tree species can be constrained by a simple trade-off between climate and competition, but that the intricacies of real-world climate gradients complicate the application of this theory even in apparently harsh environments, such as near high-elevation tree line.
NASA Astrophysics Data System (ADS)
Booth, B. B. B.; Bernie, D.; McNeall, D.; Hawkins, E.; Caesar, J.; Boulton, C.; Friedlingstein, P.; Sexton, D.
2012-09-01
We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from emission-driven rather than concentration-driven perturbed parameter ensembles of a Global Climate Model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, land carbon cycle, ocean physics and aerosol sulphur cycle processes. We find broader ranges of projected temperature responses arising when considering emission-driven rather than concentration-driven simulations (with 10-90 percentile ranges of 1.7 K for the aggressive mitigation scenario up to 3.9 K for the high-end business-as-usual scenario). A small minority of simulations resulting from combinations of strong atmospheric feedbacks and carbon cycle responses show temperature increases in excess of 9 K (RCP8.5), and even under aggressive mitigation (RCP2.6) temperatures in excess of 4 K. While the simulations point to much larger temperature ranges for emission-driven experiments, they do not change existing expectations (based on previous concentration-driven experiments) on the timescales over which different sources of uncertainty are important. The new simulations sample a range of future atmospheric concentrations for each emission scenario. In the cases of both SRES A1B and the Representative Concentration Pathways (RCPs), the concentration pathways used to drive GCM ensembles lie towards the lower end of our simulated distribution. This design decision (a legacy of previous assessments) is likely to lead concentration-driven experiments to under-sample strong feedback responses. Our ensemble of emission-driven simulations spans the global temperature response of other multi-model frameworks except at the low end, where combinations of low climate sensitivity and low carbon cycle feedbacks lead to responses outside our ensemble range.
The ensemble simulates a number of high-end responses which lie above the CMIP5 carbon cycle range. These high-end simulations can be linked to sampling a number of stronger carbon cycle feedbacks and to sampling climate sensitivities above 4.5 K. This latter aspect highlights the priority of identifying real-world constraints on climate sensitivity which, if achieved, would reduce the upper bound of projected global mean temperature change. The ensemble of simulations presented here provides a framework to explore relationships between present-day observables and future changes, while the large spread of projected changes highlights the ongoing need for such work.
Forecasting the Cumulative Impacts of Dams on the Mekong Delta: Certainties and Uncertainties
NASA Astrophysics Data System (ADS)
Kondolf, G. M.; Rubin, Z.; Schmitt, R. J. P.
2016-12-01
The Mekong River basin is undergoing rapid hydroelectric development, with 7 large mainstem dams on the upper Mekong (Lancang) River in China and 133 dams planned for the Lower Mekong River basin (Laos, Cambodia, Thailand, Vietnam), 11 of which are on the mainstem. Prior analyses have shown that all these dams, built as initially proposed, would trap 96% of the natural sediment load to the Mekong Delta. Such a reduction in sediment supply would compromise the sustainability of the delta itself, but there are many uncertainties in the timing and pattern of land loss. The river will first erode in-channel sediment deposits, partly compensating for upstream sediment trapping until these deposits are exhausted. Other complicating factors include basin-wide accelerated land-use change, road construction, instream sand mining, dyking-off of floodplains, changing climate, accelerated subsidence from groundwater extraction, and sea level rise. It is certain that the Mekong Delta will undergo large changes in the coming decades, changes that will threaten its very existence. However, the multiplicity of compounding drivers and the lack of good data lead to large uncertainties in forecasting changes in the sediment balance on the scale of a very large river network. We quantify uncertainties in available data and consider changes due to additional, poorly quantified drivers (e.g., road construction), putting these drivers in perspective with the overall sediment budget. We developed a set of most-likely scenarios and their implications for the delta's future. Uncertainties are large, but there are certainties about the delta's future. If its sediment supply is nearly completely cut off (as would be the case with `business-as-usual' ongoing dam construction and sediment extraction), the Delta is certainly doomed to disappear in the face of rising seas, subsidence, and coastal erosion. The uncertainty is only when and how precisely the loss will progress.
Local Analysis Approach for Short Wavelength Geopotential Variations
NASA Astrophysics Data System (ADS)
Bender, P. L.
2009-12-01
The value of global spherical harmonic analyses for determining 15-day to 30-day changes in the Earth's gravity field has been demonstrated extensively using data from the GRACE mission and previous missions. However, additional useful information appears to be obtainable from local analyses of the data. A number of such analyses have been carried out by various groups. In the energy approximation, changes in the geopotential height at satellite altitude can be determined from the post-fit changes in the satellite separation during individual one-revolution arcs of data from a GRACE-type pair of satellites in a given orbit. For a particular region, it is assumed that short-wavelength spatial variations for the arcs crossing that region during a time T of interest would be used to determine corrections to the spherical harmonic results. The main issue in considering higher measurement accuracy in future missions is how much improvement in spatial resolution can be achieved. For this, the shortest wavelengths that can be determined are the most important. And, while the longer-wavelength variations are affected by mass distribution changes over much of the globe, the shorter-wavelength ones hopefully will be determined mainly by more local changes in the mass distribution. Future missions are expected to have much higher accuracy for measuring changes in the satellite separation than GRACE. However, how large an improvement in the derived results in hydrology will be achieved is still very much a matter of study, particularly because of the effects of uncertainty in the time variations of the atmospheric and oceanic mass distributions. To be specific, it will be assumed that improving the spatial resolution in continental regions away from the coastlines is the objective, and that the satellite altitude is in the range of roughly 290 to 360 km, made possible for long missions by drag-free operation.
The advantages of putting together the short-wavelength results from different arcs crossing the region can be seen most easily for an orbit with moderate inclination, such as 50 to 65 deg., so that the crossing angle between south-to-north (S-N) and N-S passes is fairly large over most regions well away from the poles. In that case, after filtering to pass the shorter wavelengths, the results for a given time interval can be combined to give the short-wavelength W-E variations in the geopotential efficiently. For continents with extensive meteorological measurements available, like Europe and North America, a very rough guess at the surface mass density variation uncertainties is about 3 kg/m^2. This is based on the apparent accuracy of carefully calibrated surface pressure measurements. If a substantial part of the resulting uncertainties in the geopotential height at satellite altitude is at wavelengths less than about 1,500 km, they will dominate the measurement uncertainty at short spatial wavelengths for a GRACE-type mission with laser interferometry. This would be the case even if the uncertainty in the atmospheric and oceanic mass distribution at large distances has a fairly small effect. However, the geopotential accuracy would still be substantially better than that achievable with a microwave ranging system.
Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2012-01-01
This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.
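The Bernstein-expansion step rests on the range-enclosure property of Bernstein coefficients: the value of a polynomial on the unit box is bracketed by the extrema of its Bernstein coefficients. A univariate sketch on [0, 1] (the paper works with multivariate polynomials and p-boxes, so this is only the core idea):

```python
from math import comb

def bernstein_range_bounds(coeffs):
    """Enclose the range of a univariate polynomial on [0, 1].

    coeffs[k] is the coefficient of x**k in the power basis.
    The Bernstein coefficients b satisfy
    min(b) <= p(x) <= max(b) for all x in [0, 1].
    """
    n = len(coeffs) - 1
    b = [sum(comb(i, k) / comb(n, k) * coeffs[k] for k in range(i + 1))
         for i in range(n + 1)]
    return min(b), max(b)
```

The enclosure is conservative but converges under degree elevation or subdivision, which is what lets the offset between the bounding interval and the true failure-probability range be tightened with extra computation.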
Hare, Jonathan A.; Wuenschel, Mark J.; Kimball, Matthew E.
2012-01-01
We couple a species range limit hypothesis with the output of an ensemble of general circulation models to project the poleward range limit of gray snapper. Using laboratory-derived thermal limits and statistical downscaling from IPCC AR4 general circulation models, we project that gray snapper will shift northwards; the magnitude of this shift is dependent on the magnitude of climate change. We also evaluate the uncertainty in our projection and find that statistical uncertainty associated with the experimentally-derived thermal limits is the largest contributor (∼ 65%) to overall quantified uncertainty. This finding argues for more experimental work aimed at understanding and parameterizing the effects of climate change and variability on marine species. PMID:23284974
NASA Technical Reports Server (NTRS)
Nghiem, S. V.; Li, Fuk K.; Lou, Shu-Hsiang; Neumann, Gregory; McIntosh, Robert E.; Carson, Steven C.; Carswell, James R.; Walsh, Edward J.; Donelan, Mark A.; Drennan, William M.
1995-01-01
Ocean radar backscatter in the presence of large waves is investigated using data acquired with the Jet Propulsion Laboratory NUSCAT radar at Ku band for horizontal and vertical polarizations and the University of Massachusetts CSCAT radar at C band for vertical polarization during the Surface Wave Dynamics Experiment. Off-nadir backscatter data of ocean surfaces were obtained in the presence of large waves with significant wave height up to 5.6 m. In moderate-wind cases, effects of large waves are not detectable within the measurement uncertainty and no noticeable correlation between backscatter coefficients and wave height is found. Under high-wave light-wind conditions, backscatter is enhanced significantly at large incidence angles with a weaker effect at small incidence angles. Backscatter coefficients in the wind speed range under consideration are compared with SASS-2 (Ku band), CMOD3-H1 (C band), and Plant's model results which confirm the experimental observations. Variations of the friction velocity, which can give rise to the observed backscatter behaviors in the presence of large waves, are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban
2018-05-01
We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of an FDEM computer simulation for a range of “training” settings with experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
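The emulator-based calibration workflow can be illustrated in miniature: fit a cheap surrogate to a handful of "training" runs, then weight a parameter grid by its agreement with a noisy observation. The stand-in simulator, noise level, and grid below are assumptions for illustration, not the FDEM/SHPB setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive simulator evaluated at "training" settings.
def simulator(theta):
    return 2.0 * theta + 0.5 * theta**2

theta_train = np.linspace(0.0, 4.0, 9)
y_train = simulator(theta_train)

# Cheap statistical emulator: a quadratic least-squares fit.
emu = np.polynomial.Polynomial.fit(theta_train, y_train, deg=2)

# Probabilistic calibration: weight a parameter grid by how well the
# emulator matches the noisy observation (Gaussian error assumed).
sigma = 0.05
y_obs = simulator(1.7) + rng.normal(0.0, sigma)
grid = np.linspace(0.0, 4.0, 401)
loglik = -0.5 * ((y_obs - emu(grid)) / sigma) ** 2
post = np.exp(loglik - loglik.max())
post /= post.sum()

theta_hat = grid[np.argmax(post)]                          # posterior mode
spread = np.sqrt(np.sum(post * (grid - theta_hat) ** 2))   # uncertainty
```

In the full method the emulator (typically a Gaussian process) also carries its own uncertainty, which is folded into the calibration distribution; this sketch omits that term.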
A model ensemble for projecting multi‐decadal coastal cliff retreat during the 21st century
Limber, Patrick; Barnard, Patrick; Vitousek, Sean; Erikson, Li
2018-01-01
Sea cliff retreat rates are expected to accelerate with rising sea levels during the 21st century. Here we develop an approach for a multi‐model ensemble that efficiently projects time‐averaged sea cliff retreat over multi‐decadal time scales and large (>50 km) spatial scales. The ensemble consists of five simple 1‐D models adapted from the literature that relate sea cliff retreat to wave impacts, sea level rise (SLR), historical cliff behavior, and cross‐shore profile geometry. Ensemble predictions are based on Monte Carlo simulations of each individual model, which account for the uncertainty of model parameters. The consensus of the individual models also weights uncertainty, such that uncertainty is greater when predictions from different models do not agree. A calibrated, but unvalidated, ensemble was applied to the 475 km‐long coastline of Southern California (USA), with 4 SLR scenarios of 0.5, 0.93, 1.5, and 2 m by 2100. Results suggest that future retreat rates could increase relative to mean historical rates by more than two‐fold for the higher SLR scenarios, causing an average total land loss of 19-41 m by 2100. However, model uncertainty ranges from ±5-15 m, reflecting the inherent difficulties of projecting cliff retreat over multiple decades. To enhance ensemble performance, future work could include weighting each model by its skill in matching observations in different morphological settings.
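The ensemble idea, Monte Carlo sampling of each simple model's parameters, then pooling so that both parameter uncertainty and inter-model disagreement widen the spread, can be sketched as follows. The two toy retreat rules and all parameter ranges below are hypothetical, not the paper's five literature models:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 1-D retreat rules (hypothetical forms): each maps SLR and a
# sampled parameter to a total retreat distance by 2100.
def model_a(slr, k):             # retreat scales with sqrt of SLR
    return k * np.sqrt(slr)

def model_b(slr, hist_rate, m):  # historical rate amplified by SLR
    return hist_rate * 85.0 * (1.0 + m * slr)

slr = 1.5      # metres of sea level rise by 2100 (one scenario)
n = 10_000     # Monte Carlo draws per model

# Sample each model's uncertain parameters.
ra = model_a(slr, rng.uniform(10.0, 20.0, n))
rb = model_b(slr, rng.uniform(0.1, 0.3, n), rng.uniform(0.2, 0.8, n))

# Pool the draws: the spread now folds in parameter uncertainty
# within each model plus disagreement between the models.
ensemble = np.concatenate([ra, rb])
mean_retreat = ensemble.mean()
spread = ensemble.std()
```

When the individual model distributions overlap, the pooled spread approaches the per-model spread; when they disagree, the between-model term inflates it, which is the consensus-weighting behavior described above.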
NASA Astrophysics Data System (ADS)
de Barros, Felipe P. J.; Rubin, Yoram; Maxwell, Reed M.
2009-06-01
Defining rational and effective hydrogeological data acquisition strategies is of crucial importance as such efforts are always resource limited. Usually, strategies are developed with the goal of reducing uncertainty, but less often they are developed in the context of their impacts on uncertainty. This paper presents an approach for determining site characterization needs on the basis of human health risk. The main challenge is in striking a balance between reduction in uncertainty in hydrogeological, behavioral, and physiological parameters. Striking this balance can provide clear guidance on setting priorities for data acquisition and for better estimating adverse health effects in humans. This paper addresses this challenge through theoretical developments and numerical simulation. A wide range of factors that affect site characterization needs are investigated, including the dimensions of the contaminant plume and additional length scales that characterize the transport problem, as well as the model of human health risk. The concept of comparative information yield curves is used for investigating the relative impact of hydrogeological and physiological parameters in risk. Results show that characterization needs are dependent on the ratios between flow and transport scales within a risk-driven approach. Additionally, the results indicate that human health risk becomes less sensitive to hydrogeological measurements for large plumes. This indicates that under near-ergodic conditions, uncertainty reduction in human health risk may benefit from better understanding of the physiological component as opposed to a more detailed hydrogeological characterization.
Pareto-optimal estimates that constrain mean California precipitation change
NASA Astrophysics Data System (ADS)
Langenbrunner, B.; Neelin, J. D.
2017-12-01
Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Coupled Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.
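The core of the multiobjective step is identifying non-dominated (Pareto-optimal) candidates across several error measures. A minimal dominance filter, standing in for the evolutionary search over subensembles:

```python
def pareto_optimal(candidates):
    """Return the non-dominated subset of candidates.

    Each candidate is (name, (e1, e2, e3)): error scores on three
    skill measures, lower is better. A candidate is dominated if
    another is no worse on every measure and strictly better on at
    least one.
    """
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    return [c for c in candidates
            if not any(dominates(o[1], c[1])
                       for o in candidates if o is not c)]
```

An exhaustive filter like this is O(n^2) and infeasible over all subensembles of a large archive, which is why an evolutionary algorithm is used to search the space in practice.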
Intercomparison of model response and internal variability across climate model ensembles
NASA Astrophysics Data System (ADS)
Kumar, Devashish; Ganguly, Auroop R.
2017-10-01
Characterization of climate uncertainty at regional scales over near-term planning horizons (0-30 years) is crucial for climate adaptation. Climate internal variability (CIV) dominates climate uncertainty over decadal prediction horizons at stakeholders' scales (regional to local). In the literature, CIV has been characterized indirectly using projections of climate change from multi-model ensembles (MME) instead of directly using projections from multiple initial-condition ensembles (MICE), primarily because an adequate number of initial-condition (IC) runs was not available for any climate model. Nevertheless, the recent availability of a significant number of IC runs from one climate model allows, for the first time, CIV to be characterized directly from climate model projections, and a sensitivity analysis to be performed to study the dominance of CIV compared to model response variability (MRV). Here, we measure relative agreement (a dimensionless number with values ranging between 0 and 1, inclusive; a high value indicates less variability and vice versa) among MME and MICE and find that CIV is lower than MRV for all projection time horizons and spatial resolutions for precipitation and temperature. However, CIV exhibits greater dominance over MRV for seasonal and annual mean precipitation at higher latitudes, where signals of climate change are expected to emerge sooner. Furthermore, precipitation exhibits large uncertainties and a rapid decline in relative agreement from global to continental, regional, or local scales for MICE compared to MME. The fractional contribution of uncertainty due to CIV is invariant for precipitation and decreases for temperature as lead time progresses towards the end of the century.
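The MICE-versus-MME comparison can be sketched numerically: spread across initial-condition runs of one model estimates CIV, while spread across one run per model folds in MRV. The agreement index below is a simple stand-in, not the paper's metric, and the synthetic ensembles are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic projections of one regional change (e.g. % precipitation):
# MICE: one model, many initial-condition runs -> spread reflects CIV.
# MME: one run from each of several models -> spread folds in MRV too.
mice = rng.normal(loc=5.0, scale=1.0, size=40)
mme = rng.normal(loc=5.0, scale=3.0, size=40)

def relative_agreement(runs):
    """Toy agreement index in (0, 1]: closer to 1 means the runs
    agree more (spread small relative to the mean signal)."""
    s = np.std(runs)
    return 1.0 / (1.0 + s / max(abs(np.mean(runs)), 1e-12))

civ_agreement = relative_agreement(mice)  # IC runs agree more here,
mrv_agreement = relative_agreement(mme)   # consistent with CIV < MRV
```

With real output the same comparison is repeated per variable, season, lead time, and spatial scale, which is how the latitude and scale dependence reported above is mapped out.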
NASA Astrophysics Data System (ADS)
Zheng, Minghua
Cool-season extratropical cyclones near the U.S. East Coast often have significant impacts on the safety, health, environment and economy of this most densely populated region. Hence it is of vital importance to forecast these high-impact winter storm events as accurately as possible by numerical weather prediction (NWP), including in the medium-range. Ensemble forecasts are appealing to operational forecasters when forecasting such events because they can provide an envelope of likely solutions to serve user communities. However, it is generally accepted that ensemble outputs are not used efficiently in NWS operations mainly due to the lack of simple and quantitative tools to communicate forecast uncertainties and ensemble verification to assess model errors and biases. Ensemble sensitivity analysis (ESA), which employs a linear correlation and regression between a chosen forecast metric and the forecast state vector, can be used to analyze the forecast uncertainty development for both short- and medium-range forecasts. The application of ESA to a high-impact winter storm in December 2010 demonstrated that the sensitivity signals based on different forecast metrics are robust. In particular, the ESA based on the leading two EOF PCs can separate sensitive regions associated with cyclone amplitude and intensity uncertainties, respectively. The sensitivity signals were verified using the leave-one-out cross validation (LOOCV) method based on a multi-model ensemble from CMC, ECMWF, and NCEP. The climatology of ensemble sensitivities for the leading two EOF PCs based on 3-day and 6-day forecasts of historical cyclone cases was presented. It was found that the EOF1 pattern often represents the intensity variations while the EOF2 pattern represents the track variations along west-southwest and east-northeast direction. 
For PC1, the upper-level trough associated with the East Coast cyclone and its downstream ridge are important to the forecast uncertainty in cyclone strength. The initial differences in forecasting the ridge along the west coast of North America impact the EOF1 pattern most. For PC2, it was shown that the shift of the tri-polar structure is most significantly related to the cyclone track forecasts. The EOF/fuzzy clustering tool was applied to diagnose the scenarios in operational ensemble forecasts of East Coast winter storms. It was shown that the clustering method could efficiently separate the forecast scenarios associated with East Coast storms based on the 90-member multi-model ensemble. A scenario-based ensemble verification method has been proposed and applied to examine the capability of different ensemble prediction systems (EPSs) in capturing the analysis scenarios for historical East Coast cyclone cases at lead times of 1-9 days. The results suggest that the NCEP model performs better in short-range forecasts in capturing the analysis scenario, although it is under-dispersed. The ECMWF ensemble shows the best performance in the medium range. The CMC model is found to show the smallest percentage of members in the analysis group and a relatively high missing rate, suggesting that it is less reliable at capturing the analysis scenario when compared with the other two EPSs. A combination of the NCEP and CMC models has been found to reduce the missing rate and improve the error-spread skill in medium- to extended-range forecasts. Based on the orthogonal features of the EOF patterns, the model errors for 1-6-day forecasts have been decomposed for the leading two EOF patterns. The results for error decomposition show that the NCEP model tends to better represent both EOF1 and EOF2 patterns, showing smaller intensity and displacement errors during days 1-3. The ECMWF model is found to have the smallest errors in both EOF1 and EOF2 patterns during days 4-6.
We have also found that East Coast cyclones in the ECMWF forecast tend to be towards the southwest of those in the other two models in representing the EOF2 pattern, which is associated with the southwest-northeast shifting of the cyclone. This result suggests that the ECMWF model may have a tendency towards a closer-to-shore solution in forecasting East Coast winter storms. The downstream impacts of Rossby wave packets (RWPs) on the predictability of winter storms are investigated to explore the source of ensemble uncertainties. The composited RWP amplitude (RWPA) anomalies show that there are enhanced RWPs propagating across the Pacific in both large-error and large-spread cases over the verification regions. There are also indications that the errors might propagate with a speed comparable to the group velocity of RWPs. Based on the composite results as well as our observations of the operational daily RWPA, a conceptual model of error/uncertainty development associated with RWPs has been proposed to serve as a practical tool for understanding the evolution of forecast errors and uncertainties associated with coherent RWPs originating from as far upstream as the western Pacific. (Abstract shortened by ProQuest.)
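Ensemble sensitivity analysis as described above is, at each grid point, a univariate regression of a scalar forecast metric on an earlier state variable across the ensemble members. A minimal sketch with a synthetic ensemble (the metric, field, and noise model are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 50-member ensemble: x is an initial-condition field value
# at one grid point; J is a scalar forecast metric (e.g. cyclone
# central pressure or an EOF principal component of the forecast).
n = 50
x = rng.normal(size=n)
J = 2.0 * x + rng.normal(scale=0.5, size=n)  # metric partly set by x

# Ensemble sensitivity: regression slope dJ/dx = cov(J, x) / var(x),
# computed member-wise (repeated at every grid point in practice).
sensitivity = np.cov(J, x)[0, 1] / np.var(x, ddof=1)
```

Mapping this slope over all grid points yields the sensitivity fields whose robustness is then checked, e.g. with statistical significance tests or leave-one-out cross validation as in the study above.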
Uncertainty in eddy covariance measurements and its application to physiological models
D.Y. Hollinger; A.D. Richardson; A.D. Richardson
2005-01-01
Flux data are noisy, and this uncertainty is largely due to random measurement error. Knowledge of uncertainty is essential for the statistical evaluation of modeled and measured fluxes, for comparison of parameters derived by fitting models to measured fluxes, and in formal data-assimilation efforts. We used the difference between simultaneous measurements from two...
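The paired-measurement idea truncated above has a standard form: if two instruments see the same true flux with independent, equal-variance errors, the variance of their differences is twice the single-measurement error variance. A sketch with synthetic data (the numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simultaneous flux measurements from two nearby towers observing the
# same "true" flux plus independent random measurement error.
true_flux = rng.normal(10.0, 2.0, size=500)
x1 = true_flux + rng.normal(0.0, 0.7, size=500)
x2 = true_flux + rng.normal(0.0, 0.7, size=500)

# Var(x1 - x2) = 2 * sigma^2 when errors are independent with equal
# variance, so the random error of a single measurement is:
sigma_hat = np.std(x1 - x2, ddof=1) / np.sqrt(2.0)
```

In practice the error distribution of eddy covariance fluxes is found to be heavier-tailed than Gaussian and to scale with flux magnitude, so the estimate is typically computed within bins of similar conditions.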
Planck 2015 results. III. LFI systematic uncertainties
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Battaglia, P.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Burigana, C.; Butler, R. C.; Calabrese, E.; Catalano, A.; Christensen, P. R.; Colombo, L. P. L.; Cruz, M.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Doré, O.; Ducout, A.; Dupac, X.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Frailis, M.; Franceschet, C.; Franceschi, E.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Harrison, D. L.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Knoche, J.; Krachmalnicoff, N.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Meinhold, P. R.; Mennella, A.; Migliaccio, M.; Mitra, S.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Nati, F.; Natoli, P.; Noviello, F.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Pearson, T. J.; Perdereau, O.; Pettorino, V.; Piacentini, F.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Puget, J.-L.; Rachen, J. 
P.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Stolyarov, V.; Stompor, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vassallo, T.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zibin, J. P.; Zonca, A.
2016-09-01
We present the current accounting of systematic effect uncertainties for the Low Frequency Instrument (LFI) that are relevant to the 2015 release of the Planck cosmological results, showing the robustness and consistency of our data set, especially for polarization analysis. We use two complementary approaches: (I) simulations based on measured data and physical models of the known systematic effects; and (II) analysis of difference maps containing the same sky signal ("null-maps"). The LFI temperature data are limited by instrumental noise. At large angular scales the systematic effects are below the cosmic microwave background (CMB) temperature power spectrum by several orders of magnitude. In polarization the systematic uncertainties are dominated by calibration uncertainties and compete with the CMB E-modes in the multipole range 10-20. Based on our model of all known systematic effects, we show that these effects introduce a slight bias of around 0.2σ on the reionization optical depth derived from the 70GHz EE spectrum using the 30 and 353GHz channels as foreground templates. At 30GHz the systematic effects are smaller than the Galactic foreground at all scales in temperature and polarization, which allows us to consider this channel as a reliable template of synchrotron emission. We assess the residual uncertainties due to LFI effects on CMB maps and power spectra after component separation and show that these effects are smaller than the CMB amplitude at all scales. We also assess the impact on non-Gaussianity studies and find it to be negligible. Some residuals still appear in null maps from particular sky survey pairs, particularly at 30 GHz, suggesting possible straylight contamination due to an imperfect knowledge of the beam far sidelobes.
Carleton, W. Christopher; Campbell, David; Collard, Mark
2018-01-01
Statistical time-series analysis has the potential to improve our understanding of human-environment interaction in deep time. However, radiocarbon dating—the most common chronometric technique in archaeological and palaeoenvironmental research—creates challenges for established statistical methods. The methods assume that observations in a time-series are precisely dated, but this assumption is often violated when calibrated radiocarbon dates are used because they usually have highly irregular uncertainties. As a result, it is unclear whether the methods can be reliably used on radiocarbon-dated time-series. With this in mind, we conducted a large simulation study to investigate the impact of chronological uncertainty on a potentially useful time-series method. The method is a type of regression involving a prediction algorithm called the Poisson Exponentially Weighted Moving Average (PEWMA). It is designed for use with count time-series data, which makes it applicable to a wide range of questions about human-environment interaction in deep time. Our simulations suggest that the PEWMA method can often correctly identify relationships between time-series despite chronological uncertainty. When two time-series are correlated with a coefficient of 0.25, the method is able to identify that relationship correctly 20–30% of the time, provided the time-series contain low noise levels. With correlations of around 0.5, it is capable of correctly identifying correlations despite chronological uncertainty more than 90% of the time. While further testing is desirable, these findings indicate that the method can be used to test hypotheses about long-term human-environment interaction with a reasonable degree of confidence. PMID:29351329
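The full PEWMA model is a Poisson state-space model with gamma-distributed rates; the sketch below shows only the exponential-smoothing core it builds on, applied to a count series. This is an illustration of the idea, not the published algorithm:

```python
def ewma_smoothed_rates(counts, alpha=0.3):
    """Exponentially smoothed event-rate estimates for a count series.

    This is only the smoothing core that PEWMA extends; the full PEWMA
    model (Poisson observations, gamma-distributed rates) is not
    implemented here.
    """
    rate = float(counts[0])  # initialize at the first observation
    rates = [rate]
    for y in counts[1:]:
        rate = alpha * y + (1 - alpha) * rate  # blend new count with history
        rates.append(rate)
    return rates  # rates[i] is the smoothed rate through index i
```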
Gold, Peter O.; Cowgill, Eric; Kreylos, Oliver; Gold, Ryan D.
2012-01-01
Three-dimensional (3D) slip vectors recorded by displaced landforms are difficult to constrain across complex fault zones, and the uncertainties associated with such measurements become increasingly challenging to assess as landforms degrade over time. We approach this problem from a remote sensing perspective by using terrestrial laser scanning (TLS) and 3D structural analysis. We have developed an integrated TLS data collection and point-based analysis workflow that incorporates accurate assessments of aleatoric and epistemic uncertainties using experimental surveys, Monte Carlo simulations, and iterative site reconstructions. Our scanning workflow and equipment requirements are optimized for single-operator surveying, and our data analysis process is largely completed using new point-based computing tools in an immersive 3D virtual reality environment. In a case study, we measured slip vector orientations at two sites along the rupture trace of the 1954 Dixie Valley earthquake (central Nevada, United States), yielding measurements that are the first direct constraints on the 3D slip vector for this event. These observations are consistent with a previous approximation of net extension direction for this event. We find that errors introduced by variables in our survey method result in <2.5 cm of variability in components of displacement, and are eclipsed by the 10–60 cm epistemic errors introduced by reconstructing the field sites to their pre-erosion geometries. Although the higher resolution TLS data sets enabled visualization and data interactivity critical for reconstructing the 3D slip vector and for assessing uncertainties, dense topographic constraints alone were not sufficient to significantly narrow the wide (<26°) range of allowable slip vector orientations that resulted from accounting for epistemic uncertainties.
Stenemo, Fredrik; Jarvis, Nicholas
2007-09-01
A simulation tool for site-specific vulnerability assessments of pesticide leaching to groundwater was developed, based on the pesticide fate and transport model MACRO, parameterized using pedotransfer functions and reasonable worst-case parameter values. The effects of uncertainty in the pedotransfer functions on simulation results were examined for 48 combinations of soils, pesticides and application timings, by sampling pedotransfer function regression errors and propagating them through the simulation model in a Monte Carlo analysis. An uncertainty factor, f(u), was derived, defined as the ratio between the concentration simulated with no errors, c(sim), and the 80th percentile concentration for the scenario. The pedotransfer function errors caused a large variation in simulation results, with f(u) ranging from 1.14 to 1440, with a median of 2.8. A non-linear relationship was found between f(u) and c(sim), which can be used to account for parameter uncertainty by correcting the simulated concentration, c(sim), to an estimated 80th percentile value. For fine-textured soils, the predictions were most sensitive to errors in the pedotransfer functions for two parameters regulating macropore flow (the saturated matrix hydraulic conductivity, K(b), and the effective diffusion pathlength, d) and two water retention function parameters (van Genuchten's N and alpha parameters). For coarse-textured soils, the model was also sensitive to errors in the exponent in the degradation water response function and the dispersivity, in addition to K(b), but showed little sensitivity to d. To reduce uncertainty in model predictions, improved pedotransfer functions for K(b), d, N and alpha would therefore be most useful. 2007 Society of Chemical Industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Jianyong; Zhou, Ying; Gao, Yang
Background: It is anticipated that climate change will influence heat-related mortality in the future. However, the estimation of excess mortality attributable to future heat waves is subject to large uncertainties, which have not been examined under the latest greenhouse gas emission scenarios. Objectives: We estimated the future heat wave impact on mortality in the eastern United States (~1,700 counties) under two Representative Concentration Pathways (RCPs) and analyzed the sources of uncertainties. Methods: Using dynamically downscaled hourly temperature projections in 2057-2059, we calculated heat wave days and episodes based on four heat wave metrics, and estimated the excess mortality attributable to them. The sources of uncertainty in estimated excess mortality were apportioned using a variance-decomposition method. Results: In the eastern U.S., the excess mortality attributable to heat waves could range from 200 to 7,807 persons/year, with a mean of 2,379 persons/year in 2057-2059. The projected average excess mortality under the RCP 4.5 and 8.5 scenarios was 1,403 and 3,556 persons/year, respectively. Excess mortality would be relatively high in the southern and eastern coastal areas. The major sources of uncertainty in the estimates are the relative risk of heat wave mortality, the RCP scenarios, and the heat wave definitions. Conclusions: The estimated mortality risks from future heat waves are likely an order of magnitude higher than current levels and could lead to thousands of deaths each year under the RCP 8.5 scenario. The substantial spatial variability in estimated county-level heat mortality suggests that effective mitigation and adaptation measures should be developed based on spatially resolved data.
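A variance decomposition of the kind used above to apportion uncertainty sources can be sketched for a full-factorial design by computing each factor's main-effect share of total output variance. This is a simplified stand-in for the study's method, with illustrative data:

```python
def main_effect_shares(runs, n_factors):
    """Fraction of output variance explained by each factor's main effect.

    `runs` is a list of (levels_tuple, output) pairs from a factorial
    design. A simplified stand-in for the variance-decomposition method
    used in the study (no interaction terms).
    """
    outputs = [y for _, y in runs]
    mean = sum(outputs) / len(outputs)
    total_var = sum((y - mean) ** 2 for y in outputs) / len(outputs)
    shares = []
    for f in range(n_factors):
        groups = {}  # outputs grouped by this factor's level
        for levels, y in runs:
            groups.setdefault(levels[f], []).append(y)
        # variance of group means, weighted by group size
        var_means = sum(
            len(g) * (sum(g) / len(g) - mean) ** 2 for g in groups.values()
        ) / len(outputs)
        shares.append(var_means / total_var if total_var else 0.0)
    return shares
```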
Susanne Winter; Andreas Böck; Ronald E. McRoberts
2012-01-01
Tree diameter and height are commonly measured forest structural variables, and indicators based on them are candidates for assessing forest diversity. We conducted our study on the uncertainty of estimates for mostly large geographic scales for four indicators of forest structural gamma diversity: mean tree diameter, mean tree height, and standard deviations of tree...
Inferring terrestrial photosynthetic light use efficiency of temperate ecosystems from space
Thomas Hilker; Nicholas C. Coops; Forest G. Hall; Caroline J. Nichol; Alexei Lyapustin; T. Andrew Black; Michael A. Wulder; Ray Leuning; Alan Barr; David Y. Hollinger; Bill Munger; Compton J. Tucker
2011-01-01
Terrestrial ecosystems absorb about 2.8 Gt C yr⁻¹, which is estimated to be about a quarter of the carbon emitted from fossil fuel combustion. However, the uncertainties of this sink are large, on the order of ±40%, with spatial and temporal variations largely unknown. One of the largest factors contributing to the uncertainty is photosynthesis,...
Paige F. B. Ferguson; Michael J. Conroy; John F. Chamblee; Jeffrey Hepinstall-Cymerman
2015-01-01
Parcelization and forest fragmentation are of concern for ecological, economic, and social reasons. Efforts to keep large, private forests intact may be supported by a decision-making process that incorporates landowners' objectives and uncertainty. We used structured decision making (SDM) with owners of large, private forests in Macon County, North Carolina....
Large uncertainty in carbon uptake potential of land-based climate-change mitigation efforts.
Krause, Andreas; Pugh, Thomas A M; Bayer, Anita D; Li, Wei; Leung, Felix; Bondeau, Alberte; Doelman, Jonathan C; Humpenöder, Florian; Anthoni, Peter; Bodirsky, Benjamin L; Ciais, Philippe; Müller, Christoph; Murray-Tortarolo, Guillermo; Olin, Stefan; Popp, Alexander; Sitch, Stephen; Stehfest, Elke; Arneth, Almut
2018-07-01
Most climate mitigation scenarios involve negative emissions, especially those that aim to limit global temperature increase to 2°C or less. However, the carbon uptake potential in land-based climate change mitigation efforts is highly uncertain. Here, we address this uncertainty by using two land-based mitigation scenarios from two land-use models (IMAGE and MAgPIE) as input to four dynamic global vegetation models (DGVMs; LPJ-GUESS, ORCHIDEE, JULES, LPJmL). Each of the four combinations of land-use models and mitigation scenarios aimed for a cumulative carbon uptake of ~130 GtC by the end of the century, achieved either via the cultivation of bioenergy crops combined with carbon capture and storage (BECCS) or avoided deforestation and afforestation (ADAFF). Results suggest large uncertainty in simulated future land demand and carbon uptake rates, depending on the assumptions related to land use and land management in the models. Total cumulative carbon uptake in the DGVMs is highly variable across mitigation scenarios, ranging between 19 and 130 GtC by year 2099. Only one out of the 16 combinations of mitigation scenarios and DGVMs achieves an equivalent or higher carbon uptake than achieved in the land-use models. The large differences in carbon uptake between the DGVMs and their discrepancy against the carbon uptake in IMAGE and MAgPIE are mainly due to different model assumptions regarding bioenergy crop yields and due to the simulation of soil carbon response to land-use change. Differences between land-use models and DGVMs regarding forest biomass and the rate of forest regrowth also have an impact, albeit smaller, on the results. Given the low confidence in simulated carbon uptake for a given land-based mitigation scenario, and that negative emissions simulated by the DGVMs are typically lower than assumed in scenarios consistent with the 2°C target, relying on negative emissions to mitigate climate change is a highly uncertain strategy. 
© 2018 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.
2015-11-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.
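The simple score-weighted ensemble averaging mentioned above can be sketched as follows. The exp(-misfit) weight is an illustrative choice, not necessarily the paper's scheme:

```python
import math

def score_weighted_mean(values, misfit_scores):
    """Weighted ensemble average where each run's weight decreases with
    its model-data misfit. The exp(-misfit) weighting is an illustrative
    choice; runs with large misfit contribute almost nothing.
    """
    weights = [math.exp(-s) for s in misfit_scores]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total
```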
Prospects for GRB science with the Fermi Large Area Telescope
Band, D. L.; Axelsson, M.; Baldini, L.; ...
2009-08-04
The Large Area Telescope (LAT) instrument on the Fermi mission will reveal the rich spectral and temporal gamma-ray burst (GRB) phenomena in the >100 MeV band. The synergy with Fermi's Gamma-ray Burst Monitor detectors will link these observations to those in the well explored 10-1000 keV range; the addition of the >100 MeV band observations will resolve theoretical uncertainties about burst emission in both the prompt and afterglow phases. Trigger algorithms will be applied to the LAT data both onboard the spacecraft and on the ground. Furthermore, the sensitivity of these triggers will differ because of the available computing resources onboard and on the ground. Here we present the LAT's burst detection methodologies and the instrument's GRB capabilities.
Huchra, J P
1992-04-17
The Hubble constant is the constant of proportionality between recession velocity and distance in the expanding universe. It is a fundamental property of cosmology that sets both the scale and the expansion age of the universe. It is determined by measurement of galaxy distances and recession velocities. Despite the development of new techniques for the measurement of galaxy distances, both calibration uncertainties and debates over systematic errors remain. Current determinations still range over nearly a factor of 2; the higher values favored by most local measurements are not consistent with many theories of the origin of large-scale structure and stellar evolution.
Duarte, Adam; Hatfield, Jeffrey; Swannack, Todd M.; Forstner, Michael R. J.; Green, M. Clay; Weckerly, Floyd W.
2015-01-01
Population viability analyses provide a quantitative approach that seeks to predict the possible future status of a species of interest under different scenarios and, therefore, can be important components of large-scale species’ conservation programs. We created a model and simulated range-wide population and breeding habitat dynamics for an endangered woodland warbler, the golden-cheeked warbler (Setophaga chrysoparia). Habitat-transition probabilities were estimated across the warbler's breeding range by combining National Land Cover Database imagery with multistate modeling. Using these estimates, along with recently published demographic estimates, we examined if the species can remain viable into the future given the current conditions. Lastly, we evaluated if protecting a greater amount of habitat would increase the number of warblers that can be supported in the future by systematically increasing the amount of protected habitat and comparing the estimated terminal carrying capacity at the end of 50 years of simulated habitat change. The estimated habitat-transition probabilities supported the hypothesis that habitat transitions are unidirectional, whereby habitat is more likely to diminish than regenerate. The model results indicated population viability could be achieved under current conditions, depending on dispersal. However, there is considerable uncertainty associated with the population projections due to parametric uncertainty. Model results suggested that increasing the amount of protected lands would have a substantial impact on terminal carrying capacities at the end of a 50-year simulation. 
Notably, this study identifies the need for collecting the data required to estimate demographic parameters in relation to changes in habitat metrics and population density in multiple regions, and highlights the importance of establishing a common definition of what constitutes protected habitat, what management goals are suitable within those protected areas, and a standard operating procedure to identify areas of priority for habitat conservation efforts. Therefore, we suggest future efforts focus on these aspects of golden-cheeked warbler conservation and ecology.
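Unidirectional habitat transitions of the kind estimated above can be illustrated as a Markov-chain projection. The two-state transition probabilities in the test are hypothetical, chosen only so that habitat loss exceeds regeneration:

```python
def project_states(state, transition, years):
    """Project state fractions forward with an annual transition matrix.

    transition[i][j] = P(state i -> state j) per year. State 0 might be
    "habitat" and state 1 "non-habitat"; the matrix values used below
    are hypothetical, not the paper's multistate estimates.
    """
    for _ in range(years):
        state = [
            sum(state[i] * transition[i][j] for i in range(len(state)))
            for j in range(len(state))
        ]
    return state
```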
Beam-specific planning volumes for scattered-proton lung radiotherapy
NASA Astrophysics Data System (ADS)
Flampouri, S.; Hoppe, B. S.; Slopsema, R. L.; Li, Z.
2014-08-01
This work describes the clinical implementation of a beam-specific planning treatment volume (bsPTV) calculation for lung cancer proton therapy and its integration into the treatment planning process. Uncertainties incorporated in the calculation of the bsPTV included setup errors, machine delivery variability, breathing effects, inherent proton range uncertainties and combinations of the above. Margins were added for translational and rotational setup errors and breathing motion variability during the course of treatment as well as for their effect on proton range of each treatment field. The effect of breathing motion and deformation on the proton range was calculated from 4D computed tomography data. Range uncertainties were considered taking into account the individual voxel HU uncertainty along each proton beamlet. Beam-specific treatment volumes generated for 12 patients were used: a) as planning targets, b) for routine plan evaluation, c) to aid beam angle selection and d) to create beam-specific margins for organs at risk to ensure sparing. The alternative planning technique based on the bsPTVs produced similar target coverage as the conventional proton plans while better sparing the surrounding tissues. Conventional proton plans were evaluated by comparing the dose distributions per beam with the corresponding bsPTV. The bsPTV volume as a function of beam angle revealed some unexpected sources of uncertainty and could help the planner choose more robust beams. Beam-specific planning volume for the spinal cord was used for dose distribution shaping to ensure organ sparing laterally and distally to the beam.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genser, Krzysztof; Hatcher, Robert; Perdue, Gabriel
2016-11-10
The Geant4 toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models are tuned to cover a large variety of possible applications. This raises the critical question of what uncertainties are associated with the Geant4 physics model, or group of models, involved in a simulation project. To address the challenge, we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Key functionalities of the toolkit are presented in this paper and are illustrated with selected results.
NASA Astrophysics Data System (ADS)
Herzberg, C.; Asimow, P. D.
2015-02-01
An upgrade of the PRIMELT algorithm for calculating primary magma composition is given together with its implementation in PRIMELT3 MEGA.xlsm software. It supersedes PRIMELT2.xls in correcting minor mistakes in melt fraction and computed Ni content of olivine, it identifies residuum mineralogy, and it provides a thorough analysis of uncertainties in mantle potential temperature and olivine liquidus temperature. The uncertainty analysis was made tractable by the computation of olivine liquidus temperatures as functions of pressure and partial melt MgO content between the liquidus and solidus. We present a computed anhydrous peridotite solidus in T-P space using relations amongst MgO, T and P along the solidus; it compares well with experiments on the solidus. Results of the application of PRIMELT3 to a wide range of basalts shows that the mantle sources of ocean islands and large igneous provinces were hotter than oceanic spreading centers, consistent with earlier studies and expectations of the mantle plume model.
OARE flight maneuvers and calibration measurements on STS-58
NASA Technical Reports Server (NTRS)
Blanchard, Robert C.; Nicholson, John Y.; Ritter, James R.; Larman, Kevin T.
1994-01-01
The Orbital Acceleration Research Experiment (OARE), which has flown on STS-40, STS-50, and STS-58, contains a three axis accelerometer with a single, nonpendulous, electrostatically suspended proofmass which can resolve accelerations to the nano-g level. The experiment also contains a full calibration station to permit in situ bias and scale factor calibration. This on-orbit calibration capability eliminates the large uncertainty of ground-based calibrations encountered with accelerometers flown in the past on the orbiter, thus providing absolute acceleration measurement accuracy heretofore unachievable. This is the first time accelerometer scale factor measurements have been performed on orbit. A detailed analysis of the calibration process is given along with results of the calibration factors from the on-orbit OARE flight measurements on STS-58. In addition, the analysis of OARE flight maneuver data used to validate the scale factor measurements in the sensor's most sensitive range is also presented. Estimates on calibration uncertainties are discussed. This provides bounds on the STS-58 absolute acceleration measurements for future applications.
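An in situ bias and scale-factor calibration reduces, in the simplest linear case, to fitting measured = scale × applied + bias over a set of reference accelerations. A hedged sketch using ordinary least squares (an illustration of the linear sensor model, not the OARE calibration procedure itself):

```python
def calibrate(applied, measured):
    """Least-squares scale factor and bias for a linear sensor model
    measured = scale * applied + bias, given n reference points.
    """
    n = len(applied)
    mx = sum(applied) / n
    my = sum(measured) / n
    sxx = sum((x - mx) ** 2 for x in applied)
    sxy = sum((x - mx) * (y - my) for x, y in zip(applied, measured))
    scale = sxy / sxx          # slope of the regression line
    bias = my - scale * mx     # intercept
    return scale, bias
```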
Re-evaluation of heat flow data near Parkfield, CA: Evidence for a weak San Andreas Fault
Fulton, P.M.; Saffer, D.M.; Harris, Reid N.; Bekins, B.A.
2004-01-01
Improved interpretations of the strength of the San Andreas Fault near Parkfield, CA based on thermal data require quantification of processes causing significant scatter and uncertainty in existing heat flow data. These effects include topographic refraction, heat advection by topographically-driven groundwater flow, and uncertainty in thermal conductivity. Here, we re-evaluate the heat flow data in this area by correcting for full 3-D terrain effects. We then investigate the potential role of groundwater flow in redistributing fault-generated heat, using numerical models of coupled heat and fluid flow for a wide range of hydrologic scenarios. We find that a large degree of the scatter in the data can be accounted for by 3-D terrain effects, and that for plausible groundwater flow scenarios frictional heat generated along a strong fault is unlikely to be redistributed by topographically-driven groundwater flow in a manner consistent with the 3-D corrected data. Copyright 2004 by the American Geophysical Union.
The future viability of algae-derived biodiesel under economic and technical uncertainties.
Brownbridge, George; Azadi, Pooya; Smallbone, Andrew; Bhave, Amit; Taylor, Benjamin; Kraft, Markus
2014-01-01
This study presents a techno-economic assessment of algae-derived biodiesel under economic and technical uncertainties associated with the development of algal biorefineries. A global sensitivity analysis was performed using a High Dimensional Model Representation (HDMR) method. It was found that, considering reasonable ranges over which each parameter can vary, the sensitivity of the biodiesel production cost to the key input parameters decreases in the following order: algae oil content>algae annual productivity per unit area>plant production capacity>carbon price increase rate. It was also found that the Return on Investment (ROI) is highly sensitive to the algae oil content, and to a lesser extent to the algae annual productivity, crude oil price and price increase rate, plant production capacity, and carbon price increase rate. For a large scale plant (100,000 tonnes of biodiesel per year) the production cost of biodiesel is likely to be £0.8-1.6 per kg. Copyright © 2013 Elsevier Ltd. All rights reserved.
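Sensitivity rankings like the one above can be approximated, crudely, by normalized one-at-a-time perturbations around a base point. This is a local stand-in for the global HDMR analysis used in the study, and all parameter names are illustrative:

```python
def rank_sensitivities(model, base, rel_step=0.01):
    """Rank input parameters by normalized one-at-a-time sensitivity
    (elasticity): |dY/Y| / |dX/X| around a base point.

    A local approximation only; a global method such as HDMR explores
    the full parameter ranges instead.
    """
    y0 = model(base)
    sens = {}
    for name, x0 in base.items():
        pert = dict(base)
        pert[name] = x0 * (1 + rel_step)   # perturb one parameter
        sens[name] = abs((model(pert) - y0) / y0) / rel_step
    return sorted(sens.items(), key=lambda kv: -kv[1])
```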
NASA Technical Reports Server (NTRS)
Boville, Byron A.; Baumhefner, David P.
1990-01-01
Using the NCAR Community Climate Model, Version 1, the forecast error growth and the climate drift resulting from the omission of the upper stratosphere are investigated. In the experiment, the control simulation is a seasonal integration of a medium-horizontal-resolution general circulation model with 30 levels extending from the surface to the upper mesosphere, while the main experiment uses an identical model, except that only the bottom 15 levels (below 10 mb) are retained. It is shown that both random and systematic errors develop rapidly in the lower stratosphere with some local propagation into the troposphere in the 10-30-day time range. The random growth rate in the troposphere in the case of the altered upper boundary was found to be slightly faster than that for the initial-condition uncertainty alone. However, this is not likely to make a significant impact in operational forecast models, because the initial-condition uncertainty is very large.
Fermi LAT observations of cosmic-ray electrons from 7 GeV to 1 TeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ackermann, M.; Ajello, M.; Bechtol, K.
2010-11-01
We present the results of our analysis of cosmic-ray electrons using about 8 × 10⁶ electron candidates detected in the first 12 months on-orbit by the Fermi Large Area Telescope. This work extends our previously published cosmic-ray electron spectrum down to 7 GeV, giving a spectral range of approximately 2.5 decades up to 1 TeV. We describe in detail the analysis and its validation using beam-test and on-orbit data. In addition, we describe the spectrum measured via a subset of events selected for the best energy resolution as a cross-check on the measurement using the full event sample. Our electron spectrum can be described with a power law ∝ E^(−3.08 ± 0.05) with no prominent spectral features within systematic uncertainties. Within the limits of our uncertainties, we can accommodate a slight spectral hardening at around 100 GeV and a slight softening above 500 GeV.
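A spectral index like the −3.08 quoted above is, in essence, a slope in log-log space. A minimal sketch of estimating it by least squares, using synthetic data rather than Fermi LAT events:

```python
import numpy as np

def fit_power_law(energies, fluxes):
    """Estimate the spectral index gamma for flux ∝ E**(-gamma)
    by ordinary least squares in log-log space."""
    logE, logF = np.log(energies), np.log(fluxes)
    slope, intercept = np.polyfit(logE, logF, 1)
    return -slope  # flux ∝ E**slope, so the index is -slope

# Synthetic noiseless spectrum with index 3.08 (illustrative, not real data)
E = np.logspace(np.log10(7), 3, 50)   # 7 GeV to 1 TeV
F = E ** -3.08
gamma = fit_power_law(E, F)
```

A real analysis would of course weight by counting statistics and fold in the instrument response; this only shows the core log-log regression.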
NASA Astrophysics Data System (ADS)
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-resolution hydrological simulation has refined spatial descriptions of hydrological behavior. This trend, however, has been accompanied by growing model complexity and parameter counts, which bring new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), which combines Monte Carlo sampling with Bayesian estimation, has been widely used for uncertainty analysis of hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and search performance. In light of these features, this study adopts a genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with high likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method is efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
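The core GLUE loop described above can be sketched in a few lines: sample the prior, score each parameter set with a likelihood measure, keep the "behavioural" sets above a threshold, and derive prediction bounds. The toy linear model, threshold, and parameter ranges below are illustrative assumptions, not the study's hydrological models.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(params, x):
    a, b = params
    return a * x + b

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, a common GLUE likelihood measure."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Synthetic "observations" from a known truth (illustrative only)
x = np.linspace(0, 10, 50)
obs = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)

# Plain Monte Carlo sampling of the prior, as in standard GLUE
samples = rng.uniform([0, -5], [5, 5], size=(5000, 2))
scores = np.array([nse(obs, toy_model(p, x)) for p in samples])

# Retain behavioural sets above a likelihood threshold and weight them
behavioural = samples[scores > 0.9]
weights = scores[scores > 0.9]
weights = weights / weights.sum()
preds = np.array([toy_model(p, x) for p in behavioural])
mean_pred = weights @ preds                               # weighted mean
lower, upper = np.quantile(preds, [0.05, 0.95], axis=0)   # uncertainty band
```

The study's point is that replacing the uniform prior sampling step with heuristic optimizers concentrates samples in the high-likelihood region, making the behavioural set cheaper to obtain.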
Parton shower and NLO-matching uncertainties in Higgs boson pair production
NASA Astrophysics Data System (ADS)
Jones, Stephen; Kuttimalai, Silvan
2018-02-01
We perform a detailed study of NLO parton shower matching uncertainties in Higgs boson pair production through gluon fusion at the LHC, based on a generic and process-independent implementation of NLO subtraction and parton shower matching schemes for loop-induced processes in the Sherpa event generator. We take into account the full top-quark mass dependence in the two-loop virtual corrections and compare the results to an effective theory approximation. In the full calculation, our findings suggest large parton shower matching uncertainties that are absent in the effective theory approximation. We observe large uncertainties even in regions of phase space where fixed-order calculations are theoretically well motivated and parton shower effects are expected to be small. We compare our results to NLO matched parton shower simulations and analytic resummation results that are available in the literature.
Λ_b → p l⁻ ν̄_l form factors from lattice QCD with static b quarks
Detmold, William; Lin, C.-J. David; Meinel, Stefan; ...
2013-07-23
We present a lattice QCD calculation of form factors for the decay Λ_b → p μ⁻ ν̄_μ, which is a promising channel for determining the Cabibbo-Kobayashi-Maskawa matrix element |V_ub| at the Large Hadron Collider. In this initial study we work in the limit of static b quarks, where the number of independent form factors reduces to two. We use dynamical domain-wall fermions for the light quarks, and perform the calculation at two different lattice spacings and at multiple values of the light-quark masses in a single large volume. Using our form factor results, we calculate the Λ_b → p μ⁻ ν̄_μ differential decay rate in the range 14 GeV² ≤ q² ≤ q²_max, and obtain the integral ∫_{14 GeV²}^{q²_max} (dΓ/dq²) dq² / |V_ub|² = 15.3 ± 4.2 ps⁻¹. Combined with future experimental data, this will give a novel determination of |V_ub| with about 15% theoretical uncertainty. The uncertainty is dominated by the use of the static approximation for the b quark, and can be reduced further by performing the lattice calculation with a more sophisticated heavy-quark action.
Low energy peripheral scaling in nucleon-nucleon scattering and uncertainty quantification
NASA Astrophysics Data System (ADS)
Ruiz Simo, I.; Amaro, J. E.; Ruiz Arriola, E.; Navarro Pérez, R.
2018-03-01
We analyze the peripheral structure of the nucleon-nucleon interaction for LAB energies below 350 MeV. To this end we transform the scattering matrix into the impact parameter representation by analyzing the scaled phase shifts (L + 1/2)δ_JLS(p) and the scaled mixing parameters (L + 1/2)ε_JLS(p) in terms of the impact parameter b = (L + 1/2)/p. According to the eikonal approximation, at large angular momentum L these functions should become a universal function of b, independent of L. This allows us to discuss in a rather transparent way the role of statistical and systematic uncertainties in the different long-range components of the two-body potential. Implications for peripheral waves obtained from chiral perturbation theory interactions to fifth order (N5LO), or from the large body of NN data considered in the SAID partial wave analysis, are also drawn by comparing them with other phenomenological high-quality interactions, likewise constructed to fit scattering data. We find that both N5LO and SAID peripheral waves disagree by more than 5σ with the Granada-2013 statistical analysis, by more than 2σ with the 6 statistically equivalent potentials fitting the Granada-2013 database, and by about 1σ with the historical set of 13 high-quality potentials developed since the 1993 Nijmegen analysis.
Uncertainties for Swiss LWR spent nuclear fuels due to nuclear data
NASA Astrophysics Data System (ADS)
Rochman, Dimitri A.; Vasiliev, Alexander; Dokhane, Abdelhamid; Ferroukhi, Hakim
2018-05-01
This paper presents a study of the impact of nuclear data (cross sections, neutron emission and spectra) on different quantities for spent nuclear fuels (SNF) from Swiss power plants: activities, decay heat, neutron and gamma sources, and isotopic vectors. Realistic irradiation histories are considered using validated core follow-up models based on CASMO and SIMULATE. Two Pressurized and one Boiling Water Reactor (PWR and BWR) are considered over a large number of operated cycles. All the assemblies at the end of the cycles are studied, whether reloaded or finally discharged, spanning a large range of exposure (from 4 to 60 MWd/kgU for ≃9200 assembly-cycles). Both UO2 and MOX fuels were used during the reactor cycles, with enrichments from 1.9 to 4.7% for the UO2 and 2.2 to 5.8% Pu for the MOX. The SNF characteristics presented in this paper are calculated with the SNF code. The calculated uncertainties, based on the ENDF/B-VII.1 library, are obtained using a simple Monte Carlo sampling method. It is demonstrated that the impact of nuclear data is considerable (e.g., up to 17% for the decay heat), showing the necessity of considering it in safety analyses of SNF handling and disposal.
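The "simple Monte Carlo sampling method" named above amounts to perturbing the nuclear-data inputs within their uncertainties and re-evaluating the output quantity. A minimal sketch for a decay-heat-like quantity; the isotope values and relative uncertainties are invented placeholders, not ENDF/B-VII.1 data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy decay-heat model: heat = sum over isotopes of activity x energy/decay.
# All numbers are illustrative placeholders, not evaluated nuclear data.
nominal_activity = np.array([1.0e3, 5.0e2, 2.0e2])   # arbitrary units
energy_per_decay = np.array([0.5, 1.2, 0.8])
rel_sigma = np.array([0.05, 0.10, 0.17])             # 1-sigma relative

n = 10_000
# Simple Monte Carlo sampling: perturb each input independently and
# propagate each perturbed set through the model
perturbed = nominal_activity * rng.normal(1.0, rel_sigma, size=(n, 3))
heat = perturbed @ energy_per_decay

nominal_heat = nominal_activity @ energy_per_decay
rel_uncertainty = heat.std() / nominal_heat          # output uncertainty
```

Real studies sample correlated covariance matrices rather than independent factors, but the propagation principle is the same.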
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Jonathan H., E-mail: jonathan.h.m.davis@gmail.com
2015-03-01
Future multi-tonne Direct Detection experiments will be sensitive to solar neutrino induced nuclear recoils which form an irreducible background to light Dark Matter searches. Indeed for masses around 6 GeV the spectra of neutrinos and Dark Matter are so similar that experiments are said to run into a neutrino floor, for which sensitivity increases only marginally with exposure past a certain cross section. In this work we show that this floor can be overcome using the different annual modulation expected from solar neutrinos and Dark Matter. Specifically for cross sections below the neutrino floor the DM signal is observable through a phase shift and a smaller amplitude for the time-dependent event rate. This allows the exclusion power to be improved by up to an order of magnitude for large exposures. In addition we demonstrate that, using only spectral information, the neutrino floor exists over a wider mass range than has been previously shown, since the large uncertainties in the Dark Matter velocity distribution make the signal spectrum harder to distinguish from the neutrino background. However for most velocity distributions it can still be surpassed using timing information, and so the neutrino floor is not an absolute limit on the sensitivity of Direct Detection experiments.
Data Analysis Approaches for the Risk-Informed Safety Margins Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Alfonsi, Andrea; Maljovec, Daniel P.
2016-09-01
In the past decades, several numerical simulation codes have been employed to simulate accident dynamics (e.g., RELAP5-3D, RELAP-7, MELCOR, MAAP). In order to evaluate the impact of uncertainties on accident dynamics, several stochastic methodologies have been coupled with these codes. These stochastic methods range from classical Monte-Carlo and Latin Hypercube sampling to stochastic polynomial methods. Similar approaches have been introduced into the risk and safety community, where stochastic methods (such as RAVEN, ADAPT, MCDET, ADS) have been coupled with safety analysis codes in order to evaluate the safety impact of the timing and sequencing of events. These approaches are usually called Dynamic PRA or simulation-based PRA methods. These uncertainty and safety methods usually generate a large number of simulation runs (database storage may be on the order of gigabytes or higher). The scope of this paper is to present a broad overview of methods and algorithms that can be used to analyze and extract information from large data sets containing time-dependent data. In this context, "extracting information" means constructing input-output correlations, finding commonalities, and identifying outliers. Some of the algorithms presented here have been developed or are under development within the RAVEN statistical framework.
Inoue, Tatsuya; Widder, Joachim; van Dijk, Lisanne V; Takegawa, Hideki; Koizumi, Masahiko; Takashina, Masaaki; Usui, Keisuke; Kurokawa, Chie; Sugimoto, Satoru; Saito, Anneyuko I; Sasai, Keisuke; Van't Veld, Aart A; Langendijk, Johannes A; Korevaar, Erik W
2016-11-01
To investigate the impact of setup and range uncertainties, breathing motion, and interplay effects using scanning pencil beams in robustly optimized intensity modulated proton therapy (IMPT) for stage III non-small cell lung cancer (NSCLC). Three-field IMPT plans were created using a minimax robust optimization technique for 10 NSCLC patients. The plans accounted for 5- or 7-mm setup errors with ±3% range uncertainties. The robustness of the IMPT nominal plans was evaluated considering (1) isotropic 5-mm setup errors with ±3% range uncertainties; (2) breathing motion; (3) interplay effects; and (4) a combination of items 1 and 2. The plans were calculated using 4-dimensional and average intensity projection computed tomography images. The target coverage (TC, volume receiving 95% of the prescribed dose) and homogeneity index (D2 - D98, where D2 and D98 are the minimum doses received by 2% and 98% of the volume) for the internal clinical target volume, and dose indexes for lung, esophagus, heart, and spinal cord, were compared with those of clinical volumetric modulated arc therapy plans. The TC and homogeneity index for all plans were within clinical limits when breathing motion and interplay effects were considered independently. The setup and range uncertainties had a larger effect when their combined effect was considered. The TC decreased to <98% (clinical threshold) in 3 of 10 patients for robust 5-mm evaluations. However, the TC remained >98% in robust 7-mm evaluations for all patients. The organ-at-risk dose parameters did not vary significantly between the respective robust 5-mm and robust 7-mm evaluations for the 4 error types. Compared with the volumetric modulated arc therapy plans, the IMPT plans showed better target homogeneity, and mean lung and heart dose parameters were reduced by about 40% and 60%, respectively.
In robustly optimized IMPT for stage III NSCLC, the setup and range uncertainties, breathing motion, and interplay effects have limited impact on target coverage, dose homogeneity, and organ-at-risk dose parameters. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Yanai, R. D.; Bae, K.; Levine, C. R.; Lilly, P.; Vadeboncoeur, M. A.; Fatemi, F. R.; Blum, J. D.; Arthur, M.; Hamburg, S.
2013-12-01
Ecosystem nutrient budgets are difficult to construct and even more difficult to replicate. As a result, uncertainty in the estimates of pools and fluxes is rarely reported, and opportunities to assess confidence through replicated measurements are rare. In this study, we report nutrient concentrations and contents of soil and biomass pools in northern hardwood stands in replicate plots within replicate stands in 3 age classes (14-19 yr, 26-29 yr, and > 100 yr) at the Bartlett Experimental Forest, USA. Soils were described by quantitative soil pits in three plots per stand, excavated by depth increment to the C horizon and analyzed by a sequential extraction procedure. Variation in soil mass among pits within stands averaged 28% (coefficient of variation); variation among stands within an age class ranged from 9-25%. Variation in nutrient concentrations was higher still (averaging 38% within element, depth increment, and extraction type), perhaps because the depth increments contained varying proportions of genetic horizons. To estimate nutrient contents of aboveground biomass, we propagated model uncertainty through allometric equations, and found errors ranging from 3-7%, depending on the stand. The variation in biomass among plots within stands (6-19%) was always larger than the allometric uncertainties. Measured nutrient concentrations of tree tissues were more variable than the uncertainty in biomass. Foliage had the lowest variability (averaging 16% for Ca, Mg, K, N and P within age class and species), and wood had the highest (averaging 30%), when reported in proportion to the mean, because concentrations in wood are low. For Ca content of aboveground biomass, sampling variation was the greatest source of uncertainty. Coefficients of variation among plots within a stand averaged 16%; stands within an age class ranged from 5-25% CV, including uncertainties in tree allometry and tissue chemistry.
Uncertainty analysis can help direct research effort to areas most in need of improvement. In systems such as the one we studied, more intensive sampling would be the best approach to reducing uncertainty, as natural spatial variation was higher than model or measurement uncertainties.
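Propagating model uncertainty through allometric equations, as described above, is commonly done by Monte Carlo draws over the equation parameters. A minimal sketch; the equation form, parameter means, standard errors, and diameters are hypothetical placeholders, not the Bartlett values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative allometric equation: biomass = a * dbh**b (kg, cm).
# Parameter means and standard errors are hypothetical, not fitted values.
a_mean, a_se = 0.1, 0.01
b_mean, b_se = 2.4, 0.05
dbh = np.array([12.0, 25.0, 40.0])   # tree diameters in one plot

n = 20_000
a = rng.normal(a_mean, a_se, n)
b = rng.normal(b_mean, b_se, n)
# Propagate parameter uncertainty: total plot biomass per parameter draw
plot_biomass = (a[:, None] * dbh[None, :] ** b[:, None]).sum(axis=1)

mean = plot_biomass.mean()
cv = plot_biomass.std() / mean       # coefficient of variation
```

Comparing this propagated CV against the plot-to-plot sampling CV is exactly the comparison the study makes when it concludes that sampling variation dominates.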
Torralba, Marta; Díaz-Pérez, Lucía C.
2017-01-01
This article presents a self-calibration procedure and the experimental results for the geometrical characterisation of a 2D laser system operating along a large working range (50 mm × 50 mm) with submicrometre uncertainty. Its purpose is to correct the geometric errors of the 2D laser system setup generated when positioning the two laser heads and the plane mirrors used as reflectors. The non-calibrated artefact used in this procedure is a commercial grid encoder that is also a measuring instrument. Therefore, the self-calibration procedure also allows the determination of the geometrical errors of the grid encoder, including its squareness error. The precision of the proposed algorithm is tested using virtual data. Actual measurements are subsequently registered, and the algorithm is applied. Once the laser system is characterised, the error of the grid encoder is calculated along the working range, resulting in an expanded submicrometre calibration uncertainty (k = 2) for the X and Y axes. The results of the grid encoder calibration are comparable to the errors provided by the calibration certificate for its main central axes. It is, therefore, possible to confirm the suitability of the self-calibration methodology proposed in this article.
Pindado Jiménez, Oscar; Pérez Pastor, Rosa Ma; Escolano Segovia, Olga; del Reino Querencia, Susana
2015-01-01
This work proposes an analytical procedure for measuring the aliphatic and aromatic hydrocarbon fractions present in groundwater. In this method, hydrocarbons are solid-phase extracted (SPE) twice from the groundwater and the resulting fractions are analyzed by gas chromatography with flame ionization detection. The first SPE transfers the hydrocarbons present in groundwater into organic solvents and the second SPE divides them into aliphatic and aromatic hydrocarbons. A validation study is carried out and its uncertainties are discussed. The main sources of uncertainty are identified and evaluated by applying the bottom-up approach. Limits of detection for the hydrocarbon ranges are below 5 µg L(-1), precision is within 30%, and acceptable recoveries are reached for the aliphatic and aromatic fractions studied. The uncertainties due to sample volume, calibration factor and recovery are the largest contributions. The expanded uncertainty ranges from 13% to 26% for the aliphatic hydrocarbon ranges and from 14% to 23% for the aromatic hydrocarbon ranges. As an application, the proposed method is satisfactorily applied to a set of groundwater samples collected in a polluted area with evidence of a high degree of hydrocarbon contamination. The results show that the range of aliphatic hydrocarbons >C21-C35 is the most abundant, with values ranging from 215 µg L(-1) to 354 µg L(-1), which is associated with contamination by diesel. Copyright © 2014 Elsevier B.V. All rights reserved.
Climate change streamflow scenarios designed for critical period water resources planning studies
NASA Astrophysics Data System (ADS)
Hamlet, A. F.; Snover, A. K.; Lettenmaier, D. P.
2003-04-01
Long-range water planning in the United States is usually conducted by individual water management agencies using a critical period planning exercise based on a particular period of the observed streamflow record and a suite of internally developed simulation tools representing the water system. In the context of planning for climate change, such an approach is flawed in that it assumes that the future climate will be like the historic record. Although more sophisticated planning methods will probably be required as time goes on, a short-term strategy for incorporating climate uncertainty into long-range water planning as soon as possible is to create alternate inputs to existing planning methods that account for climate uncertainty as it affects both supply and demand. We describe a straightforward technique for constructing streamflow scenarios based on the historic record that include the broad-based effects of changed regional climate simulated by several global climate models (GCMs). The streamflow scenarios are based on hydrologic simulations driven by historic climate data perturbed according to regional climate signals from four GCMs using the simple "delta" method. Further data processing then removes systematic hydrologic model bias using a quantile-based bias correction scheme, and lastly, the effects of random errors in the raw hydrologic simulations are removed. These techniques produce streamflow scenarios that are consistent in time and space with the historic streamflow record while incorporating fundamental changes in temperature and precipitation from the GCM scenarios. Planning model simulations based on these climate change streamflow scenarios can therefore be compared directly to planning model simulations based on the historic record of streamflows to help planners understand the potential impacts of climate uncertainty.
The methods are currently being tested and refined in two large-scale planning exercises currently being conducted in the Pacific Northwest (PNW) region of the US, and the resulting streamflow scenarios will be made freely available on the internet for a large number of sites in the PNW to help defray the costs of including climate change information in other studies.
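The quantile-based bias correction step mentioned above maps each simulated value onto the observed value at the same empirical quantile. A minimal sketch of that mapping, using synthetic gamma-distributed flows; operational schemes add seasonal windows and extrapolation rules not shown here.

```python
import numpy as np

def quantile_map(sim, obs_ref, sim_ref):
    """Quantile-based bias correction: replace each simulated value with
    the observed value at the same empirical quantile, via piecewise-linear
    interpolation between matched quantile nodes."""
    quantiles = np.linspace(0, 1, 101)
    sim_q = np.quantile(sim_ref, quantiles)
    obs_q = np.quantile(obs_ref, quantiles)
    # Locate each simulated value in the simulated reference CDF, then
    # read off the observed value at the corresponding quantile
    return np.interp(sim, sim_q, obs_q)

rng = np.random.default_rng(7)
obs_ref = rng.gamma(2.0, 50.0, 1000)    # "observed" flows (synthetic)
sim_ref = obs_ref * 1.3 + 20            # systematically biased "model" flows
corrected = quantile_map(sim_ref, obs_ref, sim_ref)
```

After correction, the simulated distribution matches the observed one, which is exactly what lets corrected climate-change scenarios be compared directly against the historic record.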
Connecting spatial and temporal scales of tropical precipitation in observations and the MetUM-GA6
NASA Astrophysics Data System (ADS)
Martin, Gill M.; Klingaman, Nicholas P.; Moise, Aurel F.
2017-01-01
This study analyses tropical rainfall variability (on a range of temporal and spatial scales) in a set of parallel Met Office Unified Model (MetUM) simulations at a range of horizontal resolutions, which are compared with two satellite-derived rainfall datasets. We focus on the shorter scales, i.e. from the native grid and time step of the model through sub-daily to seasonal, since previous studies have paid relatively little attention to sub-daily rainfall variability and how this feeds through to longer scales. We find that the behaviour of the deep convection parametrization in this model on the native grid and time step is largely independent of the grid-box size and time step length over which it operates. There is also little difference in the rainfall variability on larger/longer spatial/temporal scales. Tropical convection in the model on the native grid/time step is spatially and temporally intermittent, producing very large rainfall amounts interspersed with grid boxes/time steps of little or no rain. In contrast, switching off the deep convection parametrization, albeit at an unrealistic resolution for resolving tropical convection, results in very persistent (for limited periods), but very sporadic, rainfall. In both cases, spatial and temporal averaging smoothes out this intermittency. On the ~100 km scale, for oceanic regions, the spectra of 3-hourly and daily mean rainfall in the configurations with parametrized convection agree fairly well with those from satellite-derived rainfall estimates, while at ~10-day timescales the averages are overestimated, indicating a lack of intra-seasonal variability. Over tropical land the results are more varied, but the model often underestimates the daily mean rainfall (partly as a result of a poor diurnal cycle) but still lacks variability on intra-seasonal timescales.
Ultimately, such work will shed light on how uncertainties in modelling small-/short-scale processes relate to uncertainty in climate change projections of rainfall distribution and variability, with a view to reducing such uncertainty through improved modelling of small-/short-scale processes.
NASA Astrophysics Data System (ADS)
Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher, M.; Lerner, D.
2006-12-01
Characterization of the uncertainty associated with groundwater quality models is often of critical importance, for example in cases where environmental models are employed in risk assessment. Insufficient data, inherent variability, and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models involving different natures of parametric variability and uncertainty. General MCS, or variants such as Latin Hypercube Sampling (LHS), treat variability and uncertainty as a single random entity, and the generated samples are treated as crisp, with vagueness assumed to be randomness. Also, when the models are used as purely predictive tools, uncertainty and variability lead to the need to assess the plausible range of model outputs. An improved systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates, and can aid in assessing how various possible model estimates should be weighed. The present study aims to introduce Fuzzy Latin Hypercube Sampling (FLHS), a hybrid approach incorporating cognitive and noncognitive uncertainties. Noncognitive uncertainty, such as physical randomness or statistical uncertainty due to limited information, can be described by its own probability density function (PDF), whereas cognitive uncertainty, such as estimation error, can be described by a membership function for its fuzziness and a confidence interval by α-cuts. An important property of this theory is its ability to merge the inexact generated data of the LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled with proper incorporation of uncertainty and variability.
A fuzzified statistical summary of the model results will produce indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty of input variables to model predictions. The feasibility of the method is validated to assess uncertainty propagation of parameter values for estimation of the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.
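The LHS backbone that FLHS builds on can be sketched compactly: stratify each dimension into equal-probability intervals, draw one point per interval, and shuffle the intervals across dimensions. The fuzzy extension (membership functions and α-cuts) is not shown; this is only the plain sampler it modifies.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Plain Latin Hypercube Sampling on [0, 1)**n_dims: partition each
    dimension into n_samples equal strata, place one point uniformly in
    each stratum, and randomly permute the strata per dimension."""
    u = rng.random((n_samples, n_dims))
    strata = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
    return (strata + u) / n_samples

rng = np.random.default_rng(3)
samples = latin_hypercube(100, 2, rng)
# Each dimension now contains exactly one sample in each of the 100 strata,
# which is the "entire range of each variable is sampled" guarantee.
```

Mapping the unit-cube samples through inverse CDFs then imposes each parameter's own PDF, the noncognitive part of the FLHS scheme.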
DOE Office of Scientific and Technical Information (OSTI.GOV)
Etingov, Pavel V.; Makarov, Yuri V.; Wu, Di
The document describes a detailed uncertainty quantification (UQ) methodology developed by PNNL to estimate secure ranges of potential dynamic intra-hour interchange adjustments in the ISO-NE system, and provides a description of the dynamic interchange adjustment (DINA) tool developed under the same contract. The overall system ramping-up and ramping-down capability, spinning reserve requirements, interchange schedules, load variations, and uncertainties from the various sources relevant to the ISO-NE system are incorporated into the methodology and the tool. The DINA tool has been tested by PNNL and ISO-NE staff engineers using ISO-NE data.
USGS Polar Temperature Logging System, Description and Measurement Uncertainties
Clow, Gary D.
2008-01-01
This paper provides an updated technical description of the USGS Polar Temperature Logging System (PTLS) and a complete assessment of the measurement uncertainties. This measurement system is used to acquire subsurface temperature data for climate-change detection in the polar regions and for reconstructing past climate changes using the 'borehole paleothermometry' inverse method. Specifically designed for polar conditions, the PTLS can measure temperatures as low as -60 degrees Celsius with a sensitivity ranging from 0.02 to 0.19 millikelvin (mK). A modular design allows the PTLS to reach depths as great as 4.5 kilometers with a skid-mounted winch unit or 650 meters with a small helicopter-transportable unit. The standard uncertainty (uT) of the ITS-90 temperature measurements obtained with the current PTLS ranges from 3.0 mK at -60 degrees Celsius to 3.3 mK at 0 degrees Celsius. Relative temperature measurements used for borehole paleothermometry have a standard uncertainty (urT) whose upper limit ranges from 1.6 mK at -60 degrees Celsius to 2.0 mK at 0 degrees Celsius. The uncertainty of a temperature sensor's depth during a log depends on specific borehole conditions and the temperature near the winch and thus must be treated on a case-by-case basis. However, recent experience indicates that when logging conditions are favorable, the 4.5-kilometer system is capable of producing depths with a standard uncertainty (uZ) on the order of 200-250 parts per million.
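Standard uncertainties like uT are conventionally built by combining independent component uncertainties in quadrature (the GUM root-sum-square rule). A one-line sketch; the component values below are a hypothetical budget, not the actual PTLS uncertainty analysis.

```python
import numpy as np

def combined_standard_uncertainty(components_mK):
    """Root-sum-square combination of independent standard uncertainty
    components, per the GUM. Inputs and output are in millikelvin."""
    return float(np.sqrt(np.sum(np.square(components_mK))))

# Hypothetical budget at one temperature: sensor calibration, readout
# electronics, and reference drift, all in mK (illustrative values only)
u_c = combined_standard_uncertainty([2.5, 1.5, 0.8])
```

Correlated components would instead require the full covariance form of the law of propagation of uncertainty.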
Treatment planning optimisation in proton therapy
McGowan, S E; Burnet, N G; Lomax, A J
2013-01-01
The goal of radiotherapy is to achieve uniform target coverage while sparing normal tissue. In proton therapy, the same sources of geometric uncertainty are present as in conventional radiotherapy. However, an important and fundamental difference in proton therapy is that protons have a finite range that depends strongly on the electron density of the material they traverse, resulting in a steep dose gradient at the distal edge of the Bragg peak. Accurate knowledge of the sources and magnitudes of the uncertainties affecting the proton range is therefore essential for producing plans that are robust to these uncertainties. This review describes the current knowledge of the geometric uncertainties and discusses their impact on proton dose plans. Patient-specific validation is essential, and in cases of complex intensity-modulated proton therapy plans the use of a planning target volume (PTV) may fail to ensure coverage of the target. In cases where a PTV cannot be used, other methods of quantifying plan quality have been investigated. A promising option is to incorporate uncertainties directly into the optimisation algorithm. A further development is the inclusion of robustness in a multicriteria optimisation framework, allowing a multi-objective Pareto optimisation function to balance robustness and conformity. The question remains whether adaptive therapy can become an integral part of proton therapy, allowing re-optimisation during the course of a patient's treatment. The challenge of ensuring that plans are robust to range uncertainties in proton therapy remains, although these methods can provide practical solutions. PMID:23255545
Wissmann, F; Reginatto, M; Möller, T
2010-09-01
The problem of finding a simple, generally applicable description of worldwide measured ambient dose equivalent rates at aviation altitudes between 8 and 12 km is difficult to solve due to the large variety of functional forms and parametrisations that are possible. We present an approach that uses Bayesian statistics and Monte Carlo methods to fit mathematical models to a large set of data and to compare the different models. About 2500 data points measured in the periods 1997-1999 and 2003-2006 were used. Since the data cover wide ranges of barometric altitude, vertical cut-off rigidity and phases of solar cycle 23, we developed functions which depend on these three variables. Whereas the dependence on the vertical cut-off rigidity is described by an exponential, the dependences on barometric altitude and solar activity may be approximated by linear functions in the ranges under consideration. Therefore, a simple Taylor expansion was used to define different models and to investigate the relevance of the different expansion coefficients. With the method presented here, it is possible to obtain probability distributions for each expansion coefficient and thus to extract reliable uncertainties even for the evaluated dose rate. The resulting function agrees well with new measurements made at fixed geographic positions and during long-haul flights covering a wide range of latitudes.
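The core idea of the abstract above, obtaining probability distributions for each expansion coefficient rather than point estimates, can be illustrated with a minimal Metropolis random-walk sampler. This sketch fits a one-predictor linear model to synthetic data; the data, coefficients, and noise level are all hypothetical stand-ins, not the paper's actual dose-rate model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the measurements: a straight line in one
# predictor (think barometric altitude), with known Gaussian noise.
x = np.linspace(0.0, 1.0, 50)
sigma = 0.1
y = 2.0 + 1.5 * x + sigma * rng.standard_normal(x.size)

def log_post(theta):
    """Log-posterior: flat priors, Gaussian likelihood with known sigma."""
    b0, b1 = theta
    resid = y - (b0 + b1 * x)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Metropolis random walk over the two expansion coefficients.
theta = np.array([0.0, 0.0])
lp = log_post(theta)
samples = []
for i in range(20000):
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    if i >= 5000:                             # discard burn-in
        samples.append(theta.copy())

samples = np.array(samples)
mean, std = samples.mean(axis=0), samples.std(axis=0)
print(f"b0 = {mean[0]:.2f} +/- {std[0]:.2f}")
print(f"b1 = {mean[1]:.2f} +/- {std[1]:.2f}")
```

The posterior standard deviations here play the role of the "reliable uncertainties" for each coefficient; comparing competing Taylor-expansion models would additionally require a model-comparison criterion, which this sketch omits.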
Matsuyama, Ryota; Lee, Hyojung; Yamaguchi, Takayuki; Tsuzuki, Shinya
2018-01-01
Background: A Rohingya refugee camp in Cox’s Bazar, Bangladesh experienced a large-scale diphtheria epidemic in 2017. Because the previously immune fraction among the refugees could not be explicitly estimated from background information, we conducted an uncertainty analysis of the basic reproduction number, R0. Methods: A renewal process model was devised to estimate R0 and the ascertainment rate of cases; the loss of susceptible individuals was modelled as one minus the sum of the initially immune fraction and the fraction naturally infected during the epidemic. To account for the uncertainty in the initially immune fraction, we employed Latin hypercube sampling (LHS). Results: R0 ranged from 4.7 to 14.8, with a median estimate of 7.2, and was positively correlated with the ascertainment rate. Sensitivity analysis indicated that R0 would become smaller with greater variance of the generation time. Discussion: The estimated R0 was broadly consistent with a published estimate from endemic data, indicating that a vaccination coverage of 86% must be achieved to prevent an epidemic by means of mass vaccination. LHS was particularly useful in the setting of a refugee camp, where the background health status is poorly quantified. PMID:29629244
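Latin hypercube sampling of the kind used above can be sketched in a few lines: the uncertain input's range is divided into equal-probability strata, one draw is taken per stratum, and the draws are shuffled. The immune-fraction range and the mapping to R0 below are hypothetical illustrations, not the paper's renewal-process model:

```python
import random

def latin_hypercube(n, lo, hi, rng):
    """One-dimensional LHS: one uniform draw from each of n equal strata,
    then shuffled so the ordering carries no information."""
    width = (hi - lo) / n
    draws = [lo + (i + rng.random()) * width for i in range(n)]
    rng.shuffle(draws)
    return draws

rng = random.Random(42)
# Hypothetical prior: initially immune fraction uniform on [0.1, 0.5].
immune = latin_hypercube(1000, 0.1, 0.5, rng)

# Toy propagation: scale a hypothetical effective reproduction number
# by the inverse susceptible fraction (illustrative only).
R_eff = 4.0
r0_samples = sorted(R_eff / (1.0 - p) for p in immune)
lo95, med, hi95 = r0_samples[25], r0_samples[500], r0_samples[974]
print(f"R0 median {med:.1f}, 95% interval ({lo95:.1f}, {hi95:.1f})")
```

Compared with plain Monte Carlo, LHS guarantees that every stratum of the immune-fraction range is represented exactly once, which stabilises the tails of the propagated R0 distribution for a given sample size.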