Forecasting eruption size: what we know, what we don't know
NASA Astrophysics Data System (ADS)
Papale, Paolo
2017-04-01
Any eruption forecast includes an evaluation of the expected size of the forthcoming eruption, usually expressed as the probability associated with given size classes. Such evaluation is mostly based on the previous volcanic history at the specific volcano, or it refers to a broader class of volcanoes constituting "analogues" of the one under consideration. In any case, use of knowledge from past eruptions implies considering the completeness of the reference catalogue and, most importantly, the existence of systematic biases in the catalogue that may affect probability estimates and translate into biased volcanic hazard forecasts. An analysis of existing catalogues, with major reference to the catalogue of the Smithsonian Global Volcanism Program, suggests that systematic biases largely dominate at global, regional and local scales: volcanic histories reconstructed at individual volcanoes, often used as a reference for volcanic hazard forecasts, are the result of systematic loss of information with time and poor sample representativeness. That situation strictly requires the use of techniques to complete existing catalogues, as well as careful consideration of the uncertainties deriving from inadequate knowledge and model-dependent data processing. A reconstructed global eruption size distribution, obtained by merging information from different existing catalogues, shows a mode in the VEI 1-2 range, <0.1% incidence of eruptions with VEI 7 or larger, and substantial uncertainties associated with individual VEI frequencies. Even larger uncertainties are expected to derive from application to individual volcanoes or classes of analogue volcanoes, suggesting large to very large uncertainties associated with volcanic hazard forecasts at virtually any individual volcano worldwide.
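As a rough illustration of how size-class probabilities and their sampling uncertainty can be estimated from a catalogue, the sketch below bootstraps VEI class frequencies from a purely hypothetical eruption list. The VEI values and their proportions are invented, not the Smithsonian GVP data, and no correction for the under-recording biases discussed above is applied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical eruption catalogue: one VEI value per catalogued eruption.
# (Invented proportions, NOT the Smithsonian GVP data; no correction for
# under-recording of old or small eruptions is applied here.)
classes = np.arange(8)
vei = rng.choice(classes, size=2000,
                 p=[0.08, 0.30, 0.32, 0.17, 0.08, 0.03, 0.015, 0.005])

# Bootstrap the catalogue to attach sampling uncertainty to each VEI class frequency.
n_boot = 5000
freq = np.empty((n_boot, classes.size))
for b in range(n_boot):
    resample = rng.choice(vei, size=vei.size, replace=True)
    freq[b] = [(resample == c).mean() for c in classes]

lo, med, hi = np.percentile(freq, [2.5, 50, 97.5], axis=0)
for c in classes:
    print(f"VEI {c}: {med[c]:6.2%}  (95% CI {lo[c]:.2%} - {hi[c]:.2%})")
```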
NASA Astrophysics Data System (ADS)
Xu, Rong; Liu, Yongsheng
2016-12-01
The Emeishan large igneous province (ELIP) is renowned for its world-class Ni-Cu-(PGE) deposits and its link with the Capitanian mass extinction. The ELIP is generally thought to be associated with a deep mantle plume; however, evidence for such a model has been challenged on geological, geophysical and geochemical grounds. In many large igneous province settings, olivine-melt equilibrium thermometry has been used to argue for or against the existence of plumes. However, this method involves large uncertainties, such as assumptions regarding melt compositions and crystallisation pressures. The Al-in-olivine thermometer avoids these uncertainties and is used here to estimate the temperatures of picrites in the ELIP. The calculated maximum temperature (1440 °C) is significantly (~250 °C) higher than the Al-in-olivine temperature estimated for average MORB, thus providing compelling evidence for the existence of thermal mantle plumes in the ELIP.
Uncertainties in climate data sets
NASA Technical Reports Server (NTRS)
Mcguirk, James P.
1992-01-01
Climate diagnostics are constructed from either analyzed fields or from observational data sets. Those that have been commonly used are normally considered ground truth. However, in most of these collections, errors and uncertainties exist which are generally ignored due to the consistency of usage over time. Examples of uncertainties and errors are described in NMC and ECMWF analyses and in satellite observational sets-OLR, TOVS, and SMMR. It is suggested that these errors can be large, systematic, and not negligible in climate analysis.
Shuang Ma; Jiang Jiang; Yuanyuan Huang; Zheng Shi; Rachel M. Wilson; Daniel Ricciuto; Stephen D. Sebestyen; Paul J. Hanson; Yiqi Luo
2017-01-01
Large uncertainties exist in predicting responses of wetland methane (CH4) fluxes to future climate change. However, sources of the uncertainty have not been clearly identified despite the fact that methane production and emission processes have been extensively explored. In this study, we took advantage of manual CH4 flux...
NASA Astrophysics Data System (ADS)
Brown, G.
2017-12-01
Sediment diversions have been proposed as a crucial component of the restoration of Coastal Louisiana. They are generally characterized as a means of creating land by mimicking natural crevasse-splay sub-delta processes. However, the criteria that are often promoted to optimize the performance of these diversions (i.e. large, sand-rich diversions into existing, degraded wetlands) are at odds with the natural processes that govern the development of crevasse-splay sub-deltas (typically sand-lean or sand-neutral diversions into open water). This is due in large part to the fact that these optimization criteria have been developed in the absence of consideration for the natural constraints associated with fundamental hydraulics: specifically, the conservation of mechanical energy. Although the implementation of the aforementioned optimization criteria have the potential to greatly increase the land-building capacity of a given diversion, the concomitant widespread inundation of the existing wetlands (an unavoidable consequence of diverting into a shallow, vegetated embayment), and the resultant stresses on existing wetland vegetation, have the potential to dramatically accelerate the loss of these existing wetlands. Hence, there are inherent uncertainties in the forecasted performance of sediment diversions that are designed according to the criteria mentioned above. This talk details the reasons for these uncertainties, using analytic and numerical model results, together with evidence from field observations and experiments. The likelihood that, in the foreseeable future, these uncertainties can be reduced, or even rationally bounded, is discussed.
Uncertainty in gridded CO2 emissions estimates
Hogue, Susannah; Marland, Eric; Andres, Robert J.; ...
2016-05-19
We are interested in the spatial distribution of fossil-fuel-related emissions of CO2 for both geochemical and geopolitical reasons, but it is important to understand the uncertainty that exists in spatially explicit emissions estimates. Working from one of the widely used gridded data sets of CO2 emissions, we examine the elements of uncertainty, focusing on gridded data for the United States at the scale of 1° latitude by 1° longitude. Uncertainty is introduced in the magnitude of total United States emissions, the magnitude and location of large point sources, the magnitude and distribution of non-point sources, and from the use of proxy data to characterize emissions. For the United States, we develop estimates of the contribution of each component of uncertainty. At 1° resolution, in most grid cells, the largest contribution to uncertainty comes from how well the distribution of the proxy (in this case population density) represents the distribution of emissions. In other grid cells, the magnitude and location of large point sources make the major contribution to uncertainty. Uncertainty in population density can be important where a large gradient in population density occurs near a grid cell boundary. Uncertainty is strongly scale-dependent, with uncertainty increasing as grid size decreases. In conclusion, uncertainty for our data set with 1° grid cells for the United States is typically on the order of ±150%, but this is perhaps not excessive in a data set where emissions per grid cell vary over 8 orders of magnitude.
Uncertainties in derived temperature-height profiles
NASA Technical Reports Server (NTRS)
Minzner, R. A.
1974-01-01
Nomographs were developed for relating uncertainty in temperature T to uncertainty in the observed height profiles of both pressure p and density ρ. The relative uncertainty δT/T is seen to depend not only upon the relative uncertainties δp/p or δρ/ρ, and to a small extent upon the value of T or H, but primarily upon the sampling-height increment Δh, the height increment between successive observations of p or ρ. For a fixed value of δp/p, the value of δT/T varies inversely with Δh. No limit exists in the fineness of usable height resolution of T which may be derived from densities, while a fine height resolution in pressure-height data leads to temperatures with unacceptably large uncertainties.
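One hedged way to see the stated inverse dependence on the sampling-height increment is the hypsometric relation for a layer bounded by two pressure observations, with a first-order error propagation; here g is gravity, R the specific gas constant, and the order-unity factor from combining the two pressure errors is ignored. This is a sketch consistent with the abstract, not the nomographs themselves.

```latex
% Mean layer temperature between two pressure observations (hypsometric relation),
% and first-order propagation of a relative pressure uncertainty \delta p / p:
\bar{T} = \frac{g\,\Delta h}{R\,\ln(p_1/p_2)},
\qquad
\frac{\delta T}{T} \approx \frac{1}{\ln(p_1/p_2)}\,\frac{\delta p}{p}
                   \approx \frac{R\,\bar{T}}{g\,\Delta h}\,\frac{\delta p}{p}.
```

For a fixed δp/p the second expression grows as Δh shrinks, which matches the statement that fine height resolution in pressure data yields temperatures with unacceptably large uncertainties.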
Gething, Peter W; Patil, Anand P; Hay, Simon I
2010-04-01
Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhances their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty--the fidelity of predictions at each mapped pixel--but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new efficient approximating algorithm that can generate for the first time the necessary joint simulation of prevalence values across the very large prediction spaces needed for global scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.
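The aggregation step described above reduces, in essence, to summarising joint per-pixel simulations. The minimal sketch below assumes such joint posterior draws of prevalence are already available (here they are fabricated at random, with invented pixel populations and an arbitrary 40% threshold) and computes a regional mean and a population-at-risk distribution from them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assume joint posterior simulations of prevalence are already available from the MBG model:
# shape (n_sims, n_pixels). Fabricated here purely for illustration; real joint draws
# would be spatially correlated, which is the point of the joint simulation algorithm.
n_sims, n_pixels = 1000, 5000
prev_sims = rng.beta(2.0, 8.0, size=(n_sims, n_pixels))
population = rng.integers(100, 10_000, size=n_pixels)     # people per pixel (invented)

# Regional, population-weighted mean prevalence: one value per joint simulation.
regional_mean = (prev_sims * population).sum(axis=1) / population.sum()

# Population living above a policy-relevant prevalence threshold, per simulation.
threshold = 0.40
pop_at_risk = (population * (prev_sims > threshold)).sum(axis=1)

print("regional mean prevalence: %.3f (95%% CI %.3f-%.3f)"
      % (regional_mean.mean(), *np.percentile(regional_mean, [2.5, 97.5])))
print("population at risk: %.0f (95%% CI %.0f-%.0f)"
      % (pop_at_risk.mean(), *np.percentile(pop_at_risk, [2.5, 97.5])))
```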
NASA Astrophysics Data System (ADS)
Gong, L.
2013-12-01
Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. These models estimate discharge with large uncertainties, owing to the complex interaction between climate and hydrology, the limited quality and availability of data, as well as model uncertainties. A new, purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface conditions but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well. Those multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based; therefore it does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied to both un-gauged basins and un-gauged periods, with uncertainty estimation.
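A loose, purely illustrative sketch of the scale-extrapolation idea follows: specific discharge from alternative sets of small gauged sub-basins (synthetic areas and runoff values, roughly 3% coverage) is scaled to the full basin area, and the spread across sets stands in for the multiple-set uncertainty estimate. The paper's actual sub-basin selection criteria are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented sub-basin data: area (km^2) and annual mean discharge (m^3/s).
n_sub = 200
sub_area = rng.uniform(200.0, 2000.0, size=n_sub)
sub_q = sub_area * rng.normal(0.012, 0.002, size=n_sub)   # ~12 L s^-1 km^-2 specific runoff
basin_area = 1.6e6                                        # km^2, order of the Baltic Sea basin

# Build many alternative sub-basin sets covering ~3% of the basin area and
# extrapolate each set's specific discharge to the whole basin.
estimates = []
for _ in range(200):
    order = rng.permutation(n_sub)
    keep = order[np.cumsum(sub_area[order]) <= 0.03 * basin_area]
    specific_q = sub_q[keep].sum() / sub_area[keep].sum()  # discharge per unit area
    estimates.append(specific_q * basin_area)

estimates = np.array(estimates)
print("basin discharge: %.0f m3/s, spread across sets: +/- %.1f%%"
      % (estimates.mean(), 100.0 * estimates.std() / estimates.mean()))
```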
Constraints on spin-dependent parton distributions at large x from global QCD analysis
Jimenez-Delgado, P.; Avakian, H.; Melnitchouk, W.
2014-09-28
This study investigates the behavior of spin-dependent parton distribution functions (PDFs) at large parton momentum fractions x in the context of global QCD analysis. We explore the constraints from existing deep-inelastic scattering data and from theoretical expectations for the leading x → 1 behavior based on hard gluon exchange in perturbative QCD. Systematic uncertainties from the dependence of the PDFs on the choice of parametrization are studied by considering functional forms motivated by orbital angular momentum arguments. Finally, we quantify the reduction in the PDF uncertainties that may be expected from future high-x data from Jefferson Lab at 12 GeV.
High voltage system: Plasma interaction summary
NASA Technical Reports Server (NTRS)
Stevens, N. John
1986-01-01
The possible interactions that could exist between a high voltage system and the space plasma environment are reviewed. A solar array is used as an example of such a system. The emphasis in this review is on the discrepancies that exist in this technology in both flight and ground experiment data. It has been found that, in ground testing, there are facility effects, cell size effects and area scaling uncertainties. For space applications there are area scaling and discharge concerns for an array as well as the influence of the large space structures on the collection process. There are still considerable uncertainties in the high voltage-space plasma interaction technology even after several years of effort.
A Practical Approach to Address Uncertainty in Stakeholder Deliberations.
Gregory, Robin; Keeney, Ralph L
2017-03-01
This article addresses the difficulties of incorporating uncertainty about consequence estimates as part of stakeholder deliberations involving multiple alternatives. Although every prediction of future consequences necessarily involves uncertainty, a large gap exists between common practices for addressing uncertainty in stakeholder deliberations and the procedures of prescriptive decision-aiding models advanced by risk and decision analysts. We review the treatment of uncertainty at four main phases of the deliberative process: with experts asked to describe possible consequences of competing alternatives, with stakeholders who function both as individuals and as members of coalitions, with the stakeholder committee composed of all stakeholders, and with decision-makers. We develop and recommend a model that uses certainty equivalents as a theoretically robust and practical approach for helping diverse stakeholders to incorporate uncertainties when evaluating multiple-objective alternatives as part of public policy decisions. © 2017 Society for Risk Analysis.
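As a hedged illustration of the certainty-equivalent idea, the sketch below collapses a discrete gamble over net benefits into a single number under an exponential utility with a constant risk tolerance; the outcomes, probabilities and risk tolerance are invented, and the authors' elicitation procedure may differ.

```python
import numpy as np

def certainty_equivalent(outcomes, probs, risk_tolerance):
    """Certainty equivalent of a discrete gamble under exponential utility
    u(x) = 1 - exp(-x / rho), where rho is the risk tolerance."""
    outcomes = np.asarray(outcomes, dtype=float)
    probs = np.asarray(probs, dtype=float)
    expected_u = np.sum(probs * (1.0 - np.exp(-outcomes / risk_tolerance)))
    return -risk_tolerance * np.log(1.0 - expected_u)

# Example: an alternative whose net benefit (in $M) is uncertain.
outcomes = [5.0, 20.0, 60.0]
probs = [0.5, 0.3, 0.2]
print("expected value      :", np.dot(probs, outcomes))
print("certainty equivalent:", round(certainty_equivalent(outcomes, probs, risk_tolerance=30.0), 2))
```

The certainty equivalent falls below the expected value for a risk-averse stakeholder, which is exactly the information lost when deliberations use point estimates alone.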
Limitations and tradeoffs in synchronization of large-scale networks with uncertain links
Diwadkar, Amit; Vaidya, Umesh
2016-01-01
The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994
NASA Astrophysics Data System (ADS)
Watson, Cameron S.; Carrivick, Jonathan; Quincey, Duncan
2015-10-01
Modelling glacial lake outburst floods (GLOFs), or 'jökulhlaups', necessarily involves the propagation of large and often stochastic uncertainties throughout the source-to-impact process chain. Since flood routing is primarily a function of underlying topography, communication of digital elevation model (DEM) uncertainty should accompany such modelling efforts. Here, a new stochastic first-pass assessment technique was evaluated against an existing GIS-based model and an existing 1D hydrodynamic model, using three DEMs of different spatial resolution. The analysis revealed the effect of DEM uncertainty and model choice on several flood parameters and on the prediction of socio-economic impacts. Our new model, which we call MC-LCP (Monte Carlo Least Cost Path) and which is distributed in the supplementary information, demonstrated enhanced 'stability' when compared to the two existing methods, and this 'stability' was independent of DEM choice. The MC-LCP model outputs an uncertainty continuum within its extent, from which relative socio-economic risk can be evaluated. In a comparison of all DEM and model combinations, results derived from the Shuttle Radar Topography Mission (SRTM) DEM exhibited fewer artefacts than those derived from the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), and were comparable to those derived from a finer-resolution Advanced Land Observing Satellite Panchromatic Remote-sensing Instrument for Stereo Mapping (ALOS PRISM) DEM. Overall, we contend that the variability we find between flood routing model results suggests that consideration of DEM uncertainty and pre-processing methods is important when assessing flow routing and when evaluating the potential socio-economic implications of a GLOF event. Incorporation of a stochastic variable provides an illustration of uncertainty that is important when modelling and communicating assessments of an inherently complex process.
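The sketch below is an independent toy version of the Monte Carlo least-cost-path idea, not the MC-LCP code distributed with the paper: a synthetic valley DEM is perturbed with an assumed vertical error, a least-cost path is traced across each realisation, and cell visitation frequencies approximate a flood-corridor likelihood map.

```python
import heapq
import numpy as np

rng = np.random.default_rng(3)

def least_cost_path(cost, start, end):
    """4-connected Dijkstra; returns the set of cells on the cheapest start->end path."""
    ny, nx = cost.shape
    dist = np.full((ny, nx), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == end:
            break
        if d > dist[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and d + cost[ni, nj] < dist[ni, nj]:
                dist[ni, nj] = d + cost[ni, nj]
                prev[(ni, nj)] = (i, j)
                heapq.heappush(heap, (dist[ni, nj], (ni, nj)))
    path, node = {end}, end
    while node != start:
        node = prev[node]
        path.add(node)
    return path

# Synthetic valley DEM (m); a real application would load SRTM/ASTER/ALOS PRISM data.
ny, nx = 40, 60
y, x = np.mgrid[0:ny, 0:nx]
dem = 0.02 * (y - ny / 2.0) ** 2 + 0.05 * x      # valley floor sloping gently in +x

vertical_error = 6.0                             # assumed DEM vertical RMSE (m)
n_real = 80
visits = np.zeros_like(dem)
for _ in range(n_real):
    realization = dem + rng.normal(0.0, vertical_error, size=dem.shape)
    cost = realization - realization.min() + 1.0 # lower terrain = cheaper flood route
    for cell in least_cost_path(cost, (ny // 2, 0), (ny // 2, nx - 1)):
        visits[cell] += 1

corridor_freq = visits / n_real                  # relative flood-corridor likelihood per cell
print("cells crossed in more than half of the realisations:", int((corridor_freq > 0.5).sum()))
```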
Sources of Uncertainty in Predicting Land Surface Fluxes Using Diverse Data and Models
NASA Technical Reports Server (NTRS)
Dungan, Jennifer L.; Wang, Weile; Michaelis, Andrew; Votava, Petr; Nemani, Ramakrishma
2010-01-01
In the domain of predicting land surface fluxes, models are used to bring data from large observation networks and satellite remote sensing together to make predictions about present and future states of the Earth. Characterizing the uncertainty about such predictions is a complex process and one that is not yet fully understood. Uncertainty exists about initialization, measurement and interpolation of input variables; model parameters; model structure; and mixed spatial and temporal supports. Multiple models or structures often exist to describe the same processes. Uncertainty about structure is currently addressed by running an ensemble of different models and examining the distribution of model outputs. To illustrate structural uncertainty, a multi-model ensemble experiment we have been conducting using the Terrestrial Observation and Prediction System (TOPS) will be discussed. TOPS uses public versions of process-based ecosystem models that use satellite-derived inputs along with surface climate data and land surface characterization to produce predictions of ecosystem fluxes including gross and net primary production and net ecosystem exchange. Using the TOPS framework, we have explored the uncertainty arising from the application of models with different assumptions, structures, parameters, and variable definitions. With a small number of models, this only begins to capture the range of possible spatial fields of ecosystem fluxes. Few attempts have been made to systematically address the components of uncertainty in such a framework. We discuss the characterization of uncertainty for this approach including both quantifiable and poorly known aspects.
Large storage operations under climate change: expanding uncertainties and evolving tradeoffs
NASA Astrophysics Data System (ADS)
Giuliani, Matteo; Anghileri, Daniela; Castelletti, Andrea; Vu, Phuong Nam; Soncini-Sessa, Rodolfo
2016-03-01
In a changing climate and society, large storage systems can play a key role in securing water, energy, and food, and in rebalancing their cross-dependencies. In this letter, we study the role of large storage operations as flexible means of adaptation to climate change. In particular, we explore the impacts of different climate projections for different future time horizons on the multi-purpose operations of the existing system of large dams in the Red River basin (China-Laos-Vietnam). We identify the main vulnerabilities of current system operations, understand the risk of failure across sectors by exploring the evolution of the system tradeoffs, quantify how the uncertainty associated with climate scenarios is expanded by the storage operations, and assess the expected costs if no adaptation is implemented. Results show that, depending on the climate scenario and the time horizon considered, the existing operations are predicted to change on average from -7 to +5% in hydropower production, +35 to +520% in flood damages, and +15 to +160% in water supply deficit. These negative impacts can be partially mitigated by adapting the existing operations to the future climate, reducing the loss of hydropower to 5% and potentially saving around 34.4 million US$ per year at the national scale. Since the Red River is paradigmatic of many river basins across south east Asia, where new large dams are under construction or are planned to support fast growing economies, our results can support policy makers in prioritizing responses and adaptation strategies to the changing climate.
NASA Astrophysics Data System (ADS)
Suess, Daniel; Rudnicki, Łukasz; Maciel, Thiago O.; Gross, David
2017-09-01
The outcomes of quantum mechanical measurements are inherently random. It is therefore necessary to develop stringent methods for quantifying the degree of statistical uncertainty about the results of quantum experiments. For the particularly relevant task of quantum state tomography, it has been shown that a significant reduction in uncertainty can be achieved by taking the positivity of quantum states into account. However—the large number of partial results and heuristics notwithstanding—no efficient general algorithm is known that produces an optimal uncertainty region from experimental data, while making use of the prior constraint of positivity. Here, we provide a precise formulation of this problem and show that the general case is NP-hard. Our result leaves room for the existence of efficient approximate solutions, and therefore does not in itself imply that the practical task of quantum uncertainty quantification is intractable. However, it does show that there exists a non-trivial trade-off between optimality and computational efficiency for error regions. We prove two versions of the result: one for frequentist and one for Bayesian statistics.
Unconventional nozzle tradeoff study. [space tug propulsion
NASA Technical Reports Server (NTRS)
Obrien, C. J.
1979-01-01
Plug cluster engine design, performance, weight, envelope, operational characteristics, development cost, and payload capability were evaluated, and comparisons were made with other space tug engine candidates using oxygen/hydrogen propellants. Parametric performance data were generated for existing developed or high technology thrust chambers clustered around a plug nozzle of very large diameter. The uncertainties in the performance prediction of plug cluster engines with large gaps between the modules (thrust chambers) were evaluated. The major uncertainty involves the aerodynamics of the flow from discrete nozzles and the failure of this flow to achieve the pressure ratio corresponding to the defined area ratio for a plug cluster. This uncertainty was reduced through a cluster design that consists of a plug contour formed from a cluster of high area ratio bell nozzles that have been scarfed. Lightweight, high area ratio bell nozzles were achieved through the use of AGCarb (carbon-carbon cloth) nozzle extensions.
How uncertain are climate model projections of water availability indicators across the Middle East?
Hemming, Debbie; Buontempo, Carlo; Burke, Eleanor; Collins, Mat; Kaye, Neil
2010-11-28
The projection of robust regional climate changes over the next 50 years presents a considerable challenge for the current generation of climate models. Water cycle changes are particularly difficult to model in this area because major uncertainties exist in the representation of processes such as large-scale and convective rainfall and their feedback with surface conditions. We present climate model projections and uncertainties in water availability indicators (precipitation, run-off and drought index) for the 1961-1990 and 2021-2050 periods. Ensembles from two global climate models (GCMs) and one regional climate model (RCM) are used to examine different elements of uncertainty. Although all three ensembles capture the general distribution of observed annual precipitation across the Middle East, the RCM is consistently wetter than observations, especially over the mountainous areas. All future projections show decreasing precipitation (ensemble median between -5 and -25%) in coastal Turkey and parts of Lebanon, Syria and Israel, with consistent run-off and drought index changes. The Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) GCM ensemble exhibits drying across the north of the region, whereas the Met Office Hadley Centre Quantifying Uncertainties in Model Projections-Atmospheric (QUMP-A) GCM and RCM ensembles show slight drying in the north and significant wetting in the south. RCM projections also show greater sensitivity (both wetter and drier) and a wider uncertainty range than QUMP-A. The nature of these uncertainties suggests that both large-scale circulation patterns, which influence region-wide drying/wetting patterns, and regional-scale processes, which affect localized water availability, are important sources of uncertainty in these projections. To reduce the large uncertainties in water availability projections, it is suggested that efforts would be well placed to focus on the understanding and modelling of both large-scale processes and their teleconnections with Middle East climate, and the localized processes involved in orographic precipitation.
Chen, Jianjun; Frey, H Christopher
2004-12-15
Methods for optimization of process technologies considering the distinction between variability and uncertainty are developed and applied to case studies of NOx control for Integrated Gasification Combined Cycle systems. Existing methods of stochastic optimization (SO) and stochastic programming (SP) are demonstrated. A comparison of SO and SP results provides the value of collecting additional information to reduce uncertainty. For example, an expected annual benefit of 240,000 dollars is estimated if uncertainty can be reduced before a final design is chosen. SO and SP are typically applied to uncertainty. However, when applied to variability, the benefit of dynamic process control is obtained. For example, an annual savings of 1 million dollars could be achieved if the system is adjusted to changes in process conditions. When variability and uncertainty are treated distinctively, a coupled stochastic optimization and programming method and a two-dimensional stochastic programming method are demonstrated via a case study. For the case study, the mean annual benefit of dynamic process control is estimated to be 700,000 dollars, with a 95% confidence range of 500,000 dollars to 940,000 dollars. These methods are expected to be of greatest utility for problems involving a large commitment of resources, for which small differences in designs can produce large cost savings.
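A minimal sketch of the value-of-information comparison mentioned above: choosing one design before the uncertain scenario is known (the stochastic-optimization view) versus being allowed to choose after it is known gives the expected value of perfect information. The designs, scenario probabilities and costs below are invented, not the IGCC case-study numbers.

```python
import numpy as np

# Scenarios for an uncertain input (e.g., a future process condition) and their probabilities.
probs = np.array([0.3, 0.5, 0.2])

# Annualised cost ($k/yr) of each candidate design under each scenario (rows: designs).
cost = np.array([
    [900.0, 1100.0, 1500.0],   # design A
    [1000.0, 1000.0, 1200.0],  # design B
    [1300.0, 1050.0, 1000.0],  # design C
])

# "Here-and-now": pick one design before uncertainty is resolved (stochastic optimization).
here_and_now = (cost @ probs).min()

# "Wait-and-see": the best design could be chosen after the scenario is known.
wait_and_see = (cost.min(axis=0) * probs).sum()

evpi = here_and_now - wait_and_see   # expected value of perfect information
print("here-and-now cost : %.0f $k/yr" % here_and_now)
print("wait-and-see cost : %.0f $k/yr" % wait_and_see)
print("EVPI (value of reducing uncertainty first): %.0f $k/yr" % evpi)
```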
First and Higher Order Effects on Zero Order Radiative Transfer Model
NASA Astrophysics Data System (ADS)
Neelam, M.; Mohanty, B.
2014-12-01
Microwave radiative transfer models are valuable tools for understanding complex land surface interactions. Past literature has largely focused on local sensitivity analysis for factor prioritization, ignoring the interactions between the variables and the uncertainties around them. Since land surface interactions are largely nonlinear, uncertainties, heterogeneities and interactions always exist, and it is important to quantify them to draw accurate conclusions. In this effort, we used global sensitivity analysis (GSA) to address the issues of variable uncertainty, higher order interactions, factor prioritization and factor fixing for the zero-order radiative transfer (ZRT) model. With the to-be-launched Soil Moisture Active Passive (SMAP) mission of NASA, it is very important to have a complete understanding of ZRT for soil moisture retrieval to direct future research and cal/val field campaigns. This is a first attempt to use a GSA technique to quantify first order and higher order effects on brightness temperature from the ZRT model. Our analyses reflect conditions observed during the growing agricultural season for corn and soybeans in two different regions: Iowa, USA, and Winnipeg, Canada. We found that for corn fields in Iowa there exist significant second order interactions between soil moisture, surface roughness parameters (RMS height and correlation length) and vegetation parameters (vegetation water content, structure and scattering albedo), whereas in Winnipeg, second order interactions are mainly due to soil moisture and vegetation parameters. For soybean fields in both Iowa and Winnipeg, we found significant interactions only between soil moisture and surface roughness parameters.
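For readers unfamiliar with variance-based GSA, the sketch below estimates first-order Sobol indices by brute-force conditional averaging for a simple stand-in function with an explicit soil-moisture and roughness interaction; it is not the ZRT model, and the coefficients and input ranges are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(sm, rough, vwc):
    """Stand-in for a zero-order radiative transfer output (NOT the real ZRT model):
    brightness-temperature-like quantity with an explicit sm x roughness interaction."""
    return 280.0 - 120.0 * sm + 15.0 * rough + 8.0 * vwc + 300.0 * sm * rough

def sample(n):
    """Independent uniform inputs over arbitrary, illustrative ranges."""
    return {"sm": rng.uniform(0.05, 0.45, n),       # soil moisture (m3/m3)
            "rough": rng.uniform(0.2, 1.2, n),      # roughness parameter
            "vwc": rng.uniform(0.0, 5.0, n)}        # vegetation water content (kg/m2)

def first_order_index(which, n_outer=400, n_inner=400):
    """Brute-force Sobol index S_i = Var_xi( E[Y | x_i] ) / Var(Y)."""
    cond_means = np.empty(n_outer)
    for k, xi in enumerate(sample(n_outer)[which]):
        inner = sample(n_inner)
        inner[which] = np.full(n_inner, xi)         # freeze the factor of interest
        cond_means[k] = model(**inner).mean()
    return cond_means.var() / model(**sample(n_outer * n_inner)).var()

for factor in ("sm", "rough", "vwc"):
    print(f"S_{factor} = {first_order_index(factor):.2f}")
# First-order indices summing to less than 1 indicate interaction (higher-order) effects.
```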
Estimation of sampling error uncertainties in observed surface air temperature change in China
NASA Astrophysics Data System (ADS)
Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun
2017-08-01
This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with maximum values exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K². In general, negative temperature anomalies were present in each month prior to the 1980s, and a warming began thereafter, which accelerated in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.
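A minimal sketch of estimating a monthly trend and its standard error by ordinary least squares on a synthetic anomaly series is given below; the paper's treatment of spatial sampling error is more elaborate and is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic February-mean SAT anomalies (K), 1960-2010, with an imposed warming trend.
years = np.arange(1960, 2011)
true_trend = 0.05                                  # K per year (= 0.5 K per decade)
anom = true_trend * (years - years.mean()) + rng.normal(0.0, 0.8, size=years.size)

# Ordinary least-squares trend and its standard error.
X = np.column_stack([np.ones_like(years, dtype=float), years - years.mean()])
beta, res, *_ = np.linalg.lstsq(X, anom, rcond=None)
resid = anom - X @ beta
dof = years.size - 2
sigma2 = (resid @ resid) / dof
cov = sigma2 * np.linalg.inv(X.T @ X)
trend, trend_se = beta[1], np.sqrt(cov[1, 1])

print("trend: %.2f +/- %.2f K per decade (1 s.e.)" % (10 * trend, 10 * trend_se))
```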
Monitoring, reporting and verifying emissions in the climate economy
NASA Astrophysics Data System (ADS)
Bellassen, Valentin; Stephan, Nicolas; Afriat, Marion; Alberola, Emilie; Barker, Alexandra; Chang, Jean-Pierre; Chiquet, Caspar; Cochran, Ian; Deheza, Mariana; Dimopoulos, Christopher; Foucherot, Claudine; Jacquier, Guillaume; Morel, Romain; Robinson, Roderick; Shishlov, Igor
2015-04-01
The monitoring, reporting and verification (MRV) of greenhouse-gas emissions is the cornerstone of carbon pricing and management mechanisms. Here we consider peer-reviewed articles and 'grey literature' related to existing MRV requirements and their costs. A substantial part of the literature is the regulatory texts of the 15 most important carbon pricing and management mechanisms currently implemented. Based on a comparison of key criteria such as the scope, cost, uncertainty and flexibility of procedures, we conclude that conventional wisdom on MRV is not often promoted in existing carbon pricing mechanisms. Quantification of emissions uncertainty and incentives to reduce this uncertainty are usually only partially applied, if at all. Further, the time and resources spent on small sources of emissions would be expected to be limited. Although provisions aiming at an effort proportionate to the amount of emissions at stake -- 'materiality' -- are widespread, they are largely outweighed by economies of scale: in all schemes, MRV costs per tonne are primarily driven by the size of the source.
Measurement of Emissions from Produced Water Ponds: Upstream Oil and Gas Study #1; Final Report
Significant uncertainty exists regarding air pollutant emissions from upstream oil and gas production operations. Oil and gas operations present unique and challenging emission testing issues due to the large variety and quantity of potential emissions sources. This report summ...
Survey of Existing Uncertainty Quantification Capabilities for Army Relevant Problems
2017-11-27
ARL-TR-8218, November 2017. US Army Research Laboratory technical report: Survey of Existing Uncertainty Quantification Capabilities for Army-Relevant Problems, by James J Ramsey.
Stellar Evolution and Modelling Stars
NASA Astrophysics Data System (ADS)
Silva Aguirre, Víctor
In this chapter I give an overall description of the structure and evolution of stars of different masses, and review the main ingredients included in state-of-the-art calculations aiming at reproducing observational features. I give particular emphasis to processes where large uncertainties still exist as they have strong impact on stellar properties derived from large compilations of tracks and isochrones, and are therefore of fundamental importance in many fields of astrophysics.
NASA Astrophysics Data System (ADS)
Perrault, Matthieu; Gueguen, Philippe; Aldea, Alexandru; Demetriu, Sorin
2013-12-01
The lack of knowledge involved in modelling existing buildings leads to significant variability in fragility curves for single or grouped existing buildings. This study aims to investigate the uncertainties of fragility curves, with special consideration of the single-building sigma. Experimental data and simplified models are applied to the BRD tower in Bucharest, Romania, an RC building with permanent instrumentation. A three-step methodology is applied: (1) adjustment of a linear MDOF model to experimental modal analysis using a Timoshenko beam model and based on Anderson's criteria, (2) computation of the structure's response to a large set of accelerograms simulated with the SIMQKE software, considering twelve ground motion parameters as intensity measures (IM), and (3) construction of the fragility curves by comparing numerical interstory drift with the threshold criteria provided by the Hazus methodology for the slight damage state. By introducing experimental data into the model, the uncertainty is reduced to 0.02 when Sd(f1) is taken as the seismic intensity measure, and the uncertainty related to the model is assessed at 0.03. These values must be compared with the total uncertainty value of around 0.7 provided by the Hazus methodology.
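Step (3) amounts to fitting a lognormal fragility curve to binary drift-exceedance outcomes; the sketch below does this by a coarse maximum-likelihood grid search on synthetic data (the IM values, damage threshold and "true" parameters are invented, not the BRD-tower results).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Synthetic data: intensity measure for each simulated record (e.g., Sd(f1), in cm) and
# whether the computed interstory drift exceeded the "slight damage" threshold.
im = rng.lognormal(mean=np.log(2.0), sigma=0.6, size=500)
true_theta, true_beta = 2.5, 0.35                        # invented "true" median / dispersion
exceed = rng.random(500) < norm.cdf(np.log(im / true_theta) / true_beta)

# Maximum-likelihood fit of P(damage | IM) = Phi( ln(IM / theta) / beta ) by grid search.
thetas = np.linspace(1.0, 5.0, 120)
betas = np.linspace(0.05, 1.0, 120)
best = (np.nan, np.nan, -np.inf)
for th in thetas:
    for be in betas:
        p = norm.cdf(np.log(im / th) / be).clip(1e-9, 1.0 - 1e-9)
        loglik = np.sum(np.where(exceed, np.log(p), np.log(1.0 - p)))
        if loglik > best[2]:
            best = (th, be, loglik)

print("fitted fragility: median theta = %.2f cm, dispersion beta = %.2f" % best[:2])
```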
Forecasting the Cumulative Impacts of Dams on the Mekong Delta: Certainties and Uncertainties
NASA Astrophysics Data System (ADS)
Kondolf, G. M.; Rubin, Z.; Schmitt, R. J. P.
2016-12-01
The Mekong River basin is undergoing rapid hydroelectric development, with 7 large mainstem dams on the upper Mekong (Lancang) River in China and 133 dams planned for the Lower Mekong River basin (Laos, Cambodia, Thailand, Vietnam), 11 of which are on the mainstem. Prior analyses have shown that, if all these dams were built as initially proposed, they would trap 96% of the natural sediment load to the Mekong Delta. Such a reduction in sediment supply would compromise the sustainability of the delta itself, but there are many uncertainties in the timing and pattern of land loss. The river will first erode in-channel sediment deposits, partly compensating for upstream sediment trapping until these deposits are exhausted. Other complicating factors include basin-wide accelerated land-use change, road construction, instream sand mining, dyking-off of floodplains, a changing climate, accelerated subsidence from groundwater extraction, and sea level rise. It is certain that the Mekong Delta will undergo large changes in the coming decades, changes that will threaten its very existence. However, the multiplicity of compounding drivers and the lack of good data lead to large uncertainties in forecasting changes in the sediment balance at the scale of a very large network. We quantify uncertainties in available data and consider changes due to additional, poorly quantified drivers (e.g., road construction), putting these drivers in perspective with the overall sediment budget. We developed a set of most-likely scenarios and their implications for the delta's future. Uncertainties are large, but there are certainties about the delta's future. If its sediment supply is nearly completely cut off (as would be the case with 'business-as-usual' ongoing dam construction and sediment extraction), the Delta is certainly doomed to disappear in the face of rising seas, subsidence, and coastal erosion. The uncertainty is only when and how precisely the loss will progress.
Robust Detection of Examinees with Aberrant Answer Changes
ERIC Educational Resources Information Center
Belov, Dmitry I.
2015-01-01
The statistical analysis of answer changes (ACs) has uncovered multiple testing irregularities on large-scale assessments and is now routinely performed at testing organizations. However, AC data has an uncertainty caused by technological or human factors. Therefore, existing statistics (e.g., number of wrong-to-right ACs) used to detect examinees…
Small pollutant concentration gradients between levels above a plant canopy result in large uncertainties in estimated air–surface exchange fluxes when using existing micrometeorological gradient methods, including the aerodynamic gradient method (AGM) and the modified Bowen rati...
NASA Astrophysics Data System (ADS)
Dittes, Beatrice; Kaiser, Maria; Špačková, Olga; Rieger, Wolfgang; Disse, Markus; Straub, Daniel
2018-05-01
Planning authorities are faced with a range of questions when planning flood protection measures: is the existing protection adequate for current and future demands or should it be extended? How will flood patterns change in the future? How should the uncertainty pertaining to this influence the planning decision, e.g., for delaying planning or including a safety margin? Is it sufficient to follow a protection criterion (e.g., to protect from the 100-year flood) or should the planning be conducted in a risk-based way? How important is it for flood protection planning to accurately estimate flood frequency (changes), costs and damage? These are questions that we address for a medium-sized pre-alpine catchment in southern Germany, using a sequential Bayesian decision making framework that quantitatively addresses the full spectrum of uncertainty. We evaluate different flood protection systems considered by local agencies in a test study catchment. Despite large uncertainties in damage, cost and climate, the recommendation is robust for the most conservative approach. This demonstrates the feasibility of making robust decisions under large uncertainty. Furthermore, by comparison to a previous study, it highlights the benefits of risk-based planning over the planning of flood protection to a prescribed return period.
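As a hedged, much-simplified illustration of comparing protection alternatives under climate uncertainty, the sketch below ranks options by up-front cost plus discounted expected damage across equally weighted scenarios; all costs, return periods, scenario factors and the discount rate are invented, and the paper's sequential Bayesian framework additionally values the option to adapt later.

```python
import numpy as np

# Candidate flood protection alternatives: up-front cost (M EUR) and protected return period (yr).
alternatives = {
    "keep existing (HQ50)":  {"cost": 0.0,  "protect_T": 50},
    "upgrade to HQ100":      {"cost": 12.0, "protect_T": 100},
    "HQ100 + safety margin": {"cost": 18.0, "protect_T": 150},
}

# Climate scenarios: multiplier on future flood exceedance frequency, equally weighted here.
climate_factor = np.array([0.9, 1.2, 1.6])
scenario_prob = np.array([1 / 3, 1 / 3, 1 / 3])

damage_per_event = 40.0    # expected damage (M EUR) when the protection level is exceeded
horizon_years = 50
discount_sum = np.sum(1.0 / 1.03 ** np.arange(1, horizon_years + 1))   # 3% discount rate

for name, alt in alternatives.items():
    annual_exceed_prob = climate_factor / alt["protect_T"]             # one value per scenario
    expected_damage = scenario_prob @ (annual_exceed_prob * damage_per_event * discount_sum)
    total = alt["cost"] + expected_damage
    print(f"{name:22s} expected total cost: {total:5.1f} M EUR")
```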
Influence of internal variability on population exposure to hydroclimatic changes
NASA Astrophysics Data System (ADS)
Mankin, Justin S.; Viviroli, Daniel; Mekonnen, Mesfin M.; Hoekstra, Arjen Y.; Horton, Radley M.; E Smerdon, Jason; Diffenbaugh, Noah S.
2017-04-01
Future freshwater supply, human water demand, and people's exposure to water stress are subject to multiple sources of uncertainty, including unknown future pathways of fossil fuel and water consumption, and 'irreducible' uncertainty arising from internal climate system variability. Such internal variability can conceal forced hydroclimatic changes on multi-decadal timescales and near-continental spatial scales. Using three projections of population growth, a large ensemble from a single Earth system model, and assuming stationary per capita water consumption, we quantify the likelihoods of future population exposure to increased hydroclimatic deficits, which we define as the average duration and magnitude by which evapotranspiration exceeds precipitation in a basin. We calculate that by 2060, ∼31%-35% of the global population will be exposed to >50% probability of hydroclimatic deficit increases that exceed existing hydrological storage, with up to 9% of people exposed to >90% probability. However, internal variability, which is an irreducible uncertainty in climate model predictions that is under-sampled in water resource projections, creates substantial uncertainty in predicted exposure: ∼86%-91% of people will reside where irreducible uncertainty spans the potential for both increases and decreases in sub-annual water deficits. In one population scenario, changes in exposure to large hydroclimate deficits vary from -3% to +6% of global population, a range arising entirely from internal variability. The uncertainty in risk arising from irreducible uncertainty in the precise pattern of hydroclimatic change, which is typically conflated with other uncertainties in projections, is critical for climate risk management that seeks to optimize adaptations that are robust to the full set of potential real-world outcomes.
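A sketch of the exposure bookkeeping implied above: for each basin, the fraction of ensemble members whose projected deficit increase exceeds existing storage gives an exceedance probability, and population is summed over basins above a probability cut-off. All quantities below are synthetic stand-ins, not Earth system model output.

```python
import numpy as np

rng = np.random.default_rng(7)

n_members, n_basins = 40, 500                  # initial-condition ensemble members x basins
population = rng.integers(100_000, 5_000_000, size=n_basins).astype(float)
storage = rng.uniform(5.0, 60.0, size=n_basins)          # usable storage (mm equivalent)

# Projected change in sub-annual hydroclimatic deficit (mm): a forced signal shared by all
# members plus internal variability that differs between members. Values are synthetic.
forced = rng.normal(10.0, 15.0, size=n_basins)
internal = rng.normal(0.0, 20.0, size=(n_members, n_basins))
deficit_change = forced + internal

# Probability (member fraction) that the deficit increase exceeds existing storage.
p_exceed = (deficit_change > storage).mean(axis=0)
for cutoff in (0.5, 0.9):
    share = population[p_exceed > cutoff].sum() / population.sum()
    print(f"population with >{int(cutoff * 100)}% probability of exceeding storage: {share:.1%}")

# Basins where members disagree even on the sign of the change (irreducible spread).
frac_increase = (deficit_change > 0).mean(axis=0)
ambiguous = (frac_increase > 0.05) & (frac_increase < 0.95)
print(f"population in basins with sign disagreement: {population[ambiguous].sum() / population.sum():.1%}")
```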
Quantification of water resources uncertainties in the Luvuvhu sub-basin of the Limpopo river basin
NASA Astrophysics Data System (ADS)
Oosthuizen, N.; Hughes, D.; Kapangaziwiri, E.; Mwenge Kahinda, J.; Mvandaba, V.
2018-06-01
In the absence of historical observed data, models are generally used to describe the different hydrological processes and to generate data and information that will inform management and policy decision making. Ideally, any hydrological model should be based on a sound conceptual understanding of the processes in the basin and be backed by quantitative information for the parameterization of the model. However, these data are often inadequate in many sub-basins, necessitating the incorporation of the uncertainty related to the estimation process. This paper reports on the impact of the uncertainty related to the parameterization of the Pitman monthly model and water use data on the estimates of the water resources of the Luvuvhu, a sub-basin of the Limpopo river basin. The study reviews existing information sources associated with the quantification of water balance components and gives an update of the water resources of the sub-basin. The flows generated by the model at the outlet of the basin were between 44.03 Mm³ and 45.48 Mm³ per month when incorporating +20% uncertainty in the main physical runoff-generating parameters. The total predictive uncertainty of the model increased when water use data such as small farm and large reservoirs and irrigation were included. The dam capacity data were assigned an average uncertainty of 62%, mainly as a result of the large differences between the available information in the national water resources database and that digitised from satellite imagery. Water used by irrigated crops was estimated with an average uncertainty of about 50%. The mean simulated monthly flows were between 38.57 Mm³ and 54.83 Mm³ after the water use uncertainty was added. However, it is expected that the uncertainty could be reduced by using higher resolution remote sensing imagery.
NASA Astrophysics Data System (ADS)
Lu, Shasha; Guan, Xingliang; Zhou, Min; Wang, Yang
2014-05-01
A large number of mathematical models have been developed to support land resource allocation decisions and land management needs; however, few of them can address various uncertainties that exist in relation to many factors presented in such decisions (e.g., land resource availabilities, land demands, land-use patterns, and social demands, as well as ecological requirements). In this study, a multi-objective interval-stochastic land resource allocation model (MOISLAM) was developed for tackling uncertainty that presents as discrete intervals and/or probability distributions. The developed model improves upon the existing multi-objective programming and inexact optimization approaches. The MOISLAM not only considers economic factors, but also involves food security and eco-environmental constraints; it can, therefore, effectively reflect various interrelations among different aspects in a land resource management system. Moreover, the model can also help examine the reliability of satisfying (or the risk of violating) system constraints under uncertainty. In this study, the MOISLAM was applied to a real case of long-term urban land resource allocation planning in Suzhou, in the Yangtze River Delta of China. Interval solutions associated with different risk levels of constraint violation were obtained. The results are considered useful for generating a range of decision alternatives under various system conditions, and thus helping decision makers to identify a desirable land resource allocation strategy under uncertainty.
The observed clustering of damaging extra-tropical cyclones in Europe
NASA Astrophysics Data System (ADS)
Cusack, S.
2015-12-01
The clustering of severe European windstorms on annual timescales has substantial impacts on the re/insurance industry. Management of the risk is impaired by large uncertainties in estimates of clustering from historical storm datasets typically covering the past few decades. The uncertainties are unusually large because clustering depends on the variance of storm counts. Eight storm datasets are gathered for analysis in this study in order to reduce these uncertainties. Six of the datasets contain more than 100 years of severe storm information to reduce sampling errors, and the diversity of information sources and analysis methods between datasets samples observational errors. All storm severity measures used in this study reflect damage, to suit re/insurance applications. It is found that the shortest storm dataset of 42 years in length provides estimates of clustering with very large sampling and observational errors. The dataset does provide some useful information: indications of stronger clustering for more severe storms, particularly for southern countries off the main storm track. However, substantially different results are produced by removal of one stormy season, 1989/1990, which illustrates the large uncertainties from a 42-year dataset. The extended storm records place 1989/1990 into a much longer historical context to produce more robust estimates of clustering. All the extended storm datasets show a greater degree of clustering with increasing storm severity and suggest clustering of severe storms is much more material than weaker storms. Further, they contain signs of stronger clustering in areas off the main storm track, and weaker clustering for smaller-sized areas, though these signals are smaller than uncertainties in actual values. Both the improvement of existing storm records and development of new historical storm datasets would help to improve management of this risk.
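Clustering on annual timescales is often summarised by the overdispersion of annual storm counts; the sketch below computes a variance-to-mean ratio with a bootstrap interval for a synthetic clustered record, mainly to illustrate why a roughly 42-year sample leaves wide uncertainty compared with a century-scale one.

```python
import numpy as np

rng = np.random.default_rng(8)

def dispersion(counts):
    """Variance-to-mean ratio of annual storm counts; 1 = Poisson, >1 = clustered."""
    return counts.var(ddof=1) / counts.mean()

# Synthetic annual counts of damaging storms drawn from a clustered (negative binomial)
# process; real studies would use the historical catalogues instead.
true_counts = rng.negative_binomial(n=3, p=0.5, size=130)        # ~130-year record

for years in (42, 130):
    sample = true_counts[:years]
    boot = np.array([dispersion(rng.choice(sample, size=years, replace=True))
                     for _ in range(5000)])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"{years:3d}-year record: dispersion = {dispersion(sample):.2f} "
          f"(95% bootstrap CI {lo:.2f}-{hi:.2f})")
```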
Elastic and inelastic neutron scattering cross sections for fission reactor applications
NASA Astrophysics Data System (ADS)
Hicks, S. F.; Chakraborty, A.; Combs, B.; Crider, B. P.; Downes, L.; Girgis, J.; Kersting, L. J.; Kumar, A.; Lueck, C. J.; McDonough, P. J.; McEllistrem, M. T.; Peters, E. E.; Prados-Estevz, F. M.; Schniederjan, J.; Sidwell, L.; Sigillito, A. J.; Vanhoy, J. R.; Watts, D.; Yates, S. W.
2013-04-01
Nuclear data important for the design and development of the next generation of light-water reactors and future fast reactors include neutron elastic and inelastic scattering cross sections on important structural materials, such as Fe, and on coolant materials, such as Na. These reaction probabilities are needed since neutron reactions impact fuel performance during irradiations and the overall efficiency of reactors. While neutron scattering cross sections from these materials are available for certain incident neutron energies, the fast neutron region, particularly above 2 MeV, has large gaps for which no measurements exist, or the existing uncertainties are large. Measurements have been made at the University of Kentucky Accelerator Laboratory to measure neutron scattering cross sections on both Fe and Na in the region where these gaps occur and to reduce the uncertainties on scattering from the ground state and first excited state of these nuclei. Results from measurements on Fe at incident neutron energies between 2 and 4 MeV will be presented and comparisons will be made to model calculations available from data evaluators.
Functional variability of habitats within the Sacramento-San Joaquin Delta: Restoration implications
Lucas, L.V.; Cloern, J.E.; Thompson, J.K.; Monsen, N.E.
2002-01-01
We have now entered an era of large-scale attempts to restore ecological functions and biological communities in impaired ecosystems. Our knowledge base of complex ecosystems and interrelated functions is limited, so the outcomes of specific restoration actions are highly uncertain. One approach for exploring that uncertainty and anticipating the range of possible restoration outcomes is comparative study of existing habitats similar to future habitats slated for construction. Here we compare two examples of one habitat type targeted for restoration in the Sacramento-San Joaquin River Delta. We compare one critical ecological function provided by these shallow tidal habitats - production and distribution of phytoplankton biomass as the food supply to pelagic consumers. We measured spatial and short-term temporal variability of phytoplankton biomass and growth rate and quantified the hydrodynamic and biological processes governing that variability. Results show that the production and distribution of phytoplankton biomass can be highly variable within and between nearby habitats of the same type, due to variations in phytoplankton sources, sinks, and transport. Therefore, superficially similar, geographically proximate habitats can function very differently, and that functional variability introduces large uncertainties into the restoration process. Comparative study of existing habitats is one way ecosystem science can elucidate and potentially minimize restoration uncertainties, by identifying processes shaping habitat functionality, including those that can be controlled in the restoration design.
High precision predictions for exclusive VH production at the LHC
Li, Ye; Liu, Xiaohui
2014-06-04
We present a resummation-improved prediction for pp → VH + 0 jets at the Large Hadron Collider. We focus on highly-boosted final states in the presence of a jet veto to suppress the tt̄ background. In this case, conventional fixed-order calculations are plagued by the existence of large Sudakov logarithms α_s^n log^m(p_T^veto/Q) for Q ~ m_V + m_H, which lead to unreliable predictions as well as large theoretical uncertainties, and thus limit the accuracy when comparing experimental measurements to the Standard Model. In this work, we show that the resummation of Sudakov logarithms beyond the next-to-next-to-leading-log accuracy, combined with the next-to-next-to-leading order calculation, reduces the scale uncertainty and stabilizes the perturbative expansion in the region where the vector bosons carry large transverse momentum. Thus, our result improves the precision with which Higgs properties can be determined from LHC measurements using boosted Higgs techniques.
A Statistics-Based Material Property Analysis to Support TPS Characterization
NASA Technical Reports Server (NTRS)
Copeland, Sean R.; Cozmuta, Ioana; Alonso, Juan J.
2012-01-01
Accurate characterization of entry capsule heat shield material properties is a critical component in modeling and simulating Thermal Protection System (TPS) response in a prescribed aerothermal environment. The thermal decomposition of the TPS material during the pyrolysis and charring processes is poorly characterized and typically results in large uncertainties in material properties as inputs for ablation models. These material property uncertainties contribute to large design margins on flight systems and cloud reconstruction efforts for data collected during flight and ground testing, making revision to existing models for entry systems more challenging. The analysis presented in this work quantifies how material property uncertainties propagate through an ablation model and guides an experimental test regimen aimed at reducing these uncertainties and characterizing the dependencies between properties in the virgin and charred states for a Phenolic Impregnated Carbon Ablator (PICA) based TPS. A sensitivity analysis identifies how the high-fidelity model behaves in the expected flight environment, while a Monte Carlo based uncertainty propagation strategy is used to quantify the expected spread in the in-depth temperature response of the TPS. An examination of how perturbations to the input probability density functions affect output temperature statistics is accomplished using a Kriging response surface of the high-fidelity model. Simulations are based on capsule configuration and aerothermal environments expected during the Mars Science Laboratory (MSL) entry sequence. We identify and rank primary sources of uncertainty from material properties in a flight-relevant environment, show the dependence on spatial orientation and in-depth location on those uncertainty contributors, and quantify how sensitive the expected results are.
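A minimal sketch of the Monte Carlo propagation step is shown below, using a cheap analytic stand-in for the in-depth temperature response rather than an ablation code or the Kriging surface; the property names, nominal values and spreads are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(9)

def surrogate_peak_temp(conductivity, char_density, pyrolysis_heat):
    """Cheap stand-in for the high-fidelity in-depth temperature response (NOT an
    ablation code): peak bondline temperature (K) as a smooth function of inputs."""
    return 600.0 + 900.0 * conductivity - 0.35 * char_density - 0.04 * pyrolysis_heat

n = 100_000
# Input uncertainties expressed as distributions about nominal material properties.
conductivity   = rng.normal(0.40, 0.06, n)     # W m^-1 K^-1
char_density   = rng.normal(280.0, 25.0, n)    # kg m^-3
pyrolysis_heat = rng.normal(2500.0, 400.0, n)  # kJ kg^-1

peak = surrogate_peak_temp(conductivity, char_density, pyrolysis_heat)
print("peak temperature: %.0f K  (95%% interval %.0f-%.0f K)"
      % (peak.mean(), *np.percentile(peak, [2.5, 97.5])))

# Crude sensitivity ranking: correlation of each input with the output.
for name, x in [("conductivity", conductivity), ("char density", char_density),
                ("pyrolysis heat", pyrolysis_heat)]:
    print(f"corr({name:14s}, peak T) = {np.corrcoef(x, peak)[0, 1]:+.2f}")
```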
Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources
NASA Astrophysics Data System (ADS)
Jia, Z.; Zhan, Z.
2017-12-01
Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties for existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that our method can capture the major features of large earthquake rupture processes and provide information for more detailed rupture history analysis.
Sex Differences in Cognitive Abilities Test Scores: A UK National Picture
ERIC Educational Resources Information Center
Strand, Steve; Deary, Ian J.; Smith, Pauline
2006-01-01
Background and aims: There is uncertainty about the extent or even existence of sex differences in the mean and variability of reasoning test scores (Jensen, 1998; Lynn, 1994; Mackintosh, 1996). This paper analyses the Cognitive Abilities Test (CAT) scores of a large and representative sample of UK pupils to determine the extent of any sex…
NASA Astrophysics Data System (ADS)
Siade, Adam J.; Hall, Joel; Karelse, Robert N.
2017-11-01
Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. There exist a number of data-worth and experimental design strategies developed for this purpose. However, these studies often ignore issues related to real-world groundwater models, such as computational expense, existing observation data, and high parameter dimensionality. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established d-optimality criterion and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification, and a heuristic methodology based on the concept of the greedy algorithm is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
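The greedy design idea can be sketched as repeatedly adding the candidate observation whose sensitivity row most increases log det(JᵀJ), a d-optimality surrogate; the Jacobian below is random, standing in for model sensitivities, and the regularisation term is an assumption.

```python
import numpy as np

rng = np.random.default_rng(10)

n_candidates, n_params = 60, 8
# Rows = candidate new observations (e.g., bores/times), columns = model parameters.
J = rng.normal(size=(n_candidates, n_params))          # stand-in sensitivity Jacobian

def log_det_information(rows, prior=1e-3):
    """log det of the (regularised) Fisher information for a chosen set of observations."""
    Js = J[rows]
    return np.linalg.slogdet(Js.T @ Js + prior * np.eye(n_params))[1]

budget = 10
chosen = []
for _ in range(budget):
    remaining = [i for i in range(n_candidates) if i not in chosen]
    gains = [log_det_information(chosen + [i]) for i in remaining]
    chosen.append(remaining[int(np.argmax(gains))])

print("greedy d-optimal observation set:", sorted(chosen))
print("log det information:", round(log_det_information(chosen), 2))
```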
Practical problems in aggregating expert opinions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booker, J.M.; Picard, R.R.; Meyer, M.A.
1993-11-01
Expert opinion is data given by a qualified person in response to a technical question. In these analyses, expert opinion provides information where other data are either sparse or non-existent. Improvements in forecasting result from the advantageous addition of expert opinion to observed data in many areas, such as meteorology and econometrics. More generally, analyses of large, complex systems often involve experts on various components of the system supplying input to a decision process; applications include such wide-ranging areas as nuclear reactor safety, management science, and seismology. For large or complex applications, no single expert may be knowledgeable enough about the entire application. In other problems, decision makers may find it comforting that a consensus or aggregation of opinions is usually better than a single opinion. Many risk and reliability studies require a single estimate for modeling, analysis, reporting, and decision making purposes. For problems with large uncertainties, the strategy of combining as diverse a set of experts as possible hedges against underestimation of that uncertainty. Decision makers are frequently faced with the task of selecting the experts and combining their opinions. However, the aggregation is often the responsibility of an analyst. Whether the decision maker or the analyst does the aggregation, the input for it, such as providing weights for experts or estimating other parameters, is imperfect owing to a lack of omniscience. Aggregation methods for expert opinions have existed for over thirty years; yet many of the difficulties with their use remain unresolved. The bulk of these problem areas are summarized in the sections that follow: sensitivities of results to assumptions, weights for experts, correlation of experts, and handling uncertainties. The purpose of this paper is to discuss the sources of these problems and describe their effects on aggregation.
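One common aggregation rule in this setting is the weighted linear opinion pool, in which the combined distribution is a weight-mixture of the experts' distributions. The sketch below shows how the pooled variance automatically includes between-expert disagreement; the expert means, standard deviations and weights are purely illustrative, not recommendations from the paper.

```python
import numpy as np

# Weighted linear opinion pool: the aggregated distribution is a mixture of the
# experts' distributions. Expert means/sds and weights below are illustrative only.

experts = [          # (mean, sd) of each expert's assessment of the unknown quantity
    (10.0, 2.0),
    (12.0, 4.0),
    (8.0, 1.5),
]
weights = np.array([0.5, 0.3, 0.2])
weights = weights / weights.sum()            # normalize, must sum to 1

means = np.array([m for m, _ in experts])
sds = np.array([s for _, s in experts])

# Mixture moments: the pooled variance includes between-expert disagreement,
# which is how the pool hedges against underestimating uncertainty.
pooled_mean = np.sum(weights * means)
pooled_var = np.sum(weights * (sds**2 + means**2)) - pooled_mean**2

print(f"pooled mean = {pooled_mean:.2f}")
print(f"pooled sd   = {np.sqrt(pooled_var):.2f} (individual sds: {sds.tolist()})")
```

How the weights are chosen, and whether experts are treated as independent, are exactly the unresolved difficulties the paper discusses.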
Damage assessment of composite plate structures with material and measurement uncertainty
NASA Astrophysics Data System (ADS)
Chandrashekhar, M.; Ganguli, Ranjan
2016-06-01
Composite materials are very useful in structural engineering, particularly in weight-sensitive applications. Two different test models of the same structure made from composite materials can display very different dynamic behavior due to large uncertainties associated with composite material properties. Also, composite structures can suffer from pre-existing imperfections like delaminations, voids or cracks introduced during fabrication. In this paper, we show that modeling and material uncertainties in composite structures can cause considerable problems in damage assessment. A recently developed C0 shear deformable locking free refined composite plate element is employed in the numerical simulations to alleviate modeling uncertainty. A qualitative estimate of the impact of modeling uncertainty on the damage detection problem is made. A robust Fuzzy Logic System (FLS) with sliding window defuzzifier is used for delamination damage detection in composite plate type structures. The FLS is designed using variations in modal frequencies due to randomness in material properties. Probabilistic analysis is performed using Monte Carlo Simulation (MCS) on a composite plate finite element model. It is demonstrated that the FLS shows excellent robustness in delamination detection at very high levels of randomness in input data.
Observations of large parallel electric fields in the auroral ionosphere
NASA Technical Reports Server (NTRS)
Mozer, F. S.
1976-01-01
Rocket borne measurements employing a double probe technique were used to gather evidence for the existence of electric fields in the auroral ionosphere having components parallel to the magnetic field direction. An analysis of possible experimental errors leads to the conclusion that no known uncertainties can account for the roughly 10 mV/m parallel electric fields that are observed.
NASA Astrophysics Data System (ADS)
Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun
2010-10-01
Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
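The core null-space Monte Carlo step can be illustrated schematically: random parameter perturbations are projected onto the (approximate) null space of a linearized Jacobian so that the perturbed parameter sets remain roughly calibrated. The Jacobian, dimensions and rank threshold below are synthetic stand-ins, not the models described in the abstract.

```python
import numpy as np

# Schematic null-space Monte Carlo step: project random parameter perturbations
# onto the approximate null space of the Jacobian of observations w.r.t.
# parameters, so perturbed parameter sets stay (approximately) calibrated.

rng = np.random.default_rng(2)
n_obs, n_par = 40, 120                        # under-determined: many parameters
J = rng.standard_normal((n_obs, n_par))       # synthetic sensitivity matrix

U, s, Vt = np.linalg.svd(J, full_matrices=True)
k = int(np.sum(s > 1e-8 * s[0]))              # numerical rank = solution-space dimension
V_null = Vt[k:].T                             # columns span the null space

p_cal = rng.standard_normal(n_par)            # stand-in for the calibrated parameter set
samples = []
for _ in range(500):
    dp = rng.standard_normal(n_par)
    dp_null = V_null @ (V_null.T @ dp)        # keep only the null-space component
    samples.append(p_cal + dp_null)

samples = np.array(samples)
# The simulated observations change only at round-off level to first order:
print("max |J @ dp_null| over samples:", np.abs(J @ (samples - p_cal).T).max())
```

In practice each projected sample would still be re-run through the forward model (and possibly re-calibrated), which is where the computational savings relative to full global sampling come from.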
NASA Technical Reports Server (NTRS)
Ohl, Raymond; Slotwinski, Anthony; Eegholm, Bente; Saif, Babak
2011-01-01
The fabrication of large optics is traditionally a slow process, and fabrication capability is often limited by measurement capability. While techniques exist to measure mirror figure with nanometer precision, measurements of large-mirror prescription are typically limited to submillimeter accuracy. Using a lidar instrument enables one to measure the optical surface rough figure and prescription in virtually all phases of fabrication without moving the mirror from its polishing setup. This technology improves the uncertainty of mirror prescription measurement to the micron-regime.
Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts
NASA Astrophysics Data System (ADS)
Gingrich, Mark
Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100 member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalence, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.
Assessing uncertainties in land cover projections.
Alexander, Peter; Prestele, Reinhard; Verburg, Peter H; Arneth, Almut; Baranzelli, Claudia; Batista E Silva, Filipe; Brown, Calum; Butler, Adam; Calvin, Katherine; Dendoncker, Nicolas; Doelman, Jonathan C; Dunford, Robert; Engström, Kerstin; Eitelberg, David; Fujimori, Shinichiro; Harrison, Paula A; Hasegawa, Tomoko; Havlik, Petr; Holzhauer, Sascha; Humpenöder, Florian; Jacobs-Crisioni, Chris; Jain, Atul K; Krisztin, Tamás; Kyle, Page; Lavalle, Carlo; Lenton, Tim; Liu, Jiayi; Meiyappan, Prasanth; Popp, Alexander; Powell, Tom; Sands, Ronald D; Schaldach, Rüdiger; Stehfest, Elke; Steinbuks, Jevgenijs; Tabeau, Andrzej; van Meijl, Hans; Wise, Marshall A; Rounsevell, Mark D A
2017-02-01
Understanding uncertainties in land cover projections is critical to investigating land-based climate mitigation policies, assessing the potential of climate adaptation strategies and quantifying the impacts of land cover change on the climate system. Here, we identify and quantify uncertainties in global and European land cover projections over a diverse range of model types and scenarios, extending the analysis beyond the agro-economic models included in previous comparisons. The results from 75 simulations over 18 models are analysed and show a large range in land cover area projections, with the highest variability occurring in future cropland areas. We demonstrate systematic differences in land cover areas associated with the characteristics of the modelling approach, which are at least as great as the differences attributed to the scenario variations. The results lead us to conclude that a higher degree of uncertainty exists in land use projections than currently included in climate or earth system projections. To account for land use uncertainty, it is recommended to use a diverse set of models and approaches when assessing the potential impacts of land cover change on future climate. Additionally, further work is needed to better understand the assumptions driving land use model results and reveal the causes of uncertainty in more depth, to help reduce model uncertainty and improve the projections of land cover. © 2016 John Wiley & Sons Ltd.
Uncertainty, robustness, and the value of information in managing a population of northern bobwhites
Johnson, Fred A.; Hagan, Greg; Palmer, William E.; Kemmerer, Michael
2014-01-01
The abundance of northern bobwhites (Colinus virginianus) has decreased throughout their range. Managers often respond by considering improvements in harvest and habitat management practices, but this can be challenging if substantial uncertainty exists concerning the cause(s) of the decline. We were interested in how application of decision science could be used to help managers on a large, public management area in southwestern Florida where the bobwhite is a featured species and where abundance has severely declined. We conducted a workshop with managers and scientists to elicit management objectives, alternative hypotheses concerning population limitation in bobwhites, potential management actions, and predicted management outcomes. Using standard and robust approaches to decision making, we determined that improved water management and perhaps some changes in hunting practices would be expected to produce the best management outcomes in the face of uncertainty about what is limiting bobwhite abundance. We used a criterion called the expected value of perfect information to determine that a robust management strategy may perform nearly as well as an optimal management strategy (i.e., a strategy that is expected to perform best, given the relative importance of different management objectives) with all uncertainty resolved. We used the expected value of partial information to determine that management performance could be increased most by eliminating uncertainty over excessive-harvest and human-disturbance hypotheses. Beyond learning about the factors limiting bobwhites, adoption of a dynamic management strategy, which recognizes temporal changes in resource and environmental conditions, might produce the greatest management benefit. Our research demonstrates that robust approaches to decision making, combined with estimates of the value of information, can offer considerable insight into preferred management approaches when great uncertainty exists about system dynamics and the effects of management.
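The expected value of perfect information used above can be shown with a small worked decision table. Everything in the sketch below is invented for illustration (the actions, hypotheses, prior weights and payoffs are not the workshop's elicited values); it only demonstrates how EVPI is computed.

```python
import numpy as np

# Expected value of perfect information (EVPI) for a toy decision table.
# Rows = management actions, columns = competing hypotheses; entries are
# expected management outcomes in arbitrary units. All numbers are illustrative.

payoff = np.array([
    [60, 20, 40],    # action A, e.g. improve water management
    [35, 55, 30],    # action B, e.g. restrict harvest
    [30, 25, 50],    # action C, e.g. reduce human disturbance
])
prior = np.array([0.5, 0.3, 0.2])             # prior weights on the hypotheses

expected_by_action = payoff @ prior
best_under_uncertainty = expected_by_action.max()

# With perfect information we would pick the best action for whichever
# hypothesis turns out to be true, then average over the prior.
best_given_truth = payoff.max(axis=0)
expected_with_perfect_info = best_given_truth @ prior

evpi = expected_with_perfect_info - best_under_uncertainty
print(f"EV without new information = {best_under_uncertainty:.1f}")
print(f"EV with perfect information = {expected_with_perfect_info:.1f}")
print(f"EVPI = {evpi:.1f}")
```

The expected value of partial information mentioned in the abstract follows the same logic, but with only a subset of hypotheses (e.g., harvest and disturbance) resolved before acting.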
Decay heat uncertainty for BWR used fuel due to modeling and nuclear data uncertainties
Ilas, Germina; Liljenfeldt, Henrik
2017-05-19
Characterization of the energy released from radionuclide decay in nuclear fuel discharged from reactors is essential for the design, safety, and licensing analyses of used nuclear fuel storage, transportation, and repository systems. There are a limited number of decay heat measurements available for commercial used fuel applications. Because decay heat measurements can be expensive or impractical for covering the multitude of existing fuel designs, operating conditions, and specific application purposes, decay heat estimation relies heavily on computer code prediction. Uncertainty evaluation for calculated decay heat is an important aspect when assessing code prediction and a key factor supporting decision making for used fuel applications. While previous studies have largely focused on uncertainties in code predictions due to nuclear data uncertainties, this study discusses uncertainties in calculated decay heat due to uncertainties in assembly modeling parameters as well as in nuclear data. Capabilities in the SCALE nuclear analysis code system were used to quantify the effect on calculated decay heat of uncertainties in nuclear data and selected manufacturing and operation parameters for a typical boiling water reactor (BWR) fuel assembly. Furthermore, the BWR fuel assembly used as the reference case for this study was selected from a set of assemblies for which high-quality decay heat measurements are available, to assess the significance of the results through comparison with calculated and measured decay heat data.
Lutz, James A.; Matchett, John R.; Tarnay, Leland W.; Smith, Douglas F.; Becker, Kendall M.L.; Furniss, Tucker J.; Brooks, Matthew L.
2017-01-01
Fire is one of the principal agents changing forest carbon stocks and landscape level distributions of carbon, but few studies have addressed how accurate carbon accounting of fire-killed trees is or can be. We used a large number of forested plots (1646), detailed selection of species-specific and location-specific allometric equations, vegetation type maps with high levels of accuracy, and Monte Carlo simulation to model the amount and uncertainty of aboveground tree carbon present in tree species (hereafter, carbon) within Yosemite and Sequoia & Kings Canyon National Parks. We estimated aboveground carbon in trees within Yosemite National Park to be 25 Tg of carbon (C) (confidence interval (CI): 23–27 Tg C), and in Sequoia & Kings Canyon National Park to be 20 Tg C (CI: 18–21 Tg C). Low-severity and moderate-severity fire had little or no effect on the amount of carbon sequestered in trees at the landscape scale, and high-severity fire did not immediately consume much carbon. Although many of our data inputs were more accurate than those used in similar studies in other locations, the total uncertainty of carbon estimates was still greater than ±10%, mostly due to potential uncertainties in landscape-scale vegetation type mismatches and trees larger than the ranges of existing allometric equations. If carbon inventories are to be meaningfully used in policy, there is an urgent need for more accurate landscape classification methods, improvement in allometric equations for tree species, and better understanding of the uncertainties inherent in existing carbon accounting methods.
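A miniature version of the Monte Carlo propagation described above can be sketched as follows: per-tree biomass from an allometric power law with uncertain coefficients, summed to a plot carbon total. The power-law form, coefficient uncertainties, diameter distribution and 0.48 carbon fraction are generic placeholders, not the species- and location-specific equations used in the study.

```python
import numpy as np

# Miniature Monte Carlo propagation of allometric-equation uncertainty into
# plot-level aboveground carbon. B = a * DBH^b, coefficient uncertainties and
# the 0.48 carbon fraction are generic placeholders.

rng = np.random.default_rng(8)
dbh_cm = rng.lognormal(mean=3.2, sigma=0.5, size=250)    # stand-in tree diameters (cm)

n_draws = 5000
totals = np.empty(n_draws)
for i in range(n_draws):
    a = rng.normal(0.12, 0.02)                # allometric coefficients drawn per iteration
    b = rng.normal(2.45, 0.05)
    biomass_kg = a * dbh_cm ** b              # aboveground biomass per tree (kg)
    totals[i] = 0.48 * biomass_kg.sum() / 1e3 # plot carbon in Mg C

lo, hi = np.percentile(totals, [2.5, 97.5])
print(f"plot carbon = {totals.mean():.1f} Mg C (95% CI {lo:.1f}-{hi:.1f})")
```

Extending this to the landscape scale adds the vegetation-type classification and out-of-range-diameter issues that the authors identify as the dominant uncertainty sources.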
Catnip and the alteration of human consciousness.
Osterhoudt, K C; Lee, S K; Callahan, J M; Henretig, F M
1997-12-01
Uncertainty exists regarding the ability of catnip (Nepeta cataria) to affect human consciousness. We report a case of a toddler exhibiting central nervous system depression after consuming a large quantity of catnip. His obtundation was not attributable to another cause. We review the published literature describing the alleged psychoactive capabilities of catnip and present our case as further information for use in this ongoing controversy.
Multi-fidelity methods for uncertainty quantification in transport problems
NASA Astrophysics Data System (ADS)
Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.
2016-12-01
We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, a re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods, and discuss advantages of each approach.
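For readers unfamiliar with the baseline, the sketch below shows a plain MLMC estimator on a toy problem in which "level" is simply quadrature resolution; the random input, level schedule and sample counts are invented, and the rescaling step that distinguishes rMLMC is not shown.

```python
import numpy as np

# Toy Multi Level Monte Carlo estimator of E[Q(k)], where Q is a quadrature of
# exp(-k t) on [0, 1] with random decay rate k, and "level" = grid resolution.
# Illustrative only; the rMLMC rescaling described in the abstract is not shown.

rng = np.random.default_rng(3)

def Q(k, n):
    """Level-n approximation: composite trapezoidal quadrature with n intervals."""
    t = np.linspace(0.0, 1.0, n + 1)
    f = np.exp(-np.outer(k, t))
    dt = 1.0 / n
    return dt * (f.sum(axis=1) - 0.5 * (f[:, 0] + f[:, -1]))

levels = [4, 8, 16, 32, 64]                   # grid resolutions per level
samples_per_level = [4000, 2000, 1000, 500, 250]

estimate = 0.0
for l, (n, m) in enumerate(zip(levels, samples_per_level)):
    k = rng.uniform(0.5, 2.0, size=m)         # the same random inputs couple the two levels
    if l == 0:
        estimate += Q(k, n).mean()
    else:
        estimate += (Q(k, n) - Q(k, levels[l - 1])).mean()

print("MLMC estimate of E[Q]:", estimate)
# Reference: E[(1 - exp(-k)) / k] for k ~ U(0.5, 2), via plain Monte Carlo
k_ref = rng.uniform(0.5, 2.0, size=200000)
print("reference            :", ((1 - np.exp(-k_ref)) / k_ref).mean())
```

The telescoping sum concentrates most samples on the cheap coarse levels while the expensive fine levels only correct small level-to-level differences, which is the source of the cost savings.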
Preliminary Results on Uncertainty Quantification for Pattern Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stracuzzi, David John; Brost, Randolph; Chen, Maximillian Gene
2015-09-01
This report summarizes preliminary research into uncertainty quantification for pattern analytics within the context of the Pattern Analytics to Support High-Performance Exploitation and Reasoning (PANTHER) project. The primary focus of PANTHER was to make large quantities of remote sensing data searchable by analysts. The work described in this report adds nuance to both the initial data preparation steps and the search process. Search queries are transformed from "does the specified pattern exist in the data?" to "how certain is the system that the returned results match the query?" We show example results for both data processing and search, and discuss a number of possible improvements for each.
NASA Astrophysics Data System (ADS)
Xu, R.; Tian, H.; Pan, S.; Yang, J.; Lu, C.; Zhang, B.
2016-12-01
Human activities have caused significant perturbations of the nitrogen (N) cycle, resulting in an increase of about 21% in atmospheric N2O concentration since the pre-industrial era. This large increase is mainly caused by intensive agricultural activities including the application of nitrogen fertilizer and the expansion of leguminous crops. Substantial efforts have been made to quantify the global and regional N2O emission from agricultural soils in the last several decades using a wide variety of approaches, such as ground-based observations, atmospheric inversions, and process-based models. However, large uncertainties exist in those estimates as well as in the methods themselves. In this study, we used a coupled biogeochemical model (DLEM) to estimate magnitude, spatial, and temporal patterns of N2O emissions from global croplands in the past five decades (1961-2012). To estimate uncertainties associated with input data and model parameters, we have implemented a number of simulation experiments with DLEM, accounting for key parameter values that affect calculation of N2O fluxes (i.e., maximum nitrification and denitrification rates, N fixation rate, and the adsorption coefficient for soil ammonium and nitrate), different sets of input data including climate, land management practices (i.e., nitrogen fertilizer types, application rates and timings, with/without irrigation), N deposition, and land use and land cover change. This work provides a robust estimate of global N2O emissions from agricultural soils as well as identifies key gaps and limitations in the existing model and data that need to be investigated in the future.
NASA Astrophysics Data System (ADS)
Tadini, A.; Bisson, M.; Neri, A.; Cioni, R.; Bevilacqua, A.; Aspinall, W. P.
2017-06-01
This study presents new and revised data sets about the spatial distribution of past volcanic vents, eruptive fissures, and regional/local structures of the Somma-Vesuvio volcanic system (Italy). The innovative features of the study are the identification and quantification of important sources of uncertainty affecting interpretations of the data sets. In this regard, the spatial uncertainty of each feature is modeled by an uncertainty area, i.e., a geometric element typically represented by a polygon drawn around points or lines. The new data sets have been assembled as an updatable geodatabase that integrates and complements existing databases for Somma-Vesuvio. The data are organized into 4 data sets and stored as 11 feature classes (points and lines for feature locations and polygons for the associated uncertainty areas), totaling more than 1700 elements. More specifically, volcanic vent and eruptive fissure elements are subdivided into feature classes according to their associated eruptive styles: (i) Plinian and sub-Plinian eruptions (i.e., large- or medium-scale explosive activity); (ii) violent Strombolian and continuous ash emission eruptions (i.e., small-scale explosive activity); and (iii) effusive eruptions (including eruptions from both parasitic vents and eruptive fissures). Regional and local structures (i.e., deep faults) are represented as linear feature classes. To support interpretation of the eruption data, additional data sets are provided for Somma-Vesuvio geological units and caldera morphological features. In the companion paper, the data presented here, and the associated uncertainties, are used to develop a first vent opening probability map for the Somma-Vesuvio caldera, with specific attention focused on large or medium explosive events.
NASA Technical Reports Server (NTRS)
Minzner, R. A.
1976-01-01
A graph was developed for relating delta T/T, the relative uncertainty in atmospheric temperature T, to delta p/p, the relative uncertainty in the atmospheric pressure p, for situations when T is derived from the slope of the pressure-height profile. A similar graph relates delta T/T to delta rho/rho, the relative uncertainty in the atmospheric density rho, for those cases when T is derived from the downward integration of the density-height profile. A comparison of these two graphs shows that for equal uncertainties in the respective basic parameters, p or rho, smaller uncertainties in the derived temperatures are associated with density-height rather than with pressure-height data. The value of delta T/T is seen to depend not only upon delta p or delta rho, and to a small extent upon the value of T or the related scale height H, but also upon the inverse of delta h, the height increment between successive observations of p or rho. In the case of pressure-height data, delta T/T is dominated by 1/delta h for all values of delta h; for density-height data, delta T/T is dominated by delta rho/rho for delta h smaller than about 5 km. In the case of T derived from density-height data, this inverse relationship between delta T/T and delta h applies only for large values of delta h, that is, for delta h greater than about 35 km. No limit exists in the fineness of usable height resolution of T which may be derived from densities, while a fine height resolution in pressure-height data leads to temperatures with unacceptably large uncertainties.
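A small numeric sketch of the pressure-height case helps make the 1/delta h behaviour concrete: T follows from the slope of ln p versus height through hydrostatic balance, and for independent pressure errors the fractional temperature uncertainty scales as (delta p/p) divided by the log-pressure difference across the layer. The isothermal atmosphere, constants and 0.5% pressure uncertainty below are illustrative assumptions, not the report's data reduction.

```python
import numpy as np

# Sketch: temperature from the slope of a pressure-height profile and the
# resulting relative uncertainty. Illustrative isothermal atmosphere.

g, R = 9.80665, 287.05          # m s^-2, J kg^-1 K^-1 (dry air)
T_true = 250.0                  # K
H = R * T_true / g              # scale height, roughly 7.3 km

def temperature_from_pressures(p1, p2, dh):
    """T from hydrostatic balance: d(ln p)/dh = -g / (R T)."""
    return -(g / R) * dh / (np.log(p2) - np.log(p1))

for dh in (1e3, 5e3, 20e3):                       # height increment between levels (m)
    p1 = 300.0                                    # hPa, arbitrary lower level
    p2 = p1 * np.exp(-dh / H)
    rel_dp = 0.005                                # assumed 0.5% pressure uncertainty
    # ln(p1) - ln(p2) carries ~sqrt(2) * dp/p of uncertainty for independent errors
    rel_dT = np.sqrt(2) * rel_dp / abs(np.log(p2) - np.log(p1))
    T_est = temperature_from_pressures(p1, p2, dh)
    print(f"dh = {dh/1e3:4.0f} km   T = {T_est:6.1f} K   dT/T ~ {100*rel_dT:5.2f} %")
```

Doubling the layer thickness roughly halves delta T/T, which is the inverse dependence on delta h described in the abstract.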
Quantifying measurement uncertainty and spatial variability in the context of model evaluation
NASA Astrophysics Data System (ADS)
Choukulkar, A.; Brewer, A.; Pichugina, Y. L.; Bonin, T.; Banta, R. M.; Sandberg, S.; Weickmann, A. M.; Djalalova, I.; McCaffrey, K.; Bianco, L.; Wilczak, J. M.; Newman, J. F.; Draxl, C.; Lundquist, J. K.; Wharton, S.; Olson, J.; Kenyon, J.; Marquis, M.
2017-12-01
In an effort to improve wind forecasts for the wind energy sector, the Department of Energy and the NOAA funded the second Wind Forecast Improvement Project (WFIP2). As part of the WFIP2 field campaign, a large suite of in-situ and remote sensing instrumentation was deployed to the Columbia River Gorge in Oregon and Washington from October 2015 - March 2017. The array of instrumentation deployed included 915-MHz wind profiling radars, sodars, wind-profiling lidars, and scanning lidars. The role of these instruments was to provide wind measurements at high spatial and temporal resolution for model evaluation and improvement of model physics. To properly determine model errors, the uncertainties in instrument-model comparisons need to be quantified accurately. These uncertainties arise from several factors such as measurement uncertainty, spatial variability, and interpolation of model output to instrument locations, to name a few. In this presentation, we will introduce a formalism to quantify measurement uncertainty and spatial variability. The accuracy of this formalism will be tested using existing datasets such as the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign. Finally, the uncertainties in wind measurement and the spatial variability estimates from the WFIP2 field campaign will be discussed to understand the challenges involved in model evaluation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gigase, Yves
2007-07-01
Available in abstract form only. Full text of publication follows: The uncertainty on characteristics of radioactive LILW waste packages is difficult to determine and often very large. This results from a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty on a characteristic of a waste package one has to combine these various uncertainties. This paper discusses an approach to this problem, based on the use of the log-normal distribution, which is both elegant and easy to use. It can provide, for example, quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. In this paper we show how our method can be applied to the scaling factor method. We also explain how it can be used when estimating other more complex characteristics such as the total uncertainty of a collection of waste packages. This method could have applications in radioactive waste management, in particular in those decision processes where the uncertainty on the amount of activity is considered to be important, such as in probability risk assessment or the definition of criteria for acceptance or categorization. (author)
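The appeal of the log-normal in this context can be shown compactly: the product of independent log-normal factors (for instance a measured key-nuclide activity times a scaling factor) is again log-normal, so log-variances simply add. The medians and geometric standard deviations below are invented for illustration, not values from the paper.

```python
import numpy as np

# Combining multiplicative uncertainties with log-normals: if
# A = key_nuclide_activity * scaling_factor and both factors are log-normal,
# then ln(A) is normal and the log-variances add. Numbers are illustrative.

def lognormal_params(median, gsd):
    """Return (mu, sigma) of the underlying normal from median and geometric sd."""
    return np.log(median), np.log(gsd)

mu1, s1 = lognormal_params(median=5.0, gsd=1.3)    # measured key-nuclide activity
mu2, s2 = lognormal_params(median=0.02, gsd=2.5)   # scaling factor to a hard-to-measure nuclide

mu = mu1 + mu2
sigma = np.hypot(s1, s2)                           # quadrature sum on the log scale

median = np.exp(mu)
interval = (np.exp(mu - sigma), np.exp(mu + sigma))  # one-geometric-sd interval
print(f"median = {median:.3f}, ~68% interval = [{interval[0]:.3f}, {interval[1]:.3f}]")

# Monte Carlo check of the analytic combination
rng = np.random.default_rng(4)
mc = rng.lognormal(mu1, s1, 100000) * rng.lognormal(mu2, s2, 100000)
print("Monte Carlo median =", np.median(mc))
```

The resulting intervals are asymmetric and strictly positive, which is one reason they "make sense" for activity estimates.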
Do regional methods really help reduce uncertainties in flood frequency analyses?
NASA Astrophysics Data System (ADS)
Cong Nguyen, Chi; Payrastre, Olivier; Gaume, Eric
2013-04-01
Flood frequency analyses are often based on continuous measured series at gauge sites. However, the length of the available data sets is usually too short to provide reliable estimates of extreme design floods. To reduce the estimation uncertainties, the analyzed data sets have to be extended either in time, making use of historical and paleoflood data, or in space, merging data sets considered as statistically homogeneous to build large regional data samples. Nevertheless, the advantage of the regional analyses, the important increase of the size of the studied data sets, may be counterbalanced by the possible heterogeneities of the merged sets. The application and comparison of four different flood frequency analysis methods to two regions affected by flash floods in the south of France (Ardèche and Var) illustrates how this balance between the number of records and possible heterogeneities plays out in real-world applications. The four tested methods are: (1) a local statistical analysis based on the existing series of measured discharges, (2) a local analysis that incorporates the existing information on historical floods, (3) a standard regional flood frequency analysis based on existing measured series at gauged sites and (4) a modified regional analysis including estimated extreme peak discharges at ungauged sites. Monte Carlo simulations are conducted to simulate a large number of discharge series with characteristics similar to the observed ones (type of statistical distributions, number of sites and records) to evaluate to what extent the results obtained on these case studies can be generalized. These two case studies indicate that even small statistical heterogeneities, which are not detected by the standard homogeneity tests implemented in regional flood frequency studies, may drastically limit the usefulness of such approaches. On the other hand, these results show that incorporating information on extreme events, either historical flood events at gauged sites or estimated extremes at ungauged sites in the considered region, is an efficient way to reduce uncertainties in flood frequency studies.
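A bare-bones version of the kind of Monte Carlo experiment described can be sketched as follows: synthetic GEV flood series at several sites with a deliberate mild heterogeneity, and the 100-year quantile estimated either from one short local record or from a pooled, index-flood style regional sample. The distribution parameters, record lengths and heterogeneity level are invented for illustration.

```python
import numpy as np
from scipy.stats import genextreme

# Local vs regional flood frequency estimation on synthetic GEV records with
# mild heterogeneity in the location parameter. All parameter values are
# illustrative, not those of the Ardèche or Var case studies.

rng = np.random.default_rng(5)
c_true = -0.1                                   # scipy shape convention (c = -xi)
n_sites, n_years = 10, 30
locs = rng.normal(100.0, 10.0, n_sites)         # mild heterogeneity across sites
scale = 30.0

records = [genextreme.rvs(c_true, loc=m, scale=scale, size=n_years,
                          random_state=rng) for m in locs]

def q100(sample):
    c, loc, sc = genextreme.fit(sample)
    return genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=sc)

local_est = q100(records[0])                    # (a) purely local estimate at site 0

# (b) index-flood regional estimate: pool dimensionless data, rescale by the site mean
pooled = np.concatenate([r / r.mean() for r in records])
regional_est = q100(pooled) * records[0].mean()

true_q100 = genextreme.ppf(1 - 1 / 100, c_true, loc=locs[0], scale=scale)
print(f"true Q100 ~ {true_q100:.0f}, local estimate {local_est:.0f}, "
      f"regional estimate {regional_est:.0f}")
```

Repeating such an experiment many times, with and without heterogeneity, is what allows the sampling variance of the local estimator to be weighed against the bias introduced by pooling.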
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Brennan T
2015-01-01
Turbine discharges at low-head short converging intakes are difficult to measure accurately. The proximity of the measurement section to the intake entrance admits large uncertainties related to asymmetry of the velocity profile, swirl, and turbulence. Existing turbine performance codes [10, 24] do not address this special case and published literature is largely silent on rigorous evaluation of uncertainties associated with this measurement context. The American Society of Mechanical Engineers (ASME) Committee investigated the use of Acoustic transit time (ATT), Acoustic scintillation (AS), and Current meter (CM) methods in a short converging intake at the Kootenay Canal Generating Station in 2009. Based on their findings, a standardized uncertainty analysis (UA) framework for the velocity-area method (specifically for CM measurements) is presented in this paper, given that CM is still the most fundamental and common type of measurement system. Typical sources of systematic and random errors associated with CM measurements are investigated, and the major sources of uncertainty associated with turbulence and velocity fluctuations, the numerical velocity integration technique (bi-cubic spline), and the number and placement of current meters are considered in the evaluation. Since the velocity measurements in a short converging intake are associated with complex nonlinear and time-varying uncertainties (e.g., Reynolds stress in fluid dynamics), simply applying the law of propagation of uncertainty is known to overestimate the measurement variance while the Monte Carlo method does not. Therefore, a pseudo-Monte Carlo simulation method (random flow generation technique [8]), which was initially developed for the purpose of establishing upstream or initial conditions in Large-Eddy Simulation (LES) and Direct Numerical Simulation (DNS), is used to statistically determine uncertainties associated with turbulence and velocity fluctuations. This technique is then combined with a bi-cubic spline interpolation method which converts point velocities into a continuous velocity distribution over the measurement domain. Subsequently the number and placement of current meters are simulated to investigate the accuracy of the estimated flow rates using the numerical velocity-area integration method outlined in ISO 3354 [12]. The authors consider the statistics of the flow rates generated with bi-cubic interpolation and sensor simulations to be combined uncertainties that already account for the effects of all three uncertainty sources. A preliminary analysis based on current meter data obtained through an upgrade acceptance test of a single unit at a mainstem plant is presented.
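A simplified version of that chain can be sketched: a grid of point velocities is interpolated with a bi-cubic spline and integrated over the section, then random velocity fluctuations are added to mimic turbulence and the spread of the resulting discharges is taken as a combined uncertainty. The geometry, velocity field and 5% fluctuation level below are synthetic assumptions, not the Kootenay Canal test values, and the full random flow generation technique is not reproduced.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Velocity-area discharge with bi-cubic spline integration plus a crude Monte
# Carlo on turbulent velocity fluctuations. Geometry and velocities are synthetic.

rng = np.random.default_rng(6)
y = np.linspace(0.5, 9.5, 6)        # current-meter positions across the width (m)
z = np.linspace(0.5, 7.5, 5)        # current-meter positions over the depth (m)
Y, Z = np.meshgrid(y, z, indexing="ij")
v_mean = 2.0 * (Z / 8.0) ** (1 / 7) * (1 - ((Y - 5.0) / 5.0) ** 2)  # axial velocity (m/s)

def discharge(v_grid):
    spline = RectBivariateSpline(y, z, v_grid, kx=3, ky=3)     # bi-cubic fit
    return spline.integral(y[0], y[-1], z[0], z[-1])           # m^3/s over the metered area

q0 = discharge(v_mean)
q_mc = np.array([discharge(v_mean * (1 + 0.05 * rng.standard_normal(v_mean.shape)))
                 for _ in range(2000)])
print(f"Q = {q0:.1f} m^3/s, Monte Carlo std = {q_mc.std():.2f} m^3/s "
      f"({100 * q_mc.std() / q0:.1f} % of Q)")
```

Varying the number of rows and columns of simulated meters in the same loop is a direct way to explore the sensor-placement contribution discussed in the abstract.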
NASA Astrophysics Data System (ADS)
Thoemel, J.; Cosson, E.; Chazot, O.
2009-01-01
In the framework of the creation of an aerothermodynamic database for the design of the Intermediate Experimental Vehicle, surface properties of heat shield materials that represent the boundary conditions are reviewed. Catalytic and radiative characteristics available in the literature are critically analyzed and summarized. It turns out that large uncertainties exist on these parameters. Finally, simple and conservative values are proposed.
Spillway sizing of large dams in Austria
NASA Astrophysics Data System (ADS)
Reszler, Ch.; Gutknecht, D.; Blöschl, G.
2003-04-01
This paper discusses the basic philosophy of defining and calculating design floods for large dams in Austria, both for the construction of new dams and for a re-assessment of the safety of existing dams. Currently the consensus is to choose flood peak values corresponding to a probability of exceedance of 2×10^-4 for a given year. A two-step procedure is proposed to estimate the design flood discharges - a rapid assessment and a detailed assessment. In the rapid assessment the design discharge is chosen as a constant multiple of flood values read from a map of regionalised floods. The safety factor or multiplier takes care of the uncertainties of the local estimation and the regionalisation procedure. If the current design level of a spillway exceeds the value so estimated, no further calculations are needed. Otherwise (and for new dams) a detailed assessment is required. The idea of the detailed assessment is to draw upon all existing sources of information to constrain the uncertainties. The three main sources are local flood frequency analysis, where flood data are available; regional flood estimation from hydrologically similar catchments; and rainfall-runoff modelling using design storms as inputs. The three values obtained by these methods are then assessed and weighted in terms of their reliability to facilitate selection of the design flood. The uncertainty assessment of the various methods is based on confidence intervals, estimates of regional heterogeneity, data availability and sensitivity analyses of the rainfall-runoff model. As the definition of the design floods discussed above is based on probability concepts, it is also important to examine the excess risk, i.e. the possibility of the occurrence of a flood exceeding the design levels. The excess risk is evaluated based on a so-called Safety Check Flood (SCF), similar to the existing practice in other countries in Europe. The SCF is a vehicle to analyse the damage potential of an event of this magnitude. This is to provide guidance for protective measures to deal with very extreme floods. The SCF is used to check the vulnerability of the system with regard to structural stability, morphological effects, etc., and to develop alarm plans and disaster mitigation procedures. The basis for estimating the SCF is the uncertainty assessments of the design flood values estimated by the three methods, including unlikely combinations of the controlling factors and attending uncertainties. Finally we discuss the impact on the downstream valley of floods exceeding the design values and of smaller floods, and illustrate the basic concepts by examples from the recent flood in August 2002.
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2016-01-05
Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of large-scale forcing data, which points to the deficiencies of physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.
An overview of expert systems. [artificial intelligence
NASA Technical Reports Server (NTRS)
Gevarter, W. B.
1982-01-01
An expert system is defined and its basic structure is discussed. The knowledge base, the inference engine, and uses of expert systems are discussed. Architecture is considered, including choice of solution direction, reasoning in the presence of uncertainty, searching small and large search spaces, handling large search spaces by transforming them and by developing alternative or additional spaces, and dealing with time. Existing expert systems are reviewed. Tools for building such systems, construction, and knowledge acquisition and learning are discussed. Centers of research and funding sources are listed. The state-of-the-art, current problems, required research, and future trends are summarized.
Xie, Shaocheng; Klein, Stephen A.; Zhang, Minghua; ...
2006-10-05
This study represents an effort to develop Single-Column Model (SCM) and Cloud-Resolving Model large-scale forcing data from a sounding array in the high latitudes. An objective variational analysis approach is used to process data collected from the Atmospheric Radiation Measurement Program (ARM) Mixed-Phase Arctic Cloud Experiment (M-PACE), which was conducted over the North Slope of Alaska in October 2004. In this method the observed surface and top of atmosphere measurements are used as constraints to adjust the sounding data from M-PACE in order to conserve column-integrated mass, heat, moisture, and momentum. Several important technical and scientific issues related to the data analysis are discussed. It is shown that the analyzed data reasonably describe the dynamic and thermodynamic features of the Arctic cloud systems observed during M-PACE. Uncertainties in the analyzed forcing fields are roughly estimated by examining the sensitivity of those fields to uncertainties in the upper-air data and surface constraints that are used in the analysis. Impacts of the uncertainties in the analyzed forcing data on SCM simulations are discussed. Results from the SCM tests indicate that the bulk features of the observed Arctic cloud systems can be captured qualitatively well using the forcing data derived in this study, and major model errors can be detected despite the uncertainties that exist in the forcing data as illustrated by the sensitivity tests. Lastly, the possibility of using the European Center for Medium-Range Weather Forecasts analysis data to derive the large-scale forcing over the Arctic region is explored.
Decorrelated jet substructure tagging using adversarial neural networks
NASA Astrophysics Data System (ADS)
Shimmin, Chase; Sadowski, Peter; Baldi, Pierre; Weik, Edison; Whiteson, Daniel; Goul, Edward; Søgaard, Andreas
2017-10-01
We describe a strategy for constructing a neural network jet substructure tagger which powerfully discriminates boosted decay signals while remaining largely uncorrelated with the jet mass. This reduces the impact of systematic uncertainties in background modeling while enhancing signal purity, resulting in improved discovery significance relative to existing taggers. The network is trained using an adversarial strategy, resulting in a tagger that learns to balance classification accuracy with decorrelation. As a benchmark scenario, we consider the case where large-radius jets originating from a boosted resonance decay are discriminated from a background of nonresonant quark and gluon jets. We show that in the presence of systematic uncertainties on the background rate, our adversarially trained, decorrelated tagger considerably outperforms a conventionally trained neural network, despite having a slightly worse signal-background separation power. We generalize the adversarial training technique to include a parametric dependence on the signal hypothesis, training a single network that provides optimized, interpolatable decorrelated jet tagging across a continuous range of hypothetical resonance masses, after training on discrete choices of the signal mass.
NASA Astrophysics Data System (ADS)
Schumann, G.
2016-12-01
Routinely obtaining real-time 2-D inundation patterns of a flood event at a meaningful spatial resolution and over large scales is at the moment only feasible with either operational aircraft flights or satellite imagery. Of course having model simulations of floodplain inundation available to complement the remote sensing data is highly desirable, for both event re-analysis and forecasting event inundation. Using the Texas 2015 flood disaster, we demonstrate the value of multi-scale EO data for large scale 2-D floodplain inundation modeling and forecasting. A dynamic re-analysis of the Texas 2015 flood disaster was run using a 2-D flood model developed for accurate large scale simulations. We simulated the major rivers entering the Gulf of Mexico and used flood maps produced from both optical and SAR satellite imagery to examine regional model sensitivities and assess associated performance. It was demonstrated that satellite flood maps can complement model simulations and add value, although this is largely dependent on a number of important factors, such as image availability, regional landscape topology, and model uncertainty. In the preferred case where model uncertainty is high, landscape topology is complex (e.g., an urbanized coastal area) and satellite flood maps are available (from SAR, for instance), satellite data can significantly reduce model uncertainty by identifying the "best possible" model parameter set. However, the more common situation is one where model uncertainty is low and spatially contiguous flooding can be mapped from satellites easily enough, such as in large rural inland river floodplains. Consequently, not much value from satellites can be added. Nevertheless, where a large number of flood maps are available, model credibility can be increased substantially. In the case presented here this was true for at least 60% of the many thousands of kilometers of river flow length simulated, where satellite flood maps existed. The next step of this project is to employ a technique termed the "targeted observation" approach, an assimilation-based procedure that quantifies the impact observations have on model predictions at the local scale and along the entire river system when assimilated with the model at specific "overpass" locations.
A Commercialization Roadmap for Carbon-Negative Energy Systems
NASA Astrophysics Data System (ADS)
Sanchez, D.
2016-12-01
The Intergovernmental Panel on Climate Change (IPCC) envisages the need for large-scale deployment of net-negative CO2 emissions technologies by mid-century to meet stringent climate mitigation goals and yield a net drawdown of atmospheric carbon. Yet there are few commercial deployments of bioenergy with carbon capture and storage (BECCS) outside of niche markets, creating uncertainty about commercialization pathways and sustainability impacts at scale. This uncertainty is exacerbated by the absence of a strong policy framework, such as high carbon prices and research coordination. Here, we propose a strategy for the potential commercial deployment of BECCS. This roadmap proceeds via three steps: 1) via capture and utilization of biogenic CO2 from existing bioenergy facilities, notably ethanol fermentation, 2) via thermochemical co-conversion of biomass and fossil fuels, particularly coal, and 3) via dedicated, large-scale BECCS. Although biochemical conversion is a proven first market for BECCS, this trajectory alone is unlikely to drive commercialization of BECCS at the gigatonne scale. In contrast to biochemical conversion, thermochemical conversion of coal and biomass enables large-scale production of fuels and electricity with a wide range of carbon intensities, process efficiencies and process scales. Aside from systems integration, primarily technical barriers are involved in large-scale biomass logistics, gasification and gas cleaning. Key uncertainties around large-scale BECCS deployment are not limited to commercialization pathways; rather, they include physical constraints on biomass cultivation or CO2 storage, as well as social barriers, including public acceptance of new technologies and conceptions of renewable and fossil energy, which co-conversion systems confound. Despite sustainability risks, this commercialization strategy presents a pathway where energy suppliers, manufacturers and governments could transition from laggards to leaders in climate change mitigation efforts.
Medieval warming initiated exceptionally large wildfire outbreaks in the Rocky Mountains
Calder, W. John; Parker, Dusty; Stopka, Cody J.; Jiménez-Moreno, Gonzalo; Shuman, Bryan N.
2015-01-01
Many of the largest wildfires in US history burned in recent decades, and climate change explains much of the increase in area burned. The frequency of extreme wildfire weather will increase with continued warming, but many uncertainties still exist about future fire regimes, including how the risk of large fires will persist as vegetation changes. Past fire-climate relationships provide an opportunity to constrain the related uncertainties, and reveal widespread burning across large regions of western North America during past warm intervals. Whether such episodes also burned large portions of individual landscapes has been difficult to determine, however, because uncertainties with the ages of past fires and limited spatial resolution often prohibit specific estimates of past area burned. Accounting for these challenges in a subalpine landscape in Colorado, we estimated century-scale fire synchroneity across 12 lake-sediment charcoal records spanning the past 2,000 y. The percentage of sites burned only deviated from the historic range of variability during the Medieval Climate Anomaly (MCA) between 1,200 and 850 y B.P., when temperatures were similar to recent decades. Between 1,130 and 1,030 y B.P., 83% (median estimate) of our sites burned when temperatures increased ∼0.5 °C relative to the preceding centuries. Lake-based fire rotation during the MCA decreased to an estimated 120 y, representing a 260% higher rate of burning than during the period of dendroecological sampling (360 to −60 y B.P.). Increased burning, however, did not persist throughout the MCA. Burning declined abruptly before temperatures cooled, indicating possible fuel limitations to continued burning. PMID:26438834
Uncertainty in aerosol hygroscopicity resulting from semi-volatile organic compounds
NASA Astrophysics Data System (ADS)
Goulden, Olivia; Crooks, Matthew; Connolly, Paul
2018-01-01
We present a novel method of exploring the effect of uncertainties in aerosol properties on cloud droplet number using existing cloud droplet activation parameterisations. Aerosol properties of a single involatile particle mode are randomly sampled within an uncertainty range and resulting maximum supersaturations and critical diameters calculated using the cloud droplet activation scheme. Hygroscopicity parameters are subsequently derived and the values of the mean and uncertainty are found to be comparable to experimental observations. A recently proposed cloud droplet activation scheme that includes the effects of co-condensation of semi-volatile organic compounds (SVOCs) onto a single lognormal mode of involatile particles is also considered. In addition to the uncertainties associated with the involatile particles, concentrations, volatility distributions and chemical composition of the SVOCs are randomly sampled and hygroscopicity parameters are derived using the cloud droplet activation scheme. The inclusion of SVOCs is found to have a significant effect on the hygroscopicity and contributes a large uncertainty. For non-volatile particles that are effective cloud condensation nuclei, the co-condensation of SVOCs reduces their actual hygroscopicity by approximately 25 %. A new concept of an effective hygroscopicity parameter is introduced that can computationally efficiently simulate the effect of SVOCs on cloud droplet number concentration without direct modelling of the organic compounds. These effective hygroscopicities can be as much as a factor of 2 higher than those of the non-volatile particles onto which the volatile organic compounds condense.
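The step of deriving a hygroscopicity parameter from the activation scheme's outputs can be illustrated with the standard kappa-Koehler approximation, which links a critical dry diameter and a maximum supersaturation to kappa. The constants and the sampled ranges below stand in for the parameterisation outputs and are illustrative only, not the paper's actual configuration.

```python
import numpy as np

# Derive a kappa hygroscopicity parameter from a critical dry diameter D_d and
# maximum supersaturation s_max using the standard kappa-Koehler approximation:
#   kappa ~= 4 A^3 / (27 D_d^3 ln^2(S_c)),   A = 4 sigma M_w / (R T rho_w).
# Sampled ranges are illustrative stand-ins for the activation-scheme outputs.

rng = np.random.default_rng(7)
sigma_w, M_w, rho_w, R, T = 0.072, 0.018, 1000.0, 8.314, 293.15
A = 4 * sigma_w * M_w / (R * T * rho_w)          # Kelvin coefficient, about 2.1e-9 m

def kappa_from_activation(D_d, s_max_percent):
    S_c = 1.0 + s_max_percent / 100.0            # critical saturation ratio
    return 4 * A**3 / (27 * D_d**3 * np.log(S_c)**2)

# Propagate uncertainty in the activation-scheme outputs into kappa
D_d = rng.normal(60e-9, 5e-9, 10000)             # critical dry diameter (m)
s_max = rng.normal(0.4, 0.05, 10000)             # peak supersaturation (%)
kappa = kappa_from_activation(D_d, s_max)
print(f"kappa = {kappa.mean():.2f} +/- {kappa.std():.2f}")
```

Shifting the sampled critical diameters upward, as co-condensing organics effectively do, lowers the derived kappa, which is the direction of the roughly 25% reduction reported above.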
NASA Astrophysics Data System (ADS)
White, J. R.; DeLaune, R. D.; Roy, E. D.; Corstanje, R.
2014-12-01
The highly visible phenomenon of wetland loss in coastal Louisiana (LA) is examined through the prism of carbon accumulation, wetland loss and greenhouse gas (GHG) emissions. The Mississippi River Deltaic region experiences higher relative sea level rise due to coupled subsidence and eustatic sea level rise, allowing this region to serve as a proxy for future projected global sea level rise. Carbon storage or sequestration in rapidly subsiding LA coastal marsh soils is based on vertical marsh accretion and areal change data. While coastal marshes sequester significant amounts of carbon through vertical accretion, large amounts of carbon previously sequestered in the soil profile are lost through annual deterioration of these coastal marshes as well as through GHG emissions. Efforts are underway in Louisiana to access the carbon credit market in order to provide significant funding for coastal restoration projects. However, there is very large uncertainty on GHG emission rates related to both marsh type and temporal (daily and seasonal) effects. Very little data currently exist to address this uncertainty, which can significantly affect the carbon credit value of a particular wetland system. We provide an analysis of GHG emission rates for coastal freshwater, brackish, and salt marshes compared to the net soil carbon sequestration rate. Results demonstrate that there is very high uncertainty on GHG emissions, which can substantially alter the carbon credit value of a particular wetland system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, William A., E-mail: wadawson@ucdavis.edu
2013-08-01
Merging galaxy clusters have become one of the most important probes of dark matter, providing evidence for dark matter over modified gravity and even constraints on the dark matter self-interaction cross-section. To properly constrain the dark matter cross-section it is necessary to understand the dynamics of the merger, as the inferred cross-section is a function of both the velocity of the collision and the observed time since collision. While the best understanding of merging system dynamics comes from N-body simulations, these are computationally intensive and often explore only a limited volume of the merger phase space allowed by observed parameter uncertainty. Simple analytic models exist but the assumptions of these methods invalidate their results near the collision time, plus error propagation of the highly correlated merger parameters is unfeasible. To address these weaknesses I develop a Monte Carlo method to discern the properties of dissociative mergers and propagate the uncertainty of the measured cluster parameters in an accurate and Bayesian manner. I introduce this method, verify it against an existing hydrodynamic N-body simulation, and apply it to two known dissociative mergers: 1ES 0657-558 (Bullet Cluster) and DLSCL J0916.2+2951 (Musket Ball Cluster). I find that this method surpasses existing analytic models, providing accurate (10% level) dynamic parameter and uncertainty estimates throughout the merger history. This, coupled with minimal required a priori information (subcluster mass, redshift, and projected separation) and relatively fast computation (approximately 6 CPU hours), makes this method ideal for large samples of dissociative merging clusters.
Predicting long-range transport: a systematic evaluation of two multimedia transport models.
Bennett, D H; Scheringer, M; McKone, T E; Hungerbühler, K
2001-03-15
The United Nations Environment Program has recently developed criteria to identify and restrict chemicals with a potential for persistence and long-range transport (persistent organic pollutants or POPs). There are many stakeholders involved, and the issues are not only scientific but also include social, economic, and political factors. This work focuses on one aspect of the POPs debate, the criteria for determining the potential for long-range transport (LRT). Our goal is to determine if current models are reliable enough to support decisions that classify a chemical based on the LRT potential. We examine the robustness of two multimedia fate models for determining the relative ranking and absolute spatial range of various chemicals in the environment. We also consider the effect of parameter uncertainties and the model uncertainty associated with the selection of an algorithm for gas-particle partitioning on the model results. Given the same chemical properties, both models give virtually the same ranking. However, when chemical parameter uncertainties and model uncertainties such as particle partitioning are considered, the spatial range distributions obtained for the individual chemicals overlap, preventing a distinct rank order. The absolute values obtained for the predicted spatial range or travel distance differ significantly between the two models for the uncertainties evaluated. We find that to evaluate a chemical when large and unresolved uncertainties exist, it is more informative to use two or more models and include multiple types of uncertainty. Model differences and uncertainties must be explicitly confronted to determine how the limitations of scientific knowledge impact predictions in the decision-making process.
Liu, Dan; Cai, Wenwen; Xia, Jiangzhou; Dong, Wenjie; Zhou, Guangsheng; Chen, Yang; Zhang, Haicheng; Yuan, Wenping
2014-01-01
Gross Primary Production (GPP) is the largest flux in the global carbon cycle. However, large uncertainties in current global estimations persist. In this study, we examined the performance of a process-based model (Integrated BIosphere Simulator, IBIS) at 62 eddy covariance sites around the world. Our results indicated that the IBIS model explained 60% of the observed variation in daily GPP at all validation sites. Comparison with a satellite-based vegetation model (Eddy Covariance-Light Use Efficiency, EC-LUE) revealed that the IBIS simulations yielded GPP results comparable to those of the EC-LUE model. Global mean GPP estimated by the IBIS model was 107.50±1.37 Pg C year(-1) (mean value ± standard deviation) across the vegetated area for the period 2000-2006, consistent with the results of the EC-LUE model (109.39±1.48 Pg C year(-1)). To evaluate the uncertainty introduced by the parameter Vcmax, which represents the maximum photosynthetic capacity, we inverted Vcmax using Markov Chain Monte Carlo (MCMC) procedures. Using the inverted Vcmax values, the simulated global GPP increased by 16.5 Pg C year(-1), indicating that the IBIS model is sensitive to Vcmax and that large uncertainty exists in model parameterization.
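The Vcmax inversion mentioned above relies on MCMC sampling; the sketch below shows a generic random-walk Metropolis step against a single pseudo-observation, with gpp_model a hypothetical one-parameter stand-in for the IBIS photosynthesis routine and all numbers invented:

    import numpy as np

    rng = np.random.default_rng(0)

    def gpp_model(vcmax):
        # Hypothetical stand-in for IBIS: GPP saturates with increasing Vcmax
        return 10.0 * vcmax / (vcmax + 40.0)

    obs, sigma = 7.2, 0.8      # "observed" daily GPP and its error (g C m-2 d-1), illustrative

    def log_post(vcmax):
        if not (5.0 < vcmax < 200.0):          # flat prior on a plausible range
            return -np.inf
        return -0.5 * ((gpp_model(vcmax) - obs) / sigma) ** 2

    chain, v = [], 60.0
    for _ in range(20000):
        prop = v + rng.normal(0.0, 5.0)        # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(v):
            v = prop
        chain.append(v)

    burned = np.array(chain[5000:])            # discard burn-in
    print(f"Vcmax posterior: {burned.mean():.1f} +/- {burned.std():.1f} umol m-2 s-1")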
NASA Technical Reports Server (NTRS)
Ganachaud, Alexandre; Wunsch, Carl; Kim, Myung-Chan; Tapley, Byron
1997-01-01
A global estimate of the absolute oceanic general circulation from a geostrophic inversion of in situ hydrographic data is tested against and then combined with an estimate obtained from TOPEX/POSEIDON altimetric data and a geoid model computed using the JGM-3 gravity-field solution. Within the quantitative uncertainties of both the hydrographic inversion and the geoid estimate, the two estimates derived by very different methods are consistent. When the in situ inversion is combined with the altimetry/geoid scheme using a recursive inverse procedure, a new solution, fully consistent with both hydrography and altimetry, is found. There is, however, little reduction in the uncertainties of the calculated ocean circulation and its mass and heat fluxes because the best available geoid estimate remains noisy relative to the purely oceanographic inferences. The conclusion drawn from this is that the comparatively large errors present in the existing geoid models now limit the ability of satellite altimeter data to improve directly the general ocean circulation models derived from in situ measurements. Because improvements in the geoid could be realized through a dedicated spaceborne gravity recovery mission, the impact of hypothetical much better, future geoid estimates on the circulation uncertainty is also quantified, showing significant hypothetical reductions in the uncertainties of oceanic transport calculations. Full ocean general circulation models could better exploit both existing oceanographic data and future gravity-mission data, but their present use is severely limited by the inability to quantify their error budgets.
Ma, Shuang; Jiang, Jiang; Huang, Yuanyuan; ...
2017-10-20
Large uncertainties exist in predicting responses of wetland methane (CH4) fluxes to future climate change. However, sources of the uncertainty have not been clearly identified despite the fact that methane production and emission processes have been extensively explored. In this study, we took advantage of manual CH4 flux measurements under ambient environment from 2011 to 2014 at the Spruce and Peatland Responses Under Changing Environments (SPRUCE) experimental site and developed a data-informed process-based methane module. The module was incorporated into the Terrestrial ECOsystem (TECO) model before its parameters were constrained with multiple years of methane flux data for forecasting CH4 emission under five warming and two elevated CO2 treatments at SPRUCE. We found that 9°C warming treatments significantly increased methane emission by approximately 400%, and elevated CO2 treatments stimulated methane emission by 10.4%–23.6% in comparison with ambient conditions. The relative contribution of plant-mediated transport to methane emission decreased from 96% at the control to 92% at the 9°C warming, largely to compensate for an increase in ebullition. The uncertainty in plant-mediated transportation and ebullition increased with warming and contributed to the overall changes of emissions uncertainties. At the same time, our modeling results indicated a significant increase in the emitted CH4:CO2 ratio. This result, together with the larger warming potential of CH4, will lead to a strong positive feedback from terrestrial ecosystems to climate warming. In conclusion, the model-data fusion approach used in this study enabled parameter estimation and uncertainty quantification for forecasting methane fluxes.
NASA Astrophysics Data System (ADS)
Wu, D.; Lin, J. C.; Oda, T.; Ye, X.; Lauvaux, T.; Yang, E. G.; Kort, E. A.
2017-12-01
Urban regions are large emitters of CO2 whose emission inventories are still associated with large uncertainties. Therefore, a strong need exists to better quantify emissions from megacities using a top-down approach. Satellites such as the Orbiting Carbon Observatory-2 (OCO-2) provide a platform for monitoring spatiotemporal column CO2 concentrations (XCO2). In this study, we present a Lagrangian receptor-oriented model framework and evaluate "model-retrieved" XCO2 by comparing against OCO-2-retrieved XCO2 for three megacities/regions (Riyadh, Cairo and Pearl River Delta). OCO-2 soundings indicate pronounced XCO2 enhancements (dXCO2) when crossing Riyadh, which are successfully captured by our model with a slight latitude shift. From this model framework, we can identify and compare the relative contributions to dXCO2 from anthropogenic emissions versus biospheric fluxes. In addition, to impose constraints on emissions for Riyadh through inversion methods, three uncertainty sources are addressed in this study: 1) transport errors, 2) receptor and model setups in atmospheric models, and 3) urban emission uncertainties. For 1), we calculate transport errors by adding a wind error component to randomize particle distributions. For 2), a set of sensitivity tests using a bootstrap method is performed to identify proper ways to set up receptors in Lagrangian models. For 3), both emission uncertainties from the Fossil Fuel Data Assimilation System (FFDAS) and the spread among three emission inventories are used to approximate an overall fractional uncertainty in the modeled anthropogenic signal (dXCO2.anthro). Lastly, we investigate the definition of background (clean) XCO2 for megacities from retrieved XCO2 by means of statistical tools and our model framework.
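The receptor-sensitivity test in point 2) can be illustrated with a short bootstrap sketch; the enhancement values below are invented and simply show how resampling the receptor set translates into an uncertainty on the mean urban signal:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical modeled XCO2 enhancements (ppm) at receptors along an OCO-2 track
    dxco2 = np.array([0.3, 0.8, 1.9, 2.4, 1.6, 0.9, 0.4, 0.2, 1.1, 2.0])

    # Resample the receptor set with replacement and recompute the mean enhancement
    boot_means = np.array([rng.choice(dxco2, size=dxco2.size, replace=True).mean()
                           for _ in range(10000)])
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    print(f"mean dXCO2 = {dxco2.mean():.2f} ppm, 95% bootstrap interval [{lo:.2f}, {hi:.2f}] ppm")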
NASA Astrophysics Data System (ADS)
Ma, Shuang; Jiang, Jiang; Huang, Yuanyuan; Shi, Zheng; Wilson, Rachel M.; Ricciuto, Daniel; Sebestyen, Stephen D.; Hanson, Paul J.; Luo, Yiqi
2017-11-01
Large uncertainties exist in predicting responses of wetland methane (CH4) fluxes to future climate change. However, sources of the uncertainty have not been clearly identified despite the fact that methane production and emission processes have been extensively explored. In this study, we took advantage of manual CH4 flux measurements under ambient environment from 2011 to 2014 at the Spruce and Peatland Responses Under Changing Environments (SPRUCE) experimental site and developed a data-informed process-based methane module. The module was incorporated into the Terrestrial ECOsystem (TECO) model before its parameters were constrained with multiple years of methane flux data for forecasting CH4 emission under five warming and two elevated CO2 treatments at SPRUCE. We found that 9°C warming treatments significantly increased methane emission by approximately 400%, and elevated CO2 treatments stimulated methane emission by 10.4%-23.6% in comparison with ambient conditions. The relative contribution of plant-mediated transport to methane emission decreased from 96% at the control to 92% at the 9°C warming, largely to compensate for an increase in ebullition. The uncertainty in plant-mediated transportation and ebullition increased with warming and contributed to the overall changes of emissions uncertainties. At the same time, our modeling results indicated a significant increase in the emitted CH4:CO2 ratio. This result, together with the larger warming potential of CH4, will lead to a strong positive feedback from terrestrial ecosystems to climate warming. The model-data fusion approach used in this study enabled parameter estimation and uncertainty quantification for forecasting methane fluxes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Shuang; Jiang, Jiang; Huang, Yuanyuan
Large uncertainties exist in predicting responses of wetland methane (CH4) fluxes to future climate change. However, sources of the uncertainty have not been clearly identified despite the fact that methane production and emission processes have been extensively explored. In this study, we took advantage of manual CH4 flux measurements under ambient environment from 2011 to 2014 at the Spruce and Peatland Responses Under Changing Environments (SPRUCE) experimental site and developed a data-informed process-based methane module. The module was incorporated into the Terrestrial ECOsystem (TECO) model before its parameters were constrained with multiple years of methane flux data for forecasting CH4 emission under five warming and two elevated CO2 treatments at SPRUCE. We found that 9°C warming treatments significantly increased methane emission by approximately 400%, and elevated CO2 treatments stimulated methane emission by 10.4%–23.6% in comparison with ambient conditions. The relative contribution of plant-mediated transport to methane emission decreased from 96% at the control to 92% at the 9°C warming, largely to compensate for an increase in ebullition. The uncertainty in plant-mediated transportation and ebullition increased with warming and contributed to the overall changes of emissions uncertainties. At the same time, our modeling results indicated a significant increase in the emitted CH4:CO2 ratio. This result, together with the larger warming potential of CH4, will lead to a strong positive feedback from terrestrial ecosystems to climate warming. In conclusion, the model-data fusion approach used in this study enabled parameter estimation and uncertainty quantification for forecasting methane fluxes.
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environmental monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly computationally demanding methods for analysis and control design as the network size and node system/interaction complexity increase. Therefore, finding scalable computational methods for distributed control design of large-scale networks is a challenging problem. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved using MATLAB toolboxes. The stabilisability of each node dynamic is a sufficient assumption to design a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicable scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirement in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
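The paper formulates and solves its conditions as LMIs in MATLAB; purely as an illustration of what solving such a condition amounts to, the sketch below poses the core Lyapunov-type LMI for a single toy node in Python with CVXPY (a substitution on my part, not the paper's toolchain):

    import numpy as np
    import cvxpy as cp

    # Toy stable node dynamics; the paper's LMIs are richer and couple many nodes
    A = np.array([[0.0, 1.0],
                  [-2.0, -0.5]])
    n = A.shape[0]
    eps = 1e-3

    P = cp.Variable((n, n), symmetric=True)
    constraints = [P >> eps * np.eye(n),                    # P positive definite
                   A.T @ P + P @ A << -eps * np.eye(n)]     # Lyapunov decrease condition
    prob = cp.Problem(cp.Minimize(0), constraints)          # pure feasibility problem
    prob.solve()
    print("LMI feasible:", prob.status)
    print("P =", P.value)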
Dall'Olmo, Giorgio; Brewin, Robert J W; Nencioli, Francesco; Organelli, Emanuele; Lefering, Ina; McKee, David; Röttgers, Rüdiger; Mitchell, Catherine; Boss, Emmanuel; Bricaud, Annick; Tilstone, Gavin
2017-11-27
Measurements of the absorption coefficient of chromophoric dissolved organic matter (ay) are needed to validate existing ocean-color algorithms. In the surface open ocean, these measurements are challenging because of low ay values. Yet, existing global datasets demonstrate that ay could contribute between 30% and 50% of the total absorption budget in the 400-450 nm spectral range, thus making accurate measurement of ay essential to constrain these uncertainties. In this study, we present a simple way of determining ay using a commercially-available in-situ spectrophotometer operated in underway mode. The obtained ay values were validated using independent collocated measurements. The method is simple to implement, can provide measurements with very high spatio-temporal resolution, and has an accuracy of about 0.0004 m-1 and a precision of about 0.0025 m-1 when compared to independent data (at 440 nm). The only limitation for using this method at sea is that it relies on the availability of relatively large volumes of ultrapure water. Despite this limitation, the method can deliver the ay data needed for validating and assessing uncertainties in ocean-colour algorithms.
NASA Astrophysics Data System (ADS)
Huang, Danqing; Yan, Peiwen; Zhu, Jian; Zhang, Yaocun; Kuang, Xueyuan; Cheng, Jing
2018-04-01
The uncertainty of global summer precipitation simulated by the 23 CMIP5 CGCMs and the possible impacts of model resolutions are investigated in this study. Large uncertainties exist over the tropical and subtropical regions, which can be mainly attributed to convective precipitation simulation. High-resolution models (HRMs) and low-resolution models (LRMs) are further investigated to demonstrate their different contributions to the uncertainties of the ensemble mean. It shows that the high-resolution model ensemble mean (HMME) and the low-resolution model ensemble mean (LMME) mitigate the biases between the MME and observation over most continents and oceans, respectively. The HMME simulates more precipitation than the LMME over most oceans, but less precipitation over some continents. The dominant precipitation category in the HRMs (LRMs) is heavy precipitation (moderate precipitation) over the tropical regions. The combinations of convective and stratiform precipitation are also quite different: the HMME has a much higher ratio of stratiform precipitation while the LMME has more convective precipitation. Finally, differences in precipitation between the HMME and LMME can be traced to their differences in the SST simulations via local and remote air-sea interactions.
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; van Leeuwen, P. J.
2017-12-01
Model Uncertainty Quantification remains one of the central challenges of effective Data Assimilation (DA) in complex partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that these realisations are binned conditioned on the previous model state during the minimization process, allowing complex error structures to be recovered. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.
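A toy sketch of the state-conditioned binning step (synthetic error realisations with an invented state dependence, not the Lorenz '96 experiment itself):

    import numpy as np

    rng = np.random.default_rng(2)

    # Training record: previous resolved state and the model-error realisation found for the next step
    x_prev = rng.uniform(-5.0, 10.0, 5000)
    error = 0.3 * np.tanh(x_prev) + rng.normal(0.0, 0.1, 5000)

    # Bin the realisations on the previous state and store per-bin statistics
    edges = np.linspace(-5.0, 10.0, 16)
    idx = np.digitize(x_prev, edges) - 1
    mean_err = np.array([error[idx == b].mean() for b in range(len(edges) - 1)])
    std_err = np.array([error[idx == b].std() for b in range(len(edges) - 1)])

    # At forecast time, draw a stochastic model-error term conditioned on the current state
    x_now = 3.7
    b = int(np.clip(np.digitize(x_now, edges) - 1, 0, len(edges) - 2))
    draw = rng.normal(mean_err[b], std_err[b])
    print(f"state {x_now}: error draw {draw:.3f} (bin mean {mean_err[b]:.3f}, sd {std_err[b]:.3f})")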
Uncertainty relations as Hilbert space geometry
NASA Technical Reports Server (NTRS)
Braunstein, Samuel L.
1994-01-01
Precision measurements involve the accurate determination of parameters through repeated measurements of identically prepared experimental setups. For many parameters there is a 'natural' choice for the quantum observable which is expected to give optimal information; and from this observable one can construct a Heisenberg uncertainty principle (HUP) bound on the precision attainable for the parameter. However, the classical statistics of multiple sampling directly gives us tools to construct bounds for the precision available for the parameters of interest (even when no obvious natural quantum observable exists, such as for phase, or time); it is found that these direct bounds are more restrictive than those of the HUP. The implication is that the natural quantum observables typically do not encode the optimal information (even for observables such as position, and momentum); we show how this can be understood simply in terms of the Hilbert space geometry. Another striking feature of these bounds to parameter uncertainty is that for a large enough number of repetitions of the measurements all quantum states are 'minimum uncertainty' states - not just Gaussian wave-packets. Thus, these bounds tell us what precision is achievable as well as merely what is allowed.
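In modern notation, the direct bounds from the classical statistics of multiple sampling are Cramér-Rao-type bounds; the following summary is standard material rather than the paper's own derivation:

\[
\delta\theta \;\ge\; \frac{1}{\sqrt{N\,F(\theta)}},
\qquad
F(\theta) = \sum_x p(x\,|\,\theta)\,\bigl[\partial_\theta \ln p(x\,|\,\theta)\bigr]^2 ,
\]

where N is the number of repeated measurements and F(θ) the Fisher information of a single measurement. Optimizing over all quantum measurements replaces F(θ) by the quantum Fisher information F_Q(θ) = 4(⟨∂_θψ|∂_θψ⟩ - |⟨ψ|∂_θψ⟩|²), which is set by the Hilbert-space distance between the neighbouring states |ψ(θ)⟩ and |ψ(θ + dθ)⟩; this is what underlies the claim above that such bounds can be more restrictive than the HUP estimate.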
ProMotE: an efficient algorithm for counting independent motifs in uncertain network topologies.
Ren, Yuanfang; Sarkar, Aisharjya; Kahveci, Tamer
2018-06-26
Identifying motifs in biological networks is essential in uncovering key functions served by these networks. Finding non-overlapping motif instances is however a computationally challenging task. The fact that biological interactions are uncertain events further complicates the problem, as it makes the existence of an embedding of a given motif an uncertain event as well. In this paper, we develop a novel method, ProMotE (Probabilistic Motif Embedding), to count non-overlapping embeddings of a given motif in probabilistic networks. We utilize a polynomial model to capture the uncertainty. We develop three strategies to scale our algorithm to large networks. Our experiments demonstrate that our method scales to large networks in practical time with high accuracy where existing methods fail. Moreover, our experiments on cancer and degenerative disease networks show that our method helps in uncovering key functional characteristics of biological networks.
Seinfeld, John H; Bretherton, Christopher; Carslaw, Kenneth S; Coe, Hugh; DeMott, Paul J; Dunlea, Edward J; Feingold, Graham; Ghan, Steven; Guenther, Alex B; Kahn, Ralph; Kraucunas, Ian; Kreidenweis, Sonia M; Molina, Mario J; Nenes, Athanasios; Penner, Joyce E; Prather, Kimberly A; Ramanathan, V; Ramaswamy, Venkatachalam; Rasch, Philip J; Ravishankara, A R; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert
2016-05-24
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
NASA Technical Reports Server (NTRS)
Seinfeld, John H.; Bretherton, Christopher; Carslaw, Kenneth S.; Coe, Hugh; DeMott, Paul J.; Dunlea, Edward J.; Feingold, Graham; Ghan, Steven; Guenther, Alex B.; Kahn, Ralph;
2016-01-01
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
Seinfeld, John H.; Bretherton, Christopher; Carslaw, Kenneth S.; ...
2016-05-24
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth’s clouds is the most uncertain component of the overall global radiative forcing from pre-industrial time. General Circulation Models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. Lastly, we suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
Seinfeld, John H.; Bretherton, Christopher; Carslaw, Kenneth S.; Coe, Hugh; DeMott, Paul J.; Dunlea, Edward J.; Feingold, Graham; Ghan, Steven; Guenther, Alex B.; Kraucunas, Ian; Molina, Mario J.; Nenes, Athanasios; Penner, Joyce E.; Prather, Kimberly A.; Ramanathan, V.; Ramaswamy, Venkatachalam; Rasch, Philip J.; Ravishankara, A. R.; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert
2016-01-01
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth’s clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
NASA Astrophysics Data System (ADS)
Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie
2017-09-01
Automotive brake systems are always subjected to various types of uncertainties and two types of random-fuzzy uncertainties may exist in the brakes. In this paper, a unified approach is proposed for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties. In the proposed approach, two uncertainty analysis models with mixed variables are introduced to model the random-fuzzy uncertainties. The first one is the random and fuzzy model, in which random variables and fuzzy variables exist simultaneously and independently. The second one is the fuzzy random model, in which uncertain parameters are all treated as random variables while their distribution parameters are expressed as fuzzy numbers. Firstly, the fuzziness is discretized by using α-cut technique and the two uncertainty analysis models are simplified into random-interval models. Afterwards, by temporarily neglecting interval uncertainties, the random-interval models are degraded into random models, in which the expectations, variances, reliability indexes and reliability probabilities of system stability functions are calculated. And then, by reconsidering the interval uncertainties, the bounds of the expectations, variances, reliability indexes and reliability probabilities are computed based on Taylor series expansion. Finally, by recomposing the analysis results at each α-cut level, the fuzzy reliability indexes and probabilities can be obtained, by which the brake squeal instability can be evaluated. The proposed approach gives a general framework to deal with both types of random-fuzzy uncertainties that may exist in the brakes and its effectiveness is demonstrated by numerical examples. It will be a valuable supplement to the systematic study of brake squeal considering uncertainty.
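A toy illustration of the α-cut step for a single fuzzy parameter: a triangular fuzzy mean resistance in a simple limit-state function (all values invented). The reliability index is monotone in that parameter, so evaluating the interval end points bounds it at each membership level:

    import numpy as np

    # Limit-state g = R - S with load S ~ N(mu_S, sig_S) and resistance R ~ N(mu_R, sig_R);
    # the mean resistance mu_R is a triangular fuzzy number (a_low, a_mid, a_high)
    mu_S, sig_S, sig_R = 50.0, 5.0, 4.0
    a_low, a_mid, a_high = 58.0, 62.0, 66.0

    def beta(mu_R):
        # Reliability index for the normal-normal limit state
        return (mu_R - mu_S) / np.hypot(sig_R, sig_S)

    for alpha in np.linspace(0.0, 1.0, 5):
        # The alpha-cut of a triangular fuzzy number is an interval; beta is monotone in mu_R,
        # so the interval end points give the bounds of beta at this membership level
        lo = a_low + alpha * (a_mid - a_low)
        hi = a_high - alpha * (a_high - a_mid)
        print(f"alpha = {alpha:.2f}: beta in [{beta(lo):.2f}, {beta(hi):.2f}]")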
NASA Technical Reports Server (NTRS)
Mizell, Carolyn; Malone, Linda
2007-01-01
It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with any high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project will be analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.
Rogers, Donald W; Zavitsas, Andreas A
2017-01-06
Despite their abundance in nature and their importance in biology, medicine, nutrition, and industry, gas phase enthalpies of formation of many long chain saturated and unsaturated fatty acids and of dicarboxylic acids are either unavailable or have been estimated with large uncertainties. Available experimental values for stearic acid show a spread of 68 kJ mol-1. This work fills the knowledge gap by obtaining reliable values from quantum theoretical calculations using G4 model chemistry. Compounds with up to 20 carbon atoms are treated. The theoretical results are in excellent agreement with well-established experimental values when such values exist, and they provide a large number of previously unavailable values.
Metrics for evaluating performance and uncertainty of Bayesian network models
Bruce G. Marcot
2012-01-01
This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...
Performance-Oriented Privacy-Preserving Data Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pon, R K; Critchlow, T
2004-09-15
Current solutions to integrating private data with public data have provided useful privacy metrics, such as relative information gain, that can be used to evaluate alternative approaches. Unfortunately, they have not addressed critical performance issues, especially when the public database is very large. The use of hashes and noise yields better performance than existing techniques while still making it difficult for unauthorized entities to distinguish which data items truly exist in the private database. As we show here, leveraging the uncertainty introduced by collisions caused by hashing and the injection of noise, we present a technique for performing a relational join operation between a massive public table and a relatively smaller private one.
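A minimal sketch of the hash-plus-noise idea with illustrative identifiers and a hypothetical 6-hex-character digest truncation (the original work's scheme and parameters may differ):

    import hashlib
    import secrets

    def h(value: str) -> str:
        # Truncated SHA-256 digest; short digests raise the collision rate, which is part of the privacy argument
        return hashlib.sha256(value.encode()).hexdigest()[:6]

    private_ids = {"alice@x.org", "bob@y.org", "carol@z.org"}
    public_ids = {"bob@y.org", "dave@w.org", "erin@v.org"}

    # Publish hashed private keys plus random noise digests so an observer cannot tell which digests are real
    published = {h(v) for v in private_ids} | {secrets.token_hex(3) for _ in range(5)}

    # Join against digests of the (large) public table; hits are candidates, not certainties,
    # because both hash collisions and the injected noise can produce spurious matches
    candidates = {v for v in public_ids if h(v) in published}
    print("candidate matches:", candidates)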
Solar rotation effects on the thermospheres of Mars and Earth.
Forbes, Jeffrey M; Bruinsma, Sean; Lemoine, Frank G
2006-06-02
The responses of Earth's and Mars' thermospheres to the quasi-periodic (27-day) variation of solar flux due to solar rotation were measured contemporaneously, revealing that this response is twice as large for Earth as for Mars. Per typical 20-unit change in 10.7-centimeter radio flux (used as a proxy for extreme ultraviolet flux) reaching each planet, we found temperature changes of 42.0 +/- 8.0 kelvin and 19.2 +/- 3.6 kelvin for Earth and Mars, respectively. Existing data for Venus indicate values of 3.6 +/- 0.6 kelvin. Our observational result constrains comparative planetary thermosphere simulations and may help resolve existing uncertainties in thermal balance processes, particularly CO2 cooling.
NASA Astrophysics Data System (ADS)
Yang, P.; Ng, T. L.; Yang, W.
2015-12-01
Effective water resources management depends on the reliable estimation of the uncertainty of drought events. Confidence intervals (CIs) are commonly applied to quantify this uncertainty. A CI seeks to be at the minimal length necessary to cover the true value of the estimated variable with the desired probability. In drought analysis, where two or more variables (e.g., duration and severity) are often used to describe a drought, copulas have been found suitable for representing the joint probability behavior of these variables. However, the comprehensive assessment of the parameter uncertainties of copulas of droughts has been largely ignored, and the few studies that have recognized this issue have not explicitly compared the various methods to produce the best CIs. Thus, the objective of this study is to compare the CIs generated using two widely applied uncertainty estimation methods, bootstrapping and Markov Chain Monte Carlo (MCMC). To achieve this objective, (1) the marginal distributions lognormal, Gamma, and Generalized Extreme Value, and the copula functions Clayton, Frank, and Plackett are selected to construct joint probability functions of two drought-related variables; (2) the resulting joint functions are then fitted to 200 sets of simulated realizations of drought events with known distribution and extreme parameters; and (3) from there, using bootstrapping and MCMC, CIs of the parameters are generated and compared. The effect of an informative prior on the CIs generated by MCMC is also evaluated. CIs are produced for different sample sizes (50, 100, and 200) of the simulated drought events for fitting the joint probability functions. Preliminary results assuming lognormal marginal distributions and the Clayton copula function suggest that for cases with small or medium sample sizes (~50-100), MCMC is the superior method if an informative prior exists. Where an informative prior is unavailable, for small sample sizes (~50), both bootstrapping and MCMC yield the same level of performance, and for medium sample sizes (~100), bootstrapping is better. For cases with a large sample size (~200), there is little difference between the CIs generated using bootstrapping and MCMC regardless of whether or not an informative prior exists.
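For concreteness, the bootstrap branch of the comparison can be sketched as follows, using a Clayton copula sample as stand-in duration/severity data and a moment estimator for the copula parameter (all settings illustrative):

    import numpy as np
    from scipy.stats import kendalltau

    rng = np.random.default_rng(3)

    def sample_clayton(theta, n):
        # Conditional-inversion sampler for the Clayton copula
        u = rng.uniform(size=n)
        w = rng.uniform(size=n)
        v = (u ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
        return u, v

    def fit_theta(u, v):
        # Moment estimator via Kendall's tau: tau = theta / (theta + 2)
        tau, _ = kendalltau(u, v)
        return 2.0 * tau / (1.0 - tau)

    u, v = sample_clayton(theta=2.0, n=100)      # pseudo drought duration/severity sample
    boot = []
    for _ in range(2000):
        i = rng.integers(0, u.size, u.size)      # resample pairs with replacement
        boot.append(fit_theta(u[i], v[i]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"theta_hat = {fit_theta(u, v):.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")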
Gao, Xueping; Liu, Yinzhu; Sun, Bowen
2018-06-05
The risk of water shortage caused by uncertainties, such as frequent drought, varied precipitation, multiple water resources, and different water demands, brings new challenges to water transfer projects. Uncertainties exist for transferring water and local surface water; therefore, the relationship between them should be thoroughly studied to prevent water shortage. For more effective water management, an uncertainty-based water shortage risk assessment model (UWSRAM) is developed to study the combined effect of multiple water resources and analyze the shortage degree under uncertainty. The UWSRAM combines copula-based Monte Carlo stochastic simulation and a chance-constrained programming-stochastic multiobjective optimization model, using the Lunan water-receiving area in China as an example. Statistical copula functions are employed to estimate the joint probability of available transferring water and local surface water and to sample from the multivariate probability distribution; these samples are used as inputs for the optimization model. The approach reveals the distribution of water shortage, emphasizes the importance of improving and updating transferring water and local surface water management, and examines their combined influence on water shortage risk assessment. The possible available water and shortages can be calculated by applying the UWSRAM, along with the corresponding allocation measures under different water availability levels and violation probabilities. The UWSRAM is valuable for understanding the overall multi-source water availability and degree of water shortage, adapting to the uncertainty surrounding water resources, establishing effective water resource planning policies for managers, and achieving sustainable development.
NASA Astrophysics Data System (ADS)
Xu, Zhuocan; Mace, Jay; Avalone, Linnea; Wang, Zhien
2015-04-01
The extreme variability of ice particle habits in precipitating clouds affects our understanding of these cloud systems in every aspect (e.g., radiative transfer, dynamics, precipitation rate) and largely contributes to the uncertainties in the model representation of related processes. Ice particle mass-dimensional power-law relationships, M = a*D^b, are commonly assumed in models and retrieval algorithms, while very little knowledge exists regarding the uncertainties of these M-D parameters in real-world situations. In this study, we apply Optimal Estimation (OE) methodology to infer the ice particle mass-dimensional relationship from ice particle size distributions and bulk water contents independently measured on board the University of Wyoming King Air during the Colorado Airborne Multi-Phase Cloud Study (CAMPS). We also utilize W-band radar reflectivity obtained on the same platform (King Air), offering a further constraint to this ill-posed problem (Heymsfield et al. 2010). In addition to the values of the retrieved M-D parameters, the associated uncertainties are conveniently acquired in the OE framework, within the limitations of assumed Gaussian statistics. We find, given the constraints provided by the bulk water measurement and in situ radar reflectivity, that the relative uncertainty of the mass-dimensional power-law prefactor (a) is approximately 80% and the relative uncertainty of the exponent (b) is 10-15%. With this level of uncertainty, the forward model uncertainty in radar reflectivity would be on the order of 4 dB, or a factor of approximately 2.5 in ice water content. The implications of this finding are that inferences of bulk water from either remote or in situ measurements of particle spectra cannot be more certain than this when the mass-dimensional relationships are not known a priori, which is almost never the case.
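As a baseline for what the M-D retrieval estimates, a log-space least-squares fit of the power law on synthetic particle data can be sketched as follows (without the bulk-water and radar constraints that the OE framework adds; all numbers invented):

    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic particle maximum dimensions D (cm) and masses M (g) following M = a*D^b with lognormal scatter
    a_true, b_true = 0.005, 2.1
    D = rng.uniform(0.05, 1.0, 200)
    M = a_true * D ** b_true * np.exp(rng.normal(0.0, 0.2, 200))

    # In log space the power law is linear: ln M = ln a + b ln D, so ordinary least squares
    # recovers (ln a, b) and the parameter covariance gives first-order uncertainties
    X = np.column_stack([np.ones_like(D), np.log(D)])
    coef, *_ = np.linalg.lstsq(X, np.log(M), rcond=None)
    resid = np.log(M) - X @ coef
    cov = np.linalg.inv(X.T @ X) * resid.var(ddof=2)
    ln_a, b = coef
    print(f"a = {np.exp(ln_a):.4f} (rel. unc. ~{100 * np.sqrt(cov[0, 0]):.0f}%), "
          f"b = {b:.2f} +/- {np.sqrt(cov[1, 1]):.2f}")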
Simon, Heather; Allen, David T; Wittig, Ann E
2008-02-01
Emissions inventories of fine particulate matter (PM2.5) were compared with estimates of emissions based on data emerging from U.S. Environmental Protection Agency Particulate Matter Supersites and other field programs. Six source categories for PM2.5 emissions were reviewed: on-road mobile sources, nonroad mobile sources, cooking, biomass combustion, fugitive dust, and stationary sources. Ammonia emissions from all of the source categories were also examined. Regional emissions inventories of PM in the exhaust from on-road and nonroad sources were generally consistent with ambient observations, though uncertainties in some emission factors were twice as large as the emission factors. In contrast, emissions inventories of road dust were up to an order of magnitude larger than ambient observations, and estimated brake wear and tire dust emissions were half as large as ambient observations in urban areas. Although comprehensive nationwide emissions inventories of PM2.5 from cooking sources and biomass burning are not yet available, observational data in urban areas suggest that cooking sources account for approximately 5-20% of total primary emissions (excluding dust), and biomass burning sources are highly dependent on region. Finally, relatively few observational data were available to assess the accuracy of emission estimates for stationary sources. Overall, the uncertainties in primary emissions for PM2.5 are substantial. Similar uncertainties exist for ammonia emissions. Because of these uncertainties, the design of PM2.5 control strategies should be based on inventories that have been refined by a combination of bottom-up and top-down methods.
The Impact of Uncertainty and Irreversibility on Investments in Online Learning
ERIC Educational Resources Information Center
Oslington, Paul
2004-01-01
Uncertainty and irreversibility are central to online learning projects, but have been neglected in the existing educational cost-benefit analysis literature. This paper builds some simple illustrative models of the impact of irreversibility and uncertainty, and shows how different types of cost and demand uncertainty can have substantial impacts…
Cui, Ming; Xu, Lili; Wang, Huimin; Ju, Shaoqing; Xu, Shuizhu; Jing, Rongrong
2017-12-01
Measurement uncertainty (MU) is a metrological concept, which can be used for objectively estimating the quality of test results in medical laboratories. The Nordtest guide recommends an approach that uses both internal quality control (IQC) and external quality assessment (EQA) data to evaluate the MU. Bootstrap resampling is employed to simulate the unknown distribution based on the mathematical statistics method using an existing small sample of data, where the aim is to transform the small sample into a large sample. However, there have been no reports of the utilization of this method in medical laboratories. Thus, this study applied the Nordtest guide approach based on bootstrap resampling for estimating the MU. We estimated the MU for the white blood cell (WBC) count, red blood cell (RBC) count, hemoglobin (Hb), and platelets (Plt). First, we used 6 months of IQC data and 12 months of EQA data to calculate the MU according to the Nordtest method. Second, we combined the Nordtest method and bootstrap resampling with the quality control data and calculated the MU using MATLAB software. We then compared the MU results obtained using the two approaches. The expanded uncertainty results determined for WBC, RBC, Hb, and Plt using the bootstrap resampling method were 4.39%, 2.43%, 3.04%, and 5.92%, respectively, and 4.38%, 2.42%, 3.02%, and 6.00% with the existing quality control data (U [k=2]). For WBC, RBC, Hb, and Plt, the differences between the results obtained using the two methods were lower than 1.33%. The expanded uncertainty values were all less than the target uncertainties. The bootstrap resampling method allows the statistical analysis of the MU. Combining the Nordtest method and bootstrap resampling is considered a suitable alternative method for estimating the MU.
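A compact sketch of the Nordtest-style combination with bootstrap resampling, using synthetic IQC and EQA numbers (the u(Cref) term of the full Nordtest bias formula is omitted here for brevity):

    import numpy as np

    rng = np.random.default_rng(5)

    # Illustrative data: ~6 months of daily IQC results (WBC count, 10^9/L) and EQA percent deviations
    iqc = rng.normal(6.0, 0.13, 180)
    eqa_bias_pct = np.array([1.2, -0.8, 0.5, 1.9, -1.1, 0.7, 1.4, -0.3, 0.9, -1.6, 0.4, 1.0])

    def expanded_uncertainty(iqc_sample, bias_pct, k=2.0):
        u_rw = 100.0 * iqc_sample.std(ddof=1) / iqc_sample.mean()   # within-laboratory reproducibility, %
        u_bias = np.sqrt(np.mean(bias_pct ** 2))                    # bias component from EQA deviations, %
        return k * np.hypot(u_rw, u_bias)                           # U = k * sqrt(u_Rw^2 + u_bias^2)

    # Bootstrap the small IQC and EQA samples, as in the approach described above
    boot = [expanded_uncertainty(rng.choice(iqc, iqc.size), rng.choice(eqa_bias_pct, eqa_bias_pct.size))
            for _ in range(5000)]
    print(f"U(k=2) ~ {np.mean(boot):.2f}% (bootstrap sd {np.std(boot):.2f}%)")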
Capon, Adam; Gillespie, James; Rolfe, Margaret; Smith, Wayne
2015-04-26
Policy makers and regulators are constantly required to make decisions despite the existence of substantial uncertainty regarding the outcomes of their proposed decisions. Understanding stakeholder views is an essential part of addressing this uncertainty, which provides insight into the possible social reactions and tolerance of unpredictable risks. In the field of nanotechnology, large uncertainties exist regarding the real and perceived risks this technology may have on society. Better evidence is needed to confront this issue. We undertook a computer assisted telephone interviewing (CATI) survey of the Australian public and a parallel survey of those involved in nanotechnology from the academic, business and government sectors. Analysis included comparisons of proportions and logistic regression techniques. We explored perceptions of nanotechnology risks both to health and in a range of products. We examined views on four trust actors. The general public's perception of risk was significantly higher than that expressed by other stakeholders. The public bestows less trust in certain trust actors than do academics or government officers, giving its greatest trust to scientists. Higher levels of public trust were generally associated with lower perceptions of risk. Nanotechnology in food and cosmetics/sunscreens were considered riskier applications irrespective of stakeholder, while familiarity with nanotechnology was associated with a reduced risk perception. Policy makers should consider the disparities in risk and trust perceptions between the public and influential stakeholders, placing greater emphasis on risk communication and the uncertainties of risk assessment in these areas of higher concern. Scientists being the highest trusted group are well placed to communicate the risks of nanotechnologies to the public.
NASA Astrophysics Data System (ADS)
Schuh, A. E.; Jacobson, A. R.; Basu, S.; Weir, B.; Baker, D. F.; Bowman, K. W.; Chevallier, F.; Crowell, S.; Deng, F.; Denning, S.; Feng, L.; Liu, J.
2017-12-01
The Orbiting Carbon Observatory-2 (OCO-2) was launched in July 2014 and has collected three years of column mean CO2 (XCO2) data. The OCO-2 model inter-comparison project (MIP) was formed to provide a means of analysis of results from many different atmospheric inversion modeling systems. Certain facets of the inversion systems, such as observations and fossil fuel CO2 fluxes, were standardized to remove first order sources of difference between the systems. Nevertheless, large variations amongst the flux results from the systems still exist. In this presentation, we explore one dimension of this uncertainty: the impact of different atmospheric transport fields, i.e. wind speeds and directions. Early results illustrate a large systematic difference between two classes of atmospheric transport, arising from winds in the parent GEOS-DAS (NASA-GMAO) and ERA-Interim (ECMWF) data assimilation models. We explore these differences and their effect on inversion-based estimates of surface CO2 flux by using a combination of simplified inversion techniques as well as the full OCO-2 MIP suite of CO2 flux estimates.
NASA Astrophysics Data System (ADS)
Yulaeva, E.; Fan, Y.; Moosdorf, N.; Richard, S. M.; Bristol, S.; Peters, S. E.; Zaslavsky, I.; Ingebritsen, S.
2015-12-01
The Digital Crust EarthCube building block creates a framework for integrating disparate 3D/4D information from multiple sources into a comprehensive model of the structure and composition of the Earth's upper crust, and demonstrates the utility of this model in several research scenarios. One such scenario is the estimation of various crustal properties related to fluid dynamics (e.g. permeability and porosity) at each node of any arbitrary unstructured 3D grid to support continental-scale numerical models of fluid flow and transport. Starting from Macrostrat, an existing 4D database of 33,903 chronostratigraphic units, and employing GeoDeepDive, a software system for extracting structured information from unstructured documents, we construct 3D gridded fields of sediment/rock porosity, permeability and geochemistry for large sedimentary basins of North America, which will be used to improve our understanding of large-scale fluid flow, chemical weathering rates, and geochemical fluxes into the ocean. In this talk, we discuss the methods, data gaps (particularly in geologically complex terrain), and various physical and geological constraints on interpolation and uncertainty estimation.
Mishra, U.; Jastrow, J.D.; Matamala, R.; Hugelius, G.; Koven, C.D.; Harden, Jennifer W.; Ping, S.L.; Michaelson, G.J.; Fan, Z.; Miller, R.M.; McGuire, A.D.; Tarnocai, C.; Kuhry, P.; Riley, W.J.; Schaefer, K.; Schuur, E.A.G.; Jorgenson, M.T.; Hinzman, L.D.
2013-01-01
The vast amount of organic carbon (OC) stored in soils of the northern circumpolar permafrost region is a potentially vulnerable component of the global carbon cycle. However, estimates of the quantity, decomposability, and combustibility of OC contained in permafrost-region soils remain highly uncertain, thereby limiting our ability to predict the release of greenhouse gases due to permafrost thawing. Substantial differences exist between empirical and modeling estimates of the quantity and distribution of permafrost-region soil OC, which contribute to large uncertainties in predictions of carbon–climate feedbacks under future warming. Here, we identify research challenges that constrain current assessments of the distribution and potential decomposability of soil OC stocks in the northern permafrost region and suggest priorities for future empirical and modeling studies to address these challenges.
Multi-model ensembles for assessment of flood losses and associated uncertainty
NASA Astrophysics Data System (ADS)
Figueiredo, Rui; Schröter, Kai; Weiss-Motz, Alexander; Martina, Mario L. V.; Kreibich, Heidi
2018-05-01
Flood loss modelling is a crucial part of risk assessments. However, it is subject to large uncertainty that is often neglected. Most models available in the literature are deterministic, providing only single point estimates of flood loss, and large disparities tend to exist among them. Adopting any one such model in a risk assessment context is likely to lead to inaccurate loss estimates and sub-optimal decision-making. In this paper, we propose the use of multi-model ensembles to address these issues. This approach, which has been applied successfully in other scientific fields, is based on the combination of different model outputs with the aim of improving the skill and usefulness of predictions. We first propose a model rating framework to support ensemble construction, based on a probability tree of model properties, which establishes relative degrees of belief between candidate models. Using 20 flood loss models in two test cases, we then construct numerous multi-model ensembles, based both on the rating framework and on a stochastic method, differing in terms of participating members, ensemble size and model weights. We evaluate the performance of ensemble means, as well as their probabilistic skill and reliability. Our results demonstrate that well-designed multi-model ensembles represent a pragmatic approach to consistently obtain more accurate flood loss estimates and reliable probability distributions of model uncertainty.
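A minimal sketch of how a rated multi-model ensemble combines individual loss estimates; the losses and weights below are hypothetical, whereas the paper derives the weights from a probability tree of model properties:

    import numpy as np

    # Hypothetical losses (million EUR) predicted by five flood loss models for the same event
    losses = np.array([12.0, 18.5, 9.8, 22.1, 15.3])
    # Relative degrees of belief assigned to the models, normalised to sum to one
    weights = np.array([0.30, 0.25, 0.10, 0.15, 0.20])

    ens_mean = np.sum(weights * losses)
    ens_sd = np.sqrt(np.sum(weights * (losses - ens_mean) ** 2))   # weighted spread as a simple uncertainty measure
    print(f"ensemble loss = {ens_mean:.1f} +/- {ens_sd:.1f} million EUR")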
NASA Astrophysics Data System (ADS)
Bartlett, Rachel E.; Bollasina, Massimo A.; Booth, Ben B. B.; Dunstone, Nick J.; Marenco, Franco; Messori, Gabriele; Bernie, Dan J.
2018-03-01
Anthropogenic aerosols could dominate over greenhouse gases in driving near-term hydroclimate change, especially in regions with high present-day aerosol loading such as Asia. Uncertainties in near-future aerosol emissions represent a potentially large, yet unexplored, source of ambiguity in climate projections for the coming decades. We investigated the near-term sensitivity of the Asian summer monsoon to aerosols by means of transient modelling experiments using HadGEM2-ES under two existing climate change mitigation scenarios selected to have similar greenhouse gas forcing, but to span a wide range of plausible global sulfur dioxide emissions. Increased sulfate aerosols, predominantly from East Asian sources, lead to large regional dimming through aerosol-radiation and aerosol-cloud interactions. This results in surface cooling and anomalous anticyclonic flow over land, while abating the western Pacific subtropical high. The East Asian monsoon circulation weakens and precipitation stagnates over Indochina, resembling the observed southern-flood-northern-drought pattern over China. Large-scale circulation adjustments drive suppression of the South Asian monsoon and a westward extension of the Maritime Continent convective region. Remote impacts across the Northern Hemisphere are also generated, including a northwestward shift of West African monsoon rainfall induced by the westward displacement of the Indian Ocean Walker cell, and temperature anomalies in northern midlatitudes linked to propagation of Rossby waves from East Asia. These results indicate that aerosol emissions are a key source of uncertainty in near-term projection of regional and global climate; a careful examination of the uncertainties associated with aerosol pathways in future climate assessments must be highly prioritised.
A probabilistic framework for single-station location of seismicity on Earth and Mars
NASA Astrophysics Data System (ADS)
Böse, M.; Clinton, J. F.; Ceylan, S.; Euchner, F.; van Driel, M.; Khan, A.; Giardini, D.; Lognonné, P.; Banerdt, W. B.
2017-01-01
Locating the source of seismic energy from a single three-component seismic station is associated with large uncertainties, originating from challenges in identifying seismic phases, as well as inevitable pick and model uncertainties. The challenge is even higher for planets such as Mars, where interior structure is a priori largely unknown. In this study, we address the single-station location problem by developing a probabilistic framework that combines location estimates from multiple algorithms to estimate the probability density function (PDF) for epicentral distance, back azimuth, and origin time. Each algorithm uses independent and complementary information in the seismic signals. Together, the algorithms allow locating seismicity ranging from local to teleseismic quakes. Distances and origin times of large regional and teleseismic events (M > 5.5) are estimated from observed and theoretical body- and multi-orbit surface-wave travel times. The latter are picked from the maxima in the waveform envelopes in various frequency bands. For smaller events at local and regional distances, only first arrival picks of body waves are used, possibly in combination with fundamental Rayleigh R1 waveform maxima where detectable; depth phases, such as pP or PmP, help constrain source depth and improve distance estimates. Back azimuth is determined from the polarization of the Rayleigh- and/or P-wave phases. When seismic signals are good enough for multiple approaches to be used, estimates from the various methods are combined through the product of their PDFs, resulting in an improved event location and reduced uncertainty range estimate compared to the results obtained from each algorithm independently. To verify our approach, we use both earthquake recordings from existing Earth stations and synthetic Martian seismograms. The Mars synthetics are generated with a full-waveform scheme (AxiSEM) using spherically-symmetric seismic velocity, density and attenuation models of Mars that incorporate existing knowledge of Mars internal structure, and include expected ambient and instrumental noise. While our probabilistic framework is developed mainly for application to Mars in the context of the upcoming InSight mission, it is also relevant for locating seismic events on Earth in regions with sparse instrumentation.
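The combination step, the product of the individual algorithms' PDFs followed by renormalisation, can be sketched on a distance grid as follows (Gaussian placeholder PDFs with invented parameters):

    import numpy as np

    dist = np.linspace(0.0, 180.0, 1801)      # epicentral distance grid in degrees

    def gaussian(x, mu, sig):
        return np.exp(-0.5 * ((x - mu) / sig) ** 2)

    # Distance PDFs from two independent single-station methods (illustrative numbers)
    pdf_body = gaussian(dist, 62.0, 8.0)      # e.g. from body-wave travel-time differences
    pdf_surface = gaussian(dist, 58.0, 4.0)   # e.g. from multi-orbit surface-wave timing

    combined = pdf_body * pdf_surface                      # product of independent constraints
    combined /= combined.sum() * (dist[1] - dist[0])       # renormalise to a proper PDF

    print(f"combined distance estimate ~ {dist[np.argmax(combined)]:.1f} deg")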
Uncertainty loops in travel-time tomography from nonlinear wave physics.
Galetti, Erica; Curtis, Andrew; Meles, Giovanni Angelo; Baptie, Brian
2015-04-10
Estimating image uncertainty is fundamental to guiding the interpretation of geoscientific tomographic maps. We reveal novel uncertainty topologies (loops) which indicate that while the speeds of both low- and high-velocity anomalies may be well constrained, their locations tend to remain uncertain. The effect is widespread: loops dominate around a third of United Kingdom Love wave tomographic uncertainties, changing the nature of interpretation of the observed anomalies. Loops exist due to 2nd and higher order aspects of wave physics; hence, although such structures must exist in many tomographic studies in the physical sciences and medicine, they are unobservable using standard linearized methods. Higher order methods might fruitfully be adopted.
Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions
NASA Astrophysics Data System (ADS)
Jung, J. Y.; Niemann, J. D.; Greimann, B. P.
2016-12-01
Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
Management of California Oak Woodlands: Uncertainties and Modeling
Jay E. Noel; Richard P. Thompson
1995-01-01
A mathematical policy model of oak woodlands is presented. The model illustrates the policy uncertainties that exist in the management of oak woodlands. These uncertainties include: (1) selection of a policy criterion function, (2) woodland dynamics, (3) initial and final state of the woodland stock. The paper provides a review of each of the uncertainty issues. The...
NASA Astrophysics Data System (ADS)
Ganguly, S.; Basu, S.; Mukhopadhyay, S.; Michaelis, A.; Milesi, C.; Votava, P.; Nemani, R. R.
2013-12-01
An unresolved issue with coarse-to-medium resolution satellite-based forest carbon mapping over regional to continental scales is the high level of uncertainty in above ground biomass (AGB) estimates caused by the absence of forest cover information at a high enough spatial resolution (current spatial resolution is limited to 30-m). To put confidence in existing satellite-derived AGB density estimates, it is imperative to create continuous fields of tree cover at a sufficiently high resolution (e.g. 1-m) such that large uncertainties in forested area are reduced. The proposed work will provide means to reduce uncertainty in present satellite-derived AGB maps and Forest Inventory and Analysis (FIA) based regional estimates. Our primary objective will be to create Very High Resolution (VHR) estimates of tree cover at a spatial resolution of 1-m for the Continental United States using all available National Agriculture Imaging Program (NAIP) color-infrared imagery from 2010 to 2012. We will leverage the existing capabilities of the NASA Earth Exchange (NEX) high performance computing and storage facilities. The proposed 1-m tree cover map can be further aggregated to provide percent tree cover at any medium-to-coarse resolution spatial grid, which will aid in reducing uncertainties in AGB density estimation at the respective grid and overcome current limitations imposed by medium-to-coarse resolution land cover maps. We have implemented a scalable and computationally-efficient parallelized framework for tree-cover delineation; the core components of the algorithm include a feature extraction process, a Statistical Region Merging image segmentation algorithm, and a classification algorithm based on a Deep Belief Network and a Feedforward Backpropagation Neural Network. An initial pilot exercise has been performed over the state of California (~11,000 scenes) to create a wall-to-wall 1-m tree cover map, and the classification accuracy has been assessed. Results show an improvement in accuracy of tree-cover delineation as compared to existing forest cover maps from NLCD, especially over fragmented, heterogeneous and urban landscapes. Estimates of VHR tree cover will complement and enhance the accuracy of present remote-sensing based AGB modeling approaches and forest inventory based estimates at both national and local scales. A requisite step will be to characterize the inherent uncertainties in tree cover estimates and propagate them to estimate AGB.
The Influence of Weight-of-Evidence Messages on (Vaccine) Attitudes: A Sequential Mediation Model.
Clarke, Christopher E; Weberling McKeever, Brooke; Holton, Avery; Dixon, Graham N
2015-01-01
Media coverage of contentious risk issues often features competing claims about whether a risk exists and what scientific evidence shows, and journalists often cover these issues by presenting both sides. However, for topics defined by scientific agreement, balanced coverage erroneously heightens uncertainty about scientific information and the issue itself. In this article, we extend research on combating so-called information and issue uncertainty using weight of evidence, drawing on the discredited autism-vaccine link as a case study. We examine whether people's perceptions of issue uncertainty (about whether a link exists) change before and after they encounter a news message with weight-of-evidence information. We also explore whether message exposure is associated with broader issue judgments, specifically vaccine attitudes. Participants (n = 181) read news articles that included or omitted weight-of-evidence content stating that scientific studies have found no link and that scientists agree that none exists. Postexposure issue uncertainty decreased (in other words, issue certainty increased) from preexposure levels across all conditions. Moreover, weight-of-evidence messages were associated with positive vaccine attitudes indirectly via reduced information uncertainty (i.e., one's belief that scientific opinion and evidence concerning a potential link is unclear) as well as issue uncertainty. We discuss implications for risk communication.
Stinnett, Jacob; Sullivan, Clair J.; Xiong, Hao
2017-03-02
Low-resolution isotope identifiers are widely deployed for nuclear security purposes, but these detectors currently demonstrate problems in making correct identifications in many typical usage scenarios. While there are many hardware alternatives and improvements that can be made, the performance of existing low-resolution isotope identifiers should be improvable by developing new identification algorithms. We have developed a wavelet-based peak extraction algorithm and an implementation of a Bayesian classifier for automated peak-based identification. The peak extraction algorithm has been extended to compute uncertainties in the peak area calculations. To build empirical joint probability distributions of the peak areas and uncertainties, a large set of spectra were simulated in MCNP6 and processed with the wavelet-based feature extraction algorithm. Kernel density estimation was then used to create a new component of the likelihood function in the Bayesian classifier. Furthermore, identification performance is demonstrated on a variety of real low-resolution spectra, including Category I quantities of special nuclear material.
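A minimal sketch of the kernel-density-estimated likelihood idea is given below: empirical densities over (peak area, peak-area uncertainty) features are built per isotope and plugged into Bayes' rule. The feature values, isotopes, and distributions are hypothetical stand-ins, not the trained classifier described above.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Hypothetical training features: (peak area, peak-area uncertainty) pairs extracted
# from simulated spectra for two candidate isotopes. Values are illustrative only.
feat_cs137 = rng.multivariate_normal([1200.0, 60.0], [[9000.0, 0.0], [0.0, 100.0]], size=500)
feat_co60 = rng.multivariate_normal([800.0, 45.0], [[6000.0, 0.0], [0.0, 80.0]], size=500)

# Kernel density estimates serve as empirical likelihoods p(features | isotope)
kde = {"Cs-137": gaussian_kde(feat_cs137.T), "Co-60": gaussian_kde(feat_co60.T)}
prior = {"Cs-137": 0.5, "Co-60": 0.5}

def classify(peak_area, area_uncertainty):
    x = np.array([[peak_area], [area_uncertainty]])
    post = {iso: prior[iso] * kde[iso](x)[0] for iso in kde}
    total = sum(post.values())
    return {iso: p / total for iso, p in post.items()}

print(classify(1150.0, 58.0))
```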
Milanović, Jovica V
2017-08-13
Future power systems will be significantly different compared with their present states. They will be characterized by an unprecedented mix of a wide range of electricity generation and transmission technologies, as well as responsive and highly flexible demand and storage devices with significant temporal and spatial uncertainty. The importance of probabilistic approaches towards power system stability analysis, as a subsection of power system studies routinely carried out by power system operators, has been highlighted in previous research. However, it may not be feasible (or even possible) to accurately model all of the uncertainties that exist within a power system. This paper describes for the first time an integral approach to probabilistic stability analysis of power systems, including small and large angular stability and frequency stability. It provides guidance for handling uncertainties in power system stability studies and some illustrative examples of the most recent results of probabilistic stability analysis of uncertain power systems.
Cross Section Measurement for the 95Mo(n, α)92Zr Reaction at 4.0, 5.0 and 6.0 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guohui; Wu, Hao; Zhang, Jiaguo
2011-01-01
For the 95Mo(n, α)92Zr reaction cross section, there is only one experimental datum in the MeV neutron energy region, and it has a large uncertainty. As a result, very large deviations exist among different evaluated nuclear data libraries. This paper reports the measurement of cross sections of the 95Mo(n, α)92Zr reaction at En = 4.0, 5.0 and 6.0 MeV. Experiments were performed at the 4.5 MV Van de Graaff accelerator of Peking University, China. A twin gridded ionization chamber was used as the alpha particle detector, and two large-area 95Mo samples placed back to back were adopted. Fast neutrons were produced through the D(d, n)3He reaction using a deuterium gas target. A small 238U fission chamber was adopted for absolute neutron flux determination, and a BF3 long counter was used as the neutron flux monitor. The present experimental data are compared with existing evaluations and the earlier measurement.
Solving Large Problems Quickly: Progress in 2001-2003
NASA Technical Reports Server (NTRS)
Mowry, Todd C.; Colohan, Christopher B.; Brown, Angela Demke; Steffan, J. Gregory; Zhai, Antonia
2004-01-01
This document describes the progress we have made and the lessons we have learned in 2001 through 2003 under the NASA grant entitled "Solving Important Problems Faster". The long-term goal of this research is to accelerate large, irregular scientific applications which have enormous data sets and which are difficult to parallelize. To accomplish this goal, we are exploring two complementary techniques: (i) using compiler-inserted prefetching to automatically hide the I/O latency of accessing these large data sets from disk; and (ii) using thread-level data speculation to enable the optimistic parallelization of applications despite uncertainty as to whether data dependences exist between the resulting threads which would normally make them unsafe to execute in parallel. Overall, we made significant progress in 2001 through 2003, and the project has gone well.
NASA Technical Reports Server (NTRS)
Flat, A.; Milnes, A. G.
1978-01-01
In scanning electron microscope (SEM) injection measurements of minority carrier diffusion lengths some uncertainties of interpretation exist when the response current is nonlinear with distance. This is significant in epitaxial layers where the layer thickness is not large in relation to the diffusion length, and where there are large surface recombination velocities on the incident and contact surfaces. An image method of analysis is presented for such specimens. A method of using the results to correct the observed response in a simple convenient way is presented. The technique is illustrated with reference to measurements in epitaxial layers of GaAs. Average beam penetration depth may also be estimated from the curve shape.
The impact of baryonic matter on gravitational lensing by galaxy clusters
NASA Astrophysics Data System (ADS)
Lee, Brandyn E.; King, Lindsay; Applegate, Douglas; McCarthy, Ian
2017-01-01
Since the bulk of the matter comprising galaxy clusters exists in the form of dark matter, gravitational N-body simulations have historically been an effective way to investigate large scale structure formation and the astrophysics of galaxy clusters. However, upcoming telescopes such as the Large Synoptic Survey Telescope are expected to have lower systematic errors than older generations, reducing measurement uncertainties and requiring that astrophysicists better quantify the impact of baryonic matter on the cluster lensing signal. Here we outline the effects of baryonic processes on cluster density profiles and on weak lensing mass and concentration estimates. Our analysis is done using clusters grown in the suite of cosmological hydrodynamical simulations known as cosmo-OWLS.
Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies.
Welikala, R A; Fraz, M M; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A
2016-04-01
Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle-aged adults, of whom 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement, and at low cost.
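The classification step could look roughly like the sketch below, which trains a support vector machine on vessel-map-derived features to flag inadequate images; the three features, labels, and data are synthetic placeholders rather than the QUARTZ feature set.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Stand-in features derived from a segmented vessel map (e.g. vessel area fraction,
# number of vessel segments, mean segment length); values and labels are synthetic.
n = 800
features = rng.normal(size=(n, 3))
labels = (features @ np.array([1.5, 0.8, -0.6]) + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```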
A transient stochastic weather generator incorporating climate model uncertainty
NASA Astrophysics Data System (ADS)
Glenis, Vassilis; Pinamonti, Valentina; Hall, Jim W.; Kilsby, Chris G.
2015-11-01
Stochastic weather generators (WGs), which provide long synthetic time series of weather variables such as rainfall and potential evapotranspiration (PET), have found widespread use in water resources modelling. When conditioned upon the changes in climatic statistics (change factors, CFs) predicted by climate models, WGs provide a useful tool for climate impacts assessment and adaptation planning. The latest climate modelling exercises have involved large numbers of global and regional climate model integrations, designed to explore the implications of uncertainties in the climate model formulation and parameter settings: so-called 'perturbed physics ensembles' (PPEs). In this paper we show how these climate model uncertainties can be propagated through to impact studies by testing multiple vectors of CFs, each vector derived from a different sample from a PPE. We combine this with a new methodology to parameterise the projected time-evolution of CFs. We demonstrate how, when conditioned upon these time-dependent CFs, an existing, well validated and widely used WG can be used to generate non-stationary simulations of future climate that are consistent with probabilistic outputs from the Met Office Hadley Centre's Perturbed Physics Ensemble. The WG enables extensive sampling of natural variability and climate model uncertainty, providing the basis for development of robust water resources management strategies in the context of a non-stationary climate.
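A minimal sketch of the conditioning idea, under the assumption of simple multiplicative change factors applied to baseline monthly statistics, is shown below; the baseline values and the spread of the hypothetical PPE members are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Observed baseline monthly rainfall statistics (mm) that a weather generator
# might be conditioned on; the numbers are purely illustrative.
baseline_mean = np.array([80.0, 65.0, 70.0, 55.0, 50.0, 45.0,
                          40.0, 45.0, 60.0, 75.0, 85.0, 90.0])

# Hypothetical change-factor vectors, one per PPE member, each giving a
# multiplicative change in monthly mean rainfall for a future period.
n_members = 20
change_factors = rng.normal(loc=1.0, scale=0.1, size=(n_members, 12))

# Conditioning step: each member yields one perturbed parameter set, and the
# weather generator would then be run once per member to propagate the spread.
future_means = change_factors * baseline_mean
print("ensemble spread of July mean rainfall:",
      future_means[:, 6].min().round(1), "-", future_means[:, 6].max().round(1), "mm")
```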
A kriging metamodel-assisted robust optimization method based on a reverse model
NASA Astrophysics Data System (ADS)
Zhou, Hui; Zhou, Qi; Liu, Congwei; Zhou, Taotao
2018-02-01
The goal of robust optimization methods is to obtain a solution that is both optimum and relatively insensitive to uncertainty factors. Most existing robust optimization approaches use outer-inner nested optimization structures where a large amount of computational effort is required because the robustness of each candidate solution delivered from the outer level should be evaluated in the inner level. In this article, a kriging metamodel-assisted robust optimization method based on a reverse model (K-RMRO) is first proposed, in which the nested optimization structure is reduced into a single-loop optimization structure to ease the computational burden. Ignoring the interpolation uncertainties from kriging, K-RMRO may yield non-robust optima. Hence, an improved kriging-assisted robust optimization method based on a reverse model (IK-RMRO) is presented to take the interpolation uncertainty of kriging metamodel into consideration. In IK-RMRO, an objective switching criterion is introduced to determine whether the inner level robust optimization or the kriging metamodel replacement should be used to evaluate the robustness of design alternatives. The proposed criterion is developed according to whether or not the robust status of the individual can be changed because of the interpolation uncertainties from the kriging metamodel. Numerical and engineering cases are used to demonstrate the applicability and efficiency of the proposed approach.
Uncertainty in estimates of the number of extraterrestrial civilizations
NASA Technical Reports Server (NTRS)
Sturrock, P. A.
1980-01-01
An estimation of the number N of communicative civilizations is made by means of Drake's formula, which involves the combination of several quantities, each of which is to some extent uncertain. It is shown that the uncertainty in any quantity may be represented by a probability distribution function, even if that quantity is itself a probability. The uncertainty of current estimates of N is derived principally from uncertainty in estimates of the lifetime of advanced civilizations. It is argued that this is due primarily to uncertainty concerning the existence of a Galactic Federation which is in turn contingent upon uncertainty about whether the limitations of present-day physics are absolute or (in the event that there exists a yet undiscovered hyperphysics) transient. It is further argued that it is advantageous to consider explicitly these underlying assumptions in order to compare the probable numbers of civilizations operating radio beacons, permitting radio leakage, dispatching probes for radio surveillance, or dispatching vehicles for manned surveillance.
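The representation of each factor by a probability distribution lends itself to straightforward Monte Carlo propagation through Drake's formula, as in the sketch below; the ranges assigned to each factor are arbitrary illustrations, not Sturrock's estimates.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Each Drake-equation factor is represented by a probability distribution rather
# than a point value; the ranges below are illustrative assumptions only.
R_star = rng.uniform(1.0, 10.0, n)          # star formation rate (per year)
f_p = rng.uniform(0.2, 1.0, n)              # fraction of stars with planets
n_e = rng.uniform(0.5, 3.0, n)              # habitable planets per such system
f_l = rng.uniform(0.01, 1.0, n)             # fraction developing life
f_i = rng.uniform(0.001, 1.0, n)            # fraction developing intelligence
f_c = rng.uniform(0.01, 1.0, n)             # fraction that communicate
L = 10 ** rng.uniform(2.0, 8.0, n)          # lifetime of communicative phase (years)

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print("median N:", np.median(N).round(1))
print("90% interval:", np.percentile(N, [5, 95]).round(1))
```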
Multisource Estimation of Long-term Global Terrestrial Surface Radiation
NASA Astrophysics Data System (ADS)
Peng, L.; Sheffield, J.
2017-12-01
Land surface net radiation is the essential energy source at the earth's surface. It determines the surface energy budget and its partitioning, drives the hydrological cycle by providing available energy, and offers heat, light, and energy for biological processes. Individual components in net radiation have changed historically due to natural and anthropogenic climate change and land use change. Decadal variations in radiation such as global dimming or brightening have important implications for hydrological and carbon cycles. In order to assess the trends and variability of net radiation and evapotranspiration, there is a need for accurate estimates of long-term terrestrial surface radiation. While large progress in measuring top of atmosphere energy budget has been made, huge discrepancies exist among ground observations, satellite retrievals, and reanalysis fields of surface radiation, due to the lack of observational networks, the difficulty in measuring from space, and the uncertainty in algorithm parameters. To overcome the weakness of single source datasets, we propose a multi-source merging approach to fully utilize and combine multiple datasets of radiation components separately, as they are complementary in space and time. First, we conduct diagnostic analysis of multiple satellite and reanalysis datasets based on in-situ measurements such as Global Energy Balance Archive (GEBA), existing validation studies, and other information such as network density and consistency with other meteorological variables. Then, we calculate the optimal weighted average of multiple datasets by minimizing the variance of error between in-situ measurements and other observations. Finally, we quantify the uncertainties in the estimates of surface net radiation and employ physical constraints based on the surface energy balance to reduce these uncertainties. The final dataset is evaluated in terms of the long-term variability and its attribution to changes in individual components. The goal of this study is to provide a merged observational benchmark for large-scale diagnostic analyses, remote sensing and land surface modeling.
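Under the simplifying assumption of independent, unbiased errors, minimizing the variance of the merged error reduces to inverse-variance weighting, sketched below for a single grid cell; the estimates and error variances are invented for illustration.

```python
import numpy as np

# Hypothetical error variances of three surface-radiation products, estimated by
# comparison with in-situ stations (e.g. GEBA); the numbers are illustrative.
estimates = np.array([145.0, 152.0, 148.0])   # W m-2 at one grid cell
error_var = np.array([36.0, 64.0, 25.0])      # squared error against stations

# Weights that minimize the variance of the merged error, assuming independent
# errors, are proportional to the inverse error variances.
weights = (1.0 / error_var) / np.sum(1.0 / error_var)
merged = np.sum(weights * estimates)
merged_var = 1.0 / np.sum(1.0 / error_var)

print("weights:", weights.round(3))
print(f"merged estimate: {merged:.1f} W m-2, error variance: {merged_var:.1f}")
```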
The epistemological status of general circulation models
NASA Astrophysics Data System (ADS)
Loehle, Craig
2018-03-01
Forecasts of both likely anthropogenic effects on climate and consequent effects on nature and society are based on large, complex software tools called general circulation models (GCMs). Forecasts generated by GCMs have been used extensively in policy decisions related to climate change. However, the relation between underlying physical theories and results produced by GCMs is unclear. In the case of GCMs, many discretizations and approximations are made, and simulating Earth system processes is far from simple and currently leads to some results with unknown energy balance implications. Statistical testing of GCM forecasts for degree of agreement with data would facilitate assessment of fitness for use. If model results need to be put on an anomaly basis due to model bias, then both visual and quantitative measures of model fit depend strongly on the reference period used for normalization, making testing problematic. Epistemology is here applied to problems of statistical inference during testing, the relationship between the underlying physics and the models, the epistemic meaning of ensemble statistics, problems of spatial and temporal scale, the existence or not of an unforced null for climate fluctuations, the meaning of existing uncertainty estimates, and other issues. Rigorous reasoning entails carefully quantifying levels of uncertainty.
NASA Astrophysics Data System (ADS)
Xiong, Wei; Skalský, Rastislav; Porter, Cheryl H.; Balkovič, Juraj; Jones, James W.; Yang, Di
2016-09-01
Understanding the interactions between agricultural production and climate is necessary for sound decision-making in climate policy. Gridded, high-resolution crop simulation has emerged as a useful tool for building this understanding, but large uncertainty in its application limits its capacity to inform adaptation strategies. Increasing attention has been given to uncertainties arising from climate scenarios, input data, and model choice, but uncertainties due to model parameters and calibration remain largely unquantified. Here, we use publicly available geographical data sets as input to the Environmental Policy Integrated Climate model (EPIC) to simulate global gridded maize yield. Impacts of climate change are assessed up to the year 2099 under a climate scenario generated by HadGEM2-ES under RCP 8.5. We apply five calibration strategies, each shifting one specific parameter per simulation, to calibrate the model and examine the effects of calibration. Regionalizing crop phenology or the harvest index appears effective for calibrating the model globally, but using different phenology values generates pronounced differences in the estimated climate impact. However, projected impacts of climate change on global maize production are consistently negative regardless of the parameter being adjusted. Different parameter values result in modest uncertainty at the global level, with differences in the global yield change of less than 30% by the 2080s. This uncertainty decreases when model calibration or input-data quality control is applied. Calibration has a larger effect at local scales, with implications for the possible types and locations of adaptation.
NASA Astrophysics Data System (ADS)
Rautman, C. A.; Treadway, A. H.
1991-11-01
Regulatory geologists are concerned with predicting the performance of sites proposed for waste disposal or for remediation of existing pollution problems. Geologic modeling of these sites requires large-scale expansion of knowledge obtained from very limited sampling. This expansion induces considerable uncertainty into the geologic models of rock properties that are required for modeling the predicted performance of the site. One method for assessing this uncertainty is through nonparametric geostatistical simulation. Simulation can produce a series of equiprobable models of a rock property of interest. Each model honors measured values at sampled locations, and each can be constructed to emulate both the univariate histogram and the spatial covariance structure of the measured data. Computing a performance model for a number of geologic simulations allows evaluation of the effects of geologic uncertainty. A site may be judged acceptable if the number of failures to meet a particular performance criterion produced by these computations is sufficiently low. A site that produces too many failures may be either unacceptable or simply inadequately described. The simulation approach to addressing geologic uncertainty is being applied to the potential high-level nuclear waste repository site at Yucca Mountain, Nevada, U.S.A. Preliminary geologic models of unsaturated permeability have been created that reproduce observed statistical properties reasonably well. A spread of unsaturated groundwater travel times has been computed that reflects the variability of those geologic models. Regions within the simulated models exhibiting the greatest variability among multiple runs are candidates for obtaining the greatest reduction in uncertainty through additional site characterization.
Effects of Ensemble Configuration on Estimates of Regional Climate Uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldenson, N.; Mauger, G.; Leung, L. R.
Internal variability in the climate system can contribute substantial uncertainty in climate projections, particularly at regional scales. Internal variability can be quantified using large ensembles of simulations that are identical but for perturbed initial conditions. Here we compare methods for quantifying internal variability. Our study region spans the west coast of North America, which is strongly influenced by El Niño and other large-scale dynamics through their contribution to large-scale internal variability. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find that internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter also produces estimates of uncertainty due to model differences. We conclude that projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible, which has implications for ensemble design in large modeling efforts.
Environmental engineering calculations involving uncertainties, either in the model itself or in the data, are far beyond the capabilities of conventional analysis for any but the simplest of models. There exist a number of general-purpose computer simulation languages, using Mon...
Observational uncertainty and regional climate model evaluation: A pan-European perspective
NASA Astrophysics Data System (ADS)
Kotlarski, Sven; Szabó, Péter; Herrera, Sixto; Räty, Olle; Keuler, Klaus; Soares, Pedro M.; Cardoso, Rita M.; Bosshard, Thomas; Pagé, Christian; Boberg, Fredrik; Gutiérrez, José M.; Jaczewski, Adam; Kreienkamp, Frank; Liniger, Mark. A.; Lussana, Cristian; Szepszo, Gabriella
2017-04-01
Local and regional climate change assessments based on downscaling methods crucially depend on the existence of accurate and reliable observational reference data. In dynamical downscaling via regional climate models (RCMs) observational data can influence model development itself and, later on, model evaluation, parameter calibration and added value assessment. In empirical-statistical downscaling, observations serve as predictand data and directly influence model calibration with corresponding effects on downscaled climate change projections. Focusing on the evaluation of RCMs, we here analyze the influence of uncertainties in observational reference data on evaluation results in a well-defined performance assessment framework and on a European scale. For this purpose we employ three different gridded observational reference grids, namely (1) the well-established EOBS dataset (2) the recently developed EURO4M-MESAN regional re-analysis, and (3) several national high-resolution and quality-controlled gridded datasets that recently became available. In terms of climate models five reanalysis-driven experiments carried out by five different RCMs within the EURO-CORDEX framework are used. Two variables (temperature and precipitation) and a range of evaluation metrics that reflect different aspects of RCM performance are considered. We furthermore include an illustrative model ranking exercise and relate observational spread to RCM spread. The results obtained indicate a varying influence of observational uncertainty on model evaluation depending on the variable, the season, the region and the specific performance metric considered. Over most parts of the continent, the influence of the choice of the reference dataset for temperature is rather small for seasonal mean values and inter-annual variability. Here, model uncertainty (as measured by the spread between the five RCM simulations considered) is typically much larger than reference data uncertainty. For parameters of the daily temperature distribution and for the spatial pattern correlation, however, important dependencies on the reference dataset can arise. The related evaluation uncertainties can be as large or even larger than model uncertainty. For precipitation the influence of observational uncertainty is, in general, larger than for temperature. It often dominates model uncertainty especially for the evaluation of the wet day frequency, the spatial correlation and the shape and location of the distribution of daily values. But even the evaluation of large-scale seasonal mean values can be considerably affected by the choice of the reference. When employing a simple and illustrative model ranking scheme on these results it is found that RCM ranking in many cases depends on the reference dataset employed.
NASA Astrophysics Data System (ADS)
Garry, Freya; McDonagh, Elaine; Blaker, Adam; Roberts, Chris; Desbruyères, Damien; King, Brian
2017-04-01
Estimates of heat content change in the deep oceans (below 2000 m) over the last thirty years are obtained from temperature measurements made by hydrographic survey ships. Cruises occupy the same tracks across an ocean basin approximately every 5+ years. Measurements may not be sufficiently frequent in time or space to allow accurate evaluation of total ocean heat content (OHC) and its rate of change. It is widely thought that additional deep ocean sampling will also aid understanding of the mechanisms for OHC change on annual to decadal timescales, including how OHC varies regionally under natural and anthropogenically forced climate change. Here a 0.25˚ ocean model is used to investigate the magnitude of uncertainties and biases that exist in estimates of deep ocean temperature change from hydrographic sections due to their infrequent timing and sparse spatial distribution during 1990 - 2010. Biases in the observational data may be due to lack of spatial coverage (not enough sections covering the basin), lack of data between occupations (typically 5-10 years apart) and due to occupations not closely spanning the time period of interest. Between 1990 - 2010, the modelled biases globally are comparatively small in the abyssal ocean below 3500 m although regionally certain biases in heat flux into the 4000 - 6000 m layer can be up to 0.05 Wm-2. Biases in the heat flux into the deep 2000 - 4000 m layer due to either temporal or spatial sampling uncertainties are typically much larger and can be over 0.1 Wm-2 across an ocean. Overall, 82% of the warming trend below 2000 m is captured by observational-style sampling in the model. However, at 2500 m (too deep for additional temperature information to be inferred from upper ocean Argo) less than two thirds of the magnitude of the global warming trend is obtained, and regionally large biases exist in the Atlantic, Southern and Indian Oceans, highlighting the need for widespread improved deep ocean temperature sampling. In addition to bias due to infrequent sampling, moving the timings of occupations by a few months generates relatively large uncertainty due to intra-annual variability in deep ocean model temperature, further strengthening the case for high temporal frequency observations in the deep ocean (as could be achieved using deep ocean autonomous float technologies). Biases due to different uncertainties can have opposing signs and differ in relative importance both regionally and with depth revealing the importance of reducing all uncertainties (both spatial and temporal) simultaneously in future deep ocean observing design.
An all digital low data rate communication system
NASA Technical Reports Server (NTRS)
Chen, C.; Fan, M.
1973-01-01
The advent of digital hardware has made it feasible to implement many communication system components digitally. With the exception of frequency down conversion, the proposed low data rate communication system is implemented entirely with digital hardware. Although the system is designed primarily for deep space communications with large frequency uncertainty and low signal-to-noise ratio, it is also suitable for other low data rate applications with time-shared operation among a number of channels. Emphasis is placed on the fast Fourier transform receiver and the automatic frequency control via digital filtering. The speed available from the digital system allows sophisticated signal processing to reduce frequency uncertainty and to increase the signal-to-noise ratio. The practical limitations of the system, such as finite register length, are examined. It is concluded that the proposed all-digital system is not only technically feasible but also offers potential cost reductions over existing receiving systems.
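The core of the fast Fourier transform receiver idea, estimating a weak carrier frequency over a wide uncertainty range by averaging power spectra, can be sketched as follows; the sample rate, tone level, and averaging count are illustrative assumptions rather than the system's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

# Sketch of the FFT receiver idea: locate a weak carrier with large frequency
# uncertainty by peak-picking the averaged power spectrum. Sample rate, tone
# frequency and SNR are illustrative values, not mission parameters.
fs = 8000.0                     # samples per second
n_fft = 4096
f_carrier = 1234.5              # unknown carrier offset to be recovered (Hz)
t = np.arange(n_fft) / fs

spectrum = np.zeros(n_fft // 2)
for _ in range(64):             # noncoherent averaging raises the effective SNR
    signal = 0.05 * np.cos(2 * np.pi * f_carrier * t) + rng.normal(scale=1.0, size=n_fft)
    spectrum += np.abs(np.fft.rfft(signal)[: n_fft // 2]) ** 2

freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)[: n_fft // 2]
print(f"estimated carrier frequency: {freqs[np.argmax(spectrum)]:.1f} Hz")
```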
2014 Summer Series - Rusty Schweickart - Dinosaur Syndrome Avoidance Project: How Gozit?
2014-07-17
The 2013 Chelyabinsk meteor demonstrated that grave uncertainties exist pertaining to near-Earth objects (NEOs). Although the impact rate for dangerous asteroids is relatively low, the consequences of such an event are severe. Apollo Astronaut Rusty Schweickart, will talk about our prospects of avoiding the same fate as the dinosaurs. He will review the status of the global efforts to protect life on the planet from the devastation of large asteroid impacts. He will also discuss both the technical and geopolitical components of the challenge of preventing future asteroid impacts.
Approach to ignition of tokamak reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sigmar, D.J.
1981-02-01
Recent transport modeling results for JET, INTOR, and ETF are reviewed and analyzed with respect to existing uncertainties in the underlying physics, the self-consistency of the very large numerical codes, and the margin for ignition. The codes show ignition to occur in ETF/INTOR-sized machines if empirical scaling can be extrapolated to ion temperatures (and beta values) much higher than those presently achieved, if there is no significant impurity accumulation over the first 7 s, and if the known ideal and resistive MHD instabilities remain controllable for the evolving plasma profiles during ignition startup.
Quantification of uncertainties in global grazing systems assessment
NASA Astrophysics Data System (ADS)
Fetzel, T.; Havlik, P.; Herrero, M.; Kaplan, J. O.; Kastner, T.; Kroisleitner, C.; Rolinski, S.; Searchinger, T.; Van Bodegom, P. M.; Wirsenius, S.; Erb, K.-H.
2017-07-01
Livestock systems play a key role in global sustainability challenges like food security and climate change, yet many unknowns and large uncertainties prevail. We present a systematic, spatially explicit assessment of uncertainties related to grazing intensity (GI), a key metric for assessing ecological impacts of grazing, by combining existing data sets on (a) grazing feed intake, (b) the spatial distribution of livestock, (c) the extent of grazing land, and (d) its net primary productivity (NPP). An analysis of the resulting 96 maps implies that on average 15% of the grazing land NPP is consumed by livestock. GI is low in most of the world's grazing lands, but hotspots of very high GI prevail in 1% of the total grazing area. The agreement between GI maps is good on one fifth of the world's grazing area, while on the remainder, it is low to very low. Largest uncertainties are found in global drylands and where grazing land bears trees (e.g., the Amazon basin or the Taiga belt). In some regions like India or Western Europe, massive uncertainties even result in GI > 100% estimates. Our sensitivity analysis indicates that the input data for NPP, animal distribution, and grazing area contribute about equally to the total variability in GI maps, while grazing feed intake is a less critical variable. We argue that a general improvement in quality of the available global level data sets is a precondition for improving the understanding of the role of livestock systems in the context of global environmental change or food security.
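The combinatorial structure of the assessment can be sketched as below, where grazing intensity is computed for every combination of alternative input layers; the grids and values are small, made-up stand-ins for the global data sets.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative stand-ins for the input layers on a small grid (values are made up):
# feed intake and NPP in the same units (e.g. gC m-2 yr-1), grazing-area fraction 0-1.
shape = (4, 4)
feed_intake_maps = [rng.uniform(20, 60, shape) for _ in range(2)]    # alternative intake data sets
npp_maps = [rng.uniform(200, 600, shape) for _ in range(2)]          # alternative NPP data sets
area_maps = [rng.uniform(0.3, 0.9, shape) for _ in range(2)]         # alternative grazing-area extents

# Grazing intensity = feed intake consumed / NPP available on grazing land,
# computed for every combination of input data sets to span the uncertainty.
gi_maps = []
for intake in feed_intake_maps:
    for npp in npp_maps:
        for area in area_maps:
            gi_maps.append(intake / (npp * area))

gi_stack = np.stack(gi_maps)
print("per-cell GI range (min, max):")
print(gi_stack.min(axis=0).round(2))
print(gi_stack.max(axis=0).round(2))
```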
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Moore, Catherine
2018-04-01
The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins are typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.
NASA Technical Reports Server (NTRS)
Liu, Jianbo; Kummerow, Christian D.; Elsaesser, Gregory S.
2016-01-01
Despite continuous improvements in microwave sensors and retrieval algorithms, our understanding of precipitation uncertainty is quite limited, due primarily to inconsistent findings in studies that compare satellite estimates to in situ observations over different parts of the world. This study seeks to characterize the temporal and spatial properties of uncertainty in the Tropical Rainfall Measuring Mission Microwave Imager surface rainfall product over tropical ocean basins. Two uncertainty analysis frameworks are introduced to qualitatively evaluate the properties of uncertainty under a hierarchy of spatiotemporal data resolutions. The first framework (i.e. 'climate method') demonstrates that, apart from random errors and regionally dependent biases, a large component of the overall precipitation uncertainty is manifested in cyclical patterns that are closely related to large-scale atmospheric modes of variability. By estimating the magnitudes of major uncertainty sources independently, the climate method is able to explain 45-88% of the monthly uncertainty variability. The percentage is largely resolution dependent (with the lowest percentage explained associated with a 1 deg x 1 deg spatial/1 month temporal resolution, and highest associated with a 3 deg x 3 deg spatial/3 month temporal resolution). The second framework (i.e. 'weather method') explains regional mean precipitation uncertainty as a summation of uncertainties associated with individual precipitation systems. By further assuming that self-similar recurring precipitation systems yield qualitatively comparable precipitation uncertainties, the weather method can consistently resolve about 50 % of the daily uncertainty variability, with only limited dependence on the regions of interest.
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
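The coupling of a global search with gradient-based refinement can be sketched as below, where a crude particle-swarm loop supplies the starting point for a Levenberg-Marquardt fit; this is a generic illustration on a toy problem, not the MADS implementation.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)

# Toy calibration target (not MADS itself): fit (a, b) in y = a * exp(-b * t).
t = np.linspace(0.0, 5.0, 40)
y_obs = 2.5 * np.exp(-1.3 * t) + rng.normal(scale=0.05, size=t.size)

def residuals(p):
    return y_obs - p[0] * np.exp(-p[1] * t)

def cost(p):
    return np.sum(residuals(p) ** 2)

# Crude particle-swarm-style global search to find a promising starting point.
n_particles, n_iter = 30, 50
pos = rng.uniform([0.0, 0.0], [10.0, 5.0], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()
for _ in range(n_iter):
    r1, r2 = rng.uniform(size=(2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

# Gradient-based Levenberg-Marquardt refinement from the swarm's best point.
fit = least_squares(residuals, gbest, method="lm")
print("refined parameters:", fit.x.round(3))
```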
NASA Astrophysics Data System (ADS)
Anderson, C. J.; Wildhaber, M. L.; Wikle, C. K.; Moran, E. H.; Franz, K. J.; Dey, R.
2012-12-01
Climate change operates over a broad range of spatial and temporal scales. Understanding the effects of change on ecosystems requires accounting for the propagation of information and uncertainty across these scales. For example, to understand potential climate change effects on fish populations in riverine ecosystems, climate conditions predicted by coarse-resolution atmosphere-ocean global climate models must first be translated to the regional climate scale. In turn, this regional information is used to force watershed models, which are used to force river condition models, which impact the population response. A critical challenge in such a multiscale modeling environment is to quantify sources of uncertainty given the highly nonlinear nature of interactions between climate variables and the individual organism. We use a hierarchical modeling approach for accommodating uncertainty in multiscale ecological impact studies. This framework allows for uncertainty due to system models, model parameter settings, and stochastic parameterizations. This approach is a hybrid between physical (deterministic) downscaling and statistical downscaling, recognizing that there is uncertainty in both. We use NARCCAP data to determine confidence in the capability of climate models to simulate relevant processes and to quantify regional climate variability within the context of the hierarchical model of uncertainty quantification. By confidence, we mean the ability of the regional climate model to replicate observed mechanisms. We use the NCEP-driven simulations for this analysis. This provides a base from which regional change can be categorized as either a modification of previously observed mechanisms or the emergence of new processes. The management implications for these categories of change are significantly different in that procedures to address impacts from existing processes may already be known and need only adjustment, whereas an emergent process may require new management strategies. The results from hierarchical analysis of uncertainty are used to study the relative change in weights of the endangered Missouri River pallid sturgeon (Scaphirhynchus albus) under a 21st century climate scenario.
NASA Astrophysics Data System (ADS)
Hakim, Layal; Lacaze, Guilhem; Khalil, Mohammad; Sargsyan, Khachik; Najm, Habib; Oefelein, Joseph
2018-05-01
This paper demonstrates the development of a simple chemical kinetics model designed for autoignition of n-dodecane in air using Bayesian inference with a model-error representation. The model error, i.e. intrinsic discrepancy from a high-fidelity benchmark model, is represented by allowing additional variability in selected parameters. Subsequently, we quantify predictive uncertainties in the results of autoignition simulations of homogeneous reactors at realistic diesel engine conditions. We demonstrate that these predictive error bars capture model error as well. The uncertainty propagation is performed using non-intrusive spectral projection that can also be used in principle with larger scale computations, such as large eddy simulation. While the present calibration is performed to match a skeletal mechanism, it can be done with equal success using experimental data only (e.g. shock-tube measurements). Since our method captures the error associated with structural model simplifications, we believe that the optimised model could then lead to better qualified predictions of autoignition delay time in high-fidelity large eddy simulations than the existing detailed mechanisms. This methodology provides a way to reduce the cost of reaction kinetics in simulations systematically, while quantifying the accuracy of predictions of important target quantities.
Bayesian analysis for erosion modelling of sediments in combined sewer systems.
Kanso, A; Chebbo, G; Tassin, B
2005-01-01
Previous research has confirmed that the sediments at the bed of combined sewer systems are the main source of particulate and organic pollution during rain events contributing to combined sewer overflows. However, existing urban stormwater models utilize inappropriate sediment transport formulas initially developed from alluvial hydrodynamics. Recently, a model has been formulated and profoundly assessed based on laboratory experiments to simulate the erosion of sediments in sewer pipes taking into account the increase in strength with depth in the weak layer of deposits. In order to objectively evaluate this model, this paper presents a Bayesian analysis of the model using field data collected in sewer pipes in Paris under known hydraulic conditions. The test has been performed using a MCMC sampling method for calibration and uncertainty assessment. Results demonstrate the capacity of the model to reproduce erosion as a direct response to the increase in bed shear stress. This is due to the model description of the erosional strength in the deposits and to the shape of the measured bed shear stress. However, large uncertainties in some of the model parameters suggest that the model could be over-parameterised and necessitates a large amount of informative data for its calibration.
Doppler Global Velocimeter Development for the Large Wind Tunnels at Ames Research Center
NASA Technical Reports Server (NTRS)
Reinath, Michael S.
1997-01-01
Development of an optical, laser-based flow-field measurement technique for large wind tunnels is described. The technique uses laser sheet illumination and charged coupled device detectors to rapidly measure flow-field velocity distributions over large planar regions of the flow. Sample measurements are presented that illustrate the capability of the technique. An analysis of measurement uncertainty, which focuses on the random component of uncertainty, shows that precision uncertainty is not dependent on the measured velocity magnitude. For a single-image measurement, the analysis predicts a precision uncertainty of +/-5 m/s. When multiple images are averaged, this uncertainty is shown to decrease. For an average of 100 images, for example, the analysis shows that a precision uncertainty of +/-0.5 m/s can be expected. Sample applications show that vectors aligned with an orthogonal coordinate system are difficult to measure directly. An algebraic transformation is presented which converts measured vectors to the desired orthogonal components. Uncertainty propagation is then used to show how the uncertainty propagates from the direct measurements to the orthogonal components. For a typical forward-scatter viewing geometry, the propagation analysis predicts precision uncertainties of +/-4, +/-7, and +/-6 m/s, respectively, for the U, V, and W components at 68% confidence.
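The propagation step can be illustrated with a standard linear covariance transformation, as in the sketch below; the 3x3 geometry matrix and the measurement precision are made-up values rather than the actual forward-scatter viewing geometry.

```python
import numpy as np

# Sketch of propagating measurement precision through the algebraic transform that
# converts directly measured (non-orthogonal) velocity components into orthogonal
# U, V, W components. The 3x3 matrix A below is a made-up viewing geometry.
A = np.array([[0.9, 0.3, -0.2],
              [-0.4, 1.1, 0.3],
              [0.2, -0.5, 1.0]])        # orthogonal = A @ measured

sigma_meas = 5.0                         # precision uncertainty of each measured component, m/s
cov_meas = (sigma_meas ** 2) * np.eye(3) # assume independent, equal-variance measurements

cov_orth = A @ cov_meas @ A.T            # standard linear propagation of covariance
print("propagated 1-sigma uncertainties (U, V, W):",
      np.sqrt(np.diag(cov_orth)).round(1), "m/s")
```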
NASA Astrophysics Data System (ADS)
Raza, Syed Ali; Zaighum, Isma; Shah, Nida
2018-02-01
This paper examines the relationship between economic policy uncertainty and the equity premium in G7 countries using monthly data from January 1989 to December 2015 and a novel technique, quantile-on-quantile (QQ) regression, proposed by Sim and Zhou (2015). Based on the QQ approach, we estimate how the quantiles of economic policy uncertainty affect the quantiles of the equity premium. This provides a more comprehensive insight into the overall dependence structure between the equity premium and economic policy uncertainty than traditional techniques like OLS or quantile regression. Overall, our empirical evidence suggests the existence of a negative association between the equity premium and EPU predominantly in all G7 countries, especially in the extreme low and extreme high tails. However, differences exist among countries and across different quantiles of EPU and the equity premium within each country. This heterogeneity among countries is due to differences in dependency on economic policy, on other stock markets, and in linkages with other countries' equity markets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Miao; Wang, Guiling; Chen, Haishan
2016-03-01
Assessing and quantifying the uncertainties in projected future changes of energy and water budgets over land surface are important steps toward improving our confidence in climate change projections. In our study, the contribution of land surface models to the inter-GCM variation of projected future changes in land surface energy and water fluxes is assessed based on output from 19 global climate models (GCMs) and offline Community Land Model version 4 (CLM4) simulations driven by meteorological forcing from the 19 GCMs. Similar offline simulations using CLM4 with its dynamic vegetation submodel are also conducted to investigate how dynamic vegetation feedback, a process that is being added to more earth system models, may amplify or moderate the intermodel variations of projected future changes. Projected changes are quantified as the difference between the 2081–2100 period from the Representative Concentration Pathway 8.5 (RCP8.5) future experiment and the 1981–2000 period from the historical simulation. Under RCP8.5, projected changes in surface water and heat fluxes show a high degree of model dependency across the globe. Although precipitation is very likely to increase in the high latitudes of the Northern Hemisphere, a high degree of model-related uncertainty exists for evapotranspiration, soil water content, and surface runoff, suggesting discrepancies among land surface models (LSMs) in simulating surface hydrological and snow-related processes. Large model-related uncertainties for the surface water budget also exist in the Tropics, including southeastern South America and Central Africa. Moreover, these uncertainties would be reduced in the hypothetical scenario of a single near-perfect land surface model being used across all GCMs, suggesting the potential to reduce uncertainties through the use of more consistent approaches toward land surface model development. Under such a scenario, the most significant reduction is likely to be seen in the Northern Hemisphere high latitudes. Including representation of vegetation dynamics is expected to further amplify the model-related uncertainties in projected future changes in surface water and heat fluxes as well as soil moisture content. This is especially the case in the high latitudes of the Northern Hemisphere (e.g., northwestern North America and central North Asia) where the projected vegetation changes are uncertain and in the Tropics (e.g., the Amazon and Congo Basins) where dense vegetation exists. Finally, findings from this study highlight the importance of improving land surface model parameterizations related to soil and snow processes, as well as the importance of improving the accuracy of dynamic vegetation models.
2011-01-01
Background Historic carbon emissions are an important foundation for proposed efforts to Reduce Emissions from Deforestation and forest Degradation and enhance forest carbon stocks through conservation and sustainable forest management (REDD+). The level of uncertainty in historic carbon emissions estimates is also critical for REDD+, since high uncertainties could limit climate benefits from credited mitigation actions. Here, we analyzed source data uncertainties based on the range of available deforestation, forest degradation, and forest carbon stock estimates for the Brazilian state of Mato Grosso during 1990-2008. Results Deforestation estimates showed good agreement for multi-year periods of increasing and decreasing deforestation during the study period. However, annual deforestation rates differed by > 20% in more than half of the years between 1997-2008, even for products based on similar input data. Tier 2 estimates of average forest carbon stocks varied between 99-192 Mg C ha-1, with greatest differences in northwest Mato Grosso. Carbon stocks in deforested areas increased over the study period, yet this increasing trend in deforested biomass was smaller than the difference among carbon stock datasets for these areas. Conclusions Estimates of source data uncertainties are essential for REDD+. Patterns of spatial and temporal disagreement among available data products provide a roadmap for future efforts to reduce source data uncertainties for estimates of historic forest carbon emissions. Specifically, regions with large discrepancies in available estimates of both deforestation and forest carbon stocks are priority areas for evaluating and improving existing estimates. Full carbon accounting for REDD+ will also require filling data gaps, including forest degradation and secondary forest, with annual data on all forest transitions. PMID:22208947
Research of Uncertainty Reasoning in Pineapple Disease Identification System
NASA Astrophysics Data System (ADS)
Liu, Liqun; Fan, Haifeng
In order to deal with the uncertainty of evidence that commonly exists in pineapple disease identification systems, a reasoning model based on an evidence credibility factor was established. The uncertainty reasoning method is discussed, including uncertain representation of knowledge, uncertain representation of rules, uncertain representation of multiple pieces of evidence, and updating of reasoning rules. The reasoning can fully reflect the uncertainty in disease identification and reduce the influence of subjective factors on the accuracy of the system.
Study of synthesis techniques for insensitive aircraft control systems
NASA Technical Reports Server (NTRS)
Harvey, C. A.; Pope, R. E.
1977-01-01
Insensitive flight control system design criteria were defined in terms of maximizing performance (handling qualities, RMS gust response, transient response, stability margins) over a defined parameter range. Wing load alleviation for the C-5A was chosen as a design problem. The C-5A model was a 79-state, two-control structure with uncertainties assumed to exist in dynamic pressure, structural damping and frequency, and the stability derivative M sub w. Five new techniques (mismatch estimation, uncertainty weighting, finite dimensional inverse, maximum difficulty, dual Lyapunov) were developed. Six existing techniques (additive noise, minimax, multiplant, sensitivity vector augmentation, state dependent noise, residualization) and the mismatch estimation and uncertainty weighting techniques were synthesized and evaluated on the design example. Evaluation and comparison of these eight techniques indicated that the minimax and the uncertainty weighting techniques were superior to the other six, and of these two, uncertainty weighting has lower computational requirements. Techniques based on the three remaining new concepts appear promising and are recommended for further research.
Koornneef, Joris; Spruijt, Mark; Molag, Menso; Ramírez, Andrea; Turkenburg, Wim; Faaij, André
2010-05-15
A systematic assessment, based on an extensive literature review, of the impact of gaps and uncertainties on the results of quantitative risk assessments (QRAs) for CO2 pipelines is presented. Sources of uncertainties that have been assessed are: failure rates, pipeline pressure, temperature, section length, diameter, orifice size, type and direction of release, meteorological conditions, jet diameter, vapour mass fraction in the release and the dose-effect relationship for CO2. A sensitivity analysis with these parameters is performed using release, dispersion and impact models. The results show that the knowledge gaps and uncertainties have a large effect on the accuracy of the assessed risks of CO2 pipelines. In this study it is found that the individual risk contour can vary between 0 and 204 m from the pipeline depending on assumptions made. In existing studies this range is found to be between <1 m and 7.2 km. Mitigating the relevant risks is part of current practice, making them controllable. It is concluded that QRA for CO2 pipelines can be improved by validation of release and dispersion models for high-pressure CO2 releases, definition and adoption of a universal dose-effect relationship and development of a good practice guide for QRAs for CO2 pipelines. Copyright (c) 2009 Elsevier B.V. All rights reserved.
Hallifax, D; Houston, J B
2009-03-01
Mechanistic prediction of unbound drug clearance from human hepatic microsomes and hepatocytes correlates with in vivo clearance but is both systematically low (10 - 20 % of in vivo clearance) and highly variable, based on detailed assessments of published studies. Metabolic capacity (Vmax) of commercially available human hepatic microsomes and cryopreserved hepatocytes is log-normally distributed within wide (30 - 150-fold) ranges; Km is also log-normally distributed and effectively independent of Vmax, implying considerable variability in intrinsic clearance. Despite wide overlap, average capacity is 2 - 20-fold (dependent on P450 enzyme) greater in microsomes than hepatocytes, when both are normalised (scaled to whole liver). The in vitro ranges contrast with relatively narrow ranges of clearance among clinical studies. The high in vitro variation probably reflects unresolved phenotypical variability among liver donors and practicalities in processing of human liver into in vitro systems. A significant contribution from the latter is supported by evidence of low reproducibility (several fold) of activity in cryopreserved hepatocytes and microsomes prepared from the same cells, between separate occasions of thawing of cells from the same liver. The large uncertainty which exists in human hepatic in vitro systems appears to dominate the overall uncertainty of in vitro-in vivo extrapolation, including uncertainties within scaling, modelling and drug dependent effects. As such, any notion of quantitative prediction of clearance appears severely challenged.
The costs of future polio risk management policies.
Tebbens, Radboud J Duintjer; Sangrujee, Nalinee; Thompson, Kimberly M
2006-12-01
Decision-makers need information about the anticipated future costs of maintaining polio eradication as a function of the policy options under consideration. Given the large portfolio of options, we reviewed and synthesized the existing cost data relevant to current policies to provide context for future policies. We model the expected future costs of different strategies for continued vaccination, surveillance, and other costs that require significant potential resource commitments. We estimate the costs of different potential policy portfolios for low-, middle-, and high-income countries to demonstrate the variability in these costs. We estimate that a global transition from routine immunization with oral poliovirus vaccine (OPV) to inactivated poliovirus vaccine (IPV) would increase the costs of managing polio globally, although routine IPV use remains less costly than routine OPV use with supplemental immunization activities. The costs of surveillance and a stockpile, while small compared to routine vaccination costs, represent important expenditures to ensure adequate response to potential outbreaks. The uncertainty and sensitivity analyses highlight important uncertainty in the aggregated costs and demonstrate that the discount rate and the uncertainty in the price and administration cost of IPV drive the expected incremental cost of routine IPV vs. OPV immunization.
The violent environment of the origin of life - Progress and uncertainties
NASA Technical Reports Server (NTRS)
Chyba, Christopher F.
1993-01-01
Dating of terrestrial fossils and returned lunar samples reveals that the origin of life on Earth occurred not in a quiescent, peaceful environment, but rather in a violent, impact-ridden one. This realization has important consequences. On the one hand, sufficiently large and fast impactors can erode planetary atmospheres, and the very largest of these may have sterilized the surface of the Earth. In this regard, deep-sea hydrothermal vents become especially interesting for the history of early life, as they provide an environment protected against all but the greatest impact devastation. At the same time, impactors would have been delivering key biogenic elements (such as carbon and nitrogen) to Earth's surface, and (with much greater difficulty) intact organic molecules as well. Estimates of the various sources of prebiotic organics suggest that the heavy bombardment either produced or delivered quantities of organics comparable to those produced by other energy sources. However, substantial uncertainties exist. After reviewing the current understanding of the role of the heavy bombardment in the origins of life, a number of remaining key uncertainties are considered, and attempts are made to both quantify their magnitude and point to means of resolving them.
High-Precision Half-Life Measurement for the Superallowed β+ Emitter 22Mg
NASA Astrophysics Data System (ADS)
Dunlop, Michelle
2017-09-01
High precision measurements of the Ft values for superallowed Fermi beta transitions between 0+ isobaric analogue states allow for stringent tests of the electroweak interaction. These transitions provide an experimental probe of the Conserved-Vector-Current hypothesis, the most precise determination of the up-down element of the Cabibbo-Kobayashi-Maskawa matrix, and set stringent limits on the existence of scalar currents in the weak interaction. To calculate the Ft values several theoretical corrections must be applied to the experimental data, some of which have large model dependent variations. Precise experimental determinations of the ft values can be used to help constrain the different models. The uncertainty in the 22Mg superallowed Ft value is dominated by the uncertainty in the experimental ft value. The adopted half-life of 22Mg is determined from two measurements which disagree with one another, resulting in the inflation of the weighted-average half-life uncertainty by a factor of 2. The 22Mg half-life was measured with a precision of 0.02% via direct β counting at TRIUMF's ISAC facility, leading to an improvement in the world-average half-life by more than a factor of 3.
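The inflation of the weighted-average half-life uncertainty described above follows the standard scale-factor (Birge-ratio) prescription for mutually inconsistent measurements. The sketch below illustrates that procedure with numpy on hypothetical, discrepant input values; the numbers are placeholders, not the actual 22Mg data.

```python
import numpy as np

def weighted_average(values, errors):
    """Error-weighted mean with the uncertainty inflated by the Birge ratio
    (scale factor) when the inputs are mutually inconsistent."""
    values, errors = np.asarray(values), np.asarray(errors)
    w = 1.0 / errors ** 2
    mean = np.sum(w * values) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))
    # Birge ratio: sqrt(chi^2 / (N - 1)); inflate only if it exceeds 1
    chi2 = np.sum(w * (values - mean) ** 2)
    scale = max(1.0, np.sqrt(chi2 / (len(values) - 1)))
    return mean, sigma * scale, scale

# Hypothetical, discrepant half-life measurements in seconds (not the real data)
t_half = [3.8755, 3.8820]
errs = [0.0012, 0.0020]
mean, err, scale = weighted_average(t_half, errs)
print(f"weighted mean = {mean:.4f} +/- {err:.4f} s (scale factor {scale:.2f})")
```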
Queries over Unstructured Data: Probabilistic Methods to the Rescue
NASA Astrophysics Data System (ADS)
Sarawagi, Sunita
Unstructured data like emails, addresses, invoices, call transcripts, reviews, and press releases are now an integral part of any large enterprise. A challenge of modern business intelligence applications is analyzing and querying data seamlessly across structured and unstructured sources. This requires the development of automated techniques for extracting structured records from text sources and resolving entity mentions in data from various sources. The success of any automated method for extraction and integration depends on how effectively it unifies diverse clues in the unstructured source and in existing structured databases. We argue that statistical learning techniques like Conditional Random Fields (CRFs) provide an accurate, elegant and principled framework for tackling these tasks. Given the inherent noise in real-world sources, it is important to capture the uncertainty of the above operations via imprecise data models. CRFs provide a sound probability distribution over extractions but are not easy to represent and query in a relational framework. We present methods of approximating this distribution to query-friendly row and column uncertainty models. Finally, we present models for representing the uncertainty of de-duplication and algorithms for various Top-K count queries on imprecise duplicates.
A study of active learning methods for named entity recognition in clinical text.
Chen, Yukun; Lasko, Thomas A; Mei, Qiaozhu; Denny, Joshua C; Xu, Hua
2015-12-01
Named entity recognition (NER), a sequential labeling task, is one of the fundamental tasks for building clinical natural language processing (NLP) systems. Machine learning (ML) based approaches can achieve good performance, but they often require large amounts of annotated samples, which are expensive to build due to the requirement of domain experts in annotation. Active learning (AL), a sample selection approach integrated with supervised ML, aims to minimize the annotation cost while maximizing the performance of ML-based models. In this study, our goal was to develop and evaluate both existing and new AL methods for a clinical NER task to identify concepts of medical problems, treatments, and lab tests from the clinical notes. Using the annotated NER corpus from the 2010 i2b2/VA NLP challenge that contained 349 clinical documents with 20,423 unique sentences, we simulated AL experiments using a number of existing and novel algorithms in three different categories including uncertainty-based, diversity-based, and baseline sampling strategies. They were compared with the passive learning that uses random sampling. Learning curves that plot performance of the NER model against the estimated annotation cost (based on number of sentences or words in the training set) were generated to evaluate different active learning and the passive learning methods and the area under the learning curve (ALC) score was computed. Based on the learning curves of F-measure vs. number of sentences, uncertainty sampling algorithms outperformed all other methods in ALC. Most diversity-based methods also performed better than random sampling in ALC. To achieve an F-measure of 0.80, the best method based on uncertainty sampling could save 66% annotations in sentences, as compared to random sampling. For the learning curves of F-measure vs. number of words, uncertainty sampling methods again outperformed all other methods in ALC. To achieve 0.80 in F-measure, in comparison to random sampling, the best uncertainty based method saved 42% annotations in words. But the best diversity based method reduced only 7% annotation effort. In the simulated setting, AL methods, particularly uncertainty-sampling based approaches, seemed to significantly save annotation cost for the clinical NER task. The actual benefit of active learning in clinical NER should be further evaluated in a real-time setting. Copyright © 2015 Elsevier Inc. All rights reserved.
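The uncertainty-sampling strategies evaluated above reduce to a simple selection rule: query the pool items on which the current model is least confident. Below is a minimal least-confidence sketch using a generic scikit-learn classifier on synthetic features as a stand-in for a real NER sequence model; the feature set, batch size, and seed-set size are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def least_confidence_query(model, X_pool, batch_size=10):
    """Select the pool items whose most likely label has the lowest
    predicted probability (least-confidence uncertainty sampling)."""
    proba = model.predict_proba(X_pool)
    confidence = proba.max(axis=1)               # confidence of the top prediction
    return np.argsort(confidence)[:batch_size]   # least confident first

# Toy active-learning loop on random features (stand-in for real NER features)
rng = np.random.default_rng(0)
X_pool, y_pool = rng.normal(size=(1000, 20)), rng.integers(0, 3, 1000)
labeled = list(range(20))                        # small seed set
for _ in range(5):
    model = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
    unlabeled = np.setdiff1d(np.arange(len(X_pool)), labeled)
    picks = least_confidence_query(model, X_pool[unlabeled])
    labeled.extend(unlabeled[picks])             # "annotate" the selected samples
```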
How multiple causes combine: independence constraints on causal inference.
Liljeholm, Mimi
2015-01-01
According to the causal power view, two core constraints-that causes occur independently (i.e., no confounding) and influence their effects independently-serve as boundary conditions for causal induction. This study investigated how violations of these constraints modulate uncertainty about the existence and strength of a causal relationship. Participants were presented with pairs of candidate causes that were either confounded or not, and that either interacted or exerted their influences independently. Consistent with the causal power view, uncertainty about the existence and strength of causal relationships was greater when causes were confounded or interacted than when unconfounded and acting independently. An elemental Bayesian causal model captured differences in uncertainty due to confounding but not those due to an interaction. Implications of distinct sources of uncertainty for the selection of contingency information and causal generalization are discussed.
Large uncertainty in permafrost carbon stocks due to hillslope soil deposits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shelef, Eitan; Rowland, Joel C.; Wilson, Cathy J.
2017-05-31
Northern circumpolar permafrost soils contain more than a third of the global Soil Organic Carbon (SOC) pool. The sensitivity of this carbon pool to a changing climate is a primary source of uncertainty in simulation-based climate projections. These projections, however, do not account for the accumulation of soil deposits at the base of hillslopes (hill-toes), and the influence of this accumulation on the distribution, sequestration, and decomposition of SOC in landscapes affected by permafrost. Here we combine topographic models with soil-profile data and topographic analysis to evaluate the quantity and uncertainty of SOC mass stored in perennially frozen hill-toe soil deposits. We show that in Alaska this SOC mass introduces an uncertainty that is greater than 200% of state-wide estimates of SOC stocks (77 PgC), and that a similarly large uncertainty may also pertain at a circumpolar scale. Soil sampling and geophysical-imaging efforts that target hill-toe deposits can help constrain this large uncertainty.
NASA Astrophysics Data System (ADS)
Goldenson, Naomi L.
Uncertainties in climate projections at the regional scale are inevitably larger than those for global mean quantities. Here, focusing on western North American regional climate, several approaches are taken to quantifying uncertainties starting with the output of global climate model projections. Internal variance is found to be an important component of the projection uncertainty up and down the west coast. To quantify internal variance and other projection uncertainties in existing climate models, we evaluate different ensemble configurations. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter offers the advantage of also producing estimates of uncertainty due to model differences. We conclude that climate projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible. We then conduct a small single-model ensemble of simulations using the Model for Prediction Across Scales with physics from the Community Atmosphere Model Version 5 (MPAS-CAM5) and prescribed historical sea surface temperatures. In the global variable resolution domain, the finest resolution (at 30 km) is in our region of interest over western North America and upwind over the northeast Pacific. In the finer-scale region, extreme precipitation from atmospheric rivers (ARs) is connected to tendencies in seasonal snowpack in mountains of the Northwest United States and California. In most of the Cascade Mountains, winters with more AR days are associated with less snowpack, in contrast to the northern Rockies and California's Sierra Nevadas. In snowpack observations and reanalysis of the atmospheric circulation, we find similar relationships between frequency of AR events and winter season snowpack in the western United States. In spring, however, there is not a clear relationship between number of AR days and seasonal mean snowpack across the model ensemble, so caution is urged in interpreting the historical record in the spring season. Finally, the representation of the El Nino Southern Oscillation (ENSO)--an important source of interannual climate predictability in some regions--is explored in a large single-model ensemble using ensemble Empirical Orthogonal Functions (EOFs) to find modes of variance across the entire ensemble at once. The leading EOF is ENSO. The principal components (PCs) of the next three EOFs exhibit a lead-lag relationship with the ENSO signal captured in the first PC. The second PC, with most of its variance in the summer season, is the most strongly cross-correlated with the first. This approach offers insight into how the model considered represents this important atmosphere-ocean interaction. Taken together these varied approaches quantify the implications of climate projections regionally, identify processes that make snowpack water resources vulnerable, and seek insight into how to better simulate the large-scale climate modes controlling regional variability.
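The ensemble-EOF step described above can be illustrated by stacking all ensemble members along the time axis and taking a single SVD, so the leading modes of variance are shared across the ensemble. The sketch below uses synthetic data with assumed dimensions (members, months, grid points); it is an interpretation of the method, not the author's code.

```python
import numpy as np

# Synthetic stand-in: 40 ensemble members x 120 months x 500 grid points
rng = np.random.default_rng(1)
data = rng.normal(size=(40, 120, 500))

# Ensemble EOFs: remove the ensemble-and-time mean, stack members along time,
# and take the SVD; left singular vectors give principal components (PCs),
# right singular vectors give the spatial EOF patterns.
anom = data - data.mean(axis=(0, 1), keepdims=True)
stacked = anom.reshape(-1, anom.shape[-1])            # (members*time, space)
U, S, Vt = np.linalg.svd(stacked, full_matrices=False)
explained = S**2 / np.sum(S**2)
pcs = (U[:, :3] * S[:3]).reshape(40, 120, 3)          # leading three PCs per member
eofs = Vt[:3]                                         # leading three spatial patterns
print("variance explained by EOF1-3:", np.round(explained[:3], 3))
```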
Surrogate-based optimization of hydraulic fracturing in pre-existing fracture networks
NASA Astrophysics Data System (ADS)
Chen, Mingjie; Sun, Yunwei; Fu, Pengcheng; Carrigan, Charles R.; Lu, Zhiming; Tong, Charles H.; Buscheck, Thomas A.
2013-08-01
Hydraulic fracturing has been used widely to stimulate production of oil, natural gas, and geothermal energy in formations with low natural permeability. Numerical optimization of fracture stimulation often requires a large number of evaluations of objective functions and constraints from forward hydraulic fracturing models, which are computationally expensive and even prohibitive in some situations. Moreover, there are a variety of uncertainties associated with the pre-existing fracture distributions and rock mechanical properties, which affect the optimized decisions for hydraulic fracturing. In this study, a surrogate-based approach is developed for efficient optimization of hydraulic fracturing well design in the presence of natural-system uncertainties. The fractal dimension is derived from the simulated fracturing network as the objective for maximizing energy recovery sweep efficiency. The surrogate model, which is constructed using training data from high-fidelity fracturing models for mapping the relationship between uncertain input parameters and the fractal dimension, provides fast approximation of the objective functions and constraints. A suite of surrogate models constructed using different fitting methods is evaluated and validated for fast predictions. Global sensitivity analysis is conducted to gain insights into the impact of the input variables on the output of interest, and further used for parameter screening. The high efficiency of the surrogate-based approach is demonstrated for three optimization scenarios with different and uncertain ambient conditions. Our results suggest the critical importance of considering uncertain pre-existing fracture networks in optimization studies of hydraulic fracturing.
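A minimal version of the surrogate-based loop described above is: run the expensive simulator on a modest training design, fit a cheap emulator, and optimize the emulator instead. The sketch below uses a Gaussian process from scikit-learn and a made-up two-parameter "simulator" as a stand-in for the fracturing model and its fractal-dimension objective; all names and values are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical expensive simulator: maps (injection rate, fluid viscosity)
# to a scalar sweep-efficiency proxy (stand-in for the fractal dimension).
def expensive_simulation(x):
    return -((x[0] - 0.6) ** 2 + 0.5 * (x[1] - 0.3) ** 2) + 0.05 * np.sin(10 * x[0])

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(40, 2))               # training designs
y_train = np.array([expensive_simulation(x) for x in X_train])

# Fit the surrogate on the high-fidelity training runs
surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
surrogate.fit(X_train, y_train)

# Cheap optimization: evaluate the surrogate on a dense candidate set
cand = rng.uniform(0, 1, size=(20000, 2))
pred, std = surrogate.predict(cand, return_std=True)
best = cand[np.argmax(pred)]
print("surrogate optimum near", best, "predicted value", pred.max().round(3))
```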
NASA Astrophysics Data System (ADS)
Ming, Fei; Wang, Dong; Shi, Wei-Nan; Huang, Ai-Jun; Sun, Wen-Yang; Ye, Liu
2018-04-01
The uncertainty principle is recognized as an elementary ingredient of quantum theory and sets up a significant bound to predict outcome of measurement for a couple of incompatible observables. In this work, we develop dynamical features of quantum memory-assisted entropic uncertainty relations (QMA-EUR) in a two-qubit Heisenberg XXZ spin chain with an inhomogeneous magnetic field. We specifically derive the dynamical evolutions of the entropic uncertainty with respect to the measurement in the Heisenberg XXZ model when spin A is initially correlated with quantum memory B. It has been found that the larger coupling strength J of the ferromagnetism ( J < 0 ) and the anti-ferromagnetism ( J > 0 ) chains can effectively degrade the measuring uncertainty. Besides, it turns out that the higher temperature can induce the inflation of the uncertainty because the thermal entanglement becomes relatively weak in this scenario, and there exists a distinct dynamical behavior of the uncertainty when an inhomogeneous magnetic field emerges. With the growing magnetic field | B | , the variation of the entropic uncertainty will be non-monotonic. Meanwhile, we compare several different optimized bounds existing with the initial bound proposed by Berta et al. and consequently conclude Adabi et al.'s result is optimal. Moreover, we also investigate the mixedness of the system of interest, dramatically associated with the uncertainty. Remarkably, we put forward a possible physical interpretation to explain the evolutionary phenomenon of the uncertainty. Finally, we take advantage of a local filtering operation to steer the magnitude of the uncertainty. Therefore, our explorations may shed light on the entropic uncertainty under the Heisenberg XXZ model and hence be of importance to quantum precision measurement over solid state-based quantum information processing.
Nuttens, V E; Nahum, A E; Lucas, S
2011-01-01
Urethral NTCP has been determined for three prostates implanted with seeds based on (125)I (145 Gy), (103)Pd (125 Gy), (131)Cs (115 Gy), (103)Pd-(125)I (145 Gy), or (103)Pd-(131)Cs (115 Gy or 130 Gy). First, DU(20), meaning that 20% of the urethral volume receives a dose of at least DU(20), is converted into an I-125 LDR equivalent DU(20) in order to use the urethral NTCP model. Second, the propagation of uncertainties through the steps in the NTCP calculation was assessed in order to identify the parameters responsible for large data uncertainties. Two sets of radiobiological parameters were studied. The NTCP results all fall in the 19%-23% range and are associated with large uncertainties, making the comparison difficult. Depending on the dataset chosen, the ranking of NTCP values among the six seed implants studied changes. Moreover, the large uncertainties on the fitting parameters of the urethral NTCP model result in large uncertainty on the NTCP value. In conclusion, the use of an NTCP model for permanent brachytherapy is feasible, but it is essential that the uncertainties on the parameters in the model be reduced.
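Propagating parameter uncertainty through an NTCP model is commonly done by Monte Carlo sampling. The sketch below pushes assumed (hypothetical) uncertainties in a dose metric and in two fit parameters through a generic sigmoidal NTCP curve; it is not the specific urethral NTCP model or its published parameter values.

```python
import numpy as np

def logistic_ntcp(dose, td50, gamma50):
    """Sigmoidal NTCP as a function of an equivalent uniform dose metric
    (logistic form; parameters are illustrative, not fitted values)."""
    return 1.0 / (1.0 + np.exp(4.0 * gamma50 * (1.0 - dose / td50)))

rng = np.random.default_rng(42)
n = 100_000
# Propagate uncertainty in the dose metric and in both fit parameters
du20 = rng.normal(160.0, 8.0, n)       # hypothetical DU20-like metric (Gy)
td50 = rng.normal(180.0, 15.0, n)      # hypothetical tolerance dose (Gy)
gamma50 = rng.normal(2.0, 0.5, n)      # hypothetical slope parameter

ntcp = logistic_ntcp(du20, td50, gamma50)
lo, med, hi = np.percentile(ntcp, [2.5, 50, 97.5])
print(f"NTCP median {med:.2%}, 95% interval [{lo:.2%}, {hi:.2%}]")
```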
NASA Astrophysics Data System (ADS)
Siade, A. J.; Suckow, A. O.; Morris, R.; Raiber, M.; Prommer, H.
2017-12-01
The calibration of regional groundwater flow models, including those investigating coal-seam gas (CSG) impacts in the Surat Basin, Australia, are not typically constrained using environmental tracers, although the use of such data can potentially provide significant reductions in predictive uncertainties. These additional sources of information can also improve the conceptualisation of flow systems and the quantification of groundwater fluxes. In this study, new multi-tracer data (14C, 39Ar, 81Kr, and 36Cl) were collected for the eastern recharge areas of the basin and within the deeper Hutton and Precipice Sandstone formations to complement existing environmental tracer data. These data were used to better understand the recharge mechanisms, recharge rates and the hydraulic properties associated with deep aquifer systems in the Surat Basin. Together with newly acquired pressure data documenting the response to the large-scale reinjection of highly treated CSG co-produced water, the environmental tracer data helped to improve the conceptualisation of the aquifer system, forming the basis for a more robust quantification of the long-term impacts of CSG-related activities. An existing regional scale MODFLOW-USG groundwater flow model of the area was used as the basis for our analysis of existing and new observation data. A variety of surrogate modelling approaches were used to develop simplified models that focussed on the flow and transport behaviour of the deep aquifer systems. These surrogate models were able to represent sub-system behaviour in terms of flow, multi-environmental tracer transport and the observed large-scale hydrogeochemical patterns. The incorporation of the environmental tracer data into the modelling framework provide an improved understanding of the flow regimes of the deeper aquifer systems as well as valuable information on how to reduce uncertainties in hydraulic properties where there is little or no historical observations of hydraulic heads.
Designing optimal greenhouse gas monitoring networks for Australia
NASA Astrophysics Data System (ADS)
Ziehn, T.; Law, R. M.; Rayner, P. J.; Roff, G.
2016-01-01
Atmospheric transport inversion is commonly used to infer greenhouse gas (GHG) flux estimates from concentration measurements. The optimal location of ground-based observing stations that supply these measurements can be determined by network design. Here, we use a Lagrangian particle dispersion model (LPDM) in reverse mode together with a Bayesian inverse modelling framework to derive optimal GHG observing networks for Australia. This extends the network design for carbon dioxide (CO2) performed by Ziehn et al. (2014) to also minimise the uncertainty on the flux estimates for methane (CH4) and nitrous oxide (N2O), both individually and in a combined network using multiple objectives. Optimal networks are generated by adding up to five new stations to the base network, which is defined as two existing stations, Cape Grim and Gunn Point, in southern and northern Australia respectively. The individual networks for CO2, CH4 and N2O and the combined observing network show large similarities because the flux uncertainties for each GHG are dominated by regions of biologically productive land. There is little penalty, in terms of flux uncertainty reduction, for the combined network compared to individually designed networks. The location of the stations in the combined network is sensitive to variations in the assumed data uncertainty across locations. A simple assessment of economic costs has been included in our network design approach, considering both establishment and maintenance costs. Our results suggest that, while site logistics change the optimal network, there is only a small impact on the flux uncertainty reductions achieved with increasing network size.
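The network-design idea above can be sketched as a greedy Bayesian optimal-design loop: each candidate station contributes a row of transport sensitivities, and stations are added one at a time to minimize the trace of the posterior flux-error covariance. The example below uses random sensitivities and assumed error variances purely for illustration; it is not the LPDM-based system used in the study.

```python
import numpy as np

def posterior_trace(H, B, r_var):
    """Trace of the Bayesian posterior flux covariance for a set of stations.
    H: (n_obs, n_flux) transport sensitivities, B: prior flux covariance,
    r_var: observation error variance (assumed uniform here)."""
    P_inv = np.linalg.inv(B) + H.T @ H / r_var
    return np.trace(np.linalg.inv(P_inv))

rng = np.random.default_rng(3)
n_flux, candidates = 25, 30
B = np.diag(rng.uniform(0.5, 2.0, n_flux))        # prior flux error covariance
H_all = rng.normal(size=(candidates, n_flux))     # one sensitivity row per candidate site
base = [0, 1]                                     # e.g. two existing stations

chosen = list(base)
for _ in range(5):                                # add up to five new stations greedily
    scores = {j: posterior_trace(H_all[chosen + [j]], B, r_var=0.1)
              for j in range(candidates) if j not in chosen}
    chosen.append(min(scores, key=scores.get))
print("selected station indices:", chosen,
      "posterior trace:", round(posterior_trace(H_all[chosen], B, 0.1), 3))
```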
NASA Technical Reports Server (NTRS)
Morton, Douglas C.; Sales, Marcio H.; Souza, Carlos M., Jr.; Griscom, Bronson
2011-01-01
Historic carbon emissions are an important foundation for proposed efforts to Reduce Emissions from Deforestation and forest Degradation and enhance forest carbon stocks through conservation and sustainable forest management (REDD+). The level of uncertainty in historic carbon emissions estimates is also critical for REDD+, since high uncertainties could limit climate benefits from mitigation actions. Here, we analyzed source data uncertainties based on the range of available deforestation, forest degradation, and forest carbon stock estimates for the Brazilian state of Mato Grosso during 1990-2008. Results: Deforestation estimates showed good agreement for multi-year trends of increasing and decreasing deforestation during the study period. However, annual deforestation rates differed by >20% in more than half of the years between 1997-2008, even for products based on similar input data. Tier 2 estimates of average forest carbon stocks varied between 99-192 Mg C/ha, with greatest differences in northwest Mato Grosso. Carbon stocks in deforested areas increased over the study period, yet this increasing trend in deforested biomass was smaller than the difference among carbon stock datasets for these areas. Conclusions: Patterns of spatial and temporal disagreement among available data products provide a roadmap for future efforts to reduce source data uncertainties for estimates of historic forest carbon emissions. Specifically, regions with large discrepancies in available estimates of both deforestation and forest carbon stocks are priority areas for evaluating and improving existing estimates. Full carbon accounting for REDD+ will also require filling data gaps, including forest degradation and secondary forest, with annual data on all forest transitions.
Li, Y P; Huang, G H
2010-09-15
Considerable public concerns have been raised in the past decades since the large amount of pollutant emissions from municipal solid waste (MSW) disposal processes poses risks to the surrounding environment and human health. Moreover, in MSW management, various uncertainties exist in the related costs, impact factors and objectives, which can affect the optimization processes and the decision schemes generated. In this study, an interval-based possibilistic programming (IBPP) method is developed for planning MSW management with minimized system cost and environmental impact under uncertainty. The developed method can deal with uncertainties expressed as interval values and fuzzy sets in the left- and right-hand sides of constraints and the objective function. An interactive algorithm is provided for solving the IBPP problem, which does not lead to more complicated intermediate submodels and has a relatively low computational requirement. The developed model is applied to a case study of planning an MSW management system, where the mixed integer linear programming (MILP) technique is introduced into the IBPP framework to facilitate dynamic analysis for decisions of timing, sizing and siting in terms of capacity expansion for waste-management facilities. Three cases based on different waste-management policies are examined. The results obtained indicate that inclusion of environmental impacts in the optimization model can change the traditional waste-allocation pattern based merely on the economic-oriented planning approach. The results can help identify desired alternatives for managing MSW, which has advantages in providing compromised schemes under an integrated consideration of economic efficiency and environmental impact under uncertainty. Copyright 2010 Elsevier B.V. All rights reserved.
The thermal structure of Saturn: Inferences from ground-based and airborne infrared observations
NASA Technical Reports Server (NTRS)
Tokunaga, A.
1978-01-01
Spectroscopic and photometric infrared observations of Saturn are reviewed and compared to the expected flux from thermal structure models. Large uncertainties exist in the far-infrared measurements, but the available data indicate that the effective temperature of the disk of Saturn is 90 ± 5 K. The thermal structure models proposed by Tokunaga and Cess and by Gautier et al. (model 'N') agree best with the observations. North-South limb scans of Saturn at 10 and 20 micrometers show that the temperature inversion is much stronger at the South polar region than at the equator.
Tether Impact Rate Simulation and Prediction with Orbiting Satellites
NASA Technical Reports Server (NTRS)
Harrison, Jim
2002-01-01
Space elevators and other large space structures have been studied and proposed as worthwhile by futuristic space planners for at least a couple of decades. In June 1999 the Marshall Space Flight Center sponsored a Space Elevator workshop in Huntsville, Alabama, to bring together technical experts and advanced planners to discuss the current status and to define the magnitude of the technical and programmatic problems connected with the development of these massive space systems. One obvious problem that was identified, although not for the first time, was the collision probability between space elevators and orbital debris. Debate and uncertainty presently exist about the extent of the threat to these large structures, one of which in this study is as large in size as a space elevator. We have tentatively concluded that orbital debris, although a major concern, is not sufficient justification to curtail the study and development of futuristic new millennium concepts like the space elevators.
Gaussian processes for personalized e-health monitoring with wearable sensors.
Clifton, Lei; Clifton, David A; Pimentel, Marco A F; Watkinson, Peter J; Tarassenko, Lionel
2013-01-01
Advances in wearable sensing and communications infrastructure have allowed the widespread development of prototype medical devices for patient monitoring. However, such devices have not penetrated into clinical practice, primarily due to a lack of research into "intelligent" analysis methods that are sufficiently robust to support large-scale deployment. Existing systems are typically plagued by large false-alarm rates, and an inability to cope with sensor artifact in a principled manner. This paper has two aims: 1) proposal of a novel, patient-personalized system for analysis and inference in the presence of data uncertainty, typically caused by sensor artifact and data incompleteness; 2) demonstration of the method using a large-scale clinical study in which 200 patients have been monitored using the proposed system. This latter provides much-needed evidence that personalized e-health monitoring is feasible within an actual clinical environment, at scale, and that the method is capable of improving patient outcomes via personalized healthcare.
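A common building block for the kind of personalized, uncertainty-aware monitoring described above is Gaussian process regression, whose predictive variance widens where data are missing or noisy. Below is a minimal sketch on a synthetic heart-rate trace using scikit-learn; the kernel choice, thresholds, and data are assumptions, not the paper's model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic heart-rate trace with a gap and sensor-like noise (illustrative only)
t = np.linspace(0, 24, 200)[:, None]                       # hours
hr = 70 + 8 * np.sin(2 * np.pi * t.ravel() / 24) + np.random.default_rng(0).normal(0, 2, 200)
keep = np.r_[0:80, 120:200]                                # simulate missing sensor data

# The WhiteKernel term lets the GP attribute part of the signal to sensor noise,
# so the predictive interval widens over the gap instead of interpolating blindly.
gp = GaussianProcessRegressor(RBF(length_scale=3.0) + WhiteKernel(noise_level=4.0),
                              normalize_y=True)
gp.fit(t[keep], hr[keep])
mean, std = gp.predict(t, return_std=True)
alerts = np.abs(hr - mean) > 3 * std                       # flag implausible readings
print("flagged samples:", int(alerts.sum()))
```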
A framework for modeling uncertainty in regional climate change
In this study, we present a new modeling framework and a large ensemble of climate projections to investigate the uncertainty in regional climate change over the United States associated with four dimensions of uncertainty. The sources of uncertainty considered in this framework ...
Model Uncertainty Quantification Methods In Data Assimilation
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.
2017-12-01
Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging; from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high dimensional complex geophysical systems is an active area of research, where there exists many opportunities to enhance existing methodologies. One of the central challenges is in model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.
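One widely used way to fold model and observation uncertainty into the assimilation step is the ensemble Kalman filter, where the forecast ensemble supplies the error covariances. The sketch below implements a stochastic EnKF analysis step on a toy three-variable state; it is a generic illustration, not the methods developed in this work.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, H):
    """Stochastic EnKF analysis step: ensemble is (n_members, n_state),
    H maps state to observation space, obs_err_var is the observation variance."""
    n, _ = ensemble.shape
    rng = np.random.default_rng(0)
    Hx = ensemble @ H.T                                   # forecast observations
    X = ensemble - ensemble.mean(axis=0)                  # state anomalies
    Y = Hx - Hx.mean(axis=0)                              # observation anomalies
    Pxy = X.T @ Y / (n - 1)
    Pyy = Y.T @ Y / (n - 1) + obs_err_var * np.eye(H.shape[0])
    K = Pxy @ np.linalg.inv(Pyy)                          # Kalman gain
    perturbed = obs + rng.normal(0, np.sqrt(obs_err_var), size=(n, H.shape[0]))
    return ensemble + (perturbed - Hx) @ K.T

# Toy example: 3-variable state, only the first variable observed
ens = np.random.default_rng(1).normal(size=(50, 3)) + [1.0, 2.0, 3.0]
H = np.array([[1.0, 0.0, 0.0]])
updated = enkf_update(ens, obs=np.array([1.5]), obs_err_var=0.1, H=H)
print("prior mean:", ens.mean(axis=0).round(2), "posterior mean:", updated.mean(axis=0).round(2))
```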
NetCDF-U - Uncertainty conventions for netCDF datasets
NASA Astrophysics Data System (ADS)
Bigagli, Lorenzo; Nativi, Stefano; Domenico, Ben
2013-04-01
To facilitate the automated processing of uncertain data (e.g. uncertainty propagation in modeling applications), we have proposed a set of conventions for expressing uncertainty information within the netCDF data model and format: the NetCDF Uncertainty Conventions (NetCDF-U). From a theoretical perspective, it can be said that no dataset is a perfect representation of the reality it purports to represent. Inevitably, errors arise from the observation process, including the sensor system and subsequent processing, differences in scales of phenomena and the spatial support of the observation mechanism, lack of knowledge about the detailed conversion between the measured quantity and the target variable. This means that, in principle, all data should be treated as uncertain. The most natural representation of an uncertain quantity is in terms of random variables, with a probabilistic approach. However, it must be acknowledged that almost all existing data resources are not treated in this way. Most datasets come simply as a series of values, often without any uncertainty information. If uncertainty information is present, then it is typically within the metadata, as a data quality element. This is typically a global (dataset wide) representation of uncertainty, often derived through some form of validation process. Typically, it is a statistical measure of spread, for example the standard deviation of the residuals. The introduction of a mechanism by which such descriptions of uncertainty can be integrated into existing geospatial applications is considered a practical step towards a more accurate modeling of our uncertain understanding of any natural process. Given the generality and flexibility of the netCDF data model, conventions on naming, syntax, and semantics have been adopted by several communities of practice, as a means of improving data interoperability. Some of the existing conventions include provisions on uncertain elements and concepts, but, to our knowledge, no general convention on the encoding of uncertainty has been proposed, to date. In particular, the netCDF Climate and Forecast Conventions (NetCDF-CF), a de-facto standard for a large amount of data in Fluid Earth Sciences, mention the issue and provide limited support for uncertainty representation. NetCDF-U is designed to be fully compatible with NetCDF-CF, where possible adopting the same mechanisms (e.g. using the same attributes name with compatible semantics). The rationale for this is that a probabilistic description of scientific quantities is a crosscutting aspect, which may be modularized (note that a netCDF dataset may be compliant with more than one convention). The scope of NetCDF-U is to extend and qualify the netCDF classic data model (also known as netCDF3), to capture the uncertainty related to geospatial information encoded in that format. In the future, a netCDF4 approach for uncertainty encoding will be investigated. The NetCDF-U Conventions have the following rationale: • Compatibility with netCDF-CF Conventions 1.5. • Human-readability of conforming datasets structure. • Minimal difference between certain/agnostic and uncertain representations of data (e.g. with respect to dataset structure). NetCDF-U is based on a generic mechanism for annotating netCDF data variables with probability theory semantics. The Uncertainty Markup Language (UncertML) 2.0 is used as a controlled conceptual model and vocabulary for NetCDF-U annotations. 
The proposed mechanism anticipates a generalized support for semantic annotations in netCDF. NetCDF-U defines syntactical conventions for encoding samples, summary statistics, and distributions, along with mechanisms for expressing dependency relationships among variables. The conventions were accepted as an Open Geospatial Consortium (OGC) Discussion Paper (OGC 11-163); related discussions are conducted on a public forum hosted by the OGC. NetCDF-U may have implications for future work directed at communicating geospatial data provenance and uncertainty in contexts other than netCDF. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 248488.
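In practice, an uncertainty-aware netCDF file pairs each data variable with an ancillary variable describing its uncertainty and annotates that variable with a probabilistic concept. The sketch below, using the netCDF4 Python library, writes such a pair; the attribute names and the UncertML URI are illustrative assumptions, so the normative NetCDF-U/UncertML vocabulary in OGC 11-163 should be consulted for conforming files.

```python
import numpy as np
from netCDF4 import Dataset

# Write a temperature field together with a standard-error ancillary variable.
# Attribute names below are illustrative only; see the NetCDF-U discussion paper
# (OGC 11-163) and UncertML 2.0 for the normative vocabulary.
with Dataset("temperature_uncertain.nc", "w", format="NETCDF3_CLASSIC") as nc:
    nc.Conventions = "CF-1.5"                     # NetCDF-U is designed to sit alongside CF
    nc.createDimension("lat", 4)
    nc.createDimension("lon", 5)

    temp = nc.createVariable("temperature", "f4", ("lat", "lon"))
    temp.units = "K"
    temp.ancillary_variables = "temperature_se"   # CF-style link to the uncertainty field
    temp[:] = 280.0 + np.random.default_rng(0).normal(0, 2, (4, 5))

    se = nc.createVariable("temperature_se", "f4", ("lat", "lon"))
    se.units = "K"
    se.long_name = "standard error of temperature"
    # Illustrative UncertML-flavoured annotation (hypothetical attribute name)
    se.ref = "http://www.uncertml.org/statistics/standard-deviation"
    se[:] = np.full((4, 5), 0.8)
```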
Identifying acne treatment uncertainties via a James Lind Alliance Priority Setting Partnership
Layton, Alison; Eady, E Anne; Peat, Maggie; Whitehouse, Heather; Levell, Nick; Ridd, Matthew; Cowdell, Fiona; Patel, Mahenda; Andrews, Stephen; Oxnard, Christine; Fenton, Mark; Firkins, Lester
2015-01-01
Objectives The Acne Priority Setting Partnership (PSP) was set up to identify and rank treatment uncertainties by bringing together people with acne, and professionals providing care within and beyond the National Health Service (NHS). Setting The UK with international participation. Participants Teenagers and adults with acne, parents, partners, nurses, clinicians, pharmacists, private practitioners. Methods Treatment uncertainties were collected via separate online harvesting surveys, embedded within the PSP website, for patients and professionals. A wide variety of approaches were used to promote the surveys to stakeholder groups with a particular emphasis on teenagers and young adults. Survey submissions were collated using keywords and verified as uncertainties by appraising existing evidence. The 30 most popular themes were ranked via weighted scores from an online vote. At a priority setting workshop, patients and professionals discussed the 18 highest-scoring questions from the vote, and reached consensus on the top 10. Results In the harvesting survey, 2310 people, including 652 professionals and 1456 patients (58% aged 24 y or younger), made submissions containing at least one research question. After checking for relevance and rephrasing, a total of 6255 questions were collated into themes. Valid votes ranking the 30 most common themes were obtained from 2807 participants. The top 10 uncertainties prioritised at the workshop were largely focused on management strategies, optimum use of common prescription medications and the role of non-drug based interventions. More female than male patients took part in the harvesting surveys and vote. A wider range of uncertainties were provided by patients compared to professionals. Conclusions Engaging teenagers and young adults in priority setting is achievable using a variety of promotional methods. The top 10 uncertainties reveal an extensive knowledge gap about widely used interventions and the relative merits of drug versus non-drug based treatments in acne management. PMID:26187120
Uncertainty in flood damage estimates and its potential effect on investment decisions
NASA Astrophysics Data System (ADS)
Wagenaar, Dennis; de Bruijn, Karin; Bouwer, Laurens; de Moel, Hans
2015-04-01
This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage models can lead to large uncertainties in flood damage estimates. This explanation is used to quantify this uncertainty with a Monte Carlo Analysis. This Monte Carlo analysis uses a damage function library with 272 functions from 7 different flood damage models. This results in uncertainties in the order of magnitude of a factor 2 to 5. This uncertainty is typically larger for small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economic optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investments.
Uncertainty in flood damage estimates and its potential effect on investment decisions
NASA Astrophysics Data System (ADS)
Wagenaar, D. J.; de Bruijn, K. M.; Bouwer, L. M.; De Moel, H.
2015-01-01
This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage models can lead to large uncertainties in flood damage estimates. This explanation is used to quantify this uncertainty with a Monte Carlo Analysis. As input the Monte Carlo analysis uses a damage function library with 272 functions from 7 different flood damage models. This results in uncertainties in the order of magnitude of a factor 2 to 5. The resulting uncertainty is typically larger for small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economic optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investments.
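The Monte Carlo procedure described in both versions of this abstract amounts to repeatedly sampling a depth-damage curve from a function library and recomputing the event damage. The sketch below reproduces that idea with a synthetic library of 272 curves and made-up depth data; the curve shapes, maximum damage value, and depths are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in damage-function library: each function maps water depth (m) to a
# damage fraction of a maximum damage value (the shapes are illustrative only).
def make_function(scale):
    return lambda depth: np.clip(1.0 - np.exp(-depth / scale), 0.0, 1.0)

library = [make_function(s) for s in rng.uniform(0.5, 3.0, 272)]

max_damage = 250_000.0                                    # euros per building (hypothetical)
depths = rng.lognormal(mean=-0.5, sigma=0.6, size=5_000)  # flooded-building water depths (m)

# Monte Carlo: each realisation applies one randomly drawn damage curve to the event
totals = []
for _ in range(1_000):
    f = library[rng.integers(len(library))]
    totals.append(max_damage * np.sum(f(depths)))
totals = np.array(totals)

print(f"median damage {np.median(totals)/1e6:.0f} M EUR, "
      f"5-95% range {np.percentile(totals, 5)/1e6:.0f}-{np.percentile(totals, 95)/1e6:.0f} M EUR")
```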
Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.
2002-01-01
Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
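The first-order method of moments and Monte Carlo simulation mentioned above can be contrasted on a toy response function: the former propagates input standard deviations through local gradients, the latter samples the inputs directly. The sketch below uses an invented "weight" function and assumed input uncertainties; it is not the aircraft design code used in the study.

```python
import numpy as np

# Example response: a toy "aircraft weight" as a nonlinear function of two
# uncertain configuration variables x1, x2 (purely illustrative).
def weight(x1, x2):
    return 50_000 + 4_000 * x1 + 1_500 * x2 ** 2

mu = np.array([1.0, 2.0])        # nominal design variables
sigma = np.array([0.05, 0.10])   # input standard deviations

# First-order method of moments: propagate via finite-difference gradients
eps = 1e-4
grad = np.array([(weight(mu[0] + eps, mu[1]) - weight(mu[0] - eps, mu[1])) / (2 * eps),
                 (weight(mu[0], mu[1] + eps) - weight(mu[0], mu[1] - eps)) / (2 * eps)])
mom_mean = weight(*mu)
mom_std = np.sqrt(np.sum((grad * sigma) ** 2))

# Monte Carlo simulation for comparison
rng = np.random.default_rng(0)
samples = weight(rng.normal(mu[0], sigma[0], 100_000),
                 rng.normal(mu[1], sigma[1], 100_000))
print(f"moments: {mom_mean:.0f} +/- {mom_std:.0f}   "
      f"Monte Carlo: {samples.mean():.0f} +/- {samples.std():.0f}")
```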
NASA Astrophysics Data System (ADS)
Fakhari, Vahid; Choi, Seung-Bok; Cho, Chang-Hyun
2015-04-01
This work presents a new robust model reference adaptive control (MRAC) for the control of vibration caused by a vehicle engine using an electromagnetic type of active engine mount. Vibration isolation performances of the active mount associated with the robust controller are evaluated in the presence of large uncertainties. As a first step, an active mount with a linear solenoid actuator is prepared and its dynamic model is identified via experimental test. Subsequently, a new robust MRAC based on the gradient method with σ-modification is designed by selecting a proper reference model. In designing the robust adaptive control, structured (parametric) uncertainties in the stiffness of the passive part of the mount and in the damping ratio of the active part of the mount are considered to investigate the robustness of the proposed controller. Experimental and simulation results are presented to evaluate performance, focusing on the robustness behavior of the controller in the face of large uncertainties. The obtained results show that the proposed controller can sufficiently provide robust vibration control performance even in the presence of large uncertainties, showing effective vibration isolation.
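For a first-order plant, a gradient-based MRAC law with σ-modification reduces to two scalar gain-update equations with a leakage term that keeps the gains bounded. The sketch below simulates such a controller with Euler integration; the plant, reference model, and gains are invented for illustration and are not the engine-mount model identified in the paper.

```python
import numpy as np

# Gradient-method MRAC with sigma-modification for a first-order plant
#   x_dot = a*x + b*u, reference model x_m_dot = -am*x_m + am*r
# Plant parameters a, b are treated as uncertain; gains adapt online.
a, b = 1.2, 2.5                 # "true" plant (unknown to the controller)
am, gamma, sigma = 4.0, 8.0, 0.05
dt, T = 1e-3, 10.0
x = xm = 0.0
kx = kr = 0.0                   # adaptive feedback / feedforward gains

for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0        # square-wave reference
    u = kx * x + kr * r
    e = x - xm                                  # tracking error
    # sigma-modification adds leakage so the gains stay bounded under disturbance
    kx += dt * (-gamma * e * x - sigma * gamma * kx)
    kr += dt * (-gamma * e * r - sigma * gamma * kr)
    x += dt * (a * x + b * u)
    xm += dt * (-am * xm + am * r)

print(f"final tracking error {e:+.4f}, adapted gains kx={kx:.2f}, kr={kr:.2f}")
```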
pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis
NASA Astrophysics Data System (ADS)
White, J.; Brakefield, L. K.
2015-12-01
The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well-suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source Python module that automates the workflow of null-space Monte Carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a Python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in Python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease of use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
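The core idea automated by pyNSMC can be shown in a few lines: perturb the calibrated parameters randomly, then project the perturbation onto the (approximate) null space of the Jacobian so that the simulated observations are nearly unchanged. The sketch below is a plain numpy illustration of that projection, not pyNSMC's actual API; the Jacobian, prior spread, and energy threshold are assumptions.

```python
import numpy as np

def null_space_draws(jacobian, p_calibrated, prior_std, n_draws, energy=0.99):
    """Generate parameter realisations that stay (approximately) calibrated by
    projecting random perturbations onto the null space of the Jacobian."""
    U, S, Vt = np.linalg.svd(jacobian, full_matrices=True)
    # Solution space = right singular vectors carrying most of the sensitivity
    k = np.searchsorted(np.cumsum(S**2) / np.sum(S**2), energy) + 1
    V_null = Vt[k:].T                                   # null-space basis
    rng = np.random.default_rng(0)
    draws = []
    for _ in range(n_draws):
        dp = rng.normal(0.0, prior_std)                 # stochastic parameter difference
        dp_null = V_null @ (V_null.T @ dp)              # remove the solution-space part
        draws.append(p_calibrated + dp_null)
    return np.array(draws)

# Toy problem: 8 observations, 20 parameters (under-determined, as in groundwater models)
rng = np.random.default_rng(1)
J = rng.normal(size=(8, 20))
real = null_space_draws(J, p_calibrated=np.zeros(20), prior_std=np.ones(20), n_draws=100)
print("max change in simulated obs:", np.abs(J @ real.T).max().round(6))
```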
Proton Magnetic Form Factor from Existing Elastic e-p Cross Section Data
NASA Astrophysics Data System (ADS)
Ou, Longwu; Christy, Eric; Gilad, Shalev; Keppel, Cynthia; Schmookler, Barak; Wojtsekhowski, Bogdan
2015-04-01
The proton magnetic form factor GMp, in addition to being an important benchmark for all cross section measurements in hadron physics, provides critical information on proton structure. Extraction of GMp from e-p cross section data is complicated by two-photon exchange (TPE) effects, where available calculations still have large theoretical uncertainties. Studies of TPE contributions to e-p scattering have observed no nonlinear effects in Rosenbluth separations. Recent theoretical investigations show that the TPE correction goes to 0 when ɛ approaches 1, where ɛ is the virtual photon polarization parameter. In this talk, existing e-p elastic cross section data are reanalyzed by extrapolating the reduced cross section for ɛ approaching 1. Existing polarization transfer data, which is supposed to be relatively immune to TPE effects, are used to produce a ratio of electric and magnetic form factors. The extrapolated reduced cross section and polarization transfer ratio are then used to calculate GEp and GMp at different Q2 values.
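The extraction strategy described above can be mimicked on synthetic data: fit the reduced cross section linearly in ε, evaluate the fit at ε = 1 where the TPE correction is taken to vanish, and use a polarization-transfer ratio to separate GMp. The sketch below uses invented values of τ, GM², and the form-factor ratio purely to illustrate the algebra.

```python
import numpy as np

# Synthetic reduced cross sections at fixed Q^2 (illustrative numbers only):
# sigma_R = G_M^2 + (epsilon/tau) * G_E^2, distorted by a TPE-like term that
# vanishes as epsilon -> 1.
tau, GM2 = 0.8, 7.3
mu_p, R = 2.793, 0.88          # polarization-transfer ratio R = mu_p * GE / GM
GE2 = (R / mu_p) ** 2 * GM2    # keep the synthetic data consistent with R
eps = np.array([0.2, 0.35, 0.5, 0.65, 0.8, 0.92])
rng = np.random.default_rng(2)
sigma_r = GM2 + eps / tau * GE2 + 0.08 * (1 - eps) + rng.normal(0, 0.02, eps.size)

# Linear Rosenbluth-style fit; evaluating it at epsilon = 1 suppresses the TPE term
slope, intercept = np.polyfit(eps, sigma_r, 1)
sigma_at_1 = slope + intercept

# Separate GM^2 using the (TPE-insensitive) polarization-transfer ratio
GM2_extracted = sigma_at_1 / (1.0 + (R / mu_p) ** 2 / tau)
print(f"sigma_R(eps=1) = {sigma_at_1:.3f}, extracted GM^2 = {GM2_extracted:.3f} (input {GM2})")
```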
NASA Astrophysics Data System (ADS)
Lin, G.; Stephan, E.; Elsethagen, T.; Meng, D.; Riihimaki, L. D.; McFarlane, S. A.
2012-12-01
Uncertainty quantification (UQ) is the science of quantitative characterization and reduction of uncertainties in applications. It determines how likely certain outcomes are if some aspects of the system are not exactly known. UQ studies of datasets such as atmospheric observations have greatly increased in size and complexity because they now comprise additional complex iterative steps, involve numerous simulation runs and can include additional analytical products such as charts, reports, and visualizations to explain levels of uncertainty. These new requirements greatly expand the need for metadata support beyond the NetCDF convention and vocabulary, and as a result an additional formal data provenance ontology is required to provide a historical explanation of the origin of the dataset that includes references between the explanations and components within the dataset. This work shares a climate observation data UQ science use case and illustrates how to reduce climate observation data uncertainty and use a linked science application called Provenance Environment (ProvEn) to enable and facilitate scientific teams to publish, share, link, and discover knowledge about the UQ research results. UQ results include terascale datasets that are published to an Earth Systems Grid Federation (ESGF) repository. Uncertainty exists in observation datasets due to sensor data processing (such as time averaging), sensor failure in extreme weather conditions, sensor manufacturing error, etc. To reduce the uncertainty in the observation datasets, a method based on Principal Component Analysis (PCA) was proposed to recover the missing values in observation data. Several large principal components (PCs) of data with missing values are computed based on available values using an iterative method. The computed PCs can approximate the true PCs with high accuracy provided a condition on the missing values is met; the iterative method greatly improves the computational efficiency of computing PCs. Moreover, noise removal is done at the same time as the computation of missing values by using only several large PCs. The uncertainty quantification is done through statistical analysis of the distribution of different PCs. To record the above UQ process, and to provide an explanation of the uncertainty before and after the UQ process on the observation datasets, an additional data provenance ontology, such as ProvEn, is necessary. In this study, we demonstrate how to reduce observation data uncertainty on climate model-observation test beds and use ProvEn to record the UQ process on ESGF. ProvEn demonstrates how a scientific team conducting UQ studies can discover dataset links using its domain knowledgebase, allowing them to better understand and convey the UQ study research objectives, the experimental protocol used, the resulting dataset lineage, related analytical findings, and ancillary literature citations, along with the social network of scientists associated with the study. Climate scientists will not only benefit from understanding a particular dataset within a knowledge context, but also benefit from the cross reference of knowledge among the numerous UQ studies being stored in ESGF.
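The iterative PCA reconstruction of missing values described above alternates between a low-rank SVD reconstruction and re-insertion of the known entries. The sketch below is a minimal numpy version on a synthetic multi-channel "observation" matrix; the rank, iteration count, and data are assumptions, not the study's configuration.

```python
import numpy as np

def pca_impute(data, mask, n_components=3, n_iter=50):
    """Fill missing observations (mask == True) by iterating between a
    low-rank PCA reconstruction and re-insertion of the known values."""
    filled = np.where(mask, np.nanmean(data, axis=0, keepdims=True), data)
    for _ in range(n_iter):
        mean = filled.mean(axis=0, keepdims=True)
        U, S, Vt = np.linalg.svd(filled - mean, full_matrices=False)
        recon = (U[:, :n_components] * S[:n_components]) @ Vt[:n_components] + mean
        filled = np.where(mask, recon, data)      # keep observed values fixed
    return filled

# Synthetic "observation" matrix (time x channels) with 20% missing values
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)[:, None]
truth = np.hstack([np.sin(t + p) for p in np.linspace(0, 2, 8)]) + rng.normal(0, 0.1, (500, 8))
mask = rng.random(truth.shape) < 0.2
data = np.where(mask, np.nan, truth)

recovered = pca_impute(data, mask)
print("RMSE on missing entries:",
      round(float(np.sqrt(np.mean((recovered[mask] - truth[mask]) ** 2))), 4))
```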
Time lag and communication in changing unpopular norms.
Gërxhani, Klarita; Bruggeman, Jeroen
2015-01-01
Humans often coordinate their social lives through norms. When a large majority of people are dissatisfied with an existing norm, it seems obvious that they will change it. Often, however, this does not occur. We investigate how a time lag between individual support of a norm change and the change itself hinders such change, in relation to the critical mass of supporters needed to effectuate the change and the (im)possibility of communicating about it. To isolate these factors, we utilize a laboratory experiment. As predicted, we find unambiguous effects of time lag in precluding norm change; a higher threshold for a critical mass does so as well. Communication facilitates choosing superior norms, but it does not necessarily lead to norm change when uncertainty about whether there will be a norm change in the future is high. Communication seems to help coordination on actions in the present but not in the future. Hence, the uncertainty driven by time lag makes individuals choose the status quo, here the unpopular norm.
Standardization of Broadband UV Measurements for 365 nm LED Sources
Eppeldauer, George P.
2012-01-01
Broadband UV measurements are evaluated for the case where UV-A irradiance meters measure optical radiation from 365 nm UV sources. The CIE standardized rectangular-shape UV-A function can be realized only with large spectral mismatch errors. The spectral power distribution of the 365 nm excitation source is not standardized. Accordingly, the readings made with different types of UV meters, even if they measure the same UV source, can be very different. Available UV detectors and UV meters were measured and evaluated for spectral responsivity. The spectral product of the source distribution and the meter's spectral responsivity was calculated for different combinations to estimate broadband signal-measurement errors. Standardization of both the UV source distribution and the meter spectral responsivity is recommended here to perform uniform broadband measurements with low uncertainty. It is shown which spectral responsivity function(s) are needed for new and existing UV irradiance meters to perform low-uncertainty broadband 365 nm measurements. PMID:26900516
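The spectral-product calculation mentioned in the abstract can be illustrated with a short sketch: the meter reading is the integral over wavelength of source irradiance times the meter's relative spectral responsivity, and the ratio of that product for two responsivities estimates the broadband mismatch error. The Gaussian 365 nm source and the two idealized responsivities below are illustrative assumptions, not the measured instruments from the paper.

```python
# Hedged sketch: meter signal = integral of E(lambda) * s(lambda) d(lambda).
import numpy as np

wl = np.arange(300.0, 421.0, 1.0)                       # wavelength grid, nm

# Hypothetical 365 nm LED-like source (Gaussian, ~10 nm FWHM)
E = np.exp(-0.5 * ((wl - 365.0) / (10.0 / 2.355)) ** 2)

# Idealized rectangular CIE UV-A band (315-400 nm) vs a sloped "real" meter
s_uva  = ((wl >= 315) & (wl <= 400)).astype(float)
s_real = np.clip(1.0 - np.abs(wl - 360.0) / 60.0, 0.0, None)

signal_uva  = np.trapz(E * s_uva,  wl)
signal_real = np.trapz(E * s_real, wl)

# Ratio of the two spectral products: an estimate of the broadband
# signal-measurement error caused by spectral mismatch for this source.
mismatch = signal_real / signal_uva
print(f"relative reading of the real meter vs the ideal UV-A meter: {mismatch:.3f}")
```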
NASA Astrophysics Data System (ADS)
Estrada Vigil, Juan Cruz
The mass of the top (t) quark has been measured in the lepton+jets channel of tt¯ final states studied by the DØ and CDF experiments at Fermilab using data from Run I of the Tevatron pp¯ collider. The result published by DØ is 173.3 +/- 5.6(stat) +/- 5.5(syst) GeV. We present a different method to perform this measurement using the existing data. The new technique uses all available kinematic information in an event, and provides a significantly smaller statistical uncertainty than achieved in previous analyses. The preliminary results presented in this thesis indicate a statistical uncertainty for the extracted mass of the top quark of 3.5 GeV, which represents a significant improvement over the previous value of 5.6 GeV. The method of analysis is very general, and may be particularly useful in situations where there is a small signal and a large background.
Value of information of repair times for offshore wind farm maintenance planning
NASA Astrophysics Data System (ADS)
Seyr, Helene; Muskulus, Michael
2016-09-01
A large contribution to the total cost of energy in offshore wind farms is due to maintenance costs. Research in recent years has therefore focused on lowering maintenance costs using different approaches. Decision support models for scheduling maintenance already exist and address different factors that influence the schedule. Our contribution deals with the uncertainty in the repair times. Given the mean repair times for different turbine components, we make some assumptions regarding the underlying repair time distribution. We compare the results of a decision support model driven by the mean times to repair with results driven by those repair time distributions. Additionally, distributions with the same mean but different variances are compared under the same conditions. The value of lowering the uncertainty in the repair time is calculated, and we find that using distributions significantly decreases the availability when scheduling maintenance for multiple turbines in a wind park. Having detailed information about the repair time distribution may influence the results of maintenance modeling and might help identify cost factors.
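One way to see why a repair-time distribution with the same mean can lower availability is through resource contention. The sketch below is not the paper's decision support model; it assumes a single repair crew serving failures first-come-first-served, so that larger repair-time variance increases queueing delay. All parameter values (failure rate, mean repair time, lognormal spread) are assumptions for illustration.

```python
# Hedged Monte Carlo sketch: fixed mean repair time vs a lognormal repair time
# with the same mean, under a single-crew FIFO repair queue.
import numpy as np

rng = np.random.default_rng(1)
n_turbines, horizon_h = 20, 8760            # one year, hourly resolution
mttr_h, rate_per_turbine = 60.0, 4.0 / 8760 # mean repair 60 h, ~4 failures/yr

def mean_downtime(repair_sampler, n_runs=500):
    lost = []
    for _ in range(n_runs):
        n_fail = rng.poisson(rate_per_turbine * horizon_h * n_turbines)
        arrivals = np.sort(rng.uniform(0, horizon_h, size=n_fail))
        repairs = repair_sampler(n_fail)
        crew_free, total_down = 0.0, 0.0
        for t, r in zip(arrivals, repairs):  # FIFO single-crew queue
            start = max(t, crew_free)
            crew_free = start + r
            total_down += crew_free - t      # waiting + repair time
        lost.append(total_down)
    return np.mean(lost) / (n_turbines * horizon_h)

det = mean_downtime(lambda n: np.full(n, mttr_h))
sigma = 1.0
mu = np.log(mttr_h) - 0.5 * sigma**2         # lognormal with the same mean
sto = mean_downtime(lambda n: rng.lognormal(mu, sigma, size=n))
print(f"availability, mean repair time only: {1 - det:.4f}")
print(f"availability, lognormal repair time: {1 - sto:.4f}")
```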
Re-evaluation of heat flow data near Parkfield, CA: Evidence for a weak San Andreas Fault
Fulton, P.M.; Saffer, D.M.; Harris, Reid N.; Bekins, B.A.
2004-01-01
Improved interpretations of the strength of the San Andreas Fault near Parkfield, CA based on thermal data require quantification of processes causing significant scatter and uncertainty in existing heat flow data. These effects include topographic refraction, heat advection by topographically-driven groundwater flow, and uncertainty in thermal conductivity. Here, we re-evaluate the heat flow data in this area by correcting for full 3-D terrain effects. We then investigate the potential role of groundwater flow in redistributing fault-generated heat, using numerical models of coupled heat and fluid flow for a wide range of hydrologic scenarios. We find that a large degree of the scatter in the data can be accounted for by 3-D terrain effects, and that for plausible groundwater flow scenarios frictional heat generated along a strong fault is unlikely to be redistributed by topographically-driven groundwater flow in a manner consistent with the 3-D corrected data. Copyright 2004 by the American Geophysical Union.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stinnett, Jacob; Sullivan, Clair J.; Xiong, Hao
Low-resolution isotope identifiers are widely deployed for nuclear security purposes, but these detectors currently demonstrate problems in making correct identifications in many typical usage scenarios. While there are many hardware alternatives and improvements that can be made, performance on existing low resolution isotope identifiers should be able to be improved by developing new identification algorithms. We have developed a wavelet-based peak extraction algorithm and an implementation of a Bayesian classifier for automated peak-based identification. The peak extraction algorithm has been extended to compute uncertainties in the peak area calculations. To build empirical joint probability distributions of the peak areas and uncertainties, a large set of spectra were simulated in MCNP6 and processed with the wavelet-based feature extraction algorithm. Kernel density estimation was then used to create a new component of the likelihood function in the Bayesian classifier. Furthermore, identification performance is demonstrated on a variety of real low-resolution spectra, including Category I quantities of special nuclear material.
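The two ingredients described above can be sketched in a few lines. In the hedged example below, SciPy's continuous-wavelet-transform peak finder stands in for the authors' wavelet algorithm, and a kernel density estimate over (peak area, area uncertainty) pairs stands in for the empirical likelihood term built from MCNP6 simulations. The synthetic spectrum and training data are assumptions.

```python
# Hedged sketch: wavelet-based peak search plus a KDE likelihood over peak features.
import numpy as np
from scipy.signal import find_peaks_cwt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Synthetic low-resolution gamma spectrum: two peaks on a decaying continuum
channels = np.arange(1024)
spectrum = 200 * np.exp(-channels / 400.0)
for centroid, area, width in [(320, 4000, 12.0), (662, 3000, 18.0)]:
    spectrum += area / (width * np.sqrt(2 * np.pi)) * np.exp(
        -0.5 * ((channels - centroid) / width) ** 2)
counts = rng.poisson(spectrum)

# 1) wavelet-based peak extraction
peak_idx = find_peaks_cwt(counts, widths=np.arange(5, 30))

# 2) KDE-based likelihood p(area, uncertainty | isotope), trained here on random
#    draws standing in for simulated spectra
train_area = rng.normal(4000, 300, size=500)
train_unc  = np.sqrt(train_area) * rng.normal(1.0, 0.05, size=500)
kde = gaussian_kde(np.vstack([train_area, train_unc]))

# evaluate the likelihood of an observed (area, uncertainty) pair
obs = np.array([[3900.0], [65.0]])
print("detected peak channels:", peak_idx)
print("likelihood of observed peak features:", float(kde(obs)))
```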
NASA Astrophysics Data System (ADS)
Poppick, A. N.; McKinnon, K. A.; Dunn-Sigouin, E.; Deser, C.
2017-12-01
Initial condition climate model ensembles suggest that regional temperature trends can be highly variable on decadal timescales due to characteristics of internal climate variability. Accounting for trend uncertainty due to internal variability is therefore necessary to contextualize recent observed temperature changes. However, while the variability of trends in a climate model ensemble can be evaluated directly (as the spread across ensemble members), internal variability simulated by a climate model may be inconsistent with observations. Observation-based methods for assessing the role of internal variability in trend uncertainty are therefore required. Here, we use a statistical resampling approach to assess trend uncertainty due to internal variability in historical 50-year (1966-2015) winter near-surface air temperature trends over North America. We compare this estimate of trend uncertainty to simulated trend variability in the NCAR CESM1 Large Ensemble (LENS), finding that uncertainty in wintertime temperature trends over North America due to internal variability is largely overestimated by CESM1, on average by about 32%. Our observation-based resampling approach is combined with the forced signal from LENS to produce an 'Observational Large Ensemble' (OLENS). The members of OLENS indicate a range of spatially coherent fields of temperature trends resulting from different sequences of internal variability consistent with observations. The smaller trend variability in OLENS suggests that uncertainty in the historical climate change signal in observations due to internal variability is less than suggested by LENS.
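A generic version of such an observation-based resampling estimate is sketched below: detrend the observed series, resample the residuals in multi-year blocks to retain some temporal dependence, add them back onto the forced (here, linear) signal, and refit the trend. The block length, the use of a fitted linear trend as the forced signal, and the synthetic series are assumptions; the actual OLENS construction is more elaborate.

```python
# Hedged block-bootstrap sketch of trend uncertainty due to internal variability.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1966, 2016)
n = years.size

# Synthetic "observed" winter temperature series: trend + red noise
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.5 * noise[t - 1] + rng.normal(scale=0.8)
obs = 0.03 * (years - years[0]) + noise

def fit_trend(y):
    return np.polyfit(years, y, 1)[0]          # deg C per year

trend = fit_trend(obs)
resid = obs - np.polyval(np.polyfit(years, obs, 1), years)

block, n_boot = 5, 2000
boot_trends = np.empty(n_boot)
for b in range(n_boot):
    starts = rng.integers(0, n - block, size=n // block + 1)
    shuffled = np.concatenate([resid[s:s + block] for s in starts])[:n]
    boot_trends[b] = fit_trend(trend * (years - years[0]) + shuffled)

print(f"observed 50-yr trend: {50 * trend:.2f} deg C")
print(f"spread from internal variability (1 s.d.): {50 * boot_trends.std():.2f} deg C")
```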
Beekhuizen, Johan; Heuvelink, Gerard B M; Huss, Anke; Bürgi, Alfred; Kromhout, Hans; Vermeulen, Roel
2014-11-01
With the increased availability of spatial data and computing power, spatial prediction approaches have become a standard tool for exposure assessment in environmental epidemiology. However, such models are largely dependent on accurate input data. Uncertainties in the input data can therefore have a large effect on model predictions, but are rarely quantified. Using Monte Carlo simulation, we assessed the effect of input uncertainty on the prediction of radio-frequency electromagnetic fields (RF-EMF) from mobile phone base stations at 252 receptor sites in Amsterdam, The Netherlands. The impact on ranking and classification was determined by computing the Spearman correlations and weighted Cohen's Kappas (based on tertiles of the RF-EMF exposure distribution) between modelled values and RF-EMF measurements performed at the receptor sites. The uncertainty in modelled RF-EMF levels was large, with a median coefficient of variation of 1.5. Uncertainty in receptor site height, building damping and building height contributed most to model output uncertainty. For exposure ranking and classification, the heights of buildings and receptor sites were the most important sources of uncertainty, followed by building damping and antenna and site location. Uncertainty in antenna power, tilt, height and direction had a smaller impact on model performance. We quantified the effect of input data uncertainty on the prediction accuracy of an RF-EMF environmental exposure model, thereby identifying the most important sources of uncertainty and estimating the total uncertainty stemming from potential errors in the input data. This approach can be used to optimize the model and better interpret model output. Copyright © 2014 Elsevier Inc. All rights reserved.
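The workflow described above (perturb uncertain inputs, rerun the model, and score each realization against measurements with Spearman correlation and weighted kappa on exposure tertiles) can be sketched as follows. The toy propagation model, the perturbation magnitudes, and the synthetic "measured" values are assumptions; only the scoring metrics follow the abstract.

```python
# Hedged Monte Carlo input-uncertainty sketch with Spearman rho and weighted kappa.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(4)
n_sites, n_mc = 252, 500

# Stand-in "truth": measured RF-EMF at the receptor sites (arbitrary units)
distance = rng.uniform(50, 500, n_sites)
measured = 1e3 / distance**2 * rng.lognormal(0.0, 0.3, n_sites)

def exposure_model(dist_error_sd, damping_error_sd):
    d = distance * rng.lognormal(0.0, dist_error_sd, n_sites)
    damping = rng.lognormal(0.0, damping_error_sd, n_sites)   # crude building damping
    return 1e3 / d**2 / damping

def tertiles(x):
    return np.digitize(x, np.quantile(x, [1 / 3, 2 / 3]))

rho, kappa = [], []
for _ in range(n_mc):
    model = exposure_model(dist_error_sd=0.1, damping_error_sd=0.3)
    rho.append(spearmanr(model, measured)[0])
    kappa.append(cohen_kappa_score(tertiles(model), tertiles(measured),
                                   weights="linear"))

print(f"Spearman rho: median {np.median(rho):.2f}")
print(f"weighted kappa on tertiles: median {np.median(kappa):.2f}")
```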
COS Views of Local Galaxies Approaching Primeval Conditions
NASA Astrophysics Data System (ADS)
Wofford, Aida
2014-10-01
We will use COS G160M+G185M to observe the cosmologically important lines C IV 1548+1551 A, He II 1640 A, O III] 1661+1666 A, and C III] 1907+1909 A in the three closest, most metal-poor blue compact dwarf galaxies known. These galaxies approach primeval interstellar and stellar conditions. One of the galaxies has no existing spectroscopic coverage in the UV. Available spectroscopy of the most metal-poor galaxies in the local universe is scarce, inhomogeneous, and mostly of low spectral resolution, and is either noisy in the main UV lines or lacks coverage of them. The proposed spectral resolution of about 20 km/s represents an order of magnitude improvement over existing HST data and allows us to disentangle stellar, nebular, and/or shock components of the lines. The high-quality constraints obtained in the framework of this proposal will make it possible to assess the relative likelihood of new spectral models of star-forming galaxies from different groups, in the best possible way achievable with current instrumentation. This will ensure that the best possible studies of early chemical enrichment of the universe can be achieved. The proposed observations are necessary to minimize large existing systematic uncertainties in the determination of high-redshift galaxy properties that JWST was in large part designed to measure.
Narrowing the surface temperature range in CMIP5 simulations over the Arctic
NASA Astrophysics Data System (ADS)
Hao, Mingju; Huang, Jianbin; Luo, Yong; Chen, Xin; Lin, Yanluan; Zhao, Zongci; Xu, Ying
2018-05-01
Much uncertainty exists in reproducing Arctic temperature using different general circulation models (GCMs). Therefore, evaluating the performance of GCMs in reproducing Arctic temperature is critically important. In our study, 32 GCMs in the fifth phase of the Coupled Model Intercomparison Project (CMIP5) during the period 1900-2005 are used, and several metrics, i.e., bias, correlation coefficient (R), and root mean square error (RMSE), are applied. The Cowtan data set is adopted as the reference data. The results suggest that the GCMs used can reasonably reproduce the Arctic warming trend during the period 1900-2005, as observed in the observational data, whereas large inter-model differences exist in the modeled Arctic warming magnitude. With respect to the reference data, most GCMs have large cold biases, whereas others have weak warm biases. Additionally, based on statistical thresholds, the models MIROC-ESM, CSIRO-Mk3-6-0, HadGEM2-AO, and MIROC-ESM-CHEM (bias ≤ ±0.10 °C, R ≥ 0.50, and RMSE ≤ 0.60 °C) are identified as well-performing GCMs. The ensemble of the four best-performing GCMs (ES4), with bias, R, and RMSE values of -0.03 °C, 0.72, and 0.39 °C, respectively, performs better than the ensemble with all 32 members, with bias, R, and RMSE values of -0.04 °C, 0.64, and 0.43 °C, respectively. Finally, ES4 is used to produce projections for the next century under the RCP2.6, RCP4.5, and RCP8.5 scenarios. The uncertainty in the projected temperature is greater in the higher emissions scenarios. Additionally, the projected temperature in the cold half year has larger variations than that in the warm half year.
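The evaluation step can be made concrete with a short sketch: compute bias, correlation R, and RMSE of each model's Arctic-mean temperature series against a reference, then keep models passing the thresholds quoted above. The synthetic model and reference series are placeholders for the CMIP5 and Cowtan data; only the metric definitions and thresholds come from the abstract.

```python
# Hedged sketch of bias/R/RMSE evaluation and threshold-based model selection.
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1900, 2006)
reference = 0.01 * (years - 1900) + rng.normal(0.0, 0.3, years.size)

models = {f"GCM{i:02d}": reference + rng.normal(0.0, 0.5)              # offset
          + rng.normal(0.0, rng.uniform(0.2, 0.8), years.size)         # extra noise
          for i in range(1, 33)}

def metrics(sim, ref):
    bias = np.mean(sim - ref)
    r = np.corrcoef(sim, ref)[0, 1]
    rmse = np.sqrt(np.mean((sim - ref) ** 2))
    return bias, r, rmse

selected = []
for name, sim in models.items():
    bias, r, rmse = metrics(sim, reference)
    if abs(bias) <= 0.10 and r >= 0.50 and rmse <= 0.60:
        selected.append(name)

print("models passing the thresholds:", selected)
if selected:
    ensemble = np.mean([models[m] for m in selected], axis=0)
    print("ensemble metrics:", [round(v, 2) for v in metrics(ensemble, reference)])
```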
Neutralizer Hollow Cathode Simulations and Comparisons with Ground Test Data
NASA Technical Reports Server (NTRS)
Mikellides, Ioannis G.; Snyder, John S.; Goebel, Dan M.; Katz, Ira; Herman, Daniel A.
2009-01-01
The fidelity of electric propulsion physics-based models depends largely on the validity of their predictions over a range of operating conditions and geometries. In general, increased complexity of the physics requires more extensive comparisons with laboratory data to identify the region(s) that lie outside the validity of the model assumptions and to quantify the uncertainties within its range of application. This paper presents numerical simulations of neutralizer hollow cathodes at various operating conditions and orifice sizes. The simulations were performed using a two-dimensional axisymmetric model that solves numerically a relatively extensive system of conservation laws for the partially ionized gas in these devices. A summary of the comparisons between simulation results and Langmuir probe measurements is provided. The model has also been employed to provide insight into recent ground test observations of the neutralizer cathode in NEXT. It is found that a likely cause of the observed keeper voltage drop is cathode orifice erosion. However, due to the small magnitude of this change, approximately 0.5 V (less than 5% of the beginning-of-life value) over 10 khrs, and in light of the large uncertainties in the cathode material sputtering yield at low ion energies, other causes cannot be excluded. Preliminary simulations to understand the transition to plume mode suggest that in the range of 3-5 sccm the existing 2-D model reproduces fairly well the rise of the keeper voltage in the NEXT neutralizer as observed in the laboratory. At lower flow rates the simulation produces oscillations in the keeper current and voltage that require prohibitively small time-steps to resolve with the existing algorithms.
The cost of getting CCS wrong: Uncertainty, infrastructure design, and stranded CO 2
Middleton, Richard Stephen; Yaw, Sean Patrick
2018-01-11
Carbon capture and storage (CCS) infrastructure will require industry—such as fossil-fuel power, ethanol production, and oil and gas extraction—to make massive investments in infrastructure. The cost of getting these investments wrong will be substantial and will impact the success of CCS technology. Multiple factors can and will impact the success of commercial-scale CCS, including significant uncertainties regarding capture, transport, and injection-storage decisions. Uncertainties throughout the CCS supply chain include policy, technology, engineering performance, economics, and market forces. In particular, large uncertainties exist for the injection and storage of CO 2. Even taking into account upfront investment in site characterization, the final performance of the storage phase is largely unknown until commercial-scale injection has started. We explore and quantify the impact of getting CCS infrastructure decisions wrong based on uncertain injection rates and uncertain CO 2 storage capacities using a case study managing CO 2 emissions from the Canadian oil sands industry in Alberta. We use SimCCS, a widely used CCS infrastructure design framework, to develop multiple CCS infrastructure scenarios. Each scenario consists of a CCS infrastructure network that connects CO 2 sources (oil sands extraction and processing) with CO 2 storage reservoirs (acid gas storage reservoirs) using a dedicated CO 2 pipeline network. Each scenario is analyzed under a range of uncertain storage estimates, and infrastructure performance is assessed and quantified in terms of the cost to build additional infrastructure to store all CO 2. We also include the role of stranded CO 2, CO 2 that a source was expecting to capture but cannot due to substandard performance in the transport and storage infrastructure. Results show that the costs of getting the original infrastructure design wrong are significant and that comprehensive planning will be required to ensure that CCS becomes a successful climate mitigation technology. Here, we show that the concept of stranded CO 2 can transform a seemingly high-performing infrastructure design into the worst case scenario.
NASA Astrophysics Data System (ADS)
Wu, Z. Y.; Zhang, L.; Wang, X. M.; Munger, J. W.
2015-07-01
Small pollutant concentration gradients between levels above a plant canopy result in large uncertainties in estimated air-surface exchange fluxes when using existing micrometeorological gradient methods, including the aerodynamic gradient method (AGM) and the modified Bowen ratio method (MBR). A modified micrometeorological gradient method (MGM) is proposed in this study for estimating O3 dry deposition fluxes over a forest canopy using concentration gradients between a level above and a level below the canopy top, taking advantage of the relatively large gradients between these levels due to significant pollutant uptake in the top layers of the canopy. The new method is compared with the AGM and MBR methods and is also evaluated using eddy-covariance (EC) flux measurements collected at the Harvard Forest Environmental Measurement Site, Massachusetts, during 1993-2000. All three gradient methods (AGM, MBR, and MGM) produced diurnal cycles of O3 dry deposition velocity (Vd(O3)) similar to the EC measurements, with the MGM method being the closest in magnitude to the EC measurements. The multi-year average Vd(O3) differed significantly between these methods, with the AGM, MBR, and MGM methods being 2.28, 1.45, and 1.18 times that of the EC, respectively. Sensitivity experiments identified several input parameters for the MGM method as first-order parameters that affect the estimated Vd(O3). A 10% uncertainty in the wind speed attenuation coefficient or canopy displacement height can cause about 10% uncertainty in the estimated Vd(O3). An unrealistic leaf area density vertical profile can cause an uncertainty of a factor of 2.0 in the estimated Vd(O3). Other input parameters or formulas for stability functions caused an uncertainty of only a few percent. The new method provides an alternative approach to monitoring/estimating long-term deposition fluxes of similar pollutants over tall canopies.
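For orientation, the sketch below shows the flux-gradient relation underlying the aerodynamic gradient method in its simplest, neutral-stability form; the stability-correction functions and the in-canopy modifications that define the MGM are not reproduced here. The heights, concentrations, displacement height, and friction velocity are illustrative assumptions, not Harvard Forest values.

```python
# Hedged sketch of the neutral-stability aerodynamic gradient method (AGM).
import numpy as np

KAPPA = 0.4          # von Karman constant

def agm_flux(c1, c2, z1, z2, u_star, d=0.0):
    """O3 flux (ug m-2 s-1) from concentrations c1, c2 (ug m-3) at heights
    z1 < z2 (m), friction velocity u_star (m s-1), displacement height d (m).
    Neutral-stability approximation; negative flux means deposition."""
    return -KAPPA * u_star * (c2 - c1) / np.log((z2 - d) / (z1 - d))

# Example: a small above-canopy gradient and the resulting deposition velocity
u_star, d = 0.5, 15.0
flux = agm_flux(c1=78.0, c2=80.0, z1=24.0, z2=29.0, u_star=u_star, d=d)
vd = -flux / 80.0                         # Vd = -F / C at the reference level
print(f"F = {flux:.2f} ug m-2 s-1, Vd = {100 * vd:.2f} cm s-1")
```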
Uncertainty representation of grey numbers and grey sets.
Yang, Yingjie; Liu, Sifeng; John, Robert
2014-09-01
In the literature, there is a presumption that a grey set and an interval-valued fuzzy set are equivalent. This presumption ignores the existence of discrete components in a grey number. In this paper, new measurements of uncertainties of grey numbers and grey sets, consisting of both absolute and relative uncertainties, are defined to give a comprehensive representation of uncertainties in a grey number and a grey set. Some simple examples are provided to illustrate that the proposed uncertainty measurement can give an effective representation of both absolute and relative uncertainties in a grey number and a grey set. The relationships between grey sets and interval-valued fuzzy sets are also analyzed from the point of view of the proposed uncertainty representation. The analysis demonstrates that grey sets and interval-valued fuzzy sets provide different but overlapping models for uncertainty representation in sets.
Safety Assessment for the Kozloduy National Disposal Facility in Bulgaria - 13507
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biurrun, E.; Haverkamp, B.; Lazaro, A.
2013-07-01
Due to the early decommissioning of four Water-Water Energy Reactor (WWER) 440-V230 reactors at the Nuclear Power Plant (NPP) near the city of Kozloduy in Bulgaria, large amounts of low and intermediate level radioactive waste will arise much earlier than initially scheduled. In order to manage the radioactive waste from the early decommissioning, Bulgaria has intensified its efforts to provide a near surface disposal facility at Radiana with the required capacity. To this end, a project was launched and assigned in international competition to a German-Spanish consortium to provide the complete technical planning, including the preparation of the Intermediate Safety Assessment Report. Preliminary results for operational and long-term safety show compliance with the Bulgarian regulatory requirements. The long-term calculations carried out for the Radiana site are also a good example of how analysis of safety assessment results can be used for iterative improvements of the assessment, by pointing out uncertainties and areas of future investigation to reduce such uncertainties with regard to the potential radiological impact. The computer model used to estimate the long-term evolution of the future repository at Radiana predicted a maximum total annual dose for members of the critical group, approximately 80% of which is carried by C-14 through a specific ingestion pathway. Based on this result and the outcome of the sensitivity analysis, existing uncertainties were evaluated and areas for reasonable future investigation to reduce these uncertainties were identified. (authors)
The Near-Earth Meteoroid Flux, Speed Distribution, and Uncertainty
NASA Technical Reports Server (NTRS)
Moorhead, Althea; Cooke, William J.; Brown, Peter G.; Campbell-Brown, Margaret; Moser, Danielle E.
2016-01-01
Meteoroids are known to pose a threat to spacecraft; they can puncture components, disturb spacecraft attitude, and possibly create secondary electrical effects. Accurate environment models are therefore critical for mitigating meteoroid-related risks. While there are several meteoroid environment models available for assessing spacecraft risk, the uncertainties associated with these models are not well understood. Because meteoroid properties are derived from indirect observations such as meteors and impact craters, the uncertainty in the meteoroid flux is potentially quite large. We combine existing meteoroid flux measurements with new radar and optical meteor data to improve our characterization of the meteoroid flux onto the Earth and its velocity distribution. We use data extracted from the NASA all-sky network, the Canadian Automated Meteor Observatory, and the Canadian Meteor Orbit Radar. We improve our characterization of the observed meteoroid speed distribution by incorporating modern descriptions of the ionization efficiency (e.g., Thomas et al., 2016). We also present estimates of the uncertainties associated with our meteoroid flux distribution. Finally, we discuss the implications for spacecraft. Our model is constrained by the cratering rate on the space-facing surface of LDEF, and thus the risk posed to spacecraft by meteoroid-induced physical damage is the least uncertain component of our model. Other sources of risk, however, may vary. For instance, a lower average meteoroid speed would require a higher meteoroid mass flux in order to match the LDEF crater counts, leading to higher predicted rates of attitude disturbances.
NASA Astrophysics Data System (ADS)
Newcomer, Adam
Increasing demand for electricity and an aging fleet of generators are the principal drivers behind an increasing need for a large amount of capital investments in the US electric power sector in the near term. The decisions (or lack thereof) by firms, regulators and policy makers in response to this challenge have long lasting consequences, incur large economic and environmental risks, and must be made despite large uncertainties about the future operating and business environment. Capital investment decisions are complex: rates of return are not guaranteed; significant uncertainties about future environmental legislation and regulations exist at both the state and national levels---particularly about carbon dioxide emissions; there is an increasing number of shareholder mandates requiring public utilities to reduce their exposure to potentially large losses from stricter environmental regulations; and there are significant concerns about electricity and fuel price levels, supplies, and security. Large scale, low carbon electricity generation facilities using coal, such as integrated gasification combined cycle (IGCC) facilities coupled with carbon capture and sequestration (CCS) technologies, have been technically proven but are unprofitable in the current regulatory and business environment where there is no explicit or implicit price on carbon dioxide emissions. The paper examines two separate scenarios that are actively discussed by policy and decision makers at corporate, state and national levels: a future US electricity system where coal plays a role; and one where the role of coal is limited or nonexistent. The thesis intends to provide guidance for firms and policy makers and outline applications and opportunities for public policies and for private investment decisions to limit financial risks of electricity generation capital investments under carbon constraints.
Quantification and propagation of disciplinary uncertainty via Bayesian statistics
NASA Astrophysics Data System (ADS)
Mantis, George Constantine
2002-08-01
Several needs exist in the military, commercial, and civil sectors for new hypersonic systems. These needs remain unfulfilled, due in part to the uncertainty encountered in designing these systems. This uncertainty takes a number of forms, including disciplinary uncertainty, that which is inherent in the analytical tools utilized during the design process. Yet, few efforts to date empower the designer with the means to account for this uncertainty within the disciplinary analyses. In the current state-of-the-art in design, the effects of this unquantifiable uncertainty significantly increase the risks associated with new design efforts. Typically, the risk proves too great to allow a given design to proceed beyond the conceptual stage. To that end, the research encompasses the formulation and validation of a new design method, a systematic process for probabilistically assessing the impact of disciplinary uncertainty. The method implements Bayesian Statistics theory to quantify this source of uncertainty, and propagate its effects to the vehicle system level. Comparison of analytical and physical data for existing systems, modeled a priori in the given analysis tools, leads to quantification of uncertainty in those tools' calculation of discipline-level metrics. Then, after exploration of the new vehicle's design space, the quantified uncertainty is propagated probabilistically through the design space. This ultimately results in the assessment of the impact of disciplinary uncertainty on the confidence in the design solution: the final shape and variability of the probability functions defining the vehicle's system-level metrics. Although motivated by the hypersonic regime, the proposed treatment of uncertainty applies to any class of aerospace vehicle, just as the problem itself affects the design process of any vehicle. A number of computer programs comprise the environment constructed for the implementation of this work. Application to a single-stage-to-orbit (SSTO) reusable launch vehicle concept, developed by the NASA Langley Research Center under the Space Launch Initiative, provides the validation case for this work, with the focus placed on economics, aerothermodynamics, propulsion, and structures metrics. (Abstract shortened by UMI.)
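The core idea (quantify a tool's disciplinary error against measured data for existing systems, then propagate that error probabilistically to a system-level metric) can be illustrated with a much-reduced sketch. A conjugate normal update on a multiplicative bias factor stands in for the full Bayesian treatment described in the thesis; the ratios, prior, and the cubic system-level sensitivity are invented placeholders.

```python
# Hedged sketch: Bayesian quantification of disciplinary model error and
# Monte Carlo propagation to a system-level metric.
import numpy as np

rng = np.random.default_rng(7)

# Ratios of measured to predicted values of a discipline-level metric
# (e.g. drag or Isp) for several existing, well-characterized vehicles
ratios = np.array([0.97, 1.03, 0.97, 1.05, 0.99, 1.04])

# Normal likelihood with known variance, normal prior on the mean bias
sigma2 = ratios.var(ddof=1)
prior_mean, prior_var = 1.0, 0.05**2
post_var = 1.0 / (1.0 / prior_var + len(ratios) / sigma2)
post_mean = post_var * (prior_mean / prior_var + ratios.sum() / sigma2)

# Propagate the uncertain bias through a placeholder system-level model
bias = rng.normal(post_mean, np.sqrt(post_var + sigma2), size=20000)
nominal_gross_mass = 1.0e6                     # arbitrary units
system_metric = nominal_gross_mass * bias**3   # stand-in nonlinear sensitivity
print(f"posterior bias: {post_mean:.3f} +/- {np.sqrt(post_var):.3f}")
print("system metric 90% interval:",
      np.percentile(system_metric, [5, 95]).round(0))
```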
Unexpectedly large impact of forest management and grazing on global vegetation biomass
NASA Astrophysics Data System (ADS)
Erb, Karl-Heinz; Kastner, Thomas; Plutzar, Christoph; Bais, Anna Liza S.; Carvalhais, Nuno; Fetzel, Tamara; Gingrich, Simone; Haberl, Helmut; Lauk, Christian; Niedertscheider, Maria; Pongratz, Julia; Thurner, Martin; Luyssaert, Sebastiaan
2018-01-01
Carbon stocks in vegetation have a key role in the climate system. However, the magnitude, patterns and uncertainties of carbon stocks and the effect of land use on the stocks remain poorly quantified. Here we show, using state-of-the-art datasets, that vegetation currently stores around 450 petagrams of carbon. In the hypothetical absence of land use, potential vegetation would store around 916 petagrams of carbon, under current climate conditions. This difference highlights the massive effect of land use on biomass stocks. Deforestation and other land-cover changes are responsible for 53-58% of the difference between current and potential biomass stocks. Land management effects (the biomass stock changes induced by land use within the same land cover) contribute 42-47%, but have been underestimated in the literature. Therefore, avoiding deforestation is necessary but not sufficient for mitigation of climate change. Our results imply that trade-offs exist between conserving carbon stocks on managed land and raising the contribution of biomass to raw material and energy supply for the mitigation of climate change. Efforts to raise biomass stocks are currently verifiable only in temperate forests, where their potential is limited. By contrast, large uncertainties hinder verification in the tropical forest, where the largest potential is located, pointing to challenges for the upcoming stocktaking exercises under the Paris agreement.
Yazdanparast, R; Zadeh, S Abdolhossein; Dadras, D; Azadeh, A
2018-06-01
Healthcare quality is affected by various factors, including trust. Patients' trust in healthcare providers is one of the most important factors for treatment outcomes. The presented study identifies the optimum mixture of patient demographic features with respect to trust in three large and busy medical centers in Tehran, Iran. The presented algorithm is composed of an adaptive neuro-fuzzy inference system and statistical methods, and is used to deal with data and environmental uncertainty. The required data are collected from three large hospitals using standard questionnaires. The reliability and validity of the collected data are evaluated using Cronbach's Alpha, factor analysis and statistical tests. The results of this study indicate that middle-aged patients with a low level of education and moderate illness severity, and young patients with a high level of education, moderate illness severity and moderate to weak financial status, have the highest trust in the considered medical centers. To the best of our knowledge, this is the first study that investigates patient demographic features using an adaptive neuro-fuzzy inference system in the healthcare sector. Second, it is a practical approach for continuous improvement of trust features in medical centers. Third, it deals with the existing uncertainty through the unique neuro-fuzzy approach. Copyright © 2018 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Jonathan H., E-mail: jonathan.h.m.davis@gmail.com
2015-03-01
Future multi-tonne Direct Detection experiments will be sensitive to solar neutrino induced nuclear recoils which form an irreducible background to light Dark Matter searches. Indeed for masses around 6 GeV the spectra of neutrinos and Dark Matter are so similar that experiments are said to run into a neutrino floor, for which sensitivity increases only marginally with exposure past a certain cross section. In this work we show that this floor can be overcome using the different annual modulation expected from solar neutrinos and Dark Matter. Specifically, for cross sections below the neutrino floor the DM signal is observable through a phase shift and a smaller amplitude of the time-dependent event rate. This allows the exclusion power to be improved by up to an order of magnitude for large exposures. In addition we demonstrate that, using only spectral information, the neutrino floor exists over a wider mass range than has been previously shown, since the large uncertainties in the Dark Matter velocity distribution make the signal spectrum harder to distinguish from the neutrino background. However, for most velocity distributions it can still be surpassed using timing information, and so the neutrino floor is not an absolute limit on the sensitivity of Direct Detection experiments.
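The timing argument rests on the two signals having different annual phases. The rough illustration below (not the paper's calculation) encodes this: the solar-neutrino rate follows the 1/r^2 Earth-Sun distance variation, peaking near perihelion in early January, while a standard-halo dark-matter rate peaks near the start of June. The absolute rates and the 5% dark-matter modulation amplitude are placeholder assumptions; only the phases and functional forms reflect the underlying physics.

```python
# Hedged sketch of the phase difference between solar-neutrino and DM modulation.
import numpy as np

t = np.linspace(0.0, 365.0, 366)          # day of year
YEAR = 365.25

# Solar neutrinos: ~3.3% peak-to-trough modulation from orbital eccentricity
r_nu = 1.0 * (1.0 + 2 * 0.0167 * np.cos(2 * np.pi * (t - 3.0) / YEAR))

# Dark matter (standard halo): few-percent modulation peaking around June 2
r_dm = 0.2 * (1.0 + 0.05 * np.cos(2 * np.pi * (t - 152.0) / YEAR))

total = r_nu + r_dm
print("day of maximum, neutrinos only :", int(t[np.argmax(r_nu)]))
print("day of maximum, neutrinos + DM :", int(t[np.argmax(total)]))
```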
NASA Astrophysics Data System (ADS)
Li, Zhongshu; Liu, Junfeng; Mauzerall, Denise L.; Li, Xiaoyuan; Fan, Songmiao; Horowitz, Larry W.; He, Cenlin; Yi, Kan; Tao, Shu
2017-03-01
Black carbon (BC) aerosol strongly absorbs solar radiation, which warms climate. However, accurate estimation of BC's climate effect is limited by uncertainties in its spatiotemporal distribution, especially over remote oceanic areas. The HIAPER Pole-to-Pole Observation (HIPPO) program from 2009 to 2011 intercepted multiple snapshots of BC profiles over the Pacific in various seasons, and revealed an overestimate of BC by current global models by a factor of 2 to 5. In this study, we compared the measurements from aircraft campaigns and satellites, and found a robust association between BC concentrations and satellite-retrieved CO, tropospheric NO2, and aerosol optical depth (AOD) (R2 > 0.8). This establishes a basis for constructing a satellite-based column BC approximation (sBC*) over remote oceans. The inferred sBC* shows that Asian outflows in spring bring much more BC aerosol to the mid-Pacific than occurs in other seasons. In addition, inter-annual variability of sBC* is seen over the Northern Pacific, with abundances varying consistently with the springtime Pacific/North American (PNA) index. Our sBC* dataset implies a widespread overestimation of BC loadings and BC direct radiative forcing by current models over the North Pacific, which further suggests that large uncertainties exist in aerosol-climate interactions over other remote oceanic areas beyond the Pacific.
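A reduced version of the sBC* construction is a regression of aircraft-measured BC columns on co-located satellite CO, NO2, and AOD retrievals. The sketch below uses a log-linear least-squares fit on synthetic data; the functional form, units, and numbers are assumptions for illustration, not the paper's regression.

```python
# Hedged sketch: fit log(BC) on log(CO), log(NO2), log(AOD) and report R^2.
import numpy as np

rng = np.random.default_rng(8)
n = 400

co  = rng.lognormal(np.log(80.0), 0.3, n)      # ppbv
no2 = rng.lognormal(np.log(0.5), 0.5, n)       # 1e15 molec cm-2
aod = rng.lognormal(np.log(0.12), 0.4, n)
bc  = 0.02 * co**0.8 * no2**0.3 * aod**0.5 * rng.lognormal(0.0, 0.2, n)

X = np.column_stack([np.ones(n), np.log(co), np.log(no2), np.log(aod)])
coef, *_ = np.linalg.lstsq(X, np.log(bc), rcond=None)

resid = np.log(bc) - X @ coef
ss_res = np.sum(resid**2)
ss_tot = np.sum((np.log(bc) - np.log(bc).mean())**2)
print("coefficients:", coef.round(3))
print(f"R^2 in log space: {1 - ss_res / ss_tot:.2f}")
```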
GCOS reference upper air network (GRUAN): Steps towards assuring future climate records
NASA Astrophysics Data System (ADS)
Thorne, P. W.; Vömel, H.; Bodeker, G.; Sommer, M.; Apituley, A.; Berger, F.; Bojinski, S.; Braathen, G.; Calpini, B.; Demoz, B.; Diamond, H. J.; Dykema, J.; Fassò, A.; Fujiwara, M.; Gardiner, T.; Hurst, D.; Leblanc, T.; Madonna, F.; Merlone, A.; Mikalsen, A.; Miller, C. D.; Reale, T.; Rannat, K.; Richter, C.; Seidel, D. J.; Shiotani, M.; Sisterson, D.; Tan, D. G. H.; Vose, R. S.; Voyles, J.; Wang, J.; Whiteman, D. N.; Williams, S.
2013-09-01
The observational climate record is a cornerstone of our scientific understanding of climate changes and their potential causes. Existing observing networks have been designed largely in support of operational weather forecasting and continue to be run in this mode. Coverage and timeliness are often higher priorities than absolute traceability and accuracy. Changes in the instrumentation used in the observing system, as well as in operating procedures, are frequent, rarely adequately documented, and their impacts poorly quantified. For monitoring changes in upper-air climate, which is achieved through in-situ soundings and more recently satellites and ground-based remote sensing, the net result has been trend uncertainties as large as, or larger than, the expected emergent signals of climate change. This is more than simply academic: the tropospheric temperature trends issue has been the subject of intense debate, two international assessment reports and several US congressional hearings. For more than a decade the international climate science community has been calling for the instigation of a network of reference quality measurements to reduce uncertainty in our climate monitoring capabilities. This paper provides a brief history of GRUAN developments to date and outlines future plans. Such reference networks can only be achieved and maintained with strong continuing input from the global metrological community.
Unexpectedly large impact of forest management and grazing on global vegetation biomass
Erb, K.-H.; Bais, A.L.S.; Carvalhais, N.; Fetzel, T.; Gingrich, S.; Haberl, H.; Lauk, C.; Niedertscheider, M.; Pongratz, J.; Thurner, M.; Luyssaert, S.
2017-01-01
Carbon stocks in vegetation play a key role in the climate system, but their magnitude and patterns, their uncertainties, and the impact of land use on them remain poorly quantified. Based on a consistent integration of state-of-the-art datasets, we show that vegetation currently stores ~450 PgC. In the hypothetical absence of land use, potential vegetation would store ~916 PgC under current climate. This difference singles out the massive effect land use has on biomass stocks. Deforestation and other land-cover changes are responsible for 53-58% of the difference between current and potential biomass stocks. Land management effects, i.e. land-use induced biomass stock changes within the same land cover, contribute 42-47% but are underappreciated in the current literature. Avoiding deforestation hence is necessary but not sufficient for climate-change mitigation. Our results imply that trade-offs exist between conserving carbon stocks on managed land and raising the contribution of biomass to raw material and energy supply for climate change mitigation. Efforts to raise biomass stocks are currently only verifiable in temperate forests, where potentials are limited. In contrast, large uncertainties hamper verification in the tropical forest where the largest potentials are located, pointing to challenges for the upcoming stocktaking exercises under the Paris agreement. PMID:29258288
Chasing Perfection: Should We Reduce Model Uncertainty in Carbon Cycle-Climate Feedbacks
NASA Astrophysics Data System (ADS)
Bonan, G. B.; Lombardozzi, D.; Wieder, W. R.; Lindsay, K. T.; Thomas, R. Q.
2015-12-01
Earth system model simulations of the terrestrial carbon (C) cycle show large multi-model spread in the carbon-concentration and carbon-climate feedback parameters. Large differences among models are also seen in their simulation of global vegetation and soil C stocks and other aspects of the C cycle, prompting concern about model uncertainty and our ability to faithfully represent fundamental aspects of the terrestrial C cycle in Earth system models. Benchmarking analyses that compare model simulations with common datasets have been proposed as a means to assess model fidelity with observations, and various model-data fusion techniques have been used to reduce model biases. While such efforts will reduce multi-model spread, they may not help reduce uncertainty (and increase confidence) in projections of the C cycle over the twenty-first century. Many ecological and biogeochemical processes represented in Earth system models are poorly understood at both the site scale and across large regions, where biotic and edaphic heterogeneity are important. Our experience with the Community Land Model (CLM) suggests that large uncertainty in the terrestrial C cycle and its feedback with climate change is an inherent property of biological systems. The challenge of representing life in Earth system models, with the rich diversity of lifeforms and complexity of biological systems, may necessitate a multitude of modeling approaches to capture the range of possible outcomes. Such models should encompass a range of plausible model structures. We distinguish between model parameter uncertainty and model structural uncertainty. Focusing on improved parameter estimates may, in fact, limit progress in assessing model structural uncertainty associated with realistically representing biological processes. Moreover, higher confidence may be achieved through better process representation, but this does not necessarily reduce uncertainty.
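For concreteness, the carbon-concentration and carbon-climate feedback parameters mentioned above are commonly diagnosed with a linear decomposition of the land carbon change into a CO2-driven term and a climate-driven term, using biogeochemically coupled and radiatively coupled idealized runs. The sketch below follows that common framework under simplifying assumptions; the numbers are invented placeholders and the small cross-terms are neglected.

```python
# Hedged sketch: dC_land ~ beta * dCO2 + gamma * dT, diagnosed from two runs.
d_co2 = 560.0                    # ppm CO2 increase over the idealized experiment

# Biogeochemically coupled run: vegetation sees rising CO2, climate held fixed
d_cland_bgc = 480.0              # PgC change in land carbon
# Radiatively coupled run: climate warms, vegetation sees fixed CO2
d_cland_rad, d_t_rad = -95.0, 4.1   # PgC, K

beta = d_cland_bgc / d_co2       # carbon-concentration feedback, PgC per ppm
gamma = d_cland_rad / d_t_rad    # carbon-climate feedback, PgC per K

print(f"beta  = {beta:.2f} PgC/ppm")
print(f"gamma = {gamma:.2f} PgC/K")
```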
Quantifying uncertainty in NDSHA estimates due to earthquake catalogue
NASA Astrophysics Data System (ADS)
Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano
2014-05-01
The procedure for neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPEs), which may be unable to account for the complexity of the product of the seismic source tensor and the medium Green function, and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined by considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not treated statistically as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values for each model; instead, uncertainties are treated through sensitivity analyses for key modelling parameters. Fixing the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate of the ground motion error is therefore a factor of 2, intrinsic to the MCS scale. We tested this hypothesis by analysing the uncertainty in ground motion maps due to random catalogue errors in magnitude and localization.
An objective Bayesian analysis of a crossover design via model selection and model averaging.
Li, Dandan; Sivaganesan, Siva
2016-11-10
Inference about the treatment effect in a crossover design has received much attention over time owing to the uncertainty in the existence of the carryover effect and its impact on the estimation of the treatment effect. Adding to this uncertainty is that the existence of the carryover effect and its size may depend on the presence of the treatment effect and its size. We consider estimation and hypothesis testing about the treatment effect in a two-period crossover design, assuming a normally distributed response variable, and use an objective Bayesian approach to test the hypothesis about the treatment effect and to estimate its size when it exists, while accounting for the uncertainty about the presence of the carryover effect as well as the treatment and period effects. We evaluate and compare the performance of the proposed approach with a standard frequentist approach using simulated data and real data. Copyright © 2016 John Wiley & Sons, Ltd.
The impact of land use on estimates of pesticide leaching potential: Assessments and uncertainties
NASA Astrophysics Data System (ADS)
Loague, Keith
1991-11-01
This paper illustrates the magnitude of uncertainty which can exist for pesticide leaching assessments, due to data uncertainties, both between soil orders and within a single soil order. The current work differs from previous efforts because the impact of uncertainty in recharge estimates is considered. The examples are for diuron leaching in the Pearl Harbor Basin. The results clearly indicate that land use has a significant impact on both estimates of pesticide leaching potential and the uncertainties associated with those estimates. It appears that the regulation of agricultural chemicals in the future should include consideration for changing land use.
2016-04-30
determining the optimal design requirements of a new system, which will operate along with other existing systems to provide a set of overarching ... passenger airline transportation (Mane et al., 2007; Govindaraju et al., 2015). Uncertainty in Fleet Operations: The uncertainty associated with the ... demand can provide the basis for a commercial passenger airline problem. The operations of the commercial air travel industry differ from military ...
Rating curve uncertainty: A comparison of estimation methods
Mason, Jr., Robert R.; Kiang, Julie E.; Cohn, Timothy A.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan
2016-01-01
The USGS is engaged in both internal development and collaborative efforts to evaluate existing methods for characterizing the uncertainty of streamflow measurements (gaugings), stage-discharge relations (ratings), and, ultimately, the streamflow records derived from them. This paper provides a brief overview of two candidate methods that may be used to characterize the uncertainty of ratings, and illustrates the results of their application to the ratings of two USGS streamgages.
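As background for how rating-curve uncertainty can be characterized in general, the sketch below fits a power-law rating Q = a(h - h0)^b to synthetic gaugings and bootstraps the gaugings to obtain an uncertainty band. This is a generic illustration, not either of the two candidate methods evaluated by the USGS; the parameter values and scatter are assumptions.

```python
# Hedged sketch: power-law rating fit with a bootstrap uncertainty band.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(9)

def rating(h, a, h0, b):
    return a * np.clip(h - h0, 1e-6, None) ** b

# Synthetic gaugings: stage h (m) and measured discharge Q (m3/s) with scatter
h = rng.uniform(0.5, 3.0, 40)
q = rating(h, 12.0, 0.2, 1.7) * rng.lognormal(0.0, 0.08, h.size)

p0 = (10.0, 0.1, 1.5)
bounds = ([0.1, -1.0, 0.5], [100.0, 0.45, 3.0])
popt, _ = curve_fit(rating, h, q, p0=p0, bounds=bounds)

h_grid = np.linspace(0.6, 3.0, 50)
boot = np.empty((500, h_grid.size))
for i in range(boot.shape[0]):
    idx = rng.integers(0, h.size, h.size)          # resample the gaugings
    p, _ = curve_fit(rating, h[idx], q[idx], p0=p0, bounds=bounds)
    boot[i] = rating(h_grid, *p)

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"95% rating uncertainty at h = 2.0 m: "
      f"{np.interp(2.0, h_grid, lo):.1f} to {np.interp(2.0, h_grid, hi):.1f} m3/s")
```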
Exact Algorithms for Duplication-Transfer-Loss Reconciliation with Non-Binary Gene Trees.
Kordi, Misagh; Bansal, Mukul S
2017-06-01
Duplication-Transfer-Loss (DTL) reconciliation is a powerful method for studying gene family evolution in the presence of horizontal gene transfer. DTL reconciliation seeks to reconcile gene trees with species trees by postulating speciation, duplication, transfer, and loss events. Efficient algorithms exist for finding optimal DTL reconciliations when the gene tree is binary. In practice, however, gene trees are often non-binary due to uncertainty in the gene tree topologies, and DTL reconciliation with non-binary gene trees is known to be NP-hard. In this paper, we present the first exact algorithms for DTL reconciliation with non-binary gene trees. Specifically, we (i) show that the DTL reconciliation problem for non-binary gene trees is fixed-parameter tractable in the maximum degree of the gene tree, (ii) present an exponential-time, but in-practice efficient, algorithm to track and enumerate all optimal binary resolutions of a non-binary input gene tree, and (iii) apply our algorithms to a large empirical data set of over 4700 gene trees from 100 species to study the impact of gene tree uncertainty on DTL-reconciliation and to demonstrate the applicability and utility of our algorithms. The new techniques and algorithms introduced in this paper will help biologists avoid incorrect evolutionary inferences caused by gene tree uncertainty.
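The combinatorial core of item (ii) above is enumerating the binary resolutions of a non-binary (polytomous) gene-tree node: a polytomy with k children has (2k-3)!! rooted binary resolutions, which is why exhaustive approaches are only feasible for bounded maximum degree. The sketch below enumerates those resolutions with a simple nested-tuple tree representation; it is an illustration of the combinatorics, not the authors' algorithm or data structures.

```python
# Hedged sketch: enumerate all rooted binary resolutions of one polytomy.
def binary_resolutions(children):
    """Yield every rooted binary tree over the given child subtrees."""
    children = list(children)
    if len(children) <= 2:
        yield tuple(children) if len(children) == 2 else children[0]
        return
    first, rest = children[0], children[1:]
    for tree in binary_resolutions(rest):
        yield from insert_leaf(tree, first)

def insert_leaf(tree, leaf):
    """Yield all trees obtained by attaching `leaf` on each edge of `tree`
    (including above the root)."""
    yield (leaf, tree)
    if isinstance(tree, tuple):
        left, right = tree
        for t in insert_leaf(left, leaf):
            yield (t, right)
        for t in insert_leaf(right, leaf):
            yield (left, t)

def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

kids = ["A", "B", "C", "D"]
trees = list(binary_resolutions(kids))
print(f"{len(trees)} resolutions of a degree-{len(kids)} polytomy "
      f"(expected (2k-3)!! = {double_factorial(2 * len(kids) - 3)})")
```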
Environmental trade-offs of tunnels vs cut-and-cover subways
Walton, M.
1978-01-01
Heavy construction projects in cities entail two kinds of cost - internal cost, which can be defined in terms of payments from one set of parties to another, and external cost, which is the cost borne by the community at large as the result of disutilities entailed in construction and operation. Environmental trade-offs involve external costs, which are commonly difficult to measure. Cut-and-cover subway construction probably entails higher external and internal cost than deep tunnel construction in many urban geological environments, but uncertainty concerning the costs and environmental trade-offs of tunneling leads to limited and timid use of tunneling by American designers. Thus uncertainty becomes a major trade-off which works against tunneling. The reverse is true in Sweden after nearly 30 years of subway construction. Econometric methods for measuring external costs exist in principle, but are limited in application. Economic theory based on market pressure does not address the real problem of urban environmental trade-offs. Nevertheless, the problem of uncertainty can be addressed by comparative studies of estimated and as-built costs of cut-and-cover vs tunnel projects and a review of environmental issues associated with such construction. Such a study would benefit the underground construction industry and the design of transportation systems. It would also help solve an aspect of the urban problem. © 1978.
An improved non-Markovian degradation model with long-term dependency and item-to-item uncertainty
NASA Astrophysics Data System (ADS)
Xi, Xiaopeng; Chen, Maoyin; Zhang, Hanwen; Zhou, Donghua
2018-05-01
It is widely noted in the literature that degradation is usually simplified into a memoryless Markovian process for the purpose of predicting the remaining useful life (RUL). However, long-term dependency actually exists in the degradation processes of some industrial systems, including electromechanical equipment, oil tankers, and large blast furnaces. This implies that the new degradation state depends not only on the current state, but also on the historical states. Such dynamic systems cannot be accurately described by traditional Markovian models. Here we present an improved non-Markovian degradation model with both long-term dependency and item-to-item uncertainty. As a typical non-stationary process with dependent increments, fractional Brownian motion (FBM) is utilized to simulate the fractal diffusion of practical degradations. The uncertainty among multiple items is represented by a random variable of the drift. Based on this model, the unknown parameters are estimated through the maximum likelihood (ML) algorithm, while a closed-form solution to the RUL distribution is further derived using a weak convergence theorem. The practicability of the proposed model is fully verified by two real-world examples. The results demonstrate that the proposed method can effectively reduce the prediction error.
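To make the modelling idea above concrete, the sketch below simulates FBM-driven degradation paths with an item-to-item random drift and reads off first-passage times as a proxy for the RUL. It is a minimal illustration only: the Hurst exponent, drift statistics, diffusion coefficient, and failure threshold are assumed values, and the Cholesky-based noise generator is a generic choice, not the estimation scheme used by the authors.

```python
import numpy as np

def fgn_cov(n, hurst):
    """Covariance matrix of unit-spaced fractional Gaussian noise."""
    k = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return 0.5 * ((k + 1.0)**(2*hurst) - 2.0*k**(2*hurst) + np.abs(k - 1.0)**(2*hurst))

def simulate_fbm_degradation(n_items=200, n_steps=300, dt=1.0,
                             hurst=0.7, sigma=0.05,
                             drift_mean=0.02, drift_sd=0.005,
                             threshold=5.0, seed=0):
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(fgn_cov(n_steps, hurst) + 1e-12*np.eye(n_steps))
    rul = np.full(n_items, np.nan)
    for i in range(n_items):
        drift = rng.normal(drift_mean, drift_sd)      # item-to-item uncertainty
        fgn = L @ rng.standard_normal(n_steps)        # long-range dependent noise
        path = drift*dt*np.arange(1, n_steps+1) + sigma*(dt**hurst)*np.cumsum(fgn)
        hits = np.nonzero(path >= threshold)[0]
        if hits.size:
            rul[i] = hits[0]*dt                       # first-passage time ~ RUL
    return rul

rul = simulate_fbm_degradation()
print("median simulated RUL:", np.nanmedian(rul))
```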
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, K.; Itow, Y.; Rott, C., E-mail: koun@stelab.nagoya-u.ac.jp, E-mail: rott@skku.edu, E-mail: itow@stelab.nagoya-u.ac.jp
Dark matter could be captured in the Sun and self-annihilate, giving rise to an observable neutrino flux. Indirect searches for dark matter looking for this signal with neutrino telescopes have resulted in tight constraints on the interaction cross-section of dark matter with ordinary matter. We investigate how robust these limits are against astrophysical uncertainties. We study the effect of the velocity distribution of dark matter in our Galaxy on capture rates in the Sun. We investigate four sources of uncertainties: the orbital speed of the Sun, the escape velocity of dark matter from the halo, dark matter velocity distribution functions, and the existence of a dark disc. We find that even extreme cases currently discussed do not decrease the sensitivity of indirect detection significantly, because the capture rate integrates over a broad range of the velocity distribution. The effect of the uncertainty in the high-velocity tail of the dark matter halo is very marginal, as the capture process is rather inefficient in this region. The difference in capture rate in the Sun for various scenarios is compared to the expected change in event rates for direct detection. The possibility of a co-rotating structure with the Sun can largely boost the signal and hence makes the interpretation of indirect detection conservative compared to direct detection.
Space radiation risk limits and Earth-Moon-Mars environmental models
NASA Astrophysics Data System (ADS)
Cucinotta, Francis A.; Hu, Shaowen; Schwadron, Nathan A.; Kozarev, K.; Townsend, Lawrence W.; Kim, Myung-Hee Y.
2010-12-01
We review NASA's short-term and career radiation limits for astronauts and methods for their application to future exploration missions outside of low Earth orbit. Career limits are intended to restrict late occurring health effects and include a 3% risk of exposure-induced death from cancer and new limits for central nervous system and heart disease risks. Short-term dose limits are used to prevent in-flight radiation sickness or death through restriction of the doses to the blood forming organs and to prevent clinically significant cataracts or skin damage through lens and skin dose limits, respectively. Large uncertainties exist in estimating the health risks of space radiation, chiefly the understanding of the radiobiology of heavy ions and dose rate and dose protraction effects, and the limitations in human epidemiology data. To protect against these uncertainties NASA estimates the 95% confidence in the cancer risk projection intervals as part of astronaut flight readiness assessments and mission design. Accurate organ dose and particle spectra models are needed to ensure astronauts stay below radiation limits and to support the goal of narrowing the uncertainties in risk projections. Methodologies for evaluation of space environments, radiation quality, and organ doses to evaluate limits are discussed, and current projections for lunar and Mars missions are described.
NASA Astrophysics Data System (ADS)
Di Vittorio, A. V.; Mao, J.; Shi, X.; Chini, L.; Hurtt, G.; Collins, W. D.
2018-01-01
Previous studies have examined land use change as a driver of global change, but the translation of land use change into land cover conversion has been largely unconstrained. Here we quantify the effects of land cover conversion uncertainty on the global carbon and climate system using the integrated Earth System Model. Our experiments use identical land use change data and vary land cover conversions to quantify associated uncertainty in carbon and climate estimates. Land cover conversion uncertainty is large, constitutes a 5 ppmv range in estimated atmospheric CO2 in 2004, and generates carbon uncertainty that is equivalent to 80% of the net effects of CO2 and climate and 124% of the effects of nitrogen deposition during 1850-2004. Additionally, land cover uncertainty generates differences in local surface temperature of over 1°C. We conclude that future studies addressing land use, carbon, and climate need to constrain and reduce land cover conversion uncertainties.
Are You Sure? The Role of Uncertainty in Career
ERIC Educational Resources Information Center
Trevor-Roberts, Edwin
2006-01-01
Although uncertainty is a fundamental human experience, professionals in the career field have largely overlooked the role that it plays in people's careers. The changed nature of careers has resulted in people experiencing increased uncertainty in their career that is beyond the uncertainty experienced in their job. The author explores the role…
Uncertainty quantification in volumetric Particle Image Velocimetry
NASA Astrophysics Data System (ADS)
Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos
2016-11-01
Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV. However, no framework exists for three-dimensional volumetric PIV. In volumetric PIV the measurement uncertainty is a function of reconstructed three-dimensional particle location that in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position which is a function of particle detection and mapping function uncertainty. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty that is estimated as an extension of the 2D PIV uncertainty framework. Finally the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases the variation of estimated uncertainty with the elemental volumetric PIV error sources are also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.
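As a schematic illustration of the propagation step described above (not the authors' framework itself), the snippet below combines assumed elemental displacement uncertainties in quadrature and propagates them to velocity. All numerical values, the voxel-to-metre scaling, and the assumption of independent error sources are hypothetical.

```python
import numpy as np

# Illustrative (hypothetical) elemental standard uncertainties, in voxels
sigma_calibration = 0.10    # mapping-function / self-calibration contribution
sigma_triangulation = 0.15  # reconstructed particle position contribution
sigma_correlation = 0.08    # 3D cross-correlation displacement contribution

dt = 1.0e-3                 # s, laser pulse separation (assumed)
voxel_to_m = 40e-6          # m per voxel (assumed magnification)

# Root-sum-square combination of independent displacement uncertainties
sigma_dx = np.sqrt(sigma_calibration**2 + sigma_triangulation**2 + sigma_correlation**2)

# Propagate to velocity: u = dx / dt  ->  sigma_u = sigma_dx / dt
sigma_u = sigma_dx * voxel_to_m / dt
print(f"combined displacement uncertainty: {sigma_dx:.3f} voxels")
print(f"velocity standard uncertainty: {sigma_u:.4f} m/s")
```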
Meija, Juris; Chartrand, Michelle M G
2018-01-01
Isotope delta measurements are normalized against international reference standards. Although multi-point normalization is becoming a standard practice, the existing uncertainty evaluation practices are either undocumented or incomplete. For multi-point normalization, we present errors-in-variables regression models for explicit accounting of the measurement uncertainty of the international standards along with the uncertainty that is attributed to their assigned values. This manuscript presents a framework to account for the uncertainty that arises due to a small number of replicate measurements and discusses multi-laboratory data reduction while accounting for inevitable correlations between the laboratories due to the use of identical reference materials for calibration. Both frequentist and Bayesian methods of uncertainty analysis are discussed.
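A rough feel for why both the assigned values and the measured values of the standards contribute to the final uncertainty can be had from a Monte Carlo version of simple two-point normalization. This is not the errors-in-variables treatment of the paper; all delta values and uncertainties below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-point normalization data (values in permil)
certified = np.array([-29.85, 37.63])     # assigned values of two reference standards
u_certified = np.array([0.05, 0.10])      # their standard uncertainties
measured = np.array([-30.10, 37.20])      # mean measured deltas of the standards
u_measured = np.array([0.08, 0.08])       # repeatability of those measurements

sample_measured, u_sample = 5.40, 0.08    # unknown sample

n = 20000
norm_sample = np.empty(n)
for i in range(n):
    c = certified + rng.normal(0, u_certified)    # perturb assigned values
    m = measured + rng.normal(0, u_measured)      # perturb measured standards
    slope = (c[1] - c[0]) / (m[1] - m[0])         # two-point normalization line
    intercept = c[0] - slope * m[0]
    x = sample_measured + rng.normal(0, u_sample)
    norm_sample[i] = slope * x + intercept

print(f"normalized delta = {norm_sample.mean():.2f} +/- {norm_sample.std(ddof=1):.2f} permil")
```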
NASA Astrophysics Data System (ADS)
Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare
In the last few years, the use of mathematical models of WasteWater Treatment Plant (WWTP) processes has become a common way to predict WWTP behaviour. However, mathematical models generally demand advanced input for their implementation that must be evaluated by an extensive data-gathering campaign, which cannot always be carried out. This fact, together with the intrinsic complexity of the model structure, leads to model results that may be very uncertain. Quantification of the uncertainty is imperative. However, despite the importance of uncertainty quantification, only few studies have been carried out in the wastewater treatment field, and those studies only included a few of the sources of model uncertainty. To help develop this area, the paper presents the uncertainty assessment of a mathematical model simulating biological nitrogen and phosphorus removal. The uncertainty assessment was conducted according to the Generalised Likelihood Uncertainty Estimation (GLUE) methodology, which has scarcely been applied in the wastewater field. The model was based on activated-sludge models 1 (ASM1) and 2 (ASM2). Different approaches can be used for uncertainty analysis. The GLUE methodology requires a large number of Monte Carlo simulations in which a random sampling of individual parameters drawn from probability distributions is used to determine a set of parameter values. Using this approach, model reliability was evaluated based on its capacity to globally limit the uncertainty. The method was applied to a large full-scale WWTP for which quantity and quality data were gathered. The analysis provided useful insights for WWTP modelling, identifying the crucial aspects where the uncertainty is highest and where, therefore, more effort should be devoted to both data gathering and modelling practice.
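The GLUE recipe referred to above follows a simple pattern: sample parameters from priors, score each run with an informal likelihood, keep the "behavioural" runs above a threshold, and form prediction bounds from them. The sketch below illustrates that pattern on a toy one-output model; the decay model, prior ranges, likelihood choice, and threshold are all assumptions, not the ASM-based setup of the paper.

```python
import numpy as np

def model(params, t):
    """Toy first-order decay model standing in for the ASM-based WWTP simulator."""
    a, b = params
    return a * np.exp(-b * t)

rng = np.random.default_rng(1)
t_obs = np.linspace(0, 10, 25)
obs = model((5.0, 0.3), t_obs) + rng.normal(0, 0.2, t_obs.size)   # synthetic observations

n_sim = 20000
a = rng.uniform(1.0, 10.0, n_sim)        # prior ranges for the uncertain parameters
b = rng.uniform(0.05, 1.0, n_sim)

# Informal likelihood (Nash-Sutcliffe efficiency) for each Monte Carlo run
sse = np.array([np.sum((model((ai, bi), t_obs) - obs) ** 2) for ai, bi in zip(a, b)])
nse = 1.0 - sse / np.sum((obs - obs.mean()) ** 2)

behavioural = nse > 0.7                  # subjective acceptance threshold
preds = np.array([model((ai, bi), t_obs)
                  for ai, bi in zip(a[behavioural], b[behavioural])])

lower = np.percentile(preds, 5, axis=0)  # simple 5-95% prediction band
upper = np.percentile(preds, 95, axis=0)
print(f"behavioural runs: {behavioural.sum()} of {n_sim}")
print(f"95% band width at t=0: {upper[0] - lower[0]:.2f}")
```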
Uncertainties in Past and Future Global Water Availability
NASA Astrophysics Data System (ADS)
Sheffield, J.; Kam, J.
2014-12-01
Understanding how water availability changes on inter-annual to decadal time scales and how it may change in the future under climate change are a key part of understanding future stresses on water and food security. Historic evaluations of water availability on regional to global scales are generally based on large-scale model simulations with their associated uncertainties, in particular for long-term changes. Uncertainties are due to model errors and missing processes, parameter uncertainty, and errors in meteorological forcing data. Recent multi-model inter-comparisons and impact studies have highlighted large differences for past reconstructions, due to different simplifying assumptions in the models or the inclusion of physical processes such as CO2 fertilization. Modeling of direct anthropogenic factors such as water and land management also carry large uncertainties in their physical representation and from lack of socio-economic data. Furthermore, there is little understanding of the impact of uncertainties in the meteorological forcings that underpin these historic simulations. Similarly, future changes in water availability are highly uncertain due to climate model diversity, natural variability and scenario uncertainty, each of which dominates at different time scales. In particular, natural climate variability is expected to dominate any externally forced signal over the next several decades. We present results from multi-land surface model simulations of the historic global availability of water in the context of natural variability (droughts) and long-term changes (drying). The simulations take into account the impact of uncertainties in the meteorological forcings and the incorporation of water management in the form of reservoirs and irrigation. The results indicate that model uncertainty is important for short-term drought events, and forcing uncertainty is particularly important for long-term changes, especially uncertainty in precipitation due to reduced gauge density in recent years. We also discuss uncertainties in future projections from these models as driven by bias-corrected and downscaled CMIP5 climate projections, in the context of the balance between climate model robustness and climate model diversity.
NASA Astrophysics Data System (ADS)
Foster, L. K.; Clark, B. R.; Duncan, L. L.; Tebo, D. T.; White, J.
2017-12-01
Several historical groundwater models exist within the Coastal Lowlands Aquifer System (CLAS), which spans the Gulf Coastal Plain in Texas, Louisiana, Mississippi, Alabama, and Florida. The largest of these models, called the Gulf Coast Regional Aquifer System Analysis (RASA) model, has been brought into a new framework using the Newton formulation for MODFLOW-2005 (MODFLOW-NWT) and serves as the starting point of a new investigation underway by the U.S. Geological Survey to improve understanding of the CLAS and provide predictions of future groundwater availability within an uncertainty quantification (UQ) framework. The use of an UQ framework will not only provide estimates of water-level observation worth, hydraulic parameter uncertainty, boundary-condition uncertainty, and uncertainty of future potential predictions, but it will also guide the model development process. Traditionally, model development proceeds from dataset construction to the process of deterministic history matching, followed by deterministic predictions using the model. This investigation will combine the use of UQ with existing historical models of the study area to assess in a quantitative framework the effect model package and property improvements have on the ability to represent past-system states, as well as the effect on the model's ability to make certain predictions of water levels, water budgets, and base-flow estimates. Estimates of hydraulic property information and boundary conditions from the existing models and literature, forming the prior, will be used to make initial estimates of model forecasts and their corresponding uncertainty, along with an uncalibrated groundwater model run within an unconstrained Monte Carlo analysis. First-Order Second-Moment (FOSM) analysis will also be used to investigate parameter and predictive uncertainty, and guide next steps in model development prior to rigorous history matching by using PEST++ parameter estimation code.
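First-Order Second-Moment (FOSM) analysis of the kind mentioned above propagates a prior parameter covariance through the model Jacobian to obtain prior and posterior forecast variances. The linear Bayes update below is a generic textbook form, not the PEST++ implementation, and the Jacobian, covariances, and forecast sensitivities are synthetic placeholders.

```python
import numpy as np

def fosm_forecast_variance(J_obs, prior_cov, noise_cov, y_forecast):
    """
    FOSM prior and posterior forecast variance.
    J_obs: Jacobian of observations w.r.t. parameters (n_obs x n_par)
    prior_cov: prior parameter covariance (n_par x n_par)
    noise_cov: observation noise covariance (n_obs x n_obs)
    y_forecast: sensitivity of the forecast to the parameters (n_par,)
    """
    prior_var = y_forecast @ prior_cov @ y_forecast
    S = J_obs @ prior_cov @ J_obs.T + noise_cov
    post_cov = prior_cov - prior_cov @ J_obs.T @ np.linalg.solve(S, J_obs @ prior_cov)
    post_var = y_forecast @ post_cov @ y_forecast
    return prior_var, post_var

# Toy example: 3 uncertain hydraulic parameters, 5 water-level observations
rng = np.random.default_rng(0)
J = rng.normal(size=(5, 3))
prior = np.diag([1.0, 0.5, 2.0])
noise = 0.1 * np.eye(5)
y = np.array([0.3, 1.2, -0.4])           # hypothetical base-flow forecast sensitivities

prior_var, post_var = fosm_forecast_variance(J, prior, noise, y)
print(f"forecast std: prior {prior_var**0.5:.2f} -> posterior {post_var**0.5:.2f}")
```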
Estimating winter wheat phenological parameters: Implications for crop modeling
USDA-ARS?s Scientific Manuscript database
Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...
Shao, Kan; Small, Mitchell J
2011-10-01
A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose-response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose-response models (logistic and quantal-linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5-10%. The results demonstrate that dose selection for studies that subsequently inform dose-response models can benefit from consideration of how these models will be fit, combined, and interpreted. © 2011 Society for Risk Analysis.
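The Bayesian model averaging step can be pictured as mixing the per-model posterior BMD distributions with weights equal to the posterior model probabilities, and reading the BMDL off the lower tail of the mixture. The sketch below assumes such posterior samples and weights already exist (the numbers are invented); it does not reproduce the MCMC fitting or the dose-backcasting analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assume MCMC has already produced posterior BMD samples for each dose-response
# model (values here are synthetic placeholders, in arbitrary dose units).
bmd_logistic = rng.lognormal(mean=np.log(12.0), sigma=0.25, size=20000)
bmd_quantal_linear = rng.lognormal(mean=np.log(8.0), sigma=0.35, size=20000)

# Posterior model probabilities (e.g., from marginal likelihoods) -- assumed values
w_logistic, w_quantal = 0.6, 0.4

# BMA: draw each sample from one model according to its posterior probability
pick = rng.random(20000) < w_logistic
bma_bmd = np.where(pick, bmd_logistic, bmd_quantal_linear)

bmdl = np.percentile(bma_bmd, 5)         # lower bound from the mixed distribution
print(f"BMA BMD median = {np.median(bma_bmd):.1f}, BMDL (5th pct) = {bmdl:.1f}")
```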
NASA Astrophysics Data System (ADS)
Wang, Qian; Xue, Anke
2018-06-01
This paper proposes a robust controller for the spacecraft rendezvous system, considering parameter uncertainties and unsymmetrical actuator saturation, based on a discrete gain-scheduling approach. By a change of variables, we transform the unsymmetrical actuator saturation control problem into a symmetrical one. The main advantage of the proposed method is improving the dynamic performance of the closed-loop system with a region of attraction as large as possible. By the Lyapunov approach and the scheduling technology, the existence conditions for the admissible controller are formulated in the form of linear matrix inequalities. The numerical simulation illustrates the effectiveness of the proposed method.
Aerosol Inlet Characterization Experiment Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bullard, Robert L.; Kuang, Chongai; Uin, Janek
2017-05-01
The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility Aerosol Observation System inlet stack was characterized for particle penetration efficiency from 10 nm to 20 μm in diameter using duplicate scanning mobility particle sizers (10 nm-450 nm), ultra-high-sensitivity aerosol spectrometers (60 nm-μm), and aerodynamic particle sizers (0.5 μm-20 μm). Results show good model-measurement agreement and unit transmission efficiency of aerosols from 10 nm to 4 μm in diameter. Large uncertainties in the measured transmission efficiency exist above 4 μm due to low ambient aerosol signal in that size range.
Confronting the Uncertainty in Aerosol Forcing Using Comprehensive Observational Data
NASA Astrophysics Data System (ADS)
Johnson, J. S.; Regayre, L. A.; Yoshioka, M.; Pringle, K.; Sexton, D.; Lee, L.; Carslaw, K. S.
2017-12-01
The effect of aerosols on cloud droplet concentrations and radiative properties is the largest uncertainty in the overall radiative forcing of climate over the industrial period. In this study, we take advantage of a large perturbed parameter ensemble of simulations from the UK Met Office HadGEM-UKCA model (the aerosol component of the UK Earth System Model) to comprehensively sample uncertainty in aerosol forcing. Uncertain aerosol and atmospheric parameters cause substantial aerosol forcing uncertainty in climatically important regions. As the aerosol radiative forcing itself is unobservable, we investigate the potential for observations of aerosol and radiative properties to act as constraints on the large forcing uncertainty. We test how eight different theoretically perfect aerosol and radiation observations can constrain the forcing uncertainty over Europe. We find that the achievable constraint is weak unless many diverse observations are used simultaneously. This is due to the complex relationships between model output responses and the multiple interacting parameter uncertainties: compensating model errors mean there are many ways to produce the same model output (known as model equifinality), which limits the achievable constraint. However, using all eight observable quantities together we show that the aerosol forcing uncertainty can potentially be reduced by around 50%. This reduction occurs as the large sample of model variants (over 1 million) covering the full parametric uncertainty is reduced to the roughly 1% that are observationally plausible. Constraining the forcing uncertainty using real observations is a more complex undertaking, in which we must account for multiple further uncertainties, including measurement uncertainties, structural model uncertainties and the model discrepancy from reality. Here, we make a first attempt to determine the true potential constraint on the forcing uncertainty from our model that is achievable using a comprehensive set of real aerosol and radiation observations taken from ground stations, flight campaigns and satellites. This research has been supported by the UK-China Research & Innovation Partnership Fund through the Met Office Climate Science for Service Partnership (CSSP) China as part of the Newton Fund, and by the NERC-funded GASSP project.
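The kind of observational constraint described above can be mimicked on synthetic data: generate a very large ensemble of model variants, retain only those whose simulated observables fall within assumed observational ranges, and compare the forcing spread before and after. The relationships and uncertainty ranges below are invented for illustration and are not taken from the HadGEM-UKCA ensemble.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000                             # model variants spanning parametric uncertainty

# Synthetic stand-ins: each variant has a forcing value and two observable outputs
forcing = rng.normal(-1.5, 0.8, n)        # W m-2 (hypothetical)
aod = 0.15 - 0.05*forcing + rng.normal(0, 0.03, n)   # aerosol optical depth proxy
ccn = 250 - 80*forcing + rng.normal(0, 40, n)        # CCN concentration proxy

# "Observations" with uncertainty ranges (assumed values)
plausible = (np.abs(aod - 0.20) < 0.02) & (np.abs(ccn - 350) < 50)

def spread(x):
    """Width of the central 95% range."""
    return np.percentile(x, 97.5) - np.percentile(x, 2.5)

print(f"plausible fraction: {plausible.mean():.3%}")
print(f"forcing range: {spread(forcing):.2f} -> {spread(forcing[plausible]):.2f} W m-2")
```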
Agent-Centric Approach for Cybersecurity Decision-Support with Partial Observability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, Ramakrishna; Chatterjee, Samrat; Paulson, Patrick R.
Generating automated cyber resilience policies for real-world settings is a challenging research problem that must account for uncertainties in system state over time and dynamics between attackers and defenders. In addition to understanding attacker and defender motives and tools, and identifying "relevant" system and attack data, it is also critical to develop rigorous mathematical formulations representing the defender's decision-support problem under uncertainty. Game-theoretic approaches involving cyber resource allocation optimization with Markov decision processes (MDP) have been previously proposed in the literature. Moreover, advancements in reinforcement learning approaches have motivated the development of partially observable stochastic games (POSGs) in various multi-agent problem domains with partial information. Recent advances in cyber-system state space modeling have also generated interest in potential applicability of POSGs for cybersecurity. However, as is the case in strategic card games such as poker, research challenges using game-theoretic approaches for practical cyber defense applications include: 1) solving for equilibrium and designing efficient algorithms for large-scale, general problems; 2) establishing mathematical guarantees that equilibrium exists; 3) handling possible existence of multiple equilibria; and 4) exploitation of opponent weaknesses. Inspired by advances in solving strategic card games while acknowledging practical challenges associated with the use of game-theoretic approaches in cyber settings, this paper proposes an agent-centric approach for cybersecurity decision-support with partial system state observability.
Quantifying Mixed Uncertainties in Cyber Attacker Payoffs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna
Representation and propagation of uncertainty in cyber attacker payoffs is a key aspect of security games. Past research has primarily focused on representing the defender's beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and intervals. Within cyber-settings, continuous probability distributions may still be appropriate for addressing statistical (aleatory) uncertainties where the defender may assume that the attacker's payoffs differ over time. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker's payoff generation mechanism. Such epistemic uncertainties are more suitably represented as probability boxes with intervals. In this study, we explore the mathematical treatment of such mixed payoff uncertainties.
Burgess, Darren J
2017-04-01
Research describing load-monitoring techniques for team sport is plentiful. Much of this research is conducted retrospectively and typically involves recreational or semielite teams. Load-monitoring research conducted on professional team sports is largely observational. Challenges exist for the practitioner in implementing peer-reviewed research into the applied setting. These challenges include match scheduling, player adherence, manager/coach buy-in, sport traditions, and staff availability. External-load monitoring often attracts questions surrounding technology reliability and validity, while internal-load monitoring makes some assumptions about player adherence, as well as having some uncertainty around the impact these measures have on player performance. This commentary outlines examples of load-monitoring research, discusses the issues associated with the application of this research in an elite team-sport setting, and suggests practical adjustments to the existing research where necessary.
NASA Astrophysics Data System (ADS)
Odman, M. T.; Hu, Y.; Russell, A. G.
2016-12-01
Prescribed burning is practiced throughout the US, and most widely in the Southeast, for the purpose of maintaining and improving the ecosystem and reducing the wildfire risk. However, prescribed burn emissions contribute significantly to the trace gas and particulate matter loads in the atmosphere. In places where air quality is already stressed by other anthropogenic emissions, prescribed burns can lead to major health and environmental problems. Air quality modeling efforts are under way to assess the impacts of prescribed burn emissions. Operational forecasts of the impacts are also emerging for use in dynamic management of air quality as well as the burns. Unfortunately, large uncertainties exist in the process of estimating prescribed burn emissions, and these uncertainties limit the accuracy of the burn impact predictions. Prescribed burn emissions are estimated by using either ground-based information or satellite observations. When there is sufficient local information about the burn area, the types of fuels, their consumption amounts, and the progression of the fire, ground-based estimates are more accurate. In the absence of such information, satellites remain the only reliable source for emission estimation. To determine the level of uncertainty in prescribed burn emissions, we compared estimates derived from a burn permit database and other ground-based information to the estimates by the Biomass Burning Emissions Product derived from a constellation of NOAA and NASA satellites. Using these emission estimates we conducted simulations with the Community Multiscale Air Quality (CMAQ) model and predicted trace gas and particulate matter concentrations throughout the Southeast for two consecutive burn seasons (2015 and 2016). In this presentation, we will compare model-predicted concentrations to measurements at monitoring stations and evaluate whether the differences are commensurate with our emission uncertainty estimates. We will also investigate whether spatial and temporal patterns in the differences reveal the sources of the uncertainty in the prescribed burn emission estimates.
Uncertainty in Climate Change Research: An Integrated Approach
NASA Astrophysics Data System (ADS)
Mearns, L.
2017-12-01
Uncertainty has been a major theme in research regarding climate change from virtually the very beginning, and appropriately characterizing and quantifying uncertainty has been an important aspect of this work. Initially, uncertainties were explored regarding the climate system and how it would react to future forcing. A concomitant area of concern was the future emissions and concentrations of important forcing agents such as greenhouse gases and aerosols. But, of course, we know there are important uncertainties in all aspects of climate change research, not just the climate system and emissions. And as climate change research has become more important and of pragmatic concern as possible solutions to the climate change problem are addressed, exploring all the relevant uncertainties has become more relevant and urgent. More recently, over the past five years or so, uncertainties in impacts models, such as agricultural and hydrological models, have received much more attention, through programs such as AgMIP, and some research in this arena has indicated that the uncertainty in the impacts models can be as great as or greater than that in the climate system. Still, other areas of uncertainty remain underexplored and/or undervalued. This includes uncertainty in vulnerability and governance. Without more thoroughly exploring these last uncertainties, we will likely underestimate important uncertainties, particularly regarding how different systems can successfully adapt to climate change. In this talk I will discuss these different uncertainties and how to combine them to give a complete picture of the total uncertainty individual systems are facing. And as part of this, I will discuss how the uncertainty can be successfully managed even if it is fairly large and deep. Part of my argument will be that large uncertainty is not the enemy, but rather false certainty is the true danger.
Towards a global harmonized permafrost soil organic carbon stock estimates.
NASA Astrophysics Data System (ADS)
Hugelius, G.; Mishra, U.; Yang, Y.
2017-12-01
Permafrost-affected soils store disproportionately large amounts of organic carbon due to multiple cryopedogenic processes. Previous permafrost soil organic carbon (SOC) stock estimates used a variety of approaches and reported substantial uncertainty in SOC stocks of permafrost soils. Here, we used spatially referenced data on soil-forming factors (topographic attributes, land cover types, climate, and bedrock geology) and SOC pedon description data (n = 2552) in a regression kriging approach to predict the spatial and vertical heterogeneity of SOC stocks across the Northern Circumpolar and Tibetan permafrost regions. Our approach allowed us to take into account both environmental correlation and spatial autocorrelation to separately estimate SOC stocks and their spatial uncertainties (95% CI) for three depth intervals at 250 m spatial resolution. In the Northern Circumpolar region, our results show 1278.1 (1009.33 - 1550.45) Pg C in the 0-3 m depth interval, with 542.09 (451.83 - 610.15), 422.46 (306.48 - 550.82), and 313.55 (251.02 - 389.48) Pg C in the 0-1, 1-2, and 2-3 m depth intervals, respectively. In the Tibetan region, our results show 26.68 (9.82 - 79.92) Pg C in the 0-3 m depth interval, with 13.98 (6.2 - 32.96), 6.49 (1.73 - 25.86), and 6.21 (1.889 - 20.90) Pg C in the 0-1, 1-2, and 2-3 m depth intervals, respectively. Our estimates show large spatial variability (50-100% coefficient of variation, depending upon the study region and depth interval) and a higher uncertainty range in comparison to existing estimates. We will present the observed controls of different environmental factors on SOC at the AGU meeting.
NASA Astrophysics Data System (ADS)
Pohlmann, K. F.; Zhu, J.; Ye, M.; Carroll, R. W.; Chapman, J. B.; Russell, C. E.; Shafer, D. S.
2006-12-01
Yucca Mountain (YM), Nevada has been recommended as a deep geological repository for the disposal of spent fuel and high-level radioactive waste. If YM is licensed as a repository by the Nuclear Regulatory Commission, it will be important to identify the potential for radionuclides to migrate from underground nuclear testing areas located on the Nevada Test Site (NTS) to the hydraulically downgradient repository area to ensure that monitoring does not incorrectly attribute repository failure to radionuclides originating from other sources. In this study, we use the Death Valley Regional Flow System (DVRFS) model developed by the U.S. Geological Survey to investigate potential groundwater migration pathways and associated travel times from the NTS to the proposed YM repository area. Using results from the calibrated DVRFS model and the particle tracking post-processing package MODPATH we modeled three-dimensional groundwater advective pathways in the NTS and YM region. Our study focuses on evaluating the potential for groundwater pathways between the NTS and YM withdrawal area and whether travel times for advective flow along these pathways coincide with the prospective monitoring time frame at the proposed repository. We include uncertainty in effective porosity as this is a critical variable in the determination of time for radionuclides to travel from the NTS region to the YM withdrawal area. Uncertainty in porosity is quantified through evaluation of existing site data and expert judgment and is incorporated in the model through Monte Carlo simulation. Since porosity information is limited for this region, the uncertainty is quite large and this is reflected in the results as a large range in simulated groundwater travel times.
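The role of effective porosity in this kind of analysis follows from the advective travel time relation t = L n_e / (K i): for fixed conductivity and gradient, travel time scales linearly with porosity, so a wide porosity distribution translates directly into a wide travel-time distribution. A minimal Monte Carlo sketch, with all values (path length, conductivity, gradient, porosity distribution) assumed for illustration rather than taken from the DVRFS model:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

path_length = 40_000.0       # m, illustrative flow-path length
hyd_cond = 5.0               # m/d, effective hydraulic conductivity along the path
gradient = 1.0e-3            # hydraulic gradient (assumed)

# Effective porosity treated as the uncertain variable (triangular, expert judgment)
porosity = rng.triangular(left=0.005, mode=0.05, right=0.25, size=n)

# Advective travel time: t = L * n_e / (K * i)
travel_time_yr = path_length * porosity / (hyd_cond * gradient) / 365.25

print(f"5th-95th percentile travel time: "
      f"{np.percentile(travel_time_yr, 5):,.0f} - {np.percentile(travel_time_yr, 95):,.0f} years")
```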
High-resolution mapping of forest carbon stocks in the Colombian Amazon
NASA Astrophysics Data System (ADS)
Asner, G. P.; Clark, J. K.; Mascaro, J.; Galindo García, G. A.; Chadwick, K. D.; Navarrete Encinales, D. A.; Paez-Acosta, G.; Cabrera Montenegro, E.; Kennedy-Bowdoin, T.; Duque, Á.; Balaji, A.; von Hildebrand, P.; Maatoug, L.; Bernal, J. F. Phillips; Yepes Quintero, A. P.; Knapp, D. E.; García Dávila, M. C.; Jacobson, J.; Ordóñez, M. F.
2012-07-01
High-resolution mapping of tropical forest carbon stocks can assist forest management and improve implementation of large-scale carbon retention and enhancement programs. Previous high-resolution approaches have relied on field plot and/or light detection and ranging (LiDAR) samples of aboveground carbon density, which are typically upscaled to larger geographic areas using stratification maps. Such efforts often rely on detailed vegetation maps to stratify the region for sampling, but existing tropical forest maps are often too coarse and field plots too sparse for high-resolution carbon assessments. We developed a top-down approach for high-resolution carbon mapping in a 16.5 million ha region (> 40%) of the Colombian Amazon - a remote landscape seldom documented. We report on three advances for large-scale carbon mapping: (i) employing a universal approach to airborne LiDAR-calibration with limited field data; (ii) quantifying environmental controls over carbon densities; and (iii) developing stratification- and regression-based approaches for scaling up to regions outside of LiDAR coverage. We found that carbon stocks are predicted by a combination of satellite-derived elevation, fractional canopy cover and terrain ruggedness, allowing upscaling of the LiDAR samples to the full 16.5 million ha region. LiDAR-derived carbon maps have 14% uncertainty at 1 ha resolution, and the regional map based on stratification has 28% uncertainty in any given hectare. High-resolution approaches with quantifiable pixel-scale uncertainties will provide the most confidence for monitoring changes in tropical forest carbon stocks. Improved confidence will allow resource managers and decision makers to more rapidly and effectively implement actions that better conserve and utilize forests in tropical regions.
High-resolution Mapping of Forest Carbon Stocks in the Colombian Amazon
NASA Astrophysics Data System (ADS)
Asner, G. P.; Clark, J. K.; Mascaro, J.; Galindo García, G. A.; Chadwick, K. D.; Navarrete Encinales, D. A.; Paez-Acosta, G.; Cabrera Montenegro, E.; Kennedy-Bowdoin, T.; Duque, Á.; Balaji, A.; von Hildebrand, P.; Maatoug, L.; Bernal, J. F. Phillips; Knapp, D. E.; García Dávila, M. C.; Jacobson, J.; Ordóñez, M. F.
2012-03-01
High-resolution mapping of tropical forest carbon stocks can assist forest management and improve implementation of large-scale carbon retention and enhancement programs. Previous high-resolution approaches have relied on field plot and/or Light Detection and Ranging (LiDAR) samples of aboveground carbon density, which are typically upscaled to larger geographic areas using stratification maps. Such efforts often rely on detailed vegetation maps to stratify the region for sampling, but existing tropical forest maps are often too coarse and field plots too sparse for high resolution carbon assessments. We developed a top-down approach for high-resolution carbon mapping in a 16.5 million ha region (>40 %) of the Colombian Amazon - a remote landscape seldom documented. We report on three advances for large-scale carbon mapping: (i) employing a universal approach to airborne LiDAR-calibration with limited field data; (ii) quantifying environmental controls over carbon densities; and (iii) developing stratification- and regression-based approaches for scaling up to regions outside of LiDAR coverage. We found that carbon stocks are predicted by a combination of satellite-derived elevation, fractional canopy cover and terrain ruggedness, allowing upscaling of the LiDAR samples to the full 16.5 million ha region. LiDAR-derived carbon mapping samples had 14.6 % uncertainty at 1 ha resolution, and regional maps based on stratification and regression approaches had 25.6 % and 29.6 % uncertainty, respectively, in any given hectare. High-resolution approaches with reported local-scale uncertainties will provide the most confidence for monitoring changes in tropical forest carbon stocks. Improved confidence will allow resource managers and decision-makers to more rapidly and effectively implement actions that better conserve and utilize forests in tropical regions.
NASA Astrophysics Data System (ADS)
Troldborg, M.; Nowak, W.; Binning, P. J.; Bjerg, P. L.
2012-12-01
Estimates of mass discharge (mass/time) are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Mass discharge estimates are, however, prone to rather large uncertainties as they integrate uncertain spatial distributions of both concentration and groundwater flow velocities. For risk assessments or any other decisions that are being based on mass discharge estimates, it is essential to address these uncertainties. We present a novel Bayesian geostatistical approach for quantifying the uncertainty of the mass discharge across a multilevel control plane. The method decouples the flow and transport simulation and has the advantage of avoiding the heavy computational burden of three-dimensional numerical flow and transport simulation coupled with geostatistical inversion. It may therefore be of practical relevance to practitioners compared to existing methods that are either too simple or computationally demanding. The method is based on conditional geostatistical simulation and accounts for i) heterogeneity of both the flow field and the concentration distribution through Bayesian geostatistics (including the uncertainty in covariance functions), ii) measurement uncertainty, and iii) uncertain source zone geometry and transport parameters. The method generates multiple equally likely realizations of the spatial flow and concentration distribution, which all honour the measured data at the control plane. The flow realizations are generated by analytical co-simulation of the hydraulic conductivity and the hydraulic gradient across the control plane. These realizations are made consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box-Cox transformed concentration data is used to simulate observed deviations from this mean solution. By combining the flow and concentration realizations, a mass discharge probability distribution is obtained. Tests show that the decoupled approach is both efficient and able to provide accurate uncertainty estimates. The method is demonstrated on a Danish field site contaminated with chlorinated ethenes. For this site, we show that including a physically meaningful concentration trend and the co-simulation of hydraulic conductivity and hydraulic gradient across the transect helps constrain the mass discharge uncertainty. The number of sampling points required for accurate mass discharge estimation and the relative influence of different data types on mass discharge uncertainty is discussed.
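At its core, a control-plane mass discharge estimate sums flux times concentration over the plane, MD = Σ K_i i C_i A_i, and the uncertainty follows from sampling the uncertain inputs. The sketch below shows that bookkeeping with independent lognormal draws; it ignores the spatial correlation, conditioning on measurements, and Box-Cox treatment that the Bayesian geostatistical method above provides, and all parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(21)
n_real, n_cells = 10_000, 50              # Monte Carlo realizations, control-plane cells

cell_area = 2.0                                            # m2 per cell (assumed)
conc_mean = rng.lognormal(np.log(5e-3), 1.0, n_cells)      # kg/m3, mean plume concentrations

md = np.empty(n_real)
for r in range(n_real):
    K = rng.lognormal(np.log(1e-4), 0.8, n_cells)          # m/s, heterogeneous conductivity
    grad = rng.normal(3e-3, 5e-4)                          # uniform gradient across the plane
    conc = conc_mean * rng.lognormal(0.0, 0.3, n_cells)    # concentration uncertainty
    md[r] = np.sum(K * grad * conc * cell_area)            # kg/s across the control plane

md_kg_per_yr = md * 3600 * 24 * 365.25
print(f"mass discharge: median {np.median(md_kg_per_yr):.1f} kg/yr, "
      f"90% interval {np.percentile(md_kg_per_yr, 5):.1f}-{np.percentile(md_kg_per_yr, 95):.1f} kg/yr")
```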
Uncertainty estimation of simulated water levels for the Mitch flood event in Tegucigalpa
NASA Astrophysics Data System (ADS)
Fuentes Andino, Diana Carolina; Halldin, Sven; Beven, Keith; Xu, Chong-Yu
2013-04-01
Hurricane Mitch in 1998 left a devastating flood in Tegucigalpa, the capital city of Honduras. Due to the extremely large magnitude of the Mitch flood, hydrometric measurements were not taken during the event. However, post-event indirect measurements of the discharge were obtained by the U.S. Geological Survey (USGS) and post-event surveyed high water marks were obtained by the Japan International Cooperation Agency (JICA). This work proposes a methodology to simulate the water level during the Mitch event when the available data are associated with large uncertainty. The results of the two-dimensional hydrodynamic model LISFLOOD-FP will be evaluated using the Generalised Likelihood Uncertainty Estimation (GLUE) framework. The main challenge in the proposed methodology is to formulate an approach to evaluate the model results when there are large uncertainties coming from both the model parameters and the evaluation data.
Uncertainty of Polarized Parton Distributions
NASA Astrophysics Data System (ADS)
Hirai, M.; Goto, Y.; Horaguchi, T.; Kobayashi, H.; Kumano, S.; Miyama, M.; Saito, N.; Shibata, T.-A.
Polarized parton distribution functions are determined by a χ2 analysis of polarized deep inelastic experimental data. In this paper, the uncertainty of the obtained distribution functions is investigated by a Hessian method. We find that the uncertainty of the polarized gluon distribution is fairly large. We then estimate the gluon uncertainty by including fake data generated from the prompt photon process at RHIC. We observe that the uncertainty could be reduced with these data.
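For reference, the Hessian (error matrix) method mentioned above propagates a χ² tolerance through the inverse Hessian of the fit. A standard textbook form of the propagation (quoted from common usage, not from this paper) is:

```latex
\left[\delta F(x)\right]^{2} \;=\; \Delta\chi^{2}
\sum_{i,j}\frac{\partial F(x)}{\partial a_{i}}\,
\left(H^{-1}\right)_{ij}\,
\frac{\partial F(x)}{\partial a_{j}},
\qquad
H_{ij} \;=\; \frac{1}{2}\,
\left.\frac{\partial^{2}\chi^{2}}{\partial a_{i}\,\partial a_{j}}\right|_{\min},
```

where the a_i are the fitted distribution parameters, F(x) is any quantity computed from them (for example the polarized gluon distribution), and Δχ² is the tolerance that defines the error band.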
Radiation health for a Mars mission
NASA Technical Reports Server (NTRS)
Robbins, Donald E.
1992-01-01
Uncertainties in risk assessments for exposure of a Mars mission crew to space radiation place limitations on mission design and operation. Large shielding penalties are imposed in order to obtain acceptable safety margins. Galactic cosmic rays (GCR) and solar particle events (SPE) are the major concern. A warning system and 'safe haven' are needed to protect the crew from large SPE which produce lethal doses. A model developed at NASA Johnson Space Center (JSC) to describe solar modulation of GCR intensities reduces that uncertainty to less than 10 percent. Radiation transport models used to design spacecraft shielding have large uncertainties in nuclear fragmentation cross sections for GCR which interact with spacecraft materials. Planned space measurements of linear energy transfer (LET) spectra behind various shielding thicknesses will reduce uncertainties in dose-versus-shielding-thickness relationships to 5-10 percent. The largest remaining uncertainty is in the biological effects of space radiation. Data on the effects of energetic ions in humans are nonexistent. Experimental research on effects in animals and cells is needed to allow extrapolation to the risk of carcinogenesis in humans.
NASA Astrophysics Data System (ADS)
Hughes, J. D.; Metz, P. A.
2014-12-01
Most watershed studies include observation-based water budget analyses to develop first-order estimates of significant flow terms. Surface-water/groundwater (SWGW) exchange is typically assumed to be equal to the residual of the sum of inflows and outflows in a watershed. These estimates of SWGW exchange, however, are highly uncertain as a result of the propagation of uncertainty inherent in the calculation or processing of the other terms of the water budget, such as stage-area-volume relations, and uncertainties associated with land-cover based evapotranspiration (ET) rate estimates. Furthermore, the uncertainty of estimated SWGW exchanges can be magnified in large wetland systems that transition from dry to wet during wet periods. Although it is well understood that observation-based estimates of SWGW exchange are uncertain it is uncommon for the uncertainty of these estimates to be directly quantified. High-level programming languages like Python can greatly reduce the effort required to (1) quantify the uncertainty of estimated SWGW exchange in large wetland systems and (2) evaluate how different approaches for partitioning land-cover data in a watershed may affect the water-budget uncertainty. We have used Python with the Numpy, Scipy.stats, and pyDOE packages to implement an unconstrained Monte Carlo approach with Latin Hypercube sampling to quantify the uncertainty of monthly estimates of SWGW exchange in the Floral City watershed of the Tsala Apopka wetland system in west-central Florida, USA. Possible sources of uncertainty in the water budget analysis include rainfall, ET, canal discharge, and land/bathymetric surface elevations. Each of these input variables was assigned a probability distribution based on observation error or spanning the range of probable values. The Monte Carlo integration process exposes the uncertainties in land-cover based ET rate estimates as the dominant contributor to the uncertainty in SWGW exchange estimates. We will discuss the uncertainty of SWGW exchange estimates using an ET model that partitions the watershed into open water and wetland land-cover types. We will also discuss the uncertainty of SWGW exchange estimates calculated using ET models partitioned into additional land-cover types.
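A stripped-down version of the workflow described above, using the same packages named in the abstract (Numpy, Scipy.stats, pyDOE), is sketched below: Latin Hypercube samples of the uncertain monthly budget terms are mapped onto assumed distributions and the SWGW exchange is computed as the residual. The distributions, values, and sign convention are illustrative only and are not the Floral City data.

```python
import numpy as np
from scipy import stats
from pyDOE import lhs   # pip install pyDOE

n = 10_000
design = lhs(4, samples=n)   # 4 uncertain monthly water-budget terms, unit hypercube

# Map the Latin Hypercube columns onto assumed distributions (values are illustrative)
rain   = stats.norm(120.0, 10.0).ppf(design[:, 0])    # mm, gauge + interpolation error
et     = stats.uniform(70.0, 40.0).ppf(design[:, 1])  # mm, land-cover based ET range
canal  = stats.norm(25.0, 5.0).ppf(design[:, 2])      # mm, canal discharge out
dstore = stats.norm(10.0, 8.0).ppf(design[:, 3])      # mm, storage change (stage-area-volume)

# SW/GW exchange estimated as the water-budget residual
swgw = dstore - (rain - et - canal)

print(f"SWGW exchange: {swgw.mean():.1f} mm, 90% interval "
      f"[{np.percentile(swgw, 5):.1f}, {np.percentile(swgw, 95):.1f}] mm")
```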
Assessment of Uncertainty-Infused Scientific Argumentation
ERIC Educational Resources Information Center
Lee, Hee-Sun; Liu, Ou Lydia; Pallant, Amy; Roohr, Katrina Crotts; Pryputniewicz, Sarah; Buck, Zoë E.
2014-01-01
Though addressing sources of uncertainty is an important part of doing science, it has largely been neglected in assessing students' scientific argumentation. In this study, we initially defined a scientific argumentation construct in four structural elements consisting of claim, justification, uncertainty qualifier, and uncertainty…
NASA Technical Reports Server (NTRS)
Martin, M. W.; Kubiak, E. T.
1982-01-01
A new design was developed for the Space Shuttle Transition Phase Digital Autopilot to reduce the impact of large measurement uncertainties in the rate signal during attitude control. The signal source, which was dictated by early computer constraints, is characterized by large quantization, noise, bias, and transport lag which produce a measurement uncertainty larger than the minimum impulse rate change. To ensure convergence to a minimum impulse limit cycle, the design employed bias and transport lag compensation and a switching logic with hysteresis, rate deadzone, and 'walking' switching line. The design background, the rate measurement uncertainties, and the design solution are documented.
From Rupture to Resonance: Uncertainty and Scholarship in Fine Art Research Degrees
ERIC Educational Resources Information Center
Simmons, Beverley; Holbrook, Allyson
2013-01-01
This article focuses on the phenomenon of "rupture" identified in student narratives of uncertainty and scholarship experienced during the course of Fine Art research degrees in two Australian universities. Rupture captures the phenomenon of severe disruption or discontinuity in existing knowledge and typically signifies epistemological…
The state of the art of the impact of sampling uncertainty on measurement uncertainty
NASA Astrophysics Data System (ADS)
Leite, V. J.; Oliveira, E. C.
2018-03-01
Measurement uncertainty is a parameter that expresses the reliability of a result and can be divided into two large groups: sampling and analytical variations. Analytical uncertainty is a controlled process, performed in the laboratory. The same does not occur with sampling uncertainty which, because it faces several obstacles and there is little clarity on how to perform the procedures, has been neglected, although it is admittedly indispensable to the measurement process. This paper aims at describing the state of the art of sampling uncertainty and at assessing its relevance to measurement uncertainty.
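When both components are expressed as standard uncertainties, the division described above leads to a simple quadrature combination, as in the hedged example below (the numerical values are arbitrary):

```python
import math

# Illustrative standard uncertainties for a single measurand (assumed values)
u_sampling = 4.0      # e.g., from ANOVA on duplicate samples
u_analytical = 1.5    # e.g., from laboratory QC data

u_measurement = math.sqrt(u_sampling**2 + u_analytical**2)   # combined standard uncertainty
U_expanded = 2 * u_measurement                               # coverage factor k = 2 (~95%)
print(f"combined u = {u_measurement:.2f}, expanded U = {U_expanded:.2f}")
```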
Interpolation Method Needed for Numerical Uncertainty
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. Errors in CFD can be approximated via Richardson's extrapolation, a method based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or another uncertainty method to approximate errors.
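For context, Richardson extrapolation on three systematically refined grids estimates the observed order of accuracy and an extrapolated "exact" value, which is what the interpolation scheme sought here must feed. A minimal sketch with made-up grid solutions and the commonly used Grid Convergence Index (GCI) safety factor of 1.25:

```python
import numpy as np

def richardson(f1, f2, f3, r):
    """
    Estimate the observed order of accuracy and extrapolated value from
    solutions on three grids: f1 (fine), f2 (medium), f3 (coarse), with a
    constant grid refinement ratio r.
    """
    p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)         # observed order of accuracy
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)               # Richardson extrapolation
    gci_fine = 1.25 * abs((f2 - f1) / f1) / (r**p - 1.0)  # grid convergence index
    return p, f_exact, gci_fine

# Hypothetical drag-coefficient values from three grids, refinement ratio 2
p, f_exact, gci = richardson(f1=0.02461, f2=0.02475, f3=0.02504, r=2.0)
print(f"observed order p = {p:.2f}, extrapolated value = {f_exact:.5f}, GCI = {gci:.2%}")
```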
NASA Astrophysics Data System (ADS)
Swallow, B.; Rigby, M. L.; Rougier, J.; Manning, A.; Thomson, D.; Webster, H. N.; Lunt, M. F.; O'Doherty, S.
2016-12-01
In order to understand the underlying processes governing environmental and physical phenomena, a complex mathematical model is usually required. However, there is an inherent uncertainty related to the parameterisation of unresolved processes in these simulators. Here, we focus on the specific problem of accounting for uncertainty in parameter values in an atmospheric chemical transport model. Systematic errors introduced by failing to account for these uncertainties have the potential to have a large effect on resulting estimates of unknown quantities of interest. One approach that is being increasingly used to address this issue is known as emulation, in which a large number of forward runs of the simulator are carried out in order to approximate the response of the output to changes in parameters. However, due to the complexity of some models, it is often infeasible to perform the large number of training runs usually required for full statistical emulators of the environmental processes. We therefore present a simplified model reduction method for approximating uncertainties in complex environmental simulators without the need for very large numbers of training runs. We illustrate the method through an application to the Met Office's atmospheric transport model NAME. We show how our parameter estimation framework can be incorporated into a hierarchical Bayesian inversion, and demonstrate the impact on estimates of UK methane emissions, using atmospheric mole fraction data. We conclude that accounting for uncertainties in the parameterisation of complex atmospheric models is vital if systematic errors are to be minimized and all relevant uncertainties accounted for. We also note that investigations of this nature can prove extremely useful in highlighting deficiencies in the simulator that might otherwise be missed.
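The emulation idea mentioned above (approximating the simulator's response surface from a limited set of training runs) is often illustrated with a Gaussian-process regressor; the sketch below does exactly that on a cheap stand-in function. It is not the simplified model-reduction method of this work, and the simulator, training-set size, and kernel are all assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(x):
    """Cheap stand-in for an expensive transport-model run (two inputs)."""
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1]**2

rng = np.random.default_rng(5)
X_train = rng.uniform(0, 1, size=(40, 2))      # a modest number of training runs
y_train = simulator(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True)
gp.fit(X_train, y_train)

# The emulator now predicts the simulator response (with uncertainty) cheaply
X_new = rng.uniform(0, 1, size=(5, 2))
mean, std = gp.predict(X_new, return_std=True)
for m, s, truth in zip(mean, std, simulator(X_new)):
    print(f"emulated {m:+.3f} +/- {s:.3f}   true {truth:+.3f}")
```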
Nuclear Data Uncertainties for Typical LWR Fuel Assemblies and a Simple Reactor Core
NASA Astrophysics Data System (ADS)
Rochman, D.; Leray, O.; Hursin, M.; Ferroukhi, H.; Vasiliev, A.; Aures, A.; Bostelmann, F.; Zwermann, W.; Cabellos, O.; Diez, C. J.; Dyrda, J.; Garcia-Herranz, N.; Castro, E.; van der Marck, S.; Sjöstrand, H.; Hernandez, A.; Fleming, M.; Sublet, J.-Ch.; Fiorito, L.
2017-01-01
The impact of the covariances in current nuclear data libraries such as ENDF/B-VII.1, JEFF-3.2, JENDL-4.0, SCALE and TENDL on relevant current reactors is presented in this work. The uncertainties due to nuclear data are calculated for existing PWR and BWR fuel assemblies (with burn-up up to 40 GWd/tHM, followed by 10 years of cooling time) and for a simplified PWR full core model (without burn-up) for quantities such as k∞, macroscopic cross sections, pin power or isotope inventory. In this work, the method of propagation of uncertainties is based on random sampling of nuclear data, either from covariance files or directly from basic parameters. Additionally, possible biases on calculated quantities, such as the self-shielding treatment, are investigated. Different calculation schemes are used, based on CASMO, SCALE, DRAGON, MCNP or FISPACT-II, thus simulating real-life assignments for technical-support organizations. The outcome of such a study is a comparison of uncertainties with two consequences. One: although this study is not expected to lead to similar results between the involved calculation schemes, it provides insight into what can happen when calculating uncertainties and allows some perspective on the range of validity of these uncertainties. Two: it allows us to draw a picture of the state of knowledge as of today, using existing nuclear data library covariances and current methods.
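The random-sampling propagation described above can be reduced to its bare bones: draw perturbation factors for the nuclear data from a covariance matrix, rerun the (here, trivial one-group) model, and read off the spread of the output. The covariance values and one-group constants below are invented and have no connection to ENDF/B, JEFF, JENDL, SCALE, or TENDL.

```python
import numpy as np

rng = np.random.default_rng(13)
n = 5000

# One-group toy problem: k_inf = nu*Sigma_f / Sigma_a, with correlated relative
# uncertainties on the data standing in for a covariance file.
nominal = np.array([2.43, 0.0045, 0.0102])          # nu, Sigma_f, Sigma_a (illustrative)
rel_cov = np.array([[1.0e-5, 0.0,    0.0],
                    [0.0,    4.0e-4, 1.0e-4],
                    [0.0,    1.0e-4, 2.5e-4]])      # relative covariance (assumed)

samples = rng.multivariate_normal(np.zeros(3), rel_cov, size=n)
nu, sig_f, sig_a = (nominal * (1.0 + samples)).T

k_inf = nu * sig_f / sig_a
print(f"k_inf = {k_inf.mean():.4f} +/- {k_inf.std(ddof=1):.4f} "
      f"({100*k_inf.std(ddof=1)/k_inf.mean():.2f} % relative uncertainty)")
```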
NASA Astrophysics Data System (ADS)
Dethlefsen, Frank; Tilmann Pfeiffer, Wolf; Schäfer, Dirk
2016-04-01
Numerical simulations of hydraulic, thermal, geomechanical, or geochemical (THMC) processes in the subsurface have been conducted for decades. Often, such simulations start from a parameter set that is as realistic as possible. A base scenario is then calibrated on field observations. Finally, scenario simulations can be performed, for instance to forecast the system behaviour after varying the input data. In the context of subsurface energy and mass storage, however, such model calibrations based on field data are often not available, as these storage operations have not yet been carried out. Consequently, the numerical models rely solely on the parameter set initially selected, and uncertainties arising from a lack of parameter values or of process understanding may not be perceivable, let alone quantifiable. Conducting THMC simulations in the context of energy and mass storage therefore deserves a particular review of the model parameterization and its input data, and such a review so far hardly exists to the required extent. Variability, or aleatory uncertainty, exists for geoscientific parameter values in general. Parameters for which numerous data points are available, such as aquifer permeabilities, may be described statistically and thereby exhibit statistical uncertainty; in this case, sensitivity analyses can quantify the uncertainty in the simulation resulting from varying such a parameter. For other parameters, the lack of data quantity and quality means that varying the parameter value in numerical scenario simulations fundamentally changes the processes that occur. As an example of such scenario uncertainty, varying the capillary entry pressure, one of the multiphase flow parameters, can either allow or completely inhibit the penetration of an aquitard by gas. As a last example, the uncertainty of cap-rock fault permeabilities, and consequently of potential leakage rates of stored gases into shallow compartments, is regarded as recognized ignorance by the authors of this study, as no realistic approach exists to determine this parameter and values are best guesses only. In addition to these aleatory uncertainties, an equivalent classification is possible for rating epistemic uncertainties, which describe the degree to which processes are understood, such as the geochemical and hydraulic effects following potential gas intrusions from deeper reservoirs into shallow aquifers. As an outcome of this grouping of uncertainties, prediction errors of scenario simulations can be calculated by sensitivity analyses if the uncertainties are identified as statistical. However, if scenario uncertainties exist, or recognized ignorance even has to be attested to a parameter or process in question, the outcomes of simulations depend mainly on the modeller's decisions in choosing parameter values or in judging which processes occur. In that case, the informative value of numerical simulations is limited by ambiguous simulation results, which cannot be refined without improving the geoscientific database through laboratory or field studies on a longer-term basis, so that the effects of subsurface use may be predicted realistically. This discussion, amended by a compilation of available geoscientific data for parameterizing such simulations, will be presented in this study.
Uncertainty as Knowledge: Constraints on Policy Choices Provided by Analysis of Uncertainty
NASA Astrophysics Data System (ADS)
Lewandowsky, S.; Risbey, J.; Smithson, M.; Newell, B. R.
2012-12-01
Uncertainty forms an integral part of climate science, and it is often cited in connection with arguments against mitigative action. We argue that an analysis of uncertainty must consider existing knowledge as well as uncertainty, and the two must be evaluated with respect to the outcomes and risks associated with possible policy options. Although risk judgments are inherently subjective, an analysis of the role of uncertainty within the climate system yields two constraints that are robust to a broad range of assumptions. Those constraints are that (a) greater uncertainty about the climate system is necessarily associated with greater expected damages from warming, and (b) greater uncertainty translates into a greater risk of the failure of mitigation efforts. These ordinal constraints are unaffected by subjective or cultural risk-perception factors, they are independent of the discount rate, and they are independent of the magnitude of the estimate for climate sensitivity. The constraints mean that any appeal to uncertainty must imply a stronger, rather than weaker, need to cut greenhouse gas emissions than in the absence of uncertainty.
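As a minimal numerical illustration of the first constraint (assuming, purely for exposition, a hypothetical convex cubic damage function and Gaussian uncertainty about warming), expected damages grow with the spread of the distribution even when its mean is held fixed, which is Jensen's inequality at work:

    import numpy as np

    rng = np.random.default_rng(1)

    def damage(delta_t):
        """Hypothetical convex damage function (arbitrary units)."""
        return delta_t ** 3

    mean_sensitivity = 3.0                 # fixed best estimate (deg C)
    for spread in (0.5, 1.0, 1.5):         # increasing uncertainty, same mean
        samples = rng.normal(mean_sensitivity, spread, 100_000)
        samples = np.clip(samples, 0.0, None)   # ignore unphysical negative warming
        print(f"sd = {spread:.1f} -> expected damage = {damage(samples).mean():.1f}")

Any convex damage function gives the same ordinal behaviour, which is why the constraint is robust to the details of the damage model.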
Climate change impacts on extreme events in the United States: an uncertainty analysis
Extreme weather and climate events, such as heat waves, droughts and severe precipitation events, have substantial impacts on ecosystems and the economy. However, future climate simulations display large uncertainty in mean changes. As a result, the uncertainty in future changes ...
Uncertainty in flood damage estimates and its potential effect on investment decisions
NASA Astrophysics Data System (ADS)
Wagenaar, D. J.; de Bruijn, K. M.; Bouwer, L. M.; de Moel, H.
2016-01-01
This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage functions and maximum damages can have large effects on flood damage estimates. This explanation is then used to quantify the uncertainty in the damage estimates with a Monte Carlo analysis. The Monte Carlo analysis uses a damage function library with 272 functions from seven different flood damage models. The paper shows that the resulting uncertainties in estimated damages are on the order of a factor of 2 to 5. The uncertainty is typically larger for flood events with small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economically optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investments.
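A sketch of the Monte Carlo idea with a toy library of three depth-damage curves and an assumed range of maximum damages (the paper's library contains 272 functions; everything below is illustrative):

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy depth-damage functions: fraction of maximum damage vs. water depth (m).
    # These stand in for curves taken from different damage models (assumed shapes).
    damage_functions = [
        lambda d: np.clip(d / 3.0, 0, 1),           # linear up to 3 m
        lambda d: np.clip(1 - np.exp(-d), 0, 1),    # saturating
        lambda d: np.clip((d / 4.0) ** 0.5, 0, 1),  # concave
    ]
    max_damage_eur_per_m2 = (300.0, 600.0)   # assumed range of maximum damages

    depths = rng.gamma(shape=2.0, scale=0.5, size=500)   # synthetic inundation depths (m)
    area_m2 = 1.0e4

    totals = []
    for _ in range(2000):
        f = damage_functions[rng.integers(len(damage_functions))]
        dmax = rng.uniform(*max_damage_eur_per_m2)
        totals.append(f(depths).mean() * dmax * area_m2)

    lo, hi = np.percentile(totals, [5, 95])
    print(f"5-95% range of estimated damage: {lo:.2e} - {hi:.2e} EUR (factor {hi/lo:.1f})")

Sampling the curve and the maximum damage jointly is what produces the factor-of-several spread reported in the abstract.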
Evaluating Precipitation from Orbital Data Products of TRMM and GPM over the Indian Subcontinent
NASA Astrophysics Data System (ADS)
Jayaluxmi, I.; Kumar, D. N.
2015-12-01
The rapidly growing records of microwave-based precipitation data made available from various earth observation satellites have created a pressing need to evaluate the associated uncertainty, which arises from different sources such as retrieval error, spatial/temporal sampling error and sensor-dependent error. Pertaining to microwave remote sensing, most of the studies in the literature focus on gridded data products; fewer studies exist on evaluating the uncertainty inherent in orbital data products. Evaluation of the latter is essential as they potentially cause large uncertainties during real-time flood forecasting studies, especially at the watershed scale. The present study evaluates the uncertainty of precipitation data derived from the orbital data products of the Tropical Rainfall Measuring Mission (TRMM) satellite, namely the 2A12, 2A25 and 2B31 products. Case study results over the flood-prone basin of Mahanadi, India, are analyzed for precipitation uncertainty through four facets, viz.: a) uncertainty quantification using the volumetric metrics from the contingency table [AghaKouchak and Mehran 2014]; b) error characterization using additive and multiplicative error models; c) error decomposition to identify systematic and random errors; d) comparative assessment with the orbital data from the GPM mission. The homoscedastic random errors from the multiplicative error models justify a better representation of precipitation estimates by the 2A12 algorithm. It can be concluded that although the radiometer-derived 2A12 precipitation data are known to suffer from many sources of uncertainty, spatial analysis over the case study region of India testifies that they are in excellent agreement with the reference estimates for the data period considered [Indu and Kumar 2015]. References: A. AghaKouchak and A. Mehran (2014), Extended contingency table: Performance metrics for satellite observations and climate model simulations, Water Resources Research, vol. 49, 7144-7149; J. Indu and D. Nagesh Kumar (2015), Evaluation of Precipitation Retrievals from Orbital Data Products of TRMM over a Subtropical basin in India, IEEE Transactions on Geoscience and Remote Sensing, in press, doi: 10.1109/TGRS.2015.2440338.
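A small sketch of fitting a multiplicative error model of the kind mentioned in facet b), using synthetic paired rain rates in place of collocated satellite and reference data; the numbers are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic "reference" and "satellite" rain rates (mm/h); a real analysis would
    # pair orbital retrievals with gauge or ground-radar references.
    truth = rng.gamma(shape=1.5, scale=4.0, size=2000)
    satellite = truth * np.exp(rng.normal(-0.05, 0.35, truth.size))  # multiplicative error

    # Multiplicative error model: log(satellite) = log(truth) + eps
    eps = np.log(satellite) - np.log(truth)
    print(f"systematic (bias) component: {eps.mean():+.3f} in log space "
          f"(~{100*(np.exp(eps.mean())-1):+.1f}% on average)")
    print(f"random component (sd of eps): {eps.std(ddof=1):.3f}")

    # Simple check of homoscedasticity: spread of eps in low vs. high rain-rate halves
    median = np.median(truth)
    print(f"sd(eps | low rates)  = {eps[truth <= median].std(ddof=1):.3f}")
    print(f"sd(eps | high rates) = {eps[truth > median].std(ddof=1):.3f}")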
Lindner, Marcus; Fitzgerald, Joanne B; Zimmermann, Niklaus E; Reyer, Christopher; Delzon, Sylvain; van der Maaten, Ernst; Schelhaas, Mart-Jan; Lasch, Petra; Eggers, Jeannette; van der Maaten-Theunissen, Marieke; Suckow, Felicitas; Psomas, Achilleas; Poulter, Benjamin; Hanewinkel, Marc
2014-12-15
The knowledge about potential climate change impacts on forests is continuously expanding and some changes in growth, drought-induced mortality and species distribution have been observed. However, despite a significant body of research, a knowledge and communication gap exists between scientists and non-scientists as to how climate change impact scenarios can be interpreted and what they imply for European forests. It is still challenging to advise forest decision makers on how best to plan for climate change, as many uncertainties and unknowns remain and it is difficult to communicate these to practitioners and other decision makers while retaining emphasis on the importance of planning for adaptation. In this paper, recent developments in climate change observations and projections, observed and projected impacts on European forests and the associated uncertainties are reviewed and synthesised with a view to understanding the implications for forest management. Current impact assessments with simulation models contain several simplifications, which explain the discrepancy between the results of many simulation studies and the rapidly increasing body of evidence about already observed changes in forest productivity and species distribution. In simulation models, uncertainties tend to cascade onto one another, from estimating what future societies will be like and general circulation models (GCMs) at the global level, down to forest models and forest management at the local level. Individual climate change impact studies should not be uncritically used for decision-making without reflection on possible shortcomings in system understanding, model accuracy and other assumptions made. It is important for decision makers in forest management to realise that they have to take long-lasting management decisions while uncertainty about climate change impacts is still large. We discuss how to communicate about uncertainty - which is imperative for decision making - without diluting the overall message. Considering the range of possible trends and uncertainties in adaptive forest management requires expert knowledge and enhanced efforts for providing science-based decision support. Copyright © 2014 Elsevier Ltd. All rights reserved.
Rotorcraft control system design for uncertain vehicle dynamics using quantitative feedback theory
NASA Technical Reports Server (NTRS)
Hess, R. A.
1994-01-01
Quantitative Feedback Theory describes a frequency-domain technique for the design of multi-input, multi-output control systems which must meet time or frequency domain performance criteria when specified uncertainty exists in the linear description of the vehicle dynamics. This theory is applied to the design of the longitudinal flight control system for a linear model of the BO-105C rotorcraft. Uncertainty in the vehicle model is due to the variation in the vehicle dynamics over a range of airspeeds from 0-100 kts. For purposes of exposition, the vehicle description contains no rotor or actuator dynamics. The design example indicates the manner in which significant uncertainty exists in the vehicle model. The advantage of using a sequential loop closure technique to reduce the cost of feedback is demonstrated by example.
Data-Driven Model Uncertainty Estimation in Hydrologic Data Assimilation
NASA Astrophysics Data System (ADS)
Pathiraja, S.; Moradkhani, H.; Marshall, L.; Sharma, A.; Geenens, G.
2018-02-01
The increasing availability of earth observations necessitates mathematical methods to optimally combine such data with hydrologic models. Several algorithms exist for such purposes, under the umbrella of data assimilation (DA). However, DA methods are often applied in a suboptimal fashion for complex real-world problems, due largely to several practical implementation issues. One such issue is error characterization, which is known to be critical for a successful assimilation. Mischaracterized errors lead to suboptimal forecasts, and in the worst case, to degraded estimates even compared to the no assimilation case. Model uncertainty characterization has received little attention relative to other aspects of DA science. Traditional methods rely on subjective, ad hoc tuning factors or parametric distribution assumptions that may not always be applicable. We propose a novel data-driven approach (named SDMU) to model uncertainty characterization for DA studies where (1) the system states are partially observed and (2) minimal prior knowledge of the model error processes is available, except that the errors display state dependence. It includes an approach for estimating the uncertainty in hidden model states, with the end goal of improving predictions of observed variables. The SDMU is therefore suited to DA studies where the observed variables are of primary interest. Its efficacy is demonstrated through a synthetic case study with low-dimensional chaotic dynamics and a real hydrologic experiment for one-day-ahead streamflow forecasting. In both experiments, the proposed method leads to substantial improvements in the hidden states and observed system outputs over a standard method involving perturbation with Gaussian noise.
Some Open Issues on Rockfall Hazard Analysis in Fractured Rock Mass: Problems and Prospects
NASA Astrophysics Data System (ADS)
Ferrero, Anna Maria; Migliazza, Maria Rita; Pirulli, Marina; Umili, Gessica
2016-09-01
Risk is part of every sector of engineering design. It is a consequence of the uncertainties connected with cognitive boundaries and with the natural variability of the relevant variables. In soil and rock engineering, in particular, uncertainties are linked to geometrical and mechanical aspects and to the model used for the problem schematization. While the uncertainties due to cognitive gaps could be filled by improving the quality of numerical codes and measuring instruments, nothing can be done to remove the randomness of natural variables, except defining their variability with stochastic approaches. Probabilistic analyses represent a useful tool to run parametric analyses and to identify the more significant aspects of a given phenomenon: they can be used for a rational quantification and mitigation of risk. The connection between the cognitive level and the probability of failure is at the base of the determination of hazard, which is often quantified through the assignment of safety factors. But these factors suffer from conceptual limits, which can only be overcome by adopting mathematically sound techniques that have not been widely used up to now (Einstein et al. in Rock Mechanics in Civil and Environmental Engineering, CRC Press, London, 3-13, 2010; Brown in J Rock Mech Geotech Eng 4(3):193-204, 2012). The present paper describes the problems and the more reliable techniques used to quantify the uncertainties that characterize the large number of parameters involved in rock slope hazard assessment, through a real case specifically related to rockfall. Limits of the existing approaches and future developments of the research are also provided.
Long-Period Tidal Variations in the Length of Day
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Erofeeva, Svetlana Y.
2014-01-01
A new model of long-period tidal variations in length of day is developed. The model comprises 80 spectral lines with periods between 18.6 years and 4.7 days, and it consistently includes effects of mantle anelasticity and dynamic ocean tides for all lines. The anelastic properties follow Wahr and Bergen; experimental confirmation of their results now exists at the fortnightly period, but there remains uncertainty when extrapolating to the longest periods. The ocean modeling builds on recent work with the fortnightly constituent, which suggests that oceanic tidal angular momentum can be reliably predicted at these periods without data assimilation. This is a critical property when modeling most long-period tides, for which little observational data exist. Dynamic ocean effects are quite pronounced at the shortest periods, as out-of-phase rotation components become nearly as large as in-phase components. The model is tested against a 20 year time series of space geodetic measurements of length of day. The current international standard model is shown to leave significant residual tidal energy, and the new model is found to mostly eliminate that energy, with especially large variance reduction for constituents Sa, Ssa, Mf, and Mt.
The mean density and two-point correlation function for the CfA redshift survey slices
NASA Technical Reports Server (NTRS)
De Lapparent, Valerie; Geller, Margaret J.; Huchra, John P.
1988-01-01
The effect of large-scale inhomogeneities on the determination of the mean number density and the two-point spatial correlation function were investigated for two complete slices of the extension of the Center for Astrophysics (CfA) redshift survey (de Lapparent et al., 1986). It was found that the mean galaxy number density for the two strips is uncertain by 25 percent, more so than previously estimated. The large uncertainty in the mean density introduces substantial uncertainty in the determination of the two-point correlation function, particularly at large scale; thus, for the 12-deg slice of the CfA redshift survey, the amplitude of the correlation function at intermediate scales is uncertain by a factor of 2. The large uncertainties in the correlation functions might reflect the lack of a fair sample.
Sensitivity Analysis of Expected Wind Extremes over the Northwestern Sahara and High Atlas Region.
NASA Astrophysics Data System (ADS)
Garcia-Bustamante, E.; González-Rouco, F. J.; Navarro, J.
2017-12-01
A robust statistical framework in the scientific literature allows for the estimation of probabilities of occurrence of severe wind speeds and wind gusts, but it does not prevent large uncertainties from being associated with the particular numerical estimates. An analysis of such uncertainties is thus required. A large portion of this uncertainty arises from the fact that historical observations are inherently shorter than the timescales of interest for the analysis of return periods. Additional uncertainties stem from the different choices of probability distributions and other aspects related to methodological issues or physical processes involved. The present study is based on historical observations over the Ouarzazate Valley (Morocco) and on a high-resolution regional simulation of the wind in the area of interest. The aim is to provide extreme wind speed and wind gust return values and confidence ranges based on a systematic sampling of the uncertainty space for return periods up to 120 years.
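One standard ingredient of such an analysis is a generalized extreme value (GEV) fit to annual maxima, with the sampling uncertainty of the return level expressed by resampling. The sketch below uses synthetic annual maxima and scipy's genextreme; it is not the authors' sampling of the uncertainty space, only an illustration of the return-level calculation.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(4)

    # Synthetic annual maximum wind gusts (m/s); a real analysis would use station
    # or model output for the area of interest.
    annual_maxima = genextreme.rvs(c=-0.1, loc=28.0, scale=4.0, size=40, random_state=rng)

    def return_level(data, period_years):
        c, loc, scale = genextreme.fit(data)
        return genextreme.ppf(1.0 - 1.0 / period_years, c, loc=loc, scale=scale)

    # Bootstrap to express the sampling uncertainty of the 120-year return value
    boot = [return_level(rng.choice(annual_maxima, annual_maxima.size, replace=True), 120)
            for _ in range(500)]
    lo, hi = np.percentile(boot, [5, 95])
    print(f"120-yr return level: {return_level(annual_maxima, 120):.1f} m/s "
          f"(90% bootstrap range {lo:.1f}-{hi:.1f} m/s)")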
Conditional uncertainty principle
NASA Astrophysics Data System (ADS)
Gour, Gilad; Grudka, Andrzej; Horodecki, Michał; Kłobus, Waldemar; Łodyga, Justyna; Narasimhachar, Varun
2018-04-01
We develop a general operational framework that formalizes the concept of conditional uncertainty in a measure-independent fashion. Our formalism is built upon a mathematical relation which we call conditional majorization. We define conditional majorization and, for the case of classical memory, we provide its thorough characterization in terms of monotones, i.e., functions that preserve the partial order under conditional majorization. We demonstrate the application of this framework by deriving two types of memory-assisted uncertainty relations, (1) a monotone-based conditional uncertainty relation and (2) a universal measure-independent conditional uncertainty relation, both of which set a lower bound on the minimal uncertainty that Bob has about Alice's pair of incompatible measurements, conditioned on an arbitrary measurement that Bob makes on his own system. We next compare the obtained relations with their existing entropic counterparts and find that they are at least independent.
Roberti, Joshua A.; SanClements, Michael D.; Loescher, Henry W.; Ayres, Edward
2014-01-01
Even though fine-root turnover is a highly studied topic, it is often poorly understood as a result of uncertainties inherent in its sampling, e.g., quantifying spatial and temporal variability. While many methods exist to quantify fine-root turnover, use of minirhizotrons has increased over the last two decades, making sensor errors another source of uncertainty. Currently, no standardized methodology exists to test and compare minirhizotron camera capability, imagery, and performance. This paper presents a reproducible, laboratory-based method by which minirhizotron cameras can be tested and validated in a traceable manner. The performance of camera characteristics was identified and test criteria were developed: we quantified the precision of camera location for successive images, estimated the trueness and precision of each camera's ability to quantify root diameter and root color, and also assessed the influence of heat dissipation introduced by the minirhizotron cameras and electrical components. We report detailed and defensible metrology analyses that examine the performance of two commercially available minirhizotron cameras. These cameras performed differently with regard to the various test criteria and uncertainty analyses. We recommend a defensible metrology approach to quantify the performance of minirhizotron camera characteristics and determine sensor-related measurement uncertainties prior to field use. This approach is also extensible to other digital imagery technologies. In turn, these approaches facilitate a greater understanding of measurement uncertainties (signal-to-noise ratio) inherent in the camera performance and allow such uncertainties to be quantified and mitigated so that estimates of fine-root turnover can be more confidently quantified. PMID:25391023
New Capitalism, Risk, and Subjectification in an Early Childhood Classroom
ERIC Educational Resources Information Center
Bialostok, Steve; Kamberelis, George
2010-01-01
"New capitalism" has been characterized as an economic period in which insecurity, flux, and uncertainty exist in the workplace. Capitalism attempts to tame that uncertainty through risk taking. Taking risks has become what one must do with risk. Economic discourses of embracing risk--thoroughly grounded in the ideologies of neoliberalism--are…
NASA Technical Reports Server (NTRS)
Pinelli, Thomas E.; Glassman, Nanci A.; Affelder, Linda O.; Hecht, Laura M.; Kennedy, John M.; Barclay, Rebecca O.
1993-01-01
An exploratory study was conducted that investigated the influence of technical uncertainty and project complexity on information use by U.S. industry-affiliated aerospace engineers and scientists. The study utilized survey research in the form of a self-administered mail questionnaire. U.S. aerospace engineers and scientists on the Society of Automotive Engineers (SAE) mailing list served as the study population. The adjusted response rate was 67 percent. The survey instrument is appendix C to this report. Statistically significant relationships were found to exist between technical uncertainty, project complexity, and information use. Statistically significant relationships were found to exist between technical uncertainty, project complexity, and the use of federally funded aerospace R&D. The results of this investigation are relevant to researchers investigating information-seeking behavior of aerospace engineers. They are also relevant to R&D managers and policy planners concerned with transferring the results of federally funded aerospace R&D to the U.S. aerospace industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Disney, R.K.
1994-10-01
The methodology for handling bias and uncertainty when calculational methods are used in criticality safety evaluations (CSEs) is a rapidly evolving technology. The changes in the methodology are driven by a number of factors. One factor responsible for changes in the methodology for handling bias and uncertainty in CSEs within the overview of the US Department of Energy (DOE) is a shift in the overview function from a "site" perception to a more uniform or "national" perception. Other causes for change or improvement in the methodology for handling calculational bias and uncertainty are: (1) an increased demand for benchmark criticals data to expand the area (range) of applicability of existing data, (2) a demand for new data to supplement existing benchmark criticals data, (3) the increased reliance on (or need for) computational benchmarks which supplement (or replace) experimental measurements in critical assemblies, and (4) an increased demand for benchmark data applicable to the expanded range of conditions and configurations encountered in DOE site restoration and remediation.
Operations research in intensive care unit management: a literature review.
Bai, Jie; Fügener, Andreas; Schoenfelder, Jan; Brunner, Jens O
2018-03-01
The intensive care unit (ICU) is a crucial and expensive resource largely affected by uncertainty and variability. Insufficient ICU capacity causes many negative effects not only in the ICU itself, but also in other connected departments along the patient care path. Operations research/management science (OR/MS) plays an important role in identifying ways to manage ICU capacities efficiently and in ensuring desired levels of service quality. As a consequence, numerous papers on the topic exist. The goal of this paper is to provide the first structured literature review on how OR/MS may support ICU management. We start our review by illustrating the important role the ICU plays in the hospital patient flow. Then we focus on the ICU management problem (single department management problem) and classify the literature from multiple angles, including decision horizons, problem settings, and modeling and solution techniques. Based on the classification logic, research gaps and opportunities are highlighted, e.g., combining bed capacity planning and personnel scheduling, modeling uncertainty with non-homogenous distribution functions, and exploring more efficient solution approaches.
BCM: toolkit for Bayesian analysis of Computational Models using samplers.
Thijssen, Bram; Dijkstra, Tjeerd M H; Heskes, Tom; Wessels, Lodewyk F A
2016-10-21
Computational models in biology are characterized by a large degree of uncertainty. This uncertainty can be analyzed with Bayesian statistics, however, the sampling algorithms that are frequently used for calculating Bayesian statistical estimates are computationally demanding, and each algorithm has unique advantages and disadvantages. It is typically unclear, before starting an analysis, which algorithm will perform well on a given computational model. We present BCM, a toolkit for the Bayesian analysis of Computational Models using samplers. It provides efficient, multithreaded implementations of eleven algorithms for sampling from posterior probability distributions and for calculating marginal likelihoods. BCM includes tools to simplify the process of model specification and scripts for visualizing the results. The flexible architecture allows it to be used on diverse types of biological computational models. In an example inference task using a model of the cell cycle based on ordinary differential equations, BCM is significantly more efficient than existing software packages, allowing more challenging inference problems to be solved. BCM represents an efficient one-stop-shop for computational modelers wishing to use sampler-based Bayesian statistics.
Limits of computational biology.
Bray, Dennis
2015-01-01
Are we close to a complete inventory of living processes so that we might expect in the near future to reproduce every essential aspect necessary for life? Or are there mechanisms and processes in cells and organisms that are presently inaccessible to us? Here I argue that a close examination of a particularly well-understood system--that of Escherichia coli chemotaxis--shows we are still a long way from a complete description. There is a level of molecular uncertainty, particularly that responsible for fine-tuning and adaptation to myriad external conditions, which we presently cannot resolve or reproduce on a computer. Moreover, the same uncertainty exists for any process in any organism and is especially pronounced and important in higher animals such as humans. Embryonic development, tissue homeostasis, immune recognition, memory formation, and survival in the real world, all depend on vast numbers of subtle variations in cell chemistry most of which are presently unknown or only poorly characterized. Overcoming these limitations will require us to not only accumulate large quantities of highly detailed data but also develop new computational methods able to recapitulate the massively parallel processing of living cells.
NASA Astrophysics Data System (ADS)
Ménesguen, Y.; Gerlach, M.; Pollakowski, B.; Unterumsberger, R.; Haschke, M.; Beckhoff, B.; Lépy, M.-C.
2016-02-01
The knowledge of atomic fundamental parameters, such as mass attenuation coefficients with low uncertainties, is of decisive importance in elemental quantification using x-ray fluorescence analysis techniques. Several databases are accessible and frequently used within a large community of users. These compilations are most often in good agreement for photon energies in the hard x-ray range. However, they differ significantly for low photon energies and around the absorption edges of any element. In a joint cooperation of the metrology institutes of France and Germany, mass attenuation coefficients of copper and zinc were determined experimentally in the photon energy range from 100 eV to 30 keV by independent approaches using monochromatized synchrotron radiation at SOLEIL (France) and BESSY II (Germany), respectively. The application of high-accuracy experimental techniques resulted in mass attenuation coefficient datasets determined with low uncertainties that are directly compared to existing databases. The novel datasets are expected to enhance the reliability of mass attenuation coefficients.
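The underlying measurement principle is a transmission experiment analysed with the Beer-Lambert law; the sketch below shows the arithmetic with purely illustrative numbers (they are not measured values for Cu or Zn, and the propagation only covers counting statistics and areal density, not the full uncertainty budget of the synchrotron experiments).

    import math

    # Transmission through a foil of known areal density:
    #   I = I0 * exp(-(mu/rho) * rho_t)   =>   mu/rho = ln(I0 / I) / rho_t
    I0 = 1.00e6            # incident counts (illustrative)
    I  = 2.75e5            # transmitted counts (illustrative)
    rho_t = 0.0268         # areal density of the foil in g/cm^2 (assumed)

    mu_over_rho = math.log(I0 / I) / rho_t
    print(f"mass attenuation coefficient ~ {mu_over_rho:.1f} cm^2/g")

    # First-order uncertainty propagation from counting statistics and areal density
    u_I0, u_I = math.sqrt(I0), math.sqrt(I)        # Poisson counting uncertainties
    u_rho_t = 0.0002                               # assumed areal-density uncertainty
    L = math.log(I0 / I)
    u = mu_over_rho * math.sqrt((u_I / (I * L)) ** 2
                                + (u_I0 / (I0 * L)) ** 2
                                + (u_rho_t / rho_t) ** 2)
    print(f"relative standard uncertainty ~ {100 * u / mu_over_rho:.2f}%")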
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruskauff, Greg; Marutzky, Sam
Model evaluation focused solely on the PIN STRIPE and MILK SHAKE underground nuclear tests’ contaminant boundaries (CBs) because they had the largest extent, uncertainty, and potential consequences. The CAMBRIC radionuclide migration experiment also had a relatively large CB, but because it was constrained by transport data (notably Well UE-5n), there was little uncertainty, and radioactive decay reduced concentrations before much migration could occur. Each evaluation target and the associated data-collection activity were assessed in turn to determine whether the new data support, or demonstrate conservatism of, the CB forecasts. The modeling team—in this case, the same team that developed the Frenchman Flat geologic, source term, and groundwater flow and transport models—analyzed the new data and presented the results to a PER committee. Existing site understanding and its representation in numerical groundwater flow and transport models was evaluated in light of the new data and the ability to proceed to the CR stage of long-term monitoring and institutional control.
Water resources of the Black Sea Basin at high spatial and temporal resolution
NASA Astrophysics Data System (ADS)
Rouholahnejad, Elham; Abbaspour, Karim C.; Srinivasan, Raghvan; Bacu, Victor; Lehmann, Anthony
2014-07-01
The pressure on water resources, deteriorating water quality, and uncertainties associated with climate change create an environment of conflict in large and complex river systems. The Black Sea Basin (BSB), in particular, suffers from ecological unsustainability and inadequate resource management, leading to severe environmental, social, and economic problems. To better tackle future challenges, we used the Soil and Water Assessment Tool (SWAT) to model the hydrology of the BSB, coupling water quantity, water quality, and crop yield components. The hydrological model of the BSB was calibrated and validated considering sensitivity and uncertainty analysis. River discharges, nitrate loads, and crop yields were used to calibrate the model. Employing grid technology improved calibration computation time by more than an order of magnitude. We calculated components of water resources such as river discharge, infiltration, aquifer recharge, soil moisture, and actual and potential evapotranspiration. Furthermore, available water resources were calculated at subbasin spatial and monthly temporal levels. Within this framework, a comprehensive database of the BSB was created to fill the existing gaps in water resources data in the region. In this paper, we discuss the challenges of building a large-scale model in fine spatial and temporal detail. This study provides the basis for further research on the impacts of climate and land use change on water resources in the BSB.
Liu, Ming; Xu, Yang; Mohammed, Abdul-Wahid
2016-01-01
Limited communication resources have gradually become a critical factor limiting the efficiency of decentralized large-scale multi-agent coordination as systems scale up and tasks become more complex. In current research, owing to an agent's limited communication and observational capability, an agent in a decentralized setting can only choose a subset of channels to access, and cannot perceive or share global information. Each agent's cooperative decision is based on a partial observation of the system state, and as such, uncertainty in the communication network is unavoidable. In this situation, it is a major challenge to work out cooperative decision-making under uncertainty with only a partial observation of the environment. In this paper, we propose a decentralized approach that allows agents to cooperatively search and independently choose channels. The key to our design is to build an up-to-date observation for each agent's view so that a local decision model is achievable in large-scale team coordination. We simplify the Dec-POMDP model of the problem, and each agent can jointly work out its communication policy in order to improve its local decision utilities for the choice of communication resources. Finally, we discuss an implicit resource competition game and show that there exists an approximate trade-off balance in resource access between agents. Based on this finding, the balance between real-time decision-making and the efficiency of cooperation over these channels can be improved.
Li, Zhongshu; Liu, Junfeng; Mauzerall, Denise L.; Li, Xiaoyuan; Fan, Songmiao; Horowitz, Larry W.; He, Cenlin; Yi, Kan; Tao, Shu
2017-01-01
Black carbon (BC) aerosol strongly absorbs solar radiation, which warms climate. However, accurate estimation of BC’s climate effect is limited by the uncertainties of its spatiotemporal distribution, especially over remote oceanic areas. The HIAPER Pole-to-Pole Observation (HIPPO) program from 2009 to 2011 intercepted multiple snapshots of BC profiles over Pacific in various seasons, and revealed a 2 to 5 times overestimate of BC by current global models. In this study, we compared the measurements from aircraft campaigns and satellites, and found a robust association between BC concentrations and satellite-retrieved CO, tropospheric NO2, and aerosol optical depth (AOD) (R2 > 0.8). This establishes a basis to construct a satellite-based column BC approximation (sBC*) over remote oceans. The inferred sBC* shows that Asian outflows in spring bring much more BC aerosols to the mid-Pacific than those occurring in other seasons. In addition, inter-annual variability of sBC* is seen over the Northern Pacific, with abundances varying consistently with the springtime Pacific/North American (PNA) index. Our sBC* dataset infers a widespread overestimation of BC loadings and BC Direct Radiative Forcing by current models over North Pacific, which further suggests that large uncertainties exist on aerosol-climate interactions over other remote oceanic areas beyond Pacific. PMID:28266532
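The statistical backbone of the sBC* construction is a regression of observed BC against satellite-retrieved proxies; a minimal sketch with synthetic collocated data and ordinary least squares is shown below (the coefficients, noise level, and the implied R^2 are assumptions, not values from the HIPPO analysis).

    import numpy as np

    rng = np.random.default_rng(5)
    n = 300

    # Synthetic collocated predictors standing in for satellite-retrieved CO,
    # tropospheric NO2 and AOD, and a synthetic "aircraft" BC column.
    co, no2, aod = rng.lognormal(0, 0.3, n), rng.lognormal(0, 0.4, n), rng.lognormal(-1, 0.5, n)
    bc = 0.8 * co + 1.5 * no2 + 2.0 * aod + rng.normal(0, 0.15, n)

    # Least-squares fit: bc ~ b0 + b1*CO + b2*NO2 + b3*AOD
    X = np.column_stack([np.ones(n), co, no2, aod])
    coef, *_ = np.linalg.lstsq(X, bc, rcond=None)
    pred = X @ coef
    r2 = 1 - np.sum((bc - pred) ** 2) / np.sum((bc - bc.mean()) ** 2)
    print("coefficients:", np.round(coef, 3), f" R^2 = {r2:.3f}")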
NASA Astrophysics Data System (ADS)
Carella, G.; Kennedy, J. J.; Berry, D. I.; Hirahara, S.; Merchant, C. J.; Morak-Bozzo, S.; Kent, E. C.
2018-01-01
Lack of reliable observational metadata represents a key barrier to understanding sea surface temperature (SST) measurement biases, a large contributor to uncertainty in the global surface record. We present a method to identify SST measurement practice by comparing the observed SST diurnal cycle from individual ships with a reference from drifting buoys under similar conditions of wind and solar radiation. Compared to existing estimates, we found a larger number of engine room-intake (ERI) reports post-World War II and in the period 1960-1980. Differences in the inferred mixture of observations lead to a systematic warmer shift of the bias adjusted SST anomalies from 1980 compared to previous estimates, while reducing the ensemble spread. Changes in mean field differences between bucket and ERI SST anomalies in the Northern Hemisphere over the period 1955-1995 could be as large as 0.5°C and are not well reproduced by current bias adjustment models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heo, Yeonsook; Augenbroe, Godfried; Graziano, Diane
2015-05-01
The increasing interest in retrofitting of existing buildings is motivated by the need to make a major contribution to enhancing building energy efficiency and reducing energy consumption and CO2 emission by the built environment. This paper examines the relevance of calibration in model-based analysis to support decision-making for energy and carbon efficiency retrofits of individual buildings and portfolios of buildings. The authors formulate a set of real retrofit decision-making situations and evaluate the role of calibration by using a case study that compares predictions and decisions from an uncalibrated model with those of a calibrated model. The case study illustratesmore » both the mechanics and outcomes of a practical alternative to the expert- and time-intense application of dynamic energy simulation models for large-scale retrofit decision-making under uncertainty.« less
Assessment of Laminar, Convective Aeroheating Prediction Uncertainties for Mars Entry Vehicles
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Prabhu, Dinesh K.
2011-01-01
An assessment of computational uncertainties is presented for numerical methods used by NASA to predict laminar, convective aeroheating environments for Mars entry vehicles. A survey was conducted of existing experimental heat-transfer and shock-shape data for high enthalpy, reacting-gas CO2 flows and five relevant test series were selected for comparison to predictions. Solutions were generated at the experimental test conditions using NASA state-of-the-art computational tools and compared to these data. The comparisons were evaluated to establish predictive uncertainties as a function of total enthalpy and to provide guidance for future experimental testing requirements to help lower these uncertainties.
Assessment of Laminar, Convective Aeroheating Prediction Uncertainties for Mars-Entry Vehicles
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Prabhu, Dinesh K.
2013-01-01
An assessment of computational uncertainties is presented for numerical methods used by NASA to predict laminar, convective aeroheating environments for Mars-entry vehicles. A survey was conducted of existing experimental heat transfer and shock-shape data for high-enthalpy reacting-gas CO2 flows, and five relevant test series were selected for comparison with predictions. Solutions were generated at the experimental test conditions using NASA state-of-the-art computational tools and compared with these data. The comparisons were evaluated to establish predictive uncertainties as a function of total enthalpy and to provide guidance for future experimental testing requirements to help lower these uncertainties.
Real options analysis for land use management: Methods, application, and implications for policy.
Regan, Courtney M; Bryan, Brett A; Connor, Jeffery D; Meyer, Wayne S; Ostendorf, Bertram; Zhu, Zili; Bao, Chenming
2015-09-15
Discounted cash flow analysis, including net present value, is an established way to value land use and management investments which accounts for the time-value of money. However, it provides a static view and assumes passive commitment to an investment strategy, whereas real-world land use and management investment decisions are characterised by uncertainty, irreversibility, change, and adaptation. Real options analysis has been proposed as a better valuation method under uncertainty and where the opportunity exists to delay investment decisions pending more information. We briefly review the use of discounted cash flow methods in land use and management and discuss their benefits and limitations. We then provide an overview of real options analysis, describe the main analytical methods, and summarize its application to land use investment decisions. Real options analysis is largely underutilized in evaluating land use decisions, despite uncertainty in policy and economic drivers and the irreversibility and sunk costs involved. New simulation methods offer the potential for overcoming current technical challenges to implementation, as demonstrated with a real options simulation model used to evaluate an agricultural land use decision in South Australia. We conclude that considering option values in future policy design will provide a more realistic assessment of landholder investment decision making and provide insights for improved policy performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
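To make the contrast with static net present value concrete, the sketch below values the option to defer an irreversible investment with a standard Cox-Ross-Rubinstein binomial lattice (an American call on project value with strike equal to the investment cost). The project value, cost, volatility and discount rate are hypothetical, and the risk-neutral valuation assumes the project value can be treated like a traded asset; this is a generic textbook construction, not the simulation model used in the paper.

    import numpy as np

    def deferral_option_value(v0, invest, r, sigma, years, steps=200):
        """Value of the option to invest at any time up to `years`,
        Cox-Ross-Rubinstein binomial lattice with early exercise."""
        dt = years / steps
        u = np.exp(sigma * np.sqrt(dt))
        d = 1.0 / u
        p = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up probability
        disc = np.exp(-r * dt)

        # Project values at the final step
        v = v0 * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
        value = np.maximum(v - invest, 0.0)

        # Backward induction: at each node, invest now or keep the option alive
        for _ in range(steps):
            v = v[:-1] * d                        # project values one step earlier
            value = np.maximum(v - invest,
                               disc * (p * value[:-1] + (1 - p) * value[1:]))
        return value[0]

    v0, invest = 1.00e6, 0.95e6                   # hypothetical project value and cost
    print(f"static NPV:          {v0 - invest:,.0f}")
    print(f"value with deferral: "
          f"{deferral_option_value(v0, invest, r=0.05, sigma=0.25, years=5):,.0f}")

The gap between the two numbers is the value of flexibility that a plain discounted cash flow analysis ignores.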
Flexible word meaning in embodied agents
NASA Astrophysics Data System (ADS)
Wellens, Peter; Loetzsch, Martin; Steels, Luc
2008-06-01
Learning the meanings of words requires coping with referential uncertainty - a learner hearing a novel word cannot be sure which aspects or properties of the referred object or event comprise the meaning of the word. Data from developmental psychology suggest that human learners grasp the important aspects of many novel words after just a few exposures, a phenomenon known as fast mapping. Traditionally, word learning is viewed as a mapping task, in which the learner has to map a set of forms onto a set of pre-existing concepts. We criticise this approach and argue instead for a flexible nature of the coupling between form and meanings as a solution to the problem of referential uncertainty. We implemented and tested the model in populations of humanoid robots that play situated language games about objects in their shared environment. Results show that the model can handle an exponential increase in uncertainty and allows scaling towards very large meaning spaces, while retaining the ability to grasp an operational meaning almost instantly for a great number of words. In addition, the model captures some aspects of the flexibility of form-meaning associations found in human languages. Meanings of words can shift between being very specific (names) and general (e.g. 'small'). We show that this specificity is biased not by the model itself but by the distribution of object properties in the world.
Evaporation estimates from the Dead Sea and their implications on its water balance
NASA Astrophysics Data System (ADS)
Oroud, Ibrahim M.
2011-12-01
The Dead Sea (DS) is a terminal hypersaline water body situated in the deepest part of the Jordan Valley. There is growing interest in linking the DS to the open seas due to severe water shortages in the area and the serious geological and environmental hazards to its vicinity caused by the rapid drop in the level of the DS. A key issue in linking the DS with the open seas would be an accurate determination of evaporation rates. There are large uncertainties in evaporation estimates from the DS due to the complex feedback mechanisms between meteorological forcings and the thermophysical properties of hypersaline solutions. Numerous methods have been used to estimate current and historical (pre-1960) evaporation rates, with estimates differing by ~100%. Evaporation from the DS is usually deduced indirectly using energy balance, water balance, or pan methods, with uncertainty in many parameters. Accumulated errors resulting from these uncertainties are usually pooled into the estimates of evaporation rates. In this paper, a physically based method with a minimum of empirical parameters is used to evaluate historical and current evaporation estimates from the DS. The more likely figures for historical and current evaporation rates from the DS are 1,500-1,600 and 1,200-1,250 mm per annum, respectively. The results obtained are congruent with field observations and with more elaborate procedures.
Cierkens, Katrijn; Plano, Salvatore; Benedetti, Lorenzo; Weijers, Stefan; de Jonge, Jarno; Nopens, Ingmar
2012-01-01
Application of activated sludge models (ASMs) to full-scale wastewater treatment plants (WWTPs) is still hampered by the problem of model calibration of these over-parameterised models. This either requires expert knowledge or global methods that explore a large parameter space. However, a better balance in structure between the submodels (ASM, hydraulic, aeration, etc.) and improved quality of influent data result in much smaller calibration efforts. In this contribution, a methodology is proposed that links data frequency and model structure to calibration quality and output uncertainty. It is composed of defining the model structure, the input data, an automated calibration, confidence interval computation and uncertainty propagation to the model output. Apart from the last step, the methodology is applied to an existing WWTP using three models differing only in the aeration submodel. A sensitivity analysis was performed on all models, allowing the ranking of the most important parameters to select in the subsequent calibration step. The aeration submodel proved very important to get good NH(4) predictions. Finally, the impact of data frequency was explored. Lowering the frequency resulted in larger deviations of parameter estimates from their default values and larger confidence intervals. Autocorrelation due to high frequency calibration data has an opposite effect on the confidence intervals. The proposed methodology opens doors to facilitate and improve calibration efforts and to design measurement campaigns.
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
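The core idea of a stochastic reduced order model, representing an uncertain input with a small set of weighted samples whose weights are optimized to match the statistics of the target distribution, can be sketched as follows. The lognormal input, the fixed sample locations and the simple moment/CDF objective are illustrative assumptions rather than the formulation used by the authors.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import lognorm

    # Target random input (e.g., an uncertain material property); assumed lognormal.
    target = lognorm(s=0.25, scale=1.0)
    m = 7                                       # SROM size: a handful of samples
    x = target.ppf(np.linspace(0.08, 0.92, m))  # fixed sample locations (quantiles)

    def objective(p):
        """Mismatch in the first two moments and in the CDF at the sample points."""
        mean_err = (p @ x - target.mean()) ** 2
        var_err  = (p @ (x - target.mean()) ** 2 - target.var()) ** 2
        cdf_err  = np.sum((np.cumsum(p) - target.cdf(x)) ** 2)
        return mean_err + var_err + cdf_err

    p0 = np.full(m, 1.0 / m)
    res = minimize(objective, p0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * m,
                   constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
    print("samples:", np.round(x, 3))
    print("weights:", np.round(res.x, 3))
    # The deterministic model is then run only m times; output statistics are
    # reassembled with these weights instead of thousands of Monte Carlo runs.

Because the downstream optimization only ever sees a handful of deterministic model calls with fixed weights, existing deterministic solvers and optimizers can be reused unchanged, which is the point made in the abstract.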
Opportunistic mobile air pollution monitoring: A case study with city wardens in Antwerp
NASA Astrophysics Data System (ADS)
Van den Bossche, Joris; Theunis, Jan; Elen, Bart; Peters, Jan; Botteldooren, Dick; De Baets, Bernard
2016-09-01
The goal of this paper is to explore the potential of opportunistic mobile monitoring to map the exposure to air pollution in the urban environment at a high spatial resolution. Opportunistic mobile monitoring makes use of existing mobile infrastructure or people's common daily routines to move measurement devices around. Opportunistic mobile monitoring can also play a crucial role in participatory monitoring campaigns as a typical way to gather data. A case study to measure black carbon was set up in Antwerp, Belgium, with the collaboration of city employees (city wardens). The Antwerp city wardens are outdoors for a large part of the day on surveillance tours by bicycle or on foot, and gathered a total of 393 h of measurements. The data collection is unstructured both in space and time, leading to sampling bias. A temporal adjustment can only partly counteract this bias. Although a high spatial coverage was obtained, there is still a rather large uncertainty on the average concentration levels at a spatial resolution of 50 m due to a limited number of measurements and sampling bias. Despite of this uncertainty, large spatial patterns within the city are clearly captured. This study illustrates the potential of campaigns with unstructured opportunistic mobile monitoring, including participatory monitoring campaigns. The results demonstrate that such an approach can indeed be used to identify broad spatial trends over a wider area, enabling applications including hotspot identification, personal exposure studies, regression mapping, etc. But, they also emphasize the need for repeated measurements and careful processing and interpretation of the data.
Tarn, Derjung M; Paterniti, Debora A; Wenger, Neil S
2016-08-01
Little is known about how providers communicate recommendations when scientific uncertainty exists. To compare provider recommendations to those in the scientific literature, with a focus on whether uncertainty was communicated. Qualitative (inductive systematic content analysis) and quantitative analysis of previously collected audio-recorded provider-patient office visits. Sixty-one providers and a socio-economically diverse convenience sample of 603 of their patients from outpatient community- and academic-based primary care, integrative medicine, and complementary and alternative medicine provider offices in Southern California. Comparison of provider information-giving about vitamin D to professional guidelines and scientific information for which conflicting recommendations or insufficient scientific evidence exists; certainty with which information was conveyed. Ninety-two (15.3 %) of 603 visit discussions touched upon issues related to vitamin D testing, management and benefits. Vitamin D deficiency screening was discussed with 23 (25 %) patients, the definition of vitamin D deficiency with 21 (22.8 %), the optimal range for vitamin D levels with 26 (28.3 %), vitamin D supplementation dosing with 50 (54.3 %), and benefits of supplementation with 46 (50 %). For each of the professional guidelines/scientific information examined, providers conveyed information that deviated from professional guidelines and the existing scientific evidence. Of 166 statements made about vitamin D in this study, providers conveyed 160 (96.4 %) with certainty, without mention of any equivocal or contradictory evidence in the scientific literature. No uncertainty was mentioned when vitamin D dosing was discussed, even when recommended dosing was higher than guideline recommendations. Providers convey the vast majority of information and recommendations about vitamin D with certainty, even though the scientific literature contains inconsistent recommendations and declarations of inadequate evidence. Not communicating uncertainty blurs the contrast between evidence-based recommendations and those without evidence. Providers should explore best practices for involving patients in decision-making by acknowledging the uncertainty behind their recommendations.
NASA Technical Reports Server (NTRS)
Tang, Ling; Hossain, Faisal; Huffman, George J.
2010-01-01
Hydrologists and other users need to know the uncertainty of satellite rainfall data sets across the range of time/space scales over the whole domain of the data set. Here, 'uncertainty' refers to the general concept of the 'deviation' of an estimate from the reference (or ground truth), where the deviation may be defined in multiple ways. This uncertainty information can provide insight to the user on the realistic limits of utility, such as hydrologic predictability, that can be achieved with these satellite rainfall data sets. However, satellite rainfall uncertainty estimation requires ground validation (GV) precipitation data. On the other hand, satellite data will be most useful over regions that lack GV data, for example developing countries. This paper addresses the open issues in developing an appropriate uncertainty transfer scheme that can routinely estimate various uncertainty metrics across the globe by leveraging a combination of spatially dense GV data and temporally sparse surrogate (or proxy) GV data, such as the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar and the Global Precipitation Measurement (GPM) mission Dual-Frequency Precipitation Radar. The TRMM Multi-satellite Precipitation Analysis (TMPA) products over the US, spanning a record of 6 years, are used as a representative example of satellite rainfall. It is shown that there exists a quantifiable spatial structure in the uncertainty of satellite data for spatial interpolation. Probabilistic analysis of the sampling offered by the existing constellation of passive microwave sensors indicates that transfer of uncertainty for hydrologic applications may be effective at daily time scales or longer during the GPM era. Finally, a commonly used spatial interpolation technique (kriging), which leverages the spatial correlation of estimation uncertainty, is assessed at climatologic, seasonal, monthly and weekly timescales. It is found that the effectiveness of kriging is sensitive to the type of uncertainty metric, the time scale of transfer and the density of GV data within the transfer domain. Transfer accuracy is lowest at weekly timescales, with the error doubling from monthly to weekly. However, at very low GV data density (<20% of the domain), the transfer accuracy is too low to show any distinction as a function of the timescale of transfer.
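A minimal from-scratch sketch of the ordinary kriging step, interpolating an uncertainty metric from GV-covered locations to an ungauged point; the exponential semivariogram, its parameters, and the synthetic data are assumptions for illustration, not the variogram fitted to TMPA uncertainties.

    import numpy as np

    rng = np.random.default_rng(6)

    def variogram(h, nugget=0.05, sill=1.0, corr_len=150.0):
        """Assumed exponential semivariogram (h in km); gamma(0) = 0."""
        gamma = nugget + (sill - nugget) * (1.0 - np.exp(-h / corr_len))
        return np.where(h > 0, gamma, 0.0)

    def ordinary_kriging(xy, values, xy0):
        """Predict the value at xy0 from observations (xy, values)."""
        n = len(values)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = variogram(d)
        A[n, n] = 0.0
        b = np.ones(n + 1)
        b[:n] = variogram(np.linalg.norm(xy - xy0, axis=1))
        w = np.linalg.solve(A, b)       # kriging weights plus Lagrange multiplier
        return w[:n] @ values

    # Synthetic "uncertainty metric" known at GV-covered grid cells (coordinates in km)
    xy = rng.uniform(0, 500, size=(40, 2))
    vals = np.sin(xy[:, 0] / 120.0) + 0.1 * rng.standard_normal(40)
    print("kriged estimate at (250, 250):",
          round(ordinary_kriging(xy, vals, np.array([250.0, 250.0])), 3))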
Experimental results on chiral magnetic and vortical effects
Wang, Gang; Wen, Liwen
2017-01-12
Various novel transport phenomena in chiral systems result from the interplay of quantum anomalies with magnetic field and vorticity in high-energy heavy-ion collisions, and could survive the expansion of the fireball and be detected in experiments. Among them are the chiral magnetic effect, the chiral vortical effect, and the chiral magnetic wave, the experimental searches for which have aroused extensive interest. The goal of this review is to describe the current status of experimental studies at the Relativistic Heavy-Ion Collider at BNL and the Large Hadron Collider at CERN, and to outline the future experimental work needed to eliminate the existing uncertainties in the interpretation of the data.
Knotts, Thomas A.
2017-01-01
Molecular simulation has the ability to predict various physical properties that are difficult to obtain experimentally. For example, we implement molecular simulation to predict the critical constants (i.e., critical temperature, critical density, critical pressure, and critical compressibility factor) for large n-alkanes that thermally decompose experimentally (as large as C48). Historically, molecular simulation has been viewed as a tool that is limited to providing qualitative insight. One key reason for this perceived weakness in molecular simulation is the difficulty to quantify the uncertainty in the results. This is because molecular simulations have many sources of uncertainty that propagate and are difficult to quantify. We investigate one of the most important sources of uncertainty, namely, the intermolecular force field parameters. Specifically, we quantify the uncertainty in the Lennard-Jones (LJ) 12-6 parameters for the CH4, CH3, and CH2 united-atom interaction sites. We then demonstrate how the uncertainties in the parameters lead to uncertainties in the saturated liquid density and critical constant values obtained from Gibbs Ensemble Monte Carlo simulation. Our results suggest that the uncertainties attributed to the LJ 12-6 parameters are small enough that quantitatively useful estimates of the saturated liquid density and the critical constants can be obtained from molecular simulation. PMID:28527455
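As a simplified illustration of propagating LJ parameter uncertainty into a derived property, the sketch below samples assumed Gaussian uncertainties in a single united-atom site's sigma and epsilon and pushes them through the second virial coefficient (a much cheaper quantity than the GEMC-based saturated densities and critical constants used in the study). The nominal values are roughly TraPPE-like for a CH2 site, and the 1-sigma uncertainties are hypothetical.

    import numpy as np

    rng = np.random.default_rng(7)

    def b2_lj(eps_over_k, sigma, temperature, r_max_factor=10.0, n_pts=4000):
        """Second virial coefficient of a Lennard-Jones fluid,
        B2 = -2*pi * integral of (exp(-u(r)/kT) - 1) r^2 dr  (sigma in nm -> nm^3)."""
        r = np.linspace(1e-3 * sigma, r_max_factor * sigma, n_pts)
        u_over_k = 4.0 * eps_over_k * ((sigma / r) ** 12 - (sigma / r) ** 6)
        integrand = (np.exp(-u_over_k / temperature) - 1.0) * r ** 2
        return -2.0 * np.pi * np.trapz(integrand, r)

    # Assumed nominal united-atom parameters and hypothetical 1-sigma uncertainties
    eps_over_k, u_eps = 46.0, 1.5      # K
    sigma, u_sigma = 0.395, 0.003      # nm
    T = 400.0                          # K

    samples = np.array([b2_lj(rng.normal(eps_over_k, u_eps),
                              rng.normal(sigma, u_sigma), T)
                        for _ in range(500)])
    print(f"B2({T:.0f} K) = {samples.mean():.4f} +/- {samples.std(ddof=1):.4f} nm^3/molecule")

The same sampling loop, with each draw feeding a full Gibbs Ensemble Monte Carlo run instead of a virial integral, is the expensive version of the propagation the abstract describes.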
Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Yuan, Zibing; Russell, Armistead G; Ou, Jiamin; Zhong, Zhuangmin
2017-04-04
The traditional reduced-form model (RFM) based on the high-order decoupled direct method (HDDM) is an efficient uncertainty analysis approach for air quality models, but it has large biases in uncertainty propagation due to the limitation of the HDDM in predicting nonlinear responses to large perturbations of model inputs. To overcome this limitation, a new stepwise-based RFM method that combines several sets of local sensitivity coefficients under different conditions is proposed. Evaluations reveal that the new RFM improves the prediction of nonlinear responses. The new method is applied to quantify uncertainties in simulated PM2.5 concentrations in the Pearl River Delta (PRD) region of China as a case study. Results show that the average uncertainty range of hourly PM2.5 concentrations is -28% to 57%, which can cover approximately 70% of the observed PM2.5 concentrations, while the traditional RFM underestimates the upper bound of the uncertainty range by 1-6%. Using a variance-based method, the PM2.5 boundary conditions and primary PM2.5 emissions are found to be the two major uncertainty sources in PM2.5 simulations. The new RFM better quantifies the uncertainty range in model simulations and can be applied to improve applications that rely on uncertainty information.
Exact results for the finite time thermodynamic uncertainty relation
NASA Astrophysics Data System (ADS)
Manikandan, Sreekanth K.; Krishnamurthy, Supriya
2018-03-01
We obtain exact results for the recently discovered finite-time thermodynamic uncertainty relation, for the dissipated work W_d, in a stochastically driven system with non-Gaussian work statistics, both in the steady state and transient regimes, by obtaining exact expressions for any moment of W_d at arbitrary times. The uncertainty function (the Fano factor of W_d) is bounded from below by 2k_B T as expected, for all times τ, in both steady state and transient regimes. The lower bound is reached at τ = 0 as well as when certain system parameters vanish (corresponding to an equilibrium state). Surprisingly, we find that the uncertainty function also reaches a constant value at large τ for all the cases we have looked at. For a system starting and remaining in the steady state, the uncertainty function increases monotonically, as a function of τ as well as of other system parameters, implying that the large-τ value is also an upper bound. For the same system in the transient regime, however, we find that the uncertainty function can have a local minimum at an accessible time τ_m, for a range of parameter values. The large-τ value for the uncertainty function is hence not a bound in this case. The non-monotonicity suggests, rather counter-intuitively, that there might be an optimal time for the working of microscopic machines, as well as an optimal configuration in the phase space of parameter values. Our solutions show that the ratios of higher moments of the dissipated work are also bounded from below by 2k_B T. For another model, also solvable by our methods, which never reaches a steady state, the uncertainty function is, in some cases, bounded from below by a value less than 2k_B T.
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
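A minimal sketch of the bootstrap idea used here: per-history scores from the conventional and correlated-sampling runs are resampled with replacement, the efficiency gain (T_conv·σ²_conv)/(T_cs·σ²_cs) is recomputed for each resample, and a percentile confidence interval is read off. The scores and timings below are synthetic stand-ins, not brachytherapy tallies.

```python
import numpy as np

# Sketch of a bootstrap estimate of the confidence interval on a Monte Carlo
# efficiency gain, gain = (T_conv * var_conv) / (T_cs * var_cs), where T is computing
# time per history and var is the variance of the per-history scores.
rng = np.random.default_rng(3)

n = 20_000
scores_conv = rng.exponential(1.0, size=n)           # conventional MC per-history scores
scores_cs = 1.0 + 0.1 * rng.standard_normal(n)       # correlated-sampling score differences
t_conv, t_cs = 1.0, 1.3                              # relative computing time per history

def gain(s_conv, s_cs):
    return (t_conv * s_conv.var(ddof=1)) / (t_cs * s_cs.var(ddof=1))

boot = np.array([
    gain(rng.choice(scores_conv, n, replace=True),
         rng.choice(scores_cs, n, replace=True))
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"efficiency gain = {gain(scores_conv, scores_cs):.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```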
Ronald E. McRoberts; Paolo Moser; Laio Zimermann Oliveira; Alexander C. Vibrans
2015-01-01
Forest inventory estimates of tree volume for large areas are typically calculated by adding the model predictions of volumes for individual trees at the plot level, calculating the mean over plots, and expressing the result on a per unit area basis. The uncertainty in the model predictions is generally ignored, with the result that the precision of the large-area...
Horsetail matching: a flexible approach to optimization under uncertainty
NASA Astrophysics Data System (ADS)
Cook, L. W.; Jarrett, J. P.
2018-04-01
It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.
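A sketch of a horsetail-matching style objective as described above: the kernel-smoothed empirical CDF of a sampled quantity of interest is compared to a target CDF via an integrated squared difference, which keeps the objective differentiable. The target, bandwidth and samples below are illustrative, not the article's airfoil problem.

```python
import numpy as np

# Minimal sketch of a horsetail-matching style objective: the kernel-smoothed empirical
# CDF of a quantity of interest is compared against a target CDF, and the metric is the
# integrated squared difference between the two curves.
def smooth_cdf(samples, t, bandwidth=0.05):
    """Kernel-smoothed empirical CDF evaluated at points t (logistic kernel)."""
    z = (t[:, None] - samples[None, :]) / bandwidth
    return (1.0 / (1.0 + np.exp(-z))).mean(axis=1)

def horsetail_metric(samples, target_cdf, t):
    h = smooth_cdf(samples, t)
    return np.trapz((h - target_cdf(t)) ** 2, t)

rng = np.random.default_rng(4)
q = 1.0 + 0.1 * rng.standard_normal(5000)               # sampled performance of one design
t = np.linspace(0.5, 1.5, 400)
target = lambda x: np.clip((x - 0.8) / 0.2, 0.0, 1.0)   # a steep, idealized target CDF

print(f"horsetail metric = {horsetail_metric(q, target, t):.4f}")
```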
Sensitivity of collective action to uncertainty about climate tipping points
NASA Astrophysics Data System (ADS)
Barrett, Scott; Dannenberg, Astrid
2014-01-01
Despite more than two decades of diplomatic effort, concentrations of greenhouse gases continue to trend upwards, creating the risk that we may someday cross a threshold for `dangerous' climate change. Although climate thresholds are very uncertain, new research is trying to devise `early warning signals' of an approaching tipping point. This research offers a tantalizing promise: whereas collective action fails when threshold uncertainty is large, reductions in this uncertainty may bring about the behavioural change needed to avert a climate `catastrophe'. Here we present the results of an experiment, rooted in a game-theoretic model, showing that behaviour differs markedly on either side of a dividing line for threshold uncertainty. On one side of the dividing line, where threshold uncertainty is relatively large, free riding proves irresistible and trust elusive, making it virtually inevitable that the tipping point will be crossed. On the other side, where threshold uncertainty is small, the incentive to coordinate is strong and trust more robust, often leading the players to avoid crossing the tipping point. Our results show that uncertainty must be reduced to this `good' side of the dividing line to stimulate the behavioural shift needed to avoid `dangerous' climate change.
On the apparent insignificance of the randomness of flexible joints on large space truss dynamics
NASA Technical Reports Server (NTRS)
Koch, R. M.; Klosner, J. M.
1993-01-01
Deployable periodic large space structures have been shown to exhibit high dynamic sensitivity to period-breaking imperfections and uncertainties. These can be brought on by manufacturing or assembly errors, structural imperfections, as well as nonlinear and/or nonconservative joint behavior. In addition, the necessity of precise pointing and position capability can require the consideration of these usually negligible and unknown parametric uncertainties and their effect on the overall dynamic response of large space structures. This work describes the use of a new design approach for the global dynamic solution of beam-like periodic space structures possessing parametric uncertainties. Specifically, the effect of random flexible joints on the free vibrations of simply-supported periodic large space trusses is considered. The formulation is a hybrid approach in terms of an extended Timoshenko beam continuum model, Monte Carlo simulation scheme, and first-order perturbation methods. The mean and mean-square response statistics for a variety of free random vibration problems are derived for various input random joint stiffness probability distributions. The results of this effort show that, although joint flexibility has a substantial effect on the modal dynamic response of periodic large space trusses, the effect of any reasonable uncertainty or randomness associated with these joint flexibilities is insignificant.
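As a toy analogue of comparing Monte Carlo simulation with first-order perturbation statistics (not the Timoshenko-beam truss formulation itself), the sketch below treats a single natural frequency ω = √(k/m) with a random joint stiffness k and compares sampled statistics with the first-order perturbation estimates.

```python
import numpy as np

# Illustrative sketch: compare Monte Carlo statistics of a natural frequency
# w = sqrt(k/m) with a first-order perturbation estimate when the stiffness k is random.
rng = np.random.default_rng(5)

m = 1.0
k0, cov_k = 100.0, 0.10                 # mean stiffness and 10% coefficient of variation
k = rng.normal(k0, cov_k * k0, size=200_000)
k = k[k > 0]                            # guard against nonphysical samples

w_mc = np.sqrt(k / m)

w0 = np.sqrt(k0 / m)
w_pert_mean = w0                        # first-order mean is the nominal frequency
w_pert_std = 0.5 * cov_k * w0           # dw/dk = w0/(2*k0)  ->  sigma_w ~ 0.5*cov_k*w0

print(f"Monte Carlo:   mean={w_mc.mean():.3f}, std={w_mc.std():.3f}")
print(f"Perturbation:  mean={w_pert_mean:.3f}, std={w_pert_std:.3f}")
```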
Kovilakam, Mahesh; Mahajan, Salil
2016-06-28
While black carbon aerosols (BC) are believed to modulate the Indian monsoons, the radiative forcing estimate of BC suffers from large uncertainties globally. In this paper, we analyze a suite of idealized experiments forced with a range of BC concentrations that span a large swath of the latest estimates of its global radiative forcing. Within those bounds of uncertainty, summer precipitation over the Indian region increases nearly linearly with the increase in BC burden. The linearity holds even as the BC concentration is increased to levels resembling those hypothesized in nuclear winter scenarios, despite large surface cooling over India and adjoining regions. The enhanced monsoonal circulation is associated with a linear increase in the large-scale meridional tropospheric temperature gradient. The precipitable water over the region also increases linearly with an increase in BC burden, due to increased moisture transport from the Arabian Sea to the land areas. The wide range of Indian monsoon response elicited in these experiments emphasizes the need to reduce the uncertainty in BC estimates to accurately quantify their role in modulating the Indian monsoons. Finally, the increase in monsoonal circulation in response to large BC concentrations contrasts with earlier findings that the Indian summer monsoon may break down following a nuclear war.
NASA Astrophysics Data System (ADS)
Klose, Christian D.
2013-01-01
A global catalog of small- to large-sized earthquakes was systematically analyzed to identify causal and correlative relationships between human-made mass shifts in the upper Earth's crust and the occurrence of earthquakes. The mass shifts, ranging between 1 kt and 1 Tt, result from large-scale geoengineering operations, including mining, water reservoirs, hydrocarbon production, fluid injection/extraction, deep geothermal energy production and coastal management. This article shows evidence that geomechanical relationships exist with statistical significance between (a) seismic moment magnitudes M of observed earthquakes, (b) lateral distances of the earthquake hypocenters to the geoengineering "operation points" and (c) mass removals or accumulations on the Earth's crust. Statistical findings depend on uncertainties, in particular, of source parameter estimations of seismic events before instrumental recording. Statistical observations, however, indicate that every second seismic event tends to occur after a decade. The chance of an earthquake nucleating after 2 or 20 years near an area with a significant mass shift is 25 or 75 %, respectively. Moreover, causative effects of seismic activities highly depend on the tectonic stress regime in which the operations take place (i.e., extensive, transverse or compressive). Results are summarized as follows: First, seismic moment magnitudes increase the more mass is locally shifted on the Earth's crust. Second, seismic moment magnitudes increase the larger the area in the crust is geomechanically polluted. Third, reverse faults tend to be more trigger-sensitive than normal faults due to a stronger alteration of the minimum vertical principal stress component. Pure strike-slip faults seem to rupture randomly and independently from the magnitude of the mass changes. Finally, mainly due to high estimation uncertainties of source parameters and, in particular, of shallow seismic events (<10 km), it remains very difficult to discriminate between induced and triggered earthquakes with respect to the data catalog of this study. However, first analyses indicate that small- to medium-sized earthquakes (
Bias and robustness of uncertainty components estimates in transient climate projections
NASA Astrophysics Data System (ADS)
Hingray, Benoit; Blanchet, Juliette; Jean-Philippe, Vidal
2016-04-01
A critical issue in climate change studies is the estimation of uncertainties in projections along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty and internal variability. Quantifying the different uncertainty sources actually faces different problems. For instance, and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are however biased. Another difficulty arises from the limited number of members that are classically available for most modeling chains. In this case, the climate response of one given chain and the effect of its internal variability may be difficult if not impossible to separate. The estimates of the scenario uncertainty, model uncertainty and internal variability components are thus likely not to be robust. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based only on the data available for the considered projection lead time, and a time-series-based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two sources: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with Monte Carlo simulations. For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias is always positive. It can be especially high with STANOVA. In the most critical configurations, when the number of members available for each modeling chain is small (< 3) and when internal variability explains most of the total uncertainty variance (75% or more), the overestimation is higher than 100% of the true model uncertainty variance. The bias can be considerably reduced with a time series ANOVA approach, owing to the multiple time steps accounted for. The longer the transient time period used for the analysis, the larger the reduction. When a quasi-ergodic ANOVA approach is applied to decadal data for the whole 1980-2100 period, the bias is reduced by a factor of 2.5 to 20 depending on the projection lead time. In all cases, the bias is likely to be non-negligible for a large number of climate impact studies, resulting in a likely large overestimation of the contribution of model uncertainty to total variance. For both approaches, the robustness of all uncertainty estimates is higher when more members are available, when internal variability is smaller and/or when the response-to-uncertainty ratio is higher. QEANOVA estimates are much more robust than STANOVA ones: QEANOVA simulated confidence intervals are roughly 3 to 5 times smaller than STANOVA ones. Except for STANOVA when fewer than 3 members are available, the robustness is rather high for total uncertainty and moderate for internal variability estimates. 
For model uncertainty or response-to-uncertainty ratio estimates, the robustness is conversely low for QEANOVA to very low for STANOVA. In the most critical configurations (small number of members, large internal variability), large over- or underestimation of uncertainty components is thus very likely. To propose relevant uncertainty analyses and avoid misleading interpretations, estimates of uncertainty components should therefore be bias-corrected and ideally come with estimates of their robustness. This work is part of the COMPLEX Project (European Collaborative Project FP7-ENV-2012 number: 308601; http://www.complex.ac.uk/). Hingray, B., Saïd, M., 2014. Partitioning internal variability and model uncertainty components in a multimodel multireplicate ensemble of climate projections. J. Climate. doi:10.1175/JCLI-D-13-00629.1 Hingray, B., Blanchet, J. (revision) Unbiased estimators for uncertainty components in transient climate projections. J. Climate. Hingray, B., Blanchet, J., Vidal, J.P. (revision) Robustness of uncertainty components estimates in climate projections. J. Climate.
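A minimal sketch of the single-time variance partition and the bias correction discussed above, assuming the two-source configuration (model uncertainty plus internal variability): the raw variance of the chain means overestimates model uncertainty by roughly σ²_iv/n_members, which the corrected estimator subtracts. The synthetic ensemble below is illustrative.

```python
import numpy as np

# Sketch of a single-time ANOVA partition of projection uncertainty into model
# uncertainty and internal variability, with a bias correction: the raw variance of
# the chain means is inflated by sigma_iv^2 / n_members.
rng = np.random.default_rng(6)

n_chains, n_members = 8, 3
true_model_sd, true_iv_sd = 0.5, 1.0                 # degC, hypothetical
responses = rng.normal(0.0, true_model_sd, size=n_chains)
ensemble = responses[:, None] + rng.normal(0.0, true_iv_sd, size=(n_chains, n_members))

chain_means = ensemble.mean(axis=1)
iv_var = ensemble.var(axis=1, ddof=1).mean()         # internal variability estimate

model_var_naive = chain_means.var(ddof=1)            # biased (too large)
model_var_corrected = max(model_var_naive - iv_var / n_members, 0.0)

print(f"internal variability variance ~ {iv_var:.2f}")
print(f"model uncertainty variance: naive {model_var_naive:.2f}, "
      f"bias-corrected {model_var_corrected:.2f} (true {true_model_sd**2:.2f})")
```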
NASA Astrophysics Data System (ADS)
Kautz, M. A.; Keefer, T.; Demaria, E. M.; Goodrich, D. C.; Hazenberg, P.; Petersen, W. A.; Wingo, M. T.; Smith, J.
2017-12-01
The USDA Agricultural Research Service (USDA-ARS) Long-Term Agroecosystem Research network (LTAR) is a partnership between 18 long-term research sites across the United States. As part of the program, LTAR aims to assemble a network of common sensors and measurements of hydrological, meteorological, and biophysical variables to accompany the legacy datasets of individual LTAR sites. Uncertainty remains as to how the common sensor-based measurements will compare to those measured with existing sensors at each site. The USDA-ARS Southwest Watershed Research Center (SWRC) operated Walnut Gulch Experimental Watershed (WGEW) represents the semiarid grazing lands located in southeastern Arizona in the LTAR network. The bimodal precipitation regime of this region is characterized by large-scale frontal precipitation in the winter and isolated, high-intensity, convective thunderstorms in the summer during the North American Monsoon (NAM). SWRC maintains a network of 90 rain gauges across the 150 km2 WGEW and surrounding area, with measurements dating back to the 1950s. The high intensity and isolated nature of the summer storms has historically made precipitation difficult to quantify compared to other regimes in the US. This study assesses the measurement uncertainty between the common LTAR Belfort All Weather Precipitation Gauge (AEPG 600) and the legacy WGEW weighing-type rain gauge. Additionally, in a collaboration with the NASA Global Precipitation Measurement mission (GPM) and the University of Arizona, a dense array of precipitation measuring sensors was installed at WGEW within a 10 meter radius for observation during the NAM, July through October 2017. In addition to two WGEW weighing-type gauges, the array includes: an AEPG 600, a tipping bucket, a weighing bucket installed with its orifice at ground level, an OTT Pluvio2 rain gauge, a Two-Dimensional Video Disdrometer (2DVD), and three OTT Parsivel2 disdrometers. An event-based comparison was made between precipitation sensors using metrics including total depth, peak intensity (1, 15, 30, and 60 minute), event duration, time to peak intensity, and start time of event. These results provide further insight into the uncertainties of measuring point-based precipitation in this unique precipitation regime and its representation in large-scale observation networks.
African anthropogenic combustion emission inventory: specificities and uncertainties
NASA Astrophysics Data System (ADS)
Sekou, K.; Liousse, C.; Eric-michel, A.; Veronique, Y.; Thierno, D.; Roblou, L.; Toure, E. N.; Julien, B.
2015-12-01
Fossil fuel and biofuel emissions of gases and particles in Africa are expected to increase significantly in the near future, particularly due to the growth of African cities. In addition, large African savannah fires occur each year during the dry season, mainly for socio-economic purposes. In this study, we will present the most recent developments of African anthropogenic combustion emission inventories, stressing African specificities. (1) A regional fossil fuel and biofuel inventory for gases and particulates will be presented for Africa at a resolution of 0.25° x 0.25° from 1990 to 2012. For this purpose, the original database of Liousse et al. (2014) has been used after modification of emission factors and updating of regional fuel consumption, including new emitter categories (waste burning, flaring) and new activity sectors (i.e., disaggregation of transport into sub-sectors, including two-wheel vehicles). In terms of emission factors, new measured values will be presented and compared to the literature, with a focus on aerosols. They result from measurement campaigns organized in the framework of the DACCIWA European program for each kind of Africa-specific anthropogenic source in 2015, in Abidjan (Ivory Coast), Cotonou (Benin) and in the Laboratoire d'Aérologie combustion chamber. Finally, a more detailed spatial distribution of emissions will be proposed at the country level to better take into account road distributions and population densities. (2) Large uncertainties still remain in biomass burning emission inventory estimates, especially over Africa, between different datasets such as GFED and AMMABB. Sensitivity tests will be presented to investigate uncertainties in the emission inventories, applying the methodologies used for the AMMABB and GFED inventories respectively. Then, the relative importance of each source (fossil fuel, biofuel and biomass burning inventories) in the budgets of carbon monoxide, nitrogen oxides, sulfur dioxide, black and organic carbon, and volatile organic compound emissions will be discussed for the years 1990-2012 at the regional (West and Central Africa) and country (Ivory Coast and Benin) level and compared to existing inventories. Finally, a first tentative estimation of uncertainties will be conducted by varying fuel consumption and emission factors for gases and particles.
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Zimmermann, Niklaus E.; Kaplan, Jed O.; Poulter, Benjamin
2016-03-01
Simulations of the spatiotemporal dynamics of wetlands are key to understanding the role of wetland biogeochemistry under past and future climate. Hydrologic inundation models, such as the TOPography-based hydrological model (TOPMODEL), are based on a fundamental parameter known as the compound topographic index (CTI) and offer a computationally cost-efficient approach to simulate wetland dynamics at global scales. However, there remains a large discrepancy in the implementations of TOPMODEL in land-surface models (LSMs) and thus in their performance against observations. This study describes new improvements to the TOPMODEL implementation and estimates of global wetland dynamics using the LPJ-wsl (Lund-Potsdam-Jena Wald Schnee und Landschaft version) Dynamic Global Vegetation Model (DGVM), and quantifies uncertainties by comparing three digital elevation model (DEM) products (HYDRO1k, GMTED, and HydroSHEDS) of different spatial resolution and accuracy on simulated inundation dynamics. In addition, we found that calibrating TOPMODEL with a benchmark wetland data set can help to successfully delineate the seasonal and interannual variation of wetlands, as well as improve the spatial distribution of wetlands to be consistent with inventories. The HydroSHEDS DEM, using a river-basin scheme for aggregating the CTI, shows the best accuracy for capturing the spatiotemporal dynamics of wetlands among the three DEM products. The estimate of global wetland potential/maximum is ~10.3 Mkm2 (10^6 km2), with a mean annual maximum of ~5.17 Mkm2 for 1980-2010. When integrated with the wetland methane emission submodule, the uncertainty of global annual CH4 emissions from topography inputs is estimated to be 29.0 Tg yr-1. This study demonstrates the feasibility of TOPMODEL to capture the spatial heterogeneity of inundation at a large scale and highlights the significance of correcting maximum wetland extent to improve modeling of interannual variations in wetland area. It additionally highlights the importance of an adequate investigation of topographic indices for simulating global wetlands and shows the opportunity to converge wetland estimates across LSMs by identifying the uncertainty associated with existing wetland products.
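For reference, the compound topographic index at the heart of TOPMODEL is CTI = ln(a / tan β), with a the specific upslope contributing area and β the local slope. The sketch below computes it from placeholder flow-accumulation and slope grids; the cell size and grids stand in for quantities that would normally be derived from a DEM such as HYDRO1k or HydroSHEDS.

```python
import numpy as np

# Sketch of the compound topographic index (topographic wetness index):
# CTI = ln(a / tan(beta)), with a the specific upslope contributing area
# (m2 per unit contour width) and beta the local slope.
rng = np.random.default_rng(7)

cell_size = 90.0                                     # m, hypothetical DEM resolution
flow_acc = rng.integers(1, 5000, size=(100, 100))    # upslope cell counts (placeholder)
slope_rad = np.deg2rad(rng.uniform(0.1, 30.0, size=(100, 100)))

specific_area = flow_acc * cell_size                 # a = (cells * cell_area) / cell_size
cti = np.log(specific_area / np.tan(slope_rad))

print(f"CTI range: {cti.min():.2f} to {cti.max():.2f}, mean {cti.mean():.2f}")
```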
Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
2000-01-01
Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.
Using a Meniscus to Teach Uncertainty in Measurement
NASA Astrophysics Data System (ADS)
Backman, Philip
2008-02-01
I have found that students easily understand that a measurement cannot be exact, but they often seem to lack an understanding of why it is important to know something about the magnitude of the uncertainty. This tends to promote an attitude that almost any uncertainty value will do. Such indifference may exist because once an uncertainty is determined or calculated, it remains as only a number without a concrete physical connection back to the experiment. For the activity described here—presented as a challenge—groups of students are given a container and asked to make certain measurements and to estimate the uncertainty in each of those measurements. They are then challenged to complete a particular task involving the container and a volume of water. Whether the assigned task is actually achievable, however, slowly comes into question once the magnitude of the uncertainties in the original measurements is compared to the specific requirements of the challenge.
Uncertainties in land use data
NASA Astrophysics Data System (ADS)
Castilla, G.; Hay, G. J.
2006-11-01
This paper deals with the description and assessment of uncertainties in gridded land use data derived from Remote Sensing observations, in the context of hydrological studies. Land use is a categorical regionalised variable returning the main socio-economic role each location has, where the role is inferred from the pattern of occupation of land. There are two main uncertainties surrounding land use data, positional and categorical. This paper focuses on the second one, as the first one has in general less serious implications and is easier to tackle. The conventional method used to assess categorical uncertainty, the confusion matrix, is criticised in depth, the main critique being its inability to inform on a basic requirement to propagate uncertainty through distributed hydrological models, namely the spatial distribution of errors. Some existing alternative methods are reported, and finally the need for metadata is stressed as a more reliable means to assess the quality, and hence the uncertainty, of these data.
Illness uncertainty and treatment motivation in type 2 diabetes patients.
Apóstolo, João Luís Alves; Viveiros, Catarina Sofia Castro; Nunes, Helena Isabel Ribeiro; Domingues, Helena Raquel Faustino
2007-01-01
To characterize uncertainty in illness and motivation for treatment, and to evaluate the relation between these variables in individuals with type 2 diabetes. Descriptive, correlational study, using a sample of 62 individuals in diabetes consultation sessions. The Uncertainty Stress Scale and the Treatment Self-Regulation Questionnaire were used. The individuals with type 2 diabetes present low levels of uncertainty in illness and a high motivation for treatment, with a stronger intrinsic than extrinsic motivation. A negative correlation was found between uncertainty regarding prognosis and treatment and intrinsic motivation. These individuals are already adapted, acting according to the meanings they attribute to illness. Uncertainty can function as a threat, intervening negatively in the attribution of meaning to events related to illness and in the process of adaptation and motivation to adhere to treatment. Intrinsic motivation seems to be essential for adherence to treatment.
On the Monte Carlo simulation of electron transport in the sub-1 keV energy range.
Thomson, Rowan M; Kawrakow, Iwan
2011-08-01
The validity of "classic" Monte Carlo (MC) simulations of electron and positron transport at sub-1 keV energies is investigated in the context of quantum theory. Quantum theory dictates that uncertainties on the position and energy-momentum four-vectors of radiation quanta obey Heisenberg's uncertainty relation; however, these uncertainties are neglected in "classical" MC simulations of radiation transport in which position and momentum are known precisely. Using the quantum uncertainty relation and electron mean free path, the magnitudes of uncertainties on electron position and momentum are calculated for different kinetic energies; a validity bound on the classical simulation of electron transport is derived. In order to satisfy the Heisenberg uncertainty principle, uncertainties of 5% must be assigned to position and momentum for 1 keV electrons in water; at 100 eV, these uncertainties are 17 to 20% and are even larger at lower energies. In gaseous media such as air, these uncertainties are much smaller (less than 1% for electrons with energy 20 eV or greater). The classical Monte Carlo transport treatment is questionable for sub-1 keV electrons in condensed water as uncertainties on position and momentum must be large (relative to electron momentum and mean free path) to satisfy the quantum uncertainty principle. Simulations which do not account for these uncertainties are not faithful representations of the physical processes, calling into question the results of MC track structure codes simulating sub-1 keV electron transport. Further, the large difference in the scale at which quantum effects are important in gaseous and condensed media suggests that track structure measurements in gases are not necessarily representative of track structure in condensed materials on a micrometer or a nanometer scale.
Yao, Shuai-Lei; Luo, Jing-Jia; Huang, Gang
2016-01-01
Regional climate projections are challenging because of large uncertainty, particularly stemming from unpredictable internal variability of the climate system. Here, we examine the internal variability-induced uncertainty in precipitation and surface air temperature (SAT) trends during 2005-2055 over East Asia based on 40-member ensemble projections of the Community Climate System Model Version 3 (CCSM3). The model ensembles are generated from a suite of different atmospheric initial conditions using the same SRES A1B greenhouse gas scenario. We find that projected precipitation trends are subject to considerably larger internal uncertainty, and hence have lower confidence, than the projected SAT trends in both boreal winter and summer. Projected SAT trends in winter have relatively higher uncertainty than those in summer. Moreover, the lower-level atmospheric circulation has larger uncertainty than the mid-level circulation. Based on k-means cluster analysis, we demonstrate that a substantial portion of internally induced precipitation and SAT trends arises from internal large-scale atmospheric circulation variability. These results highlight the importance of internal climate variability in affecting regional climate projections on multi-decadal timescales.
Quantum issues in optical communication. [noise reduction in signal reception
NASA Technical Reports Server (NTRS)
Kennedy, R. S.
1973-01-01
Various approaches to the problem of controlling quantum noise, the dominant noise in an optical communications system, are discussed. It is shown that, no matter which way the problem is approached, there always remain uncertainties. These uncertainties exist because, to date, only very few communication problems have been solved in their full quantum form.
I Am Sure There May Be a Planet There: Student Articulation of Uncertainty in Argumentation Tasks
ERIC Educational Resources Information Center
Buck, Zoë E.; Lee, Hee-Sun; Flores, Joanna
2014-01-01
We investigated how students articulate uncertainty when they are engaged in structured scientific argumentation tasks where they generate, examine, and interpret data to determine the existence of exoplanets. In this study, 302 high school students completed 4 structured scientific arguments that followed a series of computer-model-based…
Characterising large scenario earthquakes and their influence on NDSHA maps
NASA Astrophysics Data System (ADS)
Magrin, Andrea; Peresan, Antonella; Panza, Giuliano F.
2016-04-01
The neo-deterministic approach to seismic zoning, NDSHA, relies on physically sound modelling of ground shaking from a large set of credible scenario earthquakes, which can be defined based on seismic history and seismotectonics, as well as by incorporating information from a wide set of geological and geophysical data (e.g. morphostructural features and present-day deformation processes identified by Earth observations). NDSHA is based on the calculation of complete synthetic seismograms; hence it does not make use of empirical attenuation models (i.e. ground motion prediction equations). From the set of synthetic seismograms, maps of seismic hazard that describe the maximum of different ground shaking parameters at the bedrock can be produced. As a rule, NDSHA defines the hazard as the envelope ground shaking at the site, computed from all of the defined seismic sources; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In this way, the standard NDSHA maps permit accounting for the largest observed or credible earthquake sources identified in the region in a quite straightforward manner. This study aims to assess the influence of unavoidable uncertainties in the characterisation of large scenario earthquakes on the NDSHA estimates. The treatment of uncertainties is performed by sensitivity analyses for key modelling parameters and accounts for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. Results from sensitivity analyses with respect to the definition of possible seismic sources are discussed. A key parameter is the magnitude of the seismic sources used in the simulation, which is based on information from the earthquake catalogue, seismogenic zones and seismogenic nodes. The largest part of the existing Italian catalogues is based on macroseismic intensities; a rough estimate of the error in peak values of ground motion can therefore be a factor of two, intrinsic in MCS and other discrete scales. A simple test supports this hypothesis: an increase of 0.5 in the magnitude, i.e. one degree in epicentral MCS, of all sources used in the national-scale seismic zoning produces a doubling of the maximum ground motion. The analysis of uncertainty in ground motion maps, due to random catalogue errors in magnitude and localization, shows a non-uniform distribution of ground shaking uncertainty. The available information from catalogues of past events, which is not complete and may well not be representative of future earthquakes, can be substantially completed using independent indicators of the seismogenic potential of a given area, such as active faulting data and seismogenic nodes.
NASA Astrophysics Data System (ADS)
Su, X.; Takahashi, K.; Fujimori, S.; Hasegawa, T.; Tanaka, K.; Shiogama, H.; Emori, S.; LIU, J.; Hanasaki, N.; Hijioka, Y.; Masui, T.
2017-12-01
Large uncertainty exists in temperature projections, including contributions from the carbon cycle, the climate system and aerosols. For integrated assessment models (IAMs) such as DICE, FUND and PAGE, however, the scientific uncertainties mainly rely on the distribution of (equilibrium) climate sensitivity. This study aims at evaluating emission pathways that limit the temperature increase below 2.0 ºC or 1.5 ºC after 2100 considering scientific uncertainties, and at exploring how socioeconomic indicators are affected by such scientific uncertainties. We use a stochastic version of SCM4OPT, with an uncertainty measurement obtained by considering alternative ranges of key parameters. Three climate cases, namely, i) the base case of SSP2, ii) limiting the temperature increase below 2.0 ºC after 2100 and iii) limiting the temperature increase below 1.5 ºC after 2100, and three types of probabilities - i) >66% probability or likely, ii) >50% probability or more likely than not and iii) the mean of the probability distribution - are considered in the study. The results show that, i) for the 2.0 ºC case, the likely CO2 reduction rate in 2100 ranges from 75.5%-102.4%, with a mean value of 88.1%, and 93.0%-113.1% (mean 102.5%) for the 1.5 ºC case; ii) a likely range of forcing effect is found for the 2.0 ºC case (2.7-3.9 Wm-2) due to scientific uncertainty, and 1.9-3.1 Wm-2 for the 1.5 ºC case; iii) the carbon prices within the 50% confidence interval may differ by a factor of 3 for both the 2.0 ºC case and the 1.5 ºC case; iv) the abatement costs within the 50% confidence interval may differ by a factor of 4 for both the 2.0 ºC case and the 1.5 ºC case. Nine C4MIP carbon cycle models and nineteen CMIP3 AOGCMs are used to account for the scientific uncertainties, following MAGICC 6.0. These uncertainties result in a likely radiative forcing range of 6.1-7.5 Wm-2 and a likely temperature increase of 3.1-4.5 ºC in 2100 for the base case of SSP2. If we evaluate the 2 ºC target by limiting the temperature increase, a likely difference of up to 20.7 GtCO2-eq of greenhouse gases (GHGs) in 2100 will occur in the assessment, or a 14.4 GtCO2-eq GHG difference for the 1.5 ºC case. The scientific uncertainties have significant impacts on evaluating the costs of climate change, and an appropriate representation of such uncertainties is important in the socioeconomic assessment.
He, L; Huang, G H; Lu, H W
2010-04-15
Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt, different from previous modeling efforts, which focused on addressing uncertainty in physical parameters (e.g., soil porosity), whereas this work aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to existing modeling approaches (in which only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering the confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. Copyright © 2009 Elsevier B.V. All rights reserved.
Steering the measured uncertainty under decoherence through local PT -symmetric operations
NASA Astrophysics Data System (ADS)
Shi, Wei-Nan; Wang, Dong; Sun, Wen-Yang; Ming, Fei; Huang, Ai-Jun; Ye, Liu
2018-07-01
The uncertainty principle is viewed as one of the appealing properties in the context of quantum mechanics, which intrinsically offers a lower bound with regard to the measurement outcomes of a pair of incompatible observables within a given system. In this letter, we attempt to observe entropic uncertainty in the presence of quantum memory under different local noisy channels. To be specific, we develop the dynamics of the measured uncertainty under local bit-phase-flipping (unital) and depolarization (nonunital) noise, respectively, and put forward an effective strategy to manipulate the magnitude of the uncertainty of interest by means of parity-time-symmetric (PT-symmetric) operations on the subsystem to be measured. It is interesting to find that there exist different evolution characteristics of the uncertainty in the channels considered here, i.e. monotonic behavior in the nonunital channels, and non-monotonic behavior in the unital channels. Moreover, the amount of the measured uncertainty can be reduced to some degree by properly modulating the PT-symmetric operations.
Religion in the face of uncertainty: an uncertainty-identity theory account of religiousness.
Hogg, Michael A; Adelman, Janice R; Blagg, Robert D
2010-02-01
The authors characterize religions as social groups and religiosity as the extent to which a person identifies with a religion, subscribes to its ideology or worldview, and conforms to its normative practices. They argue that religions have attributes that make them well suited to reduce feelings of self-uncertainty. According to uncertainty-identity theory, people are motivated to reduce feelings of uncertainty about or reflecting on self; and identification with groups, particularly highly entitative groups, is a very effective way to reduce uncertainty. All groups provide belief systems and normative prescriptions related to everyday life. However, religions also address the nature of existence, invoking sacred entities and associated rituals and ceremonies. They are entitative groups that provide a moral compass and rules for living that pervade a person's life, making them particularly attractive in times of uncertainty. The authors document data supporting their analysis and discuss conditions that transform religiosity into religious zealotry and extremism.
Simulating the Stability of Colloidal Amorphous Iron Oxide in Natural Water
Considerable uncertainty exists as to whether existing thermodynamic equilibrium solid/water partitioning paradigms can be used to assess the mobility of insoluble manufactured nanomaterials in the aquatic environment. In this work, the traditional Derjaguin–Landau–Verwey–Overbee...
NASA Astrophysics Data System (ADS)
Härer, Stefan; Bernhardt, Matthias; Siebers, Matthias; Schulz, Karsten
2018-05-01
Knowledge of current snow cover extent is essential for characterizing energy and moisture fluxes at the Earth's surface. The snow-covered area (SCA) is often estimated by using optical satellite information in combination with the normalized-difference snow index (NDSI). The NDSI thereby uses a threshold to define whether a satellite pixel is classified as snow covered or snow free. The spatiotemporal representativeness of the standard threshold of 0.4 is however questionable at the local scale. Here, we use local snow cover maps derived from ground-based photography to continuously calibrate the NDSI threshold values (NDSIthr) of Landsat satellite images at two European mountain sites over the period from 2010 to 2015. The Research Catchment Zugspitzplatt (RCZ, Germany) and the Vernagtferner area (VF, Austria) are both located within a single Landsat scene. Nevertheless, the long-term analysis of the NDSIthr demonstrated that the NDSIthr at these sites are not correlated (r = 0.17) and differ from the standard threshold of 0.4. For further comparison, a dynamic and locally optimized NDSI threshold was used, as well as another locally optimized literature threshold value (0.7). It was shown that large uncertainties in the prediction of the SCA of up to 24.1 % exist in satellite snow cover maps in cases where the standard threshold of 0.4 is used, but a newly developed calibrated quadratic polynomial model which accounts for seasonal threshold dynamics can reduce this error. The model minimizes the SCA uncertainties at the calibration site VF by 50 % in the evaluation period and was also able to improve the results at RCZ in a significant way. Additionally, a scaling experiment shows that the positive effect of a locally adapted threshold diminishes for pixel sizes of 500 m or larger, underlining the general applicability of the standard threshold at larger scales.
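A minimal sketch of the NDSI and of a threshold calibration against a ground-based snow map: NDSI = (green − SWIR)/(green + SWIR), and the snow/no-snow threshold is chosen to maximize pixel-wise agreement with the camera-derived map. The scene and the ground truth below are synthetic placeholders, not actual Landsat data.

```python
import numpy as np

# Sketch: compute the NDSI from green and shortwave-infrared reflectance and calibrate
# the snow/no-snow threshold against a ground-based (camera-derived) snow map by
# maximizing pixel-wise agreement.
rng = np.random.default_rng(8)

green = rng.uniform(0.05, 0.9, size=(200, 200))
swir = rng.uniform(0.02, 0.4, size=(200, 200))
camera_snow = green > 0.5                          # placeholder for the ground-truth map

ndsi = (green - swir) / (green + swir + 1e-9)

def agreement(threshold):
    return np.mean((ndsi > threshold) == camera_snow)

candidates = np.arange(0.0, 1.0, 0.01)
best = candidates[np.argmax([agreement(t) for t in candidates])]
print(f"calibrated NDSI threshold = {best:.2f} (standard value: 0.40)")
```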
Model-Based Fatigue Prognosis of Fiber-Reinforced Laminates Exhibiting Concurrent Damage Mechanisms
NASA Technical Reports Server (NTRS)
Corbetta, M.; Sbarufatti, C.; Saxena, A.; Giglio, M.; Goebel, K.
2016-01-01
Prognostics of large composite structures is a topic of increasing interest in the field of structural health monitoring for aerospace, civil, and mechanical systems. Along with recent advancements in real-time structural health data acquisition and processing for damage detection and characterization, model-based stochastic methods for life prediction are showing promising results in the literature. Among various model-based approaches, particle-filtering algorithms are particularly capable of coping with uncertainties associated with the process. These include uncertainties about information on the damage extent and the inherent uncertainties of the damage propagation process. Some efforts have shown successful applications of particle-filtering-based frameworks for predicting the matrix crack evolution and structural stiffness degradation caused by repetitive fatigue loads. Effects of other damage modes such as delamination, however, are not incorporated in these works. It is well established that delamination and matrix cracks not only co-exist in most laminate structures during the fatigue degradation process but also affect each other's progression. Furthermore, delamination significantly alters the stress state in the laminates and accelerates the material degradation leading to catastrophic failure. Therefore, the work presented herein proposes a particle-filtering-based framework for predicting a structure's remaining useful life with consideration of multiple co-existing damage mechanisms. The framework uses an energy-based model from the composite modeling literature. The multiple damage-mode model has been shown to suitably estimate the energy release rate of cross-ply laminates as affected by matrix cracks and delamination modes. The model is also able to estimate the reduction in stiffness of the damaged laminate. This information is then used in the algorithms for life prediction capabilities. First, a brief summary of the energy-based damage model is provided. Then, the paper describes how the model is embedded within the prognostic framework and how the prognostics performance is assessed using observations from run-to-failure experiments.
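A minimal bootstrap particle filter sketch for damage prognosis, in the spirit of the framework described above: a simple Paris-law-like growth model stands in for the energy-based multi-damage-mode model, the damage state and one growth parameter are tracked jointly, and remaining useful life is obtained by propagating the posterior particles to a failure threshold. All parameter values and the synthetic measurements are illustrative.

```python
import numpy as np

# Minimal bootstrap particle filter for damage prognosis with a toy growth law.
rng = np.random.default_rng(9)

C_true, m_true = 0.02, 1.3
def grow(a, C, m):
    return a + C * a**m                       # damage increment per load block

# Synthetic "run-to-failure" observations of the damage size.
a = 1.0
obs = []
for _ in range(60):
    a = grow(a, C_true, m_true)
    obs.append(a + 0.05 * rng.standard_normal())

# Particle filter: joint state (damage size, log10 C), resampled at every step.
n_p = 5000
particles = np.column_stack([
    np.full(n_p, 1.0),                        # known initial damage size
    rng.uniform(-2.3, -1.3, size=n_p),        # prior on log10(C)
])
sigma_obs = 0.05

for y in obs:
    particles[:, 0] = grow(particles[:, 0], 10 ** particles[:, 1], m_true)
    w = np.exp(-0.5 * ((y - particles[:, 0]) / sigma_obs) ** 2)
    if w.sum() == 0.0:                        # guard against complete weight degeneracy
        w[:] = 1.0
    w /= w.sum()
    idx = rng.choice(n_p, size=n_p, p=w)      # bootstrap resampling step
    particles = particles[idx]
    particles[:, 1] += 0.01 * rng.standard_normal(n_p)   # small jitter keeps diversity

# Prognosis: propagate each particle to a failure threshold and report RUL statistics.
a_fail = 10.0
rul = np.zeros(n_p)
state = particles.copy()
alive = state[:, 0] < a_fail
step = 0
while alive.any() and step < 2000:
    state[alive, 0] = grow(state[alive, 0], 10 ** state[alive, 1], m_true)
    step += 1
    newly_failed = alive & (state[:, 0] >= a_fail)
    rul[newly_failed] = step
    alive &= ~newly_failed
rul[alive] = step                              # particles still below threshold at the cap

print(f"median RUL = {np.median(rul):.0f} load blocks, "
      f"90% interval [{np.percentile(rul, 5):.0f}, {np.percentile(rul, 95):.0f}]")
```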
Testing the Millennial-Scale Holocene Solar-Climate Connection in the Indo-Pacific Warm Pool
NASA Astrophysics Data System (ADS)
Khider, D.; Emile-Geay, J.; McKay, N.; Jackson, C. S.; Routson, C.
2016-12-01
The existence of 1000- and 2500-year periodicities found in reconstructions of total solar irradiance (TSI) and in a number of Holocene climate records has led to the hypothesis of a causal relationship. However, attributing Holocene millennial-scale variability to solar forcing requires a mechanism by which small changes in total irradiance can influence a global climate response. One possible amplifier within the climate system is the ocean. If this is the case, then we need to know more about where and how this may be occurring. On the other hand, the similarity in spectral peaks could be merely coincidental, and this should be made apparent by a lack of coherence in how that power and phasing are distributed in time and space. The plausibility of the solar forcing hypothesis is assessed through a Bayesian model of the age uncertainties affecting marine sedimentary records that is propagated through spectral analysis of the climate and forcing signals at key frequencies. Preliminary work on Mg/Ca and alkenone records from the Indo-Pacific Warm Pool suggests that despite large uncertainties in the location of the spectral peaks within each individual record arising from age model uncertainty, sea surface variability on timescales of 1025±36 years and 2427±133 years (±standard error of the mean of the median periodicity in each record) is present in at least 95% and 70% of the ensemble spectra, respectively. However, we find a long phase delay between the peak in forcing and the maximum response in at least one of the records, challenging the solar forcing hypothesis and requiring further investigation of low- and high-latitude signals. Remarkably, all records suggest a periodicity near 1470±85 years, reminiscent of the cycles characteristic of Marine Isotope Stage 3; these cycles are absent from existing records of TSI, further questioning the millennial solar-climate connection.
Jennings, Simon; Collingridge, Kate
2015-01-01
Existing estimates of fish and consumer biomass in the world's oceans are disparate. This creates uncertainty about the roles of fish and other consumers in biogeochemical cycles and ecosystem processes, the extent of human and environmental impacts and fishery potential. We develop and use a size-based macroecological model to assess the effects of parameter uncertainty on predicted consumer biomass, production and distribution. Resulting uncertainty is large (e.g. median global biomass 4.9 billion tonnes for consumers weighing 1 g to 1000 kg; 50% uncertainty intervals of 2 to 10.4 billion tonnes; 90% uncertainty intervals of 0.3 to 26.1 billion tonnes) and driven primarily by uncertainty in trophic transfer efficiency and its relationship with predator-prey body mass ratios. Even the upper uncertainty intervals for global predictions of consumer biomass demonstrate the remarkable scarcity of marine consumers, with less than one part in 30 million by volume of the global oceans comprising tissue of macroscopic animals. Thus the apparently high densities of marine life seen in surface and coastal waters and frequently visited abundance hotspots will likely give many in society a false impression of the abundance of marine animals. Unexploited baseline biomass predictions from the simple macroecological model were used to calibrate a more complex size- and trait-based model to estimate fisheries yield and impacts. Yields are highly dependent on baseline biomass and fisheries selectivity. Predicted global sustainable fisheries yield increases ≈4 fold when smaller individuals (< 20 cm from species of maximum mass < 1 kg) are targeted in all oceans, but the predicted yields would rarely be accessible in practice and this fishing strategy leads to the collapse of larger species if fishing mortality rates on different size classes cannot be decoupled. Our analyses show that models with minimal parameter demands that are based on a few established ecological principles can support equitable analysis and comparison of diverse ecosystems. The analyses provide insights into the effects of parameter uncertainty on global biomass and production estimates, which have yet to be achieved with complex models, and will therefore help to highlight priorities for future research and data collection. However, the focus on simple model structures and global processes means that non-phytoplankton primary production and several groups, structures and processes of ecological and conservation interest are not represented. Consequently, our simple models become increasingly less useful than more complex alternatives when addressing questions about food web structure and function, biodiversity, resilience and human impacts at smaller scales and for areas closer to coasts.
ACCOUNTING FOR CALIBRATION UNCERTAINTIES IN X-RAY ANALYSIS: EFFECTIVE AREAS IN SPECTRAL FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hyunsook; Kashyap, Vinay L.; Drake, Jeremy J.
2011-04-20
While considerable advances have been made to account for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have been generally ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here, we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with a Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.
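A sketch of the principal-component summarization step: replicate calibration curves are decomposed into a mean curve plus a few dominant modes via an SVD, and new plausible curves are drawn from the leading component scores. The effective-area curves below are synthetic placeholders, not Chandra calibration products.

```python
import numpy as np

# Sketch: summarize calibration uncertainty by a PCA of sampled effective-area curves,
# then draw a new plausible curve from the leading components.
rng = np.random.default_rng(10)

energies = np.linspace(0.3, 8.0, 300)                            # keV grid
nominal = 500.0 * np.exp(-0.5 * ((energies - 1.5) / 1.2) ** 2)   # cm2, toy nominal area

# Simulate a library of plausible calibration replicates (normalization + tilt modes).
n_rep = 200
samples = nominal * (1.0
                     + 0.03 * rng.standard_normal((n_rep, 1))
                     + 0.02 * rng.standard_normal((n_rep, 1)) * (energies - 4.0) / 4.0)

mean_curve = samples.mean(axis=0)
U, s, Vt = np.linalg.svd(samples - mean_curve, full_matrices=False)
explained = (s**2) / (s**2).sum()
n_keep = int(np.searchsorted(np.cumsum(explained), 0.99) + 1)

# Draw a new plausible effective-area curve from the leading components.
coeffs = rng.standard_normal(n_keep) * (s[:n_keep] / np.sqrt(n_rep - 1))
new_curve = mean_curve + coeffs @ Vt[:n_keep]
print(f"{n_keep} components explain 99% of calibration variance")
```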
Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Groves, Curtis; Ilie, Marcel; Schallhorn, Paul
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's extrapolation, which is based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or another uncertainty method to approximate errors.
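For reference, a sketch of Richardson extrapolation on three systematically refined grids with constant refinement ratio r: the observed order of convergence and the extrapolated value follow from the three solutions, and a grid convergence index can be attached to the fine-grid result. The solution values below are hypothetical.

```python
import numpy as np

# Sketch of Richardson extrapolation on three grids with constant refinement ratio r.
f1, f2, f3 = 0.9713, 0.9700, 0.9650   # fine, medium, coarse grid solutions (hypothetical)
r = 2.0                                # grid refinement ratio

p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)         # observed order of convergence
f_exact = f1 + (f1 - f2) / (r**p - 1.0)               # Richardson-extrapolated estimate
gci_fine = 1.25 * abs((f2 - f1) / f1) / (r**p - 1.0)  # grid convergence index (Fs = 1.25)

print(f"observed order p = {p:.2f}")
print(f"extrapolated value = {f_exact:.5f}, fine-grid GCI = {100 * gci_fine:.2f}%")
```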
Palmer, Cameron; Pe’er, Itsik
2016-01-01
Missing data are an unavoidable component of modern statistical genetics. Different array or sequencing technologies cover different single nucleotide polymorphisms (SNPs), leading to a complicated mosaic pattern of missingness where both individual genotypes and entire SNPs are sporadically absent. Such missing data patterns cannot be ignored without introducing bias, yet cannot be inferred exclusively from nonmissing data. In genome-wide association studies, the accepted solution to missingness is to impute missing data using external reference haplotypes. The resulting probabilistic genotypes may be analyzed in the place of genotype calls. A general-purpose paradigm, called Multiple Imputation (MI), is known to model uncertainty in many contexts, yet it is not widely used in association studies. Here, we undertake a systematic evaluation of existing imputed data analysis methods and MI. We characterize biases related to uncertainty in association studies, and find that bias is introduced both at the imputation level, when imputation algorithms generate inconsistent genotype probabilities, and at the association level, when analysis methods inadequately model genotype uncertainty. We find that MI performs at least as well as existing methods or in some cases much better, and provides a straightforward paradigm for adapting existing genotype association methods to uncertain data. PMID:27310603
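A minimal sketch of the MI pooling step (Rubin's rules): per-imputation effect estimates are averaged, and the pooled variance combines the average within-imputation variance with the between-imputation variance inflated by (1 + 1/m). The per-imputation estimates below are hypothetical log-odds ratios for one SNP.

```python
import numpy as np

# Sketch of Rubin's rules for combining estimates across m imputed datasets.
betas = np.array([0.21, 0.18, 0.25, 0.20, 0.23])   # effect estimate from each imputed dataset
ses = np.array([0.05, 0.05, 0.06, 0.05, 0.05])     # corresponding standard errors
m = len(betas)

beta_pooled = betas.mean()
within = (ses**2).mean()                  # average within-imputation variance
between = betas.var(ddof=1)               # between-imputation variance
var_pooled = within + (1.0 + 1.0 / m) * between

print(f"pooled beta = {beta_pooled:.3f} +/- {np.sqrt(var_pooled):.3f}")
```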
NASA Astrophysics Data System (ADS)
Dekker, Iris N.; Houweling, Sander; Aben, Ilse; Röckmann, Thomas; Krol, Maarten; Martínez-Alonso, Sara; Deeter, Merritt N.; Worden, Helen M.
2017-12-01
The growth of mega-cities leads to air quality problems directly affecting their citizens. Satellite measurements are improving in quality and quantity, which leads to more accurate satellite retrievals of enhanced air pollutant concentrations over large cities. In this paper, we compare and discuss both an existing and a new method for estimating urban-scale trends in CO emissions using multi-year retrievals from the MOPITT satellite instrument. The first method is mainly based on satellite data, and has the advantage of fewer assumptions, but also comes with uncertainties and limitations as shown in this paper. To improve the reliability of urban-to-regional scale emission trend estimation, we simulate MOPITT retrievals using the Weather Research and Forecasting model with chemistry core (WRF-Chem). The difference between model and retrieval is used to optimize CO emissions in WRF-Chem, focusing on the city of Madrid, Spain. This method has the advantage over the existing method in that it allows both a trend analysis of CO concentrations and a quantification of CO emissions. Our analysis confirms that MOPITT is capable of detecting CO enhancements over Madrid, although significant differences remain between the yearly averaged model output and satellite measurements (R2 = 0.75) over the city. After optimization, we find Madrid CO emissions to be lower by 48 % for 2002 and by 17 % for 2006 compared with the EdgarV4.2 emission inventory. The MOPITT-derived emission adjustments lead to better agreement with the European emission inventory TNO-MACC-III for both years. This suggests that the downward trend in CO emissions over Madrid is overestimated in EdgarV4.2 and more realistically represented in TNO-MACC-III. However, our satellite- and model-based emission estimates have large uncertainties, around 20 % for 2002 and 50 % for 2006.
NASA Astrophysics Data System (ADS)
He, C.; Li, Q.; Liou, K. N.; Qi, L.; Tao, S.; Schwarz, J. P.
2015-12-01
Black carbon (BC) aging significantly affects its distributions and radiative properties, which is an important uncertainty source in estimating BC climatic effects. Global models often use a fixed aging timescale for the hydrophobic-to-hydrophilic BC conversion or a simple parameterization. We have developed and implemented a microphysics-based BC aging scheme that accounts for condensation and coagulation processes into a global 3-D chemical transport model (GEOS-Chem). Model results are systematically evaluated by comparing with the HIPPO observations across the Pacific (67°S-85°N) during 2009-2011. We find that the microphysics-based scheme substantially increases the BC aging rate over source regions as compared with the fixed aging timescale (1.2 days), due to the condensation of sulfate and secondary organic aerosols (SOA) and coagulation with pre-existing hydrophilic aerosols. However, the microphysics-based scheme slows down BC aging over polar regions where condensation and coagulation are rather weak. We find that BC aging is dominated by the condensation process, which accounts for ~75% of global BC aging, while the coagulation process is important over source regions where large amounts of pre-existing aerosols are available. Model results show that the fixed aging scheme tends to overestimate BC concentrations over the Pacific throughout the troposphere by a factor of 2-5 at different latitudes, while the microphysics-based scheme reduces the discrepancies by up to a factor of 2, particularly in the middle troposphere. The microphysics-based scheme developed in this work decreases BC column total concentrations at all latitudes and seasons, especially over tropical regions, leading to large improvement in model simulations. We are presently analyzing the impact of this scheme on global BC budget and lifetime, quantifying its uncertainty associated with key parameters, and investigating the effects of heterogeneous chemical oxidation on BC aging.
Tang, Yongchuan; Zhou, Deyun; Chan, Felix T S
2018-06-11
Quantifying the degree of uncertainty in the Dempster-Shafer evidence theory (DST) framework with belief entropy is still an open issue, and it remains essentially unexplored under the open-world assumption. Currently, the existing uncertainty measures in the DST framework are limited to the closed world, where the frame of discernment (FOD) is assumed to be complete. To address this issue, this paper focuses on extending a belief entropy to the open world by simultaneously considering the uncertain information represented by the FOD and the nonzero mass assigned to the empty set. An extension of Deng's entropy to the open-world assumption (EDEOW) is proposed as a generalization of Deng's entropy; it degenerates to the Deng entropy in the closed world whenever necessary. In order to test the reasonability and effectiveness of the extended belief entropy, an EDEOW-based information fusion approach is proposed and applied to sensor data fusion under uncertainty. The experimental results verify the usefulness and applicability of the extended measure as well as the modified sensor data fusion method. In addition, a few open issues remain for future work: the necessary properties of a belief entropy under the open-world assumption, whether there exists a belief entropy that satisfies all the existing properties, and what the most appropriate fusion framework is for sensor data fusion under uncertainty.
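For context, the closed-world Deng entropy that the proposed EDEOW measure generalizes can be written down in a few lines; the sketch below is a minimal illustration with a hypothetical mass function, and it does not implement the open-world extension itself.
```python
import math

def deng_entropy(mass):
    """Closed-world Deng entropy of a basic probability assignment.

    mass : dict mapping focal elements (frozensets over the frame of
           discernment) to their mass values; masses should sum to 1.
    """
    e = 0.0
    for focal, m in mass.items():
        if m > 0:
            # Each focal element A contributes -m(A) * log2( m(A) / (2^|A| - 1) )
            e -= m * math.log2(m / (2 ** len(focal) - 1))
    return e

# Hypothetical mass function on the frame {a, b, c}
bpa = {
    frozenset({"a"}): 0.5,
    frozenset({"b", "c"}): 0.3,
    frozenset({"a", "b", "c"}): 0.2,
}
print(f"Deng entropy = {deng_entropy(bpa):.4f} bits")
```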
NASA Astrophysics Data System (ADS)
Blanco, E. L.; Lund, M.; Williams, M. D.; Christensen, T. R.; Tamstorf, M. P.
2015-12-01
An improvement in our process-based understanding of CO2 exchanges in the Arctic, and their climate sensitivity, is critical for examining the role of tundra ecosystems in changing climates. Arctic organic carbon storage has seen increased attention in recent years due to the large potential for carbon releases following thaw. Our knowledge about the exact scale and sensitivity of a phase-change of these C stocks is, however, limited. Minor variations in Gross Primary Production (GPP) and Ecosystem Respiration (Reco) driven by changes in the climate can lead to either C sink or C source states, which likely will impact the overall C cycle of the ecosystem. Eddy covariance data are usually used to partition Net Ecosystem Exchange (NEE) into GPP and Reco by means of flux separation algorithms. However, different partitioning approaches lead to different estimates, as well as undefined uncertainties. The main objectives of this study are to use model-data fusion approaches to (1) determine the inter-annual variability in C source/sink strength for an Arctic fen, and attribute such variations to GPP vs Reco, (2) investigate the climate sensitivity of these processes and (3) explore the uncertainties in NEE partitioning. The intention is to elaborate on the information gathered in an existing catchment area under an extensive cross-disciplinary ecological monitoring program in low Arctic West Greenland, established under the auspices of the Greenland Ecosystem Monitoring (GEM) program. The use of such a thorough long-term (7 years) dataset applied to the exploration of inter-annual variability in carbon exchange, related driving factors and NEE partitioning uncertainties provides a novel input into our understanding of land-atmosphere CO2 exchange.
Zhao, Wei; Ji, Songbai
2017-04-01
Head angular velocity, instead of acceleration, is more predictive of brain strains. Surprisingly, no study exists that investigates how shape variation in angular velocity profiles affects brain strains, beyond characteristics such as peak magnitude and impulse duration. In this study, we evaluated brain strain uncertainty due to variation in angular velocity profiles and further compared with that resulting from simplifying the profiles into idealized shapes. To do so, we used reconstructed head impacts from the American National Football League for shape extraction and simulated head uniaxial coronal rotations from onset to full stop. The velocity profiles were scaled to maintain an identical peak velocity magnitude and duration in order to isolate the shape for investigation. Element-wise peak maximum principal strains from 44 selected impacts were obtained. We found that the shape of the angular velocity profile could significantly affect brain strain magnitude (e.g., percentage difference of 4.29-17.89 % in the whole brain relative to the group average, with cumulative strain damage measure (CSDM) uncertainty range of 23.9 %) but not pattern (correlation coefficient of 0.94-0.99). Strain differences resulting from simplifying angular velocity profiles into idealized shapes were largely within the range due to shape variation, in both percentage difference and CSDM (signed difference of 3.91 % on average, with a typical range of 0-6 %). These findings provide important insight into the uncertainty or confidence in the performance of kinematics-based injury metrics. More importantly, they suggest the feasibility to simplify head angular velocity profiles into idealized shapes, at least within the confinements of the profiles evaluated, to enable real-time strain estimation via pre-computation in the future.
How good a clock is rotation? The stellar rotation-mass-age relationship for old field stars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epstein, Courtney R.; Pinsonneault, Marc H., E-mail: epstein@astronomy.ohio-state.edu, E-mail: pinsono@astronomy.ohio-state.edu
2014-01-10
The rotation-mass-age relationship offers a promising avenue for measuring the ages of field stars, assuming the attendant uncertainties to this technique can be well characterized. We model stellar angular momentum evolution starting with a rotation distribution from open cluster M37. Our predicted rotation-mass-age relationship shows significant zero-point offsets compared to an alternative angular momentum loss law and published gyrochronology relations. Systematic errors at the 30% level are permitted by current data, highlighting the need for empirical guidance. We identify two fundamental sources of uncertainty that limit the precision of rotation-based ages and quantify their impact. Stars are born with a range of rotation rates, which leads to an age range at fixed rotation period. We find that the inherent ambiguity from the initial conditions is important for all young stars, and remains large for old stars below 0.6 M⊙. Latitudinal surface differential rotation also introduces a minimum uncertainty into rotation period measurements and, by extension, rotation-based ages. Both models and the data from binary star systems 61 Cyg and α Cen demonstrate that latitudinal differential rotation is the limiting factor for rotation-based age precision among old field stars, inducing uncertainties at the ∼2 Gyr level. We also examine the relationship between variability amplitude, rotation period, and age. Existing ground-based surveys can detect field populations with ages as old as 1-2 Gyr, while space missions can detect stars as old as the Galactic disk. In comparison with other techniques for measuring the ages of lower main sequence stars, including geometric parallax and asteroseismology, rotation-based ages have the potential to be the most precise chronometer for 0.6-1.0 M⊙ stars.
Ground Motion Uncertainty and Variability (single-station sigma): Insights from Euroseistest, Greece
NASA Astrophysics Data System (ADS)
Ktenidou, O. J.; Roumelioti, Z.; Abrahamson, N. A.; Cotton, F.; Pitilakis, K.
2014-12-01
Despite recent improvements in networks and data, the global aleatory uncertainty (sigma) in GMPEs is still large. One reason is the ergodic approach, where we combine data in space to make up for lack of data in time. By estimating the systematic site response, we can make site-specific GMPEs and use a lower, site-specific uncertainty: single-station sigma. In this study we use the EUROSEISTEST database (http://euroseisdb.civil.auth.gr), which has two distinct advantages: good existing knowledge of site conditions at all stations, and careful relocation of the recorded events. Constraining the site and source parameters as best we can, we minimise the within- and between-event components of the global, ergodic sigma. Following that, knowledge of the site response from empirical and theoretical approaches permits us to move on to single-station sigma. The variability per site is not clearly correlated to the site class. We show that in some cases knowledge of Vs30 is not sufficient, and that site-specific data are needed to capture the response, possibly due to 2D/3D effects from complex geometry. Our values of single-station sigma are low compared to the literature. This may be due to the good ray coverage we have in all directions for small, nearby records. Indeed, our single-station sigma values are similar to published single-path values, which means that they may correspond to a fully (rather than partially) non-ergodic approach. We find larger ground motion variability for short distances and small magnitudes. This may be related to the uncertainty in depth, which affects nearby records more, or to stress drop, which causes trade-offs between the source and site terms at small magnitudes.
Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design
NASA Astrophysics Data System (ADS)
Leube, P. C.; Geiges, A.; Nowak, W.
2012-02-01
Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational costs. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods, which confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically highlight the consideration of conceptual model uncertainty.
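The sketch below illustrates, in heavily simplified form, the kind of ensemble-based preposterior reasoning described above: for a candidate measurement, hypothetical data are generated from the prior ensemble, the ensemble is re-weighted with a likelihood (as in a bootstrap filter), and the expected posterior prediction variance is averaged over the possible data values. The toy model, noise level, and dimensions here are illustrative assumptions, not the paper's actual configuration.
```python
import numpy as np

rng = np.random.default_rng(0)

# Prior ensemble of uncertain parameters and the prediction of interest
# derived from them (illustrative toy relationships).
n_ens = 2000
theta = rng.normal(0.0, 1.0, size=(n_ens, 2))
prediction = 2.0 * theta[:, 0] + 0.5 * theta[:, 1] ** 2   # quantity we want to predict

def simulated_observation(theta, design):
    """Toy forward model: what a sensor at location `design` would measure."""
    return np.cos(design) * theta[:, 0] + np.sin(design) * theta[:, 1]

def expected_posterior_variance(design, noise_sd=0.2, n_data_draws=200):
    """Average the posterior prediction variance over hypothetical data values."""
    h = simulated_observation(theta, design)
    post_vars = []
    for _ in range(n_data_draws):
        # Draw a hypothetical "true" member and a noisy measurement it would yield
        k = rng.integers(n_ens)
        d_hyp = h[k] + rng.normal(0.0, noise_sd)
        # Bootstrap-filter style weights: likelihood of each ensemble member
        w = np.exp(-0.5 * ((d_hyp - h) / noise_sd) ** 2)
        w /= w.sum()
        mean = np.sum(w * prediction)
        post_vars.append(np.sum(w * (prediction - mean) ** 2))
    return float(np.mean(post_vars))

# Rank a few candidate designs by the expected reduction in prediction variance
prior_var = prediction.var()
for design in [0.0, 0.8, 1.6]:
    ev = expected_posterior_variance(design)
    print(f"design {design:.1f}: expected posterior variance {ev:.3f} "
          f"(prior {prior_var:.3f})")
```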
NASA Technical Reports Server (NTRS)
Whiteman, David N.; Venable, Demetrius D.; Walker, Monique; Cardirola, Martin; Sakai, Tetsu; Veselovskii, Igor
2013-01-01
Narrow-band detection of the Raman water vapor spectrum using the lidar technique introduces a concern over the temperature dependence of the Raman spectrum. Various groups have addressed this issue either by trying to minimize the temperature dependence to the point where it can be ignored or by correcting for whatever degree of temperature dependence exists. The traditional technique for performing either of these entails accurately measuring both the laser output wavelength and the water vapor spectral passband with combined uncertainty of approximately 0.01 nm. However, uncertainty in interference filter center wavelengths and laser output wavelengths can be this large or larger. These combined uncertainties translate into uncertainties in the magnitude of the temperature dependence of the Raman lidar water vapor measurement of 3% or more. We present here an alternate approach for accurately determining the temperature dependence of the Raman lidar water vapor measurement. This alternate approach entails acquiring sequential atmospheric profiles using the lidar while scanning the channel passband across portions of the Raman water vapor Q-branch. This scanning is accomplished either by tilt-tuning an interference filter or by scanning the output of a spectrometer. Through this process a peak in the transmitted intensity can be discerned in a manner that defines the spectral location of the channel passband with respect to the laser output wavelength to much higher accuracy than that achieved with standard laboratory techniques. Given the peak of the water vapor signal intensity curve, determined using the techniques described here, and an approximate knowledge of atmospheric temperature, the temperature dependence of a given Raman lidar profile can be determined with accuracy of 0.5% or better. A Mathematica notebook that demonstrates the calculations used here is available from the lead author.
Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, J. R.; Urban, N. M.
2015-12-01
Changes in the high latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos sea ice model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one-at-a-time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with model output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized towards determining the values of these most influential parameters more accurately, through observational studies or by improving existing parameterizations in the sea ice model.
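To make the variance-based approach concrete, the sketch below estimates first-order Sobol indices for a cheap stand-in function using the standard pick-and-freeze (Saltelli-style) estimator; in the study above this role would be played by the trained emulator of CICE, and the toy function, bounds, and sample size here are assumptions.
```python
import numpy as np

rng = np.random.default_rng(1)

def emulator(x):
    """Stand-in for a trained sea-ice emulator: maps parameters to an output."""
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

def first_order_sobol(f, d, n=20000, low=-np.pi, high=np.pi):
    """Pick-and-freeze estimator of first-order Sobol indices S_i."""
    a = rng.uniform(low, high, size=(n, d))
    b = rng.uniform(low, high, size=(n, d))
    f_a, f_b = f(a), f(b)
    var = np.var(np.concatenate([f_a, f_b]))
    s = np.empty(d)
    for i in range(d):
        ab_i = a.copy()
        ab_i[:, i] = b[:, i]          # freeze all columns except the i-th
        f_ab = f(ab_i)
        s[i] = np.mean(f_b * (f_ab - f_a)) / var
    return s

print("first-order Sobol indices:", np.round(first_order_sobol(emulator, d=3), 3))
```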
Uncertainty in eddy covariance measurements and its application to physiological models
D.Y. Hollinger; A.D. Richardson
2005-01-01
Flux data are noisy, and this uncertainty is largely due to random measurement error. Knowledge of uncertainty is essential for the statistical evaluation of modeled and measured fluxes, for comparison of parameters derived by fitting models to measured fluxes and in formal data-assimilation efforts. We used the difference between simultaneous measurements from two...
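A minimal sketch of the two-tower (paired measurement) idea referenced above: if simultaneous measurements x1 and x2 share the true flux but carry independent random errors of equal variance, the error standard deviation of a single measurement can be estimated from their differences as std(x1 - x2)/sqrt(2). The synthetic flux series below is an assumption for illustration.
```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "true" flux plus independent random measurement error at two towers
true_flux = 5.0 + 3.0 * np.sin(np.linspace(0, 20, 500))
sigma_true = 1.2
x1 = true_flux + rng.normal(0.0, sigma_true, true_flux.size)
x2 = true_flux + rng.normal(0.0, sigma_true, true_flux.size)

# Paired-difference estimate of the single-measurement random error
diff = x1 - x2
sigma_hat = np.std(diff, ddof=1) / np.sqrt(2.0)
print(f"estimated random error = {sigma_hat:.2f} (true value {sigma_true})")
```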
The influence of lateral Earth structure on glacial isostatic adjustment in Greenland
NASA Astrophysics Data System (ADS)
Milne, Glenn A.; Latychev, Konstantin; Schaeffer, Andrew; Crowley, John W.; Lecavalier, Benoit S.; Audette, Alexandre
2018-05-01
We present the first results that focus on the influence of lateral Earth structure on Greenland glacial isostatic adjustment (GIA) using a model that can explicitly incorporate 3-D Earth structure. In total, eight realisations of lateral viscosity structure were developed using four global seismic velocity models and two global lithosphere (elastic) thickness models. Our results show that lateral viscosity structure has a significant influence on model output of both deglacial relative sea level (RSL) changes and present-day rates of vertical land motion. For example, lateral structure changes the RSL predictions in the Holocene by several tens of metres in many locations relative to the 1-D case. Modelled rates of vertical land motion are also significantly affected, with differences from the 1-D case commonly at the mm/yr level and exceeding 2 mm/yr in some locations. The addition of lateral structure was unable to account for previously identified data-model RSL misfits in northern and southern Greenland, suggesting limitations in the adopted ice model (Lecavalier et al. 2014) and/or the existence of processes not included in our model. Our results show large data-model discrepancies in uplift rates when applying a 1-D viscosity model tuned to fit the RSL data; these discrepancies cannot be reconciled by adding the realisations of lateral structure considered here. In many locations, the spread in model output for the eight different 3-D Earth models is of similar amplitude or larger than the influence of lateral structure (as defined by the average of all eight model runs). This reflects the differences between the four seismic and two lithosphere models used and implies a large uncertainty in defining the GIA signal given that other aspects that contribute to this uncertainty (e.g. scaling from seismic velocity to viscosity) were not considered in this study. In order to reduce this large model uncertainty, an important next step is to develop more accurate constraints on Earth structure beneath Greenland based on regional geophysical data sets.
Susanne Winter; Andreas Böck; Ronald E. McRoberts
2012-01-01
Tree diameter and height are commonly measured forest structural variables, and indicators based on them are candidates for assessing forest diversity. We conducted our study on the uncertainty of estimates for mostly large geographic scales for four indicators of forest structural gamma diversity: mean tree diameter, mean tree height, and standard deviations of tree...
Inferring terrestrial photosynthetic light use efficiency of temperate ecosystems from space
Thomas Hilker; Nicholas C. Coops; Forest G. Hall; Caroline J. Nichol; Alexei Lyapustin; T. Andrew Black; Michael A. Wulder; Ray Leuning; Alan Barr; David Y. Hollinger; Bill Munger; Compton J. Tucker
2011-01-01
Terrestrial ecosystems absorb about 2.8 Gt C yr-1, which is estimated to be about a quarter of the carbon emitted from fossil fuel combustion. However, the uncertainties of this sink are large, on the order of ±40%, with spatial and temporal variations largely unknown. One of the largest factors contributing to the uncertainty is photosynthesis,...
Paige F. B. Ferguson; Michael J. Conroy; John F. Chamblee; Jeffrey Hepinstall-Cymerman
2015-01-01
Parcelization and forest fragmentation are of concern for ecological, economic, and social reasons. Efforts to keep large, private forests intact may be supported by a decision-making process that incorporates landowners' objectives and uncertainty. We used structured decision making (SDM) with owners of large, private forests in Macon County, North Carolina....
Uncertainty during breast diagnostic evaluation: state of the science.
Montgomery, Mariann
2010-01-01
To present the state of the science on uncertainty in relationship to the experiences of women undergoing diagnostic evaluation for suspected breast cancer. Published articles from Medline, CINAHL, PubMED, and PsycINFO from 1983-2008 using the following key words: breast biopsy, mammography, uncertainty, reframing, inner strength, and disruption. Fifty research studies were examined with all reporting the presence of anxiety persisting throughout the diagnostic evaluation until certitude is achieved through the establishment of a definitive diagnosis. Indirect determinants of uncertainty for women undergoing breast diagnostic evaluation include measures of anxiety, depression, social support, emotional responses, defense mechanisms, and the psychological impact of events. Understanding and influencing the uncertainty experience have been suggested to be key in relieving psychosocial distress and positively influencing future screening behaviors. Several studies examine correlational relationships among anxiety, selection of coping methods, and demographic factors that influence uncertainty. A gap exists in the literature with regard to the relationship of inner strength and uncertainty. Nurses can be invaluable in assisting women in coping with the uncertainty experience by providing positive communication and support. Nursing interventions should be designed and tested for their effects on uncertainty experienced by women undergoing a breast diagnostic evaluation.
Uncertainty Quantification of Multi-Phase Closures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nadiga, Balasubramanya T.; Baglietto, Emilio
In the ensemble-averaged dispersed phase formulation used for CFD of multiphase flows in nuclear reactor thermohydraulics, closures of interphase transfer of mass, momentum, and energy constitute, by far, the biggest source of error and uncertainty. Reliable estimators of this source of error and uncertainty are currently non-existent. Here, we report on how modern Validation and Uncertainty Quantification (VUQ) techniques can be leveraged to not only quantify such errors and uncertainties, but also to uncover (unintended) interactions between closures of different phenomena. As such, this approach serves as a valuable aide in the research and development of multiphase closures. The joint modeling of lift, drag, wall lubrication, and turbulent dispersion (forces that lead to transfer of momentum between the liquid and gas phases) is examined in the framework of validation of the adiabatic but turbulent experiments of Liu and Bankoff, 1993. An extensive calibration study is undertaken with a popular combination of closure relations and the popular k-ϵ turbulence model in a Bayesian framework. When a wide range of superficial liquid and gas velocities and void fractions is considered, it is found that this set of closures can be validated against the experimental data only by allowing large variations in the coefficients associated with the closures. We argue that such an extent of variation is a measure of the uncertainty induced by the chosen set of closures. We also find that while mean fluid velocity and void fraction profiles are properly fit, fluctuating fluid velocity may or may not be properly fit. This aspect needs to be investigated further. The popular set of closures considered contains ad-hoc components and is undesirable from a predictive modeling point of view. Consequently, we next consider improvements that are being developed by the MIT group under CASL and which remove the ad-hoc elements. We use non-intrusive methodologies for sensitivity analysis and calibration (using Dakota) to study sensitivities of the CFD representation (STAR-CCM+) of fluid velocity profiles and void fraction profiles in the context of the Shaver and Podowski, 2015 correction to lift, and the Lubchenko et al., 2017 formulation of wall lubrication.
NASA Astrophysics Data System (ADS)
Qi, W.; Zhang, C.; Fu, G.; Sweetapple, C.; Zhou, H.
2016-02-01
The applicability of six fine-resolution precipitation products, including precipitation radar, infrared, microwave and gauge-based products, using different precipitation computation recipes, is evaluated using statistical and hydrological methods in northeastern China. In addition, a framework quantifying uncertainty contributions of precipitation products, hydrological models, and their interactions to uncertainties in ensemble discharges is proposed. The investigated precipitation products are Tropical Rainfall Measuring Mission (TRMM) products (TRMM3B42 and TRMM3B42RT), Global Land Data Assimilation System (GLDAS)/Noah, Asian Precipitation - Highly-Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE), Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN), and a Global Satellite Mapping of Precipitation (GSMAP-MVK+) product. Two hydrological models of different complexities, i.e. a water and energy budget-based distributed hydrological model and a physically based semi-distributed hydrological model, are employed to investigate the influence of hydrological models on simulated discharges. Results show APHRODITE has high accuracy at a monthly scale compared with other products, and GSMAP-MVK+ shows a clear advantage and is better than TRMM3B42 in relative bias (RB), Nash-Sutcliffe coefficient of efficiency (NSE), root mean square error (RMSE), correlation coefficient (CC), false alarm ratio, and critical success index. These findings could be very useful for validation, refinement, and future development of satellite-based products (e.g. NASA Global Precipitation Measurement). Although large uncertainty exists in heavy precipitation, hydrological models contribute most of the uncertainty in extreme discharges. Interactions between precipitation products and hydrological models can contribute to discharge uncertainty with a magnitude similar to that of the hydrological models. A better precipitation product does not guarantee a better discharge simulation because of interactions. It is also found that a good discharge simulation depends on a good coalition of a hydrological model and a precipitation product, suggesting that, although the satellite-based precipitation products are not as accurate as the gauge-based products, they could have better performance in discharge simulations when appropriately combined with hydrological models. This information is revealed for the first time and is very beneficial for precipitation product applications.
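For reference, the evaluation statistics named above (relative bias, Nash-Sutcliffe efficiency, RMSE, and correlation coefficient) are straightforward to compute; the sketch below uses hypothetical observed and simulated discharge series.
```python
import numpy as np

def evaluate_discharge(obs, sim):
    """Compute common skill metrics between observed and simulated discharge."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rb = (sim.sum() - obs.sum()) / obs.sum()                       # relative bias
    nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    cc = np.corrcoef(obs, sim)[0, 1]                               # correlation coefficient
    return {"RB": rb, "NSE": nse, "RMSE": rmse, "CC": cc}

# Hypothetical daily discharge series (m^3/s)
obs = np.array([12.0, 15.5, 30.2, 55.1, 40.3, 22.0, 18.4, 14.9])
sim = np.array([11.2, 16.8, 27.5, 60.3, 37.1, 24.6, 17.0, 13.8])
print(evaluate_discharge(obs, sim))
```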
Linde, Niklas; Ricci, Tullio; Baron, Ludovic; Shakas, Alexis; Berrino, Giovanna
2017-08-16
Existing 3-D density models of the Somma-Vesuvius volcanic complex (SVVC), Italy, largely disagree. Despite the scientific and socioeconomic importance of Vesuvius, there is no reliable 3-D density model of the SVVC. A considerable uncertainty prevails concerning the presence (or absence) of a dense body underlying the Vesuvius crater (1944 eruption) that is implied from extensive seismic investigations. We have acquired relative gravity measurements at 297 stations, including measurements in difficult-to-access areas (e.g., the first-ever measurements in the crater). In agreement with seismic investigations, the simultaneous inversion of these and historic data resolves a high-density body that extends from the surface of the Vesuvius crater down to depths that exceed 2 km. A 1.5-km radius horseshoe-shaped dense feature (open in the southwestern sector) reinforces the existing model of groundwater circulation within the SVVC. Based on its volcano-tectonic evolution, we interpret volcanic structures that have never been imaged before.
Report on the survey for electrostatic discharges on Mars using NASA's Deep Space Network (DSN)
NASA Astrophysics Data System (ADS)
Arabshahi, S.; Majid, W.; Geldzahler, B.; Kocz, J.; Schulter, T.; White, L.
2017-12-01
The Martian atmosphere has strong dust activity. It is suggested that the larger regional storms are capable of producing electric fields large enough to initiate electrostatic discharges. The storms have a charging process similar to that of terrestrial dust devils, and have hot cores and complicated vortex winds similar to those of terrestrial thunderstorms. However, due to uncertainties in our understanding of the electrical environment of the storms and the absence of related in-situ measurements, the existence (or non-existence) of such electrostatic discharges on the planet is yet to be confirmed. Knowing about the electrical activity on Mars is essential for future human exploration of the planet. We have recently launched a long-term monitoring campaign at NASA's Madrid Deep Space Communication Complex (MDSCC) to search for powerful discharges on Mars. The search occurs during routine tracking of Mars-orbiting spacecraft by Deep Space Network (DSN) radio telescopes. In this presentation, we will report on the results of processing and analysis of the data from the first six months of our campaign.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisk, William J.
2000-04-01
Theoretical considerations and empirical data suggest that existing technologies and procedures can improve indoor environments in a manner that significantly increases productivity and health. Existing literature contains moderate to strong evidence that characteristics of buildings and indoor environments significantly influence rates of communicable respiratory illness, allergy and asthma symptoms, sick building symptoms, and worker performance. While there is considerable uncertainty in the estimates of the magnitudes of productivity gains that may be obtained by providing better indoor environments, the projected gains are very large. For the U.S., the estimated potential annual savings and productivity gains are $6 to $14 billion from reduced respiratory disease, $2 to $4 billion from reduced allergies and asthma, $10 to $30 billion from reduced sick building syndrome symptoms, and $20 to $160 billion from direct improvements in worker performance that are unrelated to health. Productivity gains that are quantified and demonstrated could serve as a strong stimulus for energy efficiency measures that simultaneously improve the indoor environment.
Reassessing Pliocene temperature gradients
NASA Astrophysics Data System (ADS)
Tierney, J. E.
2017-12-01
With CO2 levels similar to present, the Pliocene Warm Period (PWP) is one of our best analogs for climate change in the near future. Temperature proxy data from the PWP describe dramatically reduced zonal and meridional temperature gradients that have proved difficult to reproduce with climate model simulations. Recently, debate has emerged regarding the interpretation of the proxies used to infer Pliocene temperature gradients; these interpretations affect the magnitude of inferred change and the degree of inconsistency with existing climate model simulations of the PWP. Here, I revisit the issue using Bayesian proxy forward modeling and prediction that propagates known uncertainties in the Mg/Ca, UK'37, and TEX86 proxy systems. These new spatiotemporal predictions are quantitatively compared to PWP simulations to assess probabilistic agreement. Results show generally good agreement between existing Pliocene simulations from the PlioMIP ensemble and SST proxy data, suggesting that exotic changes in the ocean-atmosphere are not needed to explain the Pliocene climate state. Rather, the spatial changes in SST during the Pliocene are largely consistent with elevated CO2 forcing.
Adaptive Control for Microgravity Vibration Isolation System
NASA Technical Reports Server (NTRS)
Yang, Bong-Jun; Calise, Anthony J.; Craig, James I.; Whorton, Mark S.
2005-01-01
Most active vibration isolation systems that try to provide a quiescent acceleration environment for space science experiments have utilized linear design methods. In this paper, we address adaptive control augmentation of an existing classical controller that employs a high-gain acceleration feedback together with a low-gain position feedback to center the isolated platform. The control design must account for parametric and dynamic uncertainties because the hardware of the isolation system is built as a payload-level isolator, and the acceleration sensor exhibits a significant bias. A neural network is incorporated to adaptively compensate for the system uncertainties, and a high-pass filter is introduced to mitigate the effect of the measurement bias. Simulations show that the adaptive control improves the performance of the existing acceleration controller and keeps the deviation of the isolated platform at the level achieved by the existing control system.
Wu, Baolin; Guan, Weihua
2015-01-01
Acar and Sun (2013, Biometrics, 69, 427-435) presented a generalized Kruskal-Wallis (GKW) test for genetic association studies that incorporated the genotype uncertainty and showed its robust and competitive performance compared to existing methods. We present another interesting way to derive the GKW test via a rank linear model. PMID:25351417
Middleton, John; Vaks, Jeffrey E
2007-04-01
Errors of calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization (ISO) guidelines provide simple equations for the estimation of calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of the uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed for optimizing the process, while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows for estimation of calibrator uncertainty for optimization of various value assignment processes, with a reduced number of measurements and reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
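The sketch below illustrates the general Monte Carlo approach described above on a deliberately simplified value-transfer chain: a reference-material value with its stated relative uncertainty is propagated through a replicated transfer measurement to a calibrator-assigned value. The two-step chain, the uncertainty magnitudes, and the replicate counts are hypothetical, not those of the prealbumin assay study.
```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 100_000

# Hypothetical reference-material certified value and relative standard uncertainty
ref_value, ref_rel_u = 100.0, 0.037

# Hypothetical value-transfer step: mean of n replicate measurements with
# a given relative repeatability, performed against the reference material.
n_replicates, meas_rel_u = 10, 0.02

ref_draws = rng.normal(ref_value, ref_rel_u * ref_value, n_trials)
transfer_noise = rng.normal(0.0, meas_rel_u * ref_value / np.sqrt(n_replicates), n_trials)
assigned_values = ref_draws + transfer_noise

rel_u_assigned = assigned_values.std(ddof=1) / assigned_values.mean()
added_rel_u = np.sqrt(max(rel_u_assigned**2 - ref_rel_u**2, 0.0))
print(f"relative uncertainty of assigned value: {rel_u_assigned:.3%}")
print(f"component added by the transfer step:  {added_rel_u:.3%}")
```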
NASA Astrophysics Data System (ADS)
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of the hydrological system. The development of large-scale, high-resolution hydrological simulation has refined spatial descriptions of hydrological behavior. Meanwhile, this trend is accompanied by increasing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), which couples Monte Carlo sampling with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the random sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and search performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the standard GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
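As a compact illustration of the GLUE workflow with an evolutionary sampler, the sketch below uses a small differential-evolution-style loop to explore the parameter space of a toy two-parameter model, keeps all evaluated parameter sets, and then applies the usual GLUE steps (NSE-based likelihood, behavioral threshold, likelihood-weighted prediction bounds). The toy model, threshold, and tuning constants are assumptions for illustration only.
```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "hydrological" model: two parameters map forcing to a response series
forcing = rng.gamma(2.0, 2.0, size=120)
def model(theta):
    a, b = theta
    return a * forcing + b * np.sqrt(forcing)

theta_true = np.array([1.5, 3.0])
obs = model(theta_true) + rng.normal(0.0, 0.5, forcing.size)

def nse(sim):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Differential-evolution-style sampling that archives every evaluated parameter set
bounds = np.array([[0.1, 5.0], [0.0, 10.0]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(30, 2))
archive_theta, archive_like = [], []
for generation in range(40):
    for i in range(len(pop)):
        r1, r2, r3 = pop[rng.choice(len(pop), 3, replace=False)]
        trial = np.clip(r1 + 0.7 * (r2 - r3), bounds[:, 0], bounds[:, 1])
        like_trial, like_cur = nse(model(trial)), nse(model(pop[i]))
        archive_theta.append(trial)
        archive_like.append(like_trial)
        if like_trial > like_cur:
            pop[i] = trial

# GLUE step: keep behavioral sets and form likelihood-weighted prediction bounds
theta_all = np.array(archive_theta)
like_all = np.array(archive_like)
behavioral = like_all > 0.6                      # behavioral threshold (assumed)
w = like_all[behavioral]
w = w / w.sum()
sims = np.array([model(t) for t in theta_all[behavioral]])

# Weighted 5% / 95% bounds at each time step
order = np.argsort(sims, axis=0)
lower, upper = np.empty(obs.size), np.empty(obs.size)
for t in range(obs.size):
    s = sims[order[:, t], t]
    cw = np.cumsum(w[order[:, t]])
    lower[t] = s[np.searchsorted(cw, 0.05)]
    upper[t] = s[np.searchsorted(cw, 0.95)]

coverage = np.mean((obs >= lower) & (obs <= upper))
print(f"behavioral sets: {behavioral.sum()}, observation coverage of 5-95% bounds: {coverage:.2f}")
```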
Parton shower and NLO-matching uncertainties in Higgs boson pair production
NASA Astrophysics Data System (ADS)
Jones, Stephen; Kuttimalai, Silvan
2018-02-01
We perform a detailed study of NLO parton shower matching uncertainties in Higgs boson pair production through gluon fusion at the LHC based on a generic and process independent implementation of NLO subtraction and parton shower matching schemes for loop-induced processes in the Sherpa event generator. We take into account the full top-quark mass dependence in the two-loop virtual corrections and compare the results to an effective theory approximation. In the full calculation, our findings suggest large parton shower matching uncertainties that are absent in the effective theory approximation. We observe large uncertainties even in regions of phase space where fixed-order calculations are theoretically well motivated and parton shower effects expected to be small. We compare our results to NLO matched parton shower simulations and analytic resummation results that are available in the literature.
Siddique, Juned; Harel, Ofer; Crespi, Catherine M.; Hedeker, Donald
2014-01-01
The true missing data mechanism is never known in practice. We present a method for generating multiple imputations for binary variables that formally incorporates missing data mechanism uncertainty. Imputations are generated from a distribution of imputation models rather than a single model, with the distribution reflecting subjective notions of missing data mechanism uncertainty. Parameter estimates and standard errors are obtained using rules for nested multiple imputation. Using simulation, we investigate the impact of missing data mechanism uncertainty on post-imputation inferences and show that incorporating this uncertainty can increase the coverage of parameter estimates. We apply our method to a longitudinal smoking cessation trial where nonignorably missing data were a concern. Our method provides a simple approach for formalizing subjective notions regarding nonresponse and can be implemented using existing imputation software. PMID:24634315
NASA Technical Reports Server (NTRS)
Wu, Dongliang L.
2017-01-01
Clouds, ice clouds in particular, are a major source of uncertainty in climate models. Submm-wave sensors fill the sensitivity gap between MW and IR. Cloud microphysical properties (particle size and shape) account for large (200 and 40) measurement uncertainty.
Management applications of discontinuity theory
Human impacts on the environment are multifaceted and can occur across distinct spatiotemporal scales. Ecological responses to environmental change are therefore difficult to predict, and entail large degrees of uncertainty. Such uncertainty requires robust tools for management...
NASA Technical Reports Server (NTRS)
Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.
2002-01-01
The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
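The sketch below mimics the basic evaluation strategy described above on a synthetic rain-rate series: the "true" accumulation over a period is compared with the accumulation estimated from regular discrete samples at interval dt, and the spread of that error across all possible phase offsets characterizes the sampling uncertainty. The synthetic rainfall process and interval choices are assumptions.
```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic hourly area-averaged rain rate for 30 days (intermittent, skewed)
n_hours = 30 * 24
wet = rng.random(n_hours) < 0.15
rain = np.where(wet, rng.gamma(shape=0.8, scale=4.0, size=n_hours), 0.0)

true_mean = rain.mean()

def sampling_error(rain, dt_hours):
    """Relative RMS error of the period-mean rain rate from samples every dt hours,
    averaged over all possible sampling phase offsets."""
    errors = []
    for offset in range(dt_hours):
        sampled_mean = rain[offset::dt_hours].mean()
        errors.append((sampled_mean - true_mean) / true_mean)
    return np.sqrt(np.mean(np.square(errors)))

for dt in [1, 3, 6, 8, 12]:
    print(f"dt = {dt:2d} h: relative sampling error = {sampling_error(rain, dt):.1%}")
```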
NASA Astrophysics Data System (ADS)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.
2018-03-01
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger
2017-01-01
Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large-scale inference infeasible. This is especially true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large-scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.
How much electrical energy storage do we need? A synthesis for the U.S., Europe, and Germany
Cebulla, Felix; Haas, Jannik; Eichman, Josh; ...
2018-02-03
Electrical energy storage (EES) is a promising flexibility source for prospective low-carbon energy systems. In the last couple of years, many studies for EES capacity planning have been produced. However, these resulted in a very broad range of power and energy capacity requirements for storage, making it difficult for policymakers to identify clear storage planning recommendations. Therefore, we studied 17 recent storage expansion studies pertinent to the U.S., Europe, and Germany. We then systemized the storage requirement per variable renewable energy (VRE) share and generation technology. Our synthesis reveals that with increasing VRE shares, the EES power capacity increases linearly; and the energy capacity, exponentially. Further, by analyzing the outliers, the EES energy requirements can be at least halved. It becomes clear that grids dominated by photovoltaic energy call for more EES, while large shares of wind rely more on transmission capacity. Taking into account the energy mix clarifies - to a large degree - the apparent conflict of the storage requirements between the existing studies. Finally, there might exist a negative bias towards storage because transmission costs are frequently optimistic (by neglecting execution delays and social opposition) and storage can cope with uncertainties, but these issues are rarely acknowledged in the planning process.
McBride, Marissa F; Wilson, Kerrie A; Bode, Michael; Possingham, Hugh P
2007-12-01
Uncertainty in the implementation and outcomes of conservation actions that is not accounted for leaves conservation plans vulnerable to potential changes in future conditions. We used a decision-theoretic approach to investigate the effects of two types of investment uncertainty on the optimal allocation of global conservation resources for land acquisition in the Mediterranean Basin. We considered uncertainty about (1) whether investment will continue and (2) whether the acquired biodiversity assets are secure, which we termed transaction uncertainty and performance uncertainty, respectively. We also developed and tested the robustness of different rules of thumb for guiding the allocation of conservation resources when these sources of uncertainty exist. In the presence of uncertainty in future investment ability (transaction uncertainty), the optimal strategy was opportunistic, meaning the investment priority should be to act where uncertainty is highest while investment remains possible. When there was a probability that investments would fail (performance uncertainty), the optimal solution became a complex trade-off between the immediate biodiversity benefits of acting in a region and the perceived longevity of the investment. In general, regions were prioritized for investment when they had the greatest performance certainty, even if an alternative region was highly threatened or had higher biodiversity value. The improved performance of rules of thumb when accounting for uncertainty highlights the importance of explicitly incorporating sources of investment uncertainty and evaluating potential conservation investments in the context of their likely long-term success.
Information-Theoretic Benchmarking of Land Surface Models
NASA Astrophysics Data System (ADS)
Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong
2016-04-01
Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al. [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and the boundary conditions that describe the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed about 40%. There was relatively little difference between the different models. 1. G. Abramowitz, R. Leuning, M. Clark, A. Pitman, Evaluating the performance of land surface models. Journal of Climate 21, (2008). 2. W. Gong, H. V. Gupta, D. Yang, K. Sricharan, A. O. Hero, Estimating Epistemic & Aleatory Uncertainties During Hydrologic Modeling: An Information Theoretic Approach. Water Resources Research 49, 2253-2273 (2013). 3. G. S. Nearing, H. V. Gupta, The quantity and quality of information in hydrologic models. Water Resources Research 51, 524-538 (2015). 4. H. V. Gupta, G. S. Nearing, Using models and data to learn: A systems theoretic perspective on the future of hydrological science. Water Resources Research 50(6), 5351-5359 (2014). 5. H. V. Gupta et al., Large-sample hydrology: a need to balance depth with breadth. Hydrology and Earth System Sciences Discussions 10, 9147-9189 (2013).
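As a rough illustration of the underlying idea, quantifying how much of the information in observations a model fails to reproduce relative to a benchmark, the sketch below uses a plug-in mutual-information estimate on synthetic data. It is not the authors' benchmarking procedure; the variable names, noise levels, and binning are placeholders.

```python
import numpy as np

def mutual_information(x, y, bins=20):
    """Plug-in estimate of I(X;Y) in nats from a 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
obs = rng.normal(size=5000)                      # observed soil moisture (synthetic)
model = obs + rng.normal(scale=0.7, size=5000)   # imperfect model prediction
bench = obs + rng.normal(scale=0.4, size=5000)   # benchmark (e.g., regression on forcings)

# Information retained by the benchmark but lost by the model
print(mutual_information(obs, bench) - mutual_information(obs, model))
```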
Tight finite-key analysis for quantum cryptography
Tomamichel, Marco; Lim, Charles Ci Wen; Gisin, Nicolas; Renner, Renato
2012-01-01
Despite enormous theoretical and experimental progress in quantum cryptography, the security of most current implementations of quantum key distribution is still not rigorously established. One significant problem is that the security of the final key strongly depends on the number, M, of signals exchanged between the legitimate parties. Yet, existing security proofs are often only valid asymptotically, for unrealistically large values of M. Another challenge is that most security proofs are very sensitive to small differences between the physical devices used by the protocol and the theoretical model used to describe them. Here we show that these gaps between theory and experiment can be simultaneously overcome by using a recently developed proof technique based on the uncertainty relation for smooth entropies. PMID:22252558
The biogeochemical heterogeneity of tropical forests.
Townsend, Alan R; Asner, Gregory P; Cleveland, Cory C
2008-08-01
Tropical forests are renowned for their biological diversity, but also harbor variable combinations of soil age, chemistry and susceptibility to erosion or tectonic uplift. Here we contend that the combined effects of this biotic and abiotic diversity promote exceptional biogeochemical heterogeneity at multiple scales. At local levels, high plant diversity creates variation in chemical and structural traits that affect plant production, decomposition and nutrient cycling. At regional levels, myriad combinations of soil age, soil chemistry and landscape dynamics create variation and uncertainty in limiting nutrients that do not exist at higher latitudes. The effects of such heterogeneity are not well captured in large-scale estimates of tropical ecosystem function, but we suggest new developments in remote sensing can help bridge the gap.
Zhang, Chunlin; Geng, Xuesong; Wang, Hao; Zhou, Lei; Wang, Boguang
2017-01-01
Atmospheric ammonia (NH3), a common alkaline gas found in air, plays a significant role in atmospheric chemistry, such as in the formation of secondary particles. However, large uncertainties remain in the estimation of ammonia emissions from nonagricultural sources, such as wastewater treatment plants (WWTPs). In this study, the ammonia emission factors from a large WWTP utilizing three typical biological treatment techniques to process wastewater in South China were calculated using the US EPA's WATER9 model with three years of raw sewage measurements and information about the facility. The individual emission factors calculated were 0.15 ± 0.03, 0.24 ± 0.05, 0.29 ± 0.06, and 0.25 ± 0.05 g NH3 m-3 sewage for the adsorption-biodegradation activated sludge treatment process, the UNITANK process (an upgrade of the sequencing batch reactor activated sludge treatment process), and two slightly different anaerobic-anoxic-oxic treatment processes, respectively. The overall emission factor of the WWTP was 0.24 ± 0.06 g NH3 m-3 sewage. The pH of the wastewater influent is likely an important factor affecting ammonia emissions, because higher emission factors existed at higher pH values. Based on the ammonia emission factor generated in this study, sewage treatment accounted for approximately 4% of the ammonia emissions for the urban area of South China's Pearl River Delta (PRD) in 2006, which is much less than the value of 34% estimated in previous studies. To reduce the large uncertainty in the estimation of ammonia emissions in China, more field measurements are required. Copyright © 2016 Elsevier Ltd. All rights reserved.
Aerosol profiling using the ceilometer network of the German Meteorological Service
NASA Astrophysics Data System (ADS)
Flentje, H.; Heese, B.; Reichardt, J.; Thomas, W.
2010-08-01
The German Meteorological Service (DWD) operates about 52 lidar ceilometers within its synoptic observations network, covering Germany. These affordable low-power lidar systems provide spatially and temporally highly resolved aerosol backscatter profiles which can operationally provide quasi 3-D distributions of particle backscatter intensity. Although originally designed for cloud height detection, the systems can now, thanks to recent significant improvements, follow the development of the boundary layer and detect denser particle plumes in the free troposphere such as volcanic ash, Saharan dust or fire smoke. Thus the network constitutes a powerful aerosol plume alerting and tracking system. If auxiliary aerosol information is available, the particle backscatter coefficient, the extinction coefficient and even particle mass concentrations may be estimated, albeit with large uncertainties. Therefore, large synergistic benefit is achieved if the ceilometers are linked to existing lidar networks like EARLINET or integrated into WMO's envisioned Global Aerosol Lidar Observation Network GALION. To this end, we demonstrate the potential and limitations of ceilometer networks by means of three representative aerosol episodes over Europe, namely Saharan dust, Mediterranean fire smoke and, in more detail, the Icelandic Eyjafjoll volcano eruption from mid April 2010 onwards. The DWD (Jenoptik CHM15k) lidar ceilometer network tracked the Eyjafjoll ash layers over Germany and roughly estimated peak extinction coefficients and mass concentrations on 17 April of 4-6 (±2) × 10^-4 m^-1 and 500-750 (±300) μg m^-3, respectively, based on co-located aerosol optical depth, nephelometer (scattering coefficient) and particle mass concentration measurements. Though large, the uncertainties are small enough for the network to serve, for example, as an aviation advisory tool, indicating whether the legal flight ban threshold of presently 2 mg m^-3 is about to be exceeded.
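The conversion chain mentioned above, from particle backscatter to extinction and then to mass concentration, can be sketched as follows. The lidar ratio and specific extinction used here are assumed illustrative values; in practice they must come from the co-located auxiliary measurements described in the abstract.

```python
import numpy as np

# Hypothetical profile of particle backscatter coefficient [m^-1 sr^-1]
backscatter = np.array([2e-6, 8e-6, 1.2e-5, 6e-6, 1e-6])

# Assumed extinction-to-backscatter (lidar) ratio for ash [sr]
lidar_ratio = 50.0

# Assumed specific extinction (mass extinction efficiency) [m^2 g^-1]
specific_extinction = 0.6

extinction = lidar_ratio * backscatter              # [m^-1]
mass_conc = extinction / specific_extinction * 1e6  # [µg m^-3]

print(f"peak extinction {extinction.max():.1e} m^-1, "
      f"peak mass concentration {mass_conc.max():.0f} µg m^-3")
```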
The permafrost carbon inventory on the Tibetan Plateau: a new evaluation using deep sediment cores
NASA Astrophysics Data System (ADS)
Yang, Y.; Ding, J.; Li, F.; Yang, G.; Chen, L.
2016-12-01
The permafrost organic carbon (OC) stock is of global significance because of its large pool size and potential positive feedback to climate warming. However, due to the lack of systematic field observations and appropriate upscaling methodologies, substantial uncertainties exist in the permafrost OC budget, which limits our understanding of the fate of frozen carbon in a warming world. In particular, the lack of comprehensive estimation of OC stock across alpine permafrost means that the current knowledge on this issue remains incomplete. Here we evaluated the pool size and spatial variations of the permafrost OC stock to a depth of 3 m on the Tibetan Plateau by combining systematic measurements from a substantial number of pedons (i.e., 342 three-meter-deep cores and 177 50-cm-deep pits) with a machine learning technique (i.e., support vector machine, SVM). We also quantified uncertainties in the permafrost carbon budget by conducting Monte Carlo simulations. Our results revealed that the combination of systematic measurements with the SVM model allowed spatially explicit estimates. The OC density (OC amount per unit area, OCD) exhibited a decreasing trend from the southeastern to the northwestern plateau, with the exception that OCD in the swamp meadow was substantially higher than that in surrounding regions. Our results also demonstrated that Tibetan permafrost stored a large amount of OC in the top 3 meters, with the median OC pool size being 15.31 Pg C (interquartile range: 13.03-17.77 Pg C). Of this stock, 44% occurred in deep layers (i.e., 100-300 cm), close to the proportion observed across the northern circumpolar permafrost region. The large carbon pool size, together with significant permafrost thawing, implies a risk of carbon emissions and positive climate feedback across the Tibetan alpine permafrost region.
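A minimal sketch of the kind of workflow described, support-vector regression trained on pedon observations, upscaled to grid cells, with Monte Carlo propagation of a residual error, is given below on synthetic data. The predictors, model settings and error model are placeholders, not the authors' configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic stand-ins for pedon data: four environmental predictors and the
# observed 0-3 m organic carbon density (OCD) [kg C m^-2] at each pedon.
X = rng.normal(size=(300, 4))
ocd = 20 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2, size=300)

svm = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5))
svm.fit(X, ocd)

# Upscaling: predictor fields and areas [m^2] for the grid cells of the region
X_grid = rng.normal(size=(1000, 4))
area = np.full(1000, 1e6)

# Monte Carlo propagation of a crude residual error onto the regional stock
resid_sd = np.std(ocd - svm.predict(X))
stocks_pg = []
for _ in range(500):
    pred = svm.predict(X_grid) + rng.normal(scale=resid_sd, size=1000)
    stocks_pg.append(np.sum(pred * area) / 1e12)   # kg C -> Pg C
print(np.percentile(stocks_pg, [25, 50, 75]))
```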
NASA Astrophysics Data System (ADS)
Tang, S.; Xie, S.; Tang, Q.; Zhang, Y.
2017-12-01
Two types of instruments, the eddy correlation flux measurement system (ECOR) and the energy balance Bowen ratio system (EBBR), are used at the Atmospheric Radiation Measurement (ARM) program Southern Great Plains (SGP) site to measure surface latent and sensible fluxes. ECOR and EBBR typically sample different land surface types, and the domain-mean surface fluxes derived from ECOR and EBBR are not always consistent. The uncertainties of the surface fluxes will have impacts on the derived large-scale forcing data and further affect the simulations of single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulation models (LES), especially for the shallow-cumulus clouds which are mainly driven by surface forcing. This study aims to quantify the uncertainties of the large-scale forcing caused by surface turbulence flux measurements and investigate the impacts on cloud simulations using long-term observations from the ARM SGP site.
On seismic resolution of lateral heterogeneity in the Earth's outermost core
NASA Astrophysics Data System (ADS)
Garnero, Edward J.; Helmberger, Donald V.
1995-03-01
Issues concerning resolution of seismically determined outermost core properties are presented with an example from three earthquakes in the Fiji-Tonga region. Travel time behavior of the commonly used family of SmKS waves, which travel as S in the mantle, P in the core, reflecting m - 1 times at the underside of the core-mantle boundary (CMB), is analyzed over a large distance range (125-165°). Data having wavepaths through an area of known D″ heterogeneity (±2%) exhibit systematic anomalies in SmKS differential times. Two-dimensional wave propagation experiments demonstrate how large-scale lower-mantle velocity perturbations can explain long-wavelength behavior of such anomalous SmKS times, though heterogeneity on smaller scales may be responsible for the observed scatter about these trends. If lower-mantle heterogeneity is not properly accounted for in deriving a core model, misfit of the mantle model maps directly into core structure. The existence of outermost core heterogeneity is difficult to resolve at present, owing to uncertainties in global lower-mantle structure. Resolving a one-dimensional chemically stratified outermost core also remains difficult, owing to the same uncertainties. Inclusion of the slowly accruing broadband data should help in this regard. Restricting study to higher multiples of SmKS (m = 2, 3, 4) can help reduce the effect of mantle heterogeneity, because of the closeness of the mantle legs of the wavepaths. SmKS waves are ideal in providing additional information on the details of lower-mantle heterogeneity.
NASA Astrophysics Data System (ADS)
Zhu, Xudong; Zhuang, Qianlai; Qin, Zhangcai; Glagolev, Mikhail; Song, Lulu
2013-04-01
Methane (CH4) emissions from wetland ecosystems in northern high latitudes provide a potentially positive feedback to global climate warming. Large uncertainties still remain in estimating wetland CH4 emissions at regional scales. Here we develop a statistical model of CH4 emissions using an artificial neural network (ANN) approach and field observations of CH4 fluxes. Six explanatory variables (air temperature, precipitation, water table depth, soil organic carbon, soil total porosity, and soil pH) are included in the development of ANN models, which are then extrapolated to the northern high latitudes to estimate monthly CH4 emissions from 1990 to 2009. We estimate that the annual wetland CH4 source from the northern high latitudes (north of 45°N) is 48.7 Tg CH4 yr-1 (1 Tg = 10^12 g) with an uncertainty range of 44.0-53.7 Tg CH4 yr-1. The estimated wetland CH4 emissions show a large spatial variability over the northern high latitudes, due to variations in hydrology, climate, and soil conditions. Significant interannual and seasonal variations of wetland CH4 emissions exist over the past two decades, and the emissions in this period are most sensitive to variations in water table position. To improve future assessment of wetland CH4 dynamics in this region, research priorities should be directed to better characterizing hydrological processes of wetlands, including temporal dynamics of water table position and spatial dynamics of wetland areas.
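The statistical-upscaling approach described, an ANN trained on the six listed predictors and then applied to gridded fields, can be sketched as below. The data are synthetic and the network size, units and flux model are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Synthetic stand-ins for the six explanatory variables named above: air
# temperature, precipitation, water table depth, soil organic carbon,
# soil total porosity, and soil pH (columns in that order).
X = rng.normal(size=(400, 6))
# Synthetic CH4 flux "observations" [mg CH4 m^-2 d^-1]
flux = 30 + 8 * X[:, 0] + 5 * X[:, 2] + rng.normal(scale=4, size=400)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                 random_state=0))
ann.fit(X, flux)

# Extrapolation step: apply the trained network to gridded monthly predictor
# fields (synthetic here) to build a regional emission estimate.
X_grid = rng.normal(size=(2000, 6))
print(ann.predict(X_grid).mean())
```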
Phylo.io: Interactive Viewing and Comparison of Large Phylogenetic Trees on the Web.
Robinson, Oscar; Dylus, David; Dessimoz, Christophe
2016-08-01
Phylogenetic trees are pervasively used to depict evolutionary relationships. Increasingly, researchers need to visualize large trees and compare multiple large trees inferred for the same set of taxa (reflecting uncertainty in the tree inference or genuine discordance among the loci analyzed). Existing tree visualization tools are however not well suited to these tasks. In particular, side-by-side comparison of trees can prove challenging beyond a few dozen taxa. Here, we introduce Phylo.io, a web application to visualize and compare phylogenetic trees side-by-side. Its distinctive features are: highlighting of similarities and differences between two trees, automatic identification of the best matching rooting and leaf order, scalability to large trees, high usability, multiplatform support via standard HTML5 implementation, and possibility to store and share visualizations. The tool can be freely accessed at http://phylo.io and can easily be embedded in other web servers. The code for the associated JavaScript library is available at https://github.com/DessimozLab/phylo-io under an MIT open source license. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
The 1909 Taipei earthquake: implication for seismic hazard in Taipei
Kanamori, Hiroo; Lee, William H.K.; Ma, Kuo-Fong
2012-01-01
The 1909 April 14 Taiwan earthquake caused significant damage in Taipei. Most of the information on this earthquake available until now is from the written reports on its macro-seismic effects and from seismic station bulletins. In view of the importance of this event for assessing the shaking hazard in the present-day Taipei, we collected historical seismograms and station bulletins of this event and investigated them in conjunction with other seismological data. We compared the observed seismograms with those from recent earthquakes in similar tectonic environments to characterize the 1909 earthquake. Despite the inevitably large uncertainties associated with old data, we conclude that the 1909 Taipei earthquake is a relatively deep (50–100 km) intraplate earthquake that occurred within the subducting Philippine Sea Plate beneath Taipei with an estimated M_W of 7 ± 0.3. Some intraplate events elsewhere in the world are enriched in high-frequency energy and the resulting ground motions can be very strong. Thus, despite its relatively large depth and a moderately large magnitude, it would be prudent to review the safety of the existing structures in Taipei against large intraplate earthquakes like the 1909 Taipei earthquake.
NASA Technical Reports Server (NTRS)
Leblanc, T.; Godin-Beekmann, S.; Payen, Godin-Beekmann; Gabarrot, Franck; vanGijsel, Anne; Bandoro, J.; Sica, R.; Trickl, T.
2012-01-01
The international Network for the Detection of Atmospheric Composition Change (NDACC) is a global network of high-quality, remote-sensing research stations for observing and understanding the physical and chemical state of the Earth's atmosphere. As part of NDACC, over 20 ground-based lidar instruments are dedicated to the long-term monitoring of atmospheric composition and to the validation of space-borne measurements of the atmosphere from environmental satellites such as Aura and ENVISAT. One caveat of large networks such as NDACC is the difficulty of archiving measurement and analysis information consistently from one research group (or instrument) to another [1][2][3]. Yet the need for consistent definitions has strengthened as datasets of various origin (e.g., satellite and ground-based) are increasingly used for intercomparisons and validation, and are ingested together in global assimilation systems. In the framework of the 2010 Call for Proposals by the International Space Science Institute (ISSI) located in Bern, Switzerland, a team of lidar experts was created to address existing issues in three critical aspects of the NDACC lidar ozone and temperature data retrievals: signal filtering and the vertical filtering of the retrieved profiles, the quantification and propagation of the uncertainties, and the consistent definition and reporting of filtering and uncertainties in the NDACC-archived products. Additional experts from the satellite and global data standards communities complement the team to help address issues specific to the latter aspect.
Investigation of uncertainties of establishment schemes in dynamic global vegetation models
NASA Astrophysics Data System (ADS)
Song, Xiang; Zeng, Xiaodong
2014-01-01
In Dynamic Global Vegetation Models (DGVMs), the establishment of woody vegetation refers to flowering, fertilization, seed production, germination, and the growth of tree seedlings. It determines not only the population densities but also other important ecosystem structural variables. In current DGVMs, establishments of woody plant functional types (PFTs) are assumed to be either the same in the same grid cell, or largely stochastic. We investigated the uncertainties in the competition of establishment among coexisting woody PFTs from three aspects: the dependence of PFT establishments on vegetation states; background establishment; and relative establishment potentials of different PFTs. Sensitivity experiments showed that the dependence of establishment rate on the fractional coverage of a PFT favored the dominant PFT by increasing its share in establishment. While a small background establishment rate had little impact on equilibrium states of the ecosystem, it did change the timescale required for the establishment of alien species in pre-existing forest due to their disadvantage in seed competition during the early stage of invasion. Meanwhile, establishment purely from background (the scheme commonly used in current DGVMs) led to inconsistent behavior in response to the change in PFT specification (e.g., number of PFTs and their specification). Furthermore, the results also indicated that trade-off between individual growth and reproduction/colonization has significant influences on the competition of establishment. Hence, further development of establishment parameterization in DGVMs is essential in reducing the uncertainties in simulations of both ecosystem structures and successions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draeger, Erik W.
The theme of this year’s meeting was “Predictivity: Now and in the Future”. After welcoming remarks, Erik Draeger gave a talk on the NNSA Labs’ history of predictive simulation and the new challenges faced by upcoming architecture changes. He described an example where the volume of analysis data produced by a set of inertial confinement fusion (ICF) simulations on the Trinity machine was too large to store or transfer, and the steps needed to reduce it to a manageable size. He also described the software re-engineering plan for LLNL’s suite of multiphysics codes and physics packages with a new push toward common components, making collaboration with teams like the CCMSC who already have experience trying to architect complex multiphysics code infrastructure on next-generation architectures all the more important. Phil Smith then gave an overview outlining the goals of the project, namely to accelerate development of new technology in the form of high efficiency carbon capture pulverized coal power generation as well as further optimize existing state of the art designs. He then presented a summary of the Center’s top-down uncertainty quantification approach, in which ultimate target predictivity informs uncertainty targets for lower-level components, and gave data on how close all the different components currently are to their targets. Most components still need an approximately two-fold reduction in uncertainty to hit the ultimate predictivity target, but the current accuracy is already rather impressive.
Gaussian Process Interpolation for Uncertainty Estimation in Image Registration
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
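A compact numerical sketch of the core idea, treating interpolated intensities as a Gaussian process posterior whose covariance quantifies interpolation uncertainty at off-grid points, is shown below in one dimension. The kernel, length scale and noise level are assumed values, not those used by the authors.

```python
import numpy as np

def rbf(a, b, length_scale=1.5, variance=1.0):
    """Squared-exponential covariance between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

# Base image grid and intensities (a 1-D row stands in for an image here)
x_grid = np.arange(0.0, 10.0)
y_grid = np.sin(x_grid)

# Off-grid locations produced by resampling under a candidate transformation
x_new = np.array([0.3, 2.7, 5.5, 8.9])

noise = 1e-3
K = rbf(x_grid, x_grid) + noise * np.eye(len(x_grid))
Ks = rbf(x_new, x_grid)
Kss = rbf(x_new, x_new)

K_inv = np.linalg.inv(K)
mean = Ks @ K_inv @ y_grid        # GP-interpolated intensities
cov = Kss - Ks @ K_inv @ Ks.T     # interpolation uncertainty (posterior covariance)
print(mean)
print(np.sqrt(np.clip(np.diag(cov), 0.0, None)))
```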
Economic and environmental costs of regulatory uncertainty for coal-fired power plants.
Patiño-Echeverri, Dalia; Fischbeck, Paul; Kriegler, Elmar
2009-02-01
Uncertainty about the extent and timing of CO2 emissions regulations for the electricity-generating sector exacerbates the difficulty of selecting investment strategies for retrofitting or alternatively replacing existing coal-fired power plants. This may result in inefficient investments that impose economic and environmental costs on society. In this paper, we construct a multiperiod decision model with an embedded multistage stochastic dynamic program minimizing the expected total costs of plant operation, installations, and pollution allowances. We use the model to forecast optimal sequential investment decisions of a power plant operator with and without uncertainty about future CO2 allowance prices. The comparison of the two cases demonstrates that uncertainty about future CO2 emissions regulations might cause significant economic costs and higher air emissions.
Water resources in the twenty-first century; a study of the implications of climate uncertainty
Moss, Marshall E.; Lins, Harry F.
1989-01-01
The interactions of the water resources on and within the surface of the Earth with the atmosphere that surrounds it are exceedingly complex. Increased uncertainty can be attached to the availability of water of usable quality in the 21st century, therefore, because of potential anthropogenic changes in the global climate system. For the U.S. Geological Survey to continue to fulfill its mission with respect to assessing the Nation's water resources, an expanded program to study the hydrologic implications of climate uncertainty will be required. The goal for this program is to develop knowledge and information concerning the potential water-resources implications for the United States of uncertainties in climate that may result from both anthropogenic and natural changes of the Earth's atmosphere. Like most past and current water-resources programs of the Geological Survey, the climate-uncertainty program should be composed of three elements: (1) research, (2) data collection, and (3) interpretive studies. However, unlike most other programs, the climate-uncertainty program necessarily will be dominated by its research component during its early years. Critical new concerns to be addressed by the research component are (1) areal estimates of evapotranspiration, (2) hydrologic resolution within atmospheric (climatic) models at the global scale and at mesoscales, (3) linkages between hydrology and climatology, and (4) methodology for the design of data networks that will help to track the impacts of climate change on water resources. Other ongoing activities in U.S. Geological Survey research programs will be enhanced to make them more compatible with climate-uncertainty research needs. The existing hydrologic data base of the Geological Survey serves as a key element in assessing hydrologic and climatologic change. However, this data base has evolved in response to other needs for hydrologic information and probably is not as sensitive to climate change as is desirable. Therefore, as measurement and network-design methodologies are improved to account for climate-change potential, new data-collection activities will be added to the existing programs. One particular area of data-collection concern pertains to the phenomenon of evapotranspiration. Interpretive studies of the hydrologic implications of climate uncertainty will be initiated by establishing several studies at the river-basin scale in diverse hydroclimatic and demographic settings. These studies will serve as tests of the existing methodologies for studying the impacts of climate change and also will help to define subsequent research priorities. A prototype for these studies was initiated in early 1988 in the Delaware River basin.
NASA Astrophysics Data System (ADS)
Harvey, Richard Paul, III
Releases of radioactive material have occurred at various Department of Energy (DOE) weapons facilities and facilities associated with the nuclear fuel cycle in the generation of electricity. Many different radionuclides have been released to the environment with resulting exposure of the population to these various sources of radioactivity. Radioiodine has been released from a number of these facilities and is a potential public health concern due to its physical and biological characteristics. Iodine exists as various isotopes, but our focus is on 131I due to its relatively long half-life, its prevalence in atmospheric releases and its contribution to offsite dose. The assumption of physical and chemical form is speculated to have a profound impact on the deposition of radioactive material within the respiratory tract. In the case of iodine, it has been shown that more than one type of physical and chemical form may be released to, or exist in, the environment; iodine can exist as a particle or as a gas. The gaseous species can be further segregated based on chemical form: elemental, inorganic, and organic iodides. Chemical compounds in each class are assumed to behave similarly with respect to biochemistry. Studies at Oak Ridge National Laboratories have demonstrated that 131I is released as a particulate, as well as in elemental, inorganic and organic chemical form. The internal dose estimate from 131I may be very different depending on the effect that chemical form has on fractional deposition, gas uptake, and clearance in the respiratory tract. There are many sources of uncertainty in the estimation of environmental dose including source term, airborne transport of radionuclides, and internal dosimetry. Knowledge of uncertainty in internal dosimetry is essential for estimating dose to members of the public and for determining total uncertainty in dose estimation. An important calculational step in any lung model is the regional estimation of deposition fractions and gas uptake of radionuclides in various regions of the lung. Variability in regional radionuclide deposition within lung compartments may significantly contribute to the overall uncertainty of the lung model. The uncertainty of lung deposition and biological clearance is dependent upon physiological and anatomical parameters of individuals as well as characteristic parameters of the particulate material. These parameters introduce uncertainty into internal dose estimates due to their inherent variability. Anatomical and physiological input parameters are age and gender dependent. This work has determined the uncertainty in internal dose estimates and the sensitive parameters involved in modeling particulate deposition and gas uptake of different physical and chemical forms of 131I with age and gender dependencies.
Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate
NASA Technical Reports Server (NTRS)
Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.
2013-01-01
There have been few discussions on using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis only from computational data. The method uses current CFD uncertainty techniques coupled with the Student-t distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied within a specified tolerance or bias error and the differences in the results are used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations and conclusions drawn about the feasibility of using CFD without experimental data. The results provide a tactic to analytically estimate the uncertainty in a CFD model when experimental data are unavailable.
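One common way to form such a t-based interval from a small set of perturbed CFD runs (here, for the mean heat transfer coefficient) is sketched below. The sample values and the choice of a 95% interval are illustrative assumptions, not results from the paper.

```python
import numpy as np
from scipy import stats

# Heat transfer coefficients [W m^-2 K^-1] from N CFD runs in which each input
# (inlet velocity, temperature, mesh, model constants, ...) was perturbed
# within its assumed tolerance. Values are illustrative only.
h = np.array([48.2, 51.7, 47.9, 50.3, 49.6, 52.1, 48.8])

n = len(h)
mean = h.mean()
s = h.std(ddof=1)

# 95% uncertainty half-width for the mean using the Student-t distribution
t = stats.t.ppf(0.975, df=n - 1)
half_width = t * s / np.sqrt(n)
print(f"h = {mean:.1f} +/- {half_width:.1f} W m^-2 K^-1 (95%)")
```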
NASA Astrophysics Data System (ADS)
Li, K. Betty; Goovaerts, Pierre; Abriola, Linda M.
2007-06-01
Contaminant mass discharge across a control plane downstream of a dense nonaqueous phase liquid (DNAPL) source zone has great potential to serve as a metric for the assessment of the effectiveness of source zone treatment technologies and for the development of risk-based source-plume remediation strategies. However, too often the uncertainty of mass discharge estimated in the field is not accounted for in the analysis. In this paper, a geostatistical approach is proposed to estimate mass discharge and to quantify its associated uncertainty using multilevel transect measurements of contaminant concentration (C) and hydraulic conductivity (K). The approach adapts the p-field simulation algorithm to propagate and upscale the uncertainty of mass discharge from the local uncertainty models of C and K. Application of this methodology to numerically simulated transects shows that, with a regular sampling pattern, geostatistics can provide an accurate model of uncertainty for the transects that are associated with low levels of source mass removal (i.e., transects that have a large percentage of contaminated area). For high levels of mass removal (i.e., transects with a few hot spots and large areas of near-zero concentration), a total sampling area equivalent to 6-7% of the transect is required to achieve accurate uncertainty modeling. A comparison of the results for different measurement supports indicates that samples taken with longer screen lengths may lead to less accurate models of mass discharge uncertainty. The quantification of mass discharge uncertainty, in the form of a probability distribution, will facilitate risk assessment associated with various remediation strategies.
Missing Link: Bayesian detection and measurement of intermediate-mass black-hole binaries
NASA Astrophysics Data System (ADS)
Graff, Philip B.; Buonanno, Alessandra; Sathyaprakash, B. S.
2015-07-01
We perform Bayesian analysis of gravitational-wave signals from nonspinning, intermediate-mass black-hole binaries (IMBHBs) with observed total mass, M_obs, from 50 M⊙ to 500 M⊙ and mass ratio 1-4 using advanced LIGO and Virgo detectors. We employ inspiral-merger-ringdown waveform models based on the effective-one-body formalism and include subleading modes of radiation beyond the leading (2,2) mode. The presence of subleading modes increases signal power for inclined binaries and allows for improved accuracy and precision in measurements of the masses as well as breaking of degeneracies in distance, orientation and polarization. For low total masses, M_obs ≲ 50 M⊙, for which the inspiral signal dominates, the observed chirp mass ℳ_obs = M_obs η^(3/5) (η being the symmetric mass ratio) is better measured. In contrast, as increasing power comes from merger and ringdown, we find that the total mass M_obs has better relative precision than ℳ_obs. Indeed, at high M_obs (≥300 M⊙), the signal resembles a burst and the measurement thus extracts the dominant frequency of the signal that depends on M_obs. Depending on the binary's inclination, at signal-to-noise ratio (SNR) of 12, uncertainties in M_obs can be as large as ~20-25% while uncertainties in ℳ_obs are ~50-60% in binaries with unequal masses (those numbers become ~17% vs. ~22% in more symmetric mass-ratio binaries). Although large, those uncertainties in M_obs will establish the existence of IMBHs. We find that effective-one-body waveforms with subleading modes are essential to confirm a signal's presence in the data, with calculated Bayesian evidences yielding a false alarm probability below 10^-5 for SNR ≳ 9 in Gaussian noise. Our results show that gravitational-wave observations can offer a unique tool to observe and understand the formation, evolution and demographics of IMBHs, which are difficult to observe in the electromagnetic window.
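For clarity, the mass combinations referred to above follow the standard textbook definitions in terms of the component masses m_1 and m_2 (included here as a reminder, not as material from the paper):

```latex
\eta = \frac{m_1 m_2}{(m_1 + m_2)^2}, \qquad
M = m_1 + m_2, \qquad
\mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}} = M\,\eta^{3/5}
```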
Evaluation of Uncertainty in Bedload Transport Estimates in a Southern Appalachian Stream
NASA Astrophysics Data System (ADS)
Schwartz, J. S.
2016-12-01
Capacity estimates of bed-material transport rates are generally derived using empirical formulae as a function of bed material gradation and composition, and hydraulic shear stress. Various field techniques may be used to sample and characterize bed material gradation; some techniques assume the existing bar material is representative of that in transport. Other methods use Helley-Smith samplers, pit traps, and net traps. Very few large, complete cross-section pit traps fully instrumented to collect continuous bedload transport have been constructed, and none in the eastern United States to our knowledge. A fully-instrumented bedload collection station was constructed on Little Turkey Creek (LTC) in Farragut, Tennessee. The aim of the research was to characterize bed material transport during stormflows for a southern Appalachian stream in the Ridge and Valley Province. Bedload transport data from LTC were compared with classic datasets including Oak Creek (Oregon), East Fork River (Wyoming), and Clearwater and Snake rivers (Idaho). In addition, data were evaluated to assess the potential accuracy of both calibrated and uncalibrated bedload transport models using bedload transport data from LTC. Uncalibrated models were assessed with regard to their estimated range of uncertainty according to Monte Carlo uncertainty analyses. Models calibrated using reference shear values determined according to station measurements are evaluated in the same manner. Finally, models calibrated using the small scale, short-term, low rate bedload sampling techniques promoted in the literature for the spreadsheet-based Bedload Assessment in Gravel-bedded Streams (BAGS) software for determining the reference shear stress are compared to results of both uncalibrated models and those calibrated using data from the bedload station. This research supports design and construction of dynamically stable alluvial stream restoration projects where stream channels are largely dependent on reach-scale hydraulic geometry that provides a long-term balance between bed-material sediment supply and transport capacity.
Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan
2016-04-01
Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.
Parton shower and NLO-matching uncertainties in Higgs boson pair production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Stephen; Kuttimalai, Silvan
2018-02-28
We perform a detailed study of NLO parton shower matching uncertainties in Higgs boson pair production through gluon fusion at the LHC based on a generic and process independent implementation of NLO subtraction and parton shower matching schemes for loop-induced processes in the Sherpa event generator. We take into account the full top-quark mass dependence in the two-loop virtual corrections and compare the results to an effective theory approximation. In the full calculation, our findings suggest large parton shower matching uncertainties that are absent in the effective theory approximation. Here, we observe large uncertainties even in regions of phase space where fixed-order calculations are theoretically well motivated and parton shower effects expected to be small. We compare our results to NLO matched parton shower simulations and analytic resummation results that are available in the literature.
How predictable is the timing of a summer ice-free Arctic?
NASA Astrophysics Data System (ADS)
Jahn, Alexandra; Kay, Jennifer E.; Holland, Marika M.; Hall, David M.
2016-09-01
Climate model simulations give a large range of over 100 years for predictions of when the Arctic could first become ice free in the summer, and many studies have attempted to narrow this uncertainty range. However, given the chaotic nature of the climate system, what amount of spread in the prediction of an ice-free summer Arctic is inevitable? Based on results from large ensemble simulations with the Community Earth System Model, we show that internal variability alone leads to a prediction uncertainty of about two decades, while scenario uncertainty between the strong (Representative Concentration Pathway (RCP) 8.5) and medium (RCP4.5) forcing scenarios adds at least another 5 years. Common metrics of the past and present mean sea ice state (such as ice extent, volume, and thickness) as well as global mean temperatures do not allow a reduction of the prediction uncertainty from internal variability.
Prioritizing Chemicals and Data Requirements for Screening-Level Exposure and Risk Assessment
Brown, Trevor N.; Wania, Frank; Breivik, Knut; McLachlan, Michael S.
2012-01-01
Background: Scientists and regulatory agencies strive to identify chemicals that may cause harmful effects to humans and the environment; however, prioritization is challenging because of the large number of chemicals requiring evaluation and limited data and resources. Objectives: We aimed to prioritize chemicals for exposure and exposure potential and obtain a quantitative perspective on research needs to better address uncertainty in screening assessments. Methods: We used a multimedia mass balance model to prioritize > 12,000 organic chemicals using four far-field human exposure metrics. The propagation of variance (uncertainty) in key chemical information used as model input for calculating exposure metrics was quantified. Results: Modeled human concentrations and intake rates span approximately 17 and 15 orders of magnitude, respectively. Estimates of exposure potential using human concentrations and a unit emission rate span approximately 13 orders of magnitude, and intake fractions span 7 orders of magnitude. The actual chemical emission rate contributes the greatest variance (uncertainty) in exposure estimates. The human biotransformation half-life is the second greatest source of uncertainty in estimated concentrations. In general, biotransformation and biodegradation half-lives are greater sources of uncertainty in modeled exposure and exposure potential than chemical partition coefficients. Conclusions: Mechanistic exposure modeling is suitable for screening and prioritizing large numbers of chemicals. By including uncertainty analysis and uncertainty in chemical information in the exposure estimates, these methods can help identify and address the important sources of uncertainty in human exposure and risk assessment in a systematic manner. PMID:23008278
Egger, C; Maurer, M
2015-04-15
Urban drainage design relying on observed precipitation series neglects the uncertainties associated with current and indeed future climate variability. Urban drainage design is further affected by the large stochastic variability of precipitation extremes and sampling errors arising from the short observation periods of extreme precipitation. Stochastic downscaling addresses anthropogenic climate impact by allowing relevant precipitation characteristics to be derived from local observations and an ensemble of climate models. This multi-climate model approach seeks to reflect the uncertainties in the data due to structural errors of the climate models. An ensemble of outcomes from stochastic downscaling allows for addressing the sampling uncertainty. These uncertainties are clearly reflected in the precipitation-runoff predictions of three urban drainage systems. They were mostly due to the sampling uncertainty. The contribution of climate model uncertainty was found to be of minor importance. Under the applied greenhouse gas emission scenario (A1B) and within the period 2036-2065, the potential for urban flooding in our Swiss case study is slightly reduced on average compared to the reference period 1981-2010. Scenario planning was applied to consider urban development associated with future socio-economic factors affecting urban drainage. The impact of scenario uncertainty was to a large extent found to be case-specific, thus emphasizing the need for scenario planning in every individual case. The results represent a valuable basis for discussions of new drainage design standards aiming specifically to include considerations of uncertainty. Copyright © 2015 Elsevier Ltd. All rights reserved.
[The metrology of uncertainty: a study of vital statistics from Chile and Brazil].
Carvajal, Yuri; Kottow, Miguel
2012-11-01
This paper addresses the issue of uncertainty in the measurements used in public health analysis and decision-making. The Shannon-Wiener entropy measure was adapted to express the uncertainty contained in counting causes of death in official vital statistics from Chile. Based on the findings, the authors conclude that metrological requirements in public health are as important as the measurements themselves. The study also considers and argues for the existence of uncertainty associated with the statistics' performative properties, both by the way the data are structured as a sort of syntax of reality and by exclusion of what remains beyond the quantitative modeling used in each case. Following the legacy of pragmatic thinking and using conceptual tools from the sociology of translation, the authors emphasize that by taking uncertainty into account, public health can contribute to a discussion on the relationship between technology, democracy, and formation of a participatory public.
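As an elementary illustration of the entropy measure discussed (not the authors' Chilean or Brazilian data), the Shannon entropy of a cause-of-death distribution can be computed as follows; the category counts are invented placeholders.

```python
import math
from collections import Counter

# Hypothetical cause-of-death counts for one registry year (illustrative only)
counts = Counter({"circulatory": 27000, "neoplasms": 24000, "respiratory": 9000,
                  "external": 8000, "ill-defined": 5000, "other": 27000})

total = sum(counts.values())
probs = [c / total for c in counts.values()]

# Shannon entropy in bits: higher values mean the cause-of-death distribution
# is spread over more categories (more uncertainty per registered death)
H = -sum(p * math.log2(p) for p in probs)
H_max = math.log2(len(probs))
print(f"H = {H:.2f} bits (maximum {H_max:.2f} bits)")
```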
On entropic uncertainty relations in the presence of a minimal length
NASA Astrophysics Data System (ADS)
Rastegin, Alexey E.
2017-07-01
Entropic uncertainty relations for the position and momentum within the generalized uncertainty principle are examined. Studies of this principle are motivated by the existence of a minimal observable length. Then the position and momentum operators satisfy the modified commutation relation, for which more than one algebraic representation is known. One of them is described by auxiliary momentum so that the momentum and coordinate wave functions are connected by the Fourier transform. However, the probability density functions of the physically true and auxiliary momenta are different. As the corresponding entropies differ, known entropic uncertainty relations are changed. Using differential Shannon entropies, we give a state-dependent formulation with correction term. State-independent uncertainty relations are obtained in terms of the Rényi entropies and the Tsallis entropies with binning. Such relations allow one to take into account a finiteness of measurement resolution.
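For orientation, a commonly used form of the minimal-length deformation reads as follows (standard relations quoted for context; the paper's state-dependent and binned formulations go beyond these):

```latex
[\hat{X},\hat{P}] = i\hbar\bigl(1 + \beta\hat{P}^{2}\bigr), \qquad
\Delta X\,\Delta P \ \ge\ \frac{\hbar}{2}\bigl(1 + \beta(\Delta P)^{2}\bigr)
\ \ (\text{for } \langle\hat{P}\rangle = 0), \qquad
(\Delta X)_{\min} = \hbar\sqrt{\beta}
```

In the undeformed limit β → 0, the differential Shannon entropies of position and momentum satisfy the familiar benchmark h(x) + h(p) ≥ ln(π e ħ), which the minimal-length corrections then modify.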
NASA Astrophysics Data System (ADS)
Zhang, X.; Lee, X.; Griffis, T. J.; Baker, J. M.
2014-12-01
Although agriculture accounts for about 80% of the global anthropogenic nitrous oxide (N2O) emissions, large uncertainties exist in regional inventories of N2O emissions from agriculture. The uncertainties mainly include poorly quantified plant flux, large heterogeneity of direct N2O emissions from cropland, and underestimated N2O lost through leaching and runoff. To evaluate these uncertainties we conducted observations on three contrasting scales in the Midwest U.S., an agriculture-dominated region (Zhang et al., 2014a). Observations at the plant, ecosystem, and regional scales include: 1) N2O flux measurements from the aboveground section of corn and soybean plants using a newly designed plant chamber; 2) N2O flux-gradient measurements in a soybean-corn rotation field; and 3) N2O concentration measurements at the 3 m and 200 m levels on a communication tower (KCMP tower, 44°41'19''N, 93°4'22''W) that were used to estimate regional N2O fluxes with boundary layer methods (Zhang et al., 2014b). With these observations we evaluated the uncertainties in two frequently used N2O inventories: EDGAR42 (Emission Database for Global Atmospheric Research, release version 4.2) and a national GHG inventory (U.S. EPA, 2014). The results indicate that the EDGAR42 and EPA inventories underestimated N2O emissions for the region around the KCMP tower by factors of at least three and two, respectively. The underestimation is not likely caused by neglecting N2O flux from crops, since N2O fluxes from unfertilized soybean and fertilized corn plants were about one order of magnitude lower than N2O emissions from the soil-plant ecosystem. The direct N2O emissions from cropland accounted for less than 20% of the regional flux, suggesting a significant influence by other sources and indirect emissions in the regional N2O budget. References: U.S. EPA (2014) Inventory of U.S. Greenhouse Gas Emissions and Sinks: 1990-2012, 529 pp., Washington, D.C. X Zhang, X Lee, TJ Griffis, AE Andrews, JM Baker, MD Erickson, W Xiao, N Hu (2014a) Quantifying nitrous oxide fluxes on multiple spatial scales in the Upper Midwest, USA, Int J Biometeorol. X Zhang, X Lee, TJ Griffis, JM Baker, W Xiao (2014b) Estimating greenhouse gas fluxes from an agriculture-dominated landscape using multiple planetary boundary layer methods, Atmos Chem Phys Discuss.
Seismotectonic framework of the 2010 February 27 Mw 8.8 Maule, Chile earthquake sequence
Hayes, Gavin P.; Bergman, Eric; Johnson, Kendra J.; Benz, Harley M.; Brown, Lucy; Meltzer, Anne S.
2013-01-01
After the 2010 Mw 8.8 Maule earthquake, an international collaboration involving teams and instruments from Chile, the US, the UK, France and Germany established the International Maule Aftershock Deployment temporary network over the source region of the event to facilitate detailed, open-access studies of the aftershock sequence. Using data from the first 9-months of this deployment, we have analyzed the detailed spatial distribution of over 2500 well-recorded aftershocks. All earthquakes have been relocated using a hypocentral decomposition algorithm to study the details of and uncertainties in both their relative and absolute locations. We have computed regional moment tensor solutions for the largest of these events to produce a catalogue of 465 mechanisms, and have used all of these data to study the spatial distribution of the aftershock sequence with respect to the Chilean megathrust. We refine models of co-seismic slip distribution of the Maule earthquake, and show how small changes in fault geometries assumed in teleseismic finite fault modelling significantly improve fits to regional GPS data, implying that the accuracy of rapid teleseismic fault models can be substantially improved by consideration of existing fault geometry model databases. We interpret all of these data in an integrated seismotectonic framework for the Maule earthquake rupture and its aftershock sequence, and discuss the relationships between co-seismic rupture and aftershock distributions. While the majority of aftershocks are interplate thrust events located away from regions of maximum co-seismic slip, interesting clusters of aftershocks are identified in the lower plate at both ends of the main shock rupture, implying internal deformation of the slab in response to large slip on the plate boundary interface. We also perform Coulomb stress transfer calculations to compare aftershock locations and mechanisms to static stress changes following the Maule rupture. Without the incorporation of uncertainties in earthquake locations, just 55 per cent of aftershock nodal planes align with faults promoted towards failure by co-seismic slip. When epicentral uncertainties are considered (on the order of just ±2–3 km), 90 per cent of aftershocks are consistent with occurring along faults demonstrating positive stress transfer. These results imply large sensitivities of Coulomb stress transfer calculations to uncertainties in both earthquake locations and models of slip distributions, particularly when applied to aftershocks close to a heterogeneous fault rupture; such uncertainties should therefore be considered in similar studies used to argue for or against models of static stress triggering.
Status Update on the GPM Ground Validation Iowa Flood Studies (IFloodS) Field Experiment
NASA Astrophysics Data System (ADS)
Petersen, Walt; Krajewski, Witold
2013-04-01
The overarching objective of integrated hydrologic ground validation activities supporting the Global Precipitation Measurement Mission (GPM) is to provide better understanding of the strengths and limitations of the satellite products, in the context of hydrologic applications. To this end, the GPM Ground Validation (GV) program is conducting the first of several hydrology-oriented field efforts: the Iowa Flood Studies (IFloodS) experiment. IFloodS will be conducted in the central to northeastern part of Iowa in the Midwestern United States during the months of April-June, 2013. Specific science objectives and related goals for the IFloodS experiment can be summarized as follows: 1. Quantify the physical characteristics and space/time variability of rain (rates, DSD, process/"regime") and map to satellite rainfall retrieval uncertainty. 2. Assess satellite rainfall retrieval uncertainties at instantaneous to daily time scales and evaluate the propagation/impact of uncertainty in flood prediction. 3. Assess hydrologic predictive skill as a function of space/time scales, basin morphology, and land use/cover. 4. Discern the relative roles of rainfall quantities such as rate and accumulation as compared to other factors (e.g. transport of water in the drainage network) in flood genesis. 5. Refine approaches to the "integrated hydrologic GV" concept based on IFloodS experiences and apply them to future GPM Integrated GV field efforts. These objectives will be achieved via the deployment of the NASA NPOL S-band and D3R Ka/Ku-band dual-polarimetric radars, University of Iowa X-band dual-polarimetric radars, a large network of paired rain gauge platforms with attendant soil moisture and temperature probes, a large network of both 2D Video and Parsivel disdrometers, and USDA-ARS gauge and soil-moisture measurements (in collaboration with the NASA SMAP mission). The aforementioned measurements will be used to complement existing operational WSR-88D S-band polarimetric radar measurements, USGS streamflow, and Iowa Flood Center stream monitoring measurements. Coincident satellite datasets will be archived from current microwave imaging and sounding radiometers flying on NOAA, DMSP, NASA, and EU (METOP) low-earth orbiters, and rapid-scan IR datasets collected from geostationary (GOES) platforms. Collectively the observational assets will provide a means to create high quality (time and space sampling) ground "reference" rainfall and stream flow datasets. The ground reference radar and rainfall datasets will provide a means to assess uncertainties in both satellite algorithms (physics) and products. Subsequently, the impact of uncertainties in the satellite products can be evaluated in coupled weather, land-surface and distributed hydrologic modeling frameworks as related to flood prediction.
A crustal seismic velocity model for the UK, Ireland and surrounding seas
Kelly, A.; England, R.W.; Maguire, Peter K.H.
2007-01-01
A regional model of the 3-D variation in seismic P-wave velocity structure in the crust of NW Europe has been compiled from wide-angle reflection/refraction profiles. Along each 2-D profile a velocity-depth function has been digitised at 5 km intervals. These 1-D velocity functions were mapped into three dimensions using ordinary kriging with weights determined to minimise the difference between digitised and interpolated values. An analysis of variograms of the digitised data suggested a radial isotropic weighting scheme was most appropriate. Horizontal dimensions of the model cells are optimised at 40 × 40 km and the vertical dimension at 1 km. The resulting model provides a higher resolution image of the 3-D variation in seismic velocity structure of the UK, Ireland and surrounding areas than existing models. The construction of the model through kriging allows the uncertainty in the velocity structure to be assessed. This uncertainty indicates the high density of data required to confidently interpolate the crustal velocity structure, and shows that for this region the velocity is poorly constrained for large areas away from the input data.
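A minimal ordinary-kriging sketch in the spirit of this abstract follows: point velocity values are interpolated to a grid node and the kriging variance provides the uncertainty measure the authors refer to. The exponential variogram, its parameters, and the toy data are illustrative assumptions, not the authors' fitted model.

```python
# Ordinary kriging of scattered velocity picks, returning estimate and kriging variance.
import numpy as np

def variogram(h, sill=1.0, corr_len=100.0, nugget=0.0):
    # Exponential variogram model (assumed form, not the paper's).
    return nugget + sill * (1.0 - np.exp(-h / corr_len))

def ordinary_krige(xy, z, x0, **vparams):
    """Return (estimate, kriging variance) at location x0 from data (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # data-data distances
    d0 = np.linalg.norm(xy - x0, axis=-1)                          # data-target distances
    A = np.ones((n + 1, n + 1)); A[:n, :n] = variogram(d, **vparams); A[-1, -1] = 0.0
    b = np.ones(n + 1); b[:n] = variogram(d0, **vparams)
    sol = np.linalg.solve(A, b)                                    # weights + Lagrange multiplier
    w, lam = sol[:n], sol[-1]
    return w @ z, w @ b[:n] + lam                                  # estimate, kriging variance

# Toy example: velocity picks (km/s) at 2-D positions (km), estimated on a 40 km grid node.
xy = np.array([[0., 0.], [50., 0.], [0., 60.], [80., 80.]])
z = np.array([6.0, 6.2, 6.1, 6.4])
print(ordinary_krige(xy, z, np.array([40., 40.]), sill=0.05, corr_len=100.0))
```

Far from the input profiles the kriging variance grows towards the sill, which is exactly the "poorly constrained away from the input data" behaviour noted in the abstract.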
Bayesian modelling of uncertainties of Monte Carlo radiative-transfer simulations
NASA Astrophysics Data System (ADS)
Beaujean, Frederik; Eggers, Hans C.; Kerzendorf, Wolfgang E.
2018-04-01
One of the big challenges in astrophysics is the comparison of complex simulations to observations. As many codes do not directly generate observables (e.g. hydrodynamic simulations), the last step in the modelling process is often a radiative-transfer treatment. For this step, the community relies increasingly on Monte Carlo radiative transfer due to the ease of implementation and scalability with computing power. We show how to estimate the statistical uncertainty given the output of just a single radiative-transfer simulation in which the number of photon packets follows a Poisson distribution and the weight (e.g. energy or luminosity) of a single packet may follow an arbitrary distribution. Our Bayesian approach produces a posterior distribution that is valid for any number of packets in a bin, even zero packets, and is easy to implement in practice. Our analytic results for a large number of packets show that we generalise existing methods that are valid only in limiting cases. The statistical problem considered here appears in identical form in a wide range of Monte Carlo simulations including particle physics and importance sampling. Our approach is particularly powerful in extracting information when the available data are sparse or quantities are small.
Bayesian modelling of uncertainties of Monte Carlo radiative-transfer simulations
NASA Astrophysics Data System (ADS)
Beaujean, Frederik; Eggers, Hans C.; Kerzendorf, Wolfgang E.
2018-07-01
One of the big challenges in astrophysics is the comparison of complex simulations to observations. As many codes do not directly generate observables (e.g. hydrodynamic simulations), the last step in the modelling process is often a radiative-transfer treatment. For this step, the community relies increasingly on Monte Carlo radiative transfer due to the ease of implementation and scalability with computing power. We consider simulations in which the number of photon packets is Poisson distributed, while the weight assigned to a single photon packet follows any distribution of choice. We show how to estimate the statistical uncertainty of the sum of weights in each bin from the output of a single radiative-transfer simulation. Our Bayesian approach produces a posterior distribution that is valid for any number of packets in a bin, even zero packets, and is easy to implement in practice. Our analytic results for a large number of packets show that we generalize existing methods that are valid only in limiting cases. The statistical problem considered here appears in identical form in a wide range of Monte Carlo simulations including particle physics and importance sampling. Our approach is particularly powerful in extracting information when the available data are sparse or quantities are small.
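A crude Monte Carlo stand-in for the idea described above is sketched below. It is not the authors' analytic posterior: the packet count in a bin is treated as Poisson with an unknown rate under an assumed conjugate Gamma prior, and the uncertain mean packet weight is approximated by bootstrap resampling of the observed weights. Note that the bootstrap shortcut degenerates for empty bins, which is precisely the regime where the paper's analytic treatment is needed.

```python
# Sketch: posterior draws of the expected summed packet weight (e.g. luminosity) in one bin.
import numpy as np

rng = np.random.default_rng(0)

def bin_luminosity_posterior(weights, alpha0=1.0, beta0=1e-3, ndraw=10000):
    n = len(weights)
    lam = rng.gamma(alpha0 + n, 1.0 / (beta0 + 1.0), size=ndraw)   # Poisson-rate posterior
    if n == 0:
        return lam * 0.0          # degenerate here; the paper's method stays informative
    idx = rng.integers(0, n, size=(ndraw, n))
    mean_w = np.asarray(weights)[idx].mean(axis=1)                  # bootstrap mean weight
    return lam * mean_w

draws = bin_luminosity_posterior(rng.lognormal(0.0, 0.5, size=25))
print(np.percentile(draws, [16, 50, 84]))
```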
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Lin, Guang; Li, Weixuan; Wu, Laosheng; Zeng, Lingzao
2018-03-01
The ensemble smoother (ES) has been widely used in inverse modeling of hydrologic systems. However, for problems where the distribution of model parameters is multimodal, using ES directly would be problematic. One popular solution is to use a clustering algorithm to identify each mode and update the clusters with ES separately. However, this strategy may not be very efficient when the dimension of the parameter space is high or the number of modes is large. Alternatively, we propose in this paper a very simple and efficient algorithm, i.e., the iterative local updating ensemble smoother (ILUES), to explore multimodal distributions of model parameters in nonlinear hydrologic systems. The ILUES algorithm works by updating the local ensemble of each sample with ES to explore possible multimodal distributions. To achieve satisfactory data matches in nonlinear problems, we adopt an iterative form of ES to assimilate the measurements multiple times. Numerical cases involving nonlinearity and multimodality are tested to illustrate the performance of the proposed method. It is shown that, overall, the ILUES algorithm can quantify the parametric uncertainties of complex hydrologic models well, regardless of whether the parameter distribution is multimodal.
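A minimal sketch of the Kalman-type ES update that ILUES applies to each local ensemble is given below; the local-ensemble selection and the iterative scheme that distinguish ILUES are not reproduced, and the toy model and noise levels are assumptions.

```python
# Plain ensemble-smoother update (one data-assimilation step).
import numpy as np

def es_update(X, Y, d_obs, R, rng):
    """X: (n_par, n_ens) parameters; Y: (n_obs, n_ens) simulated data;
    d_obs: (n_obs,) observations; R: (n_obs, n_obs) observation-error covariance."""
    Xa = X - X.mean(axis=1, keepdims=True)
    Ya = Y - Y.mean(axis=1, keepdims=True)
    n_ens = X.shape[1]
    Cxy = Xa @ Ya.T / (n_ens - 1)                       # parameter-data covariance
    Cyy = Ya @ Ya.T / (n_ens - 1)                       # data covariance
    K = Cxy @ np.linalg.inv(Cyy + R)                    # Kalman-type gain
    D = d_obs[:, None] + rng.multivariate_normal(np.zeros(len(d_obs)), R, n_ens).T
    return X + K @ (D - Y)                              # updated parameter ensemble

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 200))                           # toy 3-parameter prior ensemble
Y = np.vstack([X[0] + X[1] ** 2, X[2]]) + 0.1 * rng.normal(size=(2, 200))
X_new = es_update(X, Y, d_obs=np.array([1.0, 0.5]), R=0.01 * np.eye(2), rng=rng)
print(X_new.mean(axis=1))
```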
NASA Astrophysics Data System (ADS)
Hughes, Anna; Gyllencreutz, Richard; Mangerud, Jan; Svendsen, John Inge
2017-04-01
Glacial geologists generate empirical reconstructions of former ice-sheet dynamics by combining evidence from the preserved record of glacial landforms (e.g. end moraines, lineations) and sediments with chronological evidence (numerical dates derived predominantly from radiocarbon, exposure and luminescence techniques). However, the geomorphological and sedimentological footprints and the chronological data are incomplete records in both space and time, and all have multiple types of uncertainty associated with them. To understand ice sheets' response to climate we need numerical models of ice-sheet dynamics based on physical principles. To test and/or constrain such models, empirical reconstructions of past ice sheets that capture and acknowledge all uncertainties are required. In 2005 we started a project (Database of the Eurasian Deglaciation, DATED) to produce an empirical reconstruction of the evolution of the last Eurasian ice sheets (including the British-Irish, Scandinavian and Svalbard-Barents-Kara Seas ice sheets) that is fully documented, specified in time, and includes uncertainty estimates. Over 5000 dates relevant to constraining ice build-up and retreat were assessed for reliability and used together with published ice-sheet margin positions based on glacial geomorphology to reconstruct time-slice maps of the ice sheets' extent. The DATED maps show synchronous ice margins with maximum-minimum uncertainty bounds for every 1000 years between 25 and 10 kyr ago. In the first version of results (DATED-1; Hughes et al. 2016) all uncertainties (both quantitative and qualitative, e.g. precision and accuracy of numerical dates, correlation of moraines, stratigraphic interpretations) were combined based on our best glaciological-geological assessment and expressed in terms of distance as a 'fuzzy' margin. Large uncertainties (>100 km) exist, predominantly across marine sectors and other locations where there are spatial gaps in the dating record (e.g. the timing of coalescence and separation of the Scandinavian and Svalbard-Barents-Kara ice sheets) but also in well-studied areas due to conflicting yet apparently equally robust data. In the four years since the DATED-1 census (1 January 2013), the volume of new information (from both dates and mapped glacial geomorphology) has grown significantly (~1000 new dates). Here, we present work towards the updated version of results, DATED-2, that attempts to further reduce and explicitly report all uncertainties inherent in ice sheet reconstructions.
Reference: Hughes, A. L. C., Gyllencreutz, R., Lohne, Ø. S., Mangerud, J., Svendsen, J. I. (2016) The last Eurasian ice sheets - a chronological database and time-slice reconstruction, DATED-1. Boreas, 45, 1-45. doi:10.1111/bor.12142
Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity
Harbin Li; Steven G. McNulty
2007-01-01
Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL...
Model structures amplify uncertainty in predicted soil carbon responses to climate change.
Shi, Zheng; Crowell, Sean; Luo, Yiqi; Moore, Berrien
2018-06-04
Large model uncertainty in projected future soil carbon (C) dynamics has been well documented. However, our understanding of the sources of this uncertainty is limited. Here we quantify the uncertainties arising from model parameters, structures and their interactions, and how those uncertainties propagate through different models to projections of future soil carbon stocks. Both the vertically resolved model and the microbially explicit model project much greater uncertainty in response to climate change than the conventional soil C model, with both positive and negative C-climate feedbacks, whereas the conventional model consistently predicts a positive soil C-climate feedback. Our findings suggest that diverse model structures are necessary to increase confidence in soil C projections. However, the larger uncertainty in the complex models also suggests that we need to strike a balance between model complexity and the need to include diverse model structures in order to forecast soil C dynamics with high confidence and low uncertainty.
Transfer Standard Uncertainty Can Cause Inconclusive Inter-Laboratory Comparisons
Wright, John; Toman, Blaza; Mickan, Bodo; Wübbeler, Gerd; Bodnar, Olha; Elster, Clemens
2016-01-01
Inter-laboratory comparisons use the best available transfer standards to check the participants’ uncertainty analyses, identify underestimated uncertainty claims or unknown measurement biases, and improve the global measurement system. For some measurands, instability of the transfer standard can lead to an inconclusive comparison result. If the transfer standard uncertainty is large relative to a participating laboratory’s uncertainty, the commonly used standardized degree of equivalence ≤ 1 criterion does not always correctly assess whether a participant is working within their uncertainty claims. We show comparison results that demonstrate this issue and propose several criteria for assessing a comparison result as passing, failing, or inconclusive. We investigate the behavior of the standardized degree of equivalence and alternative comparison measures for a range of values of the transfer standard uncertainty relative to the individual laboratory uncertainty values. The proposed alternative criteria successfully discerned between passing, failing, and inconclusive comparison results for the cases we examined. PMID:28090123
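A minimal sketch of the standardized degree of equivalence criterion discussed above is shown below. Exact definitions of the reference value and its uncertainty vary between comparisons; expanded (k = 2) uncertainties and the numerical values are assumed here for illustration.

```python
# Standardized degree of equivalence E_n; E_n <= 1 is the commonly used pass criterion.
import math

def degree_of_equivalence(x_lab, U_lab, x_ref, U_ref):
    """E_n = |x_lab - x_ref| / sqrt(U_lab^2 + U_ref^2)."""
    return abs(x_lab - x_ref) / math.sqrt(U_lab**2 + U_ref**2)

# If the transfer standard's uncertainty U_ref dominates, E_n can stay below 1 even for a
# biased laboratory -- the inconclusive-comparison problem raised in the paper.
print(degree_of_equivalence(1.006, 0.002, 1.000, 0.008))  # ~0.73 despite a 0.6 % bias
```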
Holistic uncertainty analysis in river basin modeling for climate vulnerability assessment
NASA Astrophysics Data System (ADS)
Taner, M. U.; Wi, S.; Brown, C.
2017-12-01
The challenges posed by an uncertain future climate are a prominent concern for water resources managers. A number of frameworks, such as scenario-based and vulnerability-based approaches, exist for assessing the impacts of climate-related uncertainty, including internal climate variability and anthropogenic climate change. While in many cases climate uncertainty may be dominant, other factors such as the future evolution of the river basin, hydrologic response and reservoir operations are potentially significant sources of uncertainty. While uncertainty associated with modeling hydrologic response has received attention, very little attention has focused on the range of uncertainty and possible effects of water resources infrastructure and management. This work presents a holistic framework that allows analysis of climate, hydrologic and water management uncertainty in water resources systems analysis with the aid of a water system model designed to integrate component models for hydrologic processes and water management activities. The uncertainties explored include those associated with climate variability and change, hydrologic model parameters, and water system operation rules. A Bayesian framework is used to quantify and model the uncertainties at each modeling step in an integrated fashion, including prior and likelihood information about model parameters. The framework is demonstrated in a case study for the St. Croix Basin located at the border of the United States and Canada.
Delineating parameter unidentifiabilities in complex models
NASA Astrophysics Data System (ADS)
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of NF-κB, uncovering unidentifiabilities.
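For contrast with the paper's multiscale construction (which is not reproduced here), the following sketch shows the purely local analysis the authors argue is insufficient on its own: the Fisher information matrix for Gaussian noise is eigen-decomposed, and near-zero eigenvalues flag locally unidentifiable ('sloppy') parameter combinations. The toy model and tolerance are assumptions.

```python
# Local sloppiness via the Fisher information matrix (FIM), J^T J / sigma^2 for Gaussian noise.
import numpy as np

def local_sloppiness(residual_jacobian, sigma=1.0, tol=1e-8):
    """Return FIM eigenvalues and the eigenvectors of the near-null directions."""
    fim = residual_jacobian.T @ residual_jacobian / sigma**2
    evals, evecs = np.linalg.eigh(fim)
    unidentifiable = [evecs[:, i] for i in range(len(evals)) if evals[i] < tol * evals[-1]]
    return evals, unidentifiable

# Toy model y = (a + b) * t: only the sum a + b is identifiable.
t_grid = np.linspace(0.0, 1.0, 20)
J = np.column_stack([t_grid, t_grid])        # d(residual)/d(a), d(residual)/d(b)
evals, bad_dirs = local_sloppiness(J)
print(evals)          # one eigenvalue is ~0
print(bad_dirs[0])    # ~[0.707, -0.707]: the difference a - b is unidentifiable
```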
NASA Astrophysics Data System (ADS)
Alameddine, Ibrahim; Karmakar, Subhankar; Qian, Song S.; Paerl, Hans W.; Reckhow, Kenneth H.
2013-10-01
The total maximum daily load program aims to monitor more than 40,000 standard violations in around 20,000 impaired water bodies across the United States. Given resource limitations, future monitoring efforts have to be hedged against the uncertainties in the monitored system, while taking into account existing knowledge. In that respect, we have developed a hierarchical spatiotemporal Bayesian model that can be used to optimize an existing monitoring network by retaining stations that provide the maximum amount of information, while identifying locations that would benefit from the addition of new stations. The model assumes the water quality parameters are adequately described by a joint matrix normal distribution. The adopted approach allows for a reduction in redundancies, while emphasizing information richness rather than data richness. The developed approach incorporates the concept of entropy to account for the associated uncertainties. Three different entropy-based criteria are adopted: total system entropy, chlorophyll-a standard violation entropy, and dissolved oxygen standard violation entropy. A multiple attribute decision making framework is adopted to integrate the competing design criteria and to generate a single optimal design. The approach is implemented on the water quality monitoring system of the Neuse River Estuary in North Carolina, USA. The model results indicate that the high priority monitoring areas identified by the total system entropy and the dissolved oxygen violation entropy criteria are largely coincident. The monitoring design based on the chlorophyll-a standard violation entropy proved to be less informative, given the low probabilities of violating the water quality standard in the estuary.
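A minimal sketch of entropy-based network screening in the spirit of this abstract follows; the matrix-normal spatiotemporal model, the violation entropies and the multi-attribute decision step are not reproduced. Stations are ranked by greedily maximising the joint Gaussian entropy of the retained subset, with an invented toy covariance.

```python
# Greedy selection of the most informative monitoring stations under a Gaussian model.
import numpy as np

def gaussian_entropy(cov):
    # H = 0.5 * log(det(2 * pi * e * Sigma)) for a multivariate normal.
    _, logdet = np.linalg.slogdet(2.0 * np.pi * np.e * cov)
    return 0.5 * logdet

def greedy_station_selection(cov, n_keep):
    """Return indices of the n_keep stations whose joint entropy is (greedily) largest."""
    selected, remaining = [], list(range(cov.shape[0]))
    for _ in range(n_keep):
        best = max(remaining,
                   key=lambda j: gaussian_entropy(cov[np.ix_(selected + [j], selected + [j])]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy covariance for 5 stations: stations 2, 3 and 4 are nearly redundant copies of each other.
rho = 0.98
cov = np.eye(5)
for i in (2, 3, 4):
    for j in (2, 3, 4):
        if i != j:
            cov[i, j] = rho
print(greedy_station_selection(cov, 3))   # keeps stations 0, 1 and one of the redundant trio
```

The same greedy logic illustrates the abstract's point about emphasizing information richness over data richness: redundant stations add little joint entropy and are the first candidates for removal.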
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huertas-Hernando, Daniel; Farahmand, Hossein; Holttinen, Hannele
2016-06-20
Hydro power is one of the most flexible sources of electricity production. Power systems with considerable amounts of flexible hydro power potentially offer easier integration of variable generation, e.g., wind and solar. However, there exist operational constraints to ensure mid-/long-term security of supply while keeping river flows and reservoir levels within permitted limits. In order to properly assess the effective available hydro power flexibility and its value for storage, a detailed assessment of hydro power is essential. Due to the inherent uncertainty of the weather-dependent hydrological cycle, regulation constraints on the hydro system, and uncertainty of internal load as well as variable generation (wind and solar), this assessment is complex. Hence, it requires proper modeling of all the underlying interactions between hydro power and the power system, with a large share of other variable renewables. A summary of existing experience of wind integration in hydro-dominated power systems clearly points to strict simulation methodologies. Recommendations include requirements for techno-economic models to correctly assess strategies for hydro power and pumped storage dispatch. These models are based not only on seasonal water inflow variations but also on variable generation, and all these are in time horizons from very short term up to multiple years, depending on the studied system. Another important recommendation is to include a geographically detailed description of hydro power systems, rivers' flows, and reservoirs as well as grid topology and congestion.
NASA Astrophysics Data System (ADS)
Scheingraber, Christoph; Käser, Martin; Allmann, Alexander
2017-04-01
Probabilistic seismic risk analysis (PSRA) is a well-established method for modelling loss from earthquake events. In the insurance industry, it is widely employed for probabilistic modelling of loss to a distributed portfolio. In this context, precise exposure locations are often unknown, which results in considerable loss uncertainty. The treatment of exposure uncertainty has already been identified as an area where PSRA would benefit from increased research attention. However, so far, epistemic location uncertainty has not been the focus of much research. We propose a new framework for efficient treatment of location uncertainty. To demonstrate the usefulness of this novel method, a large number of synthetic portfolios resembling real-world portfolios are systematically analyzed. We investigate the effect of portfolio characteristics such as value distribution, portfolio size, or the proportion of risk items with unknown coordinates on loss variability. Several sampling criteria to increase the computational efficiency of the framework are proposed and put into the wider context of well-established Monte Carlo variance reduction techniques. The performance of each of the proposed criteria is analyzed.
NASA Astrophysics Data System (ADS)
Bassam, S.; Ren, J.
2017-12-01
Predicting future water availability in watersheds is very important for proper water resources management, especially in semi-arid regions with scarce water resources. Over the past two decades, hydrological models have been considered powerful tools for predicting future hydrological conditions in watershed systems. Streamflow and evapotranspiration are two important components of watershed water balance estimation: the former is the most commonly used indicator of the overall water budget, and the latter is the second largest component of the water budget (the largest outflow from the system). One of the main concerns in watershed-scale hydrological modeling is the uncertainty associated with model predictions, which could arise from errors in model parameters and input meteorological data, or errors in the model representation of the physics of hydrological processes. Understanding and quantifying these uncertainties is vital to water resources managers for proper decision making based on model predictions. In this study, we evaluated the impacts of different climate change scenarios on future stream discharge and evapotranspiration, and their associated uncertainties, throughout a large semi-arid basin using a stochastically calibrated, physically based, semi-distributed hydrological model. The results of this study could provide valuable insights into applying hydrological models in large-scale watersheds, understanding the associated sensitivity and uncertainties in model parameters, and estimating the corresponding impacts on hydrological process variables of interest under different climate change scenarios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Shipeng; Wang, Minghuai; Ghan, Steven J.
Aerosol–cloud interactions continue to constitute a major source of uncertainty for the estimate of climate radiative forcing. The variation of aerosol indirect effects (AIE) in climate models is investigated across different dynamical regimes, determined by monthly mean 500 hPa vertical pressure velocity (ω500), lower-tropospheric stability (LTS) and large-scale surface precipitation rate derived from several global climate models (GCMs), with a focus on liquid water path (LWP) response to cloud condensation nuclei (CCN) concentrations. The LWP sensitivity to aerosol perturbation within dynamic regimes is found to exhibit a large spread among these GCMs. It is in regimes of strong large-scale ascent (ω500 < −25 hPa day−1) and low clouds (stratocumulus and trade wind cumulus) where the models differ most. Shortwave aerosol indirect forcing is also found to differ significantly among different regimes. Shortwave aerosol indirect forcing in ascending regimes is close to that in subsidence regimes, which indicates that regimes with strong large-scale ascent are as important as stratocumulus regimes in studying AIE. It is further shown that shortwave aerosol indirect forcing over regions with high monthly large-scale surface precipitation rate (> 0.1 mm day−1) contributes the most to the total aerosol indirect forcing (from 64 to nearly 100 %). Results show that the uncertainty in AIE is even larger within specific dynamical regimes compared to the uncertainty in its global mean values, pointing to the need to reduce the uncertainty in AIE in different dynamical regimes.
NASA Technical Reports Server (NTRS)
Anderson, Leif; Carter-Journet, Katrina; Box, Neil; DiFilippo, Denise; Harrington, Sean; Jackson, David; Lutomski, Michael
2012-01-01
This paper introduces an analytical approach, Probability and Confidence Trade-space (PACT), which can be used to assess uncertainty in International Space Station (ISS) hardware sparing necessary to extend the life of the vehicle. There are several key areas under consideration in this research. We investigate what sparing confidence targets may be reasonable to ensure vehicle survivability and the completion of science on the ISS. The results of the analysis will provide a methodological basis for reassessing vehicle subsystem confidence targets. An ongoing annual analysis currently compares the probability of existing spares exceeding the total expected unit demand of each Orbital Replacement Unit (ORU) in functional hierarchies approximating the vehicle subsystems. In cases where a functional hierarchy's availability does not meet its subsystem confidence target, the current sparing analysis further identifies which ORUs may require additional spares to extend the life of the ISS. The resulting probability is dependent upon hardware reliability estimates. However, the ISS hardware fleet carries considerable epistemic uncertainty (uncertainty in the knowledge of the true hardware failure rate), which does not currently factor into the annual sparing analysis. The existing confidence targets may therefore be conservative. This paper will also discuss how confidence targets may be relaxed based on the inclusion of epistemic uncertainty for each ORU. The paper will conclude with the strengths and limitations of implementing the analytical approach in sustaining the ISS through end of life, 2020 and beyond.
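A hedged sketch of the kind of spares-adequacy calculation described above is shown below; the numbers and distributions are illustrative, not ISS data, and this is not the PACT methodology itself. With a point-estimate failure rate, demand over the horizon is Poisson; folding a Gamma (epistemic) distribution over the rate into the calculation makes the marginal demand negative binomial, so the same spare count yields a different confidence.

```python
# Spare-adequacy confidence with and without epistemic uncertainty on the failure rate.
from scipy.stats import poisson, nbinom

rate = 0.4          # assumed mean failures per year for one ORU type (illustrative)
years = 10.0
spares = 7

# Aleatory only: P(demand <= spares) with a point-estimate rate.
p_point = poisson.cdf(spares, rate * years)

# Epistemic + aleatory: rate ~ Gamma(shape=a, rate=b) with mean a/b = 0.4,
# so demand over the horizon ~ NegBinomial(n=a, p=b/(b+years)).
a, b = 4.0, 10.0
p_epistemic = nbinom.cdf(spares, a, b / (b + years))

print(f"point-estimate confidence:  {p_point:.3f}")
print(f"with epistemic uncertainty: {p_epistemic:.3f}")
```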
Modeling for waste management associated with environmental-impact abatement under uncertainty.
Li, P; Li, Y P; Huang, G H; Zhang, J L
2015-04-01
Municipal solid waste (MSW) treatment can generate significant amounts of pollutants, and thus pose a risk to human health. In addition, various uncertainties exist in MSW management in the related costs, impact factors, and objectives, which can affect the optimization process and the decision schemes generated. In this study, a life cycle assessment-based interval-parameter programming (LCA-IPP) method is developed for MSW management associated with environmental-impact abatement under uncertainty. The LCA-IPP can effectively examine the environmental consequences based on a number of environmental impact categories (i.e., greenhouse gas equivalent, acid gas emissions, and respiratory inorganics), through analyzing each life cycle stage and/or major contributing process related to various MSW management activities. It can also tackle uncertainties in the related costs, impact factors, and objectives that are expressed as interval numbers. The LCA-IPP method is then applied to MSW management for the City of Beijing, the capital of China, where energy consumption and six environmental parameters [i.e., CO2, CO, CH4, NOx, SO2, and inhalable particles (PM10)] are used as a systematic tool to quantify environmental releases across the entire life cycle of waste collection, transportation, treatment, and disposal. Results associated with system cost, environmental impact, and the related policy implications are generated and analyzed. The results can help identify desirable alternatives for managing MSW flows and provide compromise schemes that jointly consider economic efficiency and environmental impact under uncertainty.
NASA Astrophysics Data System (ADS)
Siddique, Sami; Jaffray, David
2007-03-01
A central purpose of image-guidance is to assist the interventionalist with feedback of geometric performance in the direction of therapy delivery. Tradeoffs exist between accuracy, precision and the constraints imposed by parameters used in the generation of images. A framework that uses geometric performance as feedback to control these parameters can balance such tradeoffs in order to maintain the requisite localization precision for a given clinical procedure. We refer to this principle as Active Image-Guidance (AIG). This framework requires estimates of the uncertainty in the estimated location of the object of interest. In this study, a simple fiducial marker detected under X-ray fluoroscopy is considered and it is shown that a relation exists between the applied imaging dose and the uncertainty in localization for a given observer. A robust estimator of the location of a fiducial in the thorax during respiration under X-ray fluoroscopy is demonstrated using a particle-filter-based approach that outputs estimates of the location and the associated spatial uncertainty. This approach gives an RMSE of 1.3 mm, and the uncertainty estimates are found to be correlated with the error in the estimates. Furthermore, the particle filtering approach is employed to output location estimates and the associated uncertainty not only at instances of pulsed exposure but also between exposures. Such a system has applications in image-guided interventions (surgery, radiotherapy, interventional radiology) where there are latencies between the moment of imaging and the act of intervention.
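A minimal bootstrap particle filter for 1-D marker tracking is sketched below, to illustrate how such an estimator reports a location together with a spread (uncertainty) at every step. The imaging model, respiration dynamics and dose-dependent noise of the paper are not reproduced; all numbers are assumptions.

```python
# Bootstrap particle filter for a 1-D fiducial position with a random-walk motion model.
import numpy as np

rng = np.random.default_rng(2)
n_particles, n_steps, dt = 500, 60, 0.1
truth = 10.0 * np.sin(2 * np.pi * 0.25 * np.arange(n_steps) * dt)   # mm, toy respiration trace
obs = truth + rng.normal(0.0, 1.5, n_steps)                          # noisy fluoroscopy detections

particles = rng.normal(0.0, 5.0, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)
for z in obs:
    particles += rng.normal(0.0, 1.0, n_particles)                   # motion (prediction) step
    weights *= np.exp(-0.5 * ((z - particles) / 1.5) ** 2)           # Gaussian measurement likelihood
    weights /= weights.sum()
    est = np.sum(weights * particles)                                # location estimate
    unc = np.sqrt(np.sum(weights * (particles - est) ** 2))          # spatial uncertainty
    idx = rng.choice(n_particles, n_particles, p=weights)            # multinomial resampling
    particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

print(f"final estimate {est:.2f} mm +/- {unc:.2f} mm (truth {truth[-1]:.2f} mm)")
```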
Model parameter uncertainty analysis for an annual field-scale P loss model
NASA Astrophysics Data System (ADS)
Bolster, Carl H.; Vadas, Peter A.; Boykin, Debbie
2016-08-01
Phosphorus (P) fate and transport models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. Because all models are simplifications of complex systems, an inherent amount of uncertainty is associated with their predictions. It is therefore important that efforts be directed at identifying, quantifying, and communicating the different sources of model uncertainties. In this study, we conducted an uncertainty analysis with the Annual P Loss Estimator (APLE) model. Our analysis included calculating parameter uncertainties and confidence and prediction intervals for five internal regression equations in APLE. We also estimated uncertainties of the model input variables based on values reported in the literature. We then predicted P loss for a suite of fields under different management and climatic conditions while accounting for uncertainties in the model parameters and inputs, and compared the relative contributions of these two sources of uncertainty to the overall uncertainty associated with predictions of P loss. Both the overall magnitude of the prediction uncertainties and the relative contributions of the two sources of uncertainty varied depending on management practices and field characteristics. This was due to differences in the number of model input variables and the uncertainties in the regression equations associated with each P loss pathway. Inspection of the uncertainties in the five regression equations brought attention to a previously unrecognized limitation with the equation used to partition surface-applied fertilizer P between leaching and runoff losses. As a result, an alternate equation was identified that provided similar predictions with much less uncertainty. Our results demonstrate how a thorough uncertainty and model residual analysis can be used to identify limitations with a model. Such insight can then be used to guide future data collection and model development and evaluation efforts.
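The parameter-uncertainty piece of such an analysis can be illustrated with an ordinary least-squares fit and its confidence and prediction intervals at a new input, as sketched below. The regression, variables and values are hypothetical stand-ins, not the APLE equations.

```python
# OLS fit with 95 % confidence (mean response) and prediction (new observation) intervals.
import numpy as np
from scipy.stats import t

def ols_intervals(x, y, x_new, alpha=0.05):
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    s2 = np.sum((y - X @ beta) ** 2) / (n - 2)                 # residual variance
    cov_beta = s2 * np.linalg.inv(X.T @ X)                     # parameter covariance
    x0 = np.array([1.0, x_new])
    y0 = x0 @ beta
    se_mean = np.sqrt(x0 @ cov_beta @ x0)                      # uncertainty of the mean response
    se_pred = np.sqrt(s2 + x0 @ cov_beta @ x0)                 # adds residual scatter
    tcrit = t.ppf(1 - alpha / 2, n - 2)
    return y0, (y0 - tcrit * se_mean, y0 + tcrit * se_mean), (y0 - tcrit * se_pred, y0 + tcrit * se_pred)

rng = np.random.default_rng(3)
x = rng.uniform(0, 50, 30)                                     # hypothetical soil test P (mg/kg)
y = 0.02 * x + 0.1 + rng.normal(0, 0.05, 30)                   # hypothetical dissolved P loss (kg/ha)
print(ols_intervals(x, y, 35.0))
```

The gap between the two intervals is the point the abstract makes: parameter (equation) uncertainty and the residual scatter of the regression contribute differently to the overall prediction uncertainty.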
Calibration and Measurement Uncertainty Estimation of Radiometric Data: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habte, A.; Sengupta, M.; Reda, I.
2014-11-01
Evaluating the performance of photovoltaic cells, modules, and arrays that form large solar deployments relies on accurate measurements of the available solar resource. Therefore, determining the accuracy of these solar radiation measurements provides a better understanding of investment risks. This paper provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements by radiometers using methods that follow the Guide to the Expression of Uncertainty in Measurement (GUM). Standardized analysis based on these procedures ensures that the uncertainty quoted is well documented.
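A minimal GUM-style propagation sketch follows, with illustrative values rather than an actual radiometer budget: the measurand is the irradiance E = V / S from a thermopile voltage V and a calibration sensitivity S, combined via first-order sensitivity coefficients and expanded at coverage factor k = 2.

```python
# First-order (GUM) combined and expanded uncertainty for E = V / S.
import math

V, u_V = 8.0e-3, 5.0e-6        # volts; standard uncertainty of the voltage reading (assumed)
S, u_S = 8.5e-6, 0.1e-6        # V/(W m^-2); standard uncertainty of the sensitivity (assumed)

E = V / S                      # measured irradiance, W m^-2
c_V = 1.0 / S                  # sensitivity coefficient dE/dV
c_S = -V / S**2                # sensitivity coefficient dE/dS
u_c = math.sqrt((c_V * u_V) ** 2 + (c_S * u_S) ** 2)   # combined standard uncertainty
U = 2.0 * u_c                  # expanded uncertainty, coverage factor k = 2

print(f"E = {E:.1f} W/m^2, U(k=2) = {U:.1f} W/m^2 ({100 * U / E:.2f} %)")
```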
Presentation of uncertainties on web platforms for climate change information
NASA Astrophysics Data System (ADS)
Nocke, Thomas; Wrobel, Markus; Reusser, Dominik
2014-05-01
Climate research has a long tradition; however, there is still uncertainty about the specific effects of climate change. One of the key tasks is - beyond discussing climate change and its impacts in specialist groups - to present these to a wider audience. In that respect, decision-makers in the public sector as well as directly affected professional groups need easy-to-understand information. These groups are not made up of specialist scientists. This gives rise to the challenge that scientific information must be presented in a way that is commonly understood while still conveying the complexity of the underlying science. In particular, this requires the explicit representation of spatial and temporal uncertainty information for lay people. Within this talk/poster we survey how climate change and climate impact uncertainty information is presented on various web-based climate service platforms. We outline how the specifics of this medium make it challenging to find adequate and readable representations of uncertainties. First, we introduce a multi-step approach to communicating uncertainty based on a typology distinguishing between epistemic, natural stochastic, and human reflexive uncertainty. Then, we compare existing concepts and representations for uncertainty communication with current practices on web-based platforms, including our own solutions within the web platforms ClimateImpactsOnline and ci:grasp. Finally, we review surveys on how spatial uncertainty visualization techniques are perceived by untrained users.
Fischer, Andreas
2016-11-01
Optical flow velocity measurements are important for understanding the complex behavior of flows. Although a huge variety of methods exist, they are based on either a Doppler or a time-of-flight measurement principle. Doppler velocimetry evaluates the velocity-dependent frequency shift of light scattered at a moving particle, whereas time-of-flight velocimetry evaluates the traveled distance of a scattering particle per time interval. With respect to achieving minimal measurement uncertainty, it is unclear whether one principle allows lower uncertainties to be achieved or whether both principles can achieve equal uncertainties. For this reason, the natural, fundamental uncertainty limit according to Heisenberg's uncertainty principle is derived for the Doppler and time-of-flight measurement principles, respectively. The obtained limits of the velocity uncertainty are qualitatively identical, showing, e.g., a direct proportionality to the absolute value of the velocity to the power of 3/2 and an inverse proportionality to the square root of the scattered light power. Hence, both measurement principles have identical potentials regarding the fundamental uncertainty limit due to the quantum mechanical behavior of photons. This fundamental limit can be attained (at least asymptotically) in reality either with Doppler or time-of-flight methods, because the respective Cramér-Rao bounds for dominating photon shot noise, which is modeled as white Poissonian noise, are identical with the conclusions from Heisenberg's uncertainty principle.
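As a compact restatement of the scalings quoted above (the proportionality constant, which depends on wavelength, observation geometry and the chosen estimator, is not given in the abstract and is therefore omitted):

```latex
\sigma_{v,\min} \;\propto\; \frac{|v|^{3/2}}{\sqrt{P_{\mathrm{s}}}}
```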
Uncertainty visualisation in the Model Web
NASA Astrophysics Data System (ADS)
Gerharz, L. E.; Autermann, C.; Hopmann, H.; Stasch, C.; Pebesma, E.
2012-04-01
Visualisation of geospatial data as maps is a common way to communicate spatially distributed information. If temporal and, furthermore, uncertainty information are included in the data, efficient visualisation methods are required. For uncertain spatial and spatio-temporal data, numerous visualisation methods have been developed and proposed, but only a few tools exist for visualising data in a standardised way. Furthermore, they are usually realised as thick clients and lack functionality for handling data coming from web services, as envisaged in the Model Web. We present an interactive web tool for visualisation of uncertain spatio-temporal data developed in the UncertWeb project. The client is based on the OpenLayers JavaScript library. OpenLayers provides standard map windows and navigation tools, i.e. pan and zoom in/out, to allow interactive control for the user. Further interactive methods are implemented using jStat, a JavaScript library for statistics plots developed in UncertWeb, and flot. To integrate the uncertainty information into existing standards for geospatial data, the Uncertainty Markup Language (UncertML) was applied in combination with OGC Observations & Measurements 2.0 and JavaScript Object Notation (JSON) encodings for vector data and NetCDF for raster data. The client offers methods to visualise uncertain vector and raster data with temporal information. The uncertainty information considered for the tool comprises probabilistic, quantified attribute uncertainties, which can be provided as realisations or samples, full probability distribution functions, or statistics. Visualisation is supported for uncertain continuous and categorical data. In the client, the visualisation is realised using a combination of different methods. Based on previously conducted usability studies, a differentiation between expert (in statistics or mapping) and non-expert users has been indicated as useful. Therefore, two different modes are realised together in the tool: (i) adjacent maps showing data and uncertainty separately, and (ii) multidimensional mapping providing different visualisation methods in combination to explore the spatial, temporal and uncertainty distribution of the data. Adjacent maps allow a simpler visualisation, by separating value and uncertainty maps, for non-experts and for a first overview. The multidimensional approach allows a more complex exploration of the data for experts by browsing through the different dimensions. It offers the visualisation of maps, statistics plots and time series in different windows, and sliders to interactively move through time, space and uncertainty (thresholds).
NASA Astrophysics Data System (ADS)
Ryu, Inkeon; Kim, Daekeun
2018-04-01
A typical selective plane illumination microscopy (SPIM) image size is basically limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount where uncertainties for the translational and rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, with a view to quantifying the constellations of, and measuring the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.
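A minimal rigid-registration sketch (Kabsch/Procrustes) for matched bead coordinates in two overlapping tiles is given below; the bead detection, matching and stitching steps of the paper are not reproduced, and the simulated stage shift, rotation and noise are assumptions.

```python
# Rigid registration (rotation + translation) from matched 3-D bead coordinates.
import numpy as np

def rigid_register(P, Q):
    """Return R, t such that R @ P_i + t ~ Q_i for matched point sets P, Q of shape (N, 3)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                        # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(4)
beads = rng.uniform(0, 500, (6, 3))                                  # bead positions, tile A (um)
angle = np.deg2rad(1.5)                                              # small unintended rotation
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
beads_B = beads @ R_true.T + np.array([450.0, 5.0, 0.0]) + rng.normal(0, 0.3, (6, 3))
R, t = rigid_register(beads, beads_B)
print(np.round(t, 1))                                                # recovered stage shift (um)
```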
NASA Astrophysics Data System (ADS)
Fan, Zuhui
2000-01-01
The linear bias of dark halos is derived from a model based on the Zeldovich approximation and compared with the fitting formula from simulation results. While qualitatively similar to the Press-Schechter formula, this model gives a better description of the linear bias around the turnaround point. This advantage, however, may be compromised by the large uncertainty in the actual behavior of the linear bias near the turnaround point. For a broad class of structure formation models in the cold dark matter framework, a general relation exists between the number density and the linear bias of dark halos. This relation can be readily tested by numerical simulations. Thus, instead of laboriously checking these models one by one, numerical simulation studies can falsify a whole category of models. The general validity of this relation is important in identifying key physical processes responsible for large-scale structure formation in the universe.
NASA Technical Reports Server (NTRS)
Pierce, J.; Diaz-Barrios, M.; Pinzon, J.; Ustin, S. L.; Shih, P.; Tournois, S.; Zarco-Tejada, P. J.; Vanderbilt, V. C.; Perry, G. L.; Brass, James A. (Technical Monitor)
2002-01-01
This study used Support Vector Machines to classify multiangle POLDER data. Boreal wetland ecosystems cover an estimated 90 × 10^6 ha, about 36% of global wetlands, and are a major source of trace gas emissions to the atmosphere. Four to 20 percent of the global emission of methane to the atmosphere comes from wetlands north of 4 degrees N latitude. Large uncertainties in emissions exist because of large spatial and temporal variation in the production and consumption of methane. Accurate knowledge of the areal extent of open water and inundated vegetation is critical to estimating magnitudes of trace gas emissions. Improvements in land cover mapping have been sought using physical-modeling approaches, neural networks, and active microwave, examples that demonstrate the difficulties of separating open water, inundated vegetation and dry upland vegetation. Here we examine the feasibility of using a support vector machine to classify POLDER data representing open water, inundated vegetation and dry upland vegetation.
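A schematic SVM classification setup of the kind described is sketched below; the synthetic features stand in for multiangle POLDER reflectances, and the class structure and values are invented for illustration only.

```python
# Three-class SVM on synthetic "multiangle reflectance" features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_per_class, n_angles = 200, 8
class_means = {0: 0.05, 1: 0.12, 2: 0.25}    # open water, inundated veg., dry upland (toy means)
X = np.vstack([rng.normal(m, 0.03, (n_per_class, n_angles)) for m in class_means.values()])
y = np.repeat(list(class_means.keys()), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```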
Why, which, how, who, when? A personal view of smallpox vaccination for the 2000s.
Mortimer, P P
2004-06-01
The uncertainty about the extent of proliferation of smallpox virus holdings since the early 1990s, and particularly whether terrorist groups or so-called rogue states might now hold the virus, confronts potential target countries with a continuing dilemma. An increasingly large majority of their populations have never been vaccinated, and those who have been vaccinated may have become susceptible to smallpox again. Yet recent attempts by the United States and other governments to persuade large numbers of key personnel and others to accept vaccination have at least partially failed and a different long-term strategy is needed. This strategy should be based on surveillance of rash illnesses, improved public education, more refined contingency planning and a new approach to smallpox vaccination. The last should if possible be based on cell-grown, less reactogenic vaccines, even though it may be some years before these can become available. Meanwhile this article examines other expedients including the use of existing lymph vaccines.
Evaluating Air-Quality Models: Review and Outlook.
NASA Astrophysics Data System (ADS)
Weil, J. C.; Sykes, R. I.; Venkatram, A.
1992-10-01
Over the past decade, much attention has been devoted to the evaluation of air-quality models with emphasis on model performance in predicting the high concentrations that are important in air-quality regulations. This paper stems from our belief that this practice needs to be expanded to 1) evaluate model physics and 2) deal with the large natural or stochastic variability in concentration. The variability is represented by the root-mean-square fluctuating concentration (σc) about the mean concentration (C) over an ensemble, i.e., a given set of meteorological, source, and other conditions. Most air-quality models used in applications predict C, whereas observations are individual realizations drawn from an ensemble. When σc is comparable to or larger than C, large residuals exist between predicted and observed concentrations, which confuse model evaluations.
This paper addresses ways of evaluating model physics in light of the large σc; the focus is on elevated point-source models. Evaluation of model physics requires the separation of the mean model error, the difference between the predicted and observed C, from the natural variability. A residual analysis is shown to be an effective way of doing this. Several examples demonstrate the usefulness of residuals as well as correlation analyses and laboratory data in judging model physics.
In general, σc models and predictions of the probability distribution of the fluctuating concentration c, p(c), are in the developmental stage, with laboratory data playing an important role. Laboratory data from point-source plumes in a convection tank show that p(c) approximates a self-similar distribution along the plume center plane, a useful result in a residual analysis. At present, there is one model, ARAP, that predicts C, σc, and p(c) for point-source plumes. This model is more computationally demanding than other dispersion models (for C only) and must be demonstrated as a practical tool. However, it predicts an important quantity for applications: the uncertainty in the very high and infrequent concentrations. This uncertainty is large and is needed in evaluating operational performance and in predicting the attainment of air-quality standards.
Exploring entropic uncertainty relation in the Heisenberg XX model with inhomogeneous magnetic field
NASA Astrophysics Data System (ADS)
Huang, Ai-Jun; Wang, Dong; Wang, Jia-Ming; Shi, Jia-Dong; Sun, Wen-Yang; Ye, Liu
2017-08-01
In this work, we investigate the quantum-memory-assisted entropic uncertainty relation in a two-qubit Heisenberg XX model with an inhomogeneous magnetic field. It is found that a larger coupling strength J between the two spin-chain qubits can effectively reduce the entropic uncertainty. We also examine how the inhomogeneous field influences the uncertainty, and find that for b<1 the uncertainty decreases as the inhomogeneity parameter b decreases, whereas for b>1 the uncertainty increases with decreasing b. Intriguingly, the entropic uncertainty can shrink to zero when the coupling coefficients are relatively large, while it only reduces to 1 as the homogeneous magnetic field increases. Additionally, we examine the purity of the state and Bell non-locality, and find that the entropic uncertainty is anticorrelated with both the purity and the Bell non-locality of the evolved state.
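For context, the quantum-memory-assisted entropic uncertainty relation that is typically the starting point of such studies is the Berta et al. (2010) bound; the abstract does not state the exact form used, so it is quoted here only as the standard relation, with Q and R the two measured observables, B the quantum memory, and c the maximal overlap of their eigenbases:

```latex
S(Q|B) + S(R|B) \;\ge\; \log_2\frac{1}{c} + S(A|B),
\qquad c = \max_{i,j}\,\bigl|\langle \psi_i \vert \phi_j \rangle\bigr|^2
```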
Climate change adaptation and Integrated Water Resource Management in the water sector
NASA Astrophysics Data System (ADS)
Ludwig, Fulco; van Slobbe, Erik; Cofino, Wim
2014-10-01
Integrated Water Resources Management (IWRM) was introduced in the 1980s to better optimise water uses between different water-demanding sectors. However, since its introduction, water systems have become more complicated due to changes in the global water cycle as a result of climate change. The realization that climate change will have a significant impact on water availability and flood risks has driven research and policy making on adaptation. This paper discusses the main similarities and differences between climate change adaptation and IWRM. The main difference between the two is the focus of IWRM on current and historic issues compared to the (long-term) future focus of adaptation. One of the main problems of implementing climate change adaptation is the large uncertainty in future projections. Two completely different approaches to adaptation have been developed in response to these large uncertainties. The first is a top-down approach based on large-scale biophysical impact analyses, which focuses on quantifying and minimizing uncertainty by using a large range of scenarios and different climate and impact models. The main problem with this approach is the propagation of uncertainties within the modelling chain. The opposite is the bottom-up approach, which largely sets uncertainty aside and focuses on reducing vulnerabilities, often at the local scale, by developing resilient water systems. Both approaches, however, are difficult to integrate into water management. The bottom-up approach focuses too much on socio-economic vulnerability and too little on developing (technical) solutions. The top-down approach often results in an "explosion" of uncertainty and therefore complicates decision making. A more promising direction for adaptation would be a risk-based approach. Future research should further develop and test an approach that starts with developing adaptation strategies based on current and future risks. These strategies should then be evaluated using a range of future scenarios in order to develop robust adaptation measures and strategies.
NASA Astrophysics Data System (ADS)
Tinti, Stefano; Armigliato, Alberto; Pagnoni, Gianluca; Zaniboni, Filippo
2013-04-01
Geoscientists often deal with hazardous processes like earthquakes, volcanic eruptions, tsunamis, hurricanes, etc., and their research aims not only at a better understanding of the physical processes, but also at assessing the spatial and temporal evolution of a given individual event (i.e. providing short-term prediction) and the expected evolution of a group of events (i.e. providing statistical estimates referred to a given return period and a given geographical area). One of the main issues of any scientific method is how to cope with measurement errors, a topic which, in the case of forecasts of ongoing or future events, translates into how to deal with forecast uncertainties. In general, the more data are available and processed to make a prediction, the more accurate the prediction is expected to be if the scientific approach is sound, and the smaller the associated uncertainties are. However, there are several important cases where the assessment has to be made with insufficient data or insufficient time for processing, which leads to large uncertainties. Two examples can be taken from tsunami science, since tsunamis are rare events that may have destructive power and very large impact. One example is the case of warning for a tsunami generated by a near-coast earthquake, an issue at the focus of the European-funded project NearToWarn. Warning has to be issued before the tsunami hits the coast, that is, within a few minutes of its generation. This may imply that the data collected in such a short time are not yet enough for an accurate evaluation, also because the implemented monitoring system (if any) could be inadequate (for instance, a dense instrumental network may be judged too expensive for such rare events). The second case is long-term prevention of tsunami strikes. Tsunami infrequency may imply that the historical record for a given piece of coast is too short to capture a statistically sufficient number of large tsunamis, so that tsunami hazard has to be estimated by means of postulated worst-case scenarios whose consequences are evaluated accordingly and are usually associated with large uncertainty bands. In the case of large uncertainties, the main issues for geoscientists are how to communicate the information (predictions and uncertainties) to stakeholders and citizens, and how to build and implement, together with them, adequate response procedures. Usually there is a tradeoff between the cost of the countermeasure (warning and prevention) and its efficacy (i.e. its capability of minimizing the damage). The level of the acceptable tradeoff is an issue pertaining to decision makers and to the threatened local communities. This paper, which represents a contribution from the European project TRIDEC on the management of emergency crises, discusses the role of geoscientists in providing predictions and the related uncertainties. It is stressed that academic education trains geoscientists mainly to improve their understanding of processes and their quantification of uncertainties, but often leaves them unprepared to communicate their results in a way appropriate for society. Filling this gap is crucial for improving the way geoscience and society handle natural hazards and devise proper means of defense.
NASA Astrophysics Data System (ADS)
Xu, Jun
Topic 1. An Optimization-Based Approach for Facility Energy Management with Uncertainties. Effective energy management for facilities is becoming increasingly important in view of rising energy costs, government mandates on the reduction of energy consumption, and human comfort requirements. This part of the dissertation presents a daily energy management formulation and the corresponding solution methodology for HVAC systems. The problem is to minimize the energy and demand costs through the control of HVAC units while satisfying human comfort, system dynamics, load limit constraints, and other requirements. The problem is difficult in view of the fact that the system is nonlinear, time-varying, building-dependent, and uncertain, and that the direct control of a large number of HVAC components is difficult. In this work, HVAC setpoints are the control variables, developed on top of a Direct Digital Control (DDC) system. A method that combines Lagrangian relaxation, neural networks, stochastic dynamic programming, and heuristics is developed to predict the system dynamics and uncontrollable load, and to optimize the setpoints. Numerical testing and prototype implementation results show that our method can effectively reduce total costs, manage uncertainties, and shed load, and that it is computationally efficient. Furthermore, it is significantly better than existing methods. Topic 2. Power Portfolio Optimization in Deregulated Electricity Markets with Risk Management. In a deregulated electric power system, multiple markets of different time scales exist with various power supply instruments. A load serving entity (LSE) has multiple choices among these instruments to meet its load obligations. In view of the large amount of power involved, the complex market structure, the risks in such volatile markets, the stringent constraints to be satisfied, and the long time horizon, a power portfolio optimization problem is of critical importance, but also of great difficulty, for an LSE seeking to serve its load, maximize its profit, and manage risks. In this topic, a mid-term power portfolio optimization problem with risk management is presented. Key instruments are considered, risk terms based on semi-variances of spot market transactions are introduced, and penalties on load obligation violations are added to the objective function to improve algorithm convergence and constraint satisfaction. To overcome the inseparability of the resulting problem, a surrogate optimization framework is developed, enabling a decomposition and coordination approach. Numerical testing results show that our method effectively provides decisions for various instruments to maximize profit and manage risks, and is computationally efficient.
Multilevel UQ strategies for large-scale multiphysics applications: PSAAP II solar receiver
NASA Astrophysics Data System (ADS)
Jofre, Lluis; Geraci, Gianluca; Iaccarino, Gianluca
2017-06-01
Uncertainty quantification (UQ) plays a fundamental part in building confidence in predictive science. Of particular interest is the case of modeling and simulating engineering applications where, due to the inherent complexity, many uncertainties naturally arise, e.g. domain geometry, operating conditions, errors induced by modeling assumptions, etc. In this regard, one of the pacing items, especially in high-fidelity computational fluid dynamics (CFD) simulations, is the large amount of computing resources typically required to propagate uncertainty through the models. Upcoming exascale supercomputers will significantly increase the available computational power. However, UQ approaches cannot rely only on brute-force Monte Carlo (MC) sampling; the large number of uncertainty sources and the presence of nonlinearities in the solution make straightforward MC analysis unaffordable. Therefore, this work explores the multilevel MC strategy, and its extension to multi-fidelity and time convergence, to accelerate the estimation of the effect of uncertainties. The approach is described in detail, and its performance demonstrated on a radiated turbulent particle-laden flow case relevant to solar energy receivers (PSAAP II: Particle-laden turbulence in a radiation environment). Investigation funded by DoE's NNSA under PSAAP II.
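A minimal sketch of the multilevel MC telescoping estimator referenced above, applied to a toy model rather than the particle-laden flow solver; the level definitions, sample counts, and integrand are assumptions chosen so the example runs in seconds.

```python
# Multilevel Monte Carlo sketch: estimate E[Q] as E[Q_0] + sum_l E[Q_l - Q_{l-1}],
# drawing corrections on each level with the same random inputs for fine and coarse.
import numpy as np

rng = np.random.default_rng(0)

def level_output(level, xi, n0=4):
    """Toy level-l approximation of Q(xi): midpoint rule with n0*2**level cells."""
    n = n0 * 2 ** level
    x = (np.arange(n) + 0.5) / n
    return np.mean(np.sin(np.pi * x) * np.exp(xi * x))   # converges as n grows

def mlmc_estimate(levels=4, samples=(4000, 1000, 250, 60)):
    total = 0.0
    for l in range(levels):
        xi = rng.normal(size=samples[l])                 # random inputs for this level
        fine = np.array([level_output(l, z) for z in xi])
        if l == 0:
            correction = fine                            # E[Q_0]
        else:
            coarse = np.array([level_output(l - 1, z) for z in xi])
            correction = fine - coarse                   # E[Q_l - Q_{l-1}], same xi
        total += correction.mean()
    return total

print("MLMC estimate of E[Q]:", mlmc_estimate())
```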
Jennings, Simon; Collingridge, Kate
2015-01-01
Existing estimates of fish and consumer biomass in the world’s oceans are disparate. This creates uncertainty about the roles of fish and other consumers in biogeochemical cycles and ecosystem processes, the extent of human and environmental impacts and fishery potential. We develop and use a size-based macroecological model to assess the effects of parameter uncertainty on predicted consumer biomass, production and distribution. Resulting uncertainty is large (e.g. median global biomass 4.9 billion tonnes for consumers weighing 1 g to 1000 kg; 50% uncertainty intervals of 2 to 10.4 billion tonnes; 90% uncertainty intervals of 0.3 to 26.1 billion tonnes) and driven primarily by uncertainty in trophic transfer efficiency and its relationship with predator-prey body mass ratios. Even the upper uncertainty intervals for global predictions of consumer biomass demonstrate the remarkable scarcity of marine consumers, with less than one part in 30 million by volume of the global oceans comprising tissue of macroscopic animals. Thus the apparently high densities of marine life seen in surface and coastal waters and frequently visited abundance hotspots will likely give many in society a false impression of the abundance of marine animals. Unexploited baseline biomass predictions from the simple macroecological model were used to calibrate a more complex size- and trait-based model to estimate fisheries yield and impacts. Yields are highly dependent on baseline biomass and fisheries selectivity. Predicted global sustainable fisheries yield increases ≈4 fold when smaller individuals (< 20 cm from species of maximum mass < 1kg) are targeted in all oceans, but the predicted yields would rarely be accessible in practice and this fishing strategy leads to the collapse of larger species if fishing mortality rates on different size classes cannot be decoupled. Our analyses show that models with minimal parameter demands that are based on a few established ecological principles can support equitable analysis and comparison of diverse ecosystems. The analyses provide insights into the effects of parameter uncertainty on global biomass and production estimates, which have yet to be achieved with complex models, and will therefore help to highlight priorities for future research and data collection. However, the focus on simple model structures and global processes means that non-phytoplankton primary production and several groups, structures and processes of ecological and conservation interest are not represented. Consequently, our simple models become increasingly less useful than more complex alternatives when addressing questions about food web structure and function, biodiversity, resilience and human impacts at smaller scales and for areas closer to coasts. PMID:26226590
Uncertainty estimation of Intensity-Duration-Frequency relationships: A regional analysis
NASA Astrophysics Data System (ADS)
Mélèse, Victor; Blanchet, Juliette; Molinié, Gilles
2018-03-01
We propose in this article a regional study of uncertainties in IDF curves derived from point-rainfall maxima. We develop two generalized extreme value models based on the simple scaling assumption, first in the frequentist framework and second in the Bayesian framework. Within the frequentist framework, uncertainties are obtained i) from the Gaussian density stemming from the asymptotic normality theorem of maximum likelihood and ii) with a bootstrap procedure. Within the Bayesian framework, uncertainties are obtained from the posterior densities. We confront these two frameworks on the same database covering a large region of 100,000 km2 in southern France with contrasted rainfall regimes, in order to draw conclusions that are not specific to the data. The two frameworks are applied to 405 hourly stations with data back to the 1980s, accumulated over durations in the range 3 h-120 h. We show that i) the Bayesian framework is more robust than the frequentist one to the starting point of the estimation procedure, ii) the posterior and the bootstrap densities are able to better adjust uncertainty estimation to the data than the Gaussian density, and iii) the bootstrap density gives unreasonable confidence intervals, in particular for return levels associated with large return periods. Therefore our recommendation goes towards the use of the Bayesian framework to compute uncertainty.
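The frequentist bootstrap route described in (ii) can be sketched as follows for a single station and duration; the synthetic annual maxima, GEV parameters, and return period are assumptions, and neither the simple-scaling IDF structure nor the Bayesian counterpart is implemented here.

```python
# Fit a GEV to annual maxima and bootstrap a confidence interval for a return level.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
annual_max = genextreme.rvs(c=-0.1, loc=20.0, scale=5.0, size=40,
                            random_state=rng)            # synthetic maxima (mm/h)

def return_level(sample, T=100):
    """T-year return level from a maximum-likelihood GEV fit."""
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)

point = return_level(annual_max)
boot = np.array([return_level(rng.choice(annual_max, size=annual_max.size, replace=True))
                 for _ in range(2000)])                  # nonparametric bootstrap
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"100-yr return level: {point:.1f} mm/h, 95% bootstrap CI [{lo:.1f}, {hi:.1f}]")
```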
NASA Astrophysics Data System (ADS)
Yuan, Fei; Zhao, Chongxu; Jiang, Yong; Ren, Liliang; Shan, Hongcui; Zhang, Limin; Zhu, Yonghua; Chen, Tao; Jiang, Shanhu; Yang, Xiaoli; Shen, Hongren
2017-11-01
Projections of hydrological changes are associated with large uncertainties from different sources, which should be quantified for an effective implementation of water management policies adaptive to future climate change. In this study, a modeling chain framework to project future hydrological changes and the associated uncertainties in the Xijiang River basin, South China, was established. The framework consists of three emission scenarios (ESs), four climate models (CMs), four statistical downscaling (SD) methods, four hydrological modeling (HM) schemes, and four probability distributions (PDs) for extreme flow frequency analyses. Direct variance method was adopted to analyze the manner by which uncertainty sources such as ES, CM, SD, and HM affect the estimates of future evapotranspiration (ET) and streamflow, and to quantify the uncertainties of PDs in future flood and drought risk assessment. Results show that ES is one of the least important uncertainty sources in most situations. CM, in general, is the dominant uncertainty source for the projections of monthly ET and monthly streamflow during most of the annual cycle, daily streamflow below the 99.6% quantile level, and extreme low flow. SD is the most predominant uncertainty source in the projections of extreme high flow, and has a considerable percentage of uncertainty contribution in monthly streamflow projections in July-September. The effects of SD in other cases are negligible. HM is a non-ignorable uncertainty source that has the potential to produce much larger uncertainties for the projections of low flow and ET in warm and wet seasons than for the projections of high flow. PD contributes a larger percentage of uncertainty in extreme flood projections than it does in extreme low flow estimates. Despite the large uncertainties in hydrological projections, this work found that future extreme low flow would undergo a considerable reduction, and a noticeable increase in drought risk in the Xijiang River basin would be expected. Thus, the necessity of employing effective water-saving techniques and adaptive water resources management strategies for drought disaster mitigation should be addressed.
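A hedged sketch of how a factorial ensemble of projections (ES x CM x SD x HM) can be decomposed into per-source variance contributions; the main-effect partition below is a simplification of the direct variance method described above, and the projection values are random placeholders.

```python
# Main-effect variance decomposition over a 3 ES x 4 CM x 4 SD x 4 HM ensemble.
import numpy as np

rng = np.random.default_rng(2)
proj = rng.normal(size=(3, 4, 4, 4))      # e.g. change in mean annual streamflow (%)

total_var = proj.var()
names = ["ES", "CM", "SD", "HM"]
for axis, name in enumerate(names):
    other = tuple(i for i in range(proj.ndim) if i != axis)
    main_effect = proj.mean(axis=other)   # mean projection for each level of this factor
    frac = main_effect.var() / total_var
    print(f"{name}: {100 * frac:.1f}% of ensemble variance (main effect)")
```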
Uncertainties have a meaning: Information entropy as a quality measure for 3-D geological models
NASA Astrophysics Data System (ADS)
Wellmann, J. Florian; Regenauer-Lieb, Klaus
2012-03-01
Analyzing, visualizing and communicating uncertainties are important issues as geological models can never be fully determined. To date, there exists no general approach to quantify uncertainties in geological modeling. We propose here to use information entropy as an objective measure to compare and evaluate model and observational results. Information entropy was introduced in the 1950s and defines a scalar measure of predictability at every location in the model. We show that this method not only provides a quantitative insight into model uncertainties but, due to the underlying concept of information entropy, can be related to questions of data integration (i.e. how is the model quality interconnected with the input data used) and model evolution (i.e. does new data - or a changed geological hypothesis - optimize the model). In other words, information entropy is a powerful measure to be used for data assimilation and inversion. As a first test of feasibility, we present the application of the new method to the visualization of uncertainties in geological models, here understood as structural representations of the subsurface. Applying the concept of information entropy to a suite of simulated models, we can clearly identify (a) uncertain regions within the model, even for complex geometries; (b) the overall uncertainty of a geological unit, which is, for example, of great relevance in any type of resource estimation; (c) a mean entropy for the whole model, important to track model changes with one overall measure. These results cannot easily be obtained with existing standard methods. The results suggest that information entropy is a powerful method to visualize uncertainties in geological models, and to classify the indefiniteness of single units and the mean entropy of a model quantitatively. Due to the relationship of this measure to the missing information, we expect the method to have a great potential in many types of geoscientific data assimilation problems — beyond pure visualization.
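A minimal sketch of the per-cell information entropy computation implied above, assuming the unit probabilities at each grid cell have already been estimated from a suite of simulated geological models.

```python
# Shannon entropy H = -sum_i p_i log p_i at every grid cell, plus the model mean.
import numpy as np

def cell_entropy(p, eps=1e-12):
    """p has shape (n_units, nx, ny): probability of each geological unit per cell."""
    p = np.clip(p, eps, 1.0)
    return -(p * np.log(p)).sum(axis=0)

# toy example: 3 units on a 2x2 grid (columns sum to 1 across units)
p = np.array([[[1.0, 0.5], [0.4, 1/3]],
              [[0.0, 0.5], [0.3, 1/3]],
              [[0.0, 0.0], [0.3, 1/3]]])
H = cell_entropy(p)
print("per-cell entropy:\n", np.round(H, 3))
print("mean model entropy:", round(float(H.mean()), 3))
```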
NASA Astrophysics Data System (ADS)
Aydin, Orhun; Caers, Jef Karel
2017-08-01
Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition about fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods that address specific sources of fault network uncertainty and complexities of fault modeling exist, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov Chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough and Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, only partially visible faults, and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone tectonics similar to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show that the proposed methodology generates realistic fault network models conditioned to data and a conceptual model of the underlying tectonics.
NASA Astrophysics Data System (ADS)
Doytchinov, I.; Tonnellier, X.; Shore, P.; Nicquevert, B.; Modena, M.; Mainaud Durand, H.
2018-05-01
Micrometric assembly and alignment requirements for future particle accelerators, and especially for large assemblies, create the need for accurate uncertainty budgeting of alignment measurements. Measurements and uncertainties have to be accurately stated and traceable to international standards, for metre-long assemblies, within the range of tens of µm. Indeed, these hundreds of assemblies will be produced and measured by several suppliers around the world, and will have to be integrated into a single machine. As part of the PACMAN project at CERN, we proposed and studied a practical application of probabilistic modelling of task-specific alignment uncertainty by applying a simulation by constraints calibration method. Using this method, we calibrated our measurement model using available data from ISO standardised tests (10360 series) for the metrology equipment. We combined this model with reference measurements and analysis of the measured data to quantify the actual specific uncertainty of each alignment measurement procedure. Our methodology was successfully validated against a calibrated and traceable 3D artefact as part of an international inter-laboratory study. The validated models were used to study the expected alignment uncertainty and the important sensitivity factors in measuring the shortest and longest of the Compact Linear Collider study assemblies, 0.54 m and 2.1 m long respectively. In both cases, the laboratory alignment uncertainty was within the targeted uncertainty budget of 12 µm (68% confidence level). It was found that the remaining uncertainty budget for any additional alignment error compensations, such as thermal drift error due to variation in machine operation heat load conditions, must be within 8.9 µm and 9.8 µm (68% confidence level), respectively.
NASA Astrophysics Data System (ADS)
Blum, David Arthur
Algae biodiesel is the sole sustainable and abundant transportation fuel source that can replace petrol diesel use; however, high competition and economic uncertainties exist, influencing independent venture capital (IVC) decision making. Technology, market, management, and government action uncertainties influence competition and economic uncertainties in the venture capital industry. The purpose of this qualitative case study was to identify the best practice skills at IVC firms for predicting uncertainty between early and late funding stages. The basis of the study was real options theory, a framework used to evaluate and understand the economic and competition uncertainties inherent in natural resource investment and energy derived from plant-based oils. Data were collected from interviews of 24 venture capital partners based in the United States who invest in algae and other renewable energy solutions. Data were analyzed by coding and theme development interwoven with the conceptual framework. Eight themes emerged: (a) expected returns model, (b) due diligence, (c) invest in specific sectors, (d) reduced uncertainty-late stage, (e) coopetition, (f) portfolio firm relationships, (g) differentiation strategy, and (h) modeling uncertainty and best practice. The most noteworthy finding was that predicting uncertainty at the early stage was impractical; at the expansion and late funding stages, however, predicting uncertainty was possible. The implications of these findings will affect social change by providing independent venture capitalists with best practice skills to increase successful exits, lessen uncertainty, and encourage increased funding of renewable energy firms, contributing to cleaner and healthier communities throughout the United States.
NASA Astrophysics Data System (ADS)
Zheng, Xiao-Tong; Hui, Chang; Yeh, Sang-Wook
2018-06-01
El Niño-Southern Oscillation (ENSO) is the dominant mode of variability in the coupled ocean-atmosphere system. Future projections of ENSO change under global warming are highly uncertain among models. In this study, the effect of internal variability on ENSO amplitude change in future climate projections is investigated based on a 40-member ensemble from the Community Earth System Model Large Ensemble (CESM-LE) project. A large uncertainty is identified among ensemble members due to internal variability. The inter-member diversity is associated with a zonal dipole pattern of sea surface temperature (SST) change in the mean state along the equator, which is similar to the second empirical orthogonal function (EOF) mode of tropical Pacific decadal variability (TPDV) in the unforced control simulation. The uncertainty in CESM-LE is comparable in magnitude to that among models of the Coupled Model Intercomparison Project phase 5 (CMIP5), suggesting a contribution of internal variability to the intermodel uncertainty in ENSO amplitude change. However, the causal relationships between changes in ENSO amplitude and the mean state are distinct between the CESM-LE and CMIP5 ensembles. The CESM-LE results indicate that a large ensemble of 15 members is needed to separate the relative contributions of forced response and internal variability to ENSO amplitude change over the twenty-first century.
NASA Astrophysics Data System (ADS)
Jacobson, R. B.; Colvin, M. E.; Marmorek, D.; Randall, M.
2017-12-01
The Missouri River Recovery Program (MRRP) seeks to revise river-management strategies to avoid jeopardizing the existence of three species: pallid sturgeon (Scaphirhynchus albus), interior least tern (Sterna antillarum), and piping plover (Charadrius melodus). Managing the river to maintain populations of the two birds (terns and plovers) is relatively straightforward: reproductive success can be modeled with some certainty as a direct, increasing function of exposed sandbar area. In contrast, the pallid sturgeon inhabits the benthic zone of a deep, turbid river and many parts of its complex life history are not directly observable. Hence, pervasive uncertainties exist about what factors are limiting population growth and what management actions may reverse population declines. These uncertainties are being addressed by the MRRP through a multi-step process. The first step was an Effects Analysis (EA), which: documented what is known and unknown about the river and the species; documented the quality and quantity of existing information; used an expert-driven process to develop conceptual ecological models and to prioritize management hypotheses; and developed quantitative models linking management actions (flows, channel reconfigurations, and stocking) to population responses. The EA led to development of a science and adaptive-management plan with prioritized allocation of investment among 4 levels of effort ranging from fundamental research to full implementation. The plan includes learning from robust, hypothesis-driven effectiveness monitoring for all actions, with statistically sound experimental designs, multiple metrics, and explicit decision criteria to guide management. Finally, the science plan has been fully integrated with a new adaptive-management structure that links science to decision makers. The reinvigorated investment in science stems from the understanding that costly river-management decisions are not socially or politically supportable without better understanding of how this endangered fish will respond. While some hypotheses can be evaluated without actually implementing management actions in the river, assessing the effectiveness of other forms of habitat restoration requires in-river implementation within a rigorous experimental design.
NASA Astrophysics Data System (ADS)
Sherman, James P.; McComiskey, Allison
2018-03-01
Aerosol optical properties measured at Appalachian State University's co-located NASA AERONET and NOAA ESRL aerosol network monitoring sites over a nearly four-year period (June 2012-Feb 2016) are used, along with satellite-based surface reflectance measurements, to study the seasonal variability of diurnally averaged clear sky aerosol direct radiative effect (DRE) and radiative efficiency (RE) at the top-of-atmosphere (TOA) and at the surface. Aerosol chemistry and loading at the Appalachian State site are likely representative of the background southeast US (SE US), home to high summertime aerosol loading and one of only a few regions not to have warmed during the 20th century. This study is the first multi-year ground-truth DRE study in the SE US, using aerosol network data products that are often used to validate satellite-based aerosol retrievals. The study is also the first in the SE US to quantify DRE uncertainties and sensitivities to aerosol optical properties and surface reflectance, including their seasonal dependence. Median DRE for the study period is -2.9 W m-2 at the TOA and -6.1 W m-2 at the surface. Monthly median and monthly mean DRE at the TOA (surface) are -1 to -2 W m-2 (-2 to -3 W m-2) during winter months and -5 to -6 W m-2 (-10 W m-2) during summer months. The DRE cycles follow the annual cycle of aerosol optical depth (AOD), which is 9 to 10 times larger in summer than in winter. Aerosol RE is anti-correlated with DRE, with winter values 1.5 to 2 times more negative than summer values. Due to the large seasonal dependence of aerosol DRE and RE, we quantify the sensitivity of DRE to aerosol optical properties and surface reflectance, using a calendar day representative of each season (21 December for winter, 21 March for spring, 21 June for summer, and 21 September for fall). We use these sensitivities along with measurement uncertainties of aerosol optical properties and surface reflectance to calculate DRE uncertainties. We also estimate uncertainty in calculated diurnally-averaged DRE due to diurnal aerosol variability. Aerosol DRE at both the TOA and surface is most sensitive to changes in AOD, followed by single-scattering albedo (ω0). One exception is under high summertime aerosol loading conditions (AOD ≥ 0.15 at 550 nm), when the sensitivity of TOA DRE to ω0 is comparable to that of AOD. Aerosol DRE is less sensitive to changes in the scattering asymmetry parameter (g) and surface reflectance (R). While DRE sensitivity to AOD varies by only ˜ 25 to 30 % with season, DRE sensitivity to ω0, g, and R largely follows the annual AOD cycle at Appalachian State, varying by factors of 8 to 15 with season. Since the measurement uncertainties of AOD, ω0, g, and R are comparable at Appalachian State, their relative contributions to DRE uncertainty are largely influenced by their (seasonally dependent) DRE sensitivity values, which suggests that the seasonal dependence of DRE uncertainty must be accounted for. Clear sky aerosol DRE uncertainty at the TOA (surface) due to measurement uncertainties ranges from 0.45 W m-2 (0.75 W m-2) for December to 1.1 W m-2 (1.6 W m-2) for June. Expressed as a fraction of DRE computed using monthly median aerosol optical properties and surface reflectance, the DRE uncertainties at the TOA (surface) are 20 to 24 % (15 to 22 %) for March, June, and September and 49 % (50 %) for December. The relatively low DRE uncertainties are largely due to the low uncertainty in AOD measured by AERONET. Use of satellite-based AOD measurements by MODIS in the DRE calculations increases DRE uncertainties by a factor of 2 to 5, and DRE uncertainties are then dominated by AOD uncertainty for all seasons. Diurnal variability in AOD (and to a lesser extent g) contributes uncertainties in DRE calculated using daily-averaged aerosol optical properties that are slightly larger (by ˜ 20 to 30 %) than the DRE uncertainties due to measurement uncertainties during summer and fall, with comparable uncertainties during winter and spring.
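A hedged sketch of the sensitivity-based uncertainty propagation described above: per-variable sensitivities are multiplied by measurement uncertainties and combined in quadrature; the numerical values are placeholders, not the study's tabulated sensitivities.

```python
# Quadrature combination of (dDRE/dx) * u(x) terms for a single season.
import numpy as np

# per-variable (sensitivity dDRE/dx in W m-2 per unit, measurement uncertainty u(x))
terms = {
    "AOD":    (-30.0, 0.01),   # placeholder sensitivity and u(AOD)
    "omega0": (15.0, 0.03),    # single-scattering albedo
    "g":      (5.0,  0.03),    # asymmetry parameter
    "R":      (8.0,  0.01),    # surface reflectance
}

contrib = {k: (s * u) ** 2 for k, (s, u) in terms.items()}
u_dre = np.sqrt(sum(contrib.values()))
print("DRE uncertainty (W m-2): %.2f" % u_dre)
for k, v in contrib.items():
    print(f"  {k}: {100 * v / u_dre**2:.0f}% of variance")
```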
Quantifying and Qualifying USGS ShakeMap Uncertainty
Wald, David J.; Lin, Kuo-Wan; Quitoriano, Vincent
2008-01-01
We describe algorithms for quantifying and qualifying uncertainties associated with USGS ShakeMap ground motions. The uncertainty values computed consist of latitude/longitude grid-based multiplicative factors that scale the standard deviation associated with the ground motion prediction equation (GMPE) used within the ShakeMap algorithm for estimating ground motions. The resulting grid-based 'uncertainty map' is essential for evaluation of losses derived using ShakeMaps as the hazard input. For ShakeMap, ground motion uncertainty at any point is dominated by two main factors: (i) the influence of any proximal ground motion observations, and (ii) the uncertainty of estimating ground motions from the GMPE, most notably, elevated uncertainty due to initial, unconstrained source rupture geometry. The uncertainty is highest for larger magnitude earthquakes when source finiteness is not yet constrained and, hence, the distance to rupture is also uncertain. In addition to a spatially dependent, quantitative assessment, many users may prefer a simple, qualitative grading for the entire ShakeMap. We developed a grading scale that allows one to quickly gauge the appropriate level of confidence when using rapidly produced ShakeMaps as part of the post-earthquake decision-making process or for qualitative assessments of archived or historical earthquake ShakeMaps. We describe an uncertainty letter grading ('A' through 'F', for high to poor quality, respectively) based on the uncertainty map. A middle-range ('C') grade corresponds to a ShakeMap for a moderate-magnitude earthquake suitably represented with a point-source location. Lower grades 'D' and 'F' are assigned for larger events (M>6) where finite-source dimensions are not yet constrained. The addition of ground motion observations (or observed macroseismic intensities) reduces uncertainties over data-constrained portions of the map. Higher grades ('A' and 'B') correspond to ShakeMaps with constrained fault dimensions and numerous stations, depending on the density of station/data coverage. Due to these dependencies, the letter grade can change with subsequent ShakeMap revisions if more data are added or when finite-faulting dimensions are added. We emphasize that the greatest uncertainties are associated with unconstrained source dimensions for large earthquakes where the distance term in the GMPE is most uncertain; this uncertainty thus scales with magnitude (and consequently rupture dimension). Since this distance uncertainty produces potentially large uncertainties in ShakeMap ground-motion estimates, this factor dominates over compensating constraints for all but the most dense station distributions.
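The letter-grade logic summarized above could be sketched as a simple rule of the following form; the numeric thresholds and the D/F split are illustrative inventions, since the operational grading is derived from the grid-based uncertainty map itself.

```python
# Illustrative grading rule consistent with the description above (not the USGS code).
def shakemap_grade(magnitude, fault_constrained, station_density):
    """station_density: assumed stations per 1000 km2 of map area (illustrative)."""
    if fault_constrained and station_density > 0:
        return "A" if station_density >= 1.0 else "B"   # constrained fault + stations
    if magnitude <= 6.0:
        return "C"                      # moderate event, point source is adequate
    return "D" if station_density > 0 else "F"   # large event, source unconstrained

print(shakemap_grade(7.2, fault_constrained=False, station_density=0.3))  # -> "D"
```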
Uncertainties in building a strategic defense.
Zraket, C A
1987-03-27
Building a strategic defense against nuclear ballistic missiles involves complex and uncertain functional, spatial, and temporal relations. Such a defensive system would evolve and grow over decades. It is too complex, dynamic, and interactive to be fully understood initially by design, analysis, and experiments. Uncertainties exist in the formulation of requirements and in the research and design of a defense architecture that can be implemented incrementally and be fully tested to operate reliably. The analysis and measurement of system survivability, performance, and cost-effectiveness are critical to this process. Similar complexities exist for an adversary's system that would suppress or use countermeasures against a missile defense. Problems and opportunities posed by these relations are described, with emphasis on the unique characteristics and vulnerabilities of space-based systems.
Non-Static error tracking control for near space airship loading platform
NASA Astrophysics Data System (ADS)
Ni, Ming; Tao, Fei; Yang, Jiandong
2018-01-01
A control scheme based on an internal model with non-static error is presented to address the uncertainty of the near space airship loading platform system. The uncertainty in the tracking table is represented as interval variations in the stability and control derivatives. By formulating the tracking problem of the uncertain system as a robust state feedback stabilization problem for an augmented system, a sufficient condition for the existence of a robust tracking controller is derived in the form of a linear matrix inequality (LMI). Finally, simulation results show that the new method not only has better anti-jamming performance, but also improves the dynamic performance of high-order systems.
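For readers unfamiliar with LMI-based synthesis, the sketch below solves a generic state-feedback stabilization LMI (find P > 0 and Y with A P + P A^T + B Y + Y^T B^T < 0, then K = Y P^-1) using cvxpy; the matrices are illustrative placeholders, and this is not the paper's specific augmented tracking-error system or robustness condition.

```python
# Generic Lyapunov-based state-feedback LMI feasibility problem solved as an SDP.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # assumed open-loop dynamics (unstable)
B = np.array([[0.0], [1.0]])

n, m = A.shape[0], B.shape[1]
P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
eps = 1e-3
constraints = [P >> eps * np.eye(n),
               A @ P + P @ A.T + B @ Y + Y.T @ B.T << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

K = Y.value @ np.linalg.inv(P.value)      # stabilizing gain from the LMI solution
print("gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))
```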
Considering Risk and Resilience in Decision-Making
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This paper examines the concepts of decision-making, risk analysis, uncertainty and resilience analysis. The relation between risk, vulnerability, and resilience is analyzed. The paper describes how complexity, uncertainty, and ambiguity are the most critical factors in the definition of the approach and criteria for decision-making. Uncertainty in its various forms is what limits our ability to offer definitive answers to questions about the outcomes of alternatives in a decision-making process. It is shown that, although resilience-informed decision-making would seem fundamentally different from risk-informed decision-making, this is not the case as resilience-analysis can be easily incorporated within existing analytic-deliberative decision-making frameworks.
The Social Construction of Uncertainty in Healthcare Delivery
NASA Astrophysics Data System (ADS)
Begun, James W.; Kaissi, Amer A.
We explore the following question: How would healthcare delivery be different if uncertainty were widely recognized, accurately diagnosed, and appropriately managed? Unlike most studies of uncertainty, we examine uncertainty at more than one level of analysis, considering uncertainty that arises at the patient-clinician interaction level and at the organizational level of healthcare delivery. We consider the effects of history, as the forces and systems that currently shape and manage uncertainty have emerged over a long time period. The purpose of this broad and speculative "thought exercise" is to generate greater sensemaking of the current state of healthcare delivery, particularly in the realm of organizational and public policy, and to generate new research questions about healthcare delivery. The discussion is largely based on experience in the United States, which may limit its generalizability.
Uncertainties and applications of satellite-derived coastal water quality products
NASA Astrophysics Data System (ADS)
Zheng, Guangming; DiGiacomo, Paul M.
2017-12-01
Recent and forthcoming launches of a plethora of ocean color radiometry sensors, coupled with increasingly adopted free and open data policies, are expected to boost usage of satellite ocean color data and drive the demand to use these data in a quantitative and routine manner. Here we review factors that introduce uncertainties to various satellite-derived water quality products and recommend approaches to minimize the uncertainty of a specific product. We show that the regression relationships between remote-sensing reflectance and water turbidity (in terms of nephelometric units) established for different regions tend to converge, and therefore it is plausible to develop a global satellite water turbidity product derived using a single algorithm. In contrast, solutions to derive suspended particulate matter concentration are much less generalizable; in one case it might be more accurate to estimate this parameter based on the satellite-derived particulate backscattering coefficient, whereas in another the nonalgal particulate absorption coefficient might be a better proxy. Regarding satellite-derived chlorophyll concentration, known to be subject to large uncertainties in coastal waters, the studies summarized here clearly indicate that the accuracy of classical reflectance band-ratio algorithms depends largely on the contribution of phytoplankton to the total light absorption coefficient as well as the degree of correlation between phytoplankton and the dominant nonalgal contributions. Our review also indicates that currently available satellite-derived water quality products are restricted to optically significant materials, whereas many users are interested in toxins, nutrients, pollutants, and pathogens. Presently, proxies or indicators for these constituents are inconsistently (and often incorrectly) developed and applied. Progress in this general direction will remain slow unless (i) optical oceanographers and environmental scientists start collaborating more closely and make optical and environmental measurements in parallel, (ii) more efforts are devoted to identifying optical, ecological, and environmental forerunners of autochthonous water quality issues (e.g., onsite growth of pathogens), and (iii) environmental processes associated with the source, transport, and transformation of allochthonous issues (e.g., transport of nutrients) are better understood. Accompanying these challenges, the need still exists to conduct fundamental research in satellite ocean color radiometry, including development of more robust atmospheric correction methods as well as inverse models for coastal regions where optical properties of both aerosols and hydrosols are complex.
Removal of Asperger's syndrome from the DSM V: community response to uncertainty.
Parsloe, Sarah M; Babrow, Austin S
2016-01-01
The May 2013 release of the new version of the Diagnostic and Statistical Manual of Mental Disorders (DSM V) subsumed Asperger's syndrome under the wider diagnostic label of autism spectrum disorder (ASD). The revision has created much uncertainty in the community affected by this condition. This study uses problematic integration theory and thematic analysis to investigate how participants in Wrong Planet, a large online community associated with autism and Asperger's syndrome, have constructed these uncertainties. The analysis illuminates uncertainties concerning both the likelihood of diagnosis and value of diagnosis, and it details specific issues within these two general areas of uncertainty. The article concludes with both conceptual and practical implications.
NASA Technical Reports Server (NTRS)
Owens, Andrew; De Weck, Olivier L.; Stromgren, Chel; Goodliff, Kandyce; Cirillo, William
2017-01-01
Future crewed missions to Mars present a maintenance logistics challenge that is unprecedented in human spaceflight. Mission endurance – defined as the time between resupply opportunities – will be significantly longer than previous missions, and therefore logistics planning horizons are longer and the impact of uncertainty is magnified. Maintenance logistics forecasting typically assumes that component failure rates are deterministically known and uses them to represent aleatory uncertainty, or uncertainty that is inherent to the process being examined. However, failure rates cannot be directly measured; rather, they are estimated based on similarity to other components or statistical analysis of observed failures. As a result, epistemic uncertainty – that is, uncertainty in knowledge of the process – exists in failure rate estimates that must be accounted for. Analyses that neglect epistemic uncertainty tend to significantly underestimate risk. Epistemic uncertainty can be reduced via operational experience; for example, the International Space Station (ISS) failure rate estimates are refined using a Bayesian update process. However, design changes may re-introduce epistemic uncertainty. Thus, there is a tradeoff between changing a design to reduce failure rates and operating a fixed design to reduce uncertainty. This paper examines the impact of epistemic uncertainty on maintenance logistics requirements for future Mars missions, using data from the ISS Environmental Control and Life Support System (ECLS) as a baseline for a case study. Sensitivity analyses are performed to investigate the impact of variations in failure rate estimates and epistemic uncertainty on spares mass. The results of these analyses and their implications for future system design and mission planning are discussed.
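A minimal sketch of the epistemic-plus-aleatory treatment discussed above: a conjugate Gamma prior on a component failure rate is updated with observed operating experience, and mission spares demand is sampled by drawing the rate (epistemic) and then Poisson failure counts (aleatory). The prior, exposure, and mission duration are illustrative assumptions, not ISS ECLS values.

```python
# Gamma-Poisson Bayesian update of a failure rate, then spares sizing under
# combined epistemic (rate) and aleatory (failure count) uncertainty.
import numpy as np

rng = np.random.default_rng(3)

alpha0, beta0 = 2.0, 4.0                      # assumed broad prior, mean 0.5 /yr
failures_observed, exposure_years = 3, 12.0   # assumed operating experience

alpha_post = alpha0 + failures_observed       # conjugate update
beta_post = beta0 + exposure_years

mission_years = 2.5
lam = rng.gamma(alpha_post, 1.0 / beta_post, size=100_000)   # epistemic draws of rate
failures = rng.poisson(lam * mission_years)                  # aleatory draws of demand

print("posterior mean rate: %.3f /yr" % (alpha_post / beta_post))
print("spares for 95% coverage:", int(np.quantile(failures, 0.95)))
print("spares if epistemic uncertainty ignored (rate fixed at mean):",
      int(np.quantile(rng.poisson((alpha_post / beta_post) * mission_years, 100_000), 0.95)))
```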
ISMIP6 - initMIP: Greenland ice sheet model initialisation experiments
NASA Astrophysics Data System (ADS)
Goelzer, Heiko; Nowicki, Sophie; Payne, Tony; Larour, Eric; Abe Ouchi, Ayako; Gregory, Jonathan; Lipscomb, William; Seroussi, Helene; Shepherd, Andrew; Edwards, Tamsin
2016-04-01
Earlier large-scale Greenland ice sheet sea-level projections, e.g. those run during the ice2sea and SeaRISE initiatives, have shown that ice sheet initialisation can have a large effect on the projections and gives rise to important uncertainties. This intercomparison exercise (initMIP) aims at comparing, evaluating and improving the initialisation techniques used in the ice sheet modelling community and at estimating the associated uncertainties. It is the first in a series of ice sheet model intercomparison activities within ISMIP6 (Ice Sheet Model Intercomparison Project for CMIP6). The experiments are conceived for the large-scale Greenland ice sheet and are designed to allow intercomparison between participating models of 1) the initial present-day state of the ice sheet and 2) the response in two schematic forward experiments. The latter experiments serve to evaluate the initialisation in terms of model drift (forward run without any forcing) and response to a large perturbation (prescribed surface mass balance anomaly). We present and discuss first results of the intercomparison and highlight important uncertainties with respect to projections of the Greenland ice sheet sea-level contribution.
NASA Astrophysics Data System (ADS)
Amoroso, Richard L.; Vigier, Jean-Pierre
2013-09-01
In this work we extend Vigier's recent theory of `tight bound state' (TBS) physics and propose empirical protocols to test not only for their putative existence, but also to show that their existence, if demonstrated, provides the first empirical evidence of string theory, because it occurs in the context of large-scale extra dimensionality (LSXD) cast in a unique M-theoretic vacuum corresponding to the new Holographic Anthropic Multiverse (HAM) cosmological paradigm. Physicists generally consider spacetime as a stochastic foam containing a zero-point field (ZPF) from which virtual particles, restricted by the quantum uncertainty principle (to the Planck time), wink in and out of existence. According to the extended de Broglie-Bohm-Vigier causal stochastic interpretation of quantum theory, spacetime and the matter embedded within it are created, annihilated and recreated as a virtual locus of reality with a continuous quantum evolution (de Broglie matter waves) governed by a pilot wave - a `super quantum potential' extended in HAM cosmology to be synonymous with a `force of coherence' inherent in the Unified Field, UF. We consider this backcloth to be a covariant polarized vacuum of the (generally ignored by contemporary physicists) Dirac type. We discuss open questions of the physics of point particles (fermionic nilpotent singularities). We propose a new set of experiments to test for TBS in a Dirac covariant polarized vacuum LSXD hyperspace, suggestive of a recently tested special case of the Lorentz transformation put forth by Kowalski and Vigier. These protocols reach far beyond the recent battery of atomic spectral violations of QED performed through NIST.
Topology optimization under stochastic stiffness
NASA Astrophysics Data System (ADS)
Asadpoure, Alireza
Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization which involve multiple formations and inversions of the global stiffness matrix and that results obtained from the proposed method are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.
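To make the perturbation idea concrete, the toy example below propagates an uncertain stiffness through a single response using a first-order (perturbation) expansion and compares it with Monte Carlo; it illustrates only the uncertainty-propagation ingredient, not the topology-optimization loop or the PCE approach, and all values are assumed.

```python
# First-order second-moment (perturbation) vs. Monte Carlo for u = F*L/(E*A)
# with uncertain Young's modulus E.
import numpy as np

F, L, A = 1.0e3, 2.0, 1.0e-4                 # load [N], length [m], area [m^2]
E_mean, E_std = 200.0e9, 20.0e9              # uncertain stiffness [Pa]

def disp(E):
    return F * L / (E * A)

# first-order expansion about the mean: u ~ u(E_mean) + du/dE * (E - E_mean)
dudE = -F * L / (E_mean ** 2 * A)
u_mean_fo = disp(E_mean)
u_std_fo = abs(dudE) * E_std

# Monte Carlo reference
rng = np.random.default_rng(4)
u_mc = disp(rng.normal(E_mean, E_std, size=200_000))
print(f"perturbation: mean={u_mean_fo:.3e} m, std={u_std_fo:.3e} m")
print(f"Monte Carlo : mean={u_mc.mean():.3e} m, std={u_mc.std():.3e} m")
```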
Uncertainty in tsunami sediment transport modeling
Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.
2016-01-01
Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.
Propagation of nuclear data uncertainties for fusion power measurements
NASA Astrophysics Data System (ADS)
Sjöstrand, Henrik; Conroy, Sean; Helgesson, Petter; Hernandez, Solis Augusto; Koning, Arjan; Pomp, Stephan; Rochman, Dimitri
2017-09-01
Neutron measurements using neutron activation systems are an essential part of the diagnostic system at large fusion machines such as JET and ITER. Nuclear data is used to infer the neutron yield. Consequently, high-quality nuclear data is essential for the proper determination of the neutron yield and fusion power. However, uncertainties due to nuclear data are not fully taken into account in uncertainty analysis for neutron yield calibrations using activation foils. This paper investigates the neutron yield uncertainty due to nuclear data using the so-called Total Monte Carlo Method. The work is performed using a detailed MCNP model of the JET fusion machine; the uncertainties due to the cross-sections and angular distributions in JET structural materials, as well as the activation cross-sections in the activation foils, are analysed. It is found that a significant contribution to the neutron yield uncertainty can come from uncertainties in the nuclear data.
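A strongly simplified sketch of the Total Monte Carlo idea referenced above: the same measured foil activity is converted to a neutron yield under many random realizations of the nuclear data, and the spread of the inferred yields quantifies the nuclear-data contribution. The two-factor model and the uncertainty levels are assumptions, not the JET MCNP analysis.

```python
# Total Monte Carlo in miniature: random nuclear-data realizations -> yield spread.
import numpy as np

rng = np.random.default_rng(5)
measured_activity = 1.0e6       # counts/s, held fixed (measurement errors excluded)

n_tmc = 10_000
sigma_act = rng.normal(1.0, 0.03, n_tmc)   # relative activation cross section, 3%
transport = rng.normal(1.0, 0.02, n_tmc)   # relative structural transport factor, 2%

yield_inferred = measured_activity / (sigma_act * transport)   # arbitrary units
rel_unc = yield_inferred.std() / yield_inferred.mean()
print(f"nuclear-data contribution to yield uncertainty: {100 * rel_unc:.1f}%")
```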
NASA Astrophysics Data System (ADS)
Li, Ziyi
2017-12-01
The generalized uncertainty principle (GUP), also known as the generalized uncertainty relationship, is the modified form of the classical Heisenberg uncertainty principle in special cases. When we apply quantum gravity theories such as string theory, the theoretical results suggest that there should be a "minimum length of observation", which is about the Planck scale (10^-35 m). Taking this basic scale of existence into account, we need to establish a new common form of the Heisenberg uncertainty principle in the thermodynamic system and make effective corrections to statistical physics questions concerning the quantum density of states. Especially at high temperatures and high energy levels, generalized uncertainty calculations have a disruptive impact on classical statistical physics theories, but the present theory of the femtosecond laser is still established on the classical Heisenberg uncertainty principle. In order to improve the detection accuracy and temporal resolution of the femtosecond laser, we applied the modified form of the generalized uncertainty principle to the wavelength, energy and pulse time of the femtosecond laser in our work. We designed three typical systems, from micro to macro size, to assess the feasibility of our theoretical model and method, respectively in a chemical solution, a crystal lattice and a nuclear fission reactor.
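For reference, one commonly quoted form of the GUP and the minimal length it implies can be written as below; the deformation parameter β is model-dependent, and this generic form may differ in detail from the modified relation applied in the work above.

```latex
% Generic GUP form and the implied minimal observable length.
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}\left[\,1 + \beta\,(\Delta p)^{2}\right],
\qquad
\Delta x_{\min} \;=\; \hbar\sqrt{\beta} \;\sim\; \ell_{\mathrm{Planck}} \approx 10^{-35}\ \mathrm{m}.
```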
Application of fuzzy system theory in addressing the presence of uncertainties
NASA Astrophysics Data System (ADS)
Yusmye, A. Y. N.; Goh, B. Y.; Adnan, N. F.; Ariffin, A. K.
2015-02-01
In this paper, combinations of fuzzy system theory with finite element methods are presented and discussed to deal with uncertainties. The presence of uncertainties needs to be addressed in order to prevent the failure of materials in engineering. There are three types of uncertainties: stochastic, epistemic and error uncertainties. In this paper, epistemic uncertainties have been considered. Epistemic uncertainty exists as a result of incomplete information and a lack of knowledge or data. Fuzzy system theory is a non-probabilistic method, and it is more appropriate than a statistical approach for interpreting uncertainty when dealing with a lack of data. Fuzzy system theory involves a number of steps, starting from converting crisp inputs to fuzzy inputs through a fuzzification process, followed by the main process, known as mapping. The term mapping here refers to the logical relationship between two or more entities. In this study, the fuzzy inputs are numerically integrated based on the extension principle method. In the final stage, the defuzzification process is implemented. Defuzzification is an important process that allows the conversion of fuzzy outputs to crisp outputs. Several illustrative examples are given, and the simulations show that the proposed method produces more conservative results compared with the conventional finite element method.
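The fuzzification / extension-principle / defuzzification workflow described above can be sketched for a monotone response as follows; the triangular fuzzy Young's modulus, the cantilever geometry, and the alpha-cut resolution are illustrative assumptions, and no finite element solve is performed.

```python
# Alpha-cut propagation of a triangular fuzzy input through a monotone response,
# followed by centroid defuzzification of the fuzzy output.
import numpy as np

P, L, I = 1.0e3, 1.0, 8.0e-6           # load [N], length [m], second moment [m^4]

def f(E):                               # cantilever tip deflection, monotone in E
    return P * L**3 / (3.0 * E * I)

E_lo, E_mid, E_hi = 180e9, 200e9, 220e9  # triangular fuzzy Young's modulus [Pa]

alphas = np.linspace(0.0, 1.0, 21)
y_lo, y_hi = [], []
for a in alphas:                        # alpha-cut interval of E, then of y
    e_min = E_lo + a * (E_mid - E_lo)
    e_max = E_hi - a * (E_hi - E_mid)
    ys = [f(e_min), f(e_max)]           # extension principle on a monotone f
    y_lo.append(min(ys)); y_hi.append(max(ys))

# centroid defuzzification of the fuzzy output (explicit trapezoid rule)
y_grid = np.concatenate([y_lo, y_hi[::-1]])
mu = np.concatenate([alphas, alphas[::-1]])
order = np.argsort(y_grid)
y_s, mu_s = y_grid[order], mu[order]
dy = np.diff(y_s)
area = np.sum(0.5 * (mu_s[1:] + mu_s[:-1]) * dy)
moment = np.sum(0.5 * (mu_s[1:] * y_s[1:] + mu_s[:-1] * y_s[:-1]) * dy)
print(f"crisp (defuzzified) deflection: {moment / area * 1e3:.3f} mm")
```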
NASA Astrophysics Data System (ADS)
Munoz-Jaramillo, Andres
2017-08-01
Data products in heliospheric physics are very often provided without clear estimates of uncertainty. From helioseismology in the solar interior, all the way to in situ solar wind measurements beyond 1 AU, uncertainty estimates are typically hard for users to find (buried inside long documents that are separate from the data products), or simply non-existent. There are two main reasons why uncertainty measurements are hard to find: (1) understanding instrumental systematic errors is given a much higher priority inside instrument teams; and (2) the desire to perfectly understand all sources of uncertainty postpones indefinitely the actual quantification of uncertainty in our measurements. Using the cross-calibration of 200 years of sunspot area measurements as a case study, in this presentation we will discuss the negative impact that inadequate measurements of uncertainty have on users, through the appearance of toxic and unnecessary controversies, and on data providers, through the creation of unrealistic expectations regarding the information that can be extracted from their data. We will discuss how empirical estimates of uncertainty represent a very good alternative to not providing any estimates at all, and finish by discussing the bare essentials that should become standard practice for future instruments and surveys.
Nielsen, Joseph; Tokuhiro, Akira; Hiromoto, Robert; ...
2015-11-13
Evaluation of the impacts of uncertainty and sensitivity in modeling presents a significant set of challenges in particular to high fidelity modeling. Computational costs and validation of models creates a need for cost effective decision making with regards to experiment design. Experiments designed to validate computation models can be used to reduce uncertainty in the physical model. In some cases, large uncertainty in a particular aspect of the model may or may not have a large impact on the final results. For example, modeling of a relief valve may result in large uncertainty, however, the actual effects on final peakmore » clad temperature in a reactor transient may be small and the large uncertainty with respect to valve modeling may be considered acceptable. Additionally, the ability to determine the adequacy of a model and the validation supporting it should be considered within a risk informed framework. Low fidelity modeling with large uncertainty may be considered adequate if the uncertainty is considered acceptable with respect to risk. In other words, models that are used to evaluate the probability of failure should be evaluated more rigorously with the intent of increasing safety margin. Probabilistic risk assessment (PRA) techniques have traditionally been used to identify accident conditions and transients. Traditional classical event tree methods utilize analysts’ knowledge and experience to identify the important timing of events in coordination with thermal-hydraulic modeling. These methods lack the capability to evaluate complex dynamic systems. In these systems, time and energy scales associated with transient events may vary as a function of transition times and energies to arrive at a different physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. Unfortunately DPRA methods introduce issues associated with combinatorial explosion of states. This study presents a methodology to address combinatorial explosion using a Branch-and-Bound algorithm applied to Dynamic Event Trees (DET), which utilize LENDIT (L – Length, E – Energy, N – Number, D – Distribution, I – Information, and T – Time) as well as a set theory to describe system, state, resource, and response (S2R2) sets to create bounding functions for the DET. The optimization of the DET in identifying high probability failure branches is extended to create a Phenomenological Identification and Ranking Table (PIRT) methodology to evaluate modeling parameters important to safety of those failure branches that have a high probability of failure. The PIRT can then be used as a tool to identify and evaluate the need for experimental validation of models that have the potential to reduce risk. Finally, in order to demonstrate this methodology, a Boiling Water Reactor (BWR) Station Blackout (SBO) case study is presented.« less
NASA Astrophysics Data System (ADS)
Amoroso, Richard L.
A brief introductory survey of Unified Field Mechanics (UFM) is given from the perspective of a Holographic Anthropic Multiverse cosmology in 12 `continuous-state' dimensions. The paradigm with many new parameters is cast in a scale-invariant conformal covariant Dirac polarized vacuum utilizing extended HD forms of the de Broglie-Bohm and Cramer interpretations of quantum theory. The model utilizes a unique form of M-Theory based in part on the original hadronic form of string theory that had a variable string tension, TS, and included a tachyon. The model is experimentally testable, thus putatively able to demonstrate the existence of large-scale additional dimensionality (LSXD), test for QED-violating tight-bound state spectral lines in hydrogen `below' the lowest Bohr orbit, and surmount the quantum uncertainty principle utilizing a hyperincursive Sagnac Effect resonance hierarchy.
CONSOL's perspective on CCT deployment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, F.P.; Statnick, R.M.
1997-12-31
The principal focus of government investment in Clean Coal Technology must be to serve the interests of the US energy consumer. Because of its security of supply and low cost, coal will continue to be the fuel of choice in the existing domestic electricity generating market. The ability of coal to compete for new generating capacity will depend largely on natural gas prices and the efficiency of coal- and gas-fired generating options. Furthermore, potential environmental regulations, coupled with utility deregulation, create a climate of economic uncertainty that may limit future investment decisions favorable to coal. Therefore, the federal government, through programs such as CCT, should promote the development of greenfield and retrofit coal use technology that improves generating efficiency and meets environmental requirements for the domestic electric market.
Preliminary evaluation of the role of K2S in MHD hot stream seed recovery
NASA Technical Reports Server (NTRS)
Bennett, J. E.; Kohl, F. J.
1979-01-01
Results are presented for recent analytical and experimental studies of the role of K2S in MHD hot stream seed recovery. The existing thermodynamic data base was found to contain large uncertainties and to be nonexistent for vapor phase K2S. Knudsen cell mass spectrometric experiments were undertaken to determine the vapor species in equilibrium with K2S(c). K atoms and S2 molecules were found to be the major vapor phase species in vacuum, accounting for greater than 99 percent of the vapor phase. Combustion gas deposition studies using No. 2 Diesel fuel were also undertaken and revealed that condensed phase K2SO3 may potentially be an important compound in the MHD stream at near-stoichiometric combustion.
Threshold concepts: implications for the management of natural resources
Guntenspergen, Glenn R.; Gross, John
2014-01-01
Threshold concepts can have broad relevance in natural resource management. However, the concept of ecological thresholds has not been widely incorporated or adopted in management goals. This largely stems from the uncertainty surrounding threshold levels and the post hoc analyses that have generally been used to identify them. Natural resource managers need new tools and approaches that will help them assess the existence and detection of conditions that demand management action. Additional threshold concepts include utility thresholds (which are based on human values about ecological systems) and decision thresholds (which reflect management objectives and values and include ecological knowledge about a system), alongside ecological thresholds. All of these concepts provide a framework for considering the use of threshold concepts in natural resource decision making.
Review of health and productivity gains from better IEQ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisk, William J.
2000-08-01
The available scientific data suggest that existing technologies and procedures can improve indoor environmental quality (IEQ) in a manner that significantly increases productivity and health. While there is considerable uncertainty in the estimates of the magnitudes of productivity gains that may be obtained, the projected gains are very large. For the U.S., the estimated potential annual savings and productivity gains are $6 to $14 billion from reduced respiratory disease, $2 to $4 billion from reduced allergies and asthma, $10 to $30 billion from reduced sick building syndrome symptoms, and $20 to $160 billion from direct improvements in worker performance that are unrelated to health. Productivity gains that are quantified and demonstrated could serve as a strong stimulus for energy efficiency measures that simultaneously improve the indoor environment.
Uncertainty quantification in flood risk assessment
NASA Astrophysics Data System (ADS)
Blöschl, Günter; Hall, Julia; Kiss, Andrea; Parajka, Juraj; Perdigão, Rui A. P.; Rogger, Magdalena; Salinas, José Luis; Viglione, Alberto
2017-04-01
Uncertainty is inherent to flood risk assessments because of the complexity of the human-water system, which is characterised by nonlinearities and interdependencies, because of limited knowledge about system properties and because of cognitive biases in human perception and decision-making. On top of the uncertainty associated with the assessment of the existing risk to extreme events, additional uncertainty arises because of temporal changes in the system due to climate change, modifications of the environment, population growth and the associated increase in assets. Novel risk assessment concepts are needed that take into account all these sources of uncertainty. They should be based on the understanding of how flood extremes are generated and how they change over time. They should also account for the dynamics of risk perception of decision makers and population in the floodplains. In this talk we discuss these novel risk assessment concepts through examples from Flood Frequency Hydrology, Socio-Hydrology and Predictions Under Change. We believe that uncertainty quantification in flood risk assessment should lead to a robust approach of integrated flood risk management aiming at enhancing resilience rather than searching for optimal defense strategies.
Uncertainty Analysis on Heat Transfer Correlations for RP-1 Fuel in Copper Tubing
NASA Technical Reports Server (NTRS)
Driscoll, E. A.; Landrum, D. B.
2004-01-01
NASA is studying kerosene (RP-1) for application in Next Generation Launch Technology (NGLT). Accurate heat transfer correlations in narrow passages at high temperatures and pressures are needed. Hydrocarbon fuels, such as RP-1, produce carbon deposition (coke) along the inside of tube walls when heated to high temperatures. A series of tests to measure the heat transfer using RP-1 fuel and examine the coking were performed in NASA Glenn Research Center's Heated Tube Facility. The facility models regenerative cooling by flowing room temperature RP-1 through resistively heated copper tubing. A regression analysis is performed on the data to determine the heat transfer correlation for Nusselt number as a function of Reynolds and Prandtl numbers. Each measurement and calculation is analyzed to identify sources of uncertainty, including RP-1 property variations. Monte Carlo simulation is used to determine how each uncertainty source propagates through the regression and to determine the overall uncertainty in the predicted heat transfer coefficient. The implications of these uncertainties on engine design and ways to minimize existing uncertainties are discussed.
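A minimal sketch of the regression-plus-Monte-Carlo workflow described above, using synthetic data; the power-law form Nu = a·Re^b·Pr^c, the query condition, and the uncertainty magnitudes are assumptions for illustration, not the facility's fitted correlation or measured error budget:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measurements": Nu = a * Re^b * Pr^c with noise (illustrative only)
Re = rng.uniform(1e4, 1e5, 200)
Pr = rng.uniform(5.0, 15.0, 200)
Nu_meas = 0.023 * Re**0.8 * Pr**0.4 * (1 + 0.05 * rng.standard_normal(200))

def fit_power_law(Re, Pr, Nu):
    # Linear least squares in log space: ln Nu = ln a + b ln Re + c ln Pr
    X = np.column_stack([np.ones_like(Re), np.log(Re), np.log(Pr)])
    coeffs, *_ = np.linalg.lstsq(X, np.log(Nu), rcond=None)
    return coeffs  # [ln a, b, c]

# Monte Carlo: perturb inputs by assumed measurement uncertainties and refit
Re_q, Pr_q = 5e4, 10.0                     # query condition for the predicted Nu (assumed)
preds = []
for _ in range(1000):
    Re_p = Re * (1 + 0.02 * rng.standard_normal(Re.size))        # 2% Re uncertainty (assumed)
    Pr_p = Pr * (1 + 0.03 * rng.standard_normal(Pr.size))        # 3% Pr uncertainty (assumed)
    Nu_p = Nu_meas * (1 + 0.05 * rng.standard_normal(Nu_meas.size))
    la, b, c = fit_power_law(Re_p, Pr_p, Nu_p)
    preds.append(np.exp(la) * Re_q**b * Pr_q**c)

preds = np.array(preds)
print(f"Predicted Nu at query point: {preds.mean():.1f} +/- {preds.std():.1f} (1-sigma)")
```

Each Monte Carlo draw perturbs the inputs within their assumed uncertainties and refits the correlation, so the spread of predictions reflects how the measurement uncertainties propagate through the regression into the predicted heat transfer coefficient.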
Cloud fraction at the ARM SGP site: reducing uncertainty with self-organizing maps
NASA Astrophysics Data System (ADS)
Kennedy, Aaron D.; Dong, Xiquan; Xi, Baike
2016-04-01
Instrument downtime leads to uncertainty in the monthly and annual record of cloud fraction (CF), making it difficult to perform time series analyses of cloud properties and perform detailed evaluations of model simulations. As cloud occurrence is partially controlled by the large-scale atmospheric environment, this knowledge is used to reduce uncertainties in the instrument record. Synoptic patterns diagnosed from the North American Regional Reanalysis (NARR) during the period 1997-2010 are classified using a competitive neural network known as the self-organizing map (SOM). The classified synoptic states are then compared to the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) instrument record to determine the expected CF. A number of SOMs are tested to understand how the number of classes and the period of classifications impact the relationship between classified states and CFs. Bootstrapping is utilized to quantify the uncertainty of the instrument record when statistical information from the SOM is included. Although all SOMs significantly reduce the uncertainty of the CF record calculated in Kennedy et al. (Theor Appl Climatol 115:91-105, 2014), SOMs with a large number of classes and separated by month are required to produce the lowest uncertainty and best agreement with the annual cycle of CF. This result may be due to a manifestation of seasonally dependent biases in NARR. With the use of the SOMs, the average uncertainty in monthly CF is reduced by half relative to the values calculated in Kennedy et al. (Theor Appl Climatol 115:91-105, 2014).
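The core idea, conditioning the gap-filling on the classified synoptic state and bootstrapping the result, can be sketched as follows; the synoptic classes are taken as given (a trained SOM, e.g. from the MiniSom package, would supply them in practice), and all numbers are synthetic rather than ARM SGP data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed inputs: a daily synoptic class label (e.g., from a trained SOM) and the
# observed cloud fraction, with gaps (NaN) where the instruments were down.
n_days = 365
classes = rng.integers(0, 12, n_days)                       # 12 synoptic classes (assumed)
cf_obs = np.clip(rng.normal(0.5 + 0.03 * classes, 0.15), 0, 1)
cf_obs[rng.random(n_days) < 0.2] = np.nan                   # 20% downtime (assumed)

def cf_bootstrap(classes, cf_obs, n_boot=1000):
    """Fill gaps by resampling observed CF within the same synoptic class,
    then bootstrap the record-mean CF to quantify its uncertainty."""
    means = []
    for _ in range(n_boot):
        filled = cf_obs.copy()
        for k in np.unique(classes):
            idx = classes == k
            pool = cf_obs[idx & ~np.isnan(cf_obs)]
            missing = idx & np.isnan(cf_obs)
            if pool.size and missing.any():
                filled[missing] = rng.choice(pool, missing.sum(), replace=True)
        means.append(np.nanmean(filled))
    means = np.array(means)
    return means.mean(), means.std()

mean_cf, sigma_cf = cf_bootstrap(classes, cf_obs)
print(f"Record-mean CF = {mean_cf:.3f} +/- {sigma_cf:.3f} (bootstrap 1-sigma)")
```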
Reusable launch vehicle model uncertainties impact analysis
NASA Astrophysics Data System (ADS)
Chen, Jiaye; Mu, Rongjun; Zhang, Xin; Deng, Yanpeng
2018-03-01
A reusable launch vehicle (RLV) typically combines a complex aerodynamic shape with propulsion system coupling, and its flight environment is highly complicated and intensely changeable. Its model therefore has large uncertainty, which makes the nominal system quite different from the real system. Studying how these uncertainties influence the stability of the control system is therefore of great significance for controller design. In order to improve the performance of the RLV, this paper proposes an approach for analyzing the influence of the model uncertainties. For a typical RLV, the coupled dynamic and kinematic models are built. The different factors that introduce uncertainties during model building are then analyzed and summarized. After that, the model uncertainties are expressed using an additive uncertainty model. The maximum singular value of the uncertainty matrix is chosen as the boundary model, and the norm of the uncertainty matrix is used to indicate how strongly the uncertainty factors affect the stability of the control system. The simulation results illustrate that the inertial factors have the largest influence on the stability of the system, and that it is necessary to take the model uncertainties into consideration before designing the controller of this kind of aircraft.
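A hedged sketch of the additive uncertainty representation described above, with illustrative 2x2 matrices standing in for the RLV model; the actual boundary model and stability analysis in the paper are far more elaborate:

```python
import numpy as np

# Nominal system matrix and a perturbed ("real") system matrix -- illustrative values only.
A_nom = np.array([[0.0, 1.0],
                  [-4.0, -0.8]])
A_real = np.array([[0.0, 1.0],
                   [-4.6, -1.1]])

# Additive uncertainty model: A_real = A_nom + Delta
Delta = A_real - A_nom

# Maximum singular value of the uncertainty matrix, used here as the boundary model,
# and its spectral norm as a simple measure of how strongly the uncertainty acts.
sigma_max = np.linalg.svd(Delta, compute_uv=False)[0]
print(f"max singular value of Delta: {sigma_max:.3f}")
print(f"spectral norm of Delta:      {np.linalg.norm(Delta, 2):.3f}")

# A simple continuous-time stability check of the nominal vs. perturbed dynamics
for name, A in [("nominal", A_nom), ("perturbed", A_real)]:
    stable = np.all(np.linalg.eigvals(A).real < 0)
    print(f"{name} system stable: {stable}")
```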
On the characteristics of aerosol indirect effect based on dynamic regimes in global climate models
Zhang, Shipeng; Wang, Minghuai; Ghan, Steven J.; ...
2016-03-04
Aerosol–cloud interactions continue to constitute a major source of uncertainty for the estimate of climate radiative forcing. The variation of aerosol indirect effects (AIE) in climate models is investigated across different dynamical regimes, determined by monthly mean 500 hPa vertical pressure velocity (ω500), lower-tropospheric stability (LTS) and large-scale surface precipitation rate derived from several global climate models (GCMs), with a focus on liquid water path (LWP) response to cloud condensation nuclei (CCN) concentrations. The LWP sensitivity to aerosol perturbation within dynamic regimes is found to exhibit a large spread among these GCMs. It is in regimes of strong large-scale ascent (ω500 < −25 hPa day−1) and low clouds (stratocumulus and trade wind cumulus) where the models differ most. Shortwave aerosol indirect forcing is also found to differ significantly among different regimes. Shortwave aerosol indirect forcing in ascending regimes is close to that in subsidence regimes, which indicates that regimes with strong large-scale ascent are as important as stratocumulus regimes in studying AIE. It is further shown that shortwave aerosol indirect forcing over regions with high monthly large-scale surface precipitation rate (>0.1 mm day−1) contributes the most to the total aerosol indirect forcing (from 64 to nearly 100 %). Results show that the uncertainty in AIE is even larger within specific dynamical regimes compared to the uncertainty in its global mean values, pointing to the need to reduce the uncertainty in AIE in different dynamical regimes.
The state of the art of flood forecasting - Hydrological Ensemble Prediction Systems
NASA Astrophysics Data System (ADS)
Thielen-Del Pozo, J.; Pappenberger, F.; Salamon, P.; Bogner, K.; Burek, P.; de Roo, A.
2010-09-01
Flood forecasting systems form a key part of 'preparedness' strategies for disastrous floods and provide hydrological services, civil protection authorities and the public with information on upcoming events. Provided the warning lead time is sufficiently long, adequate preparatory actions can be taken to efficiently reduce the impacts of the flooding. Because of the specific characteristics of each catchment, varying data availability and end-user demands, the design of the best flood forecasting system may differ from catchment to catchment. However, despite the differences in concept and data needs, there is one underlying issue that spans all systems. There has been a growing awareness and acceptance that uncertainty is a fundamental issue of flood forecasting and needs to be dealt with at the different spatial and temporal scales as well as the different stages of the flood generating processes. Today, operational flood forecasting centres are increasingly changing from single deterministic forecasts to probabilistic forecasts with various representations of the different contributions of uncertainty. The move towards these so-called Hydrological Ensemble Prediction Systems (HEPS) in flood forecasting represents the state of the art in forecasting science, following on the success of the use of ensembles for weather forecasting (Buizza et al., 2005) and paralleling the move towards ensemble forecasting in other related disciplines such as climate change predictions. The use of HEPS has been internationally fostered by initiatives such as "The Hydrologic Ensemble Prediction Experiment" (HEPEX), created with the aim of investigating how best to produce, communicate and use hydrologic ensemble forecasts in short-, medium- and long-term prediction of hydrological processes. The advantages of quantifying the different contributions of uncertainty, as well as the overall uncertainty, to obtain reliable and useful flood forecasts even for extreme events have become evident. However, despite the demonstrated advantages, the incorporation of HEPS in operational flood forecasting worldwide is still limited. The applicability of HEPS for smaller river basins was tested in MAP D-Phase, an acronym for "Demonstration of Probabilistic Hydrological and Atmospheric Simulation of flood Events in the Alpine region", which was launched in 2005 as a Forecast Demonstration Project of the World Weather Research Programme of WMO and entered a pre-operational, still active testing phase in 2007. In Europe, a comparatively high number of EPS-driven systems for medium to large rivers exist. The national flood forecasting centres of Sweden, Finland and the Netherlands have already implemented HEPS in their operational forecasting chains, while in other countries, including France, Germany, the Czech Republic and Hungary, hybrid or experimental chains have been installed. As an example of HEPS, the European Flood Alert System (EFAS) is presented. EFAS provides medium-range probabilistic flood forecasting information for large trans-national river basins. It incorporates multiple sets of weather forecasts, including different types of EPS and deterministic forecasts from different providers. EFAS products are evaluated and visualised as exceedances of critical levels only, both in the form of maps and time series.
Different sources of uncertainty and their impact on the flood forecasting performance for every grid cell have been tested offline but not yet incorporated operationally into the forecasting chain for computational reasons. However, at stations where real-time discharges are available, a hydrological uncertainty processor is being applied to estimate the total predictive uncertainty from the hydrological and input uncertainties. Research on long-term EFAS results has shown the need for complementing statistical analysis with case studies, for which examples will be shown.
Satellite Re-entry Modeling and Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Horsley, M.
2012-09-01
LEO trajectory modeling is a fundamental aerospace capability and has applications in many areas of aerospace, such as maneuver planning, sensor scheduling, re-entry prediction, collision avoidance, risk analysis, and formation flying. Somewhat surprisingly, modeling the trajectory of an object in low Earth orbit is still a challenging task. This is primarily due to the large uncertainty in the upper atmospheric density, about 15-20% (1-sigma) for most thermosphere models. Other contributions come from our inability to precisely model future solar and geomagnetic activities, the potentially unknown shape, material construction and attitude history of the satellite, and intermittent, noisy tracking data. Current methods to predict a satellite's re-entry trajectory typically involve making a single prediction, with the uncertainty dealt with in an ad-hoc manner, usually based on past experience. However, due to the extreme speed of a LEO satellite, even small uncertainties in the re-entry time translate into a very large uncertainty in the location of the re-entry event. Currently, most methods simply update the re-entry estimate on a regular basis. This results in a wide range of estimates that are literally spread over the entire globe. With no understanding of the underlying distribution of potential impact points, the sequence of impact points predicted by the current methodology is largely useless until just a few hours before re-entry. This paper will discuss the development of a set of High Performance Computing (HPC)-based capabilities to support near real-time quantification of the uncertainty inherent in uncontrolled satellite re-entries. An appropriate management of the uncertainties is essential for a rigorous treatment of the re-entry/LEO trajectory problem. The development of HPC-based tools for re-entry analysis is important as it will allow a rigorous and robust approach to risk assessment by decision makers in an operational setting. Uncertainty quantification results from the recent uncontrolled re-entry of the Phobos-Grunt satellite will be presented and discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Métadier, M; Bertrand-Krajewski, J-L
2011-01-01
With the increasing implementation of continuous monitoring of both discharge and water quality in sewer systems, large data bases are now available. In order to manage large amounts of data and calculate various variables and indicators of interest it is necessary to apply automated methods for data processing. This paper deals with the processing of short time step turbidity time series to estimate TSS (Total Suspended Solids) and COD (Chemical Oxygen Demand) event loads in sewer systems during storm events and their associated uncertainties. The following steps are described: (i) sensor calibration, (ii) estimation of data uncertainties, (iii) correction of raw data, (iv) data pre-validation tests, (v) final validation, and (vi) calculation of TSS and COD event loads and estimation of their uncertainties. These steps have been implemented in an integrated software tool. Examples of results are given for a set of 33 storm events monitored in a stormwater separate sewer system.
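A minimal sketch of step (vi), the event-load calculation with Monte Carlo uncertainty propagation; the linear turbidity-to-TSS calibration, the time step, and all uncertainty magnitudes are assumptions for illustration, not the calibration or error model used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 2-minute time step series for one storm event (illustrative only)
dt = 120.0                                    # s
turbidity = rng.uniform(50, 400, 180)         # NTU
discharge = rng.uniform(0.05, 0.8, 180)       # m3/s

# Assumed linear sensor calibration: TSS [mg/L] = a * turbidity + b
a, b = 1.1, 5.0
sigma_a, sigma_b = 0.05, 2.0                  # calibration uncertainties (assumed)
sigma_turb_rel, sigma_q_rel = 0.05, 0.10      # relative measurement uncertainties (assumed)

def event_load_kg(turb, q, a, b):
    tss = a * turb + b                         # mg/L is equivalent to g/m3
    return np.sum(tss * q * dt) / 1000.0       # kg per event

loads = []
for _ in range(2000):
    a_s = rng.normal(a, sigma_a)
    b_s = rng.normal(b, sigma_b)
    turb_s = turbidity * (1 + sigma_turb_rel * rng.standard_normal(turbidity.size))
    q_s = discharge * (1 + sigma_q_rel * rng.standard_normal(discharge.size))
    loads.append(event_load_kg(turb_s, q_s, a_s, b_s))

loads = np.array(loads)
print(f"TSS event load: {loads.mean():.1f} kg +/- {loads.std():.1f} kg (1-sigma)")
```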
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.
2011-03-01
This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are needed for repository modeling are severely lacking. In addition, most of the existing reactive transport codes were developed for non-radioactive contaminants, and they need to be adapted to account for radionuclide decay and in-growth. The accessibility to the source codes is generally limited. Because the problems of interest for the Waste IPSC are likely to result in relatively large computational models, a compact memory-usage footprint and a fast/robust solution procedure will be needed. A robust massively parallel processing (MPP) capability will also be required to provide reasonable turnaround times on the analyses that will be performed with the code. A performance assessment (PA) calculation for a waste disposal system generally requires a large number (hundreds to thousands) of model simulations to quantify the effect of model parameter uncertainties on the predicted repository performance. A set of codes for a PA calculation must be sufficiently robust and fast in terms of code execution. A PA system as a whole must be able to provide multiple alternative models for a specific set of physical/chemical processes, so that the users can choose various levels of modeling complexity based on their modeling needs. This requires PA codes, preferably, to be highly modularized. Most of the existing codes have difficulties meeting these requirements.
Based on the gap analysis results, we have made the following recommendations for the code selection and code development for the NEAMS waste IPSC: (1) build fully coupled high-fidelity THCMBR codes using the existing SIERRA codes (e.g., ARIA and ADAGIO) and platform, (2) use DAKOTA to build an enhanced performance assessment system (EPAS), and build a modular code architecture and key code modules for performance assessments. The key chemical calculation modules will be built by expanding the existing CANTERA capabilities as well as by extracting useful components from other existing codes.
Six Degree-of-Freedom Entry Dispersion Analysis for the METEOR Recovery Module
NASA Technical Reports Server (NTRS)
Desai, Prasun N.; Braun, Robert D.; Powell, Richard W.; Engelund, Walter C.; Tartabini, Paul V.
1996-01-01
The present study performs a six degree-of-freedom entry dispersion analysis for the Multiple Experiment Transporter to Earth Orbit and Return (METEOR) mission. METEOR offered the capability of flying a recoverable science package in a microgravity environment. However, since the Recovery Module has no active control system, an accurate determination of the splashdown position is difficult because no opportunity exists to remove any errors. Hence, uncertainties in the initial conditions prior to deorbit burn initiation, during deorbit burn and exo-atmospheric coast phases, and during atmospheric flight impact the splashdown location. This investigation was undertaken to quantify the impact of the various exo-atmospheric and atmospheric uncertainties. Additionally, a Monte-Carlo analysis was performed to statistically assess the splashdown dispersion footprint caused by the multiple mission uncertainties. The Monte-Carlo analysis showed that a 3-sigma splashdown dispersion footprint with axes of 43.3 nm (long), -33.5 nm (short), and 10.0 nm (crossrange) can be constructed. A 58% probability exists that the Recovery Module will overshoot the nominal splashdown site.
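A hedged sketch of how 3-sigma footprint axes can be extracted from Monte Carlo miss-distance samples; the sample distribution below is synthetic and stands in for, rather than reproduces, the METEOR six degree-of-freedom simulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic Monte Carlo splashdown miss distances relative to nominal, in nautical miles
# (skewed downrange, tighter crossrange -- purely illustrative)
downrange = rng.gamma(shape=2.0, scale=6.0, size=5000) - 10.0
crossrange = rng.normal(0.0, 3.3, size=5000)

# 3-sigma axes: asymmetric long/short downrange quantiles, symmetric crossrange
long_axis = np.percentile(downrange, 99.865)      # ~ +3-sigma quantile
short_axis = np.percentile(downrange, 0.135)      # ~ -3-sigma quantile
cross_axis = 3.0 * crossrange.std()

overshoot_prob = (downrange > 0).mean()
print(f"3-sigma footprint: {long_axis:.1f} nm (long), {short_axis:.1f} nm (short), "
      f"{cross_axis:.1f} nm (crossrange)")
print(f"Probability of overshooting the nominal site: {overshoot_prob:.0%}")
```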
Research on green supply chain coordination strategy for uncertain market demand.
Cao, Jian; Chen, Yangyang; Lu, Bo; Tong, Chenlu; Zhou, Gengui
2015-03-01
Motivated by the emerging green market in Mainland China (e.g. the pharmaceutical industry), the paper discusses how members of the green supply chain (GSC) can cooperate effectively in supply chain operations. To address the uncertainties in the market demand for green products, a GSC coordination strategy is put forward based on a Stackelberg game in which the manufacturer is the leader and the distributors are the followers. The relationship between the proposed coordination strategy and several factors, including the number of distributors, the distributors' risk aversion and the uncertainties of market demand, is analyzed. The analysis indicates that, when uncertainties exist in the market demand for the green product, the revenue of each enterprise, the overall revenue and the customers' welfare all decrease, while a larger number of distributors and lower risk aversion among them are beneficial to the entire GSC and to the customer. The conclusions provide useful guidance for the operational decisions of the green supply chain while the green market is in its initial formation.
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.; Scheeres, Daniel J.
2018-06-01
The observation-to-observation measurement association problem for dynamical systems can be addressed by determining if the uncertain admissible regions produced from each observation have one or more points of intersection in state space. An observation association method is developed which uses an optimization-based approach to identify local Mahalanobis distance minima in state space between two uncertain admissible regions. A binary hypothesis test with a selected false alarm rate is used to assess the probability that an intersection exists at the point(s) of minimum distance. The systemic uncertainties, such as measurement uncertainties, timing errors, and other parameter errors, define a distribution about a state estimate located at the local Mahalanobis distance minima. If local minima do not exist, then the observations are not associated. The proposed method utilizes an optimization approach defined on a reduced-dimension state space to reduce the computational load of the algorithm. The efficacy and efficiency of the proposed method are demonstrated on observation data collected from the Georgia Tech Space Object Research Telescope.
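A rough sketch of the association test in a two-dimensional reduced state space: a candidate state minimizing the combined Mahalanobis distance to both estimates is found with scipy, and the minimum is gated against a chi-square threshold as a stand-in for the paper's binary hypothesis test. The states, covariances, summed-distance cost, and choice of degrees of freedom are illustrative assumptions, not the paper's formulation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

# Illustrative 2-D reduced states (e.g., range and range-rate) and covariances
x1 = np.array([7000.0, 1.5]);  P1 = np.diag([50.0**2, 0.2**2])
x2 = np.array([7080.0, 1.1]);  P2 = np.diag([60.0**2, 0.3**2])
P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)

def cost(z):
    """Sum of squared Mahalanobis distances from a candidate state z to both estimates."""
    d1, d2 = z - x1, z - x2
    return d1 @ P1_inv @ d1 + d2 @ P2_inv @ d2

res = minimize(cost, x0=0.5 * (x1 + x2))
d2_min = res.fun

# Gate the minimum distance against a chi-square threshold (false-alarm rate alpha, assumed)
alpha = 0.01
threshold = chi2.ppf(1 - alpha, df=x1.size)
associated = d2_min < threshold
print(f"min combined Mahalanobis distance^2 = {d2_min:.2f}, "
      f"threshold = {threshold:.2f}, associated = {associated}")
```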
Power oscillation suppression by robust SMES in power system with large wind power penetration
NASA Astrophysics Data System (ADS)
Ngamroo, Issarachai; Cuk Supriyadi, A. N.; Dechanupaprittha, Sanchai; Mitani, Yasunori
2009-01-01
The large penetration of wind farm into interconnected power systems may cause the severe problem of tie-line power oscillations. To suppress power oscillations, the superconducting magnetic energy storage (SMES) which is able to control active and reactive powers simultaneously, can be applied. On the other hand, several generating and loading conditions, variation of system parameters, etc., cause uncertainties in the system. The SMES controller designed without considering system uncertainties may fail to suppress power oscillations. To enhance the robustness of SMES controller against system uncertainties, this paper proposes a robust control design of SMES by taking system uncertainties into account. The inverse additive perturbation is applied to represent the unstructured system uncertainties and included in power system modeling. The configuration of active and reactive power controllers is the first-order lead-lag compensator with single input feedback. To tune the controller parameters, the optimization problem is formulated based on the enhancement of robust stability margin. The particle swarm optimization is used to solve the problem and achieve the controller parameters. Simulation studies in the six-area interconnected power system with wind farms confirm the robustness of the proposed SMES under various operating conditions.
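A compact sketch of tuning a first-order lead-lag compensator with particle swarm optimization; the single lightly damped oscillatory mode and the damping-ratio objective below are stand-ins for the six-area system and the robust-stability-margin objective used in the paper, and the parameter bounds are assumed:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative lightly damped inter-area mode: G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)
wn, zeta = 2 * np.pi * 0.5, 0.05

def min_closed_loop_damping(params):
    """Minimum damping ratio of the closed-loop poles when a first-order lead-lag
    compensator C(s) = K (1 + T1 s) / (1 + T2 s) closes the loop around G(s)."""
    K, T1, T2 = params
    plant = np.array([1.0, 2 * zeta * wn, wn**2])              # s^2 + 2 zeta wn s + wn^2
    char = np.polyadd(np.polymul(plant, [T2, 1.0]),            # plant * lag denominator
                      K * wn**2 * np.array([T1, 1.0]))         # + K wn^2 (T1 s + 1)
    poles = np.roots(char)
    return min(-p.real / abs(p) for p in poles)

def pso(objective, bounds, n_particles=20, n_iter=60):
    """Plain particle swarm optimization with inertia and cognitive/social terms."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    p_best, p_val = x.copy(), np.array([objective(p) for p in x])
    g_best = p_best[p_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([objective(p) for p in x])
        better = val < p_val
        p_best[better], p_val[better] = x[better], val[better]
        g_best = p_best[p_val.argmin()].copy()
    return g_best, p_val.min()

bounds = np.array([[0.1, 20.0], [0.01, 2.0], [0.01, 2.0]])     # K, T1, T2 ranges (assumed)
best, best_cost = pso(lambda p: -min_closed_loop_damping(p), bounds)
print(f"K={best[0]:.2f}, T1={best[1]:.3f} s, T2={best[2]:.3f} s, "
      f"min closed-loop damping = {-best_cost:.3f}")
```

The swarm maximizes the minimum closed-loop damping ratio here simply because it is easy to compute for a single mode; in the paper the objective is a robust stability margin built on the inverse additive perturbation representation.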
Quantifying Groundwater Model Uncertainty
NASA Astrophysics Data System (ADS)
Hill, M. C.; Poeter, E.; Foglia, L.
2007-12-01
Groundwater models are characterized by the (a) processes simulated, (b) boundary conditions, (c) initial conditions, (d) method of solving the equation, (e) parameterization, and (f) parameter values. Models are related to the system of concern using data, some of which form the basis of observations used most directly, through objective functions, to estimate parameter values. Here we consider situations in which parameter values are determined by minimizing an objective function. Other methods of model development are not considered because their ad hoc nature generally prohibits clear quantification of uncertainty. Quantifying prediction uncertainty ideally includes contributions from (a) to (f). The parameter values of (f) tend to be continuous with respect to both the simulated equivalents of the observations and the predictions, while many aspects of (a) through (e) are discrete. This fundamental difference means that there are options for evaluating the uncertainty related to parameter values that generally do not exist for other aspects of a model. While the methods available for (a) to (e) can be used for the parameter values (f), the inferential methods uniquely available for (f) generally are less computationally intensive and often can be used to considerable advantage. However, inferential approaches require calculation of sensitivities. Whether the numerical accuracy and stability of the model solution required for accurate sensitivities is more broadly important to other model uses is an issue that needs to be addressed. Alternative global methods can require 100 or even 1,000 times the number of runs needed by inferential methods, though methods of reducing the number of needed runs are being developed and tested. Here we present three approaches for quantifying model uncertainty and investigate their strengths and weaknesses. (1) Represent more aspects as parameters so that the computationally efficient methods can be broadly applied. This approach is attainable through universal model analysis software such as UCODE-2005, PEST, and joint use of these programs, which allow many aspects of a model to be defined as parameters. (2) Use highly parameterized models to quantify aspects of (e). While promising, this approach implicitly includes parameterizations that may be considered unreasonable if investigated explicitly, so that resulting measures of uncertainty may be too large. (3) Use a combination of inferential and global methods that can be facilitated using the new software MMA (Multi-Model Analysis), which is constructed using the JUPITER API. Here we consider issues related to the model discrimination criteria calculated by MMA.
Assessing climate change impacts on water resources in remote mountain regions
NASA Astrophysics Data System (ADS)
Buytaert, Wouter; De Bièvre, Bert
2013-04-01
From a water resources perspective, remote mountain regions are often considered a basket case. They are regions where poverty is often interlocked with multiple threats to water supply, data scarcity, and high uncertainties. In these environments, it is paramount to generate locally relevant knowledge about water resources and how they impact local livelihoods. This is often problematic. Existing environmental data collection tends to be geographically biased towards more densely populated regions, and prioritized towards strategic economic activities. Data may also be locked behind institutional and technological barriers. These issues create a "knowledge trap" for data-poor regions, which is especially acute in remote and hard-to-reach mountain regions. We present lessons learned from a decade of water resources research in remote mountain regions of the Andes, Africa and South Asia. We review the entire tool chain of assessing climate change impacts on water resources, including the interrogation and downscaling of global circulation models, translating climate variables into water availability and access, and assessing local vulnerability. In global circulation models, mountain regions often stand out as regions of high uncertainties and lack of agreement on future trends. This is partly a technical artifact because of the different resolution and representation of mountain topography, but it also highlights fundamental uncertainties in climate impacts on mountain climate. This problem also affects downscaling efforts, because regional climate models should be run in very high spatial resolution to resolve local gradients, which is computationally very expensive. At the same time, statistical downscaling methods may fail to find significant relations between local climate properties and synoptic processes. Further uncertainties are introduced when downscaled climate variables such as precipitation and temperature are to be translated into hydrologically relevant variables such as streamflow and groundwater recharge. Fundamental limitations in both the understanding of hydrological processes in mountain regions (e.g., glacier melt, wetland attenuation, groundwater flows) and in data availability introduce large uncertainties. Lastly, assessing access to water resources is a major challenge. Topographical gradients and barriers, as well as strong spatiotemporal variations in hydrological processes, make it particularly difficult to assess which parts of the mountain population are most vulnerable to future perturbations of the water cycle.
Allowable Trajectory Variations for Space Shuttle Orbiter Entry-Aeroheating CFD
NASA Technical Reports Server (NTRS)
Wood, William A.; Alter, Stephen J.
2008-01-01
Reynolds-number criteria are developed for acceptable variations in Space Shuttle Orbiter entry trajectories for use in computational aeroheating analyses. The criteria determine if an existing computational fluid dynamics solution for a particular trajectory can be extrapolated to a different trajectory. The criteria development begins by estimating uncertainties for seventeen types of computational aeroheating data, such as boundary layer thickness, at exact trajectory conditions. For each type of datum, the allowable uncertainty contribution due to trajectory variation is set to be half of the value of the estimated exact-trajectory uncertainty. Then, for the twelve highest-priority datum types, Reynolds-number relations between trajectory variation and output uncertainty are determined. From these relations the criteria are established for the maximum allowable trajectory variations. The most restrictive criterion allows a 25% variation in Reynolds number at constant Mach number between trajectories.
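The most restrictive criterion reduces to a simple check (the constant-Mach condition is assumed to be verified separately; the example Reynolds numbers are arbitrary):

```python
def within_extrapolation_criterion(re_existing, re_new, max_fraction=0.25):
    """Most restrictive criterion from the paper: extrapolating an existing CFD
    solution is allowed only if Reynolds number differs by <= 25% at constant Mach."""
    return abs(re_new - re_existing) / re_existing <= max_fraction

print(within_extrapolation_criterion(2.0e6, 2.6e6))   # 30% change -> False
print(within_extrapolation_criterion(2.0e6, 2.3e6))   # 15% change -> True
```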
Effects of aerodynamic heating and TPS thermal performance uncertainties on the Shuttle Orbiter
NASA Technical Reports Server (NTRS)
Goodrich, W. D.; Derry, S. M.; Maraia, R. J.
1980-01-01
A procedure for estimating uncertainties in the aerodynamic-heating and thermal protection system (TPS) thermal-performance methodologies developed for the Shuttle Orbiter is presented. This procedure is used in predicting uncertainty bands around expected or nominal TPS thermal responses for the Orbiter during entry. Individual flowfield and TPS parameters that make major contributions to these uncertainty bands are identified and, by statistical considerations, combined in a manner suitable for making engineering estimates of the TPS thermal confidence intervals and temperature margins relative to design limits. Thus, for a fixed TPS design, entry trajectories for future Orbiter missions can be shaped subject to both the thermal-margin and confidence-interval requirements. This procedure is illustrated by assessing the thermal margins offered by selected areas of the existing Orbiter TPS design for an entry trajectory typifying early flight test missions.
NASA Astrophysics Data System (ADS)
Justino, Júlia
2017-06-01
Matrices with coefficients having uncertainties of type o(.) or O(.), called flexible matrices, are studied from the point of view of nonstandard analysis. The uncertainties of the afore-mentioned kind are given in the form of the so-called neutrices, for instance the set of all infinitesimals. Since flexible matrices have uncertainties in their coefficients, it is not possible to define the identity matrix in a unique way, and so the notion of a spectral identity matrix arises. Not all nonsingular flexible matrices can be turned into a spectral identity matrix using the Gauss-Jordan elimination method, implying that not all nonsingular flexible matrices have an inverse matrix. Under certain conditions on the size of the uncertainties appearing in a nonsingular flexible matrix, a general theorem concerning the boundaries of its minors is presented which guarantees the existence of the inverse matrix of a nonsingular flexible matrix.
Active learning for clinical text classification: is it better than random sampling?
Figueroa, Rosa L; Zeng-Treitler, Qing; Ngo, Long H; Goryachev, Sergey; Wiechmann, Eduardo P
2012-01-01
This study explores active learning algorithms as a way to reduce the requirements for large training sets in medical text classification tasks. Three existing active learning algorithms (distance-based (DIST), diversity-based (DIV), and a combination of both (CMB)) were used to classify text from five datasets. The performance of these algorithms was compared to that of passive learning on the five datasets. We then conducted a novel investigation of the interaction between dataset characteristics and the performance results. Classification accuracy and area under receiver operating characteristics (ROC) curves for each algorithm at different sample sizes were generated. The performance of active learning algorithms was compared with that of passive learning using a weighted mean of paired differences. To determine why the performance varies on different datasets, we measured the diversity and uncertainty of each dataset using relative entropy and correlated the results with the performance differences. The DIST and CMB algorithms performed better than passive learning. With a statistical significance level set at 0.05, DIST outperformed passive learning in all five datasets, while CMB was found to be better than passive learning in four datasets. We found strong correlations between the dataset diversity and the DIV performance, as well as the dataset uncertainty and the performance of the DIST algorithm. For medical text classification, appropriate active learning algorithms can yield performance comparable to that of passive learning with considerably smaller training sets. In particular, our results suggest that DIV performs better on data with higher diversity and DIST on data with lower uncertainty.
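A minimal sketch of a distance-based (DIST-style) query strategy on synthetic data, selecting the unlabeled samples closest to an SVM decision boundary; this illustrates the general idea only, not the authors' implementation, feature representation, or clinical datasets:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)

# Synthetic stand-in for vectorized clinical text (illustrative only)
X, y = make_classification(n_samples=2000, n_features=50, n_informative=10, random_state=0)
labeled = list(rng.choice(len(X), 20, replace=False))            # small seed set
unlabeled = [i for i in range(len(X)) if i not in labeled]

clf = LinearSVC(dual=False)
for _ in range(10):                                               # 10 query rounds
    clf.fit(X[labeled], y[labeled])
    # Distance-based selection: query the unlabeled samples closest to the decision boundary
    margins = np.abs(clf.decision_function(X[unlabeled]))
    query = [unlabeled[i] for i in np.argsort(margins)[:10]]      # 10 queries per round
    labeled.extend(query)
    unlabeled = [i for i in unlabeled if i not in query]

print(f"training-set size: {len(labeled)}, "
      f"accuracy on remaining pool: {clf.score(X[unlabeled], y[unlabeled]):.3f}")
```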
Direct Imaging of a Cold Jovian Exoplanet in Orbit around the Sun-Like Star GJ 504
NASA Technical Reports Server (NTRS)
Kuzuhara, M.; Tamura, M.; Kudo, T.; Janson, M; Kandori, R.; Brandt, T. D.; Thalmann, C.; Spiegel, D.; Biller, B.; Carson, J.;
2013-01-01
Several exoplanets have recently been imaged at wide separations of >10 AU from their parent stars. These span a limited range of ages (<50 Myr) and atmospheric properties, with temperatures of 800-1800 K and very red colors (J - H > 0.5 mag), implying thick cloud covers. Furthermore, substantial model uncertainties exist at these young ages due to the unknown initial conditions at formation, which can lead to an order of magnitude of uncertainty in the modeled planet mass. Here, we report the direct imaging discovery of a Jovian exoplanet around the Sun-like star GJ 504, detected as part of the SEEDS survey. The system is older than all other known directly-imaged planets; as a result, its estimated mass remains in the planetary regime independent of uncertainties related to choices of initial conditions in the exoplanet modeling. Using the most common exoplanet cooling model, and given the system age of 160(+350/-60) Myr, GJ 504 b has an estimated mass of 4(+4.5/-1.0) Jupiter masses, among the lowest of directly imaged planets. Its projected separation of 43.5 AU exceeds the typical outer boundary of approx. 30 AU predicted for the core accretion mechanism. GJ 504 b is also significantly cooler (510(+30/-20) K) and has a bluer color (J - H = -0.23 mag) than previously imaged exoplanets, suggesting a largely cloud-free atmosphere accessible to spectroscopic characterization. Thus, it has the potential of providing novel insights into the origins of giant planets, as well as their atmospheric properties.
NASA Astrophysics Data System (ADS)
Xu, Rongting; Tian, Hanqin; Lu, Chaoqun; Pan, Shufen; Chen, Jian; Yang, Jia; Zhang, Bowen
2017-07-01
Accurately assessing how increased global nitrous oxide (N2O) emission has affected the climate system requires a robust estimation of the preindustrial N2O emissions, since only the difference between current and preindustrial emissions represents net drivers of anthropogenic climate change. However, large uncertainty exists in previous estimates of preindustrial N2O emissions from the land biosphere, while preindustrial N2O emissions on finer scales, such as regional, biome, or sector scales, have not been well quantified yet. In this study, we applied a process-based Dynamic Land Ecosystem Model (DLEM) to estimate the magnitude and spatial patterns of preindustrial N2O fluxes at the biome, continental, and global level as driven by multiple environmental factors. Uncertainties associated with key parameters were also evaluated. Our study indicates that the mean of the preindustrial N2O emission was approximately 6.20 Tg N yr-1, with an uncertainty range of 4.76 to 8.13 Tg N yr-1. The estimated N2O emission varied significantly at spatial and biome levels. South America, Africa, and Southern Asia accounted for 34.12, 23.85, and 18.93 %, respectively, together contributing 76.90 % of the global total emission. The tropics were identified as the major source of N2O released into the atmosphere, accounting for 64.66 % of the total emission. Our multi-scale estimates provide a robust reference for assessing the climate forcing of anthropogenic N2O emission from the land biosphere.
NASA Astrophysics Data System (ADS)
Melious, J. O.
2012-12-01
In the northwestern corner of Washington state, a large landslide on Sumas Mountain deposits more than 100,000 cubic yards of soil containing asbestos fibers and heavy metals into Swift Creek every year. Engineers predict that asbestos-laden soils will slide into Swift Creek for at least the next 400 years. Swift Creek joins the Sumas River, which crosses the border into Canada, serving as an international delivery system for asbestos-laden soils. When the rivers flood, as happens regularly, they deliver asbestos into fields, yards, and basements. The tools available to address the Swift Creek situation are at odds with the scope and nature of the problem. Asbestos regulation primarily addresses occupational settings, where exposures can be estimated. Hazardous waste regulation primarily addresses liability for abandoned waste products from human activities. Health and environmental issues relating to naturally occurring asbestos (NOA) are fundamentally different from either regulatory scheme. Liability is not a logical lever for a naturally occurring substance, the existence of which is nobody's fault, and exposures to NOA in the environment do not necessarily resemble occupational exposures. The gaps and flaws in the legal regime exacerbate the uncertainties created by uncertainties in the science. Once it is assumed that no level of exposure is safe, legal requirements adopted in very different contexts foreclose the options for addressing the Swift Creek problem. This presentation will outline the applicable laws and how they intersect with issues of risk perception, uncertainty and politics in efforts to address the Swift Creek NOA site.
NASA Astrophysics Data System (ADS)
Zhang, Z.; Zimmermann, N. E.; Poulter, B.
2015-12-01
Simulating the spatio-temporal dynamics of wetlands is key to understanding the role of wetland biogeochemistry under past and future climate variability. Hydrologic inundation models, such as TOPMODEL, are based on a fundamental parameter known as the compound topographic index (CTI) and provide a computationally cost-efficient approach to simulate global wetland dynamics. However, there remain large discrepancies in the implementations of TOPMODEL in land-surface models (LSMs) and thus in their performance against observations. This study describes new improvements to the TOPMODEL implementation and estimates of global wetland dynamics using the LPJ-wsl DGVM, and quantifies uncertainties by comparing the effects of three digital elevation model products (HYDRO1k, GMTED, and HydroSHEDS), which differ in spatial resolution and accuracy, on simulated inundation dynamics. We found that calibrating TOPMODEL with a benchmark dataset can help to successfully predict the seasonal and interannual variations of wetlands, as well as improve the spatial distribution of wetlands to be consistent with inventories. The HydroSHEDS DEM, using a river-basin scheme for aggregating the CTI, shows the best accuracy among the three DEM products for capturing the spatio-temporal dynamics of wetlands. This study demonstrates the feasibility of capturing the spatial heterogeneity of inundation and estimating seasonal and interannual variations in wetlands by coupling a hydrological module in LSMs with appropriate benchmark datasets. It additionally highlights the importance of an adequate understanding of topographic indices for simulating global wetlands and shows the opportunity to converge wetland estimates in LSMs by identifying the uncertainty associated with existing wetland products.
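A minimal sketch of the TOPMODEL relation underlying the compound topographic index, with a saturated (inundated) fraction computed from a catchment-mean water deficit; the grid values and the scaling parameter m are illustrative, not LPJ-wsl settings:

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative per-cell upslope contributing area per unit contour (a, m) and slope (tan beta)
a = rng.lognormal(mean=5.0, sigma=1.0, size=10000)
tan_beta = rng.uniform(0.01, 0.5, size=10000)

# Compound topographic index: CTI = ln(a / tan(beta))
cti = np.log(a / tan_beta)

def saturated_fraction(cti, mean_deficit, m=2.0):
    """TOPMODEL-style inundated fraction: a cell saturates when its local deficit
    D_i = mean_deficit + m * (mean(CTI) - CTI_i) drops to zero or below."""
    local_deficit = mean_deficit + m * (cti.mean() - cti)
    return np.mean(local_deficit <= 0.0)

for deficit in (0.5, 2.0, 5.0):    # wetter to drier catchment-mean deficits (arbitrary units)
    print(f"mean deficit {deficit}: inundated fraction = {saturated_fraction(cti, deficit):.2%}")
```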
NASA Astrophysics Data System (ADS)
Wild, T. B.; Reed, P. M.; Loucks, D. P.
2015-12-01
The Mekong River basin in Southeast Asia is undergoing intensive and pervasive hydropower development to satisfy demand for increased energy and income to support its growing population of 60 million people. Just 20 years ago this river flowed freely. Today some 30 large dams exist in the basin, and over 100 more are being planned for construction. These dams will alter the river's natural water, sediment and nutrient flows, thereby impacting river morphology and ecosystems, and will fragment fish migration pathways. In doing so, they will degrade one of the world's most valuable and productive freshwater fish habitats. For those dams that have not yet been constructed, there still exist opportunities to modify their siting, design and operation (SDO) to potentially achieve a more balanced set of tradeoffs among hydropower production, sediment/nutrient passage and fish passage. We introduce examples of such alternative SDO opportunities for Sambor Dam in Cambodia, planned to be constructed on the main stem of the Mekong River. To evaluate the performance of such alternatives, we developed a Python-based simulation tool called PySedSim. PySedSim is a daily time step mass balance model that identifies the relative tradeoffs among hydropower production, and flow and sediment regime alteration, associated with reservoir sediment management techniques such as flushing, sluicing, bypassing, density current venting and dredging. To date, there has been a very limited acknowledgement or evaluation of the significant uncertainties that impact the evaluation of SDO alternatives. This research is formalizing a model diagnostic assessment of the key assumptions and parametric uncertainties that strongly influence PySedSim SDO evaluations. Using stochastic hydrology and sediment load data, our diagnostic assessment evaluates and compares several Sambor Dam alternatives using several performance measures related to energy production, sediment trapping and regime alteration, and fish passage. We show that performance of the alternatives can be highly variable, and conduct a simultaneous multi-parameter factor screening sensitivity analysis to identify the subset of PySedSim model parameters that contribute most significantly to performance uncertainties in attempts to identify the more robust options.
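A hedged sketch of the kind of daily sediment mass balance a tool like PySedSim performs, comparing a no-flushing operation against a short flushing window; the Brune-style trapping curve, the sediment rating, the storage value, and the flushing efficiency are all assumed forms for illustration, not PySedSim's equations or Sambor Dam data:

```python
import numpy as np

rng = np.random.default_rng(7)

days = 365
inflow = rng.gamma(shape=2.0, scale=4000.0, size=days)        # m3/s, synthetic daily inflow
sed_in = 1e-4 * inflow**1.6                                   # kg/s, assumed sediment rating curve
capacity = 2.0e9                                              # m3, live storage (illustrative)

def run_mass_balance(flush_days=()):
    """Daily sediment mass balance: trapped mass accumulates except on flushing days,
    when drawdown is assumed to pass the incoming load and scour part of the deposit."""
    deposit, passed = 0.0, 0.0
    for d in range(days):
        residence = capacity / (inflow[d] * 86400.0)          # days of storage
        trap_eff = 1.0 - 1.0 / (1.0 + 0.1 * residence)        # Brune-like curve (assumed form)
        load = sed_in[d] * 86400.0                            # kg/day
        if d in flush_days:
            scoured = 0.05 * deposit                          # assumed flushing efficiency
            deposit -= scoured
            passed += load + scoured
        else:
            deposit += trap_eff * load
            passed += (1.0 - trap_eff) * load
    return deposit, passed

for schedule in [(), tuple(range(180, 190))]:                 # no flushing vs. a 10-day flush
    dep, out = run_mass_balance(schedule)
    print(f"flush days={len(schedule):2d}: deposited={dep/1e9:.2f} Mt, "
          f"passed downstream={out/1e9:.2f} Mt")
```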
Assessment of Radiative Heating Uncertainty for Hyperbolic Earth Entry
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.; Mazaheri, Alireza; Gnoffo, Peter A.; Kleb, W. L.; Sutton, Kenneth; Prabhu, Dinesh K.; Brandis, Aaron M.; Bose, Deepak
2011-01-01
This paper investigates the shock-layer radiative heating uncertainty for hyperbolic Earth entry, with the main focus being a Mars return. In Part I of this work, a baseline simulation approach involving the LAURA Navier-Stokes code with coupled ablation and radiation is presented, with the HARA radiation code being used for the radiation predictions. Flight cases representative of peak-heating Mars or asteroid return are defined and the strong influence of coupled ablation and radiation on their aerothermodynamic environments is shown. Structural uncertainties inherent in the baseline simulations are identified, with turbulence modeling, precursor absorption, grid convergence, and radiation transport uncertainties combining for a +34% and -24% structural uncertainty on the radiative heating. A parametric uncertainty analysis, which assumes interval uncertainties, is presented. This analysis accounts for uncertainties in the radiation models as well as heat of formation uncertainties in the flow field model. Discussions and references are provided to support the uncertainty range chosen for each parameter. A parametric uncertainty of +47.3% and -28.3% is computed for the stagnation-point radiative heating for the 15 km/s Mars-return case. A breakdown of the largest individual uncertainty contributors is presented, which includes the C3 Swings cross-section, photoionization edge shift, and Opacity Project atomic lines. Combining the structural and parametric uncertainty components results in a total uncertainty of +81.3% and -52.3% for the Mars-return case. In Part II, the computational technique and uncertainty analysis presented in Part I are applied to 1960s era shock-tube and constricted-arc experimental cases. It is shown that the experiments contain shock layer temperatures and radiative flux values relevant to the Mars-return cases of present interest. Comparisons between the predictions and measurements, accounting for the uncertainty in both, are made for a range of experiments. A measure of comparison quality is defined, which consists of the percent overlap of the predicted uncertainty bar with the corresponding measurement uncertainty bar. For nearly all cases, this percent overlap is greater than zero, and for most of the higher temperature cases (T > 13,000 K) it is greater than 50%. These favorable comparisons provide evidence that the baseline computational technique and uncertainty analysis presented in Part I are adequate for Mars-return simulations. In Part III, the computational technique and uncertainty analysis presented in Part I are applied to EAST shock-tube cases. These experimental cases contain wavelength dependent intensity measurements in a wavelength range that covers 60% of the radiative intensity for the 11 km/s, 5 m radius flight case studied in Part I. Comparisons between the predictions and EAST measurements are made for a range of experiments. The uncertainty analysis presented in Part I is applied to each prediction, and comparisons are made using the metrics defined in Part II. The agreement between predictions and measurements is excellent for velocities greater than 10.5 km/s. Both the wavelength dependent and wavelength integrated intensities agree within 30% for nearly all cases considered. This agreement provides confidence in the computational technique and uncertainty analysis presented in Part I, and provides further evidence that this approach is adequate for Mars-return simulations.
Part IV of this paper reviews existing experimental data that include the influence of massive ablation on radiative heating. It is concluded that this existing data is not sufficient for the present uncertainty analysis. Experiments to capture the influence of massive ablation on radiation are suggested as future work, along with further studies of the radiative precursor and improvements in the radiation properties of ablation products.
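The Mars-return totals quoted in Part I are consistent with simply adding the structural and parametric bounds; a one-line check (the additive combination rule is inferred from the quoted numbers, not stated explicitly in the abstract):

```python
structural = (+34.0, -24.0)     # % bounds on radiative heating from the structural uncertainties
parametric = (+47.3, -28.3)     # % bounds from the parametric uncertainty analysis
total = tuple(s + p for s, p in zip(structural, parametric))
print(f"total: +{total[0]:.1f}% / {total[1]:.1f}%")   # +81.3% / -52.3%, matching the quoted totals
```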