Sample records for quantifying aggregated uncertainty

  1. Development of an Expert Judgement Elicitation and Calibration Methodology for Risk Analysis in Conceptual Vehicle Design

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Keating, Charles; Conway, Bruce; Chytka, Trina

    2004-01-01

    A comprehensive expert-judgment elicitation methodology to quantify input parameter uncertainty and analysis tool uncertainty in a conceptual launch vehicle design analysis has been developed. The ten-phase methodology seeks to elicit expert opinion for quantifying uncertainties as a probability distribution so that multidisciplinary risk analysis studies can be performed. The calibration and aggregation techniques presented as part of the methodology are aimed at improving individual expert estimates, and provide an approach to aggregate multiple expert judgments into a single probability distribution. The purpose of this report is to document the methodology development and its validation through application to a reference aerospace vehicle. A detailed summary of the application exercise, including calibration and aggregation results, is presented. A discussion of possible future steps in this research area is given.
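
    The aggregation step described here can be illustrated with a generic linear opinion pool; this is only a minimal sketch, not necessarily the report's calibration scheme, and the triangular distributions and weights below are hypothetical stand-ins for elicited expert judgments.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical elicited triangular distributions (min, mode, max) for one
    # input parameter, one tuple per expert; weights might come from calibration.
    experts = [(4.0, 5.0, 7.0), (3.5, 5.5, 8.0), (4.5, 6.0, 6.5)]
    weights = np.array([0.5, 0.3, 0.2])          # assumed calibration weights

    n = 100_000
    # Linear opinion pool: pick an expert per draw, then sample that expert's
    # distribution, yielding a single aggregated distribution.
    idx = rng.choice(len(experts), size=n, p=weights)
    samples = np.array([rng.triangular(*experts[i]) for i in idx])

    print("aggregated 5th/50th/95th percentiles:",
          np.percentile(samples, [5, 50, 95]).round(2))
    ```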

  2. Neural methods based on modified reputation rules for detection and identification of intrusion attacks in wireless ad hoc sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2010-04-01

    Determining methods to secure the process of data fusion against attacks by compromised nodes in wireless sensor networks (WSNs) and to quantify the uncertainty that may exist in the aggregation results is a critical issue in mitigating the effects of intrusion attacks. Published research has introduced the concept of the trustworthiness (reputation) of a single sensor node. Reputation is evaluated using an information-theoretic concept, the Kullback-Leibler (KL) distance. Reputation is added to the set of security features. In data aggregation, an opinion, a metric of the degree of belief, is generated to represent the uncertainty in the aggregation result. As aggregate information is disseminated along routes to the sink node(s), its corresponding opinion is propagated and regulated by Josang's belief model. By applying subjective logic on the opinion to manage trust propagation, the uncertainty inherent in aggregation results can be quantified for use in decision making. The concepts of reputation and opinion are modified to allow their application to a class of dynamic WSNs. Using reputation as a factor in determining interim aggregate information is equivalent to implementation of a reputation-based security filter at each processing stage of data fusion, thereby improving the intrusion detection and identification results based on unsupervised techniques. In particular, the reputation-based version of the probabilistic neural network (PNN) learns the signature of normal network traffic with the random probability weights normally used in the PNN replaced by the trust-based quantified reputations of sensor data or subsequent aggregation results generated by the sequential implementation of a version of Josang's belief model. A two-stage intrusion detection and identification algorithm is implemented to overcome the problems of large sensor data loads and resource restrictions in WSNs. Performance of the two-stage algorithm is assessed in simulations of WSN scenarios with multiple sensors at edge nodes for known intrusion attacks. Simulation results show improved robustness of the two-stage design based on reputation-based NNs to intrusion anomalies from compromised nodes and external intrusion attacks.
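
    As a rough sketch of the reputation idea (not the paper's full belief-model implementation), the Kullback-Leibler distance between a node's reported distribution and an aggregate consensus can be mapped to a reputation weight; the histograms and the exponential mapping below are assumptions for illustration.

    ```python
    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        """Kullback-Leibler distance D(p || q) between two discrete distributions."""
        p = np.asarray(p, float) + eps
        q = np.asarray(q, float) + eps
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log(p / q)))

    # Hypothetical histograms of readings: the aggregate consensus vs. one node.
    consensus = [0.10, 0.20, 0.40, 0.20, 0.10]
    node      = [0.05, 0.10, 0.30, 0.35, 0.20]

    d = kl_divergence(node, consensus)
    reputation = np.exp(-d)   # assumed mapping: larger divergence -> lower reputation
    print(f"KL distance = {d:.3f}, reputation weight = {reputation:.3f}")
    ```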

  3. Sensitivity test and ensemble hazard assessment for tephra fallout at Campi Flegrei, Italy

    NASA Astrophysics Data System (ADS)

    Selva, J.; Costa, A.; De Natale, G.; Di Vito, M. A.; Isaia, R.; Macedonio, G.

    2018-02-01

    We present the results of a statistical study on tephra dispersal in the case of a reactivation of the Campi Flegrei volcano. To represent the spectrum of possible eruptive sizes, four classes of eruptions were considered. Excluding the lava emission, three classes are explosive (Small, Medium, and Large) and can produce a significant quantity of volcanic ash. Hazard assessments were made through simulations of atmospheric dispersion of ash and lapilli, considering the full variability of winds and eruptive vents. The results are presented in the form of conditional hazard curves given the occurrence of specific eruptive sizes, representative members of each size class, and then combined to quantify the conditional hazard given an eruption of any size. The main focus of this analysis was to constrain the epistemic uncertainty (i.e. associated with the level of scientific knowledge of phenomena), in order to provide unbiased hazard estimations. The epistemic uncertainty on the estimation of hazard curves was quantified, making use of scientifically acceptable alternatives to be aggregated in the final results. The choice of such alternative models was made after a comprehensive sensitivity analysis which considered different weather databases, alternative modelling of submarine eruptive vents and tephra total grain-size distributions (TGSD) with a different relative mass fraction of fine ash, and the effect of ash aggregation. The results showed that the dominant uncertainty arises from the combined effect of the uncertainty in the fraction of fine particles relative to the total mass and in how ash aggregation is modelled. The latter is particularly relevant in the case of magma-water interactions during explosive eruptive phases, when a large fraction of fine ash can form accretionary lapilli that can contribute significantly to increasing the tephra load in the proximal areas. The variability induced by the use of different meteorological databases and the selected approach to modelling offshore eruptions was relatively insignificant. The uncertainty arising from the alternative implementations, which would have been neglected in standard (Bayesian) quantifications, was finally quantified by ensemble modelling, and represented by hazard and probability maps produced at different confidence levels.

  4. Quantifying aggregated uncertainty in Plasmodium falciparum malaria prevalence and populations at risk via efficient space-time geostatistical joint simulation.

    PubMed

    Gething, Peter W; Patil, Anand P; Hay, Simon I

    2010-04-01

    Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhance their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty (the fidelity of predictions at each mapped pixel) but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new efficient approximating algorithm that can generate, for the first time, the necessary joint simulation of prevalence values across the very large prediction spaces needed for global-scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum, allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.
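
    The central point, that aggregated uncertainty must be computed from joint rather than pixel-wise realizations, can be sketched as follows; the toy correlation structure, pixel populations, and realization counts are illustrative assumptions, not the paper's approximating algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy joint simulation: 500 realizations of prevalence on a 100-pixel region,
    # with spatial correlation imposed through a shared regional random effect.
    n_real, n_pix = 500, 100
    regional = rng.normal(0.0, 0.05, size=(n_real, 1))       # shared component
    local = rng.normal(0.0, 0.08, size=(n_real, n_pix))      # pixel component
    prevalence = np.clip(0.25 + regional + local, 0, 1)

    pop = rng.integers(1_000, 50_000, size=n_pix)             # pixel populations

    # Aggregate within each realization, then summarise across realizations:
    # this preserves the between-pixel correlation that per-pixel (marginal)
    # uncertainty maps cannot capture.
    regional_mean = (prevalence * pop).sum(axis=1) / pop.sum()
    print("regional mean prevalence: %.3f (95%% CI %.3f-%.3f)" % (
        regional_mean.mean(), *np.percentile(regional_mean, [2.5, 97.5])))
    ```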

  5. Error Analysis of CM Data Products: Sources of Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, Brian D.; Eckert-Gallup, Aubrey Celia; Cochran, Lainy Dromgoole

    The goal of this project is to address the current inability to assess the overall error and uncertainty of data products developed and distributed by DOE’s Consequence Management (CM) Program. This is a widely recognized shortfall, the resolution of which would provide a great deal of value and defensibility to the analysis results, data products, and the decision-making process that follows this work. A global approach to this problem is necessary because multiple sources of error and uncertainty contribute to the ultimate production of CM data products. Therefore, this project will require collaboration with subject matter experts across a wide range of FRMAC skill sets in order to quantify the types of uncertainty that each area of the CM process might contain and to understand how variations in these uncertainty sources contribute to the aggregated uncertainty present in CM data products. The ultimate goal of this project is to quantify the confidence level of CM products to ensure that appropriate public and worker protection decisions are supported by defensible analysis.

  6. Implications of climate change for agricultural productivity in the early twenty-first century.

    PubMed

    Gornall, Jemma; Betts, Richard; Burke, Eleanor; Clark, Robin; Camp, Joanne; Willett, Kate; Wiltshire, Andrew

    2010-09-27

    This paper reviews recent literature concerning a wide range of processes through which climate change could potentially impact global-scale agricultural productivity, and presents projections of changes in relevant meteorological, hydrological and plant physiological quantities from a climate model ensemble to illustrate key areas of uncertainty. Few global-scale assessments have been carried out, and these are limited in their ability to capture the uncertainty in climate projections, and omit potentially important aspects such as extreme events and changes in pests and diseases. There is a lack of clarity on how climate change impacts on drought are best quantified from an agricultural perspective, with different metrics giving very different impressions of future risk. The dependence of some regional agriculture on remote rainfall, snowmelt and glaciers adds to the complexity. Indirect impacts via sea-level rise, storms and diseases have not been quantified. Perhaps most seriously, there is high uncertainty in the extent to which the direct effects of CO2 rise on plant physiology will interact with climate change in affecting productivity. At present, the aggregate impacts of climate change on global-scale agricultural productivity cannot be reliably quantified.

  7. Implications of climate change for agricultural productivity in the early twenty-first century

    PubMed Central

    Gornall, Jemma; Betts, Richard; Burke, Eleanor; Clark, Robin; Camp, Joanne; Willett, Kate; Wiltshire, Andrew

    2010-01-01

    This paper reviews recent literature concerning a wide range of processes through which climate change could potentially impact global-scale agricultural productivity, and presents projections of changes in relevant meteorological, hydrological and plant physiological quantities from a climate model ensemble to illustrate key areas of uncertainty. Few global-scale assessments have been carried out, and these are limited in their ability to capture the uncertainty in climate projections, and omit potentially important aspects such as extreme events and changes in pests and diseases. There is a lack of clarity on how climate change impacts on drought are best quantified from an agricultural perspective, with different metrics giving very different impressions of future risk. The dependence of some regional agriculture on remote rainfall, snowmelt and glaciers adds to the complexity. Indirect impacts via sea-level rise, storms and diseases have not been quantified. Perhaps most seriously, there is high uncertainty in the extent to which the direct effects of CO2 rise on plant physiology will interact with climate change in affecting productivity. At present, the aggregate impacts of climate change on global-scale agricultural productivity cannot be reliably quantified. PMID:20713397

  8. Mapping Uncertainty Due to Missing Data in the Global Ocean Health Index.

    PubMed

    Frazier, Melanie; Longo, Catherine; Halpern, Benjamin S

    2016-01-01

    Indicators are increasingly used to measure environmental systems; however, they are often criticized for failing to measure and describe uncertainty. Uncertainty is particularly difficult to evaluate and communicate in the case of composite indicators which aggregate many indicators of ecosystem condition. One of the ongoing goals of the Ocean Health Index (OHI) has been to improve our approach to dealing with missing data, which is a major source of uncertainty. Here we: (1) quantify the potential influence of gapfilled data on index scores from the 2015 global OHI assessment; (2) develop effective methods of tracking, quantifying, and communicating this information; and (3) provide general guidance for implementing gapfilling procedures for existing and emerging indicators, including regional OHI assessments. For the overall OHI global index score, the percent contribution of gapfilled data was relatively small (18.5%); however, it varied substantially among regions and goals. In general, smaller territorial jurisdictions and the food provision and tourism and recreation goals required the most gapfilling. We found the best approach for managing gapfilled data was to mirror the general framework used to organize, calculate, and communicate the Index data and scores. Quantifying gapfilling provides a measure of the reliability of the scores for different regions and components of an indicator. Importantly, this information highlights the importance of the underlying datasets used to calculate composite indicators and can inform and incentivize future data collection.

  9. Estimating instream constituent loads using replicate synoptic sampling, Peru Creek, Colorado

    NASA Astrophysics Data System (ADS)

    Runkel, Robert L.; Walton-Day, Katherine; Kimball, Briant A.; Verplanck, Philip L.; Nimick, David A.

    2013-05-01

    The synoptic mass balance approach is often used to evaluate constituent mass loading in streams affected by mine drainage. Spatial profiles of constituent mass load are used to identify sources of contamination and prioritize sites for remedial action. This paper presents a field scale study in which replicate synoptic sampling campaigns are used to quantify the aggregate uncertainty in constituent load that arises from (1) laboratory analyses of constituent and tracer concentrations, (2) field sampling error, and (3) temporal variation in concentration from diel constituent cycles and/or source variation. Consideration of these factors represents an advance in the application of the synoptic mass balance approach by placing error bars on estimates of constituent load and by allowing all sources of uncertainty to be quantified in aggregate; previous applications of the approach have provided only point estimates of constituent load and considered only a subset of the possible errors. Given estimates of aggregate uncertainty, site specific data and expert judgement may be used to qualitatively assess the contributions of individual factors to uncertainty. This assessment can be used to guide the collection of additional data to reduce uncertainty. Further, error bars provided by the replicate approach can aid the investigator in the interpretation of spatial loading profiles and the subsequent identification of constituent source areas within the watershed. The replicate sampling approach is applied to Peru Creek, a stream receiving acidic, metal-rich effluent from the Pennsylvania Mine. Other sources of acidity and metals within the study reach include a wetland area adjacent to the mine and tributary inflow from Cinnamon Gulch. Analysis of data collected under low-flow conditions indicates that concentrations of Al, Cd, Cu, Fe, Mn, Pb, and Zn in Peru Creek exceed aquatic life standards. Constituent loading within the study reach is dominated by effluent from the Pennsylvania Mine, with over 50% of the Cd, Cu, Fe, Mn, and Zn loads attributable to a collapsed adit near the top of the study reach. These estimates of mass load may underestimate the effect of the Pennsylvania Mine as leakage from underground mine workings may contribute to metal loads that are currently attributed to the wetland area. This potential leakage confounds the evaluation of remedial options and additional research is needed to determine the magnitude and location of the leakage.
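
    A minimal sketch of how replicate synoptic campaigns yield error bars on load estimates is given below, assuming hypothetical discharge and zinc concentration values; load is computed as discharge times concentration, and the spread across replicates supplies the aggregate uncertainty.

    ```python
    import numpy as np

    # Hypothetical replicate synoptic data: 3 campaigns x 4 stream sites.
    # Discharge Q in L/s (e.g. from tracer dilution) and Zn concentration in mg/L.
    Q  = np.array([[120, 150, 180, 210],
                   [118, 155, 175, 205],
                   [125, 148, 182, 215]], float)
    Zn = np.array([[0.40, 0.55, 0.70, 0.65],
                   [0.38, 0.60, 0.72, 0.60],
                   [0.44, 0.52, 0.68, 0.67]], float)

    load = Q * Zn * 86400 / 1e6            # instantaneous load, kg/day

    # Replicates fold lab, sampling, and temporal variation into one error bar.
    mean_load = load.mean(axis=0)
    ci95 = 1.96 * load.std(axis=0, ddof=1) / np.sqrt(load.shape[0])
    for site, (m, c) in enumerate(zip(mean_load, ci95), start=1):
        print(f"site {site}: {m:.2f} +/- {c:.2f} kg/day")
    ```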

  10. Estimating instream constituent loads using replicate synoptic sampling, Peru Creek, Colorado

    USGS Publications Warehouse

    Runkel, Robert L.; Walton-Day, Katherine; Kimball, Briant A.; Verplanck, Philip L.; Nimick, David A.

    2013-01-01

    The synoptic mass balance approach is often used to evaluate constituent mass loading in streams affected by mine drainage. Spatial profiles of constituent mass load are used to identify sources of contamination and prioritize sites for remedial action. This paper presents a field scale study in which replicate synoptic sampling campaigns are used to quantify the aggregate uncertainty in constituent load that arises from (1) laboratory analyses of constituent and tracer concentrations, (2) field sampling error, and (3) temporal variation in concentration from diel constituent cycles and/or source variation. Consideration of these factors represents an advance in the application of the synoptic mass balance approach by placing error bars on estimates of constituent load and by allowing all sources of uncertainty to be quantified in aggregate; previous applications of the approach have provided only point estimates of constituent load and considered only a subset of the possible errors. Given estimates of aggregate uncertainty, site specific data and expert judgement may be used to qualitatively assess the contributions of individual factors to uncertainty. This assessment can be used to guide the collection of additional data to reduce uncertainty. Further, error bars provided by the replicate approach can aid the investigator in the interpretation of spatial loading profiles and the subsequent identification of constituent source areas within the watershed. The replicate sampling approach is applied to Peru Creek, a stream receiving acidic, metal-rich effluent from the Pennsylvania Mine. Other sources of acidity and metals within the study reach include a wetland area adjacent to the mine and tributary inflow from Cinnamon Gulch. Analysis of data collected under low-flow conditions indicates that concentrations of Al, Cd, Cu, Fe, Mn, Pb, and Zn in Peru Creek exceed aquatic life standards. Constituent loading within the study reach is dominated by effluent from the Pennsylvania Mine, with over 50% of the Cd, Cu, Fe, Mn, and Zn loads attributable to a collapsed adit near the top of the study reach. These estimates of mass load may underestimate the effect of the Pennsylvania Mine as leakage from underground mine workings may contribute to metal loads that are currently attributed to the wetland area. This potential leakage confounds the evaluation of remedial options and additional research is needed to determine the magnitude and location of the leakage.

  11. Where do uncertainties reside within environmental risk assessments? Expert opinion on uncertainty distributions for pesticide risks to surface water organisms.

    PubMed

    Skinner, Daniel J C; Rocks, Sophie A; Pollard, Simon J T

    2016-12-01

    A reliable characterisation of uncertainties can aid uncertainty identification during environmental risk assessments (ERAs). However, typologies can be implemented inconsistently, causing uncertainties to go unidentified. We present an approach based on nine structured elicitations, in which subject-matter experts, for pesticide risks to surface water organisms, validate and assess three dimensions of uncertainty: its level (the severity of uncertainty, ranging from determinism to ignorance); nature (whether the uncertainty is epistemic or aleatory); and location (the data source or area in which the uncertainty arises). Risk characterisation contains the highest median levels of uncertainty, associated with estimating, aggregating and evaluating the magnitude of risks. Regarding the locations in which uncertainty is manifest, data uncertainty is dominant in problem formulation, exposure assessment and effects assessment. The comprehensive description of uncertainty presented here will enable risk analysts to prioritise the required phases, groups of tasks, or individual tasks within a risk analysis according to the highest levels of uncertainty, the potential for uncertainty to be reduced or quantified, or the types of location-based uncertainty, thus aiding uncertainty prioritisation during environmental risk assessments. In turn, it is expected to inform investment in uncertainty reduction or targeted risk management action. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  12. Uncertainty in temperature response of current consumption-based emissions estimates

    NASA Astrophysics Data System (ADS)

    Karstensen, J.; Peters, G. P.; Andrew, R. M.

    2014-09-01

    Several studies have connected emissions of greenhouse gases to economic and trade data to quantify the causal chain from consumption to emissions and climate change. These studies usually combine data and models originating from different sources, making it difficult to estimate uncertainties in the end results. We estimate uncertainties in economic data, multi-pollutant emission statistics and metric parameters, and use Monte Carlo analysis to quantify contributions to uncertainty and to determine how uncertainty propagates to estimates of global temperature change from regional and sectoral territorial- and consumption-based emissions for the year 2007. We find that the uncertainties are sensitive to the emission allocations, mix of pollutants included, the metric and its time horizon, and the level of aggregation of the results. Uncertainties in the final results are largely dominated by the climate sensitivity and the parameters associated with the warming effects of CO2. The economic data have a relatively small impact on uncertainty at the global and national level, while much higher uncertainties are found at the sectoral level. Our results suggest that consumption-based national emissions are not significantly more uncertain than the corresponding production-based emissions, since the largest uncertainties are due to metric and emissions, which affect both perspectives equally. The two perspectives exhibit different sectoral uncertainties, due to changes of pollutant compositions. We find global sectoral consumption uncertainties in the range of ±9-±27% using the global temperature potential with a 50-year time horizon, with metric uncertainties dominating. National-level uncertainties are similar in both perspectives due to the dominance of CO2 over other pollutants. The consumption emissions of the top 10 emitting regions have a broad uncertainty range of ±9-±25%, with metric and emissions uncertainties contributing similarly. The Absolute global temperature potential with a 50-year time horizon has much higher uncertainties, with considerable uncertainty overlap for regions and sectors, indicating that the ranking of countries is uncertain.
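
    A minimal Monte Carlo sketch of this kind of propagation is shown below; the emission totals, relative uncertainties, and GTP-like metric values are invented for illustration and are not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50_000

    # Illustrative consumption-based emissions for one region, with assumed
    # relative standard deviations, and GTP50-like metric values whose
    # uncertainty is dominated by climate sensitivity.
    e_co2 = rng.normal(1000, 1000 * 0.08, n)             # Mt CO2
    e_ch4 = rng.normal(40,   40   * 0.20, n)             # Mt CH4
    gtp50_co2 = rng.lognormal(np.log(6e-4), 0.25, n)     # mK per Mt, assumed
    gtp50_ch4 = rng.lognormal(np.log(3e-3), 0.35, n)     # mK per Mt, assumed

    dT = e_co2 * gtp50_co2 + e_ch4 * gtp50_ch4           # temperature response, mK
    lo, med, hi = np.percentile(dT, [5, 50, 95])
    print(f"GTP50 response: {med:.1f} mK ({(lo-med)/med:+.0%} / {(hi-med)/med:+.0%})")
    ```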

  13. Robustness Analysis and Optimally Robust Control Design via Sum-of-Squares

    NASA Technical Reports Server (NTRS)

    Dorobantu, Andrei; Crespo, Luis G.; Seiler, Peter J.

    2012-01-01

    A control analysis and design framework is proposed for systems subject to parametric uncertainty. The underlying strategies are based on sum-of-squares (SOS) polynomial analysis and nonlinear optimization to design an optimally robust controller. The approach determines a maximum uncertainty range for which the closed-loop system satisfies a set of stability and performance requirements. These requirements, defined as inequality constraints on several metrics, are restricted to polynomial functions of the uncertainty. To quantify robustness, SOS analysis is used to prove that the closed-loop system complies with the requirements for a given uncertainty range. The maximum uncertainty range, calculated by assessing a sequence of increasingly larger ranges, serves as a robustness metric for the closed-loop system. To optimize the control design, nonlinear optimization is used to enlarge the maximum uncertainty range by tuning the controller gains. Hence, the resulting controller is optimally robust to parametric uncertainty. This approach balances the robustness margins corresponding to each requirement in order to maximize the aggregate system robustness. The proposed framework is applied to a simple linear short-period aircraft model with uncertain aerodynamic coefficients.

  14. Treatment of uncertainties in the IPCC: a philosophical analysis

    NASA Astrophysics Data System (ADS)

    Jebeile, J.; Drouet, I.

    2014-12-01

    The IPCC produces scientific reports out of findings on climate and climate change. Because the findings are uncertain in many respects, the production of reports requires aggregating assessments of uncertainties of different kinds. This difficult task is currently regulated by the Guidance note for lead authors of the IPCC fifth assessment report on consistent treatment of uncertainties. The note recommends that two metrics, confidence and likelihood, be used for communicating the degree of certainty in findings. Confidence is expressed qualitatively "based on the type, amount, quality, and consistency of evidence […] and the degree of agreement", while likelihood is expressed probabilistically "based on statistical analysis of observations or model results, or expert judgment". Therefore, depending on the evidence evaluated, authors have the choice to present either an assigned level of confidence or a quantified measure of likelihood. But these two kinds of uncertainty assessment reflect distinct and potentially conflicting methodologies. So the question arises whether the treatment of uncertainties in the IPCC is rationally justified. In order to answer the question, it is worth comparing the IPCC procedures with the formal normative theories of epistemic rationality which have been developed by philosophers. These theories, which include contributions to the philosophy of probability and to Bayesian probabilistic confirmation theory, are relevant for our purpose because they are commonly used to assess the rationality of collective judgement formation based on uncertain knowledge. In this paper we make the comparison and pursue the following objectives: (i) we determine whether the IPCC confidence and likelihood can be compared with the notions of uncertainty targeted by or underlying the formal normative theories of epistemic rationality; (ii) we investigate whether the formal normative theories of epistemic rationality justify treating uncertainty along those two dimensions, and indicate how this can be avoided.

  15. Calibrating Physical Parameters in House Models Using Aggregate AC Power Demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Stevens, Andrew J.; Lian, Jianming

    For residential houses, the air conditioning (AC) units are one of the major resources that can provide significant flexibility in energy use for the purpose of demand response. To quantify the flexibility, the characteristics of all the houses need to be accurately estimated, so that certain house models can be used to predict the dynamics of the house temperatures in order to adjust the setpoints accordingly to provide demand response while maintaining the same comfort levels. In this paper, we propose an approach using the Reverse Monte Carlo modeling method and aggregate house models to calibrate the distribution parameters of the house models for a population of residential houses. Given the aggregate AC power demand for the population, the approach can successfully estimate the distribution parameters for the sensitive physical parameters based on our previous uncertainty quantification study, such as the mean of the floor areas of the houses.

  16. Infrastructure Vulnerability Assessment Model (I-VAM).

    PubMed

    Ezell, Barry Charles

    2007-06-01

    Quantifying vulnerability to critical infrastructure has not been adequately addressed in the literature. Thus, the purpose of this article is to present a model that quantifies vulnerability. Vulnerability is defined as a measure of system susceptibility to threat scenarios. This article asserts that vulnerability is a condition of the system and it can be quantified using the Infrastructure Vulnerability Assessment Model (I-VAM). The model is presented and then applied to a medium-sized clean water system. The model requires subject matter experts (SMEs) to establish value functions and weights, and to assess protection measures of the system. Simulation is used to account for uncertainty in measurement, aggregate expert assessment, and to yield a vulnerability (Omega) density function. Results demonstrate that I-VAM is useful to decision-makers who prefer quantification to qualitative treatment of vulnerability. I-VAM can be used to quantify vulnerability to other infrastructures, supervisory control and data acquisition systems (SCADA), and distributed control systems (DCS).
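
    The simulation step can be sketched as a Monte Carlo aggregation of weighted expert scores; the weights, point assessments, and uncertainty spreads below are hypothetical, and the mapping from protection to vulnerability is a simplification rather than the I-VAM formulation itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 20_000

    # Hypothetical protection measures scored 0-1 by SMEs, each with elicited
    # value-function weights; measurement uncertainty is represented by spread
    # around the point assessments.
    weights = np.array([0.4, 0.35, 0.25])
    scores  = np.array([0.6, 0.8, 0.5])      # SME point assessments
    spread  = np.array([0.10, 0.05, 0.15])   # assumed assessment uncertainty

    draws = np.clip(rng.normal(scores, spread, size=(n, 3)), 0, 1)
    protection = draws @ weights
    vulnerability = 1.0 - protection         # susceptibility to the scenario

    print("vulnerability: mean %.2f, 90%% interval [%.2f, %.2f]" % (
        vulnerability.mean(), *np.percentile(vulnerability, [5, 95])))
    ```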

  17. Uncertainty in temperature response of current consumption-based emissions estimates

    NASA Astrophysics Data System (ADS)

    Karstensen, J.; Peters, G. P.; Andrew, R. M.

    2015-05-01

    Several studies have connected emissions of greenhouse gases to economic and trade data to quantify the causal chain from consumption to emissions and climate change. These studies usually combine data and models originating from different sources, making it difficult to estimate uncertainties along the entire causal chain. We estimate uncertainties in economic data, multi-pollutant emission statistics, and metric parameters, and use Monte Carlo analysis to quantify contributions to uncertainty and to determine how uncertainty propagates to estimates of global temperature change from regional and sectoral territorial- and consumption-based emissions for the year 2007. We find that the uncertainties are sensitive to the emission allocations, mix of pollutants included, the metric and its time horizon, and the level of aggregation of the results. Uncertainties in the final results are largely dominated by the climate sensitivity and the parameters associated with the warming effects of CO2. Based on our assumptions, which exclude correlations in the economic data, the uncertainty in the economic data appears to have a relatively small impact on uncertainty at the national level in comparison to emissions and metric uncertainty. Much higher uncertainties are found at the sectoral level. Our results suggest that consumption-based national emissions are not significantly more uncertain than the corresponding production-based emissions since the largest uncertainties are due to metric and emissions which affect both perspectives equally. The two perspectives exhibit different sectoral uncertainties, due to changes of pollutant compositions. We find global sectoral consumption uncertainties in the range of ±10 to ±27 % using the Global Temperature Potential with a 50-year time horizon, with metric uncertainties dominating. National-level uncertainties are similar in both perspectives due to the dominance of CO2 over other pollutants. The consumption emissions of the top 10 emitting regions have a broad uncertainty range of ±9 to ±25 %, with metric and emission uncertainties contributing similarly. The absolute global temperature potential (AGTP) with a 50-year time horizon has much higher uncertainties, with considerable uncertainty overlap for regions and sectors, indicating that the ranking of countries is uncertain.

  18. Uncertainties in Forecasting Streamflow using Entropy Theory

    NASA Astrophysics Data System (ADS)

    Cui, H.; Singh, V. P.

    2017-12-01

    Streamflow forecasting is essential in river restoration, reservoir operation, power generation, irrigation, navigation, and water management. However, forecasts are always accompanied by uncertainties, which may affect the forecasting results and lead to large variations. Therefore, uncertainties must be considered and assessed properly when forecasting streamflow for water management. The aim of our work is to quantify the uncertainties involved in forecasting streamflow and to provide reliable streamflow forecasts. Although streamflow time series are stochastic, they exhibit seasonal and periodic patterns. Therefore, streamflow forecasting entails modeling seasonality, periodicity, and the correlation structure, and assessing uncertainties. This study applies entropy theory to forecast streamflow and measure uncertainties during the forecasting process. To apply entropy theory to streamflow forecasting, spectral analysis is combined with time series analysis, as spectral analysis can be employed to characterize patterns of streamflow variation and identify the periodicity of streamflow. That is, it permits the extraction of significant information for understanding the streamflow process and its prediction. Application of entropy theory to streamflow forecasting involves determination of the spectral density, determination of parameters, and extension of the autocorrelation function. The uncertainties contributed by the precipitation input, the forecasting model, and the forecasted results are measured separately using entropy. Using information theory, we describe how these uncertainties are transported and aggregated through these processes.
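
    A minimal illustration of using Shannon entropy to compare forecast uncertainty is given below; it does not reproduce the spectral-density machinery of the study, and the synthetic flows and error magnitudes are assumptions.

    ```python
    import numpy as np

    def shannon_entropy(samples, edges):
        """Entropy (nats) of an empirical distribution on fixed-width bins."""
        p, _ = np.histogram(samples, bins=edges)
        p = p[p > 0] / p.sum()
        return float(-(p * np.log(p)).sum())

    rng = np.random.default_rng(4)
    obs = rng.gamma(shape=2.0, scale=50.0, size=365)        # synthetic daily flow
    fc_good = obs + rng.normal(0, 10, obs.size)             # tighter forecast
    fc_poor = obs + rng.normal(0, 40, obs.size)             # looser forecast

    edges = np.arange(-150, 151, 10)   # fixed 10-unit bins so entropy tracks spread
    # Higher entropy of the error distribution = more forecast uncertainty.
    print("entropy of errors (good):", round(shannon_entropy(fc_good - obs, edges), 2))
    print("entropy of errors (poor):", round(shannon_entropy(fc_poor - obs, edges), 2))
    ```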

  19. Spatial and Temporal Uncertainty of Crop Yield Aggregations

    NASA Technical Reports Server (NTRS)

    Porwollik, Vera; Mueller, Christoph; Elliott, Joshua; Chryssanthacopoulos, James; Iizumi, Toshichika; Ray, Deepak K.; Ruane, Alex C.; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe

    2016-01-01

    The aggregation of simulated gridded crop yields to national or regional scale requires information on temporal and spatial patterns of crop-specific harvested areas. This analysis estimates the uncertainty of simulated gridded yield time series related to the aggregation with four different harvested area data sets. We compare aggregated yield time series from the Global Gridded Crop Model Inter-comparison project for four crop types from 14 models at global, national, and regional scale to determine aggregation-driven differences in mean yields and temporal patterns as measures of uncertainty. The quantity and spatial patterns of harvested areas differ for individual crops among the four datasets applied for the aggregation. Simulated spatial yield patterns also differ among the 14 models. These differences in harvested areas and simulated yield patterns lead to differences in aggregated productivity estimates, both in mean yield and in the temporal dynamics. Among the four investigated crops, wheat yield (17% relative difference) is most affected by the uncertainty introduced by the aggregation at the global scale. The correlation of temporal patterns of globally aggregated yield time series can be as low as r = 0.28 (soybean). For the majority of countries, mean relative differences of nationally aggregated yields account for 10% or less. The spatial and temporal differences can be substantially higher for individual countries. Of the top-10 crop producers, aggregated national multi-annual mean relative differences of yields can be up to 67% (maize, South Africa), 43% (wheat, Pakistan), 51% (rice, Japan), and 427% (soybean, Bolivia). Correlations of differently aggregated yield time series can be as low as r = 0.56 (maize, India), r = 0.05 (wheat, Russia), r = 0.13 (rice, Vietnam), and r = -0.01 (soybean, Uruguay). The aggregation to sub-national scale in comparison to country scale shows that spatial uncertainties can cancel out in countries with large harvested areas per crop type. We conclude that the aggregation uncertainty can be substantial for crop productivity and production estimations in the context of food security, impact assessment, and model evaluation exercises.
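
    The aggregation itself is an area-weighted mean, so the sensitivity to the harvested-area dataset can be sketched as below; the gridded yields and the two area datasets are synthetic stand-ins, not the GGCMI data.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_cells = 200                                         # grid cells in one country

    sim_yield = rng.normal(3.0, 0.8, n_cells).clip(0.5)   # simulated yield, t/ha

    # Two alternative harvested-area datasets for the same cells (hypothetical).
    area_a = rng.gamma(2.0, 5_000, n_cells)               # ha
    area_b = area_a * rng.uniform(0.6, 1.4, n_cells)

    def national_yield(yields, areas):
        """Production-weighted national mean yield."""
        return float((yields * areas).sum() / areas.sum())

    ya, yb = national_yield(sim_yield, area_a), national_yield(sim_yield, area_b)
    print(f"dataset A: {ya:.2f} t/ha, dataset B: {yb:.2f} t/ha, "
          f"relative difference: {abs(ya - yb) / ya:.1%}")
    ```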

  20. Accuracy of Robotic Radiosurgical Liver Treatment Throughout the Respiratory Cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winter, Jeff D.; Wong, Raimond; Swaminath, Anand

    Purpose: To quantify random uncertainties in robotic radiosurgical treatment of liver lesions with real-time respiratory motion management. Methods and Materials: We conducted a retrospective analysis of 27 liver cancer patients treated with robotic radiosurgery over 118 fractions. The robotic radiosurgical system uses orthogonal x-ray images to determine internal target position and correlates this position with an external surrogate to provide robotic corrections of linear accelerator positioning. Verification and update of this internal–external correlation model was achieved using periodic x-ray images collected throughout treatment. To quantify random uncertainties in targeting, we analyzed logged tracking information and isolated x-ray images collected immediately before beam delivery. For translational correlation errors, we quantified the difference between correlation model–estimated target position and actual position determined by periodic x-ray imaging. To quantify prediction errors, we computed the mean absolute difference between the predicted coordinates and actual modeled position calculated 115 milliseconds later. We estimated overall random uncertainty by quadratically summing correlation, prediction, and end-to-end targeting errors. We also investigated relationships between tracking errors and motion amplitude using linear regression. Results: The 95th percentile absolute correlation errors in each direction were 2.1 mm left–right, 1.8 mm anterior–posterior, 3.3 mm cranio–caudal, and 3.9 mm 3-dimensional radial, whereas 95th percentile absolute radial prediction errors were 0.5 mm. Overall 95th percentile random uncertainty was 4 mm in the radial direction. Prediction errors were strongly correlated with modeled target amplitude (r=0.53-0.66, P<.001), whereas only weak correlations existed for correlation errors. Conclusions: Study results demonstrate that model correlation errors are the primary random source of uncertainty in Cyberknife liver treatment and, unlike prediction errors, are not strongly correlated with target motion amplitude. Aggregate 3-dimensional radial position errors presented here suggest the target will be within 4 mm of the target volume for 95% of the beam delivery.
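
    The quadrature (root-sum-square) combination of error components can be sketched as follows; the synthetic error samples and the end-to-end term are assumed values, not the logged clinical data.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic per-beam tracking errors (mm) standing in for the logged data:
    # correlation-model error, prediction error, and a fixed end-to-end error.
    corr_err = rng.normal(0, 1.7, 2000)     # radial correlation error, assumed sigma
    pred_err = rng.normal(0, 0.25, 2000)    # radial prediction error, assumed sigma
    e2e_err  = 0.95                         # end-to-end targeting error, assumed

    p95_corr = np.percentile(np.abs(corr_err), 95)
    p95_pred = np.percentile(np.abs(pred_err), 95)

    # Quadrature combination of independent random components.
    overall = np.sqrt(p95_corr**2 + p95_pred**2 + e2e_err**2)
    print(f"95th pct correlation {p95_corr:.1f} mm, prediction {p95_pred:.2f} mm, "
          f"overall ~{overall:.1f} mm")
    ```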

  1. Securing Healthcare’s Quantified-Self Data: A Comparative Analysis Versus Personal Financial Account Aggregators Based on Porter’s Five Forces Framework for Competitive Force

    DTIC Science & Technology

    2016-09-01

  2. Quantifying uncertainty due to fission-fusion dynamics as a component of social complexity.

    PubMed

    Ramos-Fernandez, Gabriel; King, Andrew J; Beehner, Jacinta C; Bergman, Thore J; Crofoot, Margaret C; Di Fiore, Anthony; Lehmann, Julia; Schaffner, Colleen M; Snyder-Mackler, Noah; Zuberbühler, Klaus; Aureli, Filippo; Boyer, Denis

    2018-05-30

    Groups of animals (including humans) may show flexible grouping patterns, in which temporary aggregations or subgroups come together and split, changing composition over short temporal scales (i.e. fission and fusion). A high degree of fission-fusion dynamics may constrain the regulation of social relationships, introducing uncertainty in interactions between group members. Here we use Shannon's entropy to quantify the predictability of subgroup composition for three species known to differ in the way their subgroups come together and split over time: spider monkeys (Ateles geoffroyi), chimpanzees (Pan troglodytes) and geladas (Theropithecus gelada). We formulate a random expectation of entropy that considers subgroup size variation and sample size, against which the observed entropy in subgroup composition can be compared. Using the theory of set partitioning, we also develop a method to estimate the number of subgroups that the group is likely to be divided into, based on the composition and size of single focal subgroups. Our results indicate that Shannon's entropy and the estimated number of subgroups present at a given time provide quantitative metrics of uncertainty in the social environment (within which social relationships must be regulated) for groups with different degrees of fission-fusion dynamics. These metrics also represent an indirect quantification of the cognitive challenges posed by socially dynamic environments. Overall, our novel methodological approach provides new insight for understanding the evolution of social complexity and the mechanisms to cope with the uncertainty that results from fission-fusion dynamics. © 2017 The Author(s).
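
    A simplified sketch of entropy over subgroup compositions is given below; it uses a naive upper bound rather than the paper's randomized expectation, and the group size, subgroup sizes, and number of scans are arbitrary.

    ```python
    import numpy as np
    from collections import Counter
    from math import log

    rng = np.random.default_rng(7)

    def composition_entropy(subgroups):
        """Shannon entropy (nats) over the frequencies of distinct compositions."""
        counts = Counter(frozenset(s) for s in subgroups)
        p = np.array(list(counts.values()), float)
        p /= p.sum()
        return float(-(p * np.log(p)).sum())

    group = list(range(12))                          # 12 individuals
    # Hypothetical scans: draw a subgroup size, then members, 200 observations.
    obs = [tuple(rng.choice(group, size=rng.integers(2, 6), replace=False))
           for _ in range(200)]

    h_obs = composition_entropy(obs)
    h_max = log(len(obs))        # upper bound: every scan a unique composition
    print(f"observed entropy {h_obs:.2f} nats vs. maximum {h_max:.2f} nats")
    ```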

  3. Ensemble Bayesian forecasting system Part I: Theory and algorithms

    NASA Astrophysics Data System (ADS)

    Herr, Henry D.; Krzysztofowicz, Roman

    2015-05-01

    The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.

  4. Quantifying Errors in TRMM-Based Multi-Sensor QPE Products Over Land in Preparation for GPM

    NASA Technical Reports Server (NTRS)

    Peters-Lidard, Christa D.; Tian, Yudong

    2011-01-01

    Determining uncertainties in satellite-based multi-sensor quantitative precipitation estimates over land is of fundamental importance to both data producers and hydroclimatological applications. Evaluating TRMM-era products also lays the groundwork and sets the direction for algorithm and applications development for future missions including GPM. QPE uncertainties result mostly from the interplay of systematic errors and random errors. In this work, we will synthesize our recent results quantifying the error characteristics of satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMap). For systematic errors, we devised an error decomposition scheme to separate errors in precipitation estimates into three independent components, hit biases, missed precipitation and false precipitation. This decomposition scheme reveals hydroclimatologically-relevant error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. For the random errors, we calculated the measurement spread from the ensemble of these six quasi-independent products, and thus produced a global map of measurement uncertainties. The map yields a global view of the error characteristics and their regional and seasonal variations, reveals many undocumented error features over areas with no validation data available, and provides better guidance to global assimilation of satellite-based precipitation data. Insights gained from these results and how they could help with GPM will be highlighted.
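
    The decomposition into hit bias, missed precipitation, and false precipitation can be sketched on synthetic fields as below; the rain-occurrence and intensity models are assumptions, and by construction the three components sum exactly to the total bias.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Synthetic daily rain samples (mm): a "truth" reference and a satellite estimate.
    ref = np.where(rng.random(10_000) < 0.30, rng.gamma(2.0, 4.0, 10_000), 0.0)
    sat = np.where(rng.random(10_000) < 0.28, rng.gamma(2.0, 4.5, 10_000), 0.0)

    hit    = (ref > 0) & (sat > 0)
    missed = (ref > 0) & (sat == 0)
    false  = (ref == 0) & (sat > 0)

    total_bias = sat.sum() - ref.sum()
    hit_bias   = (sat[hit] - ref[hit]).sum()
    missed_p   = -ref[missed].sum()          # precipitation the estimate missed
    false_p    = sat[false].sum()            # precipitation that was not there

    print(f"total bias {total_bias:.0f} = hit {hit_bias:.0f} "
          f"+ missed {missed_p:.0f} + false {false_p:.0f} (mm, summed)")
    ```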

  5. Sensitivity tests and ensemble hazard assessment for tephra fallout at Campi Flegrei, Italy

    NASA Astrophysics Data System (ADS)

    Selva, Jacopo; Costa, Antonio; De Natale, Giuseppe; Di Vito, Mauro; Isaia, Roberto; Macedonio, Giovanni

    2017-04-01

    We present the results of a statistical study on tephra dispersion in the case of reactivation of the Campi Flegrei volcano. We considered the full spectrum of possible eruptions, in terms of size and position of eruptive vents. To represent the spectrum of possible eruptive sizes, four classes of eruptions were considered. Of those, only three are explosive (small, medium, and large) and can produce a significant quantity of volcanic ash. Hazard assessments are made through dispersion simulations of ash and lapilli, considering the full variability of winds, eruptive vents, and eruptive sizes. The results are presented in the form of four families of hazard curves conditional on the occurrence of an eruption: 1) small eruptive size from any vent; 2) medium eruptive size from any vent; 3) large eruptive size from any vent; 4) any size from any vent. The epistemic uncertainty (i.e. associated with the level of scientific knowledge of phenomena) on the estimation of hazard curves was quantified making use of alternative scientifically acceptable approaches. The choice of such alternative models was made after a comprehensive sensitivity analysis which considered different weather databases, alternative modelling of the possible opening of eruptive vents, tephra total grain-size distributions (TGSD), relative mass of fine particles, and the effect of aggregation. The results of these sensitivity analyses show that the dominant uncertainty is related to the choice of TGSD, mass of fine ash, and potential effects of ash aggregation. The latter is particularly relevant in the case of magma-water interaction during an eruptive phase, when most of the fine ash can form accretionary lapilli that could contribute significantly to increasing the tephra load in the proximal region. The variability induced by the use of different weather databases is relatively insignificant. The hazard curves, together with the quantification of epistemic uncertainty, were finally calculated through a statistical model based on ensemble mixing of selected alternative models, e.g. different choices on the estimate of the total erupted mass, mass of fine ash, effects of aggregation, etc. Hazard and probability maps were produced at different confidence levels with respect to the epistemic uncertainty (mean, median, 16th percentile, and 84th percentile).

  6. Evaluating uncertainty in environmental life-cycle assessment. A case study comparing two insulation options for a Dutch one-family dwelling.

    PubMed

    Huijbregts, Mark A J; Gilijamse, Wim; Ragas, Ad M J; Reijnders, Lucas

    2003-06-01

    The evaluation of uncertainty is relatively new in environmental life-cycle assessment (LCA). It provides useful information to assess the reliability of LCA-based decisions and to guide future research toward reducing uncertainty. Most uncertainty studies in LCA quantify only one type of uncertainty, i.e., uncertainty due to input data (parameter uncertainty). However, LCA outcomes can also be uncertain due to normative choices (scenario uncertainty) and the mathematical models involved (model uncertainty). The present paper outlines a new methodology that quantifies parameter, scenario, and model uncertainty simultaneously in environmental life-cycle assessment. The procedure is illustrated in a case study that compares two insulation options for a Dutch one-family dwelling. Parameter uncertainty was quantified by means of Monte Carlo simulation. Scenario and model uncertainty were quantified by resampling different decision scenarios and model formulations, respectively. Although scenario and model uncertainty were not quantified comprehensively, the results indicate that both types of uncertainty influence the case study outcomes. This stresses the importance of quantifying parameter, scenario, and model uncertainty simultaneously. The two insulation options studied were found to have significantly different impact scores for global warming, stratospheric ozone depletion, and eutrophication. The thickest insulation option has the lowest impact on global warming and eutrophication, and the highest impact on stratospheric ozone depletion.
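
    A toy version of quantifying parameter, scenario, and model uncertainty simultaneously is sketched below; the emission inputs, characterisation-factor alternatives, and allocation scenarios are invented for illustration and are not the case-study data.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n = 10_000

    # Toy LCA-style impact score = sum(emission_i * characterisation_factor_i).
    # Parameter uncertainty: lognormal inputs; scenario/model uncertainty:
    # resample discrete alternatives (factor set, allocation rule).
    emissions = rng.lognormal(mean=np.log([10.0, 2.0]), sigma=[0.2, 0.4], size=(n, 2))
    cf_sets   = np.array([[1.0, 25.0],     # model/factor alternative A
                          [1.0, 28.0]])    # model/factor alternative B
    alloc     = np.array([1.00, 0.85])     # scenario alternatives (allocation rule)

    model_idx    = rng.integers(0, len(cf_sets), n)
    scenario_idx = rng.integers(0, len(alloc), n)
    score = (emissions * cf_sets[model_idx]).sum(axis=1) * alloc[scenario_idx]

    print("impact score median %.1f, 95%% interval [%.1f, %.1f]" % (
        np.median(score), *np.percentile(score, [2.5, 97.5])))
    ```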

  7. Mass Uncertainty and Application For Space Systems

    NASA Technical Reports Server (NTRS)

    Beech, Geoffrey

    2013-01-01

    Expected development maturity under contract (spec) should correlate with the Project/Program-approved MGA depletion schedule in the Mass Properties Control Plan. If the specification is an NTE (not-to-exceed) value, MGA is inclusive of Actual MGA (A5 & A6). If the specification is not an NTE value (e.g. nominal), then MGA values are reduced by A5 values and A5 is representative of the remaining uncertainty. Basic Mass = Engineering Estimate based on design and construction principles with NO embedded margin. MGA Mass = Basic Mass * assessed % from the approved MGA schedule. Predicted Mass = Basic + MGA. Aggregate MGA % = (Aggregate Predicted - Aggregate Basic) / Aggregate Basic.
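
    The mass relations quoted above can be written directly as code; the subsystem basic masses and MGA percentages below are hypothetical.

    ```python
    def predicted_mass(basic_kg: float, mga_percent: float) -> float:
        """Predicted Mass = Basic Mass + MGA, with MGA = Basic * assessed %."""
        return basic_kg * (1.0 + mga_percent / 100.0)

    def aggregate_mga_percent(basic_masses, mga_percents) -> float:
        """Aggregate MGA % = (Aggregate Predicted - Aggregate Basic) / Aggregate Basic."""
        agg_basic = sum(basic_masses)
        agg_pred = sum(predicted_mass(b, p) for b, p in zip(basic_masses, mga_percents))
        return 100.0 * (agg_pred - agg_basic) / agg_basic

    # Hypothetical subsystem basic masses (kg) and assessed MGA percentages.
    basics = [120.0, 340.0, 75.0]
    mgas   = [15.0, 8.0, 25.0]
    print(f"aggregate MGA = {aggregate_mga_percent(basics, mgas):.1f}%")
    ```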

  8. The rules of information aggregation and emergence of collective intelligent behavior.

    PubMed

    Bettencourt, Luís M A

    2009-10-01

    Information is a peculiar quantity. Unlike matter and energy, which are conserved by the laws of physics, the aggregation of knowledge from many sources can in fact produce more information (synergy) or less (redundancy) than the sum of its parts. This feature can endow groups with problem-solving strategies that are superior to those possible among noninteracting individuals and, in turn, may provide a selection drive toward collective cooperation and coordination. Here we explore the formal properties of information aggregation as a general principle for explaining features of social organization. We quantify information in terms of the general formalism of information theory, which also prescribes the rules of how different pieces of evidence inform the solution of a given problem. We then show how several canonical examples of collective cognition and coordination can be understood through principles of minimization of uncertainty (maximization of predictability) under information pooling over many individuals. We discuss in some detail how collective coordination in swarms, markets, natural language processing, and collaborative filtering may be guided by the optimal aggregation of information in social collectives. We also identify circumstances when these processes fail, leading, for example, to inefficient markets. The contrast to approaches to understand coordination and collaboration via decision and game theory is also briefly discussed. Copyright © 2009 Cognitive Science Society, Inc.

  9. Drivers and uncertainties of forecasted range shifts for warm-water fishes under climate and land cover change

    USGS Publications Warehouse

    Bouska, Kristen; Whitledge, Gregory W.; Lant, Christopher; Schoof, Justin

    2018-01-01

    Land cover is an important determinant of aquatic habitat and is projected to shift with climate changes, yet climate-driven land cover changes are rarely factored into climate assessments. To quantify impacts and uncertainty of coupled climate and land cover change on warm-water fish species’ distributions, we used an ensemble model approach to project distributions of 14 species. For each species, current range projections were compared to 27 scenario-based projections and aggregated to visualize uncertainty. Multiple regression and model selection techniques were used to identify drivers of range change. Novel, or no-analogue, climates were assessed to evaluate transferability of models. Changes in total probability of occurrence ranged widely across species, from a 63% increase to a 65% decrease. Distributional gains and losses were largely driven by temperature and flow variables and underscore the importance of habitat heterogeneity and connectivity to facilitate adaptation to changing conditions. Finally, novel climate conditions were driven by mean annual maximum temperature, which stresses the importance of understanding the role of temperature on fish physiology and the role of temperature-mitigating management practices.

  10. An extension of the Saltykov method to quantify 3D grain size distributions in mylonites

    NASA Astrophysics Data System (ADS)

    Lopez-Sanchez, Marco A.; Llana-Fúnez, Sergio

    2016-12-01

    The estimation of 3D grain size distributions (GSDs) in mylonites is key to understanding the rheological properties of crystalline aggregates and to constraining dynamic recrystallization models. This paper investigates whether a common stereological method, the Saltykov method, is appropriate for the study of GSDs in mylonites. In addition, we present a new stereological method, named the two-step method, which estimates a lognormal probability density function describing the 3D GSD. Both methods are tested for reproducibility and accuracy using natural and synthetic data sets. The main conclusion is that both methods are accurate and simple enough to be systematically used in recrystallized aggregates with near-equant grains. The Saltykov method is particularly suitable for estimating the volume percentage of particular grain-size fractions with an absolute uncertainty of ±5 in the estimates. The two-step method is suitable for quantifying the shape of the actual 3D GSD in recrystallized rocks using a single value, the multiplicative standard deviation (MSD) parameter, and providing a precision in the estimate typically better than 5%. The novel method provides a MSD value in recrystallized quartz that differs from previous estimates based on apparent 2D GSDs, highlighting the inconvenience of using apparent GSDs for such tasks.

  11. Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application

    NASA Astrophysics Data System (ADS)

    Chen, Jinduan; Boccelli, Dominic L.

    2018-02-01

    Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations in both total system demands and regionally aggregated demands at a scale that reflects demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties, with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate with real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.

  12. A Regional CO2 Observing System Simulation Experiment Using ASCENDS Observations and WRF-STILT Footprints

    NASA Technical Reports Server (NTRS)

    Wang, James S.; Kawa, S. Randolph; Eluszkiewicz, Janusz; Collatz, G. J.; Mountain, Marikate; Henderson, John; Nehrkorn, Thomas; Aschbrenner, Ryan; Zaccheo, T. Scott

    2012-01-01

    Knowledge of the spatiotemporal variations in emissions and uptake of CO2 is hampered by sparse measurements. The recent advent of satellite measurements of CO2 concentrations is increasing the density of measurements, and the future mission ASCENDS (Active Sensing of CO2 Emissions over Nights, Days and Seasons) will provide even greater coverage and precision. Lagrangian atmospheric transport models run backward in time can quantify surface influences ("footprints") of diverse measurement platforms and are particularly well suited for inverse estimation of regional surface CO2 fluxes at high resolution based on satellite observations. We utilize the STILT Lagrangian particle dispersion model, driven by WRF meteorological fields at 40-km resolution, in a Bayesian synthesis inversion approach to quantify the ability of ASCENDS column CO2 observations to constrain fluxes at high resolution. This study focuses on land-based biospheric fluxes, whose uncertainties are especially large, in a domain encompassing North America. We present results based on realistic input fields for 2007. Pseudo-observation random errors are estimated from backscatter and optical depth measured by the CALIPSO satellite. We estimate a priori flux uncertainties based on output from the CASA-GFED (v.3) biosphere model and make simple assumptions about spatial and temporal error correlations. WRF-STILT footprints are convolved with candidate vertical weighting functions for ASCENDS. We find that at a horizontal flux resolution of 1 degree x 1 degree, ASCENDS observations are potentially able to reduce average weekly flux uncertainties by 0-8% in July, and 0-0.5% in January (assuming an error of 0.5 ppm at the Railroad Valley reference site). Aggregated to coarser resolutions, e.g. 5 degrees x 5 degrees, the uncertainty reductions are larger and more similar to those estimated in previous satellite data observing system simulation experiments.
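
    The reported uncertainty reductions follow from the standard Bayesian synthesis inversion machinery; in generic notation (H for the footprint/observation operator, R for the observation error covariance, S_prior for the prior flux error covariance, none of which are symbols used in the abstract itself), the posterior flux error covariance and the per-element uncertainty reduction are

        \hat{S}_{\mathrm{post}} = \left( \mathbf{H}^{\mathsf{T}} \mathbf{R}^{-1} \mathbf{H} + \mathbf{S}_{\mathrm{prior}}^{-1} \right)^{-1},
        \qquad
        \mathrm{UR}_i = 1 - \sqrt{(\hat{S}_{\mathrm{post}})_{ii}} \, \big/ \, \sqrt{(\mathbf{S}_{\mathrm{prior}})_{ii}} .

    Aggregating fluxes to coarser grids typically yields larger uncertainty reductions, in part because the inversion introduces negative error correlations between neighboring cells, so errors partially cancel in the aggregate; this is consistent with the larger reductions reported at 5 degrees x 5 degrees.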

  13. Quantifying Transmission Heterogeneity Using Both Pathogen Phylogenies and Incidence Time Series

    PubMed Central

    Li, Lucy M.; Grassly, Nicholas C.; Fraser, Christophe

    2017-01-01

    Heterogeneity in individual-level transmissibility can be quantified by the dispersion parameter k of the offspring distribution. Quantifying heterogeneity is important as it affects other parameter estimates, it modulates the degree of unpredictability of an epidemic, and it needs to be accounted for in models of infection control. Aggregated data such as incidence time series are often not sufficiently informative to estimate k. Incorporating phylogenetic analysis can help to estimate k concurrently with other epidemiological parameters. We have developed an inference framework that uses particle Markov Chain Monte Carlo to estimate k and other epidemiological parameters using both incidence time series and the pathogen phylogeny. Using the framework to fit a modified compartmental transmission model that includes the parameter k to simulated data, we found that more accurate and less biased estimates of the reproductive number were obtained by combining epidemiological and phylogenetic analyses. However, k was most accurately estimated using pathogen phylogeny alone. Accurately estimating k was necessary for unbiased estimates of the reproductive number, but it did not affect the accuracy of reporting probability and epidemic start date estimates. We further demonstrated that inference was possible in the presence of phylogenetic uncertainty by sampling from the posterior distribution of phylogenies. Finally, we used the inference framework to estimate transmission parameters from epidemiological and genetic data collected during a poliovirus outbreak. Despite the large degree of phylogenetic uncertainty, we demonstrated that incorporating phylogenetic data in parameter inference improved the accuracy and precision of estimates. PMID:28981709
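
    A common way to encode the dispersion parameter k (a generic formulation, not necessarily the exact model used above) is a negative binomial offspring distribution with mean R and variance R(1 + R/k), conveniently simulated as a gamma-Poisson mixture; the sketch below shows how small k concentrates transmission in a few individuals.

        # Sketch: negative binomial offspring counts via a gamma-Poisson mixture.
        import numpy as np

        def offspring(R, k, n_cases, rng):
            rates = rng.gamma(shape=k, scale=R / k, size=n_cases)   # individual infectiousness
            return rng.poisson(rates)                               # secondary cases per case

        rng = np.random.default_rng(1)
        for k in (0.1, 0.5, 2.0):
            z = offspring(R=1.5, k=k, n_cases=100_000, rng=rng)
            top20 = np.sort(z)[::-1][: z.size // 5].sum() / z.sum()
            print(f"k={k}: mean offspring {z.mean():.2f}, "
                  f"share of transmission from the top 20% of cases = {top20:.0%}")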

  14. Uncertainty aggregation and reduction in structure-material performance prediction

    NASA Astrophysics Data System (ADS)

    Hu, Zhen; Mahadevan, Sankaran; Ao, Dan

    2018-02-01

    An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.

  15. Implications of uncertainty on regional CO2 mitigation policies for the U.S. onroad sector based on a high-resolution emissions estimate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendoza, D.; Gurney, Kevin R.; Geethakumar, Sarath

    2013-04-01

    In this study we present onroad fossil fuel CO2 emissions estimated by the Vulcan Project, an effort quantifying fossil fuel CO2 emissions for the U.S. in high spatial and temporal resolution. This high-resolution data, aggregated at the state-level and classified in broad road and vehicle type categories, is compared to a commonly used national-average approach. We find that the use of national averages incurs state-level biases for road groupings that are almost twice as large as for vehicle groupings. The uncertainty for all groups exceeds the bias, and both quantities are positively correlated with total state emissions. States with the largest emissions totals are typically similar to one another in terms of emissions fraction distribution across road and vehicle groups, while smaller-emitting states have a wider range of variation in all groups. Errors in reduction estimates as large as ±60% corresponding to ±0.2 MtC are found for a national-average emissions mitigation strategy focused on a 10% emissions reduction from a single vehicle class, such as passenger gas vehicles or heavy diesel trucks. Recommendations are made for reducing CO2 emissions uncertainty by addressing its main drivers: VMT and fuel efficiency uncertainty.

  16. Regional crop yield forecasting: a probabilistic approach

    NASA Astrophysics Data System (ADS)

    de Wit, A.; van Diepen, K.; Boogaard, H.

    2009-04-01

    Information on the outlook on yield and production of crops over large regions is essential for government services dealing with import and export of food crops, for agencies with a role in food relief, for international organizations with a mandate in monitoring the world food production and trade, and for commodity traders. Process-based mechanistic crop models are an important tool for providing such information, because they can integrate the effect of crop management, weather and soil on crop growth. When properly integrated in a yield forecasting system, the aggregated model output can be used to predict crop yield and production at regional, national and continental scales. Nevertheless, given the scales at which these models operate, the results are subject to large uncertainties due to poorly known weather conditions and crop management. Current yield forecasting systems are generally deterministic in nature and provide no information about the uncertainty bounds on their output. To improve on this situation we present an ensemble-based approach where uncertainty bounds can be derived from the dispersion of results in the ensemble. The probabilistic information provided by this ensemble-based system can be used to quantify uncertainties (risk) on regional crop yield forecasts and can therefore be an important support to quantitative risk analysis in a decision making process.

  17. How should epistemic uncertainty in modelling water resources management problems shape evaluations of their operations?

    NASA Astrophysics Data System (ADS)

    Dobson, B.; Pianosi, F.; Reed, P. M.; Wagener, T.

    2017-12-01

    In previous work, we have found that water supply companies are typically hesitant to use reservoir operation tools to inform their release decisions. We believe that this is, in part, due to a lack of faith in the fidelity of the optimization exercise with regards to its ability to represent the real world. In an attempt to quantify this, recent literature has studied the impact on performance from uncertainty arising in: forcing (e.g. reservoir inflows), parameters (e.g. parameters for the estimation of evaporation rate) and objectives (e.g. worst first percentile or worst case). We suggest that there is also epistemic uncertainty in the choices made during model creation, for example in the formulation of an evaporation model or aggregating regional storages. We create `rival framings' (a methodology originally developed to demonstrate the impact of uncertainty arising from alternate objective formulations), each with different modelling choices, and determine their performance impacts. We identify the Pareto approximate set of policies for several candidate formulations and then make them compete with one another in a large ensemble re-evaluation in each other's modelled spaces. This enables us to distinguish the impacts of different structural changes in the model used to evaluate system performance in an effort to generalize the validity of the optimized performance expectations.

  18. Aggregate R-R-V Analysis

    EPA Pesticide Factsheets

    The Excel file contains time series data of flow rates and concentrations of alachlor, atrazine, ammonia, total phosphorus, and total suspended solids observed in two watersheds in Indiana from 2002 to 2007. The aggregate time series representative of all these parameters was obtained using a specialized, data-driven technique. The aggregate data are hypothesized in the published paper to represent the overall health of both watersheds with respect to various potential water quality impairments. The time series data for each of the individual water quality parameters were used to compute the corresponding risk measures (reliability, resilience, and vulnerability; Rel, Res, and Vul) reported in Tables 4 and 5. The aggregated risk measures, computed from the aggregate time series and the water quality standards in Table 1, are also reported in Tables 4 and 5 of the published paper. Values under the column heading 'uncertainty' report uncertainties associated with the reconstruction of missing records of the water quality parameters. Long-term records of the water quality parameters were reconstructed in order to estimate the R-R-V and corresponding aggregate risk measures. This dataset is associated with the following publication: Hoque, Y., S. Tripathi, M. Hantush, and R. Govindaraju. Aggregate Measures of Watershed Health from Reconstructed Water Quality Data with Uncertainty. Journal of Environmental Quality, American Society of Agronomy, Madison, WI.
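
    The reliability-resilience-vulnerability (R-R-V) measures referenced above are commonly computed along the lines of the sketch below (illustrative synthetic series and standard, not necessarily the paper's exact formulations): reliability is the fraction of time in compliance, resilience is the chance a failure is followed by recovery, and vulnerability is the mean exceedance during failure.

        # Sketch: Rel, Res, Vul of a water quality series against a standard
        # (here, exceeding the standard counts as failure).
        import numpy as np

        def rrv(series, standard):
            fail = series > standard
            rel = 1.0 - fail.mean()                        # fraction of time in compliance
            recoveries = np.sum(fail[:-1] & ~fail[1:])     # failure followed by compliance
            res = recoveries / fail[:-1].sum() if fail[:-1].any() else 1.0
            vul = (series[fail] - standard).mean() if fail.any() else 0.0
            return rel, res, vul

        rng = np.random.default_rng(0)
        conc = rng.gamma(shape=2.0, scale=1.5, size=2000)  # synthetic concentration record
        print(rrv(conc, standard=6.0))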

  19. Evaluation of Uncertainties in Measuring Particulate Matter Emission Factors from Atmospheric Fugitive Sources Using Optical Remote Sensing

    NASA Astrophysics Data System (ADS)

    Yuen, W.; Ma, Q.; Du, K.; Koloutsou-Vakakis, S.; Rood, M. J.

    2015-12-01

    Measurements of particulate matter (PM) emissions generated from fugitive sources are of interest in air pollution studies, since such emissions vary widely both spatially and temporally. This research focuses on determining the uncertainties in quantifying fugitive PM emission factors (EFs) generated from mobile vehicles using a vertical scanning micro-pulse lidar (MPL). The goal of this research is to identify the greatest sources of uncertainty of the applied lidar technique in determining fugitive PM EFs, and to recommend methods to reduce the uncertainties in this measurement. The MPL detects the PM plume generated by mobile fugitive sources that are carried downwind to the MPL's vertical scanning plane. Range-resolved MPL signals are measured, corrected, and converted to light extinction coefficients, through inversion of the lidar equation and calculation of the lidar ratio. In this research, both the near-end and far-end lidar equation inversion methods are considered. Range-resolved PM mass concentrations are then determined from the extinction coefficient measurements using the measured mass extinction efficiency (MEE) value, which is an intensive PM property. MEE is determined by collocated PM mass concentration and light extinction measurements, provided respectively by a DustTrak and an open-path laser transmissometer. These PM mass concentrations are then integrated with wind information, duration of plume event, and vehicle distance travelled to obtain fugitive PM EFs. To obtain the uncertainty of PM EFs, uncertainties in MPL signals, lidar ratio, MEE, and wind variation are considered. Error propagation method is applied to each of the above intermediate steps to aggregate uncertainty sources. Results include determination of uncertainties in each intermediate step, and comparison of uncertainties between the use of near-end and far-end lidar equation inversion methods.
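
    Because the emission factor is, in effect, a product and quotient of the measured quantities, a first-order propagation with independent error sources adds relative uncertainties in quadrature; schematically, with generic symbols of our choosing (alpha for the retrieved extinction coefficient, MEE for the mass extinction efficiency, u for the wind speed, and d for the vehicle distance travelled):

        \left( \frac{\sigma_{EF}}{EF} \right)^{2} \approx
        \left( \frac{\sigma_{\alpha}}{\alpha} \right)^{2} +
        \left( \frac{\sigma_{MEE}}{MEE} \right)^{2} +
        \left( \frac{\sigma_{u}}{u} \right)^{2} +
        \left( \frac{\sigma_{d}}{d} \right)^{2}

    where the lidar-ratio and inversion-method contributions enter through the extinction term; correlated error sources would add covariance cross-terms.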

  20. A Retrospective Analysis of the Benefits and Impacts of U.S. Renewable Portfolio Standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiser, Ryan H.; Barbose, Galen; Heeter, Jenny

    This report, the second in the series, analyzes historical benefits and impacts of all state RPS policies, in aggregate, employing a consistent and well-vetted set of methods and data sets. The analysis focuses on three specific benefits: greenhouse gas emissions, air pollution, and water use. It also analyzes three other impacts: gross job additions, wholesale electricity market price suppression, and natural gas price suppression. These are an important subset, but by no means a comprehensive set, of all possible effects associated with RPS policies. These benefits and impacts are also subject to many uncertainties, which are described and, to the extent possible, quantified within the report.

  1. Aiding alternatives assessment with an uncertainty-tolerant hazard scoring method.

    PubMed

    Faludi, Jeremy; Hoang, Tina; Gorman, Patrick; Mulvihill, Martin

    2016-11-01

    This research developed a single-score system to simplify and clarify decision-making in chemical alternatives assessment, accounting for uncertainty. Today, assessing alternatives to hazardous constituent chemicals is a difficult task-rather than comparing alternatives by a single definitive score, many independent toxicological variables must be considered at once, and data gaps are rampant. Thus, most hazard assessments are only comprehensible to toxicologists, but business leaders and politicians need simple scores to make decisions. In addition, they must balance hazard against other considerations, such as product functionality, and they must be aware of the high degrees of uncertainty in chemical hazard data. This research proposes a transparent, reproducible method to translate eighteen hazard endpoints into a simple numeric score with quantified uncertainty, alongside a similar product functionality score, to aid decisions between alternative products. The scoring method uses Clean Production Action's GreenScreen as a guide, but with a different method of score aggregation. It provides finer differentiation between scores than GreenScreen's four-point scale, and it displays uncertainty quantitatively in the final score. Displaying uncertainty also illustrates which alternatives are early in product development versus well-defined commercial products. This paper tested the proposed assessment method through a case study in the building industry, assessing alternatives to spray polyurethane foam insulation containing methylene diphenyl diisocyanate (MDI). The new hazard scoring method successfully identified trade-offs between different alternatives, showing finer resolution than GreenScreen Benchmarking. Sensitivity analysis showed that different weighting schemes in hazard scores had almost no effect on alternatives ranking, compared to uncertainty from data gaps. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Adjoint-Based Uncertainty Quantification with MCNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seifried, Jeffrey E.

    2011-09-01

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
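
    The nuclear-data contribution to the uncertainty in a figure of merit k is obtained from adjoint-based sensitivities through the usual first-order "sandwich" rule, sketched here in generic notation (not the report's exact expressions):

        \left( \frac{\Delta k}{k} \right)^{2} \approx \mathbf{S}^{\mathsf{T}} \, \mathbf{C} \, \mathbf{S},
        \qquad
        S_i = \frac{\sigma_i}{k} \frac{\partial k}{\partial \sigma_i},

    where S collects the relative sensitivities of k to the nuclear data parameters sigma_i and C is their relative covariance matrix; the statistical counting component adds in quadrature.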

  3. Quantitative diagnosis and prognosis framework for concrete degradation due to alkali-silica reaction

    NASA Astrophysics Data System (ADS)

    Mahadevan, Sankaran; Neal, Kyle; Nath, Paromita; Bao, Yanqing; Cai, Guowei; Orme, Peter; Adams, Douglas; Agarwal, Vivek

    2017-02-01

    This research is seeking to develop a probabilistic framework for health diagnosis and prognosis of aging concrete structures in nuclear power plants that are subjected to physical, chemical, environment, and mechanical degradation. The proposed framework consists of four elements: monitoring, data analytics, uncertainty quantification, and prognosis. The current work focuses on degradation caused by ASR (alkali-silica reaction). Controlled concrete specimens with reactive aggregate are prepared to develop accelerated ASR degradation. Different monitoring techniques — infrared thermography, digital image correlation (DIC), mechanical deformation measurements, nonlinear impact resonance acoustic spectroscopy (NIRAS), and vibro-acoustic modulation (VAM) — are studied for ASR diagnosis of the specimens. Both DIC and mechanical measurements record the specimen deformation caused by ASR gel expansion. Thermography is used to compare the thermal response of pristine and damaged concrete specimens and generate a 2-D map of the damage (i.e., ASR gel and cracked area), thus facilitating localization and quantification of damage. NIRAS and VAM are two separate vibration-based techniques that detect nonlinear changes in dynamic properties caused by the damage. The diagnosis results from multiple techniques are then fused using a Bayesian network, which also helps to quantify the uncertainty in the diagnosis. Prognosis of ASR degradation is then performed based on the current state of degradation obtained from diagnosis, by using a coupled thermo-hydro-mechanical-chemical (THMC) model for ASR degradation. This comprehensive approach of monitoring, data analytics, and uncertainty-quantified diagnosis and prognosis will facilitate the development of a quantitative, risk informed framework that will support continuous assessment and risk management of structural health and performance.

  4. Aggregating Hydrometeorological Data from International Monitoring Networks Across Earth's Largest Lake System to Quantify Uncertainty in Historical Water Budget Records, Improve Regional Water Budget Projections, and Differentiate Drivers Behind a Recent Record-Setting Surge in Water Levels

    NASA Astrophysics Data System (ADS)

    Gronewold, A.; Bruxer, J.; Smith, J.; Hunter, T.; Fortin, V.; Clites, A. H.; Durnford, D.; Qian, S.; Seglenieks, F.

    2015-12-01

    Resolving and projecting the water budget of the North American Great Lakes basin (Earth's largest lake system) requires aggregation of data from a complex array of in situ monitoring and remote sensing products that cross an international border (leading to potential sources of bias and other inconsistencies), and are relatively sparse over the surfaces of the lakes themselves. Data scarcity over the surfaces of the lakes is a particularly significant problem because, unlike Earth's other large freshwater basins, the Great Lakes basin water budget is (on annual scales) comprised of relatively equal contributions from runoff, over-lake precipitation, and over-lake evaporation. Consequently, understanding drivers behind changes in regional water storage and water levels requires a data management framework that can reconcile uncertainties associated with data scarcity and bias, and propagate those uncertainties into regional water budget projections and historical records. Here, we assess the development of a historical hydrometeorological database for the entire Great Lakes basin with records dating back to the late 1800s, and describe improvements that are specifically intended to differentiate hydrological, climatological, and anthropogenic drivers behind recent extreme changes in Great Lakes water levels. Our assessment includes a detailed analysis of the extent to which extreme cold winters in central North America in 2013-2014 (caused by the anomalous meridional upper air flow - commonly referred to in the public media as the "polar vortex" phenomenon) altered the thermal and hydrologic regimes of the Great Lakes and led to a record setting surge in water levels between January 2014 and December 2015.
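
    A sketch of the lake-scale balance the database is intended to close, for each lake and time step (generic notation; the term names are ours, not the database's):

        \Delta S \;=\; P \;+\; R \;-\; E \;+\; Q_{\mathrm{in}} \;-\; Q_{\mathrm{out}} \;+\; D \;+\; \varepsilon,

    where Delta S is the change in storage inferred from water levels, P is over-lake precipitation, R is lateral runoff, E is over-lake evaporation, Q_in and Q_out are connecting-channel inflow and outflow, D lumps diversions and consumptive use, and epsilon is a residual absorbing measurement bias and unresolved terms; propagating component uncertainties into epsilon is what helps separate hydrological, climatological, and anthropogenic drivers of a water-level surge.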

  5. Quantifying uncertainty in health impact assessment: a case-study example on indoor housing ventilation.

    PubMed

    Mesa-Frias, Marco; Chalabi, Zaid; Foss, Anna M

    2014-01-01

    Quantitative health impact assessment (HIA) is increasingly being used to assess the health impacts attributable to an environmental policy or intervention. As a consequence, there is a need to assess uncertainties in the assessments because of the uncertainty in the HIA models. In this paper, a framework is developed to quantify the uncertainty in the health impacts of environmental interventions and is applied to evaluate the impacts of poor housing ventilation. The paper describes the development of the framework through three steps: (i) selecting the relevant exposure metric and quantifying the evidence of potential health effects of the exposure; (ii) estimating the size of the population affected by the exposure and selecting the associated outcome measure; (iii) quantifying the health impact and its uncertainty. The framework introduces a novel application for the propagation of uncertainty in HIA, based on fuzzy set theory. Fuzzy sets are used to propagate parametric uncertainty in a non-probabilistic space and are applied to calculate the uncertainty in the morbidity burdens associated with three indoor ventilation exposure scenarios: poor, fair and adequate. The case-study example demonstrates how the framework can be used in practice, to quantify the uncertainty in health impact assessment where there is insufficient information to carry out a probabilistic uncertainty analysis. © 2013.
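
    A minimal sketch of the fuzzy-set propagation idea, using hypothetical triangular inputs and a simple monotone impact model rather than the paper's HIA model: each parameter is a triangular fuzzy number, and alpha-cuts are propagated with interval arithmetic.

        # Sketch: alpha-cut propagation of triangular fuzzy numbers through
        # impact = population * exposure_prevalence * risk_increment (monotone increasing).
        import numpy as np

        def alpha_cut(tri, alpha):
            a, m, b = tri                        # (min, most likely, max)
            return a + alpha * (m - a), b - alpha * (b - m)

        population     = (90_000, 100_000, 110_000)   # hypothetical fuzzy inputs
        prevalence     = (0.10, 0.15, 0.25)
        risk_increment = (0.005, 0.010, 0.020)

        for alpha in (0.0, 0.5, 1.0):
            lows, highs = zip(*(alpha_cut(t, alpha) for t in
                                (population, prevalence, risk_increment)))
            lo, hi = np.prod(lows), np.prod(highs)   # endpoints map to endpoints (monotone model)
            print(f"alpha={alpha:.1f}: burden between {lo:.0f} and {hi:.0f} cases")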

  6. Coseismic slip variation assessed from terrestrial lidar scans of the El Mayor-Cucapah surface rupture

    NASA Astrophysics Data System (ADS)

    Gold, Peter O.; Oskin, Michael E.; Elliott, Austin J.; Hinojosa-Corona, Alejandro; Taylor, Michael H.; Kreylos, Oliver; Cowgill, Eric

    2013-03-01

    We analyze high-resolution (>10³ points/m²) terrestrial lidar surveys of the 4 April 2010 El Mayor-Cucapah earthquake rupture (Baja California, Mexico), collected at three sites 12-18 days after the event. Using point cloud-based tools in an immersive visualization environment, we quantify coseismic fault slip for hundreds of meters along strike and construct densely constrained along-strike slip distributions from measurements of offset landforms. Uncertainty bounds for each offset, determined empirically by repeatedly measuring offsets at each site sequentially, illuminate measurement uncertainties that are difficult to quantify in the field. These uncertainties are used to define length scales over which variability in slip distributions may be assumed to reflect either recognizable earthquake mechanisms or measurement noise. At two sites characterized by 2-3 m of concentrated right-oblique slip, repeat measurements yield 2σ uncertainties of ±11-12%. Each site encompasses ∼200 m along strike, and a smoothed linear slip gradient satisfies all measurement distributions, implying along-fault strains of ∼10⁻³. Conversely, the common practice of defining the slip curve by the local slip maxima distorts the curve, overestimates along-fault strain, and may overestimate actual fault slip by favoring measurements with large, positive, uncertainties. At a third site characterized by 1-2.5 m of diffuse normal slip, repeat measurements of fault throw summed along fault-perpendicular profiles yield 2σ uncertainties of ±17%. Here, a low order polynomial fit through the measurement averages best approximates surface slip. However independent measurements of off-fault strain accommodated by hanging wall flexure suggest that over the ∼200 m length of this site, a linear interpolation through the average values for the slip maxima at either end of this site most accurately represents subsurface displacement. In aggregate, these datasets show that given uncertainties of greater than ±11% (2σ), slip distributions over shorter scales are likely to be less uneven than those derived from a single set of field- or lidar-based measurements. This suggests that the relatively smooth slip curves we obtain over ∼10² m distances reflect real physical phenomena, whereas short wavelength variability over ∼10⁰-10¹ m distances can be attributed to measurement uncertainty.

  7. Evaluating platelet aggregation dynamics from laser speckle fluctuations.

    PubMed

    Hajjarian, Zeinab; Tshikudi, Diane M; Nadkarni, Seemantini K

    2017-07-01

    Platelets are key to maintaining hemostasis and impaired platelet aggregation could lead to hemorrhage or thrombosis. We report a new approach that exploits laser speckle intensity fluctuations, emanated from a drop of platelet-rich-plasma (PRP), to profile aggregation. Speckle fluctuation rate is quantified by the speckle intensity autocorrelation, g2(t), from which the aggregate size is deduced. We first apply this approach to evaluate polystyrene bead aggregation, triggered by salt. Next, we assess dose-dependent platelet aggregation and inhibition in human PRP spiked with adenosine diphosphate and clopidogrel. Additional spatio-temporal speckle analyses yield 2-dimensional maps of particle displacements to visualize platelet aggregate foci within minutes and quantify aggregation dynamics. These findings demonstrate the unique opportunity for assessing platelet health within minutes for diagnosing bleeding disorders and monitoring anti-platelet therapies.
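
    The intensity autocorrelation referred to above is conventionally computed as (a sketch; normalization conventions vary)

        g_{2}(t) \;=\; \frac{\left\langle I(t_{0})\, I(t_{0}+t) \right\rangle_{t_{0}}}{\left\langle I(t_{0}) \right\rangle_{t_{0}}^{2}},

    so a slower decay of g2(t) reflects slower speckle fluctuations and hence larger or more constrained scatterers, which is how growing platelet aggregates are tracked over time.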

  8. A Prospective Analysis of the Costs, Benefits, and Impacts of U.S. Renewable Portfolio Standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mai, Trieu; Wiser, Ryan; Barbose, Galen

    This is the third in a series of reports exploring the costs, benefits, and other impacts of state renewable portfolio standards (RPS). This report evaluates the effects of renewable electricity used to meet aggregate RPS demand growth prospectively, over the period 2015-2050, under both current RPS policies as well as a potential expansion of those policies. Relying on a well-vetted suite of methods, the report quantifies: the costs to the electric system and retail electricity price impacts; the potential societal benefits associated with reduced greenhouse gas emissions, air pollution emissions, and water use; workforce requirements and economic development effects; and consumer savings associated with reduced natural gas prices. The study quantifies these effects in both physical and monetary terms, where possible, at both national and regional levels, and characterizes key uncertainties. The two prior studies in the series have focused, instead, on the historical costs and on the historical benefits and impacts of state RPS policies.

  9. Dealing with uncertainty in landscape genetic resistance models: a case of three co-occurring marsupials.

    PubMed

    Dudaniec, Rachael Y; Worthington Wilmer, Jessica; Hanson, Jeffrey O; Warren, Matthew; Bell, Sarah; Rhodes, Jonathan R

    2016-01-01

    Landscape genetics lacks explicit methods for dealing with the uncertainty in landscape resistance estimation, which is particularly problematic when sample sizes of individuals are small. Unless uncertainty can be quantified, valuable but small data sets may be rendered unusable for conservation purposes. We offer a method to quantify uncertainty in landscape resistance estimates using multimodel inference as an improvement over single model-based inference. We illustrate the approach empirically using co-occurring, woodland-preferring Australian marsupials within a common study area: two arboreal gliders (Petaurus breviceps, and Petaurus norfolcensis) and one ground-dwelling antechinus (Antechinus flavipes). First, we use maximum-likelihood and a bootstrap procedure to identify the best-supported isolation-by-resistance model out of 56 models defined by linear and non-linear resistance functions. We then quantify uncertainty in resistance estimates by examining parameter selection probabilities from the bootstrapped data. The selection probabilities provide estimates of uncertainty in the parameters that drive the relationships between landscape features and resistance. We then validate our method for quantifying uncertainty using simulated genetic and landscape data showing that for most parameter combinations it provides sensible estimates of uncertainty. We conclude that small data sets can be informative in landscape genetic analyses provided uncertainty can be explicitly quantified. Being explicit about uncertainty in landscape genetic models will make results more interpretable and useful for conservation decision-making, where dealing with uncertainty is critical. © 2015 John Wiley & Sons Ltd.

  10. Evaluating platelet aggregation dynamics from laser speckle fluctuations

    PubMed Central

    Hajjarian, Zeinab; Tshikudi, Diane M.; Nadkarni, Seemantini K.

    2017-01-01

    Platelets are key to maintaining hemostasis and impaired platelet aggregation could lead to hemorrhage or thrombosis. We report a new approach that exploits laser speckle intensity fluctuations, emanated from a drop of platelet-rich-plasma (PRP), to profile aggregation. Speckle fluctuation rate is quantified by the speckle intensity autocorrelation, g2(t), from which the aggregate size is deduced. We first apply this approach to evaluate polystyrene bead aggregation, triggered by salt. Next, we assess dose-dependent platelet aggregation and inhibition in human PRP spiked with adenosine diphosphate and clopidogrel. Additional spatio-temporal speckle analyses yield 2-dimensional maps of particle displacements to visualize platelet aggregate foci within minutes and quantify aggregation dynamics. These findings demonstrate the unique opportunity for assessing platelet health within minutes for diagnosing bleeding disorders and monitoring anti-platelet therapies. PMID:28717586

  11. Impact of Uncertainty from Load-Based Reserves and Renewables on Dispatch Costs and Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Bowen; Maroukis, Spencer D.; Lin, Yashen

    2016-11-21

    Aggregations of controllable loads are considered to be a fast-responding, cost-efficient, and environmentally friendly candidate for power system ancillary services. Unlike conventional service providers, the potential capacity from the aggregation is highly affected by factors like ambient conditions and load usage patterns. Previous work modeled aggregations of controllable loads (such as air conditioners) as thermal batteries, which are capable of providing reserves but with uncertain capacity. A stochastic optimal power flow problem was formulated to manage this uncertainty, as well as uncertainty in renewable generation. In this paper, we explore how the types and levels of uncertainty, generation reserve costs, and controllable load capacity affect the dispatch solution, operational costs, and CO2 emissions. We also compare the results of two methods for solving the stochastic optimization problem, namely the probabilistically robust method and analytical reformulation assuming Gaussian distributions. Case studies are conducted on a modified IEEE 9-bus system with renewables, controllable loads, and congestion. We find that different types and levels of uncertainty have significant impacts on dispatch and emissions. More controllable loads and less conservative solution methodologies lead to lower costs and emissions.

  12. Quantifying model-structure- and parameter-driven uncertainties in spring wheat phenology prediction with Bayesian analysis

    DOE PAGES

    Alderman, Phillip D.; Stanfill, Bryan

    2016-10-06

    Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions for estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. Here, this study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
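
    For readers unfamiliar with the sampler named above, a generic random-walk Metropolis sketch is shown below with a toy Gaussian likelihood and a flat prior (an illustration only, not the phenology models used in the study):

        # Sketch: random-walk Metropolis for one parameter theta, returning the full
        # posterior chain rather than only a point estimate.
        import numpy as np

        def log_post(theta, data, sigma=2.0):
            return -0.5 * np.sum((data - theta) ** 2) / sigma**2   # flat prior assumed

        def metropolis(data, n_iter=20_000, step=0.5, theta0=0.0, seed=0):
            rng = np.random.default_rng(seed)
            chain = np.empty(n_iter)
            theta, lp = theta0, log_post(theta0, data)
            for i in range(n_iter):
                prop = theta + step * rng.standard_normal()
                lp_prop = log_post(prop, data)
                if np.log(rng.random()) < lp_prop - lp:            # accept/reject
                    theta, lp = prop, lp_prop
                chain[i] = theta
            return chain

        data = np.random.default_rng(1).normal(55.0, 2.0, size=30)  # e.g. days to heading
        chain = metropolis(data)
        print(np.percentile(chain[5000:], [2.5, 50.0, 97.5]))       # posterior summary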

  13. Do oil shocks predict economic policy uncertainty?

    NASA Astrophysics Data System (ADS)

    Rehman, Mobeen Ur

    2018-05-01

    Oil price fluctuations have an influential role in global economic policies for developed as well as emerging countries. I investigate the role of international oil prices, disaggregated into structural (i) oil supply shocks, (ii) aggregate demand shocks, and (iii) oil-market-specific demand shocks following the work of Kilian (2009), in a structural VAR framework applied to the economic policy uncertainty of the sampled markets. Economic policy uncertainty, due to its non-linear behavior, is modeled in a regime-switching framework with the disaggregated structural oil shocks. The results highlight that Indian, Spanish, and Japanese economic policy uncertainty responds to global oil price shocks, whereas aggregate demand shocks fail to induce any change. Oil-market-specific demand shocks are significant only for China and India in the high-volatility state.

  14. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.

  15. Traceable Calibration, Performance Metrics, and Uncertainty Estimates of Minirhizotron Digital Imagery for Fine-Root Measurements

    PubMed Central

    Roberti, Joshua A.; SanClements, Michael D.; Loescher, Henry W.; Ayres, Edward

    2014-01-01

    Even though fine-root turnover is a highly studied topic, it is often poorly understood as a result of uncertainties inherent in its sampling, e.g., quantifying spatial and temporal variability. While many methods exist to quantify fine-root turnover, use of minirhizotrons has increased over the last two decades, making sensor errors another source of uncertainty. Currently, no standardized methodology exists to test and compare minirhizotron camera capability, imagery, and performance. This paper presents a reproducible, laboratory-based method by which minirhizotron cameras can be tested and validated in a traceable manner. The performance of camera characteristics was identified and test criteria were developed: we quantified the precision of camera location for successive images, estimated the trueness and precision of each camera's ability to quantify root diameter and root color, and also assessed the influence of heat dissipation introduced by the minirhizotron cameras and electrical components. We report detailed and defensible metrology analyses that examine the performance of two commercially available minirhizotron cameras. These cameras performed differently with regard to the various test criteria and uncertainty analyses. We recommend a defensible metrology approach to quantify the performance of minirhizotron camera characteristics and determine sensor-related measurement uncertainties prior to field use. This approach is also extensible to other digital imagery technologies. In turn, these approaches facilitate a greater understanding of measurement uncertainties (signal-to-noise ratio) inherent in the camera performance and allow such uncertainties to be quantified and mitigated so that estimates of fine-root turnover can be more confidently quantified. PMID:25391023

  16. Quantifying uncertainty in Bayesian calibrated animal-to-human PBPK models with informative prior distributions

    EPA Science Inventory

    Understanding and quantifying the uncertainty of model parameters and predictions has gained more interest in recent years with the increased use of computational models in chemical risk assessment. Fully characterizing the uncertainty in risk metrics derived from linked quantita...

  17. Simple uncertainty propagation for early design phase aircraft sizing

    NASA Astrophysics Data System (ADS)

    Lenz, Annelise

    Many designers and systems analysts are aware of the uncertainty inherent in their aircraft sizing studies; however, few incorporate methods to address and quantify this uncertainty. Many aircraft design studies use semi-empirical predictors based on a historical database and contain uncertainty -- a portion of which can be measured and quantified. In cases where historical information is not available, surrogate models built from higher-fidelity analyses often provide predictors for design studies where the computational cost of directly using the high-fidelity analyses is prohibitive. These surrogate models contain uncertainty, some of which is quantifiable. However, rather than quantifying this uncertainty, many designers merely include a safety factor or design margin in the constraints to account for the variability between the predicted and actual results. This can become problematic if a designer does not estimate the amount of variability correctly, which then can result in either an "over-designed" or "under-designed" aircraft. "Under-designed" and some "over-designed" aircraft will likely require design changes late in the process and will ultimately require more time and money to create; other "over-designed" aircraft concepts may not require design changes, but could end up being more costly than necessary. Including and propagating uncertainty early in the design phase so designers can quantify some of the errors in the predictors could help mitigate the extent of this additional cost. The method proposed here seeks to provide a systematic approach for characterizing a portion of the uncertainties that designers are aware of and propagating it throughout the design process in a procedure that is easy to understand and implement. Using Monte Carlo simulations that sample from quantified distributions will allow a systems analyst to use a carpet plot-like approach to make statements like: "The aircraft is 'P'% likely to weigh 'X' lbs or less, given the uncertainties quantified" without requiring the systems analyst to have substantial knowledge of probabilistic methods. A semi-empirical sizing study of a small single-engine aircraft serves as an example of an initial version of this simple uncertainty propagation. The same approach is also applied to a variable-fidelity concept study using a NASA-developed transonic Hybrid Wing Body aircraft.
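
    A sketch of how such a statement falls out of the Monte Carlo loop, using a toy sizing closure and hypothetical input distributions rather than the study's actual predictors:

        # Sketch: propagate predictor uncertainty through a toy sizing relation and
        # read off statements of the form "P% likely to weigh X lbs or less".
        import numpy as np

        rng = np.random.default_rng(0)
        n = 50_000
        empty_weight_fraction = rng.normal(0.62, 0.03, n)   # uncertain empirical predictor
        payload = rng.triangular(750, 800, 850, n)          # lbs
        fuel    = rng.normal(300, 30, n)                    # lbs

        gross_weight = (payload + fuel) / (1.0 - empty_weight_fraction)  # toy closure

        for p in (50, 90, 95):
            w = np.percentile(gross_weight, p)
            print(f"The aircraft is {p}% likely to weigh {w:,.0f} lbs or less")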

  18. QUANTIFYING AGGREGATE CHLORPYRIFOS EXPOSURE AND DOSE TO CHILDREN USING A PHYSICALLY-BASED TWO-STAGE MONTE CARLO PROBABILISTIC MODEL

    EPA Science Inventory

    To help address the Food Quality Protection Act of 1996, a physically-based, two-stage Monte Carlo probabilistic model has been developed to quantify and analyze aggregate exposure and dose to pesticides via multiple routes and pathways. To illustrate model capabilities and ide...

  19. High-resolution space-time characterization of convective rain cells: implications on spatial aggregation and temporal sampling operated by coarser resolution instruments

    NASA Astrophysics Data System (ADS)

    Marra, Francesco; Morin, Efrat

    2017-04-01

    Forecasting the occurrence of flash floods and debris flows is fundamental to saving lives and protecting infrastructure and property. These natural hazards are generated by high-intensity convective storms, on space-time scales that cannot be properly monitored by conventional instrumentation. Consequently, a number of early-warning systems are nowadays based on remote sensing precipitation observations, e.g. from weather radars or satellites, which have proved effective in a wide range of situations. However, the uncertainty affecting rainfall estimates remains an important issue undermining the operational use of early-warning systems. The uncertainty related to remote sensing estimates results from (a) an instrumental component, intrinsic to the measurement operation, and (b) a discretization component, caused by the discretization of the continuous rainfall process. Improved understanding of these sources of uncertainty will provide crucial information to modelers and decision makers. This study aims at advancing knowledge of the (b) discretization component. To do so, we take advantage of an extremely high resolution X-Band weather radar (60 m, 1 min) recently installed in the Eastern Mediterranean. The instrument monitors a semiarid-to-arid transition area also covered by an accurate C-Band weather radar and by a relatively sparse rain gauge network (1 gauge per 450 km²). Radar quantitative precipitation estimation includes corrections reducing the errors due to ground echoes, orographic beam blockage, and attenuation of the signal in heavy rain. Intense, convection-rich flooding events that recently occurred in the area serve as study cases. We (i) describe in very high detail the spatiotemporal characteristics of the convective cores, and (ii) quantify the uncertainty due to spatial aggregation (spatial discretization) and temporal sampling (temporal discretization) operated by coarser-resolution remote sensing instruments. We show that instantaneous rain intensity decreases very steeply with distance from the core of convection, with the intensity observed at 1 km (2 km) being 10-40% (1-20%) of the core value. The use of coarser temporal resolutions leads to gaps in the observed rainfall, and even relatively high resolutions (5 min) can be affected by the problem. We conclude by providing end users with indications of the effects of the discretization component of estimation uncertainty and by suggesting viable ways to reduce it.

  20. Soil moisture data as a constraint for groundwater recharge estimation

    NASA Astrophysics Data System (ADS)

    Mathias, Simon A.; Sorensen, James P. R.; Butler, Adrian P.

    2017-09-01

    Estimating groundwater recharge rates is important for water resource management studies. Modeling approaches to forecast groundwater recharge typically require observed historic data to assist calibration. It is generally not possible to observe groundwater recharge rates directly. Therefore, in the past, much effort has been invested to record soil moisture content (SMC) data, which can be used in a water balance calculation to estimate groundwater recharge. In this context, SMC data is measured at different depths and then typically integrated with respect to depth to obtain a single set of aggregated SMC values, which are used as an estimate of the total water stored within a given soil profile. This article seeks to investigate the value of such aggregated SMC data for conditioning groundwater recharge models in this respect. A simple modeling approach is adopted, which utilizes an emulation of Richards' equation in conjunction with a soil texture pedotransfer function. The only unknown parameters are soil texture. Monte Carlo simulation is performed for four different SMC monitoring sites. The model is used to estimate both aggregated SMC and groundwater recharge. The impact of conditioning the model to the aggregated SMC data is then explored in terms of its ability to reduce the uncertainty associated with recharge estimation. Whilst uncertainty in soil texture can lead to significant uncertainty in groundwater recharge estimation, it is found that aggregated SMC is virtually insensitive to soil texture.
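
    The depth aggregation step described above amounts to integrating the volumetric SMC profile into a single storage value; a sketch with illustrative sensor depths (units assumed: volumetric water content in m3/m3, storage in mm):

        # Sketch: aggregate a measured SMC profile into one storage value (mm of water)
        # by trapezoidal integration between the shallowest and deepest sensors.
        import numpy as np

        depths_m = np.array([0.10, 0.25, 0.50, 0.75, 1.00])   # sensor depths (m)
        theta    = np.array([0.31, 0.28, 0.24, 0.22, 0.21])   # volumetric SMC (m3/m3)

        layer_thickness = np.diff(depths_m)
        layer_mean      = 0.5 * (theta[:-1] + theta[1:])
        storage_mm = np.sum(layer_mean * layer_thickness) * 1000.0

        print(f"aggregated storage between {depths_m[0]:.2f} and {depths_m[-1]:.2f} m "
              f"depth = {storage_mm:.0f} mm")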

  1. Chapter 8: Uncertainty assessment for quantifying greenhouse gas sources and sinks

    Treesearch

    Jay Breidt; Stephen M. Ogle; Wendy Powers; Coeli Hoover

    2014-01-01

    Quantifying the uncertainty of greenhouse gas (GHG) emissions and reductions from agriculture and forestry practices is an important aspect of decision-making for farmers, ranchers and forest landowners as the uncertainty range for each GHG estimate communicates our level of confidence that the estimate reflects the actual balance of GHG exchange between...

  2. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
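
    A sketch of the straight-line regression near short circuit, with ordinary least squares shown here as a simpler stand-in for the objective Bayesian fit and synthetic I-V points:

        # Sketch: fit I vs V near V = 0; the intercept estimates Isc and the fit
        # covariance gives its standard uncertainty.
        import numpy as np

        rng = np.random.default_rng(0)
        v = np.linspace(0.0, 0.08, 12)                       # volts, window near short circuit
        i = 5.20 - 0.8 * v + rng.normal(0.0, 0.003, v.size)  # amps, with measurement noise

        coeffs, cov = np.polyfit(v, i, deg=1, cov=True)      # [slope, intercept] and covariance
        isc, u_isc = coeffs[1], np.sqrt(cov[1, 1])
        print(f"Isc = {isc:.4f} A +/- {u_isc:.4f} A (k=1)")

    Widening the data window lowers this regression uncertainty, which is exactly the caution raised above: the fit uncertainty can become arbitrarily small while the model-discrepancy component remains.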

  3. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  4. Quantifying the Effects of Historical Land Cover Conversion Uncertainty on Global Carbon and Climate Estimates

    NASA Astrophysics Data System (ADS)

    Di Vittorio, A. V.; Mao, J.; Shi, X.; Chini, L.; Hurtt, G.; Collins, W. D.

    2018-01-01

    Previous studies have examined land use change as a driver of global change, but the translation of land use change into land cover conversion has been largely unconstrained. Here we quantify the effects of land cover conversion uncertainty on the global carbon and climate system using the integrated Earth System Model. Our experiments use identical land use change data and vary land cover conversions to quantify associated uncertainty in carbon and climate estimates. Land cover conversion uncertainty is large, constitutes a 5 ppmv range in estimated atmospheric CO2 in 2004, and generates carbon uncertainty that is equivalent to 80% of the net effects of CO2 and climate and 124% of the effects of nitrogen deposition during 1850-2004. Additionally, land cover uncertainty generates differences in local surface temperature of over 1°C. We conclude that future studies addressing land use, carbon, and climate need to constrain and reduce land cover conversion uncertainties.

  5. Quantifying the Effects of Historical Land Cover Conversion Uncertainty on Global Carbon and Climate Estimates

    DOE PAGES

    Di Vittorio, A. V.; Mao, J.; Shi, X.; ...

    2018-01-03

    Previous studies have examined land use change as a driver of global change, but the translation of land use change into land cover conversion has been largely unconstrained. In this paper, we quantify the effects of land cover conversion uncertainty on the global carbon and climate system using the integrated Earth System Model. Our experiments use identical land use change data and vary land cover conversions to quantify associated uncertainty in carbon and climate estimates. Land cover conversion uncertainty is large, constitutes a 5 ppmv range in estimated atmospheric CO2 in 2004, and generates carbon uncertainty that is equivalent to 80% of the net effects of CO2 and climate and 124% of the effects of nitrogen deposition during 1850–2004. Additionally, land cover uncertainty generates differences in local surface temperature of over 1°C. Finally, we conclude that future studies addressing land use, carbon, and climate need to constrain and reduce land cover conversion uncertainties.

  6. Quantifying the Effects of Historical Land Cover Conversion Uncertainty on Global Carbon and Climate Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Vittorio, A. V.; Mao, J.; Shi, X.

    Previous studies have examined land use change as a driver of global change, but the translation of land use change into land cover conversion has been largely unconstrained. In this paper, we quantify the effects of land cover conversion uncertainty on the global carbon and climate system using the integrated Earth System Model. Our experiments use identical land use change data and vary land cover conversions to quantify associated uncertainty in carbon and climate estimates. Land cover conversion uncertainty is large, constitutes a 5 ppmv range in estimated atmospheric CO2 in 2004, and generates carbon uncertainty that is equivalent to 80% of the net effects of CO2 and climate and 124% of the effects of nitrogen deposition during 1850–2004. Additionally, land cover uncertainty generates differences in local surface temperature of over 1°C. Finally, we conclude that future studies addressing land use, carbon, and climate need to constrain and reduce land cover conversion uncertainties.

  7. Application of data fusion modeling (DFM) to site characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, D.W.; Gibbs, B.P.; Jones, W.F.

    1996-01-01

    Subsurface characterization is faced with substantial uncertainties because the earth is very heterogeneous, and typical data sets are fragmented and disparate. DFM removes many of the data limitations of current methods to quantify and reduce uncertainty for a variety of data types and models. DFM is a methodology to compute hydrogeological state estimates and their uncertainties from three sources of information: measured data, physical laws, and statistical models for spatial heterogeneities. The benefits of DFM are savings in time and cost through the following: the ability to update models in real time to help guide site assessment, improved quantification of uncertainty for risk assessment, and improved remedial design by quantifying the uncertainty in safety margins. A Bayesian inverse modeling approach is implemented with a Gauss-Newton method where spatial heterogeneities are viewed as Markov random fields. Information from data, physical laws, and Markov models is combined in a Square Root Information Smoother (SRIS). Estimates and uncertainties can be computed for heterogeneous hydraulic conductivity fields in multiple geological layers from the usually sparse hydraulic conductivity data and the often more plentiful head data. An application of DFM to the Old Burial Ground at the DOE Savannah River Site will be presented. DFM estimates and quantifies uncertainty in hydrogeological parameters using variably saturated flow numerical modeling to constrain the estimation. Then uncertainties are propagated through the transport modeling to quantify the uncertainty in tritium breakthrough curves at compliance points.
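
    For a linearized problem, the combination of prior (statistical) information, physics-based model predictions, and measured data described above can be illustrated as a Gauss-Newton search for the maximum a posteriori estimate. This is a hedged sketch of the general idea only, not the Square Root Information Smoother itself; the function names and interfaces are hypothetical:

    ```python
    import numpy as np

    def gauss_newton_map(x0, P0, y, R, h, H, n_iter=10):
        """Gauss-Newton search for the maximum a posteriori estimate of
        parameters x given a prior N(x0, P0), observations y = h(x) + noise
        with covariance R, and Jacobian function H(x).  A linearized
        stand-in for the information-form update described in the abstract."""
        x = np.asarray(x0, float).copy()
        P0_inv, R_inv = np.linalg.inv(P0), np.linalg.inv(R)
        for _ in range(n_iter):
            J = H(x)                                    # sensitivity of the model to the parameters
            A = P0_inv + J.T @ R_inv @ J                # posterior information matrix
            g = P0_inv @ (x0 - x) + J.T @ R_inv @ (y - h(x))   # negative gradient of the cost
            x = x + np.linalg.solve(A, g)               # Gauss-Newton step
        P = np.linalg.inv(A)                            # approximate posterior covariance
        return x, P
    ```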

  8. Application of data fusion modeling (DFM) to site characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, D.W.; Gibbs, B.P.; Jones, W.F.

    1996-12-31

    Subsurface characterization is faced with substantial uncertainties because the earth is very heterogeneous, and typical data sets are fragmented and disparate. DFM removes many of the data limitations of current methods to quantify and reduce uncertainty for a variety of data types and models. DFM is a methodology to compute hydrogeological state estimates and their uncertainties from three sources of information: measured data, physical laws, and statistical models for spatial heterogeneities. The benefits of DFM are savings in time and cost through the following: the ability to update models in real time to help guide site assessment, improved quantification of uncertainty for risk assessment, and improved remedial design by quantifying the uncertainty in safety margins. A Bayesian inverse modeling approach is implemented with a Gauss-Newton method where spatial heterogeneities are viewed as Markov random fields. Information from data, physical laws, and Markov models is combined in a Square Root Information Smoother (SRIS). Estimates and uncertainties can be computed for heterogeneous hydraulic conductivity fields in multiple geological layers from the usually sparse hydraulic conductivity data and the often more plentiful head data. An application of DFM to the Old Burial Ground at the DOE Savannah River Site will be presented. DFM estimates and quantifies uncertainty in hydrogeological parameters using variably saturated flow numerical modeling to constrain the estimation. Then uncertainties are propagated through the transport modeling to quantify the uncertainty in tritium breakthrough curves at compliance points.

  9. Webinar Presentation: Particle-Resolved Simulations for Quantifying Black Carbon Climate Impact and Model Uncertainty

    EPA Pesticide Factsheets

    This presentation, Particle-Resolved Simulations for Quantifying Black Carbon Climate Impact and Model Uncertainty, was given at the STAR Black Carbon 2016 Webinar Series: Changing Chemistry over Time held on Oct. 31, 2016.

  10. Global-mean BC lifetime as an indicator of model skill? Constraining the vertical aerosol distribution using aircraft observations

    NASA Astrophysics Data System (ADS)

    Lund, M. T.; Samset, B. H.; Skeie, R. B.; Berntsen, T.

    2017-12-01

    Several recent studies have used observations from the HIPPO flight campaigns to constrain the modeled vertical distribution of black carbon (BC) over the Pacific. Results indicate a relatively linear relationship between global-mean atmospheric BC residence time, or lifetime, and bias in current models. A lifetime of less than 5 days is necessary for models to reasonably reproduce these observations. This is shorter than what many global models predict, which will in turn affect their estimates of BC climate impacts. Here we use the chemistry-transport model OsloCTM to examine whether this relationship between global BC lifetime and model skill also holds for a broader set of flight campaigns from 2009-2013 covering both remote marine and continental regions at a range of latitudes. We perform four sets of simulations with varying scavenging efficiency to obtain a spread in the modeled global BC lifetime and calculate the model error and bias for each campaign and region. Vertical BC profiles are constructed using an online flight simulator, as well as by averaging and interpolating monthly mean model output, allowing us to quantify sampling errors arising when measurements are compared with model output at different spatial and temporal resolutions. Using the OsloCTM coupled with a microphysical aerosol parameterization, we investigate the sensitivity of modeled BC vertical distribution to uncertainties in the aerosol aging and scavenging processes in more detail. From this, we can quantify how model uncertainties in the BC life cycle propagate into uncertainties in its climate impacts. For most campaigns and regions, a short global-mean BC lifetime corresponds with the lowest model error and bias. On an aggregated level, sampling errors appear to be small, but larger differences are seen in individual regions. However, we also find that model-measurement discrepancies in BC vertical profiles cannot be uniquely attributed to uncertainties in a single process or parameter, at least in this model. Model development therefore needs to focus on improvements to individual processes, supported by a broad range of observational and experimental data, rather than tuning individual, effective parameters such as global BC lifetime.

  11. Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai

    2013-01-01

    This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have been conventionally used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and, sometimes, better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second-moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
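
    A hedged sketch of the FOSM approximation named above: the RUL function is linearized about the input means and the input covariance is propagated through the gradient. The battery-style function and the numbers are hypothetical placeholders, not the paper's state-space model:

    ```python
    import numpy as np

    def fosm(g, mu, cov, eps=1e-6):
        """First-order second-moment approximation of the mean and variance of
        R = g(x) for inputs x with mean vector mu and covariance matrix cov.
        The gradient of g is taken by central finite differences at mu."""
        mu = np.asarray(mu, dtype=float)
        grad = np.zeros_like(mu)
        for k in range(len(mu)):
            d = np.zeros_like(mu)
            d[k] = eps
            grad[k] = (g(mu + d) - g(mu - d)) / (2 * eps)
        mean_R = g(mu)                        # first-order mean estimate
        var_R = grad @ cov @ grad             # first-order variance estimate
        return mean_R, var_R

    # Hypothetical example: RUL (hours) as a function of capacity and load.
    rul = lambda x: 100.0 * x[0] / x[1]
    mean_rul, var_rul = fosm(rul, mu=[0.8, 2.0], cov=np.diag([0.01**2, 0.05**2]))
    ```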

  12. Practical problems in aggregating expert opinions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booker, J.M.; Picard, R.R.; Meyer, M.A.

    1993-11-01

    Expert opinion is data given by a qualified person in response to a technical question. In these analyses, expert opinion provides information where other data are either sparse or non-existent. Improvements in forecasting result from the advantageous addition of expert opinion to observed data in many areas, such as meteorology and econometrics. More generally, analyses of large, complex systems often involve experts on various components of the system supplying input to a decision process; applications include such wide-ranging areas as nuclear reactor safety, management science, and seismology. For large or complex applications, no single expert may be knowledgeable enough about the entire application. In other problems, decision makers may find it comforting that a consensus or aggregation of opinions is usually better than a single opinion. Many risk and reliability studies require a single estimate for modeling, analysis, reporting, and decision making purposes. For problems with large uncertainties, the strategy of combining as diverse a set of experts as possible hedges against underestimation of that uncertainty. Decision makers are frequently faced with the task of selecting the experts and combining their opinions. However, the aggregation is often the responsibility of an analyst. Whether the decision maker or the analyst does the aggregation, the input for it, such as providing weights for experts or estimating other parameters, is imperfect owing to a lack of omniscience. Aggregation methods for expert opinions have existed for over thirty years; yet many of the difficulties with their use remain unresolved. The bulk of these problem areas are summarized in the sections that follow: sensitivities of results to assumptions, weights for experts, correlation of experts, and handling uncertainties. The purpose of this paper is to discuss the sources of these problems and describe their effects on aggregation.
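
    One simple aggregation rule from this literature is the weighted (linear) opinion pool, in which the combined distribution is a mixture of the experts' distributions. A minimal sketch assuming the expert weights are already given; the sample sizes and distributions below are hypothetical:

    ```python
    import numpy as np

    def linear_opinion_pool(samples, weights, n_out=10000, rng=None):
        """Aggregate expert opinions given as Monte Carlo samples from each
        expert's distribution.  The pooled distribution is the mixture
        sum_i w_i * p_i, sampled by drawing each output value from an expert
        chosen with probability proportional to that expert's weight."""
        rng = np.random.default_rng(rng)
        weights = np.asarray(weights, float)
        weights = weights / weights.sum()                  # normalize the weights
        choice = rng.choice(len(samples), size=n_out, p=weights)
        return np.array([rng.choice(samples[i]) for i in choice])

    # Hypothetical use: three experts, equal weights.
    rng = np.random.default_rng(0)
    pooled = linear_opinion_pool([rng.normal(10, 1, 500),
                                  rng.normal(12, 2, 500),
                                  rng.normal(9, 3, 500)],
                                 weights=[1, 1, 1])
    ```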

  13. Integrated probabilistic risk assessment for nanoparticles: the case of nanosilica in food.

    PubMed

    Jacobs, Rianne; van der Voet, Hilko; Ter Braak, Cajo J F

    Insight into risks of nanotechnology and the use of nanoparticles is an essential condition for the social acceptance and safe use of nanotechnology. One of the problems faced by the risk assessment of nanoparticles is the lack of data, resulting in uncertainty in the risk assessment. We attempt to quantify some of this uncertainty by expanding a previous deterministic study on nanosilica (5-200 nm) in food into a fully integrated probabilistic risk assessment. We use the integrated probabilistic risk assessment method, in which statistical distributions and bootstrap methods are used to quantify uncertainty and variability in the risk assessment. Given the large amount of uncertainty present, this probabilistic method, which separates variability from uncertainty, contributed to a more readily understandable risk assessment. We found that quantifying the uncertainties did not increase the perceived risk relative to the outcome of the deterministic study. We pinpointed particular aspects of the hazard characterization that contributed most to the total uncertainty in the risk assessment, suggesting that further research would benefit most from obtaining more reliable data on those aspects.
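
    The integrated probabilistic idea of keeping uncertainty separate from variability can be sketched as a nested ("two-dimensional") Monte Carlo: an outer loop draws bootstrap-style realizations of the uncertain parameters, and an inner loop draws inter-individual variability given those parameters. The distributions below are hypothetical placeholders, not the nanosilica exposure model:

    ```python
    import numpy as np

    def two_dim_monte_carlo(sample_uncertain_params, sample_variability,
                            n_outer=200, n_inner=5000, rng=None):
        """Nested Monte Carlo: each outer iteration fixes one realization of the
        uncertain parameters and yields one possible population distribution;
        the spread across outer iterations expresses uncertainty, the spread
        within each inner sample expresses variability."""
        rng = np.random.default_rng(rng)
        percentiles = []
        for _ in range(n_outer):
            theta = sample_uncertain_params(rng)                  # e.g. one bootstrap replicate
            exposures = sample_variability(theta, n_inner, rng)   # population variability
            percentiles.append(np.percentile(exposures, [50, 95, 99]))
        return np.array(percentiles)   # rows: outer draws, cols: P50/P95/P99

    # Hypothetical illustration: uncertain log-mean, lognormal inter-individual variability.
    out = two_dim_monte_carlo(
        lambda rng: rng.normal(1.0, 0.2),                          # uncertainty in mean intake
        lambda mu, n, rng: rng.lognormal(mean=mu, sigma=0.5, size=n))
    ```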

  14. Uncertainty assessment of future land use in Brazil under increasing demand for bioenergy

    NASA Astrophysics Data System (ADS)

    van der Hilst, F.; Verstegen, J. A.; Karssenberg, D.; Faaij, A.

    2013-12-01

    Environmental impacts of a future increase in demand for bioenergy depend on the magnitude, location and pattern of the direct and indirect land use change of energy cropland expansion. Here we aim at 1) projecting the spatio-temporal pattern of sugar cane expansion and the effect on other land uses in Brazil towards 2030, and 2) assessing the uncertainty herein. For the spatio-temporal projection, three model components are used: 1) an initial land use map that shows the initial amount and location of sugar cane and all other relevant land use classes in the system, 2) a model to project the quantity of change of all land uses, and 3) a spatially explicit land use model that determines the location of change of all land uses. All three model components are sources of uncertainty, which is quantified by defining error models for all components and their inputs and propagating these errors through the chain of components. No recent accurate land use map is available for Brazil, so municipal census data and the global land cover map GlobCover are combined to create the initial land use map. The census data are disaggregated stochastically using GlobCover as a probability surface, to obtain a stochastic land use raster map for 2006. Since bioenergy is a global market, the quantity of change in sugar cane in Brazil depends on dynamics in both Brazil itself and other parts of the world. Therefore, a computable general equilibrium (CGE) model, MAGNET, is run to produce a time series of the relative change of all land uses given an increased future demand for bioenergy. A sensitivity analysis finds the upper and lower boundaries hereof, to define this component's error model. An initial selection of drivers of location for each land use class is extracted from literature. Using a Bayesian data assimilation technique and census data from 2007 to 2011 as observational data, the model is identified, meaning that the final selection and optimal relative importance of the drivers of location are determined. The data assimilation technique takes into account uncertainty in the observational data and yields a stochastic representation of the identified model. Using all stochastic inputs, this land use change model is run to find at which locations the future land use changes occur and to quantify the associated uncertainty. The results indicate that in the initial land use map especially the locations of pastures are uncertain. Since the dynamics in the livestock sector play a major role in the land use development of Brazil, the effect of this uncertainty on the model output is large. Results of the data assimilation indicate that the drivers of location of the land uses vary over time (variations up to 50% in the importance of the drivers) making it difficult to find a solid stationary system representation. Overall, we conclude that projection up to 2030 is only of use for quantifying impacts that act on a larger aggregation level, because at local level uncertainty is too large.

  15. Quantifying parametric uncertainty in the Rothermel model

    Treesearch

    S. Goodrick

    2008-01-01

    The purpose of the present work is to quantify parametric uncertainty in the Rothermel wildland fire spread model (implemented in software used for fire spread prediction in the United States). This model consists of a non-linear system of equations that relates environmental variables (input parameter groups...

  16. A fuzzy stochastic framework for managing hydro-environmental and socio-economic interactions under uncertainty

    NASA Astrophysics Data System (ADS)

    Subagadis, Yohannes Hagos; Schütze, Niels; Grundmann, Jens

    2014-05-01

    The amplified interconnectedness between hydro-environmental and socio-economic systems brings about profound challenges for water management decision making. In this contribution, we present a fuzzy stochastic approach to solve a set of decision making problems that involve hydrologically, environmentally, and socio-economically motivated criteria subject to uncertainty and ambiguity. The proposed methodological framework combines objective and subjective criteria in a decision making procedure for obtaining an acceptable ranking of water resources management alternatives under different types of uncertainty (subjective/objective) and heterogeneous information (quantitative/qualitative) simultaneously. The first step of the proposed approach involves evaluating the performance of alternatives with respect to different types of criteria. The ratings of alternatives with respect to objective and subjective criteria are evaluated by simulation-based optimization and fuzzy linguistic quantifiers, respectively. Subjective and objective uncertainties related to the input information are handled by linking fuzziness and randomness together. Fuzzy decision making captures the linguistic uncertainty, and a Monte Carlo simulation process is used to map stochastic uncertainty. With this framework, the overall performance of each alternative is calculated using an Ordered Weighted Averaging (OWA) aggregation operator accounting for decision makers' experience and opinions. Finally, ranking is achieved by conducting pair-wise comparison of management alternatives. This has been done on the basis of the risk defined by the probability of obtaining an acceptable ranking and the mean difference in total performance for the pair of management alternatives. The proposed methodology is tested in a real-world hydrosystem to find effective and robust intervention strategies for the management of a coastal aquifer system affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. The results show that the approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.
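
    The OWA aggregation step can be illustrated as follows; this is a generic sketch of Yager-style ordered weighted averaging with hypothetical scores and weights, not the paper's calibrated operator:

    ```python
    import numpy as np

    def owa(scores, weights):
        """Ordered Weighted Averaging: sort the criterion scores in descending
        order and take the dot product with position weights that sum to one.
        Weights concentrated on the top positions model an optimistic decision
        maker; weights on the bottom positions model a pessimistic one."""
        scores = np.sort(np.asarray(scores, float))[::-1]
        weights = np.asarray(weights, float)
        return float(scores @ (weights / weights.sum()))

    # Hypothetical: one management alternative rated on four criteria.
    overall = owa([0.7, 0.4, 0.9, 0.6], weights=[0.4, 0.3, 0.2, 0.1])
    ```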

  17. Valuing environmental health for informed policy-making.

    PubMed

    Máca, Vojtěch; Melichar, Jan; Ščasný, Milan; Kohlová, Markéta Braun

    2017-03-01

    Monetized environmental health impact assessments help to better evaluate the environmental burden of a wide range of economic activities. Apart from the limitations and uncertainties in the physical and biological science used in such assessments, assumptions made in the economic valuation may also substantially influence subsequent policy-making considerations. This study attempts to demonstrate the impact of normative policy assumptions on quantified external costs using a case study of recently discussed variants of future coal mining and use of the extracted coal in electricity and heat generation in the Czech Republic. A bottom-up impact-pathway approach is used for quantification of external costs. Several policy perspectives are elaborated for aggregating impacts that differ in geographic coverage and in how the valuation of quantified impacts is adjusted in a particular perspective. We find that the fraction of monetized external impacts taken into policy-making considerations may vary by up to a factor of 10 according to the choice of decision perspective. At present there are virtually no hard rules for defining geographical boundaries or adjusting values for a summation of monetized environmental impacts. We stress, however, that any rigorous external cost assessment should, for instance in a separate calculation, take account of impacts occurring beyond country borders.

  18. Quantifying Aggregation Dynamics during Myxococcus xanthus Development

    PubMed Central

    Zhang, Haiyang; Angus, Stuart; Tran, Michael; Xie, Chunyan; Igoshin, Oleg A.; Welch, Roy D.

    2011-01-01

    Under starvation conditions, a swarm of Myxococcus xanthus cells will undergo development, a multicellular process culminating in the formation of many aggregates called fruiting bodies, each of which contains up to 100,000 spores. The mechanics of symmetry breaking and the self-organization of cells into fruiting bodies is an active area of research. Here we use microcinematography and automated image processing to quantify several transient features of developmental dynamics. An analysis of experimental data indicates that aggregation reaches its steady state in a highly nonmonotonic fashion. The number of aggregates rapidly peaks at a value 2- to 3-fold higher than the final value and then decreases before reaching a steady state. The time dependence of aggregate size is also nonmonotonic, but to a lesser extent: average aggregate size increases from the onset of aggregation to between 10 and 15 h and then gradually decreases thereafter. During this process, the distribution of aggregates transitions from a nearly random state early in development to a more ordered state later in development. A comparison of experimental results to a mathematical model based on the traffic jam hypothesis indicates that the model fails to reproduce these dynamic features of aggregation, even though it accurately describes its final outcome. The dynamic features of M. xanthus aggregation uncovered in this study impose severe constraints on its underlying mechanisms. PMID:21784940

  19. Ignoring correlation in uncertainty and sensitivity analysis in life cycle assessment: what is the risk?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groen, E.A.; Heijungs, R.

    Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict whether including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only little data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
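
    The effect of input correlations on output variance can be seen directly in first-order (Taylor) uncertainty propagation, where covariance terms add to or subtract from the uncorrelated variance. A minimal sketch, not the authors' exact analytical procedure; the sensitivities and standard deviations are hypothetical:

    ```python
    import numpy as np

    def output_variance(grad, std, corr=None):
        """First-order variance of y = f(x): var(y) ≈ g^T Σ g, with g the
        gradient of f at the input means and Σ the input covariance built
        from standard deviations and (optionally) a correlation matrix.
        Setting corr=None reproduces the common practice of ignoring
        correlations between input parameters."""
        grad, std = np.asarray(grad, float), np.asarray(std, float)
        corr = np.eye(len(std)) if corr is None else np.asarray(corr, float)
        cov = np.outer(std, std) * corr
        return float(grad @ cov @ grad)

    # Hypothetical two-parameter example: positive correlation raises the variance
    # when both sensitivities have the same sign.
    v_indep = output_variance([2.0, 3.0], [0.1, 0.2])
    v_corr  = output_variance([2.0, 3.0], [0.1, 0.2], corr=[[1.0, 0.8], [0.8, 1.0]])
    ```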

  20. Hierarchical Bayesian Model Averaging for Non-Uniqueness and Uncertainty Analysis of Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    Artificial Neural Networks (ANNs) have been widely used to estimate concentrations of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are estimated by maximum likelihood. ANN structure is also uncertain because there is no unique ANN model for a given case. Therefore, multiple plausible ANN models generally result from a study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate to the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages the outputs of all plausible ANN models. The model weights are based on the evidence of the data. Therefore, HBMA avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through aggregation of ANN models in a hierarchical framework. This method is applied to estimation of fluoride concentration in the Poldasht plain and the Bazargan plain in Iran. Unusually high fluoride concentrations in the Poldasht and Bazargan plains have caused negative effects on public health. Management of this anomaly requires estimation of the fluoride concentration distribution in the area. The results show that HBMA provides a knowledge-decision-based framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources. In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in a hierarchical framework. Fluoride concentration estimates using the HBMA method show better agreement with the observation data in the test step because they are not based on a single model with non-dominant weight.
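
    The core model-averaging step, weighting each plausible model's prediction by its evidence, can be sketched as below. The hierarchy over uncertainty sources is not reproduced, and the numbers are hypothetical:

    ```python
    import numpy as np

    def bma_mean_and_variance(means, variances, log_evidence):
        """Bayesian model averaging over plausible models.  Each model supplies a
        predictive mean, a predictive variance, and a log evidence; the averaged
        prediction mixes the means, and the averaged variance adds the within-model
        variance to the between-model spread of the means."""
        means, variances = np.asarray(means, float), np.asarray(variances, float)
        w = np.exp(np.asarray(log_evidence, float) - np.max(log_evidence))   # stabilized weights
        w = w / w.sum()
        mean = np.sum(w * means)
        var = np.sum(w * (variances + (means - mean) ** 2))
        return mean, var, w

    # Hypothetical: three ANN models predicting fluoride concentration (mg/L).
    m, v, w = bma_mean_and_variance([1.2, 1.5, 1.1], [0.04, 0.09, 0.05],
                                    log_evidence=[-10.2, -11.0, -10.5])
    ```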

  1. Evaluating Variability and Uncertainty of Geological Strength Index at a Specific Site

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Aladejare, Adeyemi Emman

    2016-09-01

    Geological Strength Index (GSI) is an important parameter for estimating rock mass properties. GSI can be estimated from quantitative GSI chart, as an alternative to the direct observational method which requires vast geological experience of rock. GSI chart was developed from past observations and engineering experience, with either empiricism or some theoretical simplifications. The GSI chart thereby contains model uncertainty which arises from its development. The presence of such model uncertainty affects the GSI estimated from GSI chart at a specific site; it is, therefore, imperative to quantify and incorporate the model uncertainty during GSI estimation from the GSI chart. A major challenge for quantifying the GSI chart model uncertainty is a lack of the original datasets that have been used to develop the GSI chart, since the GSI chart was developed from past experience without referring to specific datasets. This paper intends to tackle this problem by developing a Bayesian approach for quantifying the model uncertainty in GSI chart when using it to estimate GSI at a specific site. The model uncertainty in the GSI chart and the inherent spatial variability in GSI are modeled explicitly in the Bayesian approach. The Bayesian approach generates equivalent samples of GSI from the integrated knowledge of GSI chart, prior knowledge and observation data available from site investigation. Equations are derived for the Bayesian approach, and the proposed approach is illustrated using data from a drill and blast tunnel project. The proposed approach effectively tackles the problem of how to quantify the model uncertainty that arises from using GSI chart for characterization of site-specific GSI in a transparent manner.

  2. Uncertainty quantification and propagation of errors of the Lennard-Jones 12-6 parameters for n-alkanes

    PubMed Central

    Knotts, Thomas A.

    2017-01-01

    Molecular simulation has the ability to predict various physical properties that are difficult to obtain experimentally. For example, we implement molecular simulation to predict the critical constants (i.e., critical temperature, critical density, critical pressure, and critical compressibility factor) for large n-alkanes that thermally decompose experimentally (as large as C48). Historically, molecular simulation has been viewed as a tool that is limited to providing qualitative insight. One key reason for this perceived weakness in molecular simulation is the difficulty to quantify the uncertainty in the results. This is because molecular simulations have many sources of uncertainty that propagate and are difficult to quantify. We investigate one of the most important sources of uncertainty, namely, the intermolecular force field parameters. Specifically, we quantify the uncertainty in the Lennard-Jones (LJ) 12-6 parameters for the CH4, CH3, and CH2 united-atom interaction sites. We then demonstrate how the uncertainties in the parameters lead to uncertainties in the saturated liquid density and critical constant values obtained from Gibbs Ensemble Monte Carlo simulation. Our results suggest that the uncertainties attributed to the LJ 12-6 parameters are small enough that quantitatively useful estimates of the saturated liquid density and the critical constants can be obtained from molecular simulation. PMID:28527455

  3. Uncertainties in stormwater runoff data collection from a small urban catchment, Southeast China.

    PubMed

    Huang, Jinliang; Tu, Zhenshun; Du, Pengfei; Lin, Jie; Li, Qingsheng

    2010-01-01

    Monitoring data are often used to identify stormwater runoff characteristics and in stormwater runoff modelling without consideration of their inherent uncertainties. Integrated with discrete sample analysis and error propagation analysis, this study attempted to quantify the uncertainties of discrete chemical oxygen demand (COD), total suspended solids (TSS) concentration, stormwater flowrate, stormwater event volumes, COD event mean concentration (EMC), and COD event loads in terms of flow measurement, sample collection, storage and laboratory analysis. The results showed that the uncertainties due to sample collection, storage and laboratory analysis of COD from stormwater runoff are 13.99%, 19.48% and 12.28%, respectively. Meanwhile, flow measurement uncertainty was 12.82%, and the sample collection uncertainty of TSS from stormwater runoff was 31.63%. Based on the law of propagation of uncertainties, the uncertainties regarding event flow volume, COD EMC and COD event loads were quantified as 7.03%, 10.26% and 18.47%, respectively.
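
    For a simple product such as event load = event volume × EMC, first-order (GUM-style) propagation combines relative standard uncertainties in quadrature, plus a covariance term if the factors are correlated. The sketch below is a simplified illustration with round numbers; it does not attempt to reproduce the study's 18.47% figure, which reflects the full within-event covariance structure of flow and concentration:

    ```python
    import numpy as np

    def relative_uncertainty_of_product(rel_u, corr=0.0):
        """Relative standard uncertainty of a product y = x1 * x2 * ... for small
        relative uncertainties: quadrature sum of the inputs' relative
        uncertainties, with an optional pairwise correlation term for the
        two-factor case (first-order GUM propagation)."""
        rel_u = np.asarray(rel_u, float)
        u2 = np.sum(rel_u ** 2)
        if len(rel_u) == 2:
            u2 += 2.0 * corr * rel_u[0] * rel_u[1]
        return float(np.sqrt(u2))

    # Hypothetical round numbers: 7% volume and 10% EMC uncertainty give roughly
    # 12% for the event load when the two are uncorrelated.
    u_load = relative_uncertainty_of_product([0.07, 0.10])
    ```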

  4. Experimental assessment of aggregate surfacing materials.

    DOT National Transportation Integrated Search

    2007-06-30

    "An extensive suite of geotechnical laboratory tests were conducted to quantify differences in : engineering properties of three crushed aggregates commonly used on Montana highway projects. The : material types are identified in the Montana Suppleme...

  5. Quantifying uncertainty in discharge measurements: A new approach

    USGS Publications Warehouse

    Kiang, J.E.; Cohn, T.A.; Mason, R.R.

    2009-01-01

    The accuracy of discharge measurements using velocity meters and the velocity-area method is typically assessed based on empirical studies that may not correspond to conditions encountered in practice. In this paper, a statistical approach for assessing uncertainty based on interpolated variance estimation (IVE) is introduced. The IVE method quantifies all sources of random uncertainty in the measured data. This paper presents results employing data from sites where substantial over-sampling allowed for the comparison of IVE-estimated uncertainty and observed variability among repeated measurements. These results suggest that the IVE approach can provide approximate estimates of measurement uncertainty. The use of IVE to estimate the uncertainty of a discharge measurement would provide the hydrographer an immediate determination of uncertainty and help determine whether there is a need for additional sampling in problematic river cross sections. © 2009 ASCE.

  6. Effects of Ensemble Configuration on Estimates of Regional Climate Uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldenson, N.; Mauger, G.; Leung, L. R.

    Internal variability in the climate system can contribute substantial uncertainty in climate projections, particularly at regional scales. Internal variability can be quantified using large ensembles of simulations that are identical but for perturbed initial conditions. Here we compare methods for quantifying internal variability. Our study region spans the west coast of North America, which is strongly influenced by El Niño and other large-scale dynamics through their contribution to large-scale internal variability. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find that internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter also produce estimates of uncertainty due to model differences. We conclude that projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible, which has implications for ensemble design in large modeling efforts.
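
    A minimal sketch of how internal variability is typically quantified from an initial-condition ensemble: compute a metric of interest (here a linear trend) for each member and take the spread across members. The synthetic data and array shapes are assumptions for illustration only:

    ```python
    import numpy as np

    def internal_variability(ensemble, years):
        """Estimate internal variability from an initial-condition ensemble.
        `ensemble` has shape (n_members, n_years) for one region; the metric
        here is the spread (standard deviation across members) of the linear
        trend, since all members share the same forcing and model physics."""
        trends = np.array([np.polyfit(years, member, 1)[0] for member in ensemble])
        return trends.std(ddof=1), trends.mean()

    # Hypothetical synthetic ensemble: common forced warming plus member-specific noise.
    rng = np.random.default_rng(0)
    years = np.arange(1980, 2011)
    members = 0.02 * (years - years[0]) + rng.normal(0, 0.2, size=(10, years.size))
    sigma_internal, mean_trend = internal_variability(members, years)
    ```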

  7. Uncertainty quantification in volumetric Particle Image Velocimetry

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos

    2016-11-01

    Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV. However, no framework exists for three-dimensional volumetric PIV. In volumetric PIV the measurement uncertainty is a function of the reconstructed three-dimensional particle location, which in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position, which is a function of particle detection and mapping function uncertainty. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty that is estimated as an extension of the 2D PIV uncertainty framework. Finally the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases the variation of estimated uncertainty with the elemental volumetric PIV error sources is also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.

  8. Quantifying the uncertainty in heritability.

    PubMed

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-05-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.

  9. Uncertainty in the Modeling of Tsunami Sediment Transport

    NASA Astrophysics Data System (ADS)

    Jaffe, B. E.; Sugawara, D.; Goto, K.; Gelfenbaum, G. R.; La Selle, S.

    2016-12-01

    Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. A recent study (Jaffe et al., 2016) explores sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami properties, study site characteristics, available input data, sediment grain size, and the model used. Although uncertainty has the potential to be large, case studies for both forward and inverse models have shown that sediment transport modeling provides useful information on tsunami inundation and hydrodynamics that can be used to improve tsunami hazard assessment. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and the development of hybrid modeling approaches to exploit the strengths of forward and inverse models. As uncertainty in tsunami sediment transport modeling is reduced, and with increased ability to quantify uncertainty, the geologic record of tsunamis will become more valuable in the assessment of tsunami hazard. Jaffe, B., Goto, K., Sugawara, D., Gelfenbaum, G., and La Selle, S., "Uncertainty in Tsunami Sediment Transport Modeling", Journal of Disaster Research Vol. 11 No. 4, pp. 647-661, 2016, doi: 10.20965/jdr.2016.p0647 https://www.fujipress.jp/jdr/dr/dsstr001100040647/

  10. Numerical Uncertainty Quantification for Radiation Analysis Tools

    NASA Technical Reports Server (NTRS)

    Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha

    2007-01-01

    Recently a new emphasis has been placed on engineering applications of space radiation analyses, and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing: as the number of rays increases the associated uncertainty decreases, but the computational expense increases. Thus, a cost-benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is how many shield thicknesses are needed to obtain an accurate result. Convergence testing is therefore performed to quantify the uncertainty associated with interpolating over different shield thickness spatial grids.

  11. Probabilistic structural analysis to quantify uncertainties associated with turbopump blades

    NASA Technical Reports Server (NTRS)

    Nagpal, Vinod K.; Rubinstein, Robert; Chamis, Christos C.

    1988-01-01

    A probabilistic study of turbopump blades has been in progress at NASA Lewis Research Center for the last two years. The objectives of this study are to evaluate the effects of uncertainties in geometry and material properties on the structural response of the turbopump blades and to evaluate the tolerance limits on the design. A methodology based on a probabilistic approach was developed to quantify the effects of the random uncertainties. The results indicate that only the variations in geometry have significant effects.

  12. Capturing multi-stage fuzzy uncertainties in hybrid system dynamics and agent-based models for enhancing policy implementation in health systems research.

    PubMed

    Liu, Shiyong; Triantis, Konstantinos P; Zhao, Li; Wang, Youfa

    2018-01-01

    In practice, it has been found that most people make health-related decisions based not on numerical data but on perceptions. Examples include the perceptions, and their corresponding linguistic values, of health risks such as smoking, syringe sharing, eating energy-dense food, and drinking sugar-sweetened beverages. To understand the mechanisms that affect the implementation of health-related interventions, we employ fuzzy variables to quantify linguistic variables in healthcare modeling, where we employ an integrated system dynamics and agent-based model. In a nonlinear, causal-driven simulation environment driven by feedback loops, we mathematically demonstrate how interventions at an aggregate level affect the dynamics of linguistic variables that are captured by fuzzy agents and how interactions among fuzzy agents, at the same time, affect the formation of different clusters (groups) that are targeted by specific interventions. In this paper, we provide an innovative framework to capture multi-stage fuzzy uncertainties manifested among interacting heterogeneous agents (individuals) and intervention decisions that affect homogeneous agents (groups of individuals) in a hybrid model that combines an agent-based simulation model (ABM) and a system dynamics model (SDM). Having built the platform to incorporate high-dimension data in a hybrid ABM/SDM model, this paper demonstrates how one can obtain the state variable behaviors in the SDM and the corresponding values of linguistic variables in the ABM. This research provides a way to incorporate high-dimension data in a hybrid ABM/SDM model. It not only enriches the application of fuzzy set theory by capturing the dynamics of variables associated with interacting fuzzy agents that lead to aggregate behaviors, but also informs implementation research by enabling the incorporation of linguistic variables at both individual and institutional levels, which makes unstructured linguistic data meaningful and quantifiable in a simulation environment. This research can help practitioners and decision makers gain a better understanding of the dynamics and complexities of precision intervention in healthcare. It can aid the improvement of the optimal allocation of resources for targeted group(s) and the achievement of maximum utility. As this technology becomes more mature, one can design policy flight simulators with which policy/intervention designers can test a variety of assumptions when they evaluate different alternative interventions.

  13. Uncertainty in tsunami sediment transport modeling

    USGS Publications Warehouse

    Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.

    2016-01-01

    Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.

  14. Web-based access, aggregation, and visualization of future climate projections with emphasis on agricultural assessments

    NASA Astrophysics Data System (ADS)

    Villoria, Nelson B.; Elliott, Joshua; Müller, Christoph; Shin, Jaewoo; Zhao, Lan; Song, Carol

    2018-01-01

    Access to climate and spatial datasets by non-specialists is restricted by technical barriers involving hardware, software and data formats. We discuss an open-source online tool that facilitates downloading the climate data from the global circulation models used by the Inter-Sectoral Impacts Model Intercomparison Project. The tool also offers temporal and spatial aggregation capabilities for incorporating future climate scenarios in applications where spatial aggregation is important. We hope that streamlined access to these data facilitates analysis of climate related issues while considering the uncertainties derived from future climate projections and temporal aggregation choices.

  15. A computational framework for modeling targets as complex adaptive systems

    NASA Astrophysics Data System (ADS)

    Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh

    2017-05-01

    Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and more importantly various emergent behaviors, displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real world information. To overcome these challenges, we present a probabilistic reasoning network based framework called complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, it provides unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we will describe how our framework can be used for modeling targets with a focus on methodologies for quantifying NCO performance metrics.

  16. Understanding the uncertainty associated with particle-bound pollutant build-up and wash-off: A critical review.

    PubMed

    Wijesiri, Buddhi; Egodawatta, Prasanna; McGree, James; Goonetilleke, Ashantha

    2016-09-15

    Accurate prediction of stormwater quality is essential for developing effective pollution mitigation strategies. The use of models incorporating simplified mathematical replications of pollutant processes is the common practice for determining stormwater quality. However, an inherent process uncertainty arises due to the intrinsic variability associated with pollutant processes, which has neither been comprehensively understood, nor well accounted for in uncertainty assessment of stormwater quality modelling. This review provides the context for defining and quantifying the uncertainty associated with pollutant build-up and wash-off on urban impervious surfaces based on the hypothesis that particle size is predominant in influencing process variability. Critical analysis of published research literature brings scientific evidence together in order to establish the fact that particle size changes with time, and different sized particles exhibit distinct behaviour during build-up and wash-off, resulting in process variability. Analysis of the different adsorption behaviour of particles confirmed that the variations in pollutant load and composition are influenced by particle size. Particle behaviour and variations in pollutant load and composition are related due to the strong affinity of pollutants such as heavy metals and hydrocarbons for specific particle size ranges. As such, the temporal variation in particle size is identified as the key to establishing a basis for assessing build-up and wash-off process uncertainty. Therefore, accounting for pollutant build-up and wash-off process variability, which is influenced by particle size, would facilitate the assessment of the uncertainty associated with modelling outcomes. Furthermore, the review identified fundamental knowledge gaps where further research is needed in relation to: (1) the aggregation of particles suspended in the atmosphere during build-up; (2) particle re-suspension during wash-off; (3) pollutant re-adsorption by different particle size fractions; (4) development of evidence-based techniques for assessing uncertainty; and (5) methods for translating the knowledge acquired from the investigation of process mechanisms at small scale into catchment scale for stormwater quality modelling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Uncertainty propagation in q and current profiles derived from motional Stark effect polarimetry on TFTR (abstract)

    NASA Astrophysics Data System (ADS)

    Batha, S. H.; Levinton, F. M.; Bell, M. G.; Wieland, R. M.; Hirschman, S. P.

    1995-01-01

    The magnetic-field pitch-angle profile, γp(R)≡arctan(Bpol/Btor), is measured on the TFTR tokamak using a motional Stark effect (MSE) polarimeter. Measured profiles are converted to q profiles with the equilibrium code vmec. Uncertainties in the q profile due to uncertainties in the γp(R), magnetics, and kinetic measurements are quantified. Subsequent uncertainties in the vmec-calculated profiles of current density and shear, both of which are important for stability and transport analyses, are also quantified. Examples of circular plasmas under various confinement modes, including the supershot and L mode, will be given.
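
    From the definition quoted above, a first-order (GUM-style) propagation of the field uncertainties into the pitch angle reads as follows; this is a generic sketch, not the vmec-based analysis of the paper:

    ```latex
    % First-order propagation of field uncertainties into the pitch angle
    % \gamma_p = \arctan(B_{pol}/B_{tor}); a sketch, not the TFTR analysis itself.
    \gamma_p = \arctan\!\left(\frac{B_\mathrm{pol}}{B_\mathrm{tor}}\right), \qquad
    u^2(\gamma_p) \approx
    \left(\frac{B_\mathrm{tor}}{B_\mathrm{pol}^2 + B_\mathrm{tor}^2}\right)^{\!2} u^2(B_\mathrm{pol})
    + \left(\frac{B_\mathrm{pol}}{B_\mathrm{pol}^2 + B_\mathrm{tor}^2}\right)^{\!2} u^2(B_\mathrm{tor})
    ```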

  18. Quantifying uncertainties in the structural response of SSME blades

    NASA Technical Reports Server (NTRS)

    Nagpal, Vinod K.

    1987-01-01

    To quantify the uncertainties associated with the geometry and material properties of a Space Shuttle Main Engine (SSME) turbopump blade, a computer code known as STAEBL was used. A finite element model of the blade used 80 triangular shell elements with 55 nodes and five degrees of freedom per node. The whole study was simulated on the computer and no real experiments were conducted. The structural response has been evaluated in terms of three variables which are natural frequencies, root (maximum) stress, and blade tip displacements. The results of the study indicate that only the geometric uncertainties have significant effects on the response. Uncertainties in material properties have insignificant effects.

  19. The US-DOE ARM/ASR Effort in Quantifying Uncertainty in Ground-Based Cloud Property Retrievals (Invited)

    NASA Astrophysics Data System (ADS)

    Xie, S.; Protat, A.; Zhao, C.

    2013-12-01

    One primary goal of the US Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program is to obtain and retrieve cloud microphysical properties from detailed cloud observations using ground-based active and passive remote sensors. However, there is large uncertainty in the retrieved cloud property products. Studies have shown that the uncertainty could arise from instrument limitations, measurement errors, sampling errors, retrieval algorithm deficiencies in assumptions, as well as inconsistent input data and constraints used by different algorithms. To quantify the uncertainty in cloud retrievals, a scientific focus group, Quantification of Uncertainties In Cloud Retrievals (QUICR), was recently created by the DOE Atmospheric System Research (ASR) program. This talk will provide an overview of the recent research activities conducted within QUICR and discuss its current collaborations with the European cloud retrieval community and future plans. The goal of QUICR is to develop a methodology for characterizing and quantifying uncertainties in current and future ARM cloud retrievals. The Work at LLNL was performed under the auspices of the U. S. Department of Energy (DOE), Office of Science, Office of Biological and Environmental Research by Lawrence Livermore National Laboratory under contract No. DE-AC52-07NA27344. LLNL-ABS-641258.

  20. Quantifying the uncertainty in heritability

    PubMed Central

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-01-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large. PMID:24670270

  1. Aggregate Measures of Watershed Health from Reconstructed ...

    EPA Pesticide Factsheets

    Risk-based indices such as reliability, resilience, and vulnerability (R-R-V) have the potential to serve as watershed health assessment tools. Recent research has demonstrated the applicability of such indices for water quality (WQ) constituents such as total suspended solids and nutrients on an individual basis. However, the calculations can become tedious when time-series data for several WQ constituents have to be evaluated individually. Also, comparisons between locations with different sets of constituent data can prove difficult. In this study, data reconstruction using a relevance vector machine algorithm was combined with dimensionality reduction via variational Bayesian noisy principal component analysis to reconstruct and condense sparse multidimensional WQ data sets into a single time series. The methodology allows incorporation of uncertainty in both the reconstruction and dimensionality-reduction steps. The R-R-V values were calculated using the aggregate time series at multiple locations within two Indiana watersheds, which serve as motivating examples. Results showed that uncertainty present in the reconstructed WQ data set propagates to the aggregate time series and subsequently to the aggregate R-R-V values as well. Locations with different WQ constituents and different standards for impairment were successfully combined to provide aggregate measures of R-R-V values. Comparisons with individual constituent R-R-V values showed that v
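
    The R-R-V indices themselves follow standard Hashimoto-style definitions; a minimal sketch for a single constituent time series against a not-to-exceed threshold (the reconstruction and aggregation steps of the study are not reproduced, and the example record is hypothetical):

    ```python
    import numpy as np

    def r_r_v(series, threshold):
        """Reliability-resilience-vulnerability of a water quality time series
        against a not-to-exceed threshold.  Reliability: fraction of time in
        compliance.  Resilience: probability that a failure at time t is
        followed by compliance at t+1.  Vulnerability: mean exceedance
        magnitude during failures."""
        series = np.asarray(series, float)
        fail = series > threshold
        reliability = 1.0 - fail.mean()
        recoveries = fail[:-1] & ~fail[1:]               # failure followed by recovery
        resilience = recoveries.sum() / max(fail[:-1].sum(), 1)
        vulnerability = (series[fail] - threshold).mean() if fail.any() else 0.0
        return reliability, resilience, vulnerability

    # Hypothetical TSS record (mg/L) against a 30 mg/L target.
    rel, res, vul = r_r_v([12, 25, 41, 38, 22, 55, 28, 26], threshold=30)
    ```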

  2. Why Quantify Uncertainty in Ecosystem Studies: Obligation versus Discovery Tool?

    NASA Astrophysics Data System (ADS)

    Harmon, M. E.

    2016-12-01

    There are multiple motivations for quantifying uncertainty in ecosystem studies. One is as an obligation; the other is as a tool useful in moving ecosystem science toward discovery. While reporting uncertainty should become a routine expectation, a more convincing motivation involves discovery. By clarifying what is known and to what degree it is known, uncertainty analyses can point the way toward improvements in measurements, sampling designs, and models. While some of these improvements (e.g., better sampling designs) may lead to incremental gains, those involving models (particularly model selection) may require large gains in knowledge. To be fully harnessed as a discovery tool, attitudes toward uncertainty may have to change: rather than viewing uncertainty as a negative assessment of what was done, it should be viewed as a positive, helpful assessment of what remains to be done.

  3. Uncertainties in scaling factors for ab initio vibrational zero-point energies

    NASA Astrophysics Data System (ADS)

    Irikura, Karl K.; Johnson, Russell D.; Kacker, Raghu N.; Kessel, Rüdiger

    2009-03-01

    Vibrational zero-point energies (ZPEs) determined from ab initio calculations are often scaled by empirical factors. An empirical scaling factor partially compensates for the effects arising from vibrational anharmonicity and incomplete treatment of electron correlation. These effects are not random but are systematic. We report scaling factors for 32 combinations of theory and basis set, intended for predicting ZPEs from computed harmonic frequencies. An empirical scaling factor carries uncertainty. We quantify and report, for the first time, the uncertainties associated with scaling factors for ZPE. The uncertainties are larger than generally acknowledged; the scaling factors have only two significant digits. For example, the scaling factor for B3LYP/6-31G(d) is 0.9757±0.0224 (standard uncertainty). The uncertainties in the scaling factors lead to corresponding uncertainties in predicted ZPEs. The proposed method for quantifying the uncertainties associated with scaling factors is based upon the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. We also present a new reference set of 60 diatomic and 15 polyatomic "experimental" ZPEs that includes estimated uncertainties.
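
    As an illustration of how such a scaling-factor uncertainty propagates, the short sketch below applies the usual GUM product rule to a scaled ZPE. The B3LYP/6-31G(d) factor and its standard uncertainty are taken from the abstract; the harmonic ZPE value and its uncertainty are invented for the example.

    ```python
    import math

    # Scaling factor and its standard uncertainty for B3LYP/6-31G(d) (from the abstract)
    c, u_c = 0.9757, 0.0224

    # Hypothetical computed harmonic ZPE (kJ/mol) and its standard uncertainty
    zpe_harm, u_zpe_harm = 120.0, 0.5

    # Scaled ZPE and propagated standard uncertainty (GUM product rule,
    # assuming the two uncertainty contributions are uncorrelated)
    zpe = c * zpe_harm
    u_zpe = math.sqrt((zpe_harm * u_c) ** 2 + (c * u_zpe_harm) ** 2)

    print(f"ZPE = {zpe:.1f} +/- {u_zpe:.1f} kJ/mol")
    ```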

  4. Soil compaction: Evaluation of stress transmission and resulting soil structure

    NASA Astrophysics Data System (ADS)

    Naveed, Muhammad; Schjønning, Per; Keller, Thomas; Lamande, Mathieu

    2016-04-01

    Accurate estimation of stress transmission and the resulting deformation in soil profiles is a prerequisite for the development of predictive models and decision support tools for preventing soil compaction. Numerous studies have been carried out on the effects of soil compaction, whilst relatively few studies have focused on the cause (the mode of stress transmission in the soil). We coupled cause and effect in the present study by carrying out partially confined compression tests on (1) wet aggregates, (2) air-dry aggregates, and (3) intact soils to quantify stress transmission and the compaction-induced soil structure at the same time. Stress transmission was quantified using both X-ray CT and a Tactilus sensor mat, and the soil-pore structure was quantified using X-ray CT. Our results imply that stress transmission through soil depends strongly on the magnitude of the applied load and the aggregate strength. As long as the applied load is lower than the aggregate strength, the mode of stress transmission is discrete, with stresses mainly transmitted through chains of aggregates. With increasing applied load, soil aggregates begin to deform, transforming the heterogeneous soil into a more homogeneous medium; as a result, the stress transmission mode shifts from discrete towards continuum-like. The continuum-like stress transmission mode was better simulated with the Boussinesq (1885) model, based on the theory of elasticity, than the discrete mode. The soil-pore structure was greatly affected by increasing applied stresses. Total porosity was reduced by 5-16% and macroporosity by 50-85% at 620 kPa applied stress for the intact soils. Similarly, significant changes in the morphological indices of the macropore space were observed with increasing applied stresses.
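
    The Boussinesq (1885) solution mentioned above gives the vertical stress increase under a point load on an elastic half-space. The sketch below evaluates that closed-form expression for a few depths; the load and depths are illustrative, and a real profile analysis would integrate the solution over the loaded area.

    ```python
    import numpy as np

    def boussinesq_sigma_z(F, r, z):
        """Vertical stress increase at horizontal offset r and depth z
        below a point load F on an elastic half-space (Boussinesq, 1885)."""
        R = np.sqrt(r**2 + z**2)
        return 3.0 * F * z**3 / (2.0 * np.pi * R**5)

    F = 5000.0                       # point load in N (illustrative)
    for z in (0.1, 0.2, 0.4, 0.8):   # depths in m
        print(f"z = {z:.1f} m: sigma_z = {boussinesq_sigma_z(F, r=0.0, z=z):.0f} Pa")
    ```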

  5. Uncertainty Analysis of A Flood Risk Mapping Procedure Applied In Urban Areas

    NASA Astrophysics Data System (ADS)

    Krause, J.; Uhrich, S.; Bormann, H.; Diekkrüger, B.

    In the framework of the IRMA-Sponge program, the presented study was part of the joint research project FRHYMAP (flood risk and hydrological mapping). A simple conceptual flooding model (FLOODMAP) has been developed to simulate flooded areas beside rivers within cities. FLOODMAP requires a minimum of input data (digital elevation model (DEM), river line, water level plane) and parameters and calculates the flood extent as well as the spatial distribution of flood depths. Of course, the simulated model results are affected by errors and uncertainties. Possible sources of uncertainties are the model structure, model parameters and input data. Thus, after the model validation (comparison of the simulated to the observed flood extent, taken from airborne pictures), the uncertainty of the essential input data set (the digital elevation model) was analysed. Monte Carlo simulations were performed to assess the effect of uncertainties concerning the statistics of DEM quality and to derive flooding probabilities from the set of simulations. The questions concerning the minimum resolution of a DEM required for flood simulation and the best aggregation procedure for a given DEM were answered by comparing the results obtained using all available standard GIS aggregation procedures. Seven different aggregation procedures were applied to high-resolution DEMs (1-2 m) in three cities (Bonn, Cologne, Luxembourg). Based on this analysis, the effect of 'uncertain' DEM data was estimated and compared with other sources of uncertainties. Especially socio-economic information and monetary transfer functions required for a damage risk analysis show a high uncertainty. Therefore, this study helps to analyse the weak points of the flood risk and damage risk assessment procedure.
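
    The Monte Carlo treatment of DEM uncertainty described above can be sketched as follows: perturb the DEM with an error model consistent with its quality statistics, apply a simple planar-water-level inundation rule, and count how often each cell floods. The grid, water level, and error magnitude below are assumptions for illustration; FLOODMAP itself is more involved.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    dem = rng.uniform(48.0, 55.0, size=(100, 100))   # synthetic DEM (m a.s.l.)
    water_level = 51.0                               # planar flood water level (m)
    sigma_dem = 0.5                                  # assumed DEM vertical error std (m)
    n_runs = 500

    flooded_counts = np.zeros_like(dem)
    for _ in range(n_runs):
        dem_realization = dem + rng.normal(0.0, sigma_dem, size=dem.shape)
        flooded_counts += (dem_realization < water_level)

    flood_probability = flooded_counts / n_runs      # per-cell flooding probability
    print("mean flooded fraction:", flood_probability.mean().round(3))
    ```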

  6. Piecewise Polynomial Aggregation as Preprocessing for Data Numerical Modeling

    NASA Astrophysics Data System (ADS)

    Dobronets, B. S.; Popova, O. A.

    2018-05-01

    Data aggregation issues for numerical modeling are reviewed in the present study. The authors discuss data aggregation procedures as preprocessing for subsequent numerical modeling. To calculate the data aggregation, the authors propose using numerical probabilistic analysis (NPA). An important feature of this study is how the authors represent the aggregated data. The study shows that the proposed approach to data aggregation can be interpreted as the frequency distribution of a variable. To study its properties, the density function is used. For this purpose, the authors propose using piecewise polynomial models; a suitable example of such an approach is the spline. The authors show that their approach to data aggregation reduces the level of data uncertainty and significantly increases the efficiency of numerical calculations. To demonstrate how well the proposed methods correspond to reality, the authors developed a theoretical framework and considered numerical examples devoted to time series aggregation.
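
    A minimal sketch of representing aggregated data by a piecewise-polynomial density is given below: the raw data are binned, a cubic spline is fitted through the bin-centre frequencies, and the result is renormalized. The authors' numerical probabilistic analysis is considerably richer; the gamma-distributed data here are purely illustrative.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    rng = np.random.default_rng(1)
    raw = rng.gamma(shape=3.0, scale=2.0, size=10_000)   # raw data to be aggregated

    # Step 1: aggregate into a histogram (the "aggregated data")
    counts, edges = np.histogram(raw, bins=30, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])

    # Step 2: piecewise-polynomial (cubic spline) model of the density
    density = CubicSpline(centres, counts)

    # Step 3: renormalize on a fine grid so the piecewise polynomial integrates to 1
    x = np.linspace(centres[0], centres[-1], 1000)
    pdf = np.clip(density(x), 0.0, None)
    pdf /= np.trapz(pdf, x)

    print("approx. mean from aggregated density:", np.trapz(x * pdf, x).round(2))
    ```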

  7. A Novel Method to Quantify Soil Aggregate Stability by Measuring Aggregate Bond Energies

    NASA Astrophysics Data System (ADS)

    Efrat, Rachel; Rawlins, Barry G.; Quinton, John N.; Watts, Chris W.; Whitmore, Andy P.

    2016-04-01

    Soil aggregate stability is a key indicator of soil quality because it controls physical, biological and chemical functions important in cultivated soils. Micro-aggregates are responsible for the long-term sequestration of carbon in soil and therefore determine soil's role in the carbon cycle. It is thus vital that techniques to measure aggregate stability are accurate, consistent and reliable, in order to appropriately manage and monitor soil quality, and to develop our understanding and estimates of soil as a carbon store for appropriate incorporation in carbon cycle models. Practices used to assess the stability of aggregates vary in sample preparation, operational technique and unit of results. They use proxies and lack quantification. Conflicting results are therefore drawn between projects that do not provide methodological or resultant comparability. Typical modern stability tests suspend aggregates in water and monitor fragmentation upon exposure to an un-quantified amount of ultrasonic energy, utilising a laser granulometer to measure the change in mean weight diameter. In this project a novel approach has been developed, based on that of Zhu et al. (2009), to accurately quantify the stability of aggregates by specifically measuring their bond energies. The bond energies are measured using a combination of calorimetry and a high-powered ultrasonic probe with a computable output function. Temperature change during sonication is monitored by an array of probes, which enables calculation of the energy spent heating the system (Ph). Our novel technique suspends aggregates in the heavy liquid lithium heteropolytungstate, as opposed to water, to avoid exposing aggregates to an immeasurable disruptive energy source due to cavitation, collisions and clay swelling. Mean weight diameter is measured by a laser granulometer to monitor aggregate breakdown after successive periods of calculated ultrasonic energy input (Pi), until complete dispersion is achieved and the bond energy (Pb; input energy used in aggregate breakdown) can be calculated by the following equation: ΣPi − Ph = Pb. The novel technique was tested by comparing the bond energies measured from a series of soil aggregates sampled from different land management histories with the samples' corresponding stability measurements obtained from standard modern stability tests. The effectiveness of the heavy liquid as a suspension medium (as opposed to water) was evaluated by comparing the bond energies of samples measured in both suspensions. Our results determine i) how disruptive water is in aggregate stability tests, ii) how accurate and representative standard aggregate stability tests are, and iii) how bond strength varies depending on land use. Keywords: Aggregate; Bond; Fragmentation; Soil; Sonication; Stability References: Zhu, Z. L., Minasny, B. & Field, D. J. 2009. Measurement of aggregate bond energy using ultrasonic dispersion. European Journal of Soil Science, 60, 695-705
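
    The energy bookkeeping in the equation above is simple to reproduce; the sketch below uses invented sonication-step energies and an invented calorimetric heat term purely to show the arithmetic.

    ```python
    # Energy bookkeeping from the abstract: Pb = sum(Pi) - Ph
    ultrasonic_inputs_J = [150.0, 150.0, 150.0, 150.0]  # per sonication step (illustrative)
    heat_loss_J = 430.0                                  # energy spent heating the system (Ph)

    bond_energy_J = sum(ultrasonic_inputs_J) - heat_loss_J
    print(f"aggregate bond energy Pb = {bond_energy_J:.0f} J")
    ```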

  8. The Power of the Spectrum: Combining Numerical Proxy System Models with Analytical Error Spectra to Better Understand Timescale Dependent Proxy Uncertainty

    NASA Astrophysics Data System (ADS)

    Dolman, A. M.; Laepple, T.; Kunz, T.

    2017-12-01

    Understanding the uncertainties associated with proxy-based reconstructions of past climate is critical if they are to be used to validate climate models and contribute to a comprehensive understanding of the climate system. Here we present two related and complementary approaches to quantifying proxy uncertainty. The proxy forward model (PFM) "sedproxy" bitbucket.org/ecus/sedproxy numerically simulates the creation, archiving and observation of marine sediment archived proxies such as Mg/Ca in foraminiferal shells and the alkenone unsaturation index UK'37. It includes the effects of bioturbation, bias due to seasonality in the rate of proxy creation, aliasing of the seasonal temperature cycle into lower frequencies, and error due to cleaning, processing and measurement of samples. Numerical PFMs have the advantage of being very flexible, allowing many processes to be modelled and assessed for their importance. However, as more and more proxy-climate data become available, their use in advanced data products necessitates rapid estimates of uncertainties for both the raw reconstructions, and their smoothed/derived products, where individual measurements have been aggregated to coarser time scales or time-slices. To address this, we derive closed-form expressions for power spectral density of the various error sources. The power spectra describe both the magnitude and autocorrelation structure of the error, allowing timescale dependent proxy uncertainty to be estimated from a small number of parameters describing the nature of the proxy, and some simple assumptions about the variance of the true climate signal. We demonstrate and compare both approaches for time-series of the last millennia, Holocene, and the deglaciation. While the numerical forward model can create pseudoproxy records driven by climate model simulations, the analytical model of proxy error allows for a comprehensive exploration of parameter space and mapping of climate signal re-constructability, conditional on the climate and sampling conditions.

  9. Multiobjective design of aquifer monitoring networks for optimal spatial prediction and geostatistical parameter estimation

    NASA Astrophysics Data System (ADS)

    Alzraiee, Ayman H.; Bau, Domenico A.; Garcia, Luis A.

    2013-06-01

    Effective sampling of hydrogeological systems is essential in guiding groundwater management practices. Optimal sampling of groundwater systems has previously been formulated based on the assumption that heterogeneous subsurface properties can be modeled using a geostatistical approach. Therefore, the monitoring schemes have been developed to concurrently minimize the uncertainty in the spatial distribution of systems' states and parameters, such as the hydraulic conductivity K and the hydraulic head H, and the uncertainty in the geostatistical model of system parameters using a single objective function that aggregates all objectives. However, it has been shown that the aggregation of possibly conflicting objective functions is sensitive to the adopted aggregation scheme and may lead to distorted results. In addition, the uncertainties in geostatistical parameters affect the uncertainty in the spatial prediction of K and H according to a complex nonlinear relationship, which has often been ineffectively evaluated using a first-order approximation. In this study, we propose a multiobjective optimization framework to assist the design of monitoring networks of K and H with the goal of optimizing their spatial predictions and estimating the geostatistical parameters of the K field. The framework stems from the combination of a data assimilation (DA) algorithm and a multiobjective evolutionary algorithm (MOEA). The DA algorithm is based on the ensemble Kalman filter, a Monte-Carlo-based Bayesian update scheme for nonlinear systems, which is employed to approximate the posterior uncertainty in K, H, and the geostatistical parameters of K obtained by collecting new measurements. Multiple MOEA experiments are used to investigate the trade-off among design objectives and identify the corresponding monitoring schemes. The methodology is applied to design a sampling network for a shallow unconfined groundwater system located in Rocky Ford, Colorado. Results indicate that the effect of uncertainties associated with the geostatistical parameters on the spatial prediction might be significantly alleviated (by up to 80% of the prior uncertainty in K and by 90% of the prior uncertainty in H) by sampling evenly distributed measurements with a spatial measurement density of more than 1 observation per 60 m × 60 m grid block. In addition, exploration of the interaction of objective functions indicates that the ability of head measurements to reduce the uncertainty associated with the correlation scale is comparable to the effect of hydraulic conductivity measurements.
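
    The data assimilation component described above rests on the ensemble Kalman filter update. The sketch below shows a generic perturbed-observation EnKF update step with a linear observation operator; the state dimension, observation locations, and error levels are illustrative and are not those of the Rocky Ford application.

    ```python
    import numpy as np

    def enkf_update(ensemble, H, y, obs_std, rng):
        """Perturbed-observation EnKF update.
        ensemble : (n_state, n_members) prior ensemble
        H        : (n_obs, n_state) linear observation operator
        y        : (n_obs,) observed values
        """
        n_obs, n_members = H.shape[0], ensemble.shape[1]
        R = obs_std**2 * np.eye(n_obs)

        X = ensemble - ensemble.mean(axis=1, keepdims=True)
        HX = H @ ensemble
        HXp = HX - HX.mean(axis=1, keepdims=True)

        Pxy = X @ HXp.T / (n_members - 1)          # state-observation covariance
        Pyy = HXp @ HXp.T / (n_members - 1) + R    # observation-observation covariance
        K = Pxy @ np.linalg.solve(Pyy, np.eye(n_obs))

        y_perturbed = y[:, None] + rng.normal(0.0, obs_std, size=(n_obs, n_members))
        return ensemble + K @ (y_perturbed - HX)

    rng = np.random.default_rng(0)
    prior = rng.normal(0.0, 1.0, size=(50, 200))   # e.g., log-K values at 50 cells
    H = np.zeros((5, 50)); H[np.arange(5), [3, 10, 22, 35, 48]] = 1.0
    y = rng.normal(0.5, 0.1, size=5)               # new measurements at 5 locations
    posterior = enkf_update(prior, H, y, obs_std=0.1, rng=rng)
    print("prior var:", prior.var(axis=1).mean().round(3),
          "posterior var:", posterior.var(axis=1).mean().round(3))
    ```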

  10. Probabilistic structural analysis to quantify uncertainties associated with turbopump blades

    NASA Technical Reports Server (NTRS)

    Nagpal, Vinod K.; Rubinstein, Robert; Chamis, Christos C.

    1987-01-01

    A probabilistic study of turbopump blades has been in progress at NASA Lewis Research Center for over two years. The objectives of this study are to evaluate the effects of uncertainties in geometry and material properties on the structural response of the turbopump blades and to evaluate the tolerance limits on the design. A methodology based on a probabilistic approach has been developed to quantify the effects of the random uncertainties. The results of this study indicate that only the variations in geometry have significant effects.

  11. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
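
    The essence of the SROM strategy described above is to replace an uncertain input by a small set of weighted samples so that propagation requires only independent deterministic model calls. The sketch below uses a deliberately crude construction (mid-quantile samples with equal weights) rather than the optimized SROM of the paper, and a stand-in scalar model, purely to show the propagation step.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def model(load):
        """Stand-in deterministic model (e.g., a compliance-like response to a load)."""
        return 2.0 + 0.05 * load**1.5

    # Crude stochastic reduced order model: a handful of weighted samples.
    # Here the samples are mid-quantiles of the input and the weights are equal;
    # a true SROM would optimize samples/weights to match the input statistics.
    m = 7
    probs = np.full(m, 1.0 / m)
    samples = np.exp(rng.normal(0.0, 0.25, size=200_000))        # reference input population
    srom_samples = np.quantile(samples, (np.arange(m) + 0.5) / m)

    # Propagation needs only m deterministic model calls
    srom_out = model(srom_samples)
    srom_mean = np.sum(probs * srom_out)
    srom_var = np.sum(probs * (srom_out - srom_mean) ** 2)

    # Monte Carlo reference (many model calls)
    mc_out = model(samples)
    print("SROM mean/var:", round(srom_mean, 4), round(srom_var, 6))
    print("MC   mean/var:", round(mc_out.mean(), 4), round(mc_out.var(), 6))
    ```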

  12. Trend Change Detection in NDVI Time Series: Effects of Inter-Annual Variability and Methodology

    NASA Technical Reports Server (NTRS)

    Forkel, Matthias; Carvalhais, Nuno; Verbesselt, Jan; Mahecha, Miguel D.; Neigh, Christopher S.R.; Reichstein, Markus

    2013-01-01

    Changing trends in ecosystem productivity can be quantified using satellite observations of the Normalized Difference Vegetation Index (NDVI). However, the estimation of trends from NDVI time series differs substantially depending on the analyzed satellite dataset, the corresponding spatiotemporal resolution, and the applied statistical method. Here we compare the performance of a wide range of trend estimation methods and demonstrate that performance decreases with increasing inter-annual variability in the NDVI time series. Trend slope estimates based on annually aggregated time series or based on a seasonal-trend model show better performance than methods that remove the seasonal cycle of the time series. A breakpoint detection analysis reveals that an overestimation of breakpoints in NDVI trends can result in wrong or even opposite trend estimates. Based on our results, we give practical recommendations for the application of trend methods to long-term NDVI time series. In particular, we apply and compare different methods on NDVI time series in Alaska, where both greening and browning trends have been previously observed. Here, the multi-method uncertainty of NDVI trends is quantified through the application of the different trend estimation methods. Our results indicate that greening NDVI trends in Alaska are more spatially and temporally prevalent than browning trends. We also show that detected breakpoints in NDVI trends tend to coincide with large fires. Overall, our analyses demonstrate that seasonal trend methods need to be made more robust against inter-annual variability to quantify changing trends in ecosystem productivity with higher accuracy.
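
    The effect of annual aggregation on trend estimation can be illustrated with a few lines of code. The sketch below fits a linear trend to annually averaged values and to the raw monthly series of a synthetic NDVI record with a seasonal cycle; the data and noise levels are invented, and the paper's methods (seasonal-trend models, breakpoint detection) are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    years = np.arange(2000, 2013)
    months = np.arange(len(years) * 12)

    # Synthetic monthly NDVI: trend + seasonal cycle + noise
    trend_per_month = 0.002 / 12
    ndvi = (0.55 + trend_per_month * months
            + 0.15 * np.sin(2 * np.pi * months / 12)
            + 0.02 * rng.standard_normal(months.size))

    # Method A: aggregate to annual means, then fit a linear trend
    annual = ndvi.reshape(len(years), 12).mean(axis=1)
    slope_annual = np.polyfit(years, annual, 1)[0]

    # Method B: fit directly to the monthly series (seasonal cycle not removed)
    slope_monthly = np.polyfit(months / 12.0, ndvi, 1)[0]

    print(f"true trend: {0.002:.4f} /yr, annual-aggregate fit: {slope_annual:.4f} /yr, "
          f"monthly fit: {slope_monthly:.4f} /yr")
    ```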

  13. Measurement of photon indistinguishability to a quantifiable uncertainty using a Hong-Ou-Mandel interferometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Peter J.; Cheung, Jessica Y.; Chunnilall, Christopher J.

    2010-04-10

    We present a method for using the Hong-Ou-Mandel (HOM) interference technique to quantify photon indistinguishability within an associated uncertainty. The method allows the relative importance of various experimental factors affecting the HOM visibility to be identified, and enables the actual indistinguishability, with an associated uncertainty, to be estimated from experimentally measured quantities. A measurement equation has been derived that accounts for the non-ideal performance of the interferometer. The origin of each term of the equation is explained, along with procedures for their experimental evaluation and uncertainty estimation. These uncertainties are combined to give an overall uncertainty for the derived photon indistinguishability. The analysis was applied to measurements from an interferometer sourced with photon pairs from a parametric downconversion process. The measured photon indistinguishability was found to be 0.954 ± 0.036 using the prescribed method.

  14. Quantifying uncertainty in forest nutrient budgets

    Treesearch

    Ruth D. Yanai; Carrie R. Levine; Mark B. Green; John L. Campbell

    2012-01-01

    Nutrient budgets for forested ecosystems have rarely included error analysis, in spite of the importance of uncertainty to interpretation and extrapolation of the results. Uncertainty derives from natural spatial and temporal variation and also from knowledge uncertainty in measurement and models. For example, when estimating forest biomass, researchers commonly report...

  15. A multi-fidelity analysis selection method using a constrained discrete optimization formulation

    NASA Astrophysics Data System (ADS)

    Stults, Ian C.

    The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model uncertainty present in analyses with 4 or fewer input variables could be effectively quantified using a strategic distribution creation method; if more than 4 input variables exist, a Frontier Finding Particle Swarm Optimization should instead be used. Once model uncertainty in contributing analysis code choices has been quantified, a selection method is required to determine which of these choices should be used in simulations. Because much of the selection done for engineering problems is driven by the physics of the problem, these are poor candidate problems for testing the true fitness of a candidate selection method. Specifically moderate and high dimensional problems' variability can often be reduced to only a few dimensions and scalability often cannot be easily addressed. For these reasons a simple academic function was created for the uncertainty quantification, and a canonical form of the Fidelity Selection Problem (FSP) was created. Fifteen best- and worst-case scenarios were identified in an effort to challenge the candidate selection methods both with respect to the characteristics of the tradeoff between time cost and model uncertainty and with respect to the stringency of the constraints and problem dimensionality. The results from this experiment show that a Genetic Algorithm (GA) was able to consistently find the correct answer, but under certain circumstances, a discrete form of Particle Swarm Optimization (PSO) was able to find the correct answer more quickly. 
To better illustrate how the uncertainty quantification and discrete optimization might be conducted for a "real world" problem, an illustrative example was conducted using gas turbine engines.
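
    The evidence-theory bookkeeping used for model uncertainty can be illustrated with a basic probability assignment over intervals of model error and the resulting belief and plausibility of meeting an accuracy requirement. The masses and intervals below are invented for illustration and are not taken from the thesis.

    ```python
    # Basic probability assignment (BPA) over intervals of model error (%) for one
    # candidate analysis code; masses must sum to 1. Values are illustrative only.
    bpa = {
        (0.0, 2.0): 0.5,    # evidence that the error lies within 0-2 %
        (0.0, 5.0): 0.3,    # weaker, less specific evidence
        (2.0, 10.0): 0.2,
    }

    def belief(bpa, lo, hi):
        """Belief: mass of focal elements entirely inside [lo, hi]."""
        return sum(m for (a, b), m in bpa.items() if lo <= a and b <= hi)

    def plausibility(bpa, lo, hi):
        """Plausibility: mass of focal elements that intersect [lo, hi]."""
        return sum(m for (a, b), m in bpa.items() if a <= hi and b >= lo)

    # Requirement: model error no larger than 5 %
    print("Bel(error <= 5%) =", belief(bpa, 0.0, 5.0))
    print("Pl (error <= 5%) =", plausibility(bpa, 0.0, 5.0))
    ```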

  16. Probabilistic simulation of the human factor in structural reliability

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Chamis, Christos C.

    1991-01-01

    Structural failures have occasionally been attributed to human factors in engineering design, analysis, maintenance, and fabrication processes. Every facet of the engineering process is heavily governed by human factors and the degree of uncertainty associated with them. Factors such as societal, physical, professional, psychological, and many others introduce uncertainties that significantly influence the reliability of human performance. Quantifying human factors and the associated uncertainties in structural reliability requires: (1) identification of the fundamental factors that influence human performance, and (2) models to describe the interaction of these factors. An approach is being developed to quantify the uncertainties associated with human performance. This approach consists of a multifactor model in conjunction with direct Monte Carlo simulation.

  17. Label-free DNA quantification via a 'pipette, aggregate and blot' (PAB) approach with magnetic silica particles on filter paper.

    PubMed

    Li, Jingyi; Liu, Qian; Alsamarri, Hussein; Lounsbury, Jenny A; Haversitick, Doris M; Landers, James P

    2013-03-07

    Reliable measurement of DNA concentration is essential for a broad range of applications in biology and molecular biology, and for many of these, quantifying the nucleic acid content is inextricably linked to obtaining optimal results. In its most simplistic form, quantitative analysis of nucleic acids can be accomplished by UV-Vis absorbance and, in a more sophisticated format, by fluorimetry. A recently reported new concept, the 'pinwheel assay', involves a label-free approach for quantifying DNA through aggregation of paramagnetic beads in a rotating magnetic field. Here, we describe a simplified version of that assay adapted for execution using only a pipette and filter paper. The 'pipette, aggregate, and blot' (PAB) approach allows DNA to induce bead aggregation in a pipette tip through exposure to a magnetic field, followed by dispensing (blotting) onto filter paper. The filter paper immortalises the extent of aggregation, and digital images of the immortalised bead conformation, acquired with either a document scanner or a cell phone camera, allow for DNA quantification using a noncomplex algorithm. Human genomic DNA samples extracted from blood are quantified with the PAB approach and the results utilised to define the volume of sample used in a PCR reaction that is sensitive to the input mass of template DNA. Integrating the PAB assay with paper-based DNA extraction and detection modalities has the potential to yield 'DNA quant-on-paper' devices that may be useful for point-of-care testing.

  18. Uncertainty Quantification in Remaining Useful Life of Aerospace Components using State Space Models and Inverse FORM

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Goebel, Kai

    2013-01-01

    This paper investigates the use of the inverse first-order reliability method (inverse-FORM) to quantify the uncertainty in the remaining useful life (RUL) of aerospace components. The prediction of remaining useful life is an integral part of system health prognosis, and directly helps in online health monitoring and decision-making. However, the prediction of remaining useful life is affected by several sources of uncertainty, and therefore it is necessary to quantify the uncertainty in the remaining useful life prediction. While system parameter uncertainty and physical variability can be easily included in inverse-FORM, this paper extends the methodology to include: (1) future loading uncertainty; (2) process noise; and (3) uncertainty in the state estimate. The inverse-FORM method has been used in this paper to (1) quickly obtain probability bounds on the remaining useful life prediction; and (2) calculate the entire probability distribution of remaining useful life prediction, and the results are verified against Monte Carlo sampling. The proposed methodology is illustrated using a numerical example.

  19. Sources of uncertainty in flood inundation maps

    USGS Publications Warehouse

    Bales, J.D.; Wagner, C.R.

    2009-01-01

    Flood inundation maps typically have been used to depict inundated areas for floods having specific exceedance levels. The uncertainty associated with the inundation boundaries is seldom quantified, in part because not all of the sources of uncertainty are recognized and because data to quantify uncertainty seldom are available. Sources of uncertainty discussed in this paper include hydrologic data used for hydraulic model development and validation, topographic data, and the hydraulic model. The assumption of steady flow, which typically is made to produce inundation maps, has less of an effect on predicted inundation at lower flows than at higher flows because more time typically is required to inundate areas at high flows than at low flows. Difficulties with establishing reasonable cross sections that do not intersect and that represent water-surface slopes in tributaries contribute additional uncertainties to the hydraulic modelling. As a result, uncertainty in the flood inundation polygons simulated with a one-dimensional model increases with distance from the main channel.

  20. Ensembles vs. information theory: supporting science under uncertainty

    NASA Astrophysics Data System (ADS)

    Nearing, Grey S.; Gupta, Hoshin V.

    2018-05-01

    Multi-model ensembles are one of the most common ways to deal with epistemic uncertainty in hydrology. This is a problem because there is no known way to sample models such that the resulting ensemble admits a measure that has any systematic (i.e., asymptotic, bounded, or consistent) relationship with uncertainty. Multi-model ensembles are effectively sensitivity analyses and cannot - even partially - quantify uncertainty. One consequence of this is that multi-model approaches cannot support a consistent scientific method - in particular, multi-model approaches yield unbounded errors in inference. In contrast, information theory supports a coherent hypothesis test that is robust to (i.e., bounded under) arbitrary epistemic uncertainty. This paper may be understood as advocating a procedure for hypothesis testing that does not require quantifying uncertainty, but is coherent and reliable (i.e., bounded) in the presence of arbitrary (unknown and unknowable) uncertainty. We conclude by offering some suggestions about how this proposed philosophy of science suggests new ways to conceptualize and construct simulation models of complex, dynamical systems.

  1. Quantifying the benefits of improved rolling of chip seals : final report, June 2008.

    DOT National Transportation Integrated Search

    2008-06-01

    This report presents an improvement in the rolling protocol for chip seals based on an evaluation of aggregate : retention performance and aggregate embedment depth. The flip-over test (FOT), Vialit test, modified sand circle : test, digital image pr...

  2. Quantifying measurement uncertainty and spatial variability in the context of model evaluation

    NASA Astrophysics Data System (ADS)

    Choukulkar, A.; Brewer, A.; Pichugina, Y. L.; Bonin, T.; Banta, R. M.; Sandberg, S.; Weickmann, A. M.; Djalalova, I.; McCaffrey, K.; Bianco, L.; Wilczak, J. M.; Newman, J. F.; Draxl, C.; Lundquist, J. K.; Wharton, S.; Olson, J.; Kenyon, J.; Marquis, M.

    2017-12-01

    In an effort to improve wind forecasts for the wind energy sector, the Department of Energy and NOAA funded the second Wind Forecast Improvement Project (WFIP2). As part of the WFIP2 field campaign, a large suite of in-situ and remote sensing instrumentation was deployed to the Columbia River Gorge in Oregon and Washington from October 2015 to March 2017. The array of instrumentation deployed included 915-MHz wind profiling radars, sodars, wind-profiling lidars, and scanning lidars. The role of these instruments was to provide wind measurements at high spatial and temporal resolution for model evaluation and improvement of model physics. To properly determine model errors, the uncertainties in instrument-model comparisons need to be quantified accurately. These uncertainties arise from several factors, such as measurement uncertainty, spatial variability, and interpolation of model output to instrument locations. In this presentation, we will introduce a formalism to quantify measurement uncertainty and spatial variability. The accuracy of this formalism will be tested using existing datasets such as the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign. Finally, the uncertainties in wind measurement and the spatial variability estimates from the WFIP2 field campaign will be discussed to understand the challenges involved in model evaluation.

  3. Modeling spatial-temporal dynamics of global wetlands: Comprehensive evaluation of a new sub-grid TOPMODEL parameterization and uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Zimmermann, N. E.; Poulter, B.

    2015-12-01

    Simulating the spatial-temporal dynamics of wetlands is key to understanding the role of wetland biogeochemistry under past and future climate variability. Hydrologic inundation models, such as TOPMODEL, are based on a fundamental parameter known as the compound topographic index (CTI) and provide a computationally cost-efficient approach to simulate global wetland dynamics. However, there remain large discrepancies in the implementations of TOPMODEL in land-surface models (LSMs) and thus in their performance against observations. This study describes new improvements to the TOPMODEL implementation and estimates of global wetland dynamics using the LPJ-wsl DGVM, and quantifies uncertainties by comparing the effects of three digital elevation model products (HYDRO1k, GMTED, and HydroSHEDS), which differ in spatial resolution and accuracy, on simulated inundation dynamics. We found that calibrating TOPMODEL with a benchmark dataset can help to successfully predict the seasonal and interannual variations of wetlands, as well as improve the spatial distribution of wetlands to be consistent with inventories. The HydroSHEDS DEM, using a river-basin scheme for aggregating the CTI, shows the best accuracy for capturing the spatio-temporal dynamics of wetlands among the three DEM products. This study demonstrates the feasibility of capturing the spatial heterogeneity of inundation and of estimating seasonal and interannual variations in wetlands by coupling a hydrological module in LSMs with appropriate benchmark datasets. It additionally highlights the importance of an adequate understanding of topographic indices for simulating global wetlands and shows the opportunity to converge wetland estimates in LSMs by identifying the uncertainty associated with existing wetland products.
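
    A minimal sketch of the sub-grid TOPMODEL idea is given below: the inundated fraction of a coarse grid cell is the fraction of its sub-grid CTI values whose implied local water deficit reaches zero. The synthetic CTI distribution and the linear deficit-CTI relation (with a single scaling parameter m) are simplifying assumptions, not the LPJ-wsl parameterization itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Sub-grid compound topographic index (CTI) values within one coarse grid cell,
    # e.g., aggregated from a high-resolution DEM (synthetic here)
    cti = rng.gamma(shape=4.0, scale=2.0, size=10_000)

    def inundated_fraction(cti, mean_deficit, m=0.5):
        """Fraction of the cell whose local water deficit reaches zero.
        Local deficit is assumed to decrease linearly with (CTI - mean CTI),
        scaled by the parameter m (a simplifying assumption)."""
        local_deficit = mean_deficit - m * (cti - cti.mean())
        return float(np.mean(local_deficit <= 0.0))

    for deficit in (2.0, 1.0, 0.5, 0.1):   # grid-mean water-table deficit (arbitrary units)
        print(f"deficit = {deficit:.1f}: wetland fraction = "
              f"{inundated_fraction(cti, deficit):.2f}")
    ```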

  4. Answering Aggregation Questions in Contingency Valuation of Rural Transit Benefits

    DOT National Transportation Integrated Search

    2001-08-01

    While the qualitative benefits of transit are relatively well known, quantifying the benefits of transit is still a developing methodology. Quantifying benefits offers improved operational management and planning as well as better information for pol...

  5. What might we learn from climate forecasts?

    PubMed Central

    Smith, Leonard A.

    2002-01-01

    Most climate models are large dynamical systems involving a million (or more) variables on big computers. Given that they are nonlinear and not perfect, what can we expect to learn from them about the earth's climate? How can we determine which aspects of their output might be useful and which are noise? And how should we distribute resources between making them “better,” estimating variables of true social and economic interest, and quantifying how good they are at the moment? Just as “chaos” prevents accurate weather forecasts, so model error precludes accurate forecasts of the distributions that define climate, yielding uncertainty of the second kind. Can we estimate the uncertainty in our uncertainty estimates? These questions are discussed. Ultimately, all uncertainty is quantified within a given modeling paradigm; our forecasts need never reflect the uncertainty in a physical system. PMID:11875200

  6. Investigation of laboratory test procedures for assessing the structural capacity of geogrid-reinforced aggregate base materials.

    DOT National Transportation Integrated Search

    2015-04-01

    The objective of this research was to identify a laboratory test method that can be used to quantify improvements in structural capacity of aggregate base materials reinforced with geogrid. For this research, National Cooperative Highway Research Pro...

  7. REFINED PBPK MODEL OF AGGREGATE EXPOSURE TO METHYL TERTIARY-BUTYL ETHER

    EPA Science Inventory

    Aggregate (multiple pathway) exposures to methyl tertiary-butyl ether (MTBE) in air and water occur via dermal, inhalation, and oral routes. Previously, physiologically-based pharmacokinetic (PBPK) models have been used to quantify the kinetic behavior of MTBE and its primary met...

  8. Carbon Monitoring System Flux Estimation and Attribution: Impact of ACOS-GOSAT X(CO2) Sampling on the Inference of Terrestrial Biospheric Sources and Sinks

    NASA Technical Reports Server (NTRS)

    Liu, Junjie; Bowman, Kevin W.; Lee, Memong; Henze, David K.; Bousserez, Nicolas; Brix, Holger; Collatz, G. James; Menemenlis, Dimitris; Ott, Lesley; Pawson, Steven

    2014-01-01

    Using an Observing System Simulation Experiment (OSSE), we investigate the impact of JAXA Greenhouse gases Observing SATellite 'IBUKI' (GOSAT) sampling on the estimation of terrestrial biospheric flux with the NASA Carbon Monitoring System Flux (CMS-Flux) estimation and attribution strategy. The simulated observations in the OSSE use the actual column carbon dioxide (X(CO2)) b2.9 retrieval sensitivity and quality control for the year 2010, processed through the Atmospheric CO2 Observations from Space (ACOS) algorithm. CMS-Flux is a variational inversion system that uses the GEOS-Chem forward and adjoint model forced by a suite of observationally constrained fluxes from ocean, land and anthropogenic models. We investigate the impact of GOSAT sampling on flux estimation in two aspects: 1) random error uncertainty reduction and 2) the global and regional bias in posterior flux resulting from the spatiotemporally biased GOSAT sampling. Based on Monte Carlo calculations, we find that the global average flux uncertainty reduction ranges from 25% in September to 60% in July. When aggregated to the 11 land regions designated by phase 3 of the Atmospheric Tracer Transport Model Intercomparison Project, the annual mean uncertainty reduction ranges from 10% over North American boreal to 38% over South American temperate, which is driven by observational coverage and the magnitude of prior flux uncertainty. The uncertainty reduction over the South American tropical region is 30%, even with sparse observation coverage. We show that this reduction results from the large prior flux uncertainty and the impact of non-local observations. Given the assumed prior error statistics, the degrees of freedom for signal is approximately 1132 for 1 yr of the 74,055 GOSAT X(CO2) observations, which indicates that GOSAT provides approximately 1132 independent pieces of information about surface fluxes. We quantify the impact of GOSAT's spatiotemporal sampling on the posterior flux, and find that a bias of 0.7 gigatons of carbon in the global annual posterior flux results from the seasonally and diurnally biased sampling when using a diagonal prior flux error covariance.

  9. Cost-of-illness studies based on massive data: a prevalence-based, top-down regression approach.

    PubMed

    Stollenwerk, Björn; Welchowski, Thomas; Vogl, Matthias; Stock, Stephanie

    2016-04-01

    Despite the increasing availability of routine data, no analysis method has yet been presented for cost-of-illness (COI) studies based on massive data. We aim, first, to present such a method and, second, to assess the relevance of the associated gain in numerical efficiency. We propose a prevalence-based, top-down regression approach consisting of five steps: aggregating the data; fitting a generalized additive model (GAM); predicting costs via the fitted GAM; comparing predicted costs between prevalent and non-prevalent subjects; and quantifying the stochastic uncertainty via error propagation. To demonstrate the method, it was applied, in the context of chronic lung disease, to aggregated German sickness fund data (from 1999) covering over 7.3 million insured. To assess the gain in numerical efficiency, the computational time of the innovative approach was compared with that of corresponding GAMs applied to simulated individual-level data. Furthermore, the probability of model failure was modeled via logistic regression. Applying the innovative method was reasonably fast (19 min). In contrast, for patient-level data, computational time increased disproportionately with sample size. Furthermore, using patient-level data was accompanied by a substantial risk of model failure (about 80 % for 6 million subjects). The gain in computational efficiency of the innovative COI method seems to be of practical relevance. Furthermore, it may yield more precise cost estimates.

  10. Quantitative analysis of in situ optical diagnostics for inferring particle/aggregate parameters in flames: Implications for soot surface growth and total emissivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koeylue, U.O.

    1997-05-01

    An in situ particulate diagnostic/analysis technique is outlined based on the Rayleigh-Debye-Gans polydisperse fractal aggregate (RDG/PFA) scattering interpretation of absolute angular light scattering and extinction measurements. Using proper particle refractive index, the proposed data analysis method can quantitatively yield all aggregate parameters (particle volume fraction, f_v, fractal dimension, D_f, primary particle diameter, d_p, particle number density, n_p, and aggregate size distribution, pdf(N)) without any prior knowledge about the particle-laden environment. The present optical diagnostic/interpretation technique was applied to two different soot-containing laminar and turbulent ethylene/air nonpremixed flames in order to assess its reliability. The aggregate interpretation of optical measurements yielded D_f, d_p, and pdf(N) that are in excellent agreement with ex situ thermophoretic sampling/transmission electron microscope (TS/TEM) observations within experimental uncertainties. However, volume-equivalent single particle models (Rayleigh/Mie) overestimated d_p by about a factor of 3, causing an order of magnitude underestimation in n_p. Consequently, soot surface areas and growth rates were in error by a factor of 3, emphasizing that aggregation effects need to be taken into account when using optical diagnostics for a reliable understanding of soot formation/evolution mechanism in flames. The results also indicated that total soot emissivities were generally underestimated using Rayleigh analysis (up to 50%), mainly due to the uncertainties in soot refractive indices at infrared wavelengths. This suggests that aggregate considerations may not be essential for reasonable radiation heat transfer predictions from luminous flames because of fortuitous error cancellation, resulting in typically a 10 to 30% net effect.

  11. Managing uncertainty in flood protection planning with climate projections

    NASA Astrophysics Data System (ADS)

    Dittes, Beatrice; Špačková, Olga; Schoppa, Lukas; Straub, Daniel

    2018-04-01

    Technical flood protection is a necessary part of integrated strategies to protect riverine settlements from extreme floods. Many technical flood protection measures, such as dikes and protection walls, are costly to adapt after their initial construction. This poses a challenge to decision makers as there is large uncertainty in how the required protection level will change during the measure lifetime, which is typically many decades long. Flood protection requirements should account for multiple future uncertain factors: socioeconomic, e.g., whether the population and with it the damage potential grows or falls; technological, e.g., possible advancements in flood protection; and climatic, e.g., whether extreme discharge will become more frequent or not. This paper focuses on climatic uncertainty. Specifically, we devise methodology to account for uncertainty associated with the use of discharge projections, ultimately leading to planning implications. For planning purposes, we categorize uncertainties as either visible, if they can be quantified from available catchment data, or hidden, if they cannot be quantified from catchment data and must be estimated, e.g., from the literature. It is vital to consider the hidden uncertainty, since in practical applications only a limited amount of information (e.g., a finite projection ensemble) is available. We use a Bayesian approach to quantify the visible uncertainties and combine them with an estimate of the hidden uncertainties to learn a joint probability distribution of the parameters of extreme discharge. The methodology is integrated into an optimization framework and applied to a pre-alpine case study to give a quantitative, cost-optimal recommendation on the required amount of flood protection. The results show that hidden uncertainty ought to be considered in planning, but the larger the uncertainty already present, the smaller the impact of adding more. The recommended planning is robust to moderate changes in uncertainty as well as in trend. In contrast, planning without consideration of bias and dependencies in and between uncertainty components leads to strongly suboptimal planning recommendations.

  12. Microfluidic-Based Measurement Method of Red Blood Cell Aggregation under Hematocrit Variations

    PubMed Central

    2017-01-01

    Red blood cell (RBC) aggregation and the erythrocyte sedimentation rate (ESR) are considered to be promising biomarkers for effectively monitoring blood rheology at extremely low shear rates. In this study, a microfluidic-based measurement technique is suggested to evaluate RBC aggregation under hematocrit variations due to the continuous ESR. After the pipette tip is tightly fitted into an inlet port, a disposable suction pump is connected to the outlet port through a polyethylene tube. After dropping blood (approximately 0.2 mL) into the pipette tip, the blood flow can be started and stopped by periodically operating a pinch valve. To evaluate variations in RBC aggregation due to the continuous ESR, a new index, the EAI (erythrocyte-sedimentation-rate aggregation index), is suggested, which uses temporal variations of image intensity. To demonstrate the proposed method, the dynamic behavior of the disposable suction pump is first quantitatively characterized by varying the hematocrit levels and the cavity volume of the suction pump. Next, variations in RBC aggregation and ESR are quantified by varying the hematocrit levels. The conventional aggregation index (AI) remained constant, unrelated to the hematocrit values. However, the EAI decreased significantly with respect to the hematocrit values. Thus, the EAI is more effective than the AI for monitoring variations in RBC aggregation due to the ESR. Lastly, the proposed method is employed to detect aggregated blood and thermally-induced blood. The EAI gradually increased as the concentration of a dextran solution increased. In addition, the EAI decreased significantly for thermally-induced blood. From this experimental demonstration, the proposed method is shown to effectively measure variations in RBC aggregation due to continuous hematocrit variations, especially by quantifying the EAI. PMID:28878199

  13. Assessing uncertain human exposure to ambient air pollution using environmental models in the Web

    NASA Astrophysics Data System (ADS)

    Gerharz, L. E.; Pebesma, E.; Denby, B.

    2012-04-01

    Ambient air quality can have a significant impact on human health by causing respiratory and cardio-vascular diseases. Moreover, the pollutant concentration a person is exposed to can differ considerably between individuals, depending on their daily routine and movement patterns. Using a straightforward approach, this exposure can be estimated by integration of individual space-time paths and spatio-temporally resolved ambient air quality data. To allow a realistic exposure assessment, it is furthermore important to consider uncertainties due to input and model errors. In this work, we present a generic, web-based approach for estimating individual exposure by integration of uncertain position and air quality information, implemented as a web service. Following the Model Web initiative, which envisions an infrastructure for deploying, executing and chaining environmental models as services, existing models and data sources, e.g. for air quality, can be used to assess exposure. Therefore, the service needs to deal with the different formats, resolutions and uncertainty representations provided by model or data services. Potential mismatches can be accounted for by transformation of uncertainties and (dis-)aggregation of data, taking into account the resulting changes in the uncertainties, using components developed in the UncertWeb project. In UncertWeb, the Model Web vision is extended to an uncertainty-enabled Model Web, where services can process and communicate uncertainties in the data and models. The propagation of uncertainty to the exposure results is quantified using Monte Carlo simulation by combining different realisations of positions and ambient concentrations. Two case studies were used to evaluate the developed exposure assessment service. In the first study, GPS tracks with a positional uncertainty of a few meters, collected in the urban area of Münster, Germany, were used to assess exposure to PM10 (particulate matter smaller than 10 µm). Air quality data were provided by an uncertainty-enabled air quality model system in the form of hourly concentration realisations on a 250 m x 250 m grid over Münster. The second case study uses modelled human trajectories in Rotterdam, The Netherlands. The trajectories were provided as realisations at 15 min resolution per 4-digit postal code from an activity model. Air quality estimates were provided for different pollutants as ensembles by a coupled meteorology and air quality model system on a 1 km x 1 km grid with hourly resolution. Both case studies show the successful application of the service to different resolutions and uncertainty representations.

  14. Accounting for inter-annual and seasonal variability in regionalization of hydrologic response in the Great Lakes basin

    NASA Astrophysics Data System (ADS)

    Kult, J. M.; Fry, L. M.; Gronewold, A. D.

    2012-12-01

    Methods for predicting streamflow in areas with limited or nonexistent measures of hydrologic response typically invoke the concept of regionalization, whereby knowledge pertaining to gauged catchments is transferred to ungauged catchments. In this study, we identify watershed physical characteristics acting as primary drivers of hydrologic response throughout the US portion of the Great Lakes basin. Relationships between watershed physical characteristics and hydrologic response are generated from 166 catchments spanning a variety of climate, soil, land cover, and land form regimes through regression tree analysis, leading to a grouping of watersheds exhibiting similar hydrologic response characteristics. These groupings are then used to predict response in ungauged watersheds in an uncertainty framework. Results from this method are assessed alongside one historical regionalization approach which, while simple, has served as a cornerstone of Great Lakes regional hydrologic research for several decades. Our approach expands upon previous research by considering multiple temporal characterizations of hydrologic response. Due to the substantial inter-annual and seasonal variability in hydrologic response observed over the Great Lakes basin, results from the regression tree analysis differ considerably depending on the level of temporal aggregation used to define the response. Specifically, higher levels of temporal aggregation for the response metric (for example, indices derived from long-term means of climate and streamflow observations) lead to improved watershed groupings with lower within-group variance. However, this perceived improvement in model skill occurs at the cost of understated uncertainty when applying the regression to time series simulations or as a basis for model calibration. In such cases, our results indicate that predictions based on long-term characterizations of hydrologic response can produce misleading conclusions when applied at shorter time steps. This study suggests that measures of hydrologic response quantified at these shorter time steps may provide a more robust basis for making predictions in applications of water resource management, model calibration and simulations, and human health and safety.

  15. Using new aggregation operators in rule-based intelligent control

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Chen, Yung-Yaw; Yager, Ronald R.

    1990-01-01

    A new aggregation operator is applied in the design of an approximate reasoning-based controller. The ordered weighted averaging (OWA) operator has the property of lying between the And function and the Or function used in previous fuzzy set reasoning systems. It is shown here that, by applying OWA operators, more generalized types of control rules, which may include linguistic quantifiers such as Many and Most, can be developed. The new aggregation operators, as tested in a cart-pole balancing control problem, illustrate improved performance when compared with existing fuzzy control aggregation schemes.
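
    The OWA operator itself is compact: the inputs are sorted in descending order and combined with a weight vector, so that the pure Or (max) and And (min) behaviours appear as special cases. The sketch below illustrates this; the membership degrees and the 'Most'-like weight vector are made up for the example.

    ```python
    import numpy as np

    def owa(values, weights):
        """Ordered weighted averaging: weights are applied to the values
        sorted in descending order (Yager's OWA operator)."""
        values = np.sort(np.asarray(values, dtype=float))[::-1]
        weights = np.asarray(weights, dtype=float)
        assert np.isclose(weights.sum(), 1.0)
        return float(values @ weights)

    degrees = [0.9, 0.6, 0.3]                 # membership degrees of rule antecedents

    print("Or  (max):", owa(degrees, [1, 0, 0]))      # all weight on the largest value
    print("And (min):", owa(degrees, [0, 0, 1]))      # all weight on the smallest value
    print("mean     :", owa(degrees, [1/3, 1/3, 1/3]))
    print("'Most'-like:", owa(degrees, [0.1, 0.3, 0.6]))  # leans toward the And end
    ```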

  16. Assessing uncertainty and sensitivity of model parameterizations and parameters in WRF affecting simulated surface fluxes and land-atmosphere coupling over the Amazon region

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.

    2016-12-01

    This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multiple-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate compared to the interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
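
    As a rough sketch of the variance-partitioning idea behind the multiple-way analysis of variance, the code below computes the fraction of ensemble variance explained by each scheme group using one-way between-group sums of squares; the design table, scheme assignments, and response values are synthetic and only loosely patterned on the experiment described above.

    # Minimal sketch (not the study's actual code): partition the variance of a
    # simulated quantity (e.g., surface latent heat flux) across physics-scheme
    # choices using a simple one-way ANOVA sum of squares per factor.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    n = 120
    # Hypothetical ensemble design table: which scheme each member used.
    df = pd.DataFrame({
        "microphysics": rng.integers(0, 6, n),
        "convection":   rng.integers(0, 3, n),
        "pbl":          rng.integers(0, 6, n),
        "land_surface": rng.integers(0, 3, n),
    })
    # Hypothetical response with a strong PBL-scheme signal plus noise.
    df["flux"] = 100 + 8 * df["pbl"] + 2 * df["convection"] + rng.normal(0, 5, n)

    total_ss = ((df["flux"] - df["flux"].mean()) ** 2).sum()
    for factor in ["microphysics", "convection", "pbl", "land_surface"]:
        group_means = df.groupby(factor)["flux"].transform("mean")
        between_ss = ((group_means - df["flux"].mean()) ** 2).sum()
        print(f"{factor}: {100 * between_ss / total_ss:.1f}% of ensemble variance")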

  17. Understanding extreme sea levels for broad-scale coastal impact and adaptation analysis

    NASA Astrophysics Data System (ADS)

    Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Dangendorf, S.; Hinkel, J.; Slangen, A. B. A.

    2017-07-01

    One of the main consequences of mean sea level rise (SLR) on human settlements is an increase in flood risk due to an increase in the intensity and frequency of extreme sea levels (ESL). While substantial research efforts are directed towards quantifying projections and uncertainties of future global and regional SLR, corresponding uncertainties in contemporary ESL have not been assessed and projections are limited. Here we quantify, for the first time at global scale, the uncertainties in present-day ESL estimates, which have by default been ignored in broad-scale sea-level rise impact assessments to date. ESL uncertainties exceed those from global SLR projections and, assuming that we meet the Paris Agreement goals, the projected SLR itself by the end of the century in many regions. Both uncertainties in SLR projections and ESL estimates need to be understood and combined to fully assess potential impacts and adaptation needs.

  18. Selection of climate policies under the uncertainties in the Fifth Assessment Report of the IPCC

    NASA Astrophysics Data System (ADS)

    Drouet, L.; Bosetti, V.; Tavoni, M.

    2015-10-01

    Strategies for dealing with climate change must incorporate and quantify all the relevant uncertainties, and be designed to manage the resulting risks. Here we employ the best available knowledge so far, summarized by the three working groups of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5), to quantify the uncertainty of mitigation costs, climate change dynamics, and economic damage for alternative carbon budgets. We rank climate policies according to different decision-making criteria concerning uncertainty, risk aversion and intertemporal preferences. Our findings show that preferences over uncertainties are as important as the choice of the widely discussed time discount factor. Climate policies consistent with limiting warming to 2 °C above preindustrial levels are compatible with a subset of decision-making criteria and some model parametrizations, but not with the commonly adopted expected utility framework.

  19. Quantifying the measurement uncertainty of results from environmental analytical methods.

    PubMed

    Moser, J; Wegscheider, W; Sperka-Gottlieb, C

    2001-07-01

    The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.

  20. Investigation of Biotransport in a Tumor With Uncertain Material Properties Using a Nonintrusive Spectral Uncertainty Quantification Method.

    PubMed

    Alexanderian, Alen; Zhu, Liang; Salloum, Maher; Ma, Ronghui; Yu, Meilin

    2017-09-01

    In this study, statistical models are developed for modeling uncertain heterogeneous permeability and porosity in tumors, and the resulting uncertainties in pressure and velocity fields during an intratumoral injection are quantified using a nonintrusive spectral uncertainty quantification (UQ) method. Specifically, the uncertain permeability is modeled as a log-Gaussian random field, represented using a truncated Karhunen-Loève (KL) expansion, and the uncertain porosity is modeled as a log-normal random variable. The efficacy of the developed statistical models is validated by simulating the concentration fields with permeability and porosity of different uncertainty levels. The irregularity in the concentration field bears reasonable visual agreement with that in MicroCT images from experiments. The pressure and velocity fields are represented using polynomial chaos (PC) expansions to enable efficient computation of their statistical properties. The coefficients in the PC expansion are computed using a nonintrusive spectral projection method with the Smolyak sparse quadrature. The developed UQ approach is then used to quantify the uncertainties in the random pressure and velocity fields. A global sensitivity analysis is also performed to assess the contribution of individual KL modes of the log-permeability field to the total variance of the pressure field. It is demonstrated that the developed UQ approach can effectively quantify the flow uncertainties induced by uncertain material properties of the tumor.
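
    A minimal sketch of a truncated Karhunen-Loève representation of a log-Gaussian field is shown below, using the eigenpairs of a discretized exponential covariance on a 1-D grid; the grid, correlation length, variance, and mean log-permeability are assumed illustrative values, not the paper's tumor model.

    # Minimal sketch (illustrative, not the paper's implementation): sample a
    # log-Gaussian permeability field on a 1-D grid from a truncated
    # Karhunen-Loeve expansion of an exponential covariance kernel.
    import numpy as np

    n, L, corr_len, sigma = 200, 1.0, 0.2, 0.5
    x = np.linspace(0, L, n)
    C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

    # Discrete KL modes = eigenpairs of the covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(C)
    idx = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]

    n_kl = 10                       # truncation order
    rng = np.random.default_rng(2)
    xi = rng.standard_normal(n_kl)  # independent standard normal KL coefficients

    mean_log_k = np.log(1e-14)      # hypothetical mean log-permeability
    log_k = mean_log_k + eigvecs[:, :n_kl] @ (np.sqrt(eigvals[:n_kl]) * xi)
    k = np.exp(log_k)               # one realization of the permeability field

    print("variance captured by first 10 modes:",
          eigvals[:n_kl].sum() / eigvals.sum())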

  1. Sensitivity-Uncertainty Techniques for Nuclear Criticality Safety

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2017-08-07

    The sensitivity and uncertainty analysis course will introduce students to keff sensitivity data, cross-section uncertainty data, how keff sensitivity data and keff uncertainty data are generated, and how they can be used. Discussion will include how sensitivity/uncertainty data can be used to select applicable critical experiments, to quantify a defensible margin to cover validation gaps and weaknesses, and in development of upper subcritical limits.
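
    A compact way to see how such sensitivity and covariance data combine is the first-order "sandwich rule"; the sketch below uses made-up sensitivity coefficients and a made-up relative covariance matrix purely for illustration.

    # Minimal sketch of first-order ("sandwich rule") propagation of cross-section
    # covariance data through keff sensitivity coefficients; numbers are made up.
    import numpy as np

    # Relative sensitivities dk/k per dsigma/sigma for a few nuclide-reaction pairs.
    S = np.array([0.35, -0.12, 0.08])
    # Relative covariance matrix of those cross sections (illustrative values).
    C = np.array([[4.0e-4, 1.0e-4, 0.0],
                  [1.0e-4, 9.0e-4, 0.0],
                  [0.0,    0.0,    2.5e-4]])

    rel_var_keff = S @ C @ S        # sandwich rule: var = S^T C S
    print("relative keff uncertainty (1 sigma): "
          f"{100 * np.sqrt(rel_var_keff):.3f} %")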

  2. Uncertainty in simulating wheat yields under climate change

    NASA Astrophysics Data System (ADS)

    Asseng, S.; Ewert, F.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P. J.; Rötter, R. P.; Cammarano, D.; Brisson, N.; Basso, B.; Martre, P.; Aggarwal, P. K.; Angulo, C.; Bertuzzi, P.; Biernath, C.; Challinor, A. J.; Doltra, J.; Gayler, S.; Goldberg, R.; Grant, R.; Heng, L.; Hooker, J.; Hunt, L. A.; Ingwersen, J.; Izaurralde, R. C.; Kersebaum, K. C.; Müller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.; Olesen, J. E.; Osborne, T. M.; Palosuo, T.; Priesack, E.; Ripoche, D.; Semenov, M. A.; Shcherbak, I.; Steduto, P.; Stöckle, C.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Travasso, M.; Waha, K.; Wallach, D.; White, J. W.; Williams, J. R.; Wolf, J.

    2013-09-01

    Projections of climate change impacts on crop yields are inherently uncertain. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic and objective comparisons among process-based crop simulation models are difficult. Here we present the largest standardized model intercomparison for climate change impacts so far. We found that individual crop models are able to simulate measured wheat grain yields accurately under a range of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi-model ensembles. Less uncertainty in describing how climate change may affect agricultural productivity will aid adaptation strategy development and policymaking.

  3. A web-application for visualizing uncertainty in numerical ensemble models

    NASA Astrophysics Data System (ADS)

    Alberti, Koko; Hiemstra, Paul; de Jong, Kor; Karssenberg, Derek

    2013-04-01

    Numerical ensemble models are used in the analysis and forecasting of a wide range of environmental processes. Common use cases include assessing the consequences of nuclear accidents, pollution releases into the ocean or atmosphere, forest fires, volcanic eruptions, or identifying areas at risk from such hazards. In addition to the increased use of scenario analyses and model forecasts, the availability of supplementary data describing errors and model uncertainties is increasingly commonplace. Unfortunately, most current visualization routines are not capable of properly representing uncertain information. As a result, uncertainty information is not provided at all, not readily accessible, or it is not communicated effectively to model users such as domain experts, decision makers, policy makers, or even novice users. In an attempt to address these issues, a lightweight and interactive web-application has been developed. It makes clear and concise uncertainty visualizations available in a web-based mapping and visualization environment, incorporating aggregation (upscaling) techniques to adjust uncertainty information to the zooming level. The application has been built on a web mapping stack of open source software, and can quantify and visualize uncertainties in numerical ensemble models in such a way that both expert and novice users can investigate uncertainties present in a simple ensemble dataset. As a test case, a dataset was used which forecasts the spread of an airborne tracer across Western Europe. Extrinsic uncertainty representations are used in which dynamic circular glyphs are overlaid on model attribute maps to convey various uncertainty concepts. It supports basic uncertainty metrics such as standard deviation, standard error, width of the 95% confidence interval, and interquartile range, as well as more experimental ones aimed at novice users. Ranges of attribute values can be specified, and the circular glyphs dynamically change size to represent the probability of the attribute value falling within the specified interval. For more advanced users, graphs of the cumulative probability density function, histograms, and time series plume charts are available. To avoid cognitive overload and crowding of glyphs on the map pane, the support of the data used for generating the glyphs is linked dynamically to the zoom level. Zooming in and out respectively decreases and increases the underlying support size of data used for generating the glyphs, thereby making uncertainty information of the original data upscaled to the resolution of the visualization accessible to the user. This feature also ensures that the glyphs are neatly spaced in a regular grid regardless of the zoom level. Finally, the web-application has been presented to groups of test users of varying degrees of expertise in order to evaluate the usability of the interface and the effectiveness of uncertainty visualizations based on circular glyphs.
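
    For concreteness, the sketch below computes the kind of per-cell ensemble statistics such glyphs could encode (standard deviation, standard error, 95% interval width, interquartile range, and the probability of the attribute falling in a user-specified range); the ensemble values and range are synthetic and not taken from the application described above.

    # Minimal sketch: per-cell ensemble statistics that an uncertainty glyph
    # might expose (synthetic tracer-concentration ensemble at one map cell).
    import numpy as np

    rng = np.random.default_rng(8)
    ensemble = rng.lognormal(mean=1.0, sigma=0.4, size=100)

    std = ensemble.std(ddof=1)
    stderr = std / np.sqrt(ensemble.size)
    p2_5, p97_5, q1, q3 = np.percentile(ensemble, [2.5, 97.5, 25, 75])

    lo, hi = 2.0, 4.0          # user-specified attribute range for the glyph
    p_in_range = np.mean((ensemble >= lo) & (ensemble <= hi))

    print(f"std={std:.2f}, stderr={stderr:.2f}, "
          f"95% width={p97_5 - p2_5:.2f}, IQR={q3 - q1:.2f}, "
          f"P({lo} <= x <= {hi}) = {p_in_range:.2f}")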

  4. Uncertainty in hydrological signatures for gauged and ungauged catchments

    NASA Astrophysics Data System (ADS)

    Westerberg, Ida K.; Wagener, Thorsten; Coxon, Gemma; McMillan, Hilary K.; Castellarin, Attilio; Montanari, Alberto; Freer, Jim

    2016-03-01

    Reliable information about hydrological behavior is needed for water-resource management and scientific investigations. Hydrological signatures quantify catchment behavior as index values, and can be predicted for ungauged catchments using a regionalization procedure. The prediction reliability is affected by data uncertainties for the gauged catchments used in prediction and by uncertainties in the regionalization procedure. We quantified signature uncertainty stemming from discharge data uncertainty for 43 UK catchments and propagated these uncertainties in signature regionalization, while accounting for regionalization uncertainty with a weighted-pooling-group approach. Discharge uncertainty was estimated using Monte Carlo sampling of multiple feasible rating curves. For each sampled rating curve, a discharge time series was calculated and used in deriving the gauged signature uncertainty distribution. We found that the gauged uncertainty varied with signature type, local measurement conditions and catchment behavior, with the highest uncertainties (median relative uncertainty ±30-40% across all catchments) for signatures measuring high- and low-flow magnitude and dynamics. Our regionalization method allowed assessing the role and relative magnitudes of the gauged and regionalized uncertainty sources in shaping the signature uncertainty distributions predicted for catchments treated as ungauged. We found that (1) if the gauged uncertainties were neglected there was a clear risk of overconditioning the regionalization inference, e.g., by attributing catchment differences resulting from gauged uncertainty to differences in catchment behavior, and (2) uncertainty in the regionalization results was lower for signatures measuring flow distribution (e.g., mean flow) than flow dynamics (e.g., autocorrelation), and for average flows (and then high flows) compared to low flows.
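
    The rating-curve Monte Carlo idea can be sketched as follows: sample feasible rating-curve parameters, convert a stage record to discharge for each sample, and collect the resulting distribution of a signature. The power-law curve, parameter distributions, stage series, and signature choice below are illustrative assumptions, not the study's actual setup.

    # Minimal sketch (synthetic data): propagate rating-curve uncertainty into a
    # hydrological signature by Monte Carlo sampling of feasible rating curves.
    import numpy as np

    rng = np.random.default_rng(3)
    stage = rng.gamma(shape=2.0, scale=0.4, size=365)   # hypothetical daily stage (m)

    n_curves = 1000
    signature = np.empty(n_curves)
    for i in range(n_curves):
        # Feasible power-law rating curve Q = a * h**b, with uncertain a and b.
        a = rng.normal(5.0, 0.5)
        b = rng.normal(1.6, 0.1)
        Q = a * stage**b                                # discharge time series
        signature[i] = Q.mean() / np.median(Q)          # an example flow signature

    lo, hi = np.percentile(signature, [5, 95])
    print(f"signature 90% uncertainty interval: ({lo:.2f}, {hi:.2f})")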

  5. Development of image analysis techniques as a tool to detect and quantify morphological changes in anaerobic sludge: I. Application to a granulation process.

    PubMed

    Araya-Kroff, P; Amaral, A L; Neves, L; Ferreira, E C; Pons, M-N; Mota, M; Alves, M M

    2004-07-20

    Image analysis techniques were developed and applied to quantify the process of anaerobic granulation in an expanded granular sludge blanket reactor (EGSB) fed with a synthetic substrate based on glucose [60-30% COD (chemical oxygen demand)] and volatile fatty acids (40-70% COD) over 376 days. In a first operation period that lasted 177 days, the aggregation of dispersed sludge was quantitatively monitored through the recognition and quantification of aggregates and filaments. A parameter defined as the ratio between the filaments' length and the aggregates' projected area (LfA) has proven to be sensitive enough to detect changes in the aggregation status of the anaerobic sludge. The aggregation time, defined as the moment when a balance between filament length and aggregate size was established, was recognized through the LfA. The percentage of projected area of aggregates within three size ranges (0.01-0.1 mm, 0.1-1 mm, and >1 mm, equivalent diameter) reflected the granular size spectrum during the aggregation process. When sudden increases in the upflow velocity and in the organic loading rate were applied to the previously formed granules, the developed image analysis techniques proved to be good indicators of granular sludge stability, since they were sensitive to filament release, fragmentation, and erosion, which usually lead to washout. The specific methanogenic activities in the presence of acetate, propionate, butyrate, and H2/CO2 increased along the operation; particularly relevant was the sudden increase in the specific hydrogenophilic activity immediately after the moment recognized as the aggregation time. Copyright 2004 Wiley Periodicals, Inc.

  6. Mapping disease at an approximated individual level using aggregate data: a case study of mapping New Hampshire birth defects.

    PubMed

    Shi, Xun; Miller, Stephanie; Mwenda, Kevin; Onda, Akikazu; Reese, Judy; Onega, Tracy; Gui, Jiang; Karagas, Margret; Demidenko, Eugene; Moeschler, John

    2013-09-06

    Limited by data availability, most disease maps in the literature are for relatively large and subjectively-defined areal units, which are subject to problems associated with polygon maps. High resolution maps based on objective spatial units are needed to more precisely detect associations between disease and environmental factors. We propose to use a Restricted and Controlled Monte Carlo (RCMC) process to disaggregate polygon-level location data, mapping aggregate data at an approximated individual level. RCMC assigns a random point location to a polygon-level location, in which the randomization is restricted by the polygon and controlled by the background (e.g., population at risk). RCMC allows analytical processes designed for individual data to be applied, and generates high-resolution raster maps. We applied RCMC to the town-level birth defect data for New Hampshire and generated raster maps at a resolution of 100 m. Besides the map of significance of birth defect risk represented by p-value, the output also includes a map of spatial uncertainty and a map of hot spots. RCMC is an effective method to disaggregate aggregate data. RCMC-based disease mapping maximizes the use of available spatial information, and explicitly estimates the spatial uncertainty resulting from aggregation.
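
    A toy sketch of the restricted-and-controlled sampling step is given below: candidate points are drawn in the polygon's bounding box, rejected if they fall outside the polygon (the restriction), and accepted with probability proportional to a background population density (the control). The square polygon and Gaussian density are placeholders, not New Hampshire data.

    # Minimal sketch of the Restricted and Controlled Monte Carlo idea: place a
    # case known only at polygon (town) level at a random point inside that
    # polygon, with acceptance controlled by background population density.
    import numpy as np
    from matplotlib.path import Path

    rng = np.random.default_rng(4)
    polygon = Path([(0, 0), (10, 0), (10, 6), (0, 6)])   # town boundary (restriction)

    def population_density(x, y):
        # Hypothetical background: population concentrated around (3, 3).
        return np.exp(-((x - 3) ** 2 + (y - 3) ** 2) / 4.0)

    def rcmc_point(polygon, xmin, xmax, ymin, ymax, max_density):
        while True:
            x = rng.uniform(xmin, xmax)
            y = rng.uniform(ymin, ymax)
            if not polygon.contains_point((x, y)):
                continue                                  # restriction: stay inside the polygon
            if rng.uniform(0, max_density) < population_density(x, y):
                return x, y                               # control: weight by population

    points = [rcmc_point(polygon, 0, 10, 0, 6, 1.0) for _ in range(100)]
    print("first disaggregated case location:", points[0])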

  7. Using OCO-2 Observations and Lagrangian Modeling to Constrain Urban Carbon Dioxide Emissions in the Middle East

    NASA Astrophysics Data System (ADS)

    Yang, E. G.; Kort, E. A.; Ware, J.; Ye, X.; Lauvaux, T.; Wu, D.; Lin, J. C.; Oda, T.

    2017-12-01

    Anthropogenic carbon dioxide (CO2) emissions are greatly perturbing the Earth's carbon cycle. Rising emissions from the developing world are increasing uncertainties in global CO2 emissions. With the rapid urbanization of developing regions, methods of constraining urban CO2 emissions in these areas can address critical uncertainties in the global carbon budget. In this study, we work toward constraining urban CO2 emissions in the Middle East by comparing top-down observations and bottom-up simulations of total column CO2 (XCO2) in four cities (Riyadh, Cairo, Baghdad, and Doha), both separately and in aggregate. This comparison involves quantifying the relationship for all available data in the period of September 2014 until March 2016 between observations of XCO2 from the Orbiting Carbon Observatory-2 (OCO-2) satellite and simulations of XCO2 using the Stochastic Time-Inverted Lagrangian Transport (STILT) model coupled with Global Data Assimilation System (GDAS) reanalysis products and multiple CO2 emissions inventories. We discuss the extent to which our observation/model framework can distinguish between the different emissions representations and determine optimized emissions estimates for this domain. We also highlight the implications of our comparisons on the fidelity of the bottom-up inventories used, and how these implications may inform the use of OCO-2 data for urban regions around the world.

  8. Simulation of blue and green water resources in the Wei River basin, China

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Zuo, D.

    2014-09-01

    The Wei River is the largest tributary of the Yellow River in China and it is suffering from water scarcity and water pollution. In order to quantify the amount of water resources in the study area, a hydrological modelling approach was applied by using SWAT (Soil and Water Assessment Tool), calibrated and validated with SUFI-2 (Sequential Uncertainty Fitting program) based on river discharge in the Wei River basin (WRB). Sensitivity and uncertainty analyses were also performed to improve the model performance. Water resources components of blue water flow, green water flow and green water storage were estimated at the HRU (Hydrological Response Unit) scale. Water resources in HRUs were also aggregated to sub-basins, river catchments, and then city/region scales for further analysis. The results showed that most parts of the WRB experienced a decrease in blue water resources between the 1960s and 2000s, with a minimum value in the 1990s. The decrease is particularly significant in the most southern part of the WRB (Guanzhong Plain), one of the most important grain production bases in China. Variations of green water flow and green water storage were relatively small in both space and time. This study provides strategic information for optimal utilization of water resources and planning of cultivating seasons in the Wei River basin.

  9. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    DOE PAGES

    Yankov, Artem; Collins, Benjamin; Klein, Markus; ...

    2012-01-01

    For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.

  10. Cataract-associated P23T γD-crystallin retains a native-like fold in amorphous-looking aggregates formed at physiological pH

    NASA Astrophysics Data System (ADS)

    Boatz, Jennifer C.; Whitley, Matthew J.; Li, Mingyue; Gronenborn, Angela M.; van der Wel, Patrick C. A.

    2017-05-01

    Cataracts cause vision loss through the large-scale aggregation of eye lens proteins as a result of ageing or congenital mutations. The development of new treatments is hindered by uncertainty about the nature of the aggregates and their mechanism of formation. We describe the structure and morphology of aggregates formed by the P23T human γD-crystallin mutant associated with congenital cataracts. At physiological pH, the protein forms aggregates that look amorphous and disordered by electron microscopy, reminiscent of the reported formation of amorphous deposits by other crystallin mutants. Surprisingly, solid-state NMR reveals that these amorphous deposits have a high degree of structural homogeneity at the atomic level and that the aggregated protein retains a native-like conformation, with no evidence for large-scale misfolding. Non-physiological destabilizing conditions used in many in vitro aggregation studies are shown to yield qualitatively different, highly misfolded amyloid-like fibrils.

  11. Uncertainty quantification of surface-water/groundwater exchange estimates in large wetland systems using Python

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; Metz, P. A.

    2014-12-01

    Most watershed studies include observation-based water budget analyses to develop first-order estimates of significant flow terms. Surface-water/groundwater (SWGW) exchange is typically assumed to be equal to the residual of the sum of inflows and outflows in a watershed. These estimates of SWGW exchange, however, are highly uncertain as a result of the propagation of uncertainty inherent in the calculation or processing of the other terms of the water budget, such as stage-area-volume relations, and uncertainties associated with land-cover based evapotranspiration (ET) rate estimates. Furthermore, the uncertainty of estimated SWGW exchanges can be magnified in large wetland systems that transition from dry to wet during wet periods. Although it is well understood that observation-based estimates of SWGW exchange are uncertain it is uncommon for the uncertainty of these estimates to be directly quantified. High-level programming languages like Python can greatly reduce the effort required to (1) quantify the uncertainty of estimated SWGW exchange in large wetland systems and (2) evaluate how different approaches for partitioning land-cover data in a watershed may affect the water-budget uncertainty. We have used Python with the Numpy, Scipy.stats, and pyDOE packages to implement an unconstrained Monte Carlo approach with Latin Hypercube sampling to quantify the uncertainty of monthly estimates of SWGW exchange in the Floral City watershed of the Tsala Apopka wetland system in west-central Florida, USA. Possible sources of uncertainty in the water budget analysis include rainfall, ET, canal discharge, and land/bathymetric surface elevations. Each of these input variables was assigned a probability distribution based on observation error or spanning the range of probable values. The Monte Carlo integration process exposes the uncertainties in land-cover based ET rate estimates as the dominant contributor to the uncertainty in SWGW exchange estimates. We will discuss the uncertainty of SWGW exchange estimates using an ET model that partitions the watershed into open water and wetland land-cover types. We will also discuss the uncertainty of SWGW exchange estimates calculated using ET models partitioned into additional land-cover types.
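
    A minimal sketch of this kind of workflow is shown below, assuming the pyDOE package named in the abstract is available: a Latin Hypercube design is transformed through assumed input distributions and the SWGW exchange is taken as the water-budget residual. The budget terms, distributions, and sign convention are illustrative placeholders, not the Floral City values.

    # Minimal sketch of the workflow the abstract describes: Latin Hypercube
    # sampling of water-budget terms with pyDOE/scipy.stats, with the SWGW
    # exchange computed as the budget residual. Distributions are illustrative.
    import numpy as np
    from scipy import stats
    from pyDOE import lhs

    n = 10000
    u = lhs(4, samples=n)                      # uniform [0,1] LHS design, 4 inputs

    # Monthly water-budget terms (mm), transformed through assumed distributions:
    rain   = stats.norm(150, 15).ppf(u[:, 0])  # rainfall
    et     = stats.norm(110, 25).ppf(u[:, 1])  # land-cover based ET
    canal  = stats.norm(20, 5).ppf(u[:, 2])    # canal discharge out of the watershed
    dstore = stats.norm(10, 8).ppf(u[:, 3])    # change in storage (stage-area-volume)

    # SWGW exchange as the residual of the observation-based budget
    # (assumed sign convention: positive = net groundwater inflow).
    swgw = dstore - (rain - et - canal)

    print(f"SWGW exchange: median {np.median(swgw):.1f} mm, "
          f"90% interval ({np.percentile(swgw, 5):.1f}, "
          f"{np.percentile(swgw, 95):.1f}) mm")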

  12. The Macro Dynamics of Weapon System Acquisition: Shaping Early Decisions to Get Better Outcomes

    DTIC Science & Technology

    2012-05-17

    This record contains only fragmentary briefing text. Recoverable phrases include: defects and rework; design tools and processes; lack of feedback to key design and SE processes; lack of quantified risk and uncertainty at key...; tools for rapid exploration of the physical design space coupling operability, interoperability, and physical feasibility analyses; interoperability; training; quantified margins and uncertainties at each critical decision point; M&S; RDT&E; and a continuum of tools underpinned with...

  13. The dynamics of soil aggregate breakdown in water in response to landuse as measured with laser diffraction technique

    NASA Astrophysics Data System (ADS)

    Oyedele, D. J.; Pini, R.; Sparvoli, E.; Scatena, M.

    2012-04-01

    The Mastersizer 2000G (Malvern Instruments) laser diffraction instrument was used to assess and quantify the breakdown of soil aggregates and to compute wet aggregate stability indices. The study was aimed at developing a novel, rapid method of determining soil aggregate stability. Bulk surface (0-15 cm) soil samples were collected under 5 different land uses in the Teaching and Research Farm of Obafemi Awolowo University, Ile-Ife, Nigeria. About 0.5 g of soil aggregates (0.5-1 mm diameter) were evaluated in the laser diffractometer with the stirrer operated at 500 rpm and the pump at 1800 rpm. The different-sized aggregates and particles of sand, silt, and clay were quantified periodically. Water stable aggregates greater than 250 µm (WSA>250), water stable aggregates less than 250 µm (WSA<250), water dispersible clay index (WDI), and mean volume diameter (MVD), among others, were computed from the laser diffraction data. The values were compared with the classical Yoder wet sieving technique. The WSA>250 was significantly higher in the soils under Forest (FR), Cacao (CC), Teak (TK) and Oil Palm (OP) plantations, and significantly lowest under no-tillage (NT) and continuous cultivation (CT). The pasture (PD) was not significantly different from either the cultivated or the non-cultivated soils. Conversely, the WSA<250 and the water dispersible clay index were highest in the cultivated soils (CT and NT) and lowest in the non-cultivated soils (FR, TK, CC and OP), while the PD was in between. The MVD also followed a similar trend to the WSA>250. The wet sieving water stable aggregates index (WSI>250) was significantly correlated with WSA>250 (r = 0.75), MVD (r = 0.75), WDI (r = -0.68) and WSA<250 (r = -0.73). All the laser diffraction measured aggregation indices were significantly correlated with the organic matter contents of the soils. Thus, laser diffraction promises a rapid and comprehensive method for evaluating soil aggregate stability.

  14. Uncertainty quantification in nanomechanical measurements using the atomic force microscope

    Treesearch

    Ryan Wagner; Robert Moon; Jon Pratt; Gordon Shaw; Arvind Raman

    2011-01-01

    Quantifying uncertainty in measured properties of nanomaterials is a prerequisite for the manufacture of reliable nanoengineered materials and products. Yet, rigorous uncertainty quantification (UQ) is rarely applied for material property measurements with the atomic force microscope (AFM), a widely used instrument that can measure properties at nanometer scale...

  15. The Stock Market: Risk vs. Uncertainty.

    ERIC Educational Resources Information Center

    Griffitts, Dawn

    2002-01-01

    This economics education publication focuses on the U.S. stock market and the risk and uncertainty that an individual faces when investing in the market. The material explains that risk and uncertainty relate to the same underlying concept: randomness. It defines and discusses both concepts and notes that although risk is quantifiable, uncertainty…

  16. Calculating Remote Sensing Reflectance Uncertainties Using an Instrument Model Propagated Through Atmospheric Correction via Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Karakoylu, E.; Franz, B.

    2016-01-01

    This is a first attempt at quantifying uncertainties in satellite measurements of ocean remote sensing reflectance. The analysis is based on 1000 Monte Carlo iterations, with a 2003 SeaWiFS 4-day composite as the data source; the uncertainty is reported for remote sensing reflectance (Rrs) at 443 nm.

  17. Uncertainty in stormwater drainage adaptation: what matters and how much is too much?

    NASA Astrophysics Data System (ADS)

    Stack, L. J.; Simpson, M. H.; Moore, T.; Gulliver, J. S.; Roseen, R.; Eberhart, L.; Smith, J. B.; Gruber, J.; Yetka, L.; Wood, R.; Lawson, C.

    2014-12-01

    Published research continues to report that long-term, local-scale precipitation forecasts are too uncertain to support local-scale adaptation. Numerous studies quantify the range of uncertainty in downscaled model output; compare this with uncertainty from other sources such as hydrological modeling; and propose circumventing uncertainty via "soft" or "low regret" actions, or adaptive management. Yet non-structural adaptations alone are likely insufficient. Structural adaptation requires quantified engineering design specifications. However, the literature does not define a tolerable level of uncertainty. Without such a benchmark, how can we determine whether the climate-change-cognizant design specifications that we are capable of, for example the climate change factors increasingly utilized in European practice, are viable? The presentation will explore this question, in the context of reporting results and observations from an ongoing ten-year program assessing local-scale stormwater drainage system vulnerabilities, required capacities, and adaptation options and costs. This program has studied stormwater systems of varying complexity in a variety of regions, topographies, and levels of urbanization, in northern-New England and the upper-Midwestern United States. These studies demonstrate the feasibility of local-scale design specifications, and provide tangible information on risk to enable valid cost/benefit decisions. The research program has found that stormwater planners and engineers have routinely accepted, in the normal course of professional practice, a level of uncertainty in hydrological modeling comparable to that in long-term precipitation projections. Moreover, the ability to quantify required capacity and related construction costs for specific climate change scenarios, the insensitivity of capacity and costs to uncertainty, and the percentage of pipes and culverts that never require upsizing, all serve to limit the impact of uncertainty inherent in climate change projections.

  18. Quantifying Uncertainty in Flood Inundation Mapping Using Streamflow Ensembles and Multiple Hydraulic Modeling Techniques

    NASA Astrophysics Data System (ADS)

    Hosseiny, S. M. H.; Zarzar, C.; Gomez, M.; Siddique, R.; Smith, V.; Mejia, A.; Demir, I.

    2016-12-01

    The National Water Model (NWM) provides a platform for operationalizing nationwide flood inundation forecasting and mapping. The ability to model flood inundation on a national scale will provide invaluable information to decision makers and local emergency officials. Often, forecast products use deterministic model output to provide a visual representation of a single inundation scenario, which is subject to uncertainty from various sources. While this provides a straightforward representation of the potential inundation, the inherent uncertainty associated with the model output should be considered to optimize this tool for decision making support. The goal of this study is to produce ensembles of future flood inundation conditions (i.e. extent, depth, and velocity) to spatially quantify and visually assess uncertainties associated with the predicted flood inundation maps. The setting for this study is a highly urbanized watershed along the Darby Creek in Pennsylvania. A forecasting framework coupling the NWM with multiple hydraulic models was developed to produce a suite of ensembles of future flood inundation predictions. Time-lagged ensembles from the NWM short-range forecasts were used to account for uncertainty associated with the hydrologic forecasts. The forecasts from the NWM were input to the iRIC and HEC-RAS two-dimensional software packages, from which water extent, depth, and flow velocity were output. Quantifying the agreement between output ensembles for each forecast grid provided the uncertainty metrics for predicted flood water inundation extent, depth, and flow velocity. For visualization, a series of flood maps that display flood extent, water depth, and flow velocity, along with the underlying uncertainty associated with each of the forecasted variables, was produced. The results from this study demonstrate the potential to incorporate and visualize model uncertainties in flood inundation maps in order to identify high flood risk zones.
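
    One simple way to quantify the agreement between ensemble members on a forecast grid is a per-cell inundation probability and depth spread, sketched below with synthetic depth grids; the wet/dry threshold and ensemble size are assumed for illustration only.

    # Minimal sketch (synthetic grids): quantify agreement among an ensemble of
    # flood-inundation model outputs as a per-cell probability of inundation.
    import numpy as np

    rng = np.random.default_rng(5)
    n_members, ny, nx = 20, 50, 50

    # Hypothetical ensemble of water-depth grids (m) from forecast-driven hydraulic runs.
    depths = np.clip(rng.normal(0.4, 0.3, size=(n_members, ny, nx)), 0, None)

    inundated = depths > 0.05                 # wet/dry threshold of 5 cm (assumed)
    p_inundation = inundated.mean(axis=0)     # fraction of members flooding each cell
    depth_spread = depths.std(axis=0)         # per-cell depth uncertainty

    high_risk = (p_inundation > 0.8)          # cells flooded in >80% of members
    print("cells flooded in >80% of members:", int(high_risk.sum()))
    print("max per-cell depth spread (m):", float(depth_spread.max()))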

  19. Best Practices of Uncertainty Estimation for the National Solar Radiation Database (NSRDB 1998-2015): Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron M; Sengupta, Manajit

    It is essential to apply a traceable and standard approach to determine the uncertainty of solar resource data. Solar resource data are used for all phases of solar energy conversion projects, from the conceptual phase to routine solar power plant operation, and to determine performance guarantees of solar energy conversion systems. These guarantees are based on the available solar resource derived from a measurement station or modeled data set such as the National Solar Radiation Database (NSRDB). Therefore, quantifying the uncertainty of these data sets provides confidence to financiers, developers, and site operators of solar energy conversion systems and ultimately reduces deployment costs. In this study, we implemented the Guide to the Expression of Uncertainty in Measurement (GUM) to quantify the overall uncertainty of the NSRDB data. First, we start with quantifying measurement uncertainty, then we determine each uncertainty statistic of the NSRDB data, and we combine them using the root-sum-of-the-squares method. The statistics were derived by comparing the NSRDB data to the seven measurement stations from the National Oceanic and Atmospheric Administration's Surface Radiation Budget Network, National Renewable Energy Laboratory's Solar Radiation Research Laboratory, and the Atmospheric Radiation Measurement program's Southern Great Plains Central Facility, in Billings, Oklahoma. The evaluation was conducted for hourly values, daily totals, monthly mean daily totals, and annual mean monthly mean daily totals. Varying the time averages helps capture the temporal uncertainty of the specific modeled solar resource data required for each phase of a solar energy project; some phases require higher temporal resolution than others. Overall, by including the uncertainty of measurements of solar radiation made at ground stations, bias, and root mean square error, the NSRDB data demonstrated an expanded uncertainty of 17%-29% on an hourly basis and approximately 5%-8% on an annual basis.
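
    The root-sum-of-the-squares combination is straightforward to sketch; the component values and coverage factor below are illustrative placeholders, not the NSRDB statistics.

    # Minimal sketch of combining independent uncertainty components by the
    # root-sum-of-the-squares, as in the GUM; component values are illustrative.
    import math

    components = {                 # relative standard uncertainties (%)
        "ground measurement": 3.0, # radiometer measurement uncertainty
        "model bias": 4.5,         # modeled data vs. station bias
        "random error (RMSE)": 6.0,
    }

    u_combined = math.sqrt(sum(u**2 for u in components.values()))
    k = 2.0                        # coverage factor for ~95% expanded uncertainty
    print(f"combined standard uncertainty: {u_combined:.1f} %")
    print(f"expanded uncertainty (k=2): {k * u_combined:.1f} %")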

  20. Methodology for quantifying uncertainty in coal assessments with an application to a Texas lignite deposit

    USGS Publications Warehouse

    Olea, R.A.; Luppens, J.A.; Tewalt, S.J.

    2011-01-01

    A common practice for characterizing uncertainty in coal resource assessments has been the itemization of tonnage at the mining unit level and the classification of such units according to distance to drilling holes. Distance criteria, such as those used in U.S. Geological Survey Circular 891, are still widely used for public disclosure. A major deficiency of distance methods is that they do not provide a quantitative measure of uncertainty. Additionally, relying on distance between data points alone does not take into consideration other factors known to have an influence on uncertainty, such as spatial correlation, type of probability distribution followed by the data, geological discontinuities, and boundary of the deposit. Several geostatistical methods have been combined to formulate a quantitative characterization for appraising uncertainty. Drill hole datasets ranging from widespread exploration drilling to detailed development drilling from a lignite deposit in Texas were used to illustrate the modeling. The results show that distance to the nearest drill hole is almost completely unrelated to uncertainty, which confirms the inadequacy of characterizing uncertainty based solely on a simple classification of resources by distance classes. The more complex statistical methods used in this study quantify uncertainty and show good agreement between confidence intervals in the uncertainty predictions and data from additional drilling. © 2010.

  1. Reduction in soil aggregate size distribution due to wind erosion

    NASA Astrophysics Data System (ADS)

    Swet, Nitzan; Katra, Itzhak

    2017-04-01

    The soil erosion process by wind causes emission of fine soil particles, and thus alters the topsoil's properties, fertility, and erodibility. Topsoil resistance to erosion depends on its physicochemical properties, especially on soil aggregation. Despite the key role of aggregates in soil erodibility, quantitative information on the relations between soil aggregate size distribution (ASD) and erosion is still lacking. This study focuses on ASD analyses before and after soil erosion by wind. Wind tunnel experiments and soil analyses were conducted on semiarid loess topsoils with different initial conditions of aggregation. The results show that in all initial soil conditions saltation of sand particles caused the breakdown of macro-aggregates > 500 µm, resulting in an increase of micro-aggregates (63-250 µm). Micro-aggregate production increases with the wind shear velocity (up to 0.61 m s-1) for soils with available macro-aggregates. The findings highlight the dynamics of soil aggregation in response to the erosion process, and therefore the significance of ASD in quantifying soil degradation and soil loss potential.

  2. A Statistics-Based Material Property Analysis to Support TPS Characterization

    NASA Technical Reports Server (NTRS)

    Copeland, Sean R.; Cozmuta, Ioana; Alonso, Juan J.

    2012-01-01

    Accurate characterization of entry capsule heat shield material properties is a critical component in modeling and simulating Thermal Protection System (TPS) response in a prescribed aerothermal environment. The thermal decomposition of the TPS material during the pyrolysis and charring processes is poorly characterized and typically results in large uncertainties in material properties as inputs for ablation models. These material property uncertainties contribute to large design margins on flight systems and cloud reconstruction efforts for data collected during flight and ground testing, making revision to existing models for entry systems more challenging. The analysis presented in this work quantifies how material property uncertainties propagate through an ablation model and guides an experimental test regimen aimed at reducing these uncertainties and characterizing the dependencies between properties in the virgin and charred states for a Phenolic Impregnated Carbon Ablator (PICA) based TPS. A sensitivity analysis identifies how the high-fidelity model behaves in the expected flight environment, while a Monte Carlo based uncertainty propagation strategy is used to quantify the expected spread in the in-depth temperature response of the TPS. An examination of how perturbations to the input probability density functions affect output temperature statistics is accomplished using a Kriging response surface of the high-fidelity model. Simulations are based on capsule configuration and aerothermal environments expected during the Mars Science Laboratory (MSL) entry sequence. We identify and rank primary sources of uncertainty from material properties in a flight-relevant environment, show the dependence of those uncertainty contributors on spatial orientation and in-depth location, and quantify how sensitive the expected results are.

  3. Interaction of DNA bases with silver nanoparticles: assembly quantified through SPRS and SERS.

    PubMed

    Basu, Soumen; Jana, Subhra; Pande, Surojit; Pal, Tarasankar

    2008-05-15

    Colloidal silver nanoparticles were prepared by reducing silver nitrate with sodium borohydride. The synthesized silver particles show an intense surface plasmon band in the visible region. The work reported here describes the interaction between nanoscale silver particles and various DNA bases (adenine, guanine, cytosine, and thymine), which are used as molecular linkers because of their biological significance. In colloidal solutions, the color of silver nanoparticles may range from red to purple to orange to blue, depending on the degree of aggregation as well as the orientation of the individual particles within the aggregates. Transmission electron microscopy (TEM), X-ray diffraction (XRD), and absorption spectroscopy were used to characterize the assemblies. DNA base-induced differential silver nanoparticle aggregation was quantified from the peak separation (relates to color) of surface plasmon resonance spectroscopy (SPRS) and the signal intensity of surface-enhanced Raman scattering (SERS), which rationalize the extent of silver-nucleobase interactions.

  4. Geotechnical risk analysis user's guide

    DOT National Transportation Integrated Search

    1987-03-01

    All geotechnical predictions involve uncertainties. These are additionally accounted for by conservative factors of safety. Risk based design, on the other hand, attempts to quantify uncertainties and to adjust design conservatism accordingly. Such m...

  5. Application of a Monte Carlo framework with bootstrapping for quantification of uncertainty in baseline map of carbon emissions from deforestation in Tropical Regions

    Treesearch

    William Salas; Steve Hagen

    2013-01-01

    This presentation will provide an overview of an approach for quantifying uncertainty in spatial estimates of carbon emission from land use change. We generate uncertainty bounds around our final emissions estimate using a randomized, Monte Carlo (MC)-style sampling technique. This approach allows us to combine uncertainty from different sources without making...

  6. Bayesian optimization of the Community Land Model simulated biosphere-atmosphere exchange using CO2 observations from a dense tower network and aircraft campaigns over Oregon

    DOE PAGES

    Schmidt, Andres; Law, Beverly E.; Göckede, Mathias; ...

    2016-09-15

    Here, the vast forests and natural areas of the Pacific Northwest comprise one of the most productive ecosystems in the northern hemisphere. The heterogeneous landscape of Oregon poses a particular challenge to ecosystem models. We present a framework using a scaling factor Bayesian inversion to improve the modeled atmosphere-biosphere exchange of carbon dioxide. Observations from 5 CO/CO2 towers, eddy covariance towers, and airborne campaigns were used to constrain the Community Land Model CLM4.5 simulated terrestrial CO2 exchange at a high spatial and temporal resolution (1/24°, 3-hourly). To balance aggregation errors and the degrees of freedom in the inverse modeling system, we applied an unsupervised clustering approach for the spatial structuring of our model domain. Data from flight campaigns were used to quantify the uncertainty introduced by the Lagrangian particle dispersion model that was applied for the inversions. The average annual statewide net ecosystem productivity (NEP) was increased by 32% to 29.7 TgC per year by assimilating the tropospheric mixing ratio data. The associated uncertainty was decreased by 28.4% to 29%, on average over the entire Oregon model domain with the lowest uncertainties of 11% in western Oregon. The largest differences between posterior and prior CO2 fluxes were found for the Coast Range ecoregion of Oregon that also exhibits the highest availability of atmospheric observations and associated footprints. In this area, covered by highly productive Douglas-fir forest, the differences between the prior and posterior estimate of NEP averaged 3.84 TgC per year during the study period from 2012 through 2014.

  7. Bayesian optimization of the Community Land Model simulated biosphere-atmosphere exchange using CO2 observations from a dense tower network and aircraft campaigns over Oregon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, Andres; Law, Beverly E.; Göckede, Mathias

    Here, the vast forests and natural areas of the Pacific Northwest comprise one of the most productive ecosystems in the northern hemisphere. The heterogeneous landscape of Oregon poses a particular challenge to ecosystem models. We present a framework using a scaling factor Bayesian inversion to improve the modeled atmosphere-biosphere exchange of carbon dioxide. Observations from 5 CO/CO2 towers, eddy covariance towers, and airborne campaigns were used to constrain the Community Land Model CLM4.5 simulated terrestrial CO2 exchange at a high spatial and temporal resolution (1/24°, 3-hourly). To balance aggregation errors and the degrees of freedom in the inverse modeling system, we applied an unsupervised clustering approach for the spatial structuring of our model domain. Data from flight campaigns were used to quantify the uncertainty introduced by the Lagrangian particle dispersion model that was applied for the inversions. The average annual statewide net ecosystem productivity (NEP) was increased by 32% to 29.7 TgC per year by assimilating the tropospheric mixing ratio data. The associated uncertainty was decreased by 28.4% to 29%, on average over the entire Oregon model domain with the lowest uncertainties of 11% in western Oregon. The largest differences between posterior and prior CO2 fluxes were found for the Coast Range ecoregion of Oregon that also exhibits the highest availability of atmospheric observations and associated footprints. In this area, covered by highly productive Douglas-fir forest, the differences between the prior and posterior estimate of NEP averaged 3.84 TgC per year during the study period from 2012 through 2014.

  8. Modeling spatial-temporal dynamics of global wetlands: comprehensive evaluation of a new sub-grid TOPMODEL parameterization and uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Zimmermann, N. E.; Poulter, B.

    2015-11-01

    Simulations of the spatial-temporal dynamics of wetlands are key to understanding the role of wetland biogeochemistry under past and future climate variability. Hydrologic inundation models, such as TOPMODEL, are based on a fundamental parameter known as the compound topographic index (CTI) and provide a computationally cost-efficient approach to simulate wetland dynamics at global scales. However, there remain large discrepancies in the implementations of TOPMODEL in land-surface models (LSMs), and thus in their performance against observations. This study describes new improvements to the TOPMODEL implementation and estimates of global wetland dynamics using the LPJ-wsl dynamic global vegetation model (DGVM), and quantifies uncertainties by comparing the effects of three digital elevation model products (HYDRO1k, GMTED, and HydroSHEDS) of different spatial resolution and accuracy on simulated inundation dynamics. In addition, we found that calibrating TOPMODEL with a benchmark wetland dataset can help to successfully delineate the seasonal and interannual variations of wetlands, as well as improve the spatial distribution of wetlands to be consistent with inventories. The HydroSHEDS DEM, using a river-basin scheme for aggregating the CTI, shows the best accuracy for capturing the spatio-temporal dynamics of wetlands among the three DEM products. The estimate of the global wetland potential/maximum is ∼10.3 million km2 (Mkm2), with a mean annual maximum of ∼5.17 Mkm2 for 1980-2010. This study demonstrates the feasibility of capturing the spatial heterogeneity of inundation and of estimating seasonal and interannual variations in wetlands by coupling a hydrological module in LSMs with appropriate benchmark datasets. It additionally highlights the importance of an adequate investigation of topographic indices for simulating global wetlands and shows the opportunity to converge wetland estimates across LSMs by identifying the uncertainty associated with existing wetland products.
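
    The TOPMODEL-style sub-grid idea can be sketched as follows: a grid cell's inundated fraction is the fraction of its CTI distribution above a threshold that rises with the cell-mean water table depth. The CTI distribution, water table depths, and decay parameter below are assumed values for illustration only.

    # Minimal sketch of the TOPMODEL-style sub-grid parameterization: the
    # inundated fraction of a grid cell is the fraction of its compound
    # topographic index (CTI) distribution above a depth-dependent threshold.
    import numpy as np

    rng = np.random.default_rng(6)
    cti = rng.gamma(shape=4.0, scale=2.0, size=5000)   # sub-grid CTI values (illustrative)

    def inundated_fraction(cti_values, mean_wtd, m=0.1):
        """Fraction of the cell inundated for a given mean water table depth (m).

        Points with CTI above a threshold that rises with water table depth are
        treated as saturated; m is a decay parameter (assumed value).
        """
        threshold = np.mean(cti_values) + mean_wtd / m
        return float(np.mean(cti_values >= threshold))

    for wtd in [0.0, 0.2, 0.5]:    # wetter to drier conditions
        print(f"water table depth {wtd:.1f} m -> "
              f"inundated fraction {inundated_fraction(cti, wtd):.2f}")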

  9. Assessing the strength of soil aggregates produced by two types of organic matter amendments using the ultrasonic energy

    NASA Astrophysics Data System (ADS)

    Zhu, Zhaolong; minasny, Budiman; Field, Damien; Angers, Denis

    2017-04-01

    The presence of organic matter (OM) is known to stimulate the formation of soil aggregates, but the aggregation strength may vary with different amounts and types/qualities of OM. Conventionally, the wet sieving method has been used to assess aggregate strength. In this study, we seek insight into the effects of different types of C inputs on aggregate dynamics using quantifiable energy applied via ultrasonic agitation. A clay soil with an inherently low soil organic carbon (SOC) content was amended with two different sources of organic matter (alfalfa, C:N = 16.7, and barley straw, C:N = 95.6) at different input levels (0, 10, 20, & 30 g C kg-1 soil). The soil's inherent macro-aggregates were first destroyed via puddling. The soils were incubated in pots at a moisture content of 70% of field capacity for a period of 3 months. The pots were housed in 1.2 L sealed opaque plastic containers. The CO2 generated during the incubation was captured by a vial of NaOH placed in each of the sealed containers and sampled weekly. At 14, 28, 56, and 84 days, soil samples were collected and the change in aggregation was assessed using a combination of wet sieving and ultrasonic agitation. The relative strength of aggregates exposed to ultrasonic agitation was modelled using the aggregate disruption characteristic curve (ADCC) and the soil dispersion characteristic curve (SDCC). Both the residue quality and the quantity of organic matter input influenced the amount of aggregates formed and their relative strength. The mean weight diameter (MWD) of soils amended with alfalfa residues was greater than that of barley straw at lower input rates and early in the incubation. In the longer term, the use of ultrasonic energy revealed that barley straw resulted in stronger aggregates, especially at higher input rates, despite showing a MWD similar to that of alfalfa. The use of ultrasonic agitation, in which we quantify the energy required to liberate and disperse aggregates, allowed us to differentiate the effects of C inputs on the size of stable aggregates and their relative strength.

  10. Uncertainties in historical pollution data from sedimentary records from an Australian urban floodplain lake

    NASA Astrophysics Data System (ADS)

    Lintern, A.; Leahy, P.; Deletic, A.; Heijnis, H.; Zawadzki, A.; Gadd, P.; McCarthy, D.

    2018-05-01

    Sediment cores from aquatic environments can provide valuable information about historical pollution levels and sources. However, there is little understanding of the uncertainties associated with these findings. The aim of this study is to fill this knowledge gap by proposing a framework for quantifying the uncertainties in historical heavy metal pollution records reconstructed from sediment cores. This uncertainty framework consists of six sources of uncertainty: uncertainties in (1) metals analysis methods, (2) spatial variability of sediment core heavy metal profiles, (3) sub-sampling intervals, (4) the sediment chronology, (5) the assumption that metal levels in bed sediments reflect the magnitude of metal inputs into the aquatic system, and (6) post-depositional transformation of metals. We apply this uncertainty framework to an urban floodplain lake in South-East Australia (Willsmere Billabong). We find that for this site, uncertainties in historical dated heavy metal profiles can be up to 176%, largely due to uncertainties in the sediment chronology, and in the assumption that the settled heavy metal mass is equivalent to the heavy metal mass entering the aquatic system. As such, we recommend that future studies reconstructing historical pollution records using sediment cores from aquatic systems undertake an investigation of the uncertainties in the reconstructed pollution record, using the uncertainty framework provided in this study. We envisage that quantifying and understanding the uncertainties associated with the reconstructed pollution records will facilitate the practical application of sediment core heavy metal profiles in environmental management projects.

  11. Multiscale contact mechanics model for RF-MEMS switches with quantified uncertainties

    NASA Astrophysics Data System (ADS)

    Kim, Hojin; Huda Shaik, Nurul; Xu, Xin; Raman, Arvind; Strachan, Alejandro

    2013-12-01

    We introduce a multiscale model for contact mechanics between rough surfaces and apply it to characterize the force-displacement relationship for a metal-dielectric contact relevant for radio-frequency micro-electromechanical system (MEMS) switches. We propose a mesoscale model to describe the history-dependent force-displacement relationships in terms of the surface roughness, the long-range attractive interaction between the two surfaces, and the repulsive interaction between contacting asperities (including elastic and plastic deformation). The inputs to this model are the experimentally determined surface topography and the Hamaker constant as well as the mechanical response of individual asperities obtained from density functional theory calculations and large-scale molecular dynamics simulations. The model captures non-trivial processes including the hysteresis during loading and unloading due to plastic deformation, yet it is computationally efficient enough to enable extensive uncertainty quantification and sensitivity analysis. We quantify how uncertainties and variability in the input parameters, both experimental and theoretical, affect the force-displacement curves during approach and retraction. In addition, a sensitivity analysis quantifies the relative importance of the various input quantities for the prediction of force-displacement during contact closing and opening. The resulting force-displacement curves with quantified uncertainties can be directly used in device-level simulations of micro-switches and enable the incorporation of atomic and mesoscale phenomena in predictive device-scale simulations.
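
    The mesoscale model itself is not reproduced in the abstract; a minimal sketch in the spirit of a statistical asperity (Greenwood-Williamson-type) contact model, with purely elastic Hertzian asperities and illustrative parameter values, is shown below. This is not the authors' multiscale model, only a common simplification of rough-surface contact.

    ```python
    import numpy as np

    # Sketch of a Greenwood-Williamson-style rough-surface contact sum (elastic only);
    # all parameter values are illustrative assumptions.
    rng = np.random.default_rng(0)

    E_star = 100e9        # effective contact modulus (Pa), illustrative
    R = 50e-9             # asperity tip radius (m), illustrative
    sigma = 5e-9          # RMS asperity height (m), illustrative
    n_asp = 10_000        # number of asperities in the nominal contact area

    heights = rng.normal(0.0, sigma, n_asp)   # asperity summit heights

    def contact_force(separation):
        """Total repulsive force at a given surface separation (m)."""
        overlap = heights - separation
        overlap = overlap[overlap > 0.0]       # only contacting asperities deform
        # Hertz: F = (4/3) E* sqrt(R) * delta^(3/2) per asperity
        return (4.0 / 3.0) * E_star * np.sqrt(R) * np.sum(overlap**1.5)

    for d in (2e-9, 0.0, -2e-9):               # approach: decreasing separation
        print(f"separation {d*1e9:5.1f} nm -> force {contact_force(d)*1e6:.2f} uN")
    ```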

  12. Probabilistic Physics-Based Risk Tools Used to Analyze the International Space Station Electrical Power System Output

    NASA Technical Reports Server (NTRS)

    Patel, Bhogila M.; Hoge, Peter A.; Nagpal, Vinod K.; Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2004-01-01

    This paper describes the methods employed to apply probabilistic modeling techniques to the International Space Station (ISS) power system. These techniques were used to quantify the probabilistic variation in the power output, also called the response variable, due to variations (uncertainties) associated with knowledge of the influencing factors, called the random variables. These uncertainties can be due to unknown environmental conditions, variation in the performance of electrical power system components, or sensor tolerances. Uncertainties in these variables cause corresponding variations in the power output, but the magnitude of that effect varies with the ISS operating conditions, e.g. whether or not the solar panels are actively tracking the sun. Therefore, it is important to quantify the influence of these uncertainties on the power output for optimizing the power available for experiments.
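
    The abstract describes probabilistic propagation of input uncertainties to power output; a generic Monte Carlo sketch of that idea follows. The toy power model and the input distributions are assumptions for illustration, not the ISS electrical power system model.

    ```python
    import numpy as np

    # Generic Monte Carlo uncertainty propagation sketch; the power model and
    # the input distributions are illustrative stand-ins.
    rng = np.random.default_rng(42)
    n = 100_000

    solar_flux = rng.normal(1361.0, 15.0, n)   # W/m^2, environmental uncertainty
    array_eff = rng.normal(0.14, 0.005, n)     # component performance uncertainty
    pointing = rng.uniform(0.95, 1.00, n)      # sun-tracking factor (sensor/attitude)
    area_m2 = 2500.0                           # fixed nominal array area

    power_kw = solar_flux * array_eff * pointing * area_m2 / 1e3

    mean, std = power_kw.mean(), power_kw.std()
    p5, p95 = np.percentile(power_kw, [5, 95])
    print(f"power output: {mean:.0f} +/- {std:.0f} kW (5th-95th pct: {p5:.0f}-{p95:.0f} kW)")
    ```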

  13. Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier–Stokes simulations: A data-driven, physics-informed Bayesian approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, H., E-mail: hengxiao@vt.edu; Wu, J.-L.; Wang, J.-X.

    Despite their well-known limitations, Reynolds-Averaged Navier–Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach has potential implications in many fields in which the governing equations are well understood but the model uncertainty comes from unresolved physical processes. Highlights: • Proposed a physics-informed framework to quantify uncertainty in RANS simulations. • Framework incorporates physical prior knowledge and observation data. • Based on a rigorous Bayesian framework yet fully utilizes physical model. • Applicable for many complex physical systems beyond turbulent flows.
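
    The abstract names an iterative ensemble Kalman method for assimilating sparse observations; a bare-bones sketch of a single ensemble Kalman analysis step is given below. It is generic, not the authors' Reynolds-stress parameterization, and the toy state and observation are assumptions.

    ```python
    import numpy as np

    def enkf_update(X, y, H, R, rng):
        """One ensemble Kalman analysis step.
        X: (n_state, n_ens) prior ensemble; y: (n_obs,) observations;
        H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error covariance."""
        n_obs, n_ens = y.size, X.shape[1]
        HX = H @ X
        Xa = X - X.mean(axis=1, keepdims=True)
        HXa = HX - HX.mean(axis=1, keepdims=True)
        Pxy = Xa @ HXa.T / (n_ens - 1)
        Pyy = HXa @ HXa.T / (n_ens - 1) + R
        K = Pxy @ np.linalg.solve(Pyy, np.eye(n_obs))          # Kalman gain
        Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T  # perturbed obs
        return X + K @ (Y - HX)

    # Tiny illustrative example (not RANS fields): 3-component state, 1 observation
    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, (3, 50))
    H = np.array([[1.0, 0.0, 0.0]])
    R = np.array([[0.1]])
    y = np.array([2.0])
    X_post = enkf_update(X, y, H, R, rng)
    print("posterior mean:", X_post.mean(axis=1))
    ```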

  14. Estimating Uncertainty in N2O Emissions from US Cropland Soils

    USDA-ARS?s Scientific Manuscript database

    A Monte Carlo analysis was combined with an empirically-based approach to quantify uncertainties in soil N2O emissions from US croplands estimated with the DAYCENT simulation model. Only a subset of croplands was simulated in the Monte Carlo analysis which was used to infer uncertainties across the ...

  15. Active subspace uncertainty quantification for a polydomain ferroelectric phase-field model

    NASA Astrophysics Data System (ADS)

    Leon, Lider S.; Smith, Ralph C.; Miles, Paul; Oates, William S.

    2018-03-01

    Quantum-informed ferroelectric phase-field models capable of predicting material behavior are necessary for facilitating the development and production of many adaptive structures and intelligent systems. Uncertainty is present in these models, given the quantum scale at which calculations take place. A necessary analysis is to determine how the uncertainty in the response can be attributed to the uncertainty in the model inputs or parameters. A second analysis is to identify active subspaces within the original parameter space, which quantify directions in which the model response varies most dominantly, thus reducing sampling effort and computational cost. In this investigation, we identify an active subspace for a poly-domain ferroelectric phase-field model. Using the active variables as our independent variables, we then construct a surrogate model and perform Bayesian inference. Once we quantify the uncertainties in the active variables, we obtain uncertainties for the original parameters via an inverse mapping. The analysis provides insight into how active subspace methodologies can be used to reduce the computational power needed to perform Bayesian inference on model parameters informed by experimental or simulated data.
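
    As a hedged illustration of the active-subspace step described above (not the authors' phase-field model), the subspace is typically found by eigendecomposing the average outer product of model gradients over the parameter space; the test function below is an arbitrary stand-in.

    ```python
    import numpy as np

    # Toy sketch of active subspace identification: eigendecompose
    # C = E[grad f(x) grad f(x)^T], estimated by Monte Carlo sampling.
    rng = np.random.default_rng(1)

    def grad_f(x):
        # f(x) = exp(0.7*x0 + 0.3*x1) + 0.01*x2**2  (illustrative only)
        g = np.zeros(3)
        e = np.exp(0.7 * x[0] + 0.3 * x[1])
        g[0], g[1], g[2] = 0.7 * e, 0.3 * e, 0.02 * x[2]
        return g

    samples = rng.uniform(-1, 1, (500, 3))
    grads = np.array([grad_f(x) for x in samples])
    C = grads.T @ grads / len(samples)

    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    print("eigenvalues:", eigvals[order])
    print("leading active direction:", eigvecs[:, order[0]])
    ```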

  16. A method to solubilise protein aggregates for immunoassay quantification which overcomes the neurofilament "hook" effect.

    PubMed

    Lu, Ching-Hua; Kalmar, Bernadett; Malaspina, Andrea; Greensmith, Linda; Petzold, Axel

    2011-02-15

    Neurofilament (Nf) aggregates are a common pathological feature of neurodegenerative disorders. Although Nf levels have been investigated as a potential disease biomarker, Nf aggregates may mask Nf epitopes, preventing accurate quantification by immunoassay. Using the SOD1(G93A) mouse model of amyotrophic lateral sclerosis, we developed a method to disrupt Nf aggregates, allowing optimal immunoassay performance. Phosphorylated (NfH(SMI35)) and hyperphosphorylated (NfH(SMI34)) Nf levels in plasma from 120-day SOD1(G93A) mice were quantified using an in-house ELISA modified for use with small volumes. Different pre-analytical methods were tested for their ability to solubilize Nf aggregates and immunoblotting was used for qualitative analysis. A 'hook effect' was observed for serially diluted plasma samples quantified using an ELISA originally developed for CSF samples. Immunoblotting confirmed the existence of high molecular-weight NfH aggregates in plasma and the resolving effect of timed urea on these aggregates. Thermostatic (pre-thawing) and chemical (calcium chelators, urea) pre-analytical processing of samples had variable success in disrupting NfH aggregates. Timed urea-calcium chelator incubation yielded the most consistent plasma NfH levels. A one hour sample pre-incubation with 0.5M urea in Barbitone-EDTA buffer at room temperature resolved the "hook effect" without compromising the ELISA. In SOD1(G93A) mice, median levels of NfH(SMI34) were over 10-fold and NfH(SMI35) levels 5-fold greater than controls. NfH aggregates can be solubilised and the "hook effect" abolished by a one-hour sample pre-incubation in a urea-calcium chelator-enriched buffer. This method is applicable for quantification of NfH phosphoforms in experimental and disease settings where Nf aggregate formation occurs. © 2010 Elsevier B.V. All rights reserved.

  17. Effects of Phasor Measurement Uncertainty on Power Line Outage Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chen; Wang, Jianhui; Zhu, Hao

    2014-12-01

    Phasor measurement unit (PMU) technology provides an effective tool to enhance the wide-area monitoring systems (WAMSs) in power grids. Although extensive studies have been conducted to develop several PMU applications in power systems (e.g., state estimation, oscillation detection and control, voltage stability analysis, and line outage detection), the uncertainty aspects of PMUs have not been adequately investigated. This paper focuses on quantifying the impact of PMU uncertainty on power line outage detection and identification, in which a limited number of PMUs installed at a subset of buses are utilized to detect and identify the line outage events. Specifically, the line outage detection problem is formulated as a multi-hypothesis test, and a general Bayesian criterion is used for the detection procedure, in which the PMU uncertainty is analytically characterized. We further apply the minimum detection error criterion for the multi-hypothesis test and derive the expected detection error probability in terms of PMU uncertainty. The framework proposed provides fundamental guidance for quantifying the effects of PMU uncertainty on power line outage detection. Case studies are provided to validate our analysis and show how PMU uncertainty influences power line outage detection.
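
    A hedged toy of the multi-hypothesis test sketched in the abstract is given below: it chooses the line-outage hypothesis that maximizes prior times Gaussian likelihood under an assumed PMU noise level. The signatures, priors, and noise standard deviation are hypothetical, not from the paper.

    ```python
    import numpy as np

    # Toy Bayesian multi-hypothesis line-outage detector. Each hypothesis has an
    # expected phasor-angle signature at the PMU buses; PMU uncertainty is modeled
    # as zero-mean Gaussian noise. All numbers are hypothetical.
    rng = np.random.default_rng(3)

    signatures = {                       # expected angle deviations (rad) per hypothesis
        "no outage":  np.array([0.00, 0.00, 0.00]),
        "line 1 out": np.array([0.05, 0.01, 0.00]),
        "line 2 out": np.array([0.01, 0.04, 0.02]),
    }
    priors = {"no outage": 0.90, "line 1 out": 0.05, "line 2 out": 0.05}
    sigma = 0.01                         # assumed PMU angle-measurement std (rad)

    def detect(z):
        """Return the hypothesis with the largest posterior (minimum-error rule)."""
        def log_post(name):
            r = z - signatures[name]
            return np.log(priors[name]) - 0.5 * np.dot(r, r) / sigma**2
        return max(signatures, key=log_post)

    # Simulate a measurement under "line 2 out" and detect
    z = signatures["line 2 out"] + rng.normal(0.0, sigma, 3)
    print("detected:", detect(z))
    ```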

  18. Quantifying the intra-annual uncertainties in climate change assessment over 10 sub-basins across the Pacific Northwest US

    NASA Astrophysics Data System (ADS)

    Ahmadalipour, Ali; Moradkhani, Hamid; Rana, Arun

    2017-04-01

    Uncertainty is an inevitable feature of climate change impact assessments. Understanding and quantifying different sources of uncertainty is of high importance, which can help modeling agencies improve the current models and scenarios. In this study, we have assessed the future changes in three climate variables (i.e. precipitation, maximum temperature, and minimum temperature) over 10 sub-basins across the Pacific Northwest US. To conduct the study, 10 statistically downscaled CMIP5 GCMs from two downscaling methods (i.e. BCSD and MACA) were utilized at 1/16 degree spatial resolution for the historical period of 1970-2000 and the future period of 2010-2099. For the future projections, two scenarios, RCP4.5 and RCP8.5, were used. Furthermore, Bayesian Model Averaging (BMA) was employed to develop a probabilistic future projection for each climate variable. Results indicate the superiority of BMA simulations compared to individual models. Increasing temperature and precipitation are projected at the annual timescale. However, the changes are not uniform among different seasons. Model uncertainty is found to be the major source of uncertainty, while downscaling uncertainty contributes significantly to the total uncertainty, especially in summer.
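
    The abstract does not spell out the BMA implementation; a minimal sketch of likelihood-based Bayesian Model Averaging weights over an ensemble of model simulations is given below. The toy data, Gaussian likelihood, and fixed error scale are assumptions; operational BMA typically fits weights and variances jointly (e.g., via EM).

    ```python
    import numpy as np

    # Toy Bayesian Model Averaging: weight each "GCM" by its Gaussian likelihood
    # against observations, then form the weighted ensemble mean.
    rng = np.random.default_rng(7)

    obs = rng.normal(10.0, 1.0, 30)                       # "observed" climate variable
    models = [obs + rng.normal(b, s, 30)                  # biased/noisy "GCM" outputs
              for b, s in [(0.2, 0.5), (1.5, 0.8), (-0.4, 1.2)]]

    sigma = 1.0                                           # assumed error scale
    log_lik = np.array([-0.5 * np.sum((m - obs) ** 2) / sigma**2 for m in models])
    weights = np.exp(log_lik - log_lik.max())
    weights /= weights.sum()

    bma_mean = sum(w * m for w, m in zip(weights, models))
    print("BMA weights:", np.round(weights, 3))
    print("RMSE of BMA mean:", np.sqrt(np.mean((bma_mean - obs) ** 2)).round(3))
    ```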

  19. Uncertainty in Simulating Wheat Yields Under Climate Change

    NASA Technical Reports Server (NTRS)

    Asseng, S.; Ewert, F.; Rosenzweig, Cynthia; Jones, J. W.; Hatfield, J. W.; Ruane, A. C.; Boote, K. J.; Thornburn, P. J.; Rotter, R. P.; Cammarano, D.; hide

    2013-01-01

    Projections of climate change impacts on crop yields are inherently uncertain [1]. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate [2]. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic and objective comparisons among process-based crop simulation models [1,3] are difficult [4]. Here we present the largest standardized model intercomparison for climate change impacts so far. We found that individual crop models are able to simulate measured wheat grain yields accurately under a range of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi-model ensembles. Less uncertainty in describing how climate change may affect agricultural productivity will aid adaptation strategy development and policymaking.

  20. The integrated effects of future climate and hydrologic uncertainty on sustainable flood risk management

    NASA Astrophysics Data System (ADS)

    Steinschneider, S.; Wi, S.; Brown, C. M.

    2013-12-01

    Flood risk management performance is investigated within the context of integrated climate and hydrologic modeling uncertainty to explore system robustness. The research question investigated is whether structural and hydrologic parameterization uncertainties are significant relative to other uncertainties such as climate change when considering water resources system performance. Two hydrologic models are considered, a conceptual, lumped parameter model that preserves the water balance and a physically-based model that preserves both water and energy balances. In the conceptual model, parameter and structural uncertainties are quantified and propagated through the analysis using a Bayesian modeling framework with an innovative error model. Mean climate changes and internal climate variability are explored using an ensemble of simulations from a stochastic weather generator. The approach presented can be used to quantify the sensitivity of flood protection adequacy to different sources of uncertainty in the climate and hydrologic system, enabling the identification of robust projects that maintain adequate performance despite the uncertainties. The method is demonstrated in a case study for the Coralville Reservoir on the Iowa River, where increased flooding over the past several decades has raised questions about potential impacts of climate change on flood protection adequacy.

  1. Using Human iPSC-Derived Neurons to Model TAU Aggregation

    PubMed Central

    Verheyen, An; Diels, Annick; Dijkmans, Joyce; Oyelami, Tutu; Meneghello, Giulia; Mertens, Liesbeth; Versweyveld, Sofie; Borgers, Marianne; Buist, Arjan; Peeters, Pieter; Cik, Miroslav

    2015-01-01

    Alzheimer’s disease and frontotemporal dementia are amongst the most common forms of dementia characterized by the formation and deposition of abnormal TAU in the brain. In order to develop a translational human TAU aggregation model suitable for screening, we transduced TAU harboring the pro-aggregating P301L mutation into control hiPSC-derived neural progenitor cells followed by differentiation into cortical neurons. TAU aggregation and phosphorylation were quantified using AlphaLISA technology. Although no spontaneous aggregation was observed upon expressing TAU-P301L in neurons, seeding with preformed aggregates consisting of the TAU-microtubule binding repeat domain triggered robust TAU aggregation and hyperphosphorylation after only 2 weeks, without affecting general cell health. To validate our model, the activity of two autophagy inducers was tested. Both rapamycin and trehalose significantly reduced TAU aggregation levels, suggesting that iPSC-derived neurons allow for the generation of a biologically relevant human Tauopathy model, highly suitable to screen for compounds that modulate TAU aggregation. PMID:26720731

  2. Quantifying the uncertainty introduced by discretization and time-averaging in two-fluid model predictions

    DOE PAGES

    Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane

    2017-07-12

    The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section–averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
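
    A hedged sketch of the Richardson extrapolation and grid convergence index (GCI) estimates mentioned above follows, using the standard three-grid formulas; the solution values, refinement ratio, and safety factor are made-up illustrations.

    ```python
    import math

    # Sketch of Richardson extrapolation and the grid convergence index (GCI)
    # for a time-averaged quantity computed on three grids (hypothetical values).
    f_coarse, f_medium, f_fine = 0.520, 0.487, 0.475
    r = 2.0                                            # grid refinement ratio
    Fs = 1.25                                          # safety factor (3-grid study)

    # Observed order of accuracy from the three solutions
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

    # Richardson-extrapolated estimate of the grid-independent value
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)

    # GCI on the fine grid (relative uncertainty estimate)
    e_fine = abs((f_fine - f_medium) / f_fine)
    gci_fine = Fs * e_fine / (r**p - 1.0)

    print(f"observed order p = {p:.2f}")
    print(f"extrapolated value = {f_exact:.4f}")
    print(f"GCI (fine grid) = {100 * gci_fine:.2f}%")
    ```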

  3. Moving Beyond 2% Uncertainty: A New Framework for Quantifying Lidar Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Clifton, Andrew

    2017-03-08

    Remote sensing of wind using lidar is revolutionizing wind energy. However, current generations of wind lidar are ascribed a climatic value of uncertainty, which is based on a poor description of lidar sensitivity to external conditions. In this presentation, we show how it is important to consider the complete lidar measurement process to define the measurement uncertainty, which in turn offers the ability to define a much more granular and dynamic measurement uncertainty. This approach is a progression from the 'white box' lidar uncertainty method.

  4. Water Table Uncertainties due to Uncertainties in Structure and Properties of an Unconfined Aquifer.

    PubMed

    Hauser, Juerg; Wellmann, Florian; Trefry, Mike

    2018-03-01

    We consider two sources of geology-related uncertainty in making predictions of the steady-state water table elevation for an unconfined aquifer: the uncertainty in the depth to the base of the aquifer and the uncertainty in the hydraulic conductivity distribution within the aquifer. Stochastic approaches to hydrological modeling commonly use geostatistical techniques to account for hydraulic conductivity uncertainty within the aquifer. In the absence of well data allowing derivation of a relationship between geophysical and hydrological parameters, the use of geophysical data is often limited to constraining the structural boundaries. If we recover the base of an unconfined aquifer from an analysis of geophysical data, then the associated uncertainties are a consequence of the geophysical inversion process. In this study, we illustrate this by quantifying water table uncertainties for the unconfined aquifer formed by the paleochannel network around the Kintyre Uranium deposit in Western Australia. The focus of the Bayesian parametric bootstrap approach employed for the inversion of the available airborne electromagnetic data is the recovery of the base of the paleochannel network and the associated uncertainties. This allows us to then quantify the associated influences on the water table in a conceptualized groundwater usage scenario and compare the resulting uncertainties with uncertainties due to an uncertain hydraulic conductivity distribution within the aquifer. Our modeling shows that neither uncertainties in the depth to the base of the aquifer nor hydraulic conductivity uncertainties alone can capture the patterns of uncertainty in the water table that emerge when the two are combined. © 2017, National Ground Water Association.

  5. APPLICATION AND EVALUATION OF AN AGGREGATE PHYSICALLY-BASED TWO-STAGE MONTE CARLO PROBABILISTIC MODEL FOR QUANTIFYING CHILDREN'S RESIDENTIAL EXPOSURE AND DOSE TO CHLORPYRIFOS

    EPA Science Inventory

    Critical voids in exposure data and models lead risk assessors to rely on conservative assumptions. Risk assessors and managers need improved tools beyond the screening level analysis to address aggregate exposures to pesticides as required by the Food Quality Protection Act o...

  6. Uncertainty analysis in vulnerability estimations for elements at risk- a review of concepts and some examples on landslides

    NASA Astrophysics Data System (ADS)

    Ciurean, R. L.; Glade, T.

    2012-04-01

    Decision under uncertainty is a constant of everyday life and an important component of risk management and governance. Recently, experts have emphasized the importance of quantifying uncertainty in all phases of landslide risk analysis. Due to its multi-dimensional and dynamic nature, (physical) vulnerability is inherently complex, and the "degree of loss" estimates are imprecise and to some extent even subjective. Uncertainty analysis introduces quantitative modeling approaches that allow for a more explicitly objective output, improving the risk management process as well as enhancing communication between various stakeholders for better risk governance. This study presents a review of concepts for uncertainty analysis in the vulnerability of elements at risk to landslides. Different semi-quantitative and quantitative methods are compared based on their feasibility in real-world situations, hazard dependency, process stage in vulnerability assessment (i.e. input data, model, output), and applicability within an integrated landslide hazard and risk framework. The resulting observations will help to identify current gaps and future needs in vulnerability assessment, including estimation of uncertainty propagation, transferability of the methods, and development of visualization tools, but also address basic questions like what uncertainty is and how it can be quantified or treated in a reliable and reproducible way.

  7. Hail formation triggers rapid ash aggregation in volcanic plumes

    USGS Publications Warehouse

    Van Eaton, Alexa R.; Mastin, Larry G.; Herzog, M.; Schwaiger, Hans F.; Schneider, David J.; Wallace, Kristi; Clarke, Amanda B

    2015-01-01

    During explosive eruptions, airborne particles collide and stick together, accelerating the fallout of volcanic ash and climate-forcing aerosols. This aggregation process remains a major source of uncertainty both in ash dispersal forecasting and interpretation of eruptions from the geological record. Here we illuminate the mechanisms and timescales of particle aggregation from a well-characterized ‘wet’ eruption. The 2009 eruption of Redoubt Volcano in Alaska incorporated water from the surface (in this case, a glacier), which is a common occurrence during explosive volcanism worldwide. Observations from C-band weather radar, fall deposits, and numerical modeling demonstrate that volcanic hail formed rapidly in the eruption plume, leading to mixed-phase aggregation of ~95% of the fine ash and stripping much of the cloud out of the atmosphere within 30 minutes. Based on these findings, we propose a mechanism of hail-like aggregation that contributes to the anomalously rapid fallout of fine ash and the occurrence of concentrically-layered aggregates in volcanic deposits.

  8. Hail formation triggers rapid ash aggregation in volcanic plumes.

    PubMed

    Van Eaton, Alexa R; Mastin, Larry G; Herzog, Michael; Schwaiger, Hans F; Schneider, David J; Wallace, Kristi L; Clarke, Amanda B

    2015-08-03

    During explosive eruptions, airborne particles collide and stick together, accelerating the fallout of volcanic ash and climate-forcing aerosols. This aggregation process remains a major source of uncertainty both in ash dispersal forecasting and interpretation of eruptions from the geological record. Here we illuminate the mechanisms and timescales of particle aggregation from a well-characterized 'wet' eruption. The 2009 eruption of Redoubt Volcano, Alaska, incorporated water from the surface (in this case, a glacier), which is a common occurrence during explosive volcanism worldwide. Observations from C-band weather radar, fall deposits and numerical modelling demonstrate that hail-forming processes in the eruption plume triggered aggregation of ∼95% of the fine ash and stripped much of the erupted mass out of the atmosphere within 30 min. Based on these findings, we propose a mechanism of hail-like ash aggregation that contributes to the anomalously rapid fallout of fine ash and occurrence of concentrically layered aggregates in volcanic deposits.

  9. Hail formation triggers rapid ash aggregation in volcanic plumes

    PubMed Central

    Van Eaton, Alexa R.; Mastin, Larry G.; Herzog, Michael; Schwaiger, Hans F.; Schneider, David J.; Wallace, Kristi L.; Clarke, Amanda B.

    2015-01-01

    During explosive eruptions, airborne particles collide and stick together, accelerating the fallout of volcanic ash and climate-forcing aerosols. This aggregation process remains a major source of uncertainty both in ash dispersal forecasting and interpretation of eruptions from the geological record. Here we illuminate the mechanisms and timescales of particle aggregation from a well-characterized ‘wet' eruption. The 2009 eruption of Redoubt Volcano, Alaska, incorporated water from the surface (in this case, a glacier), which is a common occurrence during explosive volcanism worldwide. Observations from C-band weather radar, fall deposits and numerical modelling demonstrate that hail-forming processes in the eruption plume triggered aggregation of ∼95% of the fine ash and stripped much of the erupted mass out of the atmosphere within 30 min. Based on these findings, we propose a mechanism of hail-like ash aggregation that contributes to the anomalously rapid fallout of fine ash and occurrence of concentrically layered aggregates in volcanic deposits. PMID:26235052

  10. Measuring, Estimating, and Deciding under Uncertainty.

    PubMed

    Michel, Rolf

    2016-03-01

    The problem of uncertainty as a general consequence of incomplete information and the approach to quantifying uncertainty in metrology are addressed. Then, this paper discusses some of the controversial aspects of the statistical foundation of the concepts of uncertainty in measurements. The basics of the ISO Guide to the Expression of Uncertainty in Measurement as well as of characteristic limits according to ISO 11929 are described, and the need for a revision of the latter standard is explained. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. The effect of short-range spatial variability on soil sampling uncertainty.

    PubMed

    Van der Perk, Marcel; de Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Fajgelj, Ales; Sansone, Umberto; Jeran, Zvonka; Jaćimović, Radojko

    2008-11-01

    This paper aims to quantify the soil sampling uncertainty arising from the short-range spatial variability of elemental concentrations in the topsoils of agricultural, semi-natural, and contaminated environments. For the agricultural site, the relative standard sampling uncertainty ranges between 1% and 5.5%. For the semi-natural area, the sampling uncertainties are 2-4 times larger than in the agricultural area. The contaminated site exhibited significant short-range spatial variability in elemental composition, which resulted in sampling uncertainties of 20-30%.

  12. Including non-dietary sources into an exposure assessment of the European Food Safety Authority: The challenge of multi-sector chemicals such as Bisphenol A.

    PubMed

    von Goetz, N; Pirow, R; Hart, A; Bradley, E; Poças, F; Arcella, D; Lillegard, I T L; Simoneau, C; van Engelen, J; Husoy, T; Theobald, A; Leclercq, C

    2017-04-01

    In the most recent risk assessment for Bisphenol A, a multi-route aggregate exposure assessment was conducted for the first time by the European Food Safety Authority. This assessment includes exposure via dietary sources, and also the contributions of the most important non-dietary sources. Both average and high aggregate exposure were calculated by source-to-dose modeling (forward calculation) for different age groups and compared with estimates based on urinary biomonitoring data (backward calculation). The aggregate exposure estimates obtained by forward and backward modeling are of the same order of magnitude, with forward modeling yielding higher estimates associated with larger uncertainty. Yet, only forward modeling can indicate the relative contribution of different sources. Dietary exposure, especially via canned food, appears to be the most important exposure source and, based on the central aggregate exposure estimates, contributes around 90% to internal exposure to total (conjugated plus unconjugated) BPA. Dermal exposure via thermal paper and, to a lesser extent, via cosmetic products may contribute around 10% for some age groups. The uncertainty around these estimates is considerable, but since a first-pass metabolism of BPA by conjugation is lacking after dermal absorption, dermal sources may be of equal or even higher toxicological relevance than dietary sources. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Bifunctional fluorescent probes for detection of amyloid aggregates and reactive oxygen species

    NASA Astrophysics Data System (ADS)

    Needham, Lisa-Maria; Weber, Judith; Fyfe, James W. B.; Kabia, Omaru M.; Do, Dung T.; Klimont, Ewa; Zhang, Yu; Rodrigues, Margarida; Dobson, Christopher M.; Ghandi, Sonia; Bohndiek, Sarah E.; Snaddon, Thomas N.; Lee, Steven F.

    2018-02-01

    Protein aggregation into amyloid deposits and oxidative stress are key features of many neurodegenerative disorders including Parkinson's and Alzheimer's disease. We report here the creation of four highly sensitive bifunctional fluorescent probes, capable of H2O2 and/or amyloid aggregate detection. These bifunctional sensors use a benzothiazole core for amyloid localization and boronic ester oxidation to specifically detect H2O2. We characterized the optical properties of these probes using both bulk fluorescence measurements and single-aggregate fluorescence imaging, and quantify changes in their fluorescence properties upon addition of amyloid aggregates of α-synuclein and pathophysiological H2O2 concentrations. Our results indicate these new probes will be useful to detect and monitor neurodegenerative disease.

  14. A model for bacterial colonization of sinking aggregates.

    PubMed

    Bearon, R N

    2007-01-01

    Sinking aggregates provide important nutrient-rich environments for marine bacteria. Quantifying the rate at which motile bacteria colonize such aggregations is important in understanding the microbial loop in the pelagic food web. In this paper, a simple analytical model is presented to predict the rate at which bacteria undergoing a random walk encounter a sinking aggregate. The model incorporates the flow field generated by the sinking aggregate, the swimming behavior of the bacteria, and the interaction of the flow with the swimming behavior. An expression for the encounter rate is computed in the limit of large Péclet number when the random walk can be approximated by a diffusion process. Comparison with an individual-based numerical simulation is also given.
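
    The analytical expression itself is not given in the abstract; as a hedged reference point, the classical diffusion-limited (Smoluchowski) encounter rate to a sphere and the Péclet number that controls the flow enhancement can be sketched as follows. Parameter values are illustrative, and the paper's high-Péclet correction is not reproduced here.

    ```python
    import math

    # Illustrative numbers for a sinking marine aggregate and motile bacteria;
    # the paper's actual high-Peclet encounter-rate expression is not reproduced.
    a = 0.5e-3            # aggregate radius (m)
    U = 1.0e-3            # sinking speed (m/s)
    D = 1.0e-9            # effective diffusivity of run-and-tumble bacteria (m^2/s)
    C = 1.0e10            # ambient bacterial concentration (cells/m^3)

    Pe = U * a / D                           # Peclet number (advection vs. diffusion)
    E_diffusion = 4.0 * math.pi * D * a * C  # pure-diffusion encounter rate (cells/s)

    print(f"Peclet number = {Pe:.0f}")
    print(f"diffusion-only encounter rate = {E_diffusion:.3f} cells/s")
    print("flow past the sinking aggregate enhances this rate at large Pe "
          "(the paper derives that correction analytically)")
    ```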

  15. Bifunctional fluorescent probes for detection of amyloid aggregates and reactive oxygen species.

    PubMed

    Needham, Lisa-Maria; Weber, Judith; Fyfe, James W B; Kabia, Omaru M; Do, Dung T; Klimont, Ewa; Zhang, Yu; Rodrigues, Margarida; Dobson, Christopher M; Ghandi, Sonia; Bohndiek, Sarah E; Snaddon, Thomas N; Lee, Steven F

    2018-02-01

    Protein aggregation into amyloid deposits and oxidative stress are key features of many neurodegenerative disorders including Parkinson's and Alzheimer's disease. We report here the creation of four highly sensitive bifunctional fluorescent probes, capable of H2O2 and/or amyloid aggregate detection. These bifunctional sensors use a benzothiazole core for amyloid localization and boronic ester oxidation to specifically detect H2O2. We characterized the optical properties of these probes using both bulk fluorescence measurements and single-aggregate fluorescence imaging, and quantify changes in their fluorescence properties upon addition of amyloid aggregates of α-synuclein and pathophysiological H2O2 concentrations. Our results indicate these new probes will be useful to detect and monitor neurodegenerative disease.

  16. Bifunctional fluorescent probes for detection of amyloid aggregates and reactive oxygen species

    PubMed Central

    Needham, Lisa-Maria; Weber, Judith; Fyfe, James W. B.; Kabia, Omaru M.; Do, Dung T.; Klimont, Ewa; Zhang, Yu; Rodrigues, Margarida; Dobson, Christopher M.; Ghandi, Sonia; Bohndiek, Sarah E.; Snaddon, Thomas N.

    2018-01-01

    Protein aggregation into amyloid deposits and oxidative stress are key features of many neurodegenerative disorders including Parkinson's and Alzheimer's disease. We report here the creation of four highly sensitive bifunctional fluorescent probes, capable of H2O2 and/or amyloid aggregate detection. These bifunctional sensors use a benzothiazole core for amyloid localization and boronic ester oxidation to specifically detect H2O2. We characterized the optical properties of these probes using both bulk fluorescence measurements and single-aggregate fluorescence imaging, and quantify changes in their fluorescence properties upon addition of amyloid aggregates of α-synuclein and pathophysiological H2O2 concentrations. Our results indicate these new probes will be useful to detect and monitor neurodegenerative disease. PMID:29515860

  17. GCR Environmental Models III: GCR Model Validation and Propagated Uncertainties in Effective Dose

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Xu, Xiaojing; Blattnig, Steve R.; Norman, Ryan B.

    2014-01-01

    This is the last of three papers focused on quantifying the uncertainty associated with galactic cosmic ray (GCR) models used for space radiation shielding applications. In the first paper, it was found that GCR ions with Z>2 and boundary energy below 500 MeV/nucleon induce less than 5% of the total effective dose behind shielding. This is an important finding since GCR model development and validation have been heavily biased toward Advanced Composition Explorer/Cosmic Ray Isotope Spectrometer measurements below 500 MeV/nucleon. Weights were also developed that quantify the relative contribution of defined GCR energy and charge groups to effective dose behind shielding. In the second paper, it was shown that these weights could be used to efficiently propagate GCR model uncertainties into effective dose behind shielding. In this work, uncertainties are quantified for a few commonly used GCR models. A validation metric is developed that accounts for measurement uncertainty, and the metric is coupled to the fast uncertainty propagation method. For this work, the Badhwar-O'Neill (BON) 2010 and 2011 and the Matthia GCR models are compared to an extensive measurement database. It is shown that BON2011 systematically overestimates heavy ion fluxes in the range 0.5-4 GeV/nucleon. The BON2010 and BON2011 also show moderate and large errors in reproducing past solar activity near the 2000 solar maximum and 2010 solar minimum. It is found that all three models induce relative errors in effective dose in the interval [-20%, 20%] at a 68% confidence level. The BON2010 and Matthia models are found to have similar overall uncertainty estimates and are preferred for space radiation shielding applications.

  18. SU-E-J-159: Intra-Patient Deformable Image Registration Uncertainties Quantified Using the Distance Discordance Metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleh, Z; Thor, M; Apte, A

    2014-06-01

    Purpose: The quantitative evaluation of deformable image registration (DIR) is currently challenging due to lack of a ground truth. In this study we test a new method proposed for quantifying multiple-image based DIR-related uncertainties, for DIR of pelvic images. Methods: 19 patients were analyzed, each with 6 CT scans, who previously had radiotherapy for prostate cancer. Manually delineated structures for rectum and bladder, which served as ground truth structures, were delineated on the planning CT and each subsequent scan. For each patient, voxel-by-voxel DIR-related uncertainties were evaluated, following B-spline based DIR, by applying a previously developed metric, the distance discordance metric (DDM; Saleh et al., PMB (2014) 59:733). The DDM map was superimposed on the first acquired CT scan and DDM statistics were assessed, also relative to two metrics estimating the agreement between the propagated and the manually delineated structures. Results: The highest DDM values which correspond to greatest spatial uncertainties were observed near the body surface and in the bowel due to the presence of gas. The mean rectal and bladder DDM values ranged from 1.1–11.1 mm and 1.5–12.7 mm, respectively. There was a strong correlation in the DDMs between the rectum and bladder (Pearson R = 0.68 for the max DDM). For both structures, DDM was correlated with the ratio between the DIR-propagated and manually delineated volumes (R = 0.74 for the max rectal DDM). The maximum rectal DDM was negatively correlated with the Dice Similarity Coefficient between the propagated and the manually delineated volumes (R= −0.52). Conclusion: The multiple-image based DDM map quantified considerable DIR variability across different structures and among patients. Besides using the DDM for quantifying DIR-related uncertainties it could potentially be used to adjust for uncertainties in DIR-based accumulated dose distributions.

  19. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    PubMed

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments but the optimal way to quantify results is not clear. In stroke, the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated sample size required if classification uncertainty was taken into account. Considering the full mRS range, error rate was 26.1%±5.31 (Mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. We show when uncertainty in assessments is considered, the lowest error rates are with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
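
    A toy sketch of the error calculation described above follows: it convolves an assumed trial mRS distribution with an assumed inter-rater confusion (noise) matrix and compares the misclassification rate for the full ordinal scale versus a dichotomized cut-point. All numbers are hypothetical, not from the cited trials.

    ```python
    import numpy as np

    # Hypothetical 7-level mRS distribution (scores 0-6) for one trial arm
    p_true = np.array([0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.10])

    # Hypothetical inter-rater confusion matrix: P(assessed = j | true = i).
    # Here each score is misread as an adjacent score with probability 0.10.
    n = 7
    confusion = np.eye(n) * 0.8
    for i in range(n):
        if i > 0:
            confusion[i, i - 1] += 0.1
        else:
            confusion[i, i] += 0.1
        if i < n - 1:
            confusion[i, i + 1] += 0.1
        else:
            confusion[i, i] += 0.1

    # Error rate over the full ordinal scale: any off-diagonal assignment
    full_scale_error = np.sum(p_true[:, None] * confusion * (1 - np.eye(n)))

    # Error rate after dichotomizing at mRS <= 1: only crossings of the cut matter
    good = np.arange(n) <= 1
    cross = good[:, None] != good[None, :]
    dichotomized_error = np.sum(p_true[:, None] * confusion * cross)

    print(f"full-scale misclassification rate: {full_scale_error:.1%}")
    print(f"dichotomized (mRS 0-1 vs 2-6) error rate: {dichotomized_error:.1%}")
    ```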

  20. A New Combined Stepwise-Based High-Order Decoupled Direct and Reduced-Form Method To Improve Uncertainty Analysis in PM2.5 Simulations.

    PubMed

    Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Yuan, Zibing; Russell, Armistead G; Ou, Jiamin; Zhong, Zhuangmin

    2017-04-04

    The traditional reduced-form model (RFM), based on the high-order decoupled direct method (HDDM), is an efficient uncertainty analysis approach for air quality models, but it has large biases in uncertainty propagation due to the limitation of the HDDM in predicting nonlinear responses to large perturbations of model inputs. To overcome this limitation, a new stepwise-based RFM method that combines several sets of local sensitivity coefficients under different conditions is proposed. Evaluations reveal that the new RFM improves the prediction of nonlinear responses. The new method is applied to quantify uncertainties in simulated PM2.5 concentrations in the Pearl River Delta (PRD) region of China as a case study. Results show that the average uncertainty range of hourly PM2.5 concentrations is -28% to 57%, which can cover approximately 70% of the observed PM2.5 concentrations, while the traditional RFM underestimates the upper bound of the uncertainty range by 1-6%. Using a variance-based method, the PM2.5 boundary conditions and primary PM2.5 emissions are found to be the two major uncertainty sources in PM2.5 simulations. The new RFM better quantifies the uncertainty range in model simulations and can be applied to improve applications that rely on uncertainty information.

  1. A method for minimum risk portfolio optimization under hybrid uncertainty

    NASA Astrophysics Data System (ADS)

    Egorova, Yu E.; Yazenin, A. V.

    2018-03-01

    In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.
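
    As a hedged point of reference (the crisp classical problem, not the fuzzy-random formulation with the drastic t-norm developed in the paper), the closed-form minimum-variance portfolio under a known covariance matrix can be sketched as follows; the covariance values are hypothetical.

    ```python
    import numpy as np

    # Classical crisp minimum-variance portfolio: minimize w' Sigma w subject to
    # sum(w) = 1. Closed form: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
    # The covariance matrix is hypothetical; this is only a baseline for comparison
    # with the fuzzy-random model discussed in the abstract.
    Sigma = np.array([
        [0.040, 0.006, 0.010],
        [0.006, 0.090, 0.012],
        [0.010, 0.012, 0.160],
    ])

    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(Sigma, ones)
    w /= w.sum()

    print("minimum-variance weights:", np.round(w, 3))
    print("portfolio variance:", float(w @ Sigma @ w))
    ```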

  2. Leaf area index uncertainty estimates for model-data fusion applications

    Treesearch

    Andrew D. Richardson; D. Bryan Dail; D.Y. Hollinger

    2011-01-01

    Estimates of data uncertainties are required to integrate different observational data streams as model constraints using model-data fusion. We describe an approach with which random and systematic uncertainties in optical measurements of leaf area index [LAI] can be quantified. We use data from a measurement campaign at the spruce-dominated Howland Forest AmeriFlux...

  3. Quantifying chemical uncertainties in simulations of the ISM

    NASA Astrophysics Data System (ADS)

    Glover, Simon

    2018-06-01

    The ever-increasing power of large parallel computers now makes it possible to include increasingly sophisticated chemical models in three-dimensional simulations of the interstellar medium (ISM). This allows us to study the role that chemistry plays in the thermal balance of a realistically-structured, turbulent ISM, as well as enabling us to generate detailed synthetic observations of important atomic or molecular tracers. However, one major constraint on the accuracy of these models is the accuracy with which the input chemical rate coefficients are known. Uncertainties in these chemical rate coefficients inevitably introduce uncertainties into the model predictions. In this talk, I will review some of the methods we can use to quantify these uncertainties and to identify the key reactions where improved chemical data are most urgently required. I will also discuss a few examples, ranging from the local ISM to the high-redshift universe.

  4. Order Under Uncertainty: Robust Differential Expression Analysis Using Probabilistic Models for Pseudotime Inference

    PubMed Central

    Campbell, Kieran R.

    2016-01-01

    Single cell gene expression profiling can be used to quantify transcriptional dynamics in temporal processes, such as cell differentiation, using computational methods to label each cell with a ‘pseudotime’ where true time series experimentation is too difficult to perform. However, owing to the high variability in gene expression between individual cells, there is an inherent uncertainty in the precise temporal ordering of the cells. Pre-existing methods for pseudotime estimation have predominantly given point estimates precluding a rigorous analysis of the implications of uncertainty. We use probabilistic modelling techniques to quantify pseudotime uncertainty and propagate this into downstream differential expression analysis. We demonstrate that reliance on a point estimate of pseudotime can lead to inflated false discovery rates and that probabilistic approaches provide greater robustness and measures of the temporal resolution that can be obtained from pseudotime inference. PMID:27870852

  5. From Quantification to Visualization: A Taxonomy of Uncertainty Visualization Approaches

    PubMed Central

    Potter, Kristin; Rosen, Paul; Johnson, Chris R.

    2014-01-01

    Quantifying uncertainty is an increasingly important topic across many domains. The uncertainties present in data come with many diverse representations having originated from a wide variety of disciplines. Communicating these uncertainties is a task often left to visualization without clear connection between the quantification and visualization. In this paper, we first identify frequently occurring types of uncertainty. Second, we connect those uncertainty representations to ones commonly used in visualization. We then look at various approaches to visualizing this uncertainty by partitioning the work based on the dimensionality of the data and the dimensionality of the uncertainty. We also discuss noteworthy exceptions to our taxonomy along with future research directions for the uncertainty visualization community. PMID:25663949

  6. Evaluation of reclaimed asphalt pavement for surface mixtures.

    DOT National Transportation Integrated Search

    2012-03-01

    The Indiana Department of Transportation has successfully used Reclaimed Asphalt Pavement (RAP) for decades because of its economic : and environmental benefits. Because of uncertainties regarding the types of aggregates contained in RAP and their re...

  7. Simulation Credibility: Advances in Verification, Validation, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B. (Editor); Eklund, Dean R.; Romero, Vicente J.; Pearce, Jeffrey A.; Keim, Nicholas S.

    2016-01-01

    Decision makers and other users of simulations need to know quantified simulation credibility to make simulation-based critical decisions and effectively use simulations, respectively. The credibility of a simulation is quantified by its accuracy in terms of uncertainty, and the responsibility of establishing credibility lies with the creator of the simulation. In this volume, we present some state-of-the-art philosophies, principles, and frameworks. The contributing authors involved in this publication have been dedicated to advancing simulation credibility. They detail and provide examples of key advances over the last 10 years in the processes used to quantify simulation credibility: verification, validation, and uncertainty quantification. The philosophies and assessment methods presented here are anticipated to be useful to other technical communities conducting continuum physics-based simulations; for example, issues related to the establishment of simulation credibility in the discipline of propulsion are discussed. We envision that simulation creators will find this volume very useful to guide and assist them in quantitatively conveying the credibility of their simulations.

  8. Optical network scaling: roles of spectral and spatial aggregation.

    PubMed

    Arık, Sercan Ö; Ho, Keang-Po; Kahn, Joseph M

    2014-12-01

    As the bit rates of routed data streams exceed the throughput of single wavelength-division multiplexing channels, spectral and spatial traffic aggregation become essential for optical network scaling. These aggregation techniques reduce network routing complexity by increasing spectral efficiency to decrease the number of fibers, and by increasing switching granularity to decrease the number of switching components. Spectral aggregation yields a modest decrease in the number of fibers but a substantial decrease in the number of switching components. Spatial aggregation yields a substantial decrease in both the number of fibers and the number of switching components. To quantify routing complexity reduction, we analyze the number of multi-cast and wavelength-selective switches required in a colorless, directionless and contentionless reconfigurable optical add-drop multiplexer architecture. Traffic aggregation has two potential drawbacks: reduced routing power and increased switching component size.

  9. Cell and Particle Interactions and Aggregation During Electrophoretic Motion

    NASA Technical Reports Server (NTRS)

    Davis, Robert H.

    2000-01-01

    The objectives of this research were (i) to perform experiments for observing and quantifying electrophoretic aggregation, (ii) to develop a theoretical description to appropriately analyze and compare with the experimental results, (iii) to study the combined effects of electrophoretic and gravitational aggregation of large particles, and the combined effects of electrophoretic and Brownian aggregation of small particles, and (iv) to perform a preliminary design of a potential future flight experiment involving electrophoretic aggregation. Electrophoresis refers to the motion of charged particles, droplets or molecules in response to an applied electric field. Electrophoresis is commonly used for analysis and separation of biological particles or molecules. When particles have different surface charge densities or potentials, they will migrate at different velocities in an electric field. This differential migration leads to the possibility that they will collide and aggregate, thereby preventing separation.

  10. A stochastic risk assessment for Eastern Europe and Central Asian countries for earthquakes

    NASA Astrophysics Data System (ADS)

    Daniell, James; Schaefer, Andreas; Toro, Joaquin; Murnane, Rick; Tijssen, Annegien; Simpson, Alanna; Saito, Keiko; Winsemius, Hessel; Ward, Philip

    2015-04-01

    This systematic assessment of earthquake risk for 33 countries in the ECA region was motivated by the interest of the World Bank and the Global Facility for Disaster Reduction and Recovery (GFDRR) in supporting Disaster Risk Management (DRM) efforts. They envisaged an exposure-based analysis that looked at the potential economic and/or social exposure of the populations of various countries to earthquake risk. Using a stochastic earthquake hazard model and historical catalogues, a unified earthquake catalogue was created for the 33 countries. A combined fault and background source model was created using data from many authors. The maximum magnitude and the seismotectonic source zone discretization were determined using logic-tree approaches. Site effects were taken into account on the basis of local topography and tectonic regime. Two approaches were used to calculate local ground motion: intensity prediction equations for MMI and a combination of GMPEs for stable and active settings. A 1 km grid was used for the analysis, with exposure aggregations quantified in terms of GDP and capital stock using disaggregated provincial analysis from CATDAT, as well as population data from Deltares. Vulnerability functions were calculated using socio-economic empirical functions derived by Daniell (2014) for the countries, taking into account historical losses, seismic-resistant code implementation, and building typologies in each country. PML curves were created for each province in the 33 nations through three methods: the first using direct historical values from the CATDAT Damaging Earthquakes Database; the second using normalization procedures to provide a quick estimate of the historical record in today's terms, filling in gaps; and the third a traditional stochastic modelling approach over a period of 10,000 years, taking all uncertainties into account. SSP projections of growth from the OECD were used to quantify the risk in 2010, 2030 and 2080 in order to examine future loss potential. Four loss metrics were quantified as PML curves: (1) population affected in damaged areas (I>6), (2) GDP affected in damaged areas (I>6), (3) deaths, and (4) economic losses. As with any stochastic earthquake risk analysis, the approach carries large uncertainties, and more detailed and refined analyses can be undertaken in any of the countries; nevertheless, it provides ballpark figures for planning and development and a view of expected losses where detailed models do not yet exist.

  11. Uncertainties in Eddy Covariance fluxes due to post-field data processing: a multi-site, full factorial analysis

    NASA Astrophysics Data System (ADS)

    Sabbatini, S.; Fratini, G.; Arriga, N.; Papale, D.

    2012-04-01

    Eddy Covariance (EC) is the only technologically available direct method to measure carbon and energy fluxes between ecosystems and atmosphere. However, uncertainties related to this method have not been exhaustively assessed yet, including those deriving from post-field data processing. The latter arise because there is no exact processing sequence established for any given situation, and the sequence itself is long and complex, with many processing steps and options available. However, the consistency and inter-comparability of flux estimates may be largely affected by the adoption of different processing sequences. The goal of our work is to quantify the uncertainty introduced in each processing step by the fact that different options are available, and to study how the overall uncertainty propagates throughout the processing sequence. We propose an easy-to-use methodology to assign a confidence level to the calculated fluxes of energy and mass, based on the adopted processing sequence, and on available information such as the EC system type (e.g. open vs. closed path), the climate and the ecosystem type. The proposed methodology synthesizes the results of a massive full-factorial experiment. We use one year of raw data from 15 European flux stations and process them so as to cover all possible combinations of the available options across a selection of the most relevant processing steps. The 15 sites have been selected to be representative of different ecosystems (forests, croplands and grasslands), climates (mediterranean, nordic, arid and humid) and instrumental setup (e.g. open vs. closed path). The software used for this analysis is EddyPro™ 3.0 (www.licor.com/eddypro). The critical processing steps, selected on the basis of the different options commonly used in the FLUXNET community, are: angle of attack correction; coordinate rotation; trend removal; time lag compensation; low- and high- frequency spectral correction; correction for air density fluctuations; and length of the flux averaging interval. We illustrate the results of the full-factorial combination relative to a subset of the selected sites with particular emphasis on the total uncertainty at different time scales and aggregations, as well as a preliminary analysis of the most critical steps for their contribution to the total uncertainties and their potential relation with site set-up characteristics and ecosystem type.

  12. Do Lanice conchilega (sandmason) aggregations classify as reefs? Quantifying habitat modifying effects

    NASA Astrophysics Data System (ADS)

    Rabaut, Marijn; Vincx, Magda; Degraer, Steven

    2009-03-01

    The positive effects of the tube-dwelling polychaete Lanice conchilega on the associated benthic community emphasize this bio-engineer’s habitat structuring capacity (Rabaut et al. in Estuar Coastal Shelf Sci, 2007). Therefore, L. conchilega aggregations are often referred to as reefs. The reef building capacity of ecosystem engineers is important for marine management, as recognition as a reef builder will increase the protected status of the species concerned. To classify as reefs, however, bio-engineering activities need to significantly alter several habitat characteristics: elevation, sediment consolidation, spatial extent, patchiness, reef builder density, biodiversity, community structure, longevity and stability [guidelines to apply the EU reef-definition by Hendrick and Foster-Smith (J Mar Biol Assoc UK 86:665-677, 2006)]. This study investigates the physical and temporal characteristics of high density aggregations of L. conchilega. Results show that the elevation and sediment consolidation of the biogenic mounds were significantly higher compared to the surrounding unstructured sediment. Areas with L. conchilega aggregations tend to be extensive and patchiness is high (coverage 5-18%). The discussion of the present study evaluates whether L. conchilega aggregations can be considered as reefs (discussing physical, biological and temporal characteristics). Individual aggregations were found to persist for several years if yearly renewal of existing aggregations through juvenile settlement occurred. This renewal is enhanced by local hydrodynamic changes and the availability of attaching structures (adult tubes). We conclude that the application of the EU definition for reefs provides evidence that all physical and biological characteristics are present to classify L. conchilega as a reef builder. For the temporal characteristics, this study shows that several mechanisms exist for reefs to persist for a longer period of time. However, direct evidence of long-lived individual reefs does not exist. As a range of aggregation development exists, ‘reefiness’ is not equal for all aggregations, and a scoring table to quantify L. conchilega reefiness is presented.

  13. Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models

    EPA Science Inventory

    The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...

  14. Characterization of Diesel Soot Aggregates by Scattering and Extinction Methods

    NASA Astrophysics Data System (ADS)

    Kamimoto, Takeyuki

    2006-07-01

    Characteristics of diesel soot particles sampled from the exhaust of a common-rail turbo-charged diesel engine are quantified by scattering and extinction diagnostics using two newly built laser-based instruments. The radius of gyration representing the aggregate size is measured from the angular distribution of scattering intensity, while the soot mass concentration is measured by a two-wavelength extinction method. An approach to estimate the refractive index of diesel soot by an analysis of the extinction and scattering data using an aggregate scattering theory is proposed.
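
    The angular-scattering route to a radius of gyration is commonly handled with a Guinier-type analysis, ln I(q) ≈ ln I(0) - q^2 Rg^2 / 3 at small q. The sketch below shows that generic step only; full aggregate scattering theories (e.g. RDG-FA) add corrections beyond this simple form, and the instrument's own inversion is not reproduced here.

```python
import numpy as np

def radius_of_gyration(theta_deg, intensity, wavelength_nm):
    """Estimate Rg (nm) from angular scattering data via a Guinier fit.

    Assumes the data lie in the Guinier regime, where
    ln I(q) = ln I(0) - (Rg**2 / 3) * q**2.
    """
    theta = np.radians(theta_deg)
    q = 4.0 * np.pi / wavelength_nm * np.sin(theta / 2.0)   # scattering vector, 1/nm
    slope, _ = np.polyfit(q**2, np.log(intensity), 1)       # fit ln I vs q^2
    return np.sqrt(-3.0 * slope)                            # Rg in nm

# Hypothetical data: synthetic Guinier-regime intensities for Rg = 80 nm
theta = np.linspace(2.0, 10.0, 20)
q = 4 * np.pi / 532.0 * np.sin(np.radians(theta) / 2)
I = np.exp(-(80.0**2) * q**2 / 3.0)
print(f"Recovered Rg ~ {radius_of_gyration(theta, I, 532.0):.1f} nm")
```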

  15. CASMO5/TSUNAMI-3D spent nuclear fuel reactivity uncertainty analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrer, R.; Rhodes, J.; Smith, K.

    2012-07-01

    The CASMO5 lattice physics code is used in conjunction with the TSUNAMI-3D sequence in ORNL's SCALE 6 code system to estimate the uncertainties in hot-to-cold reactivity changes due to cross-section uncertainty for PWR assemblies at various burnup points. The goal of the analysis is to establish the multiplication factor uncertainty similarity between various fuel assemblies at different conditions in a quantifiable manner and to obtain a bound on the hot-to-cold reactivity uncertainty over the various assembly types and burnup attributed to fundamental cross-section data uncertainty. (authors)

  16. Quantification of uncertainties for application in detonation simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Miao; Ma, Zhibo

    2016-06-01

    Numerical simulation has become an important means of designing detonation systems, and quantification of its uncertainty is necessary for reliability certification. To quantify this uncertainty, it is most important to analyze how the uncertainties arise and develop, and how the simulations evolve from benchmark models to new models. Based on the practical needs of engineering and on verification and validation techniques, a framework for quantification of uncertainty (QU) is put forward for the case in which simulation is used for scientific prediction of a detonation system. An example is offered to illustrate the general idea of quantifying simulation uncertainties.

  17. ANALYTICAL METHODS DEVELOPMENT FOR DIETARY SAMPLES

    EPA Science Inventory

    The Microbiological and Chemical Exposure Assessment Research Division's (MCEARD) dietary exposure research program is conducted to complement the NERL aggregate and cumulative exposure program. Its purpose is to reduce the level of uncertainty in exposure assessment by improvin...

  18. Biological framework for soil aggregation: Implications for ecological functions.

    NASA Astrophysics Data System (ADS)

    Ghezzehei, Teamrat; Or, Dani

    2016-04-01

    Soil aggregation is heuristically understood as the agglomeration of primary particles bound together by biotic and abiotic cementing agents. The organization of aggregates is believed to be hierarchical in nature, whereby primary particles bond together to form secondary particles and subsequently merge to form larger aggregates. Soil aggregates are not permanent structures; they continuously change in response to internal and external forces and other drivers, including moisture, capillary pressure, temperature, biological activity, and human disturbances. Soil aggregation processes and the resulting functionality span multiple spatial and temporal scales. The intertwined biological and physical nature of soil aggregation, and the time scales involved, have precluded a universally applicable and quantifiable framework for characterizing the nature and function of soil aggregation. We introduce a biophysical framework of soil aggregation that considers the various modes and factors of the genesis, maturation and degradation of soil aggregates, including wetting/drying cycles, soil mechanical processes, biological activity and the nature of primary soil particles. The framework attempts to disentangle mechanical (compaction and soil fragmentation) from in-situ biophysical aggregation and provides a consistent description of aggregate size, hierarchical organization, and lifetime. It also enables quantitative description of biotic and abiotic functions of soil aggregates, including diffusion and storage of mass and energy, as well as the role of aggregates as hot spots of nutrient accumulation, biodiversity, and biogeochemical cycles.

  19. Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier-Stokes simulations: A data-driven, physics-informed Bayesian approach

    NASA Astrophysics Data System (ADS)

    Xiao, H.; Wu, J.-L.; Wang, J.-X.; Sun, R.; Roy, C. J.

    2016-11-01

    Despite their well-known limitations, Reynolds-Averaged Navier-Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach has potential implications in many fields in which the governing equations are well understood but the model uncertainty comes from unresolved physical processes.
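
    The ensemble Kalman step that assimilates sparse observations into the Reynolds-stress perturbation parameters follows the standard stochastic update; a generic sketch is given below with a hypothetical forward model h(), and it is not the authors' implementation.

```python
import numpy as np

def enkf_update(theta_ens, h, d_obs, R, rng):
    """One stochastic ensemble Kalman analysis step.

    theta_ens : (n_ens, n_param) prior parameter ensemble
    h         : forward model mapping one parameter vector to predicted observations
    d_obs     : (n_obs,) observation vector
    R         : (n_obs, n_obs) observation-error covariance
    """
    Y = np.array([h(t) for t in theta_ens])                 # predicted obs, (n_ens, n_obs)
    theta_mean, Y_mean = theta_ens.mean(0), Y.mean(0)
    A, B = theta_ens - theta_mean, Y - Y_mean
    n_ens = theta_ens.shape[0]
    C_ty = A.T @ B / (n_ens - 1)                            # parameter/observation cross-covariance
    C_yy = B.T @ B / (n_ens - 1) + R
    K = C_ty @ np.linalg.inv(C_yy)                          # Kalman gain
    d_pert = d_obs + rng.multivariate_normal(np.zeros(len(d_obs)), R, size=n_ens)
    return theta_ens + (d_pert - Y) @ K.T                   # posterior ensemble
```

    In the iterative variant, an update of this form is applied repeatedly, which is one common way of coping with strongly nonlinear forward models.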

  20. Analysis of the uncertainty in the monetary valuation of ecosystem services--A case study at the river basin scale.

    PubMed

    Boithias, Laurie; Terrado, Marta; Corominas, Lluís; Ziv, Guy; Kumar, Vikas; Marqués, Montse; Schuhmacher, Marta; Acuña, Vicenç

    2016-02-01

    Ecosystem services provide multiple benefits to human wellbeing and are increasingly considered by policy-makers in environmental management. However, the uncertainty related to the monetary valuation of these benefits is not yet adequately defined or integrated by policy-makers. Given this background, our aim was to quantify different sources of uncertainty when performing monetary valuation of ecosystem services, in order to provide a series of guidelines to reduce them. With an example of 4 ecosystem services (i.e., water provisioning, waste treatment, erosion protection, and habitat for species) provided at the river basin scale, we quantified the uncertainty associated with the following sources: (1) the number of services considered, (2) the number of benefits considered for each service, (3) the valuation metrics (i.e. valuation methods) used to value benefits, and (4) the uncertainty of the parameters included in the valuation metrics. Results indicate that the highest uncertainty was caused by the number of services considered, as well as by the number of benefits considered for each service, whereas the parametric uncertainty was similar to that related to the selection of valuation metric. This suggests that the parametric uncertainty, which is the only uncertainty type commonly considered, was less critical than the structural uncertainty, which is in turn mainly dependent on the decision-making context. Given the uncertainty associated with the valuation structure, special attention should be given to the selection of services, benefits and metrics according to a given context. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Development of a Prototype Model-Form Uncertainty Knowledge Base

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.

    2016-01-01

    Uncertainties are generally classified as either aleatory or epistemic. Aleatory uncertainties are those attributed to random variation, either naturally or through manufacturing processes. Epistemic uncertainties are generally attributed to a lack of knowledge. One type of epistemic uncertainty is called model-form uncertainty. The term model-form means that, among the choices to be made during a design process within an analysis, there are different forms of the analysis process, each of which gives different results for the same configuration at the same flight conditions. Examples of model-form uncertainties include the grid density, grid type, and solver type used within a computational fluid dynamics code, or the choice of the number and type of model elements within a structures analysis. The objectives of this work are to identify and quantify a representative set of model-form uncertainties and to make this information available to designers through an interactive knowledge base (KB). The KB can then be used during probabilistic design sessions, so as to enable the possible reduction of uncertainties in the design process through resource investment. An extensive literature search has been conducted to identify and quantify typical model-form uncertainties present within aerospace design. An initial attempt has been made to assemble the results of this literature search into a searchable KB, usable in real time during probabilistic design sessions. A concept of operations and the basic structure of a model-form uncertainty KB are described. Key operations within the KB are illustrated. Current limitations in the KB and possible workarounds are explained.

  2. Balancing aggregation and smoothing errors in inverse models

    DOE PAGES

    Turner, A. J.; Jacob, D. J.

    2015-06-30

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
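
    Of the three reduction methods compared, grid coarsening is the simplest to write down: the native-resolution state vector is projected onto a smaller set of aggregated elements through an aggregation matrix, and any structure within a cluster is then imposed by the prior rather than optimized (the source of the aggregation error). A minimal sketch with a hypothetical cluster assignment follows.

```python
import numpy as np

def aggregation_matrix(cluster_of, n_native):
    """Build W such that x_reduced = W @ x_native averages native elements per cluster.

    cluster_of : (n_native,) integer cluster index for each native-resolution element
                 (e.g., from merging adjacent grid cells).
    """
    n_reduced = cluster_of.max() + 1
    W = np.zeros((n_reduced, n_native))
    for j, k in enumerate(cluster_of):
        W[k, j] = 1.0
    W /= W.sum(axis=1, keepdims=True)          # simple (unweighted) average per cluster
    return W

# Hypothetical 1-D example: 8 native cells merged pairwise into 4 coarse elements
cluster_of = np.array([0, 0, 1, 1, 2, 2, 3, 3])
x_native = np.arange(8, dtype=float)
W = aggregation_matrix(cluster_of, n_native=8)
print(W @ x_native)                             # [0.5, 2.5, 4.5, 6.5]
```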

  3. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-01-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  4. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-06-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  5. Quantitative analysis of trace levels of surface contamination by X-ray photoelectron spectroscopy Part I: statistical uncertainty near the detection limit.

    PubMed

    Hill, Shannon B; Faradzhev, Nadir S; Powell, Cedric J

    2017-12-01

    We discuss the problem of quantifying common sources of statistical uncertainties for analyses of trace levels of surface contamination using X-ray photoelectron spectroscopy. We examine the propagation of error for peak-area measurements using common forms of linear and polynomial background subtraction including the correlation of points used to determine both background and peak areas. This correlation has been neglected in previous analyses, but we show that it contributes significantly to the peak-area uncertainty near the detection limit. We introduce the concept of relative background subtraction variance (RBSV) which quantifies the uncertainty introduced by the method of background determination relative to the uncertainty of the background area itself. The uncertainties of the peak area and atomic concentration and of the detection limit are expressed using the RBSV, which separates the contributions from the acquisition parameters, the background-determination method, and the properties of the measured spectrum. These results are then combined to find acquisition strategies that minimize the total measurement time needed to achieve a desired detection limit or atomic-percentage uncertainty for a particular trace element. Minimization of data-acquisition time is important for samples that are sensitive to x-ray dose and also for laboratories that need to optimize throughput.
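
    The starting point of such an analysis, a peak area with a straight-line background defined by windows on either side of the peak plus Poisson counting statistics, can be sketched as follows. The window names and slices are illustrative; the sketch deliberately assumes non-overlapping windows and therefore neglects the correlation terms and the RBSV formalism that the paper develops.

```python
import numpy as np

def peak_area_linear_bg(counts, peak_sl, left_sl, right_sl):
    """Peak area and its Poisson standard uncertainty with a linear background
    interpolated between the mean counts of the left and right background windows.

    counts   : 1-D array of counts per channel
    peak_sl, left_sl, right_sl : slices defining the peak and background windows
    Assumes the background windows do not overlap the peak window, i.e. the
    background/peak correlation discussed in the paper is neglected here.
    """
    y = np.asarray(counts, dtype=float)
    yL, yR, yP = y[left_sl], y[right_sl], y[peak_sl]
    n_peak = yP.size
    background = 0.5 * (yL.mean() + yR.mean()) * n_peak      # straight-line approximation
    area = yP.sum() - background
    # Poisson variances: Var(sum y) = sum y; Var(mean) = sum y / m**2
    var = yP.sum() + (n_peak / 2.0) ** 2 * (yL.sum() / yL.size**2 + yR.sum() / yR.size**2)
    return area, np.sqrt(var)
```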

  6. Assessing uncertainties in land cover projections.

    PubMed

    Alexander, Peter; Prestele, Reinhard; Verburg, Peter H; Arneth, Almut; Baranzelli, Claudia; Batista E Silva, Filipe; Brown, Calum; Butler, Adam; Calvin, Katherine; Dendoncker, Nicolas; Doelman, Jonathan C; Dunford, Robert; Engström, Kerstin; Eitelberg, David; Fujimori, Shinichiro; Harrison, Paula A; Hasegawa, Tomoko; Havlik, Petr; Holzhauer, Sascha; Humpenöder, Florian; Jacobs-Crisioni, Chris; Jain, Atul K; Krisztin, Tamás; Kyle, Page; Lavalle, Carlo; Lenton, Tim; Liu, Jiayi; Meiyappan, Prasanth; Popp, Alexander; Powell, Tom; Sands, Ronald D; Schaldach, Rüdiger; Stehfest, Elke; Steinbuks, Jevgenijs; Tabeau, Andrzej; van Meijl, Hans; Wise, Marshall A; Rounsevell, Mark D A

    2017-02-01

    Understanding uncertainties in land cover projections is critical to investigating land-based climate mitigation policies, assessing the potential of climate adaptation strategies and quantifying the impacts of land cover change on the climate system. Here, we identify and quantify uncertainties in global and European land cover projections over a diverse range of model types and scenarios, extending the analysis beyond the agro-economic models included in previous comparisons. The results from 75 simulations over 18 models are analysed and show a large range in land cover area projections, with the highest variability occurring in future cropland areas. We demonstrate systematic differences in land cover areas associated with the characteristics of the modelling approach, which are at least as great as the differences attributed to the scenario variations. The results lead us to conclude that a higher degree of uncertainty exists in land use projections than is currently included in climate or earth system projections. To account for land use uncertainty, it is recommended to use a diverse set of models and approaches when assessing the potential impacts of land cover change on future climate. Additionally, further work is needed to better understand the assumptions driving land use model results and to reveal the causes of uncertainty in more depth, to help reduce model uncertainty and improve the projections of land cover. © 2016 John Wiley & Sons Ltd.

  7. Estimation of Uncertainties for a Supersonic Retro-Propulsion Model Validation Experiment in a Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Rhode, Matthew N.; Oberkampf, William L.

    2012-01-01

    A high-quality model validation experiment was performed in the NASA Langley Research Center Unitary Plan Wind Tunnel to assess the predictive accuracy of computational fluid dynamics (CFD) models for a blunt-body supersonic retro-propulsion configuration at Mach numbers from 2.4 to 4.6. Static and fluctuating surface pressure data were acquired on a 5-inch-diameter test article with a forebody composed of a spherically-blunted, 70-degree half-angle cone and a cylindrical aft body. One non-powered configuration with a smooth outer mold line was tested as well as three different powered, forward-firing nozzle configurations: a centerline nozzle, three nozzles equally spaced around the forebody, and a combination with all four nozzles. A key objective of the experiment was the determination of experimental uncertainties from a range of sources such as random measurement error, flowfield non-uniformity, and model/instrumentation asymmetries. This paper discusses the design of the experiment towards capturing these uncertainties for the baseline non-powered configuration, the methodology utilized in quantifying the various sources of uncertainty, and examples of the uncertainties applied to non-powered and powered experimental results. The analysis showed that flowfield non-uniformity was the dominant contributor to the overall uncertainty, a finding in agreement with other experiments that have quantified various sources of uncertainty.

  8. Mechanisms of Soil Aggregation: a biophysical modeling framework

    NASA Astrophysics Data System (ADS)

    Ghezzehei, T. A.; Or, D.

    2016-12-01

    Soil aggregation is one of the main crosscutting concepts in all sub-disciplines and applications of soil science, from agriculture to climate regulation. The concept generally refers to the adhesion of primary soil particles into distinct units that remain stable when subjected to disruptive forces. It is one of the most sensitive soil qualities, readily responding to disturbances such as cultivation, fire, drought, flooding, and changes in vegetation. These changes are commonly quantified and incorporated in soil models indirectly, as alterations in carbon content and type, bulk density, aeration, permeability, and water retention characteristics. Soil aggregation that is primarily controlled by organic matter generally exhibits hierarchical organization of soil constituents into stable units that range in size from a few microns to centimeters. However, this conceptual model of soil aggregation as the key unifying mechanism remains poorly quantified and is rarely included in predictive soil models. Here we provide a biophysical framework for quantitative and predictive modeling of soil aggregation and its attendant soil characteristics. The framework treats aggregates as hotspots of biological, chemical and physical processes centered around roots and root residue. We track the life cycle of an individual aggregate from its genesis in the rhizosphere, fueled by rhizodeposition and mediated by vigorous microbial activity, until its disappearance when the root-derived resources are depleted. The framework synthesizes current understanding of microbial life in porous media, the water-holding and soil-binding capacity of biopolymers, and environmental controls on soil organic matter dynamics. It paves the way for the integration of processes that are presently modeled as disparate or poorly coupled, including storage and protection of carbon, microbial activity, greenhouse gas fluxes, movement and storage of water, and the resistance of soils against erosion.

  9. Spatial Downscaling of Alien Species Presences using Machine Learning

    NASA Astrophysics Data System (ADS)

    Daliakopoulos, Ioannis N.; Katsanevakis, Stelios; Moustakas, Aristides

    2017-07-01

    Large-scale, high-resolution data on alien species distributions are essential for spatially explicit assessments of their environmental and socio-economic impacts, and for management interventions for mitigation. However, these data are often unavailable. This paper presents a method that relies on Random Forest (RF) models to distribute alien species presence counts onto a finer-resolution grid, thus achieving spatial downscaling. A sufficiently large number of RF models are trained using random subsets of the dataset as predictors, in a bootstrapping approach, to account for the uncertainty introduced by the subset selection. The method is tested with an approximately 8×8 km2 grid containing floral alien species presences and several climatic, habitat and land use covariates for the Mediterranean island of Crete, Greece. Alien species presence is aggregated at 16×16 km2 and used as a predictor of presence at the original resolution, thus simulating spatial downscaling. Potential explanatory variables included habitat types, land cover richness, endemic species richness, soil type, temperature, precipitation, and freshwater availability. Uncertainty assessment of the spatial downscaling of alien species’ occurrences was also performed, and true/false presences and absences were quantified. The approach is promising for downscaling alien species datasets of larger spatial scale but coarse resolution, where the underlying environmental information is available at a finer resolution than the alien species data. Furthermore, the RF architecture allows tuning towards operationally optimal sensitivity and specificity, thus providing a decision support tool for designing a resource-efficient alien species census.
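
    The core loop, training many Random Forests on random subsets and predicting fine-grid presence from the coarse-grid counts plus fine-resolution covariates, can be sketched generically as follows; variable names are hypothetical and scikit-learn stands in for whatever implementation the authors used.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def downscale_with_uncertainty(X_fine, y_fine, n_models=100, subsample=0.8, seed=0):
    """Bootstrap an ensemble of Random Forests to predict fine-grid presence counts.

    X_fine : (n_cells, n_features) covariates at the fine grid, including the
             aggregated coarse-grid presence count assigned to each fine cell
    y_fine : (n_cells,) observed presence counts at the fine grid (training data)
    Returns the per-cell mean prediction and its spread across the ensemble.
    """
    X_fine, y_fine = np.asarray(X_fine), np.asarray(y_fine)
    rng = np.random.default_rng(seed)
    n = len(y_fine)
    preds = np.empty((n_models, n))
    for m in range(n_models):
        idx = rng.choice(n, size=int(subsample * n), replace=False)   # random subset
        rf = RandomForestRegressor(n_estimators=200, random_state=m)
        rf.fit(X_fine[idx], y_fine[idx])
        preds[m] = rf.predict(X_fine)
    return preds.mean(axis=0), preds.std(axis=0)    # downscaled counts + uncertainty
```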

  10. Quantifying allometric model uncertainty for plot-level live tree biomass stocks with a data-driven, hierarchical framework

    Treesearch

    Brian J. Clough; Matthew B. Russell; Grant M. Domke; Christopher W. Woodall

    2016-01-01

    Accurate uncertainty assessments of plot-level live tree biomass stocks are an important precursor to estimating uncertainty in annual national greenhouse gas inventories (NGHGIs) developed from forest inventory data. However, current approaches employed within the United States’ NGHGI do not specifically incorporate methods to address error in tree-scale biomass...

  11. Uncertainty Assessment of Hypersonic Aerothermodynamics Prediction Capability

    NASA Technical Reports Server (NTRS)

    Bose, Deepak; Brown, James L.; Prabhu, Dinesh K.; Gnoffo, Peter; Johnston, Christopher O.; Hollis, Brian

    2011-01-01

    The present paper provides the background of a focused effort to assess uncertainties in predictions of heat flux and pressure in hypersonic flight (airbreathing or atmospheric entry) using state-of-the-art aerothermodynamics codes. The assessment is performed for four mission-relevant problems: (1) shock-turbulent boundary layer interaction on a compression corner, (2) shock-turbulent boundary layer interaction due to an impinging shock, (3) high-mass Mars entry and aerocapture, and (4) high-speed return to Earth. A validation-based uncertainty assessment approach with reliance on subject matter expertise is used. A code verification exercise with code-to-code comparisons and comparisons against well-established correlations is also included in this effort. A thorough review of the literature in search of validation experiments is performed, which identified a scarcity of ground-based validation experiments at hypersonic conditions. In particular, a shortage of usable experimental data at flight-like enthalpies and Reynolds numbers is found. The uncertainty was quantified using metrics that measure the discrepancy between model predictions and experimental data. The discrepancy data are statistically analyzed and investigated for physics-based trends in order to define a meaningful quantified uncertainty. The detailed uncertainty assessment of each mission-relevant problem is found in the four companion papers.

  12. BAYESIAN METHODS FOR REGIONAL-SCALE EUTROPHICATION MODELS. (R830887)

    EPA Science Inventory

    We demonstrate a Bayesian classification and regression tree (CART) approach to link multiple environmental stressors to biological responses and quantify uncertainty in model predictions. Such an approach can: (1) report prediction uncertainty, (2) be consistent with the amou...

  13. Multi-criteria multi-stakeholder decision analysis using a fuzzy-stochastic approach for hydrosystem management

    NASA Astrophysics Data System (ADS)

    Subagadis, Y. H.; Schütze, N.; Grundmann, J.

    2014-09-01

    Conventional methods for solving multi-criteria multi-stakeholder problems are weakly formulated, as they normally incorporate only homogeneous information at a time and aggregate the objectives of different decision-makers while neglecting water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach is proposed to rank a set of alternatives in water management decisions, incorporating heterogeneous information under uncertainty. The decision-making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, while fuzzy linguistic quantifiers are used to evaluate subjective criteria and to assess stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision-makers' degree of optimism.
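
    Fuzzy linguistic quantifiers are commonly converted into aggregation weights through Yager's ordered weighted averaging (OWA) operator; the sketch below shows that generic construction with an illustrative quantifier exponent standing in for the decision-makers' degree of optimism, not the paper's actual calibration.

```python
import numpy as np

def owa_weights(n, alpha):
    """OWA weights from the RIM quantifier Q(r) = r**alpha.

    alpha < 1 rewards the best-performing criteria (optimistic attitude),
    alpha > 1 the worst-performing ones (pessimistic attitude).
    """
    r = np.arange(n + 1) / n
    q = r ** alpha
    return np.diff(q)                        # w_i = Q(i/n) - Q((i-1)/n)

def owa(scores, alpha):
    """Aggregate criteria scores (in [0, 1]) for one alternative."""
    w = owa_weights(len(scores), alpha)
    return float(np.sort(scores)[::-1] @ w)  # weights applied to descending-sorted scores

# Hypothetical: one alternative scored against 4 criteria, moderately optimistic decision-maker
print(owa(np.array([0.9, 0.4, 0.7, 0.6]), alpha=0.5))
```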

  14. Using geostatistics to evaluate cleanup goals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcon, M.F.; Hopkins, L.P.

    1995-12-01

    Geostatistical analysis is a powerful predictive tool typically used to define spatial variability in environmental data. The information from a geostatistical analysis using kriging, a geostatistical tool, can be taken a step further to optimize sampling location and frequency and to help quantify sampling uncertainty in both the remedial investigation and the remedial design at a hazardous waste site. Geostatistics were used to quantify sampling uncertainty in attainment of a risk-based cleanup goal and to determine the optimal sampling frequency necessary to delineate the horizontal extent of impacted soils at a Gulf Coast waste site.
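
    Ordinary kriging predicts the value at an unsampled location as a weighted average of neighbouring samples, with weights obtained from a fitted semivariogram; the kriging variance that falls out of the same linear system is what quantifies the sampling uncertainty. A self-contained sketch with an assumed exponential variogram and made-up sample data follows.

```python
import numpy as np

def exp_variogram(h, sill=1.0, rng_param=50.0, nugget=0.0):
    """Exponential semivariogram model (h = lag distance)."""
    return nugget + sill * (1.0 - np.exp(-3.0 * h / rng_param))

def ordinary_kriging(coords, values, target, variogram=exp_variogram):
    """Ordinary kriging estimate and kriging variance at a single target point."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = variogram(d)                 # semivariances between samples
    A[:n, n] = A[n, :n] = 1.0                # unbiasedness (Lagrange multiplier) row/column
    b = np.append(variogram(np.linalg.norm(coords - target, axis=1)), 1.0)
    sol = np.linalg.solve(A, b)
    weights, mu = sol[:n], sol[n]
    estimate = weights @ values
    variance = weights @ b[:n] + mu          # kriging (prediction) variance
    return estimate, variance

# Hypothetical soil-concentration samples on a 2-D site
coords = np.array([[0.0, 0.0], [40.0, 10.0], [10.0, 35.0], [50.0, 50.0]])
values = np.array([120.0, 80.0, 95.0, 60.0])
print(ordinary_kriging(coords, values, target=np.array([25.0, 25.0])))
```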

  15. Objectified quantification of uncertainties in Bayesian atmospheric inversions

    NASA Astrophysics Data System (ADS)

    Berchet, A.; Pison, I.; Chevallier, F.; Bousquet, P.; Bonne, J.-L.; Paris, J.-D.

    2015-05-01

    Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator picturing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. When data pieces are sparse, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results to enhance the classical Bayesian inversion framework through a marginalization on a large set of plausible errors that can be prescribed in the system. The marginalization consists in computing inversions for all possible error distributions, weighted by the probability of occurrence of the error distributions. The posterior distribution of the fluxes calculated by the marginalization is not explicitly describable. As a consequence, we carry out a Monte Carlo sampling based on an approximation of the probability of occurrence of the error distributions. This approximation is deduced from the well-tested method of maximum likelihood estimation. Thus, the marginalized inversion relies on an automatic, objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly accounts for the uncertainties on the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of an emission aggregation pattern and of a sampling protocol in order to reduce the computation cost. The relevance and the robustness of the method are tested on a case study: the inversion of methane surface fluxes at the mesoscale with virtual observations on a realistic network in Eurasia. Observing system simulation experiments are carried out with different transport patterns, flux distributions and total prior amounts of emitted methane. The method proves to consistently reproduce the known "truth" in most cases, with satisfactory tolerance intervals. Additionally, the method explicitly provides influence scores and posterior correlation matrices, so that an in-depth interpretation of the inversion results is possible. The more objective quantification of the influence of the observations on the fluxes proposed here allows us to evaluate the impact of the observation network on the characterization of the surface fluxes. The explicit correlations between emission aggregates reveal the mis-separated regions, and hence the typical temporal and spatial scales the inversion can analyse. These scales are consistent with the chosen aggregation patterns.
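
    In a linear-Gaussian setting, the marginalization amounts to solving the classical analytic inversion many times with error covariances drawn from their (approximated) probability of occurrence and pooling the resulting posteriors. The schematic sketch below conveys only that idea; the maximum-likelihood weighting and the flux-aggregation machinery of the paper are not reproduced.

```python
import numpy as np

def analytic_inversion(x_prior, B, H, y, R):
    """Posterior mean of the classical Bayesian (linear-Gaussian) inversion."""
    S = H @ B @ H.T + R
    K = B @ H.T @ np.linalg.inv(S)
    return x_prior + K @ (y - H @ x_prior)

def marginalized_inversion(x_prior, H, y, sample_B, sample_R, n_draws=500, seed=0):
    """Monte Carlo marginalization over plausible prior/observation error covariances.

    sample_B, sample_R : callables returning one random covariance matrix each,
    standing in for the approximated probability of occurrence of the error statistics.
    """
    rng = np.random.default_rng(seed)
    posteriors = np.array([
        analytic_inversion(x_prior, sample_B(rng), H, y, sample_R(rng))
        for _ in range(n_draws)
    ])
    return posteriors.mean(axis=0), posteriors.std(axis=0)   # marginal mean and spread
```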

  16. The formulation and immunogenicity of therapeutic proteins: Product quality as a key factor.

    PubMed

    Richard, Joel; Prang, Nadia

    2010-08-01

    The formation of anti-drug antibodies represents a risk that should be assessed carefully during biopharmaceutical drug product (DP) development, as such antibodies compromise safety and efficacy and may alter the pharmacokinetic properties of a compound. This feature review discusses immunogenicity issues in biopharmaceutical DP development, with a focus on product quality. Excipient-induced and aggregate-induced immunogenicity are reviewed based on the concepts of 'aggregation-competent' species and 'provocative' aggregates. In addition, the influence of formulation parameters, such as particulates and contaminants appearing in the DP during processing and storage, on aggregate-induced immunogenicity is presented, including the role of fill-and-finish equipment and the effect of interactions with container materials. Furthermore, methods to detect and quantify aggregation and precursor conformational changes in a protein formulation are reviewed, and immunological mechanisms that may lead to aggregate-induced immunogenicity are proposed and discussed.

  17. Satellite-based drought monitoring in Kenya in an operational setting

    NASA Astrophysics Data System (ADS)

    Klisch, A.; Atzberger, C.; Luminari, L.

    2015-04-01

    The University of Natural Resources and Life Sciences (BOKU) in Vienna (Austria), in cooperation with the National Drought Management Authority (NDMA) in Nairobi (Kenya), has set up an operational processing chain for mapping drought occurrence and strength for the territory of Kenya using the Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI at 250 m ground resolution from 2000 onwards. The processing chain employs a modified Whittaker smoother providing consistent NDVI "Monday images" in near real-time (NRT) at a 7-daily updating interval. The approach constrains temporally extrapolated NDVI values based on reasonable temporal NDVI paths. Contrary to other competing approaches, the processing chain provides a modelled uncertainty range for each pixel and time step. The uncertainties are calculated by a hindcast analysis of the NRT products against an "optimum" filtering. To detect droughts, the vegetation condition index (VCI) is calculated at pixel level and is spatially aggregated to administrative units. Starting from weekly temporal resolution, the indicator is also aggregated over 1- and 3-monthly intervals considering the available uncertainty information. Analysts at NDMA use the spatially/temporally aggregated VCI and basic image products for their monthly bulletins. Based on the provided bio-physical indicators as well as a number of socio-economic indicators, contingency funds are released by NDMA to support counties in drought conditions. The paper shows the successful application of the products within NDMA by providing a retrospective analysis applied to droughts in 2006, 2009 and 2011. Comparisons with alternative products (e.g. FEWS NET, the Famine Early Warning Systems Network) highlight the main differences.
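
    The vegetation condition index itself is a per-pixel rescaling of NDVI against its historical range for the same period of the year, VCI = 100 * (NDVI - NDVI_min) / (NDVI_max - NDVI_min), followed by spatial aggregation to administrative units. A minimal sketch under assumed array shapes is shown below.

```python
import numpy as np

def vci(ndvi, ndvi_min, ndvi_max):
    """Vegetation Condition Index in percent (0 = historical minimum, 100 = maximum)."""
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

def aggregate_vci(vci_pixels, unit_ids):
    """Mean VCI per administrative unit (unit_ids gives the unit of each pixel)."""
    units = np.unique(unit_ids)
    return {u: float(np.nanmean(vci_pixels[unit_ids == u])) for u in units}

# Hypothetical example: 6 pixels in 2 administrative units
ndvi      = np.array([0.30, 0.45, 0.20, 0.55, 0.60, 0.35])
ndvi_min  = np.array([0.15, 0.20, 0.15, 0.30, 0.35, 0.20])
ndvi_max  = np.array([0.60, 0.70, 0.55, 0.80, 0.85, 0.65])
unit_ids  = np.array([0, 0, 0, 1, 1, 1])
print(aggregate_vci(vci(ndvi, ndvi_min, ndvi_max), unit_ids))
```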

  18. Bayesian focalization: quantifying source localization with environmental uncertainty.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2007-05-01

    This paper applies a Bayesian formulation to study ocean acoustic source localization as a function of uncertainty in environmental properties (water column and seabed) and of data information content [signal-to-noise ratio (SNR) and number of frequencies]. The approach follows that of the optimum uncertain field processor [A. M. Richardson and L. W. Nolte, J. Acoust. Soc. Am. 89, 2280-2284 (1991)], in that localization uncertainty is quantified by joint marginal probability distributions for source range and depth integrated over uncertain environmental properties. The integration is carried out here using Metropolis Gibbs' sampling for environmental parameters and heat-bath Gibbs' sampling for source location to provide efficient sampling over complicated parameter spaces. The approach is applied to acoustic data from a shallow-water site in the Mediterranean Sea where previous geoacoustic studies have been carried out. It is found that reliable localization requires a sufficient combination of prior (environmental) information and data information. For example, sources can be localized reliably for single-frequency data at low SNR (-3 dB) only with small environmental uncertainties, whereas successful localization with large environmental uncertainties requires higher SNR and/or multifrequency data.

  19. Hierarchical Bayesian analysis to incorporate age uncertainty in growth curve analysis and estimates of age from length: Florida manatee (Trichechus manatus) carcasses

    USGS Publications Warehouse

    Schwarz, L.K.; Runge, M.C.

    2009-01-01

    Age estimation of individuals is often an integral part of species management research, and a number of age-estimation techniques are commonly employed. Often, the error in these techniques is not quantified or accounted for in other analyses, particularly in growth curve models used to describe physiological responses to environment and human impacts. Also, noninvasive, quick, and inexpensive methods to estimate age are needed. This research aims to provide two Bayesian methods to (i) incorporate age uncertainty into an age-length Schnute growth model and (ii) produce a method from the growth model to estimate age from length. The methods are then employed for Florida manatee (Trichechus manatus) carcasses. After quantifying the uncertainty in the aging technique (counts of ear bone growth layers), we fit age-length data to the Schnute growth model separately by sex and season. Independent prior information about population age structure and the results of the Schnute model are then combined to estimate age from length. Results describing the age-length relationship agree with our understanding of manatee biology. The new methods allow us to estimate age, with quantified uncertainty, for 98% of collected carcasses: 36% from ear bones, 62% from length.
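
    For reference, the general Schnute (1981) growth curve used here can be written, for non-zero rate and shape parameters, as L(t) = [y1^b + (y2^b - y1^b)(1 - e^(-a(t - t1))) / (1 - e^(-a(t2 - t1)))]^(1/b). A direct transcription follows with purely illustrative parameter values; the Bayesian age-uncertainty machinery of the paper is not reproduced.

```python
import numpy as np

def schnute_length(t, y1, y2, t1, t2, a, b):
    """General Schnute (1981) growth model (case a != 0, b != 0).

    y1, y2 : lengths at the reference ages t1 and t2
    a, b   : rate and shape parameters
    """
    frac = (1.0 - np.exp(-a * (t - t1))) / (1.0 - np.exp(-a * (t2 - t1)))
    return (y1**b + (y2**b - y1**b) * frac) ** (1.0 / b)

# Purely illustrative parameter values, not fitted manatee estimates
ages = np.arange(0, 31)
lengths = schnute_length(ages, y1=150.0, y2=300.0, t1=0.0, t2=30.0, a=0.15, b=1.0)
```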

  20. Modelling carbon in permafrost soils from preindustrial to the future

    NASA Astrophysics Data System (ADS)

    Kleinen, T.; Brovkin, V.

    2015-12-01

    The carbon release from thawing permafrost soils constitutes one of the large uncertainties in the carbon cycle under future climate change. Analysing the problem further, this uncertainty results from an uncertainty about the total amount of C stored in frozen soils, combined with an uncertainty about the areas where soils might thaw under a particular climate change scenario, as well as an uncertainty about the decomposition products, since some of the decomposed C may be released as CH4 as well as CO2. We use the land surface model JSBACH, part of the Max Planck Institute Earth System Model MPI-ESM, to quantify the release of soil carbon from thawing permafrost soils. We have extended the soil carbon model YASSO by introducing carbon storages in frozen soils, with increasing fractions of C becoming available for decomposition as permafrost thaws. In order to quantify the amount of carbon released as CH4, as opposed to CO2, we have also implemented a TOPMODEL-based wetland scheme, as well as anaerobic C decomposition and methane transport. We initialise the soil C pools for the preindustrial climate state from the Northern Circumpolar Soil Carbon Database to ensure initial C pool sizes close to measurements. We then determine changes in soil C storage in transient model experiments following historical and future climate changes under RCP 8.5. Based on these experiments, we quantify the greenhouse gas release from permafrost C decomposition, determining both CH4 and CO2 emissions.

  1. DIETARY EXPOSURE METHODS AND MODELS

    EPA Science Inventory

    The research reported in this task description constitutes the MCEARD base dietary exposure research program and is conducted to complement the NERL aggregate and cumulative exposure program. Its purpose is to reduce the level of uncertainty in exposure assessment by improving N...

  2. Probabilistic prediction of aggregate traffic demand using uncertainty in individual flight predictions.

    DOT National Transportation Integrated Search

    2009-08-01

    Federal Aviation Administration (FAA) air traffic flow management (TFM) : decision-making is based primarily on a comparison of deterministic predictions of demand : and capacity at National Airspace System (NAS) elements such as airports, fixes and ...

  3. Effect of aggregation on SOC transport: linking soil properties to sediment organic matter

    NASA Astrophysics Data System (ADS)

    Kuhn, Nikolaus J.

    2016-04-01

    Soils are an interface between the Earth's spheres and are shaped by the nature of the interaction between them. The relevance of soil properties for the interaction between atmosphere, hydrosphere and biosphere is well studied and accepted at the point or ecotone scale. However, this understanding of the largely vertical connections between spheres is not matched by a similar recognition of how soil properties affect processes acting largely laterally across the land surface, such as erosion, transport and deposition of soil and the associated organic matter. Understanding the redistribution of eroded soil organic matter falls into several disciplines, most notably soil science, agronomy, hydrology and geomorphology, and recently into biogeochemistry. Accordingly, the way soil and sediment are described differs: in soil science, aggregation and structure are essential properties, while most process-based soil erosion models treat soil as a mixture of individual mineral grains, based on concepts derived in fluvial geomorphology or civil engineering. The actual behavior of aggregated sediment and the associated organic matter is not reflected by either approach and is difficult to capture due to the dynamic nature of aggregation, especially in an environment such as running water. Still, a proxy to assess the uncertainties introduced by aggregation into the behavior of soil/sediment organic matter while moving in water across landscapes and into the aquatic system would represent a major step forward. To develop such a proxy, a database collating relevant soil, organic matter and sediment properties could serve as an initial step to identify which soil types and erosion scenarios are prone to generate high uncertainty compared to the use of soil texture in erosion models. Furthermore, it could serve to develop standardized analytical procedures for the appropriate description of soil and organic matter as sediment.

  4. Joint projections of sea level and storm surge using a flood index

    NASA Astrophysics Data System (ADS)

    Little, C. M.; Lin, N.; Horton, R. M.; Kopp, R. E.; Oppenheimer, M.

    2016-02-01

    Capturing the joint influence of sea level rise (SLR) and tropical cyclones (TCs) on future coastal flood risk poses significant challenges. To address these difficulties, Little et al. (2015) use a proxy of tropical cyclone activity and a probabilistic flood index that aggregates flood height and duration over a wide area (the US East and Gulf coasts). This technique illuminates the individual impacts of TCs and SLR and their correlation across different coupled climate models. By 2080-2099, changes in the flood index relative to 1986-2005 are substantial and positively skewed: a 10th-90th percentile range of 35-350x higher for a high-end business-as-usual emissions scenario (see figure). This aggregated flood index: 1) is a means to consistently combine TC-driven storm surges and SLR; 2) provides a more robust description of historical surge-climate relationships than is available at any one location; and 3) allows the incorporation of a larger climate model ensemble - which is critical to uncertainty characterization. It does not provide a local view of the complete spectrum of flood severity (i.e. return curves). However, alternate techniques that provide localized return curves (e.g. Lin et al., 2012) are computationally intensive, limiting the set of large-scale climate models that can be incorporated, and require several linked statistical and dynamical models, each with structural uncertainties that are difficult to quantify. Here, we present the results of Little et al. (2015) along with: 1) alternate formulations of the flood index; 2) strategies to localize the flood index; and 3) a comparison of flood index projections to those provided by model-based return curves. We look to this interdisciplinary audience for feedback on the advantages and disadvantages of each tool for coastal planning and decision-making. Lin, N., K. Emanuel, M. Oppenheimer, and E. Vanmarcke, 2012: Physically based assessment of hurricane surge threat under climate change. Nature Clim. Change, 2(6), 462-467. Little, C. M., R. M. Horton, R. E. Kopp, M. Oppenheimer, G. A. Vecchi, and G. Villarini, 2015: Joint projections of us east coast sea level and storm surge. Nature Clim. Change, advance online publication.

  5. Quantifying uncertainty in read-across assessment – an algorithmic approach - (SOT)

    EPA Science Inventory

    Read-across is a popular data gap filling technique within category and analogue approaches for regulatory purposes. Acceptance of read-across remains an ongoing challenge with several efforts underway for identifying and addressing uncertainties. Here we demonstrate an algorithm...

  6. The evaluation of meta-analysis techniques for quantifying prescribed fire effects on fuel loadings.

    Treesearch

    Karen E. Kopper; Donald McKenzie; David L. Peterson

    2009-01-01

    Models and effect-size metrics for meta-analysis were compared in four separate meta-analyses quantifying surface fuels after prescribed fires in ponderosa pine (Pinus ponderosa Dougl. ex Laws.) forests of the Western United States. An aggregated data set was compiled from eight published reports that contained data from 65 fire treatment units....

  7. Statistical uncertainties of a chiral interaction at next-to-next-to leading order

    DOE PAGES

    Ekström, A.; Carlsson, B. D.; Wendt, K. A.; ...

    2015-02-05

    In this paper, we have quantified the statistical uncertainties of the low-energy coupling constants (LECs) of an optimized nucleon–nucleon interaction from chiral effective field theory at next-to-next-to-leading order. In addition, we have propagated the impact of the uncertainties of the LECs to two-nucleon scattering phase shifts, effective range parameters, and deuteron observables.

  8. Statistically-Estimated Tree Composition for the Northeastern United States at Euro-American Settlement.

    PubMed

    Paciorek, Christopher J; Goring, Simon J; Thurman, Andrew L; Cogbill, Charles V; Williams, John W; Mladenoff, David J; Peters, Jody A; Zhu, Jun; McLachlan, Jason S

    2016-01-01

    We present a gridded 8 km-resolution data product of the estimated composition of tree taxa at the time of Euro-American settlement of the northeastern United States and the statistical methodology used to produce the product from trees recorded by land surveyors. Composition is defined as the proportion of stems larger than approximately 20 cm diameter at breast height for 22 tree taxa, generally at the genus level. The data come from settlement-era public survey records that are transcribed and then aggregated spatially, giving count data. The domain is divided into two regions, eastern (Maine to Ohio) and midwestern (Indiana to Minnesota). Public Land Survey point data in the midwestern region (ca. 0.8-km resolution) are aggregated to a regular 8 km grid, while data in the eastern region, from Town Proprietor Surveys, are aggregated at the township level in irregularly-shaped local administrative units. The product is based on a Bayesian statistical model fit to the count data that estimates composition on the 8 km grid across the entire domain. The statistical model is designed to handle data from both the regular grid and the irregularly-shaped townships and allows us to estimate composition at locations with no data and to smooth over noise caused by limited counts in locations with data. Critically, the model also allows us to quantify uncertainty in our composition estimates, making the product suitable for applications employing data assimilation. We expect this data product to be useful for understanding the state of vegetation in the northeastern United States prior to large-scale Euro-American settlement. In addition to specific regional questions, the data product can also serve as a baseline against which to investigate how forests and ecosystems change after intensive settlement. The data product is being made available at the NIS data portal as version 1.0.

  9. Using analogues to quantify geological uncertainty in stochastic reserve modelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, B.; Brown, I.

    1995-08-01

    The petroleum industry seeks to minimize exploration risk by employing the best possible expertise, methods and tools. Is it possible to quantify the success of this process of risk reduction? Due to inherent uncertainty in predicting geological reality and due to changing environments for hydrocarbon exploration, it is not enough simply to record the proportion of successful wells drilled; in various parts of the world it has been noted that pseudo-random drilling would apparently have been as successful as the actual drilling programme. How, then, should we judge the success of risk reduction? For many years the E&P industry has routinely used Monte Carlo modelling to generate a probability distribution for prospect reserves. One aspect of Monte Carlo modelling which has received insufficient attention, but which is essential for quantifying risk reduction, is the consistency and repeatability with which predictions can be made. Reducing the subjective element inherent in the specification of geological uncertainty allows better quantification of uncertainty in the prediction of reserves, in both exploration and appraisal. Building on work reported at the AAPG annual conventions in 1994 and 1995, the present paper incorporates analogue information with uncertainty modelling. Analogues provide a major step forward in the quantification of risk, but their significance is potentially greater still. The two principal contributors to uncertainty in field and prospect analysis are the hydrocarbon life-cycle and the geometry of the trap. These are usually treated separately. Combining them into a single model is a major contribution to the reduction of risk. This work is based in part on a joint project with Oryx Energy UK Ltd., and thanks are due in particular to Richard Benmore and Mike Cooper.
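
    Monte Carlo reserve modelling of the kind discussed here typically samples the terms of the volumetric equation (gross rock volume, net-to-gross, porosity, hydrocarbon saturation, formation volume factor, recovery factor) from distributions that analogues help constrain. A generic sketch with made-up distribution parameters follows; it is not the authors' model.

```python
import numpy as np

def simulate_reserves(n=100_000, seed=0):
    """Monte Carlo volumetric reserves (all distribution parameters are illustrative)."""
    rng = np.random.default_rng(seed)
    grv = rng.lognormal(mean=np.log(50e6), sigma=0.4, size=n)   # gross rock volume, m3
    ntg = rng.uniform(0.5, 0.8, n)                              # net-to-gross
    phi = rng.normal(0.22, 0.03, n).clip(0.05, 0.35)            # porosity
    sw  = rng.uniform(0.2, 0.4, n)                              # water saturation
    bo  = rng.uniform(1.1, 1.3, n)                              # formation volume factor
    rf  = rng.uniform(0.25, 0.45, n)                            # recovery factor
    stooip = grv * ntg * phi * (1.0 - sw) / bo                  # in-place volume
    return stooip * rf                                          # recoverable volume

reserves = simulate_reserves()
# Industry convention: P90 is the volume exceeded with 90% probability (10th percentile)
p90, p50, p10 = np.percentile(reserves, [10, 50, 90])
print(f"P90={p90:.3e}  P50={p50:.3e}  P10={p10:.3e} m3")
```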

  10. 3D Micro-tomography on Aggregates from the 2014-2015 Eruption of Hunga Tonga-Hunga Ha'apai Volcano

    NASA Astrophysics Data System (ADS)

    Colombier, M.; Scheu, B.; Cronin, S. J.; Tost, M.; Dobson, K. J.; Dingwell, D. B.

    2016-12-01

    In December 2014-January 2015, a surtseyan eruption at Hunga Tonga-Hunga Ha'apai volcano (Tonga) formed a new island. Three main eruptive phases were distinguished by observation and deposits: (i) mound and cone construction, involving collapse of 300-600 m-high wet tephra jets, grain flows, slope remobilisation and energetic surges, with little or no convective plume; (ii) an upper cone-building phase with lower jets (mainly <300 m) but greater ash production (weak, steam-rich plumes to 6 km) and weak surges; and (iii) a final phase with weak surge, fall and ballistic deposits with more vesicular pyroclasts producing proximal capping deposits. Most sampled deposits contain ash, lapilli and bombs, and lapilli-sized aggregates are ubiquitous. We used high-resolution 3D X-ray microcomputed tomography (XCT) to quantify the grain size distribution (GSD) and porosity by sampling multiple stratigraphic units within the main eruptive sequences. We visualized and quantified the internal structure of the aggregates to understand the evolution of this surtseyan eruption. We present here an overview of the textural information: porosity, vesicle size distribution and morphology, as well as the variability of the aggregation features. Aggregates from the fall deposits of the early wet phase are mostly loosely packed, poorly structured ash clusters. Aggregates from the early surge sequence and the main cone-building phase dominantly exhibit a central particle coated by ash cluster material. Vesicles in the particles from the early fall deposits tend to be smaller and more isolated than in the particles from the surge sequence and the main cone-building phase. The GSD of aggregates obtained by XCT is highly valuable for correcting the total GSD of volcaniclastic deposits. The strong variations in the aggregation features across the eruption suggest a range of different formation and deposition mechanisms related to varying degrees of magma-water interaction, which changed the morphology and textural properties of the individual particles.

  11. Impact of input data uncertainty on environmental exposure assessment models: A case study for electromagnetic field modelling from mobile phone base stations.

    PubMed

    Beekhuizen, Johan; Heuvelink, Gerard B M; Huss, Anke; Bürgi, Alfred; Kromhout, Hans; Vermeulen, Roel

    2014-11-01

    With the increased availability of spatial data and computing power, spatial prediction approaches have become a standard tool for exposure assessment in environmental epidemiology. However, such models are largely dependent on accurate input data. Uncertainties in the input data can therefore have a large effect on model predictions, but are rarely quantified. With Monte Carlo simulation we assessed the effect of input uncertainty on the prediction of radio-frequency electromagnetic fields (RF-EMF) from mobile phone base stations at 252 receptor sites in Amsterdam, The Netherlands. The impact on ranking and classification was determined by computing the Spearman correlations and weighted Cohen's Kappas (based on tertiles of the RF-EMF exposure distribution) between modelled values and RF-EMF measurements performed at the receptor sites. The uncertainty in modelled RF-EMF levels was large with a median coefficient of variation of 1.5. Uncertainty in receptor site height, building damping and building height contributed most to model output uncertainty. For exposure ranking and classification, the heights of buildings and receptor sites were the most important sources of uncertainty, followed by building damping, antenna- and site location. Uncertainty in antenna power, tilt, height and direction had a smaller impact on model performance. We quantified the effect of input data uncertainty on the prediction accuracy of an RF-EMF environmental exposure model, thereby identifying the most important sources of uncertainty and estimating the total uncertainty stemming from potential errors in the input data. This approach can be used to optimize the model and better interpret model output. Copyright © 2014 Elsevier Inc. All rights reserved.
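
    As a schematic illustration of the kind of Monte Carlo workflow described above (perturbing uncertain inputs, rerunning the model, and scoring each realisation against measurements with a Spearman correlation and a tertile-based weighted Cohen's kappa), the sketch below uses a toy propagation model and invented site attributes and uncertainty magnitudes; none of these reproduce the study's actual RF-EMF model or data.

        # Minimal sketch of Monte Carlo propagation of input-data uncertainty for an
        # exposure model; the model and all numbers are illustrative assumptions.
        import numpy as np
        from scipy.stats import spearmanr
        from sklearn.metrics import cohen_kappa_score

        rng = np.random.default_rng(42)
        n_sites, n_draws = 252, 1000

        # Hypothetical site attributes and "measurements" at the receptor sites
        distance = rng.uniform(50.0, 500.0, n_sites)       # m to the nearest antenna
        building_height = rng.uniform(5.0, 40.0, n_sites)  # m
        measured = 10.0 / distance * (1.0 + 0.02 * building_height)
        measured *= rng.lognormal(0.0, 0.3, n_sites)       # measurement noise

        def propagation_model(dist, height):
            """Toy far-field model standing in for the real RF-EMF propagation model."""
            return 10.0 / dist * (1.0 + 0.02 * height)

        def tertiles(x):
            return np.digitize(x, np.quantile(x, [1 / 3, 2 / 3]))

        rho_draws, kappa_draws = [], []
        for _ in range(n_draws):
            # Perturb the inputs according to assumed input-data uncertainty
            dist_err = distance * rng.lognormal(0.0, 0.1, n_sites)
            height_err = np.clip(building_height + rng.normal(0.0, 3.0, n_sites), 1.0, None)
            predicted = propagation_model(dist_err, height_err)
            rho, _ = spearmanr(predicted, measured)
            rho_draws.append(rho)
            kappa_draws.append(cohen_kappa_score(tertiles(predicted), tertiles(measured),
                                                 weights="quadratic"))

        print("Spearman rho: median =", round(float(np.median(rho_draws)), 2),
              " 5-95% =", np.round(np.percentile(rho_draws, [5, 95]), 2))
        print("weighted kappa: median =", round(float(np.median(kappa_draws)), 2))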

  12. An approach to forecasting health expenditures, with application to the U.S. Medicare system.

    PubMed

    Lee, Ronald; Miller, Timothy

    2002-10-01

    The objective of this study is to quantify uncertainty in forecasts of health expenditures. Stochastic time series models are estimated for historical variations in fertility, mortality, and health spending per capita in the United States, and used to generate stochastic simulations of the growth of Medicare expenditures. Individual health spending is modeled to depend on the number of years until death. A simple accounting model is developed for forecasting health expenditures, using the U.S. Medicare system as an example. Medicare expenditures are projected to rise from 2.2 percent of GDP (gross domestic product) to about 8 percent of GDP by 2075. This increase is due in equal measure to increasing health spending per beneficiary and to population aging. The traditional projection method constructs high, medium, and low scenarios to assess uncertainty, an approach that has many problems. Using stochastic forecasting, we find a 95 percent probability that Medicare spending in 2075 will fall between 4 percent and 18 percent of GDP, indicating a wide band of uncertainty. Although there is substantial uncertainty about future mortality decline, it contributed little to uncertainty about future Medicare spending, since lower mortality both raises the number of elderly, tending to raise spending, and is associated with improved health of the elderly, tending to reduce spending. Uncertainty about fertility, by contrast, leads to great uncertainty about the future size of the labor force, and therefore adds importantly to uncertainty about the health share of GDP. In the shorter term, the major source of uncertainty is health spending per capita. History is a valuable guide for quantifying our uncertainty about future health expenditures. The probabilistic model we present has several advantages over the high-low scenario approach to forecasting. It indicates great uncertainty about future Medicare expenditures relative to GDP.
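
    The contrast between stochastic simulation and the high/medium/low scenario approach can be illustrated with a very small sketch: a random-walk-with-drift model for the growth of a spending share of GDP, simulated many times to obtain a probability interval. The drift, volatility, horizon and starting share below are invented for illustration and are not the article's estimates.

        # Stochastic projection of a spending share of GDP versus fixed scenarios;
        # all parameter values are assumed placeholders.
        import numpy as np

        rng = np.random.default_rng(0)
        n_sims, horizon = 10_000, 73           # e.g. a projection out to 2075
        start_share = 0.022                    # spending as a share of GDP
        drift, sigma = 0.018, 0.015            # mean and s.d. of annual excess growth

        # Stochastic simulation: spending grows faster than GDP by a random amount each year
        excess_growth = rng.normal(drift, sigma, size=(n_sims, horizon))
        share_paths = start_share * np.exp(np.cumsum(excess_growth, axis=1))
        final = share_paths[:, -1]
        lo, hi = np.percentile(final, [2.5, 97.5])
        print(f"median final-year share: {np.median(final):.3f}")
        print(f"95% probability interval: [{lo:.3f}, {hi:.3f}]")

        # Scenario approach for comparison: fixed high / medium / low growth rates
        for label, g in [("low", drift - sigma), ("medium", drift), ("high", drift + sigma)]:
            print(f"{label:6s} scenario: {start_share * np.exp(g * horizon):.3f}")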

  13. Techniques for analyses of trends in GRUAN data

    NASA Astrophysics Data System (ADS)

    Bodeker, G. E.; Kremser, S.

    2015-04-01

    The Global Climate Observing System (GCOS) Reference Upper Air Network (GRUAN) provides reference quality RS92 radiosonde measurements of temperature, pressure and humidity. A key attribute of reference quality measurements, and hence GRUAN data, is that each datum has a well characterized and traceable estimate of the measurement uncertainty. The long-term homogeneity of the measurement records, and their well characterized uncertainties, make these data suitable for reliably detecting changes in global and regional climate on decadal time scales. Considerable effort is invested in GRUAN operations to (i) describe and analyse all sources of measurement uncertainty to the extent possible, (ii) quantify and synthesize the contribution of each source of uncertainty to the total measurement uncertainty, and (iii) verify that the evaluated net uncertainty is within the required target uncertainty. However, if the climate science community is not sufficiently well informed on how to capitalize on this added value, the significant investment in estimating meaningful measurement uncertainties is largely wasted. This paper presents and discusses the techniques that will need to be employed to reliably quantify long-term trends in GRUAN data records. A pedagogical approach is taken whereby numerical recipes for key parts of the trend analysis process are explored. The paper discusses the construction of linear least squares regression models for trend analysis, boot-strapping approaches to determine uncertainties in trends, dealing with the combined effects of autocorrelation in the data and measurement uncertainties in calculating the uncertainty on trends, best practice for determining seasonality in trends, how to deal with co-linear basis functions, and interpreting derived trends. Synthetic data sets are used to demonstrate these concepts which are then applied to a first analysis of temperature trends in RS92 radiosonde upper air soundings at the GRUAN site at Lindenberg, Germany (52.21° N, 14.12° E).
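
    A minimal sketch of one ingredient of such a trend analysis is given below: a least-squares fit with a linear trend and an annual harmonic, with the trend uncertainty estimated by a moving-block bootstrap of the residuals (to respect autocorrelation) combined with resampling of each datum within an assumed measurement uncertainty. The synthetic series, block length and uncertainty values are placeholders, not GRUAN data or the paper's exact recipe.

        # Trend estimate with a block-bootstrap uncertainty that also accounts for
        # per-datum measurement uncertainty; synthetic data, assumed settings.
        import numpy as np

        rng = np.random.default_rng(1)
        n_months = 20 * 12
        t = np.arange(n_months) / 12.0                      # time in years
        true_trend = 0.03                                   # K per year
        seasonal = 1.5 * np.sin(2 * np.pi * t)
        noise = np.convolve(rng.normal(0.0, 0.4, n_months), [0.6, 0.4], mode="same")
        sigma_meas = np.full(n_months, 0.2)                 # reported per-datum uncertainty (K)
        y = true_trend * t + seasonal + noise + rng.normal(0.0, sigma_meas)

        # Design matrix: intercept, linear trend, annual harmonic
        X = np.column_stack([np.ones_like(t), t,
                             np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        fitted, resid = X @ beta_hat, y - X @ beta_hat

        block, n_boot, boot_trends = 24, 2000, []
        for _ in range(n_boot):
            # Moving-block bootstrap of residuals preserves short-range autocorrelation
            starts = rng.integers(0, n_months - block, size=n_months // block + 1)
            resampled = np.concatenate([resid[s:s + block] for s in starts])[:n_months]
            # Add a measurement-uncertainty perturbation on top of the resampled residuals
            y_boot = fitted + resampled + rng.normal(0.0, sigma_meas)
            beta_boot, *_ = np.linalg.lstsq(X, y_boot, rcond=None)
            boot_trends.append(beta_boot[1])

        lo, hi = np.percentile(boot_trends, [2.5, 97.5])
        print(f"trend = {beta_hat[1]:.3f} K/yr, 95% CI [{lo:.3f}, {hi:.3f}] K/yr")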

  14. Techniques for analyses of trends in GRUAN data

    NASA Astrophysics Data System (ADS)

    Bodeker, G. E.; Kremser, S.

    2014-12-01

    The Global Climate Observing System (GCOS) Reference Upper Air Network (GRUAN) provides reference quality RS92 radiosonde measurements of temperature, pressure and humidity. A key attribute of reference quality measurements, and hence GRUAN data, is that each datum has a well characterised and traceable estimate of the measurement uncertainty. The long-term homogeneity of the measurement records, and their well characterised uncertainties, make these data suitable for reliably detecting changes in global and regional climate on decadal time scales. Considerable effort is invested in GRUAN operations to (i) describe and analyse all sources of measurement uncertainty to the extent possible, (ii) quantify and synthesize the contribution of each source of uncertainty to the total measurement uncertainty, and (iii) verify that the evaluated net uncertainty is within the required target uncertainty. However, if the climate science community is not sufficiently well informed on how to capitalize on this added value, the significant investment in estimating meaningful measurement uncertainties is largely wasted. This paper presents and discusses the techniques that will need to be employed to reliably quantify long-term trends in GRUAN data records. A pedagogical approach is taken whereby numerical recipes for key parts of the trend analysis process are explored. The paper discusses the construction of linear least squares regression models for trend analysis, boot-strapping approaches to determine uncertainties in trends, dealing with the combined effects of autocorrelation in the data and measurement uncertainties in calculating the uncertainty on trends, best practice for determining seasonality in trends, how to deal with co-linear basis functions, and interpreting derived trends. Synthetic data sets are used to demonstrate these concepts which are then applied to a first analysis of temperature trends in RS92 radiosonde upper air soundings at the GRUAN site at Lindenberg, Germany (52.21° N, 14.12° E).

  15. DIETARY INTAKE OF YOUNG CHILDREN

    EPA Science Inventory

    Dietary exposure research supports the requirements of the Food Quality Protection Act (FQPA) of 1996 by improving methods of aggregate and cumulative exposure assessments for children. The goal of this research is to reduce the level of uncertainty in assessing the dietary path...

  16. Uncertainty of chromatic dispersion estimation from transmitted waveforms in direct detection systems

    NASA Astrophysics Data System (ADS)

    Lach, Zbigniew T.

    2017-08-01

    We show that chromatic dispersion in the fiber of an intensity-modulation communication line can be estimated non-disruptively under working conditions. The uncertainty of the chromatic dispersion estimates is analyzed and quantified with the use of confidence intervals.

  17. Detectability and Interpretational Uncertainties: Considerations in Gauging the Impacts of Land Disturbance on Streamflow

    EPA Science Inventory

    Hydrologic impacts of land disturbance and management can be confounded by rainfall variability. As a consequence, attempts to gauge and quantify these effects through streamflow monitoring are typically subject to uncertainties. This paper addresses the quantification and deline...

  18. Role of turbulence fluctuations on uncertainties of acoustic Doppler current profiler discharge measurements

    USGS Publications Warehouse

    Tarrab, Leticia; Garcia, Carlos M.; Cantero, Mariano I.; Oberg, Kevin

    2012-01-01

    This work presents a systematic analysis quantifying the role of the presence of turbulence fluctuations on uncertainties (random errors) of acoustic Doppler current profiler (ADCP) discharge measurements from moving platforms. Data sets of three-dimensional flow velocities with high temporal and spatial resolution were generated from direct numerical simulation (DNS) of turbulent open channel flow. Dimensionless functions relating parameters quantifying the uncertainty in discharge measurements due to flow turbulence (relative variance and relative maximum random error) to sampling configuration were developed from the DNS simulations and then validated with field-scale discharge measurements. The validated functions were used to evaluate the role of the presence of flow turbulence fluctuations on uncertainties in ADCP discharge measurements. The results of this work indicate that random errors due to the flow turbulence are significant when: (a) a low number of transects is used for a discharge measurement, and (b) measurements are made in shallow rivers using high boat velocity (short time for the boat to cross a flow turbulence structure).

  19. Fourier analysis of mitochondrial distribution in oocytes

    NASA Astrophysics Data System (ADS)

    Hollmann, Joseph L.; Brooks, Dana H.; Newmark, Judith A.; Warner, Carol M.; DiMarzio, Charles A.

    2011-03-01

    This paper describes a novel approach to quantifying mitochondrial patterns, which are typically described using the qualitative terms "diffuse" and "aggregated" and are potentially key indicators for an oocyte's health and survival potential post-implantation. An oocyte was isolated in a confocal image and a coarse grid was superimposed upon it. The spatial spectrum was calculated and an aggregation factor was generated. A classifier for healthy cells was developed and verified. The aggregation factor showed a clear distinction between the healthy and unhealthy oocytes. The ultimate goal is to screen oocytes for viability before implantation, thus improving the outcome of in vitro fertilization (IVF) treatments.
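
    A toy version of a Fourier-based aggregation metric is sketched below: the fraction of spectral power at low spatial frequencies, which is larger for clumped intensity patterns than for diffuse ones. This particular definition, the synthetic images and the radius parameter are assumptions for illustration and do not reproduce the paper's metric or classifier.

        # Sketch of a spatial-spectrum "aggregation factor"; definition is assumed.
        import numpy as np

        def aggregation_factor(image, low_freq_radius=4):
            """Fraction of (non-DC) spectral power within a small low-frequency disc."""
            spec = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
            cy, cx = image.shape[0] // 2, image.shape[1] // 2
            spec[cy, cx] = 0.0                                  # drop the DC term
            yy, xx = np.indices(image.shape)
            low = (yy - cy) ** 2 + (xx - cx) ** 2 <= low_freq_radius ** 2
            return spec[low].sum() / spec.sum()

        rng = np.random.default_rng(3)
        n = 128
        yy, xx = np.indices((n, n))

        diffuse = rng.poisson(5.0, (n, n)).astype(float)        # spread-out signal
        aggregated = np.zeros((n, n))
        for _ in range(12):                                     # a few bright clusters
            cy, cx = rng.integers(16, n - 16, size=2)
            aggregated += 50.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 30.0)

        print("aggregation factor, diffuse pattern   :", round(aggregation_factor(diffuse), 3))
        print("aggregation factor, clustered pattern :", round(aggregation_factor(aggregated), 3))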

  20. Impacts of uncertainties in weather and streamflow observations in calibration and evaluation of an elevation distributed HBV-model

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Steinsland, I.; Petersen-Øverleir, A.; Johansen, S.

    2012-04-01

    The aim of this study is to assess the uncertainties in streamflow simulations when uncertainties in both observed inputs (precipitation and temperature) and streamflow observations used in the calibration of the hydrological model are explicitly accounted for. To achieve this goal, we applied the elevation-distributed HBV model, operating on daily time steps, to a small high-elevation catchment in southern Norway where the seasonal snow cover is important. The uncertainties in precipitation inputs were quantified using conditional simulation. This procedure accounts for the uncertainty related to the density of the precipitation network, but neglects uncertainties related to measurement bias/errors and possible elevation gradients in precipitation. The uncertainties in temperature inputs were quantified using a Bayesian temperature interpolation procedure where the temperature lapse rate is re-estimated every day. The uncertainty in the lapse rate was accounted for whereas the sampling uncertainty related to network density was neglected. For every day, a random sample of precipitation and temperature inputs was drawn to be applied as inputs to the hydrologic model. The uncertainties in observed streamflow were assessed based on the uncertainties in the rating curve model. A Bayesian procedure was applied to estimate the probability for rating curve models with 1 to 3 segments and the uncertainties in their parameters. This method neglects uncertainties related to errors in observed water levels. Note that one rating curve was drawn to make one realisation of a whole time series of streamflow, thus the rating curve errors lead to a systematic bias in the streamflow observations. All these uncertainty sources were linked together in both calibration and evaluation of the hydrologic model using a DREAM-based MCMC routine. Effects of having less information (e.g. missing one streamflow measurement for defining the rating curve or missing one precipitation station) were also investigated.

  1. Emotion and Decision-Making Under Uncertainty: Physiological arousal predicts increased gambling during ambiguity but not risk

    PubMed Central

    FeldmanHall, Oriel; Glimcher, Paul; Baker, Augustus L; Phelps, Elizabeth A

    2016-01-01

    Uncertainty, which is ubiquitous in decision-making, can be fractionated into known probabilities (risk) and unknown probabilities (ambiguity). Although research illustrates that individuals more often avoid decisions associated with ambiguity compared to risk, it remains unclear why ambiguity is perceived as more aversive. Here we examine the role of arousal in shaping the representation of value and subsequent choice under risky and ambiguous decisions. To investigate the relationship between arousal and decisions of uncertainty, we measure skin conductance response—a quantifiable measure reflecting sympathetic nervous system arousal—during choices to gamble under risk and ambiguity. To quantify the discrete influences of risk and ambiguity sensitivity and the subjective value of each option under consideration, we model fluctuating uncertainty, as well as the amount of money that can be gained by taking the gamble. Results reveal that while arousal tracks the subjective value of a lottery regardless of uncertainty type, arousal differentially contributes to the computation of value—i.e. choice—depending on whether the uncertainty is risky or ambiguous: enhanced arousal adaptively decreases risk-taking only when the lottery is highly risky but increases risk-taking when the probability of winning is ambiguous (even after controlling for subjective value). Together, this suggests that the role of arousal during decisions of uncertainty is modulatory and highly dependent on the context in which the decision is framed. PMID:27690508

  2. Quantifying Uncertainty of Wind Power Production Through an Analog Ensemble

    NASA Astrophysics Data System (ADS)

    Shahriari, M.; Cervone, G.

    2016-12-01

    The Analog Ensemble (AnEn) method is used to generate probabilistic weather forecasts that quantify the uncertainty in power estimates at hypothetical wind farm locations. The data are from the NREL Eastern Wind Dataset that includes more than 1,300 modeled wind farms. The AnEn model uses a two-dimensional grid to estimate the probability distribution of wind speed (the predictand) given the values of predictor variables such as temperature, pressure, geopotential height, U-component and V-component of wind. The meteorological data are taken from the NCEP GFS, which is available at 0.25 degree grid resolution. The methodology first divides the data into two classes: training period and verification period. The AnEn selects a point in the verification period and searches for the best matching estimates (analogs) in the training period. The predictand values at those analogs are the ensemble prediction for the point in the verification period. The model provides a grid of wind speed values and the uncertainty (probability index) associated with each estimate. Each wind farm is associated with a probability index that quantifies how difficult wind power is to estimate. Further, the uncertainty in estimation is related to other factors such as topography, land cover and wind resources. This is achieved by using a GIS system to compute the correlation between the probability index and geographical characteristics. This study has significant applications for investors in the renewable energy sector, especially wind farm developers. A lower level of uncertainty facilitates the process of submitting bids into day-ahead and real-time electricity markets. Thus, building wind farms in regions with lower levels of uncertainty will reduce the real-time operational risks and create a hedge against volatile real-time prices. Further, the links between wind estimate uncertainty and factors such as topography and wind resources provide wind farm developers with valuable information regarding wind farm siting.
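
    The core of the analog ensemble idea can be sketched compactly: for each point in the verification period, find the k training times whose predictor vectors are most similar and treat their observed predictands as the ensemble. The synthetic predictors, the Euclidean similarity metric, equal predictor weights and the value of k below are assumptions for illustration; the study's actual AnEn configuration and the NREL/GFS data are not reproduced.

        # Minimal analog ensemble (AnEn) sketch with synthetic predictors.
        import numpy as np

        rng = np.random.default_rng(7)
        n_train, n_verif, k = 2000, 100, 20

        # Predictors: temperature, pressure, U-wind, V-wind (standardised synthetic data)
        X_train = rng.normal(size=(n_train, 4))
        speed_train = np.hypot(X_train[:, 2], X_train[:, 3]) * 5 + rng.normal(0, 0.5, n_train)
        X_verif = rng.normal(size=(n_verif, 4))

        def analog_ensemble(x, X_hist, y_hist, k):
            """Return the k historical predictands whose predictor states are closest to x."""
            dist = np.linalg.norm(X_hist - x, axis=1)        # Euclidean analog metric
            idx = np.argsort(dist)[:k]
            return y_hist[idx]

        for i in range(3):                                   # show a few verification points
            ens = analog_ensemble(X_verif[i], X_train, speed_train, k)
            p10, p50, p90 = np.percentile(ens, [10, 50, 90])
            spread = p90 - p10                               # one simple uncertainty index
            print(f"verification point {i}: median={p50:.1f} m/s, 10-90% spread={spread:.1f} m/s")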

  3. Quantifying structural uncertainty on fault networks using a marked point process within a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Aydin, Orhun; Caers, Jef Karel

    2017-08-01

    Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition pertaining to fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods that address specific sources of fault network uncertainty and complexities of fault modeling exist, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov Chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough and Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone similar tectonics to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed methodology generates realistic fault network models conditioned to data and a conceptual model of the underlying tectonics.

  4. The Effects of Air-Cooled Blast Furnace Slag (ACBFS) Aggregate on the Chemistry of Pore Solution and the Interfacial Transition Zone

    NASA Astrophysics Data System (ADS)

    Panchmatia, Parth

    Numerous laboratory and field studies have demonstrated that concrete incorporating air cooled blast furnace slag (ACBFS) aggregate showed a higher degree of infilling of voids with ettringite as opposed to concrete prepared using naturally mined carbonate aggregates when exposed to similar environmental conditions. This observation prompted some to link the deterioration observed in the ACBFS aggregate concrete structures to the compromised freeze-thaw resistance due to infilling of air voids. Concerns about the release of sulfur from ACBFS aggregate into the pore solution of concrete had been presented as the reason for the observed ettringite deposits in the air voids. However, literature quantifying the influence of ACBFS aggregate on the chemistry of the pore solution of concrete is absent. Therefore, the main purpose of this research was to quantify the effects of ACBFS aggregate on the chemistry of the pore solution of mortars incorporating them. Coarse and crushed ACBFS aggregates were submerged in artificial pore solutions (APSs) representing pore solutions of 3-day, 7-day, and 28-day hydrated plain, binary, and ternary paste systems. The change in the chemistry of these artificial pore solutions was recorded to quantify the chemical contribution of ACBFS aggregate to the pore solution of concrete. It was observed that the sulfate concentration of all APSs increased once they were in contact with either coarse or crushed ACBFS aggregate. After 28 days of contact, the increase in sulfate concentration of the APSs ranged from 4.85 - 12.23 mmol/L and 14.21 - 16.87 mmol/L for contact with coarse and crushed ACBFS aggregate, respectively. More than 40% of the total sulfate that was released by the ACBFS aggregate occurred during the first 72 hours (3 days) of its contact with the APSs. There was little or no difference in the amount of sulfate released from ACBFS aggregate in the different types of APSs. In other words, the type of binder solution from which pore solution was extracted had no effect on the amount of sulfate that was released when it was in contact with ACBFS aggregate. The relatively quick release of sulfur from ACBFS aggregate into the APSs prompted investigation of the chemical composition of the pore solution of mortar (at early stages of hydration) incorporating ACBFS aggregate. The chemical composition of the pore solutions obtained from mortars prepared using ACBFS aggregate and plain and binary paste matrices was compared to those of mortars prepared using Ottawa sand and plain and binary paste matrices. After 7 days of hydration, the sulfur (S) concentration of the pore solution extracted from mortars prepared using ACBFS aggregate was 3.4 - 5.6 times greater than that obtained from corresponding mortars (i.e. mortars with the same paste matrix) prepared using Ottawa sand. Binary mortars containing fly ash (FA) showed the lowest S content after 7 days of hydration amongst all mortars prepared using ACBFS aggregate. On the other hand, binary mortars prepared using slag cement (SC) and ACBFS aggregate had the highest S concentration after 7 days of hydration. These effects on the S concentration in the pore solutions can be explained by the difference in the chemical makeup of the binders, and not by a different rate of release of S from ACBFS into the pore solution.
In addition, TGA analysis of 7-day hydrated mortars revealed that the ettringite, monosulfate, and calcium hydroxide content was lower in mortars prepared using ACBFS aggregate as opposed to those prepared using Ottawa sand. This could be because the high sulfate concentration in the pore solution lowered the degree of hydration in mortars with ACBFS aggregate. The properties of the interfacial transition zone (ITZ), i.e. the zone in the vicinity of the aggregate surface, depend on the properties of the aggregate, such as its porosity and texture. Therefore, it is expected that the properties of the ITZ around ACBFS particles, which are porous and proven to contribute sulfate, differ from those of the ITZ around naturally mined siliceous aggregate. Image analysis conducted on backscattered images obtained using a scanning electron microscope revealed that the ITZ of naturally mined siliceous aggregate was more porous compared to the ITZ of ACBFS aggregate. In addition, calcium hydroxide deposits were more frequent and larger in size in the ITZ around siliceous sand than in the case of the ITZ around the ACBFS aggregate.

  5. Synchronization as Aggregation: Cluster Kinetics of Pulse-Coupled Oscillators.

    PubMed

    O'Keeffe, Kevin P; Krapivsky, P L; Strogatz, Steven H

    2015-08-07

    We consider models of identical pulse-coupled oscillators with global interactions. Previous work showed that under certain conditions such systems always end up in sync, but did not quantify how small clusters of synchronized oscillators progressively coalesce into larger ones. Using tools from the study of aggregation phenomena, we obtain exact results for the time-dependent distribution of cluster sizes as the system evolves from disorder to synchrony.

  6. Attachment and biofilm formation by various serotypes of Salmonella as influenced by cellulose production and thin aggregative fimbriae biosynthesis.

    PubMed

    Jain, Sudeep; Chen, Jinru

    2007-11-01

    This study was undertaken to quantify thin aggregative fimbriae and cellulose produced by Salmonella and to evaluate their roles in attachment and biofilm formation on polystyrene and glass surfaces. Thin aggregative fimbriae and cellulose produced by four wild-type and two pairs of Salmonella, representing four different colony morphotypes (rdar: red, dry, and rough; pdar: pink, dry, and rough; bdar: brown, dry, and rough; and saw: smooth and white), were quantified. The ability of the Salmonella cells to attach and form biofilms on the selected surfaces was evaluated in Luria-Bertani (LB) broth with or without salt (0.5%) or glucose (2%) at 28 degrees C during a 7-day period. The cells expressing the rdar or pdar colony morphotypes produced significantly greater amounts of thin aggregative fimbriae and cellulose on LB no salt agar, respectively. The cells expressing the rdar colony morphotype attached in higher numbers and formed more biofilm than did the cells expressing the pdar colony morphotype. The members of the pairs expressing the bdar colony morphotype attached more efficiently and formed more biofilm on the tested surfaces than did their counterparts expressing the saw colony morphotype. These results indicated that thin aggregative fimbriae impart attachment ability to Salmonella and, upon coexpression with cellulose, enhance biofilm formation on certain abiotic surfaces. The knowledge acquired in the study may help develop better cleaning strategies for food processing equipment.

  7. Progress of Aircraft System Noise Assessment with Uncertainty Quantification for the Environmentally Responsible Aviation Project

    NASA Technical Reports Server (NTRS)

    Thomas, Russell H.; Burley, Casey L.; Guo, Yueping

    2016-01-01

    Aircraft system noise predictions have been performed for NASA modeled hybrid wing body aircraft advanced concepts with 2025 entry-into-service technology assumptions. The system noise predictions evolved over the period from 2009 to 2016 as a result of improved modeling of the aircraft concepts, design changes, technology development, flight path modeling, and the use of extensive integrated system-level experimental data. In addition, the system noise prediction models and process have been improved in many ways. An additional process is developed here for quantifying the uncertainty with a 95% confidence level. This uncertainty applies only to the aircraft system noise prediction process. For three points in time during this period, the vehicle designs, technologies, and noise prediction process are documented. For each of the three predictions, and with the information available at each of those points in time, the uncertainty is quantified using the direct Monte Carlo method with 10,000 simulations. For the prediction of cumulative noise of an advanced aircraft at the conceptual level of design, the total uncertainty band has been reduced from 12.2 to 9.6 EPNL dB. A value of 3.6 EPNL dB is proposed as the lower limit of uncertainty possible for the cumulative system noise prediction of an advanced aircraft concept.

  8. Where do uncertainties reside within environmental risk assessments? Testing UnISERA, a guide for uncertainty assessment.

    PubMed

    Skinner, Daniel J C; Rocks, Sophie A; Pollard, Simon J T

    2017-06-01

    A means for identifying and prioritising the treatment of uncertainty (UnISERA) in environmental risk assessments (ERAs) is tested, using three risk domains where ERA is an established requirement and one in which ERA practice is emerging. UnISERA's development draws on 19 expert elicitations across genetically modified higher plants, particulate matter, and agricultural pesticide release and is stress tested here for engineered nanomaterials (ENM). We are concerned with the severity of uncertainty; its nature; and its location across four accepted stages of ERAs. Using an established uncertainty scale, the risk characterisation stage of ERA harbours the highest severity level of uncertainty, associated with estimating, aggregating and evaluating expressions of risk. Combined epistemic and aleatory uncertainty is the dominant nature of uncertainty. The dominant location of uncertainty is associated with data in problem formulation, exposure assessment and effects assessment. Testing UnISERA produced agreements of 55%, 90%, and 80% for the severity level, nature and location dimensions of uncertainty between the combined case studies and the ENM stress test. UnISERA enables environmental risk analysts to prioritise risk assessment phases, groups of tasks, or individual ERA tasks and it can direct them towards established methods for uncertainty treatment. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. SCIENTIFIC UNCERTAINTIES IN ATMOSPHERIC MERCURY MODELS II: SENSITIVITY ANALYSIS IN THE CONUS DOMAIN

    EPA Science Inventory

    In this study, we present the response of model results to different scientific treatments in an effort to quantify the uncertainties caused by the incomplete understanding of mercury science and by model assumptions in atmospheric mercury models. Two sets of sensitivity simulati...

  10. QUANTIFYING UNCERTAINTY DUE TO RANDOM ERRORS FOR MOMENT ANALYSES OF BREAKTHROUGH CURVES

    EPA Science Inventory

    The uncertainty in moments calculated from breakthrough curves (BTCs) is investigated as a function of random measurement errors in the data used to define the BTCs. The method presented assumes moments are calculated by numerical integration using the trapezoidal rule, and is t...

  11. Spatial variability versus parameter uncertainty in freshwater fate and exposure factors of chemicals.

    PubMed

    Nijhof, Carl O P; Huijbregts, Mark A J; Golsteijn, Laura; van Zelm, Rosalie

    2016-04-01

    We compared the influence of spatial variability in environmental characteristics and the uncertainty in measured substance properties of seven chemicals on freshwater fate factors (FFs), representing the residence time in the freshwater environment, and on exposure factors (XFs), representing the dissolved fraction of a chemical. The influence of spatial variability was quantified using the SimpleBox model in which Europe was divided into 100 × 100 km regions, nested in a regional (300 × 300 km) and supra-regional (500 × 500 km) scale. Uncertainty in substance properties was quantified by means of probabilistic modelling. Spatial variability and parameter uncertainty were expressed by the ratio k of the 95%ile and 5%ile of the FF and XF. Our analysis shows that spatial variability in the FFs of persistent chemicals that partition predominantly into one environmental compartment was up to 2 orders of magnitude larger than uncertainty. For the other (less persistent) chemicals, uncertainty in the FF was up to 1 order of magnitude larger than spatial variability. Variability and uncertainty in freshwater XFs of the seven chemicals were negligible (k < 1.5). We found that, depending on the chemical and emission scenario, accounting for region-specific environmental characteristics in multimedia fate modelling, as well as accounting for parameter uncertainty, can have a significant influence on freshwater fate factor predictions. Therefore, we conclude that it is important that fate factors account not only for parameter uncertainty, but for spatial variability as well, as this further increases the reliability of ecotoxicological impacts in LCA. Copyright © 2016 Elsevier Ltd. All rights reserved.
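
    The ratio k of the 95th to the 5th percentile used above is straightforward to compute from sampled fate factors; a small sketch follows, in which the lognormal distributions standing in for spatial variability and for parameter uncertainty are invented placeholders.

        # Sketch of the k = P95/P5 ratio for spatial variability vs. parameter uncertainty.
        import numpy as np

        rng = np.random.default_rng(11)

        def k_ratio(samples):
            p5, p95 = np.percentile(samples, [5, 95])
            return p95 / p5

        # (i) spatial variability: fate factor varying across 100x100 km regions (assumed)
        ff_regions = rng.lognormal(mean=np.log(30.0), sigma=1.0, size=500)    # days
        # (ii) parameter uncertainty: Monte Carlo draws of substance properties (assumed)
        ff_param = rng.lognormal(mean=np.log(30.0), sigma=0.3, size=10_000)   # days

        print(f"k (spatial variability):   {k_ratio(ff_regions):.1f}")
        print(f"k (parameter uncertainty): {k_ratio(ff_param):.1f}")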

  12. Mass discharge estimation from contaminated sites: Multi-model solutions for assessment of conceptual uncertainty

    NASA Astrophysics Data System (ADS)

    Thomsen, N. I.; Troldborg, M.; McKnight, U. S.; Binning, P. J.; Bjerg, P. L.

    2012-04-01

    Mass discharge estimates are increasingly being used in the management of contaminated sites. Such estimates have proven useful for supporting decisions related to the prioritization of contaminated sites in a groundwater catchment. Potential management options can be categorised as follows: (1) leave as is, (2) clean up, or (3) further investigation needed. However, mass discharge estimates are often very uncertain, which may hamper the management decisions. If option 1 is incorrectly chosen, soil and water quality will decrease, threatening or destroying drinking water resources. The risk of choosing option 2 is to spend money on remediating a site that does not pose a problem. Choosing option 3 will often be safest, but may not be the optimal economic solution. Quantification of the uncertainty in mass discharge estimates can therefore greatly improve the foundation for selecting the appropriate management option. The uncertainty of mass discharge estimates depends greatly on the extent of the site characterization. A good approach for uncertainty estimation will be flexible with respect to the investigation level, and account for both parameter and conceptual model uncertainty. We propose a method for quantifying the uncertainty of dynamic mass discharge estimates from contaminant point sources on the local scale. The method considers both parameter and conceptual uncertainty through a multi-model approach. The multi-model approach evaluates multiple conceptual models for the same site. The different conceptual models consider different source characterizations and hydrogeological descriptions. The idea is to include a set of essentially different conceptual models where each model is believed to be a realistic representation of the given site, based on the current level of information. Parameter uncertainty is quantified using Monte Carlo simulations. For each conceptual model we calculate a transient mass discharge estimate with uncertainty bounds resulting from the parametric uncertainty. To quantify the conceptual uncertainty from a given site, we combine the outputs from the different conceptual models using Bayesian model averaging. The weight for each model is obtained by integrating available data and expert knowledge using Bayesian belief networks. The multi-model approach is applied to a contaminated site. At the site a DNAPL (dense non-aqueous phase liquid) spill consisting of PCE (perchloroethylene) has contaminated a fractured clay till aquitard overlying a limestone aquifer. The exact shape and nature of the source is unknown and so is the importance of transport in the fractures. The result of the multi-model approach is a visual representation of the uncertainty of the mass discharge estimates for the site, which can be used to support the management options.
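
    A compact sketch of the combination step is given below: Monte Carlo samples of mass discharge from several alternative conceptual models are mixed according to model weights, as in Bayesian model averaging. The three model distributions and the weights are placeholders; in the study the weights come from Bayesian belief networks informed by site data and expert knowledge.

        # Combining per-model Monte Carlo samples with Bayesian model averaging weights.
        import numpy as np

        rng = np.random.default_rng(5)
        n = 20_000

        # Mass discharge samples (g/day) from three alternative conceptual models (assumed)
        models = {
            "matrix-dominated": rng.lognormal(np.log(5.0), 0.5, n),
            "fracture-dominated": rng.lognormal(np.log(20.0), 0.8, n),
            "mixed": rng.lognormal(np.log(10.0), 0.6, n),
        }
        weights = {"matrix-dominated": 0.2, "fracture-dominated": 0.3, "mixed": 0.5}

        # Bayesian model averaging: a mixture of the per-model predictive distributions
        choice = rng.choice(list(models), size=n, p=[weights[m] for m in models])
        bma = np.array([models[m][i] for i, m in enumerate(choice)])

        for name, samples in models.items():
            print(f"{name:20s} median={np.median(samples):6.1f}  90% CI="
                  f"[{np.percentile(samples, 5):.1f}, {np.percentile(samples, 95):.1f}]")
        print(f"{'BMA combined':20s} median={np.median(bma):6.1f}  90% CI="
              f"[{np.percentile(bma, 5):.1f}, {np.percentile(bma, 95):.1f}]")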

  13. Non-parametric data-based approach for the quantification and communication of uncertainties in river flood forecasts

    NASA Astrophysics Data System (ADS)

    Van Steenbergen, N.; Willems, P.

    2012-04-01

    Reliable flood forecasts are the most important non-structural measures to reduce the impact of floods. However, flood forecasting systems are subject to uncertainty originating from the input data, model structure and model parameters of the different hydraulic and hydrological submodels. To quantify this uncertainty, a non-parametric data-based approach has been developed. This approach analyses the historical forecast residuals (differences between the predictions and the observations at river gauging stations) without using a predefined statistical error distribution. Because the residuals are correlated with the value of the forecasted water level and the lead time, the residuals are split up into discrete classes of simulated water levels and lead times. For each class, percentile values of the model residuals are calculated and stored in a 'three-dimensional error' matrix. By 3D interpolation in this error matrix, the uncertainty in new forecasted water levels can be quantified. In addition to the quantification of the uncertainty, the communication of this uncertainty is equally important. The communication has to be done in a consistent way, reducing the chance of misinterpretation. Also, the communication needs to be adapted to the audience; the majority of the larger public is not interested in in-depth information on the uncertainty in the predicted water levels, but is only interested in information on the likelihood of exceedance of certain alarm levels. Water managers need more information, e.g. time-dependent uncertainty information, because they rely on this information to undertake the appropriate flood mitigation action. There are various ways of presenting uncertainty information (numerical, linguistic, graphical, time (in)dependent, etc.), each with their advantages and disadvantages for a specific audience. A useful method to communicate uncertainty of flood forecasts is by probabilistic flood mapping. These maps give a representation of the probability of flooding of a certain area, based on the uncertainty assessment of the flood forecasts. By using this type of map, water managers can focus their attention on the areas with the highest flood probability. Also the larger public can consult these maps for information on the probability of flooding for their specific location, such that they can take pro-active measures to reduce personal damage. The method of quantifying the uncertainty was implemented in the operational flood forecasting system for the navigable rivers in the Flanders region of Belgium. The method has shown clear benefits during the floods of the last two years.
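
    The error-matrix idea can be sketched as follows: bin historical forecast residuals by forecasted water level and lead time, store selected percentiles per bin, and interpolate in that matrix to attach uncertainty bounds to a new forecast. The synthetic residuals, bin edges and percentiles below are assumptions for illustration only.

        # Non-parametric percentile "error matrix" with interpolation for a new forecast.
        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        rng = np.random.default_rng(2)
        n = 50_000

        # Historical archive: forecasted water level (m), lead time (h), residual (obs - forecast)
        level = rng.uniform(0.5, 5.0, n)
        lead = rng.uniform(1.0, 48.0, n)
        residual = rng.normal(0.0, 0.05 + 0.03 * level + 0.004 * lead)

        level_bins = np.linspace(0.5, 5.0, 10)
        lead_bins = np.linspace(1.0, 48.0, 8)
        pcts = [5, 50, 95]

        # Percentile matrix with shape (level class, lead-time class, percentile)
        matrix = np.zeros((len(level_bins) - 1, len(lead_bins) - 1, len(pcts)))
        for i in range(len(level_bins) - 1):
            for j in range(len(lead_bins) - 1):
                sel = ((level >= level_bins[i]) & (level < level_bins[i + 1]) &
                       (lead >= lead_bins[j]) & (lead < lead_bins[j + 1]))
                matrix[i, j] = np.percentile(residual[sel], pcts)

        # One interpolator per stored percentile, defined at the class centres
        lc = 0.5 * (level_bins[:-1] + level_bins[1:])
        tc = 0.5 * (lead_bins[:-1] + lead_bins[1:])
        interps = [RegularGridInterpolator((lc, tc), matrix[:, :, k],
                                           bounds_error=False, fill_value=None)
                   for k in range(len(pcts))]

        new_forecast, new_lead = 3.2, 24.0     # a new forecast to be dressed with uncertainty
        q05, q50, q95 = (float(f([[new_forecast, new_lead]])[0]) for f in interps)
        print(f"forecast {new_forecast:.1f} m at lead {new_lead:.0f} h: "
              f"90% band [{new_forecast + q05:.2f}, {new_forecast + q95:.2f}] m")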

  14. Reconstructing Exposures from Biomarkers using Exposure-Pharmacokinetic Modeling - A Case Study with Carbaryl

    EPA Science Inventory

    Sources of uncertainty involved in exposure reconstruction for a short half-life chemical, carbaryl, were characterized using the Cumulative and Aggregate Risk Evaluation System (CARES), an exposure model, and a human physiologically based pharmacokinetic (PBPK) model. CARES was...

  15. 77 FR 42654 - Trifloxystrobin; Pesticide Tolerance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-20

    ... code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532). This... filing. III. Aggregate Risk Assessment and Determination of Safety Section 408(b)(2)(A)(i) of FFDCA... dose at which adverse effects of concern are identified (the LOAEL). Uncertainty/safety factors are...

  16. Quantifying the Influence of Dynamics Across Scales on Regional Climate Uncertainty in Western North America

    NASA Astrophysics Data System (ADS)

    Goldenson, Naomi L.

    Uncertainties in climate projections at the regional scale are inevitably larger than those for global mean quantities. Here, focusing on western North American regional climate, several approaches are taken to quantifying uncertainties starting with the output of global climate model projections. Internal variance is found to be an important component of the projection uncertainty up and down the west coast. To quantify internal variance and other projection uncertainties in existing climate models, we evaluate different ensemble configurations. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter offers the advantage of also producing estimates of uncertainty due to model differences. We conclude that climate projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible. We then conduct a small single-model ensemble of simulations using the Model for Prediction Across Scales with physics from the Community Atmosphere Model Version 5 (MPAS-CAM5) and prescribed historical sea surface temperatures. In the global variable resolution domain, the finest resolution (at 30 km) is in our region of interest over western North America and upwind over the northeast Pacific. In the finer-scale region, extreme precipitation from atmospheric rivers (ARs) is connected to tendencies in seasonal snowpack in mountains of the Northwest United States and California. In most of the Cascade Mountains, winters with more AR days are associated with less snowpack, in contrast to the northern Rockies and California's Sierra Nevadas. In snowpack observations and reanalysis of the atmospheric circulation, we find similar relationships between frequency of AR events and winter season snowpack in the western United States. In spring, however, there is not a clear relationship between number of AR days and seasonal mean snowpack across the model ensemble, so caution is urged in interpreting the historical record in the spring season. Finally, the representation of the El Nino Southern Oscillation (ENSO)--an important source of interannual climate predictability in some regions--is explored in a large single-model ensemble using ensemble Empirical Orthogonal Functions (EOFs) to find modes of variance across the entire ensemble at once. The leading EOF is ENSO. The principal components (PCs) of the next three EOFs exhibit a lead-lag relationship with the ENSO signal captured in the first PC. The second PC, with most of its variance in the summer season, is the most strongly cross-correlated with the first. This approach offers insight into how the model considered represents this important atmosphere-ocean interaction. Taken together these varied approaches quantify the implications of climate projections regionally, identify processes that make snowpack water resources vulnerable, and seek insight into how to better simulate the large-scale climate modes controlling regional variability.
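
    A minimal sketch of the ensemble EOF step is shown below: anomaly fields from all ensemble members are pooled along the time axis and decomposed with a singular value decomposition, yielding modes of variance shared across the ensemble and per-member principal components. The synthetic ENSO-like field is an assumption; the actual model output and EOF conventions of the study are not reproduced.

        # Ensemble EOF sketch: pool members along time, then SVD of the anomaly matrix.
        import numpy as np

        rng = np.random.default_rng(9)
        n_members, n_years, n_grid = 40, 60, 500

        # Synthetic anomaly fields: a common spatial mode driven by a random index per
        # member, plus spatially uncorrelated noise.
        pattern = rng.normal(size=n_grid)
        index = rng.normal(size=(n_members, n_years))        # ENSO-like index per member
        fields = index[..., None] * pattern + 0.5 * rng.normal(size=(n_members, n_years, n_grid))

        # Pool members along the time axis, remove the pooled mean, and decompose
        pooled = fields.reshape(n_members * n_years, n_grid)
        pooled = pooled - pooled.mean(axis=0)
        U, S, Vt = np.linalg.svd(pooled, full_matrices=False)

        explained = S**2 / np.sum(S**2)
        print("variance explained by the first 3 EOFs:", np.round(explained[:3], 3))

        # Principal components of the leading mode, reshaped back to (member, year)
        pc1 = (U[:, 0] * S[0]).reshape(n_members, n_years)
        corr = np.corrcoef(pc1[0], index[0])[0, 1]
        print("abs. correlation of PC1 with the prescribed index (member 0):",
              round(abs(float(corr)), 2))                    # EOF sign is arbitrary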

  17. Uncertainties in climate change projections for viticulture in Portugal

    NASA Astrophysics Data System (ADS)

    Fraga, Helder; Malheiro, Aureliano C.; Moutinho-Pereira, José; Pinto, Joaquim G.; Santos, João A.

    2013-04-01

    The assessment of climate change impacts on viticulture is often carried out using regional climate model (RCM) outputs. These studies rely on either multi-model ensembles or on single-model approaches. The RCM-ensembles account for uncertainties inherent to the different models. In this study, using a 16-RCM ensemble under the IPCC A1B scenario, the climate change signal (future minus recent-past, 2041-2070 - 1961-2000) of 4 bioclimatic indices (Huglin Index - HI, Dryness Index - DI, Hydrothermal Index - HyI and CompI - Composite Index) over mainland Portugal is analysed. A normalized interquartile range (NIQR) of the 16-member ensemble for each bioclimatic index is assessed in order to quantify the ensemble uncertainty. The results show significant increases in the HI index over most of Portugal, with higher values in Alentejo, Trás-os-Montes and Douro/Porto wine regions, also depicting very low uncertainty. Conversely, the decreases in the DI pattern throughout the country show large uncertainties, except in Minho (northwestern Portugal), where precipitation reaches the highest amounts in Portugal. The HyI shows significant decreases in northwestern Portugal, with relatively low uncertainty all across the country. The CompI depicts significant decreases over Alentejo and increases over Minho, though decreases over Alentejo reveal high uncertainty, while increases over Minho show low uncertainty. The assessment of the uncertainty in climate change projections is of great relevance for the wine industry. Quantifying this uncertainty is crucial, since different models may lead to quite different outcomes and may thereby be as crucial as climate change itself to the winemaking sector. This work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project FCOMP-01-0124-FEDER-022692.
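
    The normalised interquartile range (NIQR) of the ensemble climate-change signal can be computed in a few lines. In the sketch below the interquartile range is normalised by the absolute ensemble median, which is one plausible convention and an assumption here, as are the synthetic 16-member signals.

        # NIQR of an ensemble climate-change signal; normalisation convention is assumed.
        import numpy as np

        rng = np.random.default_rng(13)
        n_members = 16

        # Climate-change signal (future minus recent past) of an index, per RCM member
        signal = rng.normal(loc=250.0, scale=40.0, size=n_members)   # e.g. Huglin Index units

        q25, q50, q75 = np.percentile(signal, [25, 50, 75])
        niqr = (q75 - q25) / abs(q50)          # interquartile range, normalised by the median
        print(f"ensemble median signal: {q50:.1f}")
        print(f"NIQR (relative ensemble uncertainty): {niqr:.2f}")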

  18. Quantifying geological uncertainty for flow and transport modeling in multi-modal heterogeneous formations

    NASA Astrophysics Data System (ADS)

    Feyen, Luc; Caers, Jef

    2006-06-01

    In this work, we address the problem of characterizing the heterogeneity and uncertainty of hydraulic properties for complex geological settings. Hereby, we distinguish between two scales of heterogeneity, namely the hydrofacies structure and the intrafacies variability of the hydraulic properties. We employ multiple-point geostatistics to characterize the hydrofacies architecture. The multiple-point statistics are borrowed from a training image that is designed to reflect the prior geological conceptualization. The intrafacies variability of the hydraulic properties is represented using conventional two-point correlation methods, more precisely, spatial covariance models under a multi-Gaussian spatial law. We address the different levels and sources of uncertainty in characterizing the subsurface heterogeneity, and explore their effect on groundwater flow and transport predictions. Typically, uncertainty is assessed by way of many images, termed realizations, of a fixed statistical model. However, in many cases, sampling from a fixed stochastic model does not adequately represent the space of uncertainty. It neglects the uncertainty related to the selection of the stochastic model and the estimation of its input parameters. We acknowledge the uncertainty inherent in the definition of the prior conceptual model of aquifer architecture and in the estimation of global statistics, anisotropy, and correlation scales. Spatial bootstrap is used to assess the uncertainty of the unknown statistical parameters. As an illustrative example, we employ a synthetic field that represents a fluvial setting consisting of an interconnected network of channel sands embedded within finer-grained floodplain material. For this highly non-stationary setting we quantify the groundwater flow and transport model prediction uncertainty for various levels of hydrogeological uncertainty. Results indicate the importance of accurately describing the facies geometry, especially for transport predictions.

  19. Uncertainty Quantification and Regional Sensitivity Analysis of Snow-related Parameters in the Canadian LAnd Surface Scheme (CLASS)

    NASA Astrophysics Data System (ADS)

    Badawy, B.; Fletcher, C. G.

    2017-12-01

    The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=220) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with the largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that the snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable reduction of the plausible ranges of the parameter values, and hence of their uncertainty ranges, which can lead to a significant reduction of the model uncertainty. The implementation and results of this study will be presented and discussed in detail.
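
    The emulation-and-ranking workflow lends itself to a short sketch: fit a support vector regression emulator to a modest set of model runs sampled across parameter ranges, then rank parameter influence with a random-forest permutation-importance measure. The toy land-model function, parameter names and sample sizes below are assumptions and do not reproduce CLASS or the study's experimental design.

        # SVR emulator of an expensive model plus permutation-importance parameter ranking.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(17)
        n_cases, n_params = 400, 5
        names = ["albedo_refresh", "limit_snow_depth", "fresh_density",
                 "roughness", "melt_factor"]                  # hypothetical parameter names

        # Sample normalised parameters across their assumed uncertainty ranges
        X = rng.uniform(size=(n_cases, n_params))

        def toy_land_model(X):
            """Stand-in for a CLASS output (e.g. mean SWE); weights are invented."""
            return (3.0 * X[:, 0] + 2.0 * X[:, 1] + 0.3 * X[:, 2]
                    + 0.1 * X[:, 3] + rng.normal(0.0, 0.1, X.shape[0]))

        y = toy_land_model(X)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        # SVR emulator of the expensive model
        emulator = SVR(C=10.0, epsilon=0.05).fit(X_tr, y_tr)
        print("emulator R^2 on held-out cases:", round(emulator.score(X_te, y_te), 3))

        # Random-forest permutation importance to rank parameter influence
        rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
        imp = permutation_importance(rf, X_te, y_te, n_repeats=20, random_state=0)
        for i in np.argsort(imp.importances_mean)[::-1]:
            print(f"{names[i]:18s} importance = {imp.importances_mean[i]:.3f}")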

  20. The impact of forest structure and spatial scale on the relationship between ground plot above ground biomass and GEDI lidar waveforms

    NASA Astrophysics Data System (ADS)

    Armston, J.; Marselis, S.; Hancock, S.; Duncanson, L.; Tang, H.; Kellner, J. R.; Calders, K.; Disney, M.; Dubayah, R.

    2017-12-01

    The NASA Global Ecosystem Dynamics Investigation (GEDI) will place a multi-beam waveform lidar instrument on the International Space Station (ISS) to provide measurements of forest vertical structure globally. These measurements of structure will underpin empirical modelling of above ground biomass density (AGBD) at the scale of individual GEDI lidar footprints (25m diameter). The GEDI pre-launch calibration strategy for footprint level models relies on linking AGBD estimates from ground plots with GEDI lidar waveforms simulated from coincident discrete return airborne laser scanning data. Currently available ground plot data have variable and often large uncertainty at the spatial resolution of GEDI footprints due to poor colocation, allometric model error, sample size and plot edge effects. The relative importance of these sources of uncertainty partly depends on the quality of ground measurements and region. It is usually difficult to know the magnitude of these uncertainties a priori, so a common approach to mitigate their influence on model training is to aggregate ground plot and waveform lidar data to a coarser spatial scale (0.25-1ha). Here we examine the impacts of these principal sources of uncertainty using a 3D simulation approach. Sets of realistic tree models generated from terrestrial laser scanning (TLS) data or parametric modelling matched to tree inventory data were assembled from four contrasting forest plots across tropical rainforest, deciduous temperate forest, and sclerophyll eucalypt woodland sites. These tree models were used to simulate geometrically explicit 3D scenes with variable tree density, size class and spatial distribution. GEDI lidar waveforms are simulated over ground plots within these scenes using Monte Carlo ray tracing, allowing the impact of varying ground plot and waveform colocation error, forest structure and edge effects on the relationship between ground plot AGBD and GEDI lidar waveforms to be directly assessed. We quantify the sensitivity of calibration equations relating GEDI lidar structure measurements and AGBD to these factors at a range of spatial scales (0.0625-1ha) and discuss the implications for the expanding use of existing in situ ground plot data by GEDI.

  1. Transitioning from MODIS to VIIRS: an analysis of inter-consistency of NDVI data sets for agricultural monitoring.

    PubMed

    Skakun, Sergii; Justice, Christopher O; Vermote, Eric; Roger, Jean-Claude

    2018-01-01

    The Visible/Infrared Imager/Radiometer Suite (VIIRS) aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite was launched in 2011, in part to provide continuity with the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the National Aeronautics and Space Administration's (NASA) Terra and Aqua remote sensing satellites. The VIIRS will eventually replace MODIS for both land science and applications and add to the coarse-resolution, long-term data record. It is, therefore, important to provide the user community with an assessment of the consistency of equivalent products from the two sensors. For this study, we do this in the context of example agricultural monitoring applications. Surface reflectance that is routinely delivered within the M{O,Y}D09 and VNP09 series of products provides critical input for generating downstream products. Given the range of applications utilizing the normalized difference vegetation index (NDVI) generated from M{O,Y}D09 and VNP09 products and the inherent differences between MODIS and VIIRS sensors in calibration, spatial sampling, and spectral bands, the main objective of this study is to quantify uncertainties related to transitioning from MODIS- to VIIRS-based NDVIs. In particular, we compare NDVIs derived from two sets of Level 3 MYD09 and VNP09 products with various spatial-temporal characteristics, namely 8-day composites at 500 m spatial resolution and daily Climate Modelling Grid (CMG) images at 0.05° spatial resolution. Spectral adjustment of VIIRS I1 (red) and I2 (near infra-red - NIR) bands to match MODIS/Aqua b1 (red) and b2 (NIR) bands is performed to remove a bias between MODIS and VIIRS-based red, NIR, and NDVI estimates. Overall, red reflectance, NIR reflectance, and NDVI uncertainties were 0.014, 0.029 and 0.056, respectively, for the 500 m product and 0.013, 0.016 and 0.032 for the 0.05° product. The study shows that MODIS and VIIRS NDVI data can be used interchangeably for applications with an uncertainty of less than 0.02 to 0.05, depending on the scale of spatial aggregation, which is typically the uncertainty of the individual dataset.
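
    A schematic of the cross-sensor NDVI comparison is sketched below: compute NDVI from red and NIR surface reflectance for both sensors, apply a linear band adjustment to the second sensor, and summarise the bias and scatter that remain. The reflectance distributions, offsets and the least-squares adjustment used here are invented placeholders, not the published MODIS-VIIRS spectral adjustment coefficients.

        # Cross-sensor NDVI comparison with a linear band adjustment; synthetic data.
        import numpy as np

        rng = np.random.default_rng(21)
        n_pixels = 100_000

        # Synthetic "MODIS" surface reflectances over vegetated land
        red_modis = rng.uniform(0.02, 0.20, n_pixels)
        nir_modis = rng.uniform(0.20, 0.50, n_pixels)

        # Synthetic "VIIRS" bands with a small spectral/calibration offset plus noise
        red_viirs = 0.98 * red_modis + 0.002 + rng.normal(0, 0.004, n_pixels)
        nir_viirs = 1.02 * nir_modis - 0.005 + rng.normal(0, 0.006, n_pixels)

        def ndvi(red, nir):
            return (nir - red) / (nir + red)

        # Linear band adjustment fitted by least squares (placeholder for published coefficients)
        red_adj = np.polyval(np.polyfit(red_viirs, red_modis, 1), red_viirs)
        nir_adj = np.polyval(np.polyfit(nir_viirs, nir_modis, 1), nir_viirs)

        diff_raw = ndvi(red_viirs, nir_viirs) - ndvi(red_modis, nir_modis)
        diff_adj = ndvi(red_adj, nir_adj) - ndvi(red_modis, nir_modis)
        rmsd_raw = np.sqrt(np.mean(diff_raw ** 2))
        rmsd_adj = np.sqrt(np.mean(diff_adj ** 2))
        print(f"NDVI bias before adjustment: {diff_raw.mean():+.4f}, RMSD {rmsd_raw:.4f}")
        print(f"NDVI bias after adjustment:  {diff_adj.mean():+.4f}, RMSD {rmsd_adj:.4f}")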

  2. Modeling spatiotemporal dynamics of global wetlands: comprehensive evaluation of a new sub-grid TOPMODEL parameterization and uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Zimmermann, Niklaus E.; Kaplan, Jed O.; Poulter, Benjamin

    2016-03-01

    Simulations of the spatiotemporal dynamics of wetlands are key to understanding the role of wetland biogeochemistry under past and future climate. Hydrologic inundation models, such as the TOPography-based hydrological model (TOPMODEL), are based on a fundamental parameter known as the compound topographic index (CTI) and offer a computationally cost-efficient approach to simulate wetland dynamics at global scales. However, there remains a large discrepancy in the implementations of TOPMODEL in land-surface models (LSMs) and thus their performance against observations. This study describes new improvements to TOPMODEL implementation and estimates of global wetland dynamics using the LPJ-wsl (Lund-Potsdam-Jena Wald Schnee und Landschaft version) Dynamic Global Vegetation Model (DGVM) and quantifies uncertainties by comparing three digital elevation model (DEM) products (HYDRO1k, GMTED, and HydroSHEDS) at different spatial resolution and accuracy on simulated inundation dynamics. In addition, we found that calibrating TOPMODEL with a benchmark wetland data set can help to successfully delineate the seasonal and interannual variation of wetlands, as well as improve the spatial distribution of wetlands to be consistent with inventories. The HydroSHEDS DEM, using a river-basin scheme for aggregating the CTI, shows the best accuracy for capturing the spatiotemporal dynamics of wetlands among the three DEM products. The estimate of global wetland potential/maximum is ~10.3 Mkm2 (10^6 km2), with a mean annual maximum of ~5.17 Mkm2 for 1980-2010. When integrated with the wetland methane emission submodule, the uncertainty of global annual CH4 emissions from topography inputs is estimated to be 29.0 Tg yr-1. This study demonstrates the feasibility of TOPMODEL to capture spatial heterogeneity of inundation at a large scale and highlights the significance of correcting maximum wetland extent to improve modeling of interannual variations in wetland area. It additionally highlights the importance of an adequate investigation of topographic indices for simulating global wetlands and shows the opportunity to converge wetland estimates across LSMs by identifying the uncertainty associated with existing wetland products.

  3. Quantifying acoustic doppler current profiler discharge uncertainty: A Monte Carlo based tool for moving-boat measurements

    USGS Publications Warehouse

    Mueller, David S.

    2017-01-01

    This paper presents a method using Monte Carlo simulations for assessing uncertainty of moving-boat acoustic Doppler current profiler (ADCP) discharge measurements using a software tool known as QUant, which was developed for this purpose. Analysis was performed on 10 data sets from four Water Survey of Canada gauging stations in order to evaluate the relative contribution of a range of error sources to the total estimated uncertainty. The factors that differed among data sets included the fraction of unmeasured discharge relative to the total discharge, flow nonuniformity, and operator decisions about instrument programming and measurement cross section. As anticipated, it was found that the estimated uncertainty is dominated by uncertainty of the discharge in the unmeasured areas, highlighting the importance of appropriate selection of the site, the instrument, and the user inputs required to estimate the unmeasured discharge. The main contributor to uncertainty was invalid data, but spatial inhomogeneity in water velocity and bottom-track velocity also contributed, as did variation in the edge velocity, uncertainty in the edge distances, edge coefficients, and the top and bottom extrapolation methods. To a lesser extent, spatial inhomogeneity in the bottom depth also contributed to the total uncertainty, as did uncertainty in the ADCP draft at shallow sites. The estimated uncertainties from QUant can be used to assess the adequacy of standard operating procedures. They also provide quantitative feedback to the ADCP operators about the quality of their measurements, indicating which parameters are contributing most to uncertainty, and perhaps even highlighting ways in which uncertainty can be reduced. Additionally, QUant can be used to account for self-dependent error sources such as heading errors, which are a function of heading. The results demonstrate the importance of a Monte Carlo method tool such as QUant for quantifying random and bias errors when evaluating the uncertainty of moving-boat ADCP measurements.
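
    To convey the flavor of such a Monte Carlo analysis (this is not QUant's actual error model), the sketch below perturbs hypothetical components of a moving-boat discharge measurement with assumed relative uncertainties and reports the spread of the total.

    ```python
    # Illustrative Monte Carlo sketch of combining discharge-component uncertainties;
    # component values and relative uncertainties are assumptions, not QUant's model.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 20000

    # Hypothetical discharge components (m^3/s) with assumed relative uncertainties
    measured_mid  = rng.normal(100.0, 100.0 * 0.03, n)  # measured (mid-section) portion
    top_extrap    = rng.normal(12.0, 12.0 * 0.10, n)    # top extrapolation
    bottom_extrap = rng.normal(8.0, 8.0 * 0.10, n)      # bottom extrapolation
    edges         = rng.normal(5.0, 5.0 * 0.20, n)      # left and right edge estimates

    total = measured_mid + top_extrap + bottom_extrap + edges
    med = np.median(total)
    lo, hi = np.percentile(total, [2.5, 97.5])
    print(f"median {med:.1f} m^3/s, 95% interval [{lo:.1f}, {hi:.1f}] m^3/s")
    ```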

  4. Pricing index-based catastrophe bonds: Part 2: Object-oriented design issues and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Unger, André J. A.

    2010-02-01

    This work is the second installment in a two-part series, and focuses on object-oriented programming methods to implement an augmented-state variable approach to aggregate the PCS index and introduce the Bermudan-style call feature into the proposed CAT bond model. The PCS index is aggregated quarterly using a discrete Asian running-sum formulation. The resulting aggregate PCS index augmented-state variable is used to specify the payoff (principal) on the CAT bond based on reinsurance layers. The purpose of the Bermudan-style call option is to allow the reinsurer to minimize their interest rate risk exposure on making fixed coupon payments under prevailing interest rates. A sensitivity analysis is performed to determine the impact of uncertainty in the frequency and magnitude of hurricanes on the price of the CAT bond. Results indicate that while the CAT bond is highly sensitive to the natural variability in the frequency of landfalling hurricanes between El Niño and non-El Niño years, it remains relatively insensitive to uncertainty in the magnitude of damages. In addition, results indicate that the maximum price of the CAT bond is insensitive to whether it is engineered to cover low frequency high magnitude events in a 'high' reinsurance layer relative to high frequency low magnitude events in a 'low' reinsurance layer. Also, while it is possible for the reinsurer to minimize their interest rate risk exposure on the fixed coupon payments, the impact of this risk on the price of the CAT bond appears small relative to the natural variability in the CAT bond price, and consequently catastrophic risk, due to uncertainty in the frequency and magnitude of landfalling hurricanes.
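
    The aggregation and payoff structure described above can be sketched very simply (this is only an illustration of the running-sum and reinsurance-layer ideas, not the paper's pricing model); the attachment and exhaustion points and the loss values are hypothetical.

    ```python
    # Sketch of a discrete running-sum index and a layered principal payoff;
    # all numbers are hypothetical and this is not the paper's pricing model.
    import numpy as np

    def aggregate_pcs(quarterly_losses):
        """Discrete Asian-style running sum of quarterly PCS index losses."""
        return np.cumsum(quarterly_losses)

    def principal_payoff(aggregate_loss, attachment, exhaustion, face_value=1.0):
        """Principal repaid after erosion between the attachment and exhaustion
        points of a hypothetical reinsurance layer."""
        eroded = np.clip((aggregate_loss - attachment) / (exhaustion - attachment), 0.0, 1.0)
        return face_value * (1.0 - eroded)

    losses = np.array([0.0, 2.5, 7.0, 1.0])       # illustrative quarterly PCS losses
    agg = aggregate_pcs(losses)                   # running sum over the risk period
    print(agg, principal_payoff(agg[-1], attachment=5.0, exhaustion=15.0))
    ```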

  5. Development of probabilistic emission inventories of air toxics for Jacksonville, Florida, USA.

    PubMed

    Zhao, Yuchao; Frey, H Christopher

    2004-11-01

    Probabilistic emission inventories were developed for 1,3-butadiene, mercury (Hg), arsenic (As), benzene, formaldehyde, and lead for Jacksonville, FL. To quantify inter-unit variability in empirical emission factor data, the Maximum Likelihood Estimation (MLE) method or the Method of Matching Moments was used to fit parametric distributions. For data sets that contain nondetected measurements, a method based upon MLE was used for parameter estimation. To quantify the uncertainty in urban air toxic emission factors, parametric bootstrap simulation and empirical bootstrap simulation were applied to uncensored and censored data, respectively. The probabilistic emission inventories were developed based on the product of the uncertainties in the emission factors and in the activity factors. The uncertainties in the urban air toxics emission inventories range from as small as -25 to +30% for Hg to as large as -83 to +243% for As. The key sources of uncertainty in the emission inventory for each toxic are identified based upon sensitivity analysis. Typically, uncertainty in the inventory of a given pollutant can be attributed primarily to a small number of source categories. Priorities for improving the inventories and for refining the probabilistic analysis are discussed.
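
    A minimal parametric-bootstrap sketch of the general approach is shown below (hypothetical data, not the Jacksonville inventory): a lognormal is fitted to a handful of emission factors, bootstrap replicates of the mean emission factor are combined with an uncertain activity factor, and the resulting inventory percentiles are expressed as percentage ranges about the median.

    ```python
    # Parametric-bootstrap sketch with made-up data; not the study's inventory code.
    import numpy as np

    rng = np.random.default_rng(1)
    ef_data = np.array([0.8, 1.1, 0.9, 1.4, 1.0, 0.7])   # hypothetical emission factors

    # One simple choice: fit a lognormal in log space by the method of moments
    mu, sigma = np.log(ef_data).mean(), np.log(ef_data).std(ddof=1)

    B, n = 5000, len(ef_data)
    boot_means = np.exp(rng.normal(mu, sigma, (B, n))).mean(axis=1)  # bootstrap mean emission factors
    activity = rng.normal(1000.0, 50.0, B)                           # uncertain activity factor
    inventory = boot_means * activity                                # emission inventory replicates

    lo, med, hi = np.percentile(inventory, [2.5, 50, 97.5])
    print(f"{(lo / med - 1) * 100:+.0f}% to {(hi / med - 1) * 100:+.0f}% about the median")
    ```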

  6. Quantifying the uncertainty in site amplification modeling and its effects on site-specific seismic-hazard estimation in the upper Mississippi embayment and adjacent areas

    USGS Publications Warehouse

    Cramer, C.H.

    2006-01-01

    The Mississippi embayment, located in the central United States, and its thick deposits of sediments (over 1 km in places) have a large effect on earthquake ground motions. Several previous studies have addressed how these thick sediments might modify probabilistic seismic-hazard maps. The high seismic hazard associated with the New Madrid seismic zone makes it particularly important to quantify the uncertainty in modeling site amplification to better represent earthquake hazard in seismic-hazard maps. The methodology of the Memphis urban seismic-hazard-mapping project (Cramer et al., 2004) is combined with the reference profile approach of Toro and Silva (2001) to better estimate seismic hazard in the Mississippi embayment. Improvements over previous approaches include using the 2002 national seismic-hazard model, fully probabilistic hazard calculations, calibration of site amplification with improved nonlinear soil-response estimates, and estimates of uncertainty. Comparisons are made with the results of several previous studies, and estimates of uncertainty inherent in site-amplification modeling for the upper Mississippi embayment are developed. I present new seismic-hazard maps for the upper Mississippi embayment with the effects of site geology incorporating these uncertainties.

  7. Visualizing the uncertainty in the relationship between seasonal average climate and malaria risk.

    PubMed

    MacLeod, D A; Morse, A P

    2014-12-02

    Around $1.6 billion per year is spent financing anti-malaria initiatives, and though malaria morbidity is falling, the impact of annual epidemics remains significant. Whilst malaria risk may increase with climate change, projections are highly uncertain, and to sidestep this intractable uncertainty, adaptation efforts should improve societal ability to anticipate and mitigate individual events. Anticipation of climate-related events is made possible by seasonal climate forecasting, from which warnings of anomalous seasonal average temperature and rainfall are possible months in advance. Seasonal climate hindcasts have been used to drive climate-based models for malaria, showing significant skill for observed malaria incidence. However, the relationship between seasonal average climate and malaria risk remains unquantified. Here we explore this relationship, using a dynamic weather-driven malaria model. We also quantify key uncertainty in the malaria model, by introducing variability in one of the first-order uncertainties in model formulation. Results are visualized as location-specific impact surfaces: easily integrated with ensemble seasonal climate forecasts, and intuitively communicating quantified uncertainty. Methods are demonstrated for two epidemic regions, and are not limited to malaria modeling; the visualization method could be applied to any climate impact.

  8. Visualizing the uncertainty in the relationship between seasonal average climate and malaria risk

    NASA Astrophysics Data System (ADS)

    MacLeod, D. A.; Morse, A. P.

    2014-12-01

    Around $1.6 billion per year is spent financing anti-malaria initiatives, and though malaria morbidity is falling, the impact of annual epidemics remains significant. Whilst malaria risk may increase with climate change, projections are highly uncertain, and to sidestep this intractable uncertainty, adaptation efforts should improve societal ability to anticipate and mitigate individual events. Anticipation of climate-related events is made possible by seasonal climate forecasting, from which warnings of anomalous seasonal average temperature and rainfall are possible months in advance. Seasonal climate hindcasts have been used to drive climate-based models for malaria, showing significant skill for observed malaria incidence. However, the relationship between seasonal average climate and malaria risk remains unquantified. Here we explore this relationship, using a dynamic weather-driven malaria model. We also quantify key uncertainty in the malaria model, by introducing variability in one of the first-order uncertainties in model formulation. Results are visualized as location-specific impact surfaces: easily integrated with ensemble seasonal climate forecasts, and intuitively communicating quantified uncertainty. Methods are demonstrated for two epidemic regions, and are not limited to malaria modeling; the visualization method could be applied to any climate impact.

  9. Resolving dust emission responses to land cover change using an ecological land classification

    USDA-ARS?s Scientific Manuscript database

    Despite efforts to quantify the impacts of land cover change on wind erosion, assessment uncertainty remains large. We address this uncertainty by evaluating the application of ecological site concepts and state-and-transition models (STMs) for detecting and quantitatively describing the impacts of ...

  10. Impact of observation error structure on satellite soil moisture assimilation into a rainfall-runoff model

    USDA-ARS?s Scientific Manuscript database

    In Ensemble Kalman Filter (EnKF)-based data assimilation, the background prediction of a model is updated using observations and relative weights based on the model prediction and observation uncertainties. In practice, both model and observation uncertainties are difficult to quantify and they have...

  11. Nanostructured tracers for laser-based diagnostics in high-speed flows

    NASA Astrophysics Data System (ADS)

    Ghaemi, S.; Schmidt-Ott, A.; Scarano, F.

    2010-10-01

    The potential application of aggregates of nanoparticles for high-speed flow diagnostics is investigated. Aluminum nanoparticles around 10 nm in diameter are produced by spark discharge in argon gas. Through rapid coagulation and oxidation, aggregates of small effective density are formed. They are characterized by microscopy, and their aerodynamic and optical properties are evaluated theoretically. The performance of the aggregates is experimentally investigated across an oblique shock wave in a supersonic wind tunnel of 3 × 3 cm2 cross-section at Mach 2. Particle image velocimetry is used to quantify the time response of the aggregates. The investigations are also carried out on compact titanium agglomerates to provide a basis for comparison. The results yield a relaxation time of 0.27 µs for the nanostructured aluminum aggregates, an order of magnitude reduction with respect to the compact titanium nanoparticles. This work demonstrates the applicability of nanostructured aggregates for laser-based diagnostics in supersonic and hypersonic flows.
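
    A back-of-the-envelope illustration of why low effective density matters (not the paper's analysis) is the Stokes relaxation time tau_p = rho_p * d_p^2 / (18 * mu); the diameters, densities, and gas viscosity below are illustrative values only.

    ```python
    # Stokes response time for a tracer particle; all numbers are illustrative.
    def relaxation_time(diameter_m, particle_density, gas_viscosity=1.8e-5):
        """Stokes response time: tau_p = rho_p * d_p**2 / (18 * mu), in seconds."""
        return particle_density * diameter_m**2 / (18.0 * gas_viscosity)

    # Purely illustrative: a compact particle vs. a porous aggregate of equal size
    for label, d, rho in [("compact particle ", 400e-9, 4000.0),
                          ("porous aggregate ", 400e-9, 400.0)]:
        print(label, f"{relaxation_time(d, rho) * 1e6:.2f} microseconds")
    ```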

  12. Quantifying uncertainties in precipitation measurement

    NASA Astrophysics Data System (ADS)

    Chen, H. Z. D.

    2017-12-01

    The scientific community has a long history of using precipitation data in climate model design. However, the precipitation record and its models contain more uncertainty than their temperature counterparts. The literature shows that precipitation measurements are highly influenced by the surrounding environment, and weather stations, traditionally situated in open areas, are subject to various limitations. These restrictions limit the scientific community's ability to fully close the loop on the water cycle. Horizontal redistribution has been shown to be a major factor influencing precipitation measurements, and efforts have been made to reduce its effect on the monitoring apparatus. However, the factors contributing to this uncertainty are numerous and difficult to capture fully, so the noise level in precipitation data remains high. This study aims to quantify the uncertainties in precipitation data by factoring out horizontal redistribution through direct measurement. The horizontal contribution to precipitation will be quantified by measuring precipitation at two heights, with one gauge directly shadowing the other. The upper collection represents traditional precipitation data, whereas the lower measurement sums up the overall error term at a given location. Measurements will be recorded and correlated with the nearest available wind measurements to quantify the impact of wind on the traditional precipitation record. Collections at different locations will also be compared to determine whether this phenomenon is location specific or whether a general trend can be derived. We aim to demonstrate a new way to isolate the noise component in traditional precipitation data via empirical measurements and, by doing so, to improve the overall quality of the historic precipitation record and provide more accurate information for the design and calibration of large-scale climate models.

  13. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
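
    The sketch below gives a simplified flavor of Bayesian averaging over precipitation-product/model combinations (it is not the e-Bay algorithm, which conditions weights on discharge magnitude and timing): posterior weights are computed from Gaussian likelihoods with equal priors, and the expected discharge follows from the law of total probability; all discharge values and the error scale are hypothetical.

    ```python
    # Simplified Bayesian-averaging sketch with hypothetical numbers; not e-Bay itself.
    import numpy as np

    def posterior_weights(sims, obs, sigma):
        """Posterior probability of each product/model combination, assuming equal
        priors and independent Gaussian errors with standard deviation sigma."""
        loglik = -0.5 * np.sum(((sims - obs) / sigma) ** 2, axis=1)
        w = np.exp(loglik - loglik.max())
        return w / w.sum()

    obs = np.array([10.0, 14.0, 22.0, 18.0])            # observed discharges
    sims = np.array([[9.0, 15.0, 20.0, 19.0],           # combination 1
                     [12.0, 13.0, 25.0, 16.0],          # combination 2
                     [10.5, 14.5, 21.0, 18.5]])         # combination 3
    w = posterior_weights(sims, obs, sigma=2.0)
    expected_q = w @ sims                               # expected discharge (law of total probability)
    print(w, expected_q)
    ```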

  14. 78 FR 23497 - Propiconazole; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-19

    ...). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS.... Aggregate Risk Assessment and Determination of Safety Section 408(b)(2)(A)(i) of FFDCA allows EPA to... dose at which adverse effects of concern are identified (the LOAEL). Uncertainty/safety factors are...

  15. Aggregate Measures of Watershed Health from Reconstructed Water Quality Data with Uncertainty

    EPA Science Inventory

    Risk-based indices such as reliability, resilience, and vulnerability (R-R-V), have the potential to serve as watershed health assessment tools. Recent research has demonstrated the applicability of such indices for water quality (WQ) constituents such as total suspended solids ...

  16. (Non-) robustness of vulnerability assessments to climate change: An application to New Zealand.

    PubMed

    Fernandez, Mario Andres; Bucaram, Santiago; Renteria, Willington

    2017-12-01

    Assessments of vulnerability to climate change are a key element to inform climate policy and research. Assessments based on the aggregation of indicators have a strong appeal for their simplicity but are at risk of over-simplification and uncertainty. This paper explores the non-robustness of indicators-based assessments to changes in assumptions on the degree of substitution or compensation between indicators. Our case study is a nationwide assessment for New Zealand. We found that the ranking of geographic areas is sensitive to different parameterisations of the aggregation function, that is, areas that are categorised as highly vulnerable may switch to the least vulnerable category even with respect to the same climate hazards and population groups. Policy implications from the assessments are then compromised. Though indicators-based approaches may help in identifying drivers of vulnerability, there are weak grounds for using them to recommend mitigation or adaptation decisions given the high level of uncertainty due to non-robustness. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Therapeutic potential of choline magnesium trisalicylate as an alternative to aspirin for patients with bleeding tendencies.

    PubMed

    Danesh, B J; Saniabadi, A R; Russell, R I; Lowe, G D

    1987-12-01

    We have compared the effects of acetyl salicylic acid (ASA, aspirin) and choline magnesium trisalicylate (CMT), a non-acetylated salicylate product, on platelet aggregation in human whole blood ex-vivo. Using a whole blood platelet counter, platelet aggregation was quantified by measuring the fall in the number of single platelets at peak aggregation in response to collagen, arachidonic acid (AA), as well as spontaneous aggregation. In double blind and random order, 12 healthy volunteers received, on two separate occasions 10 days apart, a single oral dose of 652 mg ASA or 655 mg CMT. Despite a comparable absorption of salicylic acid from the two drugs, ingestion of ASA resulted in a marked inhibition of platelet aggregation induced by collagen (p less than 0.005), AA (p less than 0.01) and spontaneous aggregation (p less than 0.01), whereas such effects were not observed after CMT ingestion. We suggest that CMT may have therapeutic potential as an alternative to aspirin when inhibition of platelet aggregation can induce bleeding complications.

  18. Quantify ash aggregation associated to the 26 April 1979 Saint Vincent de la Soufrière eruption

    NASA Astrophysics Data System (ADS)

    Poret, Matthieu; Costa, Antonio; Folch, Arnau

    2016-04-01

    On 26 April 1979, an eruption occurred at the Saint Vincent de la Soufrière volcano, West Indies, generating an extended tephra fallout deposit from the slope of the volcano toward the south of the island. The event was observed and studied by Brazier et al. (1982), who provided a few tens of field observations that allowed an estimation of the tephra loading map, along with observations of volcanological parameters such as eruptive column height, duration, and erupted volume. They also reported that aggregation was significant during the eruption. Here, the field observations and the meteorological fields are used to reconstruct the tephra dispersal with the Fall3D model. The main goal is to better quantify the total mass of fine ash that aggregated during the eruption, providing important information and constraints on aggregation processes. Preliminary results show that the field observations are well captured using the simplified aggregation parameterization proposed by Cornell et al. (1983), whereas accretionary lapilli can be described by adding a second aggregate class (with a diameter of 2 mm, a density of 2000 kg/m3, and a sphericity of 1) representing only a few percent of the total amount of tephra. This percentage was estimated by an empirical approach that best fits the field observations. The simulation that best fits the field observations gives an estimated column height of about 12.5 km above the vent, a mass eruption rate of 6.0 × 10^6 kg/s, and a total erupted mass of 2.2 × 10^9 kg. To go further, we will use these results within the 1-D cross-section-averaged eruption column model FPLUME-1.0, based on the Buoyant Plume Theory (BPT), which considers aggregation processes within the plume.

  19. Quantification and propagation of disciplinary uncertainty via Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Mantis, George Constantine

    2002-08-01

    Several needs exist in the military, commercial, and civil sectors for new hypersonic systems. These needs remain unfulfilled, due in part to the uncertainty encountered in designing these systems. This uncertainty takes a number of forms, including disciplinary uncertainty, that which is inherent in the analytical tools utilized during the design process. Yet, few efforts to date empower the designer with the means to account for this uncertainty within the disciplinary analyses. In the current state-of-the-art in design, the effects of this unquantifiable uncertainty significantly increase the risks associated with new design efforts. Typically, the risk proves too great to allow a given design to proceed beyond the conceptual stage. To that end, the research encompasses the formulation and validation of a new design method, a systematic process for probabilistically assessing the impact of disciplinary uncertainty. The method implements Bayesian Statistics theory to quantify this source of uncertainty, and propagate its effects to the vehicle system level. Comparison of analytical and physical data for existing systems, modeled a priori in the given analysis tools, leads to quantification of uncertainty in those tools' calculation of discipline-level metrics. Then, after exploration of the new vehicle's design space, the quantified uncertainty is propagated probabilistically through the design space. This ultimately results in the assessment of the impact of disciplinary uncertainty on the confidence in the design solution: the final shape and variability of the probability functions defining the vehicle's system-level metrics. Although motivated by the hypersonic regime, the proposed treatment of uncertainty applies to any class of aerospace vehicle, just as the problem itself affects the design process of any vehicle. A number of computer programs comprise the environment constructed for the implementation of this work. Application to a single-stage-to-orbit (SSTO) reusable launch vehicle concept, developed by the NASA Langley Research Center under the Space Launch Initiative, provides the validation case for this work, with the focus placed on economics, aerothermodynamics, propulsion, and structures metrics. (Abstract shortened by UMI.)

  20. Predicting Ice Sheet and Climate Evolution at Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heimbach, Patrick

    2016-02-06

    A main research objective of PISCEES is the development of formal methods for quantifying uncertainties in ice sheet modeling. Uncertainties in simulating and projecting mass loss from the polar ice sheets arise primarily from initial conditions, surface and basal boundary conditions, and model parameters. In general terms, two main chains of uncertainty propagation may be identified: 1. inverse propagation of observation and/or prior uncertainties onto posterior control variable uncertainties; 2. forward propagation of prior or posterior control variable uncertainties onto those of target output quantities of interest (e.g., climate indices or ice sheet mass loss). A related goal is the development of computationally efficient methods for producing initial conditions for an ice sheet that are close to available present-day observations and essentially free of artificial model drift, which is required in order to be useful for model projections (“initialization problem”). To be of maximum value, such optimal initial states should be accompanied by “useful” uncertainty estimates that account for the different sources of uncertainties, as well as the degree to which the optimum state is constrained by available observations. The PISCEES proposal outlined two approaches for quantifying uncertainties. The first targets the full exploration of the uncertainty in model projections with sampling-based methods and a workflow managed by DAKOTA (the main delivery vehicle for software developed under QUEST). This is feasible for low-dimensional problems, e.g., those with a handful of global parameters to be inferred. This approach can benefit from derivative/adjoint information, but such information is not necessary, which is why it is often referred to as “non-intrusive”. The second approach makes heavy use of derivative information from model adjoints to quantify uncertainty in high dimensions (e.g., basal boundary conditions in ice sheet models). The use of local gradient or Hessian information (i.e., second derivatives of the cost function) requires additional code development and implementation, and is thus often referred to as an “intrusive” approach. Within PISCEES, MIT has been tasked with developing methods for derivative-based UQ, the “intrusive” approach discussed above. These methods rely on the availability of first (adjoint) and second (Hessian) derivative code, developed through intrusive methods such as algorithmic differentiation (AD). While representing a significant burden in terms of code development, derivative-based UQ is able to cope with very high-dimensional uncertainty spaces. That is, unlike sampling methods (all variations of Monte Carlo), its computational burden is independent of the dimension of the uncertainty space. This is a significant advantage for spatially distributed uncertainty fields, such as three-dimensional initial conditions, three-dimensional parameter fields, or two-dimensional surface and basal boundary conditions. Importantly, uncertainty fields for ice sheet models generally fall into this category.

  1. Global statistics of microphysical properties of cloud-top ice crystals

    NASA Astrophysics Data System (ADS)

    van Diedenhoven, B.; Fridlind, A. M.; Cairns, B.; Ackerman, A. S.; Riedi, J.

    2017-12-01

    Ice crystals in clouds are highly complex. Their sizes, macroscale shape (i.e., habit), mesoscale shape (i.e., aspect ratio of components) and microscale shape (i.e., surface roughness) determine optical properties and affect physical properties such as fall speeds, growth rates and aggregation efficiency. Our current understanding of the formation and evolution of ice crystals under various conditions can be considered poor. Commonly, ice crystal size and shape are related to ambient temperature and humidity, but global observational statistics on the variation of ice crystal size and particularly shape have not been available. Here we show results of a project aiming to infer ice crystal size, shape and scattering properties from a combination of MODIS measurements and POLDER-PARASOL multi-angle polarimetry. The shape retrieval procedure infers the mean aspect ratios of components of ice crystals and the mean microscale surface roughness levels, which are quantifiable parameters that mostly affect the scattering properties, in contrast to "habit". We present global statistics on the variation of ice effective radius, component aspect ratio, microscale surface roughness and scattering asymmetry parameter as a function of cloud top temperature, latitude, location, cloud type, season, etc. Generally, with increasing height, sizes decrease, roughness increases, asymmetry parameters decrease and aspect ratios increase towards unity. Some systematic differences are observed for clouds warmer and colder than the homogeneous freezing level. Uncertainties in the retrievals will be discussed. These statistics can be used as observational targets for modeling efforts and to better constrain other satellite remote sensing applications and their uncertainties.

  2. Medical Geography: a Promising Field of Application for Geostatistics.

    PubMed

    Goovaerts, P

    2009-01-01

    The analysis of health data and putative covariates, such as environmental, socio-economic, behavioral or demographic factors, is a promising application for geostatistics. It presents, however, several methodological challenges that arise from the fact that data are typically aggregated over irregular spatial supports and consist of a numerator and a denominator (i.e. population size). This paper presents an overview of recent developments in the field of health geostatistics, with an emphasis on three main steps in the analysis of areal health data: estimation of the underlying disease risk, detection of areas with significantly higher risk, and analysis of relationships with putative risk factors. The analysis is illustrated using age-adjusted cervix cancer mortality rates recorded over the 1970-1994 period for 118 counties of four states in the Western USA. Poisson kriging allows the filtering of noisy mortality rates computed from small population sizes, enhancing the correlation with two putative explanatory variables: percentage of inhabitants living below the federally defined poverty line, and percentage of Hispanic females. The area-to-point kriging formulation creates continuous maps of mortality risk, reducing the visual bias associated with the interpretation of choropleth maps. Stochastic simulation is used to generate realizations of cancer mortality maps, which allows one to quantify numerically how the uncertainty about the spatial distribution of health outcomes translates into uncertainty about the location of clusters of high values or the correlation with covariates. Last, geographically-weighted regression highlights the non-stationarity in the explanatory power of covariates: the higher mortality values along the coast are better explained by the two covariates than the lower risk recorded in Utah.

  3. Global Statistics of Microphysical Properties of Cloud-Top Ice Crystals

    NASA Technical Reports Server (NTRS)

    Van Diedenhoven, Bastiaan; Fridlind, Ann; Cairns, Brian; Ackerman, Andrew; Riedl, Jerome

    2017-01-01

    Ice crystals in clouds are highly complex. Their sizes, macroscale shape (i.e., habit), mesoscale shape (i.e., aspect ratio of components) and microscale shape (i.e., surface roughness) determine optical properties and affect physical properties such as fall speeds, growth rates and aggregation efficiency. Our current understanding of the formation and evolution of ice crystals under various conditions can be considered poor. Commonly, ice crystal size and shape are related to ambient temperature and humidity, but global observational statistics on the variation of ice crystal size and particularly shape have not been available. Here we show results of a project aiming to infer ice crystal size, shape and scattering properties from a combination of MODIS measurements and POLDER-PARASOL multi-angle polarimetry. The shape retrieval procedure infers the mean aspect ratios of components of ice crystals and the mean microscale surface roughness levels, which are quantifiable parameters that mostly affect the scattering properties, in contrast to "habit". We present global statistics on the variation of ice effective radius, component aspect ratio, microscale surface roughness and scattering asymmetry parameter as a function of cloud top temperature, latitude, location, cloud type, season, etc. Generally, with increasing height, sizes decrease, roughness increases, asymmetry parameters decrease and aspect ratios increase towards unity. Some systematic differences are observed for clouds warmer and colder than the homogeneous freezing level. Uncertainties in the retrievals will be discussed. These statistics can be used as observational targets for modeling efforts and to better constrain other satellite remote sensing applications and their uncertainties.

  4. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    NASA Astrophysics Data System (ADS)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Sequentially, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
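
    A minimal probabilistic-collocation sketch of the core idea is given below (it is not the paper's framework and uses a toy model rather than a hydrological model): the response to a standard normal parameter is expanded in probabilists' Hermite polynomials, the coefficients are fitted by least squares at collocation points, and the resulting surrogate can be sampled cheaply.

    ```python
    # Probabilistic-collocation sketch with a toy model; not the paper's PCE framework.
    import numpy as np
    from numpy.polynomial.hermite_e import hermevander

    def toy_model(theta):
        """Hypothetical model response to an uncertain parameter theta (stand-in only)."""
        return np.exp(0.3 * theta) + 0.1 * theta**2

    order = 3
    rng = np.random.default_rng(2)
    xi = rng.standard_normal(200)            # collocation samples of a standard normal variable
    y = toy_model(xi)

    # Least-squares fit of probabilists' Hermite coefficients: the PCE surrogate
    coeffs, *_ = np.linalg.lstsq(hermevander(xi, order), y, rcond=None)

    xi_new = rng.standard_normal(5)          # the surrogate is cheap to evaluate for new samples
    print(hermevander(xi_new, order) @ coeffs)
    print(toy_model(xi_new))                 # compare against the direct model
    ```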

  5. A General Uncertainty Quantification Methodology for Cloud Microphysical Property Retrievals

    NASA Astrophysics Data System (ADS)

    Tang, Q.; Xie, S.; Chen, X.; Zhao, C.

    2014-12-01

    The US Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program provides long-term (~20 years) ground-based cloud remote sensing observations. However, there are large uncertainties in the retrieval products of cloud microphysical properties based on the active and/or passive remote-sensing measurements. To address this uncertainty issue, a DOE Atmospheric System Research scientific focus study, Quantification of Uncertainties in Cloud Retrievals (QUICR), has been formed. In addition to an overview of recent progress of QUICR, we will demonstrate the capacity of an observation-based general uncertainty quantification (UQ) methodology via the ARM Climate Research Facility baseline cloud microphysical properties (MICROBASE) product. This UQ method utilizes the Karhunen-Loève expansion (KLE) and Central Limit Theorems (CLT) to quantify the retrieval uncertainties from observations and algorithm parameters. The input perturbations are imposed on major modes to take into account the cross correlations between input data, which greatly reduces the dimension of random variables (up to a factor of 50) and quantifies vertically resolved full probability distribution functions of retrieved quantities. Moreover, this KLE/CLT approach has the capability of attributing the uncertainties in the retrieval output to individual uncertainty sources and thus sheds light on improving the retrieval algorithm and observations. We will present the results of a case study for the ice water content at the Southern Great Plains during an intensive observing period on March 9, 2000. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  6. Uncertainty Estimation Improves Energy Measurement and Verification Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, Travis; Price, Phillip N.; Sohn, Michael D.

    2014-05-14

    Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement, so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy use to how much energy would have been used in the absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties with deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use using short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates which can provide actionable decision making information for investing in energy conservation measures.
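
    As an illustration of cross-validation as an uncertainty estimate (not the authors' regression model), the sketch below fits a simple linear baseline of energy use against outdoor temperature on synthetic data and reports the cross-validated prediction error.

    ```python
    # k-fold cross-validation of a simple linear baseline on synthetic meter data;
    # not the LBNL M&V code, just an illustration of the uncertainty estimate.
    import numpy as np

    rng = np.random.default_rng(7)
    temp = rng.uniform(0.0, 30.0, 300)                      # synthetic outdoor temperature (deg C)
    energy = 50.0 + 2.5 * temp + rng.normal(0.0, 5.0, 300)  # synthetic metered energy use

    k = 5
    folds = np.array_split(rng.permutation(len(temp)), k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coef = np.polyfit(temp[train], energy[train], 1)    # simple linear baseline model
        errors.append(energy[test] - np.polyval(coef, temp[test]))

    rmse = np.sqrt(np.mean(np.concatenate(errors) ** 2))
    print(f"cross-validated baseline prediction RMSE ~ {rmse:.1f}")
    ```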

  7. Emotion and decision-making under uncertainty: Physiological arousal predicts increased gambling during ambiguity but not risk.

    PubMed

    FeldmanHall, Oriel; Glimcher, Paul; Baker, Augustus L; Phelps, Elizabeth A

    2016-10-01

    Uncertainty, which is ubiquitous in decision-making, can be fractionated into known probabilities (risk) and unknown probabilities (ambiguity). Although research has illustrated that individuals more often avoid decisions associated with ambiguity compared to risk, it remains unclear why ambiguity is perceived as more aversive. Here we examine the role of arousal in shaping the representation of value and subsequent choice under risky and ambiguous decisions. To investigate the relationship between arousal and decisions of uncertainty, we measure skin conductance response-a quantifiable measure reflecting sympathetic nervous system arousal-during choices to gamble under risk and ambiguity. To quantify the discrete influences of risk and ambiguity sensitivity and the subjective value of each option under consideration, we model fluctuating uncertainty, as well as the amount of money that can be gained by taking the gamble. Results reveal that although arousal tracks the subjective value of a lottery regardless of uncertainty type, arousal differentially contributes to the computation of value-that is, choice-depending on whether the uncertainty is risky or ambiguous: Enhanced arousal adaptively decreases risk-taking only when the lottery is highly risky but increases risk-taking when the probability of winning is ambiguous (even after controlling for subjective value). Together, this suggests that the role of arousal during decisions of uncertainty is modulatory and highly dependent on the context in which the decision is framed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. On the radiative properties of soot aggregates part 1: Necking and overlapping

    NASA Astrophysics Data System (ADS)

    Yon, J.; Bescond, A.; Liu, F.

    2015-09-01

    There is a strong interest in accurately modelling the radiative properties of soot aggregates (also known as black carbon particles) emitted from combustion systems and fires, to gain an improved understanding of the role of black carbon in global warming. This study conducted a systematic investigation of the effects of overlapping and necking between neighbouring primary particles on the radiative properties of soot aggregates using the discrete dipole approximation. The degrees of overlapping and necking are quantified by the overlapping and necking parameters. Realistic soot aggregates were generated numerically by adding overlapping and necking to fractal aggregates formed by point-touch primary particles simulated using a diffusion-limited cluster aggregation algorithm. Radiative properties (differential scattering, absorption, total scattering, specific extinction, asymmetry factor and single scattering albedo) were calculated using the experimentally measured soot refractive index over the spectral range of 266-1064 nm for 9 combinations of the overlapping and necking parameters. Overlapping and necking significantly affect the absorption and scattering properties of soot aggregates, especially in the near-UV spectrum, due to the enhanced multiple scattering effects within an aggregate. By using correctly modified aggregate properties (fractal dimension, prefactor, primary particle radius, and the number of primary particles) and by accounting for the effects of multiple scattering, the simple Rayleigh-Debye-Gans theory for fractal aggregates can reproduce reasonably accurate radiative properties of realistic soot aggregates.

  9. Information Uncertainty to Compare Qualitative Reasoning Security Risk Assessment Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavez, Gregory M; Key, Brian P; Zerkle, David K

    2009-01-01

    The security risk associated with malevolent acts such as those of terrorism is often void of the historical data required for a traditional PRA. Most information available to conduct security risk assessments for these malevolent acts is obtained from subject matter experts as subjective judgements. Qualitative reasoning approaches such as approximate reasoning and evidential reasoning are useful for modeling the predicted risk from information provided by subject matter experts. Absent from these approaches is a consistent means to compare the security risk assessment results. Associated with each predicted risk reasoning result is a quantifiable amount of information uncertainty which can be measured and used to compare the results. This paper explores using entropy measures to quantify the information uncertainty associated with conflict and non-specificity in the predicted reasoning results. The measured quantities of conflict and non-specificity can ultimately be used to compare qualitative reasoning results, which is important in triage studies and ultimately resource allocation. Straightforward extensions of previous entropy measures are presented here to quantify the non-specificity and conflict associated with security risk assessment results obtained from qualitative reasoning models.
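
    As a small, concrete example of an entropy-like uncertainty measure over a body of evidence (the generalized Hartley non-specificity, a standard measure rather than the specific extensions used in the paper), the sketch below evaluates a hypothetical basic probability assignment over three qualitative risk levels.

    ```python
    # Generalized Hartley non-specificity of a hypothetical body of evidence;
    # this is a standard textbook measure, not the paper's specific formulation.
    import math

    def nonspecificity(bpa):
        """N(m) = sum over focal sets A of m(A) * log2(|A|)."""
        return sum(mass * math.log2(len(focal)) for focal, mass in bpa.items() if focal)

    # Hypothetical basic probability assignment over three qualitative risk levels
    bpa = {frozenset({"low"}): 0.2,
           frozenset({"low", "medium"}): 0.5,
           frozenset({"low", "medium", "high"}): 0.3}
    print(nonspecificity(bpa))   # larger value = a less specific (more uncertain) result
    ```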

  10. Developing a job-exposure matrix with exposure uncertainty from expert elicitation and data modeling.

    PubMed

    Fischer, Heidi J; Vergara, Ximena P; Yost, Michael; Silva, Michael; Lombardi, David A; Kheifets, Leeka

    2017-01-01

    Job exposure matrices (JEMs) are tools used to classify exposures for job titles based on general job tasks in the absence of individual level data. However, exposure uncertainty due to variations in worker practices, job conditions, and the quality of data has never been quantified systematically in a JEM. We describe a methodology for creating a JEM which defines occupational exposures on a continuous scale and utilizes elicitation methods to quantify exposure uncertainty by assigning exposures probability distributions with parameters determined through expert involvement. Experts use their knowledge to develop mathematical models using related exposure surrogate data in the absence of available occupational level data and to adjust model output against other similar occupations. Formal expert elicitation methods provided a consistent, efficient process to incorporate expert judgment into a large, consensus-based JEM. A population-based electric shock JEM was created using these methods, allowing for transparent estimates of exposure.

  11. Uncertainty Quantification using Epi-Splines and Soft Information

    DTIC Science & Technology

    2012-06-01

    ... use of the Kullback-Leibler divergence measure. The Kullback-Leibler ... to illustrate the application of soft information related to the Kullback-Leibler (KL) divergence discussed in Chapter 2. The idea behind applying ... information for the estimation of system performance density functions in order to quantify uncertainty. We conduct empirical testing of ...

  12. Crop model improvement reduces the uncertainty of the response to temperature of multi-model ensembles

    USDA-ARS?s Scientific Manuscript database

    To improve climate change impact estimates, multi-model ensembles (MMEs) have been suggested. MMEs enable quantifying model uncertainty, and their medians are more accurate than that of any single model when compared with observations. However, multi-model ensembles are costly to execute, so model i...

  13. Estimating uncertainties in watershed studies

    Treesearch

    John Campbell; Ruth Yanai; Mark Green

    2011-01-01

    Small watersheds have been used widely to quantify chemical fluxes and cycling in terrestrial ecosystems for about the past half century. The small watershed approach has been valuable in characterizing hydrologic and nutrient budgets, for instance, in estimating the net gain or loss of solutes in response to disturbance. However, the uncertainty in these ecosystem...

  14. Uncertainty in measuring runoff from small watersheds using instrumented outlet-pond

    USDA-ARS?s Scientific Manuscript database

    This study quantified the uncertainty associated with event runoff quantity monitored at watershed outlet ponds. Inflow and outflow depth data were collected from 2004 to 2011 at seven instrumented monitoring stations at the outlet of watersheds ranging in size from 35.2 to 159.5 ha on the USDA-ARS ...

  15. Gaussian intrinsic entanglement for states with partial minimum uncertainty

    NASA Astrophysics Data System (ADS)

    Mišta, Ladislav; Baksová, Klára

    2018-01-01

    We develop a recently proposed theory of a quantifier of bipartite Gaussian entanglement called Gaussian intrinsic entanglement (GIE) [L. Mišta, Jr. and R. Tatham, Phys. Rev. Lett. 117, 240505 (2016), 10.1103/PhysRevLett.117.240505]. Gaussian intrinsic entanglement provides a compromise between computable and physically meaningful entanglement quantifiers and so far it has been calculated for two-mode Gaussian states including all symmetric partial minimum-uncertainty states, weakly mixed asymmetric squeezed thermal states with partial minimum uncertainty, and weakly mixed symmetric squeezed thermal states. We improve the method of derivation of GIE and show that all previously derived formulas for GIE of weakly mixed states in fact hold for states with higher mixedness. In addition, we derive analytical formulas for GIE for several other classes of two-mode Gaussian states with partial minimum uncertainty. Finally, we show that, like for all previously known states, also for all currently considered states the GIE is equal to Gaussian Rényi-2 entanglement of formation. This finding strengthens a conjecture about the equivalence of GIE and Gaussian Rényi-2 entanglement of formation for all bipartite Gaussian states.

  16. A comprehensive evaluation of input data-induced uncertainty in nonpoint source pollution modeling

    NASA Astrophysics Data System (ADS)

    Chen, L.; Gong, Y.; Shen, Z.

    2015-11-01

    Watershed models have been used extensively for quantifying nonpoint source (NPS) pollution, but few studies have been conducted on the error-transitivity from different input data sets to NPS modeling. In this paper, the effects of four types of input data, including rainfall, digital elevation models (DEMs), land use maps, and the amount of fertilizer, on NPS simulation were quantified and compared. A systematic input-induced uncertainty was investigated using a watershed model for phosphorus load prediction. Based on the results, the rain gauge density resulted in the largest model uncertainty, followed by the DEMs, whereas land use and fertilizer amount exhibited limited impacts. The mean coefficients of variation for errors in the single rain gauge, multiple rain gauge, ASTER GDEM, NFGIS DEM, land use, and fertilizer amount information were 0.390, 0.274, 0.186, 0.073, 0.033 and 0.005, respectively. The use of specific input information, such as key gauges, is also highlighted to achieve the required model accuracy. In this sense, these results provide valuable information to other model-based studies for the control of prediction uncertainty.

  17. Factoring uncertainty into restoration modeling of in-situ leach uranium mines

    USGS Publications Warehouse

    Johnson, Raymond H.; Friedel, Michael J.

    2009-01-01

    Postmining restoration is one of the greatest concerns for uranium in-situ leach (ISL) mining operations. The ISL-affected aquifer needs to be returned to conditions specified in the mining permit (either premining or other specified conditions). When uranium ISL operations are completed, postmining restoration is usually achieved by injecting reducing agents into the mined zone. The objective of this process is to restore the aquifer to premining conditions by reducing the solubility of uranium and other metals in the ground water. Reactive transport modeling is a potentially useful method for simulating the effectiveness of proposed restoration techniques. While reactive transport models can be useful, they are a simplification of reality that introduces uncertainty through the model conceptualization, parameterization, and calibration processes. For this reason, quantifying the uncertainty in simulated temporal and spatial hydrogeochemistry is important for postremedial risk evaluation of metal concentrations and mobility. Quantifying the range of uncertainty in key predictions (such as uranium concentrations at a specific location) can be achieved using forward Monte Carlo or other inverse modeling techniques (trial-and-error parameter sensitivity, calibration constrained Monte Carlo). These techniques provide simulated values of metal concentrations at specified locations that can be presented as nonlinear uncertainty limits or probability density functions. Decisionmakers can use these results to better evaluate environmental risk as future metal concentrations with a limited range of possibilities, based on a scientific evaluation of uncertainty.

  18. Quantifying Uncertainty in Projections of Stratospheric Ozone Over the 21st Century

    NASA Technical Reports Server (NTRS)

    Charlton-Perez, A. J.; Hawkins, E.; Eyring, V.; Cionni, I.; Bodeker, G. E.; Kinnison, D. E.; Akiyoshi, H.; Frith, S. M.; Garcia, R.; Gettelman, A.; hide

    2010-01-01

    Future stratospheric ozone concentrations will be determined both by changes in the concentration of ozone depleting substances (ODSs) and by changes in stratospheric and tropospheric climate, including those caused by changes in anthropogenic greenhouse gases (GHGs). Since future economic development pathways and resultant emissions of GHGs are uncertain, anthropogenic climate change could be a significant source of uncertainty for future projections of stratospheric ozone. In this pilot study, using an ensemble of opportunity of chemistry-climate model (CCM) simulations, the contribution of scenario uncertainty from different plausible emissions pathways for ODSs and GHGs to future ozone projections is quantified relative to the contribution from model uncertainty and internal variability of the chemistry-climate system. For both the global, annual mean ozone concentration and for ozone in specific geographical regions, differences between CCMs are the dominant source of uncertainty for the first two-thirds of the 21st century, up to and after the time when ozone concentrations return to 1980 values. In the last third of the 21st century, dependent upon the set of greenhouse gas scenarios used, scenario uncertainty can be the dominant contributor. This result suggests that investment in chemistry-climate modelling is likely to continue to refine projections of stratospheric ozone and estimates of the return of stratospheric ozone concentrations to pre-1980 levels.

  19. Quantifying Uncertainties in Mass-Dimensional Relationships Through a Comparison Between CloudSat and SPartICus Reflectivity Factors

    NASA Astrophysics Data System (ADS)

    Mascio, J.; Mace, G. G.

    2015-12-01

    CloudSat and CALIPSO, two of the satellites in the A-Train constellation, use algorithms, such as the T-matrix method, to calculate the scattering properties of small cloud particles. Ice clouds (i.e. cirrus) cause problems for these cloud property retrieval algorithms because of the variability of ice mass as a function of particle size. Assumptions regarding the microphysical properties, such as mass-dimensional (m-D) relationships, are often necessary in retrieval algorithms for simplification, but these assumptions create uncertainties of their own. Therefore, ice cloud property retrieval uncertainties can be substantial and are often not well known. To investigate these uncertainties, reflectivity factors measured by CloudSat are compared to those calculated from particle size distributions (PSDs) to which different m-D relationships are applied. These PSDs are from data collected in situ during three flights of the Small Particles in Cirrus (SPartICus) campaign. We find that no specific habit emerges as preferred and instead conclude that the microphysical characteristics of ice crystal populations tend to be distributed over a continuum and, therefore, cannot be categorized easily. To quantify the uncertainties in the mass-dimensional relationships, an optimal estimation inversion was run to retrieve the m-D relationship for each SPartICus flight, as well as to calculate uncertainties of the m-D power law.
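
    For context, the forward step being compared against observations can be sketched as below (a simplified Rayleigh-regime calculation with a hypothetical size distribution, not the study's retrieval): particle mass follows an assumed power law m = a*D^b, and each particle contributes to reflectivity in proportion to the square of its mass; the m-D coefficients and the ice/water dielectric-factor ratio are illustrative conventions, not values from the paper.

    ```python
    # Simplified Rayleigh-regime reflectivity from a hypothetical ice PSD under an
    # assumed m-D power law; coefficients and dielectric ratio are illustrative.
    import numpy as np

    def reflectivity_from_psd(D, N_bin, a, b, rho_ice=917.0, k_sq_ratio=0.174 / 0.93):
        """Ze (mm^6 m^-3) for an ice PSD with mass law m = a * D**b. Each particle is
        treated as a solid-ice sphere of equal mass; k_sq_ratio is a commonly used
        ice/water dielectric-factor ratio (an assumption here)."""
        m = a * D**b                                            # particle mass (kg)
        d_ice = (6.0 * m / (np.pi * rho_ice)) ** (1.0 / 3.0)    # ice-equivalent diameter (m)
        return k_sq_ratio * np.sum(N_bin * d_ice**6) * 1e18     # convert m^6 to mm^6

    D = np.linspace(50e-6, 2e-3, 40)                 # maximum dimensions (m)
    dD = D[1] - D[0]
    N_bin = 1e6 * np.exp(-2500.0 * D) * dD           # hypothetical exponential PSD (m^-3 per bin)

    for a, b in [(0.01, 1.9), (0.05, 2.1)]:          # two purely illustrative m-D laws (SI units)
        ze = reflectivity_from_psd(D, N_bin, a, b)
        print(f"a={a}, b={b}: {10 * np.log10(ze):.1f} dBZ")
    ```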

  20. Quantifying the value of redundant measurements at GCOS Reference Upper-Air Network sites

    DOE PAGES

    Madonna, F.; Rosoldi, M.; Güldner, J.; ...

    2014-11-19

    The potential for measurement redundancy to reduce uncertainty in atmospheric variables has not been investigated comprehensively for climate observations. We evaluated the usefulness of entropy and mutual correlation concepts, as defined in information theory, for quantifying random uncertainty and redundancy in time series of the integrated water vapour (IWV) and water vapour mixing ratio profiles provided by five highly instrumented GRUAN (GCOS, Global Climate Observing System, Reference Upper-Air Network) stations in 2010–2012. Results show that the random uncertainties on the IWV measured with radiosondes, global positioning system, microwave and infrared radiometers, and Raman lidar measurements differed by less than 8%. Comparisons of time series of IWV content from ground-based remote sensing instruments with in situ soundings showed that microwave radiometers have the highest redundancy with the IWV time series measured by radiosondes and therefore the highest potential to reduce the random uncertainty of the radiosonde time series. Moreover, the random uncertainty of a time series from one instrument can be reduced by ~ 60% by constraining the measurements with those from another instrument. The best reduction of random uncertainty is achieved by conditioning Raman lidar measurements with microwave radiometer measurements. In conclusion, specific instruments are recommended for atmospheric water vapour measurements at GRUAN sites. This approach can be applied to the study of redundant measurements for other climate variables.
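
    The basic information-theoretic quantities involved can be illustrated as follows (synthetic data, not the GRUAN processing): entropy and mutual information of two correlated IWV time series are estimated from histograms, with mutual information serving as a redundancy proxy.

    ```python
    # Histogram-based entropy and mutual information of two synthetic IWV series;
    # an illustration of the concepts, not the GRUAN analysis itself.
    import numpy as np

    def entropy_bits(p):
        """Shannon entropy (bits) of a discrete distribution given as probabilities."""
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    rng = np.random.default_rng(5)
    iwv_sonde = rng.gamma(4.0, 5.0, 2000)              # hypothetical radiosonde IWV (kg m^-2)
    iwv_mwr = iwv_sonde + rng.normal(0.0, 1.0, 2000)   # co-located microwave radiometer IWV

    joint, _, _ = np.histogram2d(iwv_sonde, iwv_mwr, bins=25)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    mi = entropy_bits(px) + entropy_bits(py) - entropy_bits(pxy.ravel())
    print(f"H(sonde) = {entropy_bits(px):.2f} bits, I(sonde; mwr) = {mi:.2f} bits")
    ```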

  1. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
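
    A minimal sketch of the criterion-based weighting referred to above: information-criterion values (AIC, BIC or KIC) are converted into relative model weights via w_k proportional to exp(-ΔIC_k/2). The conceptual-model names and IC values are hypothetical; GLUE would instead weight Monte Carlo runs with a likelihood measure.

```python
# Convert information-criterion values into model-averaging weights.
import numpy as np

def ic_weights(ic_values):
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()          # differences from the best (lowest) criterion value
    w = np.exp(-0.5 * delta)
    return w / w.sum()

ic = {"layered": 312.4, "heterogeneous": 308.1, "fault-zone": 315.0}   # hypothetical
weights = ic_weights(list(ic.values()))
for name, w in zip(ic, weights):
    print(f"{name:15s} weight = {w:.3f}")
# Model-averaged prediction = sum_k w_k * prediction_k; the predictive variance adds
# between-model spread to the within-model variance.
```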

  2. Uncertainty quantification and validation of 3D lattice scaffolds for computer-aided biomedical applications.

    PubMed

    Gorguluarslan, Recep M; Choi, Seung-Kyum; Saldana, Christopher J

    2017-07-01

    A methodology is proposed for uncertainty quantification and validation to accurately predict the mechanical response of lattice structures used in the design of scaffolds. Effective structural properties of the scaffolds are characterized using a developed multi-level stochastic upscaling process that propagates the quantified uncertainties at strut level to the lattice structure level. To obtain realistic simulation models for the stochastic upscaling process and minimize the experimental cost, high-resolution finite element models of individual struts were reconstructed from the micro-CT scan images of lattice structures which are fabricated by selective laser melting. The upscaling method facilitates the process of determining homogenized strut properties to reduce the computational cost of the detailed simulation model for the scaffold. Bayesian Information Criterion is utilized to quantify the uncertainties with parametric distributions based on the statistical data obtained from the reconstructed strut models. A systematic validation approach that can minimize the experimental cost is also developed to assess the predictive capability of the stochastic upscaling method used at the strut level and lattice structure level. In comparison with physical compression test results, the proposed methodology of linking the uncertainty quantification with the multi-level stochastic upscaling method enabled an accurate prediction of the elastic behavior of the lattice structure with minimal experimental cost by accounting for the uncertainties induced by the additive manufacturing process. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Soil Aggregate Stability and Grassland Productivity Associations in a Northern Mixed-Grass Prairie

    PubMed Central

    Reinhart, Kurt O.; Vermeire, Lance T.

    2016-01-01

    Soil aggregate stability data are often predicted to be positively associated with measures of plant productivity, rangeland health, and ecosystem functioning. Here we revisit the hypothesis that soil aggregate stability is positively associated with plant productivity. We measured local (plot-to-plot) variation in grassland community composition, plant (aboveground) biomass, root biomass, % water-stable soil aggregates, and topography. After accounting for spatial autocorrelation, we observed a negative association between % water-stable soil aggregates (0.25–1 and 1–2 mm size classes of macroaggregates) and dominant graminoid biomass, and negative associations between the % water-stable aggregates and the root biomass of a dominant sedge (Carex filifolia). However, variation in total root biomass (0–10 or 0–30 cm depths) was either negatively or not appreciably associated with soil aggregate stabilities. Overall, regression slope coefficients were consistently negative thereby indicating the general absence of a positive association between measures of plant productivity and soil aggregate stability for the study area. The predicted positive association between factors was likely confounded by variation in plant species composition. Specifically, sampling spanned a local gradient in plant community composition which was likely driven by niche partitioning along a subtle gradient in elevation. Our results suggest an apparent trade-off between some measures of plant biomass production and soil aggregate stability, both known to affect the land’s capacity to resist erosion. These findings further highlight the uncertainty of plant biomass-soil stability associations. PMID:27467598

  4. Soil Aggregate Stability and Grassland Productivity Associations in a Northern Mixed-Grass Prairie.

    PubMed

    Reinhart, Kurt O; Vermeire, Lance T

    2016-01-01

    Soil aggregate stability data are often predicted to be positively associated with measures of plant productivity, rangeland health, and ecosystem functioning. Here we revisit the hypothesis that soil aggregate stability is positively associated with plant productivity. We measured local (plot-to-plot) variation in grassland community composition, plant (aboveground) biomass, root biomass, % water-stable soil aggregates, and topography. After accounting for spatial autocorrelation, we observed a negative association between % water-stable soil aggregates (0.25-1 and 1-2 mm size classes of macroaggregates) and dominant graminoid biomass, and negative associations between the % water-stable aggregates and the root biomass of a dominant sedge (Carex filifolia). However, variation in total root biomass (0-10 or 0-30 cm depths) was either negatively or not appreciably associated with soil aggregate stabilities. Overall, regression slope coefficients were consistently negative thereby indicating the general absence of a positive association between measures of plant productivity and soil aggregate stability for the study area. The predicted positive association between factors was likely confounded by variation in plant species composition. Specifically, sampling spanned a local gradient in plant community composition which was likely driven by niche partitioning along a subtle gradient in elevation. Our results suggest an apparent trade-off between some measures of plant biomass production and soil aggregate stability, both known to affect the land's capacity to resist erosion. These findings further highlight the uncertainty of plant biomass-soil stability associations.

  5. Risk Management for Weapon Systems Acquisition: A Decision Support System

    DTIC Science & Technology

    1985-02-28

    includes the program evaluation and review technique (PERT) for network analysis, the PMRM for quantifying risk , an optimization package for generating...Despite the inclusion of uncertainty in time, PERT can at best be considered as a tool for quantifying risk with regard to the time element only. Moreover

  6. Covariance propagation in spectral indices

    DOE PAGES

    Griffin, P. J.

    2015-01-09

    The dosimetry community has a history of using spectral indices to support neutron spectrum characterization and cross section validation efforts. An important aspect of this type of analysis is the proper consideration of the contribution of the spectrum uncertainty to the total uncertainty in calculated spectral indices (SIs). This study identifies deficiencies in the traditional treatment of the SI uncertainty, provides simple bounds to the spectral component in the SI uncertainty estimates, verifies that these estimates are reflected in actual applications, details a methodology that rigorously captures the spectral contribution to the uncertainty in the SI, and provides quantified examples that demonstrate the importance of the proper treatment of the spectral contribution to the uncertainty in the SI.
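
    A hedged sketch of first-order ("sandwich") covariance propagation into a spectral index, here taken as the ratio of two spectrum-averaged reaction rates: the assumed spectrum covariance is propagated through the sensitivity vector of the index. The cross sections, spectrum and covariance below are synthetic placeholders, not dosimetry library values.

```python
# First-order propagation of spectrum covariance into SI = sum(sigma1*phi)/sum(sigma2*phi).
import numpy as np

rng = np.random.default_rng(1)
n = 30
phi = rng.uniform(0.5, 1.5, n)        # group fluxes (arbitrary units, placeholder)
sigma1 = rng.uniform(0.1, 2.0, n)     # reaction 1 group cross sections (placeholder)
sigma2 = rng.uniform(0.1, 2.0, n)     # reaction 2 group cross sections (placeholder)

r1, r2 = sigma1 @ phi, sigma2 @ phi
si = r1 / r2

# Sensitivity of SI to each group flux: dSI/dphi_g = (sigma1_g*r2 - sigma2_g*r1)/r2**2
g = (sigma1 * r2 - sigma2 * r1) / r2**2

# Assumed 5% uncorrelated relative uncertainty on the spectrum (placeholder covariance)
cov_phi = np.diag((0.05 * phi) ** 2)
var_si = g @ cov_phi @ g
print(f"SI = {si:.3f} +/- {np.sqrt(var_si):.3f} (spectrum contribution only)")
```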

  7. a New Model for Fuzzy Personalized Route Planning Using Fuzzy Linguistic Preference Relation

    NASA Astrophysics Data System (ADS)

    Nadi, S.; Houshyaripour, A. H.

    2017-09-01

    This paper proposes a new model for personalized route planning under uncertain conditions. Personalized routing involves different sources of uncertainty, which can arise from users' ambiguity about their preferences, imprecise criteria values and the modelling process. The proposed model uses the Fuzzy Linguistic Preference Relation Analytical Hierarchical Process (FLPRAHP) to analyse user preferences under uncertainty. Routing is a multi-criteria task, especially in transportation networks, where users wish to optimize their routes based on different criteria. However, because knowledge about the preferences of different users is incomplete and the criteria values are uncertain, we propose a new personalized fuzzy routing method based on fuzzy ranking using the center of gravity. The model employs the FLPRAHP method to aggregate uncertain criteria values with respect to uncertain user preferences while improving consistency with the fewest possible comparisons. An illustrative example demonstrates the effectiveness and capability of the proposed model in calculating the best personalized route under fuzziness and uncertainty.

  8. Modular Exposure Disaggregation Methodologies for Catastrophe Modelling using GIS and Remotely-Sensed Data

    NASA Astrophysics Data System (ADS)

    Foulser-Piggott, R.; Saito, K.; Spence, R.

    2012-04-01

    Loss estimates produced by catastrophe models are dependent on the quality of the input data, including both the hazard and exposure data. Currently, some of the exposure data input into a catastrophe model is aggregated over an area and therefore an estimate of the risk in this area may have a low level of accuracy. In order to obtain a more detailed and accurate loss estimate, it is necessary to have higher resolution exposure data. However, high resolution exposure data is not commonly available worldwide and therefore methods to infer building distribution and characteristics at higher resolution from existing information must be developed. This study is focussed on the development of disaggregation methodologies for exposure data which, if implemented in current catastrophe models, would lead to improved loss estimates. The new methodologies developed for disaggregating exposure data make use of GIS, remote sensing and statistical techniques. The main focus of this study is on earthquake risk, however the methods developed are modular so that they may be applied to different hazards. A number of different methods are proposed in order to be applicable to different regions of the world which have different amounts of data available. The new methods give estimates of both the number of buildings in a study area and a distribution of building typologies, as well as a measure of the vulnerability of the building stock to hazard. For each method, a way to assess and quantify the uncertainties in the methods and results is proposed, with particular focus on developing an index to enable input data quality to be compared. The applicability of the methods is demonstrated through testing for two study areas, one in Japan and the second in Turkey, selected because of the occurrence of recent and damaging earthquake events. The testing procedure is to use the proposed methods to estimate the number of buildings damaged at different levels following a scenario earthquake event. This enables the results of the models to be compared with real data and the relative performance of the different methodologies to be evaluated. A sensitivity analysis is also conducted for two main reasons. Firstly, to determine the key input variables in the methodology that have the most significant impact on the resulting loss estimate. Secondly, to enable the uncertainty in the different approaches to be quantified and therefore provide a range of uncertainty in the loss estimates.

  9. Uncertainty in future projections of global and regional marine fisheries catches

    NASA Astrophysics Data System (ADS)

    Reygondeau, G.; Cheung, W. W. L.; Froelicher, T. L.; Stock, C. A.; Jones, M. C.; Sarmiento, J. L.

    2016-02-01

    Previous studies have projected the global redistribution of potential marine fisheries catches by mid-21st century under climate change, with increases in high latitude regions and pronounced decreases in tropical biomes. However, quantified confidence levels of such projections are not available, rendering it difficult to interpret the associated risk to society. This paper quantifies the confidence of changes in future fish production using a 30-member ensemble simulation of the Geophysical Fluid Dynamics Laboratory ESM2M (representing internal variability of oceanographic conditions), three structural variants of a mechanistic species distribution model (representing uncertainty in fisheries models), and different greenhouse gas emission and fishing scenarios (representing scenario uncertainty). We project that total potential catches of 500 exploited fish and invertebrate stocks, which contribute most to regional fisheries catches and their variability, will likely decrease in the 21st century under a `business-as-usual' greenhouse gas emission scenario (RCP8.5). Fishing and its management remain major factors determining future fish stocks and their catches. Internal variability of projected ocean conditions, including temperature, oxygen level, pH, net primary production and sea ice contributes substantially to the uncertainty of potential catch projections. Regionally, climate-driven decreases in potential catches in tropical oceans and increases in the Arctic polar regions are projected with higher confidence than other regions, while the direction of changes in most mid-latitude (or temperate) regions is uncertain. Under a stringent greenhouse gas mitigation scenario (RCP 2.6), climate change impacts on potential catches may not emerge from their uncertainties. Overall, this study provides a foundation for quantifying risks of climate change impacts on marine fisheries globally and regionally, and how such risk may be altered by policy interventions.

  10. A global wind resource atlas including high-resolution terrain effects

    NASA Astrophysics Data System (ADS)

    Hahmann, Andrea; Badger, Jake; Olsen, Bjarke; Davis, Neil; Larsen, Xiaoli; Badger, Merete

    2015-04-01

    Currently no accurate global wind resource dataset is available to fill the needs of policy makers and strategic energy planners. Evaluating wind resources directly from coarse resolution reanalysis datasets underestimates the true wind energy resource, as the small-scale spatial variability of winds is missing. This missing variability can account for a large part of the local wind resource. Crucially, it is the windiest sites that suffer the largest wind resource errors: in simple terrain the windiest sites may be underestimated by 25%, in complex terrain the underestimate can be as large as 100%. The small-scale spatial variability of winds can be modelled using novel statistical methods and by application of established microscale models within WAsP developed at DTU Wind Energy. We present the framework for a single global methodology, which is relatively fast and economical to complete. The method employs reanalysis datasets, which are downscaled to high-resolution wind resource datasets via a so-called generalization step, and microscale modelling using WAsP. This method will create the first global wind atlas (GWA) that covers all land areas (except Antarctica) and a 30 km coastal zone over water. Verification of the GWA estimates will be done at carefully selected test regions, against verified estimates from mesoscale modelling and satellite synthetic aperture radar (SAR). This verification exercise will also help in the estimation of the uncertainty of the new wind climate dataset. Uncertainty will be assessed as a function of spatial aggregation. It is expected that the uncertainty at verification sites will be larger than that of dedicated assessments, but the uncertainty will be reduced at levels of aggregation appropriate for energy planning, and importantly much improved relative to what is used today. In this presentation we discuss the methodology used, which includes the generalization of wind climatologies, and the differences in local and spatially aggregated wind resources that result from using different reanalyses in the various verification regions. A prototype web interface for the public access to the data will also be showcased.

  11. Flocculation kinetics and aggregate structure of kaolinite mixtures in laminar tube flow.

    PubMed

    Vaezi G, Farid; Sanders, R Sean; Masliyah, Jacob H

    2011-03-01

    Flocculation is commonly used in various solid-liquid separation processes in chemical and mineral industries to separate desired products or to treat waste streams. This paper presents an experimental technique to study flocculation processes in laminar tube flow. This approach allows for more realistic estimation of the shear rate to which an aggregate is exposed, as compared to more complicated shear fields (e.g. stirred tanks). A direct sampling method is used to minimize the effect of sampling on the aggregate structure. A combination of aggregate settling velocity and image analysis was used to quantify the structure of the aggregate. Aggregate size, density, and fractal dimension were found to be the most important aggregate structural parameters. The two methods used to determine aggregate fractal dimension were in good agreement. The effects of advective flow through an aggregate's porous structure and transition-regime drag coefficient on the evaluation of aggregate density were considered. The technique was applied to investigate the flocculation kinetics and the evolution of the aggregate structure of kaolin particles with an anionic flocculant under conditions similar to those of oil sands fine tailings. Aggregates were formed using a well controlled two-stage aggregation process. Detailed statistical analysis was performed to investigate the establishment of dynamic equilibrium condition in terms of aggregate size and density evolution. An equilibrium steady state condition was obtained within 90 s of the start of flocculation; after which no further change in aggregate structure was observed. Although longer flocculation times inside the shear field could conceivably cause aggregate structure conformation, statistical analysis indicated that this did not occur for the studied conditions. The results show that the technique and experimental conditions employed here produce aggregates having a well-defined, reproducible structure. Copyright © 2011. Published by Elsevier Inc.

  12. An Approach to Forecasting Health Expenditures, with Application to the U.S. Medicare System

    PubMed Central

    Lee, Ronald; Miller, Timothy

    2002-01-01

    Objective To quantify uncertainty in forecasts of health expenditures. Study Design Stochastic time series models are estimated for historical variations in fertility, mortality, and health spending per capita in the United States, and used to generate stochastic simulations of the growth of Medicare expenditures. Individual health spending is modeled to depend on the number of years until death. Data Sources/Study Setting A simple accounting model is developed for forecasting health expenditures, using the U.S. Medicare system as an example. Principal Findings Medicare expenditures are projected to rise from 2.2 percent of GDP (gross domestic product) to about 8 percent of GDP by 2075. This increase is due in equal measure to increasing health spending per beneficiary and to population aging. The traditional projection method constructs high, medium, and low scenarios to assess uncertainty, an approach that has many problems. Using stochastic forecasting, we find a 95 percent probability that Medicare spending in 2075 will fall between 4 percent and 18 percent of GDP, indicating a wide band of uncertainty. Although there is substantial uncertainty about future mortality decline, it contributed little to uncertainty about future Medicare spending, since lower mortality both raises the number of elderly, tending to raise spending, and is associated with improved health of the elderly, tending to reduce spending. Uncertainty about fertility, by contrast, leads to great uncertainty about the future size of the labor force, and therefore adds importantly to uncertainty about the health-share of GDP. In the shorter term, the major source of uncertainty is health spending per capita. Conclusions History is a valuable guide for quantifying our uncertainty about future health expenditures. The probabilistic model we present has several advantages over the high–low scenario approach to forecasting. It indicates great uncertainty about future Medicare expenditures relative to GDP. PMID:12479501

  13. Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models

    USGS Publications Warehouse

    Phillips, D.L.; Marks, D.G.

    1996-01-01

    In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700–1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
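
    A schematic Monte Carlo propagation of interpolation (kriging) uncertainty in the spirit of this record: the kriged inputs at one grid cell are perturbed by their kriging SDs and the spread of a derived quantity is summarized. The PET formula here is a deliberately simple placeholder, not the model used by the authors, and the correlations among interpolation errors used in the study are omitted.

```python
# Monte Carlo propagation of kriging uncertainty through a placeholder PET function.
import numpy as np

rng = np.random.default_rng(42)

def toy_pet(temp_c, rel_hum, wind_ms):
    """Placeholder PET (mm/day): rises with temperature and wind, falls with humidity."""
    return np.maximum(0.0, 0.3 * temp_c + 0.5 * wind_ms - 0.04 * rel_hum + 2.0)

# Kriged values and kriging SDs at one grid cell (illustrative numbers)
t_hat, t_sd = 12.0, 2.6        # deg C
rh_hat, rh_sd = 55.0, 8.7      # %
u_hat, u_sd = 3.0, 0.38        # m/s

n_mc = 100
pet = toy_pet(rng.normal(t_hat, t_sd, n_mc),
              rng.normal(rh_hat, rh_sd, n_mc),
              rng.normal(u_hat, u_sd, n_mc))
print(f"PET mean = {pet.mean():.2f} mm/day, CV = {100 * pet.std() / pet.mean():.1f}%")
```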

  14. Harnessing the theoretical foundations of the exponential and beta-Poisson dose-response models to quantify parameter uncertainty using Markov Chain Monte Carlo.

    PubMed

    Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward

    2013-09-01

    Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
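
    A minimal sketch of the two dose-response forms discussed here: the exponential model P(d) = 1 - exp(-r d) and the conventional beta-Poisson approximation P(d) = 1 - (1 + d/beta)^(-alpha). Parameter values are illustrative only, and the record's MCMC sampling of the parameter posteriors (in OpenBUGS) is not reproduced.

```python
# Exponential and approximate beta-Poisson dose-response curves.
import numpy as np

def p_exponential(dose, r):
    return 1.0 - np.exp(-r * dose)

def p_beta_poisson_approx(dose, alpha, beta):
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

doses = np.logspace(0, 6, 7)                         # doses from 1 to 1e6 organisms
print(p_exponential(doses, r=1e-4))                  # illustrative r
print(p_beta_poisson_approx(doses, alpha=0.3, beta=5000.0))   # illustrative alpha, beta
```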

  15. Quantifying uncertainty in carbon and nutrient pools of coarse woody debris

    NASA Astrophysics Data System (ADS)

    See, C. R.; Campbell, J. L.; Fraver, S.; Domke, G. M.; Harmon, M. E.; Knoepp, J. D.; Woodall, C. W.

    2016-12-01

    Woody detritus constitutes a major pool of both carbon and nutrients in forested ecosystems. Estimating coarse wood stocks relies on many assumptions, even when full surveys are conducted. Researchers rarely report error in coarse wood pool estimates, despite the importance to ecosystem budgets and modelling efforts. To date, no study has attempted a comprehensive assessment of error rates and uncertainty inherent in the estimation of this pool. Here, we use Monte Carlo analysis to propagate the error associated with the major sources of uncertainty present in the calculation of coarse wood carbon and nutrient (i.e., N, P, K, Ca, Mg, Na) pools. We also evaluate individual sources of error to identify the importance of each source of uncertainty in our estimates. We quantify sampling error by comparing the three most common field methods used to survey coarse wood (two transect methods and a whole-plot survey). We quantify the measurement error associated with length and diameter measurement, and technician error in species identification and decay class using plots surveyed by multiple technicians. We use previously published values of model error for the four most common methods of volume estimation: Smalian's, conical frustum, conic paraboloid, and average-of-ends. We also use previously published values for error in the collapse ratio (cross-sectional height/width) of decayed logs that serves as a surrogate for the volume remaining. We consider sampling error in chemical concentration and density for all decay classes, using distributions from both published and unpublished studies. Analytical uncertainty is calculated using standard reference plant material from the National Institute of Standards. Our results suggest that technician error in decay classification can have a large effect on uncertainty, since many of the error distributions included in the calculation (e.g. density, chemical concentration, volume-model selection, collapse ratio) are decay-class specific.
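
    A schematic Monte Carlo error budget for a single log's carbon pool, illustrating the kind of propagation described here: carbon = volume x decay-class density x carbon concentration, with each factor perturbed by an assumed uncertainty. The distributions are placeholders, not the study's fitted distributions.

```python
# Monte Carlo propagation of measurement, density and concentration uncertainty.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

length = rng.normal(6.0, 0.1, n)             # m, length measurement error
diameter = rng.normal(0.30, 0.01, n)         # m, diameter measurement error
volume = np.pi * (diameter / 2) ** 2 * length   # simple cylinder; the study compares several volume formulas

density = rng.normal(250.0, 60.0, n)         # kg m^-3, decay-class wood density
c_conc = rng.normal(0.49, 0.02, n)           # carbon mass fraction

carbon_kg = volume * density * c_conc
lo, hi = np.percentile(carbon_kg, [2.5, 97.5])
print(f"C pool: {carbon_kg.mean():.1f} kg (95% interval {lo:.1f}-{hi:.1f} kg)")
```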

  16. A method for acquiring random range uncertainty probability distributions in proton therapy

    NASA Astrophysics Data System (ADS)

    Holloway, S. M.; Holloway, M. D.; Thomas, S. J.

    2018-01-01

    In treatment planning we depend upon accurate knowledge of geometric and range uncertainties. If the uncertainty model is inaccurate then the plan will produce under-dosing of the target and/or overdosing of OAR. We aim to provide a method for which centre and site-specific population range uncertainty due to inter-fraction motion can be quantified to improve the uncertainty model in proton treatment planning. Daily volumetric MVCT data from previously treated radiotherapy patients has been used to investigate inter-fraction changes to water equivalent path-length (WEPL). Daily image-guidance scans were carried out for each patient and corrected for changes in CTV position (using rigid transformations). An effective depth algorithm was used to determine residual range changes, after corrections had been applied, throughout the treatment by comparing WEPL within the CTV at each fraction for several beam angles. As a proof of principle this method was used to quantify uncertainties for inter-fraction range changes for a sample of head and neck patients of Σ=3.39 mm, σ = 4.72 mm and overall mean = -1.82 mm. For prostate Σ=5.64 mm, σ = 5.91 mm and overall mean = 0.98 mm. The choice of beam angle for head and neck did not affect the inter-fraction range error significantly; however this was not the same for prostate. Greater range changes were seen using a lateral beam compared to an anterior beam for prostate due to relative motion of the prostate and femoral heads. A method has been developed to quantify population range changes due to inter-fraction motion that can be adapted for the clinic. The results of this work highlight the importance of robust planning and analysis in proton therapy. Such information could be used in robust optimisation algorithms or treatment plan robustness analysis. Such knowledge will aid in establishing beam start conditions at planning and for establishing adaptive planning protocols.
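
    The sketch below shows one common radiotherapy convention for population range statistics; whether it matches the authors' exact definitions of Σ and σ is an assumption. Per patient, the mean and SD of the per-fraction WEPL change are computed; the overall mean M, the systematic component Σ (SD of the patient means) and the random component σ (RMS of the patient SDs) then summarize the population. Input values are synthetic.

```python
# Population range statistics from per-fraction WEPL changes (assumed convention).
import numpy as np

rng = np.random.default_rng(3)
# rows = patients, columns = fractions; residual WEPL change in mm (synthetic)
wepl_change = rng.normal(loc=rng.normal(-1.8, 3.4, size=(12, 1)),
                         scale=4.7, size=(12, 25))

patient_means = wepl_change.mean(axis=1)
patient_sds = wepl_change.std(axis=1, ddof=1)

M = patient_means.mean()                      # overall mean
Sigma = patient_means.std(ddof=1)             # systematic (patient-to-patient) spread
sigma = np.sqrt(np.mean(patient_sds ** 2))    # random (fraction-to-fraction) spread
print(f"M = {M:.2f} mm, Sigma = {Sigma:.2f} mm, sigma = {sigma:.2f} mm")
```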

  17. A new framework for quantifying uncertainties in modelling studies for future climates - how more certain are CMIP5 precipitation and temperature simulations compared to CMIP3?

    NASA Astrophysics Data System (ADS)

    Sharma, A.; Woldemeskel, F. M.; Sivakumar, B.; Mehrotra, R.

    2014-12-01

    We outline a new framework for assessing uncertainties in model simulations, be they hydro-ecological simulations for known scenarios, or climate simulations for assumed scenarios representing the future. This framework is illustrated here using GCM projections for future climates for hydrologically relevant variables (precipitation and temperature), with the uncertainty segregated into three dominant components - model uncertainty, scenario uncertainty (representing greenhouse gas emission scenarios), and ensemble uncertainty (representing uncertain initial conditions and states). A novel uncertainty metric, the Square Root Error Variance (SREV), is used to quantify the uncertainties involved. The SREV requires: (1) Interpolating raw and corrected GCM outputs to a common grid; (2) Converting these to percentiles; (3) Estimating SREV for model, scenario, initial condition and total uncertainty at each percentile; and (4) Transforming SREV to a time series. The outcome is a spatially varying series of SREVs associated with each model that can be used to assess how uncertain the system is at each simulated point or time. This framework, while illustrated in a climate change context, is completely applicable for assessment of uncertainties any modelling framework may be subject to. The proposed method is applied to monthly precipitation and temperature from 6 CMIP3 and 13 CMIP5 GCMs across the world. For CMIP3, B1, A1B and A2 scenarios whereas for CMIP5, RCP2.6, RCP4.5 and RCP8.5 representing low, medium and high emissions are considered. For both CMIP3 and CMIP5, model structure is the largest source of uncertainty, which reduces significantly after correcting for biases. Scenario uncertainly increases, especially for temperature, in future due to divergence of the three emission scenarios analysed. While CMIP5 precipitation simulations exhibit a small reduction in total uncertainty over CMIP3, there is almost no reduction observed for temperature projections. Estimation of uncertainty in both space and time sheds lights on the spatial and temporal patterns of uncertainties in GCM outputs, providing an effective platform for risk-based assessments of any alternate plans or decisions that may be formulated using GCM simulations.
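
    As a rough illustration of the kind of decomposition the framework performs, the sketch below partitions the spread of an ensemble indexed by (model, scenario, ensemble member) into model, scenario and internal-variability components and reports their square roots. This is a simplified schematic partition under assumed synthetic data, not the authors' exact SREV formulation or its percentile-based workflow.

```python
# Schematic partition of projection spread into model, scenario and ensemble components.
import numpy as np

rng = np.random.default_rng(11)
x = (rng.normal(0, 1.0, (13, 1, 1))          # model offsets (13 GCMs)
     + rng.normal(0, 0.5, (1, 3, 1))         # scenario offsets (3 emission pathways)
     + rng.normal(0, 0.3, (13, 3, 5)))       # internal variability (5 members)

model_var = x.mean(axis=(1, 2)).var(ddof=1)       # spread of model means
scenario_var = x.mean(axis=(0, 2)).var(ddof=1)    # spread of scenario means
ensemble_var = x.var(axis=2, ddof=1).mean()       # mean within-run spread

for name, v in [("model", model_var), ("scenario", scenario_var), ("ensemble", ensemble_var)]:
    print(f"{name:9s} sqrt(error variance) ~ {np.sqrt(v):.2f}")
```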

  18. Capturing the complexity of uncertainty language to maximise its use.

    NASA Astrophysics Data System (ADS)

    Juanchich, Marie; Sirota, Miroslav

    2016-04-01

    Uncertainty is often communicated verbally, using uncertainty phrases such as 'there is a small risk of earthquake', 'flooding is possible' or 'it is very likely the sea level will rise'. Prior research has only examined a limited number of properties of uncertainty phrases: mainly the probability conveyed (e.g., 'a small chance' conveys a small probability whereas 'it is likely' conveys a high probability). We propose a new analytical framework that captures more of the complexity of uncertainty phrases by studying their semantic, pragmatic and syntactic properties. Further, we argue that the complexity of uncertainty phrases is functional and can be leveraged to best describe uncertain outcomes and achieve the goals of speakers. We will present findings from a corpus study and an experiment where we assessed the following properties of uncertainty phrases: probability conveyed, subjectivity, valence, nature of the subject, grammatical category of the uncertainty quantifier and whether the quantifier elicits a positive or a negative framing. Natural language processing techniques applied to corpus data showed that people use a very large variety of uncertainty phrases representing different configurations of the properties of uncertainty phrases (e.g., phrases that convey different levels of subjectivity, phrases with different grammatical construction). In addition, the corpus analysis uncovered that uncertainty phrases commonly studied in psychology are not the most commonly used in real life. In the experiment we manipulated the amount of evidence indicating that a fact was true and whether the participant was required to prove that the fact was true or that it was false. Participants produced a phrase to communicate the likelihood that the fact was true (e.g., 'it is not sure…', 'I am convinced that…'). The analyses of the uncertainty phrases produced showed that participants leveraged the properties of uncertainty phrases to reflect the strength of evidence but also to achieve their personal goals. For example, participants aiming to prove that the fact was true chose words that conveyed a more positive polarity and a higher probability than participants aiming to prove that the fact was false. We discuss the utility of the framework for harnessing the properties of uncertainty phrases in geosciences.

  19. An empirical test of the aggregation model of coexistence and consequences for competing container-dwelling mosquitoes

    PubMed Central

    Fader, Joseph E.; Juliano, Steven A.

    2014-01-01

    We investigated the aggregation model of coexistence as a potential mechanism explaining patterns of coexistence between container mosquitoes Aedes albopictus and Aedes aegypti in southern Florida. Aedes aegypti coexists with the invasive A. albopictus in many locations despite being an inferior resource competitor under most conditions. In agreement with aggregation theory we observed significant intraspecific aggregation of A. albopictus in all six field sites sampled in southern Florida in 2009. Quantitative results suggest that larval distributions of A. albopictus across containers are sufficiently aggregated to permit persistence of the inferior competitor A. aegypti. We tested whether observed levels of A. albopictus aggregation would significantly improve A. aegypti population performance in a controlled laboratory competition experiment manipulating A. albopictus aggregation while holding mean densities constant. We quantified A. aegypti’s estimated rate of population change for replicate, multi-container cohorts in response to increasing A. albopictus aggregation across the cohorts. Aedes albopictus aggregation treatments produced J statistics for aggregation that spanned the range observed in the field study. We demonstrate a positive linear relationship between intraspecific aggregation of the superior competitor A. albopictus and estimated rate of population change for cohorts of the inferior A. aegypti. Thus, aggregation of A. albopictus at levels comparable to those observed in nature appears to be sufficient to reduce significantly the competitive impact of A. albopictus on multi-container cohorts of A. aegypti, and may therefore contribute to local coexistence of these competitors. PMID:23691666

  20. An empirical test of the aggregation model of coexistence and consequences for competing container-dwelling mosquitoes.

    PubMed

    Fader, Joseph E; Juliano, Steven A

    2013-02-01

    We investigated the aggregation model of coexistence as a potential mechanism explaining patterns of coexistence between container mosquitoes Aedes albopictus and Aedes aegypti in southern Florida, USA. Aedes aegypti coexists with the invasive A. albopictus in many locations despite being an inferior resource competitor under most conditions. In agreement with aggregation theory we observed significant intraspecific aggregation of A. albopictus in all six field sites sampled in southern Florida in 2009. Quantitative results suggest that larval distributions of A. albopictus across containers are sufficiently aggregated to permit persistence of the inferior competitor A. aegypti. We tested whether observed levels of A. albopictus aggregation would significantly improve A. aegypti population performance in a controlled laboratory competition experiment manipulating A. albopictus aggregation while holding mean densities constant. We quantified A. aegypti's estimated rate of population change for replicate, multi-container cohorts in response to increasing A. albopictus aggregation across the cohorts. Aedes albopictus aggregation treatments produced J statistics for aggregation that spanned the range observed in the field study. We demonstrate a positive linear relationship between intraspecific aggregation of the superior competitor A. albopictus and estimated rate of population change for cohorts of the inferior A. aegypti. Thus, aggregation of A. albopictus at levels comparable to those observed in nature appears to be sufficient to reduce significantly the competitive impact of A. albopictus on multi-container cohorts of A. aegypti, and may therefore contribute to local coexistence of these competitors.
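
    The sketch below computes one common intraspecific aggregation index, often denoted J, based on Lloyd's mean crowding: J = (mean crowding / mean) - 1, which is near 0 for randomly (Poisson) distributed counts and greater than 0 for aggregated counts. Whether this matches the exact J statistic used in these two records is an assumption; the container counts are synthetic.

```python
# Intraspecific aggregation index J from per-container larval counts (assumed formulation).
import numpy as np

def intraspecific_j(counts):
    counts = np.asarray(counts, dtype=float)
    m = counts.mean()
    mean_crowding = np.sum(counts * (counts - 1)) / counts.sum()
    return mean_crowding / m - 1.0

aggregated = np.array([0, 0, 0, 1, 2, 0, 15, 0, 0, 22])   # clumped larvae per container
random_like = np.array([4, 5, 3, 4, 5, 4, 3, 4, 4, 4])    # evenly spread counts
print(intraspecific_j(aggregated), intraspecific_j(random_like))
```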

  1. Quantifying Groundwater Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Poeter, E.; Foglia, L.

    2007-12-01

    Groundwater models are characterized by the (a) processes simulated, (b) boundary conditions, (c) initial conditions, (d) method of solving the equation, (e) parameterization, and (f) parameter values. Models are related to the system of concern using data, some of which form the basis of observations used most directly, through objective functions, to estimate parameter values. Here we consider situations in which parameter values are determined by minimizing an objective function. Other methods of model development are not considered because their ad hoc nature generally prohibits clear quantification of uncertainty. Quantifying prediction uncertainty ideally includes contributions from (a) to (f). The parameter values of (f) tend to be continuous with respect to both the simulated equivalents of the observations and the predictions, while many aspects of (a) through (e) are discrete. This fundamental difference means that there are options for evaluating the uncertainty related to parameter values that generally do not exist for other aspects of a model. While the methods available for (a) to (e) can be used for the parameter values (f), the inferential methods uniquely available for (f) generally are less computationally intensive and often can be used to considerable advantage. However, inferential approaches require calculation of sensitivities. Whether the numerical accuracy and stability of the model solution required for accurate sensitivities is more broadly important to other model uses is an issue that needs to be addressed. Alternative global methods can require 100 or even 1,000 times the number of runs needed by inferential methods, though methods of reducing the number of needed runs are being developed and tested. Here we present three approaches for quantifying model uncertainty and investigate their strengths and weaknesses. (1) Represent more aspects as parameters so that the computationally efficient methods can be broadly applied. This approach is attainable through universal model analysis software such as UCODE-2005, PEST, and joint use of these programs, which allow many aspects of a model to be defined as parameters. (2) Use highly parameterized models to quantify aspects of (e). While promising, this approach implicitly includes parameterizations that may be considered unreasonable if investigated explicitly, so that resulting measures of uncertainty may be too large. (3) Use a combination of inferential and global methods that can be facilitated using the new software MMA (Multi-Model Analysis), which is constructed using the JUPITER API. Here we consider issues related to the model discrimination criteria calculated by MMA.

  2. Quantifying scaling effects on satellite-derived forest area estimates for the conterminous USA

    Treesearch

    Daolan Zheng; L.S. Heath; M.J. Ducey; J.E. Smith

    2009-01-01

    We quantified the scaling effects on forest area estimates for the conterminous USA using regression analysis and the National Land Cover Dataset 30m satellite-derived maps in 2001 and 1992. The original data were aggregated to: (1) broad cover types (forest vs. non-forest); and (2) coarser resolutions (1km and 10 km). Standard errors of the model estimates were 2.3%...

  3. The Role of Arsenic Speciation in Dietary Exposure Assessment and the Need to Include Bioaccessibility and Biotransformation

    EPA Science Inventory

    Chemical form specific exposure assessment for arsenic has long been identified as a source of uncertainty in estimating the risk associated with the aggregate exposure for a population. Some speciation based assessments document occurrence within an exposure route; however, the...

  4. Dissertation Defense Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward

    2014-01-01

    Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions. The method accounts for all uncertainty terms from both numerical and input variables. Objective three is to compile a table of uncertainty parameters that could be used to estimate the error in a Computational Fluid Dynamics model of the Environmental Control System /spacecraft system. Previous studies have looked at the uncertainty in a Computational Fluid Dynamics model for a single output variable at a single point, for example the re-attachment length of a backward facing step. For the flow regime being analyzed (turbulent, three-dimensional, incompressible), the error at a single point can propagate into the solution both via flow physics and numerical methods. Calculating the uncertainty in using Computational Fluid Dynamics to accurately predict airflow speeds around encapsulated spacecraft is imperative to the success of future missions.

  5. Dissertation Defense: Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward

    2014-01-01

    Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to traditional validation by test only mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations. This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions. The method accounts for all uncertainty terms from both numerical and input variables. Objective three is to compile a table of uncertainty parameters that could be used to estimate the error in a Computational Fluid Dynamics model of the Environmental Control System/spacecraft system. Previous studies have looked at the uncertainty in a Computational Fluid Dynamics model for a single output variable at a single point, for example the re-attachment length of a backward facing step. For the flow regime being analyzed (turbulent, three-dimensional, incompressible), the error at a single point can propagate into the solution both via flow physics and numerical methods. Calculating the uncertainty in using Computational Fluid Dynamics to accurately predict airflow speeds around encapsulated spacecraft is imperative to the success of future missions.
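
    The second objective above relies on a three-grid study to bound discretization error. The sketch below shows a generic textbook Richardson-extrapolation / grid-convergence-index (GCI) calculation as one illustration of that idea; it is not necessarily the exact procedure of the methodology the dissertation cites. f1, f2, f3 are a monitored quantity (for example, a local airflow speed) on fine, medium and coarse grids with a constant refinement ratio r.

```python
# Generic three-grid Richardson extrapolation and GCI estimate (illustrative values).
import math

def gci_fine(f1, f2, f3, r=2.0, safety=1.25):
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)   # observed order of accuracy
    rel_err = abs((f2 - f1) / f1)
    gci = safety * rel_err / (r**p - 1.0)                    # fractional error band on the fine-grid value
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)                  # Richardson-extrapolated estimate
    return p, gci, f_exact

p, gci, f_ext = gci_fine(10.2, 10.8, 12.1)                   # made-up fine/medium/coarse values
print(f"order p = {p:.2f}, GCI_fine = {100*gci:.1f}%, extrapolated value = {f_ext:.2f}")
```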

  6. On the spatial decorrelation of stochastic solar resource variability at long timescales

    DOE PAGES

    Perez, Marc J. R.; Fthenakis, Vasilis M.

    2015-05-16

    Understanding the spatial and temporal characteristics of solar resource variability is important because it helps inform the discussion surrounding the merits of geographic dispersion and subsequent electrical interconnection of photovoltaics as part of a portfolio of future solutions for coping with this variability. The unpredictable resource variability arising from the stochastic nature of meteorological phenomena (from the passage of clouds to the movement of weather systems) is of most concern for achieving high PV penetration because unlike the passage of seasons or the shift from day to night, the uncertainty makes planning a challenge. A suitable proxy for unpredictable solar resource variability at any given location is the series of variations in the clearness index from one time period to the next because the clearness index is largely independent of the predictable influence of solar geometry. At timescales shorter than one day, the correlation between these variations in clearness index at pairs of distinct geographic locations decreases with spatial extent and with timescale. As the aggregate variability across N decorrelated locations decreases as 1/√N, identifying the distance required to achieve this decorrelation is critical to quantifying the expected reduction in variability from geographic dispersion.
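
    A minimal illustration of the 1/√N point made above: the standard deviation of the mean clearness-index variation across N sites drops as 1/√N only when the site-to-site variations are decorrelated. The data are synthetic; the decorrelation distance itself is what the study quantifies.

```python
# Variability reduction from aggregating N sites: decorrelated vs. correlated case.
import numpy as np

rng = np.random.default_rng(5)
n_sites, n_steps, rho = 25, 10_000, 0.8

independent = rng.normal(0, 1, (n_steps, n_sites))
common = rng.normal(0, 1, (n_steps, 1))
correlated = np.sqrt(rho) * common + np.sqrt(1 - rho) * independent  # pairwise correlation ~ rho

print("single site sigma     :", independent[:, 0].std())
print("25 decorrelated sites :", independent.mean(axis=1).std())   # ~ 1/sqrt(25) = 0.2
print("25 correlated sites   :", correlated.mean(axis=1).std())    # little benefit from aggregation
```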

  7. The Sensitivity of Orographic Precipitation to Flow Direction

    NASA Astrophysics Data System (ADS)

    Mass, C.; Picard, L.

    2015-12-01

    An area of substantial interest is the sensitivity of orographic precipitation to the characteristics of the incoming flow and to the surrounding environment. Some studies have suggested substantial sensitivity of precipitation within individual river drainages for relatively small directional or stability variations of incoming flow. A characterization of such flow sensitivity would be of great value for hydrometeorological prediction, the determination of Probable Maximum Precipitation statistics, and for quantifying the uncertainty in precipitation and hydrological forecasts. To gain insight into this problem, an idealized version of the Weather Research and Forecasting (WRF) modeling system was created in which simulations are driven by a single vertical sounding, with the assumption of thermal wind balance. The actual terrain is used and the full physics complement of the modeling system. The presentation will show how precipitation over the Olympic Mountains of Washington State varies as flow direction changes. This analysis will include both the aggregate precipitation over the barrier and the precipitation within individual drainages or areas. The role of surrounding terrain and the nearby coastline are also examined by removing these features from simulations. Finally, the impact of varying flow stability and speed on the precipitation over this orographic feature will be described.

  8. Toward Scientific Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Kleb, Bil

    2007-01-01

    Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.

  9. Soil aggregation and slope stability related to soil density, root length, and mycorrhiza

    NASA Astrophysics Data System (ADS)

    Graf, Frank; Frei, Martin

    2013-04-01

    Eco-engineering measures combine the use of living plants and inert mechanical constructions to protect slopes against erosion and shallow mass movement. Whereas in geotechnical engineering several performance standards and guidelines for structural safety and serviceability of construction exist, there is a lack of comparable tools in the field of ecological restoration. Various indicators have been proposed, including the fractal dimension of soil particle size distribution, microbiological parameters, and soil aggregate stability. We present results of a soil aggregate stability investigation and compare them with literature data on the angle of internal friction φ′, which is conventionally used in slope stability analysis and soil failure calculation. Aggregate stability tests were performed with samples of differently treated moraine, including soil at low (~15.5 kN/m³) and high (~19.0 kN/m³) dry unit weight, soil planted with Alnus incana (White Alder) as well as the combination of soil planted with alder and inoculated with the mycorrhizal fungus Melanogaster variegatus s.l. After a 20-week growth period in a greenhouse, a total of 100 samples was tested and evaluated. Positive correlations were found between the soil aggregate stability and the three variables dry unit weight, root length per soil volume, and degree of mycorrhization. Based on robust statistics it turned out that dry unit weight and mycorrhization degree were the most strongly correlated with soil aggregate stability. Compared to the non-inoculated control plants, mycorrhized White Alder produced significantly more roots and higher soil aggregate stability. Furthermore, the combined biological effect of plant roots and mycorrhizal mycelia on aggregate stability on soil with low density (~15.5 kN/m³) was comparable to the compaction effect of the pure soil from 15.5 to ~19.0 kN/m³. Literature data on the effect of vegetation on the angle of internal friction φ′ of the same moraine showed similar correlations, i.e. that φ′ of low density soil material (~15.5 kN/m³) increased by the same amount whether by planting with White Alder or by compaction to ~19.0 kN/m³. Based on this agreement, the method to quantify soil aggregate stability produced satisfying results, which indicate that soil aggregate stability is a potential proxy for φ′ and that the joint impact of mycorrhizal fungi and plant roots increases the resistance against superficial soil failure. It is concluded that soil aggregate stability mirrors biological effects on soil stability reasonably well and may be used as an indicator to quantify the effectiveness of ecological restoration and stabilisation measures.

  10. Uncertainty Assessment of Space-Borne Passive Soil Moisture Retrievals

    NASA Technical Reports Server (NTRS)

    Quets, Jan; De Lannoy, Gabrielle; Reichle, Rolf; Cosh, Michael; van der Schalie, Robin; Wigneron, Jean-Pierre

    2017-01-01

    The uncertainty associated with passive soil moisture retrieval is hard to quantify and is known to be underlain by various, diverse, and complex causes. Factors affecting space-borne retrieved soil moisture estimation include: (i) the optimization or inversion method applied to the radiative transfer model (RTM), such as the Single Channel Algorithm (SCA) or the Land Parameter Retrieval Model (LPRM), (ii) the selection of the observed brightness temperatures (Tbs), e.g. polarization and incidence angle, (iii) the definition of the cost function and the impact of prior information in it, and (iv) the RTM parameterization (e.g. parameterizations officially used by the SMOS L2 and SMAP L2 retrieval products, the ECMWF-based SMOS assimilation product, the SMAP L4 assimilation product, and perturbations from those configurations). This study aims to disentangle the relative importance of the above-mentioned sources of uncertainty by carrying out soil moisture retrieval experiments using SMOS Tb observations in different settings, some of which are mentioned above. The ensemble uncertainties are evaluated at 11 reference CalVal sites over a time period of more than 5 years. These experimental retrievals were inter-compared, and further confronted with in situ soil moisture measurements and operational SMOS L2 retrievals, using commonly used skill metrics to quantify the temporal uncertainty in the retrievals.
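
    The "commonly used skill metrics" mentioned above typically include bias, RMSE, unbiased RMSE, and temporal correlation against in situ measurements. The sketch below is not the authors' code; the synthetic data and function name are illustrative, showing how these four metrics might be computed for one retrieval series at one CalVal site.

    ```python
    import numpy as np

    def soil_moisture_skill(retrieved, in_situ):
        """Common skill metrics for a retrieved soil moisture time series
        evaluated against collocated in situ measurements (m3/m3).
        Illustrative only; CalVal protocols define additional details."""
        r, o = np.asarray(retrieved, float), np.asarray(in_situ, float)
        ok = ~(np.isnan(r) | np.isnan(o))            # use only paired, valid samples
        r, o = r[ok], o[ok]
        bias = np.mean(r - o)                        # mean difference
        rmse = np.sqrt(np.mean((r - o) ** 2))        # root-mean-square error
        ubrmse = np.sqrt(max(rmse**2 - bias**2, 0))  # RMSE with the mean bias removed
        corr = np.corrcoef(r, o)[0, 1]               # temporal Pearson correlation
        return {"bias": bias, "RMSE": rmse, "ubRMSE": ubrmse, "R": corr}

    # Example with synthetic data standing in for one site
    rng = np.random.default_rng(0)
    truth = 0.25 + 0.05 * np.sin(np.linspace(0, 8 * np.pi, 400))
    retrieval = truth + 0.02 + rng.normal(0, 0.03, truth.size)
    print(soil_moisture_skill(retrieval, truth))
    ```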

  11. Robust Bayesian Experimental Design for Conceptual Model Discrimination

    NASA Astrophysics Data System (ADS)

    Pham, H. V.; Tsai, F. T. C.

    2015-12-01

    A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination, given the least number of pumping wells and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) before and after the experiment and on the Bayesian model averaging (BMA) framework. A max-min program is introduced to choose the robust design that maximizes the minimal Box-Hill EED, subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify future observation uncertainty arising from conceptual and parametric uncertainties in calculating the EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic five-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed to reflect uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of scedasticity in future observation data, as well as of uncertainty sources, on potential pumping and observation locations.
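
    As a rough illustration of the quadrature step, the following sketch computes an expected information gain, a close relative of the Box-Hill EED, for a single future observation that each candidate model predicts as a Gaussian; the expectation over the BMA predictive mixture is evaluated model by model with Gauss-Hermite nodes. This is a simplified stand-in for the paper's formulation, and all numbers are made up.

    ```python
    import numpy as np

    def expected_entropy_decrease(mu, sigma, prior, n_gh=20):
        """Expected reduction in Shannon entropy of the model probabilities from
        one new observation, when model k predicts it as N(mu[k], sigma[k]^2).
        Expectation over the BMA predictive mixture, evaluated with Gauss-Hermite
        quadrature applied model by model. Simplified illustration only."""
        mu, sigma, prior = map(np.asarray, (mu, sigma, prior))
        x, w = np.polynomial.hermite.hermgauss(n_gh)   # nodes/weights for exp(-x^2)

        def entropy(p):
            p = np.clip(p, 1e-300, 1.0)
            return -np.sum(p * np.log(p), axis=-1)

        h_prior = entropy(prior)
        h_post = 0.0
        for k in range(len(prior)):                     # E over mixture = sum_k p_k E_k
            y = mu[k] + np.sqrt(2.0) * sigma[k] * x     # change of variables
            # likelihood of each quadrature point under every model
            like = np.exp(-0.5 * ((y[:, None] - mu[None, :]) / sigma[None, :]) ** 2) \
                   / (np.sqrt(2 * np.pi) * sigma[None, :])
            post = like * prior[None, :]
            post /= post.sum(axis=1, keepdims=True)     # Bayes update per node
            h_post += prior[k] * np.sum(w / np.sqrt(np.pi) * entropy(post))
        return h_prior - h_post

    # Two candidate designs: the second separates the three models' predictions better
    print(expected_entropy_decrease([1.0, 1.1, 1.2], [0.5, 0.5, 0.5], [1/3] * 3))
    print(expected_entropy_decrease([1.0, 2.0, 3.0], [0.5, 0.5, 0.5], [1/3] * 3))
    ```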

  12. Aggregation of Environmental Model Data for Decision Support

    NASA Astrophysics Data System (ADS)

    Alpert, J. C.

    2013-12-01

    Weather forecasts and warnings must be prepared and then delivered so as to reach their intended audience in good time to enable effective decision-making. An effort to mitigate these difficulties was studied at a workshop, 'Sustaining National Meteorological Services - Strengthening WMO Regional and Global Centers', convened in June 2013 by the World Bank, WMO, and the US National Weather Service (NWS). The skill and accuracy of atmospheric forecasts from deterministic models have increased, and there are now ensembles of such models that improve decisions to protect life, property and commerce. The NWS production of numerical weather prediction products results in model output from global and high-resolution regional ensemble forecasts. Ensembles are constructed by changing the initial conditions to make a 'cloud' of forecasts that attempts to span the space of possible atmospheric realizations, which can quantify not only the most likely forecast but also its uncertainty. This has led to an unprecedented increase in data production and information content from higher-resolution, multi-model output and secondary calculations. One difficulty is obtaining the subset of data required to estimate the probability of events and to report that information. The calibration required to reliably estimate event probabilities, and the honing of threshold adjustments to reduce false alarms for decision makers, are also needed. To meet the future needs of the ever-broadening user community and address these issues on a national and international basis, the weather service implemented the NOAA Operational Model Archive and Distribution System (NOMADS). NOMADS provides real-time and retrospective, format-independent access to climate, ocean and weather model data and delivers high-availability content services as part of NOAA's official real-time data dissemination at its new NCWCP web operations center. An important aspect of the server's abilities is to aggregate the matrix of model output, offering access to probability and calibration information for real-time decision making. The aggregation content server reports over ensemble component and forecast time, in addition to the other data dimensions of vertical layer and position for each variable. The unpacking, organization and reading of many binary packed files is accomplished most efficiently on the server, while weather element event probability calculations, the thresholds for more accurate decision support, and display remain with the client. Our goal is to reduce uncertainty for variables of interest, e.g., those of agricultural importance. The weather service operational GFS model ensemble and short-range ensemble forecasts can make skillful probability forecasts to alert users if and when their selected weather events will occur. We describe how this framework operates and how it can be implemented using existing NOMADS content services and applications.
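
    The client-side event-probability step described above can be as simple as counting ensemble members that exceed a threshold and comparing the (ideally calibrated) probability against a decision trigger. A minimal sketch, with synthetic data standing in for a NOMADS ensemble subset:

    ```python
    import numpy as np

    def event_probability(ensemble, threshold):
        """Fraction of ensemble members exceeding a threshold at each forecast time.
        `ensemble` has shape (n_members, n_times); returns shape (n_times,)."""
        return np.mean(np.asarray(ensemble, float) >= threshold, axis=0)

    def alert(prob, trigger=0.6):
        """Issue an alert wherever the probability exceeds a decision threshold
        chosen to balance missed events against false alarms."""
        return prob >= trigger

    # Toy example: 20-member precipitation ensemble over 8 forecast times (mm)
    rng = np.random.default_rng(1)
    members = rng.gamma(shape=2.0, scale=3.0, size=(20, 8))
    p = event_probability(members, threshold=10.0)   # P(precip >= 10 mm)
    print(np.round(p, 2), alert(p))
    ```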

  13. Model structures amplify uncertainty in predicted soil carbon responses to climate change.

    PubMed

    Shi, Zheng; Crowell, Sean; Luo, Yiqi; Moore, Berrien

    2018-06-04

    Large model uncertainty in projected future soil carbon (C) dynamics has been well documented. However, our understanding of the sources of this uncertainty is limited. Here we quantify the uncertainties arising from model parameters, structures and their interactions, and how those uncertainties propagate through different models to projections of future soil carbon stocks. Both the vertically resolved model and the microbially explicit model project much greater uncertainty in the response to climate change than the conventional soil C model, with both positive and negative C-climate feedbacks, whereas the conventional model consistently predicts a positive soil C-climate feedback. Our findings suggest that diverse model structures are necessary to increase confidence in soil C projections. However, the larger uncertainty in the complex models also suggests that we need to strike a balance between model complexity and the need to include diverse model structures in order to forecast soil C dynamics with high confidence and low uncertainty.

  14. Estimating discharge measurement uncertainty using the interpolated variance estimator

    USGS Publications Warehouse

    Cohn, T.; Kiang, J.; Mason, R.

    2012-01-01

    Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
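
    As a rough reading of the idea (the published IVE is defined in detail by the authors; this is only a loose illustration with hypothetical names), the random component of measurement error can be estimated from how much each vertical's unit discharge departs from the value interpolated between its neighbours:

    ```python
    import numpy as np

    def interpolated_variance_sketch(station, unit_q):
        """Rough sketch of the idea behind an interpolated variance estimator:
        compare each interior vertical's unit discharge with the value linearly
        interpolated from its two neighbours; the scatter of those residuals is
        an at-site estimate of random measurement noise. The published IVE
        differs in detail; this function is illustrative only."""
        x = np.asarray(station, float)      # distance of each vertical from the bank
        q = np.asarray(unit_q, float)       # depth x velocity at each vertical
        interp = q[:-2] + (q[2:] - q[:-2]) * (x[1:-1] - x[:-2]) / (x[2:] - x[:-2])
        resid = q[1:-1] - interp            # departures from the local linear trend
        # For roughly equally spaced verticals, interpolation inflates the
        # noise variance by a factor of about 1.5, hence the division.
        return np.var(resid, ddof=1) / 1.5

    x = np.linspace(0, 30, 16)                              # 16 verticals across a channel
    true_q = 2.0 * np.sin(np.pi * x / 30)                   # smooth unit-discharge profile
    obs_q = true_q + np.random.default_rng(2).normal(0, 0.1, x.size)
    print(interpolated_variance_sketch(x, obs_q))           # ~0.01 (the noise variance)
    ```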

  15. Quantifying the impact of the longitudinal dispersion coefficient parameter uncertainty on the physical transport processes in rivers

    NASA Astrophysics Data System (ADS)

    Camacho Suarez, V. V.; Shucksmith, J.; Schellart, A.

    2016-12-01

    Analytical and numerical models can be used to represent the advection-dispersion processes governing the transport of pollutants in rivers (Fan et al., 2015; Van Genuchten et al., 2013). Simplifications, assumptions and parameter estimations in these models result in various uncertainties within the modelling process and in the estimated pollutant concentrations. In this study, we explore both: 1) the structural uncertainty due to the one-dimensional simplification of the Advection Dispersion Equation (ADE) and 2) the parameter uncertainty due to the semi-empirical estimation of the longitudinal dispersion coefficient. The relative significance of these uncertainties has not previously been examined. By analysing both the relative structural uncertainty of analytical solutions of the ADE and the parameter uncertainty due to the longitudinal dispersion coefficient via a Monte Carlo analysis, an evaluation of the dominant uncertainties for a case study in the river Chillan, Chile, is presented over a range of spatial scales.
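
    A minimal sketch of the parameter-uncertainty part of such an analysis: an analytical 1-D ADE solution for an instantaneous release, with the longitudinal dispersion coefficient drawn from a lognormal distribution to mimic the scatter of semi-empirical estimators. The parameter values are illustrative and are not taken from the Chillan case study.

    ```python
    import numpy as np

    def ade_instantaneous(x, t, M, A, u, D):
        """Analytical 1-D advection-dispersion solution for an instantaneous
        release of mass M (kg) into a river of cross-section A (m2), mean
        velocity u (m/s) and longitudinal dispersion coefficient D (m2/s)."""
        return (M / (A * np.sqrt(4 * np.pi * D * t))) * \
               np.exp(-(x - u * t) ** 2 / (4 * D * t))

    # Monte Carlo over D: semi-empirical formulas often scatter over an order of
    # magnitude, represented here (illustratively) by a lognormal distribution.
    rng = np.random.default_rng(3)
    D_samples = rng.lognormal(mean=np.log(20.0), sigma=0.7, size=5000)  # m2/s

    x, t = 2000.0, 1800.0          # concentration 2 km downstream after 30 min
    M, A, u = 10.0, 25.0, 0.8      # kg, m2, m/s (illustrative values)
    c = ade_instantaneous(x, t, M, A, u, D_samples)

    print("median C [kg/m3]:", np.median(c))
    print("90% interval:", np.percentile(c, [5, 95]))
    ```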

  16. Special Issue on Uncertainty Quantification in Multiscale System Design and Simulation

    DOE PAGES

    Wang, Yan; Swiler, Laura

    2017-09-07

    The importance of uncertainty has been recognized in various modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions for physical phenomena. As model predictions are now heavily relied upon for simulation-based system design, which includes new materials, vehicles, mechanical and civil structures, and even new drugs, wrong model predictions could potentially cause catastrophic consequences. Therefore, uncertainty and associated risks due to model errors should be quantified to support robust systems engineering.

  17. Special Issue on Uncertainty Quantification in Multiscale System Design and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Swiler, Laura

    The importance of uncertainty has been recognized in various modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions for physical phenomena. As model predictions are now heavily relied upon for simulation-based system design, which includes new materials, vehicles, mechanical and civil structures, and even new drugs, wrong model predictions could potentially cause catastrophic consequences. Therefore, uncertainty and associated risks due to model errors should be quantified to support robust systems engineering.

  18. Nuclear Data Uncertainty Propagation to Reactivity Coefficients of a Sodium Fast Reactor

    NASA Astrophysics Data System (ADS)

    Herrero, J. J.; Ochoa, R.; Martínez, J. S.; Díez, C. J.; García-Herranz, N.; Cabellos, O.

    2014-04-01

    The assessment of the uncertainty levels on the design and safety parameters for the innovative European Sodium Fast Reactor (ESFR) is mandatory. Some of these relevant safety quantities are the Doppler and void reactivity coefficients, whose uncertainties are quantified. Besides, the nuclear reaction data for which an improvement will certainly benefit the design accuracy are identified. This work has been performed with the SCALE 6.1 code suite and its multigroup cross-section library based on the ENDF/B-VII.0 evaluation.

  19. Uncertainty analysis of thermal quantities measurement in a centrifugal compressor

    NASA Astrophysics Data System (ADS)

    Hurda, Lukáš; Matas, Richard

    2017-09-01

    The compressor performance characteristics evaluation process, based on the measurement of pressure, temperature and other quantities, is examined to find uncertainties for directly measured and derived quantities. CFD is used as a tool to quantify the influences of different sources of measurement uncertainty for single- and multi-thermocouple total temperature probes. The heat conduction through the body of the thermocouple probe and the heat-up of the air in the intake piping are the main phenomena of interest.

  20. Quantifying and Reducing Uncertainty in Correlated Multi-Area Short-Term Load Forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Hou, Zhangshuan; Meng, Da

    2016-07-17

    In this study, we represent and reduce the uncertainties in short-term electric load forecasting by integrating time series analysis tools including ARIMA modeling, sequential Gaussian simulation, and principal component analysis. The approaches focus mainly on maintaining the inter-dependency between multiple geographically related areas. These approaches are applied to cross-correlated load time series as well as to their forecast errors. Multiple short-term prediction realizations are then generated from the reduced uncertainty ranges, which are useful for power system risk analyses.
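
    A hedged sketch of the ARIMA building block for a single area, using statsmodels; the cross-area correlation handling (sequential Gaussian simulation and principal component analysis) described above is not reproduced here, and the model order and synthetic load series are illustrative.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA  # requires statsmodels

    # Synthetic hourly load for one area: daily cycle plus autocorrelated noise
    rng = np.random.default_rng(4)
    hours = np.arange(24 * 28)
    load = 1000 + 150 * np.sin(2 * np.pi * hours / 24) + \
           0.2 * np.cumsum(rng.normal(0, 5, hours.size))

    model = ARIMA(load, order=(2, 1, 1))       # illustrative order, not tuned
    res = model.fit()
    fc = res.get_forecast(steps=24)            # next-day forecast
    mean = fc.predicted_mean
    lo, hi = fc.conf_int(alpha=0.05).T         # 95% prediction interval

    print("hour-ahead forecast:", round(mean[0], 1),
          "interval:", (round(lo[0], 1), round(hi[0], 1)))
    ```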

  1. Quantifying Mixed Uncertainties in Cyber Attacker Payoffs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna

    Representation and propagation of uncertainty in cyber attacker payoffs is a key aspect of security games. Past research has primarily focused on representing the defender’s beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and intervals. Within cyber-settings, continuous probability distributions may still be appropriate for addressing statistical (aleatory) uncertainties where the defender may assume that the attacker’s payoffs differ over time. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker’s payoff generation mechanism. Such epistemic uncertainties are more suitably represented as probability boxes with intervals. In this study, we explore the mathematical treatment of such mixed payoff uncertainties.
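
    A minimal sketch of the probability-box idea for one payoff: the aleatory spread is kept as a Gaussian, while the epistemic uncertainty about its mean is left as an interval, so only lower and upper bounds on the CDF can be stated. The numbers and the choice of a Gaussian family are illustrative, not taken from the study.

    ```python
    import numpy as np
    from scipy.stats import norm

    def payoff_pbox(x, mu_interval, sigma):
        """Probability box for a payoff modelled as Normal(mu, sigma^2), where the
        mean is only known to lie in an interval (epistemic uncertainty) while
        sigma captures the aleatory spread. Returns lower/upper CDF bounds."""
        mu_lo, mu_hi = mu_interval
        upper = norm.cdf(x, loc=mu_lo, scale=sigma)   # largest possible P(X <= x)
        lower = norm.cdf(x, loc=mu_hi, scale=sigma)   # smallest possible P(X <= x)
        return lower, upper

    x = np.linspace(-10, 20, 7)
    lo, up = payoff_pbox(x, mu_interval=(2.0, 6.0), sigma=2.5)
    for xi, l, u in zip(x, lo, up):
        print(f"P(payoff <= {xi:5.1f}) is in [{l:.3f}, {u:.3f}]")
    ```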

  2. Quantifying multi-dimensional attributes of human activities at various geographic scales based on smartphone tracking.

    PubMed

    Zhou, Xiaolu; Li, Dongying

    2018-05-09

    Advancement in location-aware technologies and in information and communication technology in the past decades has furthered our knowledge of the interaction between human activities and the built environment. An increasing number of studies have collected data regarding individual activities to better understand how the environment shapes human behavior. Despite this growing interest, some challenges exist in collecting and processing individuals' activity data, e.g., capturing people's precise environmental contexts and analyzing data at multiple spatial scales. In this study, we propose and implement an innovative system that integrates smartphone-based step tracking with an app and sequential tile scan techniques to collect and process activity data. We apply the OpenStreetMap tile system to aggregate positioning points at various scales. We also propose duration, step and probability surfaces to quantify the multi-dimensional attributes of activities. Results show that, by running the app in the background, smartphones can measure multi-dimensional attributes of human activities, including space, duration, step, and location uncertainty at various spatial scales. By coordinating the Global Positioning System (GPS) sensor with the accelerometer, the app can conserve battery power that would otherwise be drained quickly by the GPS sensor. Based on a test dataset, we were able to detect the recreational center and sports center as the spaces where the user was most active, among other places visited. The methods provide techniques to address key issues in analyzing human activity data. The system can support future studies on behavioral and health consequences related to individuals' environmental exposure.
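
    The tile aggregation step can be illustrated with the standard OpenStreetMap (web Mercator) tile formula: the sketch below converts GPS fixes to tile indices at a chosen zoom level and accumulates a duration surface per tile. Function names and the toy fixes are illustrative, not the authors' implementation; step and probability surfaces would be accumulated analogously.

    ```python
    import math
    from collections import defaultdict

    def latlon_to_tile(lat_deg, lon_deg, zoom):
        """Standard OpenStreetMap (web Mercator) tile indices for a coordinate."""
        lat = math.radians(lat_deg)
        n = 2 ** zoom
        x = int((lon_deg + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
        return x, y

    def duration_surface(fixes, zoom):
        """Aggregate dwell time (seconds) of time-stamped GPS fixes per tile.
        `fixes` is a chronologically ordered list of (t_seconds, lat, lon)."""
        surface = defaultdict(float)
        for (t0, la0, lo0), (t1, _, _) in zip(fixes[:-1], fixes[1:]):
            # attribute each interval to the tile of its starting fix
            surface[latlon_to_tile(la0, lo0, zoom)] += t1 - t0
        return dict(surface)

    fixes = [(0, 42.0, -93.6), (60, 42.0, -93.6), (120, 42.001, -93.598)]
    print(duration_surface(fixes, zoom=16))
    ```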

  3. Guidelines 13 and 14—Prediction uncertainty

    USGS Publications Warehouse

    Hill, Mary C.; Tiedeman, Claire

    2005-01-01

    An advantage of using optimization for model development and calibration is that optimization provides methods for evaluating and quantifying prediction uncertainty. Both deterministic and statistical methods can be used. Guideline 13 discusses using regression and post-audits, which we classify as deterministic methods. Guideline 14 discusses inferential statistics and Monte Carlo methods, which we classify as statistical methods.

  4. Full uncertainty quantification of N2O and NO emissions using the biogeochemical model LandscapeDNDC on site and regional scale

    NASA Astrophysics Data System (ADS)

    Haas, Edwin; Santabarbara, Ignacio; Kiese, Ralf; Butterbach-Bahl, Klaus

    2017-04-01

    Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional / national scale and are outlined as the most advanced methodology (Tier 3) in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycle of terrestrial ecosystems and are thus thought to be widely applicable under various conditions and at various spatial scales. Process-based modelling requires high spatial resolution input data on soil properties, climate drivers and management information. The acceptance of model-based inventory calculations depends on the assessment of the inventory's uncertainty (model, input data and parameter induced uncertainties). In this study we fully quantify the uncertainty in modelling soil N2O and NO emissions from arable, grassland and forest soils using the biogeochemical model LandscapeDNDC. We address model-induced uncertainty (MU) by contrasting two different soil biogeochemistry modules within LandscapeDNDC. The parameter-induced uncertainty (PU) was assessed by using joint parameter distributions for key parameters describing microbial C and N turnover processes, as obtained by different Bayesian calibration studies for each model configuration. Input-data-induced uncertainty (DU) was addressed by Bayesian calibration of soil properties, climate drivers and agricultural management practice data. For MU, DU and PU we performed several hundred simulations each to contribute to the individual uncertainty assessments. For the overall uncertainty quantification we assessed the model prediction probability based on sampled sets of input data and parameter distributions. Statistical analysis of the simulation results was used to quantify the overall uncertainty of the modelling approach. With this study we can contrast the variation in model results with the different sources of uncertainty for each ecosystem. Furthermore, we were able to perform a full uncertainty analysis for modelling N2O and NO emissions from arable, grassland and forest soils, which is necessary for the comprehensibility of modelling results. We applied the methodology to a regional inventory to assess the overall modelling uncertainty of a regional N2O and NO emission inventory for the state of Saxony, Germany.

  5. Evaluation on uncertainty sources in projecting hydrological changes over the Xijiang River basin in South China

    NASA Astrophysics Data System (ADS)

    Yuan, Fei; Zhao, Chongxu; Jiang, Yong; Ren, Liliang; Shan, Hongcui; Zhang, Limin; Zhu, Yonghua; Chen, Tao; Jiang, Shanhu; Yang, Xiaoli; Shen, Hongren

    2017-11-01

    Projections of hydrological changes are associated with large uncertainties from different sources, which should be quantified for an effective implementation of water management policies adaptive to future climate change. In this study, a modeling chain framework to project future hydrological changes and the associated uncertainties in the Xijiang River basin, South China, was established. The framework consists of three emission scenarios (ESs), four climate models (CMs), four statistical downscaling (SD) methods, four hydrological modeling (HM) schemes, and four probability distributions (PDs) for extreme flow frequency analyses. A direct variance method was adopted to analyze the manner in which uncertainty sources such as ES, CM, SD, and HM affect the estimates of future evapotranspiration (ET) and streamflow, and to quantify the uncertainties of PDs in future flood and drought risk assessment. Results show that ES is one of the least important uncertainty sources in most situations. CM, in general, is the dominant uncertainty source for the projections of monthly ET and monthly streamflow during most of the annual cycle, daily streamflow below the 99.6% quantile level, and extreme low flow. SD is the predominant uncertainty source in the projections of extreme high flow, and contributes a considerable percentage of uncertainty to monthly streamflow projections in July-September. The effects of SD in other cases are negligible. HM is a non-negligible uncertainty source that has the potential to produce much larger uncertainties for the projections of low flow and ET in warm and wet seasons than for the projections of high flow. PD contributes a larger percentage of uncertainty in extreme flood projections than it does in extreme low flow estimates. Despite the large uncertainties in hydrological projections, this work found that future extreme low flow would undergo a considerable reduction and that a noticeable increase in drought risk in the Xijiang River basin would be expected. Thus, the necessity of employing effective water-saving techniques and adaptive water resources management strategies for drought disaster mitigation should be addressed.
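
    A simplified stand-in for the variance decomposition (not the authors' exact direct variance method): given a full factorial array of projections indexed by ES, CM, SD, and HM, the first-order share of each factor can be taken as the variance of its factor means divided by the total ensemble variance, with interactions lumped into the remainder.

    ```python
    import numpy as np

    def variance_shares(proj):
        """Fraction of ensemble variance attributable to each axis of a full
        factorial projection array (emission scenario, climate model,
        downscaling method, hydrological model). First-order shares only; noise
        and interactions are lumped into the remainder, which can come out
        slightly negative from sampling error."""
        total = proj.var()
        shares = {}
        for axis, name in enumerate(["ES", "CM", "SD", "HM"]):
            other = tuple(a for a in range(proj.ndim) if a != axis)
            shares[name] = proj.mean(axis=other).var() / total
        shares["interactions"] = 1.0 - sum(shares.values())
        return shares

    # Toy ensemble: 3 ESs x 4 CMs x 4 SDs x 4 HMs of streamflow changes (%)
    rng = np.random.default_rng(5)
    es = rng.normal(0, 1, (3, 1, 1, 1))
    cm = rng.normal(0, 5, (1, 4, 1, 1))   # climate model dominates, as in the study
    sd = rng.normal(0, 2, (1, 1, 4, 1))
    hm = rng.normal(0, 2, (1, 1, 1, 4))
    proj = es + cm + sd + hm + rng.normal(0, 1, (3, 4, 4, 4))
    print({k: round(v, 2) for k, v in variance_shares(proj).items()})
    ```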

  6. Micro-structure and Swelling Behaviour of Compacted Clayey Soils: A Quantitative Approach

    NASA Astrophysics Data System (ADS)

    Ferber, Valéry; Auriol, Jean-Claude; David, Jean-Pierre

    In this paper, the clay aggregate volume and inter-aggregate volume in compacted clayey soils are quantified, on the basis of simple hypotheses, using only their water content and dry density. Swelling tests on a highly plastic clay are then interpreted by describing the influence of the inter-aggregate volume before swelling on the total volume of the samples after swelling. This approach leads to a linear relation between these two parameters. Based on these results, a description of the evolution of the microstructure due to imbibition is proposed. Moreover, this approach enables a general quantification of the influence of initial water content and dry density on the swelling behaviour of compacted clayey soils.

  7. Impact of anthropogenic aerosols on regional climate change in Beijing, China

    NASA Astrophysics Data System (ADS)

    Zhao, B.; Liou, K. N.; He, C.; Lee, W. L.; Gu, Y.; Li, Q.; Leung, L. R.

    2015-12-01

    Anthropogenic aerosols affect regional climate significantly through radiative (direct and semi-direct) and indirect effects, but the magnitude of these effects over megacities are subject to large uncertainty. In this study, we evaluated the effects of anthropogenic aerosols on regional climate change in Beijing, China using the online-coupled Weather Research and Forecasting/Chemistry Model (WRF/Chem) with the Fu-Liou-Gu radiation scheme and a spatial resolution of 4km. We further updated this radiation scheme with a geometric-optics surface-wave (GOS) approach for the computation of light absorption and scattering by black carbon (BC) particles in which aggregation shape and internal mixing properties are accounted for. In addition, we incorporated in WRF/Chem a 3D radiative transfer parameterization in conjunction with high-resolution digital data for city buildings and landscape to improve the simulation of boundary-layer, surface solar fluxes and associated sensible/latent heat fluxes. Preliminary simulated meteorological parameters, fine particles (PM2.5) and their chemical components agree well with observational data in terms of both magnitude and spatio-temporal variations. The effects of anthropogenic aerosols, including BC, on radiative forcing, surface temperature, wind speed, humidity, cloud water path, and precipitation are quantified on the basis of simulation results. With several preliminary sensitivity runs, we found that meteorological parameters and aerosol radiative effects simulated with the incorporation of improved BC absorption and 3-D radiation parameterizations deviate substantially from simulation results using the conventional homogeneous/core-shell configuration for BC and the plane-parallel model for radiative transfer. Understanding of the aerosol effects on regional climate change over megacities must consider the complex shape and mixing state of aerosol aggregates and 3D radiative transfer effects over city landscape.

  8. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
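
    For illustration, a stripped-down version of such a generalized likelihood with heteroscedastic, AR(1)-correlated Gaussian residuals is sketched below (the formal likelihood of Schoups and Vrugt additionally allows skewed and heavy-tailed errors; the parameter values here are arbitrary).

    ```python
    import numpy as np

    def gl_loglike(obs, sim, sigma0, sigma1, phi):
        """Simplified generalized log-likelihood: residuals have standard
        deviation sigma0 + sigma1*|sim| (heteroscedastic) and lag-1
        autocorrelation phi (AR(1)); conditional on the first observation.
        The full Schoups & Vrugt (2010) likelihood also allows skewness and
        kurtosis; this sketch is Gaussian."""
        e = np.asarray(obs, float) - np.asarray(sim, float)
        sd = sigma0 + sigma1 * np.abs(sim)
        a = e / sd                                  # standardized residuals
        innov = a[1:] - phi * a[:-1]                # AR(1) innovations
        n = innov.size
        return (-0.5 * n * np.log(2 * np.pi * (1 - phi**2))
                - 0.5 * np.sum(innov**2) / (1 - phi**2)
                - np.sum(np.log(sd[1:])))           # Jacobian of the standardization

    rng = np.random.default_rng(6)
    sim = 10 + 5 * np.sin(np.linspace(0, 6 * np.pi, 300))
    a = np.zeros(300)
    for t in range(1, 300):                          # generate AR(1) standardized errors
        a[t] = 0.6 * a[t - 1] + rng.normal(0, np.sqrt(1 - 0.6**2))
    obs = sim + (0.1 + 0.05 * np.abs(sim)) * a
    print(gl_loglike(obs, sim, sigma0=0.1, sigma1=0.05, phi=0.6))
    ```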

  9. Integrating quantitative PCR and Bayesian statistics in quantifying human adenoviruses in small volumes of source water.

    PubMed

    Wu, Jianyong; Gronewold, Andrew D; Rodriguez, Roberto A; Stewart, Jill R; Sobsey, Mark D

    2014-02-01

    Rapid quantification of viral pathogens in drinking and recreational water can help reduce waterborne disease risks. For this purpose, samples of small volume (e.g., 1 L) are favored because of the convenience of collection, transportation and processing. However, the results of viral analysis are often subject to uncertainty. To overcome this limitation, we propose an approach that integrates Bayesian statistics, efficient concentration methods, and quantitative PCR (qPCR) to quantify viral pathogens in water. Using this approach, we quantified human adenoviruses (HAdVs) in eighteen samples of source water collected from six drinking water treatment plants. HAdVs were found in seven samples. In the other eleven samples, HAdVs were not detected by qPCR, but might have been present based on Bayesian inference. Our integrated approach that quantifies uncertainty provides a better understanding than conventional assessments of potential risks to public health, particularly in cases when pathogens may present a threat but cannot be detected by traditional methods. © 2013 Elsevier B.V. All rights reserved.
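
    A minimal sketch of the kind of Bayesian reasoning involved (not the authors' model, which also accounts for recovery efficiency and qPCR measurement error): with a Poisson likelihood for the genome copies captured in the assayed volume, even a non-detect yields a posterior, and hence an upper credible bound, on the concentration.

    ```python
    import numpy as np

    def posterior_concentration(volume_l, detected, copies=0,
                                c_grid=np.linspace(0.001, 50, 5000)):
        """Grid posterior for virus concentration c (genome copies per litre).
        Likelihood: copies captured in the assayed volume ~ Poisson(c*V); a
        non-detect contributes P(0 copies) = exp(-c*V). Prior: uniform on the
        grid. Simplified illustration; recovery efficiency, concentration-step
        losses and qPCR measurement error are ignored."""
        lam = c_grid * volume_l
        loglike = copies * np.log(lam) - lam if detected else -lam
        post = np.exp(loglike - loglike.max())
        post /= post.sum() * (c_grid[1] - c_grid[0])   # normalize to a density
        return c_grid, post

    c, post = posterior_concentration(volume_l=1.0, detected=False)
    cdf = np.cumsum(post) * (c[1] - c[0])
    print("95% credible upper bound after a 1 L non-detect:",
          round(float(c[np.searchsorted(cdf, 0.95)]), 2), "copies/L")  # ~3 copies/L
    ```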

  10. Soils as Sediment database: closing a gap between soil science and geomorphology

    NASA Astrophysics Data System (ADS)

    Kuhn, Nikolaus J.

    2016-04-01

    Soils are an interface between the Earth's spheres and are shaped by the nature of the interaction between them. The relevance of soil properties for the interaction between atmosphere, hydrosphere and biosphere is well studied and accepted at the point or ecotone scale. However, this understanding of the largely vertical connections between spheres is not matched by a similar recognition of soil properties affecting processes acting largely laterally across the land surface, such as erosion, transport and deposition of soil. Key areas where such an understanding is essential are all issues related to the lateral movement of soil-bound substances that affect the nature of the soils themselves, as well as water or vegetation downslope from the source area. The redistribution of eroded soil falls within several disciplines, most notably soil science, agronomy, hydrology and geomorphology. Accordingly, the way sediment is described differs: in soil science, aggregation and structure are essential properties, while most process-based soil erosion models treat soil as a mixture of individual mineral grains, based on concepts derived in fluvial geomorphology or civil engineering. The actual behavior of aggregated sediment is not reflected by either approach and is difficult to capture due to the dynamic nature of aggregation, especially in an environment such as running water. Still, a proxy to assess the uncertainties that aggregation introduces into the behavior of soil as sediment would represent a step forward. To develop such a proxy, a database collating relevant soil and sediment properties could serve as an initial step to identify which soil types and erosion scenarios are prone to generate a high uncertainty compared to the use of soil texture in erosion models. Furthermore, it could serve to develop standardized analytical procedures for the appropriate description of soil as sediment.

  11. Characterization of Nanoparticle Aggregation in Biologically Relevant Fluids

    NASA Astrophysics Data System (ADS)

    McEnnis, Kathleen; Lahann, Joerg

    Nanoparticles (NPs) are often studied as drug delivery vehicles, but little is known about their behavior in blood once injected into animal models. If the NPs aggregate in blood, they will be shunted to the liver or spleen instead of reaching the intended target. The use of animals for these experiments is costly and raises ethical questions. Typically, dynamic light scattering (DLS) is used to analyze aggregation behavior, but DLS cannot be used in blood because the components of blood also scatter light. As an alternative, a method of analyzing NPs in biologically relevant fluids such as blood plasma has been developed using nanoparticle tracking analysis (NTA) with fluorescent filters. In this work, NTA was used to analyze the aggregation behavior of fluorescent polystyrene NPs with different surface modifications in blood plasma. It was expected that different surface chemistries on the particles would change the aggregation behavior. The effect of the surface modifications was investigated by quantifying the percentage of NPs in aggregates after addition to blood plasma. The use of this characterization method will allow for a better understanding of particle behavior in the body, and potential problems, specifically aggregation, can be addressed before investing in in vivo studies.

  12. An energy landscape approach to protein aggregation

    NASA Astrophysics Data System (ADS)

    Buell, Alexander; Knowles, Tuomas

    2012-02-01

    Protein aggregation into ordered fibrillar structures is the hallmark of a class of diseases, the most prominent examples of which are Alzheimer's and Parkinson's disease. Recent results (e.g. Baldwin et al., J. Am. Chem. Soc. 2011) suggest that the aggregated state of a protein is in many cases thermodynamically more stable than the soluble state. Therefore the solubility of proteins in a cellular context appears to be to a large extent under kinetic control. Here, we first present a conceptual framework for the description of protein aggregation (see AK Buell et al., Phys. Rev. Lett. 2010) that is an extension of the generally accepted energy landscape model for protein folding. Then we apply this model to analyse and interpret a large set of experimental data on the kinetics of protein aggregation, acquired mainly with a novel biosensing approach (see TPJ Knowles et al., Proc. Natl. Acad. Sci. 2007). We show how, for example, the effect of sequence modifications on the kinetics and thermodynamics of human lysozyme aggregation can be understood and quantified (see AK Buell et al., J. Am. Chem. Soc. 2011). These results have important implications for therapeutic strategies against protein aggregation disorders, in this case lysozyme systemic amyloidosis.

  13. Detection of IgG aggregation by a high throughput method based on extrinsic fluorescence.

    PubMed

    He, Feng; Phan, Duke H; Hogan, Sabine; Bailey, Robert; Becker, Gerald W; Narhi, Linda O; Razinkov, Vladimir I

    2010-06-01

    The utility of extrinsic fluorescence as a tool for high throughput detection of monoclonal antibody aggregates was explored. Several IgG molecules were thermally stressed and the high molecular weight species were fractionated using size-exclusion chromatography (SEC). The isolated aggregates and monomers were studied by following the fluorescence of an extrinsic probe, SYPRO Orange. The dye displayed high sensitivity to structurally altered, aggregated IgG structures compared to the native form, which resulted in very low fluorescence in the presence of the dye. An example of the application is presented here to demonstrate the properties of this detection method. The fluorescence assay was shown to correlate with the SEC method in quantifying IgG aggregates. The fluorescent probe method appears to have potential to detect protein particles that could not be analyzed by SEC. This method may become a powerful high throughput tool to detect IgG aggregates in pharmaceutical solutions and to study other protein properties involving aggregation. It can also be used to study the kinetics of antibody particle formation, and perhaps allow identification of the species, which are the early building blocks of protein particles. (c) 2009 Wiley-Liss, Inc. and the American Pharmacists Association

  14. Seasonal distribution, aggregation, and habitat selection of common carp in Clear Lake, Iowa

    USGS Publications Warehouse

    Penne, C.R.; Pierce, C.L.

    2008-01-01

    The common carp Cyprinus carpio is widely distributed and frequently considered a nuisance species outside its native range. Common carp are abundant in Clear Lake, Iowa, where their presence is both a symptom of degradation and an impediment to improving water quality and the sport fishery. We used radiotelemetry to quantify seasonal distribution, aggregation, and habitat selection of adult and subadult common carp in Clear Lake during 2005-2006 in an effort to guide future control strategies. Over a 22-month period, we recorded 1,951 locations of 54 adults and 60 subadults implanted with radio transmitters. Adults demonstrated a clear tendency to aggregate in an offshore area during the late fall and winter and in shallow, vegetated areas before and during spring spawning. Late-fall and winter aggregations were estimated to include a larger percentage of the tracked adults than spring aggregations. Subadults aggregated in shallow, vegetated areas during the spring and early summer. Our study, when considered in combination with previous research, suggests repeatable patterns of distribution, aggregation, and habitat selection that should facilitate common carp reduction programs in Clear Lake and similar systems. © Copyright by the American Fisheries Society 2008.

  15. UQTools: The Uncertainty Quantification Toolbox - Introduction and Tutorial

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Crespo, Luis G.; Giesy, Daniel P.

    2012-01-01

    UQTools is the short name for the Uncertainty Quantification Toolbox, a software package designed to efficiently quantify the impact of parametric uncertainty on engineering systems. UQTools is a MATLAB-based software package and was designed to be discipline independent, employing very generic representations of the system models and uncertainty. Specifically, UQTools accepts linear and nonlinear system models and permits arbitrary functional dependencies between the system's measures of interest and the probabilistic or non-probabilistic parametric uncertainty. One of the most significant features incorporated into UQTools is the theoretical development centered on homothetic deformations and their application to set bounding and approximating failure probabilities. Beyond the set bounding technique, UQTools provides a wide range of probabilistic and uncertainty-based tools to solve key problems in science and engineering.

  16. Advanced probabilistic methods for quantifying the effects of various uncertainties in structural response

    NASA Technical Reports Server (NTRS)

    Nagpal, Vinod K.

    1988-01-01

    The effects of actual variations, also called uncertainties, in geometry and material properties on the structural response of a space shuttle main engine turbopump blade are evaluated. A normal distribution was assumed to represent the uncertainties statistically. Uncertainties were assumed to be totally random, partially correlated, and fully correlated. The magnitudes of these uncertainties were represented in terms of mean and variance. Blade responses, recorded in terms of displacements, natural frequencies, and maximum stress, were evaluated and plotted in the form of probabilistic distributions under combined uncertainties. These distributions provide an estimate of the range of magnitudes of the response and the probability of occurrence of a given response. Most importantly, these distributions provide the information needed to estimate quantitatively the risk in a structural design.
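
    A generic sketch of this kind of probabilistic propagation: correlated normal inputs are sampled through a Cholesky factor of their covariance and pushed through a response function. The response function, correlation values, and units below are made up for illustration and are not the turbopump blade model.

    ```python
    import numpy as np

    def propagate(mean, cov, response, n=20000, seed=0):
        """Propagate jointly normal input uncertainty through a response function
        by Monte Carlo: samples = mean + z L^T, with L the Cholesky factor of
        the covariance. Returns samples of the response."""
        rng = np.random.default_rng(seed)
        L = np.linalg.cholesky(np.asarray(cov, float))
        z = rng.standard_normal((n, len(mean)))
        x = np.asarray(mean, float) + z @ L.T
        return response(x)

    # Illustrative response: a bending frequency scaling as sqrt(E/rho) times thickness
    def freq(x):
        E, rho, t = x[:, 0], x[:, 1], x[:, 2]
        return 250.0 * np.sqrt(E / rho) * (t / 0.01) / np.sqrt(2.0e11 / 8000.0)

    mean = [2.0e11, 8000.0, 0.01]                    # Pa, kg/m3, m
    sd = np.array([1.0e10, 200.0, 0.0004])
    corr = np.array([[1.0, 0.3, 0.0],                # partial correlation E-rho
                     [0.3, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
    cov = corr * np.outer(sd, sd)
    f = propagate(mean, cov, freq)
    print("mean %.1f Hz, std %.1f Hz, P(f < 240 Hz) = %.3f"
          % (f.mean(), f.std(), np.mean(f < 240)))
    ```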

  17. Improvement of the intersection method for the quantification of filamentous organisms: basis and practice for bulking and foaming bioindication purposes.

    PubMed

    Salvadó, Humbert

    2016-09-01

    Bulking and foaming phenomena in activated sludge wastewater treatment plants are in most cases related to the abundance of filamentous microorganisms. Quantifying these microorganisms should be a preliminary stage in their control. In this paper, the simplicity of quantifying them based on the intersection method is demonstrated, by redescribing the theory and applying a new improved protocol; new data of interest are also provided. The improved method allows us to use it with stained smears, including epifluorescence techniques. The error that could be made, when considering the distribution of filamentous bacteria in fresh microscope preparations in two dimensions rather than three is negligible. The effect of the different types of filamentous microorganisms on the settleability was also studied. The effect of the total extended filament length on the sludge settleability was shown to depend on the type of filamentous organism and how it aggregates. When these groups of filamentous organisms are found in small aggregations and there is an increase in the number of filamentous organisms, the sludge volume index (SVI) increases proportionally to the filament length. However, when aggregation increases, the impact on the SVI is significantly lower.

  18. Marine floc strength and breakup response in turbulent flow

    NASA Astrophysics Data System (ADS)

    Rau, Matthew; Ackleson, Steven; Smith, Geoffrey

    2017-11-01

    The effect of turbulence on marine floc formation and breakup is studied experimentally using a recirculating breakup facility. Flocs of bentonite clay particles are grown in a large, stirred aggregation tank of salt water (salinity of 10 ppt) before being subjected to fully-developed pipe flow. Pipe flow conditions range from laminar to turbulent with dissipation rates up to 2.1 m2/s3. Particle size distributions are measured through in-situ sampling of the small-angle forward volume scattering function and through microscopic imaging. Floc size is compared before and after exposure to turbulence and found to be a strong function of the dissipation rate of turbulent kinetic energy. Hydrodynamic conditions within the aggregation tank have a large influence on overall floc strength; flocs formed with stirred aggregation resist breakup compared to flocs formed without stirring. Floc shape and structure statistics are quantified through image analysis and the results are discussed in relation to the measured floc breakup response. Finally, the relevance of these findings to quantifying and predicting marine floc dynamics and the eventual fate of particles in the ocean is presented. The authors thank the National Research Council Postdoctoral Program for their support of this work.

  19. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling: GEOSTATISTICAL SENSITIVITY ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Chen, Xingyuan; Ye, Ming

    Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed parameters.
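
    A minimal sketch of the variance-based building block (first-order Sobol indices via a Saltelli-style pick-freeze estimator) for a generic model with independent inputs; the hierarchical, geostatistical grouping of spatially correlated parameters that is the paper's contribution is not reproduced here.

    ```python
    import numpy as np

    def first_order_indices(model, sampler, d, n=20000, seed=7):
        """Estimate first-order Sobol indices S_i for a model with d independent
        inputs. `sampler(rng, n, d)` draws an (n, d) input matrix; `model(X)`
        maps it to n outputs. Pick-freeze estimator (Saltelli-type)."""
        rng = np.random.default_rng(seed)
        A, B = sampler(rng, n, d), sampler(rng, n, d)
        fA, fB = model(A), model(B)
        var = np.var(np.concatenate([fA, fB]))
        S = np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                      # vary only input i
            S[i] = np.mean(fB * (model(ABi) - fA)) / var
        return S

    # Toy model with a known answer: additive, factor weights 1, 2, 0.5
    model = lambda X: X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 2]
    sampler = lambda rng, n, d: rng.standard_normal((n, d))
    print(np.round(first_order_indices(model, sampler, d=3), 2))  # ~[0.19, 0.76, 0.05]
    ```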

  20. The case for probabilistic forecasting in hydrology

    NASA Astrophysics Data System (ADS)

    Krzysztofowicz, Roman

    2001-08-01

    That forecasts should be stated in probabilistic, rather than deterministic, terms has been argued from common sense and decision-theoretic perspectives for almost a century. Yet most operational hydrological forecasting systems produce deterministic forecasts and most research in operational hydrology has been devoted to finding the 'best' estimates rather than quantifying the predictive uncertainty. This essay presents a compendium of reasons for probabilistic forecasting of hydrological variates. Probabilistic forecasts are scientifically more honest, enable risk-based warnings of floods, enable rational decision making, and offer additional economic benefits. The growing demand for information about risk and the rising capability to quantify predictive uncertainties create an unparalleled opportunity for the hydrological profession to dramatically enhance the forecasting paradigm.

  1. Real-time Social Internet Data to Guide Forecasting Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Valle, Sara Y.

    Our goal is to improve decision support by monitoring and forecasting events using social media and mathematical models, and by quantifying model uncertainty. Our approach is real-time, data-driven forecasts with quantified uncertainty: not just for weather anymore. Information flow from human observations of events through an Internet system and classification algorithms is used to produce quantitatively uncertain forecasts. In summary, we want to develop new tools to extract useful information from Internet data streams, develop new approaches to assimilate real-time information into predictive models, and validate these approaches by forecasting events; our ultimate goal is to develop an event forecasting system using mathematical approaches and heterogeneous data streams.

  2. Physico-chemical protection, rather than biochemical composition, governs the responses of soil organic carbon decomposition to nitrogen addition in a temperate agroecosystem.

    PubMed

    Tan, Wenbing; Wang, Guoan; Huang, Caihong; Gao, Rutai; Xi, Beidou; Zhu, Biao

    2017-11-15

    The heterogeneous responses of soil organic carbon (SOC) decomposition in different soil fractions to nitrogen (N) addition remain elusive. In this study, turnover rates of SOC in different aggregate fractions were quantified based on changes in δ13C following the conversion of C3 to C4 vegetation in a temperate agroecosystem. The turnover of both total organic matter and specific organic compound classes within each aggregate fraction was inhibited by N addition. Moreover, the intensity of inhibition increases with decreasing aggregate size and increasing N addition level, but does not vary among chemical compound classes within each aggregate fraction. Overall, the response of SOC decomposition to N addition is dependent on the physico-chemical protection of SOC by aggregates and minerals, rather than the biochemical composition of organic substrates. The results of this study could help to understand the fate of SOC in the context of increasing N deposition. Copyright © 2017 Elsevier B.V. All rights reserved.
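
    The underlying calculation is standard two-end-member isotope mixing followed by a first-order turnover estimate. The sketch below uses typical literature end-member values, not the study's data.

    ```python
    import numpy as np

    def c4_fraction(delta_sample, delta_c3, delta_c4):
        """Two-end-member mixing: fraction of SOC derived from the new C4
        vegetation, f = (d_sample - d_C3) / (d_C4 - d_C3)."""
        return (delta_sample - delta_c3) / (delta_c4 - delta_c3)

    def turnover_rate(f_new, years):
        """First-order turnover: remaining old C = exp(-k*t)  =>  k = -ln(1-f)/t."""
        return -np.log(1.0 - f_new) / years

    # Illustrative values (typical end-members: C3 soil ~ -27 permil, C4 ~ -12 permil)
    delta_c3, delta_c4, years = -27.0, -12.0, 20.0
    for name, d in [("macroaggregate", -23.5), ("microaggregate", -25.0)]:
        f = c4_fraction(d, delta_c3, delta_c4)
        print(f"{name}: f_C4 = {f:.2f}, k = {turnover_rate(f, years):.3f} /yr")
    ```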

  3. Uncertainty information in climate data records from Earth observation

    NASA Astrophysics Data System (ADS)

    Merchant, Christopher J.; Paul, Frank; Popp, Thomas; Ablain, Michael; Bontemps, Sophie; Defourny, Pierre; Hollmann, Rainer; Lavergne, Thomas; Laeng, Alexandra; de Leeuw, Gerrit; Mittaz, Jonathan; Poulsen, Caroline; Povey, Adam C.; Reuter, Max; Sathyendranath, Shubha; Sandven, Stein; Sofieva, Viktoria F.; Wagner, Wolfgang

    2017-07-01

    The question of how to derive and present uncertainty information in climate data records (CDRs) has received sustained attention within the European Space Agency Climate Change Initiative (CCI), a programme to generate CDRs addressing a range of essential climate variables (ECVs) from satellite data. Here, we review the nature, mathematics, practicalities, and communication of uncertainty information in CDRs from Earth observations. This review paper argues that CDRs derived from satellite-based Earth observation (EO) should include rigorous uncertainty information to support the application of the data in contexts such as policy, climate modelling, and numerical weather prediction reanalysis. Uncertainty, error, and quality are distinct concepts, and the case is made that CDR products should follow international metrological norms for presenting quantified uncertainty. As a baseline for good practice, total standard uncertainty should be quantified per datum in a CDR, meaning that uncertainty estimates should clearly discriminate more and less certain data. In this case, flags for data quality should not duplicate uncertainty information, but instead describe complementary information (such as the confidence in the uncertainty estimate provided or indicators of conditions violating the retrieval assumptions). The paper discusses the many sources of error in CDRs, noting that different errors may be correlated across a wide range of timescales and space scales. Error effects that contribute negligibly to the total uncertainty in a single-satellite measurement can be the dominant sources of uncertainty in a CDR on the large space scales and long timescales that are highly relevant for some climate applications. For this reason, identifying and characterizing the relevant sources of uncertainty for CDRs is particularly challenging. The characterization of uncertainty caused by a given error effect involves assessing the magnitude of the effect, the shape of the error distribution, and the propagation of the uncertainty to the geophysical variable in the CDR accounting for its error correlation properties. Uncertainty estimates can and should be validated as part of CDR validation when possible. These principles are quite general, but the approach to providing uncertainty information appropriate to different ECVs is varied, as confirmed by a brief review across different ECVs in the CCI. User requirements for uncertainty information can conflict with each other, and a variety of solutions and compromises are possible. The concept of an ensemble CDR as a simple means of communicating rigorous uncertainty information to users is discussed. Our review concludes by providing eight concrete recommendations for good practice in providing and communicating uncertainty in EO-based climate data records.

  4. Aggregation Kinetics of Diesel Soot Nanoparticles in Wet Environments.

    PubMed

    Chen, Chengyu; Huang, Weilin

    2017-02-21

    Soot produced during incomplete combustion consists mainly of carbonaceous nanoparticles (NPs) with severe adverse environmental and health effects, and its environmental fate and transport are largely controlled by aggregation. In this study, we examined the aggregation behavior of diesel soot NPs under aqueous conditions in an effort to elucidate the fundamental processes that govern soot particle-particle interactions in wet environments such as rain droplets or surface aquatic systems. The influence of electrolytes and aqueous pH on the colloidal stability of these NPs was investigated by measuring their aggregation kinetics in different aqueous solution chemistries. The results showed that the NPs had negatively charged surfaces and exhibited both reaction- and diffusion-limited aggregation regimes, with rates that depended upon solution chemistry. The aggregation kinetics data were in good agreement with the classic Derjaguin-Landau-Verwey-Overbeek (DLVO) theory. The critical coagulation concentrations (CCC) were quantified and the Hamaker constant was derived for the soot (1.4 × 10⁻²⁰ J) using the colloidal chemistry approach. The study indicated that, depending upon local aqueous chemistry, single soot NPs could remain stable against self-aggregation in typical freshwater environments and in neutral cloud droplets, but are likely to aggregate under salty (e.g., estuaries) or acidic (e.g., acid rain droplets) aquatic conditions or both.
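
    The CCC is commonly extracted by computing attachment efficiencies from early-stage aggregation rates and intersecting the reaction-limited branch with the diffusion-limited plateau. A sketch with synthetic data (not the paper's measurements):

    ```python
    import numpy as np

    def attachment_efficiency(rates, fast_rate):
        """alpha(c) = initial aggregation rate at electrolyte concentration c
        divided by the diffusion-limited (favourable) rate."""
        return np.asarray(rates, float) / fast_rate

    def estimate_ccc(conc, alpha):
        """CCC from the intersection of the reaction-limited log-log regression
        (alpha < 1) with the diffusion-limited plateau alpha = 1."""
        conc, alpha = np.asarray(conc, float), np.asarray(alpha, float)
        rl = alpha < 0.95                              # points on the rising branch
        slope, intercept = np.polyfit(np.log10(conc[rl]), np.log10(alpha[rl]), 1)
        return 10 ** (-intercept / slope)              # where the fit reaches alpha = 1

    # Synthetic NaCl series: rates rise with concentration, then plateau
    conc = np.array([10, 20, 40, 80, 160, 320, 640])            # mM
    rates = np.array([0.4, 1.5, 6.0, 22.0, 60.0, 61.0, 59.0])   # initial growth rate, nm/s
    alpha = attachment_efficiency(rates, fast_rate=60.0)
    print("CCC ~ %.0f mM" % estimate_ccc(conc, alpha))
    ```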

  5. Inducing protein aggregation by extensional flow

    PubMed Central

    Dobson, John; Kumar, Amit; Willis, Leon F.; Tuma, Roman; Higazi, Daniel R.; Turner, Richard; Lowe, David C.; Ashcroft, Alison E.; Radford, Sheena E.; Kapur, Nikil

    2017-01-01

    Relative to other extrinsic factors, the effects of hydrodynamic flow fields on protein stability and conformation remain poorly understood. Flow-induced protein remodeling and/or aggregation is observed both in Nature and during the large-scale industrial manufacture of proteins. Despite its ubiquity, the relationships between the type and magnitude of hydrodynamic flow, a protein’s structure and stability, and the resultant aggregation propensity are unclear. Here, we assess the effects of a defined and quantified flow field dominated by extensional flow on the aggregation of BSA, β2-microglobulin (β2m), granulocyte colony stimulating factor (G-CSF), and three monoclonal antibodies (mAbs). We show that the device induces protein aggregation after exposure to an extensional flow field for 0.36–1.8 ms, at concentrations as low as 0.5 mg mL−1. In addition, we reveal that the extent of aggregation depends on the applied strain rate and the concentration, structural scaffold, and sequence of the protein. Finally we demonstrate the in situ labeling of a buried cysteine residue in BSA during extensional stress. Together, these data indicate that an extensional flow readily unfolds thermodynamically and kinetically stable proteins, exposing previously sequestered sequences whose aggregation propensity determines the probability and extent of aggregation. PMID:28416674

  6. Molecular nonlinear dynamics and protein thermal uncertainty quantification

    PubMed Central

    Xia, Kelin; Wei, Guo-Wei

    2014-01-01

    This work introduces molecular nonlinear dynamics (MND) as a new approach for describing protein folding and aggregation. By using a mode system, we show that the MND of disordered proteins is chaotic while that of folded proteins exhibits intrinsically low dimensional manifolds (ILDMs). The stability of ILDMs is found to strongly correlate with protein energies. We propose a novel method for protein thermal uncertainty quantification based on persistently invariant ILDMs. Extensive comparison with experimental data and the state-of-the-art methods in the field validate the proposed new method for protein B-factor prediction. PMID:24697365

  7. Decomposing Trends in Inequality in Earnings into Forecastable and Uncertain Components

    PubMed Central

    Cunha, Flavio; Heckman, James

    2015-01-01

    A substantial empirical literature documents the rise in wage inequality in the American economy. It is silent on whether the increase in inequality is due to components of earnings that are predictable by agents or whether it is due to greater uncertainty facing them. These two sources of variability have different consequences for both aggregate and individual welfare. Using data on two cohorts of American males we find that a large component of the rise in inequality for less skilled workers is due to uncertainty. For skilled workers, the rise is less pronounced. PMID:27087741

  8. Aggregation in environmental systems - Part 1: Seasonal tracer cycles quantify young water fractions, but not mean transit times, in spatially heterogeneous catchments

    NASA Astrophysics Data System (ADS)

    Kirchner, J. W.

    2016-01-01

    Environmental heterogeneity is ubiquitous, but environmental systems are often analyzed as if they were homogeneous instead, resulting in aggregation errors that are rarely explored and almost never quantified. Here I use simple benchmark tests to explore this general problem in one specific context: the use of seasonal cycles in chemical or isotopic tracers (such as Cl-, δ18O, or δ2H) to estimate timescales of storage in catchments. Timescales of catchment storage are typically quantified by the mean transit time, meaning the average time that elapses between parcels of water entering as precipitation and leaving again as streamflow. Longer mean transit times imply greater damping of seasonal tracer cycles. Thus, the amplitudes of tracer cycles in precipitation and streamflow are commonly used to calculate catchment mean transit times. Here I show that these calculations will typically be wrong by several hundred percent, when applied to catchments with realistic degrees of spatial heterogeneity. This aggregation bias arises from the strong nonlinearity in the relationship between tracer cycle amplitude and mean travel time. I propose an alternative storage metric, the young water fraction in streamflow, defined as the fraction of runoff with transit times of less than roughly 0.2 years. I show that this young water fraction (not to be confused with event-based "new water" in hydrograph separations) is accurately predicted by seasonal tracer cycles within a precision of a few percent, across the entire range of mean transit times from almost zero to almost infinity. Importantly, this relationship is also virtually free from aggregation error. That is, seasonal tracer cycles also accurately predict the young water fraction in runoff from highly heterogeneous mixtures of subcatchments with strongly contrasting transit-time distributions. Thus, although tracer cycle amplitudes yield biased and unreliable estimates of catchment mean travel times in heterogeneous catchments, they can be used to reliably estimate the fraction of young water in runoff.
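
    The size of this aggregation bias is easy to reproduce numerically. For an exponential transit time distribution the seasonal amplitude ratio is A_S/A_P = 1/sqrt(1 + (2*pi*tau/T)^2); mixing two subcatchments with contrasting mean transit times and inverting the mixed amplitude as if the catchment were homogeneous gives a severely biased estimate. The sketch assumes equal flow contributions and a common phase; the numbers are illustrative.

    ```python
    import numpy as np

    def amplitude_ratio_exponential(tau_years, period_years=1.0):
        """Damping of a seasonal tracer cycle by an exponential transit time
        distribution with mean transit time tau: A_S/A_P = 1/sqrt(1+(2*pi*tau/T)^2)."""
        return 1.0 / np.sqrt(1.0 + (2.0 * np.pi * tau_years / period_years) ** 2)

    def tau_from_ratio(ratio, period_years=1.0):
        """Invert the same formula, as is commonly done for a whole catchment."""
        return period_years * np.sqrt(1.0 / ratio**2 - 1.0) / (2.0 * np.pi)

    # Two equal-area subcatchments with very different mean transit times (years)
    tau1, tau2 = 0.2, 20.0
    mixed_ratio = 0.5 * (amplitude_ratio_exponential(tau1) +
                         amplitude_ratio_exponential(tau2))   # cycles mix linearly
    apparent_tau = tau_from_ratio(mixed_ratio)
    true_mean_tau = 0.5 * (tau1 + tau2)
    print(f"true mean transit time: {true_mean_tau:.1f} yr, "
          f"apparent (aggregated) estimate: {apparent_tau:.2f} yr")
    ```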

  9. Technology Assessment Requirements for Programs and Projects

    NASA Technical Reports Server (NTRS)

    Bilbro, James W.

    2006-01-01

    Program/project uncertainty can most simply be defined as the unpredictability of its outcome. As might be expected, the degree of uncertainty depends substantially on program/project type. For hi-tech programs/projects, uncertainty all too frequently translates into schedule slips, cost overruns and occasionally even into cancellations or failures. The root cause of such events is often attributed to inadequate definition of requirements. If such were indeed the root cause, then correcting the situation would simply be a matter of requiring better requirements definition, but since history seems frequently to repeat itself, this must not be the case - at least not in total. There are in fact many contributors to schedule slips, cost overruns, project cancellations and failures, among them lack of adequate requirements definition. The case can be made, however, that many of these contributors are related to the degree of uncertainty at the outset of the project, and further, that a dominant factor in the degree of uncertainty is the maturity of the technology required to bring the project to fruition. This presentation discusses the concept of relating degrees of uncertainty to Technology Readiness Levels (TRL) and their associated Advancement Degree of Difficulty (AD2) levels. It also briefly describes a quantifiable process to establish the appropriate TRL for a given technology and quantifies through the AD2 what is required to move it from its current TRL to the desired TRL in order to reduce risk and maximize the likelihood of successfully infusing the technology.

  10. Physical Mechanisms Driving Cell Sorting in Hydra.

    PubMed

    Cochet-Escartin, Olivier; Locke, Tiffany T; Shi, Winnie H; Steele, Robert E; Collins, Eva-Maria S

    2017-12-19

    Cell sorting, whereby a heterogeneous cell mixture organizes into distinct tissues, is a fundamental patterning process in development. Hydra is a powerful model system for carrying out studies of cell sorting in three dimensions, because of its unique ability to regenerate after complete dissociation into individual cells. The physicists Alfred Gierer and Hans Meinhardt recognized Hydra's self-organizing properties more than 40 years ago. However, what drives cell sorting during regeneration of Hydra from cell aggregates is still debated. Differential motility and differential adhesion have been proposed as driving mechanisms, but the available experimental data are insufficient to distinguish between these two. Here, we answer this longstanding question by using transgenic Hydra expressing fluorescent proteins and a multiscale experimental and numerical approach. By quantifying the kinematics of single cell and whole aggregate behaviors, we show that no differences in cell motility exist among cell types and that sorting dynamics follow a power law with an exponent of ∼0.5. Additionally, we measure the physical properties of separated tissues and quantify their viscosities and surface tensions. Based on our experimental results and numerical simulations, we conclude that tissue interfacial tensions are sufficient to explain cell sorting in aggregates of Hydra cells. Furthermore, we demonstrate that the aggregate's geometry during sorting is key to understanding the sorting dynamics and explains the exponent of the power law behavior. Our results answer the long standing question of the physical mechanisms driving cell sorting in Hydra cell aggregates. In addition, they demonstrate how powerful this organism is for biophysical studies of self-organization and pattern formation. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  11. On the uncertainty of interdisciplinarity measurements due to incomplete bibliographic data.

    PubMed

    Calatrava Moreno, María Del Carmen; Auzinger, Thomas; Werthner, Hannes

    The accuracy of interdisciplinarity measurements is directly related to the quality of the underlying bibliographic data. Existing indicators of interdisciplinarity are not capable of reflecting the inaccuracies introduced by incorrect and incomplete records because correct and complete bibliographic data can rarely be obtained. This is the case for the Rao-Stirling index, which cannot handle references that are not categorized into disciplinary fields. We introduce a method that addresses this problem. It extends the Rao-Stirling index to acknowledge missing data by calculating its interval of uncertainty using computational optimization. The evaluation of our method indicates that the uncertainty interval is not only useful for estimating the inaccuracy of interdisciplinarity measurements, but it also delivers slightly more accurate aggregated interdisciplinarity measurements than the Rao-Stirling index.
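
    As an illustration of the idea, the sketch below computes a standard Rao-Stirling diversity, RS = sum over i != j of p_i p_j d_ij, and brackets it with an uncertainty interval by enumerating the possible category assignments of the uncategorized references. The categories, counts and distance matrix are invented, and brute-force enumeration stands in for the computational optimization used by the authors.

    ```python
    import itertools
    import numpy as np

    # Hypothetical disciplinary categories and pairwise distances.
    categories = ["math", "cs", "bio"]
    d = np.array([[0.0, 0.3, 0.8],
                  [0.3, 0.0, 0.6],
                  [0.8, 0.6, 0.0]])

    counts = np.array([5, 3, 0])   # references with known categories
    n_unknown = 2                  # references missing category data

    def rao_stirling(counts):
        p = counts / counts.sum()
        return float(p @ d @ p)    # diagonal is zero, so i == j terms drop out

    # Point estimate that simply ignores the uncategorized references:
    rs_ignore = rao_stirling(counts)

    # Uncertainty interval: try every possible assignment of the unknown
    # references (brute force is fine for small n_unknown; the paper uses
    # optimization instead).
    values = []
    for assignment in itertools.product(range(len(categories)), repeat=n_unknown):
        c = counts.copy()
        for cat in assignment:
            c[cat] += 1
        values.append(rao_stirling(c))

    print(f"RS ignoring missing refs: {rs_ignore:.3f}")
    print(f"RS uncertainty interval:  [{min(values):.3f}, {max(values):.3f}]")
    ```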

  12. Management of uncertainties on parameters elicited by experts - Applications to sea-level rise and to CO2 storage operations risk assessment

    NASA Astrophysics Data System (ADS)

    Manceau, Jean-Charles; Loschetter, Annick; Rohmer, Jérémy; Le Cozannet, Gonéri; de Lary, Louis; Le Guénan, Thomas; Hnottavange-Telleen, Ken

    2017-04-01

    In contexts of high uncertainty, when very few data are available, experts are commonly requested to provide their opinions on input parameters of risk assessment models. Not only might each expert express a certain degree of uncertainty about his/her own statements, but the set of information collected from the pool of experts introduces an additional level of uncertainty. It is indeed very unlikely that all experts agree on exactly the same data, especially regarding parameters needed for natural risk assessments. In some cases, their opinions may differ only slightly (e.g. the most plausible value for a parameter is similar for different experts, and they only disagree on the level of uncertainty that taints the said value), while in other cases they may express incompatible opinions for the same parameter. Dealing with these different kinds of uncertainty remains a challenge for assessing geological hazards and/or risks. Extra-probabilistic approaches (such as the Dempster-Shafer theory or the possibility theory) have been shown to offer promising solutions for representing parameters on which knowledge is limited. This is the case, for instance, when the available information prevents an expert from identifying a unique probability law to picture the total uncertainty. Moreover, such approaches are known to be particularly flexible when it comes to aggregating several, potentially conflicting opinions. We therefore discuss the suitability of applying these theories for managing the uncertainties on parameters elicited by experts, by comparison with the application of more classical probabilistic approaches. The discussion is based on two different examples. The first example deals with the estimation of the injected CO2 plume extent in a reservoir in the context of CO2 geological storage. This estimation requires information on the effective porosity of the reservoir, which has been estimated by 14 different experts. The Dempster-Shafer theory has been used to represent and aggregate these pieces of information. The results of different aggregation rules as well as those of a classical probabilistic approach are compared with the purpose of highlighting the elements each of them could provide to the decision-maker (Manceau et al., 2016). The second example focuses on projections of future sea-level rise. Based on the IPCC's constraints on the projection quantiles, and on the scientific community's consensus level on the physical limits to future sea-level rise, a possibility distribution of the projections by 2100 under the RCP 8.5 scenario has been established. This possibility distribution has been confronted with a set of previously published probabilistic sea-level projections, with a focus on their ability to explore high ranges of sea-level rise (Le Cozannet et al., 2016). These two examples are complementary in the sense that they allow us to address various aspects of the problem (e.g. representation of different types of information, conflict among experts, source dependence). Moreover, we believe that the issues faced during these two exercises can be generalized to many risk/hazard assessment situations. References: Manceau, J.C., Loschetter, A., Rohmer, J., de Lary, L., Le Guénan, T., Hnottavange-Telleen, K. (2016). Dealing with uncertainty on parameters elicited from a pool of experts for CCS risk assessment. Congrès λμ 20 (St-Malo, France). Le Cozannet, G., Manceau, J.C., Rohmer, J. (2016). Bounding probabilistic sea-level rise projections within the framework of the possibility theory. Accepted in Environmental Research Letters.
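
    The following sketch illustrates one of the aggregation rules referred to above, Dempster's (normalized) rule of combination, applied to two invented expert mass functions over discrete porosity classes; it is an illustration of the general technique, not the authors' implementation or data.

    ```python
    from itertools import product

    # Focal elements are sets of discrete porosity classes; masses are invented.
    m1 = {frozenset({"low", "mid"}): 0.6, frozenset({"mid", "high"}): 0.3,
          frozenset({"low", "mid", "high"}): 0.1}
    m2 = {frozenset({"mid"}): 0.5, frozenset({"high"}): 0.2,
          frozenset({"low", "mid", "high"}): 0.3}

    def dempster_combine(m1, m2):
        combined, conflict = {}, 0.0
        for (a, w1), (b, w2) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2            # mass assigned to the empty set
        # Normalize by the non-conflicting mass (Dempster's normalization).
        return {s: w / (1.0 - conflict) for s, w in combined.items()}, conflict

    combined, conflict = dempster_combine(m1, m2)
    print(f"conflict K = {conflict:.2f}")
    for focal, mass in sorted(combined.items(), key=lambda kv: -kv[1]):
        print(f"m({set(focal)}) = {mass:.3f}")
    ```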

  13. Measuring the uncertainties of discharge measurements: interlaboratory experiments in hydrometry

    NASA Astrophysics Data System (ADS)

    Le Coz, Jérôme; Blanquart, Bertrand; Pobanz, Karine; Dramais, Guillaume; Pierrefeu, Gilles; Hauet, Alexandre; Despax, Aurélien

    2015-04-01

    Quantifying the uncertainty of streamflow data is key for hydrological sciences. The conventional uncertainty analysis based on error propagation techniques is restricted by the absence of traceable discharge standards and by the weight of difficult-to-predict errors related to the operator, procedure and measurement environment. Field interlaboratory experiments recently emerged as an efficient, standardized method to 'measure' the uncertainties of a given streamgauging technique in given measurement conditions. Both uncertainty approaches are compatible and should be developed jointly in the field of hydrometry. In recent years, several interlaboratory experiments have been reported by different hydrological services. They involved different streamgauging techniques, including acoustic profilers (ADCP), current-meters and handheld radars (SVR). Uncertainty analysis was not always their primary goal: most often, testing the proficiency and homogeneity of instruments, makes and models, procedures and operators was the original motivation. When interlaboratory experiments are processed for uncertainty analysis, once outliers have been discarded, all participants are assumed to be equally skilled and to apply the same streamgauging technique in equivalent conditions. A universal requirement is that all participants simultaneously measure the same discharge, which shall be kept constant within negligible variations. To the best of our knowledge, we were the first to apply the interlaboratory method for computing the uncertainties of streamgauging techniques, according to the authoritative international documents (ISO standards). Several specific issues arise due to the measurement conditions in outdoor canals and rivers. The main limitation is that the best available river discharge references are usually too uncertain to quantify the bias of the streamgauging technique, i.e. the systematic errors that are common to all participants in the experiment. A reference or a sensitivity analysis to the fixed parameters of the streamgauging technique remains very useful for estimating the uncertainty related to the (non-quantified) bias correction. In the absence of a reference, the uncertainty estimate is referenced to the average of all discharge measurements in the interlaboratory experiment, ignoring the technique bias. Simple equations can be used to assess the uncertainty of the uncertainty results, as a function of the number of participants and of repeated measurements. The interlaboratory method was applied to several interlaboratory experiments on ADCPs and current-meters mounted on wading rods, in streams of different sizes and aspects, with 10 to 30 instruments, typically. The uncertainty results were consistent with the usual expert judgment and depended strongly on the measurement environment. Approximately, the expanded uncertainties (within the 95% probability interval) were ±5% to ±10% for ADCPs in good or poor conditions, and ±10% to ±15% for current-meters in shallow creeks. Due to the specific limitations related to a slow measurement process and to small, natural streams, uncertainty results for current-meters were more uncertain than for ADCPs, for which the site-specific errors were significantly evidenced. The proposed method can be applied to a wide range of interlaboratory experiments conducted in contrasted environments for different streamgauging techniques, in a standardized way. 
Ideally, an international open database would enhance the investigation of hydrological data uncertainties, according to the characteristics of the measurement conditions and procedures. Such a dataset could be used for implementing and validating uncertainty propagation methods in hydrometry.
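
    A simplified sketch of the kind of computation involved (in the spirit of ISO 5725, not the authors' exact procedure) is given below: repeated gaugings by several participants of the same constant discharge are reduced to a reproducibility standard deviation and an expanded uncertainty referenced to the average discharge, with the technique bias left unquantified as discussed above. All numbers are invented.

    ```python
    import numpy as np

    # Rows are participants, columns are repeated gaugings of the same
    # (constant) discharge; invented values in m3/s.
    Q = np.array([
        [10.2, 10.4, 10.1],
        [ 9.7,  9.9,  9.8],
        [10.6, 10.5, 10.8],
        [10.0, 10.1,  9.9],
    ])
    n_rep = Q.shape[1]

    participant_means = Q.mean(axis=1)
    grand_mean = participant_means.mean()            # reference; technique bias ignored

    s_r = np.sqrt(np.mean(Q.var(axis=1, ddof=1)))    # repeatability (within-participant)
    s_d = participant_means.std(ddof=1)              # spread of participant means
    s_L2 = max(s_d**2 - s_r**2 / n_rep, 0.0)         # between-participant variance
    s_R = np.sqrt(s_L2 + s_r**2)                     # reproducibility standard deviation
    U_rel = 2.0 * s_R / grand_mean * 100.0           # expanded uncertainty (k = 2), in %

    print(f"mean discharge: {grand_mean:.2f} m3/s")
    print(f"expanded uncertainty of a single gauging: +/- {U_rel:.1f} %")
    ```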

  14. Hydroxyl radical-PLIF measurements and accuracy investigation in high pressure gaseous hydrogen/gaseous oxygen combustion

    NASA Astrophysics Data System (ADS)

    Vaidyanathan, Aravind

    In-flow species concentration measurements in reacting flows at high pressures are needed both to improve the current understanding of the physical processes taking place and to validate predictive tools that are under development, for application to the design and optimization of a range of power plants from diesel to rocket engines. To date, non-intrusive measurements have been based on calibrations determined from assumptions that were not sufficiently quantified to provide a clear understanding of the range of uncertainty associated with these measurements. The purpose of this work is to quantify the uncertainties associated with OH measurement in an oxygen-hydrogen system produced by a shear, coaxial injector typical of those used in rocket engines. Planar OH distributions are obtained, providing the instantaneous and averaged distributions that are required for both LES and RANS codes currently under development. This study has evaluated the uncertainties associated with OH measurement at 10, 27, 37 and 53 bar. The total rms error for OH-PLIF measurements from eighteen different parameters was quantified and found to be 21.9, 22.8, 22.5, and 22.9% at 10, 27, 37 and 53 bar, respectively. These results are used by collaborators at Georgia Institute of Technology (LES), Pennsylvania State University (LES), University of Michigan (RANS) and NASA Marshall (RANS).

  15. Qalibra: a general model for food risk-benefit assessment that quantifies variability and uncertainty.

    PubMed

    Hart, Andy; Hoekstra, Jeljer; Owen, Helen; Kennedy, Marc; Zeilmaker, Marco J; de Jong, Nynke; Gunnlaugsdottir, Helga

    2013-04-01

    The EU project BRAFO proposed a framework for risk-benefit assessment of foods, or changes in diet, that present both potential risks and potential benefits to consumers (Hoekstra et al., 2012a). In higher tiers of the BRAFO framework, risks and benefits are integrated quantitatively to estimate net health impact measured in DALYs or QALYs (disability- or quality-adjusted life years). This paper describes a general model that was developed by a second EU project, Qalibra, to assist users in conducting these assessments. Its flexible design makes it applicable to a wide range of dietary questions involving different nutrients, contaminants and health effects. Account can be taken of variation between consumers in their diets and also other characteristics relevant to the estimation of risk and benefit, such as body weight, gender and age. Uncertainty in any input parameter may be quantified probabilistically, using probability distributions, or deterministically by repeating the assessment with alternative assumptions. Uncertainties that are not quantified should be evaluated qualitatively. Outputs produced by the model are illustrated using results from a simple assessment of fish consumption. More detailed case studies on oily fish and phytosterols are presented in companion papers. The model can be accessed as web-based software at www.qalibra.eu. Copyright © 2012. Published by Elsevier Ltd.

  16. Quantifying and Qualifying USGS ShakeMap Uncertainty

    USGS Publications Warehouse

    Wald, David J.; Lin, Kuo-Wan; Quitoriano, Vincent

    2008-01-01

    We describe algorithms for quantifying and qualifying uncertainties associated with USGS ShakeMap ground motions. The uncertainty values computed consist of latitude/longitude grid-based multiplicative factors that scale the standard deviation associated with the ground motion prediction equation (GMPE) used within the ShakeMap algorithm for estimating ground motions. The resulting grid-based 'uncertainty map' is essential for evaluation of losses derived using ShakeMaps as the hazard input. For ShakeMap, ground motion uncertainty at any point is dominated by two main factors: (i) the influence of any proximal ground motion observations, and (ii) the uncertainty of estimating ground motions from the GMPE, most notably, elevated uncertainty due to initial, unconstrained source rupture geometry. The uncertainty is highest for larger magnitude earthquakes when source finiteness is not yet constrained and, hence, the distance to rupture is also uncertain. In addition to a spatially dependent, quantitative assessment, many users may prefer a simple, qualitative grading for the entire ShakeMap. We developed a grading scale that allows one to quickly gauge the appropriate level of confidence when using rapidly produced ShakeMaps as part of the post-earthquake decision-making process or for qualitative assessments of archived or historical earthquake ShakeMaps. We describe an uncertainty letter grading ('A' through 'F', for high to poor quality, respectively) based on the uncertainty map. A middle-range ('C') grade corresponds to a ShakeMap for a moderate-magnitude earthquake suitably represented with a point-source location. Lower grades 'D' and 'F' are assigned for larger events (M>6) where finite-source dimensions are not yet constrained. The addition of ground motion observations (or observed macroseismic intensities) reduces uncertainties over data-constrained portions of the map. Higher grades ('A' and 'B') correspond to ShakeMaps with constrained fault dimensions and numerous stations, depending on the density of station/data coverage. Due to these dependencies, the letter grade can change with subsequent ShakeMap revisions if more data are added or when finite-faulting dimensions are added. We emphasize that the greatest uncertainties are associated with unconstrained source dimensions for large earthquakes where the distance term in the GMPE is most uncertain; this uncertainty thus scales with magnitude (and consequently rupture dimension). Since this distance uncertainty produces potentially large uncertainties in ShakeMap ground-motion estimates, this factor dominates over compensating constraints for all but the most dense station distributions.

  17. The biotic ligand model approach for addressing effects of exposure water chemistry on aquatic toxicity of metals: Genesis and challenges

    EPA Science Inventory

    A major uncertainty in many aquatic risk assessments for toxic chemicals is the aggregate effect of the physicochemical characteristics of exposure media on toxicity, and how this affects extrapolation of laboratory test results to natural systems. A notable example of this is h...

  18. Applying Aggregate Exposure Pathway and Adverse Outcome Pathway frameworks to link toxicity testing data to exposure-relevant and biologically-relevant responses

    EPA Science Inventory

    Hazard assessment for nanomaterials often involves applying in vitro dose-response data to estimate potential health risks that arise from exposure to products that contain nanomaterials. However, much uncertainty is inherent in relating bioactivities observed in an in vitro syst...

  19. Classifying Human Activity Patterns from Smartphone Collected GPS data: a Fuzzy Classification and Aggregation Approach.

    PubMed

    Wan, Neng; Lin, Ge

    2016-12-01

    Smartphones have emerged as a promising type of equipment for monitoring human activities in environmental health studies. However, degraded location accuracy and inconsistency of smartphone-measured GPS data have limited its effectiveness for classifying human activity patterns. This study proposes a fuzzy classification scheme for differentiating human activity patterns from smartphone-collected GPS data. Specifically, a fuzzy logic reasoning was adopted to overcome the influence of location uncertainty by estimating the probability of different activity types for single GPS points. Based on that approach, a segment aggregation method was developed to infer activity patterns, while adjusting for uncertainties of point attributes. Validations of the proposed methods were carried out based on a convenient sample of three subjects with different types of smartphones. The results indicate desirable accuracy (e.g., up to 96% in activity identification) with use of this method. Two examples were provided in the appendix to illustrate how the proposed methods could be applied in environmental health studies. Researchers could tailor this scheme to fit a variety of research topics.

  20. Strategic interactions, affective reactions, and fast adaptations.

    PubMed

    Kareev, Yaakov; Avrahami, Judith; Fiedler, Klaus

    2014-06-01

    We studied repeated choices under uncertainty in situations in which the source of uncertainty is the choice of an interaction partner. In 1 experiment the participants engaged in repeated decisions in a mixed motive game; in another experiment the options and outcomes were identical to those in the 1st, but periods of the mixed-motive game alternated with periods of a coordination game, with the change in period not announced. We analyzed choice dynamics-the relationship between an outcome and the choice that followed-and aggregate choice probabilities to gauge the relative merit of reward-based or affect-based accounts (the affects considered being disappointment and regret). In both experiments choice dynamics were essentially identical and were compatible with only the regret-based account. This was true irrespective of the game played or the stage (early or late) of the game. Moreover, the same dynamics explained the very different aggregate probabilities with which the 2 options were chosen in the 2 games and the remarkably fast adaptations to unannounced changes in the game played. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  1. Uncertainty in flood damage estimates and its potential effect on investment decisions

    NASA Astrophysics Data System (ADS)

    Wagenaar, Dennis; de Bruijn, Karin; Bouwer, Laurens; de Moel, Hans

    2015-04-01

    This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage models can lead to large uncertainties in flood damage estimates. This explanation is used to quantify this uncertainty with a Monte Carlo analysis. This Monte Carlo analysis uses a damage function library with 272 functions from 7 different flood damage models. This results in uncertainties on the order of a factor of 2 to 5. This uncertainty is typically larger for small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economically optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investments.
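
    A minimal sketch of the Monte Carlo idea described above is given below; the three depth-damage curves and the maximum damage value are invented stand-ins for the 272-function library, but the qualitative outcome (relatively larger spread at small water depths) mirrors the finding reported in the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Treat the choice of depth-damage function as uncertain and sample it.
    max_damage = 250_000  # EUR per building, hypothetical
    damage_functions = [
        lambda d: np.clip(d / 5.0, 0, 1),            # linear up to 5 m
        lambda d: np.clip(np.sqrt(d / 4.0), 0, 1),   # concave (rapid early damage)
        lambda d: np.clip((d / 6.0) ** 2, 0, 1),     # convex (slow early damage)
    ]

    def simulate(depth_m, n=10_000):
        idx = rng.integers(len(damage_functions), size=n)  # sample a curve per draw
        return np.array([damage_functions[i](depth_m) * max_damage for i in idx])

    for depth in (0.5, 2.0):
        dmg = simulate(depth)
        p5, p95 = np.percentile(dmg, [5, 95])
        print(f"depth {depth:.1f} m: 5th-95th pct = {p5:,.0f} - {p95:,.0f} EUR "
              f"(factor {p95 / p5:.1f})")
    ```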

  2. Uncertainty in flood damage estimates and its potential effect on investment decisions

    NASA Astrophysics Data System (ADS)

    Wagenaar, D. J.; de Bruijn, K. M.; Bouwer, L. M.; De Moel, H.

    2015-01-01

    This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage models can lead to large uncertainties in flood damage estimates. This explanation is used to quantify this uncertainty with a Monte Carlo analysis. As input, the Monte Carlo analysis uses a damage function library with 272 functions from 7 different flood damage models. This results in uncertainties on the order of a factor of 2 to 5. The resulting uncertainty is typically larger for small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economically optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investments.

  3. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
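
    For readers unfamiliar with the propagation step mentioned above, the sketch below shows generic first-order (GUM-style) propagation of measurement uncertainties through a defining functional expression; the function (dynamic pressure) and the numbers are illustrative and not taken from the report.

    ```python
    import numpy as np

    # Dynamic pressure q = 0.5 * rho * V**2 from density and velocity measurements.
    def q(rho, V):
        return 0.5 * rho * V**2

    rho, u_rho = 1.225, 0.010      # kg/m3 and its standard uncertainty
    V,   u_V   = 50.0,  0.25       # m/s and its standard uncertainty

    # Sensitivity coefficients (partial derivatives), evaluated analytically here.
    dq_drho = 0.5 * V**2
    dq_dV   = rho * V

    u_q = np.sqrt((dq_drho * u_rho)**2 + (dq_dV * u_V)**2)   # combined std uncertainty
    U_q = 2.0 * u_q                                          # expanded, ~95 % (k = 2)

    print(f"q = {q(rho, V):.1f} Pa, U(95%) = +/- {U_q:.1f} Pa")
    ```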

  4. Multiscale measurement error models for aggregated small area health data.

    PubMed

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-08-01

    Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via the shared random effect have been proposed. One of the main goals in analyzing aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. © The Author(s) 2016.

  5. Uncertainty quantification in Rothermel's Model using an efficient sampling method

    Treesearch

    Edwin Jimenez; M. Yousuff Hussaini; Scott L. Goodrick

    2007-01-01

    The purpose of the present work is to quantify parametric uncertainty in Rothermel’s wildland fire spread model (implemented in software such as BehavePlus3 and FARSITE), which is undoubtedly among the most widely used fire spread models in the United States. This model consists of a nonlinear system of equations that relates environmental variables (input parameter...

  6. Error correction in multi-fidelity molecular dynamics simulations using functional uncertainty quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reeve, Samuel Temple; Strachan, Alejandro, E-mail: strachan@purdue.edu

    We use functional, Fréchet, derivatives to quantify how thermodynamic outputs of a molecular dynamics (MD) simulation depend on the potential used to compute atomic interactions. Our approach quantifies the sensitivity of the quantities of interest with respect to the input functions as opposed to its parameters as is done in typical uncertainty quantification methods. We show that the functional sensitivity of the average potential energy and pressure in isothermal, isochoric MD simulations using Lennard–Jones two-body interactions can be used to accurately predict those properties for other interatomic potentials (with different functional forms) without re-running the simulations. This is demonstrated under three different thermodynamic conditions, namely a crystal at room temperature, a liquid at ambient pressure, and a high pressure liquid. The method provides accurate predictions as long as the change in potential can be reasonably described to first order and does not significantly affect the region in phase space explored by the simulation. The functional uncertainty quantification approach can be used to estimate the uncertainties associated with constitutive models used in the simulation and to correct predictions if a more accurate representation becomes available.

  7. Carbon accounting and economic model uncertainty of emissions from biofuels-induced land use change.

    PubMed

    Plevin, Richard J; Beckman, Jayson; Golub, Alla A; Witcover, Julie; O'Hare, Michael

    2015-03-03

    Few of the numerous published studies of the emissions from biofuels-induced "indirect" land use change (ILUC) attempt to propagate and quantify uncertainty, and those that have done so have restricted their analysis to a portion of the modeling systems used. In this study, we pair a global, computable general equilibrium model with a model of greenhouse gas emissions from land-use change to quantify the parametric uncertainty in the paired modeling system's estimates of greenhouse gas emissions from ILUC induced by expanded production of three biofuels. We find that for the three fuel systems examined--US corn ethanol, Brazilian sugar cane ethanol, and US soybean biodiesel--95% of the results occurred within ±20 g CO2e MJ(-1) of the mean (coefficient of variation of 20-45%), with economic model parameters related to crop yield and the productivity of newly converted cropland (from forestry and pasture) contributing most of the variance in estimated ILUC emissions intensity. Although the experiments performed here allow us to characterize parametric uncertainty, changes to the model structure have the potential to shift the mean by tens of grams of CO2e per megajoule and further broaden distributions for ILUC emission intensities.

  8. Application of Electron Backscatter Diffraction to evaluate the ASR risk of concrete aggregates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rößler, C., E-mail: christiane.roessler@uni-weimar.de; Möser, B.; Giebson, C.

    Alkali-Silica Reaction (ASR) is a frequent cause of reduced concrete durability. Eliminating the application of alkali reactive aggregates would reduce the quantity of ASR concrete deterioration in the field. This study introduces an Electron Backscatter Diffraction (EBSD) technique to distinguish the ASR risk of slow-late reacting aggregates by measuring microstructural properties of quartz. Quantifying the amount of quartz grain boundaries and the associated misorientation of grains can thereby be used to differentiate microstructures bearing an ASR risk. It is also shown that dissolution of quartz in high pH environments occurs along quartz grain and subgrain boundaries. Results of EBSD analysis are compared with ASR performance testing on concrete prisms and optical light microscopy characterization of quartz microstructure. EBSD opens new possibilities to quantitatively characterize the microstructure of quartz in concrete aggregates with respect to ASR. This leads to a better understanding of the actual cause of ASR.

  9. Ballistic aggregation in systems of inelastic particles: Cluster growth, structure, and aging

    NASA Astrophysics Data System (ADS)

    Paul, Subhajit; Das, Subir K.

    2017-07-01

    We study far-from-equilibrium dynamics in models of freely cooling granular gas and ballistically aggregating compact clusters. For both the cases, from event-driven molecular dynamics simulations, we have presented detailed results on structure and dynamics in space dimensions d = 1 and 2. Via appropriate analyses it has been confirmed that the ballistic aggregation mechanism applies in d = 1 granular gases as well. Aging phenomena for this mechanism, in both the dimensions, have been studied via the two-time density autocorrelation function. This quantity is demonstrated to exhibit scaling property similar to that in the standard phase transition kinetics. The corresponding functional forms have been quantified and the outcomes have been discussed in connection with the structural properties. Our results on aging establish a more complete equivalence between the granular gas and the ballistic aggregation models in d = 1.

  10. Proof-of-Concept Study for Uncertainty Quantification and Sensitivity Analysis using the BRL Shaped-Charge Example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Justin Matthew

    These are the slides for a graduate presentation at Mississippi State University. It covers the following: the BRL Shaped-Charge Geometry in PAGOSA, mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method, where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.
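
    The sketch below illustrates the brute-force Monte Carlo step described in the summary: sample the uncertain inputs, evaluate a fast surrogate, and report the 95% data range about the median. The surrogate function, parameter names and ranges are hypothetical stand-ins, not the RBFN or JWL values used in the presentation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a surrogate trained on hydrocode runs (km/s).
    def surrogate_jet_tip_velocity(det_vel, rho0, c1, b1):
        return 0.8 * det_vel + 0.5 * rho0 - 0.02 * c1 + 0.3 * b1

    n = 100_000
    det_vel = rng.uniform(7.6, 8.1, n)       # detonation velocity, km/s (hypothetical)
    rho0    = rng.uniform(1.68, 1.74, n)     # initial density, g/cm3 (hypothetical)
    c1      = rng.uniform(500.0, 560.0, n)   # JWL C1-like parameter, arbitrary units
    b1      = rng.uniform(1.0, 1.6, n)       # JWL B1-like parameter, arbitrary units

    v_tip = surrogate_jet_tip_velocity(det_vel, rho0, c1, b1)
    lo, med, hi = np.percentile(v_tip, [2.5, 50.0, 97.5])
    print(f"median jet tip velocity: {med:.2f} km/s")
    print(f"95% data range about the median: [{lo:.2f}, {hi:.2f}] km/s")
    ```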

  11. Quantifying parameter uncertainty in stochastic models using the Box-Cox transformation

    NASA Astrophysics Data System (ADS)

    Thyer, Mark; Kuczera, George; Wang, Q. J.

    2002-08-01

    The Box-Cox transformation is widely used to transform hydrological data to make it approximately Gaussian. Bayesian evaluation of parameter uncertainty in stochastic models using the Box-Cox transformation is hindered by the fact that there is no analytical solution for the posterior distribution. However, the Markov chain Monte Carlo method known as the Metropolis algorithm can be used to simulate the posterior distribution. This method properly accounts for the nonnegativity constraint implicit in the Box-Cox transformation. Nonetheless, a case study using the AR(1) model uncovered a practical problem with the implementation of the Metropolis algorithm. The use of a multivariate Gaussian jump distribution resulted in unacceptable convergence behaviour. This was rectified by developing suitable parameter transformations for the mean and variance of the AR(1) process to remove the strong nonlinear dependencies with the Box-Cox transformation parameter. Applying this methodology to the Sydney annual rainfall data and the Burdekin River annual runoff data illustrates the efficacy of these parameter transformations and demonstrates the value of quantifying parameter uncertainty.
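
    A minimal sketch of the general approach is shown below: a random-walk Metropolis sampler for the Box-Cox parameter of independent, identically distributed data (a deliberate simplification of the paper's AR(1) model, with synthetic data and untuned step sizes).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # z = (y**lam - 1)/lam for lam != 0, z = log(y) for lam == 0.
    y = rng.gamma(shape=4.0, scale=50.0, size=200)   # synthetic skewed "rainfall"

    def log_post(theta, y):
        lam, mu, log_sig = theta
        sig = np.exp(log_sig)
        z = np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam
        # Gaussian log-likelihood in transformed space + Box-Cox Jacobian,
        # with flat (improper) priors on lam, mu and log_sig.
        return (-0.5 * np.sum(((z - mu) / sig) ** 2) - len(y) * np.log(sig)
                + (lam - 1.0) * np.sum(np.log(y)))

    theta = np.array([0.5, 20.0, np.log(5.0)])       # crude starting point
    step = np.array([0.05, 1.0, 0.05])               # random-walk step sizes
    samples, lp = [], log_post(theta, y)
    for _ in range(20_000):
        prop = theta + step * rng.standard_normal(3)
        lp_prop = log_post(prop, y)
        if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta)

    lam_draws = np.array(samples)[10_000:, 0]        # discard burn-in
    print(f"lambda posterior: mean {lam_draws.mean():.2f}, "
          f"95% CI [{np.percentile(lam_draws, 2.5):.2f}, "
          f"{np.percentile(lam_draws, 97.5):.2f}]")
    ```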

  12. Communicating Geographical Risks in Crisis Management: The Need for Research.

    PubMed

    French, Simon; Argyris, Nikolaos; Haywood, Stephanie M; Hort, Matthew C; Smith, Jim Q

    2017-10-23

    In any crisis, there is a great deal of uncertainty, often geographical uncertainty or, more precisely, spatiotemporal uncertainty. Examples include the spread of contamination from an industrial accident, drifting volcanic ash, and the path of a hurricane. Estimating spatiotemporal probabilities is usually a difficult task, but that is not our primary concern. Rather, we ask how analysts can communicate spatiotemporal uncertainty to those handling the crisis. We comment on the somewhat limited literature on the representation of spatial uncertainty on maps. We note that many cognitive issues arise and that the potential for confusion is high. We note that in the early stages of handling a crisis, the uncertainties involved may be deep, i.e., difficult or impossible to quantify in the time available. In such circumstances, we suggest the idea of presenting multiple scenarios. © 2017 Society for Risk Analysis.

  13. Quantitative Analysis of Uncertainty in Medical Reporting: Creating a Standardized and Objective Methodology.

    PubMed

    Reiner, Bruce I

    2018-04-01

    Uncertainty in text-based medical reports has long been recognized as problematic, frequently resulting in misunderstanding and miscommunication. One strategy for addressing the negative clinical ramifications of report uncertainty would be the creation of a standardized methodology for characterizing and quantifying uncertainty language, which could provide both the report author and reader with context related to the perceived level of diagnostic confidence and accuracy. A number of computerized strategies could be employed in the creation of this analysis including string search, natural language processing and understanding, histogram analysis, topic modeling, and machine learning. The derived uncertainty data offers the potential to objectively analyze report uncertainty in real time and correlate with outcomes analysis for the purpose of context and user-specific decision support at the point of care, where intervention would have the greatest clinical impact.

  14. Cultured Construction: Global Evidence of the Impact of National Values on Piped-to-Premises Water Infrastructure Development.

    PubMed

    Kaminsky, Jessica A

    2016-07-19

    In 2016, the global community undertook the Sustainable Development Goals. One of these goals seeks to achieve universal and equitable access to safe and affordable drinking water for all people by the year 2030. In support of this undertaking, this paper seeks to discover the cultural work done by piped water infrastructure across 33 nations with developed and developing economies that have experienced change in the percentage of population served by piped-to-premises water infrastructure at the national level of analysis. To do so, I regressed the 1990-2012 change in piped-to-premises water infrastructure coverage against Hofstede's cultural dimensions, controlling for per capita GDP, the 1990 baseline level of coverage, percent urban population, overall 1990-2012 change in improved sanitation (all technologies), and per capita freshwater resources. Separate analyses were carried out for the urban, rural, and aggregate national contexts. Hofstede's dimensions provide a measure of cross-cultural difference; high or low scores are not in any way intended to represent better or worse but rather serve as a quantitative way to compare aggregate preferences for ways of being and doing. High scores in the cultural dimensions of Power Distance, Individualism-Collectivism, and Uncertainty Avoidance explain increased access to piped-to-premises water infrastructure in the rural context. Higher Power Distance and Uncertainty Avoidance scores are also statistically significant for increased coverage in the urban and national aggregate contexts. These results indicate that, as presently conceived, piped-to-premises water infrastructure fits best with spatial contexts that prefer hierarchy and centralized control. Furthermore, water infrastructure is understood to reduce uncertainty regarding the provision of individually valued benefits. The results of this analysis identify global trends that enable engineers and policy makers to design and manage more culturally appropriate and socially sustainable water infrastructure by better fitting technologies to user preferences.

  15. A Methodology for Robust Comparative Life Cycle Assessments Incorporating Uncertainty.

    PubMed

    Gregory, Jeremy R; Noshadravan, Arash; Olivetti, Elsa A; Kirchain, Randolph E

    2016-06-21

    We propose a methodology for conducting robust comparative life cycle assessments (LCA) by leveraging uncertainty. The method evaluates a broad range of the possible scenario space in a probabilistic fashion while simultaneously considering uncertainty in input data. The method is intended to ascertain which scenarios have a definitive environmentally preferable choice among the alternatives being compared and the significance of the differences given uncertainty in the parameters, which parameters have the most influence on this difference, and how we can identify the resolvable scenarios (where one alternative in the comparison has a clearly lower environmental impact). This is accomplished via an aggregated probabilistic scenario-aware analysis, followed by an assessment of which scenarios have resolvable alternatives. Decision-tree partitioning algorithms are used to isolate meaningful scenario groups. In instances where the alternatives cannot be resolved for scenarios of interest, influential parameters are identified using sensitivity analysis. If those parameters can be refined, the process can be iterated using the refined parameters. We also present definitions of uncertainty quantities that have not been applied in the field of LCA and approaches for characterizing uncertainty in those quantities. We then demonstrate the methodology through a case study of pavements.
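
    The sketch below illustrates the aggregated probabilistic scenario-aware comparison in simplified form: for each combination of discrete scenario variables, parameter uncertainty is sampled and a scenario is labelled resolved only if one alternative is preferable with high probability. The alternatives, impact functions, scenario dimensions and threshold are all invented for illustration.

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(7)

    traffic_levels = {"low": 1.0, "high": 2.5}        # scenario dimension 1
    lifetimes = {"short": 20, "long": 40}             # scenario dimension 2, years
    n_mc, threshold = 5_000, 0.90                     # draws and "resolved" cutoff

    def impact_A(traffic, life, rng):                 # e.g., kg CO2e per lane-km
        return rng.normal(120, 15, n_mc) * traffic + 6.0 * life
    def impact_B(traffic, life, rng):
        return rng.normal(100, 25, n_mc) * traffic + 8.0 * life

    for (t_name, t), (l_name, life) in itertools.product(traffic_levels.items(),
                                                          lifetimes.items()):
        p_A_better = np.mean(impact_A(t, life, rng) < impact_B(t, life, rng))
        verdict = ("A preferable" if p_A_better > threshold else
                   "B preferable" if p_A_better < 1 - threshold else "unresolved")
        print(f"traffic={t_name:4s} lifetime={l_name:5s}: "
              f"P(A<B)={p_A_better:.2f} -> {verdict}")
    ```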

  16. Multi-criteria Resource Mapping and its Relevance in the Assessment of Habitat Changes

    NASA Astrophysics Data System (ADS)

    Van Lancker, V. R.; Kint, L.; van Heteren, S.

    2016-02-01

    Mineral and geological resources can be considered to be non-renewable on time scales relevant for decision makers. Once exhausted by humans, they are not replenished rapidly enough by nature, meaning that truly sustainable resource exploitation is not possible. Comprehensive knowledge on the distribution, composition and dynamics of geological resources and on the environmental impact of aggregate extraction is therefore critical. For the Belgian and southern Netherlands part of the North Sea, being representative of a typical sandbank system, a 4D resource decision-support system is being developed that links 3D geological models with environmental impact models. The aim is to quantify natural and man-made changes and to derive sustainable exploitation thresholds from them. These are needed to ensure that recovery from perturbations is rapid and secure, and that the range of natural variation is maintained, a prerequisite stated in Europe's Marine Strategy Framework Directive, the environmental pillar of Europe's Maritime Policy. The geological subsurface is parameterised using a voxel modelling approach. Primarily, the voxels, or volume blocks of information, are constrained by the geology, based on coring and seismic data, but they are open to any resource-relevant information. The primary geological data entering the voxels are subjected to uncertainty modelling, a necessary step to produce data products with confidence limits. The presentation will focus on the novelty this approach brings for seabed and habitat mapping. In our model this is the upper voxel, which provides the advantage of a dynamic coupling to the geology and a suite of environmental parameters. In the context of assessing habitat changes, this coupling makes it possible to account for spatial and temporal variability, seabed heterogeneity, as well as data uncertainty. The project is funded by Belgian Science Policy and is further valorised through EMODnet-Geology (DG MARE).

  17. Medical Geography: a Promising Field of Application for Geostatistics

    PubMed Central

    Goovaerts, P.

    2008-01-01

    The analysis of health data and putative covariates, such as environmental, socio-economic, behavioral or demographic factors, is a promising application for geostatistics. It presents, however, several methodological challenges that arise from the fact that data are typically aggregated over irregular spatial supports and consist of a numerator and a denominator (i.e. population size). This paper presents an overview of recent developments in the field of health geostatistics, with an emphasis on three main steps in the analysis of areal health data: estimation of the underlying disease risk, detection of areas with significantly higher risk, and analysis of relationships with putative risk factors. The analysis is illustrated using age-adjusted cervix cancer mortality rates recorded over the 1970–1994 period for 118 counties of four states in the Western USA. Poisson kriging allows the filtering of noisy mortality rates computed from small population sizes, enhancing the correlation with two putative explanatory variables: percentage of habitants living below the federally defined poverty line, and percentage of Hispanic females. Area-to-point kriging formulation creates continuous maps of mortality risk, reducing the visual bias associated with the interpretation of choropleth maps. Stochastic simulation is used to generate realizations of cancer mortality maps, which allows one to quantify numerically how the uncertainty about the spatial distribution of health outcomes translates into uncertainty about the location of clusters of high values or the correlation with covariates. Last, geographically-weighted regression highlights the non-stationarity in the explanatory power of covariates: the higher mortality values along the coast are better explained by the two covariates than the lower risk recorded in Utah. PMID:19412347

  18. Assessing uncertainty in SRTM elevations for global flood modelling

    NASA Astrophysics Data System (ADS)

    Hawker, L. P.; Rougier, J.; Neal, J. C.; Bates, P. D.

    2017-12-01

    The SRTM DEM is widely used as the topography input to flood models in data-sparse locations. Understanding spatial error in the SRTM product is crucial in constraining uncertainty about elevations and assessing the impact of these upon flood prediction. Assessment of SRTM error was carried out by Rodriguez et al. (2006), but this did not explicitly quantify the spatial structure of vertical errors in the DEM, nor did it distinguish between errors over different types of landscape. As a result, there is a lack of information about the spatial structure of vertical errors of the SRTM in the landscape that matters most to flood models - the floodplain. Therefore, this study attempts this task by comparing SRTM, an error-corrected SRTM product (the MERIT DEM of Yamazaki et al., 2017) and near-truth LIDAR elevations for 3 deltaic floodplains (Mississippi, Po, Wax Lake) and a large lowland region (the Fens, UK). Using the error covariance function, calculated by comparing SRTM elevations to the near-truth LIDAR, perturbations of the 90 m SRTM DEM were generated, producing a catalogue of plausible DEMs. This allows modellers to simulate a suite of plausible DEMs at any aggregated block size above native SRTM resolution. Finally, the generated DEMs were input into a hydrodynamic model of the Mekong Delta, built using the LISFLOOD-FP hydrodynamic model, to assess how DEM error affects the hydrodynamics and inundation extent across the domain. The end product of this is an inundation map with the probability of each pixel being flooded based on the catalogue of DEMs. In a world of increasing computer power, but a lack of detailed datasets, this powerful approach can be used throughout natural hazard modelling to understand how errors in the SRTM DEM can impact the hazard assessment.

  19. Radiometric cross-calibration of the Terra MODIS and Landsat 7 ETM+ using an invariant desert site

    USGS Publications Warehouse

    Choi, T.; Angal, A.; Chander, G.; Xiong, X.

    2008-01-01

    A methodology for long-term radiometric cross-calibration between the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) sensors was developed. The approach involves calibration of near-simultaneous surface observations between 2000 and 2007. Fifty-seven cloud-free image pairs were carefully selected over the Libyan desert for this study. The Libyan desert site (+28.55°, +23.39°), located in northern Africa, is a high reflectance site with high spatial, spectral, and temporal uniformity. Because the test site covers about 12 km x 13 km, accurate geometric preprocessing is required to match the footprint size between the two sensors to avoid uncertainties due to residual image misregistration. MODIS Level 1B radiometrically corrected products were reprojected to the corresponding ETM+ image's Universal Transverse Mercator (UTM) grid projection. The 30 m pixels from the ETM+ images were aggregated to match the MODIS spatial resolution (250 m in Bands 1 and 2, or 500 m in Bands 3 to 7). The image data from both sensors were converted to absolute units of at-sensor radiance and top-of-atmosphere (TOA) reflectance for the spectrally matching band pairs. For each band pair, a set of fitted coefficients (slope and offset) is provided to quantify the relationships between the testing sensors. This work focuses on long-term stability and correlation of the Terra MODIS and L7 ETM+ sensors using absolute calibration results over the entire mission of the two sensors. Possible uncertainties are also discussed, such as spectral differences in matching band pairs, solar zenith angle change during a collection, and differences in solar irradiance models.

  20. The proportionality of global warming to cumulative carbon emissions.

    PubMed

    Matthews, H Damon; Gillett, Nathan P; Stott, Peter A; Zickfeld, Kirsten

    2009-06-11

    The global temperature response to increasing atmospheric CO(2) is often quantified by metrics such as equilibrium climate sensitivity and transient climate response. These approaches, however, do not account for carbon cycle feedbacks and therefore do not fully represent the net response of the Earth system to anthropogenic CO(2) emissions. Climate-carbon modelling experiments have shown that: (1) the warming per unit CO(2) emitted does not depend on the background CO(2) concentration; (2) the total allowable emissions for climate stabilization do not depend on the timing of those emissions; and (3) the temperature response to a pulse of CO(2) is approximately constant on timescales of decades to centuries. Here we generalize these results and show that the carbon-climate response (CCR), defined as the ratio of temperature change to cumulative carbon emissions, is approximately independent of both the atmospheric CO(2) concentration and its rate of change on these timescales. From observational constraints, we estimate CCR to be in the range 1.0-2.1 degrees C per trillion tonnes of carbon (Tt C) emitted (5th to 95th percentiles), consistent with twenty-first-century CCR values simulated by climate-carbon models. Uncertainty in land-use CO(2) emissions and aerosol forcing, however, means that higher observationally constrained values cannot be excluded. The CCR, when evaluated from climate-carbon models under idealized conditions, represents a simple yet robust metric for comparing models, which aggregates both climate feedbacks and carbon cycle feedbacks. CCR is also likely to be a useful concept for climate change mitigation and policy; by combining the uncertainties associated with climate sensitivity, carbon sinks and climate-carbon feedbacks into a single quantity, the CCR allows CO(2)-induced global mean temperature change to be inferred directly from cumulative carbon emissions.
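
    A short worked example of the linear CCR relationship (delta_T approximately CCR times cumulative emissions) follows; the 1.0-2.1 degrees C per Tt C range is the 5th-95th percentile quoted above, while the emission totals are illustrative round numbers.

    ```python
    # delta_T ~= CCR * cumulative_emissions
    ccr_lo, ccr_hi = 1.0, 2.1          # degrees C per trillion tonnes of carbon (Tt C)

    cumulative_emissions = 0.6         # Tt C, an illustrative cumulative total
    print(f"warming for {cumulative_emissions} Tt C: "
          f"{ccr_lo * cumulative_emissions:.1f}-{ccr_hi * cumulative_emissions:.1f} C")

    target = 2.0                       # C of CO2-induced warming
    print(f"allowable cumulative emissions for {target} C: "
          f"{target / ccr_hi:.2f}-{target / ccr_lo:.2f} Tt C")
    ```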

  1. Quantifying uncertainty in the measurement of arsenic in suspended particulate matter by Atomic Absorption Spectrometry with hydride generator

    PubMed Central

    2011-01-01

    Arsenic is a toxic element that creates several problems in human beings, especially when inhaled through air. Accurate and precise measurement of arsenic in suspended particulate matter (SPM) is therefore of prime importance, as it gives information about the level of toxicity in the environment so that preventive measures can be taken in the affected areas. Quality assurance is equally important in the measurement of arsenic in SPM samples before making any decision. The quality and reliability of the data for such volatile elements depend upon the measurement of uncertainty of each step involved, from sampling to analysis. Analytical results that quantify uncertainty give a measure of the confidence level of the concerned laboratory. The main objective of this study was therefore to determine the arsenic content in SPM samples with an uncertainty budget and to identify the various potential sources of uncertainty that affect the results. With these facts in mind, we selected seven diverse sites of Delhi (the national capital of India) for quantification of arsenic content in SPM samples with an uncertainty budget, from sampling by HVS to analysis by Atomic Absorption Spectrometer-Hydride Generator (AAS-HG). Many steps are involved in the measurement of arsenic in SPM samples, from sampling to the final result, and we have considered the various potential sources of uncertainty. The calculation of uncertainty is based on the ISO/IEC 17025:2005 document and the EURACHEM guideline. It has been found that the final results depend mostly on the uncertainty in measurement due to repeatability, the final volume prepared for analysis, the weighing balance and sampling by HVS. After analysis of the data from the seven diverse sites of Delhi, it is concluded that during the period from 31st Jan. 2008 to 7th Feb. 2008 the arsenic concentration varied from 1.44 ± 0.25 to 5.58 ± 0.55 ng/m3 with a 95% confidence level (k = 2). PMID:21466671
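
    The sketch below shows the generic structure of such an uncertainty budget: relative standard uncertainty components are combined in quadrature and expanded with a coverage factor k = 2. The component labels echo the sources named above, but every numerical value is an invented placeholder.

    ```python
    import numpy as np

    # Invented placeholders, expressed as relative standard uncertainties
    # (fractions of the measured arsenic concentration).
    components = {
        "repeatability of AAS-HG readings": 0.055,
        "final volume prepared for analysis": 0.030,
        "weighing balance (filter mass)": 0.025,
        "air volume sampled by HVS": 0.060,
        "calibration standard / curve": 0.040,
    }

    u_c = np.sqrt(sum(u**2 for u in components.values()))  # combined relative uncertainty
    k = 2.0                                                # coverage factor (~95 %)
    U_rel = k * u_c

    concentration = 3.2                                    # ng/m3, hypothetical result
    print(f"combined relative standard uncertainty: {u_c * 100:.1f} %")
    print(f"reported result: {concentration:.2f} +/- {concentration * U_rel:.2f} ng/m3 (k = 2)")
    ```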

  2. Quantification of uncertainty in first-principles predicted mechanical properties of solids: Application to solid ion conductors

    NASA Astrophysics Data System (ADS)

    Ahmad, Zeeshan; Viswanathan, Venkatasubramanian

    2016-08-01

    Computationally guided materials discovery is increasingly employed using descriptor-based screening through the calculation of a few properties of interest. A precise understanding of the uncertainty associated with first-principles density functional theory calculated property values is important for the success of descriptor-based screening. The Bayesian error estimation approach has been built into several recently developed exchange-correlation functionals, which allows an estimate of the uncertainty associated with properties related to the ground state energy, for example, adsorption energies. Here, we propose a robust and computationally efficient method for quantifying uncertainty in mechanical properties, which depend on the derivatives of the energy. The procedure involves calculating energies around the equilibrium cell volume with different strains and fitting the obtained energies to the corresponding energy-strain relationship. At each strain, we use, instead of a single energy, an ensemble of energies, giving us an ensemble of fits and thereby an ensemble of mechanical properties associated with each fit, whose spread can be used to quantify the uncertainty. The generation of the ensemble of energies is only a post-processing step involving a perturbation of the parameters of the exchange-correlation functional and solving for the energy non-self-consistently. The proposed method is computationally efficient and provides a more robust uncertainty estimate compared to the approach of self-consistent calculations employing several different exchange-correlation functionals. We demonstrate the method by calculating uncertainty bounds for several materials belonging to different classes and having different structures. We show that the calculated uncertainty bounds the property values obtained using three different GGA functionals: PBE, PBEsol, and RPBE. Finally, we apply the approach to calculate the uncertainty associated with the DFT-calculated elastic properties of solid-state Li-ion and Na-ion conductors.
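
    The post-processing step described above can be sketched as follows: for each member of an ensemble of energies (here synthesized with random curvature perturbations as a stand-in for non-self-consistent functional-ensemble energies), an energy-strain parabola is fitted and a modulus extracted, and the spread of the fitted moduli gives the uncertainty estimate. All quantities are in arbitrary, illustrative units.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic stand-in for the ensemble of non-self-consistent energies.
    V0 = 40.0                                   # "equilibrium cell volume", arbitrary units
    strains = np.linspace(-0.02, 0.02, 9)
    E_ref = 0.5 * 100.0 * V0 * strains**2       # reference E(strain) with modulus 100

    n_ensemble = 2000
    curvature_scale = rng.normal(1.0, 0.10, n_ensemble)   # ~10 % curvature perturbations
    energies = curvature_scale[:, None] * E_ref[None, :]  # one energy curve per member

    # Fit each member to E ~ a*eps^2 + b*eps + c and extract a modulus from the
    # fitted curvature; the spread across members quantifies the uncertainty.
    moduli = np.array([2.0 * np.polyfit(strains, E, 2)[0] / V0 for E in energies])

    lo, hi = np.percentile(moduli, [2.5, 97.5])
    print(f"modulus: {moduli.mean():.1f} +/- {moduli.std():.1f} (arbitrary units)")
    print(f"95% uncertainty bounds: [{lo:.1f}, {hi:.1f}]")
    ```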

  3. Predicting carbon benefits from climate-smart agriculture: High-resolution carbon mapping and uncertainty assessment in El Salvador.

    PubMed

    Kearney, Sean Patrick; Coops, Nicholas C; Chan, Kai M A; Fonte, Steven J; Siles, Pablo; Smukler, Sean M

    2017-11-01

    Agroforestry management in smallholder agriculture can provide climate change mitigation and adaptation benefits and has been promoted as 'climate-smart agriculture' (CSA), yet has generally been left out of international and voluntary carbon (C) mitigation agreements. A key reason for this omission is the cost and uncertainty of monitoring C at the farm scale in heterogeneous smallholder landscapes. A largely overlooked alternative is to monitor C at more aggregated scales and develop C contracts with groups of land owners, community organizations or C aggregators working across entire landscapes (e.g., watersheds, communities, municipalities, etc.). In this study we use a 100 km² agricultural area in El Salvador to demonstrate how high-spatial resolution optical satellite imagery can be used to map aboveground woody biomass (AGWB) C at the landscape scale with very low uncertainty (95% probability of a deviation of less than 1%). Uncertainty of AGWB-C estimates remained low (<5%) for areas as small as 250 ha, despite high uncertainties at the farm and plot scale (34-99%). We estimate that CSA adoption could more than double AGWB-C stocks on agricultural lands in the study area, and that utilizing AGWB-C maps to target denuded areas could increase C gains per unit area by 46%. The potential value of C credits under a plausible adoption scenario would range from $38,270 to $354,000 yr⁻¹ for the study area, or about $13 to $124 ha⁻¹ yr⁻¹, depending on C prices. Considering farm sizes in smallholder landscapes rarely exceed 1-2 ha, relying solely on direct C payments to farmers may not lead to widespread CSA adoption, especially if farm-scale monitoring is required. Instead, landscape-scale approaches to C contracting, supported by satellite-based monitoring methods such as ours, could be a key strategy to reduce costs and uncertainty of C monitoring in heterogeneous smallholder landscapes, thereby incentivizing more widespread CSA adoption. Copyright © 2017. Published by Elsevier Ltd.

  4. Global Aerosol Direct Radiative Effect From CALIOP and C3M

    NASA Technical Reports Server (NTRS)

    Winker, Dave; Kato, Seiji; Tackett, Jason

    2015-01-01

    Aerosols are responsible for the largest uncertainties in current estimates of climate forcing. These uncertainties are due in part to the limited abilities of passive sensors to retrieve aerosols in cloudy skies. We use a dataset which merges CALIOP observations together with other A-train observations to estimate aerosol radiative effects in cloudy skies as well as in cloud-free skies. The results can be used to quantify the reduction of aerosol radiative effects in cloudy skies relative to clear skies and to reduce current uncertainties in aerosol radiative effects.

  5. Uncertainty in Analyzed Water and Energy Budgets at Continental Scales

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Robertson, F. R.; Mocko, D.; Chen, J.

    2011-01-01

    Operational analyses and retrospective analyses provide all the physical terms of the water and energy budgets, guided by the assimilation of atmospheric observations. However, there is significant reliance on the numerical models, and so uncertainty in the budget terms is always present. Here, we use a recently developed data set consisting of a mix of 10 analyses (both operational and retrospective) to quantify the uncertainty of analyzed water and energy budget terms for GEWEX continental-scale regions, following the evaluation of Dr. John Roads using individual reanalysis data sets.

  7. Focused Belief Measures for Uncertainty Quantification in High Performance Semantic Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff A.; Weaver, Jesse R.

    In web-scale semantic data analytics there is a great need for methods which aggregate uncertainty claims, on the one hand respecting the information provided as accurately as possible, while on the other still being tractable. Traditional statistical methods are more robust, but only represent distributional, additive uncertainty. Generalized information theory methods, including fuzzy systems and Dempster-Shafer (DS) evidence theory, represent multiple forms of uncertainty, but are computationally and methodologically difficult. We require methods which provide an effective balance between the complete representation of the full complexity of uncertainty claims in their interaction, while satisfying the needs of both computational complexity and human cognition. Here we build on Jøsang's subjective logic to posit methods in focused belief measures (FBMs), where a full DS structure is focused to a single event. The resulting ternary logical structure is posited to be able to capture the minimal amount of generalized complexity needed at a maximum of computational efficiency. We demonstrate the efficacy of this approach in a web ingest experiment over the 2012 Billion Triple dataset from the Semantic Web Challenge.
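
    The abstract does not spell out the focusing operation, but a common way to map a Dempster-Shafer structure onto a subjective-logic opinion for a single event A is belief = Bel(A), disbelief = Bel(not A), and uncertainty = Pl(A) - Bel(A). The sketch below illustrates that mapping under this assumption, with a hypothetical mass function.

    ```python
    def focus_opinion(mass, event, frame):
        """Focus a Dempster-Shafer mass function onto a single event, returning a
        subjective-logic style (belief, disbelief, uncertainty) triple."""
        event = frozenset(event)
        complement = frozenset(frame) - event
        bel = sum(m for s, m in mass.items() if s <= event)        # Bel(A)
        pl = sum(m for s, m in mass.items() if s & event)          # Pl(A)
        dis = sum(m for s, m in mass.items() if s <= complement)   # Bel(not A)
        return bel, dis, pl - bel                                  # b + d + u = 1

    # Hypothetical mass function over a three-element frame of discernment.
    frame = {"a", "b", "c"}
    mass = {
        frozenset({"a"}): 0.5,
        frozenset({"a", "b"}): 0.3,
        frozenset(frame): 0.2,
    }

    b, d, u = focus_opinion(mass, {"a"}, frame)
    print(f"belief={b:.2f}, disbelief={d:.2f}, uncertainty={u:.2f}")
    ```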

  8. Probabilistic accident consequence uncertainty analysis -- Uncertainty assessment for internal dosimetry. Volume 2: Appendices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.

    1998-04-01

    The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA internal dosimetry models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on internal dosimetry, (4) short biographies of the experts, and (5) the aggregated results of their responses.

  9. Constructing Surrogate Models of Complex Systems with Enhanced Sparsity: Quantifying the Influence of Conformational Uncertainty in Biomolecular Solvation

    DOE PAGES

    Lei, Huan; Yang, Xiu; Zheng, Bin; ...

    2015-11-05

    Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational "active space" random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in the solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
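
    A highly simplified, hedged sketch of the sparse surrogate idea (not the authors' implementation): build a polynomial chaos basis in a few Gaussian "conformational" variables, then recover sparse coefficients from a limited number of samples with an L1-regularized, compressive-sensing-style fit. The target property and sample sizes are hypothetical.

    ```python
    import numpy as np
    from itertools import product
    from numpy.polynomial.hermite_e import hermeval
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)

    def target_property(x):
        # Hypothetical stand-in for the quantity of interest (e.g. solvent-accessible
        # surface area) as a function of conformational random variables.
        return 1.0 + 0.8 * x[:, 0] + 0.3 * x[:, 1] ** 2 - 0.2 * x[:, 0] * x[:, 2]

    d, order, n_samples = 4, 3, 60
    xi = rng.standard_normal((n_samples, d))       # Gaussian germ variables
    y = target_property(xi)

    # Total-degree multi-index set and probabilists' Hermite basis evaluation.
    multi_indices = [m for m in product(range(order + 1), repeat=d) if sum(m) <= order]

    def basis_matrix(x):
        cols = []
        for m in multi_indices:
            col = np.ones(x.shape[0])
            for j, deg in enumerate(m):
                c = np.zeros(deg + 1)
                c[deg] = 1.0
                col *= hermeval(x[:, j], c)
            cols.append(col)
        return np.column_stack(cols)

    A = basis_matrix(xi)

    # L1-regularized regression promotes a sparse set of active PC coefficients.
    model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(A, y)

    # Evaluate the surrogate on fresh samples to estimate the induced uncertainty.
    x_new = rng.standard_normal((20000, d))
    y_surrogate = basis_matrix(x_new) @ model.coef_
    print(f"surrogate mean = {y_surrogate.mean():.3f}, std = {y_surrogate.std():.3f}")
    ```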

  10. Uncertainty quantification for personalized analyses of human proximal femurs.

    PubMed

    Wille, Hagen; Ruess, Martin; Rank, Ernst; Yosibash, Zohar

    2016-02-29

    Computational models for the personalized analysis of human femurs contain uncertainties in bone material properties and loads, which affect the simulation results. To quantify this influence, we developed a probabilistic framework based on polynomial chaos (PC) that propagates stochastic input variables through any computational model. We considered a stochastic E-ρ relationship and a stochastic hip contact force, representing realistic variability in experimental data. Their influence on the prediction of principal strains (ϵ1 and ϵ3) was quantified for one human proximal femur, including sensitivity and reliability analysis. Large variabilities in the principal strain predictions were found in the cortical shell of the femoral neck, with coefficients of variation of ≈40%. Between 60 and 80% of the variance in ϵ1 and ϵ3 is attributable to the uncertainty in the E-ρ relationship, while ≈10% is caused by the load magnitude and 5-30% by the load direction. Principal strain directions were unaffected by material and loading uncertainties. The antero-superior and medial-inferior sides of the neck exhibited the largest probabilities of tensile and compressive failure; however, all were very small (pf < 0.001). In summary, uncertainty quantification with PC has been demonstrated to efficiently and accurately describe the influence of very different stochastic inputs, which increases the credibility and explanatory power of personalized analyses of human proximal femurs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Breakout character of islet amyloid polypeptide hydrophobic mutations at the onset of type-2 diabetes

    NASA Astrophysics Data System (ADS)

    Frigori, Rafael B.

    2014-11-01

    Toxic fibrillar aggregates of islet amyloid polypeptide (IAPP) appear as the physical outcome of a peptidic phase transition signaling the onset of type-2 diabetes mellitus in different mammalian species. In particular, experimentally verified mutations on the amyloidogenic segment 20-29 in humans, cats, and rats are highly correlated with the molecular aggregation propensities. Through a microcanonical analysis of the aggregation of IAPP20-29 isoforms, we show that a minimalist one-bead hydrophobic-polar continuum model for protein interactions properly quantifies those propensities from free-energy barriers. Our results highlight the central role of sequence-dependent hydrophobic mutations on hot spots for stabilization, and thus for the engineering, of such biological peptides.

  12. Quantifying catchment water balances and their uncertainties by expert elicitation

    NASA Astrophysics Data System (ADS)

    Sebok, Eva; Refsgaard, Jens Christian; Warmink, Jord J.; Stisen, Simon; Høgh Jensen, Karsten

    2017-04-01

    The increasing demand on water resources necessitates more responsible and sustainable water management, which requires a thorough understanding of hydrological processes both at small scale and at catchment scale. On the catchment scale, the characterization of hydrological processes is often carried out by calculating a water balance based on the principle of mass conservation in hydrological fluxes. Assuming perfect water balance closure and estimating one of these fluxes as a residual of the water balance is common practice, although this estimate will contain uncertainties related to the uncertainties in the other components. Water balance closure on the catchment scale is also an issue in Denmark; thus, it was one of the research objectives of the HOBE hydrological observatory, which has been collecting data in the Skjern river catchment since 2008. Water balance components in the 1050 km2 Ahlergaarde catchment and the nested 120 km2 Holtum catchment, located in the glacial outwash plain of the Skjern catchment, were estimated using a multitude of methods. As the collected data enable a complex assessment of the uncertainty of both the individual water balance components and the catchment-scale water balances, the expert elicitation approach was chosen to integrate the results of the hydrological observatory. This approach relies on the subjective opinion of experts whose knowledge and experience of the subject allow complex information from multiple sources to be integrated. In this study 35 experts were involved in a multi-step elicitation process with the aim of (1) eliciting average annual values of the water balance components for the two nested catchments and quantifying the contribution of different sources of uncertainty to the total uncertainty in these average annual estimates; and (2) calculating water balances for the two catchments by reaching consensus among experts interacting in group discussions. To address the complex problem of water balance closure, the water balance was separated into five components: precipitation, evapotranspiration, surface runoff, recharge and subsurface outflow. During the study, experts first participated in individual interviews where they gave their opinion on the probability distribution of their water balance component of interest. The average annual values and uncertainties of the water balance components and catchment-scale water balances were obtained at a later stage by reaching consensus during group discussions. The water balance errors obtained for the Ahlergaarde and Holtum catchments were -5 and -62 mm/yr, with uncertainties of 66 and 86 mm/yr, respectively. An advantage of expert elicitation, which draws on the intuitive experience and capability of experts to assess complex, site-specific problems, is that not only the uncertainty of the water balance error but also the uncertainty of the individual water balance components was quantified.
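
    As a back-of-the-envelope illustration (with hypothetical component values, not the elicited HOBE numbers), the sketch below computes a water balance error as precipitation minus the other four components and, assuming independent component uncertainties, combines their uncertainties in quadrature.

    ```python
    import math

    # Hypothetical annual means (mm/yr) and standard uncertainties for the five
    # water balance components used in the study.
    components = {                    # (mean, uncertainty)
        "precipitation":      (1050, 40),
        "evapotranspiration":  (560, 45),
        "surface_runoff":      (120, 20),
        "recharge":            (330, 35),
        "subsurface_outflow":   (60, 15),
    }

    p_mean, p_u = components["precipitation"]
    outgoing = [v for k, v in components.items() if k != "precipitation"]

    # Closure error: precipitation minus all outgoing terms.
    error = p_mean - sum(mean for mean, _ in outgoing)

    # If components are treated as independent, uncertainties add in quadrature.
    error_u = math.sqrt(p_u**2 + sum(u**2 for _, u in outgoing))

    print(f"water balance error: {error:+d} +/- {error_u:.0f} mm/yr")
    ```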

  13. Rational selection of experimental readout and intervention sites for reducing uncertainties in computational model predictions.

    PubMed

    Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai

    2015-01-16

    Understanding the dynamics of biological processes can be substantially supported by computational models in the form of nonlinear ordinary differential equations (ODEs). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus of model predictions. In the case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has been shown to significantly reduce parameter uncertainties with a minimal amount of effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing, practically non-identifiable model of chlorophyll fluorescence induction in a photosynthetic organism, D. salina, can be rendered identifiable by additional experiments with new readouts. With data and profile likelihood samples at hand, the proposed uncertainty quantification based on prediction samples from the profile likelihood provides a simple way of determining the individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows regions to be identified where model predictions have to be considered with care. Such uncertain regions can be used for rational experimental design to turn initially highly uncertain model predictions into certain ones. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.
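
    The exact definition of the PLS index is not reproduced in the abstract. As a hedged sketch of the underlying idea only, the code below evaluates a (hypothetical) model prediction along stored profile-likelihood parameter samples for each parameter and uses the resulting prediction range as a simple per-parameter contribution measure.

    ```python
    import numpy as np

    def predict(theta, t=5.0):
        # Hypothetical model prediction (e.g. an unmeasured state at time t)
        # as a function of the parameter vector theta = (k1, k2).
        k1, k2 = theta
        return k1 * (1.0 - np.exp(-k2 * t))

    # Hypothetical profile-likelihood samples: for each parameter, parameter vectors
    # tracing that parameter's profile within the chosen confidence threshold.
    profiles = {
        "k1": np.column_stack([np.linspace(0.8, 1.2, 41), np.full(41, 0.5)]),
        "k2": np.column_stack([np.full(41, 1.0), np.linspace(0.3, 0.8, 41)]),
    }

    for name, samples in profiles.items():
        preds = np.array([predict(th) for th in samples])
        spread = preds.max() - preds.min()
        print(f"prediction range attributable to {name}: {spread:.3f}")
    ```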

  14. McSnow: A Monte-Carlo Particle Model for Riming and Aggregation of Ice Particles in a Multidimensional Microphysical Phase Space

    NASA Astrophysics Data System (ADS)

    Brdar, S.; Seifert, A.

    2018-01-01

    We present a novel Monte-Carlo ice microphysics model, McSnow, to simulate the evolution of ice particles due to deposition, aggregation, riming, and sedimentation. The model is an application and extension of the super-droplet method of Shima et al. (2009) to the more complex problem of rimed ice particles and aggregates. For each individual super-particle, the ice mass, rime mass, rime volume, and number of monomers are predicted, establishing a four-dimensional particle-size distribution. The sensitivity of the model to various assumptions is discussed based on box model and one-dimensional simulations. We show that the Monte-Carlo method provides a feasible approach to tackle this high-dimensional problem. The largest uncertainty appears to be related to the treatment of the riming processes, which calls for additional field and laboratory measurements of partially rimed snowflakes.

  15. Soil pH Errors Propagation from Measurements to Spatial Predictions - Cost Benefit Analysis and Risk Assessment Implications for Practitioners and Modelers

    NASA Astrophysics Data System (ADS)

    Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.

    2017-12-01

    The measurement errors and spatial prediction uncertainties of soil properties are usually assessed by the modeling community against measured values when available. However, of equal importance is the assessment of how errors and uncertainty affect cost-benefit analysis and risk assessment. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the size of errors from different sources and their implications for management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transactions, spatial aggregation, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among the different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE of 0.79 pH units was from spatial aggregation (SSURGO vs kriging), while the measurement methods had the lowest RMSE of 0.06 pH units. Ordering the sources by transaction distance, i.e. from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha⁻¹ of lime, which translates into roughly $111 ha⁻¹ that the farmer has to factor into the cost-benefit analysis. However, this analysis needs to be based on prediction uncertainties (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into an investment of $555-1,111 that needs to be assessed against the risk. The modeling community can benefit from such analyses; however, the error size and spatial distribution of global and regional predictions need to be assessed against the variability of other drivers and their impact on management decisions.

  16. Modeling and Bayesian parameter estimation for shape memory alloy bending actuators

    NASA Astrophysics Data System (ADS)

    Crews, John H.; Smith, Ralph C.

    2012-04-01

    In this paper, we employ a homogenized energy model (HEM) for shape memory alloy (SMA) bending actuators. Additionally, we utilize a Bayesian method for quantifying parameter uncertainty. The system consists of a SMA wire attached to a flexible beam. As the actuator is heated, the beam bends, providing endoscopic motion. The model parameters are fit to experimental data using an ordinary least-squares approach. The uncertainty in the fit model parameters is then quantified using Markov Chain Monte Carlo (MCMC) methods. The MCMC algorithm provides bounds on the parameters, which will ultimately be used in robust control algorithms. One purpose of the paper is to test the feasibility of the Random Walk Metropolis algorithm, the MCMC method used here.
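
    As a generic illustration of the Random Walk Metropolis step mentioned above (not the authors' implementation, and with a hypothetical one-parameter model in place of the homogenized energy model), the sketch below draws posterior samples for a parameter given noisy data and reports a credible interval that could serve as a parameter bound.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical data: noisy observations of a quantity with true parameter 2.0.
    true_theta, sigma = 2.0, 0.3
    data = true_theta + rng.normal(scale=sigma, size=25)

    def log_posterior(theta):
        # Flat prior on a broad interval, Gaussian likelihood.
        if not (0.0 < theta < 10.0):
            return -np.inf
        return -0.5 * np.sum((data - theta) ** 2) / sigma**2

    # Random Walk Metropolis sampling.
    n_steps, step = 20000, 0.1
    samples = np.empty(n_steps)
    theta = 1.0
    lp = log_posterior(theta)
    for i in range(n_steps):
        proposal = theta + rng.normal(scale=step)
        lp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject
            theta, lp = proposal, lp_prop
        samples[i] = theta

    burned = samples[n_steps // 4:]                   # discard burn-in
    lo, hi = np.percentile(burned, [2.5, 97.5])
    print(f"posterior mean {burned.mean():.3f}, 95% credible interval [{lo:.3f}, {hi:.3f}]")
    ```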

  17. Quantification of uncertainties in the performance of smart composite structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1993-01-01

    A composite wing with spars, bulkheads, and built-in control devices is evaluated using a method for the probabilistic assessment of smart composite structures. Structural responses (such as change in angle of attack, vertical displacements, and stresses in regular plies with traditional materials and in control plies with mixed traditional and actuation materials) are probabilistically assessed to quantify their respective scatter. Probabilistic sensitivity factors are computed to identify those parameters that have a significant influence on a specific structural response. Results show that the uncertainties in the responses of smart composite structures can be quantified. Responses such as structural deformation, ply stresses, frequencies, and buckling loads in the presence of defects can be reliably controlled to satisfy specified design requirements.

  18. Probabilistic location estimation of acoustic emission sources in isotropic plates with one sensor

    NASA Astrophysics Data System (ADS)

    Ebrahimkhanlou, Arvin; Salamone, Salvatore

    2017-04-01

    This paper presents a probabilistic acoustic emission (AE) source localization algorithm for isotropic plate structures. The proposed algorithm requires only one sensor and uniformly monitors the entire area of such plates without any blind zones. In addition, it takes a probabilistic approach and quantifies localization uncertainties. The algorithm combines a modal acoustic emission (MAE) and a reflection-based technique to obtain information pertaining to the location of AE sources. To estimate confidence contours for the location of sources, uncertainties are quantified and propagated through the two techniques. The approach was validated using standard pencil lead break (PLB) tests on an Aluminum plate. The results demonstrate that the proposed source localization algorithm successfully estimates confidence contours for the location of AE sources.

  19. Optimizing the U.S. Electric System with a High Penetration of Renewables

    NASA Astrophysics Data System (ADS)

    Corcoran, B. A.; Jacobson, M. Z.

    2012-12-01

    As renewable energy generators are increasingly being installed throughout the U.S., there is growing interest in interconnecting diverse renewable generators (primarily wind and solar) across large geographic areas through an enhanced transmission system. This reduces variability in the aggregate power output, increases system reliability, and allows for the development of the best overall group of renewable technologies and sites to meet the load. Studies are therefore needed to determine the most efficient and economical plan to achieve large area interconnections in a future electric system with a high penetration of renewables. This research quantifies the effects of aggregating electric load and, separately, electric load together with diverse renewable generation throughout the ten Federal Energy Regulatory Commission (FERC) regions in the contiguous U.S. The effects of aggregating electric load alone -- including generator capacity capital cost savings, load energy shift operating cost savings, reserve requirement cost savings, and transmission costs -- were calculated for various groupings of FERC regions using 2006 data. Transmission costs outweighed cost savings due to aggregation in nearly all cases. East-west transmission layouts had the highest overall cost, and interconnecting ERCOT to adjacent FERC regions resulted in increased costs, both due to limited existing transmission capacity. Scenarios consisting of smaller aggregation groupings had the lowest overall cost. This analysis found no economic case for further aggregation of load alone within the U.S., except possibly in the West and Northwest. If aggregation of electric load is desired, then small, regional consolidations yield the lowest overall system cost. Next, the effects of aggregating electric load together with renewable electricity generation are being quantified through the development and use of an optimization tool in AMPL (A Mathematical Programming Language). This deterministic linear program solves for the least-cost organizational structure and system (generator, transmission, storage, and reserve requirements) for a highly renewable U.S. electric grid. The analysis will 1) examine a highly renewable 2006 electric system, and 2) create a "roadmap" from the existing 2006 system to a highly renewable system in 2030, accounting for projected price and demand changes and generator retirements based on age and environmental regulations. Ideally, results from this study will offer insight for a federal renewable energy policy (such as a renewable portfolio standard) and how to best organize regions for transmission planning.

  20. Uncertainty Assessment of Synthetic Design Hydrographs for Gauged and Ungauged Catchments

    NASA Astrophysics Data System (ADS)

    Brunner, Manuela I.; Sikorska, Anna E.; Furrer, Reinhard; Favre, Anne-Catherine

    2018-03-01

    Design hydrographs described by peak discharge, hydrograph volume, and hydrograph shape are essential for engineering tasks involving storage. Such design hydrographs are inherently uncertain as are classical flood estimates focusing on peak discharge only. Various sources of uncertainty contribute to the total uncertainty of synthetic design hydrographs for gauged and ungauged catchments. These comprise model uncertainties, sampling uncertainty, and uncertainty due to the choice of a regionalization method. A quantification of the uncertainties associated with flood estimates is essential for reliable decision making and allows for the identification of important uncertainty sources. We therefore propose an uncertainty assessment framework for the quantification of the uncertainty associated with synthetic design hydrographs. The framework is based on bootstrap simulations and consists of three levels of complexity. On the first level, we assess the uncertainty due to individual uncertainty sources. On the second level, we quantify the total uncertainty of design hydrographs for gauged catchments and the total uncertainty of regionalizing them to ungauged catchments but independently from the construction uncertainty. On the third level, we assess the coupled uncertainty of synthetic design hydrographs in ungauged catchments, jointly considering construction and regionalization uncertainty. We find that the most important sources of uncertainty in design hydrograph construction are the record length and the choice of the flood sampling strategy. The total uncertainty of design hydrographs in ungauged catchments depends on the catchment properties and is not negligible in our case.
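
    A minimal, hypothetical bootstrap sketch in the spirit of the framework's first level: resample an annual flood record to quantify sampling uncertainty in a design peak discharge. The log-normal quantile estimator and the synthetic record are illustrative assumptions, not the authors' construction method.

    ```python
    import numpy as np
    from statistics import NormalDist

    rng = np.random.default_rng(7)

    # Hypothetical annual maximum peak discharges (m3/s) for a gauged catchment.
    record = rng.lognormal(mean=4.0, sigma=0.4, size=40)

    def design_peak(sample, return_period=100):
        """Illustrative log-normal quantile estimator for the T-year peak discharge."""
        log_q = np.log(sample)
        z = NormalDist().inv_cdf(1.0 - 1.0 / return_period)
        return np.exp(log_q.mean() + z * log_q.std(ddof=1))

    # Bootstrap: resample the record with replacement and re-estimate the design peak.
    n_boot = 2000
    estimates = np.array([
        design_peak(rng.choice(record, size=record.size, replace=True))
        for _ in range(n_boot)
    ])

    lo, hi = np.percentile(estimates, [5, 95])
    print(f"100-year peak: {design_peak(record):.0f} m3/s, 90% interval [{lo:.0f}, {hi:.0f}]")
    ```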

  1. Latin hypercube sampling and geostatistical modeling of spatial uncertainty in a spatially explicit forest landscape model simulation

    Treesearch

    Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu

    2005-01-01

    Geostatistical stochastic simulation is always combined with the Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models, a result of their complexity, it is always infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
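
    For readers unfamiliar with the technique named in the title, here is a generic Latin hypercube sampling sketch (independent of the forest landscape model, with hypothetical parameter ranges and a stand-in model): each parameter range is stratified so that far fewer runs cover the input space than plain Monte Carlo sampling.

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Hypothetical uncertain inputs for a spatial simulation: three parameters
    # with lower and upper bounds.
    l_bounds = [0.1, 10.0, 0.0]
    u_bounds = [0.9, 50.0, 1.0]

    sampler = qmc.LatinHypercube(d=3, seed=0)
    unit_sample = sampler.random(n=20)                 # 20 stratified points in [0, 1)^3
    design = qmc.scale(unit_sample, l_bounds, u_bounds)

    def model(params):
        # Stand-in for an expensive spatially explicit simulation run.
        a, b, c = params
        return a * np.log(b) + c

    outputs = np.array([model(p) for p in design])
    print(f"mean = {outputs.mean():.3f}, std = {outputs.std(ddof=1):.3f} from {len(design)} runs")
    ```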

  2. Induction of the Immunoproteasome Subunit Lmp7 Links Proteostasis and Immunity in α-Synuclein Aggregation Disorders.

    PubMed

    Ugras, Scott; Daniels, Malcolm J; Fazelinia, Hossein; Gould, Neal S; Yocum, Anastasia K; Luk, Kelvin C; Luna, Esteban; Ding, Hua; McKennan, Chris; Seeholzer, Steven; Martinez, Dan; Evans, Perry; Brown, Daniel; Duda, John E; Ischiropoulos, Harry

    2018-05-01

    Accumulation of aggregated α-synuclein into Lewy bodies is thought to contribute to the onset and progression of dopaminergic neuron degeneration in Parkinson's disease (PD) and related disorders. Although protein aggregation is associated with perturbation of proteostasis, how α-synuclein aggregation affects the brain proteome and signaling remains uncertain. In a mouse model of α-synuclein aggregation, 6% of 6215 proteins and 1.6% of 8183 phosphopeptides changed in abundance, indicating conservation of proteostasis and phosphorylation signaling. The proteomic analysis confirmed changes in abundance of proteins that regulate dopamine synthesis and transport, synaptic activity and integrity, and unearthed changes in mRNA binding, processing and protein translation. Phosphorylation signaling changes centered on axonal and synaptic cytoskeletal organization and structural integrity. Proteostatic responses included a significant increase in the levels of Lmp7, a component of the immunoproteasome. Increased Lmp7 levels and activity were also quantified in postmortem human brains with PD and dementia with Lewy bodies. Functionally, the immunoproteasome degrades α-synuclein aggregates and generates potentially antigenic peptides. Expression and activity of the immunoproteasome may represent testable targets to induce adaptive responses that maintain proteome integrity and modulate immune responses in protein aggregation disorders. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  3. Quantifying confidence in density functional theory predictions of magnetic ground states

    NASA Astrophysics Data System (ADS)

    Houchins, Gregory; Viswanathan, Venkatasubramanian

    2017-10-01

    Density functional theory (DFT) simulations, at the generalized gradient approximation (GGA) level, are being routinely used for material discovery based on high-throughput descriptor-based searches. The success of descriptor-based material design relies on eliminating bad candidates and keeping good candidates for further investigation. While DFT has been widely successful for the former, oftentimes good candidates are lost due to the uncertainty associated with the DFT-predicted material properties. Uncertainty associated with DFT predictions has gained prominence and has led to the development of exchange-correlation functionals that have built-in error estimation capability. In this work, we demonstrate the use of built-in error estimation capabilities within the BEEF-vdW exchange-correlation functional for quantifying the uncertainty associated with the magnetic ground state of solids. We demonstrate this approach by calculating the uncertainty estimate for the energy difference between the different magnetic states of solids and compare them against a range of GGA exchange-correlation functionals as is done in many first-principles calculations of materials. We show that this estimate reasonably bounds the range of values obtained with the different GGA functionals. The estimate is determined as a postprocessing step and thus provides a computationally robust and systematic approach to estimating uncertainty associated with predictions of magnetic ground states. We define a confidence value (c-value) that incorporates all calculated magnetic states in order to quantify the concurrence of the prediction at the GGA level and argue that predictions of magnetic ground states from GGA-level DFT are incomplete without an accompanying c-value. We demonstrate the utility of this method using a case study of Li-ion and Na-ion cathode materials, and the c-value metric correctly identifies that GGA-level DFT will have low predictability for NaFePO4F. Further, there needs to be a systematic test of a collection of plausible magnetic states, especially in identifying antiferromagnetic (AFM) ground states. We believe that our approach of estimating uncertainty can be readily incorporated into all high-throughput computational material discovery efforts and this will lead to a dramatic increase in the likelihood of finding good candidate materials.
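
    The paper's precise c-value definition is not given in the abstract. One plausible reading, sketched below with synthetic numbers, is the fraction of ensemble members (from a BEEF-style functional ensemble) for which the magnetic state predicted by the best estimate remains the lowest-energy state.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    states = ["FM", "AFM-1", "AFM-2", "NM"]

    # Hypothetical ensemble energies (eV per formula unit): rows are ensemble members
    # generated by perturbing the exchange-correlation parameters, columns are states.
    best_estimate = np.array([0.00, 0.03, 0.05, 0.12])
    ensemble = best_estimate + rng.normal(scale=0.04, size=(2000, len(states)))

    predicted = states[int(np.argmin(best_estimate))]

    # c-value: fraction of ensemble members agreeing with the best-estimate ground state.
    agree = np.argmin(ensemble, axis=1) == int(np.argmin(best_estimate))
    c_value = agree.mean()

    print(f"predicted ground state: {predicted}, c-value = {c_value:.2f}")
    ```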

  4. Uncertainty in User-contributed Weather Data

    NASA Astrophysics Data System (ADS)

    Bell, S.; Cornford, D.; Bastin, L.; Molyneux, M.

    2012-04-01

    Websites such as Weather Underground and the Met Office's recently launched Weather Observations Website encourage members of the public not only to record meteorological observations for personal use but to upload them to a free online community where they can be shared and compared with data from hundreds of other weather stations in the UK alone. With such a concentration of freely available surface observations, the question is whether it would be beneficial to incorporate these data into existing data assimilation schemes for constructing the initial conditions in Numerical Weather Prediction models. This question ultimately relates to how closely the amateur data represent reality, and how to quantify this uncertainty so that it may be accounted for when using the data. We will highlight factors that can lead to increased uncertainty. For instance, as amateur data often come with limited metadata, it is difficult to assess whether an amateur station conforms to the strict guidelines and quality procedures that professional sites follow. These guidelines relate to factors such as siting, exposure and calibration, and in many cases it is practically impossible for amateur sites to conform to them because of a tendency for amateur sites to be located in enclosed urbanised areas. We will present exploratory research comparing amateur data from the Weather Observations Website and Weather Underground against the Met Office's meteorological monitoring system, which is taken to represent the 'truth'. We are particularly aiming to identify bias in the amateur data and residual variances, which will help to quantify our degree of uncertainty. The research will focus on three case periods, each with different synoptic conditions (clear skies, overcast, a frontal progression), and on observations of surface air temperature, precipitation and humidity. Future plans of the project will also be introduced, such as further investigations into which factors lead to increased uncertainty, highlighting the importance of quantifying and accounting for their effects. Factors may include the degree of urbanisation around the site as well as those that may vary temporally, such as the prevailing synoptic conditions. We will also describe plans to take a Bayesian approach to assessing uncertainty and how this can be incorporated into data assimilation schemes.

  5. Use of wastes derived from earthquakes for the production of concrete masonry partition wall blocks.

    PubMed

    Xiao, Zhao; Ling, Tung-Chai; Kou, Shi-Cong; Wang, Qingyuan; Poon, Chi-Sun

    2011-08-01

    Utilization of construction and demolition (C&D) wastes as recycled aggregates in the production of concrete and concrete products has attracted much attention in recent years. However, the presence of large quantities of crushed clay brick in some C&D waste streams (e.g. waste derived from collapsed masonry buildings after an earthquake) renders the recycled aggregates unsuitable for high grade use. One possibility is to make use of the low grade recycled aggregates for concrete block production. In this paper, we report the results of a comprehensive study to assess the feasibility of using crushed clay brick as coarse and fine aggregates in concrete masonry block production. The effects of the content of crushed coarse and fine clay brick aggregates (CBA) on the mechanical properties of non-structural concrete blocks were quantified. From the experimental results, it was observed that incorporating the crushed clay brick aggregates had a significant influence on the properties of the blocks. The hardened density and drying shrinkage of the block specimens decreased with an increase in CBA content. The use of CBA increased the water absorption of the block specimens. The results suggest that the amount of crushed clay brick used in concrete masonry blocks should be limited to less than 25% for coarse aggregates and to within 50-75% for fine aggregates. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Subtle Effects of Aliphatic Alcohol Structure on Water Extraction and Solute Aggregation in Biphasic Water/ n -Dodecane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knight, Andrew W.; Qiao, Baofu; Chiarizia, Renato

    Organic phase aggregation behavior of 1-octanol and its structural isomer, 2-ethylhexanol, in a biphasic n-dodecane-water system is studied with a combination of physical measurements, small-angle X-ray scattering (SAXS), and atomistic molecular dynamics simulations. Physical properties of the organic phases are probed following their mixing and equilibration with immiscible water phases. The studies reveal that the interfacial tension decreases as a function of increasing alcohol concentration over the solubility range of the alcohol, with no evidence for a critical aggregate concentration (cac). The uptake of water into the organic phases is quantified, as a function of alcohol content, by Karl Fischer titrations. The extraction of water into dodecane was further assessed as a function of alcohol concentration via the slope-analysis method sometimes employed in chemical separations. This provides a qualitative understanding of solute (water/alcohol) aggregation in the organic phase. The physical results are supported by analyses of SAXS data that reveal an emergence of aggregates in n-dodecane at elevated alcohol concentrations. The observed aggregate structure depends on the alcohol tail group geometry, consistent with the surfactant packing parameter. The formation of these aggregates is discussed at a molecular level, where alcohol-alcohol and alcohol-water H-bonding interactions likely dominate the occurrence and morphology of the aggregates.

  7. Value assignment and uncertainty evaluation for single-element reference solutions

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio; Bodnar, Olha; Butler, Therese A.; Molloy, John L.; Winchester, Michael R.

    2018-06-01

    A Bayesian statistical procedure is proposed for value assignment and uncertainty evaluation for the mass fraction of the elemental analytes in single-element solutions distributed as NIST standard reference materials. The principal novelty that we describe is the use of information about relative differences observed historically between the measured values obtained via gravimetry and via high-performance inductively coupled plasma optical emission spectrometry, to quantify the uncertainty component attributable to between-method differences. This information is encapsulated in a prior probability distribution for the between-method uncertainty component, and it is then used, together with the information provided by current measurement data, to produce a probability distribution for the value of the measurand from which an estimate and evaluation of uncertainty are extracted using established statistical procedures.

  8. Quantifying Asphalt Emulsion-Based Chip Seal Curing Times Using Electrical Resistance Measurements.

    DOT National Transportation Integrated Search

    2017-04-15

    Chip sealing typically consists of covering a pavement surface with asphalt emulsion into which aggregate chips are embedded. The asphalt emulsion cures through the evaporation of water, thus providing mechanical strength to adhere to the pavement wh...

  9. The Effect Of Aggregate Angularity On Base Course Performance

    DOT National Transportation Integrated Search

    2001-09-01

    The Vermont Agency of Transportation (VAOT) conducted a two-phase study to quantify the resilient modulus and strength characteristics of its subbase material. In Phase 1, a literature review was done to determine the various methods available for in...

  10. INVESTIGATION OF TRANSFER OF FLUORESCENT TRACERS FROM SURFACES TO SKIN

    EPA Science Inventory

    Under the provisions of the Food Quality Protection Act (FQPA), aggregate exposure assessments must be conducted for pesticides proposed for registration. Many aspects of dermal exposure assessment remain poorly quantified. For purposes of assessing surface-to-skin transfers ...

  11. Possibility-induced simplified neutrosophic aggregation operators and their application to multi-criteria group decision-making

    NASA Astrophysics Data System (ADS)

    Şahin, Rıdvan; Liu, Peide

    2017-07-01

    Simplified neutrosophic sets (SNSs) are an appropriate tool for expressing the incompleteness, indeterminacy and uncertainty of the evaluation objects in a decision-making process. In this study, we define the concept of a possibility SNS, which includes two types of information: the neutrosophic performance provided by the evaluation objects and its possibility degree, expressed as a value ranging from zero to one. Then, by extending the existing neutrosophic information aggregation models for SNSs, which cannot effectively fuse the two different types of information described above, we propose two novel neutrosophic aggregation operators that take possibility into account, named the possibility-induced simplified neutrosophic weighted arithmetic averaging operator and the possibility-induced simplified neutrosophic weighted geometric averaging operator, and discuss their properties. Moreover, we develop a useful method based on the proposed aggregation operators for solving multi-criteria group decision-making problems with possibility simplified neutrosophic information, in which the weights of decision-makers and decision criteria are calculated based on an entropy measure. Finally, a practical example is utilised to show the practicality and effectiveness of the proposed method.

  12. Morphological and semi-quantitative characteristics of diesel soot agglomerates emitted from commercial vehicles and a dynamometer.

    PubMed

    Luo, Chin-Hsiang; Lee, Whei-May; Liaw, Jiun-Jian

    2009-01-01

    Diesel soot aggregates emitted from a model dynamometer and 11 on-road vehicles were size-segregated by a micro-orifice uniform deposit impactor (MOUDI). The elemental contents and morphological parameters of the aggregates were then examined by scanning electron microscopy coupled with an energy dispersive spectrometer (SEM-EDS) and combined with a fractional Brownian motion (fBm) processor. Two mode-size distributions of aggregates collected from diesel vehicles were confirmed. A mean mass concentration of 339 mg/m3 (dC/dlogdp) existed in the dominant mode (180-320 nm). A relatively high proportion of these aggregates appeared in PM1, accentuating their relevance to adverse health effects. Furthermore, the fBm processor directly parameterized the SEM images of the fractal-like aggregates and successfully quantified surface texture to extract Hurst coefficients (H) of the aggregates. For aggregates from vehicles equipped with the same cylinder number, the H value was independent of engine operating conditions. Small H values were found in aggregates emitted from vehicles with a large number of cylinders. The aggregate fractal dimension related to H was found to be in the range 1.641-1.775, in agreement with values reported by previous TEM-based experiments. According to the EDS analysis, the carbon content of the diesel soot aggregates was high, ranging from 30% to 50% by weight. The presence of Na and Mg in the sampled aggregates indicates that engine enhancers composed of biofuel or surfactants were likely in common use in on-road vehicles in Taiwan. In particular, the morphological H combined with carbon content detection can be useful for characterizing chain-like or clustered diesel soot aggregates in the atmosphere.

  13. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy

    NASA Astrophysics Data System (ADS)

    Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.

    2017-07-01

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose show average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.

  14. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy.

    PubMed

    Wahl, N; Hennig, P; Wieser, H P; Bangert, M

    2017-06-26

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose show average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.

  15. Error and Uncertainty Quantification in the Numerical Simulation of Complex Fluid Flows

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2010-01-01

    The failure of numerical simulation to predict physical reality is often a direct consequence of the compounding effects of numerical error arising from finite-dimensional approximation and physical model uncertainty resulting from inexact knowledge and/or statistical representation. In this topical lecture, we briefly review systematic theories for quantifying numerical errors and restricted forms of model uncertainty occurring in simulations of fluid flow. A goal of this lecture is to elucidate both positive and negative aspects of applying these theories to practical fluid flow problems. Finite-element and finite-volume calculations of subsonic and hypersonic fluid flow are presented to contrast the differing roles of numerical error and model uncertainty for these problems.

  16. Quantifying uncertainty in climate change science through empirical information theory.

    PubMed

    Majda, Andrew J; Gershgorin, Boris

    2010-08-24

    Quantifying the uncertainty for the present climate and the predictions of climate change in the suite of imperfect Atmosphere Ocean Science (AOS) computer models is a central issue in climate change science. Here, a systematic approach to these issues with firm mathematical underpinning is developed through empirical information theory. An information metric to quantify AOS model errors in the climate is proposed here which incorporates both coarse-grained mean model errors as well as covariance ratios in a transformation invariant fashion. The subtle behavior of model errors with this information metric is quantified in an instructive statistically exactly solvable test model with direct relevance to climate change science including the prototype behavior of tracer gases such as CO2. Formulas for identifying the most sensitive climate change directions using statistics of the present climate or an AOS model approximation are developed here; these formulas just involve finding the eigenvector associated with the largest eigenvalue of a quadratic form computed through suitable unperturbed climate statistics. These climate change concepts are illustrated on a statistically exactly solvable one-dimensional stochastic model with relevance for low frequency variability of the atmosphere. Viable algorithms for implementation of these concepts are discussed throughout the paper.

  17. Short-term favorable weather conditions are an important control of interannual variability in carbon and water fluxes

    Treesearch

    Jakob Zscheischler; Simone Fatichi; Sebastian Wolf; Peter D. Blanken; Gil Bohrer; Ken Clark; Ankur R. Desai; David Hollinger; Trevor Keenan; Kimberly A. Novick; Sonia I. Seneviratne

    2016-01-01

    Ecosystem models often perform poorly in reproducing interannual variability in carbon and water fluxes, resulting in considerable uncertainty when estimating the land-carbon sink. While many aggregated variables (growing season length, seasonal precipitation, or temperature) have been suggested as predictors for interannual variability in carbon fluxes, their...

  18. Early regeneration response to aggregated overstory and harvest residue retention in Populus tremuloides (Michx.)-dominated forests

    Treesearch

    Miranda T. Curzon; Anthony W. D' Amato; Brian J. Palik

    2017-01-01

    Recent emphasis on increasing structural complexity and species diversity reflective of natural ecosystems through the use of retention harvesting approaches is coinciding with increased demand for forest-derived bioenergy feedstocks, largely sourced through the removal of harvest residues associated with whole-tree harvest. Uncertainties about the consequences of such...

  19. Comparison of different objective functions for parameterization of simple respiration models

    Treesearch

    M.T. van Wijk; B. van Putten; D.Y. Hollinger; A.D. Richardson

    2008-01-01

    The eddy covariance measurements of carbon dioxide fluxes collected around the world offer a rich source for detailed data analysis. Simple, aggregated models are attractive tools for gap filling, budget calculation, and upscaling in space and time. Key in the application of these models is their parameterization and a robust estimate of the uncertainty and reliability...

  20. Quantifying radar-rainfall uncertainties in urban drainage flow modelling

    NASA Astrophysics Data System (ADS)

    Rico-Ramirez, M. A.; Liguori, S.; Schellart, A. N. A.

    2015-09-01

    This work presents the results of implementing a probabilistic system to model the uncertainty associated with radar rainfall (RR) estimates and the way this uncertainty propagates through the sewer system of an urban area in the North of England. The spatial and temporal correlations of the RR errors, as well as the error covariance matrix, were computed to build an RR error model able to generate RR ensembles that reproduce the uncertainty associated with the measured rainfall. The results showed that the RR ensembles provide important information about the uncertainty in the rainfall measurement that can be propagated through the urban sewer system, and that the measured flow peaks and flow volumes are often bounded within the uncertainty envelope produced by the RR ensembles. In 55% of the simulated events, the uncertainties in RR measurements can explain the uncertainties observed in the simulated flow volumes. However, there are also events where the RR uncertainty cannot explain the whole uncertainty observed in the simulated flow volumes, indicating that additional sources of uncertainty must be considered, such as uncertainty in the urban drainage model structure, in its calibrated parameters, and in the measured sewer flows.
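
    A minimal sketch of one common way to generate such correlated rainfall-error ensembles is shown below: draw multivariate normal error fields from an error covariance matrix via its Cholesky factor and apply them multiplicatively to the radar field. The exponential covariance, error magnitude, and rainfall values are illustrative assumptions, not the quantities estimated in the study.

    ```python
    import numpy as np

    def generate_rr_error_ensemble(cov, n_members, rng=None):
        """Draw correlated error fields from a multivariate normal with the
        given error covariance matrix; one column per radar pixel."""
        rng = np.random.default_rng(rng)
        L = np.linalg.cholesky(cov + 1e-10 * np.eye(cov.shape[0]))  # jitter for stability
        z = rng.standard_normal((n_members, cov.shape[0]))
        return z @ L.T                        # each row is one ensemble member of errors

    # Synthetic exponential spatial covariance over 50 pixels on a 1-D transect
    x = np.arange(50.0)
    cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)  # correlation length ~10 pixels
    errors = generate_rr_error_ensemble(cov, n_members=100, rng=1)

    # Perturb a hypothetical radar rainfall field multiplicatively, assuming log-normal errors
    radar_rain = np.full(50, 2.0)             # mm/h
    ensemble = radar_rain * np.exp(0.3 * errors)   # 0.3 = illustrative error std in log space
    print(ensemble.shape)                     # (100, 50) rainfall realizations
    ```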

  1. Counteracting estimation bias and social influence to improve the wisdom of crowds.

    PubMed

    Kao, Albert B; Berdahl, Andrew M; Hartnett, Andrew T; Lutz, Matthew J; Bak-Coleman, Joseph B; Ioannou, Christos C; Giam, Xingli; Couzin, Iain D

    2018-04-01

    Aggregating multiple non-expert opinions into a collective estimate can improve accuracy across many contexts. However, two sources of error can diminish collective wisdom: individual estimation biases and information sharing between individuals. Here, we measure individual biases and social influence rules in multiple experiments involving hundreds of individuals performing a classic numerosity estimation task. We first investigate how existing aggregation methods, such as calculating the arithmetic mean or the median, are influenced by these sources of error. We show that the mean tends to overestimate, and the median underestimate, the true value for a wide range of numerosities. Quantifying estimation bias, and mapping individual bias to collective bias, allows us to develop and validate three new aggregation measures that effectively counter sources of collective estimation error. In addition, we present results from a further experiment that quantifies the social influence rules that individuals employ when incorporating personal estimates with social information. We show that the corrected mean is remarkably robust to social influence, retaining high accuracy in the presence or absence of social influence, across numerosities and across different methods for averaging social information. Using knowledge of estimation biases and social influence rules may therefore be an inexpensive and general strategy to improve the wisdom of crowds. © 2018 The Author(s).
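
    The contrast between aggregation rules can be illustrated with a small simulation, sketched below under the assumption of log-normally distributed individual estimates; the log-domain bias correction shown is a generic illustration, not the specific corrected estimators derived in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    true_value = 500                          # hypothetical numerosity
    # Simulate individual estimates with a multiplicative (log-normal) spread,
    # which tends to make the arithmetic mean overshoot and the median undershoot.
    estimates = true_value * rng.lognormal(mean=-0.1, sigma=0.6, size=200)

    arithmetic_mean = estimates.mean()
    median = np.median(estimates)
    # Log-domain aggregate: average in log space, remove an (assumed known) log bias,
    # then transform back. Purely illustrative of a bias-corrected aggregation rule.
    log_bias = -0.1
    corrected = np.exp(np.log(estimates).mean() - log_bias)

    print(f"mean={arithmetic_mean:.0f}  median={median:.0f}  "
          f"corrected={corrected:.0f}  true={true_value}")
    ```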

  2. Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity.

    PubMed

    Li, Harbin; McNulty, Steven G

    2007-10-01

    Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BC(w); 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were BC(w) base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL.
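
    A toy Monte Carlo version of such a parameter-contribution analysis is sketched below; the stand-in critical-load function and the squared-correlation contribution measure are illustrative assumptions, not the published SMBE or its formal sensitivity indices.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 20000

    # Hypothetical stand-in for an SMBE-type calculation: a critical-load proxy as a
    # simple function of base cation weathering rate, soil depth, and soil temperature.
    bcw_rate = rng.normal(1.0, 0.30, n)      # relative weathering base rate
    depth    = rng.normal(0.5, 0.10, n)      # m
    temp     = rng.normal(8.0, 1.50, n)      # deg C
    cal = bcw_rate * depth * np.exp(0.04 * (temp - 8.0))   # NOT the published equation

    # Crude first-order contribution of each input: squared correlation with the output
    for name, x in [("BCw rate", bcw_rate), ("soil depth", depth), ("soil temp", temp)]:
        r = np.corrcoef(x, cal)[0, 1]
        print(f"{name:10s} ~ {100 * r**2:5.1f}% of output variance (first-order, approx.)")
    ```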

  3. Experimental validation of 2D uncertainty quantification for DIC.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reu, Phillip L.

    Because digital image correlation (DIC) has become such an important and standard tool in the toolbox of experimental mechanicists, a complete uncertainty quantification of the method is needed. It should be remembered that each DIC setup and series of images will have a unique uncertainty based on the calibration quality and the image and speckle quality of the analyzed images. Any pretest work done with a calibrated DIC stereo-rig to quantify the errors using known shapes and translations, while useful, does not necessarily reveal the uncertainty of a later test. This is particularly true with high-speed applications where actual test images are often less than ideal. Work has previously been completed and published on the mathematical underpinnings of DIC uncertainty quantification; this paper presents the corresponding experimental work used to check the validity of the uncertainty equations.

  4. Experimental validation of 2D uncertainty quantification for digital image correlation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reu, Phillip L.

    Because digital image correlation (DIC) has become such an important and standard tool in the toolbox of experimental mechanicists, a complete uncertainty quantification of the method is needed. It should be remembered that each DIC setup and series of images will have a unique uncertainty based on the calibration quality and the image and speckle quality of the analyzed images. Any pretest work done with a calibrated DIC stereo-rig to quantify the errors using known shapes and translations, while useful, does not necessarily reveal the uncertainty of a later test. This is particularly true with high-speed applications where actual test images are often less than ideal. Work has previously been completed and published on the mathematical underpinnings of DIC uncertainty quantification; this paper presents the corresponding experimental work used to check the validity of the uncertainty equations.

  5. Epistemic uncertainties and natural hazard risk assessment - Part 1: A review of the issues

    NASA Astrophysics Data System (ADS)

    Beven, K. J.; Aspinall, W. P.; Bates, P. D.; Borgomeo, E.; Goda, K.; Hall, J. W.; Page, T.; Phillips, J. C.; Rougier, J. T.; Simpson, M.; Stephenson, D. B.; Smith, P. J.; Wagener, T.; Watson, M.

    2015-12-01

    Uncertainties in natural hazard risk assessment are generally dominated by the sources arising from lack of knowledge or understanding of the processes involved. There is a lack of knowledge about frequencies, process representations, parameters, present and future boundary conditions, consequences and impacts, and the meaning of observations in evaluating simulation models. These are the epistemic uncertainties that can be difficult to constrain, especially in terms of event or scenario probabilities, even as elicited probabilities rationalized on the basis of expert judgements. This paper reviews the issues raised by trying to quantify the effects of epistemic uncertainties. Such scientific uncertainties might have significant influence on decisions that are made for risk management, so it is important to communicate the meaning of an uncertainty estimate and to provide an audit trail of the assumptions on which it is based. Some suggestions for good practice in doing so are made.

  6. Stability of fluctuating and transient aggregates of amphiphilic solutes in aqueous binary mixtures: Studies of dimethylsulfoxide, ethanol, and tert-butyl alcohol

    NASA Astrophysics Data System (ADS)

    Banerjee, Saikat; Bagchi, Biman

    2013-10-01

    In aqueous binary mixtures, amphiphilic solutes such as dimethylsulfoxide (DMSO), ethanol, tert-butyl alcohol (TBA), etc., are known to form aggregates (or large clusters) at small to intermediate solute concentrations. These aggregates are transient in nature. Although the system remains homogeneous on macroscopic length and time scales, the microheterogeneous aggregation may profoundly affect the properties of the mixture in several distinct ways, particularly if the survival times of the aggregates are longer than the density relaxation times of the binary liquid. Here we propose a theoretical scheme to quantify the lifetime, and thus the stability, of these microheterogeneous clusters, and apply the scheme to calculate these lifetimes for water-ethanol, water-DMSO, and water-TBA mixtures. We show that the lifetime of these clusters can range from less than a picosecond (ps) for ethanol clusters to a few tens of ps for DMSO and TBA clusters. This helps explain the absence of a strong composition-dependent anomaly in water-ethanol mixtures and the presence of such an anomaly in water-DMSO and water-TBA mixtures.

  7. Quantification of alginate by aggregation induced by calcium ions and fluorescent polycations.

    PubMed

    Zheng, Hewen; Korendovych, Ivan V; Luk, Yan-Yeung

    2016-01-01

    For quantification of polysaccharides, including heparins and alginates, the commonly used carbazole assay involves hydrolysis of the polysaccharide to form a mixture of UV-active dye conjugate products. Here, we describe two efficient detection and quantification methods that make use of the negative charges of the alginate polymer and do not involve degradation of the targeted polysaccharide. The first method utilizes calcium ions to induce formation of hydrogel-like aggregates with alginate polymer; the aggregates can be quantified readily by staining with a crystal violet dye. This method does not require purification of alginate from the culture medium and can measure the large amount of alginate that is produced by a mucoid Pseudomonas aeruginosa culture. The second method employs polycations tethering a fluorescent dye to form suspension aggregates with the alginate polyanion. Encasing the fluorescent dye in the aggregates provides an increased scattering intensity with a sensitivity comparable to that of the conventional carbazole assay. Both approaches provide efficient methods for monitoring alginate production by mucoid P. aeruginosa. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Controlling ion aggregation and conduction in PEO-based ionomers.

    NASA Astrophysics Data System (ADS)

    Caldwell, David, II; Maranas, Janna

    2015-03-01

    PEO-based ionomers are ideal for reducing the concentration polarization found in typical solid polymer electrolytes (SPEs). This is achieved by binding the anion to the polymer backbone, significantly reducing the anion's mobility. Ion aggregation is prevalent in these systems, but its influence on SPE performance is difficult to study experimentally. We present results of molecular dynamics simulations that explore the effect of ion content and temperature on ion aggregation, polymer motion, and ion conduction. An unexpected result for these ionomers is the creation of string-like aggregates that form conduction pathways in the amorphous region. These conduction pathways allow a partial decoupling of ion conduction from polymer dynamics. The improvement in conductivity through the use of ion aggregates can be quantified by calculating the inverse of the Haven ratio, dubbed the f-value. Typical SPEs have an f-value less than 0.2, while the ionomers studied here exhibit f-values near unity or higher. Understanding which properties influence the development and use of these conduction pathways will provide insight for further development of solid polymer electrolytes.

  9. Accounting for uncertainty in marine reserve design.

    PubMed

    Halpern, Benjamin S; Regan, Helen M; Possingham, Hugh P; McCarthy, Michael A

    2006-01-01

    Ecosystems and the species and communities within them are highly complex systems that defy predictions with any degree of certainty. Managing and conserving these systems in the face of uncertainty remains a daunting challenge, particularly with respect to developing networks of marine reserves. Here we review several modelling frameworks that explicitly acknowledge and incorporate uncertainty, and then use these methods to evaluate reserve spacing rules given increasing levels of uncertainty about larval dispersal distances. Our approach finds similar spacing rules as have been proposed elsewhere - roughly 20-200 km - but highlights several advantages provided by uncertainty modelling over more traditional approaches to developing these estimates. In particular, we argue that uncertainty modelling can allow for (1) an evaluation of the risk associated with any decision based on the assumed uncertainty; (2) a method for quantifying the costs and benefits of reducing uncertainty; and (3) a useful tool for communicating to stakeholders the challenges in managing highly uncertain systems. We also argue that incorporating rather than avoiding uncertainty will increase the chances of successfully achieving conservation and management goals.

  10. Uncertainty analysis in geospatial merit matrix–based hydropower resource assessment

    DOE PAGES

    Pasha, M. Fayzul K.; Yeasmin, Dilruba; Saetern, Sen; ...

    2016-03-30

    Hydraulic head and mean annual streamflow, two main input parameters in hydropower resource assessment, are not measured at every point along the stream. Translation and interpolation are used to derive these parameters, resulting in uncertainties. This study estimates the uncertainties and their effects on model output parameters: the total potential power and the number of potential locations (stream-reach). These parameters are quantified through Monte Carlo simulation (MCS) linked with a geospatial merit matrix-based hydropower resource assessment (GMM-HRA) model. The methodology is applied to flat, mild, and steep terrains. Results show that the uncertainty associated with the hydraulic head is within 20% for mild and steep terrains, and the uncertainty associated with streamflow is around 16% for all three terrains. Output uncertainty increases as input uncertainty increases. However, output uncertainty is around 10% to 20% of the input uncertainty, demonstrating the robustness of the GMM-HRA model. The output parameters are more sensitive to hydraulic head in steep terrain than in flat and mild terrains, and more sensitive to mean annual streamflow in flat terrain.
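
    The mechanics of this kind of Monte Carlo propagation can be sketched as below, using a generic head-times-flow power relation with assumed 20% and 16% input uncertainties; the toy relation is illustrative only and will not reproduce the damping behaviour reported for the GMM-HRA model.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 50000

    # Hypothetical potential-power relation P ~ rho*g*Q*H for one stream reach,
    # with uncertain head H and streamflow Q expressed as relative errors.
    rho_g = 9810.0                            # N/m^3
    H_nom, Q_nom = 5.0, 12.0                  # m, m^3/s (illustrative nominal values)
    H = H_nom * (1 + rng.normal(0, 0.20, n))  # ~20% head uncertainty
    Q = Q_nom * (1 + rng.normal(0, 0.16, n))  # ~16% streamflow uncertainty
    power = rho_g * H * Q / 1e6               # MW

    rel_out = power.std() / power.mean()
    print(f"input CVs: 20% (head), 16% (flow) -> output CV: {100 * rel_out:.1f}%")
    ```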

  11. Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses.

    PubMed

    Stephen, Emily P; Lepage, Kyle Q; Eden, Uri T; Brunner, Peter; Schalk, Gerwin; Brumberg, Jonathan S; Guenther, Frank H; Kramer, Mark A

    2014-01-01

    The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty, both in the functional network edges and in the corresponding aggregate measures of network topology, are lacking. To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here, appropriate for static and dynamic network inference and different statistical measures of coupling, permits the evaluation of confidence in network measures in a variety of settings common to neuroscience.
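
    A minimal version of a trial-resampling bootstrap for a correlation-based functional network is sketched below on synthetic data; the |correlation| coupling measure and the mean-edge-strength aggregate are illustrative choices rather than the specific measures used in the paper.

    ```python
    import numpy as np

    def bootstrap_network(trials, n_boot=200, rng=None):
        """Trial-resampling bootstrap for a correlation-based functional network.

        trials: array of shape (n_trials, n_samples, n_channels).
        Returns the mean edge matrix, a bootstrap standard error per edge, and
        the bootstrap distribution of a simple aggregate (mean edge strength)."""
        rng = np.random.default_rng(rng)
        n_trials = trials.shape[0]

        def network(idx):
            data = trials[idx].reshape(-1, trials.shape[2])   # concatenate resampled trials
            return np.abs(np.corrcoef(data, rowvar=False))    # |correlation| as coupling

        boots = np.array([network(rng.integers(0, n_trials, n_trials))
                          for _ in range(n_boot)])
        return boots.mean(axis=0), boots.std(axis=0), boots.mean(axis=(1, 2))

    # Synthetic example: 60 trials, 200 samples, 8 channels sharing a common signal
    rng = np.random.default_rng(5)
    shared = rng.standard_normal((60, 200, 1))
    trials = 0.5 * shared + rng.standard_normal((60, 200, 8))
    mean_net, edge_se, agg_dist = bootstrap_network(trials, rng=6)
    print(mean_net.shape, edge_se.max(), np.percentile(agg_dist, [2.5, 97.5]))
    ```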

  12. Simulating Fiber Ordering and Aggregation In Shear Flow Using Dissipative Particle Dynamics

    NASA Astrophysics Data System (ADS)

    Stimatze, Justin T.

    We have developed a mesoscale simulation of fiber aggregation in shear flow using LAMMPS and its implementation of dissipative particle dynamics. Understanding fiber aggregation in shear flow and flow-induced microstructural fiber networks is critical to our interest in high-performance composite materials. Dissipative particle dynamics enables the consideration of hydrodynamic interactions between fibers through the coarse-grained simulation of the matrix fluid. Correctly simulating hydrodynamic interactions and accounting for fluid forces on the microstructure is required to correctly model the shear-induced aggregation process. We are able to determine stresses, viscosity, and fiber forces while simulating the evolution of a model fiber system undergoing shear flow. Fiber-fiber contact interactions are approximated by combinations of common pairwise forces, allowing the exploration of interaction-influenced fiber behaviors such as aggregation and bundling. We are then able to quantify aggregate structure and effective volume fraction for a range of relevant system and fiber-fiber interaction parameters. Our simulations have demonstrated several aggregate types dependent on system parameters such as shear rate, short-range attractive forces, and a resistance to relative rotation while in contact. A resistance to relative rotation at fiber-fiber contact points has been found to strongly contribute to an increased angle between neighboring aggregated fibers and therefore an increase in average aggregate volume fraction. This increase in aggregate volume fraction is strongly correlated with a significant enhancement of system viscosity, leading us to hypothesize that controlling the resistance to relative rotation during manufacturing processes is important when optimizing for desired composite material characteristics.

  13. Parameter and input data uncertainty estimation for the assessment of water resources in two sub-basins of the Limpopo River Basin

    NASA Astrophysics Data System (ADS)

    Oosthuizen, Nadia; Hughes, Denis A.; Kapangaziwiri, Evison; Mwenge Kahinda, Jean-Marc; Mvandaba, Vuyelwa

    2018-05-01

    The demand for water resources is rapidly growing, placing more strain on access to water and its management. In order to appropriately manage water resources, there is a need to accurately quantify available water resources. Unfortunately, the data required for such assessment are frequently far from sufficient in terms of availability and quality, especially in southern Africa. In this study, the uncertainty related to the estimation of water resources of two sub-basins of the Limpopo River Basin - the Mogalakwena in South Africa and the Shashe shared between Botswana and Zimbabwe - is assessed. Input data (and model parameters) are significant sources of uncertainty that should be quantified. In southern Africa water use data are among the most unreliable sources of model input data because available databases generally consist of only licensed information and actual use is generally unknown. The study assesses how these uncertainties impact the estimation of surface water resources of the sub-basins. Data on farm reservoirs and irrigated areas from various sources were collected and used to run the model. Many farm dams and large irrigation areas are located in the upper parts of the Mogalakwena sub-basin. Results indicate that water use uncertainty is small. Nevertheless, the medium to low flows are clearly impacted. The simulated mean monthly flows at the outlet of the Mogalakwena sub-basin were between 22.62 and 24.68 Mm3 per month when incorporating only the uncertainty related to the main physical runoff generating parameters. The range of total predictive uncertainty of the model increased to between 22.15 and 24.99 Mm3 when water use data (small farm dams, large reservoirs, and irrigation) were included. For the Shashe sub-basin, incorporating only uncertainty related to the main runoff parameters resulted in mean monthly flows between 11.66 and 14.54 Mm3. The range of predictive uncertainty changed to between 11.66 and 17.72 Mm3 after the uncertainty in water use information was added.

  14. Uncertainty Quantification in Climate Modeling and Projection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Yun; Jackson, Charles; Giorgi, Filippo

    The projection of future climate is one of the most complex problems undertaken by the scientific community. Although scientists have been striving to better understand the physical basis of the climate system and to improve climate models, the overall uncertainty in projections of future climate has not been significantly reduced (e.g., from the IPCC AR4 to AR5). With the rapid increase of complexity in Earth system models, reducing uncertainties in climate projections becomes extremely challenging. Since uncertainties always exist in climate models, interpreting the strengths and limitations of future climate projections is key to evaluating risks, and climate change information for use in Vulnerability, Impact, and Adaptation (VIA) studies should be provided with both well-characterized and well-quantified uncertainty. The workshop aimed at providing participants, many of them from developing countries, information on strategies to quantify the uncertainty in climate model projections and assess the reliability of climate change information for decision-making. The program included a mixture of lectures on fundamental concepts in Bayesian inference and sampling, applications, and hands-on computer laboratory exercises employing software packages for Bayesian inference, Markov Chain Monte Carlo methods, and global sensitivity analyses. The lectures covered a range of scientific issues underlying the evaluation of uncertainties in climate projections, such as the effects of uncertain initial and boundary conditions, uncertain physics, and limitations of observational records. Progress in quantitatively estimating uncertainties in hydrologic, land surface, and atmospheric models at both regional and global scales was also reviewed. The application of Uncertainty Quantification (UQ) concepts to coupled climate system models is still in its infancy. The Coupled Model Intercomparison Project (CMIP) multi-model ensemble currently represents the primary data for assessing reliability and uncertainties of climate change information. An alternative approach is to generate similar ensembles by perturbing parameters within a single-model framework. One of the workshop's objectives was to give participants a deeper understanding of these approaches within a Bayesian statistical framework. However, there remain significant challenges still to be resolved before UQ can be applied in a convincing way to climate models and their projections.

  15. EFFECTS OF CORRELATED PROBABILISTIC EXPOSURE MODEL INPUTS ON SIMULATED RESULTS

    EPA Science Inventory

    In recent years, more probabilistic models have been developed to quantify aggregate human exposures to environmental pollutants. The impact of correlation among inputs in these models is an important issue, which has not been resolved. Obtaining correlated data and implementi...

  16. Assessing the changes in land use and ecosystem services in an oasis agricultural region of Yanqi Basin, Northwest China.

    PubMed

    Wang, Shuixian; Wu, Bin; Yang, Pengnian

    2014-12-01

    The Yanqi Basin, one of the most productive agricultural areas, has a high population density in Xinjiang, Northwest China. Land use changes, mainly driven by oasis expansion, significantly impact ecosystem services and functions, but these effects are difficult to quantify. The valuation of ecosystem services is important for clarifying the ecological and environmental changes caused by the agriculturalization of the oasis. This study aimed to investigate variations in ecosystem services in response to land use changes during oasis agricultural expansion in the Yanqi Basin from 1964 to 2009. The methods used were based on the ecosystem service value (ESV) formula and ESV coefficients. Satellite data were combined with the ESV coefficients to quantify land use changes and ecosystem service changes in the study area. Sensitivity analysis determined the effect of manipulating the coefficients on the estimated values. The results show that the total ESVs in the Yanqi Basin were $1,674, $1,692, $1,471, $1,732, and $1,603 million in 1964, 1973, 1989, 1999, and 2009, respectively. The net decline in ESV was $71 million over the past 46 years, but the ESVs of each type of landscape changed significantly. The aggregated ESVs of water areas and wetlands were approximately 80% of the total ESV. Water supply and waste treatment were the two largest service functions and contributed approximately 65% of the total ESV. The estimated ESVs in this study were inelastic with respect to the value coefficients; the estimates were therefore robust in spite of uncertainties in the coefficients. These significant land use changes occurred across the entire basin over the study period and caused environmental problems such as land degradation, vegetation degeneracy, and changes in the aquatic environment.
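
    The ESV bookkeeping and the coefficient-of-sensitivity check can be sketched as below; the land-use areas and value coefficients are placeholders rather than the Yanqi Basin figures.

    ```python
    import numpy as np

    # Illustrative land-use areas (ha) and ESV coefficients ($/ha/yr); all values
    # are placeholders, not those of the Yanqi Basin study.
    areas = {"cropland": 250000, "water": 80000, "wetland": 60000, "grassland": 120000}
    coeff = {"cropland": 92, "water": 8498, "wetland": 14785, "grassland": 232}

    def total_esv(areas, coeff):
        return sum(areas[k] * coeff[k] for k in areas)

    base = total_esv(areas, coeff)

    # Coefficient of sensitivity (elasticity): % change in total ESV per % change
    # in one coefficient. Values well below 1 indicate robustness to that coefficient.
    for k in coeff:
        perturbed = dict(coeff, **{k: coeff[k] * 1.5})        # +50% adjustment
        cs = ((total_esv(areas, perturbed) - base) / base) / 0.5
        print(f"{k:10s} CS = {cs:.3f}")
    ```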

  17. Quantifying uncertainties of seismic Bayesian inversion of Northern Great Plains

    NASA Astrophysics Data System (ADS)

    Gao, C.; Lekic, V.

    2017-12-01

    Elastic waves excited by earthquakes are the fundamental observations of seismological studies. Seismologists measure information such as travel time, amplitude, and polarization to infer the properties of the earthquake source, seismic wave propagation, and subsurface structure. Across numerous applications, seismic imaging has been able to take advantage of complementary seismic observables to constrain profiles and lateral variations of Earth's elastic properties. Moreover, seismic imaging plays a unique role in multidisciplinary studies of geoscience by providing direct constraints on the unreachable interior of the Earth. Accurate quantification of uncertainties of inferences made from seismic observations is of paramount importance for interpreting seismic images and testing geological hypotheses. However, such quantification remains challenging and subjective due to the non-linearity and non-uniqueness of the geophysical inverse problem. In this project, we apply a reversible jump Markov chain Monte Carlo (rjMCMC) algorithm for a transdimensional Bayesian inversion of continental lithosphere structure. Such inversion allows us to quantify the uncertainties of inversion results by inverting for an ensemble solution. It also yields an adaptive parameterization that enables simultaneous inversion of different elastic properties without imposing strong prior information on the relationship between them. We present retrieved profiles of shear velocity (Vs) and radial anisotropy in the Northern Great Plains using measurements from USArray stations. We use both seismic surface wave dispersion and receiver function data due to their complementary constraints on lithosphere structure. Furthermore, we analyze the uncertainties of both individual and joint inversion of those two data types to quantify the benefit of doing joint inversion. As an application, we infer the variation of Moho depths and crustal layering across the northern Great Plains.

  18. Uncertainty and Sensitivity of Direct Economic Flood Damages: the FloodRisk Free and Open-Source Software

    NASA Astrophysics Data System (ADS)

    Albano, R.; Sole, A.; Mancusi, L.; Cantisani, A.; Perrone, A.

    2017-12-01

    The considerable increase in flood damages over the past decades has shifted attention in Europe from protection against floods to managing flood risk. In this context, the assessment of expected damages represents crucial information within the overall flood risk management process. The present paper proposes an open-source software package, called FloodRisk, that can operatively support stakeholders in decision-making processes with a what-if approach by carrying out rapid assessments of flood consequences in terms of direct economic damage and loss of human lives. The evaluation of damage scenarios, through the use of the GIS software proposed here, is essential for cost-benefit or multi-criteria analysis of risk mitigation alternatives. However, because the quantitative assessment of flood damage scenarios is characterized by intrinsic uncertainty, a scheme has been developed to identify and quantify the role of the input parameters in the total uncertainty of flood loss model applications in urban areas with mild terrain and complex topography. Using the concept of parallel models, the contributions of the different modules and input parameters to the total uncertainty are quantified. The results of the present case study exhibit high epistemic uncertainty in the damage estimation module and, in particular, in the type and form of the damage functions used, which have been adapted and transferred from different geographic and socio-economic contexts because no depth-damage functions have been developed specifically for Italy. Considering that uncertainty and sensitivity depend considerably on local characteristics, the epistemic uncertainty associated with the risk estimate can be reduced by introducing additional information into the risk analysis. In light of these results, there is an evident need to produce and disseminate (open) data for developing micro-scale vulnerability curves, and an urgent need to push forward research into methods and models for assimilating uncertainties in decision-making processes.

  19. Use of NARCCAP data to characterize regional climate uncertainty in the impact of global climate change on large river fish population: Missouri River sturgeon example

    NASA Astrophysics Data System (ADS)

    Anderson, C. J.; Wildhaber, M. L.; Wikle, C. K.; Moran, E. H.; Franz, K. J.; Dey, R.

    2012-12-01

    Climate change operates over a broad range of spatial and temporal scales. Understanding the effects of change on ecosystems requires accounting for the propagation of information and uncertainty across these scales. For example, to understand potential climate change effects on fish populations in riverine ecosystems, climate conditions predicted by coarse-resolution atmosphere-ocean global climate models must first be translated to the regional climate scale. In turn, this regional information is used to force watershed models, which are used to force river condition models, which impact the population response. A critical challenge in such a multiscale modeling environment is to quantify sources of uncertainty given the highly nonlinear nature of interactions between climate variables and the individual organism. We use a hierarchical modeling approach for accommodating uncertainty in multiscale ecological impact studies. This framework allows for uncertainty due to system models, model parameter settings, and stochastic parameterizations. This approach is a hybrid between physical (deterministic) downscaling and statistical downscaling, recognizing that there is uncertainty in both. We use NARCCAP data to determine confidence in the capability of climate models to simulate relevant processes and to quantify regional climate variability within the context of the hierarchical model of uncertainty quantification. By confidence, we mean the ability of the regional climate model to replicate observed mechanisms. We use the NCEP-driven simulations for this analysis. This provides a base from which regional change can be categorized as either a modification of previously observed mechanisms or emergence of new processes. The management implications for these categories of change are significantly different in that procedures to address impacts from existing processes may already be known and need adjustment, whereas emergent processes may require new management strategies. The results from hierarchical analysis of uncertainty are used to study the relative change in weights of the endangered Missouri River pallid sturgeon (Scaphirhynchus albus) under a 21st century climate scenario.

  20. Hierarchically-driven Approach for Quantifying Materials Uncertainty in Creep Deformation and Failure of Aerospace Materials

    DTIC Science & Technology

    2016-07-01

    characteristics and to examine the sensitivity of using such techniques for evaluating microstructure. In addition to the GUI tool, a manual describing its use has... Evaluating Local Primary Dendrite Arm Spacing Characterization Techniques Using Synthetic Directionally Solidified Dendritic Microstructures, Metallurgical and... driven approach for quantifying materials uncertainty in creep deformation and failure of aerospace materials, Multi-scale Structural Mechanics and...

  1. Large contribution of natural aerosols to uncertainty in indirect forcing

    NASA Astrophysics Data System (ADS)

    Carslaw, K. S.; Lee, L. A.; Reddington, C. L.; Pringle, K. J.; Rap, A.; Forster, P. M.; Mann, G. W.; Spracklen, D. V.; Woodhouse, M. T.; Regayre, L. A.; Pierce, J. R.

    2013-11-01

    The effect of anthropogenic aerosols on cloud droplet concentrations and radiative properties is the source of one of the largest uncertainties in the radiative forcing of climate over the industrial period. This uncertainty affects our ability to estimate how sensitive the climate is to greenhouse gas emissions. Here we perform a sensitivity analysis on a global model to quantify the uncertainty in cloud radiative forcing over the industrial period caused by uncertainties in aerosol emissions and processes. Our results show that 45 per cent of the variance of aerosol forcing since about 1750 arises from uncertainties in natural emissions of volcanic sulphur dioxide, marine dimethylsulphide, biogenic volatile organic carbon, biomass burning and sea spray. Only 34 per cent of the variance is associated with anthropogenic emissions. The results point to the importance of understanding pristine pre-industrial-like environments, with natural aerosols only, and suggest that improved measurements and evaluation of simulated aerosols in polluted present-day conditions will not necessarily result in commensurate reductions in the uncertainty of forcing estimates.
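
    The idea of apportioning forcing variance between natural and anthropogenic emission uncertainties can be illustrated with a toy first-order variance decomposition, sketched below; the saturating forcing relation and the emission uncertainty ranges are assumptions for illustration only, not the perturbed-parameter ensemble used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n = 100000

    # Toy aerosol-forcing response to scaled natural and anthropogenic emissions.
    # The saturating form means forcing is less sensitive when the natural background is high.
    nat = rng.lognormal(0.0, 0.5, n)          # natural emission scaling (wide uncertainty)
    ant = rng.lognormal(0.0, 0.2, n)          # anthropogenic emission scaling
    forcing = -1.2 * np.log1p(ant / (0.5 + nat))   # W/m^2, illustrative only

    # First-order variance fractions via binned conditional expectations
    def first_order_fraction(x, y, bins=40):
        edges = np.quantile(x, np.linspace(0, 1, bins + 1))
        idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
        cond_mean = np.array([y[idx == b].mean() for b in range(bins)])
        counts = np.bincount(idx, minlength=bins)
        return np.average((cond_mean - y.mean())**2, weights=counts) / y.var()

    print("natural share of forcing variance:      ", first_order_fraction(nat, forcing))
    print("anthropogenic share of forcing variance:", first_order_fraction(ant, forcing))
    ```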

  2. Effects of model structural uncertainty on carbon cycle projections: biological nitrogen fixation as a case study

    NASA Astrophysics Data System (ADS)

    Wieder, William R.; Cleveland, Cory C.; Lawrence, David M.; Bonan, Gordon B.

    2015-04-01

    Uncertainties in terrestrial carbon (C) cycle projections increase uncertainty of potential climate feedbacks. Efforts to improve model performance often include increased representation of biogeochemical processes, such as coupled carbon-nitrogen (N) cycles. In doing so, models are becoming more complex, generating structural uncertainties in model form that reflect incomplete knowledge of how to represent underlying processes. Here, we explore structural uncertainties associated with biological nitrogen fixation (BNF) and quantify their effects on C cycle projections. We find that alternative plausible structures to represent BNF result in nearly equivalent terrestrial C fluxes and pools through the twentieth century, but the strength of the terrestrial C sink varies by nearly a third (50 Pg C) by the end of the twenty-first century under a business-as-usual climate change scenario (Representative Concentration Pathway 8.5). These results indicate that actual uncertainty in future C cycle projections may be larger than previously estimated, and this uncertainty will limit C cycle projections until model structures can be evaluated and refined.

  3. A framework for assessing the uncertainty in wave energy delivery to targeted subsurface formations

    NASA Astrophysics Data System (ADS)

    Karve, Pranav M.; Kallivokas, Loukas F.; Manuel, Lance

    2016-02-01

    Stress wave stimulation of geological formations has potential applications in petroleum engineering, hydro-geology, and environmental engineering. The stimulation can be applied using wave sources whose spatio-temporal characteristics are designed to focus the emitted wave energy into the target region. Typically, the design process involves numerical simulations of the underlying wave physics, and assumes a perfect knowledge of the material properties and the overall geometry of the geostructure. In practice, however, precise knowledge of the properties of the geological formations is elusive, and quantification of the reliability of a deterministic approach is crucial for evaluating the technical and economical feasibility of the design. In this article, we discuss a methodology that could be used to quantify the uncertainty in the wave energy delivery. We formulate the wave propagation problem for a two-dimensional, layered, isotropic, elastic solid truncated using hybrid perfectly-matched-layers (PMLs), and containing a target elastic or poroelastic inclusion. We define a wave motion metric to quantify the amount of the delivered wave energy. We, then, treat the material properties of the layers as random variables, and perform a first-order uncertainty analysis of the formation to compute the probabilities of failure to achieve threshold values of the motion metric. We illustrate the uncertainty quantification procedure using synthetic data.
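
    A first-order (FOSM-style) propagation of input uncertainty to a probability of failing a motion-metric threshold can be sketched as below; the wave-motion metric, layer properties, and threshold are hypothetical placeholders rather than quantities from the formulation in the paper.

    ```python
    import numpy as np
    from math import erf, sqrt

    def fosm_failure_probability(metric, mu, sigma, threshold, eps=1e-4):
        """First-order second-moment estimate of P(metric < threshold).

        metric: callable taking a parameter vector (e.g. layer properties) and
        returning the scalar wave-motion metric. mu, sigma: means and standard
        deviations of the (assumed independent) uncertain inputs."""
        mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
        m0 = metric(mu)
        # Finite-difference gradient of the metric at the mean parameter vector
        grad = np.array([(metric(mu + eps * np.eye(len(mu))[i]) - m0) / eps
                         for i in range(len(mu))])
        var = np.sum((grad * sigma) ** 2)          # first-order output variance
        z = (threshold - m0) / sqrt(var)
        return 0.5 * (1 + erf(z / sqrt(2)))        # standard normal CDF

    # Toy metric: delivered energy decays with layer mismatch (illustrative only)
    metric = lambda p: 10.0 / (1.0 + (p[0] - p[1]) ** 2 + 0.1 * p[2])
    mu = [2.0, 1.5, 1.0]          # hypothetical layer properties
    sigma = [0.3, 0.3, 0.2]
    print(fosm_failure_probability(metric, mu, sigma, threshold=7.0))
    ```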

  4. Simulating and explaining passive air sampling rates for semi-volatile compounds on polyurethane foam passive samplers

    PubMed Central

    Petrich, Nicholas T.; Spak, Scott N.; Carmichael, Gregory R.; Hu, Dingfei; Martinez, Andres; Hornbuckle, Keri C.

    2013-01-01

    Passive air samplers (PAS) including polyurethane foam (PUF) are widely deployed as an inexpensive and practical way to sample semi-volatile pollutants. However, concentration estimates from PAS rely on constant empirical mass transfer rates, which add unquantified uncertainties to concentrations. Here we present a method for modeling hourly sampling rates for semi-volatile compounds from hourly meteorology using first-principle chemistry, physics, and fluid dynamics, calibrated from depuration experiments. This approach quantifies and explains observed effects of meteorology on variability in compound-specific sampling rates and analyte concentrations; simulates nonlinear PUF uptake; and recovers synthetic hourly concentrations at a reference temperature. Sampling rates are evaluated for polychlorinated biphenyl congeners at a network of Harner model samplers in Chicago, Illinois during 2008, finding simulated average sampling rates within analytical uncertainty of those determined from loss of depuration compounds, and confirming quasi-linear uptake. Results indicate hourly, daily and interannual variability in sampling rates, sensitivity to temporal resolution in meteorology, and predictable volatility-based relationships between congeners. We quantify importance of each simulated process to sampling rates and mass transfer and assess uncertainty contributed by advection, molecular diffusion, volatilization, and flow regime within the PAS, finding PAS chamber temperature contributes the greatest variability to total process uncertainty (7.3%). PMID:23837599

  5. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William

    2017-05-01

    A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
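
    Extracting a design value such as the minimum flow needed for 90% capture with 95% confidence from an ensemble can be sketched as below; the exponential capture relation and the parameter distribution are illustrative assumptions, not the calibrated multiphase reactive flow model.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical response: capture efficiency improves with flow in this toy relation,
    # with parameter uncertainty entering through calibrated-parameter draws k.
    def capture_efficiency(flow_rate, k):
        return 1.0 - np.exp(-k * flow_rate)

    k_samples = rng.lognormal(mean=np.log(1.2), sigma=0.25, size=2000)
    flow_rates = np.linspace(0.5, 5.0, 90)        # arbitrary units

    min_flow = None
    for q in flow_rates:                           # search upward for the smallest passing flow
        eff = capture_efficiency(q, k_samples)     # ensemble of predicted efficiencies
        if np.percentile(eff, 5) >= 0.90:          # 95% confident of >= 90% capture
            min_flow = q
            break

    print("minimum flow rate (illustrative units):", min_flow)
    ```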

  6. Glutathione-Capped Gold Nanoparticles-Based Photoacoustic Sensor for Label-Free Detection of Lead Ions

    NASA Astrophysics Data System (ADS)

    Shi, R.; Liu, X.-J.; Ying, Y.

    2017-07-01

    The photoacoustic signal generated by laser-induced nanobubbles (PA-LINB) proved to be a sensitive tool to monitor the aggregation of gold nanoparticles. Here, a simple and label-free photoacoustic method for the rapid detection of Pb2+ in the aqueous phase was developed. Due to the high affinity of Pb2+ ions to glutathione, the presence of Pb2+ led to the aggregation of glutathione-conjugated gold nanoparticles (GSH-GNPs). Hence, by measuring the variation of the PA-LINB signal after the aggregation of GSH-GNPs, Pb2+ can be quantified. A low detection limit for Pb2+ (42 nM) and a wide linear working range (42-1000 nM) were achieved. Furthermore, the proposed method showed good selectivity against other metal ions.

  7. Dynamics and mechanisms of asbestos-fiber aggregate growth in water

    NASA Astrophysics Data System (ADS)

    Wu, L.; Ortiz, C. P.; Jerolmack, D. J.

    2015-12-01

    Most colloidal particles, including asbestos fibers, form aggregates in water when solution chemistry provides favorable conditions. To date, the growth of colloidal aggregates has been observed in many model systems under optical and scanning electron microscopy; however, all of these studies have used near-spherical particles. The highly elongated nature of asbestos fibers may cause anomalous aggregate growth and morphology, but this has never been examined. Although the exposure pathway of concern for asbestos is through the air, asbestos particles typically reside in soil that is at least partially saturated, and aggregates formed in the aqueous phase may influence the mobility of particles in the environment. Here we study solution-phase aggregation kinetics of asbestos fibers using a liquid cell and in situ microscopy, over micron-to-centimeter length scales and over timescales from a tenth of a second to hours. We employ an elliptical particle tracking technique to determine particle trajectories and to quantify diffusivity. Experiments reveal that diffusing fibers join by cross-linking, but that such linking is sometimes reversible. The resulting aggregates are very sparse and non-compact, with a fractal dimension that is lower than any previously reported value. Their morphology, growth rate and particle size distribution exhibit non-classical behavior that deviates significantly from observations of aggregates composed of near-spherical particles. We also perform experiments using synthetic colloidal particles, and compare these to asbestos in order to separate the controls of particle shape vs. material properties. This direct method for quantitatively observing aggregate growth is a first step toward predicting asbestos fiber aggregate size distributions in the environment. Moreover, many emerging environmental contaminants - such as carbon nanotubes - are elongated colloids, and our work suggests that theories for aggregate growth may need to be modified in order to model these particles.

  8. Assessing the dynamics of the upper soil layer relative to soil management practices

    NASA Astrophysics Data System (ADS)

    Hatfield, J.; Wacha, K.; Dold, C.

    2017-12-01

    The upper layer of the soil is the critical interface between the soil and the atmosphere and is the most dynamic in response to management practices. One of the soil properties most responsive to changes in management is the stability of the aggregates, because this property controls infiltration of water and exchange of gases. An aggregation model has been developed based on the factors that control how aggregates form and the forces which degrade aggregates. One of the major factors in this model is the storage of carbon in the soil and the interaction with the soil biological component. Increasing soil biological activity requires a stable microclimate that provides food, water, shelter, and oxygen, which in turn facilitates the incorporation of organic material into forms that can be combined with soil particles to create stable aggregates. The processes that increase aggregate size and stability are directly linked to the continual functioning of the biological component, which in turn changes the physical and chemical properties of the soil. Soil aggregates begin to degrade as soon as there is no longer a supply of organic material into the soil; such degradation can result from removal of organic material or from excessive tillage. Increasing aggregation of the upper soil layer requires a continual supply of organic material and the biological activity that incorporates organic material into substances that create a stable aggregate. Soils that exhibit stable soil aggregates at the surface have a prolonged infiltration rate with less runoff and a gas exchange that ensures adequate oxygen for maximum biological activity. Quantifying the dynamics of the soil surface layer provides a quantitative understanding of how management practices affect aggregate stability.

  9. Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Templeton, Jeremy Alan; Blaylock, Myra L.; Domino, Stefan P.

    2015-09-01

    The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost versus accuracy curve for LES such that the cost could be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.

  10. Tsallis’ non-extensive free energy as a subjective value of an uncertain reward

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki

    2009-03-01

    Recent studies in neuroeconomics and econophysics revealed the importance of reward expectation in decision under uncertainty. Behavioral neuroeconomic studies have proposed that the unpredictability and the probability of an uncertain reward are distinctly encoded as entropy and a distorted probability weight, respectively, in the separate neural systems. However, previous behavioral economic and decision-theoretic models could not quantify reward-seeking and uncertainty aversion in a theoretically consistent manner. In this paper, we have: (i) proposed that generalized Helmholtz free energy in Tsallis’ non-extensive thermostatistics can be utilized to quantify a perceived value of an uncertain reward, and (ii) empirically examined the explanatory powers of the models. Future study directions in neuroeconomics and econophysics by utilizing the Tsallis’ free energy model are discussed.

  11. Risk based adaptation of infrastructures to floods and storm surges induced by climate change.

    NASA Astrophysics Data System (ADS)

    Luna, Byron Quan; Garrè, Luca; Hansen, Peter Friis

    2014-05-01

    Coastal natural hazards are changing in frequency and intensity in association with climate change. These extreme events, combined with growth in the extent of vulnerable societies, will lead to substantial increases in monetary losses. For this reason, adaptation is required, and the effective and adequate measures to withstand the impacts of climate change must be identified. Decision strategies are needed for the timing of investments and for the allocation of resources to safeguard the future in a sustainable manner. Adapting structures to climate change requires decision making under uncertainty. Therefore, it is vital that risk assessments are based on a reliable and appropriate evaluation of the involved uncertainties. Linking a Bayesian network (BN) to a Geographic Information System (GIS) for risk assessment makes it possible to model all the relevant parameters, their causal relations, and the involved uncertainties. Integrating the probabilistic approach into a GIS allows uncertainties to be quantified and visualized spatially. By addressing these uncertainties, the Bayesian network approach quantifies their effects and facilitates the identification of future model improvements and where other efforts should be concentrated. The final results can serve as a supportive tool for presenting reliable risk assessments to decision-makers. Based on these premises, a case study was performed to assess the storm surge magnitude and flooding extent of an event with characteristics similar to Superstorm Sandy under 2050 and 2090 conditions.

  12. Uncertainty in mixing models: a blessing in disguise?

    NASA Astrophysics Data System (ADS)

    Delsman, J. R.; Oude Essink, G. H. P.

    2012-04-01

    Despite the abundance of tracer-based studies in catchment hydrology over the past decades, relatively few have addressed the associated uncertainty in much detail. This uncertainty stems from analytical error, spatial and temporal variance in end-member composition, and from not incorporating all relevant processes in the necessarily simplistic mixing models. Instead of applying standard EMMA methodology, we used end-member mixing model analysis within a Monte Carlo framework to quantify the uncertainty surrounding our analysis. Borrowing from the well-known GLUE methodology, we discarded mixing models that could not satisfactorily explain sample concentrations and analyzed the posterior parameter set. This use of environmental tracers aided in disentangling hydrological pathways in a Dutch polder catchment. This 10 km2 agricultural catchment is situated in the coastal region of the Netherlands. Brackish groundwater seepage, originating from Holocene marine transgressions, adversely affects water quality in this catchment. Current water management practice is aimed at improving water quality by flushing the catchment with fresh water from the river Rhine. Climate change is projected to decrease future fresh water availability, signifying the need for a more sustainable water management practice and a better understanding of the functioning of the catchment. The end-member mixing analysis increased our understanding of the hydrology of the studied catchment. Applying the end-member mixing analysis within a GLUE-like framework not only quantified the uncertainty associated with the analysis; examination of the posterior parameter set also identified catchment processes that would otherwise have been overlooked.
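
    A GLUE-like Monte Carlo rejection scheme for an end-member mixing model can be sketched as below; the tracers, end-member concentrations, uncertainties, and acceptance tolerances are hypothetical values, not those of the Dutch polder study.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Two tracers, three end members (e.g. precipitation, fresh groundwater, brackish seepage).
    # End-member tracer concentrations carry uncertainty; observations carry analytical error.
    em_mean = np.array([[0.1, 5.0],     # tracer 1, tracer 2 for end member 1
                        [1.0, 30.0],    # end member 2
                        [8.0, 200.0]])  # end member 3 (all values illustrative)
    em_sd = 0.15 * em_mean
    obs = np.array([2.5, 70.0])         # hypothetical stream sample
    tol = np.array([0.4, 10.0])         # acceptance tolerance per tracer

    n = 200000
    # Sample mixing fractions uniformly on the simplex and end-member compositions from priors
    f = rng.dirichlet(np.ones(3), size=n)                    # (n, 3) fractions summing to 1
    em = rng.normal(em_mean, em_sd, size=(n, 3, 2))          # (n, 3 end members, 2 tracers)
    pred = np.einsum('nk,nkt->nt', f, em)                    # predicted sample concentrations

    behavioural = np.all(np.abs(pred - obs) <= tol, axis=1)  # GLUE-style acceptance
    post_f = f[behavioural]
    print(f"retained {post_f.shape[0]} of {n} mixing models")
    print("posterior fraction means:", post_f.mean(axis=0))
    print("posterior 5-95% ranges:\n", np.percentile(post_f, [5, 95], axis=0))
    ```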

  13. Uncertainty of Wheat Water Use: Simulated Patterns and Sensitivity to Temperature and CO2

    NASA Technical Reports Server (NTRS)

    Cammarano, Davide; Roetter, Reimund P.; Asseng, Senthold; Ewert, Frank; Wallach, Daniel; Martre, Pierre; Hatfield, Jerry L.; Jones, James W.; Rosenzweig, Cynthia E.; Ruane, Alex C.; hide

    2016-01-01

    Projected global warming and population growth will reduce future water availability for agriculture. Thus, it is essential to increase the efficiency of water use to ensure crop productivity. Quantifying crop water use (WU; i.e. actual evapotranspiration) is a critical step towards this goal. Here, sixteen wheat simulation models were used to quantify sources of model uncertainty and to estimate the relative changes and variability between models for simulated WU, water use efficiency (WUE, WU per unit of grain dry mass produced), transpiration efficiency (Teff, transpiration per unit of grain yield dry mass produced), grain yield, crop transpiration and soil evaporation at increased temperatures and elevated atmospheric carbon dioxide concentrations ([CO2]). The greatest uncertainty in simulating water use, potential evapotranspiration, crop transpiration and soil evaporation was due to differences in how crop transpiration was modelled, and accounted for 50% of the total variability among models. The simulation results for the sensitivity to temperature indicated that crop WU will decline with increasing temperature due to reduced growing seasons. The uncertainties in simulated crop WU, and in particular due to uncertainties in simulating crop transpiration, were greater under conditions of increased temperatures and with high temperatures in combination with elevated atmospheric [CO2] concentrations. Hence the simulation of crop WU, and in particular crop transpiration under higher temperatures, needs to be improved and evaluated with field measurements before models can be used to simulate climate change impacts on future crop water demand.

  14. Constraining methane emissions from the Indo-Gangetic Plains and South Asia using combined surface and satellite data

    NASA Astrophysics Data System (ADS)

    Ganesan, A.; Lunt, M. F.; Rigby, M. L.; Chatterjee, A.; Boesch, H.; Parker, R.; Prinn, R. G.; van der Schoot, M. V.; Krummel, P. B.; Tiwari, Y. K.; Mukai, H.; Machida, T.; Terao, Y.; Nomura, S.; Patra, P. K.

    2015-12-01

    We present an analysis of the regional methane (CH4) budget from South Asia, using new measurements and new modelling techniques. South Asia contains some of the largest anthropogenic CH4 sources in the world, mainly from rice agriculture and ruminants. However, emissions from this region have been highly uncertain largely due to insufficient constraints from atmospheric measurements. Compared to parts of the developed world, which have well-developed monitoring networks, South Asia is very under-sampled, particularly given its importance to the global CH4 budget. Over the past few years, data have been collected from a variety of surface sites around the region, ranging from in situ to flask-based sampling. We have used these data, in conjunction with column methane data from the GOSAT satellite, to quantify emissions at a regional scale. Using the Met Office's Lagrangian NAME model, we calculated sensitivities to surface fluxes at 12 km resolution, allowing us to simulate the high-resolution impacts of emissions on concentrations. In addition, we used a newly developed hierarchical Bayesian inverse estimation scheme to estimate regional fluxes over the period of 2012-2014 in addition to ancillary "hyper-parameters" that characterize uncertainties in the system. Through this novel approach, we have characterized the effect of "aggregation" errors, model uncertainties as well as the effects of correlated errors when using regional measurement networks. We have also assessed the effects of biases on the GOSAT CH4 retrievals, which has been made possible for the first time for this region through the expanded surface measurements. In this talk, we will discuss a) regional CH4 fluxes from South Asia, with a particular focus on the densely populated Indo-Gangetic Plains b) derived model uncertainties, including the effects of correlated errors c) the impacts of combining surface and satellite data for emissions estimation in regions where poor satellite validation exists and d) the challenges in estimating emissions for regions of the world with a sparse measurement network.

  15. Interannual Variability of Tropical Rainfall as Seen From TRMM

    NASA Technical Reports Server (NTRS)

    Robertson, Franklin R.

    2005-01-01

    Considerable uncertainty surrounds the issue of whether precipitation over the tropical oceans (30° N/S) changes systematically with the interannual sea-surface temperature (SST) anomalies that accompany El Nino (warm) and La Nina (cold) events. Although it is well documented that El Nino-Southern Oscillation (ENSO) events with marked SST changes over the tropical oceans produce significant regional changes in precipitation, water vapor, and radiative fluxes in the tropics, we still cannot adequately quantify the associated net integrated changes to the water and heat balance over the entire tropical oceanic or land sectors. Resolving this uncertainty is important because precipitation and latent heat release variations over land and ocean sectors are key components of the tropical heat balance in its most aggregated form. Rainfall estimates from the Version 5 Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) averaged over the tropical oceans have not settled this issue and, in fact, show marked differences from estimates produced by two TRMM Microwave Imager (TMI) passive microwave algorithms. In this paper we will focus on findings indicating that uncertainties in the microphysical assumptions necessitated by the single-frequency PR measurement pose difficulties for detecting climate-related precipitation signals. Recent work has shown that path-integrated attenuation derived from the effects of precipitation on the radar return from the ocean surface exhibits interannual variability that agrees closely with the TMI time series, yet the interannual variability of PR rainfall (and of attenuation derived predominantly from reflectivity) differs even in sign. We will explore these apparent inconsistencies and examine changes in the new TRMM Version 6 retrievals. To place these results in a tropical water balance perspective, we also examine interannual variations in evaporation over the tropical oceans derived from TRMM and SSM/I (Special Sensor Microwave Imager) measurements of surface winds and humidity. Evaporation estimates from reanalysis and several global model experiments will also be compared to the TRMM findings and evaluated for consistency. The ability to detect regional shifts in freshwater flux over the oceans (equivalently, integrated moisture convergence) and moisture transport will be discussed.
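
    The comparison described above, whether two ocean-averaged rainfall records agree in the amplitude and sign of their interannual variability, amounts to removing the mean annual cycle from each monthly series and comparing the residual anomalies. The sketch below illustrates this with synthetic series; the values carry no physical meaning.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic monthly, tropical-ocean-mean rainfall (mm/day) for two retrievals.
        months = 120
        enso = np.sin(2 * np.pi * np.arange(months) / 42.0)            # stand-in interannual signal
        seasonal = 0.5 * np.sin(2 * np.pi * np.arange(months) / 12.0)  # mean annual cycle
        tmi = 3.0 + seasonal + 0.15 * enso + rng.normal(0, 0.03, months)
        pr  = 2.8 + seasonal - 0.05 * enso + rng.normal(0, 0.03, months)

        def anomalies(series):
            """Remove the mean annual cycle to leave interannual anomalies."""
            clim = series.reshape(-1, 12).mean(axis=0)
            return series - np.tile(clim, months // 12)

        r = np.corrcoef(anomalies(tmi), anomalies(pr))[0, 1]
        print(f"correlation of interannual anomalies: {r:+.2f}")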

  16. The value of redundant measurements - highlights from AmeriFlux site visits using a portable eddy covariance system

    NASA Astrophysics Data System (ADS)

    Chan, S.; Billesbach, D. P.; Hanson, C. V.; Dengel, S.; Polonik, P.; Biraud, S.

    2016-12-01

    The AmeriFlux network conducts independent site visits using a portable eddy covariance system (PECS). Short-term (<2 weeks), side-by-side comparisons enable the network to evaluate inter-comparability between sites, improve data quality, and assess measurement uncertainty across the network. The PECS includes commonly used sensors for turbulent flux, radiation, and meteorological measurements, which are maintained and calibrated following established best practices, at or above the manufacturers' recommendations. The importance of site visits was recognized at the inception of the AmeriFlux network, with the first site visit conducted in 1997. Since that time, more than 180 site visits at over 120 different sites have been conducted. Site visit reports over the years have led to many key findings and important advances within the flux community, which are highlighted in this presentation. Furthermore, we summarize and synthesize results from recent site comparisons conducted with the latest generation of the PECS (2013-present). The presentation quantifies observed differences between the PECS and network sites for key flux, radiation, and meteorological metrics. The aggregated comparisons provide insight into comparability amongst network sites as well as areas for improvement. We identify common errors and issues and discuss some best practices.
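
    Quantifying the "observed differences" from a side-by-side deployment typically reduces to simple paired statistics between the roving and resident systems, such as mean bias, RMSE, and a regression slope. A minimal sketch is shown below; the half-hourly flux values are synthetic and the variable names are illustrative only.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic half-hourly sensible heat flux (W m-2) from the resident site system
        # and the portable eddy covariance system deployed alongside it.
        site = rng.normal(120, 60, 500)
        pecs = 0.97 * site + 5.0 + rng.normal(0, 12, 500)   # small gain and offset difference

        bias = np.mean(pecs - site)
        rmse = np.sqrt(np.mean((pecs - site) ** 2))
        slope, intercept = np.polyfit(site, pecs, 1)        # ordinary least-squares fit
        r = np.corrcoef(site, pecs)[0, 1]

        print(f"bias = {bias:+.1f} W m-2, RMSE = {rmse:.1f} W m-2")
        print(f"PECS = {slope:.2f} * site + {intercept:.1f}, r = {r:.3f}")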

  17. Operator for object recognition and scene analysis by estimation of set occupancy with noisy and incomplete data sets

    NASA Astrophysics Data System (ADS)

    Rees, S. J.; Jones, Bryan F.

    1992-11-01

    Once feature extraction has occurred in a processed image, the recognition problem becomes one of defining a set of features that maps sufficiently well onto one of the defined shape/object models to permit a claimed recognition. This process is usually handled by aggregating features until a large enough weighting is obtained to claim membership, or until an adequate number of located features are matched to the reference set. A requirement has existed for an operator or measure capable of a more direct assessment of membership/occupancy between feature sets, particularly where the feature sets may be defective representations. Such feature set errors may be caused by noise, by overlapping of objects, and by partial obscuration of features. These problems occur at the point of acquisition; repairing the data would therefore assume a priori knowledge of the solution. The technique described in this paper offers a set-theoretic measure of partial occupancy, defined in terms of the minimum set of additions needed to permit full occupancy and the set of locations occupied if such additions are made. As is shown, this technique permits recognition of partial feature sets with quantifiable degrees of uncertainty. A solution to the problems of obscuration and overlapping is therefore available.
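
    One way to read the measure described above is as follows: given an observed feature set and a model feature set, the minimum additions are the model features missing from the observation, and the degree of occupancy is the fraction of the model already covered, which in turn quantifies the uncertainty of the claimed match. The sketch below illustrates this reading with plain Python sets; it is an interpretation for illustration, not the authors' exact operator.

        # Hypothetical feature sets: model features of a rectangle vs. features actually
        # detected in a noisy, partially obscured image.
        model    = {"c1", "c2", "c3", "c4", "edge_top", "edge_bottom", "edge_left", "edge_right"}
        observed = {"c1", "c2", "c4", "edge_top", "edge_left", "spurious_blob"}

        minimum_additions = model - observed     # features that must be added for full occupancy
        occupied          = model & observed     # locations of occupancy already satisfied

        occupancy = len(occupied) / len(model)   # partial occupancy in [0, 1]
        uncertainty = 1.0 - occupancy            # residual uncertainty of the claimed match

        print(f"occupancy = {occupancy:.2f}, uncertainty = {uncertainty:.2f}")
        print("minimum additions required:", sorted(minimum_additions))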

  18. Criticality of iron and its principal alloying elements.

    PubMed

    Nuss, Philip; Harper, E M; Nassar, N T; Reck, Barbara K; Graedel, T E

    2014-04-01

    Because modern technology depends on reliable supplies of a wide variety of materials and because of increasing concern about those supplies, a comprehensive methodology was created to quantify the degree of criticality of the metals of the periodic table. In this paper, we apply this methodology to iron and several of its main alloying elements (i.e., vanadium, chromium, manganese, and niobium). These elements represent the basic metals of any industrial society and are vital for national security and economic well-being. Assessments relating to the dimensions of criticality - supply risk, vulnerability to supply restriction, and environmental implications - for 2008 are made at the global level and for the United States. Evaluations of each of the multiple indicators are presented, with aggregate results plotted in "criticality space", together with Monte Carlo simulation-derived "uncertainty cloud" estimates. Iron has the lowest supply risk, primarily because of its widespread geological occurrence. Vanadium displays the highest cradle-to-gate environmental implications, followed by niobium, chromium, manganese, and iron. Chromium and manganese, both essential in steel making, display the highest vulnerability to supply restriction, largely because substitution, or substitution at equal performance, is not possible for all end uses. From a comprehensive perspective, we regard the overall criticality as low for iron and modest for the alloying elements we evaluated.
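
    The Monte Carlo "uncertainty cloud" mentioned above can be pictured as repeated sampling of the underlying indicator scores, with each draw aggregated into a point in criticality space. The sketch below is a generic illustration of that idea; the indicator names, weights, and distributions are invented for the example and are not the study's values.

        import numpy as np

        rng = np.random.default_rng(4)
        n_draws = 10000

        # Hypothetical indicator scores (0-100) with assumed uncertainties for one metal.
        indicators = {
            "depletion_time":     (35, 5),   # (mean, standard deviation)
            "companion_fraction": (20, 8),
            "policy_risk":        (45, 6),
        }
        weights = {"depletion_time": 0.5, "companion_fraction": 0.25, "policy_risk": 0.25}

        # Aggregate each Monte Carlo draw into a supply-risk score.
        supply_risk = sum(w * rng.normal(*indicators[k], n_draws) for k, w in weights.items())

        # A second, independently sampled axis stands in for vulnerability to supply restriction.
        vulnerability = rng.normal(55, 7, n_draws)

        # The cloud is the spread of (supply risk, vulnerability) points; summarize it here.
        for name, axis in [("supply risk", supply_risk), ("vulnerability", vulnerability)]:
            lo, hi = np.percentile(axis, [5, 95])
            print(f"{name}: median {np.median(axis):.0f}, 90% range [{lo:.0f}, {hi:.0f}]")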

  19. Optimal Objective-Based Experimental Design for Uncertain Dynamical Gene Networks with Experimental Error.

    PubMed

    Mohsenizadeh, Daniel N; Dehghannasiri, Roozbeh; Dougherty, Edward R

    2018-01-01

    In systems biology, network models are often used to study interactions among cellular components, a salient aim being to develop drugs and therapeutic mechanisms to change the dynamical behavior of the network to avoid undesirable phenotypes. Owing to limited knowledge, model uncertainty is commonplace and network dynamics can be updated in different ways, thereby giving multiple dynamic trajectories, that is, dynamics uncertainty. In this manuscript, we propose an experimental design method that can effectively reduce the dynamics uncertainty and improve performance in an interaction-based network. Both dynamics uncertainty and experimental error are quantified with respect to the modeling objective, herein, therapeutic intervention. The aim of experimental design is to select among a set of candidate experiments the experiment whose outcome, when applied to the network model, maximally reduces the dynamics uncertainty pertinent to the intervention objective.
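
    The selection rule described above, choosing the candidate experiment whose outcome is expected to most reduce the uncertainty that matters for the intervention objective, can be sketched with a small discrete example in the spirit of mean objective cost of uncertainty (MOCU) based design. Everything below (the two uncertain regulations, the intervention cost table, the uniform prior) is a hypothetical setup for illustration, not the paper's model.

        import itertools
        import numpy as np

        rng = np.random.default_rng(1)

        # Two uncertain regulatory interactions, each either repressing (-1) or activating (+1).
        outcomes = [(-1, +1), (-1, +1)]
        prior = {th: 0.25 for th in itertools.product(*outcomes)}   # uniform prior over 4 networks

        actions = range(3)   # three candidate therapeutic interventions
        # Hypothetical cost (undesirable-phenotype probability) for each (action, network) pair.
        cost = {(a, th): rng.uniform(0.0, 1.0) for a in actions for th in prior}

        def mocu(dist):
            """Expected extra cost of the robust intervention over each network's optimum."""
            robust = min(actions, key=lambda a: sum(p * cost[(a, th)] for th, p in dist.items()))
            return sum(p * (cost[(robust, th)] - min(cost[(a, th)] for a in actions))
                       for th, p in dist.items())

        def expected_remaining_mocu(k):
            """Experiment k reveals interaction k; average the residual MOCU over its outcomes."""
            total = 0.0
            for outcome in outcomes[k]:
                cond = {th: p for th, p in prior.items() if th[k] == outcome}
                mass = sum(cond.values())
                cond = {th: p / mass for th, p in cond.items()}
                total += mass * mocu(cond)
            return total

        best = min(range(len(outcomes)), key=expected_remaining_mocu)
        print("run the experiment resolving interaction", best,
              "| expected remaining MOCU:", round(expected_remaining_mocu(best), 3))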

  20. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis-Hastings Markov Chain Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen

    2017-06-01

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies it to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly to MLE in estimating the mean daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower credible interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. The Bayesian MCMC method may therefore be more suitable for uncertainty analysis and risk management.
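
    The core machinery named above, a random-walk Metropolis-Hastings sampler that yields posterior credible intervals rather than a single MLE point estimate, can be sketched as follows. The AR(1) log-flow model, the synthetic data, and all settings below are illustrative assumptions, not the model used in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy daily log-flow series generated from an AR(1) process (illustrative only).
        n = 500
        phi_true, mu_true, sigma_true = 0.8, 2.0, 0.3
        q = np.empty(n)
        q[0] = mu_true
        for t in range(1, n):
            q[t] = mu_true + phi_true * (q[t - 1] - mu_true) + rng.normal(0, sigma_true)

        def log_post(mu, phi, log_sigma):
            """Log posterior of the AR(1) parameters under weak Gaussian priors."""
            if not -1 < phi < 1:
                return -np.inf
            sigma = np.exp(log_sigma)
            resid = q[1:] - mu - phi * (q[:-1] - mu)
            return (-0.5 * np.sum(resid ** 2) / sigma ** 2 - (n - 1) * log_sigma
                    - 0.5 * (mu ** 2 / 100 + phi ** 2 + log_sigma ** 2))

        theta = np.array([0.0, 0.0, 0.0])               # (mu, phi, log_sigma)
        lp = log_post(*theta)
        chain = []
        for i in range(30000):
            prop = theta + rng.normal(0, 0.02, 3)       # random-walk proposal
            lp_prop = log_post(*prop)
            if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis acceptance rule
                theta, lp = prop, lp_prop
            if i >= 10000:                              # discard burn-in
                chain.append(theta.copy())

        chain = np.array(chain)
        lo, hi = np.percentile(chain[:, 1], [2.5, 97.5])
        print(f"posterior mean phi = {chain[:, 1].mean():.2f}, "
              f"95% credible interval [{lo:.2f}, {hi:.2f}]")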
