Uncertainties in estimating heart doses from 2D-tangential breast cancer radiotherapy.
Lorenzen, Ebbe L; Brink, Carsten; Taylor, Carolyn W; Darby, Sarah C; Ewertz, Marianne
2016-04-01
We evaluated the accuracy of three methods of estimating radiation dose to the heart from two-dimensional tangential radiotherapy for breast cancer, as used in Denmark during 1982-2002. Three tangential radiotherapy regimens were reconstructed using CT-based planning scans for 40 patients with left-sided and 10 with right-sided breast cancer. Setup errors and organ motion were simulated using estimated uncertainties. For left-sided patients, mean heart dose was related to maximum heart distance in the medial field. For left-sided breast cancer, mean heart dose estimated from individual CT scans varied from <1 Gy to >8 Gy, and maximum dose from 5 to 50 Gy, for all three regimens, so estimates based only on regimen had substantial uncertainty. When maximum heart distance was taken into account, the uncertainty was reduced and was comparable to the uncertainty of estimates based on individual CT scans. For right-sided breast cancer patients, mean heart dose based on individual CT scans was always <1 Gy and maximum dose always <5 Gy for all three regimens. The use of stored individual simulator films provides a method for estimating heart doses in left-tangential radiotherapy for breast cancer that is almost as accurate as estimates based on individual CT scans.
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...
NASA Astrophysics Data System (ADS)
Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung
2017-04-01
Due to limited hydrogeological observation data and the high level of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many methods of parameter estimation; for example, the Kalman filter provides real-time calibration of parameters through measurements from groundwater monitoring wells, and related methods such as the Extended Kalman Filter and Ensemble Kalman Filter are widely applied in groundwater research. However, Kalman filter methods are limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which can account for the uncertainty of data in parameter estimation. With these two methods, we can estimate parameters from hard data (certain) and soft data (uncertain) at the same time. In this study, we use Python and QGIS with a groundwater model (MODFLOW) and implement the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation. The approach extends conventional filtering while also accounting for the uncertainty of the data. The study was conducted as a numerical model experiment combining the Bayesian maximum entropy filter with a hypothetical MODFLOW groundwater model architecture, using virtual observation wells to observe the groundwater model periodically. The results showed that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides good real-time parameter estimates.
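A minimal sketch of the kind of filtering update described above, assuming a scalar parameter and an invented forward model; a real application would wrap a MODFLOW run, and all names and numbers here are hypothetical:

```python
import numpy as np

def forward_model(theta):
    """Hypothetical forward model: predicted well head as a function of log-K."""
    return 50.0 - 3.0 * np.exp(-theta)  # stand-in for a MODFLOW simulation

def ekf_update(theta, P, z, R, eps=1e-6):
    """One extended-Kalman-filter measurement update for mean theta, variance P."""
    H = (forward_model(theta + eps) - forward_model(theta - eps)) / (2 * eps)  # local Jacobian
    S = H * P * H + R                        # innovation variance
    K = P * H / S                            # Kalman gain
    theta = theta + K * (z - forward_model(theta))
    P = (1.0 - K * H) * P
    return theta, P

theta, P = 0.0, 1.0                          # prior mean and variance of log-K
for z in [47.2, 47.5, 47.1]:                 # simulated observations ("hard data")
    theta, P = ekf_update(theta, P, z, R=0.04)
print(theta, P)                              # posterior mean and shrunken variance
```

Soft (uncertain) data, as in the Bayesian maximum entropy variant, would enter by replacing the fixed observation variance R with the variance attached to each soft datum.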
Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation
ERIC Educational Resources Information Center
Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting
2011-01-01
Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…
Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.
Rideout, Brendan P; Dosso, Stan E; Hannay, David E
2013-09-01
This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
A double-gaussian, percentile-based method for estimating maximum blood flow velocity.
Marzban, Caren; Illian, Paul R; Morison, David; Mourad, Pierre D
2013-11-01
Transcranial Doppler sonography allows for the estimation of blood flow velocity, whose maximum value, especially at systole, is often of clinical interest. Given that observed values of flow velocity are subject to noise, a useful notion of "maximum" requires a criterion for separating the signal from the noise. All commonly used criteria produce a point estimate (i.e., a single value) of maximum flow velocity at any given time and therefore convey no information about the distribution or uncertainty of flow velocity. This limitation has clinical consequences, especially for patients in vasospasm, whose largest flow velocities can be difficult to measure. Therefore, a method for estimating flow velocity and its uncertainty is desirable. A gaussian mixture model is used to separate the noise distribution from the signal distribution. The time series of a given percentile of the latter then provides a flow velocity envelope. This means of estimating the flow velocity envelope naturally allows for displaying several percentiles (e.g., 95th and 99th), thereby conveying uncertainty in the highest flow velocity. Such envelopes were computed for 59 patients and were shown to provide reasonable and useful estimates of the largest flow velocities compared to a standard algorithm. Moreover, we found that the commonly used envelope was generally consistent with the 90th percentile of the signal distribution derived via the gaussian mixture model. Separating the observed distribution of flow velocity into a noise component and a signal component, using a double-gaussian mixture model, allows for the percentiles of the latter to provide meaningful measures of the largest flow velocities and their uncertainty.
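The envelope idea above can be sketched in a few lines: fit a two-component gaussian mixture to the velocity samples of one time window, call the higher-mean component the signal, and read percentiles off it. This is an illustrative stand-in for the authors' method; all data are synthetic.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
noise = rng.normal(20.0, 10.0, 4000)       # hypothetical noise floor (cm/s)
signal = rng.normal(90.0, 15.0, 1000)      # hypothetical flow-velocity signal (cm/s)
v = np.concatenate([noise, signal]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(v)
k = int(np.argmax(gmm.means_.ravel()))     # index of the signal component
mu = gmm.means_.ravel()[k]
sd = float(np.sqrt(gmm.covariances_.ravel()[k]))

for p in (0.90, 0.95, 0.99):               # percentile envelope values
    print(f"{int(p * 100)}th percentile: {norm.ppf(p, mu, sd):.1f} cm/s")
```

Repeating this per time window yields the percentile envelopes, with the spread between, say, the 95th and 99th percentiles conveying uncertainty in the largest velocities.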
Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle
Shoufan Fang; George Z. Gertner
2000-01-01
When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...
Estimation of Uncertainties in Stage-Discharge Curve for an Experimental Himalayan Watershed
NASA Astrophysics Data System (ADS)
Kumar, V.; Sen, S.
2016-12-01
Various water resource projects developed on rivers originating from the Himalayan region, the "Water Tower of Asia", play an important role in downstream development. Flow measurements at the desired river site are critical for river engineers and hydrologists for water resources planning and management, flood forecasting, reservoir operation, and flood inundation studies. However, accurate discharge assessment of these mountainous rivers is costly, tedious, and frequently dangerous for operators during flood events. Currently, in India, discharge estimation relies on the stage-discharge relationship known as the rating curve, a relationship affected by a high degree of uncertainty. Estimating the uncertainty of a rating curve remains a relevant challenge because it is not easy to parameterize. The main sources of rating-curve uncertainty are errors from incorrect discharge measurement, variation in hydraulic conditions, and depth measurement. Our objective in this study is to obtain the rating-curve parameters that best fit a limited record of observations and to estimate the uncertainties at different depths obtained from the rating curve. The rating-curve parameters of the standard power law are estimated by maximum likelihood for three streams of the Aglar watershed located in the lesser Himalayas. Uncertainties in the developed rating curves are quantified from the estimated variances and covariances of the rating-curve parameters. Results showed that the uncertainties varied with catchment behavior, with errors ranging between 0.006 and 1.831 m3/s. Discharge uncertainty in the Aglar watershed streams depends significantly on the extent of extrapolation outside the range of observed water levels. Extrapolation analysis confirmed that extrapolating more than 15% beyond the maximum or 5% below the minimum observed discharge is not recommended for these mountainous gauging sites.
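A compact sketch of the estimation step, assuming a least-squares fit (equivalent to maximum likelihood under Gaussian errors) of the standard power-law rating curve Q = a(h - h0)^b to invented stage-discharge data, with the parameter covariance propagated to a discharge uncertainty by the delta method:

```python
import numpy as np
from scipy.optimize import curve_fit

h = np.array([0.25, 0.32, 0.41, 0.55, 0.68, 0.80])   # stage (m), illustrative
Q = np.array([0.08, 0.16, 0.31, 0.62, 1.05, 1.52])   # discharge (m3/s), illustrative

def rating(h, a, h0, b):
    return a * (h - h0) ** b

popt, pcov = curve_fit(rating, h, Q, p0=[3.0, 0.1, 1.8],
                       bounds=([0.1, 0.0, 1.0], [10.0, 0.2, 3.0]))

hnew, eps = 0.90, 1e-6                                # extrapolated stage
grad = np.array([(rating(hnew, *(popt + eps * d)) - rating(hnew, *popt)) / eps
                 for d in np.eye(3)])                 # sensitivity to each parameter
var_Q = grad @ pcov @ grad                            # first-order variance
print(f"Q({hnew} m) = {rating(hnew, *popt):.2f} +/- {np.sqrt(var_Q):.2f} m3/s")
```

Evaluating var_Q across stages shows the uncertainty growing rapidly outside the observed range, which is the basis for extrapolation limits like those quoted above.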
Uncertainty in flood damage estimates and its potential effect on investment decisions
NASA Astrophysics Data System (ADS)
Wagenaar, D. J.; de Bruijn, K. M.; Bouwer, L. M.; de Moel, H.
2016-01-01
This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage functions and maximum damages can have large effects on flood damage estimates. This explanation is then used to quantify the uncertainty in the damage estimates with a Monte Carlo analysis. The Monte Carlo analysis uses a damage function library with 272 functions from seven different flood damage models. The paper shows that the resulting uncertainties in estimated damages are on the order of a factor of 2 to 5. The uncertainty is typically larger for flood events with small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economically optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investments.
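The Monte Carlo step can be illustrated with a toy version: each iteration draws a depth-damage curve and a maximum damage from plausible ranges and applies them to the same flood depths. The curves and numbers below are invented placeholders, not the study's 272-function library.

```python
import numpy as np

rng = np.random.default_rng(1)
depths = rng.gamma(2.0, 0.4, 5000)              # water depth per building (m)

def damage_fraction(depth, steep):
    return np.clip(depth / steep, 0.0, 1.0)     # toy linear depth-damage curve

totals = []
for _ in range(1000):
    steep = rng.uniform(1.0, 4.0)               # draw a damage function
    max_damage = rng.uniform(100e3, 300e3)      # draw a maximum damage (EUR)
    totals.append(damage_fraction(depths, steep).sum() * max_damage)

print(np.percentile(totals, [5, 50, 95]) / 1e6) # spread of totals, millions EUR
```

Even this toy setup reproduces the qualitative finding: the ratio between the upper and lower ends of the damage distribution is a factor of a few.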
Probabilistic description of probable maximum precipitation
NASA Astrophysics Data System (ADS)
Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin
2017-04-01
Probable Maximum Precipitation (PMP) is the key parameter used to estimate the probable maximum flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even though current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of the PMP based on the most commonly used method, so-called moisture maximization. To this end, a probabilistic bivariate extreme-value model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values, (ii) the interpretation of the maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.
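For reference, the deterministic moisture-maximization recipe that the paper generalizes can be written in a few lines; the probabilistic model replaces this single maximum with a bivariate extreme-value distribution. All values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
precip = rng.gamma(3.0, 15.0, 200)        # storm precipitation totals (mm)
pw_obs = rng.uniform(20.0, 45.0, 200)     # precipitable water during each storm (mm)
pw_max = 50.0                             # climatological maximum precipitable water (mm)

scaled = precip * pw_max / pw_obs         # moisture-maximized storms
print(f"deterministic PMP estimate: {scaled.max():.0f} mm")  # one value, no uncertainty
```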
Peña, Carlos; Espeland, Marianne
2015-01-01
The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution.
NASA Astrophysics Data System (ADS)
Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen
2017-06-01
This paper presents a Bayesian approach using a Metropolis-Hastings Markov chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of the daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a relatively narrower credible interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. The Bayesian MCMC method might therefore be more favorable for uncertainty analysis and risk management.
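A generic random-walk Metropolis-Hastings sampler of the kind named above looks as follows; this is a one-parameter toy (mean flow with known noise), not the paper's multi-station model.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(12.0, 3.0, 50)          # synthetic daily flow rates

def log_post(mu):
    # flat prior on mu; Gaussian likelihood with known sigma = 3
    return -0.5 * np.sum((data - mu) ** 2) / 9.0

mu, chain = 0.0, []
for _ in range(20000):
    prop = mu + rng.normal(0, 0.5)        # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                         # accept with MH probability
    chain.append(mu)

post = np.array(chain[5000:])             # discard burn-in
print(post.mean(), np.percentile(post, [2.5, 97.5]))  # mean, 95% credible interval
```

The credible interval printed at the end is the Bayesian counterpart of the MLE confidence interval that the paper compares against.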
Methods and Tools for Evaluating Uncertainty in Ecological Models: A Survey
Poster presented at the Ecological Society of America Meeting. Ecologists are familiar with a variety of uncertainty techniques, particularly in the intersection of maximum likelihood parameter estimation and Monte Carlo analysis techniques, as well as a recent increase in Baye...
Asquith, William H.; Kiang, Julie E.; Cohn, Timothy A.
2017-07-17
The U.S. Geological Survey (USGS), in cooperation with the U.S. Nuclear Regulatory Commission, has investigated statistical methods for probabilistic flood hazard assessment to provide guidance on very low annual exceedance probability (AEP) estimation of peak-streamflow frequency and the quantification of corresponding uncertainties using streamgage-specific data. The term “very low AEP” implies exceptionally rare events defined as those having AEPs less than about 0.001 (or 1 × 10⁻³ in scientific notation, or for brevity 10⁻³). Such low AEPs are of great interest to those involved with peak-streamflow frequency analyses for critical infrastructure, such as nuclear power plants. Flood frequency analyses at streamgages are most commonly based on annual instantaneous peak streamflow data and a probability distribution fit to these data. The fitted distribution provides a means to extrapolate to very low AEPs. Within the United States, the Pearson type III probability distribution, when fit to the base-10 logarithms of streamflow, is widely used, but other distribution choices exist. The USGS-PeakFQ software, implementing the Pearson type III within the Federal agency guidelines of Bulletin 17B (method of moments) and updates to the expected moments algorithm (EMA), was specially adapted for an “Extended Output” user option to provide estimates at selected AEPs from 10⁻³ to 10⁻⁶. Parameter estimation methods, in addition to product moments and EMA, include L-moments, maximum likelihood, and maximum product of spacings (maximum spacing estimation). This study comprehensively investigates multiple distributions and parameter estimation methods for two USGS streamgages (01400500 Raritan River at Manville, New Jersey, and 01638500 Potomac River at Point of Rocks, Maryland). The results of this study specifically involve the four methods for parameter estimation and up to nine probability distributions, including the generalized extreme value, generalized log-normal, generalized Pareto, and Weibull. Uncertainties in streamflow estimates for corresponding AEPs are depicted and quantified in two primary forms: quantile (aleatoric [random sampling]) uncertainty and distribution-choice (epistemic [model]) uncertainty. Sampling uncertainties of a given distribution are relatively straightforward to compute from analytical or Monte Carlo-based approaches. Distribution-choice uncertainty stems from choices of potentially applicable probability distributions, for which divergence among the choices increases as AEP decreases. Conventional goodness-of-fit statistics, such as Cramér-von Mises, and L-moment ratio diagrams are demonstrated in order to hone distribution choice. The results generally show that distribution-choice uncertainty is larger than sampling uncertainty for very low AEP values.
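The extrapolation at the heart of such analyses is easy to sketch: fit one candidate distribution (here a Pearson type III on log10 peaks, by moments) and read off quantiles at very low AEPs. Data are synthetic, and a real analysis would repeat this across distributions and estimation methods to expose distribution-choice uncertainty.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
peaks = rng.lognormal(mean=8.0, sigma=0.5, size=80)   # synthetic annual peak flows
x = np.log10(peaks)

g = stats.skew(x, bias=False)                          # sample skew of the logs
dist = stats.pearson3(g, loc=x.mean(), scale=x.std(ddof=1))

for aep in (1e-2, 1e-3, 1e-4, 1e-5, 1e-6):
    print(f"AEP {aep:.0e}: {10 ** dist.ppf(1 - aep):,.0f}")  # quantile in flow units
```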
Huo, Ju; Zhang, Guiyang; Yang, Ming
2018-04-20
This paper is concerned with the anisotropic and non-identical gray-level distribution of feature points on a curved surface, for which a high-precision, uncertainty-resistant pose estimation algorithm is proposed. The weighted contribution of uncertainty to the objective function of feature-point measurement error is analyzed. A novel error objective function based on spatial collinearity error is then constructed by transforming the uncertainty into a covariance-weighted matrix, which is suitable for practical applications. Further, the optimized generalized orthogonal iterative (GOI) algorithm is utilized for the iterative solution, so that poor convergence is avoided and the uncertainty is significantly resisted. The optimized GOI algorithm hence extends field-of-view applications and improves the accuracy and robustness of the measurement results through redundant information. Finally, simulation and practical experiments show that the maximum error of the re-projected image coordinates of the target is less than 0.110 pixels. Within a 3000 mm × 3000 mm × 4000 mm space, the maximum estimation errors of static and dynamic measurement of rocket nozzle motion are less than 0.065° and 0.128°, respectively. The results verify the high accuracy and uncertainty-attenuation performance of the proposed approach, which should therefore have potential for engineering applications.
Intercomparison and Uncertainty Assessment of Nine Evapotranspiration Estimates Over South America
NASA Astrophysics Data System (ADS)
Sörensson, Anna A.; Ruscica, Romina C.
2018-04-01
This study examines the uncertainties and the representations of anomalies of a set of evapotranspiration products over climatologically distinct regions of South America. The products, coming from land surface models, reanalysis, and remote sensing, are chosen from sources that are readily available to the community of users. The results show that the spatial patterns of maximum uncertainty differ among metrics, with dry regions showing maximum relative uncertainties of annual mean evapotranspiration, while energy-limited regions present maximum uncertainties in the representation of the annual cycle and monsoon regions in the representation of anomalous conditions. Furthermore, it is found that land surface models driven by observed atmospheric fields detect meteorological and agricultural droughts in dry regions unequivocally. The remote sensing products employed do not distinguish all agricultural droughts and this could be attributed to the forcing net radiation. The study also highlights important characteristics of individual data sets and recommends users to include assessments of sensitivity to evapotranspiration data sets in their studies, depending on region and nature of study to be conducted.
Cross-Sectional And Longitudinal Uncertainty Propagation In Drinking Water Risk Assessment
NASA Astrophysics Data System (ADS)
Tesfamichael, A. A.; Jagath, K. J.
2004-12-01
Pesticide residues in drinking water can vary significantly from day to day. However, drinking water quality monitoring performed under the Safe Drinking Water Act (SDWA) at most community water systems (CWSs) is typically limited to four data points per year over a few years. Due to this limited sampling, likely maximum residues may be underestimated in risk assessment. In this work, a statistical methodology is proposed to study the cross-sectional and longitudinal uncertainties in observed samples and their propagated effect on risk estimates. The methodology is demonstrated using data from 16 CWSs across the US that have three independent databases of atrazine residue to estimate the uncertainty of risk in infants and children. The results showed that in 85% of the CWSs, chronic risks predicted with the proposed approach may be two to four times higher than those predicted with the current approach, while intermediate risks may be two to three times higher in 50% of the CWSs. In 12% of the CWSs, however, the proposed methodology showed a lower intermediate risk. A closed-form solution for the propagated uncertainty is developed to calculate the number of years (seasons) of water quality data and the sampling frequency needed to reduce the uncertainty in risk estimates. In general, this methodology provides good insight into the importance of addressing uncertainty in observed water quality data and the need to predict likely maximum residues in risk assessment by considering the propagation of uncertainties.
A modified ATI technique for nowcasting convective rain volumes over areas. [area-time integrals
NASA Technical Reports Server (NTRS)
Makarau, Amos; Johnson, L. Ronald; Doneaud, Andre A.
1988-01-01
This paper explores the applicability of the area-time-integral (ATI) technique for estimating the growth portion only of a convective storm (while the rain volume is computed using the entire life history of the event) and for nowcasting the total rain volume of a convective system at the stage of its maximum development. For these purposes, ATIs were computed from digital radar data (for 1981-1982) from the North Dakota Cloud Modification Project, using the maximum echo area (ATIA) of no less than 25 dBZ, the maximum reflectivity, and the maximum echo height as the end of the growth portion of the convective event. Linear regression analysis demonstrated that the correlations of total rain volume and maximum rain volume versus ATIA were the strongest. The uncertainties obtained were comparable to those that typically occur in rain volume estimates obtained from radar data employing Z-R conversion followed by space and time integration. This demonstrates that the total rain volume of a storm can be nowcasted at its maximum stage of development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, C.; Hanany, S.; Baccigalupi, C.
We extend a general maximum likelihood foreground estimation for cosmic microwave background (CMB) polarization data to include estimation of instrumental systematic effects. We focus on two particular effects: frequency band measurement uncertainty and instrumentally induced frequency dependent polarization rotation. We assess the bias induced on the estimation of the B-mode polarization signal by these two systematic effects in the presence of instrumental noise and uncertainties in the polarization and spectral index of Galactic dust. Degeneracies between uncertainties in the band and polarization angle calibration measurements and in the dust spectral index and polarization increase the uncertainty in the extracted CMB B-mode power, and may give rise to a biased estimate. We provide a quantitative assessment of the potential bias and increased uncertainty in an example experimental configuration. For example, we find that with 10% polarized dust, a tensor to scalar ratio of r = 0.05, and the instrumental configuration of the E and B experiment balloon payload, the estimated CMB B-mode power spectrum is recovered without bias when the frequency band measurement has 5% uncertainty or less, and the polarization angle calibration has an uncertainty of up to 4°.
NASA Technical Reports Server (NTRS)
Nagpal, Vinod K.
1988-01-01
The effects of actual variations, also called uncertainties, in geometry and material properties on the structural response of a space shuttle main engine turbopump blade are evaluated. A normal distribution was assumed to represent the uncertainties statistically. Uncertainties were assumed to be totally random, partially correlated, and fully correlated. The magnitudes of these uncertainties were represented in terms of mean and variance. Blade responses, recorded in terms of displacements, natural frequencies, and maximum stress, were evaluated and plotted in the form of probabilistic distributions under combined uncertainties. These distributions provide an estimate of the range of magnitudes of the response and the probability of occurrence of a given response. Most importantly, they provide the information needed to estimate quantitatively the risk in a structural design.
For many water quality-impaired stream segments, streamflow and water quality monitoring sites are not available. Lack of available streamflow data at impaired ungauged sites leads to uncertainties in total maximum daily load (TMDL) estimation. We developed a technique to minimiz...
Probabilistic models in human sensorimotor control
Wolpert, Daniel M.
2009-01-01
Sensory and motor uncertainty form a fundamental constraint on human sensorimotor control. Bayesian decision theory (BDT) has emerged as a unifying framework to understand how the central nervous system performs optimal estimation and control in the face of such uncertainty. BDT has two components: Bayesian statistics and decision theory. Here we review Bayesian statistics and show how it applies to estimating the state of the world and our own body. Recent results suggest that when learning novel tasks we are able to learn the statistical properties of both the world and our own sensory apparatus so as to perform estimation using Bayesian statistics. We review studies which suggest that humans can combine multiple sources of information to form maximum likelihood estimates, can incorporate prior beliefs about possible states of the world so as to generate maximum a posteriori estimates and can use Kalman filter-based processes to estimate time-varying states. Finally, we review Bayesian decision theory in motor control and how the central nervous system processes errors to determine loss functions and optimal actions. We review results that suggest we plan movements based on statistics of our actions that result from signal-dependent noise on our motor outputs. Taken together these studies provide a statistical framework for how the motor system performs in the presence of uncertainty.
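The maximum-likelihood cue combination reviewed above reduces to inverse-variance weighting; a two-cue toy example with arbitrary numbers:

```python
import numpy as np

x_vis, var_vis = 10.2, 0.5 ** 2      # visual estimate of hand position and its variance
x_prop, var_prop = 9.4, 1.0 ** 2     # proprioceptive estimate and its variance

w = (1 / var_vis) / (1 / var_vis + 1 / var_prop)   # reliability weight on vision
x_hat = w * x_vis + (1 - w) * x_prop               # fused (ML) estimate
var_hat = 1 / (1 / var_vis + 1 / var_prop)         # fused variance
print(x_hat, np.sqrt(var_hat))       # fused s.d. is below either single-cue s.d.
```

Adding a prior term with its own inverse-variance weight turns the same arithmetic into the maximum a posteriori estimate mentioned above.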
Interval Estimation of Seismic Hazard Parameters
NASA Astrophysics Data System (ADS)
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2017-03-01
The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function in the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, relative to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate affects the interval estimates of the hazard functions significantly only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions. Consequently, the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates, which are parameters of a multiparameter function, onto this function.
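A rough numerical sketch of the aggregation the paper formalizes: sample the activity rate and the magnitude-distribution parameter from their assumed errors and propagate both into the exceedance probability. The Gutenberg-Richter (exponential) magnitude CDF and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
lam, lam_sd = 2.0, 0.3            # mean activity rate (events/yr) and its error (assumed)
b, b_sd = 1.0, 0.05               # G-R b-value and its error (assumed)
Mc, M, T = 3.0, 5.0, 10.0         # completeness, target magnitude, time window (yr)

probs = []
for _ in range(10000):
    l = max(rng.normal(lam, lam_sd), 1e-6)
    beta = np.log(10) * rng.normal(b, b_sd)
    F = 1 - np.exp(-beta * (M - Mc))               # magnitude CDF at M
    probs.append(1 - np.exp(-l * (1 - F) * T))     # P(at least one event >= M in T)

print(np.percentile(probs, [2.5, 50, 97.5]))       # interval estimate of the hazard
```

Shrinking lam_sd toward zero in this sketch mimics the paper's comparison of interval estimates with and without the activity-rate uncertainty.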
Kim, Hea-Jung
2014-01-01
This paper considers a hierarchical screened Gaussian model (HSGM) for Bayesian inference of normal models when an interval constraint in the mean parameter space needs to be incorporated in the modeling but such a restriction is uncertain. An objective measure of the uncertainty regarding the interval constraint, accounted for by using the HSGM, is proposed for the Bayesian inference. For this purpose, we derive a maximum entropy prior of the normal mean, eliciting the uncertainty regarding the interval constraint, and then obtain the uncertainty measure by considering the relationship between the maximum entropy prior and the marginal prior of the normal mean in the HSGM. A Bayesian estimation procedure for the HSGM is developed, and two numerical illustrations pertaining to the properties of the uncertainty measure are provided.
Dudaniec, Rachael Y; Worthington Wilmer, Jessica; Hanson, Jeffrey O; Warren, Matthew; Bell, Sarah; Rhodes, Jonathan R
2016-01-01
Landscape genetics lacks explicit methods for dealing with the uncertainty in landscape resistance estimation, which is particularly problematic when sample sizes of individuals are small. Unless uncertainty can be quantified, valuable but small data sets may be rendered unusable for conservation purposes. We offer a method to quantify uncertainty in landscape resistance estimates using multimodel inference as an improvement over single model-based inference. We illustrate the approach empirically using co-occurring, woodland-preferring Australian marsupials within a common study area: two arboreal gliders (Petaurus breviceps, and Petaurus norfolcensis) and one ground-dwelling antechinus (Antechinus flavipes). First, we use maximum-likelihood and a bootstrap procedure to identify the best-supported isolation-by-resistance model out of 56 models defined by linear and non-linear resistance functions. We then quantify uncertainty in resistance estimates by examining parameter selection probabilities from the bootstrapped data. The selection probabilities provide estimates of uncertainty in the parameters that drive the relationships between landscape features and resistance. We then validate our method for quantifying uncertainty using simulated genetic and landscape data showing that for most parameter combinations it provides sensible estimates of uncertainty. We conclude that small data sets can be informative in landscape genetic analyses provided uncertainty can be explicitly quantified. Being explicit about uncertainty in landscape genetic models will make results more interpretable and useful for conservation decision-making, where dealing with uncertainty is critical.
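A conceptual sketch of the bootstrap multimodel step, with stand-in polynomial "resistance models" replacing the 56 resistance functions: refit the candidates to resampled data and use each model's winning frequency (lowest AIC here) as its selection probability.

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.uniform(0, 1, 60)                      # hypothetical landscape predictor
y = 2.0 * x + rng.normal(0, 0.5, 60)           # hypothetical genetic-distance response

def aic(y, yhat, k):
    n = len(y)
    return n * np.log(np.mean((y - yhat) ** 2)) + 2 * k

wins = np.zeros(3)
for _ in range(500):                           # bootstrap replicates
    i = rng.integers(0, len(x), len(x))
    xb, yb = x[i], y[i]
    best, winner = np.inf, 0
    for m, deg in enumerate((1, 2, 3)):        # candidate models: poly degrees 1-3
        c = np.polyfit(xb, yb, deg)
        score = aic(yb, np.polyval(c, xb), deg + 1)
        if score < best:
            best, winner = score, m
    wins[winner] += 1

print("selection probabilities:", wins / wins.sum())
```

Selection probabilities near 1 indicate a clearly supported model; probabilities spread across candidates flag exactly the parameter uncertainty the authors argue should be reported.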
Bayesian Framework for Water Quality Model Uncertainty Estimation and Risk Management
A formal Bayesian methodology is presented for integrated model calibration and risk-based water quality management using Bayesian Monte Carlo simulation and maximum likelihood estimation (BMCML). The primary focus is on lucid integration of model calibration with risk-based wat...
NASA Astrophysics Data System (ADS)
Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb
2017-10-01
In addition to numerous planning and execution challenges, underground excavation in urban areas is always accompanied by certain destructive effects, especially at the ground surface; ground settlement is the most important of these effects, and different empirical, analytical, and numerical methods exist for its estimation. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values of the models were 1.86, 2.02 and 1.52 cm, respectively. Comparison of these predicted amounts with the actual data from instrumentation was used to quantify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched reality, while the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.
Impacts of Process and Prediction Uncertainties on Projected Hanford Waste Glass Amount
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gervasio, V.; Kim, D. S.; Vienna, J. D.
Analyses were performed to evaluate the impacts of using the advanced glass models, constraints, and uncertainty descriptions on projected Hanford glass mass. The maximum allowable waste oxide loading (WOL) was estimated for waste compositions while simultaneously satisfying all applicable glass property and composition constraints with sufficient confidence. Different components of prediction and composition/process uncertainties were systematically included in the calculations to evaluate their impacts on glass mass. The analyses estimated the production of 23,360 MT of immobilized high-level waste (IHLW) glass when no uncertainties were taken into account. Accounting for prediction and composition/process uncertainties resulted in a 5.01 relative percent increase in estimated glass mass, to 24,531 MT. Roughly equal impacts were found for prediction uncertainties (2.58 RPD) and composition/process uncertainties (2.43 RPD). The immobilized low-activity waste (ILAW) mass was predicted to be 282,350 MT without uncertainty and with waste loading “line” rules in place. Accounting for prediction and composition/process uncertainties resulted in only a 0.08 relative percent increase in estimated glass mass, to 282,562 MT. Without application of line rules the glass mass decreases by 10.6 relative percent (252,490 MT) for the case with no uncertainties. Addition of prediction uncertainties increases glass mass by 1.32 relative percent, and the addition of composition/process uncertainties increases glass mass by an additional 7.73 relative percent (9.06 relative percent increase combined). The glass mass estimate without line rules (275,359 MT) was 2.55 relative percent lower than that with the line rules (282,562 MT), after accounting for all applicable uncertainties.
Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Cheong, R. Y.; Gabda, D.
2017-09-01
Analysis of flood trends is vital since flooding threatens human welfare in financial, environmental, and security terms. Data on annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that the MLE provides unstable results, especially for small sample sizes. In this study, we used a Bayesian Markov chain Monte Carlo (MCMC) approach based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference method that estimates parameters through the posterior distribution given by Bayes' theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced by plain Monte Carlo methods. This approach also accounts for more of the uncertainty in parameter estimation, which then yields a better prediction of maximum river flow in Sabah.
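For contrast with the Bayesian approach, the MLE baseline criticized above takes a few lines with scipy (note that scipy's genextreme parameterizes the shape as c = -xi relative to the common GEV convention); the data here are simulated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
amax = stats.genextreme.rvs(-0.1, loc=300.0, scale=80.0,
                            size=40, random_state=rng)     # synthetic annual maxima

c, loc, scale = stats.genextreme.fit(amax)                  # MLE of GEV parameters
q100 = stats.genextreme.ppf(1 - 1 / 100, c, loc, scale)     # 100-year return level
print(f"100-year maximum flow estimate: {q100:.0f}")
```

With only 40 annual maxima these point estimates can be unstable, which is the motivation given above for sampling the posterior with Metropolis-Hastings instead.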
LensEnt2: Maximum-entropy weak lens reconstruction
NASA Astrophysics Data System (ADS)
Marshall, P. J.; Hobson, M. P.; Gull, S. F.; Bridle, S. L.
2013-08-01
LensEnt2 is a maximum entropy reconstructor of weak lensing mass maps. The method takes each galaxy shape as an independent estimator of the reduced shear field and incorporates an intrinsic smoothness, determined by Bayesian methods, into the reconstruction. The uncertainties from both the intrinsic distribution of galaxy shapes and galaxy shape estimation are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures is calculated with corresponding uncertainties. The input is a galaxy ellipticity catalog with each measured galaxy shape treated as a noisy tracer of the reduced shear field, which is inferred on a fine pixel grid assuming positivity and smoothness on scales of w arcsec, where w is an input parameter. The ICF width w can be chosen by computing the evidence for it.
Benefit-cost estimation for alternative drinking water maximum contaminant levels
NASA Astrophysics Data System (ADS)
Gurian, Patrick L.; Small, Mitchell J.; Lockwood, John R.; Schervish, Mark J.
2001-08-01
A simulation model for estimating compliance behavior and resulting costs at U.S. Community Water Suppliers is developed and applied to the evaluation of a more stringent maximum contaminant level (MCL) for arsenic. Probability distributions of source water arsenic concentrations are simulated using a statistical model conditioned on system location (state) and source water type (surface water or groundwater). This model is fit to two recent national surveys of source waters, then applied with the model explanatory variables for the population of U.S. Community Water Suppliers. Existing treatment types and arsenic removal efficiencies are also simulated. Utilities with finished water arsenic concentrations above the proposed MCL are assumed to select the least cost option compatible with their existing treatment from among 21 available compliance strategies and processes for meeting the standard. Estimated costs and arsenic exposure reductions at individual suppliers are aggregated to estimate the national compliance cost, arsenic exposure reduction, and resulting bladder cancer risk reduction. Uncertainties in the estimates are characterized based on uncertainties in the occurrence model parameters, existing treatment types, treatment removal efficiencies, costs, and the bladder cancer dose-response function for arsenic.
Two-point method uncertainty during control and measurement of cylindrical element diameters
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Shalay, V. V.; Radev, H.
2018-04-01
The article addresses the pressing problem of the reliability of measurements of the geometric specifications of technical products. Its purpose is to improve the quality of control of parts' linear dimensions by the two-point measurement method, and its task is to investigate the methodical extended uncertainties in measuring the linear dimensions of cylindrical elements. The investigation method is geometric modeling of deviations in the shape and location of element surfaces in a rectangular coordinate system. The studies were carried out for elements of various service uses, taking into account their informativeness, corresponding to the kinematic pair classes of theoretical mechanics and the number of constrained degrees of freedom in the datum element function. Cylindrical elements with informativeness of 4, 2, 1, and 0 (zero) were investigated. The uncertainty of two-point measurements was estimated by comparing the results of linear dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty arises when cylindrical elements with maximum informativeness have shape deviations of the cut and curvature types, and it arises when measuring the element's average size for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with less than maximum informativeness creates unacceptable methodical uncertainties in measurements of the maximum, minimum, and mean linear dimensions. Similar methodical uncertainties also exist in the arbitration control of the linear dimensions of cylindrical elements by limiting two-point gauges.
Paleodust variability since the Last Glacial Maximum and implications for iron inputs to the ocean
NASA Astrophysics Data System (ADS)
Albani, S.; Mahowald, N. M.; Murphy, L. N.; Raiswell, R.; Moore, J. K.; Anderson, R. F.; McGee, D.; Bradtmiller, L. I.; Delmonte, B.; Hesse, P. P.; Mayewski, P. A.
2016-04-01
Changing climate conditions affect dust emissions and the global dust cycle, which in turn affects climate and biogeochemistry. In this study we use observationally constrained model reconstructions of the global dust cycle since the Last Glacial Maximum, combined with different simplified assumptions of atmospheric and sea ice processing of dust-borne iron, to provide estimates of soluble iron deposition to the oceans. For different climate conditions, we discuss uncertainties in model-based estimates of atmospheric processing and dust deposition to key oceanic regions, highlighting the large degree of uncertainty of this important variable for ocean biogeochemistry and the global carbon cycle. We also show the role of sea ice acting as a time buffer and processing agent, which results in a delayed and pulse-like soluble iron release into the ocean during the melting season, with monthly peaks up to ~17 Gg/month released into the Southern Oceans during the Last Glacial Maximum (LGM).
ERIC Educational Resources Information Center
Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike
2011-01-01
It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…
Ebrahimkhani, Sadegh
2016-07-01
Wind power plants have nonlinear dynamics and contain many uncertainties, such as unknown nonlinear disturbances and parameter uncertainties. It is thus a difficult task to design a robust, reliable controller for this system. This paper proposes a novel robust fractional-order sliding mode (FOSM) controller for maximum power point tracking (MPPT) control of a doubly fed induction generator (DFIG)-based wind energy conversion system. In order to enhance the robustness of the control system, uncertainties and disturbances are estimated using a fractional-order uncertainty estimator. In the proposed method a continuous control strategy is developed to achieve chattering-free fractional-order sliding-mode control, and no knowledge of the uncertainties and disturbances or their bounds is assumed. The boundedness and convergence properties of the closed-loop signals are proven using Lyapunov's stability theory. Simulations in the presence of various uncertainties were carried out to evaluate the effectiveness and robustness of the proposed control scheme.
Quantifying the uncertainty in heritability.
Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph
2014-05-01
The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.
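The Bayesian route can be sketched for the simplest variance-component model y ~ N(0, h²K + (1 - h²)I): evaluate the likelihood on a grid of heritability values, which under a flat prior is proportional to the posterior. K is built from simulated genotypes; everything below is illustrative, not the paper's algorithm.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n, p = 300, 1000
Z = rng.choice([0.0, 1.0, 2.0], size=(n, p))          # simulated genotypes
Z = (Z - Z.mean(0)) / (Z.std(0) + 1e-12)              # standardize each SNP
K = Z @ Z.T / p                                       # genetic relationship matrix

h2 = 0.5                                              # true heritability
C = np.linalg.cholesky(h2 * K + (1 - h2) * np.eye(n) + 1e-8 * np.eye(n))
y = C @ rng.normal(size=n)                            # simulated phenotypes

grid = np.linspace(0.01, 0.99, 50)
loglik = np.array([stats.multivariate_normal.logpdf(y, cov=g * K + (1 - g) * np.eye(n))
                   for g in grid])
post = np.exp(loglik - loglik.max())
post /= post.sum()                                    # grid posterior (flat prior)
print("posterior mean h2:", float((grid * post).sum()))
```

The spread of this posterior is the uncertainty quantity of interest; the frequentist alternative would instead report the curvature of loglik at its maximum.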
Detailed Uncertainty Analysis of the Ares I A106 Liftoff/Transition Database
NASA Technical Reports Server (NTRS)
Hanke, Jeremy L.
2011-01-01
The Ares I A106 Liftoff/Transition Force and Moment Aerodynamics Database describes the aerodynamics of the Ares I Crew Launch Vehicle (CLV) from the moment of liftoff through the transition from high to low total angles of attack at low subsonic Mach numbers. The database includes uncertainty estimates that were developed using a detailed uncertainty quantification procedure. The Ares I Aerodynamics Panel developed both the database and the uncertainties from wind tunnel test data acquired in the NASA Langley Research Center's 14- by 22-Foot Subsonic Wind Tunnel (Test 591) using a 1.75 percent scale model of the Ares I and the tower assembly. The uncertainty modeling contains three primary uncertainty sources: experimental uncertainty, database modeling uncertainty, and database query interpolation uncertainty. The final database and uncertainty model represent a significant improvement in the quality of the aerodynamic predictions for this regime of flight over the estimates previously used by the Ares Project. The maximum possible aerodynamic force pushing the vehicle towards the launch tower assembly in a dispersed case using this database saw a 40 percent reduction from the worst-case scenario in previously released data for Ares I.
Frey, H Christopher; Zhao, Yuchao
2004-11-15
Probabilistic emission inventories were developed for urban air toxic emissions of benzene, formaldehyde, chromium, and arsenic for the example of Houston. Variability and uncertainty in emission factors were quantified for 71-97% of total emissions, depending upon the pollutant and data availability. Parametric distributions for interunit variability were fit using maximum likelihood estimation (MLE), and uncertainty in mean emission factors was estimated using parametric bootstrap simulation. For data sets containing one or more nondetected values, empirical bootstrap simulation was used to randomly sample detection limits for nondetected values and observations for sample values, and parametric distributions for variability were fit using MLE estimators for censored data. The goodness-of-fit for censored data was evaluated by comparing cumulative distributions of bootstrap confidence intervals with the empirical data. The emission inventory 95% uncertainty ranges span from as small as -25% to +42% for chromium to as large as -75% to +224% for arsenic with correlated surrogates. The uncertainty was dominated by only a few source categories. Recommendations are made for future improvements to the analysis.
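The censored-data MLE mentioned above has a compact form: detected values contribute density terms and nondetects contribute the CDF at their detection limits. A lognormal sketch with invented values:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

obs = np.array([0.8, 1.2, 2.5, 4.1])      # detected emission factors (illustrative)
dl = np.array([0.5, 0.5, 1.0])            # detection limits of the nondetects

def negloglik(params):
    mu, sig = params[0], np.exp(params[1])            # enforce sigma > 0
    ll = np.sum(stats.norm.logpdf(np.log(obs), mu, sig) - np.log(obs))  # density terms
    ll += np.sum(stats.norm.logcdf(np.log(dl), mu, sig))                # censored terms
    return -ll

fit = minimize(negloglik, x0=[0.0, 0.0])
print("lognormal mu, sigma:", fit.x[0], np.exp(fit.x[1]))
```

Bootstrapping this fit (resampling detects and detection limits together) yields the uncertainty in the mean emission factor, as described above.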
Allowable Trajectory Variations for Space Shuttle Orbiter Entry-Aeroheating CFD
NASA Technical Reports Server (NTRS)
Wood, William A.; Alter, Stephen J.
2008-01-01
Reynolds-number criteria are developed for acceptable variations in Space Shuttle Orbiter entry trajectories for use in computational aeroheating analyses. The criteria determine if an existing computational fluid dynamics solution for a particular trajectory can be extrapolated to a different trajectory. The criteria development begins by estimating uncertainties for seventeen types of computational aeroheating data, such as boundary layer thickness, at exact trajectory conditions. For each type of datum, the allowable uncertainty contribution due to trajectory variation is set to be half of the value of the estimated exact-trajectory uncertainty. Then, for the twelve highest-priority datum types, Reynolds-number relations between trajectory variation and output uncertainty are determined. From these relations the criteria are established for the maximum allowable trajectory variations. The most restrictive criterion allows a 25% variation in Reynolds number at constant Mach number between trajectories.
NASA Astrophysics Data System (ADS)
Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme
2016-01-01
This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainties pervading the sound propagation and measurement process: uncertain microphone locations and an uncertain wavenumber. These uncertainties are transposed to the data in the belief functions framework. The source locations and strengths can then be estimated using a variant of the EM algorithm known as the evidential EM (E2M) algorithm. Finally, both simulated and real experiments illustrate the advantage of using EM in the case without uncertainty and E2M in the case of uncertain measurements.
Chylek, Petr; Augustine, John A.; Klett, James D.; ...
2017-09-30
At thousands of stations worldwide, the mean daily surface air temperature is estimated as a mean of the daily maximum (T max) and minimum (T min) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 2, 4, or 6 daily observations do not significantly reduce the uncertainty of the daily mean temperature. A statistically significant bias reduction (at the 95% confidence level) occurs only with 12 or 24 daily observations. Daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the parameters of the population of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of T max and T min.
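The comparison above is easy to reproduce in miniature with a synthetic diurnal cycle (a made-up sinusoid plus noise, not SURFRAD data):

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(1440) / 1440.0                      # one day of 1-min samples
temp = 15.0 + 8.0 * np.sin(2 * np.pi * (t - 0.3)) + rng.normal(0, 0.3, 1440)

true_mean = temp.mean()                           # the 1440-observation mean
print("max-min estimate bias:", (temp.max() + temp.min()) / 2 - true_mean)
for n in (4, 12, 24):                             # coarser daily sampling
    sub = temp[:: 1440 // n]
    print(f"{n} obs/day bias:", sub.mean() - true_mean)
```

Averaging such biases over many simulated days mirrors the station statistics quoted above.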
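As a quick, self-contained illustration of the comparison described above, the following sketch simulates synthetic days of minute-resolution temperatures and contrasts the (T max + T min)/2 estimate with the full 1440-observation mean. All amplitudes, noise levels, and the diurnal shape are invented for illustration; they are not SURFRAD values.

```python
import numpy as np

rng = np.random.default_rng(0)

biases = []
for _ in range(238):                     # mimic the 238-day sample size
    minutes = np.arange(1440)
    diurnal = 8.0 * np.sin(2 * np.pi * (minutes - 540) / 1440.0)  # assumed cycle
    weather = np.cumsum(rng.normal(0.0, 0.05, 1440))              # slow drift
    temps = 15.0 + diurnal + weather
    full_mean = temps.mean()                          # average of 1440 readings
    minmax_mean = 0.5 * (temps.max() + temps.min())   # conventional estimate
    biases.append(minmax_mean - full_mean)

biases = np.asarray(biases)
print(f"(Tmax+Tmin)/2 minus full mean: {biases.mean():+.2f} +/- {biases.std():.2f} C")
```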
Langbein, John O.
2012-01-01
Recent studies have documented that global positioning system (GPS) time series of position estimates have temporal correlations which have been modeled as a combination of power-law and white noise processes. When estimating quantities such as a constant rate from GPS time series data, the estimated uncertainties on these quantities are more realistic when using a noise model that includes temporal correlations than simply assuming temporally uncorrelated noise. However, the choice of the specific representation of correlated noise can affect the estimate of uncertainty. For many GPS time series, the background noise can be represented either (1) as a sum of flicker and random-walk noise or (2) as a power-law noise model that represents an average of the flicker and random-walk noise. For instance, if the underlying noise model is a combination of flicker and random-walk noise, then incorrectly choosing the power-law model could underestimate the rate uncertainty by a factor of two. Distinguishing between the two alternate noise models is difficult since the flicker component can dominate the assessment of the noise properties because it is spread over a significant portion of the measurable frequency band. But, although not necessarily detectable, the random-walk component can be a major constituent of the estimated rate uncertainty. Nonetheless, it is possible to determine the upper bound on the random-walk noise.
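A minimal Monte Carlo sketch of the effect described above: fitting a rate to a synthetic position series whose noise is white only, versus white plus random walk. The 2 mm/yr rate, series length, and noise amplitudes are assumptions chosen only to make the scatter comparison visible.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 500, 2000
t = np.arange(n) / 365.25                 # ~500 daily solutions, in years

def rate_sigma(make_noise):
    rates = [np.polyfit(t, 2.0 * t + make_noise(), 1)[0] for _ in range(trials)]
    return np.std(rates)                  # empirical scatter of the fitted rate

white = lambda: rng.normal(0.0, 1.0, n)                                # mm
white_plus_rw = lambda: white() + np.cumsum(rng.normal(0.0, 0.05, n))  # mm

print("rate sigma, white noise only    :", rate_sigma(white))
print("rate sigma, white + random walk :", rate_sigma(white_plus_rw))
```

The second run typically shows a rate scatter several times larger than the first, which is why assuming white noise alone understates the rate uncertainty.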
The use of multiwavelets for uncertainty estimation in seismic surface wave dispersion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poppeliers, Christian
This report describes a new single-station analysis method to estimate the dispersion and uncertainty of seismic surface waves using the multiwavelet transform. Typically, when estimating the dispersion of a surface wave using only a single seismic station, the seismogram is decomposed into a series of narrow-band realizations using a bank of narrow-band filters. By then enveloping and normalizing the filtered seismograms and identifying the maximum power as a function of frequency, the group velocity can be estimated if the source-receiver distance is known. However, using the filter bank method, there is no robust way to estimate uncertainty. In this report, I introduce a new method of estimating the group velocity that includes an estimate of uncertainty. The method is similar to the conventional filter bank method, but uses a class of functions, called Slepian wavelets, to compute a series of wavelet transforms of the data. Each wavelet transform is mathematically similar to a filter bank; however, the time-frequency tradeoff is optimized. By taking multiple wavelet transforms, I form a population of dispersion estimates from which standard statistical methods can be used to estimate uncertainty. I demonstrate the utility of this new method by applying it to synthetic data as well as ambient-noise surface-wave cross-correlograms recorded by the University of Nevada Seismic Network.
Estimation of Pre-industrial Nitrous Oxide Emission from the Terrestrial Biosphere
NASA Astrophysics Data System (ADS)
Xu, R.; Tian, H.; Lu, C.; Zhang, B.; Pan, S.; Yang, J.
2015-12-01
Nitrous oxide (N2O) is currently the third most important greenhouse gas (GHG) after methane (CH4) and carbon dioxide (CO2). Global N2O emissions have increased substantially, primarily due to reactive nitrogen (N) enrichment through fossil fuel combustion, fertilizer production, and legume crop cultivation. In order to understand how the climate system is perturbed by anthropogenic N2O emissions from the terrestrial biosphere, it is necessary to better estimate pre-industrial N2O emissions. Previous estimates of natural N2O emissions from the terrestrial biosphere range from 3.3 to 9.0 Tg N2O-N yr-1. This large uncertainty in the estimation of pre-industrial N2O emissions from the terrestrial biosphere may be caused by uncertainty associated with key parameters such as maximum nitrification and denitrification rates, half-saturation coefficients of soil ammonium and nitrate, the N fixation rate, and the maximum N uptake rate. In addition to the large estimation range, previous studies did not provide estimates of pre-industrial N2O emissions at regional and biome levels. In this study, we applied a process-based coupled biogeochemical model to estimate the magnitude and spatial patterns of pre-industrial N2O fluxes at biome and continental scales as driven by multiple input data, including pre-industrial climate data, atmospheric CO2 concentration, N deposition, N fixation, and land cover types and distributions. Uncertainty associated with key parameters is also evaluated. Finally, we generate sector-based estimates of pre-industrial N2O emission, which provide a reference for assessing the climate forcing of anthropogenic N2O emission from the land biosphere.
Markov Chain Monte Carlo Used in Parameter Inference of Magnetic Resonance Spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hock, Kiel; Earle, Keith
2016-02-06
In this paper, we use Boltzmann statistics and the maximum likelihood distribution derived from Bayes’ Theorem to infer parameter values for a Pake Doublet Spectrum, a lineshape of historical significance and contemporary relevance for determining distances between interacting magnetic dipoles. A Metropolis Hastings Markov Chain Monte Carlo algorithm is implemented and designed to find the optimum parameter set and to estimate parameter uncertainties. In conclusion, the posterior distribution allows us to define a metric on parameter space that induces a geometry with negative curvature that affects the parameter uncertainty estimates, particularly for spectra with low signal to noise.
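For readers unfamiliar with the sampler mentioned above, here is a minimal Metropolis-Hastings sketch applied to a toy Gaussian location problem rather than the Pake doublet lineshape model; the data, proposal width, and burn-in length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(3.0, 1.0, 50)          # toy observations

def log_post(mu):                        # flat prior, unit noise variance
    return -0.5 * np.sum((data - mu) ** 2)

samples, mu, lp = [], 0.0, log_post(0.0)
for _ in range(20000):
    prop = mu + rng.normal(0.0, 0.3)     # symmetric random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance rule
        mu, lp = prop, lp_prop
    samples.append(mu)

chain = np.asarray(samples[5000:])       # discard burn-in
print(f"posterior mean {chain.mean():.3f}, posterior std {chain.std():.3f}")
```

The spread of the retained chain is the parameter uncertainty estimate; in the paper's setting the same loop would run over the full lineshape parameter set.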
Quantifying the uncertainty in heritability
Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph
2014-01-01
The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large. PMID:24670270
Brooks, Scott C.; Brandt, Craig C.; Griffiths, Natalie A.
2016-10-07
Nutrient spiraling is an important ecosystem process characterizing nutrient transport and uptake in streams. Various nutrient addition methods are used to estimate uptake metrics; however, uncertainty in the metrics is not often evaluated. A method was developed to quantify uncertainty in ambient and saturation nutrient uptake metrics estimated from saturating pulse nutrient additions (Tracer Additions for Spiraling Curve Characterization; TASCC). Using a Monte Carlo (MC) approach, the 95% confidence interval (CI) was estimated for ambient uptake lengths (S w-amb) and maximum areal uptake rates (U max) based on 100,000 datasets generated from each of four nitrogen and five phosphorus TASCC experiments conducted seasonally in a forest stream in eastern Tennessee, U.S.A. Uncertainty estimates from the MC approach were compared to the CIs estimated from ordinary least squares (OLS) and non-linear least squares (NLS) models used to calculate S w-amb and U max, respectively, from the TASCC method. The CIs for S w-amb and U max were large, but were not consistently larger using the MC method. Despite the large CIs, significant differences (based on non-overlapping CIs) in nutrient metrics among seasons were found, with more significant differences using the OLS/NLS vs. the MC method. Lastly, we suggest that the MC approach is a robust way to estimate uncertainty, as the calculation of S w-amb and U max violates assumptions of OLS/NLS while the MC approach is free of these assumptions. The MC approach can be applied to other ecosystem metrics that are calculated from multiple parameters, providing a more robust estimate of these metrics and their associated uncertainties.
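A rough sketch of the Monte Carlo uncertainty approach described above, using a Michaelis-Menten-type uptake curve as a stand-in for the TASCC saturation model. All concentrations, parameter values, error levels, and the number of MC datasets are assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

def uptake(conc, u_max, km):             # Michaelis-Menten-type saturation
    return u_max * conc / (km + conc)

conc = np.linspace(5.0, 400.0, 12)       # assumed added-nutrient concentrations
obs = uptake(conc, 80.0, 60.0) + rng.normal(0.0, 4.0, conc.size)
sigma = 4.0                              # assumed measurement error

u_max_draws = []
for _ in range(5000):                    # Monte Carlo datasets
    perturbed = obs + rng.normal(0.0, sigma, conc.size)
    try:
        popt, _ = curve_fit(uptake, conc, perturbed, p0=(70.0, 50.0))
        u_max_draws.append(popt[0])
    except RuntimeError:
        continue                         # drop non-converged fits

lo, hi = np.percentile(u_max_draws, [2.5, 97.5])
print(f"U_max 95% CI from Monte Carlo: [{lo:.1f}, {hi:.1f}]")
```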
Effects of variability in probable maximum precipitation patterns on flood losses
NASA Astrophysics Data System (ADS)
Zischg, Andreas Paul; Felder, Guido; Weingartner, Rolf; Quinn, Niall; Coxon, Gemma; Neal, Jeffrey; Freer, Jim; Bates, Paul
2018-05-01
The assessment of the impacts of extreme floods is important for dealing with residual risk, particularly for critical infrastructure management and for insurance purposes. Thus, modelling of the probable maximum flood (PMF) from probable maximum precipitation (PMP) by coupling hydrological and hydraulic models has gained interest in recent years. Herein, we examine whether variability in precipitation patterns exceeds or is below selected uncertainty factors in flood loss estimation and if the flood losses within a river basin are related to the probable maximum discharge at the basin outlet. We developed a model experiment with an ensemble of probable maximum precipitation scenarios created by Monte Carlo simulations. For each rainfall pattern, we computed the flood losses with a model chain and benchmarked the effects of variability in rainfall distribution with other model uncertainties. The results show that flood losses vary considerably within the river basin and depend on the timing and superimposition of the flood peaks from the basin's sub-catchments. In addition to the flood hazard component, the other components of flood risk, exposure, and vulnerability contribute remarkably to the overall variability. This leads to the conclusion that the estimation of the probable maximum expectable flood losses in a river basin should not be based exclusively on the PMF. Consequently, the basin-specific sensitivities to different precipitation patterns and the spatial organization of the settlements within the river basin need to be considered in the analyses of probable maximum flood losses.
NASA Astrophysics Data System (ADS)
Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
Artificial Neural Networks (ANNs) have been widely used to estimate concentrations of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are estimated by maximum likelihood. ANN structure is also uncertain because there is no unique ANN model for a given case. Therefore, multiple plausible ANN models generally result for a study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate to the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages the outputs of all plausible ANN models. The model weights are based on the evidence of the data. Therefore, HBMA avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through the aggregation of ANN models in a hierarchical framework. This method is applied to the estimation of fluoride concentration in the Poldasht plain and the Bazargan plain in Iran. Unusually high fluoride concentrations in the Poldasht and Bazargan plains have caused negative effects on public health. Management of this anomaly requires estimation of the fluoride concentration distribution in the area. The results show that HBMA provides a knowledge-based decision framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources. In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in a hierarchical framework. Fluoride concentration estimation using the HBMA method shows better agreement with the observation data in the test step because it is not based on a single model with non-dominant weights.
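The within/between variance decomposition that underlies BMA-style averaging can be written in a few lines; the model weights and per-model estimates below are placeholders, not values from the study.

```python
import numpy as np

w = np.array([0.5, 0.3, 0.2])            # posterior model weights (placeholders)
mean_k = np.array([1.20, 1.35, 1.10])    # each model's estimate (e.g., mg/L)
var_k = np.array([0.04, 0.06, 0.05])     # each model's estimation variance

bma_mean = np.sum(w * mean_k)
within = np.sum(w * var_k)                        # within-model variance
between = np.sum(w * (mean_k - bma_mean) ** 2)    # between-model variance

print(f"BMA mean {bma_mean:.3f}, total variance {within + between:.4f} "
      f"(within {within:.4f} + between {between:.4f})")
```

This is the law of total variance: the total estimation variance is the weighted average of the individual model variances plus the spread of the model means around the averaged estimate.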
Lenzuni, Paolo
2015-07-01
The purpose of this article is to develop a method for the statistical inference of the maximum peak sound pressure level and of the associated uncertainty. Both quantities are requested by the EU directive 2003/10/EC for a complete and solid assessment of the noise exposure at the workplace. Based on the characteristics of the sound pressure waveform, it is hypothesized that the distribution of the measured peak sound pressure levels follows the extreme value distribution. The maximum peak level is estimated as the largest member of a finite population following this probability distribution. The associated uncertainty is also discussed, taking into account not only the contribution due to the incomplete sampling but also the contribution due to the finite precision of the instrumentation. The largest of the set of measured peak levels underestimates the maximum peak sound pressure level. The underestimate can be as large as 4 dB if the number of measurements is limited to 3-4, which is common practice in occupational noise assessment. The extended uncertainty is also quite large (~2.5 dB), with a weak dependence on the sampling details. Following the procedure outlined in this article, a reliable comparison between the peak sound pressure levels measured in a workplace and the EU directive action limits is possible. Non-compliance can occur even when the largest of the set of measured peak levels is several dB below such limits. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
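A hedged sketch of the extreme-value logic described above: fit a Gumbel distribution to a handful of measured peak levels and infer the expected largest member of a larger population of events. The distribution parameters, the event count, and the four-sample measurement set are assumptions; the point is that the largest measured peak sits well below the inferred maximum.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(4)
mu, beta, n_events = 135.0, 1.5, 500     # assumed peak-level population, dB(C)

measured = gumbel_r.rvs(loc=mu, scale=beta, size=4, random_state=rng)
loc, scale = gumbel_r.fit(measured)

euler_gamma = 0.5772156649
max_peak = loc + scale * (np.log(n_events) + euler_gamma)  # expected largest of n

print(f"largest measured peak : {measured.max():.1f} dB")
print(f"inferred maximum peak : {max_peak:.1f} dB")
```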
Study of synthesis techniques for insensitive aircraft control systems
NASA Technical Reports Server (NTRS)
Harvey, C. A.; Pope, R. E.
1977-01-01
Insensitive flight control system design criteria were defined in terms of maximizing performance (handling qualities, RMS gust response, transient response, stability margins) over a defined parameter range. Wing load alleviation for the C-5A was chosen as a design problem. The C-5A model was a 79-state, two-control structure with uncertainties assumed to exist in dynamic pressure, structural damping and frequency, and the stability derivative M sub w. Five new techniques (mismatch estimation, uncertainty weighting, finite dimensional inverse, maximum difficulty, dual Lyapunov) were developed. Six existing techniques (additive noise, minimax, multiplant, sensitivity vector augmentation, state dependent noise, residualization) and the mismatch estimation and uncertainty weighting techniques were synthesized and evaluated on the design example. Evaluation and comparison of these eight techniques indicated that the minimax and the uncertainty weighting techniques were superior to the other six, and of these two, uncertainty weighting has lower computational requirements. Techniques based on the three remaining new concepts appear promising and are recommended for further research.
Trajectory Dispersed Vehicle Process for Space Launch System
NASA Technical Reports Server (NTRS)
Statham, Tamara; Thompson, Seth
2017-01-01
The Space Launch System (SLS) vehicle is part of NASA's deep space exploration plans, which include manned missions to Mars. Manufacturing uncertainties in design parameters are key considerations throughout SLS development, as they have significant effects on focus parameters such as lift-off thrust-to-weight ratio, vehicle payload, maximum dynamic pressure, and compression loads. This presentation discusses how the SLS program captures these uncertainties by utilizing a 3-degree-of-freedom (DOF) process called Trajectory Dispersed (TD) analysis. This analysis biases nominal trajectories to identify extremes in the design parameters for various potential SLS configurations and missions. The process utilizes a Design of Experiments (DOE) and response surface methodologies (RSM) to statistically sample uncertainties, and develops the resulting vehicles using a Maximum Likelihood Estimate (MLE) process to target the uncertainty biases. These vehicles represent various missions and configurations, which are used as key inputs into a variety of analyses in the SLS design process, including 6-DOF dispersions, separation clearances, and engine-out failure studies.
NASA Astrophysics Data System (ADS)
Brannan, K. M.; Somor, A.
2016-12-01
A variety of statistics are used to assess watershed model performance, but these statistics do not directly answer the question: what is the uncertainty of my prediction? Understanding predictive uncertainty is important when using a watershed model to develop a Total Maximum Daily Load (TMDL). TMDLs are a key component of the US Clean Water Act and specify the amount of a pollutant that can enter a waterbody while the waterbody still meets water quality criteria. TMDL developers use watershed models to estimate pollutant loads from nonpoint sources of pollution. We are developing a TMDL for bacteria impairments in a watershed in the Coastal Range of Oregon. We set up an HSPF model of the watershed and used the calibration software PEST to estimate HSPF hydrologic parameters and then perform predictive uncertainty analysis of stream flow. We used Monte Carlo simulation to run the model with 1,000 different parameter sets and assess predictive uncertainty. To reduce the chance of specious parameter sets, we accounted for the relationships among parameter values by using mathematically based regularization techniques and an estimate of the parameter covariance when generating random parameter sets. We used a novel approach to select flow data for predictive uncertainty analysis: we set aside flow data from days on which bacteria samples were collected and did not use these flows in the estimation of the model parameters. We calculated a percent uncertainty for each flow observation based on 1,000 model runs. We also used several methods to visualize results, with an emphasis on making the data accessible to both technical and general audiences. We will use the predictive uncertainty estimates in the next phase of our work, simulating bacteria fate and transport in the watershed.
Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro
2017-10-01
The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
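The Fisher r → Z construction mentioned above is compact enough to state directly; this sketch computes a confidence interval for a correlation coefficient (the example r and n are arbitrary).

```python
import numpy as np
from scipy.stats import norm

def fisher_ci(r, n, level=0.95):
    """Confidence interval for a correlation coefficient via Fisher r -> Z."""
    z = np.arctanh(r)                 # Fisher transform
    se = 1.0 / np.sqrt(n - 3)         # approximate standard error in Z space
    zc = norm.ppf(0.5 + level / 2.0)
    return float(np.tanh(z - zc * se)), float(np.tanh(z + zc * se))

print(fisher_ci(0.85, 30))   # e.g., r = 0.85 on a dataset of 30 compounds
```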
NASA Astrophysics Data System (ADS)
Lobuglio, Joseph N.; Characklis, Gregory W.; Serre, Marc L.
2007-03-01
Sparse monitoring data and error inherent in water quality models make the identification of waters not meeting regulatory standards uncertain. Additional monitoring can be implemented to reduce this uncertainty, but it is often expensive. These costs are currently a major concern, since developing total maximum daily loads, as mandated by the Clean Water Act, will require assessing tens of thousands of water bodies across the United States. This work uses the Bayesian maximum entropy (BME) method of modern geostatistics to integrate water quality monitoring data together with model predictions to provide improved estimates of water quality in a cost-effective manner. This information includes estimates of uncertainty and can be used to aid probabilistic-based decisions concerning the status of a water (i.e., impaired or not impaired) and the level of monitoring needed to characterize the water for regulatory purposes. This approach is applied to the Catawba River reservoir system in western North Carolina as a means of estimating seasonal chlorophyll a concentration. Mean concentration and confidence intervals for chlorophyll a are estimated for 66 reservoir segments over an 11-year period (726 values) based on 219 measured seasonal averages and 54 model predictions. Although the model predictions had a high degree of uncertainty, integration of modeling results via BME methods reduced the uncertainty associated with chlorophyll estimates compared with estimates made solely with information from monitoring efforts. Probabilistic predictions of future chlorophyll levels on one reservoir are used to illustrate the cost savings that can be achieved by less extensive and rigorous monitoring methods within the BME framework. While BME methods have been applied in several environmental contexts, employing these methods as a means of integrating monitoring and modeling results, as well as application of this approach to the assessment of surface water monitoring networks, represent unexplored areas of research.
Uncertainty estimation of Intensity-Duration-Frequency relationships: A regional analysis
NASA Astrophysics Data System (ADS)
Mélèse, Victor; Blanchet, Juliette; Molinié, Gilles
2018-03-01
We propose in this article a regional study of uncertainties in IDF curves derived from point-rainfall maxima. We develop two generalized extreme value models based on the simple scaling assumption, first in the frequentist framework and second in the Bayesian framework. Within the frequentist framework, uncertainties are obtained (i) from the Gaussian density stemming from the asymptotic normality theorem of maximum likelihood and (ii) with a bootstrap procedure. Within the Bayesian framework, uncertainties are obtained from the posterior densities. We confront these two frameworks on the same database, covering a large region of 100,000 km2 in southern France with contrasting rainfall regimes, in order to draw conclusions that are not specific to the data. The two frameworks are applied to 405 hourly stations with data back to the 1980s, accumulated in the range 3 h-120 h. We show that (i) the Bayesian framework is more robust than the frequentist one to the starting point of the estimation procedure, (ii) the posterior and bootstrap densities are able to better adjust uncertainty estimation to the data than the Gaussian density, and (iii) the bootstrap density gives unreasonable confidence intervals, in particular for return levels associated with large return periods. Therefore our recommendation goes towards the use of the Bayesian framework to compute uncertainty.
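As an illustration of the bootstrap branch of the comparison above, the following sketch resamples synthetic annual maxima, refits a GEV distribution, and reads off a percentile confidence interval for a 100-year return level. The GEV parameters, sample size, and number of bootstrap replicates are assumptions.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)
annual_max = genextreme.rvs(-0.1, loc=30.0, scale=8.0, size=35, random_state=rng)

T = 100                                   # return period in years
levels = []
for _ in range(1000):                     # nonparametric bootstrap
    resample = rng.choice(annual_max, size=annual_max.size, replace=True)
    c, loc, scale = genextreme.fit(resample)
    levels.append(genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale))

print("100-yr return level 95% CI:", np.percentile(levels, [2.5, 97.5]))
```

The instability of such intervals for return periods far beyond the record length is exactly the weakness the article reports for the bootstrap density.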
Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.
Dosso, Stan E; Nielsen, Peter L
2002-01-01
This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.
NASA Technical Reports Server (NTRS)
DeLannoy, Gabrielle J. M.; Reichle, Rolf H.; Vrugt, Jasper A.
2013-01-01
Uncertainties in L-band (1.4 GHz) radiative transfer modeling (RTM) affect the simulation of brightness temperatures (Tb) over land and the inversion of satellite-observed Tb into soil moisture retrievals. In particular, accurate estimates of the microwave soil roughness, vegetation opacity and scattering albedo for large-scale applications are difficult to obtain from field studies and often lack an uncertainty estimate. Here, a Markov Chain Monte Carlo (MCMC) simulation method is used to determine satellite-scale estimates of RTM parameters and their posterior uncertainty by minimizing the misfit between long-term averages and standard deviations of simulated and observed Tb at a range of incidence angles, at horizontal and vertical polarization, and for morning and evening overpasses. Tb simulations are generated with the Goddard Earth Observing System (GEOS-5) and confronted with Tb observations from the Soil Moisture Ocean Salinity (SMOS) mission. The MCMC algorithm suggests that the relative uncertainty of the RTM parameter estimates is typically less than 25% of the maximum a posteriori (MAP) parameter value. Furthermore, the actual root-mean-square differences in long-term Tb averages and standard deviations are found to be consistent with the respective estimated total simulation and observation error standard deviations of 3.1 K and 2.4 K. It is also shown that the MAP parameter values estimated through MCMC simulation are in close agreement with those obtained with Particle Swarm Optimization (PSO).
Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2006-12-01
Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis on the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Zimmermann, Niklaus E.; Kaplan, Jed O.; Poulter, Benjamin
2016-03-01
Simulations of the spatiotemporal dynamics of wetlands are key to understanding the role of wetland biogeochemistry under past and future climate. Hydrologic inundation models, such as the TOPography-based hydrological model (TOPMODEL), are based on a fundamental parameter known as the compound topographic index (CTI) and offer a computationally cost-efficient approach to simulate wetland dynamics at global scales. However, there remains a large discrepancy in the implementations of TOPMODEL in land-surface models (LSMs) and thus their performance against observations. This study describes new improvements to the TOPMODEL implementation and estimates of global wetland dynamics using the LPJ-wsl (Lund-Potsdam-Jena Wald Schnee und Landschaft version) Dynamic Global Vegetation Model (DGVM) and quantifies uncertainties by comparing three digital elevation model (DEM) products (HYDRO1k, GMTED, and HydroSHEDS) of different spatial resolution and accuracy on simulated inundation dynamics. In addition, we found that calibrating TOPMODEL with a benchmark wetland data set can help to successfully delineate the seasonal and interannual variation of wetlands, as well as improve the spatial distribution of wetlands to be consistent with inventories. The HydroSHEDS DEM, using a river-basin scheme for aggregating the CTI, shows the best accuracy for capturing the spatiotemporal dynamics of wetlands among the three DEM products. The estimate of the global wetland potential/maximum is ~10.3 Mkm2 (10^6 km2), with a mean annual maximum of ~5.17 Mkm2 for 1980-2010. When integrated with a wetland methane emission submodule, the uncertainty of global annual CH4 emissions from topography inputs is estimated to be 29.0 Tg yr-1. This study demonstrates the feasibility of TOPMODEL to capture spatial heterogeneity of inundation at a large scale and highlights the significance of correcting maximum wetland extent to improve modeling of interannual variations in wetland area. It additionally highlights the importance of an adequate investigation of topographic indices for simulating global wetlands and shows the opportunity to converge wetland estimates across LSMs by identifying the uncertainty associated with existing wetland products.
NASA Astrophysics Data System (ADS)
Rypdal, Martin; Sirnes, Espen; Løvsletten, Ola; Rypdal, Kristoffer
2013-08-01
Maximum likelihood estimation techniques for multifractal processes are applied to high-frequency data in order to quantify intermittency in the fluctuations of asset prices. From time records as short as one month these methods permit extraction of a meaningful intermittency parameter λ characterising the degree of volatility clustering. We can therefore study the time evolution of volatility clustering and test the statistical significance of this variability. By analysing data from the Oslo Stock Exchange, and comparing the results with the investment grade spread, we find that the estimates of λ are lower at times of high market uncertainty.
Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models
2015-03-16
[Excerpt fragments] Global sensitivity analysis, using the variance-based method of Sobol, was performed to estimate which parameters controlled the performance of the reduced-order coagulation model; the uncertainty reported for each sensitivity value was the maximum uncertainty estimated by the Sobol method.
SU-F-T-301: Planar Dose Pass Rate Inflation Due to the MapCHECK Measurement Uncertainty Function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, D; Spaans, J; Kumaraswamy, L
Purpose: To quantify the effect of the Measurement Uncertainty function on planar dosimetry pass rates, as analyzed with Sun Nuclear Corporation analytic software ("MapCHECK" or "SNC Patient"). This optional function is toggled on by default upon software installation, and automatically increases the user-defined dose percent difference (%Diff) tolerance for each planar dose comparison. Methods: Dose planes from 109 IMRT fields and 40 VMAT arcs were measured with the MapCHECK 2 diode array, and compared to calculated planes from a commercial treatment planning system. Pass rates were calculated within the SNC analytic software using varying calculation parameters, including Measurement Uncertainty on and off. By varying the %Diff criterion for each dose comparison performed with Measurement Uncertainty turned off, an effective %Diff criterion was defined for each field/arc corresponding to the pass rate achieved with MapCHECK Uncertainty turned on. Results: For 3%/3 mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.8-1.1% on average, depending on plan type and calculation technique, for an average pass rate increase of 1.0-3.5% (maximum +8.7%). For 2%/2 mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.7-1.2% on average, for an average pass rate increase of 3.5-8.1% (maximum +14.2%). The largest increases in pass rate are generally seen with poorly-matched planar dose comparisons; the MapCHECK Uncertainty effect is markedly smaller as pass rates approach 100%. Conclusion: The Measurement Uncertainty function may substantially inflate planar dose comparison pass rates for typical IMRT and VMAT planes. The types of uncertainties incorporated into the function (and their associated quantitative estimates) as described in the software user's manual may not accurately estimate realistic measurement uncertainty for the user's measurement conditions. Pass rates listed in published reports or otherwise compared to the results of other users or vendors should clearly indicate whether the Measurement Uncertainty function is used.
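A toy calculation of how broadening the %Diff tolerance inflates a pass rate, in the spirit of the effect quantified above. This uses a bare percent-difference criterion (no distance-to-agreement component) and invented dose values; it is not the SNC algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)
planned = 200.0 + rng.normal(0.0, 1.0, 1000)     # toy planar doses (cGy)
measured = planned + rng.normal(0.0, 5.0, 1000)  # toy diode readings

def pass_rate(tol_percent):
    diff = 100.0 * np.abs(measured - planned) / planned
    return 100.0 * np.mean(diff <= tol_percent)

print(f"pass rate at 3% tolerance   : {pass_rate(3.0):.1f}%")
print(f"pass rate at 4% (broadened) : {pass_rate(4.0):.1f}%")
```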
Estimation of sampling error uncertainties in observed surface air temperature change in China
NASA Astrophysics Data System (ADS)
Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun
2017-08-01
This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with maximum values exceeding 2.0 K2, while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K2. In general, negative temperature anomalies existed in each month prior to the 1980s, after which a warming began that accelerated in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)-1 occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)-1 in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.
Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L
2012-09-01
Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7%, with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.
Uncertainties in Estimates of the Risks of Late Effects from Space Radiation
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Schimmerling, W.; Wilson, J. W.; Peterson, L. E.; Saganti, P.; Dicelli, J. F.
2002-01-01
The health risks faced by astronauts from space radiation include cancer, cataracts, hereditary effects, and non-cancer morbidity and mortality risks related to the diseases of old age. Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because of the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions, which cause estimates of the risk of late effects to be highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Within the linear-additivity model, we use Monte Carlo sampling from subjective uncertainty distributions in each factor to obtain a maximum likelihood estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios including ISS, lunar station, deep space outpost, and Mars missions of 360, 660, and 1000 days' duration. The major results are the quantification of the uncertainties in current risk estimates, the identification of factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction using the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time, and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in terms of quantitative objectives, i.e., the number of days in space without exceeding a given risk level within well-defined confidence limits.
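The product-of-factors uncertainty propagation described above can be sketched directly; the factor names, lognormal spreads, and nominal risk below are illustrative assumptions, not the values used by the authors.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Each factor carries a subjective lognormal uncertainty (spreads are invented).
physics = rng.lognormal(0.0, 0.3, n)    # dose / fluence conversion
quality = rng.lognormal(0.0, 0.5, n)    # radiation quality factor
transfer = rng.lognormal(0.0, 0.4, n)   # population risk transfer
nominal_risk = 0.03                     # nominal lifetime risk (placeholder)

risk = nominal_risk * physics * quality * transfer
print("median risk :", np.median(risk))
print("95% interval:", np.percentile(risk, [2.5, 97.5]))
```

Because the factors multiply, their relative uncertainties compound, which is why the resulting risk distribution is wide enough to mask modest shielding gains.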
Estimation for the Linear Model With Uncertain Covariance Matrices
NASA Astrophysics Data System (ADS)
Zachariah, Dave; Shariati, Nafiseh; Bengtsson, Mats; Jansson, Magnus; Chatterjee, Saikat
2014-03-01
We derive a maximum a posteriori estimator for the linear observation model, where the signal and noise covariance matrices are both uncertain. The uncertainties are treated probabilistically by modeling the covariance matrices with prior inverse-Wishart distributions. The nonconvex problem of jointly estimating the signal of interest and the covariance matrices is tackled by a computationally efficient fixed-point iteration as well as an approximate variational Bayes solution. The statistical performance of estimators is compared numerically to state-of-the-art estimators from the literature and shown to perform favorably.
NASA Astrophysics Data System (ADS)
Chen, Cheng; Xu, Weijie; Guo, Tong; Chen, Kai
2017-10-01
Uncertainties in structure properties can result in different responses in hybrid simulations. Quantification of the effect of these uncertainties would enable researchers to estimate the variances of structural responses observed from experiments. This poses challenges for real-time hybrid simulation (RTHS) due to the existence of actuator delay. Polynomial chaos expansion (PCE) projects the model outputs on a basis of orthogonal stochastic polynomials to account for influences of model uncertainties. In this paper, PCE is utilized to evaluate effect of actuator delay on the maximum displacement from real-time hybrid simulation of a single degree of freedom (SDOF) structure when accounting for uncertainties in structural properties. The PCE is first applied for RTHS without delay to determine the order of PCE, the number of sample points as well as the method for coefficients calculation. The PCE is then applied to RTHS with actuator delay. The mean, variance and Sobol indices are compared and discussed to evaluate the effects of actuator delay on uncertainty quantification for RTHS. Results show that the mean and the variance of the maximum displacement increase linearly and exponentially with respect to actuator delay, respectively. Sensitivity analysis through Sobol indices also indicates the influence of the single random variable decreases while the coupling effect increases with the increase of actuator delay.
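A minimal polynomial chaos sketch in the spirit of the study above: project a toy response with one standard-normal input onto probabilists' Hermite polynomials by least squares and recover the mean and variance from the coefficients, using the orthogonality relation E[He_n^2] = n!. The toy response function, expansion order, and sample count are assumptions.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

rng = np.random.default_rng(8)

def response(k):                        # toy max displacement vs. random input
    return 1.0 / (1.0 + 0.3 * k + 0.05 * k ** 2)

order = 4
xi = rng.normal(0.0, 1.0, 2000)         # standard-normal germ
basis = np.stack([He.hermeval(xi, np.eye(order + 1)[i])
                  for i in range(order + 1)], axis=1)
coef, *_ = np.linalg.lstsq(basis, response(xi), rcond=None)

mean = coef[0]                          # coefficient of He_0 is the mean
var = sum(coef[i] ** 2 * factorial(i) for i in range(1, order + 1))
print(f"PCE mean {mean:.4f}, PCE variance {var:.6f}")
```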
NASA Astrophysics Data System (ADS)
Salamat, Mona; Zare, Mehdi; Holschneider, Matthias; Zöller, Gert
2017-03-01
The problem of estimating the maximum possible earthquake magnitude m_max has attracted growing attention in recent years. Due to sparse data, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m_max is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the maximum observed event, the upper limit is calculated from the catalog and the statistical model. For this aim, we use the original catalog, to which no declustering methods were applied, as well as a declustered version of the catalog. Based on the study by Holschneider et al. (Bull Seismol Soc Am 101(4):1649-1659, 2011), the confidence interval for m_max is frequently unbounded, especially if high levels of confidence are required. In this case, no information is gained from the data. Therefore, we elaborate for which settings finite confidence intervals are obtained. In this work, Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although calculations of the confidence interval in the Central Iran and Zagros seismotectonic zones are relatively acceptable for meaningful levels of confidence, results in Kopet Dagh, Alborz, Azerbaijan, and Makran are less promising. The results indicate that estimating m_max from an earthquake catalog alone for reasonable levels of confidence is almost impossible.
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1976-01-01
A maximum likelihood estimation method was applied to flight data, and procedures to facilitate the routine analysis of large amounts of flight data are described. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are also described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple-maneuver analysis also proved useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for overall analysis are also discussed.
Best estimate of luminal cross-sectional area of coronary arteries from angiograms
NASA Technical Reports Server (NTRS)
Lee, P. L.; Selzer, R. H.
1988-01-01
We have reexamined the problem of estimating the luminal area of an elliptically shaped coronary artery cross section from two or more radiographic diameter measurements. The expected error is found to be much smaller than the maximum potential error. In the case of two orthogonal views, closed-form expressions have been derived for calculating the area and the uncertainty. Assuming that the underlying ellipse has limited ellipticity (major/minor axis ratio less than five), it is shown that the average uncertainty in the area is less than 14 percent. When more than two views are available, we suggest using a least-squares fit method to extract all available information from the data.
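For two orthogonal views that happen to measure the ellipse axes, the area expression is elementary; this tiny sketch assumes that alignment, which the paper's closed-form treatment does not require.

```python
import numpy as np

def ellipse_area(d1, d2):
    """Luminal area when two orthogonal views measure the ellipse axes."""
    return np.pi * d1 * d2 / 4.0

# e.g., diameters of 3.0 mm and 2.2 mm from two orthogonal projections
print(f"area = {ellipse_area(3.0, 2.2):.2f} mm^2")
```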
NASA Technical Reports Server (NTRS)
Haas, Evan; DeLuccia, Frank
2016-01-01
In evaluating GOES-R Advanced Baseline Imager (ABI) image navigation quality, upsampled sub-images of ABI images are translated against downsampled Landsat 8 images of localized, high contrast earth scenes to determine the translations in the East-West and North-South directions that provide maximum correlation. The native Landsat resolution is much finer than that of ABI, and Landsat navigation accuracy is much better than ABI required navigation accuracy and expected performance. Therefore, Landsat images are considered to provide ground truth for comparison with ABI images, and the translations of ABI sub-images that produce maximum correlation with Landsat localized images are interpreted as ABI navigation errors. The measured local navigation errors from registration of numerous sub-images with the Landsat images are averaged to provide a statistically reliable measurement of the overall navigation error of the ABI image. The dispersion of the local navigation errors is also of great interest, since ABI navigation requirements are specified as bounds on the 99.73rd percentile of the magnitudes of per pixel navigation errors. However, the measurement uncertainty inherent in the use of image registration techniques tends to broaden the dispersion in measured local navigation errors, masking the true navigation performance of the ABI system. We have devised a novel and simple method for estimating the magnitude of the measurement uncertainty in registration error for any pair of images of the same earth scene. We use these measurement uncertainty estimates to filter out the higher quality measurements of local navigation error for inclusion in statistics. In so doing, we substantially reduce the dispersion in measured local navigation errors, thereby better approximating the true navigation performance of the ABI system.
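A generic sketch of translation registration by maximum correlation, the core operation described above, using FFT-based circular cross-correlation on synthetic images. It recovers integer-pixel shifts only and is not the GOES-R pipeline, which upsamples to sub-pixel resolution.

```python
import numpy as np

rng = np.random.default_rng(9)

def register_translation(ref, img):
    """Integer-pixel shift of img relative to ref via FFT cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Map peak indices in [0, N) to signed shifts
    return [p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape)]

scene = rng.normal(size=(128, 128))
shifted = np.roll(scene, (3, -5), axis=(0, 1))
print(register_translation(scene, shifted))   # expect [3, -5]
```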
NASA Astrophysics Data System (ADS)
Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.
2017-08-01
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Yu, Hwa-Lung; Chiang, Chi-Ting; Lin, Shu-De; Chang, Tsun-Kuo
2010-02-01
The incidence rate of oral cancer in Changhua County was the highest among the 23 counties of Taiwan in 2001. However, in health data analysis, crude or adjusted incidence rates of a rare event (e.g., cancer) for small populations often exhibit high variances and are, thus, less reliable. We proposed a generalized Bayesian Maximum Entropy (GBME) analysis of spatiotemporal disease mapping under conditions of considerable data uncertainty. GBME was used to study the oral cancer population incidence in Changhua County (Taiwan). Methodologically, GBME is based on an epistematics principles framework and generates spatiotemporal estimates of oral cancer incidence rates. In a way, it accounts for the multi-sourced uncertainty of rates, including small-population effects, and the composite space-time dependence of rare events in terms of an extended Poisson-based semivariogram. The results showed that GBME analysis alleviates the noise in oral cancer data arising from population size effects. Compared to the raw incidence data, maps of GBME-estimated results can identify high-risk oral cancer regions in Changhua County, where the prevalence of betel quid chewing and cigarette smoking is relatively higher than in the rest of the area. The GBME method is a valuable tool for spatiotemporal disease mapping under conditions of uncertainty. Copyright © 2010 Elsevier Inc. All rights reserved.
Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach
NASA Astrophysics Data System (ADS)
Kumral, Mustafa; Ozer, Umit
2013-03-01
Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. With multiple simulations, the dispersion variances of blocks can be regarded as capturing technical uncertainties. However, the dispersion variance cannot handle the uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the best drilling configuration in such a way as to minimize grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of the blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) approach. The technique has two spaces: feasible drill hole configurations with minimization of interpolation variance, and drill hole simulations with maximization of interpolation variance. The two spaces interact to find a minmax solution iteratively. A case study was conducted to demonstrate the performance of the approach. The findings showed that the approach could be used to plan a new drilling campaign.
Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.
2013-01-01
Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas
2016-11-01
Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacing the maximum likelihood solution with full Bayesian estimation also allows all tuning parameters to be estimated from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where the advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
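The key idea, estimating the "tuning" precisions from the data rather than fixing them, can be illustrated with the standard evidence-approximation updates for a linear-Gaussian model. This is a generic stand-in for the idea, assuming Gaussian priors and noise; it is not the authors' variational algorithm.

```python
# Evidence-approximation sketch for a linear source-term model y = M x + noise:
# both the prior precision (alpha) and noise precision (beta) are iteratively
# estimated from the data instead of being set by hand.
import numpy as np

def evidence_ridge(M, y, iters=50):
    n, p = M.shape
    alpha, beta = 1.0, 1.0                              # initial precisions
    I = np.eye(p)
    for _ in range(iters):
        S = np.linalg.inv(alpha * I + beta * M.T @ M)   # posterior covariance
        x = beta * S @ M.T @ y                          # posterior mean
        gamma = p - alpha * np.trace(S)                 # effective dof
        alpha = gamma / (x @ x)                         # update prior precision
        beta = (n - gamma) / np.sum((y - M @ x) ** 2)   # update noise precision
    return x, alpha, beta

rng = np.random.default_rng(1)
M = rng.normal(size=(200, 30))                # stand-in SRS matrix
x_true = np.maximum(rng.normal(size=30), 0)   # nonnegative "source term"
y = M @ x_true + rng.normal(scale=0.5, size=200)
x_hat, a, b = evidence_ridge(M, y)
print(a, b, np.linalg.norm(x_hat - x_true))
```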
NASA Astrophysics Data System (ADS)
Chakraborty, A.; Goto, H.
2017-12-01
The 2011 off the Pacific coast of Tohoku earthquake caused severe damage in many areas far inland because of site amplification. Furukawa district in Miyagi Prefecture, Japan recorded significant spatial differences in ground motion even at sub-kilometer scales. The site responses in the damage zone far exceeded the levels in the hazard maps. One reason for the mismatch is that maps follow only the mean value at the measurement locations, with no regard to the data uncertainties, and thus are not always reliable. Our research objective is to develop a methodology that incorporates data uncertainties into mapping and to propose a reliable map. The methodology is based on hierarchical Bayesian modeling of normally distributed site responses in space, where the mean (μ), site-specific variance (σ²) and between-sites variance (s²) parameters are treated as unknowns with a prior distribution. The observation data are artificially created site responses with varying means and variances for 150 seismic events across 50 locations in one-dimensional space. Spatially auto-correlated random effects were added to the mean (μ) using a conditionally autoregressive (CAR) prior. Inferences on the unknown parameters are made from the posterior distribution using Markov chain Monte Carlo methods. The goal is to find reliable estimates of μ that are sensitive to uncertainties. During initial trials, we observed that the τ (= 1/s²) parameter of the CAR prior controls the μ estimation. Using the constraint s = 1/(k×σ), five spatial models with varying k-values were created. We define reliability in terms of the model likelihood and propose the maximum likelihood model as highly reliable. The model with maximum likelihood was selected using a 5-fold cross-validation technique. The results show that the maximum likelihood model (μ*) follows the site-specific mean at low uncertainties and converges to the model mean at higher uncertainties (Fig. 1). This result is highly significant, as it successfully incorporates the effect of data uncertainties in mapping. This approach can be applied to any research field using mapping techniques. The methodology is now being applied to real records from a very dense seismic network in Furukawa district, Miyagi Prefecture, Japan to generate a reliable map of the site responses.
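The reported behavior, following the site-specific mean at low uncertainty and converging to the model mean at high uncertainty, is exactly the shrinkage behavior of a hierarchical normal model. A minimal sketch of that mechanism, with the CAR prior and MCMC omitted and all values illustrative:

```python
# Shrinkage sketch for a hierarchical normal model: the posterior site mean
# interpolates between the site-specific mean and the global (model) mean
# according to the within-site and between-sites variances.
import numpy as np

def posterior_site_means(site_means, sigma2, s2, mu):
    # sigma2: site-specific variance of each site mean (data uncertainty)
    # s2: between-sites variance, mu: global model mean
    w = s2 / (s2 + sigma2)          # weight on the site-specific mean
    return w * site_means + (1 - w) * mu

site_means = np.array([2.0, 3.5, 1.2])
sigma2 = np.array([0.01, 0.5, 10.0])   # low -> high data uncertainty
print(posterior_site_means(site_means, sigma2, s2=1.0, mu=2.2))
# the well-constrained site stays near its own mean; the noisy one -> mu
```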
NASA Technical Reports Server (NTRS)
Strassberg, Gil; Scanlon, Bridget R.; Rodell, Matthew
2007-01-01
This study presents the first direct comparison of variations in seasonal groundwater storage (GWS) derived from GRACE total water storage (TWS) and simulated soil moisture (SM) with groundwater (GW) level measurements in a semiarid region. Results showed that variations in GWS and SM are the main sources controlling TWS changes over the High Plains, with negligible storage changes from surface water, snow, and biomass. Seasonal variations in GRACE TWS compare favorably with combined GWS from GW-level measurements (total 2,700 wells, average 1,050 GW-level measurements per season) and simulated SM from the Noah land surface model (R = 0.82, RMSD = 33 mm). Estimated uncertainty in seasonal GRACE-derived TWS is 8 mm, and estimated uncertainty in TWS changes is 11 mm. Estimated uncertainty in SM changes is 11 mm, and the combined uncertainty for TWS-SM changes is 15 mm. Seasonal TWS changes are detectable in 7 out of 9 monitored periods, and maximum changes within a year (e.g., between winter and summer) are detectable in all 5 monitored periods. GRACE-derived GWS calculated from TWS-SM generally agrees with estimates based on GW-level measurements (R = 0.58, RMSD = 33 mm). Seasonal TWS-SM changes are detectable in 5 out of the 9 monitored periods, and maximum changes are detectable in all 5 monitored periods. The good correspondence between GRACE data and GW-level measurements from the intensively monitored High Plains aquifer validates the potential for using GRACE TWS and simulated SM to monitor GWS changes and aquifer depletion in semiarid regions subjected to intensive irrigation pumpage. This method can be used to monitor regions where large-scale aquifer depletion is ongoing and in situ measurements are limited, such as the North China Plain or western India. This potential should be enhanced by future advances in GRACE processing, which will improve the spatial and temporal resolution of TWS changes and further increase the applicability of GRACE data for monitoring GWS.
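A short check of the uncertainty arithmetic above, assuming the TWS-change and SM-change errors are independent so they combine in quadrature (11 mm and 11 mm giving roughly the 15 mm quoted):

```python
# Root-sum-square propagation for GWS changes estimated as TWS - SM.
import math

u_tws, u_sm = 11.0, 11.0                 # mm, uncertainties in changes
u_gws = math.hypot(u_tws, u_sm)          # sqrt(11^2 + 11^2)
print(round(u_gws, 1))                   # -> 15.6 mm, ~15 mm as reported
```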
Zhang, H X
2008-01-01
An innovative approach for total maximum daily load (TMDL) allocation and implementation is watershed-based pollutant trading. Given the inherent scientific uncertainty in the tradeoffs between point and nonpoint sources, the setting of trading ratios can be a contentious issue and has already been listed as an obstacle by several pollutant trading programs. One of the fundamental reasons that a trading ratio is often set higher (e.g., greater than 2) is to allow for uncertainty in the level of control needed to attain water quality standards, and to provide a buffer in case traded reductions are less effective than expected. However, most of the available studies did not provide an approach to explicitly address the determination of the trading ratio, and uncertainty analysis has rarely been linked to it. This paper presents a practical methodology for estimating an "equivalent trading ratio (ETR)" and links uncertainty analysis with trading ratio determination in the TMDL allocation process. Determination of the ETR can provide a preliminary evaluation of the tradeoffs between various combinations of point and nonpoint source control strategies on ambient water quality improvement. A greater portion of NPS load reduction in the overall TMDL load reduction generally correlates with greater uncertainty and thus requires a greater trading ratio. Rigorous quantification of the trading ratio will enhance the scientific basis, and thus public perception, for more informed decisions in the overall watershed-based pollutant trading program. (c) IWA Publishing 2008.
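One way to see why uncertain nonpoint-source effectiveness pushes trading ratios above 2 is a simple Monte Carlo reliability argument. The sketch below is a hedged illustration only: the lognormal effectiveness model, the 95% reliability target, and all numbers are assumptions, not the paper's ETR method.

```python
# Monte Carlo sketch: find the smallest ratio r such that an uncertain
# nonpoint-source reduction of r*L matches a guaranteed point-source
# reduction L with 95% confidence.
import numpy as np

rng = np.random.default_rng(2)
L = 100.0                                                # PS reduction (kg/d)
eff = rng.lognormal(mean=np.log(0.8), sigma=0.4, size=100_000)  # NPS effectiveness

for r in np.arange(1.0, 5.01, 0.05):
    if np.mean(r * L * eff >= L) >= 0.95:                # reliability target
        print(f"equivalent trading ratio ~ {r:.2f}")     # ~2.4 here, i.e. > 2
        break
```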
2015-03-16
A global sensitivity analysis was conducted using the variance-based method of Sobol to estimate which parameters controlled model performance; the shaded region around each total sensitivity value represents the maximum uncertainty in that value as estimated by the Sobol method.
Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data
Bakun, W.H.; Gomez, Capera A.; Stucchi, M.
2011-01-01
Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location, the magnitude, and the epistemic uncertainties among them. The estimates are calculated using bootstrap-resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap-resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. The instrumental magnitudes for large and small earthquakes are generally consistent with the confidence intervals inferred from the distribution of bootstrap-resampled magnitudes.
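The magnitude-uncertainty step above reduces to reading percentile bounds off a bootstrap distribution. A minimal sketch, assuming a toy intensity-magnitude relation with hypothetical coefficients and synthetic site intensities:

```python
# Bootstrap 68% bounds for an intensity-derived magnitude: resample the
# intensity data, recompute the magnitude, and take the 16th/84th percentiles.
import numpy as np

rng = np.random.default_rng(3)
intens = rng.normal(7.0, 1.0, size=40)     # site intensities, illustrative

def magnitude(mean_intensity):
    # toy intensity-magnitude relation (hypothetical coefficients)
    return 0.55 * mean_intensity + 2.2

boot_mags = [magnitude(rng.choice(intens, size=intens.size, replace=True).mean())
             for _ in range(5000)]
lo, med, hi = np.percentile(boot_mags, [16, 50, 84])
print(f"M = {med:.2f}, 68% bounds [{lo:.2f}, {hi:.2f}]")
```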
Cohn, Timothy A.
2005-01-01
This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored-data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE, the AMLE comes close to achieving the theoretical Fréchet-Cramér-Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real-time water quality monitoring.
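For context, the Tobit-style censored MLE that the AMLE adjusts can be written down directly: nondetects contribute the distribution function at the detection limit to the likelihood. A minimal sketch with illustrative data (this is the baseline estimator, not Cohn's bias-adjusted AMLE):

```python
# Censored-data MLE for lognormal concentrations with a detection limit:
# detects contribute the log-density, nondetects the log-CDF at the limit.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

dl = 0.05                                        # detection limit (mg/L)
obs = np.array([0.21, 0.08, 0.33, 0.12, 0.51])   # detected concentrations
n_censored = 7                                   # samples reported as < dl

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    ll = norm.logpdf(np.log(obs), mu, sigma).sum()             # detects
    ll += n_censored * norm.logcdf((np.log(dl) - mu) / sigma)  # nondetects
    return -ll

fit = minimize(neg_loglik, x0=[-2.0, 0.0])
print(fit.x)   # MLE of the log-scale mean and log(sd)
```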
Development of probabilistic emission inventories of air toxics for Jacksonville, Florida, USA.
Zhao, Yuchao; Frey, H Christopher
2004-11-01
Probabilistic emission inventories were developed for 1,3-butadiene, mercury (Hg), arsenic (As), benzene, formaldehyde, and lead for Jacksonville, FL. To quantify inter-unit variability in empirical emission factor data, the Maximum Likelihood Estimation (MLE) method or the Method of Matching Moments was used to fit parametric distributions. For data sets that contain nondetected measurements, a method based upon MLE was used for parameter estimation. To quantify the uncertainty in urban air toxic emission factors, parametric bootstrap simulation and empirical bootstrap simulation were applied to uncensored and censored data, respectively. The probabilistic emission inventories were developed by propagating the uncertainties in both the emission factors and the activity factors through their product. The uncertainties in the urban air toxics emission inventories range from as small as -25 to +30% for Hg to as large as -83 to +243% for As. The key sources of uncertainty in the emission inventory for each toxic are identified based upon sensitivity analysis. Typically, uncertainty in the inventory of a given pollutant can be attributed primarily to a small number of source categories. Priorities for improving the inventories and for refining the probabilistic analysis are discussed.
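The parametric-bootstrap step for the uncensored case can be sketched in a few lines, assuming a lognormal fit and made-up emission-factor data; the percentile ranges it prints are analogous in form to the asymmetric ranges quoted above:

```python
# Parametric bootstrap for emission-factor uncertainty: fit a lognormal by
# MLE, resample synthetic datasets from the fit, and report the percentile
# uncertainty range of the mean emission factor.
import numpy as np

rng = np.random.default_rng(4)
ef = rng.lognormal(mean=0.0, sigma=0.8, size=25)        # "observed" factors

mu, sigma = np.log(ef).mean(), np.log(ef).std(ddof=0)   # lognormal MLE
boot_means = [rng.lognormal(mu, sigma, size=ef.size).mean()
              for _ in range(10_000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
m = ef.mean()
print(f"{(lo - m) / m:+.0%} / {(hi - m) / m:+.0%} about the mean")
```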
Impact of measurement uncertainty from experimental load distribution factors on bridge load rating
NASA Astrophysics Data System (ADS)
Gangone, Michael V.; Whelan, Matthew J.
2018-03-01
Load rating and testing of highway bridges is important in determining the capacity of the structure. Experimental load rating utilizes strain transducers placed at critical locations of the superstructure to measure normal strains. These strains are then used in computing diagnostic performance measures (neutral axis of bending, load distribution factor) and ultimately a load rating. However, it has been shown that experimentally obtained strain measurements contain uncertainties associated with the accuracy and precision of the sensor and sensing system. These uncertainties propagate through to the diagnostic indicators and in turn into the load rating calculation. This paper analyzes the effect that measurement uncertainties have on the experimental load rating results of a 3-span multi-girder/stringer steel and concrete bridge, limiting its focus to the uncertainty associated with the experimental distribution factor estimate. For the testing discussed, strain readings were gathered at the midspan of each span on both exterior girders and the center girder. Test vehicles of known weight were positioned at specified locations on each span to generate the maximum strain response for each of the five girders. The strain uncertainties were used in conjunction with a propagation formula developed by the authors to determine the standard uncertainty in the distribution factor estimates. This distribution factor uncertainty is then introduced into the load rating computation to determine the possible range of the load rating. The results show the importance of understanding measurement uncertainty in experimental load testing.
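Since the distribution factor is DF_i = e_i / sum_j e_j, a generic first-order propagation of the strain uncertainties can be sketched directly (this is standard propagation under an independence assumption, not necessarily the authors' exact formula; all strain values are illustrative):

```python
# First-order uncertainty propagation for a girder distribution factor
# DF_i = eps_i / sum_j eps_j given per-girder strain uncertainties u_j.
import numpy as np

eps = np.array([120., 210., 260., 205., 115.])   # microstrain, 5 girders
u = np.full(5, 4.0)                              # strain standard uncertainty

S = eps.sum()
i = 2                                            # girder of interest
grad = -eps[i] / S**2 * np.ones(5)               # d(DF_i)/d(eps_j), j != i
grad[i] = (S - eps[i]) / S**2                    # d(DF_i)/d(eps_i)
u_df = np.sqrt(np.sum((grad * u) ** 2))          # independent-source RSS
print(eps[i] / S, u_df)
```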
NASA Astrophysics Data System (ADS)
Zhang, Z.; Zimmermann, N. E.; Poulter, B.
2015-11-01
Simulations of the spatial-temporal dynamics of wetlands are key to understanding the role of wetland biogeochemistry under past and future climate variability. Hydrologic inundation models, such as TOPMODEL, are based on a fundamental parameter known as the compound topographic index (CTI) and provide a computationally cost-efficient approach to simulating wetland dynamics at global scales. However, large discrepancies remain among the implementations of TOPMODEL in land-surface models (LSMs) and thus in their performance against observations. This study describes new improvements to the TOPMODEL implementation and estimates of global wetland dynamics using the LPJ-wsl dynamic global vegetation model (DGVM), and quantifies the uncertainty in simulated inundation dynamics by comparing three digital elevation model products (HYDRO1k, GMTED, and HydroSHEDS) of different spatial resolution and accuracy. In addition, we found that calibrating TOPMODEL with a benchmark wetland dataset can help to successfully delineate the seasonal and interannual variations of wetlands, as well as improve the spatial distribution of wetlands to be consistent with inventories. The HydroSHEDS DEM, using a river-basin scheme for aggregating the CTI, shows the best accuracy in capturing the spatio-temporal dynamics of wetlands among the three DEM products. The estimate of the global wetland potential/maximum is ∼10.3 Mkm² (1 Mkm² = 10⁶ km²), with a mean annual maximum of ∼5.17 Mkm² for 1980-2010. This study demonstrates the feasibility of capturing the spatial heterogeneity of inundation and estimating seasonal and interannual variations in wetlands by coupling a hydrological module in LSMs with appropriate benchmark datasets. It additionally highlights the importance of an adequate investigation of topographic indices for simulating global wetlands and shows an opportunity to converge wetland estimates across LSMs by identifying the uncertainty associated with existing wetland products.
Uncertainties in Projecting Risks of Late Effects from Space Radiation
NASA Astrophysics Data System (ADS)
Cucinotta, F.; Schimmerling, W.; Peterson, L.; Wilson, J.; Saganti, P.; Dicello, J.
The health risks faced by astronauts from space radiation include cancer, cataracts, hereditary effects, CNS risks, and non-cancer morbidity and mortality risks related to the diseases of old age. Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because of the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions, which cause estimates of the risk of late effects to be highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Within the linear-additivity model, we use Monte Carlo sampling from subjective uncertainty distributions in each factor to obtain a maximum likelihood estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios, including ISS, lunar station, deep space outpost, and Mars missions of 360, 660, and 1000 days duration. The major results are the quantification of the uncertainties in current risk estimates, the identification of the primary factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction using the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time, and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in terms of quantitative metrics, e.g., the number of days in space without exceeding a given risk level within well-defined confidence limits.
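The product-of-factors structure makes the Monte Carlo step easy to illustrate. In the sketch below, every factor, its distribution, and its parameters are assumptions chosen for illustration; only the sampling-of-a-product pattern mirrors the methodology described above.

```python
# Monte Carlo sketch: sample each uncertain factor of a multiplicative risk
# model and read confidence limits off the resulting risk distribution.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
dose = rng.normal(0.4, 0.04, n)               # Sv, physics uncertainty
quality = rng.lognormal(np.log(2.5), 0.5, n)  # radiation quality factor
ddref = rng.lognormal(np.log(2.0), 0.3, n)    # dose-rate effectiveness factor
risk_coeff = rng.normal(0.05, 0.01, n)        # risk per Sv (toy value)

risk = dose * quality * risk_coeff / ddref    # toy multiplicative model
print(np.percentile(risk, [2.5, 50, 97.5]))   # 95% confidence limits
```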
NASA Astrophysics Data System (ADS)
Zhu, Q.; Xu, Y. P.; Gu, H.
2014-12-01
Traditionally, regional frequency analysis methods were developed for stationary environmental conditions. Nevertheless, recent studies have identified significant changes in hydrological records, leading to the 'death' of stationarity. In addition, uncertainty in hydrological frequency analysis is persistent. This study aims to investigate the impact of one of the most important uncertainty sources, parameter uncertainty, together with nonstationarity, on design rainfall depth in Qu River Basin, East China. A spatial bootstrap is first proposed to analyze the uncertainty of design rainfall depth estimated by regional frequency analysis based on L-moments and by at-site estimation. Meanwhile, a method combining generalized additive models with a 30-year moving window is employed to analyze the non-stationarity in the extreme rainfall regime. The results show that the uncertainties of design rainfall depth with a 100-year return period under stationary conditions estimated by the regional spatial bootstrap can reach 15.07% and 12.22% with GEV and PE3, respectively. At the at-site scale, the uncertainties can reach 17.18% and 15.44% with GEV and PE3, respectively. Under non-stationary conditions, the uncertainties of maximum rainfall depth (corresponding to design rainfall depth) with 0.01 annual exceedance probability (corresponding to a 100-year return period) are 23.09% and 13.83% with GEV and PE3, respectively. Comparing the 90% confidence intervals, the uncertainty of design rainfall depth resulting from parameter uncertainty is less than that from non-stationary frequency analysis with GEV, but slightly larger with PE3. This study indicates that the spatial bootstrap can be successfully applied to analyze the uncertainty of design rainfall depth at both regional and at-site scales. The non-stationary analysis shows that the differences between non-stationary quantiles and their stationary equivalents are important for decision makers in water resources management and risk management.
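The at-site bootstrap step can be sketched with a standard GEV fit (a plain nonparametric bootstrap on synthetic annual maxima; the spatial bootstrap and L-moment fitting of the paper are not reproduced here):

```python
# Bootstrap parameter uncertainty of a 100-year design depth from a GEV fit.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(6)
amax = genextreme.rvs(c=-0.1, loc=80, scale=20, size=50, random_state=rng)

def q100(x):
    # 100-year return level = 0.99 quantile of the fitted annual-max GEV
    c, loc, scale = genextreme.fit(x)
    return genextreme.ppf(0.99, c, loc, scale)

point = q100(amax)
boot = [q100(rng.choice(amax, size=amax.size, replace=True))
        for _ in range(500)]
lo, hi = np.percentile(boot, [5, 95])
print(point, (hi - lo) / 2 / point)   # 90% interval half-width, relative
```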
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
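The "uncertainty transmitted through the MaxEnt function" idea can be made concrete with the classic die example: for each sampled value of the mean constraint, solve for the Lagrange multiplier and collect the resulting MaxEnt probabilities. This is a numerical illustration of the concept, not the paper's analytical construction.

```python
# MaxEnt with an uncertain mean constraint: p_k ~ exp(-lam * k) for a die,
# with the constraint value m drawn from a Gaussian and re-solved each time.
import numpy as np
from scipy.optimize import brentq

k = np.arange(1, 7)

def maxent_probs(m):
    # solve for the Lagrange multiplier matching the mean constraint m
    f = lambda lam: (k * np.exp(-lam * k)).sum() / np.exp(-lam * k).sum() - m
    lam = brentq(f, -10, 10)
    p = np.exp(-lam * k)
    return p / p.sum()

rng = np.random.default_rng(7)
samples = np.array([maxent_probs(m) for m in rng.normal(4.5, 0.2, 2000)])
print(samples.mean(axis=0), samples.std(axis=0))   # density over the p's
```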
Andrew D. Richardson; David Y. Hollinger
2005-01-01
Whether the goal is to fill gaps in the flux record, or to extract physiological parameters from eddy covariance data, researchers are frequently interested in fitting simple models of ecosystem physiology to measured data. Presently, there is no consensus on the best models to use, or the ideal optimization criteria. We demonstrate that, given our estimates of the...
Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; ...
2017-08-25
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3-6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
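For the exponential (Poisson-process) case, the open-interval likelihood has a closed-form MLE, which makes the effect of the open times easy to see. A minimal sketch with illustrative numbers; the age-dating uncertainty and Bayesian variants treated in the paper are omitted.

```python
# Exponential renewal MLE with open intervals: open times before the first
# and after the last deposit enter as survival terms exp(-lam * t_open),
# so the MLE rate is simply events over total elapsed time.
import numpy as np

inter_event = np.array([12.1, 8.4, 20.3, 15.0, 9.9])   # kyr between deposits
open_before, open_after = 6.0, 4.5                      # kyr open intervals

T = inter_event.sum() + open_before + open_after
lam = inter_event.size / T                              # MLE of the rate
loglik = inter_event.size * np.log(lam) - lam * T
aic = 2 * 1 - 2 * loglik                                # one free parameter
print(1 / lam, aic)                                     # mean return time, AIC
```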
Determining the Uncertainty of X-Ray Absorption Measurements
Wojcik, Gary S.
2004-01-01
X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well-approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2 % by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10 % in any of the tests conducted. PMID:27366627
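The Poisson estimate tested above is easy to verify by simulation: the normalized root mean square error of repeated counts should approach 1/sqrt(N). A quick check with illustrative count rates:

```python
# Simulated check of the Poisson uncertainty approximation sqrt(N)/N.
import numpy as np

rng = np.random.default_rng(13)
for n_true in (400, 4000, 40000):
    counts = rng.poisson(n_true, size=10_000)
    nrmse = np.sqrt(np.mean((counts - n_true) ** 2)) / n_true
    print(n_true, round(nrmse, 4), round(1 / np.sqrt(n_true), 4))
```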
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakos, James Thomas
2004-04-01
It would not be possible to confidently qualify weapon systems performance or validate computer codes without knowing the uncertainty of the experimental data used. This report provides uncertainty estimates associated with thermocouple data for temperature measurements from two of Sandia's large-scale thermal facilities. These two facilities (the Radiant Heat Facility (RHF) and the Lurance Canyon Burn Site (LCBS)) routinely gather data from normal and abnormal thermal environment experiments. They are managed by Fire Science & Technology Department 09132. Uncertainty analyses were performed for several thermocouple (TC) data acquisition systems (DASs) used at the RHF and LCBS. These analyses apply to Type K, chromel-alumel thermocouples of various types: fiberglass-sheathed TC wire and mineral-insulated, metal-sheathed (MIMS) TC assemblies, and are easily extended to other TC materials (e.g., copper-constantan). Several DASs were analyzed: (1) a Hewlett-Packard (HP) 3852A system, and (2) several National Instruments (NI) systems. The uncertainty analyses were performed on the entire system, from the TC to the DAS output file. Uncertainty sources include TC mounting errors, ANSI standard calibration uncertainty for Type K TC wire, potential errors due to temperature gradients inside connectors, extension wire uncertainty, and DAS hardware uncertainties including noise, common mode rejection ratio, digital voltmeter accuracy, mV-to-temperature conversion, analog-to-digital conversion, and other possible sources. Typical results for 'normal' environments (e.g., maximum of 300-400 K) showed the total uncertainty to be about ±1% of the reading in absolute temperature. In high temperature or high heat flux ('abnormal') thermal environments, total uncertainties range up to ±2-3% of the reading (maximum of 1300 K). The higher uncertainties in abnormal thermal environments are caused by increased errors due to the effects of imperfect TC attachment to the test item. 'Best practices' are provided in Section 9 to help the user obtain the best measurements possible.
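Combining independent uncertainty sources like those listed above into a channel total is a root-sum-square operation. A short sketch; the individual percentages are hypothetical stand-ins, not the report's values:

```python
# Root-sum-square combination of independent thermocouple-channel
# uncertainty sources into a total, in percent of reading.
import math

sources_pct = {                       # hypothetical illustrative values
    "TC wire calibration": 0.4,
    "mounting/attachment": 0.6,
    "extension wire": 0.2,
    "DAS (noise, CMRR, DVM, ADC)": 0.5,
    "mV-to-temperature conversion": 0.1,
}
total = math.sqrt(sum(v ** 2 for v in sources_pct.values()))
print(f"total ~ +/-{total:.1f}% of reading")   # ~ +/-1%, cf. the report
```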
Hogrefe, Christian; Isukapalli, Sastry S.; Tang, Xiaogang; Georgopoulos, Panos G.; He, Shan; Zalewsky, Eric E.; Hao, Winston; Ku, Jia-Yeong; Key, Tonalee; Sistla, Gopal
2011-01-01
The role of emissions of volatile organic compounds and nitric oxide from biogenic sources is becoming increasingly important in regulatory air quality modeling as levels of anthropogenic emissions continue to decrease and stricter health-based air quality standards are being adopted. However, considerable uncertainties still exist in the current estimation methodologies for biogenic emissions. The impact of these uncertainties on ozone and fine particulate matter (PM2.5) levels for the eastern United States was studied, focusing on biogenic emissions estimates from two commonly used biogenic emission models, the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and the Biogenic Emissions Inventory System (BEIS). Photochemical grid modeling simulations were performed for two scenarios: one reflecting present day conditions and the other reflecting a hypothetical future year with reductions in emissions of anthropogenic oxides of nitrogen (NOx). For ozone, the use of MEGAN emissions resulted in a higher ozone response to hypothetical anthropogenic NOx emission reductions compared with BEIS. Applying the current U.S. Environmental Protection Agency guidance on regulatory air quality modeling in conjunction with typical maximum ozone concentrations, the differences in estimated future year ozone design values (DVF) stemming from differences in biogenic emissions estimates were on the order of 4 parts per billion (ppb), corresponding to approximately 5% of the daily maximum 8-hr ozone National Ambient Air Quality Standard (NAAQS) of 75 ppb. For PM2.5, the differences were 0.1–0.25 μg/m3 in the summer total organic mass component of DVFs, corresponding to approximately 1–2% of the value of the annual PM2.5 NAAQS of 15 μg/m3. Spatial variations in the ozone and PM2.5 differences also reveal that the impacts of different biogenic emission estimates on ozone and PM2.5 levels are dependent on ambient levels of anthropogenic emissions. PMID:21305893
Darnaude, Audrey M.
2016-01-01
Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some or all nursery-signatures, may need to be estimated from the mixed-stock data as well. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering five different sampling scenarios, where 0-4 lagoons were excluded from the nursery-source dataset, and six nursery-signature separation scenarios, simulating data separated by 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled but exhibited large variability among cohorts and increased with the number of non-sampled sources up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but these estimates tended to be less biased, and more uncertain, than mixing proportion ones across all sampling scenarios (BI < 0.13, SE < 0.29). Increasing separation among nursery signatures improved the reliability of mixing proportion estimates but led to non-linear responses in baseline signature parameters. Low uncertainty, but a consistent underestimation bias, affected the estimated number of nursery sources across all incomplete sampling scenarios. Discussion ML-MM produced reliable estimates of mixing proportions and nursery-signatures under an important range of incomplete sampling and nursery-signature separation scenarios. This method failed, however, in estimating the true number of nursery sources, reflecting a pervasive issue affecting mixture models, within and beyond the ML framework. Large differences in bias and uncertainty found among cohorts were linked to differences in separation of chemical signatures among nursery habitats. Simulation approaches, such as those presented here, could be useful to evaluate the sensitivity of MM results to separation and variability in nursery-signatures for other species, habitats or cohorts. PMID:27761305
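For the "all sources sampled" case described above, maximum likelihood estimation of the mixing proportions reduces to a simple EM iteration with the baseline signatures held fixed. A minimal sketch with synthetic Gaussian signatures standing in for the otolith data:

```python
# EM for mixing proportions with known (fixed) Gaussian baseline signatures:
# E-step computes responsibilities, M-step averages them.
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(8)
mus = np.array([[0., 0.], [3., 0.], [0., 3.], [3., 3.]])  # 4 nursery centroids
cov = np.eye(2)
true_pi = np.array([0.4, 0.3, 0.2, 0.1])
z = rng.choice(4, size=300, p=true_pi)
X = np.array([rng.multivariate_normal(mus[j], cov) for j in z])

pi = np.full(4, 0.25)
for _ in range(200):
    dens = np.stack([mvn.pdf(X, m, cov) for m in mus], axis=1)  # n x 4
    resp = pi * dens
    resp /= resp.sum(axis=1, keepdims=True)   # E-step: responsibilities
    pi = resp.mean(axis=0)                    # M-step: mixing proportions
print(pi.round(3), true_pi)
```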
Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; ...
2014-06-03
A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data, and the resulting resistivity tomograph was used as the prior information for nonlinear inversion of time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. The mean and standard deviation of CO₂ saturation were then calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6%, with a corresponding maximum saturation of 30%, for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation, but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data and on inversion constraints such as temporal roughness. Five hundred realizations requiring 3.5 h on a single 12-core node were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances, while a Markov chain Monte Carlo (MCMC) stochastic inverse approach may take days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.
How a European network may help with estimating methane emissions on the French national scale
NASA Astrophysics Data System (ADS)
Pison, Isabelle; Berchet, Antoine; Saunois, Marielle; Bousquet, Philippe; Broquet, Grégoire; Conil, Sébastien; Delmotte, Marc; Ganesan, Anita; Laurent, Olivier; Martin, Damien; O'Doherty, Simon; Ramonet, Michel; Spain, T. Gerard; Vermeulen, Alex; Yver Kwok, Camille
2018-03-01
Methane emissions on the national scale in France in 2012 are inferred by assimilating continuous atmospheric mixing ratio measurements from nine stations of the European network ICOS located in France and surrounding countries. To assess the robustness of the fluxes deduced by our inversion system, which is based on an objectified quantification of uncertainties, two complementary inversion set-ups are computed and analysed: (i) a regional run correcting for the spatial distribution of fluxes in France and (ii) a sectorial run correcting fluxes for activity sectors on the national scale. In addition, our results for the two set-ups are compared with fluxes produced in the framework of the inversion inter-comparison exercise of the InGOS project. The seasonal variability in fluxes is consistent between the different set-ups, with maximum emissions in summer, likely due to agricultural activity. However, very high monthly posterior uncertainties (up to ≈65-74% in the sectorial run in May and June) make it difficult to attribute maximum emissions to a specific sector. On the yearly and national scales, the two inversions range from 3835 to 4050 Gg CH4 and from 3570 to 4190 Gg CH4 for the regional and sectorial runs, respectively, consistently with the InGOS products. These estimates are 25 to 55% higher than the total national emissions from bottom-up approaches (biogeochemical models for natural emissions, plus inventories for anthropogenic ones), consistently pointing at missing or underestimated sources in the inventories and/or in natural sources. More specifically, in the sectorial set-up, agricultural emissions are inferred to be 66% larger than estimates reported to the UNFCCC. Uncertainties in the total annual national budget are 108 and 312 Gg CH4, i.e., 3 to 8%, for the regional and sectorial runs respectively, smaller than uncertainties in available bottom-up products, proving the added value of top-down atmospheric inversions. Therefore, even though the surface network used in 2012 does not allow us to fully constrain all regions in France accurately, a regional inversion set-up makes it possible to provide estimates of French methane fluxes with an uncertainty in the total budget of less than 10% on the yearly timescale. Additional sites deployed since 2012 would help to constrain French emissions on finer spatial and temporal scales and to attribute missing emissions to specific sectors.
Switzer, P.; Harden, J.W.; Mark, R.K.
1988-01-01
A statistical method for estimating rates of soil development in a given region based on calibration from a series of dated soils is used to estimate ages of soils in the same region that are not dated directly. The method is designed specifically to account for sampling procedures and uncertainties that are inherent in soil studies. Soil variation and measurement error, uncertainties in calibration dates and their relation to the age of the soil, and the limited number of dated soils are all considered. Maximum likelihood (ML) is employed to estimate a parametric linear calibration curve, relating soil development to time or age on suitably transformed scales. Soil variation on a geomorphic surface of a certain age is characterized by replicate sampling of soils on each surface; such variation is assumed to have a Gaussian distribution. The age of a geomorphic surface is described by older and younger bounds. This technique allows age uncertainty to be characterized by either a Gaussian distribution or by a triangular distribution using minimum, best-estimate, and maximum ages. The calibration curve is taken to be linear after suitable (in certain cases logarithmic) transformations, if required, of the soil parameter and age variables. Soil variability, measurement error, and departures from linearity are described in a combined fashion using Gaussian distributions with variances particular to each sampled geomorphic surface and the number of sample replicates. Uncertainty in age of a geomorphic surface used for calibration is described using three parameters by one of two methods. In the first method, upper and lower ages are specified together with a coverage probability; this specification is converted to a Gaussian distribution with the appropriate mean and variance. In the second method, "absolute" older and younger ages are specified together with a most probable age; this specification is converted to an asymmetric triangular distribution with mode at the most probable age. The statistical variability of the ML-estimated calibration curve is assessed by a Monte Carlo method in which simulated data sets repeatedly are drawn from the distributional specification; calibration parameters are reestimated for each such simulation in order to assess their statistical variability. Several examples are used for illustration. The age of undated soils in a related setting may be estimated from the soil data using the fitted calibration curve. A second simulation to assess age estimate variability is described and applied to the examples. © 1988 International Association for Mathematical Geology.
Shimansky, Y P
2011-05-01
It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
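The core effect, a systematic deviation of the optimal estimate from its maximum likelihood value under asymmetric error cost, can be reproduced numerically in a few lines. A sketch under assumed cost asymmetry (the 5:1 ratio and the Gaussian belief are illustrative):

```python
# With a Gaussian belief about a parameter and an asymmetric linear error
# cost, the estimate minimizing expected cost shifts away from the mean
# (the maximum-likelihood value) toward the cheaper error direction.
import numpy as np

rng = np.random.default_rng(9)
theta = rng.normal(0.0, 1.0, 200_000)        # samples from the belief

def expected_cost(est, over=5.0, under=1.0):
    err = est - theta
    return np.mean(np.where(err > 0, over * err, -under * err))

grid = np.linspace(-2, 2, 401)
best = grid[np.argmin([expected_cost(g) for g in grid])]
print(best)   # ~ -0.97: biased below the ML estimate (0) since overshoot costs more
```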
NASA Technical Reports Server (NTRS)
Szalay, Alexander S.; Jain, Bhuvnesh; Matsubara, Takahiko; Scranton, Ryan; Vogeley, Michael S.; Connolly, Andrew; Dodelson, Scott; Eisenstein, Daniel; Frieman, Joshua A.; Gunn, James E.
2003-01-01
We present measurements of parameters of the three-dimensional power spectrum of galaxy clustering from 222 square degrees of early imaging data in the Sloan Digital Sky Survey (SDSS). The projected galaxy distribution on the sky is expanded over a set of Karhunen-Loève (KL) eigenfunctions, which optimize the signal-to-noise ratio in our analysis. A maximum likelihood analysis is used to estimate parameters that set the shape and amplitude of the three-dimensional power spectrum of galaxies in the SDSS magnitude-limited sample with r* < 21. Our best estimates are Γ = 0.188 ± 0.04 and σ8L = 0.915 ± 0.06 (statistical errors only), for a flat universe with a cosmological constant. We demonstrate that our measurements contain signal from scales at or beyond the peak of the three-dimensional power spectrum. We discuss how the results scale with systematic uncertainties, like the radial selection function. We find that the central values satisfy the analytically estimated scaling relation. We have also explored the effects of evolutionary corrections, various truncations of the KL basis, seeing, sample size, and limiting magnitude. We find that the impact of most of these uncertainties stays within the 2σ uncertainties of our fiducial result.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g., numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data; sparse tensorization methods [2] utilizing node-nested hierarchies; and sampling methods [4] for high-dimensional random variable spaces.
New Insights into the Estimation of Extreme Geomagnetic Storm Occurrences
NASA Astrophysics Data System (ADS)
Ruffenach, Alexis; Winter, Hugo; Lavraud, Benoit; Bernardara, Pietro
2017-04-01
Space weather events such as intense geomagnetic storms are major disturbances of the near-Earth environment that may lead to serious impacts on our modern society. As such, it is of great importance to estimate their probability, and in particular that of extreme events. One approach widely used in the statistical sciences for estimating the probability of extreme events is Extreme Value Analysis (EVA). Using this rigorous statistical framework, we estimate the occurrence of extreme geomagnetic storms based on the most relevant global parameters related to geomagnetic storms, such as ground parameters (e.g., the geomagnetic Dst and aa indexes) and space parameters related to the characteristics of Coronal Mass Ejections (CMEs) (velocity, southward magnetic field component, electric field). Using our fitted model, we estimate the annual probability of a Carrington-type event (Dst = −850 nT) to be on the order of 10⁻³, with a lower limit on the return-period uncertainty of ∼500 years. Our estimate is significantly higher than that of most past studies, which typically found a return period of a few hundred years at most. Thus, precautions are required when extrapolating to intense values. Currently, the complexity of the processes and the length of available data inevitably lead to significant uncertainties in return period estimates for the occurrence of extreme geomagnetic storms. However, our application of extreme value models for extrapolating into the tail of the distribution provides a mathematically justified framework for the estimation of extreme return periods, thereby enabling more accurate estimates and reduced associated uncertainties.
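The annual-exceedance calculation underlying such return-period estimates can be sketched with a standard block-maxima fit; the synthetic data below merely stand in for the real Dst record, and the fitted tail is purely illustrative:

```python
# Extreme-value sketch: fit a GEV to annual maxima of -Dst and read off
# the annual probability of exceeding a Carrington-level 850 nT.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(10)
ann_max = genextreme.rvs(c=-0.3, loc=120, scale=60, size=60,
                         random_state=rng)    # yearly max of -Dst (nT)

c, loc, scale = genextreme.fit(ann_max)
p_annual = genextreme.sf(850.0, c, loc, scale)   # annual exceedance prob.
print(p_annual, 1.0 / p_annual)                  # and the return period (yr)
```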
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vienna, John D.; Kim, Dong-Sang; Skorski, Daniel C.
2013-07-01
Recent glass formulation and melter testing data have suggested that significant increases in waste loading in HLW and LAW glasses are possible over current system planning estimates. The data (although limited in some cases) were evaluated to determine a set of constraints and models that could be used to estimate the maximum loading of specific waste compositions in glass. It is recommended that these models and constraints be used to estimate the likely HLW and LAW glass volumes that would result if the current glass formulation studies are successfully completed. It is recognized that some of the models are preliminary in nature and will change in the coming years, and the models do not currently address the prediction uncertainties that would be needed before they could be used in plant operations. The models and constraints are only meant to give an indication of rough glass volumes and are not intended to be used in plant operation or waste form qualification activities. A current research program is in place to develop the data, models, and uncertainty descriptions for that purpose. A fundamental tenet underlying the research reported in this document is to be less conservative than previous studies when developing constraints for estimating the glass to be produced by implementing current advanced glass formulation efforts. The less conservative approach documented herein should allow for the estimation of glass masses that may be realized if the current efforts in advanced glass formulations are completed over the coming years and are as successful as early indications suggest they may be. Because of this approach, there is an unquantifiable uncertainty in the ultimate glass volume projections due to model prediction uncertainties, which has to be considered along with other system uncertainties such as waste compositions and amounts to be immobilized, split factors between LAW and HLW, etc.
Influence of model reduction on uncertainty of flood inundation predictions
NASA Astrophysics Data System (ADS)
Romanowicz, R. J.; Kiczko, A.; Osuch, M.
2012-04-01
Derivation of flood risk maps requires an estimation of the maximum inundation extent for a flood with an assumed probability of exceedance, e.g., a 100- or 500-year flood. The results of numerical simulations of flood wave propagation are used to overcome the lack of relevant observations. In practice, deterministic 1-D models are used for flow routing, giving a simplified image of the flood wave propagation process. The solution of a 1-D model depends on the simplifications to the model structure, the initial and boundary conditions, and the estimates of model parameters, which are usually identified by solving the inverse problem based on the available noisy observations. Therefore, a large uncertainty is involved in the derivation of flood risk maps. In this study we examine the influence of model structure simplifications on estimates of flood extent for an urban river reach. As the study area we chose the Warsaw reach of the River Vistula, where nine bridges and several dikes are located. The aim of the study is to examine the influence of water structures on the derived model roughness parameters, with all the bridges and dikes taken into account, with a reduced number, and without any water infrastructure. The results indicate that the roughness parameter values of a 1-D HEC-RAS model can be adjusted to compensate for the reduction in model structure. However, the price we pay is reduced model robustness. Apart from this relatively simple question regarding reduced model structure, we also try to answer more fundamental questions regarding the relative importance of input, model structure simplification, parametric and rating curve uncertainty to the uncertainty of flood extent estimates. We apply pseudo-Bayesian methods of uncertainty estimation and Global Sensitivity Analysis as the main methodological tools. The results indicate that these uncertainties have a substantial influence on flood risk assessment. In the paper we present a simplified methodology allowing the influence of that uncertainty to be assessed. This work was supported by the National Science Centre of Poland (grant 2011/01/B/ST10/06866).
NASA Technical Reports Server (NTRS)
Shen, Suhung; Leptoukh, Gregory G.
2011-01-01
Surface air temperature (T(sub a)) is a critical variable in the energy and water cycle of the Earth-atmosphere system and is a key input element for hydrology and land surface models. This is a preliminary study to evaluate the estimation of T(sub a) from satellite remotely sensed land surface temperature (T(sub s)) using MODIS-Terra data over two Eurasian regions: northern China and the former USSR. High correlations are observed in both regions between station-measured T(sub a) and MODIS T(sub s). The relationships between the maximum T(sub a) and daytime T(sub s) depend significantly on land cover type, but the minimum T(sub a) and nighttime T(sub s) have little dependence on land cover type. The largest difference between maximum T(sub a) and daytime T(sub s) appears over barren and sparsely vegetated areas during summer. Using a linear regression method, the daily maximum T(sub a) was estimated from 1 km resolution MODIS T(sub s) under clear-sky conditions with coefficients calculated based on land cover type, while the minimum T(sub a) was estimated without considering land cover type. The uncertainty, measured as mean absolute error (MAE), of the estimated maximum T(sub a) varies from 2.4 C over closed shrublands to 3.2 C over grasslands, and the MAE of the estimated minimum T(sub a) is about 3.0 C.
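As a hedged illustration of the regression step described above, the sketch below fits a linear model Ta = a*Ts + b for one land cover class and reports the mean absolute error; the arrays are hypothetical stand-ins, not the MODIS or station data used in the study.

```python
import numpy as np

# Hypothetical samples: daytime MODIS LST (Ts) and station daily maximum
# air temperature (Ta), both in deg C, for a single land cover class.
ts = np.array([28.1, 31.4, 25.9, 33.0, 29.7])
ta = np.array([24.3, 27.0, 22.8, 28.9, 25.6])

# Fit Ta = a * Ts + b by ordinary least squares.
a, b = np.polyfit(ts, ta, deg=1)
ta_hat = a * ts + b

# Mean absolute error, the uncertainty measure quoted in the abstract.
mae = np.mean(np.abs(ta - ta_hat))
print(f"Ta = {a:.2f}*Ts + {b:.2f}, MAE = {mae:.2f} C")
```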
Estimation of Uncertainties in the Global Distance Test (GDT_TS) for CASP Models.
Li, Wenlin; Schaeffer, R Dustin; Otwinowski, Zbyszek; Grishin, Nick V
2016-01-01
The Critical Assessment of techniques for protein Structure Prediction (or CASP) is a community-wide blind test experiment to reveal the best accomplishments of structure modeling. Assessors have been using the Global Distance Test (GDT_TS) measure to quantify prediction performance since CASP3 in 1998. However, identifying significant score differences between close models is difficult because of the lack of uncertainty estimations for this measure. Here, we utilized the atomic fluctuations caused by structure flexibility to estimate the uncertainty of GDT_TS scores. Structures determined by nuclear magnetic resonance are deposited as ensembles of alternative conformers that reflect the structural flexibility, whereas standard X-ray refinement produces the static structure averaged over time and space for the dynamic ensembles. To recapitulate the structurally heterogeneous ensemble in the crystal lattice, we performed time-averaged refinement for X-ray datasets to generate structural ensembles for our GDT_TS uncertainty analysis. Our study demonstrates that the time-averaged refinements produced structure ensembles in better agreement with the experimental datasets than the averaged X-ray structures with B-factors. The uncertainty of the GDT_TS scores, quantified by their standard deviations (SDs), increases for scores lower than 50 (X-ray) and 70 (NMR), with maximum SDs of 0.3 and 1.23, respectively. We also applied our procedure to the high-accuracy version of the GDT-based score and produced similar results with slightly higher SDs. To facilitate score comparisons by the community, we developed a user-friendly web server that produces structure ensembles for NMR and X-ray structures and is accessible at http://prodata.swmed.edu/SEnCS. Our work helps to identify the significance of GDT_TS score differences, as well as to provide structure ensembles for estimating the SDs of any score.
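A minimal sketch of the ensemble-based uncertainty estimate described above: score one prediction against every conformer of a structural ensemble and take the standard deviation. The scores here are hypothetical; a real analysis would run a GDT_TS implementation against the generated ensembles.

```python
import numpy as np

# Hypothetical GDT_TS scores of one prediction evaluated against each
# conformer of an ensemble (e.g., NMR models or time-averaged
# refinement snapshots).
scores = np.array([62.4, 61.8, 63.1, 60.9, 62.7, 61.5])

gdt_mean = scores.mean()
gdt_sd = scores.std(ddof=1)  # sample SD quantifies score uncertainty
print(f"GDT_TS = {gdt_mean:.1f} +/- {gdt_sd:.2f}")

# Two models differ meaningfully only if their score gap exceeds the
# combined uncertainty, e.g. |g1 - g2| > 2 * sqrt(sd1**2 + sd2**2).
```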
NASA Astrophysics Data System (ADS)
Elishakoff, I.; Sarlin, N.
2016-06-01
In this paper we provide a general methodology for the analysis and design of systems involving uncertainties. Available experimental data are enclosed by geometric figures (triangle, rectangle, ellipse, parallelogram, super ellipse) of minimum area. These areas are then inflated using the Chebyshev inequality in order to take into account forecasted data. The next step consists of evaluating the response of the system when uncertainties are confined to one of the five suitably inflated geometric figures. This step involves a combined theoretical and computational analysis. We evaluate the maximum response of the system subjected to variation of uncertain parameters in each hypothesized region. The results of the triangular, interval, ellipsoidal, parallelogram, and super ellipsoidal calculi are compared with a view to identifying the region that leads to the minimum of the maximum response. That response is identified as the result of the suggested predictive inference. The methodology thus synthesizes the probabilistic notion with each of the five calculi. Using the term "pillar" in the title was inspired by the News Release (2013) on according the Honda Prize to J. Tinsley Oden, stating, among others, that "Dr. Oden refers to computational science as the "third pillar" of scientific inquiry, standing beside theoretical and experimental science. Computational science serves as a new paradigm for acquiring knowledge and informing decisions important to humankind". Analysis of systems with uncertainties necessitates employment of all three pillars. The analysis is based on the assumption that the five shapes are each different conservative estimates of the true bounding region. The smallest of the maximal displacements in the x and y directions (for a 2D system) therefore provides the closest estimate of the true displacements under the above assumption.
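The Chebyshev inflation step can be illustrated with a short sketch. For any distribution, P(|X - mu| >= k*sigma) <= 1/k^2, so choosing k = 1/sqrt(eps) guarantees at least 1 - eps coverage per coordinate; the data and the rectangular (interval) figure below are a hypothetical simplification of the paper's five calculi.

```python
import numpy as np

# Hypothetical 2-D samples of two uncertain parameters.
rng = np.random.default_rng(0)
data = rng.normal([1.0, 5.0], [0.2, 0.8], size=(50, 2))

eps = 0.05                # allowed probability of falling outside
k = 1.0 / np.sqrt(eps)    # Chebyshev: P(|X - mu| >= k*sigma) <= 1/k**2

mu = data.mean(axis=0)
sigma = data.std(axis=0, ddof=1)

# Inflated bounding rectangle, guaranteed (distribution-free) to hold
# at least 1 - eps of the probability mass in each coordinate.
lower, upper = mu - k * sigma, mu + k * sigma
print(lower, upper)
```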
NASA Astrophysics Data System (ADS)
Savani, N. P.; Vourlidas, A.; Richardson, I. G.; Szabo, A.; Thompson, B. J.; Pulkkinen, A.; Mays, M. L.; Nieves-Chinchilla, T.; Bothmer, V.
2017-02-01
This is a companion to Savani et al. (2015), which discussed how a first-order prediction of the internal magnetic field of a coronal mass ejection (CME) may be made from observations of its initial state at the Sun for space weather forecasting purposes (the Bothmer-Schwenn scheme (BSS) model). For eight CME events, we investigate how uncertainties in their predicted magnetic structure influence predictions of geomagnetic activity. We use an empirical relationship between the solar wind plasma drivers and the Kp index, together with the inferred magnetic vectors, to make a prediction of the time variation of Kp (Kp(BSS)). We find that a 2σ uncertainty range on the magnetic field magnitude (|B|) provides a practical and convenient solution for predicting the uncertainty in geomagnetic storm strength. We also find that the estimated CME velocity is a major source of error in the predicted maximum Kp. The time variation of Kp(BSS) is important for predicting periods of enhanced and maximum geomagnetic activity, driven by southerly directed magnetic fields, and periods of lower activity driven by northerly directed magnetic fields. We compare the skill score of our model to a number of other forecasting models, including the NOAA/Space Weather Prediction Center (SWPC) and Community Coordinated Modeling Center (CCMC)/SWRC estimates. The BSS model was the least biased prediction model, while the other models predominantly tended to overforecast significantly. The true skill score of the BSS prediction model (TSS = 0.43 ± 0.06) exceeds the results of two baseline models and the NOAA/SWPC forecast. The BSS model prediction performed on par with the CCMC/SWRC predictions while demonstrating a lower uncertainty.
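For reference, the true skill score quoted above is the hit rate minus the false-alarm rate of a binary forecast. A minimal sketch with hypothetical contingency counts:

```python
def true_skill_score(tp: int, fn: int, fp: int, tn: int) -> float:
    """TSS = hit rate - false alarm rate, ranging from -1 to 1."""
    return tp / (tp + fn) - fp / (fp + tn)

# Hypothetical storm/no-storm forecast verification counts.
print(true_skill_score(tp=12, fn=5, fp=7, tn=40))  # about 0.56
```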
Is there another major constituent in the atmosphere of Mars?. [radiogenic argon
NASA Technical Reports Server (NTRS)
Wood, G. P.
1974-01-01
In view of the possible finding of several tens of percent of inert gas in the atmosphere of Mars by an instrument on the descent module of the USSR's Mars 6 spacecraft, the likelihood of the correctness of this result was examined. The basis for the well-known fact that the most likely candidate is radiogenic argon is described. It is shown that, for the two important methods of investigating the atmosphere, earth-based CO2 infrared absorption spectroscopy and S-band occultation, about 20% argon can be accommodated within the estimated 1 standard deviation uncertainties of these methods. Within the estimated 3 standard deviation uncertainties, more than 35% is possible. It is also shown that even with 35% argon the maximum heat transfer rate on the Viking 75 entry vehicle does not exceed the design value.
TU-AB-202-03: Prediction of PET Transfer Uncertainty by DIR Error Estimating Software, AUTODIRECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J; Phillips, J
2016-06-15
Purpose: Deformable image registration (DIR) is a powerful tool, but DIR errors can adversely affect its clinical applications. To estimate voxel-specific DIR uncertainty, a software tool called AUTODIRECT (automated DIR evaluation of confidence tool) has been developed and validated. This work tests the ability of this software to predict uncertainty in the transfer of standard uptake values (SUV) from positron-emission tomography (PET) with DIR. Methods: Virtual phantoms are used for this study. Each phantom has a planning computed tomography (CT) image and a diagnostic PET-CT image set. A deformation was digitally applied to the diagnostic CT to create the planning CT image and establish a known deformation between the images. One lung and three rectum patient datasets were employed to create the virtual phantoms. Both of these sites have difficult deformation scenarios associated with them, which can affect DIR accuracy (lung tissue sliding and changes in rectal filling). The virtual phantoms were created to simulate these scenarios by introducing discontinuities in the deformation field at the lung and rectum borders. The DIR algorithm from the Plastimatch software was applied to these phantoms. The SUV mapping errors from the DIR were then compared to those predicted by AUTODIRECT. Results: The SUV error distributions closely followed the AUTODIRECT-predicted error distribution for the 4 test cases. The minimum and maximum PET SUVs were produced from AUTODIRECT at the 95% confidence interval before applying gradient-based SUV segmentation for each of these volumes. Notably, 93.5% of the target volume warped by the true deformation was included within the AUTODIRECT-predicted maximum SUV volume after the segmentation, while 78.9% of the target volume was within the target volume warped by Plastimatch. Conclusion: The AUTODIRECT framework is able to predict PET transfer uncertainty caused by DIR, which enables an understanding of the associated target volume uncertainty.
NASA Astrophysics Data System (ADS)
Gronewold, A. D.; Wolpert, R. L.; Reckhow, K. H.
2007-12-01
Most probable number (MPN) and colony-forming-unit (CFU) are two estimates of fecal coliform bacteria concentration commonly used as measures of water quality in United States shellfish harvesting waters. The MPN is the maximum likelihood estimate (or MLE) of the true fecal coliform concentration based on counts of non-sterile tubes in serial dilution of a sample aliquot, indicating bacterial metabolic activity. The CFU is the MLE of the true fecal coliform concentration based on the number of bacteria colonies emerging on a growth plate after inoculation from a sample aliquot. Each estimating procedure has intrinsic variability and is subject to additional uncertainty arising from minor variations in experimental protocol. Several versions of each procedure (using different sized aliquots or different numbers of tubes, for example) are in common use, each with its own levels of probabilistic and experimental error and uncertainty. It has been observed empirically that the MPN procedure is more variable than the CFU procedure, and that MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the observed variability in, and discrepancy between, MPN and CFU measurements. We then explore how this variability and uncertainty might propagate into shellfish harvesting area management decisions through a two-phased modeling strategy. First, we apply our probabilistic model in a simulation-based analysis of future water quality standard violation frequencies under alternative land use scenarios, such as those evaluated under guidelines of the total maximum daily load (TMDL) program. Second, we apply our model to water quality data from shellfish harvesting areas which at present are closed (either conditionally or permanently) to shellfishing, to determine if alternative laboratory analysis procedures might have led to different management decisions. Our research results indicate that the (often large) observed differences between MPN and CFU values for the same water body are well within the ranges predicted by our probabilistic model. Our research also indicates that the probability of violating current water quality guidelines at specified true fecal coliform concentrations depends on the laboratory procedure used. As a result, quality-based management decisions, such as opening or closing a shellfishing area, may also depend on the laboratory procedure used.
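The MPN calculation described above can be sketched as a one-parameter maximum likelihood problem: each tube receiving sample volume v is positive with probability 1 - exp(-lambda*v), and lambda is chosen to maximize the binomial likelihood of the observed positive-tube counts. The dilution series below is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical 3-dilution MPN series: sample volumes (mL), tubes per
# dilution, and number of positive (non-sterile) tubes observed.
vol = np.array([10.0, 1.0, 0.1])
n_tubes = np.array([5, 5, 5])
n_pos = np.array([5, 3, 1])

def neg_log_lik(lam):
    # P(tube positive) = 1 - exp(-lam * v) for concentration lam per mL.
    p = 1.0 - np.exp(-lam * vol)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(n_pos * np.log(p) + (n_tubes - n_pos) * np.log(1 - p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 100.0), method="bounded")
print(f"MPN (MLE concentration) = {res.x:.2f} organisms per mL")
```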
Climate data induced uncertainty in model-based estimations of terrestrial primary productivity
NASA Astrophysics Data System (ADS)
Wu, Zhendong; Ahlström, Anders; Smith, Benjamin; Ardö, Jonas; Eklundh, Lars; Fensholt, Rasmus; Lehsten, Veiko
2017-06-01
Model-based estimations of historical fluxes and pools of the terrestrial biosphere differ substantially. These differences arise not only from differences between models but also from differences in the environmental and climatic data used as input to the models. Here we investigate the role of uncertainties in historical climate data by performing simulations of terrestrial gross primary productivity (GPP) using a process-based dynamic vegetation model (LPJ-GUESS) forced by six different climate datasets. We find that the climate-induced uncertainty, defined as the range among historical simulations in GPP when forcing the model with the different climate datasets, can be as high as 11 Pg C yr-1 globally (9% of mean GPP). We also assessed a hypothetical maximum climate data induced uncertainty by combining climate variables from different datasets, which resulted in significantly larger uncertainties of 41 Pg C yr-1 globally, or 32% of mean GPP. The uncertainty is partitioned into components associated with the three main climatic drivers: temperature, precipitation, and shortwave radiation. Additionally, we illustrate how the uncertainty due to a given climate driver depends both on the magnitude of the forcing data uncertainty (climate data range) and on the apparent sensitivity of the modeled GPP to the driver (apparent model sensitivity). We find that LPJ-GUESS overestimates GPP compared to an empirically based GPP data product in all land cover classes except tropical forests. Tropical forests emerge as a disproportionate source of uncertainty in GPP estimation, both in the simulations and in the empirical data products. The tropical forest uncertainty is most strongly associated with shortwave radiation and precipitation forcing, for which the climate data range contributes more to the overall uncertainty than the apparent model sensitivity to forcing. Globally, precipitation dominates the climate-induced uncertainty over nearly half of the vegetated land area, mainly due to the climate data range and less so due to the apparent model sensitivity. Overall, climate data ranges are found to contribute more to the climate-induced uncertainty than the apparent model sensitivity to forcing. Our study highlights the need to better constrain tropical climate, and demonstrates that uncertainty caused by climatic forcing data must be considered when comparing and evaluating carbon cycle model results and empirical datasets.
NASA Astrophysics Data System (ADS)
van der Wal, Wouter; Wu, Patrick; Sideris, Michael G.; Shum, C. K.
2008-10-01
Monthly geopotential spherical harmonic coefficients from the GRACE satellite mission are used to determine their usefulness and limitations for studying glacial isostatic adjustment (GIA) in North America. Secular gravity rates are estimated by unweighted least-squares estimation using release 4 coefficients from August 2002 to August 2007 provided by the Center for Space Research (CSR), University of Texas. Smoothing is required to suppress short-wavelength noise, in addition to filtering to diminish geographically correlated errors, as shown in previous studies. Optimal cut-off degrees and orders are determined for the destriping filter to maximize the signal-to-noise ratio. The halfwidth of the Gaussian filter is shown to significantly affect the sensitivity of the GRACE data (with respect to upper mantle viscosity and ice loading history); therefore, the halfwidth should be selected based on the desired sensitivity. It is shown that an increase in water storage in an area southwest of Hudson Bay, from the summer of 2003 to the summer of 2006, contributes up to half of the maximum estimated gravity rate. Hydrology models differ in their predictions of the secular change in water storage, so even 4-year trend estimates are influenced by the uncertainty in water storage changes. Land ice melting in Greenland and Alaska makes a non-negligible contribution, up to one-fourth of the maximum gravity rate. The estimated secular gravity rate shows two distinct peaks that are possibly due to two domes in the former Pleistocene ice cover: west and southeast of Hudson Bay. With a limited number of models, a better fit is obtained with models that use the ICE-3G ice history than with those that use ICE-5G. However, the uncertainty in interannual variations in hydrology models is too large to constrain the ice loading history with the current data span. For future work in which GRACE will be used to constrain ice loading history and the Earth's radial viscosity profile, it is important to include realistic uncertainty estimates for hydrology models and land ice melting, in addition to the effects of lateral heterogeneity.
Interval-based reconstruction for uncertainty quantification in PET
NASA Astrophysics Data System (ADS)
Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis
2018-02-01
A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM) is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum likelihood—expectation maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval valued reconstruction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogen, K.T.; Conrado, C.L.; Robison, W.L.
A detailed analysis of uncertainty and interindividual variability in estimated doses was conducted for a rehabilitation scenario for Bikini Island at Bikini Atoll, in which the top 40 cm of soil would be removed in the housing and village area, and the rest of the island is treated with potassium fertilizer, prior to an assumed resettlement date of 1999. Predicted doses were considered for the following fallout-related exposure pathways: ingested Cesium-137 and Strontium-90, external gamma exposure, and inhalation and ingestion of Americium-241 + Plutonium-239+240. Two dietary scenarios were considered: (1) imported foods are available (IA), and (2) imported foods are unavailable (only local foods are consumed) (IUA). Corresponding calculations of uncertainty in estimated population-average dose showed that after ~5 y of residence on Bikini, the upper and lower 95% confidence limits with respect to uncertainty in this dose are estimated to be approximately 2-fold higher and lower than its population-average value, respectively (under both IA and IUA assumptions). Corresponding calculations of interindividual variability in the expected value of dose with respect to uncertainty showed that after ~5 y of residence on Bikini, the upper and lower 95% confidence limits with respect to interindividual variability in this dose are estimated to be approximately 2-fold higher and lower than its expected value, respectively (under both IA and IUA assumptions). For reference, the expected values of population-average dose at age 70 were estimated to be 1.6 and 5.2 cSv under the IA and IUA dietary assumptions, respectively. Assuming that 200 Bikini resettlers would be exposed to local foods (under both IA and IUA assumptions), the maximum 1-y dose received by any Bikini resident is most likely to be approximately 2 and 8 mSv under the IA and IUA assumptions, respectively.
NASA Technical Reports Server (NTRS)
Carson, John M., III; Bayard, David S.
2006-01-01
G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
Motion compensation using origin ensembles in awake small animal positron emission tomography
NASA Astrophysics Data System (ADS)
Gillam, John E.; Angelis, Georgios I.; Kyme, Andre Z.; Meikle, Steven R.
2017-02-01
In emission tomographic imaging, the stochastic origin ensembles algorithm provides unique information regarding the detected counts given the measured data. Precision in both voxel and region-wise parameters may be determined for a single data set based on the posterior distribution of the count density, allowing uncertainty estimates to be allocated to quantitative measures. Uncertainty estimates are of particular importance in awake animal neurological and behavioral studies, for which head motion, unique for each acquired data set, perturbs the measured data. Motion compensation can be conducted when rigid head pose is measured during the scan. However, errors in pose measurements used for compensation can degrade the data and hence quantitative outcomes. In this investigation, motion compensation and detector resolution models were incorporated into the basic origin ensembles algorithm and an efficient approach to computation was developed. The approach was validated against maximum likelihood expectation-maximisation and tested using simulated data. The resultant algorithm was then used to analyse quantitative uncertainty in regional activity estimates arising from changes in pose measurement precision. Finally, the posterior covariance acquired from a single data set was used to describe correlations between regions of interest, providing information about pose measurement precision that may be useful in system analysis and design. The investigation demonstrates the use of origin ensembles as a powerful framework for evaluating the statistical uncertainty of voxel and regional estimates. While rigid motion was considered here in the context of awake animal PET, the extension to arbitrary motion may provide clinical utility where respiratory or cardiac motion perturbs the measured data.
Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Clifford W.; Martin, Curtis E.
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of First Solar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found that uncertainty in the models for POA irradiance and effective irradiance were the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
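A minimal sketch of the residual-sampling propagation described above, assuming a toy two-stage model chain and synthetic residual samples in place of the report's measured ones:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical residual samples (model minus measurement) for two stages
# of the model chain, e.g. the POA irradiance and DC power models.
resid_poa = rng.normal(0.0, 12.0, 500)   # W/m^2
resid_dc = rng.normal(0.0, 0.8, 500)     # kW

def model_chain(poa):
    """Toy stand-in for the irradiance-to-AC-power model sequence."""
    dc = 0.18 * poa          # nominal DC power (kW) from POA irradiance
    return 0.96 * dc         # inverter converts DC to AC

n_draws, poa_nominal = 10_000, 800.0
# Propagate uncertainty by resampling the empirical residual distributions.
poa_samples = poa_nominal + rng.choice(resid_poa, n_draws)
ac_samples = model_chain(poa_samples) + 0.96 * rng.choice(resid_dc, n_draws)

print(f"AC power: {ac_samples.mean():.1f} kW, "
      f"relative sd = {ac_samples.std() / ac_samples.mean():.2%}")
```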
NASA Astrophysics Data System (ADS)
Sykes, J. F.; Kang, M.; Thomson, N. R.
2007-12-01
The TCE release from The Lockformer Company in Lisle, Illinois resulted in a plume in a confined aquifer that is more than 4 km long and impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence low-likelihood events, is critical in determining the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities, and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, including the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and observational data due to errors, biases, and limitations. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo samples were found to be inadequate for uncertainty analysis of this case study due to their inability to find parameter sets that meet the predefined physical criteria. Successful results were achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria performs the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in this work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
A Bayesian Framework of Uncertainties Integration in 3D Geological Model
NASA Astrophysics Data System (ADS)
Liang, D.; Liu, X.
2017-12-01
3D geological models can describe complicated geological phenomena in an intuitive way, but their application may be limited by uncertain factors. Great progress has been made over the years, yet many studies have decomposed the uncertainties of a geological model and analyzed each source separately, ignoring the combined impact of multi-source uncertainties. To evaluate the synthetical uncertainty, we choose probability distributions to quantify uncertainty and propose a Bayesian framework for uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the synthetical uncertainty of a geological model. Uncertainties propagate and accumulate in the modeling process, so the gradual integration of multi-source uncertainty is a kind of simulation of uncertainty propagation. Bayesian inference accomplishes uncertainty updating in the modeling process. The maximum entropy principle is effective for estimating the prior probability distribution, ensuring that the prior is subject to the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution to evaluate the synthetical uncertainty of the geological model. This posterior distribution represents the combined impact of all the uncertain factors on the spatial structure of the geological model. The framework provides a solution for evaluating the combined impact of multi-source uncertainties on a geological model and an approach to studying the uncertainty propagation mechanism in geological modeling.
NASA Astrophysics Data System (ADS)
Zhou, Y.; Gu, H.; Williams, C. A.
2017-12-01
Results from terrestrial carbon cycle models have multiple sources of uncertainty, each with its own behavior and range. Their relative importance, and how they combine, has received little attention. This study investigates how various sources of uncertainty propagate, temporally and spatially, in CASA-Disturbance (CASA-D). CASA-D simulates the impact of climatic forcing and disturbance legacies on forest carbon dynamics with the following steps. First, we infer annual growth and mortality rates from measured biomass stocks (FIA) over time and disturbance records (e.g., fire, harvest, bark beetle) to represent annual post-disturbance carbon flux trajectories across forest types and site productivity settings. Then, annual carbon fluxes are estimated from these trajectories by using time since disturbance, which is inferred from biomass (NBCD 2000) and disturbance maps (NAFD, MTBS, and ADS). Finally, we apply monthly climatic scalars derived from the default CASA to temporally distribute annual carbon fluxes to each month. This study assesses carbon flux uncertainty from two sources: driving data, including climatic and forest biomass inputs, and the three most sensitive parameters in CASA-D, namely the maximum light use efficiency, the temperature sensitivity of soil respiration (Q10), and the optimum temperature, identified by using EFAST (Extended Fourier Amplitude Sensitivity Testing). We quantify model uncertainties from each, and report their relative importance in estimating the forest carbon sink/source in the southeastern United States from 2003 to 2010.
Bayesian Methods for Effective Field Theories
NASA Astrophysics Data System (ADS)
Wesolowski, Sarah
Microscopic predictions of the properties of atomic nuclei have reached a high level of precision in the past decade. This progress mandates improved uncertainty quantification (UQ) for a robust comparison of experiment with theory. With the uncertainty from many-body methods under control, calculations are now sensitive to the input inter-nucleon interactions. These interactions include parameters that must be fit to experiment, inducing both uncertainty from the fit and from missing physics in the operator structure of the Hamiltonian. Furthermore, the implementation of the inter-nucleon interactions is not unique, which presents the additional problem of assessing results using different interactions. Effective field theories (EFTs) take advantage of a separation of high- and low-energy scales in the problem to form a power-counting scheme that allows the organization of terms in the Hamiltonian based on their expected contribution to observable predictions. This scheme gives a natural framework for quantification of uncertainty due to missing physics. The free parameters of the EFT, called the low-energy constants (LECs), must be fit to data, but in a properly constructed EFT these constants will be natural-sized, i.e., of order unity. The constraints provided by the EFT, namely the size of the systematic uncertainty from truncation of the theory and the natural size of the LECs, are assumed information even before a calculation is performed or a fit is done. Bayesian statistical methods provide a framework for treating uncertainties that naturally incorporates prior information as well as putting stochastic and systematic uncertainties on an equal footing. For EFT UQ Bayesian methods allow the relevant EFT properties to be incorporated quantitatively as prior probability distribution functions (pdfs). Following the logic of probability theory, observable quantities and underlying physical parameters such as the EFT breakdown scale may be expressed as pdfs that incorporate the prior pdfs. Problems of model selection, such as distinguishing between competing EFT implementations, are also natural in a Bayesian framework. In this thesis we focus on two complementary topics for EFT UQ using Bayesian methods--quantifying EFT truncation uncertainty and parameter estimation for LECs. Using the order-by-order calculations and underlying EFT constraints as prior information, we show how to estimate EFT truncation uncertainties. We then apply the result to calculating truncation uncertainties on predictions of nucleon-nucleon scattering in chiral effective field theory. We apply model-checking diagnostics to our calculations to ensure that the statistical model of truncation uncertainty produces consistent results. A framework for EFT parameter estimation based on EFT convergence properties and naturalness is developed which includes a series of diagnostics to ensure the extraction of the maximum amount of available information from data to estimate LECs with minimal bias. We develop this framework using model EFTs and apply it to the problem of extrapolating lattice quantum chromodynamics results for the nucleon mass. We then apply aspects of the parameter estimation framework to perform case studies in chiral EFT parameter estimation, investigating a possible operator redundancy at fourth order in the chiral expansion and the appropriate inclusion of truncation uncertainty in estimating LECs.
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images in combination with mechanistic models enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As a proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds than local approximation methods.
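A minimal sketch of the profile likelihood procedure described above, using a toy exponential-decay model with log-normal noise in place of the PDE-constrained diffusion model: fix the parameter of interest on a grid, re-optimize the remaining parameters, and read off an approximate confidence interval from the likelihood threshold.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical model: y = a * exp(-k * t) observed with log-normal noise.
rng = np.random.default_rng(7)
t = np.linspace(0.0, 5.0, 30)
y = 2.0 * np.exp(-0.7 * t) * rng.lognormal(0.0, 0.1, t.size)

def nll(theta):
    # Negative log-likelihood: log(y) ~ Normal(log(a) - k*t, s), up to a constant.
    a, k, s = theta
    if a <= 0 or k <= 0 or s <= 0:
        return np.inf
    r = np.log(y) - (np.log(a) - k * t)
    return 0.5 * np.sum((r / s) ** 2) + t.size * np.log(s)

# Profile likelihood for k: fix k on a grid, re-optimize the remaining
# parameters, and record the minimum negative log-likelihood.
k_grid = np.linspace(0.4, 1.0, 25)
profile = []
for k_fix in k_grid:
    res = minimize(lambda p: nll([p[0], k_fix, p[1]]), x0=[2.0, 0.1],
                   method="Nelder-Mead")
    profile.append(res.fun)

# Approximate 95% CI: grid points within chi2_1(0.95)/2 = 1.92 of the minimum.
profile = np.array(profile)
inside = k_grid[profile <= profile.min() + 1.92]
print(f"k in [{inside.min():.2f}, {inside.max():.2f}] (approx. 95% CI)")
```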
NASA Astrophysics Data System (ADS)
Xu, Rong; Liu, Yongsheng
2016-12-01
The Emeishan large igneous province (ELIP) is renowned for its world-class Ni-Cu-(PGE) deposits and its link with the Capitanian mass extinction. The ELIP is generally thought to be associated with a deep mantle plume; however, evidence for such a model has been challenged through geology, geophysics and geochemistry. In many large igneous province settings, olivine-melt equilibrium thermometry has been used to argue for or against the existence of plumes. However, this method involves large uncertainties such as assumptions regarding melt compositions and crystallisation pressures. The Al-in-olivine thermometer avoids these uncertainties and is used here to estimate the temperatures of picrites in the ELIP. The calculated maximum temperature (1440 °C) is significantly (~250 °C) higher than the Al-in-olivine temperature estimated for the average MORB, thus providing compelling evidence for the existence of thermal mantle plumes in the ELIP.
NASA Astrophysics Data System (ADS)
Hadwin, Paul J.; Sipkens, T. A.; Thomson, K. A.; Liu, F.; Daun, K. J.
2016-01-01
Auto-correlated laser-induced incandescence (AC-LII) infers the soot volume fraction (SVF) of soot particles by comparing the spectral incandescence from laser-energized particles to the pyrometrically inferred peak soot temperature. This calculation requires detailed knowledge of model parameters such as the absorption function of soot, which may vary with combustion chemistry, soot age, and the internal structure of the soot. This work presents a Bayesian methodology to quantify such uncertainties. This technique treats the additional "nuisance" model parameters, including the soot absorption function, as stochastic variables and incorporates the current state of knowledge of these parameters into the inference process through maximum entropy priors. While standard AC-LII analysis provides a point estimate of the SVF, Bayesian techniques infer the posterior probability density, which will allow scientists and engineers to better assess the reliability of AC-LII inferred SVFs in the context of environmental regulations and competing diagnostics.
Representation of Probability Density Functions from Orbit Determination using the Particle Filter
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell
2012-01-01
Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher-order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy depends on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using Independent Component Analysis (ICA) as a non-Gaussian dimensionality reduction method that is capable of maintaining the higher-order statistical information obtained using the PF. Methods such as Principal Component Analysis (PCA) utilize only up to second-order statistics, and hence do not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios, a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
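A hedged sketch contrasting the two reductions discussed above on a hypothetical non-Gaussian particle cloud: PCA keeps directions of maximum variance (second-order statistics only), while FastICA seeks statistically independent components that retain higher-order structure.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# Hypothetical particle cloud: 5000 samples of a 6-D state drawn from a
# skewed (non-Gaussian) distribution, then linearly mixed.
rng = np.random.default_rng(1)
particles = rng.gamma(2.0, 1.0, size=(5000, 6)) @ rng.normal(size=(6, 6))

# PCA: project onto the top-variance directions.
z_pca = PCA(n_components=3).fit_transform(particles)

# ICA: recover approximately independent components, preserving
# higher-order statistical information of the particle distribution.
z_ica = FastICA(n_components=3, random_state=1).fit_transform(particles)

print(z_pca.shape, z_ica.shape)
```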
Faris, A M; Wang, H-H; Tarone, A M; Grant, W E
2016-05-31
Estimates of insect age can be informative in death investigations and, when certain assumptions are met, can be useful for estimating the postmortem interval (PMI). Currently, the accuracy and precision of PMI estimates are unknown, as error can arise from sources of variation such as measurement error, environmental variation, or genetic variation. Ecological models are an abstract, mathematical representation of an ecological system that can make predictions about the dynamics of the real system. To quantify the variation associated with the pre-appearance interval (PAI), we developed an ecological model that simulates the colonization of vertebrate remains by Cochliomyia macellaria (Fabricius) (Diptera: Calliphoridae), a primary colonizer in the southern United States. The model is based on a development data set derived from a local population and represents the uncertainty in local temperature variability to address PMI estimates at local sites. After a PMI estimate is calculated for each individual, the model calculates the maximum, minimum, and mean PMI, as well as the range and standard deviation, for the stadia collected. The model framework presented here is one manner by which errors in PMI estimates can be addressed in court when no empirical data are available for the parameter of interest. We show that PAI is a potentially important source of error and that an ecological model is one way to evaluate its impact. Such models can be re-parameterized with any development data set, PAI function, temperature regime, assumption of interest, etc., to estimate PMI and quantify uncertainty that arises from specific prediction systems. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Stotts, Steven A; Koch, Robert A
2017-08-01
In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.
Bayesian Monte Carlo and Maximum Likelihood Approach for ...
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology which combines Bayesian Monte Carlo simulation and Maximum Likelihood estimation (BMCML) to calibrate a lake oxygen recovery model. We first derive an analytical solution of the differential equation governing lake-averaged oxygen dynamics as a function of time-variable wind speed. Statistical inferences on model parameters and predictive uncertainty are then drawn by Bayesian conditioning of the analytical solution on observed daily wind speed and oxygen concentration data obtained from an earlier study during two recovery periods on a eutrophic lake in upstate New York. The model is calibrated using oxygen recovery data for one year, and the statistical inferences were validated using recovery data for another year. Compared with an essentially two-step regression and optimization approach, the BMCML results are more comprehensive and performed relatively better in predicting the observed temporal dissolved oxygen (DO) levels in the lake. BMCML also produced calibration and validation results comparable with those obtained using the popular Markov chain Monte Carlo (MCMC) technique, and is computationally simpler and easier to implement than MCMC. Next, using the calibrated model, we derive an optimal relationship between liquid film-transfer coefficient ...
NASA Astrophysics Data System (ADS)
Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.
2015-03-01
In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread functions (PSF) and various sources of noise on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves 0.93 micron axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.
Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†
Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia
2015-01-01
Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has long been challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives for estimating the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a 'generalised Cochran between‐study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
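For context, the sketch below implements the DerSimonian and Laird moment estimator that the review identifies as the widely used default; the five study effects and variances are hypothetical.

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird between-study variance (tau^2) estimate.

    y: study effect estimates; v: their within-study variances.
    """
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)          # fixed-effect mean
    q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)      # truncate at zero

# Hypothetical meta-analysis of five studies.
y = np.array([0.32, 0.15, 0.48, 0.21, 0.40])
v = np.array([0.010, 0.020, 0.015, 0.008, 0.025])
print(f"tau^2 (DL) = {dersimonian_laird(y, v):.4f}")
```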
NASA Astrophysics Data System (ADS)
Vianello, Giacomo
2018-05-01
Several experiments in high-energy physics and astrophysics can be treated as on/off measurements, where an observation potentially containing a new source or effect (“on” measurement) is contrasted with a background-only observation free of the effect (“off” measurement). In counting experiments, the significance of the new source or effect can be estimated with a widely used formula from Li & Ma, which assumes that both measurements are Poisson random variables. In this paper we study three other cases: (i) the ideal case where the background measurement has no uncertainty, which can be used to study the maximum sensitivity that an instrument can achieve, (ii) the case where the background estimate b in the off measurement has an additional systematic uncertainty, and (iii) the case where b is a Gaussian random variable instead of a Poisson random variable. The latter case applies when b comes from a model fitted on archival or ancillary data, or from the interpolation of a function fitted on data surrounding the candidate new source/effect. Practitioners typically use a formula that is only valid when b is large and when its uncertainty is very small, while we derive a general formula that can be applied in all regimes. We also develop simple methods that can be used to assess how much an estimate of significance is sensitive to systematic uncertainties on the efficiency or on the background. Examples of applications include the detection of short gamma-ray bursts and of new X-ray or γ-ray sources. All the techniques presented in this paper are made available in a Python code that is ready to use.
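A minimal implementation of the Li & Ma significance for the Poisson on/off case discussed above (their equation 17), with hypothetical counts; the paper's Gaussian-background and systematic-uncertainty variants are not reproduced here.

```python
import numpy as np

def li_ma_significance(n_on: float, n_off: float, alpha: float) -> float:
    """Li & Ma (1983) significance for a Poisson on/off measurement.

    alpha is the on/off exposure ratio; both counts are Poisson.
    """
    total = n_on + n_off
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / total)
    term_off = n_off * np.log((1 + alpha) * n_off / total)
    return np.sqrt(2.0 * (term_on + term_off))

# Hypothetical counts: 130 events on-source, 400 off-source, alpha = 0.25.
print(f"S = {li_ma_significance(130, 400, 0.25):.2f} sigma")
```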
Estimators of The Magnitude-Squared Spectrum and Methods for Incorporating SNR Uncertainty
Lu, Yang; Loizou, Philipos C.
2011-01-01
Statistical estimators of the magnitude-squared spectrum are derived based on the assumption that the magnitude-squared spectrum of the noisy speech signal can be computed as the sum of the (clean) signal and noise magnitude-squared spectra. Maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators are derived based on a Gaussian statistical model. The gain function of the MAP estimator was found to be identical to the gain function used in the ideal binary mask (IdBM) that is widely used in computational auditory scene analysis (CASA). As such, it was binary and assumed the value of 1 if the local SNR exceeded 0 dB, and the value of 0 otherwise. By modeling the local instantaneous SNR as an F-distributed random variable, soft masking methods were derived that incorporate SNR uncertainty. The soft masking method, in particular, which weighted the noisy magnitude-squared spectrum by the a priori probability that the local SNR exceeds 0 dB, was shown to be identical to the Wiener gain function. Results indicated that the proposed estimators yielded significantly better speech quality than the conventional MMSE spectral power estimators, in terms of lower residual noise and lower speech distortion. PMID:21886543
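A small sketch of the two gain functions discussed above: the ideal binary mask passes a bin only when the local SNR exceeds 0 dB, while the soft mask weighting by P(SNR > 0 dB) coincides with the Wiener gain. The SNR values are hypothetical.

```python
import numpy as np

def idbm_gain(snr_linear):
    """Ideal binary mask: pass the bin if the local SNR exceeds 0 dB."""
    return (snr_linear > 1.0).astype(float)

def wiener_gain(snr_linear):
    """Soft mask equal to the Wiener gain snr / (1 + snr)."""
    return snr_linear / (1.0 + snr_linear)

snr = np.array([0.25, 1.0, 4.0])   # local a priori SNR (linear scale)
print(idbm_gain(snr), wiener_gain(snr))
```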
Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.
2016-01-01
Evapotranspiration (ET) is an important component of the water cycle – ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements of water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess these ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as the LST input for the SSEBop algorithm. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET, with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001–2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors less than 20% across multiple flux tower sites distributed across different biomes. This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting that the systematic error or bias of the SSEBop model is within the normal range. This finding implies that the simplified parameterization of the SSEBop model did not significantly affect the accuracy of the ET estimates while increasing the ease of model setup for operational applications. The sensitivity analysis indicated that the SSEBop model is most sensitive to the input variables, land surface temperature (LST) and reference ET (ETo), and to the parameters, differential temperature (dT) and maximum ET scalar (Kmax), particularly during the non-growing season and in dry areas. In summary, the uncertainty assessment verifies that the SSEBop model is a reliable and robust method for large-area ET estimation. The SSEBop model estimates can be further improved by reducing errors in two input variables (ETo and LST) and two key parameters (Kmax and dT).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brodin, N. Patrik, E-mail: nils.patrik.brodin@rh.dk; Niels Bohr Institute, University of Copenhagen, Copenhagen; Vogelius, Ivan R.
2013-10-01
Purpose: As pediatric medulloblastoma (MB) is a relatively rare disease, it is important to extract the maximum information from trials and cohort studies. Here, a framework was developed for modeling tumor control with multiple modes of failure and time-to-progression for standard-risk MB, using published pattern of failure data. Methods and Materials: Outcome data for standard-risk MB published after 1990 with pattern of relapse information were used to fit a tumor control dose-response model addressing failures in both the high-dose boost volume and the elective craniospinal volume. Estimates of 5-year event-free survival from 2 large randomized MB trials were used to model the time-to-progression distribution. Uncertainty in freedom from progression (FFP) was estimated by Monte Carlo sampling over the statistical uncertainty in input data. Results: The estimated 5-year FFP (95% confidence intervals [CI]) for craniospinal doses of 15, 18, 24, and 36 Gy while maintaining 54 Gy to the posterior fossa was 77% (95% CI, 70%-81%), 78% (95% CI, 73%-81%), 79% (95% CI, 76%-82%), and 80% (95% CI, 77%-84%), respectively. The uncertainty in FFP was considerably larger for craniospinal doses below 18 Gy, reflecting the lack of data in the lower dose range. Conclusions: Estimates of tumor control and time-to-progression for standard-risk MB provide a data-driven setting for hypothesis generation or power calculations for prospective trials, taking the uncertainties into account. The presented methods can also be applied to incorporate further risk-stratification, for example based on molecular biomarkers, when the necessary data become available.
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
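The slope method with inverse-variance weighting is compact enough to sketch: for a homogeneous path the range-corrected log signal obeys ln(P r^2) = ln C - 2 sigma r, so a weighted linear fit recovers the extinction coefficient. The synthetic signal and noise level below are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic single-wavelength lidar return over a homogeneous path:
    r = np.linspace(0.5, 4.0, 50)                 # range (km)
    sigma_true, c_true = 0.3, 1.0e4               # extinction (1/km), intercept
    p = c_true * np.exp(-2.0 * sigma_true * r) / r**2
    p = p + rng.normal(0.0, 5.0, r.size)          # additive Gaussian noise, sd = 5

    # Slope method: ln(P r^2) = ln C - 2 sigma r; weight by the inverse
    # variance of y = ln(P r^2), which is approximately (noise_sd / P)^2.
    y = np.log(p * r**2)
    w = (p / 5.0) ** 2
    A = np.vstack([np.ones_like(r), -2.0 * r]).T
    coef, *_ = np.linalg.lstsq(A * np.sqrt(w)[:, None], y * np.sqrt(w), rcond=None)
    print(f"retrieved sigma = {coef[1]:.3f} 1/km (true {sigma_true})")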
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
An analysis and demonstration of clock synchronization by VLBI
NASA Technical Reports Server (NTRS)
Hurd, W. J.
1972-01-01
A prototype of a semireal-time system for synchronizing the DSN station clocks by radio interferometry was successfully demonstrated. The system utilized an approximate maximum likelihood estimation procedure for processing the data, thereby achieving essentially optimum time synchronization estimates for a given amount of data, or equivalently, minimizing the amount of data required for reliable estimation. Synchronization accuracies as good as 100 nsec rms were achieved between DSS 11 and DSS 12, both at Goldstone, California. The accuracy can be improved by increasing the system bandwidth until the fundamental limitations due to position uncertainties of baseline and source and atmospheric effects are reached. These limitations are under ten nsec for transcontinental baselines.
Developing a probability-based model of aquifer vulnerability in an agricultural region
NASA Astrophysics Data System (ADS)
Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei
2013-04-01
Hydrogeological settings of aquifers strongly influence regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is critical for planning a scheme of groundwater quality protection. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging, and determined various risk categories of contamination potentials based on estimated vulnerability indexes. Categories and ratings of six parameters in the probability-based DRASTIC model were probabilistically characterized according to the parameter classification methods of selecting a maximum estimation probability and calculating an expected value. Moreover, the probability-based estimation and assessment gave excellent insight into the propagation of parameter uncertainty due to limited observation data. To examine the capacity of the developed probability-based DRASTIC model to predict pollution, medium, high, and very high risk categories of contamination potentials were compared with observed nitrate-N concentrations exceeding 0.5 mg/L, which indicate anthropogenic groundwater pollution. The results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and characterizing parameter uncertainty via the probability estimation processes.
Fast radio burst event rate counts - I. Interpreting the observations
NASA Astrophysics Data System (ADS)
Macquart, J.-P.; Ekers, R. D.
2018-02-01
The fluence distribution of the fast radio burst (FRB) population (the 'source count' distribution, N(>F) ∝ F^α) is a crucial diagnostic of its distance distribution, and hence the progenitor evolutionary history. We critically reanalyse current estimates of the FRB source count distribution. We demonstrate that the Lorimer burst (FRB 010724) is subject to discovery bias, and should be excluded from all statistical studies of the population. We re-examine the evidence for flat, α > -1, source count estimates based on the ratio of single-beam to multiple-beam detections with the Parkes multibeam receiver, and show that current data imply only a very weak constraint of α ≲ -1.3. A maximum-likelihood analysis applied to the portion of the Parkes FRB population detected above the observational completeness fluence of 2 Jy ms yields α = -2.6^{+0.7}_{-1.3}. Uncertainties in the location of each FRB within the Parkes beam render estimates of the Parkes event rate uncertain in both normalizing survey area and the estimated post-beam-corrected completeness fluence; this uncertainty needs to be accounted for when comparing the event rate against event rates measured at other telescopes.
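For fluences above a completeness limit drawn from a cumulative distribution N(>F) ∝ F^α, the maximum-likelihood estimate of the slope has the closed Pareto form used in this sketch; the simulated sample, its size, and the completeness fluence are illustrative, not the Parkes data.

    import numpy as np

    rng = np.random.default_rng(2)

    # Simulate fluences above a completeness limit from N(>F) ∝ F^alpha:
    f_min, alpha_true, n = 2.0, -1.5, 30                   # Jy ms; cumulative slope
    f = f_min * rng.uniform(size=n) ** (1.0 / alpha_true)  # inverse-CDF sampling

    # Closed-form MLE of the cumulative slope, with its asymptotic error:
    alpha_hat = -n / np.sum(np.log(f / f_min))
    alpha_err = abs(alpha_hat) / np.sqrt(n)
    print(f"alpha = {alpha_hat:.2f} +/- {alpha_err:.2f} (true {alpha_true})")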
Spaceborne Potential for Examining Taiga-Tundra Ecotone Form and Vulnerability
NASA Technical Reports Server (NTRS)
Montesano, Paul M.; Sun, Guoqing; Dubayah, Ralph O.; Ranson, K. Jon
2016-01-01
In the taiga-tundra ecotone (TTE), site-dependent forest structure characteristics can influence the subtle and heterogeneous structural changes that occur across the broad circumpolar extent. Such changes may be related to ecotone form, described by the horizontal and vertical patterns of forest structure (e.g., tree cover, density and height) within TTE forest patches, driven by local site conditions, and linked to ecotone dynamics. The unique circumstance of subtle, variable and widespread vegetation change warrants the application of spaceborne data including high-resolution (less than 5m) spaceborne imagery (HRSI) across broad scales for examining TTE form and predicting dynamics. This study analyzes forest structure at the patch-scale in the TTE to provide a means to examine both vertical and horizontal components of ecotone form. We demonstrate the potential of spaceborne data for integrating forest height and density to assess TTE form at the scale of forest patches across the circumpolar biome by (1) mapping forest patches in study sites along the TTE in northern Siberia with a multi-resolution suite of spaceborne data, and (2) examining the uncertainty of forest patch height from this suite of data across sites of primarily diffuse TTE forms. Results demonstrate the opportunities for improving patch-scale spaceborne estimates of forest height, the vertical component of TTE form, with HRSI. The distribution of relative maximum height uncertainty based on prediction intervals is centered at approximately 40%, constraining the use of height for discerning differences in forest patches. We discuss this uncertainty in light of a conceptual model of general ecotone forms, and highlight how the uncertainty of spaceborne estimates of height can contribute to the uncertainty in identifying TTE forms. A focus on reducing the uncertainty of height estimates in forest patches may improve depiction of TTE form, which may help explain variable forest responses in the TTE to climate change and the vulnerability of portions of the TTE to forest structure change.
Spaceborne potential for examining taiga-tundra ecotone form and vulnerability
NASA Astrophysics Data System (ADS)
Montesano, Paul M.; Sun, Guoqing; Dubayah, Ralph O.; Ranson, K. Jon
2016-07-01
In the taiga-tundra ecotone (TTE), site-dependent forest structure characteristics can influence the subtle and heterogeneous structural changes that occur across the broad circumpolar extent. Such changes may be related to ecotone form, described by the horizontal and vertical patterns of forest structure (e.g., tree cover, density, and height) within TTE forest patches, driven by local site conditions, and linked to ecotone dynamics. The unique circumstance of subtle, variable, and widespread vegetation change warrants the application of spaceborne data including high-resolution (< 5 m) spaceborne imagery (HRSI) across broad scales for examining TTE form and predicting dynamics. This study analyzes forest structure at the patch scale in the TTE to provide a means to examine both vertical and horizontal components of ecotone form. We demonstrate the potential of spaceborne data for integrating forest height and density to assess TTE form at the scale of forest patches across the circumpolar biome by (1) mapping forest patches in study sites along the TTE in northern Siberia with a multi-resolution suite of spaceborne data and (2) examining the uncertainty of forest patch height from this suite of data across sites of primarily diffuse TTE forms. Results demonstrate the opportunities for improving patch-scale spaceborne estimates of forest height, the vertical component of TTE form, with HRSI. The distribution of relative maximum height uncertainty based on prediction intervals is centered at ˜ 40 %, constraining the use of height for discerning differences in forest patches. We discuss this uncertainty in light of a conceptual model of general ecotone forms and highlight how the uncertainty of spaceborne estimates of height can contribute to the uncertainty in identifying TTE forms. A focus on reducing the uncertainty of height estimates in forest patches may improve depiction of TTE form, which may help explain variable forest responses in the TTE to climate change and the vulnerability of portions of the TTE to forest structure change.
UDE-based control of variable-speed wind turbine systems
NASA Astrophysics Data System (ADS)
Ren, Beibei; Wang, Yeqin; Zhong, Qing-Chang
2017-01-01
In this paper, the control of a PMSG (permanent magnet synchronous generator)-based variable-speed wind turbine system with a back-to-back converter is considered. The uncertainty and disturbance estimator (UDE)-based control approach is applied to the regulation of the DC-link voltage and the control of the RSC (rotor-side converter) and the GSC (grid-side converter). For the rotor-side controller, the UDE-based vector control is developed for the RSC with PMSG control to facilitate the application of the MPPT (maximum power point tracking) algorithm for maximum wind energy capture. For the grid-side controller, the UDE-based vector control is developed to control the GSC with the power reference generated by a UDE-based DC-link voltage controller. Compared with the conventional vector control, the UDE-based vector control can achieve reliable current decoupling control with fast response. Moreover, the UDE-based DC-link voltage regulation can achieve stable DC-link voltage under model uncertainties and external disturbances, e.g., wind speed variations. The effectiveness of the proposed UDE-based control approach is demonstrated through extensive simulation studies in the presence of coupled dynamics, model uncertainties, and external disturbances under varying wind speeds. The UDE-based control is also able to generate more energy, e.g., 5% more for the wind profile tested.
NASA Astrophysics Data System (ADS)
Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.
2014-06-01
Water Footprint Assessment is a fast-growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of and uncertainty in crop water footprint (in m3 t-1) estimates related to uncertainties in important input variables. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat at the scale of the Yellow River basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute the green and blue water footprints of the four crops in the Yellow River basin in the period considered. The one-at-a-time method was carried out to analyse the sensitivity of the crop water footprint to fractional changes of seven individual input variables and parameters: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), crop calendar (planting date with constant growing degree days), soil water content at field capacity (Smax), yield response factor (Ky), and maximum yield (Ym). Uncertainties in crop water footprint estimates related to uncertainties in four key input variables (PR, ET0, Kc, and the crop calendar) were quantified through Monte Carlo simulations. The results show that the sensitivities and uncertainties differ across crop types. In general, the water footprint of crops is most sensitive to ET0 and Kc, followed by the crop calendar. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at the 95% confidence interval). The effect of uncertainties in ET0 was dominant compared to that of PR. The uncertainties in the total water footprint of a crop as a result of combined key input uncertainties were on average ±30% (at the 95% confidence level).
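A toy one-at-a-time sketch in the spirit of the sensitivity analysis above: perturb each input of a deliberately simplified water footprint function by ±20% and record the relative response. The rain-fed water-balance stand-in and all baseline values are hypothetical, not the study's grid-based daily model.

    def crop_wf(pr, et0, kc, ky, y_max):
        """Toy rain-fed crop water footprint (m3/t): seasonal crop water
        use divided by a yield penalized for water stress (FAO Ky form)."""
        etc = kc * et0                                        # crop water demand (mm)
        eta = min(pr, etc)                                    # actual crop ET (mm)
        ya = y_max * max(1.0 - ky * (1.0 - eta / etc), 0.05)  # stressed yield (t/ha)
        return 10.0 * eta / ya                                # 10 m3/ha per mm of ET

    base = dict(pr=450.0, et0=700.0, kc=0.9, ky=1.25, y_max=6.0)
    wf0 = crop_wf(**base)
    for name in base:                                         # one-at-a-time +/-20%
        for frac in (-0.2, 0.2):
            p = dict(base)
            p[name] *= 1.0 + frac
            print(f"{name:6s} {frac:+.0%} -> WF change {crop_wf(**p) / wf0 - 1.0:+.1%}")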
Quantification of Uncertainty in the Flood Frequency Analysis
NASA Astrophysics Data System (ADS)
Kasiapillai Sudalaimuthu, K.; He, J.; Swami, D.
2017-12-01
Flood frequency analysis (FFA) is usually carried out for planning and designing water resources and hydraulic structures. Owing to variability in sample representation, selection of distribution, and estimation of distribution parameters, the estimation of flood quantiles has always been uncertain. Hence, suitable approaches must be developed to quantify the uncertainty in the form of prediction intervals as an alternative to deterministic approaches. The framework developed in the present study to include uncertainty in FFA uses a multi-objective optimization approach to construct the prediction interval from an ensemble of flood quantiles. Through this approach, an optimal variability of distribution parameters is identified to carry out FFA. To demonstrate the proposed approach, annual maximum flow data from two gauge stations (Bow River at Calgary and at Banff, Canada) are used. The major focus of the present study was to evaluate the changes in magnitude of flood quantiles due to the extreme flood event that occurred in 2013. In addition, the efficacy of the proposed method was verified against standard bootstrap-based sampling approaches; the proposed method was found to be more reliable in modeling extreme floods than the bootstrap methods.
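A minimal bootstrap sketch of flood quantile uncertainty, assuming a GEV fit to an annual maximum series; the synthetic record below stands in for a gauge record, and the study's multi-objective optimization approach is not reproduced here.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Synthetic annual maximum series (m3/s), standing in for a gauge record:
    ams = stats.genextreme.rvs(c=-0.1, loc=800.0, scale=250.0, size=60,
                               random_state=rng)

    def q100(sample):
        c, loc, scale = stats.genextreme.fit(sample)
        return stats.genextreme.ppf(0.99, c, loc, scale)  # 100-year quantile

    # Percentile bootstrap interval around the fitted quantile:
    boot = np.array([q100(rng.choice(ams, ams.size)) for _ in range(500)])
    lo, hi = np.percentile(boot, [5.0, 95.0])
    print(f"Q100 = {q100(ams):.0f} m3/s, 90% interval [{lo:.0f}, {hi:.0f}]")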
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban
2018-05-01
We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given "training" set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of "training" settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) E_t largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
NASA Astrophysics Data System (ADS)
Bloembergen, Pieter; Dong, Wei; Bai, Cheng-Yu; Wang, Tie-Jun
2011-12-01
In this paper, impurity parameters m_i and k_i have been calculated for a range of impurities I as detected in the eutectics Co-C and Pt-C, by means of the software package Thermo-Calc within the ternary phase spaces Co-C-I and Pt-C-I. The choice of the impurities is based upon a selection from the results of impurity analyses performed for a representative set of samples for each of the eutectics in study. The analyses in question are glow discharge mass spectrometry (GDMS) or inductively coupled plasma mass spectrometry (ICP-MS). Tables and plots of the impurity parameters against the atomic number Z_i of the impurities will be presented, as well as plots demonstrating the validity of van't Hoff's law, the cornerstone to this study, for both eutectics. For the eutectics in question, the uncertainty u(T_E - T_liq) in the correction T_E - T_liq will be derived, where T_E and T_liq refer to the transition temperature of the pure system and to the liquidus temperature in the limit of zero growth rate of the solid phase during solidification of the actual system, respectively. Uncertainty estimates based upon the current scheme SIE-OME, combining the sum of individual estimates (SIE) and the overall maximum estimate (OME), are compared with two alternative schemes proposed in this paper, designated as IE-IRE, combining individual estimates (IE) and individual random estimates (IRE), and the hybrid scheme SIE-IE-IRE, combining SIE, IE, and IRE.
Behr, W.M.; Rood, D.H.; Fletcher, K.E.; Guzman, N.; Finkel, R.; Hanks, T.C.; Hudnut, K.W.; Kendrick, K.J.; Platt, J.P.; Sharp, W.D.; Weldon, R.J.; Yule, J.D.
2010-01-01
This study focuses on uncertainties in estimates of the geologic slip rate along the Mission Creek strand of the southern San Andreas fault where it offsets an alluvial fan (T2) at Biskra Palms Oasis in southern California. We provide new estimates of the amount of fault offset of the T2 fan based on trench excavations and new cosmogenic 10Be age determinations from the tops of 12 boulders on the fan surface. We present three alternative fan offset models: a minimum, a maximum, and a preferred offset of 660 m, 980 m, and 770 m, respectively. We assign an age of between 45 and 54 ka to the T2 fan from the 10Be data, which is significantly older than previously reported but is consistent with both the degree of soil development associated with this surface, and with ages from U-series geochronology on pedogenic carbonate from T2, described in a companion paper by Fletcher et al. (this volume). These new constraints suggest a range of slip rates between ~12 and 22 mm/yr with a preferred estimate of ~14-17 mm/yr for the Mission Creek strand of the southern San Andreas fault. Previous studies suggested that the geologic and geodetic slip-rate estimates at Biskra Palms differed. We find, however, that considerable uncertainty affects both the geologic and geodetic slip-rate estimates, such that if a real discrepancy between these rates exists for the southern San Andreas fault at Biskra Palms, it cannot be demonstrated with available data. © 2010 Geological Society of America.
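The quoted slip-rate range follows directly from propagating the offset and age uncertainties; treating both as uniform over the reported ranges (an assumption made here for illustration, not the authors' weighting) reproduces the roughly 12-22 mm/yr spread.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 100_000

    offset = rng.uniform(660.0, 980.0, n)   # fan offset (m)
    age = rng.uniform(45.0, 54.0, n)        # fan age (ka); m/ka equals mm/yr
    rate = offset / age
    print(f"slip rate: median {np.median(rate):.1f} mm/yr, "
          f"95% range {np.percentile(rate, 2.5):.1f}-"
          f"{np.percentile(rate, 97.5):.1f} mm/yr")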
Evaluating the uncertainty of input quantities in measurement models
NASA Astrophysics Data System (ADS)
Possolo, Antonio; Elster, Clemens
2014-06-01
The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in uncertainty propagation exercises. In this we deviate markedly and emphatically from the GUM Supplement 1, which gives pride of place to the Principle of Maximum Entropy as a means to assign probability distributions to input quantities.
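A small sketch of the propagation exercise this discussion envisages: encode Type B knowledge of each input quantity as a probability distribution (normal, rectangular, triangular) and push draws through the measurement model. The power-measurement model and all numbers are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 200_000

    # Type B evaluations encoded as distributions:
    v = rng.normal(5.000, 0.002, n)              # voltage, normal (calibration)
    i = rng.uniform(0.099, 0.101, n)             # current, rectangular (limits)
    phi = rng.triangular(-0.01, 0.0, 0.01, n)    # phase, triangular (expert bounds)

    p = v * i * np.cos(phi)                      # measurement model P = V I cos(phi)
    print(f"P = {p.mean():.4f} W, standard uncertainty u(P) = {p.std(ddof=1):.4f} W")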
Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C
2007-01-01
More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as forward-weighted CADIS (FW-CADIS).
Faris, Allison T.; Seed, Raymond B.; Kayen, Robert E.; Wu, Jiaer
2006-01-01
During the 1906 San Francisco Earthquake, liquefaction-induced lateral spreading and the resultant ground displacements damaged bridges, buried utilities and lifelines, conventional structures, and other developed works. This paper presents an improved engineering tool for the prediction of maximum displacement due to liquefaction-induced lateral spreading. A semi-empirical approach is employed, combining mechanistic understanding and data from laboratory testing with data and lessons from full-scale earthquake field case histories. The principle of the strain potential index, based primarily on correlation of cyclic simple shear laboratory testing results with in-situ Standard Penetration Test (SPT) results, is used as an index to characterize the deformation potential of soils after they liquefy. A Bayesian probabilistic approach is adopted for development of the final predictive model, in order to take fullest advantage of the data available and to deal with the inherent uncertainties intrinsic to the back-analyses of field case histories. A case history from the 1906 San Francisco Earthquake is utilized to demonstrate the ability of the resultant semi-empirical model to estimate maximum horizontal displacement due to liquefaction-induced lateral spreading.
DeWeber, Jefferson T; Wagner, Tyler
2018-06-01
Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30-day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species' distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold-water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid-century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as 0.2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions. Our study demonstrates that even relatively small differences in the definitions of climate metrics can result in very different projections and reveal high uncertainty in predicted climate change effects. © 2018 John Wiley & Sons Ltd.
DeWeber, Jefferson T.; Wagner, Tyler
2018-01-01
Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30‐day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species' distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold‐water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid‐century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as 0.2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions. Our study demonstrates that even relatively small differences in the definitions of climate metrics can result in very different projections and reveal high uncertainty in predicted climate change effects.
Moore, J.L.; Runge, M.C.; Webber, B.L.; Wilson, J.R.U.
2011-01-01
Aim To identify whether eradication or containment is expected to be the most cost-effective management goal for an isolated invasive population when knowledge about the current extent is uncertain. Location Global and South Africa. Methods We developed a decision analysis framework to analyse the best management goal for an invasive species population (eradication, containment or take no action) when knowledge about the current extent is uncertain. We used value of information analysis to identify when investment in learning about the extent will improve this decision-making and tested the sensitivity of the conclusions to different parameters (e.g., spread rate, maximum extent, and management efficacy and cost). The model was applied to Acacia paradoxa DC, an Australian shrub with an estimated invasive extent of 310 ha on Table Mountain, South Africa. Results Under the parameters used, attempting eradication is cost-effective for infestations of up to 777 ha. However, if the invasion extent is poorly known, then attempting eradication is only cost-effective for infestations estimated as 296 ha or smaller. The value of learning is greatest (maximum of 8% saving) when infestation extent is poorly known and if it is close to the maximum extent for which attempting eradication is optimal. The optimal management action is most sensitive to the probability that the action succeeds (which depends on the extent), with the discount rate and cost of management also important, but spread rate less so. Over a 20-year time-horizon, attempting to eradicate A. paradoxa from South Africa is predicted to cost on average ZAR 8 million if the extent is known, and, if our current estimate is poor, ZAR 33.6 million as opposed to ZAR 32.8 million for attempting containment. Main conclusions Our framework evaluates the cost-effectiveness of attempting eradication or containment of an invasive population while taking uncertainty in population extent into account. We show that incorporating uncertainty in the analysis avoids overly optimistic beliefs about the effectiveness of management, enabling better management decisions. For A. paradoxa in South Africa, attempting to eradicate is likely to be cost-effective, particularly if resources are allocated to better understand and improve management efficacy. © 2011 Blackwell Publishing Ltd.
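The decision framework can be caricatured in a few lines: compare expected management costs under an uncertain extent, then compute the value of resolving that uncertainty before choosing. The cost model, failure probability, and extent distribution below are hypothetical, only loosely scaled to the A. paradoxa case.

    import numpy as np

    rng = np.random.default_rng(6)
    extent = rng.lognormal(np.log(310.0), 0.5, 50_000)   # uncertain extent (ha)

    def cost(action, a):
        """Hypothetical discounted management costs (ZAR millions)."""
        if action == "eradicate":
            p_fail = np.clip(a / 777.0, 0.0, 1.0) ** 2   # larger infestations fail more
            return 0.005 * a + 30.0 * p_fail             # attempt cost + failure penalty
        return 0.03 * a                                  # ongoing containment

    costs = {act: cost(act, extent) for act in ("eradicate", "contain")}
    for act, c in costs.items():
        print(f"{act:9s} expected cost {c.mean():5.1f} M ZAR")

    # Expected value of learning the true extent before deciding:
    best_after_learning = np.minimum(costs["eradicate"], costs["contain"]).mean()
    best_now = min(c.mean() for c in costs.values())
    print(f"value of information: {best_now - best_after_learning:5.1f} M ZAR")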
NASA Astrophysics Data System (ADS)
Whitehead, James Joshua
The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
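A compact sketch of the response-surface-plus-MCS idea: a quadratic surface with uncertain coefficients is sampled by Monte Carlo over a grid of candidate mixtures, and the mixture with the highest mean regression rate is reported together with its dispersion. The coefficients and their uncertainties are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical quadratic response surface for regression rate rdot(x1, x2):
    beta = np.array([1.0, 0.8, 0.5, -0.6, -0.4, 0.3])   # fitted coefficients
    beta_sd = 0.05 * np.abs(beta)                       # coefficient uncertainty

    def rdot(x1, x2, b):
        return (b[0] + b[1] * x1 + b[2] * x2
                + b[3] * x1**2 + b[4] * x2**2 + b[5] * x1 * x2)

    x1, x2 = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 21))
    draws = rng.normal(beta, beta_sd, size=(2000, beta.size))
    vals = np.array([rdot(x1, x2, b) for b in draws])   # shape (2000, 21, 21)
    mean, sd = vals.mean(axis=0), vals.std(axis=0)
    i = np.unravel_index(np.argmax(mean), mean.shape)
    print(f"max mean rdot {mean[i]:.3f} +/- {sd[i]:.3f} "
          f"at x1={x1[i]:.2f}, x2={x2[i]:.2f}")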
Probabilistic tsunami hazard assessment at Seaside, Oregon, for near- and far-field seismic sources
Gonzalez, F.I.; Geist, E.L.; Jaffe, B.; Kanoglu, U.; Mofjeld, H.; Synolakis, C.E.; Titov, V.V.; Areas, D.; Bellomo, D.; Carlton, D.; Horning, T.; Johnson, J.; Newman, J.; Parsons, T.; Peters, R.; Peterson, C.; Priest, G.; Venturato, A.; Weber, J.; Wong, F.; Yalciner, A.
2009-01-01
The first probabilistic tsunami flooding maps have been developed. The methodology, called probabilistic tsunami hazard assessment (PTHA), integrates tsunami inundation modeling with methods of probabilistic seismic hazard assessment (PSHA). Application of the methodology to Seaside, Oregon, has yielded estimates of the spatial distribution of 100- and 500-year maximum tsunami amplitudes, i.e., amplitudes with 1% and 0.2% annual probability of exceedance. The 100-year tsunami is generated most frequently by far-field sources in the Alaska-Aleutian Subduction Zone and is characterized by maximum amplitudes that do not exceed 4 m, with an inland extent of less than 500 m. In contrast, the 500-year tsunami is dominated by local sources in the Cascadia Subduction Zone and is characterized by maximum amplitudes in excess of 10 m and an inland extent of more than 1 km. The primary sources of uncertainty in these results include those associated with interevent time estimates, modeling of background sea level, and accounting for temporal changes in bathymetry and topography. Nonetheless, PTHA represents an important contribution to tsunami hazard assessment techniques; viewed in the broader context of risk analysis, PTHA provides a method for quantifying estimates of the likelihood and severity of the tsunami hazard, which can then be combined with vulnerability and exposure to yield estimates of tsunami risk. Copyright 2009 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Eldardiry, H. A.; Habib, E. H.
2014-12-01
Radar-based technologies have made spatially and temporally distributed quantitative precipitation estimates (QPE) available in an operational environment, in contrast to rain gauges. The floods identified through flash flood monitoring and prediction systems are subject to at least three sources of uncertainty: (a) rainfall estimation errors, (b) streamflow prediction errors due to model structural issues, and (c) errors in defining a flood event. The current study focuses on the first source of uncertainty and its effect on deriving important climatological characteristics of extreme rainfall statistics. Examples of such characteristics are rainfall amounts with certain Average Recurrence Intervals (ARI) or Annual Exceedance Probabilities (AEP), which are highly valuable for hydrologic and civil engineering design purposes. Gauge-based precipitation frequency estimates (PFE) are well established and have been widely used over the last several decades. More recently, there has been a growing interest in the research community to explore the use of radar-based rainfall products for developing PFEs and to understand the associated uncertainties. This study uses radar-based multi-sensor precipitation estimates (MPE) for 11 years to derive PFEs corresponding to various return periods over a spatial domain that covers the state of Louisiana in the southern USA. The PFE estimation approach used in this study is based on fitting a generalized extreme value (GEV) distribution to extreme rainfall data in the form of annual maximum series (AMS). Among the estimation problems that may arise from fitting GEV distributions at each radar pixel are large variance and seriously biased quantile estimators. Hence, a regional frequency analysis (RFA) approach is applied. The RFA involves the use of data from different pixels surrounding each pixel within a defined homogeneous region. In this study, the region of influence approach along with the index flood technique is used in the RFA. A bootstrap procedure is carried out to account for the uncertainty in the distribution parameters and to construct 90% confidence intervals (i.e., 5% and 95% confidence limits) on the AMS-based precipitation frequency curves.
Robustness Analysis and Optimally Robust Control Design via Sum-of-Squares
NASA Technical Reports Server (NTRS)
Dorobantu, Andrei; Crespo, Luis G.; Seiler, Peter J.
2012-01-01
A control analysis and design framework is proposed for systems subject to parametric uncertainty. The underlying strategies are based on sum-of-squares (SOS) polynomial analysis and nonlinear optimization to design an optimally robust controller. The approach determines a maximum uncertainty range for which the closed-loop system satisfies a set of stability and performance requirements. These requirements, defined as inequality constraints on several metrics, are restricted to polynomial functions of the uncertainty. To quantify robustness, SOS analysis is used to prove that the closed-loop system complies with the requirements for a given uncertainty range. The maximum uncertainty range, calculated by assessing a sequence of increasingly larger ranges, serves as a robustness metric for the closed-loop system. To optimize the control design, nonlinear optimization is used to enlarge the maximum uncertainty range by tuning the controller gains. Hence, the resulting controller is optimally robust to parametric uncertainty. This approach balances the robustness margins corresponding to each requirement in order to maximize the aggregate system robustness. The proposed framework is applied to a simple linear short-period aircraft model with uncertain aerodynamic coefficients.
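The maximum-uncertainty-range computation can be illustrated without an SOS solver: here a gridded eigenvalue stability check stands in for the SOS certificate, and bisection grows the symmetric uncertainty range until stability can no longer be verified. The short-period-like closed-loop model and its gain are hypothetical.

    import numpy as np

    def stable_over_range(d, k=1.5):
        """Check (by dense gridding) closed-loop stability for all
        parameter perturbations delta in [-d, d]."""
        for delta in np.linspace(-d, d, 201):
            a = np.array([[-0.6 + delta, 1.0],
                          [-2.0 + 3.0 * delta - k, -1.2]])  # closed-loop matrix
            if np.linalg.eigvals(a).real.max() >= 0.0:
                return False
        return True

    lo, hi = 0.0, 2.0
    for _ in range(30):                  # bisection on the range size
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if stable_over_range(mid) else (lo, mid)
    print(f"maximum certified uncertainty range: +/-{lo:.3f}")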
Preliminary analysis of hot spot factors in an advanced reactor for space electric power systems
NASA Technical Reports Server (NTRS)
Lustig, P. H.; Holms, A. G.; Davison, H. W.
1973-01-01
The maximum fuel pin temperature for nominal operation in an advanced power reactor is 1370 K. Because of possible nitrogen embrittlement of the clad, the fuel temperature was limited to 1622 K. Assuming simultaneous occurrence of the most adverse conditions, a deterministic analysis gave a maximum fuel temperature of 1610 K. A statistical analysis, using a synthesized estimate of the standard deviation for the highest fuel pin temperature, gave a probability of 0.015 that this pin exceeds the temperature limit according to the distribution-free Chebyshev inequality, and a probability that is virtually nil assuming a normal distribution. The latter assumption gives a 1463 K maximum temperature at 3 standard deviations, the usually assumed cutoff. Further, the distribution and standard deviation of the fuel-clad gap are the most significant contributors to the uncertainty in the fuel temperature.
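Both probability statements are easy to reproduce. Assuming a standard deviation of about 31 K for the hottest pin (inferred here from the quoted 3-sigma temperature of 1463 K; the report's actual value may differ), the limit lies about 8 standard deviations above nominal:

    from scipy.stats import norm

    mu, sigma, limit = 1370.0, 31.0, 1622.0     # K; sigma is inferred, see above
    k = (limit - mu) / sigma                    # ~8.1 standard deviations

    print(f"k = {k:.1f}")
    print(f"Chebyshev bound: P <= 1/k^2 = {1.0 / k**2:.3f}")   # ~0.015
    print(f"normal tail:     P = {norm.sf(k):.1e}")            # virtually nil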
Combined-probability space and certainty or uncertainty relations for a finite-level quantum system
NASA Astrophysics Data System (ADS)
Sehrawat, Arun
2017-08-01
The Born rule provides a probability vector (distribution) with a quantum state for a measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which are triangle inequalities in our case. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d-level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will be on the parametric curves. So these curves are quite useful for establishing an uncertainty (or a certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained with the triangle inequalities.
NASA Astrophysics Data System (ADS)
Suzuki, Kazuyoshi; Zupanski, Milija
2018-01-01
In this study, we investigate the uncertainties associated with land surface processes in an ensemble prediction context. Specifically, we compare the uncertainties produced by a coupled atmosphere-land modeling system with two different land surface models, the Noah-MP land surface model (LSM) and the Noah LSM, by using the Maximum Likelihood Ensemble Filter (MLEF) data assimilation system as a platform for ensemble prediction. We carried out 24-hour prediction simulations in Siberia with 32 ensemble members beginning at 00:00 UTC on 5 March 2013. We then compared the model prediction uncertainty of snow depth and solid precipitation with observation-based research products and evaluated the standard deviation of the ensemble spread. The prediction skill and ensemble spread exhibited high positive correlation for both LSMs, indicating a realistic uncertainty estimation. The inclusion of a multilayer snow model in the Noah-MP LSM was beneficial for reducing the uncertainties of snow depth and snow depth change compared to the Noah LSM, but the uncertainty in daily solid precipitation showed minimal difference between the two LSMs. The impact of LSM choice in reducing temperature uncertainty was limited to surface layers of the atmosphere. In summary, we found that the more sophisticated Noah-MP LSM reduces uncertainties associated with land surface processes compared to the Noah LSM. Thus, using prediction models with improved skill implies improved predictability and greater certainty of prediction.
NASA Astrophysics Data System (ADS)
Zhuang, X. W.; Li, Y. P.; Nie, S.; Fan, Y. R.; Huang, G. H.
2018-01-01
An integrated simulation-optimization (ISO) approach is developed for assessing climate change impacts on water resources. In the ISO, uncertainties presented as both interval numbers and probability distributions can be reflected. Moreover, ISO permits in-depth analyses of various policy scenarios that are associated with different levels of economic consequences when the promised water-allocation targets are violated. A snowmelt-precipitation-driven watershed (the Kaidu watershed) in northwest China is selected as the study case for demonstrating the applicability of the proposed method. Results of meteorological projections disclose that incremental trends in temperature (e.g., minimum and maximum values) and precipitation exist. Results also reveal that (i) the system uncertainties would significantly affect the water resources allocation pattern (including targets and shortages); (ii) water shortage would be enhanced from 2016 to 2070; and (iii) the more the inflow amount decreases, the higher the estimated water shortage rates. The ISO method is useful for evaluating climate change impacts within a watershed system with complicated uncertainties and for helping identify appropriate water resources management strategies hedging against drought.
Performance of Trajectory Models with Wind Uncertainty
NASA Technical Reports Server (NTRS)
Lee, Alan G.; Weygandt, Stephen S.; Schwartz, Barry; Murphy, James R.
2009-01-01
Typical aircraft trajectory predictors use wind forecasts but do not account for the forecast uncertainty. A method for generating estimates of wind prediction uncertainty is described and its effect on aircraft trajectory prediction uncertainty is investigated. The procedure for estimating the wind prediction uncertainty relies on a time-lagged ensemble of weather model forecasts from the hourly updated Rapid Update Cycle (RUC) weather prediction system. Forecast uncertainty is estimated using measures of the spread amongst various RUC time-lagged ensemble forecasts. This proof-of-concept study illustrates the estimated uncertainty and the actual wind errors, and documents the validity of the assumed ensemble-forecast accuracy relationship. Aircraft trajectory predictions are made using RUC winds with provision for the estimated uncertainty. Results for a set of simulated flights indicate this simple approach effectively translates the wind uncertainty estimate into an aircraft trajectory uncertainty. A key strength of the method is the ability to relate uncertainty to specific weather phenomena (contained in the various ensemble members), allowing identification of regional variations in uncertainty.
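A toy version of the spread-based estimate: treat forecasts for the same valid time issued at successive model cycles as an ensemble, and use their standard deviation as the wind uncertainty along a route. The synthetic winds and error magnitudes are assumptions, not RUC output.

    import numpy as np

    rng = np.random.default_rng(8)

    # "Actual" wind (kt) at 10 waypoints, plus a 6-member time-lagged ensemble
    # whose members share a common cycle-dependent bias:
    truth = 30.0 + 5.0 * np.sin(np.linspace(0.0, 3.0, 10))
    members = truth + rng.normal(0.0, 4.0, (6, 10)) + rng.normal(0.0, 2.0, (6, 1))

    spread = members.std(axis=0, ddof=1)        # uncertainty estimate per waypoint
    error = np.abs(members.mean(axis=0) - truth)
    print("spread:", np.round(spread, 1))
    print("error :", np.round(error, 1))        # spread should track actual error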
Eeren, Hester V; Schawo, Saskia J; Scholte, Ron H J; Busschbach, Jan J V; Hakkaart, Leona
2015-01-01
To investigate whether a value of information analysis, commonly applied in health care evaluations, is feasible and meaningful in the field of crime prevention. Interventions aimed at reducing juvenile delinquency are increasingly being evaluated according to their cost-effectiveness. Results of cost-effectiveness models are subject to uncertainty in their cost and effect estimates. Further research can reduce that parameter uncertainty. The value of such further research can be estimated using a value of information analysis, as illustrated in the current study. We built upon an earlier published cost-effectiveness model that demonstrated the comparison of two interventions aimed at reducing juvenile delinquency. Outcomes were presented as costs per criminal activity free year. At a societal willingness-to-pay of €71,700 per criminal activity free year, further research to eliminate parameter uncertainty was valued at €176 million. Therefore, in this illustrative analysis, the value of information analysis determined that society should be willing to spend a maximum of €176 million in reducing decision uncertainty in the cost-effectiveness of the two interventions. Moreover, the results suggest that reducing uncertainty in some specific model parameters might be more valuable than in others. Using a value of information framework to assess the value of conducting further research in the field of crime prevention proved to be feasible. The results were meaningful and can be interpreted according to health care evaluation studies. This analysis can be helpful in justifying additional research funds to further inform the reimbursement decision in regard to interventions for juvenile delinquents.
Near-infrared scattering as a dust diagnostic
NASA Astrophysics Data System (ADS)
Saajasto, Mika; Juvela, Mika; Malinen, Johanna
2018-06-01
Context. Regarding the evolution of dust grains from diffuse regions of space to dense molecular cloud cores, many questions remain open. Scattering at near-infrared wavelengths, or "cloudshine", can provide information on cloud structure, dust properties, and the radiation field that is complementary to mid-infrared "coreshine" and observations of dust emission at longer wavelengths. Aims: We examine the possibility of using near-infrared scattering to constrain the local radiation field and the dust properties: the scattering and absorption efficiency, the size distribution of the grains, and the maximum grain size. Methods: We use radiative transfer modelling to examine the constraints provided by the J, H, and K bands in combination with mid-infrared surface brightness at 3.6 μm. We use spherical one-dimensional and elliptical three-dimensional cloud models to study the observable effects of different grain size distributions with varying absorption and scattering properties. As an example, we analyse observations of a molecular cloud in Taurus, TMC-1N. Results: The observed surface brightness ratios of the bands change when the dust properties are changed. However, even a change of ±10% in the surface brightness of one band changes the estimated power-law exponent of the size distribution γ by up to 30% and the estimated strength of the radiation field K_ISRF by up to 60%. The maximum grain size A_max and γ are always strongly anti-correlated. For example, overestimating the surface brightness by 10% changes the estimated radiation field strength by 20% and the exponent of the size distribution by 15%. The analysis of our synthetic observations indicates that the relative uncertainties of the parameter distributions are on average ~25% for A_max and γ, and that the deviation between the estimated and correct values is ΔQ < 15%. For the TMC-1N observations, a maximum grain size A_max > 1.5 μm and a size distribution with γ > 4.0 have high probability. The mass-weighted average grain size is ⟨a_m⟩ = 0.113 μm. Conclusions: We show that scattered infrared light can be used to derive meaningful limits for the dust parameters. However, errors in the surface brightness data can result in considerable uncertainties in the derived parameters.
Getachew, Yehenew; Gotu, Butte; Enquselassie, Fikre
2010-10-01
Since the early 1980s, when AIDS was first recognized, there has been uncertainty about the future trend and the ultimate dimensions of the pandemic. This uncertainty persists because of difficulties in measuring HIV incidence and prevalence with a substantial degree of precision in a given population. One of the many factors behind the lack of precision is the problem of obtaining representative data sources that can be extrapolated to the general population. National and regional HIV estimates for Ethiopia are derived from ANC-based HIV surveillance data. Alternative data sources have not been exhaustively explored as potential tools to monitor the trend of the HIV/AIDS epidemic in the country. The objective was to estimate the magnitude and trend of the HIV/AIDS epidemic using data from routine VCT services as an alternative data source to ANC sentinel surveillance data. The study used secondary data sources from all government, private, and NGO VCT centers in Addis Ababa for the period 2003-2005. For the purpose of making a comparative analysis of the VCT-based estimations and projections, records of all five sentinel sites in Addis Ababa for the period 1983-2003 were reviewed. Both ANC and VCT data sources showed similar and regular trends from the beginning of the HIV epidemic until 1995, with the ANC data showing relatively higher prevalence rates than the VCT data and a maximum difference in HIV prevalence of 1.06% in 1993. However, a higher HIV prevalence was noted for the VCT than the ANC data source for the period 1996-2002, with a maximum difference of 1.4% in 1998, the year when both the ANC- and VCT-modeled HIV prevalence reached their highest peak in Addis Ababa. In contrast, the ANC-based prevalence was higher than the VCT-based prevalence for the period 2004-2010, with a maximum difference of 2.2%. This study suggests that VCT-based HIV prevalence data closely approximate the ANC-based data. Therefore the VCT data source can be valuable in complementing the ANC data in monitoring the HIV epidemic and trend.
Assessing the Impact of Laurentide Ice-sheet Topography on Glacial Climate
NASA Technical Reports Server (NTRS)
Ullman, D. J.; LeGrande, A. N.; Carlson, A. E.; Anslow, F. S.; Licciardi, J. M.
2014-01-01
Simulations of past climates require altered boundary conditions to account for known shifts in the Earth system. For the Last Glacial Maximum (LGM) and subsequent deglaciation, the existence of large Northern Hemisphere ice sheets caused profound changes in surface topography and albedo. While ice-sheet extent is fairly well known, numerous conflicting reconstructions of ice-sheet topography suggest that precision in this boundary condition is lacking. Here we use a high-resolution and oxygen-isotope-enabled fully coupled global circulation model (GCM) (GISS ModelE2-R), along with two different reconstructions of the Laurentide Ice Sheet (LIS) that provide maximum and minimum estimates of LIS elevation, to assess the range of climate variability in response to uncertainty in this boundary condition. We present this comparison at two equilibrium time slices: the LGM, when differences in ice-sheet topography are maximized, and 14 ka, when differences in maximum ice-sheet height are smaller but still exist. Overall, we find significant differences in the climate response to LIS topography, with the larger LIS resulting in enhanced Atlantic Meridional Overturning Circulation and warmer surface air temperatures, particularly over northeastern Asia and the North Pacific. These up- and downstream effects are associated with differences in the development of planetary waves in the upper atmosphere, with the larger LIS resulting in a weaker trough over northeastern Asia that leads to the warmer temperatures and decreased albedo from snow and sea-ice cover. Differences between the 14 ka simulations are similar in spatial extent but smaller in magnitude, suggesting that climate is responding primarily to the larger difference in maximum LIS elevation in the LGM simulations. These results suggest that such uncertainty in ice-sheet boundary conditions alone may significantly impact the results of paleoclimate simulations and their ability to successfully simulate past climates, with implications for estimating climate sensitivity to greenhouse gas forcing utilizing past climate states.
NASA Astrophysics Data System (ADS)
Xu, R.; Tian, H.; Pan, S.; Yang, J.; Lu, C.; Zhang, B.
2016-12-01
Human activities have caused significant perturbations of the nitrogen (N) cycle, resulting in an increase of about 21% in atmospheric N2O concentration since the pre-industrial era. This large increase is mainly caused by intensive agricultural activities, including the application of nitrogen fertilizer and the expansion of leguminous crops. Substantial efforts have been made to quantify global and regional N2O emissions from agricultural soils over the last several decades using a wide variety of approaches, such as ground-based observations, atmospheric inversions, and process-based models. However, large uncertainties exist in those estimates as well as in the methods themselves. In this study, we used a coupled biogeochemical model (DLEM) to estimate the magnitude and the spatial and temporal patterns of N2O emissions from global croplands in the past five decades (1961-2012). To estimate uncertainties associated with input data and model parameters, we implemented a number of simulation experiments with DLEM, accounting for key parameter values that affect the calculation of N2O fluxes (i.e., maximum nitrification and denitrification rates, N fixation rate, and the adsorption coefficients for soil ammonium and nitrate), different sets of input data including climate, land management practices (i.e., nitrogen fertilizer types, application rates and timings, with/without irrigation), N deposition, and land use and land cover change. This work provides a robust estimate of global N2O emissions from agricultural soils and identifies key gaps and limitations in the existing model and data that need to be investigated in the future.
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve this, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of differing precision and sources of uncertainty. Individual CPT soundings were modeled as rational probability density curves by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated in a Bayesian inverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and the Bayesian method. The differences between single CPT samplings under a normal distribution and probability density curves simulated by maximum entropy theory are also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. This characterization will provide a multi-precision information assimilation method for other geotechnical parameters.
Comment on "Inference with minimal Gibbs free energy in information field theory".
Iatsenko, D; Stefanovska, A; McClintock, P V E
2012-03-01
Enßlin and Weig [Phys. Rev. E 82, 051112 (2010)] have introduced a "minimum Gibbs free energy" (MGFE) approach for estimation of the mean signal and signal uncertainty in Bayesian inference problems: it aims to combine the maximum a posteriori (MAP) and maximum entropy (ME) principles. We point out, however, that there are some important questions to be clarified before the new approach can be considered fully justified, and therefore able to be used with confidence. In particular, after obtaining a Gaussian approximation to the posterior in terms of the MGFE at some temperature T, this approximation should always be raised to the power of T to yield a reliable estimate. In addition, we show explicitly that MGFE indeed incorporates the MAP principle, as well as the MDI (minimum discrimination information) approach, but not the well-known ME principle of Jaynes [E.T. Jaynes, Phys. Rev. 106, 620 (1957)]. We also illuminate some related issues and resolve apparent discrepancies. Finally, we investigate the performance of MGFE estimation for different values of T, and we discuss the advantages and shortcomings of the approach.
Probable flood predictions in ungauged coastal basins of El Salvador
Friedel, M.J.; Smith, M.E.; Chica, A.M.E.; Litke, D.
2008-01-01
A regionalization procedure is presented and used to predict probable flooding in four ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. The flood-prediction problem is sequentially solved for two regions: upstream mountains and downstream alluvial plains. In the upstream mountains, a set of rainfall-runoff parameter values and recurrent peak-flow discharge hydrographs are simultaneously estimated for 20 tributary-basin models. Application of dissimilarity equations among tributary basins (soft prior information) permitted development of a parsimonious parameter structure subject to information content in the recurrent peak-flow discharge values derived using regression equations based on measurements recorded outside the ungauged study basins. The estimated joint set of parameter values formed the basis from which probable minimum and maximum peak-flow discharge limits were then estimated revealing that prediction uncertainty increases with basin size. In the downstream alluvial plain, model application of the estimated minimum and maximum peak-flow hydrographs facilitated simulation of probable 100-year flood-flow depths in confined canyons and across unconfined coastal alluvial plains. The regionalization procedure provides a tool for hydrologic risk assessment and flood protection planning that is not restricted to the case presented herein. © 2008 ASCE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matney, J; Lian, J; Chera, B
2015-06-15
Introduction: Geometric uncertainties in daily patient setup can lead to variations in the planned dose, especially when using highly conformal techniques such as helical Tomotherapy. To account for the potential effect of geometric uncertainty, our clinical practice is to expand critical structures by 3mm into planning risk volumes (PRVs). The PRV concept assumes the spatial dose cloud is insensitive to patient positioning. However, no tools currently exist to determine whether a Tomotherapy plan is robust to the effects of daily setup variation. We objectively quantified the impact of geometric uncertainties on the 3D doses to critical normal tissues during helical Tomotherapy. Methods: Using a Matlab-based program created and validated by Accuray (Madison, WI), the planned Tomotherapy delivery sinogram was used to recalculate dose on shifted CT datasets. Ten head and neck patients were selected for analysis. To simulate setup uncertainty, the patient anatomy was shifted ±3mm along the longitudinal, lateral and vertical axes. For each potential shift, the recalculated doses to various critical normal tissues were compared to the doses delivered to the PRVs in the original plan. Results: 18 shifted scenarios created from Tomotherapy plans for three patients with head and neck cancers were analyzed. For all simulated setup errors, the maximum doses to the brainstem, spinal cord, parotids and cochlea were no greater than 0.6Gy above the respective original PRV maximum. Despite 3mm setup shifts, the minimum dose delivered to 95% of the CTVs and PTVs was always within 0.4Gy of the original plan. Conclusions: For head and neck sites treated with Tomotherapy, the use of a 3mm PRV expansion provides a reasonable estimate of the dosimetric effects of 3mm setup uncertainties. Similarly, target coverage appears minimally affected by a 3mm setup uncertainty. Data from a larger number of patients will be presented. Future work will include other anatomical sites.
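The shift-and-recalculate check can be illustrated with a minimal Python sketch, assuming a toy dose grid and structure mask rather than the validated Accuray tool; here np.roll stands in for resampling dose on a shifted CT, and it wraps at the grid edges, which a real resampler would not.

    import numpy as np

    def max_dose_under_shifts(dose, mask, voxel_mm=(1.0, 1.0, 1.0), shift_mm=3.0):
        # Max structure dose for +/- shift_mm rigid shifts along each axis
        results = {}
        for axis in range(3):
            n = int(round(shift_mm / voxel_mm[axis]))  # shift in voxels
            for sign in (+1, -1):
                shifted = np.roll(dose, sign * n, axis=axis)
                results[(axis, sign)] = shifted[mask].max()
        return results

    # Toy example: Gaussian dose cloud and a small spherical "cord" mask
    z, y, x = np.indices((60, 60, 60))
    dose = 60.0 * np.exp(-((x - 30)**2 + (y - 30)**2 + (z - 30)**2) / 200.0)
    mask = (x - 40)**2 + (y - 30)**2 + (z - 30)**2 < 25
    print(max_dose_under_shifts(dose, mask))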
NASA Astrophysics Data System (ADS)
Richey, J. N.; Flannery, J. A.; Toth, L. T.; Kuffner, I. B.; Poore, R. Z.
2017-12-01
The Sr/Ca in massive corals can be used as a proxy for sea surface temperature (SST) in shallow tropical to sub-tropical regions; however, the relationship between Sr/Ca and SST varies throughout the ocean, between different species of coral, and often between different colonies of the same species. We aimed to quantify the uncertainty associated with the Sr/Ca-SST proxy due to sample handling (e.g., micro-drilling or analytical error), vital effects (e.g., among-colony differences in coral growth), and local-scale variability in microhabitat. We examine the intra- and inter-colony reproducibility of Sr/Ca records extracted from five modern Orbicella faveolata colonies growing in the Dry Tortugas, Florida, USA. The average intra-colony absolute difference (AD) in Sr/Ca of the five colonies during an overlapping interval (1997-2008) was 0.055 ± 0.044 mmol mol-1 (0.96 °C) and the average inter-colony Sr/Ca AD was 0.039 ± 0.01 mmol mol-1 (0.51 °C). All available Sr/Ca-SST data pairs from 1997-2008 were combined and regressed against the HadISST1 gridded SST data set (24 °N and 82 °W) to produce a calibration equation that could be applied to O. faveolata specimens from throughout the Gulf of Mexico/Caribbean/Atlantic region after accounting for the potential uncertainties in Sr/Ca-derived SSTs. We quantified a combined error term for O. faveolata using the root-sum-square (RMS) of the analytical, intra-, and inter-colony uncertainties and suggest that an overall uncertainty of 0.046 mmol mol-1 (0.81 °C, 1σ), should be used to interpret Sr/Ca records from O. faveolata specimens of unknown age or origin to reconstruct SST. We also explored how uncertainty is affected by the number of corals used in a reconstruction by iteratively calculating the RMS error for composite coral time-series using two, three, four, and five overlapping coral colonies. Our results indicate that maximum RMS error at the 95% confidence interval on mean annual SST estimates is 1.4 °C when a composite record is made from only two overlapping coral Sr/Ca records. The uncertainty decreases as additional coral Sr/Ca data are added, with a maximum RMS error of 0.5 °C on mean annual SST for a five-colony composite. To reduce uncertainty to under 1 °C, it is best to use Sr/Ca from three or more coral colonies from the same geographic location and time period.
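The root-sum-square combination of independent error components described above is easy to reproduce; in the sketch below the component values and the Sr/Ca-SST slope are placeholders (the abstract reports only the combined figures), so the printed numbers are illustrative only.

    import numpy as np

    def rss(*components):
        # Root-sum-square combination of independent 1-sigma uncertainties
        return float(np.sqrt(np.sum(np.square(components))))

    sigma_analytical = 0.010  # assumed, mmol/mol
    sigma_intra = 0.030       # assumed, mmol/mol
    sigma_inter = 0.033       # assumed, mmol/mol
    sigma_total = rss(sigma_analytical, sigma_intra, sigma_inter)

    slope = 0.057  # assumed Sr/Ca sensitivity, mmol/mol per deg C
    print(f"combined: {sigma_total:.3f} mmol/mol = {sigma_total / slope:.2f} deg C (1-sigma)")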
NASA Technical Reports Server (NTRS)
Achakulwisut, P.; Mickley, L. J.; Murray, Lee; Tai, A.P.K.; Kaplan, J.O.; Alexander, B.
2015-01-01
Current understanding of the factors controlling biogenic isoprene emissions and of the fate of isoprene oxidation products in the atmosphere has been evolving rapidly. We use a climate-biosphere-chemistry modeling framework to evaluate the sensitivity of estimates of the tropospheric oxidative capacity to uncertainties in isoprene emissions and photochemistry. Our work focuses on trends across two time horizons: from the Last Glacial Maximum (LGM, 21 000 years BP) to the preindustrial (1770s); and from the preindustrial to the present day (1990s). We find that different oxidants have different sensitivities to the uncertainties tested in this study, with OH being the most sensitive: changes in the global mean OH levels for the LGM-to-preindustrial transition range between -29% and +7%, and those for the preindustrial-to-present-day transition range between -8% and +17%, across our simulations. Our results suggest that the observed glacial-interglacial variability in atmospheric methane concentrations is predominantly driven by changes in methane sources as opposed to changes in OH, the primary methane sink. However, the magnitudes of change are subject to uncertainties in the past isoprene global burdens, as are estimates of the change in the global burden of secondary organic aerosol (SOA) relative to the preindustrial. We show that the linear relationship between tropospheric mean OH and tropospheric mean ozone photolysis rates, water vapor, and total emissions of NOx and reactive carbon first reported in Murray et al. (2014) does not hold across all periods with the new isoprene photochemistry mechanism. Our results demonstrate that inadequacies in our understanding of present-day OH and its controlling factors must be addressed in order to improve model estimates of the oxidative capacity of past and present atmospheres.
Volume effects of late term normal tissue toxicity in prostate cancer radiotherapy
NASA Astrophysics Data System (ADS)
Bonta, Dacian Viorel
Modeling of volume effects for treatment toxicity is paramount for optimization of radiation therapy. This thesis proposes a new model for calculating volume effects in gastro-intestinal and genito-urinary normal tissue complication probability (NTCP) following radiation therapy for prostate carcinoma. The radiobiological and the pathological basis for this model and its relationship to other models are detailed. A review of the radiobiological experiments and published clinical data identified salient features and specific properties a biologically adequate model has to conform to. The new model was fit to a set of actual clinical data. In order to verify the goodness of fit, two established NTCP models and a non-NTCP measure for complication risk were fitted to the same clinical data. The method of fit for the model parameters was maximum likelihood estimation. Within the framework of the maximum likelihood approach I estimated the parameter uncertainties for each complication prediction model. The quality of fit was determined using the Akaike Information Criterion. Based on the model that provided the best fit, I identified the volume effects for both types of toxicities. Computer-based bootstrap resampling of the original dataset was used to estimate the bias and variance for the fitted parameter values. Computer simulation was also used to estimate the population size that generates a specific uncertainty level (3%) in the value of predicted complication probability. The same method was used to estimate the size of the patient population needed for accurate choice of the model underlying the NTCP. The results indicate that, depending on the number of parameters of a specific NTCP model, 100 (for two-parameter models) and 500 patients (for three-parameter models) are needed for an accurate parameter fit. Correlation of complication occurrence in patients was also investigated. The results suggest that complication outcomes are correlated in a patient, although the correlation coefficient is rather small.
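A minimal sketch of the estimation machinery described above, maximum-likelihood fitting followed by bootstrap resampling, is given below; the two-parameter logistic dose-response, the synthetic cohort, and all numbers are illustrative stand-ins for the thesis's NTCP models, not the actual fitted models.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    def ntcp(dose, d50, gamma):
        # Two-parameter logistic dose-response (a stand-in for an NTCP model)
        return 1.0 / (1.0 + np.exp(-4.0 * gamma * (dose / d50 - 1.0)))

    def neg_log_lik(params, dose, outcome):
        p = np.clip(ntcp(dose, *params), 1e-9, 1 - 1e-9)
        return -np.sum(outcome * np.log(p) + (1 - outcome) * np.log(1 - p))

    # Synthetic cohort: doses and binary complication outcomes
    dose = rng.uniform(40, 80, size=200)
    outcome = (rng.random(200) < ntcp(dose, 65.0, 1.5)).astype(float)

    fit = minimize(neg_log_lik, x0=[60.0, 1.0], args=(dose, outcome), method="Nelder-Mead")
    print("MLE:", fit.x, " AIC:", 2 * len(fit.x) + 2 * fit.fun)

    # Nonparametric bootstrap for parameter bias and variance
    boot = []
    for _ in range(200):
        idx = rng.integers(0, len(dose), len(dose))
        boot.append(minimize(neg_log_lik, x0=fit.x,
                             args=(dose[idx], outcome[idx]), method="Nelder-Mead").x)
    boot = np.asarray(boot)
    print("bootstrap bias:", boot.mean(axis=0) - fit.x, " SD:", boot.std(axis=0))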
Combining QMRA and Epidemiology to Estimate Campylobacteriosis Incidence.
Evers, Eric G; Bouwknegt, Martijn
2016-10-01
The disease burden of pathogens as estimated by QMRA (quantitative microbial risk assessment) and EA (epidemiological analysis) often differs considerably. This is an unsatisfactory situation for policymakers and scientists. We explored methods to obtain a unified estimate using campylobacteriosis in the Netherlands as an example, where previous work resulted in estimates of 4.9 million (QMRA) and 90,600 (EA) cases per year. Using the maximum likelihood approach and considering EA the gold standard, the QMRA model could produce the original EA estimate by adjusting mainly the dose-infection relationship. Considering QMRA the gold standard, the EA model could produce the original QMRA estimate by adjusting mainly the probability that a gastroenteritis case is caused by Campylobacter. A joint analysis of QMRA and EA data and models assuming identical outcomes, using a frequentist or Bayesian approach (using vague priors), resulted in estimates of 102,000 or 123,000 campylobacteriosis cases per year, respectively. These were close to the original EA estimate, and this will be related to the dissimilarity in data availability. The Bayesian approach further showed that attenuating the condition of equal outcomes immediately resulted in very different estimates of the number of campylobacteriosis cases per year and that using more informative priors had little effect on the results. In conclusion, EA was dominant in estimating the burden of campylobacteriosis in the Netherlands. However, it must be noted that only statistical uncertainties were taken into account here. Taking all, usually difficult to quantify, uncertainties into account might lead to a different conclusion. © 2016 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Li, B.; Lee, H. C.; Duan, X.; Shen, C.; Zhou, L.; Jia, X.; Yang, M.
2017-09-01
The dual-energy CT-based (DECT) approach holds promise in reducing the overall uncertainty in proton stopping-power-ratio (SPR) estimation as compared to the conventional stoichiometric calibration approach. The objective of this study was to analyze the factors contributing to uncertainty in SPR estimation using the DECT-based approach and to derive a comprehensive estimate of the range uncertainty associated with SPR estimation in treatment planning. Two state-of-the-art DECT-based methods were selected and implemented on a Siemens SOMATOM Force DECT scanner. The uncertainties were first divided into five independent categories. The uncertainty associated with each category was estimated for lung, soft and bone tissues separately. A single composite uncertainty estimate was eventually determined for three tumor sites (lung, prostate and head-and-neck) by weighting the relative proportion of each tissue group for that specific site. The uncertainties associated with the two selected DECT methods were found to be similar, therefore the following results applied to both methods. The overall uncertainty (1σ) in SPR estimation with the DECT-based approach was estimated to be 3.8%, 1.2% and 2.0% for lung, soft and bone tissues, respectively. The dominant factor contributing to uncertainty in the DECT approach was the imaging uncertainties, followed by the DECT modeling uncertainties. Our study showed that the DECT approach can reduce the overall range uncertainty to approximately 2.2% (2σ) in clinical scenarios, in contrast to the previously reported 1%.
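The final site-weighting step can be read as a weighted combination of the per-tissue uncertainties; the sketch below assumes invented path-length proportions and a linear (fully correlated) combination, which may differ from the study's actual scheme.

    # 1-sigma SPR uncertainties per tissue group, %, from the abstract
    sigma = {"lung": 3.8, "soft": 1.2, "bone": 2.0}
    # Assumed tissue proportions along the beam path for a prostate case
    weights = {"lung": 0.0, "soft": 0.9, "bone": 0.1}

    # Linear weighting treats per-tissue errors as fully correlated along the
    # path; a root-sum-square would instead assume independence.
    composite = sum(weights[t] * sigma[t] for t in sigma)
    print(f"composite 1-sigma SPR uncertainty: {composite:.2f}%")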
Characterizing Protease Specificity: How Many Substrates Do We Need?
Schauperl, Michael; Fuchs, Julian E.; Waldner, Birgit J.; Huber, Roland G.; Kramer, Christian; Liedl, Klaus R.
2015-01-01
Calculation of cleavage entropies allows one to quantify, map, and compare protease substrate specificity by an information-entropy-based approach. The metric intrinsically depends on the number of experimentally determined substrates (data points). Thus a statistical analysis of its numerical stability is crucial to estimate the systematic error made by estimating specificity based on a limited number of substrates. In this contribution, we show the mathematical basis for estimating the uncertainty in cleavage entropies. Sets of cleavage entropies are calculated using experimental cleavage data and modeled extreme cases. By analyzing the underlying mathematics and applying statistical tools, a linear dependence of the metric with respect to 1/n was found. This allows us to extrapolate the values to an infinite number of samples and to estimate the errors. Analyzing the errors, a minimum number of 30 substrates was found to be necessary to characterize substrate specificity, in terms of amino acid variability, for a protease (S4-S4') with an uncertainty of 5 percent. Therefore, we encourage experimental researchers in the protease field to record specificity profiles of novel proteases aiming to identify at least 30 peptide substrates of maximum sequence diversity. We expect a full characterization of protease specificity to be helpful in rationalizing biological functions of proteases and assisting rational drug design.
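The 1/n extrapolation described above amounts to a straight-line fit of entropy against 1/n; the sketch below uses invented entropy values purely to show the mechanics.

    import numpy as np

    # Hypothetical cleavage entropies computed from subsets of n substrates
    n = np.array([5, 10, 15, 20, 30, 50, 80])
    entropy = np.array([2.10, 2.45, 2.58, 2.66, 2.74, 2.80, 2.83])

    slope, intercept = np.polyfit(1.0 / n, entropy, 1)  # linear in 1/n
    print(f"extrapolated entropy for n -> infinity: {intercept:.2f}")
    for k in (10, 30):
        est = slope / k + intercept
        print(f"n={k}: {abs(est - intercept) / intercept:.1%} from the asymptote")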
Spatial uncertainty of a geoid undulation model in Guayaquil, Ecuador
NASA Astrophysics Data System (ADS)
Chicaiza, E. G.; Leiva, C. A.; Arranz, J. J.; Buenaño, X. E.
2017-06-01
Geostatistics is a discipline that deals with the statistical analysis of regionalized variables. In this case study, geostatistics is used to estimate geoid undulation in the rural area of Guayaquil town in Ecuador. The geostatistical approach was chosen because it provides the estimation error of the prediction map. The open-source statistical software R, mainly the geoR, gstat and RGeostats libraries, was used. Exploratory data analysis (EDA), trend and structural analysis were carried out. Automatic model fitting by iterative least squares and other fitting procedures were employed to fit the variogram. Finally, kriging using the Bouguer gravity anomaly as external drift and universal kriging were used to obtain a detailed map of geoid undulation. The estimation uncertainty was within the interval [-0.5; +0.5] m for errors, with a maximum estimation standard deviation of 2 mm in relation to the interpolation method applied. The error distribution of the geoid undulation map obtained in this study provides a better result than Earth gravitational models publicly available for the study area, according to a comparison with independent validation points. The main goal of this paper is to confirm the feasibility of using geoid undulations from Global Navigation Satellite Systems and leveling field measurements together with geostatistical techniques in high-accuracy engineering projects.
Estimating discharge in rivers using remotely sensed hydraulic information
Bjerklie, D.M.; Moller, D.; Smith, L.C.; Dingman, S.L.
2005-01-01
A methodology to estimate in-bank river discharge exclusively from remotely sensed hydraulic data is developed. Water-surface width and maximum channel width measured from 26 aerial and digital orthophotos of 17 single-channel rivers and 41 SAR images of three braided rivers were coupled with channel slope data obtained from topographic maps to estimate the discharge. The standard error of the discharge estimates was within a factor of 1.5-2 (50-100%) of the observed, with the mean estimate accuracy within 10%. This level of accuracy was achieved using calibration functions developed from observed discharge. The calibration functions use reach-specific geomorphic variables, the maximum channel width and the channel slope, to predict a correction factor. The calibration functions are related to channel type. Surface velocity and width information, obtained from a single C-band image acquired by the Jet Propulsion Laboratory's (JPL's) AirSAR, was also used to estimate discharge for a reach of the Missouri River. Without using a calibration function, the estimate was within 72% of the observed discharge, which is within the expected range of uncertainty for the method. However, using the observed velocity to calibrate the initial estimate improved the estimate accuracy to within 10% of the observed. Remotely sensed discharge estimates with the accuracies reported in this paper could be useful for regional or continental scale hydrologic studies, or in regions where ground-based data are lacking. © 2004 Elsevier B.V. All rights reserved.
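Estimators of this family typically take a power-law form in width and slope; the sketch below shows the general shape with placeholder coefficients, not the calibrated values from the study.

    # Hedged sketch of a width/slope discharge estimator, Q = c * W**a * S**b;
    # c, a, b below are placeholders, not the study's calibrated coefficients.
    def estimate_discharge(width_m, slope, c=7.2, a=1.8, b=0.3):
        # Discharge in m3/s from water-surface width (m) and channel slope
        return c * width_m**a * slope**b

    print(f"{estimate_discharge(width_m=120.0, slope=0.0005):.0f} m3/s")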
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jiali; Han, Yuefeng; Stein, Michael L.
2016-02-10
The Weather Research and Forecast (WRF) model downscaling skill in extreme maximum daily temperature is evaluated by using the generalized extreme value (GEV) distribution. While the GEV distribution has been used extensively in climatology and meteorology for estimating probabilities of extreme events, accurately estimating GEV parameters based on data from a single pixel can be difficult, even with fairly long data records. This work proposes a simple method assuming that the shape parameter, the most difficult of the three parameters to estimate, does not vary over a relatively large region. This approach is applied to evaluate 31-year WRF-downscaled extreme maximum temperature through comparison with North American Regional Reanalysis (NARR) data. Uncertainty in GEV parameter estimates and the statistical significance in the differences of estimates between WRF and NARR are accounted for by conducting bootstrap resampling. Despite certain biases over parts of the United States, overall, WRF shows good agreement with NARR in the spatial pattern and magnitudes of GEV parameter estimates. Both WRF and NARR show a significant increase in extreme maximum temperature over the southern Great Plains and southeastern United States in January and over the western United States in July. The GEV model shows clear benefits from the regionally constant shape parameter assumption, for example, leading to estimates of the location and scale parameters of the model that show coherent spatial patterns.
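The regionally constant shape assumption can be emulated with scipy's GEV tools: estimate one shape from pooled data, then refit each pixel with the shape held fixed. The pooling step below is a crude stand-in for the paper's regional estimation, and the series are synthetic.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(1)
    # Synthetic 31-year annual-maximum series for four nearby "pixels"
    pixels = [genextreme.rvs(c=-0.1, loc=35 + i, scale=2.0, size=31, random_state=rng)
              for i in range(4)]

    # Step 1: single regional shape parameter from the pooled data
    c_regional, _, _ = genextreme.fit(np.concatenate(pixels))

    # Step 2: per-pixel fits with the shape fixed (fc), leaving loc/scale free
    for i, series in enumerate(pixels):
        _, loc, scale = genextreme.fit(series, fc=c_regional)
        print(f"pixel {i}: loc={loc:.2f} scale={scale:.2f} shape={c_regional:.3f}")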
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
Middleton, John; Vaks, Jeffrey E
2007-04-01
Errors of calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for the estimation of calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of the uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed for optimizing the process, while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows for estimation of calibrator uncertainty for optimization of various value-assignment processes, with a reduced number of measurements and reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
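The Monte Carlo idea is straightforward to sketch for a single value-transfer step; the real process had more stages and experimentally estimated parameters, so the numbers below only echo the two relative uncertainties quoted in the abstract.

    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000
    ref_value = 100.0                 # nominal assigned value, arbitrary units
    u_ref = 0.037 * ref_value         # 3.7% relative uncertainty of CRM470
    u_transfer = 0.008 * ref_value    # <0.8% added by the value-transfer process

    ref = rng.normal(ref_value, u_ref, N)
    assigned = ref + rng.normal(0.0, u_transfer, N)   # simulated transfer replicates
    print(f"assigned-value SD: {assigned.std():.2f} ({assigned.std() / ref_value:.2%} relative)")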
Transmission potential of Zika virus infection in the South Pacific.
Nishiura, Hiroshi; Kinoshita, Ryo; Mizumoto, Kenji; Yasuda, Yohei; Nah, Kyeongah
2016-04-01
Zika virus has spread internationally through countries in the South Pacific and the Americas. The present study aimed to estimate the basic reproduction number, R0, of Zika virus infection as a measurement of transmission potential, reanalyzing past epidemic data from the South Pacific. Incidence data from two epidemics, one on Yap Island, Federated States of Micronesia, in 2007 and the other in French Polynesia in 2013-2014, were reanalyzed. R0 of Zika virus infection was estimated from the early exponential growth rate of these two epidemics. The maximum likelihood estimate (MLE) of R0 for the Yap Island epidemic was in the order of 4.3-5.8, with broad uncertainty bounds due to the small sample size of confirmed and probable cases. The MLE of R0 for French Polynesia based on syndromic data ranged from 1.8 to 2.0, with narrow uncertainty bounds. The transmissibility of Zika virus infection appears to be comparable to those of dengue and chikungunya viruses. Considering that Aedes species are a shared vector, this finding indicates that Zika virus replication within the vector is perhaps comparable to that of dengue and chikungunya. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
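A minimal sketch of estimating R0 from the early exponential growth rate follows; the weekly counts and the generation time are invented, and R0 ~ exp(r*Tg) is only a first-order approximation assuming a fixed generation time, whereas the study used a full likelihood on the growth phase.

    import numpy as np

    weeks = np.arange(6)
    cases = np.array([3, 5, 11, 19, 38, 70])    # hypothetical early-phase counts

    r = np.polyfit(weeks, np.log(cases), 1)[0]  # exponential growth rate per week
    Tg = 2.5                                    # assumed mean generation time, weeks
    print(f"r = {r:.2f}/week, R0 ~ {np.exp(r * Tg):.1f}")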
Is my bottom-up uncertainty estimation on metal measurement adequate?
NASA Astrophysics Data System (ADS)
Marques, J. R.; Faustino, M. G.; Monteiro, L. R.; Ulrich, J. C.; Pires, M. A. F.; Cotrim, M. E. B.
2018-03-01
Is the uncertainty estimated under the GUM recommendation for metal measurements adequate? How can one evaluate whether the measurement uncertainty really covers all the uncertainty associated with the analytical procedure? Considering that many laboratories frequently underestimate or, less frequently, overestimate the uncertainties of their results, this paper presents an evaluation of the uncertainties estimated according to the GUM approach for seven metals measured by two ICP-OES procedures. The Horwitz function and proficiency-test scaled standard uncertainties were used in this evaluation. Our data show that the expanded uncertainties of most elements were underestimated by a factor of two to four. Possible causes and corrections are discussed herein.
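The Horwitz function used in the evaluation is a standard benchmark: the predicted reproducibility RSD in percent is 2^(1 - 0.5*log10 C) for a concentration C expressed as a mass fraction. A short sketch:

    import math

    def horwitz_rsd_percent(c_mass_fraction):
        # Horwitz predicted reproducibility RSD (%) at mass fraction c
        return 2 ** (1 - 0.5 * math.log10(c_mass_fraction))

    # Example: a metal at about 1 mg/L in water, roughly 1e-6 as a mass fraction
    print(f"{horwitz_rsd_percent(1e-6):.1f}%")  # 16.0% at the ppm level

A bottom-up expanded uncertainty (k=2) several-fold below twice this value would suggest an underestimated budget, which is the pattern the study reports.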
Stochastic sediment property inversion in Shallow Water 06.
Michalopoulou, Zoi-Heleni
2017-11-01
Time-series received at a short distance from the source allow the identification of distinct paths; four of these are the direct path, the surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed along with linearization for the estimation of source range and depth, water column depth, and sound speed in the water. By propagating the densities of the arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because those densities express the uncertainty in the inversion for sediment properties.
Temperature Measurement in WTE Boilers Using Suction Pyrometers
Rinaldi, Fabio; Najafi, Behzad
2013-01-01
The temperature of the flue-gas in the post-combustion zone of a waste-to-energy (WTE) plant has to be maintained within a fairly narrow range of values, the minimum of which is prescribed by the European Waste Directive 2000/76/CE, whereas the maximum value must be such as to ensure the preservation of the materials and the energy efficiency of the plant. A high degree of accuracy in measuring and controlling this temperature is therefore required. In almost all WTE plants this measurement is carried out using practical industrial thermometers, such as bare thermocouples and infrared radiation (IR) pyrometers, even though these are affected by various physical contributions that can make the gas-temperature measurements incorrect. The objective of this paper is to analyze the errors and uncertainties that can arise when using a bare thermocouple or an IR pyrometer in a WTE plant and to provide a method for the in situ calibration of these industrial sensors through the use of suction pyrometers. The paper describes the principle of operation, design, and uncertainty contributions of suction pyrometers; it also provides the best estimate of the flue-gas temperature in the post-combustion zone of a WTE plant and the estimate of its expanded uncertainty.
Public health, climate, and economic impacts of desulfurizing jet fuel.
Barrett, Steven R H; Yim, Steve H L; Gilmore, Christopher K; Murray, Lee T; Kuhn, Stephen R; Tai, Amos P K; Yantosca, Robert M; Byun, Daewon W; Ngan, Fong; Li, Xiangshang; Levy, Jonathan I; Ashok, Akshay; Koo, Jamin; Wong, Hsin Min; Dessens, Olivier; Balasubramanian, Sathya; Fleming, Gregg G; Pearlson, Matthew N; Wollersheim, Christoph; Malina, Robert; Arunachalam, Saravanan; Binkowski, Francis S; Leibensperger, Eric M; Jacob, Daniel J; Hileman, James I; Waitz, Ian A
2012-04-17
In jurisdictions including the US and the EU, ground transportation and marine fuels have recently been required to contain lower concentrations of sulfur, which has resulted in reduced atmospheric SOx emissions. In contrast, the maximum sulfur content of aviation fuel has remained unchanged at 3000 ppm (although sulfur levels average 600 ppm in practice). We assess the costs and benefits of a potential ultra-low sulfur (15 ppm) jet fuel standard ("ULSJ"). We estimate that global implementation of ULSJ will cost US$1-4bn per year and prevent 900-4000 air quality-related premature mortalities per year. Radiative forcing associated with reduction in atmospheric sulfate, nitrate, and ammonium loading is estimated at +3.4 mW/m2 (equivalent to about 1/10th of the warming due to CO2 emissions from aviation) and ULSJ increases life cycle CO2 emissions by approximately 2%. The public health benefits are dominated by the reduction in cruise SOx emissions, so a key uncertainty is the atmospheric modeling of vertical transport of pollution from cruise altitudes to the ground. Comparisons of modeled and measured vertical profiles of CO, PAN, O3, and 7Be indicate that this uncertainty is low relative to uncertainties regarding the value of statistical life and the toxicity of fine particulate matter.
Liu, Dan; Cai, Wenwen; Xia, Jiangzhou; Dong, Wenjie; Zhou, Guangsheng; Chen, Yang; Zhang, Haicheng; Yuan, Wenping
2014-01-01
Gross Primary Production (GPP) is the largest flux in the global carbon cycle. However, large uncertainties in current global estimations persist. In this study, we examined the performance of a process-based model (Integrated BIosphere Simulator, IBIS) at 62 eddy covariance sites around the world. Our results indicated that the IBIS model explained 60% of the observed variation in daily GPP at all validation sites. Comparison with a satellite-based vegetation model (Eddy Covariance-Light Use Efficiency, EC-LUE) revealed that the IBIS simulations yielded GPP results comparable to those of the EC-LUE model. Global mean GPP estimated by the IBIS model was 107.50±1.37 Pg C year-1 (mean value ± standard deviation) across the vegetated area for the period 2000-2006, consistent with the results of the EC-LUE model (109.39±1.48 Pg C year-1). To evaluate the uncertainty introduced by the parameter Vcmax, which represents the maximum photosynthetic capacity, we inverted Vcmax using Markov chain Monte Carlo (MCMC) procedures. Using the inverted Vcmax values, the simulated global GPP increased by 16.5 Pg C year-1, indicating that the IBIS model is sensitive to Vcmax and that large uncertainty exists in the model parameterization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sutton, J.L.; Leovy, C.B.; Tillman, J.E.
1978-12-01
Wind speed and ambient and surface temperatures from both Viking Landers have been used to compute bulk Richardson numbers and Monin-Obukhov lengths during the earliest phase of the Mars missions. These parameters are used to estimate drag and heat transfer coefficients, friction velocities, and surface heat fluxes at the two sites. The principal uncertainty is in the specification of the roughness length. Maximum heat fluxes occur near local noon at both sites and are estimated to be in the range 15-20 W m-2 at the Viking 1 site and 10-15 W m-2 at the Viking 2 site. Maximum values of friction velocity occur in late morning at Viking 1 and are estimated to be 0.4-0.6 m s-1. They occur shortly after dawn at the Viking 2 site, where peak values are estimated to be in the range 0.25-0.35 m s-1. Extension of these calculations to later times during the mission will require allowance for dust opacity effects in the estimation of surface temperature and in the correction of radiation errors of the Viking 2 temperature sensor.
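The bulk Richardson number at the heart of these estimates is simple to compute; the sketch below assumes illustrative midday values rather than actual lander data, with the Martian surface gravity as the default.

    def bulk_richardson(theta_surf, theta_air, wind_speed, z, g=3.71):
        # Bulk Richardson number between the surface and height z (g defaults to Mars)
        theta_mean = 0.5 * (theta_surf + theta_air)
        return (g / theta_mean) * (theta_air - theta_surf) * z / wind_speed**2

    # Unstable daytime case: surface warmer than the air at sensor height
    print(bulk_richardson(theta_surf=260.0, theta_air=245.0, wind_speed=4.0, z=1.6))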
A Global Carbon Assimilation System using a modified EnKF assimilation method
NASA Astrophysics Data System (ADS)
Zhang, S.; Zheng, X.; Chen, Z.; Dan, B.; Chen, J. M.; Yi, X.; Wang, L.; Wu, G.
2014-10-01
A Global Carbon Assimilation System using an ensemble Kalman filter (GCAS-EK) is developed for assimilating atmospheric CO2 abundance data into an ecosystem model to simultaneously estimate the surface carbon fluxes and atmospheric CO2 distribution. This assimilation approach is based on the ensemble Kalman filter (EnKF), but with several new developments, including the use of analysis states to iteratively estimate ensemble forecast errors, and a maximum likelihood estimation of the inflation factors of the forecast and observation errors. The proposed assimilation approach is tested in observing system simulation experiments and then used to estimate the terrestrial ecosystem carbon fluxes and atmospheric CO2 distributions from 2002 to 2008. The results showed that this assimilation approach can effectively reduce the biases and uncertainties of the carbon fluxes simulated by the ecosystem model.
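For orientation, the sketch below implements only the standard stochastic EnKF analysis step with a fixed multiplicative inflation factor; the GCAS-EK additions described above (iterative forecast-error estimation and maximum-likelihood inflation estimation) are not reproduced.

    import numpy as np

    def enkf_update(ensemble, obs, obs_err_sd, H, inflation=1.0, seed=0):
        # ensemble: (n_members, n_state); obs: (n_obs,); H: (n_obs, n_state)
        rng = np.random.default_rng(seed)
        xb = ensemble.mean(axis=0)
        ens = xb + inflation * (ensemble - xb)      # inflate forecast spread
        X = ens - ens.mean(axis=0)                  # state anomalies
        Y = X @ H.T                                 # observation-space anomalies
        n = len(ens) - 1
        Pyy = Y.T @ Y / n + np.eye(len(obs)) * obs_err_sd**2
        K = (X.T @ Y / n) @ np.linalg.inv(Pyy)      # Kalman gain
        perturbed = obs + rng.normal(0.0, obs_err_sd, (len(ens), len(obs)))
        return ens + (perturbed - ens @ H.T) @ K.T  # analysis ensemble

    ens = np.random.default_rng(1).normal([1.0, 2.0], 0.5, (50, 2))
    H = np.array([[1.0, 0.0]])                      # observe the first component only
    print(enkf_update(ens, np.array([1.3]), 0.1, H, inflation=1.05).mean(axis=0))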
NASA Astrophysics Data System (ADS)
Cabalín, L. M.; González, A.; Ruiz, J.; Laserna, J. J.
2010-08-01
Statistical uncertainty in the quantitative analysis of solid samples in motion by laser-induced breakdown spectroscopy (LIBS) has been assessed. For this purpose, a LIBS demonstrator was designed and constructed in our laboratory. The LIBS system consisted of a laboratory-scale conveyor belt, a compact optical module and a Nd:YAG laser operating at 532 nm. The speed of the conveyor belt was variable and could be adjusted up to a maximum speed of 2 m s-1. Statistical uncertainty in the analytical measurements was estimated in terms of precision (reproducibility and repeatability) and accuracy. The results obtained by LIBS on shredded scrap samples under real conditions have demonstrated that the analytical precision and accuracy of LIBS is dependent on the sample geometry, position on the conveyor belt and surface cleanliness. Flat, relatively clean scrap samples exhibited acceptable reproducibility and repeatability; by contrast, samples with an irregular shape or a dirty surface exhibited a poor relative standard deviation.
Uncertainty in spatially explicit animal dispersal models
Mooij, Wolf M.; DeAngelis, Donald L.
2003-01-01
Uncertainty in estimates of survival of dispersing animals is a vexing difficulty in conservation biology. The current notion is that this uncertainty decreases the usefulness of spatially explicit population models in particular. We examined this problem by comparing dispersal models of three levels of complexity: (1) an event-based binomial model that considers only the occurrence of mortality or arrival, (2) a temporally explicit exponential model that employs mortality and arrival rates, and (3) a spatially explicit grid-walk model that simulates the movement of animals through an artificial landscape. Each model was fitted to the same set of field data. A first objective of the paper is to illustrate how the maximum-likelihood method can be used in all three cases to estimate the means and confidence limits for the relevant model parameters, given a particular set of data on dispersal survival. Using this framework we show that the structure of the uncertainty for all three models is strikingly similar. In fact, the results of our unified approach imply that spatially explicit dispersal models, which take advantage of information on landscape details, suffer less from uncertainty than do simpler models. Moreover, we show that the proposed strategy of model development safeguards one from error propagation in these more complex models. Finally, our approach shows that all models related to animal dispersal, ranging from simple to complex, can be related in a hierarchical fashion, so that the various approaches to modeling such dispersal can be viewed from a unified perspective.
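Model (1), the event-based binomial model, makes the likelihood machinery easy to show; the counts below are invented, and the 95% limits come from profiling the binomial log-likelihood at the usual chi-square cutoff.

    import numpy as np
    from scipy.stats import binom

    n, k = 40, 26          # illustrative: k of n dispersers survive to arrival
    p_hat = k / n          # maximum-likelihood survival estimate

    p = np.linspace(1e-3, 1 - 1e-3, 2000)
    ll = binom.logpmf(k, n, p)
    inside = p[ll > ll.max() - 1.92]   # chi-square(1)/2 = 1.92 cutoff
    print(f"MLE {p_hat:.2f}, 95% CI ({inside.min():.2f}, {inside.max():.2f})")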
Characterising large scenario earthquakes and their influence on NDSHA maps
NASA Astrophysics Data System (ADS)
Magrin, Andrea; Peresan, Antonella; Panza, Giuliano F.
2016-04-01
The neo-deterministic approach to seismic zoning, NDSHA, relies on physically sound modelling of ground shaking from a large set of credible scenario earthquakes, which can be defined based on seismic history and seismotectonics, as well as by incorporating information from a wide set of geological and geophysical data (e.g. morphostructural features and present-day deformation processes identified by Earth observations). NDSHA is based on the calculation of complete synthetic seismograms; hence it does not make use of empirical attenuation models (i.e. ground motion prediction equations). From the set of synthetic seismograms, maps of seismic hazard that describe the maximum of different ground shaking parameters at the bedrock can be produced. As a rule, NDSHA defines the hazard as the envelope of the ground shaking at the site, computed from all of the defined seismic sources; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In this way, the standard NDSHA maps make it possible to account for the largest observed or credible earthquake sources identified in the region in a quite straightforward manner. This study aims to assess the influence of unavoidable uncertainties in the characterisation of large scenario earthquakes on the NDSHA estimates. The treatment of uncertainties is performed by sensitivity analyses for key modelling parameters and accounts for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. Results from sensitivity analyses with respect to the definition of possible seismic sources are discussed. A key parameter is the magnitude of the seismic sources used in the simulation, which is based on information from the earthquake catalogue, seismogenic zones and seismogenic nodes. The largest part of the existing Italian catalogues is based on macroseismic intensities; a rough estimate of the error in peak values of ground motion can therefore be a factor of two, intrinsic in the MCS and other discrete scales. A simple test supports this hypothesis: an increase of 0.5 in the magnitude, i.e. one degree in epicentral MCS, of all sources used in the national-scale seismic zoning produces a doubling of the maximum ground motion. The analysis of uncertainty in ground motion maps, due to random catalogue errors in magnitude and localization, shows a non-uniform distribution of ground-shaking uncertainty. The available information from catalogues of past events, which is incomplete and may well not be representative of future earthquakes, can be substantially complemented using independent indicators of the seismogenic potential of a given area, such as active faulting data and seismogenic nodes.
Using psychophysics to ask if the brain samples or maximizes
Acuna, Daniel E.; Berniker, Max; Fernandes, Hugo L.; Kording, Konrad P.
2015-01-01
The two-alternative forced-choice (2AFC) task is the workhorse of psychophysics and is used to measure the just-noticeable difference, generally assumed to accurately quantify sensory precision. However, this assumption is not true for all mechanisms of decision making. Here we derive the behavioral predictions for two popular mechanisms, sampling and maximum a posteriori, and examine how they affect the outcome of the 2AFC task. These predictions are used in a combined visual 2AFC and estimation experiment. Our results strongly suggest that subjects use a maximum a posteriori mechanism. Further, our derivations and experimental paradigm establish the already standard 2AFC task as a behavioral tool for measuring how humans make decisions under uncertainty.
A hierarchical Bayesian GEV model for improving local and regional flood quantile estimates
NASA Astrophysics Data System (ADS)
Lima, Carlos H. R.; Lall, Upmanu; Troy, Tara; Devineni, Naresh
2016-10-01
We estimate local and regional Generalized Extreme Value (GEV) distribution parameters for flood frequency analysis in a multilevel, hierarchical Bayesian framework, to explicitly model and reduce uncertainties. As prior information for the model, we assume that the GEV location and scale parameters for each site come from independent log-normal distributions, whose mean parameter scales with the drainage area. From empirical and theoretical arguments, the shape parameter for each site is shrunk towards a common mean. Non-informative prior distributions are assumed for the hyperparameters and the MCMC method is used to sample from the joint posterior distribution. The model is tested using annual maximum series from 20 streamflow gauges located in an 83,000 km2 flood prone basin in Southeast Brazil. The results show a significant reduction of uncertainty estimates of flood quantile estimates over the traditional GEV model, particularly for sites with shorter records. For return periods within the range of the data (around 50 years), the Bayesian credible intervals for the flood quantiles tend to be narrower than the classical confidence limits based on the delta method. As the return period increases beyond the range of the data, the confidence limits from the delta method become unreliable and the Bayesian credible intervals provide a way to estimate satisfactory confidence bands for the flood quantiles considering parameter uncertainties and regional information. In order to evaluate the applicability of the proposed hierarchical Bayesian model for regional flood frequency analysis, we estimate flood quantiles for three randomly chosen out-of-sample sites and compare with classical estimates using the index flood method. The posterior distributions of the scaling law coefficients are used to define the predictive distributions of the GEV location and scale parameters for the out-of-sample sites given only their drainage areas and the posterior distribution of the average shape parameter is taken as the regional predictive distribution for this parameter. While the index flood method does not provide a straightforward way to consider the uncertainties in the index flood and in the regional parameters, the results obtained here show that the proposed Bayesian method is able to produce adequate credible intervals for flood quantiles that are in accordance with empirical estimates.
2013-01-01
Background When mathematical modelling is applied to many different application areas, a common task is the estimation of states and parameters based on measurements. With this kind of inference making, uncertainties in the time when the measurements have been taken are often neglected, but especially in applications taken from the life sciences, this kind of errors can considerably influence the estimation results. As an example in the context of personalized medicine, the model-based assessment of the effectiveness of drugs is becoming to play an important role. Systems biology may help here by providing good pharmacokinetic and pharmacodynamic (PK/PD) models. Inference on these systems based on data gained from clinical studies with several patient groups becomes a major challenge. Particle filters are a promising approach to tackle these difficulties but are by itself not ready to handle uncertainties in measurement times. Results In this article, we describe a variant of the standard particle filter (PF) algorithm which allows state and parameter estimation with the inclusion of measurement time uncertainties (MTU). The modified particle filter, which we call MTU-PF, also allows the application of an adaptive stepsize choice in the time-continuous case to avoid degeneracy problems. The modification is based on the model assumption of uncertain measurement times. While the assumption of randomness in the measurements themselves is common, the corresponding measurement times are generally taken as deterministic and exactly known. Especially in cases where the data are gained from measurements on blood or tissue samples, a relatively high uncertainty in the true measurement time seems to be a natural assumption. Our method is appropriate in cases where relatively few data are used from a relatively large number of groups or individuals, which introduce mixed effects in the model. This is a typical setting of clinical studies. We demonstrate the method on a small artificial example and apply it to a mixed effects model of plasma-leucine kinetics with data from a clinical study which included 34 patients. Conclusions Comparisons of our MTU-PF with the standard PF and with an alternative Maximum Likelihood estimation method on the small artificial example clearly show that the MTU-PF obtains better estimations. Considering the application to the data from the clinical study, the MTU-PF shows a similar performance with respect to the quality of estimated parameters compared with the standard particle filter, but besides that, the MTU algorithm shows to be less prone to degeneration than the standard particle filter. PMID:23331521
Consequences of Secondary Calibrations on Divergence Time Estimates.
Schenk, John J
2016-01-01
Secondary calibrations (calibrations based on the results of previous molecular dating studies) are commonly applied in divergence time analyses in groups that lack fossil data; however, the consequences of applying secondary calibrations in a relaxed-clock approach are not fully understood. I tested whether applying the posterior estimate from a primary study as a prior distribution in a secondary study results in consistent age and uncertainty estimates. I compared age estimates from simulations with 100 randomly replicated secondary trees. On average, the 95% credible intervals of node ages for secondary estimates were significantly younger and narrower than primary estimates. The primary and secondary age estimates were significantly different in 97% of the replicates after Bonferroni corrections. Greater error in magnitude was associated with deeper than shallower nodes, but the opposite was found when standardized by median node age, and a significant positive relationship was determined between the number of tips/age of secondary trees and the total amount of error. When two secondary calibrated nodes were analyzed, estimates remained significantly different, and although the minimum and median estimates were associated with less error, maximum age estimates and credible interval widths had greater error. The shape of the prior also influenced error, in which applying a normal, rather than uniform, prior distribution resulted in greater error. Secondary calibrations, in summary, lead to a false impression of precision and the distribution of age estimates shift away from those that would be inferred by the primary analysis. These results suggest that secondary calibrations should not be applied as the only source of calibration in divergence time analyses that test time-dependent hypotheses until the additional error associated with secondary calibrations is more properly modeled to take into account increased uncertainty in age estimates.
Flood hydrology for Dry Creek, Lake County, Northwestern Montana
Parrett, C.; Jarrett, R.D.
2004-01-01
Dry Creek drains about 22.6 square kilometers of rugged mountainous terrain upstream from Tabor Dam in the Mission Range near St. Ignatius, Montana. Because of uncertainty about plausible peak discharges and concerns regarding the ability of the Tabor Dam spillway to safely convey these discharges, the flood hydrology for Dry Creek was evaluated on the basis of three hydrologic and geologic methods. The first method involved determining an envelope line relating flood discharge to drainage area on the basis of regional historical data and calculating a 500-year flood for Dry Creek using a regression equation. The second method involved paleoflood methods to estimate the maximum plausible discharge for 35 sites in the study area. The third method involved rainfall-runoff modeling for the Dry Creek basin in conjunction with regional precipitation information to determine plausible peak discharges. All of these methods resulted in estimates of plausible peak discharges that are substantially less than those predicted by the more generally applied probable maximum flood technique. Copyright ASCE 2004.
NASA Technical Reports Server (NTRS)
Hurd, W. J.
1974-01-01
A prototype of a semi-real time system for synchronizing the Deep Space Net station clocks by radio interferometry was successfully demonstrated on August 30, 1972. The system utilized an approximate maximum likelihood estimation procedure for processing the data, thereby achieving essentially optimum time sync estimates for a given amount of data, or equivalently, minimizing the amount of data required for reliable estimation. Synchronization accuracies as good as 100 ns rms were achieved between Deep Space Stations 11 and 12, both at Goldstone, Calif. The accuracy can be improved by increasing the system bandwidth until the fundamental limitations due to baseline and source position uncertainties and atmospheric effects are reached. These limitations are under 10 ns for transcontinental baselines.
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four projections, and associated kriging variances, were averaged using the posterior model probabilities as weights. Finally, cross-validation was conducted by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of the model-averaged result with that of each individual model. Using two quantitative measures of comparison, the model-averaged result was superior to any individual geostatistical model of log permeability considered.
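The averaging step described above has a compact form: the model-averaged prediction is the probability-weighted mean of the per-model predictions, and the total variance adds a between-model term to the weighted within-model (kriging) variances. A minimal sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical per-model kriging predictions and variances at one location,
# with posterior model probabilities from calibration (they sum to 1).
pred = np.array([2.1, 2.4, 1.9, 2.6])     # e.g. log air permeability
var = np.array([0.30, 0.25, 0.40, 0.35])  # kriging variances
p = np.array([0.40, 0.30, 0.20, 0.10])    # posterior model probabilities

mu = np.sum(p * pred)  # model-averaged prediction

# Total variance = weighted within-model variance + between-model spread,
# so conceptual-model uncertainty adds to parameter uncertainty.
total_var = np.sum(p * var) + np.sum(p * (pred - mu) ** 2)
print(mu, total_var)
```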
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.
2007-01-01
Space radiation presents major challenges to astronauts on the International Space Station and for future missions to the Earth's moon or Mars. Methods used to project risks on Earth need to be modified because of the large uncertainties in projecting cancer risks from space radiation, which thus impact safety factors. We describe NASA's unique approach to radiation safety, which applies uncertainty-based criteria within the occupational health program for astronauts: the two terrestrial criteria of a point estimate of a maximum acceptable level of risk and application of the principle of As Low As Reasonably Achievable (ALARA) are supplemented by a third requirement that protects against risk projection uncertainties using the upper 95% confidence level (CL) in the radiation cancer projection model. NASA's acceptable level of risk for the ISS and the new lunar program has been set at a point estimate of a 3-percent risk of exposure-induced death (REID). Tissue-averaged organ dose equivalents are combined with age-at-exposure and gender-dependent risk coefficients to project the cumulative occupational radiation risks incurred by astronauts. The 95% CL criterion is in practice stronger than ALARA, but is not an absolute cut-off as applied to a point projection of a 3% REID. We describe the most recent astronaut dose limits, and present a historical review of astronaut organ dose estimates from the Mercury program through the current ISS program, and future projections for lunar and Mars missions. NASA's 95% CL criterion is linked to a vibrant ground-based radiobiology program investigating the radiobiology of high-energy protons and heavy ions. The near-term goal of this research is new knowledge leading to the reduction of uncertainties in projection models. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. The current model for projecting space radiation cancer risk relies on the three assumptions of linearity, additivity, and scaling, along with the use of population averages. We describe uncertainty estimates for this model, and new experimental data that shed light on the accuracy of the underlying assumptions. These methods make it possible to express risk management objectives in terms of quantitative metrics, i.e., the number of days in space without exceeding a given risk level within well-defined confidence limits. The resulting methodology is applied to several human space exploration mission scenarios including a lunar station, a deep space outpost, and a Mars mission. Factors that dominate risk projection uncertainties and application of this approach to assess candidate mitigation approaches are described.
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
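The criterion-based weights mentioned above (for AIC, BIC, or KIC) all share the same exponential form: each model's weight is proportional to exp(-Δ/2), where Δ is its criterion value minus the minimum across models. A minimal sketch with hypothetical criterion values:

```python
import numpy as np

def ic_weights(ic_values):
    """Convert information-criterion values (AIC, BIC, or KIC) into
    relative model weights via exp(-delta/2), normalized to sum to 1."""
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical criterion values for four alternative conceptual models.
print(ic_weights([1310.2, 1312.8, 1315.1, 1309.7]))
```

Note that GLUE weights are built differently, from a likelihood measure over behavioural Monte Carlo runs, which is one reason the techniques can rank models differently.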
Jennings, Simon; Collingridge, Kate
2015-01-01
Existing estimates of fish and consumer biomass in the world's oceans are disparate. This creates uncertainty about the roles of fish and other consumers in biogeochemical cycles and ecosystem processes, the extent of human and environmental impacts and fishery potential. We develop and use a size-based macroecological model to assess the effects of parameter uncertainty on predicted consumer biomass, production and distribution. Resulting uncertainty is large (e.g. median global biomass 4.9 billion tonnes for consumers weighing 1 g to 1000 kg; 50% uncertainty intervals of 2 to 10.4 billion tonnes; 90% uncertainty intervals of 0.3 to 26.1 billion tonnes) and driven primarily by uncertainty in trophic transfer efficiency and its relationship with predator-prey body mass ratios. Even the upper uncertainty intervals for global predictions of consumer biomass demonstrate the remarkable scarcity of marine consumers, with less than one part in 30 million by volume of the global oceans comprising tissue of macroscopic animals. Thus the apparently high densities of marine life seen in surface and coastal waters and frequently visited abundance hotspots will likely give many in society a false impression of the abundance of marine animals. Unexploited baseline biomass predictions from the simple macroecological model were used to calibrate a more complex size- and trait-based model to estimate fisheries yield and impacts. Yields are highly dependent on baseline biomass and fisheries selectivity. Predicted global sustainable fisheries yield increases ≈4 fold when smaller individuals (< 20 cm from species of maximum mass < 1 kg) are targeted in all oceans, but the predicted yields would rarely be accessible in practice and this fishing strategy leads to the collapse of larger species if fishing mortality rates on different size classes cannot be decoupled. Our analyses show that models with minimal parameter demands that are based on a few established ecological principles can support equitable analysis and comparison of diverse ecosystems. The analyses provide insights into the effects of parameter uncertainty on global biomass and production estimates, which have yet to be achieved with complex models, and will therefore help to highlight priorities for future research and data collection. However, the focus on simple model structures and global processes means that non-phytoplankton primary production and several groups, structures and processes of ecological and conservation interest are not represented. Consequently, our simple models become increasingly less useful than more complex alternatives when addressing questions about food web structure and function, biodiversity, resilience and human impacts at smaller scales and for areas closer to coasts.
2010-01-01
Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service. PMID:21034504
Marino, Patricia; Siani, Carole; Roché, Henri; Moatti, Jean-Paul
2005-01-01
The objective of this study was to determine, taking into account uncertainty in cost and outcome parameters, the cost-effectiveness of high-dose chemotherapy (HDC) compared with conventional chemotherapy for advanced breast cancer patients. An analysis was conducted for 300 patients included in a randomized clinical trial designed to evaluate the benefits, in terms of disease-free survival and overall survival, of adding a single course of HDC to a four-cycle conventional-dose chemotherapy for breast cancer patients with axillary lymph node invasion. Costs were estimated from a detailed observation of the physical quantities consumed, and the Kaplan-Meier method was used to evaluate mean survival times. Incremental cost-effectiveness ratios were evaluated successively considering disease-free survival and overall survival outcomes. Uncertainty was handled by constructing confidence intervals for these ratios using the truncated Fieller method. The cost per disease-free life year gained was evaluated at 13,074 Euros, a value that seems to be acceptable to society. However, handling uncertainty shows that the upper bound of the confidence interval is around 38,000 Euros, which is nearly three times higher. Moreover, as no difference was demonstrated in overall survival between treatments, the cost-effectiveness analysis, which then reduces to a cost minimization, indicated that the intensive treatment is a dominated strategy involving an extra cost of 7,400 Euros for no added benefit. Adding a single course of HDC led to a clinical benefit in terms of disease-free survival for an additional cost that seems acceptable, considering the point estimate of the ratio. However, handling uncertainty indicates a maximum ratio for which conclusions have to be discussed.
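Fieller-type intervals for a cost-effectiveness ratio come from solving a quadratic in the ratio R, built from the mean incremental cost ΔC, mean incremental effectiveness ΔE, their variances, and their covariance. The sketch below shows the standard (untruncated) Fieller construction; all input numbers are hypothetical and merely tuned so the point estimate lands near the 13,074 Euro figure quoted above.

```python
import numpy as np

def fieller_ci(dc, de, var_c, var_e, cov_ce, z=1.96):
    """Fieller confidence interval for the ratio dc/de: the roots of
    a*R**2 + b*R + c = 0. Returns None when the interval is unbounded
    (incremental effectiveness not significantly different from zero)."""
    a = de**2 - z**2 * var_e
    b = -2.0 * (dc * de - z**2 * cov_ce)
    c = dc**2 - z**2 * var_c
    roots = np.roots([a, b, c])
    if a <= 0 or np.iscomplexobj(roots):
        return None
    return sorted(roots.real)

# Hypothetical trial summaries: dc/de ~ 13,071 Euros per life year gained.
print(fieller_ci(dc=9150.0, de=0.70, var_c=1.3e6, var_e=0.02, cov_ce=60.0))
```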
NASA Astrophysics Data System (ADS)
Bopp, L.; Resplandy, L.; Untersee, A.; Le Mezo, P.; Kageyama, M.
2017-08-01
All Earth System models project a consistent decrease in the oxygen content of oceans for the coming decades because of ocean warming, reduced ventilation and increased stratification. But large uncertainties for these future projections of ocean deoxygenation remain for the subsurface tropical oceans where the major oxygen minimum zones are located. Here, we combine global warming projections, model-based estimates of natural short-term variability, as well as data and model estimates of the Last Glacial Maximum (LGM) ocean oxygenation to gain some insights into the major mechanisms of oxygenation changes across these different time scales. We show that the primary uncertainty on future ocean deoxygenation in the subsurface tropical oceans is in fact controlled by a robust compensation between decreasing oxygen saturation (O2sat) due to warming and decreasing apparent oxygen utilization (AOU) due to increased ventilation of the corresponding water masses. Modelled short-term natural variability in subsurface oxygen levels also reveals a compensation between O2sat and AOU, controlled by the latter. Finally, using a model simulation of the LGM, reproducing data-based reconstructions of past ocean (de)oxygenation, we show that the deoxygenation trend of the subsurface ocean during deglaciation was controlled by a combination of warming-induced decreasing O2sat and increasing AOU driven by a reduced ventilation of tropical subsurface waters. This article is part of the themed issue 'Ocean ventilation and deoxygenation in a warming world'.
Optimal design criteria - prediction vs. parameter estimation
NASA Astrophysics Data System (ADS)
Waldl, Helmut
2014-05-01
G-optimality is a popular design criterion for optimal prediction; it seeks to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is natural to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is very costly, and finding the maximum kriging variance in high-dimensional regions is so time-demanding that in practice the G-optimal design cannot really be found with currently available computing equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield fundamentally different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on this Pareto frontier yields almost as good results as searching the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
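The bottleneck described above, evaluating the maximum kriging variance over a dense grid, is easy to state in code. A minimal sketch, assuming simple kriging with a known Gaussian covariance; the design, grid, and covariance parameters are hypothetical:

```python
import numpy as np

def kriging_variance(X_design, x_grid, sigma2=1.0, ell=0.3, nugget=1e-6):
    """Simple-kriging variance at grid points for a given design,
    assuming a Gaussian covariance with variance sigma2 and range ell."""
    def k(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sigma2 * np.exp(-0.5 * (d / ell) ** 2)
    K = k(X_design, X_design) + nugget * np.eye(len(X_design))
    Ks = k(X_design, x_grid)
    return sigma2 - np.einsum("ij,ij->j", Ks, np.linalg.solve(K, Ks))

rng = np.random.default_rng(1)
design = rng.random((8, 2))    # a candidate 8-point design in [0,1]^2
grid = rng.random((2000, 2))   # dense evaluation grid
print("G-criterion (max kriging variance):",
      kriging_variance(design, grid).max())
```

Evaluating this maximum for every candidate design is what makes a direct search for the G-optimal design so costly, and why restricting the search to the D-optimality Pareto frontier pays off.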
NASA Astrophysics Data System (ADS)
Wani, Omar; Beckers, Joost V. L.; Weerts, Albrecht H.; Solomatine, Dimitri P.
2017-08-01
A non-parametric method is applied to quantify residual uncertainty in hydrologic streamflow forecasting. This method acts as a post-processor on deterministic model forecasts and generates a residual uncertainty distribution. Based on instance-based learning, it uses a k nearest-neighbour search for similar historical hydrometeorological conditions to determine uncertainty intervals from a set of historical errors, i.e. discrepancies between past forecast and observation. The performance of this method is assessed using test cases of hydrologic forecasting in two UK rivers: the Severn and Brue. Forecasts in retrospect were made and their uncertainties were estimated using kNN resampling and two alternative uncertainty estimators: quantile regression (QR) and uncertainty estimation based on local errors and clustering (UNEEC). Results show that kNN uncertainty estimation produces accurate and narrow uncertainty intervals with good probability coverage. Analysis also shows that the performance of this technique depends on the choice of search space. Nevertheless, the accuracy and reliability of uncertainty intervals generated using kNN resampling are at least comparable to those produced by QR and UNEEC. It is concluded that kNN uncertainty estimation is an interesting alternative to other post-processors, like QR and UNEEC, for estimating forecast uncertainty. Apart from its concept being simple and well understood, an advantage of this method is that it is relatively easy to implement.
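The kNN post-processor described above is straightforward to sketch: find the k historical situations most similar to the current hydrometeorological conditions and read uncertainty bounds off the empirical quantiles of their forecast errors. Everything below (predictors, archive, forecast value) is synthetic and merely illustrative:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)

# Synthetic archive: predictors (e.g. rainfall, soil moisture, flow) and the
# forecast errors (observation minus forecast) recorded under them.
X_hist = rng.normal(size=(5000, 3))
err_hist = rng.normal(scale=1.0 + 0.5 * X_hist[:, 2] ** 2)  # heteroscedastic

nn = NearestNeighbors(n_neighbors=200).fit(X_hist)
x_now = np.array([[0.2, -0.1, 1.5]])        # current conditions
_, idx = nn.kneighbors(x_now)

# Residual-uncertainty interval: error quantiles of the k nearest neighbours
# added to the deterministic forecast.
q05, q95 = np.percentile(err_hist[idx[0]], [5, 95])
forecast = 42.0                             # deterministic forecast (m3/s)
print("90% interval:", forecast + q05, "to", forecast + q95)
```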
NASA Astrophysics Data System (ADS)
Hughes, J. D.; Metz, P. A.
2014-12-01
Most watershed studies include observation-based water budget analyses to develop first-order estimates of significant flow terms. Surface-water/groundwater (SWGW) exchange is typically assumed to be equal to the residual of the sum of inflows and outflows in a watershed. These estimates of SWGW exchange, however, are highly uncertain as a result of the propagation of uncertainty inherent in the calculation or processing of the other terms of the water budget, such as stage-area-volume relations, and uncertainties associated with land-cover based evapotranspiration (ET) rate estimates. Furthermore, the uncertainty of estimated SWGW exchanges can be magnified in large wetland systems that transition from dry to wet during wet periods. Although it is well understood that observation-based estimates of SWGW exchange are uncertain it is uncommon for the uncertainty of these estimates to be directly quantified. High-level programming languages like Python can greatly reduce the effort required to (1) quantify the uncertainty of estimated SWGW exchange in large wetland systems and (2) evaluate how different approaches for partitioning land-cover data in a watershed may affect the water-budget uncertainty. We have used Python with the Numpy, Scipy.stats, and pyDOE packages to implement an unconstrained Monte Carlo approach with Latin Hypercube sampling to quantify the uncertainty of monthly estimates of SWGW exchange in the Floral City watershed of the Tsala Apopka wetland system in west-central Florida, USA. Possible sources of uncertainty in the water budget analysis include rainfall, ET, canal discharge, and land/bathymetric surface elevations. Each of these input variables was assigned a probability distribution based on observation error or spanning the range of probable values. The Monte Carlo integration process exposes the uncertainties in land-cover based ET rate estimates as the dominant contributor to the uncertainty in SWGW exchange estimates. We will discuss the uncertainty of SWGW exchange estimates using an ET model that partitions the watershed into open water and wetland land-cover types. We will also discuss the uncertainty of SWGW exchange estimates calculated using ET models partitioned into additional land-cover types.
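A stripped-down version of that workflow, using the packages named above (numpy, scipy.stats, pyDOE), samples each uncertain budget term with Latin Hypercube sampling and propagates the samples through the residual calculation. The marginal distributions and numbers here are hypothetical placeholders, not the Floral City values:

```python
import numpy as np
from scipy import stats
from pyDOE import lhs

n = 10000
u = lhs(4, samples=n)  # Latin Hypercube sample on the unit hypercube

# Transform to assumed marginals of the monthly budget terms (mm).
rain = stats.norm(120.0, 10.0).ppf(u[:, 0])   # rainfall
et = stats.norm(90.0, 20.0).ppf(u[:, 1])      # ET (dominant uncertainty)
canal = stats.norm(15.0, 3.0).ppf(u[:, 2])    # canal discharge
dstore = stats.norm(5.0, 5.0).ppf(u[:, 3])    # storage change

# SWGW exchange as the water-budget residual for each realization.
swgw = rain - et - canal - dstore
print("median, 5th, 95th percentiles:", np.percentile(swgw, [50, 5, 95]))
```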
Bias and robustness of uncertainty components estimates in transient climate projections
NASA Astrophysics Data System (ADS)
Hingray, Benoit; Blanchet, Juliette; Jean-Philippe, Vidal
2016-04-01
A critical issue in climate change studies is the estimation of uncertainties in projections along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty, and internal variability. Quantifying the different uncertainty sources actually raises several problems. For instance, and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are however biased. Another difficulty arises from the limited number of members classically available for most modeling chains. In this case, the climate response of a given chain and the effect of its internal variability may be difficult, if not impossible, to separate. The estimates of the scenario uncertainty, model uncertainty and internal variability components are thus likely not to be robust. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based only on the data available for the considered projection lead time, and a time-series-based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two sources: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with Monte Carlo simulations. For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias is always positive. It can be especially high with STANOVA. In the most critical configurations, when the number of members available for each modeling chain is small (< 3) and when internal variability explains most of the total uncertainty variance (75% or more), the overestimation is higher than 100% of the true model uncertainty variance. The bias can be considerably reduced with a time series ANOVA approach, owing to the multiple time steps accounted for. The longer the transient time period used for the analysis, the larger the reduction. When a quasi-ergodic ANOVA approach is applied to decadal data for the whole 1980-2100 period, the bias is reduced by a factor of 2.5 to 20 depending on the projection lead time. In all cases, the bias is likely not negligible for a large number of climate impact studies, resulting in a likely large overestimation of the contribution of model uncertainty to the total variance. For both approaches, the robustness of all uncertainty estimates is higher when more members are available, when internal variability is smaller and/or when the response-to-uncertainty ratio is higher. QEANOVA estimates are much more robust than STANOVA ones: QEANOVA simulated confidence intervals are roughly 3 to 5 times smaller than STANOVA ones. Except for STANOVA when fewer than three members are available, the robustness is rather high for total uncertainty and moderate for internal variability estimates.
For model uncertainty or response-to-uncertainty ratio estimates, the robustness is conversely low for QEANOVA to very low for STANOVA. In the most critical configurations (small number of members, large internal variability), large over- or underestimation of uncertainty components is thus very likely. To allow relevant uncertainty analyses and avoid misleading interpretations, estimates of uncertainty components should therefore be bias-corrected and should ideally come with estimates of their robustness. This work is part of the COMPLEX Project (European Collaborative Project FP7-ENV-2012 number: 308601; http://www.complex.ac.uk/). Hingray, B., Saïd, M., 2014. Partitioning internal variability and model uncertainty components in a multimodel multireplicate ensemble of climate projections. J. Climate. doi:10.1175/JCLI-D-13-00629.1. Hingray, B., Blanchet, J. (in revision) Unbiased estimators for uncertainty components in transient climate projections. J. Climate. Hingray, B., Blanchet, J., Vidal, J.P. (in revision) Robustness of uncertainty components estimates in climate projections. J. Climate.
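The core of the bias correction discussed above can be sketched in a few lines: with m chains of n members each, the naive between-chain variance of the chain means overestimates model uncertainty by the internal-variability variance divided by n, so an unbiased estimate subtracts that term. This conveys only the single-time idea, not the full QEANOVA estimator; all ensemble numbers are synthetic.

```python
import numpy as np

def model_uncertainty(ensemble):
    """Bias-corrected model-uncertainty variance from an
    (m chains x n members) array of climate changes."""
    m, n = ensemble.shape
    chain_means = ensemble.mean(axis=1)
    within = ensemble.var(axis=1, ddof=1).mean()  # internal variability
    naive = chain_means.var(ddof=1)               # biased upward
    return naive - within / n, naive, within

rng = np.random.default_rng(3)
responses = rng.normal(2.0, 0.5, size=10)            # 10 chain responses (K)
members = responses[:, None] + rng.normal(0.0, 1.0, size=(10, 3))
corrected, naive, within = model_uncertainty(members)
print(f"naive {naive:.2f}, corrected {corrected:.2f}, internal {within:.2f}")
```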
Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS
Brown, C. S.; Zhang, Hongbin
2016-05-24
Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
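The three sensitivity measures named above are easy to reproduce on sampled input/output data: Pearson on raw values, Spearman on ranks, and partial correlation by correlating the residuals after regressing the remaining inputs out of both the input of interest and the output. The data here are synthetic stand-ins for code outputs such as MDNBR:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 3))   # e.g. inlet temperature, power, flow rate
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=500)

for j, name in enumerate(["inlet_T", "power", "flow"]):
    pearson = stats.pearsonr(X[:, j], y)[0]
    spearman = stats.spearmanr(X[:, j], y)[0]
    # Partial correlation: regress the other inputs out of X_j and y,
    # then correlate the residuals.
    A = np.column_stack([np.delete(X, j, axis=1), np.ones(len(y))])
    rx = X[:, j] - A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    partial = stats.pearsonr(rx, ry)[0]
    print(f"{name}: pearson {pearson:.2f}, spearman {spearman:.2f}, "
          f"partial {partial:.2f}")
```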
NASA Astrophysics Data System (ADS)
Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Karion, A.; Mueller, K.; Gourdji, S.; Martin, C.; Whetstone, J. R.
2017-12-01
The National Institute of Standards and Technology (NIST) supports the North-East Corridor Baltimore Washington (NEC-B/W) project and the Indianapolis Flux Experiment (INFLUX), which aim to quantify sources of greenhouse gas (GHG) emissions as well as their uncertainties. These projects employ different flux estimation methods, including top-down inversion approaches. The traditional Bayesian inversion method estimates emission distributions by updating prior information using atmospheric GHG observations coupled to an atmospheric transport and dispersion model. The magnitude of the update depends upon the observed enhancement along with the assumed errors, such as those associated with the prior information and the atmospheric transport and dispersion model. These errors are specified within the inversion covariance matrices. The assumed structure and magnitude of the specified errors can have a large impact on the emission estimates from the inversion. The main objective of this work is to build a data-adaptive model for these covariance matrices. We construct a synthetic data experiment using a Kalman Filter inversion framework (Lopez et al., 2017) employing different configurations of the transport and dispersion model and an assumed prior. Unlike previous traditional Bayesian approaches, we estimate posterior emissions using regularized sample covariance matrices associated with prior errors to investigate whether the structure of the matrices helps to better recover our hypothetical true emissions. To incorporate transport model error, we use an ensemble of transport models combined with a space-time analytical covariance to construct a covariance that accounts for errors in space and time. A Kalman Filter is then run using these covariances along with Maximum Likelihood Estimates (MLE) of the involved parameters. Preliminary results indicate that specifying spatio-temporally varying errors in the error covariances can improve the flux estimates and uncertainties. We also demonstrate that differences between the modeled and observed meteorology can be used to predict uncertainties associated with atmospheric transport and dispersion modeling, which can help improve the skill of an inversion at urban scales.
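For orientation, the measurement-update step at the heart of such a filter is the standard Kalman update, with the state x holding gridded fluxes, y the observed GHG enhancements, H the transport operator, and P and R the prior-error and model-data-mismatch covariances whose structure the study tunes. This is a generic textbook update with made-up matrices, not the Lopez et al. implementation:

```python
import numpy as np

def kf_update(x_prior, P_prior, y, H, R):
    """One Kalman measurement update: returns posterior state/covariance."""
    S = H @ P_prior @ H.T + R                  # innovation covariance
    K = np.linalg.solve(S, H @ P_prior).T      # Kalman gain P H^T S^-1
    x_post = x_prior + K @ (y - H @ x_prior)
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post

rng = np.random.default_rng(11)
nx, ny = 20, 8                                 # fluxes, observations
H = 0.1 * rng.random((ny, nx))                 # toy transport operator
x_true = rng.gamma(2.0, 1.0, nx)
y = H @ x_true + rng.normal(0.0, 0.05, ny)
x_post, P_post = kf_update(np.ones(nx), 0.5 * np.eye(nx), y, H,
                           0.05**2 * np.eye(ny))
print(x_post[:5])
```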
Wavelet extractor: A Bayesian well-tie and wavelet extraction program
NASA Astrophysics Data System (ADS)
Gunning, James; Glinsky, Michael E.
2006-06-01
We introduce a new open-source toolkit for the well-tie or wavelet extraction problem of estimating seismic wavelets from seismic data, time-to-depth information, and well-log suites. The wavelet extraction model is formulated as a Bayesian inverse problem, and the software will simultaneously estimate wavelet coefficients, other parameters associated with uncertainty in the time-to-depth mapping, positioning errors in the seismic imaging, and useful amplitude-variation-with-offset (AVO) related parameters in multi-stack extractions. It is capable of multi-well, multi-stack extractions, and uses continuous seismic data-cube interpolation to cope with the problem of arbitrary well paths. Velocity constraints in the form of checkshot data, interpreted markers, and sonic logs are integrated in a natural way. The Bayesian formulation allows computation of full posterior uncertainties of the model parameters, and the important problem of the uncertain wavelet span is addressed using a multi-model posterior developed from Bayesian model selection theory. The wavelet extraction tool is distributed as part of the Delivery seismic inversion toolkit. A simple log and seismic viewing tool is included in the distribution. The code is written in Java, and thus platform independent, but the Seismic Unix (SU) data model makes the inversion particularly suited to Unix/Linux environments. It is a natural companion piece of software to Delivery, having the capacity to produce maximum likelihood wavelet and noise estimates, but will also be of significant utility to practitioners wanting to produce wavelet estimates for other inversion codes or purposes. The generation of full parameter uncertainties is a crucial function for workers wishing to investigate questions of wavelet stability before proceeding to more advanced inversion studies.
NASA Astrophysics Data System (ADS)
Ren, Luchuan
2015-04-01
It is obvious that the uncertainties of the maximum tsunami wave heights in offshore areas stem partly from uncertainties in the potential seismic tsunami source parameters. A global sensitivity analysis method of the maximum tsunami wave heights to the potential seismic source parameters is put forward in this paper. The tsunami wave heights are calculated by COMCOT (the Cornell Multi-grid Coupled Tsunami Model), on the assumption that an earthquake with magnitude MW8.0 occurred at the northern fault segment along the Manila Trench and triggered a tsunami in the South China Sea. We select the simulated results of maximum tsunami wave heights at specific sites in the offshore area to verify the validity of the method proposed in this paper. For ranking the importance order of the uncertainties of the potential seismic source parameters (the earthquake's magnitude, the focal depth, the strike angle, dip angle, slip angle, etc.) in generating uncertainties of the maximum tsunami wave heights, we chose the Morris method to analyze the sensitivity of the maximum tsunami wave heights to the aforementioned parameters, and give several qualitative descriptions of their nonlinear or linear effects on the maximum tsunami wave heights. We then quantitatively analyze the sensitivity of the maximum tsunami wave heights to these parameters and the interaction effects among these parameters by means of the extended FAST method. The results show that the maximum tsunami wave heights are very sensitive to the earthquake magnitude, followed successively by the epicenter location, the strike angle and the dip angle; the interaction effects among the sensitive parameters are pronounced at specific sites in the offshore area; and there exist differences in the importance order in generating uncertainties of the maximum tsunami wave heights for the same group of parameters at different specific sites in the offshore area. These results are helpful for a deeper understanding of the relationship between the tsunami wave heights and the seismic tsunami source parameters. Keywords: Global sensitivity analysis; Tsunami wave height; Potential seismic tsunami source parameter; Morris method; Extended FAST method
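As a sketch of the screening step, the Morris method is available in the SALib Python package (a stand-in here; the study specifies the Morris and extended FAST methods, not a particular implementation). The parameter bounds and the stand-in model below are invented for illustration:

```python
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze import morris

problem = {
    "num_vars": 5,
    "names": ["magnitude", "depth", "strike", "dip", "slip"],
    "bounds": [[7.5, 8.5], [5.0, 40.0], [0.0, 360.0],
               [0.0, 90.0], [0.0, 180.0]],  # hypothetical ranges
}

X = morris_sample(problem, N=100)

def toy_model(row):
    # Stand-in for a COMCOT run: max wave height at one offshore site.
    mw, depth, strike, dip, slip = row
    return (10 ** (mw - 7.5) / (1.0 + 0.02 * depth)
            * (1.0 + 0.3 * np.sin(np.radians(dip))))

Y = np.apply_along_axis(toy_model, 1, X)
Si = morris.analyze(problem, X, Y)
print(dict(zip(problem["names"], np.round(Si["mu_star"], 3))))
```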
Maximum warming occurs about one decade after a carbon dioxide emission
NASA Astrophysics Data System (ADS)
Ricke, Katharine L.; Caldeira, Ken
2014-12-01
It is known that carbon dioxide emissions cause the Earth to warm, but no previous study has focused on examining how long it takes to reach maximum warming following a particular CO2 emission. Using conjoined results of carbon-cycle and physical-climate model intercomparison projects (Taylor et al 2012, Joos et al 2013), we find the median time between an emission and maximum warming is 10.1 years, with a 90% probability range of 6.6-30.7 years. We evaluate uncertainties in timing and amount of warming, partitioning them into three contributing factors: carbon cycle, climate sensitivity and ocean thermal inertia. If uncertainty in any one factor is reduced to zero without reducing uncertainty in the other factors, the majority of overall uncertainty remains. Thus, narrowing uncertainty in century-scale warming depends on narrowing uncertainty in all contributing factors. Our results indicate that benefit from avoided climate damage from avoided CO2 emissions will be manifested within the lifetimes of people who acted to avoid that emission. While such avoidance could be expected to benefit future generations, there is potential for emissions avoidance to provide substantial benefit to current generations.
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris
2018-03-01
Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
NASA Astrophysics Data System (ADS)
Wang, Yibing; Petit, Steven F.; Vásquez Osorio, Eliana; Gupta, Vikas; Méndez Romero, Alejandra; Heijmen, Ben
2018-06-01
In the abdomen, it is challenging to assess the accuracy of deformable image registration (DIR) for individual patients, due to the lack of clear anatomical landmarks, which can hamper clinical applications that require high-accuracy DIR, such as adaptive radiotherapy. In this study, we propose and evaluate a methodology for estimating the impact of uncertainties in DIR on calculated accumulated dose in the upper abdomen, in order to aid decision making in adaptive treatment approaches. Sixteen liver metastasis patients treated with SBRT were evaluated. Each patient had one planning and three daily treatment CT-scans. Each daily CT scan was deformably registered 132 times to the planning CT-scan, using a wide range of parameter settings for the registration algorithm. A subset of ‘realistic’ registrations was then objectively selected based on distances between mapped and target contours. The underlying 3D transformations of these registrations were used to assess the corresponding uncertainties in voxel positions and delivered dose, with a focus on accumulated maximum doses in the hollow OARs, i.e. esophagus, stomach, and duodenum. The number of realistic registrations varied from 5 to 109, depending on the patient, emphasizing the need for individualized registration parameters. Considering the realistic registrations for all patients, the 99th percentile of the voxel position uncertainties was 5.6 ± 3.3 mm. This translated into a variation (difference between 1st and 99th percentile) in accumulated Dmax in hollow OARs of up to 3.3 Gy. For one patient a violation of the accumulated stomach dose outside the uncertainty band was detected. The observed variation in accumulated doses in the OARs related to registration uncertainty emphasizes the need to investigate the impact of this uncertainty for any DIR algorithm prior to clinical use for dose accumulation. The proposed method for assessing, on an individual patient basis, the impact of uncertainties in DIR on accumulated dose is in principle applicable to all DIR algorithms that allow variation in registration parameters.
Quantifying uncertainty in NDSHA estimates due to earthquake catalogue
NASA Astrophysics Data System (ADS)
Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano
2014-05-01
The procedure for neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPE), which may be unable to account for the complexity of the product between the seismic source tensor and the medium Green function, and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not statistically treated as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values of each model; instead, the treatment of uncertainties is performed by sensitivity analyses for key modelling parameters. Fixing the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate of the ground motion error is therefore the factor of 2 intrinsic to the MCS scale. We tested this hypothesis by analysing the uncertainty in ground motion maps due to random catalogue errors in magnitude and localization.
NASA Astrophysics Data System (ADS)
Maiti, Saumen; Tiwari, Ram Krishna
2010-10-01
A new probabilistic approach based on the concept of Bayesian neural network (BNN) learning theory is proposed for decoding litho-facies boundaries from well-log data. We show how a multi-layer perceptron neural network model can be employed in a Bayesian framework to classify changes in litho-log successions. The method is then applied to the German Continental Deep Drilling Program (KTB) well-log data for classification and uncertainty estimation of the litho-facies boundaries. In this framework, the a posteriori distribution of network parameters is estimated via the principles of Bayesian probability theory, and an objective function is minimized following the scaled conjugate gradient optimization scheme. For the model development, we impose a suitable criterion, which provides probabilistic information by emulating different combinations of synthetic data. Uncertainty in the relationship between the data and the model space is appropriately taken care of by assuming a Gaussian a priori distribution of network parameters (e.g., synaptic weights and biases). Prior to applying the new method to the real KTB data, we tested the proposed method on synthetic examples to examine the sensitivity of the neural network hyperparameters in prediction. Within this framework, we examined the stability and efficiency of this new probabilistic approach using different kinds of synthetic data with different levels of correlated noise. Our data analysis suggests that the designed network topology based on the Bayesian paradigm is stable up to nearly 40% correlated noise; however, adding more noise (~50% or more) degrades the results. We performed uncertainty analyses on training, validation, and test data sets with and without intrinsic noise by making the Gaussian approximation of the a posteriori distribution about the peak model. We present a standard deviation error-map at the network output corresponding to the three types of litho-facies present over the entire litho-section of the KTB. The comparisons of the maximum a posteriori geological sections constructed here, based on the maximum a posteriori probability distribution, with the available geological information and the existing geophysical findings suggest that the BNN results reveal some additional finer details in the KTB borehole data at certain depths, which appear to be of some geological significance. We also demonstrate that the proposed BNN approach is superior to the conventional artificial neural network in terms of both avoiding "over-fitting" and aiding uncertainty estimation, which are vital for meaningful interpretation of geophysical records. Our analyses demonstrate that the BNN-based approach renders a robust means for the classification of complex changes in the litho-facies successions and thus could provide a useful guide for understanding the crustal inhomogeneity and the structural discontinuity in many other tectonically complex regions.
NASA Astrophysics Data System (ADS)
Thompson, R. S.; Anderson, K.; Pelltier, R.; Strickland, L. E.; Shafer, S. L.; Bartlein, P. J.
2013-12-01
Fossil plant remains preserved in a variety of geologic settings provide direct evidence of where individual species lived in the past, and there are long-established methods for paleoclimatic reconstructions based on comparisons between modern and past geographic ranges of plant species. In principle, these methods use relatively straightforward procedures that frequently result in what appear to be very precise estimates of past temperature and moisture conditions. The reconstructed estimates can be mapped for specific time slices for synoptic-scale reconstructions for data-model comparisons. Although paleobotanical data can provide apparently precise estimates of past climatic conditions, it is difficult to gauge the associated uncertainties. The estimates may be affected by the choice of modern calibration data, the reconstruction methods employed, and whether the climatic variable under consideration is an important determinant of the distributions of the species being considered. For time-slice reconstructions, there are also issues involving the adequacy of the spatial coverage of the fossil data and the degree of variability through time. To examine some of these issues, we estimated annual precipitation and summer and winter temperatures for the Last Glacial Maximum (LGM, 21,000 ± 1000 yr BP), Middle Holocene (MH, 6000 ± 500 yr BP), and Latest Holocene (LH, the last 500 yrs), based on the application of four quantitative approaches to paleobotanical assemblages preserved in packrat middens in the American Southwest. Our results indicate that historic variability and difficulties in interpolating climatic values to fossil sites may impose ranges of uncertainties of more than ±1°C for temperature and ±50 mm for annual precipitation. Climatic estimates based on modern midden assemblages generally fall within these ranges, although there may be biases that differ regionally. Samples of similar age and location provide similar climatic estimates, and the four approaches usually result in anomalies of the same sign, but with differing amplitudes. There is considerable variability among the anomalies for samples within each time slice, and different time slices have different geographic coverages of samples. The reconstructed temperature anomalies are similar between the MH and LH time slices, and generally fall within the uncertainties related to the modern climatic data. LGM anomalies were significantly colder, and for many samples exceeded -5°C in both winter and summer. There are what appear to be significant MH annual precipitation anomalies to the south (dry after 6.2 ka) and to the northwest (wet before 6.2 ka), but it may be misleading to compare these, given the differences in age. Positive annual precipitation anomalies for the LGM are more than 100 mm in the northwest, and smaller in the northeast and south.
Estimation of uncertainty for contour method residual stress measurements
Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; ...
2014-12-03
This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources, including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces, are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).
Hanley, O; Gutiérrez-Villanueva, J L; Currivan, L; Pollard, D
2008-10-01
The RPII radon (Rn) laboratory holds accreditation to the International Standard ISO/IEC 17025. A requirement of this standard is an estimate of the uncertainty of measurement. This work shows two approaches to estimating the uncertainty. The bottom-up approach involved identifying the components found to contribute to the uncertainty. Estimates were made for each of these components, which were combined to give a combined uncertainty of 13.5% at a Rn concentration of approximately 2500 Bq m⁻³ at the 68% confidence level. By applying a coverage factor of k=2, the expanded uncertainty is ±27% at the 95% confidence level. The top-down approach used information previously gathered from intercomparison exercises to estimate the uncertainty. This investigation found an expanded uncertainty of ±22% at approximately the 95% confidence level. This is good agreement between two such independent estimates.
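The bottom-up combination above follows the usual rule: independent relative standard uncertainties add in quadrature, and the result is multiplied by a coverage factor k=2 for an approximately 95% confidence level. The component values below are hypothetical, merely chosen to land near the quoted 13.5%:

```python
import math

# Hypothetical relative standard uncertainties (%) of identified components.
components = {
    "calibration factor": 8.0,
    "counting statistics": 6.0,
    "detector background": 4.0,
    "exposure timing": 3.0,
    "environmental conditions": 7.0,
}

u_c = math.sqrt(sum(u**2 for u in components.values()))  # combined (68% CL)
U = 2.0 * u_c                                            # expanded, k = 2
print(f"combined: {u_c:.1f}%, expanded (k=2): {U:.1f}%")
```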
Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoneking, M.R.; Den Hartog, D.J.
1996-06-01
The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties follow a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than approximately 20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
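The same idea can be sketched generically: instead of minimizing χ², minimize the negative Poisson log-likelihood (the Cash statistic, up to a constant) of the counts given the fit function. The model, parameters, and data below are simulated stand-ins, not the diagnostic analysis itself:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = np.linspace(-5, 5, 60)

def model(theta, x):
    # Gaussian line on a flat background.
    amp, mu, sig, bkg = theta
    return bkg + amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)

counts = rng.poisson(model([8.0, 0.5, 1.2, 1.0], x))  # low-count data

def neg_loglike(theta):
    lam = model(theta, x)
    if np.any(lam <= 0):
        return np.inf
    # Poisson: -log L = sum(lam - k * log(lam)) + const
    return np.sum(lam - counts * np.log(lam))

fit = minimize(neg_loglike, x0=[5.0, 0.0, 1.0, 0.5], method="Nelder-Mead")
print(fit.x)
```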
NASA Astrophysics Data System (ADS)
Bruschewski, Martin; Freudenhammer, Daniel; Buchenberg, Waltraud B.; Schiffer, Heinz-Peter; Grundmann, Sven
2016-05-01
Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75 % is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luis, Alfredo
We show within a very simple framework that different measures of fluctuations lead to uncertainty relations resulting in contradictory conclusions. More specifically we focus on Tsallis and Renyi entropic uncertainty relations and we get that the minimum joint uncertainty states for some fluctuation measures are the maximum joint uncertainty states of other fluctuation measures, and vice versa.
Flood Frequency Curves - Use of information on the likelihood of extreme floods
NASA Astrophysics Data System (ADS)
Faber, B.
2011-12-01
Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure makes the assumptions that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How does the watershed or climate changing over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoid these assumptions vary from estimating trend and shift and removing them from early data (thus forming a homogeneous data set), to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically-based analysis produces "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
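The method-of-moments fit described above reduces to computing the mean, standard deviation, and skew of the log-transformed annual maxima and evaluating a Pearson Type III quantile. A minimal sketch using scipy's pearson3 distribution on synthetic peak flows (this omits the weighted-skew and low-outlier adjustments of actual U.S. guidelines):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
peaks = np.exp(rng.normal(5.0, 0.6, size=60))  # synthetic annual peaks (m3/s)

logq = np.log10(peaks)
mean, sd = logq.mean(), logq.std(ddof=1)
skew = stats.skew(logq, bias=False)

# Log-Pearson Type III: Pearson III fitted to the log10 flows.
dist = stats.pearson3(skew, loc=mean, scale=sd)
q100 = 10 ** dist.ppf(0.99)   # 100-year flood (1% annual exceedance)
print(f"100-year peak discharge estimate: {q100:.0f} m3/s")
```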
Dynamic Method for Identifying Collected Sample Mass
NASA Technical Reports Server (NTRS)
Carson, John
2008-01-01
G-Sample is designed for sample collection missions to identify the presence and quantity of sample material gathered by spacecraft equipped with end effectors. The software method uses a maximum-likelihood estimator to identify the collected sample's mass based on onboard force-sensor measurements, thruster firings, and a dynamics model of the spacecraft. This makes sample mass identification a computation rather than a process requiring additional hardware. Simulation examples of G-Sample are provided for spacecraft model configurations with a sample collection device mounted on the end of an extended boom. In the absence of thrust knowledge errors, the results indicate that G-Sample can identify the amount of collected sample mass to within 10 grams (with 95-percent confidence) by using a force sensor with a noise and quantization floor of 50 micrometers. These results hold even in the presence of realistic parametric uncertainty in actual spacecraft inertia, center-of-mass offset, and first flexibility modes. Thrust profile knowledge is shown to be a dominant sensitivity for G-Sample, entering in a nearly one-to-one relationship with the final mass estimation error. This means thrust profiles should be well characterized with onboard accelerometers prior to sample collection. An overall sample-mass estimation error budget has been developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
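For a rigid spacecraft with perfectly known thrust-induced acceleration and Gaussian sensor noise, the maximum-likelihood mass estimate reduces to a least-squares fit of F = m·a. The sketch below is a simplified stand-in for G-Sample; it ignores flexible modes, center-of-mass offset, and thrust knowledge error, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the G-Sample problem: during known thruster
# firings, the end-effector force sensor reads F = m * a plus sensor noise.
m_true = 0.150                               # collected sample mass (kg)
t = np.linspace(0.0, 5.0, 500)
accel = 0.02 * np.sin(2 * np.pi * 0.5 * t)   # known acceleration (m/s^2)
sigma_f = 5e-4                               # sensor noise floor (N), assumed
force = m_true * accel + rng.normal(0.0, sigma_f, t.size)

# With Gaussian noise, the maximum-likelihood mass estimate is the
# least-squares solution of F = m * a.
m_hat = (accel @ force) / (accel @ accel)
sigma_m = sigma_f / np.sqrt(accel @ accel)

print(f"estimated mass: {1e3 * m_hat:.1f} g "
      f"(95% CI half-width: {1e3 * 1.96 * sigma_m:.1f} g)")
```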
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vostrotin, Vadim; Birchall, Alan; Zhdanov, Alexey
The distribution of calculated internal doses was determined for 8043 Mayak Production Association (Mayak PA) workers according to the epidemiological cohorts and groups of raw data used, as well as the type of industrial compounds of inhaled aerosols. Statistical characteristics of point estimates of accumulated doses to 17 different tissues and organs, and the associated uncertainty ranges, were calculated. Under the MWDS-2013 dosimetry system, the mean accumulated lung dose was 185 ± 585 mGy, with a median value of 31 mGy and a maximum of 8980 mGy. The ranges of relative standard uncertainty were: from 40 to 2200% for accumulated lung dose, from 25-90% to 2600-3000% for accumulated dose to different regions of the respiratory tract, and from 13-18% to 2300-2500% for systemic organs and tissues. The distribution of Mayak PA workers' accumulated internal plutonium lung doses is shown to be close to lognormal, and that of accumulated internal plutonium doses to systemic organs close to log-triangular. The dependence of the uncertainty of accumulated absorbed lung and liver doses on the dose estimate itself is also shown. The accumulated absorbed doses to lung, alveolar-interstitial region, liver, bone surface cells and red bone marrow calculated with MWDS-2013 and MWDS-2008 have been compared. In general, accumulated lung doses increased by a factor of 1.8 in median value, while accumulated doses to systemic organs decreased by a factor of 1.3-1.4 in median value. For cases with identical initial data, accumulated lung doses increased by a factor of 2.1 in median value, while accumulated doses to systemic organs decreased by 8-13% in median value. For cases with both identical initial data and all plutonium activity in urine measurements above the decision threshold, accumulated lung doses increased by a factor of 2.8 in median value, while accumulated doses to systemic organs increased by 6-12% in median value.
NASA Astrophysics Data System (ADS)
Roy, Mathieu
Natural inflow is essential information for a water resource manager. Hydro-Quebec uses historical natural inflow data to produce a daily prediction of the amount of water that will be received in each of its hydroelectric reservoirs. This prediction allows the establishment of reservoir operating rules that optimize hydropower without compromising the safety of hydraulic structures. To obtain an accurate prediction, the system's input needs to be very well known. However, it can be very difficult to accurately measure the natural supply of a set of regulated reservoirs. Therefore, Hydro-Quebec uses an indirect method of calculation, which consists of evaluating the reservoir's inflow using the water balance equation. Yet this equation is not immune to errors and uncertainties. Water level measurement is an important input to the water balance equation, but several sources of uncertainty, including the effects of wind and hydraulic maneuvers, can affect the readings of limnimetric gages. Fluctuations in water level caused by these effects carry over into the water balance equation. Consequently, the natural inflow signal may become noisy and affected by external errors. The main objective of this report is to evaluate the uncertainty caused by the effects of wind and hydraulic maneuvers on the water balance equation. To this end, hydrodynamic models of the Outardes 4 and Gouin reservoirs were prepared. According to the literature review, wind effects can be studied either by an unsteady state approach or by a steady state approach. Unsteady state simulations of wind effects on the Gouin and Outardes 4 reservoirs were performed by hydrodynamic modelling. Consideration of an unsteady state implies that the wind conditions vary throughout the simulation. This allows the temporal effect of wind duration to be taken into account, as well as inertial forces such as seiches, which are caused by wind conditions that can vary abruptly. Once the models were calibrated, unsteady state simulations were conducted in a closed system where unsteady observed winds were the only forcing included. From the simulated water levels obtained at each gage, the water balance equation was calculated to determine the daily uncertainty of natural inflow in unsteady conditions. At Outardes 4, a maximum uncertainty of 20 m^3/s was estimated during the month of October 2010. At the Gouin reservoir, a maximum uncertainty of 340 m^3/s was estimated during the month of July 2012. Steady state modelling is another approach to evaluating wind effect uncertainty in the water balance equation. This approach assumes that the water level is instantly tilted under the influence of wind; hence, the temporal effect of wind duration and seiches cannot be taken into account. However, the advantage of steady state modelling is that it is better suited than unsteady state modelling to evaluating wind uncertainty in real time. Two steady state modelling methods were tested to estimate water level differences between gages as a function of wind characteristics: hydrodynamic modelling and non-parametric regression. It was found that non-parametric models are more efficient when it comes to estimating water level differences between gages.
However, the use of the hydrodynamic model demonstrated that, to study wind uncertainty in the water balance equation, it is preferable to assess wind responses individually at each gage instead of using water level differences. Finally, a method for combining water level gage observations has been developed, which reduces the impacts of wind and hydraulic maneuvers on the water balance equation. This method, applicable in real time, consists of assigning a variable weight to each limnimetric gage. In other words, the weights adjust automatically in order to minimize the steady-state modeled wind responses. The estimation of hydraulic maneuvers has also been included in the gage weight adjustment. It has been found that this new combination method allows the correction of a noisy natural inflow signal under wind and hydraulic maneuver effects. However, some fluctuations persist, which reflects the complexity of correcting these effects in a real-time daily water balance equation. (Abstract shortened by UMI.)
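The inflow-from-water-balance calculation described above is compact enough to sketch. The illustration below uses an assumed reservoir area, release schedule, and 3 cm gage noise (all hypothetical values, not Hydro-Quebec's data) to show how level errors are amplified by the factor A/Δt when differenced in the daily water balance.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal daily water-balance sketch: natural inflow inferred from storage
# change plus controlled releases. Noisy gage levels propagate directly
# into the inferred inflow with gain A / dt.
area = 50e6                 # reservoir surface area (m^2), assumed constant
dt = 86400.0                # one day (s)
days = 30

true_inflow = 400 + 50 * np.sin(np.linspace(0, 2 * np.pi, days))  # m^3/s
release = np.full(days, 380.0)                                    # m^3/s

# Integrate the true levels, then corrupt the gage readings with a
# wind-like set-up/set-down error of a few centimetres.
level = np.zeros(days + 1)
for k in range(days):
    level[k + 1] = level[k] + (true_inflow[k] - release[k]) * dt / area
gage = level + rng.normal(0.0, 0.03, days + 1)    # 3 cm reading error

# Water balance: Q_nat = A * dh/dt + Q_out
inferred = area * np.diff(gage) / dt + release
print("max inflow error from 3 cm gage noise: "
      f"{np.max(np.abs(inferred - true_inflow)):.0f} m^3/s")
```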
Regional Frequency and Uncertainty Analysis of Extreme Precipitation in Bangladesh
NASA Astrophysics Data System (ADS)
Mortuza, M. R.; Demissie, Y.; Li, H. Y.
2014-12-01
Extreme precipitation events of increased frequency, especially those with multiday durations, are responsible for recent urban floods and associated significant losses of lives and infrastructure in Bangladesh. Reliable and routinely updated estimation of the frequency of occurrence of such extreme precipitation events is thus important for developing up-to-date hydraulic structures and stormwater drainage systems that can effectively minimize future risk from similar events. In this study, we have updated the intensity-duration-frequency (IDF) curves for Bangladesh using daily precipitation data from 1961 to 2010 and quantified the associated uncertainties. Regional frequency analysis based on L-moments is applied to the 1-day, 2-day and 5-day annual maximum precipitation series because of its advantages over at-site estimation. The regional frequency approach pools the information from climatologically similar sites to make reliable estimates of quantiles, provided that the pooling group is homogeneous and of reasonable size. We used the region of influence (ROI) approach, along with a homogeneity measure based on L-moments, to identify the homogeneous pooling groups for each site. Five 3-parameter distributions (i.e., Generalized Logistic, Generalized Extreme Value, Generalized Normal, Pearson Type III, and Generalized Pareto) are used for a thorough selection of appropriate models that fit the sample data. Uncertainties related to the selection of the distributions and the historical data are quantified using the Bayesian Model Averaging and Balanced Bootstrap approaches, respectively. The results from this study can be used to update the current design and management of hydraulic structures as well as to explore spatio-temporal variations of extreme precipitation and associated risk.
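The L-moment statistics underlying this kind of regional analysis can be computed directly from probability-weighted moments. Below is a minimal sketch with a synthetic annual-maximum sample; the Hosking-Wallis homogeneity and goodness-of-fit steps built on these statistics are omitted.

```python
import numpy as np

def l_moments(x):
    """First four sample L-moments via unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2    # mean, L-scale, L-skewness, L-kurtosis

# Hypothetical 1-day annual maximum precipitation sample (mm).
rng = np.random.default_rng(5)
annmax = rng.gumbel(loc=80.0, scale=25.0, size=50)
l1, l2, t3, t4 = l_moments(annmax)
print(f"L-CV={l2 / l1:.3f}  L-skewness={t3:.3f}  L-kurtosis={t4:.3f}")
```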
Predicting the Earth encounters of (99942) Apophis
NASA Technical Reports Server (NTRS)
Giorgini, Jon D.; Benner, Lance A. M.; Ostro, Steven J.; Nolan, Michael C.; Busch, Michael W.
2007-01-01
Arecibo delay-Doppler measurements of (99942) Apophis in 2005 and 2006 resulted in a five-standard-deviation trajectory correction to the optically predicted close approach distance to Earth in 2029. The radar measurements reduced the volume of the statistical uncertainty region entering the encounter to 7.3% of the pre-radar solution, but increased the trajectory uncertainty growth rate across the encounter by 800% due to the closer predicted approach to the Earth. A small estimated Earth impact probability remained for 2036. With standard-deviation plane-of-sky position uncertainties for 2007-2010 already less than 0.2 arcsec, the best near-term ground-based optical astrometry can only weakly affect the trajectory estimate. While the potential for impact in 2036 will likely be excluded in 2013 (if not 2011) using ground-based optical measurements, approximations within the Standard Dynamical Model (SDM) used to estimate and predict the trajectory from the current era are sufficient to obscure the difference between a predicted impact and a miss in 2036 by altering the dynamics leading into the 2029 encounter. Normal impact probability assessments based on the SDM become problematic without knowledge of the object's physical properties; impact could be excluded while the actual dynamics still permit it. Calibrated position uncertainty intervals are developed to compensate for this by characterizing the minimum and maximum effect of physical parameters on the trajectory. Uncertainty in accelerations related to solar radiation can cause between 82 and 4720 Earth radii of trajectory change relative to the SDM by 2036. If an actionable hazard exists, alteration by 2-10% of Apophis' total absorption of solar radiation in 2018 could be sufficient to produce a six-standard-deviation trajectory change by 2036 given physical characterization; even a 0.5% change could produce a trajectory shift of one Earth radius by 2036 for all possible spin poles and likely masses. Planetary ephemeris uncertainties are the next greatest source of systematic error, causing up to 23 Earth radii of uncertainty. The SDM's Earth point-mass assumption introduces an additional 2.9 Earth radii of prediction error by 2036. Unmodeled asteroid perturbations produce as much as 2.3 Earth radii of error. We find no future small-body encounters likely to yield an Apophis mass determination prior to 2029. However, asteroid (144898) 2004 VD17, itself having a statistical chance of Earth impact in 2102, will probably encounter Apophis at 6.7 lunar distances in 2034, with their uncertainty regions coming as close as 1.6 lunar distances near the centers of both SDM probability distributions.
Anisotropy in the Microwave Sky at 90 GHz: Results from Python III
NASA Astrophysics Data System (ADS)
Platt, S. R.; Kovac, J.; Dragovan, M.; Peterson, J. B.; Ruhl, J. E.
1997-01-01
The third year of observations with the Python microwave background experiment densely samples a 5.5° × 22° region of sky that includes the fields measured during the first 2 years of observations with this instrument. The sky is sampled in two multipole bands centered at $l \sim 87$ and $l \sim 170$. These two data sets are analyzed to place limits on fluctuations in the microwave sky at 90 GHz. Interpreting the observed fluctuations as anisotropy in the cosmic microwave background, we find flat-band power estimates of $\delta T_l \equiv [l(l+1)C_l/(2\pi)]^{1/2} = 60^{+15}_{-13}\,\mu\mathrm{K}$ at $l = 87^{+18}_{-38}$ and $\delta T_l = 66^{+17}_{-16}\,\mu\mathrm{K}$ at $l = 170^{+69}_{-50}$. Combining the entire 3 year set of Python observations, we find that the angular power spectrum of fluctuations has a spectral index $m = 0.16^{+0.20}_{-0.18}$ and an amplitude $\delta T_{l_e} = 63^{+15}_{-14}\,\mu\mathrm{K}$ at $l_e = 139^{+99}_{-34}$ for the functional form $\delta T_l = \delta T_{l_e}(l/l_e)^m$. The stated uncertainties in the amplitudes and spectral index represent 1$\sigma$ confidence intervals in the likelihood, added in quadrature with a 20% calibration uncertainty and an estimate of the effects of a ±0.05° uncertainty in the measured beamwidths. The limits on $l$ are determined from the half-maximum points of the window functions.
Estimating uncertainties in complex joint inverse problems
NASA Astrophysics Data System (ADS)
Afonso, Juan Carlos
2016-04-01
Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should always be conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes, by G. Box and M. Gunzburger respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related to the forward and statistical models, I will also address other uncertainties associated with data and uncertainty propagation.
Estimating the Maximum Magnitude of Induced Earthquakes With Dynamic Rupture Simulations
NASA Astrophysics Data System (ADS)
Gilmour, E.; Daub, E. G.
2017-12-01
Seismicity in Oklahoma has been increasing sharply as a result of wastewater injection. The earthquakes, thought to be induced by changes in pore pressure due to fluid injection, nucleate along existing faults. Induced earthquakes currently dominate central and eastern United States seismicity (Keranen et al. 2016). Induced earthquakes have only been occurring in the central US for a short time; therefore, too few have been observed in this region to know their maximum magnitude. This lack of knowledge means that large uncertainties exist in the seismic hazard for the central United States. While induced earthquakes follow the Gutenberg-Richter relation (van der Elst et al. 2016), it is unclear whether there are limits to their magnitudes. An estimate of the maximum magnitude of induced earthquakes is crucial for understanding their impact on seismic hazard. While other estimates of the maximum magnitude exist, those estimates are observational or statistical and cannot take into account the possibility of larger events that have not yet been observed. Here, we take a physical approach to studying the maximum magnitude, based on dynamic rupture simulations. We run a suite of two-dimensional rupture simulations to physically determine how ruptures propagate. The simulations use the known parameters of principal stress orientation and rupture location. We vary the unknown parameters of the rupture simulations to obtain a large number of simulation results reflecting different possible parameter sets, and use these results to train a neural network to emulate the rupture simulations. Then, using a Markov Chain Monte Carlo method to explore different combinations of parameters, the trained neural network is used to create synthetic magnitude-frequency distributions for comparison with the real earthquake catalog. This method allows us to find sets of parameters that are consistent with the earthquakes observed in Oklahoma and to determine which parameters affect rupture propagation. Our results show that the stress orientation and magnitude, pore pressure, and friction properties combine to determine the final magnitude of a simulated event.
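The catalog-comparison step of such a scheme can be illustrated without the neural-network surrogate. The stand-in sketch below runs a Metropolis random walk over a b-value and a maximum magnitude, scoring each proposal with a truncated Gutenberg-Richter likelihood against a synthetic catalog; all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical induced-seismicity catalog (a stand-in for Oklahoma data):
# magnitudes above Mmin = 3 with a Gutenberg-Richter b-value near 1.
catalog = 3.0 + rng.exponential(1.0 / np.log(10.0), 400)

def log_likelihood(theta, mags, m_min=3.0):
    """Truncated Gutenberg-Richter log-likelihood for theta = (b, m_max).
    Stands in for comparing synthetic magnitude-frequency distributions
    (analytic here; surrogate-generated in the study) with the catalog."""
    b, m_max = theta
    if b <= 0.0 or m_max <= mags.max():
        return -np.inf
    beta = b * np.log(10.0)
    z = 1.0 - np.exp(-beta * (m_max - m_min))    # truncation normalizer
    return np.sum(np.log(beta) - beta * (mags - m_min) - np.log(z))

# Metropolis random walk over (b, m_max).
theta = np.array([1.0, catalog.max() + 0.5])
lp = log_likelihood(theta, catalog)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.02, 0.10])
    lp_prop = log_likelihood(prop, catalog)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])

print(f"posterior median m_max: {np.median(chain[:, 1]):.2f}")
print(f"95th-percentile m_max:  {np.quantile(chain[:, 1], 0.95):.2f}")
```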
Assessing concentration uncertainty estimates from passive microwave sea ice products
NASA Astrophysics Data System (ADS)
Meier, W.; Brucker, L.; Miller, J. A.
2017-12-01
Sea ice concentration is an essential climate variable, and passive-microwave-derived estimates of concentration form one of the longest satellite-derived climate records. However, until recently uncertainty estimates were not provided. Numerous validation studies provided insight into general error characteristics, but these studies found that concentration error varied greatly depending on sea ice conditions. Thus, an uncertainty estimate for each observation is desired, particularly for initialization, assimilation, and validation of models. Here we investigate three sea ice products that include an uncertainty for each concentration estimate: the NASA Team 2 algorithm product, the EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI-SAF) product, and the NOAA/NSIDC Climate Data Record (CDR) product. Each product estimates uncertainty with a completely different approach. The NASA Team 2 product derives uncertainty internally from the algorithm method itself. The OSI-SAF product uses atmospheric reanalysis fields and a radiative transfer model. The CDR uses spatial variability from two algorithms. Each approach has merits and limitations. Here we evaluate the uncertainty estimates by comparing the passive microwave concentration products with fields derived from the NOAA VIIRS sensor. The results show that the relationship between the product uncertainty estimates and the concentration error (relative to VIIRS) is complex. This may be due to the sea ice conditions, the uncertainty methods, and the spatial and temporal variability of the passive microwave and VIIRS products.
NASA Astrophysics Data System (ADS)
Ziemann, Astrid; Starke, Manuela; Schütze, Claudia
2017-11-01
An imbalance of surface energy fluxes measured using the eddy covariance (EC) method is observed in global measurement networks even though all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closure gap that consequently should also affect the CO2 balances. There is thus a crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper gives an overview of the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m^-2 s^-1 estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed using A-TOM are systematically quantified. The maximum uncertainty for CO2 concentration, accounting for environmental parameters, instrumental characteristics, and the retrieval procedure, was estimated at approximately 30% for a single measurement. Instantaneous wind components can be derived with a maximum uncertainty of 0.3 m s^-1 depending on sampling, signal analysis, and environmental influences on sound propagation. Averaging over a period of 30 min, the standard error of the mean values can be decreased by a factor of at least 0.5 for OP-FTIR and 0.1 for A-TOM, depending on the required spatial resolution. The validation presented here focuses on the ability of the joint application of the two independent, nonintrusive methods to quantify advective fluxes.
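The quoted averaging gains follow from the usual standard-error scaling SE = σ/√N when single measurements are treated as independent. A small check, with assumed sampling intervals that are not stated values from the paper:

```python
import numpy as np

# Standard-error reduction by 30 min time averaging, assuming independent
# single measurements: SE = sigma / sqrt(N). Sampling intervals are
# illustrative assumptions.
for name, sigma, dt in [("OP-FTIR", 30.0, 300.0),   # 30 % CO2, one scan / 5 min
                        ("A-TOM", 0.3, 30.0)]:      # 0.3 m/s, one frame / 30 s
    n = 1800.0 / dt                                  # samples per 30 min window
    print(f"{name}: reduction factor {1 / np.sqrt(n):.2f} "
          f"-> SE of mean {sigma / np.sqrt(n):.2f}")
```

With one scan every 5 min the factor is about 0.41, and with one frame every 30 s it is about 0.13, consistent with the "at least 0.5" and "0.1" figures quoted above.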
Tropical Africa: Land use, biomass, and carbon estimates for 1980
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, S.; Gaston, G.; Daniels, R.C.
1996-06-01
This document describes the contents of a digital database containing maximum potential aboveground biomass, land use, and estimated biomass and carbon data for 1980, and describes a methodology that may be used to extend this data set to 1990 and beyond based on population and land cover data. The biomass data and carbon estimates are for woody vegetation in Tropical Africa. These data were collected to reduce the uncertainty associated with the possible magnitude of historical releases of carbon from land use change. Tropical Africa is defined here as encompassing 22.7 × 10^6 km^2 of the earth's land surface and includes those countries that for the most part are located in Tropical Africa. Countries bordering the Mediterranean Sea and in southern Africa (i.e., Egypt, Libya, Tunisia, Algeria, Morocco, South Africa, Lesotho, Swaziland, and Western Sahara) have maximum potential biomass and land cover information but do not have biomass or carbon estimates. The database was developed using the GRID module in the ARC/INFO geographic information system. Source data were obtained from the Food and Agriculture Organization (FAO), the U.S. National Geophysical Data Center, and a limited number of biomass-carbon density case studies. These data were used to derive the maximum potential and actual (ca. 1980) aboveground biomass-carbon values at regional and country levels. The land-use data provided were derived from a vegetation map originally produced for the FAO by the International Institute of Vegetation Mapping, Toulouse, France.
Aeroservoelastic Uncertainty Model Identification from Flight Data
NASA Technical Reports Server (NTRS)
Brenner, Martin J.
2001-01-01
Uncertainty modeling is a critical element in the estimation of robust stability margins for stability boundary prediction and robust flight control system development. To date, aeroservoelastic data analysis has paid insufficient attention to uncertainty modeling. Uncertainty can be estimated from flight data using both parametric and nonparametric identification techniques. The model validation problem addressed in this paper is to identify aeroservoelastic models with associated uncertainty structures from a limited amount of controlled excitation inputs over an extensive flight envelope. The challenge is to update analytical models from flight data estimates while also deriving non-conservative uncertainty descriptions consistent with the flight data. Multisine control surface command inputs and control system feedbacks are used as signals in a wavelet-based modal parameter estimation procedure for model updates. Transfer function estimates are incorporated in a robust minimax estimation scheme to obtain input-output parameters and error bounds consistent with the data and model structure. Uncertainty estimates derived from the data in this manner provide an appropriate and relevant representation for model development and robust stability analysis. This model-plus-uncertainty identification procedure is applied to aeroservoelastic flight data from the NASA Dryden Flight Research Center F-18 Systems Research Aircraft.
Sources of uncertainty in annual forest inventory estimates
Ronald E. McRoberts
2000-01-01
Although design and estimation aspects of annual forest inventories have begun to receive considerable attention within the forestry and natural resources communities, little attention has been devoted to identifying the sources of uncertainty inherent in these systems or to assessing the impact of those uncertainties on the total uncertainties of inventory estimates....
On Fitting a Multivariate Two-Part Latent Growth Model
Xu, Shu; Blozis, Shelley A.; Vandewater, Elizabeth A.
2017-01-01
A 2-part latent growth model can be used to analyze semicontinuous data to simultaneously study change in the probability that an individual engages in a behavior and, if engaged, change in the behavior itself. This article uses a Monte Carlo (MC) integration algorithm to study the interrelationships between the growth factors of 2 variables measured longitudinally, where each variable can follow a 2-part latent growth model. A SAS macro interfacing with Mplus is developed to estimate the model, taking into account the sampling uncertainty of this simulation-based computational approach. A sample of time-use data is used to show how maximum likelihood estimates can be obtained using a rectangular numerical integration method and an MC integration method. PMID:29333054
Uncertainty of fast biological radiation dose assessment for emergency response scenarios.
Ainsbury, Elizabeth A; Higueras, Manuel; Puig, Pedro; Einbeck, Jochen; Samaga, Daniel; Barquinero, Joan Francesc; Barrios, Lleonard; Brzozowska, Beata; Fattibene, Paola; Gregoire, Eric; Jaworska, Alicja; Lloyd, David; Oestreicher, Ursula; Romm, Horst; Rothkamm, Kai; Roy, Laurence; Sommer, Sylwester; Terzoudi, Georgia; Thierens, Hubert; Trompier, Francois; Vral, Anne; Woda, Clemens
2017-01-01
Reliable dose estimation is an important factor in appropriate dosimetric triage categorization of exposed individuals to support radiation emergency response. Following work done under the EU FP7 MULTIBIODOSE and RENEB projects, formal methods for defining uncertainties on biological dose estimates are compared using simulated and real data from recent exercises. The results demonstrate that a Bayesian method of uncertainty assessment is the most appropriate, even in the absence of detailed prior information. The relative accuracy and relevance of techniques for calculating uncertainty and combining assay results to produce single dose and uncertainty estimates is further discussed. Finally, it is demonstrated that whatever uncertainty estimation method is employed, ignoring the uncertainty on fast dose assessments can have an important impact on rapid biodosimetric categorization.
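A minimal version of the Bayesian dose estimation compared above can be written in a few lines: with a linear-quadratic dicentric yield curve and a Poisson likelihood for the observed dicentric count, the posterior over dose follows directly on a grid. The yield coefficients and counts below are illustrative assumptions, not values from MULTIBIODOSE or RENEB.

```python
import numpy as np
from scipy import stats

# Assumed linear-quadratic dicentric yield curve for acute photon exposure:
# yield per cell = c + alpha * D + beta * D^2 (illustrative coefficients).
c, alpha, beta = 0.001, 0.021, 0.063

n_cells = 500                          # scored cells
k_dics = 74                            # observed dicentrics

dose = np.linspace(0.0, 6.0, 1201)     # Gy grid
mu = n_cells * (c + alpha * dose + beta * dose**2)
log_post = stats.poisson.logpmf(k_dics, mu)          # flat prior on dose
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, dose)

cdf = np.cumsum(post) * (dose[1] - dose[0])
lo, med, hi = np.interp([0.025, 0.5, 0.975], cdf, dose)
print(f"dose estimate: {med:.2f} Gy "
      f"(95% credible interval {lo:.2f}-{hi:.2f} Gy)")
```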
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the variability of seismic hazard estimates of peak ground acceleration (PGA) at a 475-year return period in the Southern Apennines of Italy. Uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for the 10%-in-50-years exceedance hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is represented by a truncated normal distribution, specified by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of a logic tree, is used to capture the uncertainty in the seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic-tree branch. The branches of the logic tree, analyzed through the Monte Carlo approach, are maximum magnitude, fault length, fault width, fault dip and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that fault parameter while fixing the others. In this study we do not, however, investigate the sensitivity of the mean hazard results to the choice of GMPE. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
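The Monte Carlo sampling of fault-parameter branches can be sketched as follows. This is not the study's code: the fault geometry, slip rate, truncation bounds, and the Wells & Coppersmith-type area-magnitude scaling used here are illustrative assumptions, and the PGA computation via GMPEs is omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
N = 200   # simulations per parameter, matching the study's Monte Carlo runs

def trunc_norm(mean, sd, lo, hi, size):
    """Truncated normal sampler for one logic-tree branch."""
    a, b = (lo - mean) / sd, (hi - mean) / sd
    return stats.truncnorm(a, b, loc=mean, scale=sd).rvs(size, random_state=rng)

# Hypothetical fault: mean length 30 km, width 12 km, slip rate 0.6 mm/yr.
L = trunc_norm(30.0, 3.0, 24.0, 36.0, N)           # km
W = trunc_norm(12.0, 1.5, 9.0, 15.0, N)            # km
s = trunc_norm(0.6, 0.15, 0.3, 0.9, N) * 1e-3      # m/yr

# Characteristic magnitude from rupture area (Wells & Coppersmith-type).
M = 4.07 + 0.98 * np.log10(L * W)

# Recurrence from moment balance: T = M0(M) / (mu * A * slip_rate).
mu = 3.0e10                                        # shear modulus (Pa)
M0 = 10 ** (1.5 * M + 9.05)                        # seismic moment (N m)
T = M0 / (mu * (L * 1e3) * (W * 1e3) * s)          # yr

print(f"Mmax: {M.mean():.2f} +/- {M.std():.2f}")
print(f"recurrence: median {np.median(T):.0f} yr "
      f"(95% range {np.quantile(T, 0.025):.0f}-{np.quantile(T, 0.975):.0f} yr)")
```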
Pretest uncertainty analysis for chemical rocket engine tests
NASA Technical Reports Server (NTRS)
Davidian, Kenneth J.
1987-01-01
A parametric pretest uncertainty analysis has been performed for a chemical rocket engine test at a unique 1000:1 area ratio altitude test facility. Results from the parametric study provide the error limits required in order to maintain a maximum uncertainty of 1 percent on specific impulse. Equations used in the uncertainty analysis are presented.
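The core of such a pretest budget is first-order error propagation through the performance equation. For vacuum specific impulse Isp = F/(ṁ·g0), independent relative errors add in quadrature; the thrust and flow error limits below are assumptions chosen to stay inside the 1 percent target, not the study's numbers.

```python
import numpy as np

# Minimal pretest propagation sketch: Isp = F / (mdot * g0), with independent
# measurement errors combined in quadrature. All values are illustrative.
g0 = 9.80665                              # m/s^2

F, u_F = 2200.0, 0.005 * 2200.0           # thrust (N), 0.5 % uncertainty
mdot, u_mdot = 0.50, 0.006 * 0.50         # propellant flow (kg/s), 0.6 %

isp = F / (mdot * g0)
rel_u = np.hypot(u_F / F, u_mdot / mdot)  # relative quadrature sum
print(f"Isp = {isp:.1f} s, relative uncertainty = {100 * rel_u:.2f} %")
# 0.5 % thrust and 0.6 % flow errors give ~0.78 % on Isp, inside a 1 % budget.
```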
The global distribution of ammonia emissions from seabird colonies
NASA Astrophysics Data System (ADS)
Riddick, S. N.; Dragosits, U.; Blackall, T. D.; Daunt, F.; Wanless, S.; Sutton, M. A.
2012-08-01
Seabird colonies represent a significant source of atmospheric ammonia (NH3) in remote maritime systems, producing a source of nitrogen that may encourage plant growth, alter terrestrial plant community composition and affect the surrounding marine ecosystem. To investigate seabird NH3 emissions on a global scale, we developed a contemporary seabird database including a total seabird population of 261 million breeding pairs. We used this in conjunction with a bioenergetics model to estimate the mass of nitrogen excreted by all seabirds at each breeding colony. These results were combined with the findings of mid-latitude field studies of volatilization rates to estimate the global distribution of annual NH3 emissions from seabird colonies. The largest uncertainty in our emission estimate concerns the potential temperature dependence of NH3 emission. To investigate this, we calculated and compared temperature-independent emission estimates with a maximum feasible temperature-dependent emission, based on the thermodynamic dissociation and solubility equilibria. Using the temperature-independent approach, we estimate global NH3 emissions from seabird colonies at 404 Gg NH3 per year. By comparison, since most seabirds are located in relatively cold circumpolar locations, the thermodynamically dependent estimate is 136 Gg NH3 per year. Actual global emissions are expected to lie within these bounds, as other factors, such as non-linear interactions with water availability and surface infiltration, moderate the theoretical temperature response. Combining the sources of error from temperature (±49%), seabird population estimates (±36%), variation in diet composition (±23%) and non-breeder attendance (±13%) gives a mid estimate, with overall uncertainty range, of NH3 emissions from seabird colonies of 270 [97-442] Gg NH3 per year. These emissions are environmentally relevant as they primarily occur as "hot-spots" in otherwise pristine environments with low anthropogenic emissions.
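The quoted overall range follows from combining the four listed error sources in quadrature about the mid estimate, as the short check below shows; it reproduces the interval approximately, with the published bounds also reflecting rounding and asymmetries.

```python
import numpy as np

# Independent relative error sources combined in quadrature, as listed above.
mid = 270.0                                   # Gg NH3 per year
sources = {"temperature": 0.49, "population": 0.36,
           "diet": 0.23, "non-breeder attendance": 0.13}

total = np.sqrt(sum(u**2 for u in sources.values()))
print(f"combined relative uncertainty: +/-{100 * total:.0f} %")
print(f"emission range: {mid * (1 - total):.0f}-{mid * (1 + total):.0f} "
      f"Gg NH3 per year")   # ~91-449, close to the quoted 97-442 interval
```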
Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.
Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K
2011-01-01
We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Diffusion parameter uncertainty estimation from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residual rescaling and cannot be utilized directly for body diffusion parameter uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the unscented transform to compute the residual rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the body diffusion parameter uncertainty. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
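The wild-bootstrap idea can be shown on a simpler mono-exponential diffusion model fitted log-linearly. The paper's distinctive step, rescaling residuals with the unscented transform before bootstrapping the non-linear model, is omitted here, so this is only a conceptual sketch with invented acquisition parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Mono-exponential diffusion model S(b) = S0 * exp(-b * ADC), log-linearized.
bvals = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 600.0, 800.0])  # s/mm^2
S0_true, adc_true = 1000.0, 1.8e-3
signal = S0_true * np.exp(-bvals * adc_true) * rng.lognormal(0.0, 0.03, bvals.size)

X = np.column_stack([np.ones_like(bvals), -bvals])
y = np.log(signal)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef

# Wild bootstrap: flip residual signs with Rademacher weights and refit.
boot_adc = []
for _ in range(2000):
    signs = rng.choice([-1.0, 1.0], size=resid.size)
    y_star = X @ coef + resid * signs
    c_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
    boot_adc.append(c_star[1])

boot_adc = np.array(boot_adc)
print(f"ADC = {coef[1]:.2e} mm^2/s, "
      f"bootstrap SD = {boot_adc.std(ddof=1):.2e} mm^2/s")
```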
Uncertainty in flood damage estimates and its potential effect on investment decisions
NASA Astrophysics Data System (ADS)
Wagenaar, Dennis; de Bruijn, Karin; Bouwer, Laurens; de Moel, Hans
2015-04-01
This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage models can lead to large uncertainties in flood damage estimates, and this explanation is used to quantify the uncertainty with a Monte Carlo analysis. The Monte Carlo analysis uses a damage function library with 272 functions from 7 different flood damage models, and yields uncertainties on the order of a factor of 2 to 5. This uncertainty is typically larger for small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economically optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investments.
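The Monte Carlo over a damage-function library can be miniaturized as below. The library here is a synthetic stand-in (random saturating depth-damage curves, not the 272 published functions), but it reproduces the qualitative finding that the spread is larger at small water depths.

```python
import numpy as np

rng = np.random.default_rng(8)

# Stand-in damage-function library: each function maps water depth (m) to a
# fraction of a maximum damage value, parameterized as d / (d + k).
ks = rng.uniform(0.3, 3.0, 272)
max_damage = 200.0                    # euro per m^2, assumed

def damage(depth, k):
    return max_damage * depth / (depth + k)

for depth in (0.5, 2.0):
    draws = damage(depth, ks[rng.integers(0, ks.size, 10000)])
    spread = np.quantile(draws, 0.95) / np.quantile(draws, 0.05)
    print(f"depth {depth} m: damage spread factor {spread:.1f}")
```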
Park, Daeryong; Roesner, Larry A
2012-12-15
This study examined pollutant loads released to receiving waters from a typical urban watershed in the Los Angeles (LA) Basin of California by applying a best management practice (BMP) performance model that includes uncertainty. This BMP performance model uses the k-C* model and incorporates uncertainty analysis via the first-order second-moment (FOSM) method to assess the effectiveness of BMPs for removing stormwater pollutants. Uncertainties were considered for the influent event mean concentration (EMC) and the areal removal rate constant of the k-C* model. The Storage, Treatment, Overflow, Runoff Model (STORM) was used to simulate the flow volume from the watershed, the bypass flow volume and the flow volume that passes through the BMP. Detention basins and total suspended solids (TSS) were chosen as the representative stormwater BMP and pollutant, respectively. This paper applies load frequency curves (LFCs), which replace the exceedance percentage with an exceedance frequency, as an alternative to load duration curves (LDCs) for evaluating the effectiveness of BMPs. An evaluation method based on uncertainty analysis is suggested because it assesses water quality standard exceedance in terms of both frequency and magnitude. As a result, the incorporation of uncertainty in the estimates of pollutant loads can assist stormwater managers in determining the degree of total maximum daily load (TMDL) compliance that could be expected from a given BMP in a watershed. Copyright © 2012 Elsevier Ltd. All rights reserved.
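The FOSM propagation through the k-C* relation amounts to a first-order Taylor expansion in the uncertain inputs. Below is a minimal sketch with illustrative numbers, not values from the study.

```python
import numpy as np

# FOSM sketch for the k-C* relation C_out = C* + (C_in - C*) * exp(-k / q),
# treating influent EMC (C_in) and areal removal rate (k) as the uncertain
# inputs, as in the study. All numbers are illustrative.
C_star = 12.0                      # irreducible TSS concentration (mg/L)
q = 0.5                            # hydraulic loading rate (m/d)

C_in, var_Cin = 150.0, 45.0**2     # mean, variance of influent EMC (mg/L)
k, var_k = 0.3, 0.1**2             # mean, variance of k (m/d)

e = np.exp(-k / q)
C_out = C_star + (C_in - C_star) * e

# First-order second-moment: var(C_out) ~ sum_i (dC/dx_i)^2 * var(x_i)
dC_dCin = e
dC_dk = -(C_in - C_star) * e / q
var_Cout = dC_dCin**2 * var_Cin + dC_dk**2 * var_k

print(f"effluent EMC: {C_out:.1f} +/- {np.sqrt(var_Cout):.1f} mg/L (1 sigma)")
```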
NASA Technical Reports Server (NTRS)
Tralli, David M.; Lichten, Stephen M.; Herring, Thomas A.
1992-01-01
Kalman filter estimates of zenith nondispersive atmospheric path delays at Westford, Massachusetts, Fort Davis, Texas, and Mojave, California, were obtained from independent analyses of data collected during January and February 1988 using GPS and VLBI. The apparent accuracy of the path delays is inferred by examining the estimates and covariances from both sets of data. The ability of the geodetic data to resolve zenith path delay fluctuations is determined by further comparing the GPS Kalman filter estimates with corresponding wet path delays derived from water vapor radiometric data available at Mojave over two 8-hour data spans within the comparison period. GPS and VLBI zenith path delay estimates agree well within one-standard-deviation formal uncertainties (10-20 mm for GPS and 3-15 mm for VLBI) in four out of the five possible comparisons, with maximum differences of 5 and 21 mm over 8- to 12-hour data spans.
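Filters of this kind commonly model the zenith delay as a random-walk state. A scalar Kalman filter sketch with assumed process and measurement noise levels (not the study's settings) shows how formal uncertainties at the few-millimetre level arise.

```python
import numpy as np

rng = np.random.default_rng(10)

# Scalar random-walk Kalman filter for a zenith wet delay (m).
n = 288                              # 24 h of 5-min epochs
q = 0.003**2                         # process noise variance per step (m^2)
r = 0.010**2                         # measurement noise variance (m^2)

truth = 0.10 + np.cumsum(rng.normal(0.0, np.sqrt(q), n))
z = truth + rng.normal(0.0, np.sqrt(r), n)

x, P = 0.10, 1.0                     # initial state and variance
est, sig = [], []
for zk in z:
    P += q                           # predict (random walk)
    K = P / (P + r)                  # Kalman gain
    x += K * (zk - x)                # update
    P *= 1.0 - K
    est.append(x)
    sig.append(np.sqrt(P))

rmse = np.sqrt(np.mean((np.array(est) - truth) ** 2))
print(f"RMSE {1e3 * rmse:.1f} mm, final formal sigma {1e3 * sig[-1]:.1f} mm")
```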
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic for visualizing the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
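The single-order-statistics scoring at the heart of this method rests on the fact that, under a correct cumulative distribution F, the transformed sorted sample F(x_(i)) behaves like uniform order statistics, u_(i) ~ Beta(i, n+1-i). Below is a stripped-down sketch of that scoring idea only; the paper's sample-size-invariant scoring function and iterative CDF refinement are omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
x = np.sort(rng.normal(2.0, 1.5, 400))
n = x.size
i = np.arange(1, n + 1)

def score(cdf):
    """Log-likelihood of F(x_(i)) under the uniform order-statistic model."""
    u = np.clip(cdf(x), 1e-12, 1 - 1e-12)
    return stats.beta.logpdf(u, i, n + 1 - i).sum()

trial_good = stats.norm(2.0, 1.5).cdf     # matches the data
trial_bad = stats.norm(2.0, 2.5).cdf      # too wide: atypical fluctuations
print(f"correct model score: {score(trial_good):.0f}")
print(f"wrong model score:   {score(trial_bad):.0f}")
```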
Thermospheric mass density model error variance as a function of time scale
NASA Astrophysics Data System (ADS)
Emmert, J. T.; Sutton, E. K.
2017-12-01
In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
Uncertainty factors in screening ecological risk assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duke, L.D.; Taggart, M.
2000-06-01
The hazard quotient (HQ) method is commonly used in screening ecological risk assessments (ERAs) to estimate risk to wildlife at contaminated sites. Many ERAs use uncertainty factors (UFs) in the HQ calculation to incorporate the uncertainty associated with predicting wildlife responses to contaminant exposure from laboratory toxicity data. The overall objective was to evaluate the current UF methodology as applied to screening ERAs in California, USA. Specific objectives included characterizing current UF methodology, evaluating the degree of conservatism in UFs as applied, and identifying limitations of the current approach. Twenty-four of 29 evaluated ERAs used the HQ approach; 23 of these used UFs in the HQ calculation. All 24 made interspecies extrapolations, and 21 compensated for its uncertainty, most using allometric adjustments and some using RFs. Most also incorporated uncertainty for same-species extrapolations. Twenty-one ERAs used UFs extrapolating from lowest observed adverse effect level (LOAEL) to no observed adverse effect level (NOAEL), and 18 used UFs extrapolating from subchronic to chronic exposure. Values and application of all UF types were inconsistent. Maximum cumulative UFs ranged from 10 to 3,000. Results suggest UF methodology is widely used but inconsistently applied and is not uniformly conservative relative to UFs recommended in regulatory guidelines and academic literature. The method is limited by lack of consensus among scientists, regulators, and practitioners about the magnitudes, types, and conceptual underpinnings of the UF methodology.
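The HQ arithmetic with uncertainty factors is a one-liner; what drives the inconsistency found in the review is the choice and multiplication of the UFs. A sketch with illustrative values, not drawn from any of the reviewed ERAs:

```python
# Hazard quotient with uncertainty factors; all values are illustrative.
exposure_dose = 2.4          # estimated daily dose, mg/kg-bw/day

loael = 15.0                 # laboratory LOAEL for a surrogate species
uf_interspecies = 10.0       # lab species -> wildlife receptor
uf_loael_to_noael = 10.0     # LOAEL -> NOAEL extrapolation
uf_subchronic = 3.0          # subchronic -> chronic exposure

trv = loael / (uf_interspecies * uf_loael_to_noael * uf_subchronic)
hq = exposure_dose / trv
print(f"TRV = {trv:.3f} mg/kg-bw/day, HQ = {hq:.0f}")
# The cumulative UF here is 300; the reviewed ERAs applied cumulative UFs
# from 10 to 3,000, shifting HQ by more than two orders of magnitude.
```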
Expanded uncertainty estimation methodology in determining the sandy soils filtration coefficient
NASA Astrophysics Data System (ADS)
Rusanova, A. D.; Malaja, L. D.; Ivanov, R. N.; Gruzin, A. V.; Shalaj, V. V.
2018-04-01
A methodology for estimating the combined standard uncertainty in determining the filtration coefficient of sandy soils has been developed. Laboratory studies were carried out, resulting in determination of the filtration coefficient and an estimate of its combined uncertainty.
Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2016-01-01
A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
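The correction can be sketched in batch form (the paper develops a recursive formulation). For a linear regression with AR(1) residuals, the conventional white-residual covariance understates the parameter uncertainty, while a sandwich estimate built from the truncated residual autocorrelation recovers the observed scatter; all settings below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Linear model y = X @ theta + colored noise (AR(1) residuals).
n = 500
X = np.column_stack([np.ones(n), np.linspace(-1.0, 1.0, n)])
theta_true = np.array([0.2, -1.5])

e = np.zeros(n)
for k in range(1, n):
    e[k] = 0.9 * e[k - 1] + rng.normal(0.0, 0.05)
y = X @ theta_true + e

XtX_inv = np.linalg.inv(X.T @ X)
theta = XtX_inv @ X.T @ y
resid = y - X @ theta

# Conventional covariance, assuming white residuals.
s2 = resid @ resid / (n - 2)
cov_white = s2 * XtX_inv

# Correction via residual autocorrelation (truncated to a few lags):
# cov = (X'X)^-1 X' R X (X'X)^-1, with R built from the autocorrelation.
lags = 50
r = np.array([resid[:n - k] @ resid[k:] / n for k in range(lags)])
R = np.zeros((n, n))
for k in range(lags):
    R += np.diag(np.full(n - k, r[k]), k)
    if k:
        R += np.diag(np.full(n - k, r[k]), -k)
cov_col = XtX_inv @ X.T @ R @ X @ XtX_inv

print("white-residual 1-sigma:", np.sqrt(np.diag(cov_white)))
print("corrected 1-sigma:     ", np.sqrt(np.diag(cov_col)))
```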
NASA Astrophysics Data System (ADS)
Lin, Tsungpo
Performance engineers face a major challenge in modeling and simulation for the after-market power system due to system degradation and measurement errors. Currently, the majority of the power generation industry utilizes deterministic data matching to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and increases the risk of providing performance guarantees. In this research work, a maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the propagation of error to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated in the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve better efficiency in the combined scheme of least-squares based data reconciliation and hypothesis-testing based GED, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using response surface equations (RSEs) and system/process decomposition are incorporated into the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees that arise from uncertainties in performance simulation.
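The reconciliation half of SDRMC can be illustrated with the textbook linear case: measurements are adjusted, in proportion to their variances, to exactly satisfy a mass balance constraint. This closed-form stand-in omits the model calibration, the LM optimizer, and the gross error detection described above; the flow values are invented.

```python
import numpy as np

# Three measured flows around a splitter must satisfy f1 = f2 + f3.
x_meas = np.array([100.0, 61.0, 42.0])     # measured flows (kg/s)
sigma = np.array([2.0, 1.0, 1.0])          # measurement standard deviations
V = np.diag(sigma**2)

A = np.array([[1.0, -1.0, -1.0]])          # constraint: A @ x = 0

# Minimizer of (x - x_meas)' V^-1 (x - x_meas) subject to A @ x = 0:
x_hat = x_meas - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ x_meas)
print("reconciled flows:", x_hat.round(2),
      "balance residual:", (A @ x_hat).item())
```

Less precise measurements absorb larger adjustments, which is the same variance-weighting principle the full non-linear scheme applies through its optimizer.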
A mesic maximum in biological water use demarcates biome sensitivity to aridity shifts.
Good, Stephen P; Moore, Georgianne W; Miralles, Diego G
2017-12-01
Biome function is largely governed by how efficiently available resources can be used, and yet for water, the ratio of direct biological resource use (transpiration, $E_T$) to total supply (annual precipitation, $P$) at ecosystem scales remains poorly characterized. Here, we synthesize field, remote sensing and ecohydrological modelling estimates to show that the biological water use fraction ($E_T/P$) reaches a maximum under mesic conditions; that is, when evaporative demand (potential evapotranspiration, $E_P$) slightly exceeds supplied precipitation. We estimate that this mesic maximum in $E_T/P$ occurs at an aridity index (defined as $E_P/P$) between 1.3 and 1.9. The observed global average aridity of 1.8 falls within this range, suggesting that the biosphere is, on average, configured to transpire the largest possible fraction of global precipitation for the current climate. A unimodal $E_T/P$ distribution indicates that both dry regions subjected to increasing aridity and humid regions subjected to decreasing aridity will suffer declines in the fraction of precipitation that plants transpire for growth and metabolism. Given the uncertainties in the prediction of future biogeography, this framework provides a clear and concise determination of ecosystems' sensitivity to climatic shifts, as well as expected patterns in the amount of precipitation that ecosystems can effectively use.
Rigo-Bonnin, Raül; Blanco-Font, Aurora; Canalias, Francesca
2018-05-08
Values of mass concentration of tacrolimus in whole blood are commonly used by clinicians for monitoring the status of a transplant patient and for checking whether the administered dose of tacrolimus is effective. Clinical laboratories must therefore provide results that are as accurate as possible, and measurement uncertainty can help ensure the reliability of these results. The aim of this study was to estimate the measurement uncertainty of whole blood mass concentration tacrolimus values obtained by UHPLC-MS/MS using two top-down approaches: the single laboratory validation approach and the proficiency testing approach. For the single laboratory validation approach, we estimated the uncertainties associated with intermediate imprecision (using long-term internal quality control data) and bias (using a certified reference material). We then combined these with the uncertainties related to the calibrator-assigned values to obtain a combined uncertainty and, finally, calculated the expanded uncertainty. For the proficiency testing approach, the uncertainty was estimated in a similar way, but using data from internal and external quality control schemes to estimate the uncertainty related to bias. The estimated expanded uncertainties for the single laboratory validation approach and for proficiency testing using internal and external quality control schemes were 11.8%, 13.2%, and 13.0%, respectively. After performing the two top-down approaches, we observed that their uncertainty results were quite similar, which confirms that either approach could be used to estimate the measurement uncertainty of whole blood mass concentration tacrolimus values in clinical laboratories. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
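The single laboratory validation combination described above is a quadrature sum followed by a coverage factor. A minimal sketch with placeholder inputs; the study's actual uncertainty components are not reproduced here.

```python
import numpy as np

# Single-laboratory-validation top-down combination; placeholder inputs.
u_rw = 0.045    # relative intermediate imprecision, from long-term IQC data
u_bias = 0.025  # relative uncertainty of bias, from a certified reference material
u_cal = 0.020   # relative uncertainty of calibrator-assigned values

u_c = np.sqrt(u_rw**2 + u_bias**2 + u_cal**2)   # combined standard uncertainty
U = 2.0 * u_c                                   # expanded, coverage factor k = 2
print(f"combined: {100 * u_c:.1f} %, expanded (k=2): {100 * U:.1f} %")
```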
Probabilistic assessment of landslide tsunami hazard for the northern Gulf of Mexico
NASA Astrophysics Data System (ADS)
Pampell-Manis, A.; Horrillo, J.; Shigihara, Y.; Parambath, L.
2016-01-01
The devastating consequences of recent tsunamis affecting Indonesia and Japan have prompted a scientific response to better assess unexpected tsunami hazards. Although much uncertainty exists regarding the recurrence of large-scale tsunami events in the Gulf of Mexico (GoM), geological evidence indicates that a tsunami is possible and would most likely come from a submarine landslide triggered by an earthquake. This study customizes a first-order probabilistic landslide tsunami hazard assessment for the GoM. Monte Carlo simulation (MCS) is employed to determine landslide configurations based on distributions obtained from observational submarine mass failure (SMF) data. Our MCS approach incorporates a Cholesky decomposition method for correlated landslide size parameters to capture the correlations seen in the data as well as the uncertainty inherent in these events. Slope stability analyses are performed using landslide and sediment properties and regional seismic loading to determine landslide configurations that fail and produce a tsunami. The probability of each tsunamigenic failure is calculated based on the joint probability of slope failure and the probability of the triggering earthquake. We are thus able to estimate sizes and return periods for probabilistic maximum credible landslide scenarios. We find that the Cholesky decomposition approach generates landslide parameter distributions that retain the trends seen in observational data, improving the statistical validity and relevance of the MCS technique in the context of landslide tsunami hazard assessment. Estimated return periods suggest that probabilistic maximum credible SMF events in the northern and northwestern GoM have a recurrence of 5000-8000 years, in agreement with age dates of observed deposits.
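The Cholesky step referenced above amounts to coloring independent standard normal draws so that the log-transformed landslide dimensions carry a target correlation structure. Below is a sketch with placeholder lognormal parameters and correlation matrix, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(9)

# Placeholder lognormal parameters for landslide length, width, thickness.
mu = np.log(np.array([10.0, 4.0, 0.15]))     # medians (km)
sd = np.array([0.8, 0.6, 0.5])               # log-space standard deviations

corr = np.array([[1.0, 0.7, 0.5],            # assumed correlation structure
                 [0.7, 1.0, 0.4],
                 [0.5, 0.4, 1.0]])
cov = np.outer(sd, sd) * corr
Lc = np.linalg.cholesky(cov)

# Color independent standard normals, then exponentiate: correlated
# lognormal draws of the landslide size parameters.
z = rng.standard_normal((10000, 3))
samples = np.exp(mu + z @ Lc.T)

print("empirical log-space correlation:\n",
      np.corrcoef(np.log(samples), rowvar=False).round(2))
```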
NASA Astrophysics Data System (ADS)
Costa, F. A. F.; Keir, G.; McIntyre, N.; Bulovic, N.
2015-12-01
Most groundwater supply bores in Australia do not have flow metering equipment and so regional groundwater abstraction rates are not well known. Past estimates of unmetered abstraction for regional numerical groundwater modelling typically have not attempted to quantify the uncertainty inherent in the estimation process in detail. In particular, the spatial properties of errors in the estimates are almost always neglected. Here, we apply Bayesian spatial models to estimate these abstractions at a regional scale, using the state-of-the-art computationally inexpensive approaches of integrated nested Laplace approximation (INLA) and stochastic partial differential equations (SPDE). We examine a case study in the Condamine Alluvium aquifer in southern Queensland, Australia; even in this comparatively data-rich area with extensive groundwater abstraction for agricultural irrigation, approximately 80% of bores do not have reliable metered flow records. Additionally, the metering data in this area are characterised by complicated statistical features, such as zero-valued observations, non-normality, and non-stationarity. While this precludes the use of many classical spatial estimation techniques, such as kriging, our model (using the R-INLA package) is able to accommodate these features. We use a joint model to predict both probability and magnitude of abstraction from bores in space and time, and examine the effect of a range of high-resolution gridded meteorological covariates upon the predictive ability of the model. Deviance Information Criterion (DIC) scores are used to assess a range of potential models, which reward good model fit while penalising excessive model complexity. We conclude that maximum air temperature (as a reasonably effective surrogate for evapotranspiration) is the most significant single predictor of abstraction rate; and that a significant spatial effect exists (represented by the SPDE approximation of a Gaussian random field with a Matérn covariance function). Our final model adopts air temperature, solar exposure, and normalized difference vegetation index (NDVI) as covariates, shows good agreement with previous estimates at a regional scale, and additionally offers rigorous quantification of uncertainty in the estimate.
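For reference, the Gaussian random field in the final model has a Matérn covariance; a sketch of that function for smoothness ν = 3/2 (a common SPDE default — the study's fitted parameters are not reproduced here):

```python
import numpy as np

def matern_nu_3_2(d, sigma2=1.0, rho=10.0):
    """Matérn covariance with smoothness nu = 3/2.
    d: distance(s); sigma2: marginal variance; rho: range-like scale.
    Parameter values are illustrative defaults only."""
    a = np.sqrt(3.0) * np.asarray(d, dtype=float) / rho
    return sigma2 * (1.0 + a) * np.exp(-a)

print(matern_nu_3_2([0.0, 5.0, 20.0]))  # decays from sigma2 with distance
```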
NASA Astrophysics Data System (ADS)
Hagan, Nicole; Robins, Nicholas; Hsu-Kim, Heileen; Halabi, Susan; Morris, Mark; Woodall, George; Zhang, Tong; Bacon, Allan; Richter, Daniel De B.; Vandenberg, John
2011-12-01
Detailed Spanish records of mercury use and silver production during the colonial period in Potosí, Bolivia were evaluated to estimate atmospheric emissions of mercury from silver smelting. Mercury was used in the silver production process in Potosí and nearly 32,000 metric tons of mercury were released to the environment. AERMOD was used in combination with the estimated emissions to approximate historical air concentrations of mercury from colonial mining operations during 1715, a year of relatively low silver production. Source characteristics were selected from archival documents, colonial maps and images of silver smelters in Potosí and a base case of input parameters was selected. Input parameters were varied to understand the sensitivity of the model to each parameter. Modeled maximum 1-h concentrations were most sensitive to stack height and diameter, whereas an index of community exposure was relatively insensitive to uncertainty in input parameters. Modeled 1-h and long-term concentrations were compared to inhalation reference values for elemental mercury vapor. Estimated 1-h maximum concentrations within 500 m of the silver smelters consistently exceeded present-day occupational inhalation reference values. Additionally, the entire community was estimated to have been exposed to levels of mercury vapor that exceed present-day acute inhalation reference values for the general public. Estimated long-term maximum concentrations of mercury were predicted to substantially exceed the EPA Reference Concentration for areas within 600 m of the silver smelters. A concentration gradient predicted by AERMOD was used to select soil sampling locations along transects in Potosí. Total mercury in soils ranged from 0.105 to 155 mg kg-1, among the highest levels reported for surface soils in the scientific literature. The correlation between estimated air concentrations and measured soil concentrations will guide future research to determine the extent to which the current community of Potosí and vicinity is at risk of adverse health effects from historical mercury contamination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habte, A.; Sengupta, M.; Reda, I.
Radiometric data with known and traceable uncertainty are essential for climate change studies to better understand cloud-radiation interactions and the earth radiation budget. Further, adopting a known and traceable method of estimating uncertainty with respect to SI ensures that the uncertainties quoted for radiometric measurements can be compared based on documented methods of derivation. Statements about the overall measurement uncertainty can therefore only be made on an individual basis, taking all relevant factors into account. This poster provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements from radiometers. The approach follows the Guide to the Expression of Uncertainty in Measurement (GUM).
Wallace, Jack
2010-05-01
While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
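One plausible reading of approach (ii) in code: treat the root-mean-square of the lab-minus-consensus differences across many proficiency rounds as the standard uncertainty, since it captures both bias and precision. The numbers are invented for illustration:

```python
import numpy as np

# Differences (lab result minus participant mean) over several proficiency
# rounds, in g/100 mL -- invented values for illustration only
diffs = np.array([0.002, -0.001, 0.003, 0.000, -0.002, 0.001, -0.003, 0.002])

# RMS of the differences as the standard uncertainty
u = np.sqrt(np.mean(diffs**2))
U = 2 * u  # expanded uncertainty, coverage factor k = 2
print(u, U)
```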
Mapping Atmospheric Moisture Climatologies across the Conterminous United States
Daly, Christopher; Smith, Joseph I.; Olson, Keith V.
2015-01-01
Spatial climate datasets of 1981–2010 long-term mean monthly average dew point and minimum and maximum vapor pressure deficit were developed for the conterminous United States at 30-arcsec (~800m) resolution. Interpolation of long-term averages (twelve monthly values per variable) was performed using PRISM (Parameter-elevation Relationships on Independent Slopes Model). Surface stations available for analysis numbered only 4,000 for dew point and 3,500 for vapor pressure deficit, compared to 16,000 for previously-developed grids of 1981–2010 long-term mean monthly minimum and maximum temperature. Therefore, a form of Climatologically-Aided Interpolation (CAI) was used, in which the 1981–2010 temperature grids were used as predictor grids. For each grid cell, PRISM calculated a local regression function between the interpolated climate variable and the predictor grid. Nearby stations entering the regression were assigned weights based on the physiographic similarity of the station to the grid cell that included the effects of distance, elevation, coastal proximity, vertical atmospheric layer, and topographic position. Interpolation uncertainties were estimated using cross-validation exercises. Given that CAI interpolation was used, a new method was developed to allow uncertainties in predictor grids to be accounted for in estimating the total interpolation error. Local land use/land cover properties had noticeable effects on the spatial patterns of atmospheric moisture content and deficit. An example of this was relatively high dew points and low vapor pressure deficits at stations located in or near irrigated fields. The new grids, in combination with existing temperature grids, enable the user to derive a full suite of atmospheric moisture variables, such as minimum and maximum relative humidity, vapor pressure, and dew point depression, with accompanying assumptions. All of these grids are available online at http://prism.oregonstate.edu, and include 800-m and 4-km resolution data, images, metadata, pedigree information, and station inventory files. PMID:26485026
Jennings, Simon; Collingridge, Kate
2015-01-01
Existing estimates of fish and consumer biomass in the world’s oceans are disparate. This creates uncertainty about the roles of fish and other consumers in biogeochemical cycles and ecosystem processes, the extent of human and environmental impacts and fishery potential. We develop and use a size-based macroecological model to assess the effects of parameter uncertainty on predicted consumer biomass, production and distribution. Resulting uncertainty is large (e.g. median global biomass 4.9 billion tonnes for consumers weighing 1 g to 1000 kg; 50% uncertainty intervals of 2 to 10.4 billion tonnes; 90% uncertainty intervals of 0.3 to 26.1 billion tonnes) and driven primarily by uncertainty in trophic transfer efficiency and its relationship with predator-prey body mass ratios. Even the upper uncertainty intervals for global predictions of consumer biomass demonstrate the remarkable scarcity of marine consumers, with less than one part in 30 million by volume of the global oceans comprising tissue of macroscopic animals. Thus the apparently high densities of marine life seen in surface and coastal waters and frequently visited abundance hotspots will likely give many in society a false impression of the abundance of marine animals. Unexploited baseline biomass predictions from the simple macroecological model were used to calibrate a more complex size- and trait-based model to estimate fisheries yield and impacts. Yields are highly dependent on baseline biomass and fisheries selectivity. Predicted global sustainable fisheries yield increases ≈4 fold when smaller individuals (< 20 cm from species of maximum mass < 1kg) are targeted in all oceans, but the predicted yields would rarely be accessible in practice and this fishing strategy leads to the collapse of larger species if fishing mortality rates on different size classes cannot be decoupled. Our analyses show that models with minimal parameter demands that are based on a few established ecological principles can support equitable analysis and comparison of diverse ecosystems. The analyses provide insights into the effects of parameter uncertainty on global biomass and production estimates, which have yet to be achieved with complex models, and will therefore help to highlight priorities for future research and data collection. However, the focus on simple model structures and global processes means that non-phytoplankton primary production and several groups, structures and processes of ecological and conservation interest are not represented. Consequently, our simple models become increasingly less useful than more complex alternatives when addressing questions about food web structure and function, biodiversity, resilience and human impacts at smaller scales and for areas closer to coasts. PMID:26226590
Fast automated analysis of strong gravitational lenses with convolutional neural networks.
Hezaveh, Yashar D; Levasseur, Laurence Perreault; Marshall, Philip J
2017-08-30
Quantifying image distortions caused by strong gravitational lensing-the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures-and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process
Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.
2013-01-01
Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest the use of calculating minimum detectable differences to improve confidence when delisting endangered animals and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531
Howell, Fergus W.; Haywood, Alan M.; Dolan, Aisling M.; Dowsett, Harry J.; Francis, Jane E.; Hill, Daniel J.; Pickering, Steven J.; Pope, James O.; Salzmann, Ulrich; Wade, Bridget S.
2014-01-01
General Circulation Model simulations of the mid-Pliocene warm period (mPWP, 3.264 to 3.025 Myr ago) currently underestimate the level of warming that proxy data suggest existed at high latitudes, with discrepancies of up to 11°C for sea surface temperature estimates and 17°C for surface air temperature estimates. Sea ice has a strong influence on high-latitude climates, partly due to the albedo feedback. We present results demonstrating the effects of reductions in minimum sea ice albedo limits in general circulation model simulations of the mPWP. While mean annual surface air temperature increases of up to 6°C are observed in the Arctic, the maximum decrease in model-data discrepancies is just 0.81°C. Mean annual sea surface temperatures increase by up to 2°C, with a maximum model-data discrepancy improvement of 1.31°C. It is also suggested that the simulation of observed 21st century sea ice decline could be influenced by the adjustment of the sea ice albedo parameterization.
Verification of the Uncertainty Principle by Using Diffraction of Light Waves
ERIC Educational Resources Information Center
Nikolic, D.; Nesic, Lj
2011-01-01
We described a simple idea for experimental verification of the uncertainty principle for light waves. We used single-slit diffraction of a laser beam to measure the angular width of the zero-order diffraction maximum and obtained the corresponding wave-number uncertainty, taking the uncertainty in position to be the slit width. For the…
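The estimate sketched in the abstract can be written out as a worked relation; this is a reconstruction under the stated assumption Δx ≈ a (the slit width), not the authors' exact derivation:

```latex
% First minimum of single-slit diffraction: \sin\theta \approx \lambda/a.
% With \Delta x \approx a and
% \Delta k_x \approx k\sin\theta = \frac{2\pi}{\lambda}\cdot\frac{\lambda}{a}:
\Delta x \,\Delta k_x \;\approx\; a \cdot \frac{2\pi}{a} \;=\; 2\pi \;\gg\; \tfrac{1}{2}
```

consistent with the wave-mechanical uncertainty relation Δx Δk_x ≥ 1/2.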
Critical Analysis of Dual-Probe Heat-Pulse Technique Applied to Measuring Thermal Diffusivity
NASA Astrophysics Data System (ADS)
Bovesecchi, G.; Coppa, P.; Corasaniti, S.; Potenza, M.
2018-07-01
The paper presents an analysis of the experimental parameters involved in applying the dual-probe heat-pulse technique, followed by a critical review of methods for processing thermal response data (e.g., maximum detection and nonlinear least-squares regression) and the uncertainty obtainable with each. Glycerol was selected as the test liquid, and its thermal diffusivity was evaluated over the temperature range from -20 °C to 60 °C. In addition, Monte Carlo simulation was used to assess the uncertainty propagation for maximum detection. It was concluded that the maximum detection approach to processing thermal response data gives results closest to the reference data, whereas nonlinear regression results are affected by larger uncertainties due to partial correlation between the estimated parameters. Moreover, interpolating the temperature data with a polynomial to find the maximum leads to a systematic difference between measured and reference data, as shown by the Monte Carlo simulations; once corrected, this systematic error is reduced to a negligible value of about 0.8%.
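For orientation, the maximum-detection idea reduces to a closed-form estimate in the idealized case of an instantaneous line heat source; the study used a finite-duration pulse and polynomial interpolation, so the sketch below (with invented probe spacing and timing) is only a simplified illustration:

```python
def diffusivity_from_tmax(r, t_max):
    """Thermal diffusivity via maximum detection, assuming an idealized
    instantaneous line heat source: the probe temperature T(r, t) peaks
    at t_max = r**2 / (4 * alpha), hence alpha = r**2 / (4 * t_max)."""
    return r**2 / (4.0 * t_max)

# Illustrative numbers only: probe spacing 6 mm, peak observed at 90 s
print(diffusivity_from_tmax(6e-3, 90.0))  # ~1e-7 m^2/s, the order of glycerol
```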
Estimating Uncertainty in Annual Forest Inventory Estimates
Ronald E. McRoberts; Veronica C. Lessard
1999-01-01
The precision of annual forest inventory estimates may be negatively affected by uncertainty from a variety of sources including: (1) sampling error; (2) procedures for updating plots not measured in the current year; and (3) measurement errors. The impact of these sources of uncertainty on final inventory estimates is investigated using Monte Carlo simulation...
Multiple indicator cokriging with application to optimal sampling for environmental monitoring
NASA Astrophysics Data System (ADS)
Pardo-Igúzquiza, Eulogio; Dowd, Peter A.
2005-02-01
A probabilistic solution to the problem of spatial interpolation of a variable at an unsampled location consists of estimating the local cumulative distribution function (cdf) of the variable at that location from values measured at neighbouring locations. As this distribution is conditional on the data available at neighbouring locations it incorporates the uncertainty of the value of the variable at the unsampled location. Geostatistics provides a non-parametric solution to such problems via the various forms of indicator kriging. In a least squares sense indicator cokriging is theoretically the best estimator but in practice its use has been inhibited by problems such as an increased number of violations of order relations constraints when compared with simpler forms of indicator kriging. In this paper, we describe a methodology and an accompanying computer program for estimating a vector of indicators by simple indicator cokriging, i.e. simultaneous estimation of the cdf for K different thresholds, {F(u, z_k), k = 1, …, K}, by solving a unique cokriging system for each location at which an estimate is required. This approach produces a variance-covariance matrix of the estimated vector of indicators which is used to fit a model to the estimated local cdf by logistic regression. This model is used to correct any violations of order relations and automatically ensures that all order relations are satisfied, i.e. the estimated cumulative distribution function F̂(u, z_k) is such that F̂(u, z_k) ∈ [0, 1] for all z_k, and F̂(u, z_k) ≤ F̂(u, z_k') whenever z_k < z_k'.
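The order-relation correction step has a minimal standard form: clip the estimated cdf values to [0, 1], then enforce monotonicity across increasing thresholds. The paper fits a logistic-regression model instead, so the sketch below is the basic fallback rather than the authors' procedure:

```python
import numpy as np

def correct_order_relations(F_hat):
    """Enforce valid cdf estimates across increasing thresholds:
    clip to [0, 1], then make non-decreasing via a running maximum."""
    F = np.clip(np.asarray(F_hat, dtype=float), 0.0, 1.0)
    return np.maximum.accumulate(F)

print(correct_order_relations([0.12, 0.08, 0.45, 1.03, 0.97]))
# -> [0.12 0.12 0.45 1.   1.  ]
```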
Comparison between bottom-up and top-down approaches in the estimation of measurement uncertainty.
Lee, Jun Hyung; Choi, Jee-Hye; Youn, Jae Saeng; Cha, Young Joo; Song, Woonheung; Park, Ae Ja
2015-06-01
Measurement uncertainty is a metrological concept to quantify the variability of measurement results. There are two approaches to estimate measurement uncertainty. In this study, we sought to provide practical and detailed examples of the two approaches and compare the bottom-up and top-down approaches to estimating measurement uncertainty. We estimated measurement uncertainty of the concentration of glucose according to CLSI EP29-A guideline. Two different approaches were used. First, we performed a bottom-up approach. We identified the sources of uncertainty and made an uncertainty budget and assessed the measurement functions. We determined the uncertainties of each element and combined them. Second, we performed a top-down approach using internal quality control (IQC) data for 6 months. Then, we estimated and corrected systematic bias using certified reference material of glucose (NIST SRM 965b). The expanded uncertainties at the low glucose concentration (5.57 mmol/L) by the bottom-up approach and top-down approaches were ±0.18 mmol/L and ±0.17 mmol/L, respectively (all k=2). Those at the high glucose concentration (12.77 mmol/L) by the bottom-up and top-down approaches were ±0.34 mmol/L and ±0.36 mmol/L, respectively (all k=2). We presented practical and detailed examples for estimating measurement uncertainty by the two approaches. The uncertainties by the bottom-up approach were quite similar to those by the top-down approach. Thus, we demonstrated that the two approaches were approximately equivalent and interchangeable and concluded that clinical laboratories could determine measurement uncertainty by the simpler top-down approach.
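The bottom-up route rests on the GUM law of propagation of uncertainty; for a measurand y = f(x_1, …, x_n) with independent inputs it reads:

```latex
u_c^2(y) \;=\; \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^2(x_i),
\qquad
U = k\, u_c(y) \quad (k = 2 \ \text{for} \approx 95\% \ \text{coverage})
```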
NASA Astrophysics Data System (ADS)
Ono, T.; Takahashi, T.
2017-12-01
Non-structural mitigation measures such as flood hazard maps based on estimated inundation areas have become more important because heavy rains exceeding the design rainfall have occurred frequently in recent years. However, the conventional method may underestimate the inundation area because the assumed locations of dike breach in river flood analysis are limited to cases where the water level exceeds the high-water level. The objective of this study is to assess the uncertainty in the estimated inundation area arising from the assumed location of dike breach in river flood analysis. This study proposes a multiple-flood-scenario method that automatically sets multiple dike breach locations in the river flood analysis; its premise is that the location of dike breach cannot be predicted correctly in advance. The proposed method uses a breach interval, the distance between neighbouring dike breaches: multiple breach locations are placed at every interval along the dike. The 2D shallow water equations were adopted as the governing equations of the river flood analysis, solved with a leap-frog scheme on a staggered grid. The river flood analysis was verified against the 2015 Kinugawa river flooding, and the proposed multiple flood scenarios were applied to the Akutagawa river in Takatsuki city. The computations for the Akutagawa river showed, by comparing the maximum inundation depths computed for neighbouring breach locations, that the proposed method prevents underestimation of the estimated inundation area. Analyses of the spatial distribution of inundation class and of the maximum inundation depth at each measurement point also identified the optimum breach interval, which reproduces the maximum inundation area with the minimum number of assumed breach locations. In brief, this study found the optimum interval of dike breach in the Akutagawa river, enabling the maximum inundation area to be estimated efficiently and accurately. River flood analysis using the proposed method will contribute to mitigating flood disasters by improving the accuracy of estimated inundation areas.
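As a concrete picture of the numerical core, here is a toy 1D linearized analogue of the leap-frog, staggered-grid shallow-water scheme the study employs in 2D; the depth, grid, and initial condition are invented for illustration:

```python
import numpy as np

# Toy 1D linearized shallow-water solver: leap-frog time stepping on a
# staggered grid (eta at cell centers, u at cell faces).
g, H = 9.81, 2.0                # gravity (m/s^2), still-water depth (m)
nx, dx = 200, 10.0              # number of cells, cell size (m)
dt = 0.5 * dx / np.sqrt(g * H)  # time step satisfying the CFL condition

x = np.arange(nx) * dx
eta = np.exp(-((x - 1000.0) / 50.0) ** 2)  # initial surface hump (m)
u = np.zeros(nx + 1)                       # face velocities (m/s); closed ends

for _ in range(300):
    # momentum update from the surface slope (interior faces only)
    u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])
    # continuity update from the velocity divergence
    eta -= H * dt / dx * (u[1:] - u[:-1])

print(float(eta.max()))  # peak surface elevation after propagation
```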
Conclusions on measurement uncertainty in microbiology.
Forster, Lynne I
2009-01-01
Since its first issue in 1999, testing laboratories wishing to comply with all the requirements of ISO/IEC 17025 have been collecting data for estimating uncertainty of measurement for quantitative determinations. In the microbiological field of testing, some debate has arisen as to whether uncertainty needs to be estimated for each method performed in the laboratory for each type of sample matrix tested. Queries also arise concerning the estimation of uncertainty when plate/membrane filter colony counts are below recommended method counting range limits. A selection of water samples (with low to high contamination) was tested in replicate with the associated uncertainty of measurement being estimated from the analytical results obtained. The analyses performed on the water samples included total coliforms, fecal coliforms, fecal streptococci by membrane filtration, and heterotrophic plate counts by the pour plate technique. For those samples where plate/membrane filter colony counts were ≥20, uncertainty estimates at a 95% confidence level were very similar for the methods, being estimated as 0.13, 0.14, 0.14, and 0.12, respectively. For those samples where plate/membrane filter colony counts were <20, estimated uncertainty values for each sample showed close agreement with published confidence limits established using a Poisson distribution approach.
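For counts near or below the recommended counting range, the Poisson component alone gives a useful floor on the achievable uncertainty. The relation below, on the log10 scale commonly used for reporting counts, is a textbook approximation rather than the paper's published limits:

```latex
u(\log_{10} N) \;\approx\; \frac{1}{\ln 10\,\sqrt{N}},
\qquad
U_{95\%} \;\approx\; \frac{2}{\ln 10\,\sqrt{N}}
\quad\text{(e.g. } N = 20 \;\Rightarrow\; U_{95\%} \approx 0.19\text{)}
```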
NASA Astrophysics Data System (ADS)
West, A. C.; Novakowski, K. S.
2005-12-01
Regional groundwater flow models are rife with uncertainty. The three-dimensional flux vector fields must generally be inferred using inverse modelling from sparse measurements of hydraulic head, from measurements of hydraulic parameters at a scale that is miniscule in comparison to that of the domain, and from none to a very few measurements of recharge or discharge rate. Despite the inherent uncertainty in these models they are routinely used to delineate steady-state or time-of-travel capture zones for the purpose of wellhead protection. The latter are defined as the volume of the aquifer within which released particles will arrive at the well within the specified time and their delineation requires the additional step of dividing the magnitudes of the flux vectors by the assumed porosity to arrive at the "average linear groundwater velocity" vector field. Since the porosity is usually assumed constant over the domain one could be forgiven for thinking that the uncertainty introduced at this step is minor in comparison to the flow model calibration step. We consider this question when the porosity in question is fracture porosity in flat-lying sedimentary bedrock. We also consider whether or not the diffusive uptake of solute into the rock matrix which lies between the source and the production well reduces or enhances the uncertainty. To evaluate the uncertainty an aquifer cross section is conceptualized as an array of horizontal, randomly-spaced, parallel-plate fractures of random aperture, with adjacent horizontal fractures connected by vertical fractures again of random spacing and aperture. The source is assumed to be a continuous concentration (i.e., a Dirichlet boundary condition) representing a leaking tank or a DNAPL pool, and the receptor is a fully penetrating well located in the down-gradient direction. In this context the time-of-travel capture zone is defined as the separation distance required such that the source does not contaminate the well beyond a threshold concentration within the specified time. Aquifers are simulated by drawing the random spacings and apertures from specified distributions. Predictions are made of capture zone size assuming various degrees of knowledge of these distributions, with the parameters of the horizontal fractures being estimated using simulated hydraulic tests and a maximum likelihood estimator. The uncertainty is evaluated by calculating the variance in the capture zone size estimated in multiple realizations. The results show that despite good strategies to estimate the parameters of the horizontal fractures the uncertainty in capture zone size is enormous, mostly due to the lack of available information on vertical fractures. Also, at realistic distances (less than ten kilometers) and using realistic transmissivity distributions for the horizontal fractures the uptake of solute from fractures into matrix cannot be relied upon to protect the production well from contamination.
Devenish Nelson, Eleanor S.; Harris, Stephen; Soulsbury, Carl D.; Richards, Shane A.; Stephens, Philip A.
2010-01-01
Background Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. Methodology/Principal Findings We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. Conclusions/Significance Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species. PMID:21049049
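The sample-size result follows directly from the usual scaling of standard errors with the square root of sample size; a one-line restatement (generic, not specific to the fox data):

```latex
\mathrm{SE}(\hat{\lambda}) \;\propto\; \frac{1}{\sqrt{n}}
\quad\Longrightarrow\quad
\frac{\mathrm{SE}_{\,4n}}{\mathrm{SE}_{\,n}} \;=\; \frac{1}{\sqrt{4}} \;=\; \frac{1}{2}
```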
Quantifying uncertainty in discharge measurements: A new approach
Kiang, J.E.; Cohn, T.A.; Mason, R.R.
2009-01-01
The accuracy of discharge measurements using velocity meters and the velocity-area method is typically assessed based on empirical studies that may not correspond to conditions encountered in practice. In this paper, a statistical approach for assessing uncertainty based on interpolated variance estimation (IVE) is introduced. The IVE method quantifies all sources of random uncertainty in the measured data. This paper presents results employing data from sites where substantial over-sampling allowed for the comparison of IVE-estimated uncertainty and observed variability among repeated measurements. These results suggest that the IVE approach can provide approximate estimates of measurement uncertainty. The use of IVE to estimate the uncertainty of a discharge measurement would provide the hydrographer an immediate determination of uncertainty and help determine whether there is a need for additional sampling in problematic river cross sections. © 2009 ASCE.
Yang, M; Zhu, X R; Park, PC; Titt, Uwe; Mohan, R; Virshup, G; Clayton, J; Dong, L
2012-01-01
The purpose of this study was to analyze factors affecting proton stopping-power-ratio (SPR) estimations and range uncertainties in proton therapy planning using the standard stoichiometric calibration. The SPR uncertainties were grouped into five categories according to their origins and then estimated based on previously published reports or measurements. For the first time, the impact of tissue composition variations on SPR estimation was assessed and the uncertainty estimates of each category were determined for low-density (lung), soft, and high-density (bone) tissues. A composite, 95th percentile water-equivalent-thickness uncertainty was calculated from multiple beam directions in 15 patients with various types of cancer undergoing proton therapy. The SPR uncertainties (1σ) were quite different (ranging from 1.6% to 5.0%) in different tissue groups, although the final combined uncertainty (95th percentile) for different treatment sites was fairly consistent at 3.0–3.4%, primarily because soft tissue is the dominant tissue type in human body. The dominant contributing factor for uncertainties in soft tissues was the degeneracy of Hounsfield Numbers in the presence of tissue composition variations. To reduce the overall uncertainties in SPR estimation, the use of dual-energy computed tomography is suggested. The values recommended in this study based on typical treatment sites and a small group of patients roughly agree with the commonly referenced value (3.5%) used for margin design. By using tissue-specific range uncertainties, one could estimate the beam-specific range margin by accounting for different types and amounts of tissues along a beam, which may allow for customization of range uncertainty for each beam direction. PMID:22678123
SUB-PIXEL RAINFALL VARIABILITY AND THE IMPLICATIONS FOR UNCERTAINTIES IN RADAR RAINFALL ESTIMATES
Radar estimates of rainfall are subject to significant measurement uncertainty. Typically, uncertainties are measured by the discrepancies between real rainfall estimates based on radar reflectivity and point rainfall records of rain gauges. This study investigates how the disc...
Probable Maximum Precipitation in the U.S. Pacific Northwest in a Changing Climate
NASA Astrophysics Data System (ADS)
Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby
2017-11-01
The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate, and society. In current engineering practice, such safety is ensured by the design of infrastructure for the Probable Maximum Precipitation (PMP). Recently, several numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics have not been fully investigated and thus differing PMP estimates are sometimes obtained without physics-based interpretations. In this study, we present a hybrid approach that takes advantage of both traditional engineering practice and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is modified and applied to five statistically downscaled CMIP5 model outputs, producing an ensemble of PMP estimates in the Pacific Northwest (PNW) during the historical (1970-2016) and future (2050-2099) time periods. The hybrid approach produced consistent historical PMP estimates as the traditional estimates. PMP in the PNW will increase by 50% ± 30% of the current design PMP by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability through increased sea surface temperature, with minor contributions from changes in storm efficiency in the future. Moist track change tends to reduce the future PMP. Compared with extreme precipitation, PMP exhibits higher internal variability. Thus, long-time records of high-quality data in both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
Probable Maximum Precipitation in the U.S. Pacific Northwest in a Changing Climate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiaodong; Hossain, Faisal; Leung, Lai-Yung
2017-12-22
The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate and society. In current engineering practice, such safety is ensured by the design of infrastructure for the Probable Maximum Precipitation (PMP). Recently, several physics-based numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics has not been investigated and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering wisdom and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to outputs from an ensemble of five CMIP5 models. This hybrid approach is applied in the Pacific Northwest (PNW) to produce ensemble PMP estimation for the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified by comparing them with the traditional estimates. PMP in the PNW will increase by 50% of the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability, with minor contributions from changes in storm efficiency in the future. Moist track change tends to reduce the future PMP. Compared with extreme precipitation, ensemble PMP exhibits higher internal variation. Thus high-quality data of both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pursley, Jennifer; Risholm, Petter; Fedorov, Andriy
2012-11-15
Purpose: This study introduces a probabilistic nonrigid registration method for use in image-guided prostate brachytherapy. Intraoperative imaging for prostate procedures, usually transrectal ultrasound (TRUS), is typically inferior to diagnostic-quality imaging of the pelvis such as endorectal magnetic resonance imaging (MRI). MR images contain superior detail of the prostate boundaries and provide substructure features not otherwise visible. Previous efforts to register diagnostic prostate images with the intraoperative coordinate system have been deterministic and did not offer a measure of the registration uncertainty. The authors developed a Bayesian registration method to estimate the posterior distribution on deformations and provide a case-specific measure of the associated registration uncertainty. Methods: The authors adapted a biomechanical-based probabilistic nonrigid method to register diagnostic to intraoperative images by aligning a physician's segmentations of the prostate in the two images. The posterior distribution was characterized with a Markov Chain Monte Carlo method; the maximum a posteriori deformation and the associated uncertainty were estimated from the collection of deformation samples drawn from the posterior distribution. The authors validated the registration method using a dataset created from ten patients with MRI-guided prostate biopsies who had both diagnostic and intraprocedural 3 Tesla MRI scans. The accuracy and precision of the estimated posterior distribution on deformations were evaluated from two predictive distance distributions: between the deformed central zone-peripheral zone (CZ-PZ) interface and the physician-labeled interface, and based on physician-defined landmarks. Geometric margins on the registration of the prostate's peripheral zone were determined from the posterior predictive distance to the CZ-PZ interface separately for the base, mid-gland, and apical regions of the prostate. Results: The authors observed variation in the shape and volume of the segmented prostate in diagnostic and intraprocedural images. The probabilistic method allowed us to convey registration results in terms of posterior distributions, with the dispersion providing a patient-specific estimate of the registration uncertainty. The median of the predictive distance distribution between the deformed prostate boundary and the segmented boundary was ≤3 mm (95th percentiles within ±4 mm) for all ten patients. The accuracy and precision of the internal deformation was evaluated by comparing the posterior predictive distance distribution for the CZ-PZ interface for each patient, with the median distance ranging from -0.6 to 2.4 mm. Posterior predictive distances between naturally occurring landmarks showed registration errors of ≤5 mm in any direction. The uncertainty was not a global measure, but instead was local and varied throughout the registration region. Registration uncertainties were largest in the apical region of the prostate. Conclusions: Using a Bayesian nonrigid registration method, the authors determined the posterior distribution on deformations between diagnostic and intraprocedural MR images and quantified the uncertainty in the registration results. The feasibility of this approach was tested and results were positive. The probabilistic framework allows us to evaluate both patient-specific and location-specific estimates of the uncertainty in the registration result.
Although the framework was tested on MR-guided procedures, the preliminary results suggest that it may be applied to TRUS-guided procedures as well, where the addition of diagnostic MR information may have a larger impact on target definition and clinical guidance.
Pursley, Jennifer; Risholm, Petter; Fedorov, Andriy; Tuncali, Kemal; Fennessy, Fiona M.; Wells, William M.; Tempany, Clare M.; Cormack, Robert A.
2012-01-01
Purpose: This study introduces a probabilistic nonrigid registration method for use in image-guided prostate brachytherapy. Intraoperative imaging for prostate procedures, usually transrectal ultrasound (TRUS), is typically inferior to diagnostic-quality imaging of the pelvis such as endorectal magnetic resonance imaging (MRI). MR images contain superior detail of the prostate boundaries and provide substructure features not otherwise visible. Previous efforts to register diagnostic prostate images with the intraoperative coordinate system have been deterministic and did not offer a measure of the registration uncertainty. The authors developed a Bayesian registration method to estimate the posterior distribution on deformations and provide a case-specific measure of the associated registration uncertainty. Methods: The authors adapted a biomechanical-based probabilistic nonrigid method to register diagnostic to intraoperative images by aligning a physician's segmentations of the prostate in the two images. The posterior distribution was characterized with a Markov Chain Monte Carlo method; the maximum a posteriori deformation and the associated uncertainty were estimated from the collection of deformation samples drawn from the posterior distribution. The authors validated the registration method using a dataset created from ten patients with MRI-guided prostate biopsies who had both diagnostic and intraprocedural 3 Tesla MRI scans. The accuracy and precision of the estimated posterior distribution on deformations were evaluated from two predictive distance distributions: between the deformed central zone-peripheral zone (CZ-PZ) interface and the physician-labeled interface, and based on physician-defined landmarks. Geometric margins on the registration of the prostate's peripheral zone were determined from the posterior predictive distance to the CZ-PZ interface separately for the base, mid-gland, and apical regions of the prostate. Results: The authors observed variation in the shape and volume of the segmented prostate in diagnostic and intraprocedural images. The probabilistic method allowed us to convey registration results in terms of posterior distributions, with the dispersion providing a patient-specific estimate of the registration uncertainty. The median of the predictive distance distribution between the deformed prostate boundary and the segmented boundary was ⩽3 mm (95th percentiles within ±4 mm) for all ten patients. The accuracy and precision of the internal deformation was evaluated by comparing the posterior predictive distance distribution for the CZ-PZ interface for each patient, with the median distance ranging from −0.6 to 2.4 mm. Posterior predictive distances between naturally occurring landmarks showed registration errors of ⩽5 mm in any direction. The uncertainty was not a global measure, but instead was local and varied throughout the registration region. Registration uncertainties were largest in the apical region of the prostate. Conclusions: Using a Bayesian nonrigid registration method, the authors determined the posterior distribution on deformations between diagnostic and intraprocedural MR images and quantified the uncertainty in the registration results. The feasibility of this approach was tested and results were positive. The probabilistic framework allows us to evaluate both patient-specific and location-specific estimates of the uncertainty in the registration result. 
Although the framework was tested on MR-guided procedures, the preliminary results suggest that it may be applied to TRUS-guided procedures as well, where the addition of diagnostic MR information may have a larger impact on target definition and clinical guidance. PMID:23127078
RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy
NASA Astrophysics Data System (ADS)
Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.
2016-02-01
We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, and the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance against two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.
Rodomonte, Andrea Luca; Montinaro, Annalisa; Bartolomei, Monica
2006-09-11
A measurement result cannot be properly interpreted if not accompanied by its uncertainty. Several methods to estimate uncertainty have been developed. From those methods three in particular were chosen in this work to estimate the uncertainty of the Eu. Ph. chloroquine phosphate assay, a potentiometric titration commonly used in medicinal control laboratories. The famous error-budget approach (also called bottom-up or step-by-step) described by the ISO Guide to the expression of Uncertainty in Measurement (GUM) was the first method chosen. It is based on the combination of uncertainty contributions that have to be directly derived from the measurement process. The second method employed was the Analytical Method Committee top-down which estimates uncertainty through reproducibility obtained during inter-laboratory studies. Data for its application were collected in a proficiency testing study carried out by over 50 laboratories throughout Europe. The last method chosen was the one proposed by Barwick and Ellison. It uses a combination of precision, trueness and ruggedness data to estimate uncertainty. These data were collected from a validation process specifically designed for uncertainty estimation. All the three approaches presented a distinctive set of advantages and drawbacks in their implementation. An expanded uncertainty of about 1% was assessed for the assay investigated.
McCann, Jamie; Stuessy, Tod F.; Villaseñor, Jose L.; Weiss-Schneeweiss, Hanna
2016-01-01
Chromosome number change (polyploidy and dysploidy) plays an important role in plant diversification and speciation. Investigating chromosome number evolution commonly entails ancestral state reconstruction performed within a phylogenetic framework, which is, however, prone to uncertainty, whose effects on evolutionary inferences are insufficiently understood. Using the chromosomally diverse plant genus Melampodium (Asteraceae) as model group, we assess the impact of reconstruction method (maximum parsimony, maximum likelihood, Bayesian methods), branch length model (phylograms versus chronograms) and phylogenetic uncertainty (topological and branch length uncertainty) on the inference of chromosome number evolution. We also address the suitability of the maximum clade credibility (MCC) tree as single representative topology for chromosome number reconstruction. Each of the listed factors causes considerable incongruence among chromosome number reconstructions. Discrepancies between inferences on the MCC tree from those made by integrating over a set of trees are moderate for ancestral chromosome numbers, but severe for the difference of chromosome gains and losses, a measure of the directionality of dysploidy. Therefore, reliance on single trees, such as the MCC tree, is strongly discouraged and model averaging, taking both phylogenetic and model uncertainty into account, is recommended. For studying chromosome number evolution, dedicated models implemented in the program ChromEvol and ordered maximum parsimony may be most appropriate. Chromosome number evolution in Melampodium follows a pattern of bidirectional dysploidy (starting from x = 11 to x = 9 and x = 14, respectively) with no prevailing direction. PMID:27611687
McCann, Jamie; Schneeweiss, Gerald M; Stuessy, Tod F; Villaseñor, Jose L; Weiss-Schneeweiss, Hanna
2016-01-01
Chromosome number change (polyploidy and dysploidy) plays an important role in plant diversification and speciation. Investigating chromosome number evolution commonly entails ancestral state reconstruction performed within a phylogenetic framework, which is, however, prone to uncertainty, whose effects on evolutionary inferences are insufficiently understood. Using the chromosomally diverse plant genus Melampodium (Asteraceae) as model group, we assess the impact of reconstruction method (maximum parsimony, maximum likelihood, Bayesian methods), branch length model (phylograms versus chronograms) and phylogenetic uncertainty (topological and branch length uncertainty) on the inference of chromosome number evolution. We also address the suitability of the maximum clade credibility (MCC) tree as single representative topology for chromosome number reconstruction. Each of the listed factors causes considerable incongruence among chromosome number reconstructions. Discrepancies between inferences on the MCC tree from those made by integrating over a set of trees are moderate for ancestral chromosome numbers, but severe for the difference of chromosome gains and losses, a measure of the directionality of dysploidy. Therefore, reliance on single trees, such as the MCC tree, is strongly discouraged and model averaging, taking both phylogenetic and model uncertainty into account, is recommended. For studying chromosome number evolution, dedicated models implemented in the program ChromEvol and ordered maximum parsimony may be most appropriate. Chromosome number evolution in Melampodium follows a pattern of bidirectional dysploidy (starting from x = 11 to x = 9 and x = 14, respectively) with no prevailing direction.
NASA Astrophysics Data System (ADS)
Dittes, Beatrice; Špačková, Olga; Ebrahimian, Negin; Kaiser, Maria; Rieger, Wolfgang; Disse, Markus; Straub, Daniel
2017-04-01
Flood risk estimates are subject to significant uncertainties, e.g. due to limited records of historic flood events, uncertainty in flood modeling, uncertain impact of climate change or uncertainty in the exposure and loss estimates. In traditional design of flood protection systems, these uncertainties are typically just accounted for implicitly, based on engineering judgment. In the AdaptRisk project, we develop a fully quantitative framework for planning of flood protection systems under current and future uncertainties using quantitative pre-posterior Bayesian decision analysis. In this contribution, we focus on the quantification of the uncertainties and study their relative influence on the flood risk estimate and on the planning of flood protection systems. The following uncertainty components are included using a Bayesian approach: 1) inherent and statistical (i.e. limited record length) uncertainty; 2) climate uncertainty that can be learned from an ensemble of GCM-RCM models; 3) estimates of climate uncertainty components not covered in 2), such as bias correction, incomplete ensemble, local specifics not captured by the GCM-RCM models; 4) uncertainty in the inundation modelling; 5) uncertainty in damage estimation. We also investigate how these uncertainties are possibly reduced in the future when new evidence - such as new climate models, observed extreme events, and socio-economic data - becomes available. Finally, we look into how this new evidence influences the risk assessment and effectiveness of flood protection systems. We demonstrate our methodology for a pre-alpine catchment in southern Germany: the Mangfall catchment in Bavaria that includes the city of Rosenheim, which suffered significant losses during the 2013 flood event.
The impact of land use on estimates of pesticide leaching potential: Assessments and uncertainties
NASA Astrophysics Data System (ADS)
Loague, Keith
1991-11-01
This paper illustrates the magnitude of uncertainty which can exist for pesticide leaching assessments, due to data uncertainties, both between soil orders and within a single soil order. The current work differs from previous efforts because the impact of uncertainty in recharge estimates is considered. The examples are for diuron leaching in the Pearl Harbor Basin. The results clearly indicate that land use has a significant impact on both estimates of pesticide leaching potential and the uncertainties associated with those estimates. It appears that the regulation of agricultural chemicals in the future should include consideration for changing land use.
An Extensive Unified Thermo-Electric Module Characterization Method
Attivissimo, Filippo; Guarnieri Calò Carducci, Carlo; Lanzolla, Anna Maria Lucia; Spadavecchia, Maurizio
2016-01-01
Thermo-Electric Modules (TEMs) are being increasingly used in power generation as a valid alternative to batteries, providing autonomy to sensor nodes or entire Wireless Sensor Networks, especially for energy harvesting applications. Manufacturers often provide some essential parameters under specified conditions, such as the maximum temperature difference between the surfaces of the TEM or the maximum heat absorption, but in many cases a TEM-based system operates under the best conditions only for a fraction of the time; when dynamic working conditions occur, estimating the performance of TEMs is crucial to determine their actual efficiency. The focus of this work is a novel procedure to estimate the parameters of both the electrical and the thermal equivalent model and to investigate their relationship with the operating temperature and the temperature gradient. The novelty of the method consists in the use of a simple test configuration to stimulate the modules and simultaneously acquire electrical and thermal data, obtaining all parameters in a single test. Two different current profiles are proposed as possible stimuli, whose use depends on the available test instrumentation, and their relative performance is compared both quantitatively and qualitatively, in terms of standard deviation and estimation uncertainty. The obtained results, besides agreeing with both the technical literature and a further estimation method based on module specifications, also provide the designer a detailed description of the module behavior, useful for simulating its performance in different scenarios. PMID:27983575
Treatment planning for prostate focal laser ablation in the face of needle placement uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cepek, Jeremy, E-mail: jcepek@robarts.ca; Fenster, Aaron; Lindner, Uri
2014-01-15
Purpose: To study the effect of needle placement uncertainty on the expected probability of achieving complete focal target destruction in focal laser ablation (FLA) of prostate cancer. Methods: Using a simplified model of prostate cancer focal target and focal laser ablation region shapes, Monte Carlo simulations of needle placement error were performed to estimate the probability of completely ablating a region of target tissue. Results: Graphs of the probability of complete focal target ablation are presented over clinically relevant ranges of focal target sizes and shapes, ablation region sizes, and levels of needle placement uncertainty. In addition, a table is provided for estimating the maximum target size that is treatable. The results predict that targets whose length is at least 5 mm smaller than the diameter of each ablation region can be confidently ablated using, at most, four laser fibers if the standard deviation in each component of needle placement error is less than 3 mm. However, targets larger than this (i.e., near to or exceeding the diameter of each ablation region) require more careful planning. This process is facilitated by using the table provided. Conclusions: The probability of completely ablating a focal target using FLA is sensitive to the level of needle placement uncertainty, especially as the target length approaches and exceeds the diameter of ablated tissue that each individual laser fiber can achieve. The results of this work can be used to help determine individual patient eligibility for prostate FLA, to guide the planning of prostate FLA, and to quantify the clinical benefit of using advanced systems for accurate needle delivery for this treatment modality.
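The Monte Carlo scheme described above can be sketched in a few lines: sample Gaussian needle placement errors, then check whether the union of (here, spherical) ablation regions still covers the target. All geometry and numbers below are illustrative assumptions, not the paper's patient models.

```python
import numpy as np

rng = np.random.default_rng(42)

def p_complete_ablation(target_radius_mm, ablation_radius_mm,
                        fiber_positions_mm, sigma_mm, n_trials=5_000):
    # Sample test points uniformly inside the spherical target.
    u = rng.normal(size=(500, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    pts = target_radius_mm * rng.uniform(size=(500, 1)) ** (1 / 3) * u

    successes = 0
    for _ in range(n_trials):
        # Perturb each planned fiber position by Gaussian placement error.
        fibers = fiber_positions_mm + rng.normal(
            scale=sigma_mm, size=fiber_positions_mm.shape)
        # The target is fully ablated if every test point lies within the
        # ablation radius of at least one fiber.
        d = np.linalg.norm(pts[:, None, :] - fibers[None, :, :], axis=2)
        if d.min(axis=1).max() <= ablation_radius_mm:
            successes += 1
    return successes / n_trials

# Four fibers in a square pattern; 3 mm placement sigma (the paper's threshold).
fibers = np.array([[3., 3., 0.], [3., -3., 0.], [-3., 3., 0.], [-3., -3., 0.]])
print(p_complete_ablation(6.0, 8.5, fibers, sigma_mm=3.0))
```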
Consistency of Estimated Global Water Cycle Variations Over the Satellite Era
NASA Technical Reports Server (NTRS)
Robertson, F. R.; Bosilovich, M. G.; Roberts, J. B.; Reichle, R. H.; Adler, R.; Ricciardulli, L.; Berg, W.; Huffman, G. J.
2013-01-01
Motivated by the question of whether recent indications of decadal climate variability and a possible "climate shift" may have affected the global water balance, we examine evaporation minus precipitation (E-P) variability integrated over the global oceans and global land from three points of view: remotely sensed retrievals and objective analyses over the oceans, reanalysis vertically-integrated moisture convergence (MFC) over land, and land surface models forced with observation-based precipitation, radiation and near-surface meteorology. Because monthly variations in area-averaged atmospheric moisture storage are small and the global integral of moisture convergence must approach zero, area-integrated E-P over ocean should essentially equal precipitation minus evapotranspiration (P-ET) over land (after adjusting for ocean and land areas). Our analysis reveals considerable uncertainty in the decadal variations of ocean evaporation when integrated to global scales, due to differences among datasets in 10 m wind speed and near-surface atmospheric specific humidity (2 m qa) used in bulk aerodynamic retrievals. Precipitation variations, all relying substantially on passive microwave retrievals over ocean, also have uncertainties in decadal variability, but not to the degree present in ocean evaporation estimates. Reanalysis MFC and P-ET over land from several observationally forced diagnostic and land surface models agree best on interannual variations. However, upward reanalysis trends in MFC (i.e. P-ET) are likely related in part to observing-system changes affecting atmospheric assimilation models. While some evidence for a low-frequency E-P maximum near 2000 is found, consistent with a recent apparent pause in sea-surface temperature (SST) rise, uncertainties in the datasets used here remain significant. Prospects for further reducing these uncertainties are discussed, and the results are interpreted in the context of recent climate variability (Pacific Decadal Oscillation, Atlantic Meridional Overturning) and of efforts to distinguish these modes from longer-term trends.
Sorzano, Carlos Oscars S; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar
2015-06-01
Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. In some cases, prior knowledge exists about the distribution of the system parameters in a population. However, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is often performed by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters with a fixed number of samples (cost constrained). We use Monte Carlo simulations to estimate the average Fisher information matrix associated with the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated with the system parameters (a minimax criterion). The minimization is performed with a genetic algorithm. We show that such a sampling scheme can be designed in a way that is adapted to a particular patient, that it can accommodate any dosing regimen, and that it allows flexible therapeutic strategies. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
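A minimal sketch of this design loop, under an assumed one-compartment PK model and a lognormal population prior: the Fisher information matrix is averaged by Monte Carlo, and the worst-case parameter variance is minimized over the sampling times. SciPy's differential_evolution (an evolutionary optimizer) stands in for the paper's genetic algorithm; all model choices and numbers are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

def conc(t, ka, ke, V, dose=100.0):
    # One-compartment oral-absorption model (a stand-in PK system).
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def worst_uncertainty(times, n_mc=200, sigma=0.1):
    """Average the Fisher information matrix over parameters drawn from the
    population prior, then return the largest parameter variance from its
    inverse; the minimax design minimizes this quantity."""
    fim = np.zeros((3, 3))
    thetas = np.column_stack([rng.lognormal(np.log(1.0), 0.3, n_mc),   # ka
                              rng.lognormal(np.log(0.1), 0.3, n_mc),   # ke
                              rng.lognormal(np.log(20.), 0.3, n_mc)])  # V
    for ka, ke, V in thetas:
        # Sensitivities of the concentration curve by finite differences.
        J = np.empty((len(times), 3))
        base = conc(times, ka, ke, V)
        for j, dp in enumerate([(1e-5, 0, 0), (0, 1e-6, 0), (0, 0, 1e-4)]):
            J[:, j] = conc(times, ka + dp[0], ke + dp[1], V + dp[2]) - base
            J[:, j] /= max(dp)
        fim += J.T @ J / sigma**2
    fim /= n_mc
    return np.diag(np.linalg.inv(fim + 1e-9 * np.eye(3))).max()

# Optimize four blood-sampling times within a 24 h window.
res = differential_evolution(lambda t: worst_uncertainty(np.sort(t)),
                             bounds=[(0.1, 24.0)] * 4, seed=1, maxiter=20)
print(np.sort(res.x), res.fun)
```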
NASA Technical Reports Server (NTRS)
Hyland, D. C.; Bernstein, D. S.
1987-01-01
The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced-order control design methodology for high-order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation, including the effect of parameter uncertainties, are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.
Cooley, Richard L.
1993-01-01
A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.
An analysis of interplanetary space radiation exposure for various solar cycles
NASA Technical Reports Server (NTRS)
Badhwar, G. D.; Cucinotta, F. A.; O'Neill, P. M.; Wilson, J. W. (Principal Investigator)
1994-01-01
The radiation dose received by crew members in interplanetary space is influenced by the stage of the solar cycle. Using the recently developed models of the galactic cosmic radiation (GCR) environment and the energy-dependent radiation transport code, we have calculated the dose at 0 and 5 cm water depth; using a computerized anatomical man (CAM) model, we have calculated the skin, eye and blood-forming organ (BFO) doses as a function of aluminum shielding for various solar minima and maxima between 1954 and 1989. These results show that the equivalent dose is within about 15% of the mean for the various solar minima (maxima). The maximum variation between solar minimum and maximum equivalent dose is about a factor of three. We have extended these calculations for the 1976-1977 solar minimum to five practical shielding geometries: Apollo Command Module, the least and most heavily shielded locations in the U.S. space shuttle mid-deck, center of the proposed Space Station Freedom cluster and sleeping compartment of the Skylab. These calculations, using the quality factor of ICRP 60, show that the average CAM BFO equivalent dose is 0.46 Sv/year. Based on an approach that takes fragmentation into account, we estimate a calculation uncertainty of 15% if the uncertainty in the quality factor is neglected.
Pore fluids and the LGM ocean salinity-Reconsidered
NASA Astrophysics Data System (ADS)
Wunsch, Carl
2016-03-01
Pore fluid chlorinity/salinity data from deep-sea cores related to the salinity maximum of the last glacial maximum (LGM) are analyzed using estimation methods deriving from linear control theory. With conventional diffusion coefficient values and no vertical advection, results show a very strong dependence upon initial conditions at -100 ky. Earlier inferences that the abyssal Southern Ocean was strongly salt-stratified in the LGM with a relatively fresh North Atlantic Ocean are found to be consistent within uncertainties of the salinity determination, which remain of order ±1 g/kg. However, an LGM Southern Ocean abyss with an important relative excess of salt is an assumption, one not required by existing core data. None of the present results show statistically significant abyssal salinity values above the global average, and results remain consistent, apart from a general increase owing to diminished sea level, with a more conventional salinity distribution having deep values lower than the global mean. The Southern Ocean core does show a higher salinity than the North Atlantic one on the Bermuda Rise at different water depths. Although much more sophisticated models of the pore-fluid salinity can be used, they will only increase the resulting uncertainties, unless considerably more data can be obtained. Results are consistent with complex regional variations in abyssal salinity during deglaciation, but none are statistically significant.
Grunewald, E.D.; Stein, R.S.
2006-01-01
In order to assess the long-term character of seismicity near Tokyo, we construct an intensity-based catalog of damaging earthquakes that struck the greater Tokyo area between 1649 and 1884. Models for 15 historical earthquakes are developed using calibrated intensity attenuation relations that quantitatively convey uncertainties in event location and magnitude, as well as their covariance. The historical catalog is most likely complete for earthquakes M ≥ 6.7; the largest earthquake in the catalog is the 1703 M ≈ 8.2 Genroku event. Seismicity rates from 80 years of instrumental records, which include the 1923 M = 7.9 Kanto shock, as well as interevent times estimated from the past ~7000 years of paleoseismic data, are combined with the historical catalog to define a frequency-magnitude distribution for 4.5 ≤ M ≤ 8.2, which is well described by a truncated Gutenberg-Richter relation with a b value of 0.96 and a maximum magnitude of 8.4. Large uncertainties associated with the intensity-based catalog are propagated by a Monte Carlo simulation to estimations of the scalar moment rate. The resulting best estimate of moment rate during 1649-2003 is 1.35 × 10^26 dyn cm yr-1 with considerable uncertainty at the 1σ level: (-0.11, +0.20) × 10^26 dyn cm yr-1. Comparison with geodetic models of the interseismic deformation indicates that the geodetic moment accumulation and likely moment release rate are roughly balanced over the catalog period. This balance suggests that the extended catalog is representative of long-term seismic processes near Tokyo and so can be used to assess earthquake probabilities. The resulting Poisson (or time-averaged) 30-year probability for M ≥ 7.9 earthquakes is 7-11%.
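The time-averaged probability quoted at the end follows from the Poisson occurrence assumption, P = 1 - exp(-λT); a short sketch (the back-calculated rates are illustrative, not the paper's inputs):

```python
import numpy as np

def poisson_prob(annual_rate, window_years=30.0):
    # Time-averaged probability of at least one event in the window.
    return 1.0 - np.exp(-annual_rate * window_years)

# Annual rates consistent with the reported 7-11% 30-year probability
# for M >= 7.9 earthquakes near Tokyo.
for p in (0.07, 0.11):
    rate = -np.log(1 - p) / 30
    print(f"P30={p:.0%} -> rate={rate:.4f} /yr -> check={poisson_prob(rate):.3f}")
```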
Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and to explore the system for critical transitions or tipping points. Parameter uncertainty is accounted for fully and systematically. The technique is generic and independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to the prediction of Arctic sea-ice extent.
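A minimal instance of the idea, assuming a Gaussian density with a linearly drifting mean fitted by maximum likelihood and extrapolated forward (the model form and data here are assumptions, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
t = np.arange(200, dtype=float)
x = 10.0 - 0.02 * t + rng.normal(0, 0.5, t.size)   # e.g. a declining extent

def nll(p):
    # Negative log-likelihood of a Gaussian with mean a + b*t and sd exp(log_s).
    a, b, log_s = p
    mu = a + b * t
    return 0.5 * np.sum((x - mu) ** 2) / np.exp(2 * log_s) + t.size * log_s

fit = minimize(nll, x0=[x.mean(), 0.0, 0.0])
a, b, log_s = fit.x
t_future = 250.0   # extrapolate the fitted density beyond the data window
print(f"forecast mean at t={t_future:.0f}: {a + b * t_future:.2f} "
      f"+/- {np.exp(log_s):.2f}")
```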
NASA Technical Reports Server (NTRS)
Norbury, John W.
1992-01-01
Single nucleon removal in relativistic and intermediate energy nucleus-nucleus collisions is studied using a generalization of Weizsacker-Williams theory that treats each electromagnetic multipole separately. Calculations are presented for electric dipole and quadrupole excitations and incorporate a realistic minimum impact parameter, Coulomb recoil corrections, and the uncertainties in the input photonuclear data. Discrepancies are discussed. The maximum quadrupole effect to be observed in future experiments is estimated and also an analysis of the charge dependence of the electromagnetic cross sections down to energies as low as 100 MeV/nucleon is made.
NASA Astrophysics Data System (ADS)
Milne, Alice E.; Glendining, Margaret J.; Bellamy, Pat; Misselbrook, Tom; Gilhespy, Sarah; Rivas Casado, Monica; Hulin, Adele; van Oijen, Marcel; Whitmore, Andrew P.
2014-01-01
The UK's greenhouse gas inventory for agriculture uses a model based on the IPCC Tier 1 and Tier 2 methods to estimate the emissions of methane and nitrous oxide from agriculture. The inventory calculations are disaggregated at country level (England, Wales, Scotland and Northern Ireland). Until now, no detailed assessment of the uncertainties in the estimates of emissions had been made. We used Monte Carlo simulation to perform such an analysis. We collated information on the uncertainties of each of the model inputs. These uncertainties propagate through the model and result in uncertainties in the estimated emissions. Using a sensitivity analysis, we found that in England and Scotland the uncertainty in the emission factor for emissions from N inputs (EF1) affected uncertainty the most, whereas in Wales and Northern Ireland the emission factor for N leaching and runoff (EF5) had greater influence. We showed that if the uncertainty in any one of these emission factors is reduced by 50%, the uncertainty in emissions of nitrous oxide is reduced by 10%. The uncertainties in the methane emission factors for enteric fermentation in cows and sheep had the greatest effect on the uncertainty in methane emissions. When inventories are disaggregated (as the UK's is), correlation between separate instances of each emission factor will affect the uncertainty in emissions. As more countries move towards inventory models with disaggregation, it is important that the IPCC give firm guidance on this topic.
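The propagation step can be sketched as follows, with hypothetical activity data and lognormal emission-factor distributions standing in for the collated UK inputs; halving the spread of EF1 and re-propagating mimics the reported sensitivity experiment.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Illustrative IPCC-style direct and indirect N2O terms (hypothetical inputs).
n_applied, n_leached = 1.0e6, 0.3e6          # kg N per year
ef1 = rng.lognormal(np.log(0.01), 0.8, n)    # emission factor, N inputs
ef5 = rng.lognormal(np.log(0.0075), 0.5, n)  # emission factor, leaching/runoff

def total_n2o(ef1, ef5):
    return ef1 * n_applied + ef5 * n_leached  # kg N2O-N per year

base = total_n2o(ef1, ef5)
# Halve the uncertainty (log-scale spread) of EF1 and re-propagate.
ef1_tight = rng.lognormal(np.log(0.01), 0.4, n)
tight = total_n2o(ef1_tight, ef5)

cv = lambda x: x.std() / x.mean()
print(f"CV baseline: {cv(base):.2f}, CV with EF1 spread halved: {cv(tight):.2f}")
```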
NASA Astrophysics Data System (ADS)
Rossby, T.; Reverdin, Gilles; Chafik, Leon; Søiland, Henrik
2017-07-01
The meridional overturning circulation (MOC) in the North Atlantic plays a major role in the transport of heat from low to high latitudes. In this study, we combine recent measurements of currents from the surface to >700 m from a shipboard acoustic Doppler current profiler with Argo profiles (to 2000 m) to estimate poleward volume, heat, and freshwater flux at 59.5°N between Greenland and Scotland. This is made possible thanks to the vessel Nuka Arctica, which operates on a 3 week schedule between Greenland and Denmark. For the period late 2012 to early 2016, the deseasoned mean meridional overturning circulation reaches an 18.4 ± 3.4 Sv maximum at the σθ = 27.55 kg m-3 isopycnal, which varies in depth from near the surface in the western Irminger Sea to 1000 m in Rockall Trough. The total heat and freshwater fluxes across 59.5°N are 399 ± 74 TW and -0.20 ± 0.04 Sv, respectively, where the uncertainties are principally due to that of the MOC. Analysis of altimetric sea surface height variations along exactly the same route reveals a somewhat stronger geostrophic flow north during this period compared to the 23 year mean, suggesting that for a long-term mean the above flux estimates should be reduced slightly to 17.4 Sv, 377 TW, and -0.19 Sv, respectively, with the same estimated uncertainties. The ADCP program is ongoing.
NASA Astrophysics Data System (ADS)
Mel, Riccardo; Viero, Daniele Pietro; Carniello, Luca; Defina, Andrea; D'Alpaos, Luigi
2014-09-01
Providing reliable and accurate storm surge forecasts is important for a wide range of problems related to coastal environments. In order to adequately support decision-making processes, it also becomes increasingly important to estimate the uncertainty associated with the storm surge forecast. The procedure commonly adopted to do this uses the results of a hydrodynamic model forced by a set of different meteorological forecasts; however, this approach entails a considerable, if not prohibitive, computational cost for real-time applications. In this paper we present two simplified methods for estimating the uncertainty affecting storm surge prediction with moderate computational effort. In the first approach we use a computationally fast, statistical tidal model instead of a hydrodynamic numerical model to estimate storm surge uncertainty. The second approach is based on the observation that the uncertainty in the sea level forecast mainly stems from the uncertainty affecting the meteorological fields; this leads to the idea of estimating forecast uncertainty via a linear combination of suitable meteorological variances extracted directly from the meteorological fields. The proposed methods were applied to estimate the uncertainty in the storm surge forecast in the Venice Lagoon. The results clearly show that the uncertainty estimated through a linear combination of suitable meteorological variances closely matches the one obtained using the deterministic approach and overcomes some intrinsic limitations in the use of a statistical tidal model.
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Chappell, Lori J.; Wang, Minli; Kim, Myung-Hee
2011-01-01
The uncertainties in estimating the health risks from galactic cosmic rays (GCR) and solar particle events (SPE) are a major limitation to the length of space missions and the evaluation of potential risk mitigation approaches. NASA limits astronaut exposures to a 3% risk of exposure-induced cancer death (REID), and protects against uncertainties in risk projections using an assessment of 95% confidence intervals after propagating the error from all model factors (environment and organ exposure, risk coefficients, dose-rate modifiers, and quality factors). Because there are potentially significant late mortality risks from diseases of the circulatory system and central nervous system (CNS), which are less well defined than cancer risks, the cancer REID limit is not necessarily conservative. In this report, we discuss estimates of lifetime risks from space radiation and describe new estimates of model uncertainties. The key updates to the NASA risk projection model are: 1) revised values for low-LET risk coefficients for tissue-specific cancer incidence, with incidence rates transported to an average U.S. population to estimate the probability of risk of exposure-induced cancer (REIC) and REID; 2) an analysis of smoking-attributable cancer risks showing that never-smokers have significantly reduced lung cancer risk, as well as reduced overall cancer risks from radiation, compared to risks estimated for the average U.S. population; 3) derivation of track-structure-based quality functions that depend on particle fluence, charge number Z, and kinetic energy E; 4) the assignment of a smaller maximum in the quality function for leukemia than for solid cancers; 5) the finding that use of the ICRP tissue weights over-estimates cancer risks from SPEs by a factor of 2 or more, so that summing cancer risks for each tissue is recommended as a more accurate approach to estimating SPE cancer risks; and 6) additional considerations of circulatory and CNS disease risks. Our analysis shows that an individual's history of smoking exposure has a larger impact on GCR risk estimates than the amount of radiation shielding or age at exposure (amongst adults). Risks for never-smokers compared to the average U.S. population are estimated to be reduced by between 30% and 60%, depending on model assumptions. Lung cancer is the major contributor to the reduction for never-smokers, with additional contributions from circulatory diseases and cancers of the stomach, liver, bladder, oral cavity and esophagus, and leukemia. The relative contribution of CNS risks to the overall space radiation detriment is potentially increased for never-smokers, such as most astronauts. Problems in estimating risks for former smokers and the influence of second-hand smoke are discussed. Compared to the LET approximation, the new track-structure-derived radiation quality functions lead to reduced risk for relativistic-energy particles and increased risks for intermediate-energy particles. Revised estimates of the number of safe days in space at solar minimum under heavy shielding conditions are described for never-smokers and the average U.S. population. Results show that missions to near-Earth asteroids (NEA) or Mars violate NASA's radiation safety standards at the current levels of uncertainty. Greater improvements in risk estimates for never-smokers are possible and would depend on improved understanding of risk transfer models and on elucidating the role of space radiation in the various stages of disease formation (e.g. initiation, promotion, and progression).
Chandra, Nastassya L; Soldan, Kate; Dangerfield, Ciara; Sile, Bersabeh; Duffell, Stephen; Talebi, Alireza; Choi, Yoon H; Hughes, Gwenda; Woodhall, Sarah C
2017-02-02
To inform mathematical modelling of the impact of chlamydia screening in England since 2000, a complete picture of chlamydia testing is needed. Monitoring and surveillance systems evolved between 2000 and 2012. Since 2012, data on publicly funded chlamydia tests and diagnoses have been collected nationally; however, gaps exist for earlier years. We collated available data on chlamydia testing and diagnosis rates among 15-44-year-olds by sex and age group for 2000-2012. Where data were unavailable, we applied data- and evidence-based assumptions to construct plausible minimum and maximum estimates and set bounds on uncertainty. There was a large range between estimates in years when datasets were less comprehensive (2000-2008); smaller ranges were seen thereafter. Among 15-19-year-old women in 2000, the estimated diagnosis rate ranged from 891 to 2,489 diagnoses per 100,000 persons. Testing and diagnosis rates increased between 2000 and 2012 in women and men across all age groups using either minimum or maximum estimates, with the greatest increases seen among 15-24-year-olds. Our dataset can be used to parameterise and validate mathematical models and serve as a reference dataset against which trends in chlamydia-related complications can be compared. Our analysis highlights the complexities of combining monitoring and surveillance datasets. This article is copyright of The Authors, 2017.
Bettencourt da Silva, Ricardo J N
2016-04-01
The identification of trace levels of compounds in complex matrices by conventional low-resolution gas chromatography hyphenated with mass spectrometry is based on the comparison of retention times and abundance ratios of characteristic mass spectrum fragments between analyte peaks from calibrators and sample peaks. Statistically sound criteria for the comparison of these parameters were developed based on the normal distribution of retention times and on simulation of the possibly non-normal distribution of correlated abundance ratios. The confidence level used to set the statistical maximum and minimum limits of the parameters defines the true positive rate of identifications. The false positive rate of identification was estimated from worst-case signal noise models. The estimated true and false positive identification rates from one retention time and two correlated ratios of three fragment abundances were combined using simple Bayes' statistics to estimate the probability that a compound identification is correct, designated the examination uncertainty. Models of the variation of examination uncertainty with analyte quantity allowed the estimation of the Limit of Examination as the lowest quantity that produces "extremely strong" evidence of compound presence. User-friendly MS-Excel files are made available to allow easy application of the developed approach in routine and research laboratories. The approach was successfully applied to the identification of chlorpyrifos-methyl and malathion in QuEChERS-method extracts of vegetables with high water content, for which the estimated Limits of Examination are 0.14 mg kg-1 and 0.23 mg kg-1, respectively. Copyright © 2015 Elsevier B.V. All rights reserved.
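The Bayesian combination step reduces to Bayes' rule applied to the estimated true and false positive rates; a minimal sketch with assumed rates and a flat prior:

```python
def examination_probability(tp_rate, fp_rate, prior=0.5):
    """Posterior probability that the compound is present given a positive
    identification, from simple Bayes' rule. A sketch of the combination
    step only; the paper combines one retention time and two correlated
    abundance ratios, with rates estimated from its statistical models."""
    evid = tp_rate * prior
    return evid / (evid + fp_rate * (1.0 - prior))

# E.g., a 99% true positive rate and a 0.1% false positive rate:
print(examination_probability(0.99, 0.001))  # ~0.999
```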
Determination of output factors for small proton therapy fields.
Fontenot, Jonas D; Newhauser, Wayne D; Bloch, Charles; White, R Allen; Titt, Uwe; Starkschall, George
2007-02-01
Current protocols for the measurement of proton dose focus on measurements under reference conditions; methods for measuring dose under patient-specific conditions have not been standardized. In particular, it is unclear whether dose in patient-specific fields can be determined more reliably with or without the presence of the patient-specific range compensator. The aim of this study was to quantitatively assess the reliability of two methods for measuring dose per monitor unit (D/MU) values for small-field treatment portals: one with the range compensator and one without it. A Monte Carlo model of the Proton Therapy Center-Houston double-scattering nozzle was created, and estimates of D/MU values were obtained from 14 simulated treatments of a simple geometric patient model. Field-specific D/MU calibration measurements were simulated with a dosimeter in a water phantom with and without the range compensator. D/MU values from the simulated calibration measurements were compared with D/MU values from the corresponding treatment simulation in the patient model. To evaluate the reliability of the calibration measurements, six metrics and four figures of merit were defined to characterize accuracy, uncertainty, the standard deviations of accuracy and uncertainty, worst agreement, and maximum uncertainty. Measuring D/MU without the range compensator provided superior results for five of the six metrics and for all four figures of merit. The two techniques yielded different results primarily because of high-dose-gradient regions introduced into the water phantom when the range compensator was present. Estimated uncertainties (approximately 1 mm) in the position of the dosimeter in these regions resulted in large uncertainties and high variability in D/MU values. When the range compensator was absent, these gradients were minimized and D/MU values were less sensitive to dosimeter positioning errors. We conclude that measuring D/MU without the range compensator present provides more reliable results than measuring it with the range compensator in place.
Development of a primary diffusion source of organic vapors for gas analyzer calibration
NASA Astrophysics Data System (ADS)
Lecuna, M.; Demichelis, A.; Sassi, G.; Sassi, M. P.
2018-03-01
The generation of reference mixtures of volatile organic compounds (VOCs) at trace levels (10 ppt-10 ppb) is a challenge for both environmental and clinical measurements. The calibration of gas analyzers for trace VOC measurements requires a stable and accurate source of the compound of interest. The dynamic preparation of gas mixtures by diffusion is a suitable method for fulfilling these requirements. Estimating the uncertainty of the molar fraction of the VOC in the mixture is a key step in the metrological characterization of a dynamic generator. The performance of a dynamic generator was monitored over a wide range of operating conditions. The generation system was simulated by a model developed with computational fluid dynamics and validated against experimental data. The vapor pressure of the VOC was found to be one of the main contributors to the uncertainty of the diffusion rate, and its influence at 10-70 kPa was analyzed and discussed. The air buoyancy effect and perturbations due to the weighing duration were studied. The carrier gas flow rate and the amount of liquid in the vial were found to play a role in limiting the diffusion rate. The results of the sensitivity analyses were reported through an uncertainty budget for the diffusion rate, and the role of each influence quantity was discussed. A set of criteria to minimize the uncertainty contribution of the primary diffusion source (25 µg min-1) was estimated: a carrier gas flow rate higher than 37.7 sml min-1, a maximum VOC liquid mass decrease in the vial of 4.8 g, a minimum residual mass of 1 g, and vial weighing times of 1-3 min. With this procedure a limit uncertainty of 0.5% in the diffusion rate can be obtained for VOC mixtures at trace levels (10 ppt-10 ppb), making the developed diffusion vials a primary diffusion source with the potential to become a new reference material for trace VOC analysis.
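Gravimetrically, the diffusion rate is simply the mass loss between weighings divided by the elapsed time; a simplified sketch (neglecting the buoyancy and weighing-duration corrections the paper analyzes, and with an assumed balance uncertainty):

```python
import numpy as np

def diffusion_rate(m0_g, m1_g, minutes, u_balance_g=1e-5):
    """Gravimetric diffusion rate and its standard uncertainty from two
    weighings. A sketch only: the paper's uncertainty budget also covers
    buoyancy, weighing duration, and vapor-pressure effects."""
    rate_ug_min = (m0_g - m1_g) / minutes * 1e6
    u_rate = np.sqrt(2) * u_balance_g / minutes * 1e6   # two weighings
    return rate_ug_min, u_rate

# 36 mg lost over 24 h gives the paper's nominal 25 ug/min source.
rate, u = diffusion_rate(10.0000, 9.9640, minutes=24 * 60)
print(f"{rate:.1f} ug/min +/- {u:.3f} ({u / rate:.2%})")
```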
Variability in Temperature-Related Mortality Projections under Climate Change
Benmarhnia, Tarik; Sottile, Marie-France; Plante, Céline; Brand, Allan; Casati, Barbara; Fournier, Michel
2014-01-01
Background: Most studies that have assessed impacts on mortality of future temperature increases have relied on a small number of simulations and have not addressed the variability and sources of uncertainty in their mortality projections. Objectives: We assessed the variability of temperature projections and dependent future mortality distributions, using a large panel of temperature simulations based on different climate models and emission scenarios. Methods: We used historical data from 1990 through 2007 for Montreal, Quebec, Canada, and Poisson regression models to estimate relative risks (RR) for daily nonaccidental mortality in association with three different daily temperature metrics (mean, minimum, and maximum temperature) during June through August. To estimate future numbers of deaths attributable to ambient temperatures and the uncertainty of the estimates, we used 32 different simulations of daily temperatures for June–August 2020–2037 derived from three global climate models (GCMs) and a Canadian regional climate model with three sets of RRs (one based on the observed historical data, and two on bootstrap samples that generated the 95% CI of the attributable number (AN) of deaths). We then used analysis of covariance to evaluate the influence of the simulation, the projected year, and the sets of RRs used to derive the attributable numbers of deaths. Results: We found that < 1% of the variability in the distributions of simulated temperature for June–August of 2020–2037 was explained by differences among the simulations. Estimated ANs for 2020–2037 ranged from 34 to 174 per summer (i.e., June–August). Most of the variability in mortality projections (38%) was related to the temperature–mortality RR used to estimate the ANs. Conclusions: The choice of the RR estimate for the association between temperature and mortality may be important to reduce uncertainty in mortality projections. Citation: Benmarhnia T, Sottile MF, Plante C, Brand A, Casati B, Fournier M, Smargiassi A. 2014. Variability in temperature-related mortality projections under climate change. Environ Health Perspect 122:1293–1298; http://dx.doi.org/10.1289/ehp.1306954 PMID:25036003
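The attributable numbers follow from the daily relative risks via the standard attributable-fraction identity, AN = Σ deaths_d (RR_d - 1)/RR_d; a sketch with synthetic data (the study's actual RRs and mortality series are not reproduced here):

```python
import numpy as np

def attributable_number(daily_deaths, daily_rr):
    # Standard attributable-fraction identity applied day by day and summed.
    rr = np.asarray(daily_rr, dtype=float)
    return float(np.sum(np.asarray(daily_deaths) * (rr - 1.0) / rr))

# Hypothetical summer: 92 days, ~50 deaths/day, modest heat-related RRs.
rng = np.random.default_rng(3)
deaths = rng.poisson(50, 92)
rr = 1.0 + rng.gamma(2, 0.01, 92)
print(attributable_number(deaths, rr))
```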
NASA Technical Reports Server (NTRS)
da Silva, Arlindo; Redder, Christopher
2010-01-01
MERRA is a NASA reanalysis for the satellite era using a major new version of the Goddard Earth Observing System Data Assimilation System Version 5 (GEOS-5). The project focuses on historical analyses of the hydrological cycle on a broad range of weather and climate time scales and places the NASA EOS suite of observations in a climate context. The characterization of uncertainty in reanalysis fields is a commonly requested feature among users of such data. While intercomparison with reference data sets is common practice for ascertaining the realism of the datasets, such studies are typically restricted to long-term climatological statistics and seldom provide state-dependent measures of the uncertainties involved. In principle, variational data assimilation algorithms can produce error estimates for the analysis variables (typically surface pressure, winds, temperature, moisture and ozone) consistent with the assumed background and observation error statistics. However, these "perceived error estimates" are expensive to obtain and are limited by the somewhat simplistic errors assumed in the algorithm. The observation-minus-forecast residuals (innovations), a by-product of any assimilation system, constitute a powerful tool for estimating the systematic and random errors in the analysis fields. Unfortunately, such data are usually not readily available with reanalysis products, often requiring the tedious decoding of large datasets in not-so-user-friendly file formats. With MERRA we have introduced a gridded version of the observations/innovations used in the assimilation process, using the same grid and data formats as the regular datasets. Such a dataset empowers the user to conveniently perform observing-system-related analyses and error estimates. The scope of this dataset will be briefly described. We will present a systematic analysis of MERRA innovation time series for the conventional observing system, including maximum-likelihood estimates of background and observation errors, as well as global bias estimates. Starting with the joint PDF of innovations and analysis increments at observation locations, we propose a technique for diagnosing bias among the observing systems and document how these contextual biases have evolved during the satellite era covered by MERRA.
Estimating Stresses, Fault Friction and Fluid Pressure from Topography and Coseismic Slip Models
NASA Astrophysics Data System (ADS)
Styron, R. H.; Hetland, E. A.
2014-12-01
Stress is a first-order control on the deformation state of the earth. However, stress is notoriously hard to measure, and researchers typically estimate only the directions and relative magnitudes of principal stresses, with little quantification of the uncertainties or absolute magnitude. To improve upon this, we have developed methods to constrain the full stress tensor field in a region surrounding a fault, including tectonic, topographic, and lithostatic components, as well as static friction and pore fluid pressure on the fault. Our methods are based on elastic halfspace techniques for estimating topographic stresses from a DEM, and we use a Bayesian approach to estimate accumulated tectonic stress, fluid pressure, and friction from fault geometry and slip rake, assuming Mohr-Coulomb fault mechanics. The nature of the tectonic stress inversion is such that either the stress maximum or minimum is better constrained, depending on the topography and fault deformation style. Our results from the 2008 Wenchuan event yield shear stresses from topography of up to 20 MPa (normal-sinistral shear sense) and topographic normal stresses of up to 80 MPa on the faults; tectonic stress had to be large enough to overcome topography to produce the observed reverse-dextral slip. Maximum tectonic stress is constrained to be >0.3 times lithostatic stress (depth-increasing), with a most likely value around 0.8, trending 90-110°E. Minimum tectonic stress is about half of the maximum. Static fault friction is constrained at 0.1-0.4, and fluid pressure at 0-0.6 times the total pressure on the fault. Additionally, the patterns of topographic stress and slip suggest that topographic normal stress may limit fault slip once failure has occurred. Preliminary results from the 2013 Balochistan earthquake are similar, but yield stronger constraints on the upper limits of maximum tectonic stress, as well as tight constraints on the magnitude of minimum tectonic stress and on stress orientation. Work in progress on the Wasatch fault suggests that maximum tectonic stress may also be constrained there, and that some of the shallow rupture segmentation may be due in part to localized topographic loading. Future directions of this work include regions where high relief influences fault kinematics (such as Tibet).
Assessing the impact of climate and land use changes on extreme floods in a large tropical catchment
NASA Astrophysics Data System (ADS)
Jothityangkoon, Chatchai; Hirunteeyakul, Chow; Boonrawd, Kowit; Sivapalan, Murugesu
2013-05-01
In the wake of the recent catastrophic floods in Thailand, there is considerable concern about the safety of large dams designed and built some 50 years ago. In this paper a distributed rainfall-runoff model appropriate for extreme flood conditions is used to generate revised estimates of the Probable Maximum Flood (PMF) for the Upper Ping River catchment (area 26,386 km2) in northern Thailand, upstream of the large Bhumipol Dam. The model has two components: a continuous water balance model based on a configuration of parameters estimated from climate, soil and vegetation data, and a distributed flood routing model based on non-linear storage-discharge relationships of the river network under extreme flood conditions. The model is implemented under several alternative scenarios for the Probable Maximum Precipitation (PMP) estimates and is also used to estimate the potential effects of both climate change and land use and land cover changes on extreme floods. These new estimates are compared against estimates from other hydrological models, including the application of the original prediction methods under current conditions. Model simulations and sensitivity analyses indicate that a reasonable PMF at the dam site is 6311 m3/s, only slightly higher than the original design flood of 6000 m3/s. As part of an uncertainty assessment, the estimated PMF is sensitive to the design method, input PMP, land use changes and the floodplain inundation effect. Increasing the PMP depth by 5% causes a 7.5% increase in the PMF. Deforestation of 10%, 20% and 30% results in PMF increases of 3.1%, 6.2% and 9.2%, respectively. The modest increase of the estimated PMF (to just 6311 m3/s) in spite of these changes is due to factoring in the hydraulic effects of trees and buildings on the floodplain as the flood situation changes from normal floods to extreme floods, when over-bank flow may be the dominant flooding process, leading to a substantial reduction in the PMF estimates.
Process for estimating likelihood and confidence in post detonation nuclear forensics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darby, John L.; Craft, Charles M.
2014-07-01
Technical nuclear forensics (TNF) must provide answers to questions of concern to the broader community, including an estimate of uncertainty. There is significant uncertainty associated with post-detonation TNF. The uncertainty consists of a great deal of epistemic (state of knowledge) as well as aleatory (random) uncertainty, and many of the variables of interest are linguistic (words) and not numeric. We provide a process by which TNF experts can structure their process for answering questions and provide an estimate of uncertainty. The process uses belief and plausibility, fuzzy sets, and approximate reasoning.
NASA Astrophysics Data System (ADS)
Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.
2017-12-01
Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability of the streamflow forecasts produced with ensemble meteorological forcings.
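As one example of the verification metrics compared, the Brier Skill Score measures probabilistic forecasts of threshold exceedance against a climatological reference; a sketch with made-up forecasts and observations (not the Meuse or Rhine data):

```python
import numpy as np

def brier_skill_score(forecast_prob, observed, climatology):
    """BSS = 1 - BS/BS_ref: positive values mean the forecast beats the
    climatological reference probability."""
    obs = np.asarray(observed, dtype=float)
    bs = np.mean((np.asarray(forecast_prob) - obs) ** 2)
    bs_ref = np.mean((climatology - obs) ** 2)
    return 1.0 - bs / bs_ref

rng = np.random.default_rng(11)
obs = (rng.random(1000) < 0.2).astype(float)   # exceedance events (0/1)
# Synthetic forecasts that are informative but imperfect.
fcst = np.clip(0.2 + 0.5 * (obs - 0.2) + rng.normal(0, 0.1, 1000), 0, 1)
print(brier_skill_score(fcst, obs, climatology=0.2))
```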
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varma, Amit H.; Seo, Jungil; Coleman, Justin Leigh
2015-11-01
Seismic probabilistic risk assessment (SPRA) methods and approaches for nuclear power plants (NPPs) were first developed in the 1970s, and aspects of them have matured over time as they were applied and incrementally improved. SPRA provides information on risk and risk insights and allows for some accounting of uncertainty and variability. As a result, SPRA is now used as an important basis for risk-informed decision making for both new and operating NPPs in the US and in an increasing number of countries globally. SPRAs are intended to provide best estimates of the various combinations of structural and equipment failures that can lead to a seismically induced core damage event. However, in some instances the current SPRA approach contains large uncertainties and potentially masks other important events (for instance, it was not the seismic motions that caused the Fukushima core melt events, but the tsunami ingress into the facility). INL has an advanced SPRA research and development (R&D) activity that will identify areas in the calculation process that contain significant uncertainties. One current area of focus is the use of nonlinear soil-structure interaction (NLSSI) analysis methods to accurately capture: 1) nonlinear soil behavior and 2) gapping and sliding between the NPP and soil. The goal of this study is to compare numerical NLSSI analysis results with recorded earthquake ground motions at Fukushima Daiichi (Great Tohoku Earthquake) and evaluate the sources of nonlinearity contributing to the observed reduction in peak acceleration. Comparisons are made using recorded data in the free field (soil column with no structural influence) and recorded data on the NPP basemat (in-structure response). Results presented in this study should identify areas of focus for future R&D activities with the goal of minimizing uncertainty in SPRA calculations. This is not a validation activity, since there are too many sources of uncertainty that a numerical analysis would need to consider (variability in soil material properties, structural material properties, etc.). Rather, the report determines whether the NLSSI calculations follow trends similar to those observed in the recorded data (i.e. reductions in maximum acceleration between the free field and basemat). Numerical NLSSI results show that maximum accelerations between the free field and basemat were reduced in the EW and NS directions; the maximum acceleration in the UD direction increased slightly. The largest reduction in maximum acceleration between the modeled free field and the NPP basemat was nearly 50%. The reduction of numerical maximum accelerations in the EW and NS directions follows the trend observed in the recorded data. The maximum reductions observed in these NLSSI studies were due to soil nonlinearities, not gapping and sliding (although additional R&D is needed to develop an appropriate approach to modeling gapping and sliding). This exploratory study highlights the need for additional R&D on developing: (i) improved modeling of soil nonlinearities (soil constitutive models that appropriately capture cyclic soil behavior), (ii) improved modeling of gapping and sliding at the soil-structure interface (to appropriately capture the dissipation of energy at this interface), and (iii) experimental laboratory test data to calibrate items (i) and (ii).
NASA Astrophysics Data System (ADS)
Fischbach, J. R.; Lempert, R. J.; Molina-Perez, E.
2017-12-01
The U.S. Environmental Protection Agency (USEPA), together with state and local partners, develops watershed implementation plans designed to meet water quality standards. Climate uncertainty, along with uncertainty about future land use changes or the performance of water quality best management practices (BMPs), may make it difficult for these implementation plans to meet water quality goals. In this effort, we explored how decision making under deep uncertainty (DMDU) methods such as Robust Decision Making (RDM) could help USEPA and its partners develop implementation plans that are more robust to future uncertainty. The study focuses on one part of the Chesapeake Bay watershed, the Patuxent River, which is 2,479 sq km in area, highly urbanized, and has a rapidly growing population. We simulated the contribution of stormwater contaminants from the Patuxent to the overall Total Maximum Daily Load (TMDL) for the Chesapeake Bay under multiple scenarios reflecting climate and other uncertainties. Contaminants considered included nitrogen, phosphorus, and sediment loads. The assessment included a large set of scenario simulations using the USEPA Chesapeake Bay Program's Phase V watershed model. Uncertainties represented in the analysis included 18 downscaled climate projections (based on 6 general circulation models and 3 emissions pathways), 12 land use scenarios with different population projections and development patterns, and alternative assumptions about BMP performance standards and efficiencies associated with different suites of stormwater BMPs. Finally, we developed cost estimates for each of the performance standards and compared cost to TMDL performance as a key tradeoff for future water quality management decisions. In this talk, we describe how this research can help inform climate-related decision support at USEPA's Chesapeake Bay Program, and more generally how RDM and other DMDU methods can support improved water quality management under climate uncertainty.
GCR Environmental Models III: GCR Model Validation and Propagated Uncertainties in Effective Dose
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Xu, Xiaojing; Blattnig, Steve R.; Norman, Ryan B.
2014-01-01
This is the last of three papers focused on quantifying the uncertainty associated with galactic cosmic ray (GCR) models used for space radiation shielding applications. In the first paper, it was found that GCR ions with Z>2 and boundary energy below 500 MeV/nucleon induce less than 5% of the total effective dose behind shielding. This is an important finding, since GCR model development and validation have been heavily biased toward Advanced Composition Explorer/Cosmic Ray Isotope Spectrometer measurements below 500 MeV/nucleon. Weights were also developed that quantify the relative contribution of defined GCR energy and charge groups to effective dose behind shielding. In the second paper, it was shown that these weights could be used to efficiently propagate GCR model uncertainties into effective dose behind shielding. In this work, uncertainties are quantified for a few commonly used GCR models. A validation metric is developed that accounts for measurement uncertainty, and the metric is coupled to the fast uncertainty propagation method. For this work, the Badhwar-O'Neill (BON) 2010 and 2011 models and the Matthia GCR model are compared to an extensive measurement database. It is shown that BON2011 systematically overestimates heavy ion fluxes in the range 0.5-4 GeV/nucleon. BON2010 and BON2011 also show moderate and large errors in reproducing past solar activity near the 2000 solar maximum and the 2010 solar minimum. It is found that all three models induce relative errors in effective dose in the interval [-20%, 20%] at a 68% confidence level. The BON2010 and Matthia models are found to have similar overall uncertainty estimates and are preferred for space radiation shielding applications.
Alderman, Phillip D.; Stanfill, Bryan
2016-10-06
Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions of estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis, and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. This study demonstrates the importance of quantifying both model-structure-driven and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provides a useful framework for quantifying and analyzing sources of prediction uncertainty.
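A generic random-walk Metropolis sampler of the kind used here takes only a few lines; the toy target below (a single parameter with synthetic heading-date data) is an assumption for illustration, not the study's phenology models:

```python
import numpy as np

rng = np.random.default_rng(5)

def metropolis(log_post, theta0, step, n_iter=20_000):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, posterior ratio), otherwise stay put."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + rng.normal(scale=step, size=theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy target: posterior of a mean days-to-heading parameter, Gaussian noise.
obs = rng.normal(60.0, 3.0, size=25)                   # synthetic durations
log_post = lambda th: -0.5 * np.sum((obs - th[0]) ** 2) / 9.0
chain = metropolis(log_post, [50.0], step=1.0)
print(chain[5000:].mean(), chain[5000:].std())         # posterior mean and sd
```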
Probabilistic tsunami hazard analysis: Multiple sources and global applications
Grezio, Anita; Babeyko, Andrey; Baptista, Maria Ana; Behrens, Jörn; Costa, Antonio; Davies, Gareth; Geist, Eric L.; Glimsdal, Sylfest; González, Frank I.; Griffin, Jonathan; Harbitz, Carl B.; LeVeque, Randall J.; Lorito, Stefano; Løvholt, Finn; Omira, Rachid; Mueller, Christof; Paris, Raphaël; Parsons, Thomas E.; Polet, Jascha; Power, William; Selva, Jacopo; Sørensen, Mathilde B.; Thio, Hong Kie
2017-01-01
Applying probabilistic methods to infrequent but devastating natural events is intrinsically challenging. For tsunami analyses, a suite of geophysical assessments should in principle be evaluated because of the different causes generating tsunamis (earthquakes, landslides, volcanic activity, meteorological events, and asteroid impacts) with varying mean recurrence rates. Probabilistic Tsunami Hazard Analyses (PTHAs) are conducted in different areas of the world at global, regional, and local scales with the aim of understanding tsunami hazard to inform tsunami risk reduction activities. PTHAs enhance knowledge of the potential tsunamigenic threat by estimating the probability of exceeding specific levels of tsunami intensity metrics (e.g., run-up or maximum inundation heights) within a certain period of time (exposure time) at given locations (target sites); these estimates can be summarized in hazard maps or hazard curves. This discussion presents a broad overview of PTHA, including (i) sources and mechanisms of tsunami generation, emphasizing the variety and complexity of the tsunami sources and their generation mechanisms, (ii) developments in modeling the propagation and impact of tsunami waves, and (iii) statistical procedures for tsunami hazard estimates that include the associated epistemic and aleatoric uncertainties. Key elements in understanding the potential tsunami hazard are discussed, in light of the rapid development of PTHA methods during the last decade and the globally distributed applications, including the importance of considering multiple sources, their relative intensities, probabilities of occurrence, and uncertainties in an integrated and consistent probabilistic framework.
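A hazard curve of the kind described can be converted to probabilities of exceedance over an exposure time under the common assumption of Poissonian occurrence; the heights and annual rates below are hypothetical.

```python
import numpy as np

def exceedance_probability(rate_per_year, exposure_years):
    """P(at least one exceedance in T years) under a Poisson model."""
    return 1.0 - np.exp(-rate_per_year * exposure_years)

# Hypothetical hazard curve: mean annual rates of exceeding given
# maximum inundation heights (m) at a target site.
heights = np.array([0.5, 1.0, 2.0, 5.0])
rates = np.array([1e-2, 4e-3, 1e-3, 1e-4])

for h, lam in zip(heights, rates):
    print(f"h > {h:4.1f} m: P(50 yr) = {exceedance_probability(lam, 50):.3f}")
```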
Ashenafi, Michael S.; McDonald, Daniel G.; Vanek, Kenneth N.
2015-01-01
Beam scanning data collected on the tomotherapy linear accelerator using the TomoScanner water scanning system is primarily used to verify the golden beam profiles included in all Helical TomoTherapy treatment planning systems (TOMO TPSs). The user is not allowed to modify the beam profiles/parameters for beam modeling within the TOMO TPSs. The authors report the first feasibility study using the Blue Phantom Helix (BPH) as an alternative to the TomoScanner (TS) system. This work establishes a benchmark dataset using BPH for target commissioning and quality assurance (QA), and quantifies systematic uncertainties between TS and BPH. Reproducibility of scanning with BPH was tested by three experienced physicists taking five sets of measurements over a six-month period. BPH provides several enhancements over TS, including a 3D scanning arm, which is able to acquire necessary beam-data with one tank setup, a universal chamber mount, and the OmniPro software, which allows online data collection and analysis. Discrepancies between BPH and TS were estimated by acquiring datasets with each tank. In addition, data measured with BPH and TS were compared to the golden TOMO TPS beam data. The total systematic uncertainty, defined as the combination of scanning system and beam modeling uncertainties, was determined through numerical analysis and tabulated. OmniPro was used for all analysis to eliminate uncertainty due to different data processing algorithms. The setup reproducibility of BPH remained within 0.5 mm/0.5%. Comparing BPH, TS, and golden TPS data for PDDs beyond the depth of maximum dose, the total systematic uncertainties were within 1.4 mm/2.1%. Between BPH and TPS golden data, maximum differences in the field width and penumbra of in-plane profiles were within 0.8 and 1.1 mm, respectively. Furthermore, in cross-plane profiles, the field width differences increased by up to 2.5 mm at depths greater than 10 cm, and maximum penumbra uncertainties were 5.6 mm and 4.6 mm from the TS scanning system and TPS modeling, respectively. Use of BPH reduced measurement time by 1-2 hrs per session. The BPH has been assessed as an efficient, reproducible, and accurate scanning system capable of providing reliable benchmark beam data. With this data, a physicist can utilize the BPH in a clinical setting with an understanding of the scan discrepancy that may be encountered while validating the TPS or during routine machine QA. Without the flexibility of modifying the TPS and without a golden beam dataset from the vendor or a TPS model generated from data collected with the BPH, this represents the best solution for current clinical use of the BPH. PACS number: 87.56.Fc
NASA Astrophysics Data System (ADS)
Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.
2018-01-01
Aims: We aim to perform a theoretical evaluation of the impact of the uncertainty in mass loss on asteroseismic grid-based estimates of masses, radii, and ages of stars in the red giant branch (RGB) phase. Methods: We adopted the SCEPtER pipeline on a grid spanning the mass range [0.8; 1.8] M⊙. As observational constraints, we adopted the stars' effective temperatures, the metallicity [Fe/H], the average large frequency spacing Δν, and the frequency of maximum oscillation power νmax. The mass loss was modelled following a Reimers parametrization with the two different efficiencies η = 0.4 and η = 0.8. Results: In the RGB phase, the average random relative error (owing only to observational uncertainty) on mass and age estimates is about 8% and 30% respectively. The bias in mass and age estimates caused by the adoption of a wrong mass loss parameter in the recovery is minor for the vast majority of the RGB evolution. The biases get larger only after the RGB bump. In the last 2.5% of the RGB lifetime the error on the mass determination reaches 6.5%, becoming larger than the random error component in this evolutionary phase. The error on the age estimate amounts to 9%, that is, equal to the random error uncertainty. These results are independent of the stellar metallicity [Fe/H] in the explored range. Conclusions: Asteroseismic-based estimates of stellar mass, radius, and age in the RGB phase can be considered mass loss independent within the range (η ∈ [0.0,0.8]) as long as the target is in an evolutionary phase preceding the RGB bump.
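The grid-based recovery can be pictured as a likelihood-weighted average over a stellar model grid. The sketch below uses a toy four-point grid and invented observational values and uncertainties; it illustrates the idea only and is not the SCEPtER pipeline itself.

```python
import numpy as np

# Hypothetical model grid: each row is (mass, Teff, [Fe/H], dnu, numax).
grid = np.array([
    [0.9, 4800.0, -0.10, 8.0, 90.0],
    [1.0, 4750.0,  0.00, 7.4, 82.0],
    [1.1, 4700.0,  0.00, 6.9, 76.0],
    [1.2, 4650.0,  0.10, 6.4, 70.0],
])

obs = np.array([4720.0, 0.02, 7.1, 79.0])    # observed constraints
sigma = np.array([70.0, 0.10, 0.15, 2.0])    # observational uncertainties

# Chi-square distance of every grid point from the observations.
chi2 = np.sum(((grid[:, 1:] - obs) / sigma) ** 2, axis=1)

# Likelihood-weighted mean mass: the grid-based estimate.
w = np.exp(-0.5 * chi2)
mass_est = np.sum(w * grid[:, 0]) / np.sum(w)
print(f"Estimated mass: {mass_est:.2f} Msun")
```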
U.S. broiler housing ammonia emissions inventory
NASA Astrophysics Data System (ADS)
Gates, R. S.; Casey, K. D.; Wheeler, E. F.; Xin, H.; Pescatore, A. J.
Using recently published baseline ammonia emissions data for U.S. broiler chicken housing, we present a method of estimating their contribution to an annual ammonia budget that is different from that used by USEPA. Emission rate increases linearly with flock age, from near zero at the start of the flock to a maximum at the end of the flock, 28-65 days later. Market weight of chickens raised for meat varies from "broilers" weighing about 2 kg to "roasters" weighing about 3 kg. Multiple flocks of birds are grown in a single house annually, with variable downtime to prepare the house between flocks. The method takes into account the weight and number of chickens marketed. Uncertainty in the baseline emission estimates is propagated so that inventory estimates are provided with error estimates. The method also incorporates the condition of the litter that birds are raised upon and the varying market weight of birds grown. Using 2003 USDA data on broiler production numbers, broiler housing is estimated to contribute 8.8-11.7 kT ammonia for new and built-up litter, respectively, in Kentucky and 240-324 kT ammonia for new and built-up litter, respectively, nationally. Results suggest that a 10% uncertainty in annual emission rate is expected for the market weight categories of broilers, heavy broilers, and roasters. A 27-47% reduction in annual housing emission rate is predicted if new rather than built-up litter were used for every flock. The estimating method can be adapted to other meat bird building emissions and future ammonia emission strategies, with suitable insertion of an age-dependent emission factor or slope into a predictive model equation. The method can be readily applied and is an alternative to that used by USEPA.
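The flock-based arithmetic behind such an inventory can be sketched as follows; the emission slope and house parameters are hypothetical placeholders, not the published factors.

```python
# Minimal sketch of the flock-based inventory arithmetic. The slope is
# a hypothetical value in g NH3 per bird per day, per day of flock age.
slope_g_per_bird_day2 = 0.025
flock_length_days = 49             # typical broiler grow-out
flocks_per_year = 6                # after downtime between flocks
birds_per_house = 25_000

# Emission rate rises linearly from ~0 to slope*age, so the per-flock
# per-bird total is the area of a triangle: 0.5 * slope * L^2.
per_bird_per_flock_g = 0.5 * slope_g_per_bird_day2 * flock_length_days**2
annual_kg = per_bird_per_flock_g * birds_per_house * flocks_per_year / 1000.0
print(f"House emission: {annual_kg:.0f} kg NH3 per year")
```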
Assessing uncertainties in surface water security: An empirical multimodel approach
NASA Astrophysics Data System (ADS)
Rodrigues, Dulce B. B.; Gupta, Hoshin V.; Mendiondo, Eduardo M.; Oliveira, Paulo Tarso S.
2015-11-01
Various uncertainties are involved in the representation of processes that characterize interactions among societal needs, ecosystem functioning, and hydrological conditions. Here we develop an empirical uncertainty assessment of water security indicators that characterize scarcity and vulnerability, based on a multimodel and resampling framework. We consider several uncertainty sources including those related to (i) observed streamflow data; (ii) hydrological model structure; (iii) residual analysis; (iv) the method for defining Environmental Flow Requirement; (v) the definition of critical conditions for water provision; and (vi) the critical demand imposed by human activities. We estimate the overall hydrological model uncertainty by means of a residual bootstrap resampling approach, and by uncertainty propagation through different methodological arrangements applied to a 291 km² agricultural basin within the Cantareira water supply system in Brazil. Together, the two-component hydrograph residual analysis and the block bootstrap resampling approach result in a more accurate and precise estimate of the uncertainty (95% confidence intervals) in the simulated time series. We then compare the uncertainty estimates associated with water security indicators using a multimodel framework and the uncertainty estimates provided by each model uncertainty estimation approach. The range of values obtained for the water security indicators suggests that the models/methods are robust and perform well in a range of plausible situations. The method is general and can be easily extended, thereby forming the basis for meaningful support to end-users facing water resource challenges by enabling them to incorporate a viable uncertainty analysis into a robust decision-making process.
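A minimal version of residual block bootstrapping, which preserves the short-range autocorrelation that i.i.d. resampling would destroy, might look like the following; the block length and residual series are illustrative stand-ins, not the paper's two-component scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_bootstrap(residuals, block_len=30, n_boot=1000):
    """Resample a residual time series in contiguous blocks."""
    n = len(residuals)
    max_start = n - block_len
    n_blocks = int(np.ceil(n / block_len))
    replicates = np.empty((n_boot, n))
    for b in range(n_boot):
        starts = rng.integers(0, max_start + 1, size=n_blocks)
        blocks = [residuals[s:s + block_len] for s in starts]
        replicates[b] = np.concatenate(blocks)[:n]
    return replicates

res = rng.standard_normal(365)        # stand-in daily model residuals
reps = block_bootstrap(res)
print(reps.std(axis=1).mean())        # bootstrap spread estimate
```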
Uncertainty quantification in volumetric Particle Image Velocimetry
NASA Astrophysics Data System (ADS)
Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos
2016-11-01
Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV. However, no framework exists for three-dimensional volumetric PIV. In volumetric PIV the measurement uncertainty is a function of reconstructed three-dimensional particle location that in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position which is a function of particle detection and mapping function uncertainty. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty that is estimated as an extension of the 2D PIV uncertainty framework. Finally the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases the variation of estimated uncertainty with the elemental volumetric PIV error sources are also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.
Improvements in Spectrum's fit to program data tool.
Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John
2017-04-01
The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived by either fitting to seroprevalence surveillance and survey data or generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections or people living with HIV and AIDS related deaths. This article describes development and application of the fit to program data (FPD) tool in Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV or AIDS-related deaths. Inputs can be adjusted for proportions undiagnosed or misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.
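A least-squares stand-in for the double-logistic fit might look like the sketch below. The FPD tool's actual likelihood machinery is not reproduced here, so scipy's curve_fit is applied to synthetic case counts; the functional form and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, a, alpha, t0, b, beta, t1):
    """Rising logistic minus a delayed logistic: a flexible shape for
    an epidemic that grows, peaks, and settles to an endemic level."""
    return (a / (1.0 + np.exp(-alpha * (t - t0)))
            - b / (1.0 + np.exp(-beta * (t - t1))))

rng = np.random.default_rng(1)
t = np.arange(1990, 2016, dtype=float)
true = double_logistic(t, 3000, 0.8, 1996, 2000, 0.5, 2005)
cases = rng.poisson(np.clip(true, 1, None)).astype(float)

p0 = [3000, 0.5, 1995, 1500, 0.5, 2005]
popt, pcov = curve_fit(double_logistic, t, cases, p0=p0, maxfev=20000)
print(popt)                       # fitted parameters
print(np.sqrt(np.diag(pcov)))     # asymptotic parameter uncertainties
```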
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Baetz, B. W.; Ancell, B. C.
2017-05-01
Particle filtering techniques have been receiving increasing attention from the hydrologic community due to their ability to properly estimate model parameters and states of nonlinear and non-Gaussian systems. To facilitate a robust quantification of uncertainty in hydrologic predictions, it is necessary to explicitly examine the forward propagation and evolution of parameter uncertainties and their interactions that affect the predictive performance. This paper presents a unified probabilistic framework that merges the strengths of particle Markov chain Monte Carlo (PMCMC) and factorial polynomial chaos expansion (FPCE) algorithms to robustly quantify and reduce uncertainties in hydrologic predictions. A Gaussian anamorphosis technique is used to establish a seamless bridge between the data assimilation using the PMCMC and the uncertainty propagation using the FPCE through a straightforward transformation of posterior distributions of model parameters. The unified probabilistic framework is applied to the Xiangxi River watershed of the Three Gorges Reservoir (TGR) region in China to demonstrate its validity and applicability. Results reveal that the degree of spatial variability of soil moisture capacity is the most identifiable model parameter with the fastest convergence through the streamflow assimilation process. The potential interaction between the spatial variability in soil moisture conditions and the maximum soil moisture capacity has the most significant effect on the performance of streamflow predictions. In addition, parameter sensitivities and interactions vary in magnitude and direction over time due to temporal and spatial dynamics of hydrologic processes.
NASA Astrophysics Data System (ADS)
Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.
2016-12-01
Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty at scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
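The contrast between individual variation (standard deviation) and uncertainty in the mean (standard error) as tree count grows can be reproduced with a two-line propagation; the relative error values below are hypothetical.

```python
import numpy as np

# Allometric prediction error has two parts: uncertainty in the fitted
# mean curve (SE, shared by all trees on a plot) and residual scatter
# of individuals about the curve (SD, independent between trees).
se_mean, sd_indiv = 0.05, 0.30          # hypothetical relative errors

for n_trees in (1, 5, 30, 300):
    # The shared mean error does not average out across trees, while
    # independent individual errors shrink as 1/sqrt(n).
    total = np.sqrt(se_mean**2 + sd_indiv**2 / n_trees)
    print(f"n={n_trees:>3}: total relative uncertainty = {total:.3f}")
```

Run as written, the individual term dominates for small plots and becomes comparable to the mean term near 30 trees, matching the pattern reported above.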
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C. S.; Zhang, Hongbin
Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. Furthermore, the minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the selected figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
Maximum warming occurs about one decade after carbon dioxide emission
NASA Astrophysics Data System (ADS)
Ricke, K.; Caldeira, K.
2014-12-01
There has been a long tradition of estimating the amount of climate change that would result from various carbon dioxide emission or concentration scenarios, but there has been relatively little quantitative analysis of how long it takes to feel the consequences of an individual carbon dioxide emission. Using conjoined results of recent carbon-cycle and physical-climate model intercomparison projects, we find the median time between an emission and maximum warming is 10.1 years, with a 90% probability range of 6.6 to 30.7 years. We evaluate uncertainties in timing and amount of warming, partitioning them into three contributing factors: carbon cycle, climate sensitivity and ocean thermal inertia. To characterize the carbon cycle uncertainty associated with the global temperature response to a carbon dioxide emission today, we use fits to the time series of carbon dioxide concentrations from a CO2-impulse response function model intercomparison project's 15 ensemble members (1). To characterize both the uncertainty in climate sensitivity and in the thermal inertia of the climate system, we use fits to the time series of global temperature change from the Coupled Model Intercomparison Project phase 5 (CMIP5; 2) abrupt4xco2 experiment's 20 ensemble members, separating the effects of each uncertainty factor using one of two simple physical models for each CMIP5 climate model. This yields 6,000 possible combinations of these three factors using a standard convolution integral approach. Our results indicate that benefits of avoided climate damage from avoided CO2 emissions will be manifested within the lifetimes of people who acted to avoid that emission. While the relevant time lags imposed by the climate system are substantially shorter than a human lifetime, they are substantially longer than the typical political election cycle, making the delay and its associated uncertainties both economically and politically significant. References: 1. Joos F et al. (2013) Carbon dioxide and climate impulse response functions for the computation of greenhouse gas metrics: a multi-model analysis. Atmos Chem Phys 13:2793-2825. 2. Taylor KE, Stouffer RJ, Meehl GA (2011) An Overview of CMIP5 and the Experiment Design. Bull Am Meteorol Soc 93:485-498.
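The convolution approach can be illustrated with a toy impulse-response calculation: the forcing from the remaining airborne CO2 is convolved with an exponential climate response kernel, and the time of maximum warming is read off. The timescales below are illustrative stand-ins, not the fitted values from the intercomparison projects, yet they reproduce a roughly decade-scale lag.

```python
import numpy as np

dt = 0.1
t = np.arange(0.0, 100.0, dt)                 # years after the emission

# Toy setup (all timescales illustrative): the CO2 pulse decays toward
# a long-lived airborne fraction; the climate responds to that forcing
# with ocean thermal inertia.
airborne = 0.25 + 0.75 * np.exp(-t / 40.0)    # forcing from remaining CO2
response = np.exp(-t / 4.0) / 4.0             # climate response kernel

# Discrete convolution of forcing with the response kernel.
warming = np.convolve(airborne, response)[: len(t)] * dt
print(f"Maximum warming about {t[np.argmax(warming)]:.0f} years after emission")
```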
A fresh look at the Last Glacial Maximum using Paleoclimate Data Assimilation
NASA Astrophysics Data System (ADS)
Malevich, S. B.; Tierney, J. E.; Hakim, G. J.; Tardif, R.
2017-12-01
Quantifying climate conditions during the Last Glacial Maximum (~21 ka) can help us to understand climate responses to forcing and climate states that are poorly represented in the instrumental record. Paleoclimate proxies may be used to estimate these climate conditions, but proxies are sparsely distributed and possess uncertainties from environmental and biogeochemical processes. Alternatively, climate model simulations provide a full-field view, but may predict unrealistic climate states or states not faithful to proxy records. Here, we use data assimilation - combining climate proxy records with a theoretical understanding from climate models - to produce field reconstructions of the LGM that leverage the information from both data and models. To date, data assimilation has mainly been used to produce reconstructions of climate fields through the last millennium. We expand this approach in order to produce climate fields for the Last Glacial Maximum using an ensemble Kalman filter assimilation. Ensemble samples were formed from output from multiple models including CCSM3, CESM2.1, and HadCM3. These model simulations are combined with marine sediment proxies for upper ocean temperature (TEX86, UK'37, Mg/Ca and δ18O of foraminifera), utilizing forward models based on a newly developed suite of Bayesian proxy system models. We also incorporate age model and radiocarbon reservoir uncertainty into our reconstructions using Bayesian age modeling software. The resulting fields show familiar patterns based on comparison with previous proxy-based reconstructions, but additionally reveal novel patterns of large-scale shifts in ocean-atmosphere dynamics, as the surface temperature data inform upon atmospheric circulation and precipitation patterns.
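A minimal stochastic ensemble Kalman filter analysis step, of the general kind used in such reconstructions, is sketched below with a linear observation operator; it is a toy, not the paper's proxy-system pipeline, and all dimensions and values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def enkf_update(ensemble, y_obs, H, obs_var):
    """Stochastic EnKF analysis step: nudge each prior ensemble member
    toward a perturbed observation using the sample Kalman gain."""
    n_state, n_ens = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)              # obs-space anomalies
    P_hh = (HA @ HA.T) / (n_ens - 1) + obs_var * np.eye(len(y_obs))
    P_xh = (X @ HA.T) / (n_ens - 1)
    K = P_xh @ np.linalg.inv(P_hh)                        # Kalman gain
    y_pert = y_obs[:, None] + np.sqrt(obs_var) * rng.standard_normal(
        (len(y_obs), n_ens))
    return ensemble + K @ (y_pert - HX)

prior = 2.0 + rng.standard_normal((10, 50))   # 10 state vars, 50 members
H = np.zeros((1, 10)); H[0, 3] = 1.0          # observe state variable 3
post = enkf_update(prior, np.array([3.5]), H, obs_var=0.25)
print(prior[3].mean(), post[3].mean())        # mean is pulled toward 3.5
```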
NASA Astrophysics Data System (ADS)
Denissenkov, Pavel; Perdikakis, Georgios; Herwig, Falk; Schatz, Hendrik; Ritter, Christian; Pignatari, Marco; Jones, Samuel; Nikas, Stylianos; Spyrou, Artemis
2018-05-01
The first-peak s-process elements Rb, Sr, Y and Zr in the post-AGB star Sakurai's object (V4334 Sagittarii) have been proposed to be the result of i-process nucleosynthesis in a post-AGB very-late thermal pulse event. We estimate the nuclear physics uncertainties in the i-process model predictions to determine whether the remaining discrepancies with observations are significant and point to potential issues with the underlying astrophysical model. We find that the dominant source of the nuclear physics uncertainties is the prediction of neutron capture rates on unstable neutron-rich nuclei, which can have uncertainties of more than a factor of 20 in the band of the i-process. We use a Monte Carlo variation of 52 neutron capture rates and a 1D multi-zone post-processing model for the i-process in Sakurai's object to determine the cumulative effect of these uncertainties on the final elemental abundance predictions. We find that the nuclear physics uncertainties are large and comparable to observational errors. Within these uncertainties the model predictions are consistent with observations. A correlation analysis of the results of our MC simulations reveals that the strongest impact on the predicted abundances of Rb, Sr, Y and Zr is made by the uncertainties in the (n, γ) reaction rates of 85Br, 86Br, 87Kr, 88Kr, 89Kr, 89Rb, 89Sr, and 92Sr. This conclusion is supported by a series of multi-zone simulations in which we increased and decreased one or two reaction rates per run to their maximum and minimum limits. We also show that simple and fast one-zone simulations should not be used instead of more realistic multi-zone stellar simulations for nuclear sensitivity and uncertainty studies of convective–reactive processes. Our findings apply more generally to any i-process site with similar neutron exposure, such as rapidly accreting white dwarfs with near-solar metallicities.
Parameter uncertainty analysis of a biokinetic model of caesium
Li, W. B.; Klein, W.; Blanchardon, Eric; ...
2014-04-17
Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. The methods of parameter uncertainty analysis were used to assess the uncertainties of model predictions with the assumptions of model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as a square root of the ratio between the 97.5th and 2.5th percentiles) of blood clearance, whole-body retention and urinary excretion of Cs predicted at earlier times after intake were, respectively: 1.5, 1.0 and 2.5 at the first day; 1.8, 1.1 and 2.4 at Day 10 and 1.8, 2.0 and 1.8 at Day 100; for the late times (1000 d) after intake, the UFs increased to 43, 24 and 31, respectively. The model parameters of transfer rates between kidneys and blood and between muscle and blood, and the rate of transfer from kidneys to urinary bladder content, are the most influential for the blood clearance and the whole-body retention of Cs. For the urinary excretion, the parameters of transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content have the greatest impact. The implication and effect on the estimated equivalent and effective doses of the larger uncertainty of 43 in whole-body retention at later times, say, after Day 500, will be explored in a successive work in the framework of EURADOS.
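The uncertainty factor defined above is straightforward to compute from Monte Carlo output; the retention samples below are a lognormal stand-in, not the model's actual predictions.

```python
import numpy as np

def uncertainty_factor(samples):
    """UF = sqrt(97.5th percentile / 2.5th percentile), per the paper."""
    lo, hi = np.percentile(samples, [2.5, 97.5])
    return np.sqrt(hi / lo)

rng = np.random.default_rng(3)
# Stand-in Monte Carlo output: lognormal whole-body retention values at
# one time point (positive quantities suit a ratio-based UF).
retention = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)
print(f"UF = {uncertainty_factor(retention):.2f}")
```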
NASA Astrophysics Data System (ADS)
Witte, Jacquelyn C.; Thompson, Anne M.; Smit, Herman G. J.; Vömel, Holger; Posny, Françoise; Stübi, Rene
2018-03-01
Reprocessed ozonesonde data from eight SHADOZ (Southern Hemisphere ADditional OZonesondes) sites have been used to derive the first analysis of uncertainty estimates for both profile and total column ozone (TCO). The ozone uncertainty is a composite of the uncertainties of the individual terms in the ozone partial pressure (PO3) equation, those being the ozone sensor current, background current, internal pump temperature, pump efficiency factors, conversion efficiency, and flow rate. Overall, PO3 uncertainties (ΔPO3) are within 15% and peak around the tropopause (15 ± 3 km) where ozone is a minimum and ΔPO3 approaches the measured signal. The uncertainty in the background and sensor currents dominates the overall ΔPO3 in the troposphere including the tropopause region, while the uncertainties in the conversion efficiency and flow rate dominate in the stratosphere. Seasonally, ΔPO3 generally reaches a maximum in March-May, with the exception of SHADOZ sites in Asia, for which the highest ΔPO3 occurs in September-February. As a first approach, we calculate sonde TCO uncertainty (ΔTCO) by integrating the profile ΔPO3 and adding the ozone residual uncertainty, derived from the McPeters and Labow (2012, doi:10.1029/2011JD017006) 1σ ozone mixing ratios. Overall, ΔTCO values are within ±15 Dobson units (DU), representing 5-6% of the TCO. Total Ozone Mapping Spectrometer and Ozone Monitoring Instrument (TOMS and OMI) satellite overpasses are generally within the sonde ΔTCO. However, there is a discontinuity between TOMS v8.6 (1998 to September 2004) and OMI (October 2004-2016) TCO on the order of 10 DU that accounts for the significant 16 DU overall difference observed between sonde and TOMS.
Cost Recommendation under Uncertainty in IQWiG's Efficiency Frontier Framework.
Corro Ramos, Isaac; Lhachimi, Stefan K; Gerber-Grote, Andreas; Al, Maiwenn J
2017-02-01
The National Institute for Quality and Efficiency in Health Care (IQWiG) employs an efficiency frontier (EF) framework to facilitate setting maximum reimbursable prices for new interventions. Probabilistic sensitivity analysis (PSA) is used when yes/no reimbursement decisions are sought based on a fixed threshold. In the IQWiG framework, an additional layer of complexity arises as the EF itself may vary its shape in each PSA iteration, and thus the willingness-to-pay, indicated by the EF segments, may vary. The aim was to explore the practical problems arising when, within the EF approach, maximum reimbursable prices for new interventions are sought through PSA. When the EF is varied in a PSA, cost recommendations for new interventions may be determined by the mean or the median of the distances between each intervention's point estimate and each EF. Implications of using these metrics were explored in a simulation study based on the model used by IQWiG to assess the cost-effectiveness of 4 antidepressants. Depending on the metric used, cost recommendations can be contradictory. Recommendations based on the mean can also be inconsistent. Results (median) suggested that costs of duloxetine, venlafaxine, mirtazapine, and bupropion should be decreased by €131, €29, €12, and €99, respectively. These recommendations were implemented and the analysis repeated. The new results suggested keeping the costs as they were. The percentage of acceptable PSA outcomes increased by 41% on average, and the uncertainty associated with the net health benefit was significantly reduced. The median of the distances between every intervention outcome and every EF is a good proxy for the cost recommendation that would be given should the EF be fixed. Adjusting costs according to the median increased the probability of acceptance and reduced the uncertainty around the net health benefit distribution, resulting in reduced uncertainty for decision makers.
Analyzing ROC curves using the effective set-size model
NASA Astrophysics Data System (ADS)
Samuelson, Frank W.; Abbey, Craig K.; He, Xin
2018-03-01
The Effective Set-Size model has been used to describe uncertainty in various signal detection experiments. The model regards images as if they were an effective number (M*) of searchable locations, where the observer treats each location as a location-known-exactly detection task with signals having average detectability d'. The model assumes a rational observer behaves as if he searches an effective number of independent locations and follows signal detection theory at each location. Thus the location-known-exactly detectability (d') and the effective number of independent locations M* fully characterize search performance. In this model the image rating in a single-response task is assumed to be the maximum response that the observer would assign to these many locations. The model has been used by a number of other researchers, and is well corroborated. We examine this model as a way of differentiating imaging tasks that radiologists perform. Tasks involving more searching or location uncertainty may have higher estimated M* values. In this work we applied the Effective Set-Size model to a number of medical imaging data sets. The data sets include radiologists reading screening and diagnostic mammography with and without computer-aided diagnosis (CAD), and breast tomosynthesis. We developed an algorithm to fit the model parameters using two-sample maximum-likelihood ordinal regression, similar to the classic bi-normal model. The resulting model ROC curves are rational and fit the observed data well. We find that the distributions of M* and d' differ significantly among these data sets, and differ between pairs of imaging systems within studies. For example, on average tomosynthesis increased readers' d' values, while CAD reduced the M* parameters. We demonstrate that the model parameters M* and d' are correlated. We conclude that the Effective Set-Size model may be a useful way of differentiating location uncertainty from the diagnostic uncertainty in medical imaging tasks.
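The max-of-M* rating rule is easy to simulate. The sketch below draws location responses, applies the rule, and estimates an AUC empirically; the parameter values are arbitrary, and this is a simulation of the model's forward direction, not the two-sample ordinal-regression fit described above.

```python
import numpy as np

rng = np.random.default_rng(11)

def ratings(n_images, m_star, d_prime, signal_present):
    """Image rating = maximum of the M* location responses; on
    signal-present images one location's response is shifted by d'."""
    r = rng.standard_normal((n_images, m_star))
    if signal_present:
        r[:, 0] += d_prime
    return r.max(axis=1)

m_star, d_prime = 8, 2.0
absent = ratings(20_000, m_star, d_prime, signal_present=False)
present = ratings(20_000, m_star, d_prime, signal_present=True)

# Empirical AUC: probability that a random signal-present image
# outscores a random signal-absent image.
auc = (present[:, None] > absent[None, :500]).mean()
print(f"M* = {m_star}, d' = {d_prime}: AUC ~ {auc:.3f}")
```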
A robust method to forecast volcanic ash clouds
Denlinger, Roger P.; Pavolonis, Mike; Sieglaff, Justin
2012-01-01
Ash clouds emanating from volcanic eruption columns often form trails of ash extending thousands of kilometers through the Earth's atmosphere, disrupting air traffic and posing a significant hazard to air travel. To mitigate such hazards, the community charged with reducing flight risk must accurately assess risk of ash ingestion for any flight path and provide robust forecasts of volcanic ash dispersal. In response to this need, a number of different transport models have been developed for this purpose and applied to recent eruptions, providing a means to assess uncertainty in forecasts. Here we provide a framework for optimal forecasts and their uncertainties given any model and any observational data. This involves random sampling of the probability distributions of input (source) parameters to a transport model and iteratively running the model with different inputs, each time assessing the predictions that the model makes about ash dispersal by direct comparison with satellite data. The results of these comparisons are embodied in a likelihood function whose maximum corresponds to the minimum misfit between model output and observations. Bayes theorem is then used to determine a normalized posterior probability distribution and from that a forecast of future uncertainty in ash dispersal. The nature of ash clouds in heterogeneous wind fields creates a strong maximum likelihood estimate in which most of the probability is localized to narrow ranges of model source parameters. This property is used here to accelerate probability assessment, producing a method to rapidly generate a prediction of future ash concentrations and their distribution based upon assimilation of satellite data as well as model and data uncertainties. Applying this method to the recent eruption of Eyjafjallajökull in Iceland, we show that the 3 and 6 h forecasts of ash cloud location probability encompassed the location of observed satellite-determined ash cloud loads, providing an efficient means to assess all of the hazards associated with these ash clouds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ensslin, Torsten A.; Frommert, Mona
2011-05-15
The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power-spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power-spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loève and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.
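The common core of the filters above is a per-mode Wiener operation, W = S/(S+N). The sketch below applies it with the signal and noise spectra assumed known, which sidesteps the spectrum-estimation problem the paper actually addresses; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n_modes = 512

# Per-mode signal and noise variances (spectra), assumed known here;
# approaches (i)-(v) above differ in how the signal spectrum is
# instead inferred from the same data.
k = np.arange(1, n_modes + 1)
S = 100.0 / k**2                    # red signal spectrum
N = np.ones(n_modes)                # white noise spectrum

signal = np.sqrt(S) * rng.standard_normal(n_modes)
data = signal + np.sqrt(N) * rng.standard_normal(n_modes)

# Wiener filter: per-mode shrinkage by the signal fraction S/(S+N).
recon = (S / (S + N)) * data
print(np.mean((data - signal) ** 2),    # raw data error
      np.mean((recon - signal) ** 2))   # smaller reconstruction error
```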
Incubation period of ebola hemorrhagic virus subtype zaire.
Eichner, Martin; Dowell, Scott F; Firese, Nina
2011-06-01
Ebola hemorrhagic fever has killed over 1300 people, mostly in equatorial Africa. There is still uncertainty about the natural reservoir of the virus and about some of the factors involved in disease transmission. Until now, a maximum incubation period of 21 days has been assumed. We analyzed data collected during the Ebola outbreak (subtype Zaire) in Kikwit, Democratic Republic of the Congo, in 1995 using maximum likelihood inference and assuming a log-normally distributed incubation period. The mean incubation period was estimated to be 12.7 days (standard deviation 4.31 days), indicating that about 4.1% of patients may have incubation periods longer than 21 days. If the risk of new cases is to be reduced to 1% then 25 days should be used when investigating the source of an outbreak, when determining the duration of surveillance for contacts, and when declaring the end of an outbreak.
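The quoted tail probabilities can be roughly reproduced by moment-matching a log-normal distribution to the reported mean and standard deviation; the authors fit by maximum likelihood to the case data, so this is only an approximate reconstruction.

```python
import numpy as np
from scipy.stats import lognorm

# Moment-match a log-normal to the reported mean (12.7 d) and standard
# deviation (4.31 d) of the incubation period.
m, s = 12.7, 4.31
sigma2 = np.log(1.0 + (s / m) ** 2)
mu = np.log(m) - sigma2 / 2.0
dist = lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))

print(f"P(incubation > 21 d) = {dist.sf(21):.3f}")   # roughly 4-5%
print(f"P(incubation > 25 d) = {dist.sf(25):.3f}")   # near the 1% target
```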
Trilateral interlaboratory comparison with SSL (WLEDi) luminaire
NASA Astrophysics Data System (ADS)
Burini Junior, E. C.; Santos, E. R.; Assaf, L. O.
2018-03-01
The IEE/USP laboratory and two others, all belonging to RBLE (Brazilian Network of Test Laboratories), participated in a trilateral comparison in which each laboratory performed its measurements independently, without interaction among the participants. The results of electric and photometric measurements carried out on samples of Solid State Lighting - SSL, Inorganic White Light Emitting Diode (WLEDi) luminaires by three accredited laboratories were considered in order to point out mutual deviations and to assess the confidence achievable in a bilateral comparison. The first analysis revealed a maximum deviation of 4.2 % between the luminous intensity attributed by one laboratory and the arithmetic mean value from the three laboratories. The largest standard uncertainty value, 1.9 %, was estimated for the Total Harmonic Distortion of electric current (THDi), and the lowest, 0.4 %, for the luminous flux. The largest deviation for a single parameter's results was 7.2 %, at maximum luminous intensity, and the lowest was 1.7 %, for luminous flux.
The neural representation of unexpected uncertainty during value-based decision making.
Payzan-LeNestour, Elise; Dunne, Simon; Bossaerts, Peter; O'Doherty, John P
2013-07-10
Uncertainty is an inherent property of the environment and a central feature of models of decision-making and learning. Theoretical propositions suggest that one form, unexpected uncertainty, may be used to rapidly adapt to changes in the environment, while being influenced by two other forms: risk and estimation uncertainty. While previous studies have reported neural representations of estimation uncertainty and risk, relatively little is known about unexpected uncertainty. Here, participants performed a decision-making task while undergoing functional magnetic resonance imaging (fMRI), which, in combination with a Bayesian model-based analysis, enabled us to separately examine each form of uncertainty. We found representations of unexpected uncertainty in multiple cortical areas, as well as the noradrenergic brainstem nucleus locus coeruleus. Other unique cortical regions were found to encode risk, estimation uncertainty, and learning rate. Collectively, these findings support theoretical models in which several formally separable uncertainty computations determine the speed of learning. Copyright © 2013 Elsevier Inc. All rights reserved.
Using cost-benefit concepts in design floods improves communication of uncertainty
NASA Astrophysics Data System (ADS)
Ganora, Daniele; Botto, Anna; Laio, Francesco; Claps, Pierluigi
2017-04-01
Flood frequency analysis, i.e. the study of the relationships between the magnitude and the rarity of high flows in a river, is the usual procedure adopted to assess flood hazard, preliminary to the plan/design of flood protection measures. It is grounded in fitting a probability distribution to the peak discharge values recorded at gauging stations, so the final estimates over a region are affected by uncertainty, due to the limited sample availability and to the possible alternatives in terms of the probabilistic model and the parameter estimation methods used. In the last decade, the scientific community dealt with this issue by developing a number of methods to quantify such uncertainty components. Usually, uncertainty is visually represented through confidence bands, which are easy to understand, but have not yet been demonstrated to be useful for design purposes: they usually disorient decision makers, as the design flood is no longer univocally defined, making the decision process undetermined. These considerations motivated the development of the uncertainty-compliant design flood estimator (UNCODE) procedure (Botto et al., 2014) that allows one to select meaningful flood design values accounting for the associated uncertainty by considering additional constraints based on cost-benefit criteria. This method suggests an explicit multiplication factor that corrects the traditional (without uncertainty) design flood estimates to incorporate the effects of uncertainty in the estimate at the same safety level. Even though the UNCODE method was developed for design purposes, it can represent a powerful and robust tool to help clarify the effects of uncertainty in statistical estimation. As the procedure produces increased design flood estimates, this outcome demonstrates how uncertainty leads to more expensive flood protection measures, or to insufficiency of current defenses. Moreover, the UNCODE approach can be used to assess the "value" of data, as the costs of flood prevention can be brought down by reducing uncertainty with longer observed flood records. As the multiplication factor is dimensionless, the examples of application provided show how this approach allows simple comparisons of the effects of uncertainty in different catchments, helping to build ranking procedures for planning purposes. REFERENCES Botto, A., Ganora, D., Laio, F., and Claps, P.: Uncertainty compliant design flood estimation, Water Resources Research, 50, doi:10.1002/2013WR014981, 2014.
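The cost-benefit logic behind such a design value can be sketched as a minimization of construction cost plus expected damage cost over candidate return periods; the flow distribution and all cost figures below are hypothetical, and this is an illustration of the general idea, not the UNCODE procedure itself.

```python
import numpy as np
from scipy.stats import gumbel_r

# Hypothetical annual-maximum flow distribution (m3/s).
dist = gumbel_r(loc=100.0, scale=30.0)

T = np.arange(10, 501)                    # candidate return periods (yr)
q_design = dist.ppf(1.0 - 1.0 / T)        # design flood for each T

construction = 2.0 * q_design             # cost rising with design size
expected_damage = (1.0 / T) * 300.0       # mean annual damage if exceeded
lifetime = 50.0                           # project lifetime (yr)

total = construction + lifetime * expected_damage
best = T[np.argmin(total)]
print(f"Cost-optimal design return period: {best} years")
```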
Schwartz, D.P.; Joyner, W.B.; Stein, R.S.; Brown, R.D.; McGarr, A.F.; Hickman, S.H.; Bakun, W.H.
1996-01-01
Summary -- The U.S. Geological Survey was requested by the U.S. Department of the Interior to review the design values and the issue of reservoir-induced seismicity for a concrete gravity dam near the site of the previously-proposed Auburn Dam in the western foothills of the Sierra Nevada, central California. The dam is being planned as a flood-control-only dam with the possibility of conversion to a permanent water-storage facility. As a basis for planning studies the U.S. Army Corps of Engineers is using the same design values approved by the Secretary of the Interior in 1979 for the original Auburn Dam. These values were a maximum displacement of 9 inches on a fault intersecting the dam foundation, a maximum earthquake at the site of magnitude 6.5, a peak horizontal acceleration of 0.64 g, and a peak vertical acceleration of 0.39 g. In light of geological and seismological investigations conducted in the western Sierran foothills since 1979 and advances in the understanding of how earthquakes are caused and how faults behave, we have developed the following conclusions and recommendations: Maximum Displacement. Neither the pre-1979 nor the recent observations of faults in the Sierran foothills precisely define the maximum displacement per event on a fault intersecting the dam foundation. Available field data and our current understanding of surface faulting indicate a range of values for the maximum displacement. This may require the consideration of a design value larger than 9 inches. We recommend reevaluation of the design displacement using current seismic hazard methods that incorporate uncertainty into the estimate of this design value. Maximum Earthquake Magnitude. There are no data to indicate that a significant change is necessary in the use of an M 6.5 maximum earthquake to estimate design ground motions at the dam site. However, there is a basis for estimating a range of maximum magnitudes using recent field information and new statistical fault relations. We recommend reevaluating the maximum earthquake magnitude using current seismic hazard methodology. Design Ground Motions. A large number of strong-motion records have been acquired and significant advances in understanding of ground motion have been achieved since the original evaluations. The design value for peak horizontal acceleration (0.64 g) is larger than the median of one recent study and smaller than the median value of another. The value for peak vertical acceleration (0.39 g) is somewhat smaller than median values of two recent studies. We recommend a reevaluation of the design ground motions that takes into account new ground motion data with particular attention to rock sites at small source distances. Reservoir-Induced Seismicity. The potential for reservoir-induced seismicity must be considered for the Auburn Dam project. A reservoir-induced earthquake is not expected to be larger than the maximum naturally occurring earthquake. However, the probability of an earthquake may be enhanced by reservoir impoundment. A flood-control-only project may involve a lower probability of significant induced seismicity than a multipurpose water-storage dam. There is a need to better understand and quantify the likelihood of this hazard. A methodology should be developed to quantify the potential for reservoir-induced seismicity using seismicity data from the Sierran foothills, new worldwide observations of induced and triggered seismicity, and current understanding of the earthquake process.
Reevaluation of Design Parameters. The reevaluation of the maximum displacement, maximum magnitude earthquake, and design ground motions can be made using available field observations from the Sierran foothills, updated statistical relations for faulting and ground motions, and current computational seismic hazard methodologies that incorporate uncertainty into the analysis. The reevaluation does not require significant new geological field studies.
Fast automated analysis of strong gravitational lenses with convolutional neural networks
NASA Astrophysics Data System (ADS)
Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.
2017-08-01
Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
Eddington's demon: inferring galaxy mass functions and other distributions from uncertain data
NASA Astrophysics Data System (ADS)
Obreschkow, D.; Murray, S. G.; Robotham, A. S. G.; Westmeier, T.
2018-03-01
We present a general modified maximum likelihood (MML) method for inferring generative distribution functions from uncertain and biased data. The MML estimator is identical to, but easier and many orders of magnitude faster to compute than the solution of the exact Bayesian hierarchical modelling of all measurement errors. As a key application, this method can accurately recover the mass function (MF) of galaxies, while simultaneously dealing with observational uncertainties (Eddington bias), complex selection functions and unknown cosmic large-scale structure. The MML method is free of binning and natively accounts for small number statistics and non-detections. Its fast implementation in the R-package dftools is equally applicable to other objects, such as haloes, groups, and clusters, as well as observables other than mass. The formalism readily extends to multidimensional distribution functions, e.g. a Choloniewski function for the galaxy mass-angular momentum distribution, also handled by dftools. The code provides uncertainties and covariances for the fitted model parameters and approximate Bayesian evidences. We use numerous mock surveys to illustrate and test the MML method, as well as to emphasize the necessity of accounting for observational uncertainties in MFs of modern galaxy surveys.
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.
2016-12-01
Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(x_t | x_{t-1}). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.
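As a toy illustration of learning an error distribution from a training period rather than assuming Gaussian statistics, the sketch below bins a state variable and fits a kernel density estimate of the model error per bin. The binning scheme, the heteroscedastic synthetic errors, and all names are assumptions of this sketch, not the authors' framework.
```python
# Toy sketch: during a training period, learn the distribution of model errors
# conditioned (here, crudely, by binning) on the state, then sample from it
# instead of assuming Gaussian noise. Illustrative only.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Training data: simulated state and the corresponding model error.
state = rng.uniform(0.0, 10.0, 2000)
error = 0.3 * state * rng.standard_normal(2000)   # heteroscedastic, synthetic

# Build one KDE of the error per state bin.
bins = np.linspace(0.0, 10.0, 6)
kdes = []
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (state >= lo) & (state < hi)
    kdes.append(gaussian_kde(error[sel]))

def sample_error(x, n=1):
    """Draw model-error samples conditioned on the state value x."""
    i = min(max(np.searchsorted(bins, x) - 1, 0), len(kdes) - 1)
    return kdes[i].resample(n)[0]

print(sample_error(8.0, 5))   # wider spread than at low state values
```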
Applications of flood depth from rapid post-event footprint generation
NASA Astrophysics Data System (ADS)
Booth, Naomi; Millinship, Ian
2015-04-01
Immediately following large flood events, an indication of the area flooded (i.e. the flood footprint) can be extremely useful for evaluating potential impacts on exposed property and infrastructure. Specifically, such information can help insurance companies estimate overall potential losses, deploy claims adjusters and ultimately assist the timely payment of due compensation to the public. Developing these datasets from remotely sensed products seems like an obvious choice. However, there are a number of important drawbacks which limit their utility in the context of flood risk studies. For example, external agencies have no control over the region that is surveyed, the time at which it is surveyed (which is important as the maximum extent would ideally be captured), and how freely accessible the outputs are. Moreover, the spatial resolution of these datasets can be low, and considerable uncertainties in the flood extents exist where dry surfaces give similar return signals to water. Most importantly of all, flood depths are required to estimate potential damages, but generally cannot be estimated from satellite imagery alone. In response to these problems, we have developed an alternative methodology for developing high-resolution footprints of maximum flood extent which do contain depth information. For a particular event, once reports of heavy rainfall are received, we begin monitoring real-time flow data and extracting peak values across affected areas. Next, using statistical extreme value analyses of historic flow records at the same measured locations, the return periods of the maximum event flow at each gauged location are estimated. These return periods are then interpolated along each river and matched to JBA's high-resolution hazard maps, which already exist for a series of design return periods. The extent and depth of flooding associated with the event flow is extracted from the hazard maps to create a flood footprint. Georeferenced ground, aerial and satellite images are used to establish defence integrity, highlight breach locations and validate our footprint. We have implemented this method to create seven flood footprints, including river flooding in central Europe and coastal flooding associated with Storm Xaver in the UK (both in 2013). The inclusion of depth information allows damages to be simulated and compared to actual damage and resultant loss which become available after the event. In this way, we can evaluate depth-damage functions used in catastrophe models and reduce their associated uncertainty. In further studies, the depth data could be used at an individual property level to calibrate property-type-specific depth-damage functions.
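The return-period step of such a workflow can be sketched as follows: fit an extreme-value distribution to historic annual-maximum flows at a gauge, then convert an observed event peak into an annual exceedance probability and return period. The GEV family, the synthetic flow record, and the event value below are illustrative assumptions, not JBA's operational procedure.
```python
# Hedged sketch of a return-period calculation: fit a GEV distribution to
# historic annual-maximum flows, then convert an event peak into a return
# period T = 1 / P(exceedance). Data are synthetic.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
annual_max_flow = genextreme.rvs(c=-0.1, loc=300.0, scale=80.0,
                                 size=60, random_state=rng)   # m^3/s, synthetic

params = genextreme.fit(annual_max_flow)       # (shape, loc, scale)
event_peak = 650.0                             # observed event flow, m^3/s

# Annual exceedance probability and return period of the event peak.
p_exceed = genextreme.sf(event_peak, *params)
return_period = 1.0 / p_exceed
print(f"Estimated return period: {return_period:.0f} years")
```
The interpolated return periods along each river are then matched to the pre-computed design-return-period hazard maps to extract extent and depth.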
NASA Technical Reports Server (NTRS)
Wilson, John W.; Nealy, John E.; Schimmerling, Walter; Cucinotta, Francis A.; Wood, James S.
1993-01-01
Some consequences of uncertainties in radiobiological risk due to galactic cosmic ray (GCR) exposure are analyzed for their effect on engineering designs for the first lunar outpost and a mission to explore Mars. This report presents the plausible effect of biological uncertainties, the design changes necessary to reduce the uncertainties to acceptable levels for a safe mission, and an evaluation of the mission redesign cost. Estimates of the amount of shield mass required to compensate for radiobiological uncertainty are given for a simplified vehicle and habitat. The additional amount of shield mass required to provide a safety factor for uncertainty compensation is calculated from the expected response to GCR exposure. The amount of shield mass increases greatly across the estimated range of biological uncertainty, thus escalating the estimated cost of the mission. The estimates are used as a quantitative example for the cost-effectiveness of research in radiation biophysics and radiation physics.
Generalized uncertainty principle and the maximum mass of ideal white dwarfs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rashidi, Reza, E-mail: reza.rashidi@srttu.edu
The effects of a generalized uncertainty principle on the structure of an ideal white dwarf star are investigated. The equation describing the equilibrium configuration of the star is a generalized form of the Lane–Emden equation. It is proved that the star always has a finite size. It is then argued that the maximum mass of such an ideal white dwarf tends to infinity, as opposed to the conventional case where it has a finite value.
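For orientation, the conventional (non-GUP) problem reduces to the Lane–Emden equation θ'' + (2/ξ)θ' + θ^n = 0 with θ(0) = 1, θ'(0) = 0; for the relativistic polytrope n = 3, its first zero underlies the finite Chandrasekhar mass. The sketch below integrates only this standard form; the paper's generalized equation is not reproduced here.
```python
# Integrate the conventional Lane-Emden equation for n = 3 and locate the
# stellar surface (first zero of theta). The GUP-generalised form studied in
# the paper is not reproduced here.
import numpy as np
from scipy.integrate import solve_ivp

n = 3.0

def lane_emden(xi, y):
    theta, dtheta = y
    return [dtheta, -max(theta, 0.0) ** n - 2.0 * dtheta / xi]

def surface(xi, y):          # event: theta = 0 marks the stellar surface
    return y[0]
surface.terminal = True

sol = solve_ivp(lane_emden, (1e-6, 20.0), [1.0, 0.0],
                events=surface, rtol=1e-9, atol=1e-10)
xi1 = sol.t_events[0][0]
dtheta1 = sol.y_events[0][0][1]
print(f"xi_1 = {xi1:.4f}, -xi1^2 theta'(xi1) = {-xi1**2 * dtheta1:.4f}")
# Known n = 3 values: xi_1 ~ 6.8968 and -xi1^2 theta'(xi1) ~ 2.0182
```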
NASA Astrophysics Data System (ADS)
Thomsen, N. I.; Troldborg, M.; McKnight, U. S.; Binning, P. J.; Bjerg, P. L.
2012-04-01
Mass discharge estimates are increasingly being used in the management of contaminated sites. Such estimates have proven useful for supporting decisions related to the prioritization of contaminated sites in a groundwater catchment. Potential management options can be categorised as follows: (1) leave as is, (2) clean up, or (3) further investigation needed. However, mass discharge estimates are often very uncertain, which may hamper the management decisions. If option 1 is incorrectly chosen, soil and water quality will decrease, threatening or destroying drinking water resources. The risk of choosing option 2 is spending money on remediating a site that does not pose a problem. Choosing option 3 will often be safest, but may not be the optimal economic solution. Quantification of the uncertainty in mass discharge estimates can therefore greatly improve the foundation for selecting the appropriate management option. The uncertainty of mass discharge estimates depends greatly on the extent of the site characterization. A good approach for uncertainty estimation will be flexible with respect to the investigation level, and account for both parameter and conceptual model uncertainty. We propose a method for quantifying the uncertainty of dynamic mass discharge estimates from contaminant point sources on the local scale. The method considers both parameter and conceptual uncertainty through a multi-model approach. The multi-model approach evaluates multiple conceptual models for the same site. The different conceptual models consider different source characterizations and hydrogeological descriptions. The idea is to include a set of essentially different conceptual models where each model is believed to be a realistic representation of the given site, based on the current level of information. Parameter uncertainty is quantified using Monte Carlo simulations. For each conceptual model we calculate a transient mass discharge estimate with uncertainty bounds resulting from the parametric uncertainty. To quantify the conceptual uncertainty from a given site, we combine the outputs from the different conceptual models using Bayesian model averaging. The weight for each model is obtained by integrating available data and expert knowledge using Bayesian belief networks. The multi-model approach is applied to a contaminated site. At the site a DNAPL (dense non-aqueous phase liquid) spill consisting of PCE (perchloroethylene) has contaminated a fractured clay till aquitard overlying a limestone aquifer. The exact shape and nature of the source is unknown and so is the importance of transport in the fractures. The result of the multi-model approach is a visual representation of the uncertainty of the mass discharge estimates for the site, which can be used to support the management options.
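The combination step described above can be sketched in a few lines: Monte Carlo ensembles per conceptual model, mixed according to model weights. The three model names, the lognormal discharge distributions, and the weights below are illustrative assumptions; in the study the weights come from Bayesian belief networks, which this toy code does not reproduce.
```python
# Minimal sketch of the multi-model idea: Monte Carlo parameter uncertainty
# within each conceptual model, then Bayesian model averaging across models.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Mass discharge (g/yr) ensembles from three hypothetical conceptual models,
# each with its own (here lognormal) parameter uncertainty.
models = {
    "matrix-dominated":   rng.lognormal(mean=np.log(50.0),  sigma=0.5, size=n),
    "fracture-dominated": rng.lognormal(mean=np.log(200.0), sigma=0.8, size=n),
    "mixed":              rng.lognormal(mean=np.log(120.0), sigma=0.6, size=n),
}
weights = {"matrix-dominated": 0.2, "fracture-dominated": 0.3, "mixed": 0.5}

# BMA predictive distribution: mixture of the model ensembles.
counts = {k: int(round(w * n)) for k, w in weights.items()}
bma = np.concatenate([rng.choice(models[k], size=c) for k, c in counts.items()])

print("BMA mean discharge: %.1f g/yr" % bma.mean())
print("90%% interval: %.1f - %.1f g/yr" % tuple(np.percentile(bma, [5, 95])))
```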
Avanasi, Raghavendhran; Shin, Hyeong-Moo; Vieira, Verónica M; Savitz, David A; Bartell, Scott M
2016-01-01
Uncertainty in exposure estimates from models can result in exposure measurement error and can potentially affect the validity of epidemiological studies. We recently used a suite of environmental models and an integrated exposure and pharmacokinetic model to estimate individual perfluorooctanoate (PFOA) serum concentrations and assess the association with preeclampsia from 1990 through 2006 for the C8 Health Project participants. The aims of the current study are to evaluate the impact of uncertainty in estimated PFOA drinking-water concentrations on estimated serum concentrations and on their reported epidemiological association with preeclampsia. For each individual public water district, we used Monte Carlo simulations to vary the year-by-year PFOA drinking-water concentration by randomly sampling from lognormal distributions for random error in the yearly public water district PFOA concentrations, systematic error specific to each water district, and global systematic error in the release assessment (using the estimated concentrations from the original fate and transport model as medians and a range of 2-, 5-, and 10-fold uncertainty). Uncertainty in PFOA water concentrations could cause major changes in estimated serum PFOA concentrations among participants. However, there is relatively little impact on the resulting epidemiological association in our simulations. The contribution of exposure uncertainty to the total uncertainty (including regression parameter variance) ranged from 5% to 31%, and bias was negligible. We found that correlated exposure uncertainty can substantially change estimated PFOA serum concentrations, but results in only minor impacts on the epidemiological association between PFOA and preeclampsia. Avanasi R, Shin HM, Vieira VM, Savitz DA, Bartell SM. 2016. Impact of exposure uncertainty on the association between perfluorooctanoate and preeclampsia in the C8 Health Project population. Environ Health Perspect 124:126-132; http://dx.doi.org/10.1289/ehp.1409044.
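The sampling scheme lends itself to a compact sketch: three multiplicative lognormal error components (yearly random, district-systematic, global-systematic) applied to modelled water concentrations. Mapping a k-fold uncertainty factor to a lognormal sigma via log(k)/1.96 is an assumption of this sketch, as are all synthetic values.
```python
# Sketch of multiplicative lognormal error components applied to modelled
# drinking-water concentrations. The k-fold "uncertainty factor" is mapped to
# the sigma of a lognormal whose 95% range spans k-fold above/below the median
# (an assumption of this sketch).
import numpy as np

rng = np.random.default_rng(2016)
k_fold = 5.0
sigma = np.log(k_fold) / 1.96        # 95% of draws within [median/k, median*k]

years, districts, n_sims = 17, 6, 1000
median_conc = rng.uniform(0.05, 2.0, size=(districts, years))  # ug/L, synthetic

global_err = rng.lognormal(0.0, sigma, size=(n_sims, 1, 1))
district_err = rng.lognormal(0.0, sigma, size=(n_sims, districts, 1))
yearly_err = rng.lognormal(0.0, sigma, size=(n_sims, districts, years))

simulated = median_conc[None, :, :] * global_err * district_err * yearly_err
print("median and 95% band for district 0, year 0:",
      np.percentile(simulated[:, 0, 0], [50, 2.5, 97.5]))
```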
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
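The structure of the inference problem, with uncertain model input represented by a multiplier, can be illustrated with a deliberately simple sampler. The sketch below uses random-walk Metropolis against a toy head model; it is not DREAM (a multi-chain adaptive sampler) and not MODFLOW, and every function and value in it is an illustrative assumption.
```python
# Toy illustration of the multiplier idea: infer a recharge multiplier and a
# conductivity parameter with random-walk Metropolis against synthetic heads.
import numpy as np

rng = np.random.default_rng(3)

def forward(recharge_mult, log_K):
    """Stand-in for a groundwater model: heads from recharge and conductivity."""
    x = np.linspace(0.0, 1.0, 20)
    return 10.0 + recharge_mult * x * (1.0 - x) - np.exp(log_K) * 0.1 * x

truth = forward(1.3, 0.5)
obs = truth + rng.normal(0.0, 0.05, truth.size)

def log_post(theta):
    m, logk = theta
    if not (0.1 < m < 5.0 and -3.0 < logk < 3.0):   # flat prior bounds
        return -np.inf
    resid = obs - forward(m, logk)
    return -0.5 * np.sum((resid / 0.05) ** 2)

theta = np.array([1.0, 0.0])
lp = log_post(theta)
chain = []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis acceptance
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])                      # discard burn-in
print("posterior mean multiplier, log K:", chain.mean(axis=0))
```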
NASA Astrophysics Data System (ADS)
Camacho Suarez, V. V.; Shucksmith, J.; Schellart, A.
2016-12-01
Analytical and numerical models can be used to represent the advection-dispersion processes governing the transport of pollutants in rivers (Fan et al., 2015; Van Genuchten et al., 2013). Simplifications, assumptions and parameter estimations in these models result in various uncertainties within the modelling process and estimations of pollutant concentrations. In this study, we explore both: 1) the structural uncertainty due to the one dimensional simplification of the Advection Dispersion Equation (ADE) and 2) the parameter uncertainty due to the semi empirical estimation of the longitudinal dispersion coefficient. The relative significance of these uncertainties has not previously been examined. By analysing both the relative structural uncertainty of analytical solutions of the ADE, and the parameter uncertainty due to the longitudinal dispersion coefficient via a Monte Carlo analysis, an evaluation of the dominant uncertainties for a case study in the river Chillan, Chile is presented over a range of spatial scales.
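For the parameter-uncertainty component, the 1-D instantaneous-release solution of the ADE, C(x,t) = M/(A sqrt(4 pi D t)) exp(-(x - ut)^2 / (4Dt)), can be propagated through a Monte Carlo sample of the longitudinal dispersion coefficient, for example as below; the lognormal spread for D and all values are illustrative, not the river Chillan data.
```python
# Monte Carlo over the longitudinal dispersion coefficient in the 1-D
# instantaneous-release ADE solution. Values are illustrative.
import numpy as np

rng = np.random.default_rng(11)

M_over_A = 50.0        # released mass per cross-section area, g/m^2
u = 0.4                # mean velocity, m/s
x, t = 2000.0, 4000.0  # observation point (m) and time (s)

def conc(D):
    return M_over_A / np.sqrt(4 * np.pi * D * t) * \
           np.exp(-(x - u * t) ** 2 / (4 * D * t))

# Semi-empirical estimators of D scatter widely; represent that here with a
# lognormal spanning roughly an order of magnitude.
D_samples = rng.lognormal(mean=np.log(5.0), sigma=0.8, size=10_000)  # m^2/s
c = conc(D_samples)
print("concentration 5th/50th/95th percentiles (g/m^3):",
      np.percentile(c, [5, 50, 95]))
```
Comparing the spread of these percentiles with the discrepancy between the 1-D solution and a multi-dimensional model is the essence of weighing parameter against structural uncertainty.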
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby
The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate and society. In current engineering practice, such safety is ensured by the design of infrastructure for the Probable Maximum Precipitation (PMP). Recently, several physics-based numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics has not been investigated and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering wisdom and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to outputs from an ensemble of five CMIP5 models. This hybrid approach is applied in the Pacific Northwest (PNW) to produce ensemble PMP estimation for the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified by comparing them with the traditional estimates. PMP in the PNW will increase by 50% of the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability, with minor contributions from changes in storm efficiency in the future. Moisture track change tends to reduce the future PMP. Compared with extreme precipitation, ensemble PMP exhibits higher internal variation. Thus high-quality data of both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
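The traditional moisture-maximisation step that such hybrid approaches build on scales each historic storm by the ratio of maximum climatological precipitable water to the storm's precipitable water and takes the envelope; applied per climate model, this yields the ensemble. A minimal sketch with illustrative numbers:
```python
# Hedged sketch of traditional PMP moisture maximisation: scale each storm by
# max_pw / storm_pw, then take the envelope. Numbers are illustrative.
import numpy as np

storm_precip = np.array([180.0, 220.0, 150.0, 260.0])  # mm, observed storms
storm_pw = np.array([38.0, 45.0, 30.0, 50.0])          # mm, storm precipitable water
max_pw = 60.0                                          # mm, climatological maximum

maximised = storm_precip * (max_pw / storm_pw)         # moisture maximisation
pmp = maximised.max()
print("maximised storms (mm):", maximised.round(1), "-> PMP estimate:",
      round(pmp, 1))
```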
Allowable levels of take for the trade in Nearctic songbirds.
Johnson, Fred A; Walters, Matthew A H; Boomer, G Scott
2012-06-01
The take of Nearctic songbirds for the caged-bird trade is an important cultural and economic activity in Mexico, but its sustainability has been questioned. We relied on the theta-logistic population model to explore options for setting allowable levels of take for 11 species of passerines that were subject to legal take in Mexico in 2010. Because estimates of population size necessary for making periodic adjustments to levels of take are not routinely available, we examined the conditions under which a constant level of take might contribute to population depletion (i.e., a population below its level of maximum net productivity). The chance of depleting a population is highest when levels of take are based on population sizes that happen to be much lower or higher than the level of maximum net productivity, when environmental variation is relatively high and serially correlated, and when the interval between estimation of population size is relatively long (≥5 years). To estimate demographic rates of songbirds involved in the Mexican trade we relied on published information and allometric relationships to develop probability distributions for key rates, and then sampled from those distributions to characterize the uncertainty in potential levels of take. Estimates of the intrinsic rate of growth (r) were highly variable, but median estimates were consistent with those expected for relatively short-lived, highly fecund species. Allowing for the possibility of nonlinear density dependence generally resulted in allowable levels of take that were lower than would have been the case under an assumption of linearity. Levels of take authorized by the Mexican government in 2010 for the 11 species we examined were small in comparison to relatively conservative allowable levels of take (i.e., those intended to achieve 50% of maximum sustainable yield). However, the actual levels of take in Mexico are unknown and almost certainly exceed the authorized take. Also, the take of Nearctic songbirds in other Latin American and Caribbean countries ultimately must be considered in assessing population-level impacts.
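A stripped-down version of this simulation is easy to state: theta-logistic dynamics under a constant take, with r_max drawn from a probability distribution and depletion judged against the maximum-net-productivity level K(1/(1+θ))^(1/θ). All parameter values below are illustrative assumptions, not the species-specific distributions of the study.
```python
# Toy theta-logistic simulation of constant take under demographic uncertainty.
import numpy as np

rng = np.random.default_rng(2010)
K, theta, years, n_sims = 100_000, 1.0, 50, 5000
take = 3000                                       # constant annual take

r_max = rng.lognormal(np.log(0.3), 0.4, n_sims)   # uncertain intrinsic growth
N = np.full(n_sims, 0.6 * K, dtype=float)

for _ in range(years):
    eps = rng.normal(0.0, 0.1, n_sims)            # environmental noise
    growth = r_max * N * (1.0 - (N / K) ** theta)
    N = np.clip(N + growth * np.exp(eps) - take, 0.0, None)

# Maximum net productivity level of the theta-logistic model.
mnpl = K * (1.0 / (1.0 + theta)) ** (1.0 / theta)
print("P(depleted below MNPL):", np.mean(N < mnpl))
```
Repeating this over a grid of take levels traces out how the depletion probability grows as the constant take approaches, then exceeds, the sustainable level.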
Implications of Uncertainty in Fossil Fuel Emissions for Terrestrial Ecosystem Modeling
NASA Astrophysics Data System (ADS)
King, A. W.; Ricciuto, D. M.; Mao, J.; Andres, R. J.
2017-12-01
Given observations of the increase in atmospheric CO2, estimates of anthropogenic emissions and models of oceanic CO2 uptake, one can estimate net global CO2 exchange between the atmosphere and terrestrial ecosystems as the residual of the balanced global carbon budget. Estimates from the Global Carbon Project 2016 show that terrestrial ecosystems are a growing sink for atmospheric CO2 (averaging 2.12 Gt C y-1 for the period 1959-2015 with a growth rate of 0.03 Gt C y-1 per year) but with considerable year-to-year variability (standard deviation of 1.07 Gt C y-1). Within the uncertainty of the observations, emissions estimates and ocean modeling, this residual calculation is a robust estimate of a global terrestrial sink for CO2. A task of terrestrial ecosystem science is to explain the trend and variability in this estimate. However, "within the uncertainty" is an important caveat. The uncertainty (2σ; 95% confidence interval) in fossil fuel emissions is 8.4% (±0.8 Gt C in 2015). Combined with uncertainty in other carbon budget components, the 2σ uncertainty surrounding the global net terrestrial ecosystem CO2 exchange is ±1.6 Gt C y-1. Ignoring the uncertainty, the estimate of a general terrestrial sink includes 2 years (1987 and 1998) in which terrestrial ecosystems are a small source of CO2 to the atmosphere. However, with 2σ uncertainty, terrestrial ecosystems may have been a source in as many as 18 years. We examine how well global terrestrial biosphere models simulate the trend and interannual variability of the global-budget estimate of the terrestrial sink within the context of this uncertainty (e.g., which models fall outside the 2σ uncertainty and in what years). Models are generally capable of reproducing the trend in net terrestrial exchange, but are less able to capture interannual variability and often fall outside the 2σ uncertainty. The trend in the residual carbon budget estimate is primarily associated with the increase in atmospheric CO2, while interannual variation is related to variations in global land-surface temperature with weaker sinks in warmer years. We examine whether these relationships are reproduced in models. Their absence might explain weaknesses in model simulations or in the reconstruction of historical climate used as drivers in model intercomparison projects (MIPs).
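The residual calculation and its quadrature uncertainty fit in a few lines; the round numbers below are in the spirit of the values quoted above, not the Global Carbon Project's exact figures.
```python
# Residual land-sink calculation with 2-sigma uncertainties combined in
# quadrature. Illustrative values, not the exact GCP 2015 budget.
import numpy as np

fossil, u_fossil = 9.8, 0.8     # Gt C/yr and 2-sigma uncertainty
luc, u_luc = 1.3, 0.7           # land-use change emissions
atm_growth, u_atm = 6.3, 0.4    # atmospheric CO2 growth
ocean, u_ocean = 2.6, 1.0       # ocean sink

land_sink = fossil + luc - atm_growth - ocean
u_land = np.sqrt(u_fossil**2 + u_luc**2 + u_atm**2 + u_ocean**2)
print(f"net terrestrial sink: {land_sink:.1f} +/- {u_land:.1f} Gt C/yr (2-sigma)")
```
Because the component uncertainties add in quadrature, even a modest relative error on fossil fuel emissions translates into an uncertainty band wide enough to flip the sign of the residual sink in weak-sink years.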
Validation and uncertainty analysis of a pre-treatment 2D dose prediction model
NASA Astrophysics Data System (ADS)
Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank
2018-02-01
Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed a good agreement, with on average 90.8% and 90.5% of pixels passing a (2%,2 mm) global gamma analysis respectively, with a low dose threshold of 10%. The maximum and overall uncertainty of the model is dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.
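The (2%, 2 mm) global gamma pass rate used in this validation can be computed naively as below; clinical tools use optimised searches and interpolation, so this brute-force version, with its synthetic dose planes, is for exposition only.
```python
# Naive global gamma analysis with a (2%, 2 mm) criterion and a 10% low-dose
# threshold. Brute force over all evaluation points; exposition only.
import numpy as np

def gamma_pass_rate(ref, eval_, spacing_mm, dd=0.02, dta=2.0, low_cut=0.10):
    """Global gamma: dose difference normalised to max(ref)."""
    norm = dd * ref.max()
    yy, xx = np.meshgrid(np.arange(ref.shape[0]), np.arange(ref.shape[1]),
                         indexing="ij")
    coords = np.stack([yy.ravel(), xx.ravel()], axis=1) * spacing_mm
    dose_e = eval_.ravel()

    gammas = []
    for (r, c), d_ref in np.ndenumerate(ref):
        if d_ref < low_cut * ref.max():
            continue                           # skip the low-dose region
        dist2 = ((coords - np.array([r, c]) * spacing_mm) ** 2).sum(axis=1)
        g2 = dist2 / dta**2 + (dose_e - d_ref) ** 2 / norm**2
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

ref = np.random.default_rng(0).random((40, 40)) + 2.0
eval_ = ref * 1.01                             # 1% global offset: should pass
print(f"pass rate: {gamma_pass_rate(ref, eval_, spacing_mm=1.0):.1f}%")
```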
Uncertainty and inference in the world of paleoecological data
NASA Astrophysics Data System (ADS)
McLachlan, J. S.; Dawson, A.; Dietze, M.; Finley, M.; Hooten, M.; Itter, M.; Jackson, S. T.; Marlon, J. R.; Raiho, A.; Tipton, J.; Williams, J.
2017-12-01
Proxy data in paleoecology and paleoclimatology share a common set of biases and uncertainties: spatiotemporal error associated with the taphonomic processes of deposition, preservation, and dating; calibration error between proxy data and the ecosystem states of interest; and error in the interpolation of calibrated estimates across space and time. Researchers often account for this daunting suite of challenges by applying qualitative expert judgment: inferring the past states of ecosystems and assessing the level of uncertainty in those states subjectively. The effectiveness of this approach can be seen by the extent to which future observations confirm previous assertions. Hierarchical Bayesian (HB) statistical approaches allow an alternative approach to accounting for multiple uncertainties in paleo data. HB estimates of ecosystem state formally account for each of the common uncertainties listed above. HB approaches can readily incorporate additional data, and data of different types, into estimates of ecosystem state. And HB estimates of ecosystem state, with associated uncertainty, can be used to constrain forecasts of ecosystem dynamics based on mechanistic ecosystem models using data assimilation. Decisions about how to structure an HB model are also subjective, which creates a parallel framework for deciding how to interpret data from the deep past. Our group, the Paleoecological Observatory Network (PalEON), has applied hierarchical Bayesian statistics to formally account for uncertainties in proxy-based estimates of past climate, fire, primary productivity, biomass, and vegetation composition. Our estimates often reveal new patterns of past ecosystem change, which is an unambiguously good thing, but we also often estimate a level of uncertainty that is uncomfortably high for many researchers. High levels of uncertainty are due to several features of the HB approach: spatiotemporal smoothing, the formal aggregation of multiple types of uncertainty, and a coarseness in statistical models of taphonomic processes. Each of these features provides useful opportunities for statisticians and data-generating researchers to assess what we know about the signal and the noise in paleo data and to improve inference about past changes in ecosystem state.
Connotative Meaning of Military Chat Communications
2009-09-01
This project proposed to: (1) conduct a study of how humans recognize connotative cues expressing uncertainty, perception of personal threat, and urgency; and (2) formulate linguistic and non-linguistic means for ... It built a matrix of speech "cues" representative of uncertainty, perception of personal threat, and urgency, but also applied maximum entropy analysis ... results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Heng, E-mail: hengli@mdanderson.org; Zhu, X. Ronald; Zhang, Xiaodong
Purpose: To develop and validate a novel delivery strategy for reducing the respiratory motion–induced dose uncertainty of spot-scanning proton therapy. Methods and Materials: The spot delivery sequence was optimized to reduce dose uncertainty. The effectiveness of the delivery sequence optimization was evaluated using measurements and patient simulation. One hundred ninety-one 2-dimensional measurements using different delivery sequences of a single-layer uniform pattern were obtained with a detector array on a 1-dimensional moving platform. Intensity modulated proton therapy plans were generated for 10 lung cancer patients, and dose uncertainties for different delivery sequences were evaluated by simulation. Results: Without delivery sequence optimization, the maximum absolute dose error can be up to 97.2% in a single measurement, whereas the optimized delivery sequence results in a maximum absolute dose error of ≤11.8%. In patient simulation, the optimized delivery sequence reduces the mean of fractional maximum absolute dose error compared with the regular delivery sequence by 3.3% to 10.6% (32.5-68.0% relative reduction) for different patients. Conclusions: Optimizing the delivery sequence can reduce dose uncertainty due to respiratory motion in spot-scanning proton therapy, assuming the 4-dimensional CT is a true representation of the patients' breathing patterns.
Zvereva, Alexandra; Kamp, Florian; Schlattl, Helmut; Zankl, Maria; Parodi, Katia
2018-05-17
Variance-based sensitivity analysis (SA) is described and applied to the radiation dosimetry model proposed by the Committee on Medical Internal Radiation Dose (MIRD) for the organ-level absorbed dose calculations in nuclear medicine. The uncertainties in the dose coefficients thus calculated are also evaluated. A Monte Carlo approach was used to compute first-order and total-effect SA indices, which rank the input factors according to their influence on the uncertainty in the output organ doses. These methods were applied to the radiopharmaceutical (S)-4-(3-18F-fluoropropyl)-L-glutamic acid (18F-FSPG) as an example. Since 18F-FSPG has 11 notable source regions, a 22-dimensional model was considered here, where 11 input factors are the time-integrated activity coefficients (TIACs) in the source regions and 11 input factors correspond to the sets of the specific absorbed fractions (SAFs) employed in the dose calculation. The SA was restricted to the foregoing 22 input factors. The distributions of the input factors were built based on TIACs of five individuals to whom the radiopharmaceutical 18F-FSPG was administered and six anatomical models, representing two reference, two overweight, and two slim individuals. The self-absorption SAFs were mass-scaled to correspond to the reference organ masses. The estimated relative uncertainties were in the range 10%-30%, with a minimum and a maximum for absorbed dose coefficients for urinary bladder wall and heart wall, respectively. The applied global variance-based SA enabled us to identify the input factors that have the highest influence on the uncertainty in the organ doses. With the applied mass-scaling of the self-absorption SAFs, these factors included the TIACs for absorbed dose coefficients in the source regions and the SAFs from blood as source region for absorbed dose coefficients in highly vascularized target regions. For some combinations of proximal target and source regions, the corresponding cross-fire SAFs were found to have an impact. Global variance-based SA has been for the first time applied to the MIRD schema for internal dose calculation. Our findings suggest that uncertainties in computed organ doses can be substantially reduced by performing an accurate determination of TIACs in the source regions, accompanied by the estimation of individual source region masses along with the usage of an appropriate blood distribution in a patient's body and, in a few cases, the cross-fire SAFs from proximal source regions. © 2018 American Association of Physicists in Medicine.
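First-order and total-effect indices of a variance-based SA can be estimated with the standard pick-freeze (Saltelli/Jansen) Monte Carlo estimators. The sketch below demonstrates them on the Ishigami test function, since the MIRD dose model's inputs are not reproduced here.
```python
# Pick-freeze Monte Carlo estimation of first-order (Saltelli 2010) and
# total-effect (Jansen) Sobol indices on the Ishigami test function.
import numpy as np

rng = np.random.default_rng(5)
N, d = 50_000, 3

def ishigami(X, a=7.0, b=0.1):
    return np.sin(X[:, 0]) + a * np.sin(X[:, 1])**2 \
           + b * X[:, 2]**4 * np.sin(X[:, 0])

A = rng.uniform(-np.pi, np.pi, (N, d))
B = rng.uniform(-np.pi, np.pi, (N, d))
yA, yB = ishigami(A), ishigami(B)
varY = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # replace only factor i
    yABi = ishigami(ABi)
    S1 = np.mean(yB * (yABi - yA)) / varY          # first-order index
    ST = 0.5 * np.mean((yA - yABi) ** 2) / varY    # total-effect index
    print(f"x{i+1}: S1 = {S1:.3f}, ST = {ST:.3f}")
# Analytical values: S1 ~ (0.314, 0.442, 0.0); ST ~ (0.557, 0.442, 0.244)
```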
Chen, Li; Gao, Shuang; Zhang, Hui; Sun, Yanling; Ma, Zhenxing; Vedal, Sverre; Mao, Jian; Bai, Zhipeng
2018-05-03
Concentrations of particulate matter with aerodynamic diameter <2.5 μm (PM2.5) are relatively high in China. Estimation of PM2.5 exposure is challenging because PM2.5 exhibits complex spatiotemporal patterns. To improve the validity of exposure predictions, several methods have been developed and applied worldwide. A hybrid approach combining a land use regression (LUR) model and Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals was developed to estimate the PM2.5 concentrations on a national scale in China. This hybrid model could potentially provide more valid predictions than a commonly-used LUR model. The LUR/BME model had good performance characteristics, with R² = 0.82 and root mean square error (RMSE) of 4.6 μg/m³. Prediction errors of the LUR/BME model were reduced by incorporating soft data accounting for data uncertainty, with the R² increasing by 6%. The performance of LUR/BME is also better than that of ordinary kriging combined with BME (OK/BME). The LUR/BME model is the most accurate fine spatial scale PM2.5 model developed to date for China. Copyright © 2018. Published by Elsevier Ltd.
Assessing Interval Estimation Methods for Hill Model ...
The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
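One of the interval methods, a nonparametric (case-resampling) bootstrap around a least-squares Hill fit, is sketched below on synthetic data; the parameter names tp/ga/gw (top, AC50, Hill coefficient) and all values are illustrative.
```python
# Nonparametric bootstrap confidence intervals for a least-squares Hill fit.
# Synthetic data; parameter names are illustrative.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(13)

def hill(conc, tp, ga, gw):
    return tp / (1.0 + (ga / conc) ** gw)

conc = np.repeat([0.1, 0.3, 1.0, 3.0, 10.0, 30.0], 3)   # uM, triplicates
resp = hill(conc, 80.0, 2.0, 1.5) + rng.normal(0.0, 5.0, conc.size)

p0 = [resp.max(), np.median(conc), 1.0]
phat, _ = curve_fit(hill, conc, resp, p0=p0, maxfev=10_000)

boot = []
for _ in range(2000):                       # resample cases with replacement
    idx = rng.integers(0, conc.size, conc.size)
    try:
        pb, _ = curve_fit(hill, conc[idx], resp[idx], p0=phat, maxfev=10_000)
        boot.append(pb)
    except RuntimeError:
        continue                            # skip non-converged resamples
boot = np.array(boot)

for name, est, ci in zip(["top", "AC50", "hill"], phat,
                         np.percentile(boot, [2.5, 97.5], axis=0).T):
    print(f"{name}: {est:.2f}  95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```
Comparing such percentile intervals against asymptotic (Wald-type) and Bayesian credible intervals is exactly the kind of disagreement the simulation study was designed to adjudicate.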
ARM Best Estimate Data (ARMBE) Products for Climate Science for a Sustainable Energy Future (CSSEF)
Riihimaki, Laura; Gaustad, Krista; McFarlane, Sally
2014-06-12
This data set was created for the Climate Science for a Sustainable Energy Future (CSSEF) model testbed project and is an extension of the hourly average ARMBE dataset to other extended facility sites and to include uncertainty estimates. Uncertainty estimates were needed in order to use uncertainty quantification (UQ) techniques with the data.
The uncertainty of nitrous oxide emissions from grazed grasslands: A New Zealand case study
NASA Astrophysics Data System (ADS)
Kelliher, Francis M.; Henderson, Harold V.; Cox, Neil R.
2017-01-01
Agricultural soils emit nitrous oxide (N2O), a greenhouse gas and the primary source of nitrogen oxides which deplete stratospheric ozone. Agriculture has been estimated to be the largest anthropogenic N2O source. In New Zealand (NZ), pastoral agriculture uses half the land area. To estimate the annual N2O emissions from NZ's agricultural soils, the nitrogen (N) inputs have been determined and multiplied by an emission factor (EF), the mass fraction of N inputs emitted as N2O-N. To estimate the associated uncertainty, we developed an analytical method. For comparison, another estimate was determined by Monte Carlo numerical simulation. For both methods, expert judgement was used to estimate the N input uncertainty. The EF uncertainty was estimated by meta-analysis of the results from 185 NZ field trials. For the analytical method, assuming a normal distribution and independence of the terms used to calculate the emissions (correlation = 0), the estimated 95% confidence limit was ±57%. When there was a normal distribution and an estimated correlation of 0.4 between N input and EF, the latter inferred from experimental data involving six NZ soils, the analytical method estimated a 95% confidence limit of ±61%. The EF data from 185 NZ field trials had a logarithmic normal distribution. For the Monte Carlo method, assuming a logarithmic normal distribution for EF, a normal distribution for the other terms and independence of all terms, the estimated 95% confidence limits were -32% and +88% or ±60% on average. When there were the same distribution assumptions and a correlation of 0.4 between N input and EF, the Monte Carlo method estimated 95% confidence limits were -34% and +94% or ±64% on average. For the analytical and Monte Carlo methods, EF uncertainty accounted for 95% and 83% of the emissions uncertainty when the correlation between N input and EF was 0 and 0.4, respectively. As the first uncertainty analysis of an agricultural soils N2O emissions inventory using "country-specific" field trials to estimate EF uncertainty, this can be a potentially informative case study for the international scientific community.
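Both combination strategies can be reproduced in miniature for emissions = N input × EF: a first-order analytical limit in which relative variances add for a product of independent terms, and a Monte Carlo limit with a lognormal EF. The numbers below are illustrative, not the New Zealand inventory values.
```python
# Analytical (first-order, normal) vs Monte Carlo (lognormal EF) uncertainty
# limits for emissions = N input x EF. Illustrative values only.
import numpy as np

rng = np.random.default_rng(185)
n = 200_000

N_input, cv_N = 1.0e9, 0.10        # kg N/yr and its coefficient of variation
EF, cv_EF = 0.01, 0.28             # emission factor (fraction of N as N2O-N)

# Analytical: relative variances add for a product of independent terms.
cv_E = np.sqrt(cv_N**2 + cv_EF**2)
print(f"analytical 95% limit: +/-{1.96 * cv_E * 100:.0f}%")

# Monte Carlo with a lognormal EF (matching the mean and CV above).
sigma = np.sqrt(np.log(1.0 + cv_EF**2))
ef = rng.lognormal(np.log(EF) - 0.5 * sigma**2, sigma, n)
ninp = rng.normal(N_input, cv_N * N_input, n)
E = ninp * ef
lo, med, hi = np.percentile(E, [2.5, 50, 97.5])
print(f"Monte Carlo 95% limits: {100*(lo/med-1):.0f}% / +{100*(hi/med-1):.0f}%")
```
The asymmetric Monte Carlo limits mirror the asymmetry reported in the study: a lognormal EF pushes the upper confidence limit well above the symmetric analytical bound.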
Modelling of interaction of the large disrupted meteoroid with the Earth atmosphere
NASA Astrophysics Data System (ADS)
Brykina, Irina G.
2018-05-01
A model of the atmospheric fragmentation of large meteoroids into a cloud of fragments is proposed. A comparison with similar models used in the literature is made. An approximate analytical solution of the meteor physics equations is obtained for the mass loss of the disrupted meteoroid, the energy deposition, and the light curve normalized to the maximum brightness. This solution is applied to modelling the interaction of the Chelyabinsk meteoroid with the atmosphere. The influence of uncertainty in the initial parameters of the meteoroid on the characteristics of its interaction with the atmosphere is estimated, and the analytical solution is compared with the observational data.
NASA Astrophysics Data System (ADS)
Ashat, Ali; Pratama, Heru Berian
2017-12-01
The successful assessment of the Ciwidey-Patuha geothermal field size required integrated analysis of data from all aspects of the field to determine the optimum capacity to be installed. Resource assessment involves significant uncertainty in subsurface information and multiple development scenarios for the field. Therefore, this paper applies an experimental design approach to the geothermal numerical simulation of Ciwidey-Patuha to generate a probabilistic resource assessment. This process assesses the impact of the evaluated parameters affecting resources and the interactions between these parameters. The methodology successfully estimated the maximum resources with a polynomial function covering the entire range of possible values of the important reservoir parameters.
Brown, Sandra [University of Illinois, Urbana, IL (USA); Winrock International, Arlington, Virginia (USA); Gaston, Greg [University of Illinois, Urbana, IL (USA); Oregon State University; Beaty, T. W. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory, Oak Ridge, TN (USA); Olsen, L. M. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory, Oak Ridge, TN (USA)
2001-01-01
This document describes the contents of a digital database containing maximum potential aboveground biomass, land use, and estimated biomass and carbon data for 1980. The biomass data and carbon estimates are associated with woody vegetation in Tropical Africa. These data were collected to reduce the uncertainty associated with estimating historical releases of carbon from land use change. Tropical Africa is defined here as encompassing 22.7 × 10^6 km² of the earth's land surface and is comprised of countries that are located in tropical Africa (Angola, Botswana, Burundi, Cameroon, Cape Verde, Central African Republic, Chad, Congo, Benin, Equatorial Guinea, Ethiopia, Djibouti, Gabon, Gambia, Ghana, Guinea, Ivory Coast, Kenya, Liberia, Madagascar, Malawi, Mali, Mauritania, Mozambique, Namibia, Niger, Nigeria, Guinea-Bissau, Zimbabwe (Rhodesia), Rwanda, Senegal, Sierra Leone, Somalia, Sudan, Tanzania, Togo, Uganda, Burkina Faso (Upper Volta), Zaire, and Zambia). The database was developed using the GRID module in the ARC/INFO (TM) geographic information system. Source data were obtained from the Food and Agriculture Organization (FAO), the U.S. National Geophysical Data Center, and a limited number of biomass-carbon density case studies. These data were used to derive the maximum potential and actual (ca. 1980) aboveground biomass values at regional and country levels. The land-use data provided were derived from a vegetation map originally produced for the FAO by the International Institute of Vegetation Mapping, Toulouse, France.
Evaluation of harvest and information needs for North American sea ducks
Koneff, Mark D.; Zimmerman, Guthrie S.; Dwyer, Chris P.; Fleming, Kathleen K.; Padding, Paul I.; Devers, Patrick K.; Johnson, Fred A.; Runge, Michael C.; Roberts, Anthony J.
2017-01-01
Wildlife managers routinely seek to establish sustainable limits of sport harvest or other regulated forms of take while confronted with considerable uncertainty. A growing body of ecological research focuses on methods to describe and account for uncertainty in management decision-making and to prioritize research and monitoring investments to reduce the most influential uncertainties. We used simulation methods incorporating measures of demographic uncertainty to evaluate risk of overharvest and prioritize information needs for North American sea ducks (Tribe Mergini). Sea ducks are popular game birds in North America, yet they are poorly monitored and their population dynamics are poorly understood relative to other North American waterfowl. There have been few attempts to assess the sustainability of harvest of North American sea ducks, and no formal harvest strategy exists in the U.S. or Canada to guide management. The popularity of sea duck hunting, extended hunting opportunity for some populations (i.e., special seasons and/or bag limits), and population declines have led to concern about potential overharvest. We used Monte Carlo simulation to contrast estimates of allowable harvest and observed harvest and assess risk of overharvest for 7 populations of North American sea ducks: the American subspecies of common eider (Somateria mollissima dresseri), eastern and western populations of black scoter (Melanitta americana) and surf scoter (M. perspicillata), and continental populations of white-winged scoter (M. fusca) and long-tailed duck (Clangula hyemalis). We combined information from empirical studies and the opinions of experts through formal elicitation to create probability distributions reflecting uncertainty in the individual demographic parameters used in this assessment. Estimates of maximum growth (rmax), and therefore of allowable harvest, were highly uncertain for all populations. Long-tailed duck and American common eider appeared to be at high risk of overharvest (i.e., observed harvest < allowable harvest in 5–7% and 19–26% of simulations, respectively depending on the functional form of density dependence), whereas the other populations appeared to be at moderate risk to low risk (observed harvest < allowable harvest in 22–68% of simulations, again conditional on the form of density dependence). We also evaluated the sensitivity of the difference between allowable and observed harvest estimates to uncertainty in individual demographic parameters to prioritize information needs. We found that uncertainty in overall fecundity had more influence on comparisons of allowable and observed harvest than adult survival or observed harvest for all species except long-tailed duck. Although adult survival was characterized by less uncertainty than individual components of fecundity, it was identified as a high priority information need given the sensitivity of growth rate and allowable harvest to this parameter. Uncertainty about population size was influential in the comparison of observed and allowable harvest for 5 of the 6 populations where it factored into the assessment. While this assessment highlights a high degree of uncertainty in allowable harvest, it provides a framework for integration of improved data from future research and monitoring. It could also serve as the basis for harvest strategy development as management objectives and regulatory alternatives are specified by the management community.
Are Subject-Specific Musculoskeletal Models Robust to the Uncertainties in Parameter Identification?
Valente, Giordano; Pitto, Lorenzo; Testi, Debora; Seth, Ajay; Delp, Scott L.; Stagni, Rita; Viceconti, Marco; Taddei, Fulvia
2014-01-01
Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte-Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed by using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed maximum standard deviation of 0.3 times body-weight and maximum range of 2.1 times body-weight. In addition, the output variables significantly correlated with few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited gait portions. Although we found that subject-specific models were not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. In fact, force predictions could be affected by an uncertainty of the same order of magnitude as their value, although this condition has a low probability of occurring. PMID:25390896
Rivera-Rodriguez, Claudia L; Resch, Stephen; Haneuse, Sebastien
2018-01-01
In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty.
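Both ideas, bootstrap uncertainty for a subsample-based total and calibration to a known auxiliary total, can be shown on a toy survey; the expansion and ratio estimators, the synthetic cost model, and all numbers below are assumptions of this sketch, not the Honduras study's data or design.
```python
# Toy survey: expansion estimate of total programme cost from a facility
# subsample with a bootstrap SE, and a ratio (calibration-style) estimator
# using a known auxiliary total (doses administered). All values synthetic.
import numpy as np

rng = np.random.default_rng(95)
N_facilities, n_sample = 1200, 80

doses = rng.gamma(4.0, 500.0, N_facilities)                # known everywhere
cost = 2.0 * doses + rng.normal(0.0, 300.0, N_facilities)  # observed in sample
sample = rng.choice(N_facilities, n_sample, replace=False)

def expansion(idx):
    return N_facilities * cost[idx].mean()

def ratio_calibrated(idx):            # exploits the known total doses
    return doses.sum() * cost[idx].sum() / doses[idx].sum()

def boot_se(estimator, B=3000):
    reps = [estimator(rng.choice(sample, n_sample, replace=True))
            for _ in range(B)]
    return np.std(reps)

print(f"expansion:  total = {expansion(sample):,.0f}  SE = {boot_se(expansion):,.0f}")
print(f"calibrated: total = {ratio_calibrated(sample):,.0f}  SE = {boot_se(ratio_calibrated):,.0f}")
```
The ratio estimator's much smaller bootstrap standard error illustrates the uncertainty reduction calibration can deliver whenever cost tracks the auxiliary variable closely.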
NASA Astrophysics Data System (ADS)
Xu, D.; Agee, E.; Wang, J.; Ivanov, V. Y.
2017-12-01
The increased frequency and severity of droughts in the Amazon region have emphasized the potential vulnerability of the rainforests to heat- and drought-induced stresses, highlighting the need to reduce the uncertainty in estimates of regional evapotranspiration (ET) and quantify the resilience of the forest. Ground-based observations for estimating ET are resource intensive, making methods based on remotely sensed observations an attractive alternative. Several methodologies have been developed to estimate ET from satellite data, but challenges remain in model parameterization, and limited satellite coverage reduces their utility for monitoring biodiverse regions. In this work, we apply a novel surface energy partition method (Maximum Entropy Production; MEP) based on Bayesian probability theory and nonequilibrium thermodynamics to derive ET time series from satellite data for the Amazon basin. For a large, sparsely monitored region such as the Amazon, this approach has the advantage of requiring only single-level measurements of net radiation, temperature, and specific humidity. Furthermore, it is not sensitive to the uncertainty of the input data and model parameters. In this first application of MEP theory to a tropical forest biome, we assess its performance at various spatiotemporal scales against diverse field data sets. Specifically, the objective of this work is to test this method using eddy flux data for several locations across Amazonia at sub-daily, monthly, and annual scales and to compare the new estimates with those from traditional methods. Analyses of the derived ET time series will contribute to reducing the current knowledge gap surrounding the much debated response of the Amazon basin to droughts and offer a template for monitoring long-term changes in the global hydrologic cycle due to anthropogenic and natural causes.
Learning toward practical head pose estimation
NASA Astrophysics Data System (ADS)
Sang, Gaoli; He, Feixiang; Zhu, Rong; Xuan, Shibin
2017-08-01
Head pose is useful information for many face-related tasks, such as face recognition, behavior analysis, human-computer interfaces, etc. Existing head pose estimation methods usually assume that the face images have been well aligned or that sufficient and precise training data are available. In practical applications, however, these assumptions are very likely to be invalid. This paper first investigates the impact of the failure of these assumptions, i.e., misalignment of face images, uncertainty and undersampling of training data, on head pose estimation accuracy of state-of-the-art methods. A learning-based approach is then designed to enhance the robustness of head pose estimation to these factors. To cope with misalignment, instead of using hand-crafted features, it seeks suitable features by learning from a set of training data with a deep convolutional neural network (DCNN), such that the training data can be best classified into the correct head pose categories. To handle uncertainty and undersampling, it employs multivariate labeling distributions (MLDs) with dense sampling intervals to represent the head pose attributes of face images. The correlation between the features and the dense MLD representations of face images is approximated by a maximum entropy model, whose parameters are optimized on the given training data. To estimate the head pose of a face image, its MLD representation is first computed according to the model based on the features extracted from the image by the trained DCNN, and its head pose is then assumed to be the one corresponding to the peak in its MLD. Evaluation experiments on the Pointing'04, FacePix, Multi-PIE, and CASIA-PEAL databases prove the effectiveness and efficiency of the proposed method.
Multivariate Non-Symmetric Stochastic Models for Spatial Dependence Models
NASA Astrophysics Data System (ADS)
Haslauer, C. P.; Bárdossy, A.
2017-12-01
A copula based multivariate framework allows more flexibility to describe different kinds of dependence than is possible with models relying on the confining assumption of symmetric Gaussian dependence: different quantiles can be modelled with a different degree of dependence, and it will be demonstrated how this can be expected given process understanding. Maximum likelihood based multivariate quantitative parameter estimation yields stable and reliable results: not only are improved cross-validation based measures of uncertainty obtained, but also a more realistic spatial structure of uncertainty compared to second-order models of dependence. As much information as is available is included in the parameter estimation: incorporating censored measurements (e.g., ones below the detection limit, or ones above the sensitive range of the measurement device) yields more realistic spatial models; the proportion of true zeros can be jointly estimated with, and distinguished from, censored measurements, which allows estimates of the age of a contaminant in the system; and secondary information (categorical and on the rational scale) has been used to improve the estimation of the primary variable. These copula based multivariate statistical techniques are demonstrated using hydraulic conductivity observations at the Borden (Canada) site, the MADE site (USA), and a large regional groundwater quality data set in south-west Germany. Fields of spatially distributed K were simulated with identical marginal distributions and identical second-order spatial moments, yet substantially differing solute transport characteristics when numerical tracer tests were performed. A statistical methodology is shown that allows the delineation of a boundary layer separating homogeneous parts of a spatial data set. The effects of this boundary layer (macro structure) and the spatial dependence of K (micro structure) on solute transport behaviour are shown.
Schiavazzi, Daniele E.; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L.
2017-01-01
Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient-specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and second, application to a cohort of four single-ventricle patients with Norwood physiology. PMID:27155892
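A toy illustration of the tuning step only, under stated assumptions: a hypothetical two-parameter windkessel-like sub-model fitted to invented clinical targets by maximum a posteriori simplex (Nelder-Mead) optimization; it is not the authors' multi-level framework:

import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for an LPN sub-model: mean pressure and pulse
# pressure as toy functions of resistance R and compliance C at flow Q.
def model(params, Q=5.0):
    R, C = params
    return np.array([R * Q, Q / C])   # mmHg, toy relations

targets = np.array([93.0, 40.0])      # invented clinical targets
sigma = np.array([5.0, 8.0])          # assumed measurement uncertainties

def neg_log_post(params):             # Gaussian likelihood, log-normal prior
    if np.any(np.asarray(params) <= 0.0):
        return np.inf
    misfit = ((model(params) - targets) / sigma) ** 2
    prior = np.log(params[0] / 18.0) ** 2 + np.log(params[1] / 0.1) ** 2
    return 0.5 * (misfit.sum() + prior)

fit = minimize(neg_log_post, x0=[18.0, 0.1], method="Nelder-Mead")
print(fit.x)                          # MAP point estimate of (R, C)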
NASA Astrophysics Data System (ADS)
Levandowski, W. B.; Walsh, F. R. R.; Yeck, W.
2016-12-01
Quantifying the increase in pore-fluid pressure necessary to cause slip on specific fault planes can provide actionable information for stakeholders to potentially mitigate hazard. Although the M5.8 Pawnee earthquake occurred on a previously unmapped fault, we can retrospectively estimate the pore-pressure perturbation responsible for this event. We first estimate the normalized local stress tensor by inverting focal mechanisms surrounding the Pawnee Fault. Faults are generally well oriented for slip, with instabilities averaging 96% of maximum. Next, with an estimate of the weight of local overburden, we solve for the pore pressure needed at the hypocenters. Specific to the Pawnee fault, we find that a hypocentral pressure of 43-104% of hydrostatic (accounting for uncertainties in all relevant parameters) would have been sufficient to cause slip. The dominant source of uncertainty is the pressure on the fault prior to fluid injection. Importantly, we find that lower pre-injection pressure requires lower resultant pressure to cause slip, decreasing from a regional average of 30% above hydrostatic pressure if the hypocenters begin at hydrostatic pressure to 6% above hydrostatic pressure with no pre-injection fluid. This finding suggests that underpressured regions such as northern Oklahoma are predisposed to injection-induced earthquakes. Although retrospective and forensic, similar analyses of other potentially induced events and comparisons to natural earthquakes will provide insight into the relative importance of fault orientation, the magnitude of the local stress field, and fluid-pressure migration in intraplate seismicity.
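The slip criterion at the core of such calculations can be sketched in a few lines; the Coulomb-failure arithmetic below uses illustrative values, not the study's stress-inversion results:

# Coulomb criterion: slip occurs when tau >= mu * (sigma_n - p), so the
# required pore pressure is p = sigma_n - tau / mu. Values are invented.
mu = 0.6          # assumed fault friction coefficient
sigma_n = 70.0    # MPa, resolved normal stress on the fault plane
tau = 30.0        # MPa, resolved shear stress on the fault plane
p_hydro = 35.0    # MPa, hydrostatic pressure at hypocentral depth

p_slip = sigma_n - tau / mu
print(f"required pressure: {p_slip:.1f} MPa "
      f"= {100.0 * p_slip / p_hydro:.0f}% of hydrostatic")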
On the distinguishability of HRF models in fMRI.
Rosa, Paulo N; Figueiredo, Patricia; Silvestre, Carlos J
2015-01-01
Modeling the Hemodynamic Response Function (HRF) is a critical step in fMRI studies of brain activity, and it is often desirable to estimate HRF parameters with physiological interpretability. A biophysically informed model of the HRF can be described by a non-linear time-invariant dynamic system. However, the identification of this dynamic system may leave much uncertainty on the exact values of the parameters. Moreover, the high noise levels in the data may hinder the model estimation task. In this context, the estimation of the HRF may be seen as a problem of model falsification or invalidation, where we are interested in distinguishing among a set of eligible models of dynamic systems. Here, we propose a systematic tool to determine the distinguishability among a set of physiologically plausible HRF models. The concept of absolutely input-distinguishable systems is introduced and applied to a biophysically informed HRF model, by exploiting the structure of the underlying non-linear dynamic system. A strategy to model uncertainty in the input time-delay and magnitude is developed and its impact on the distinguishability of two physiologically plausible HRF models is assessed, in terms of the maximum noise amplitude above which it is not possible to guarantee the falsification of one model in relation to another. Finally, a methodology is proposed for the choice of the input sequence, or experimental paradigm, that maximizes the distinguishability of the HRF models under investigation. The proposed approach may be used to evaluate the performance of HRF model estimation techniques from fMRI data.
Robust Statistical Fusion of Image Labels
Landman, Bennett A.; Asman, Andrew J.; Scoggins, Andrew G.; Bogovic, John A.; Xing, Fangxu; Prince, Jerry L.
2011-01-01
Image labeling and parcellation (i.e., assigning structure to a collection of voxels) are critical tasks for the assessment of volumetric and morphometric features in medical imaging data. The process of image labeling is inherently error prone as images are corrupted by noise and artifacts. Even expert interpretations are limited by the subjectivity and precision of individual raters. Hence, all labels must be considered imperfect with some degree of inherent variability. One may seek multiple independent assessments to both reduce this variability and quantify the degree of uncertainty. Existing techniques have exploited maximum a posteriori statistics to combine data from multiple raters and simultaneously estimate rater reliabilities. Although quite successful, wide-scale application has been hampered by unstable estimation with practical datasets, for example, with label sets with small or thin objects to be labeled or with partial or limited datasets. Moreover, these approaches have required each rater to generate a complete dataset, which is often impossible given both human foibles and the typical turnover rate of raters in a research or clinical environment. Herein, we propose a robust approach to improve estimation performance with small anatomical structures, allow for missing data, account for repeated label sets, and utilize training/catch trial data. With this approach, numerous raters can label small, overlapping portions of a large dataset, and rater heterogeneity can be robustly controlled while simultaneously estimating a single, reliable label set and characterizing uncertainty. The proposed approach enables many individuals to collaborate in the construction of large datasets for labeling tasks (e.g., human parallel processing) and reduces the otherwise detrimental impact of rater unavailability. PMID:22010145
NASA Astrophysics Data System (ADS)
Takagi, Hiroshi; Wu, Wenjie
2016-03-01
Even though the maximum wind radius (R
Predicting the stability of endangered stonecats in the LaPlatte River, Vermont
Puchala, Elizabeth A.; Parrish, Donna; Donovan, Therese M.
2016-01-01
Stonecats Noturus flavus in Vermont conform to a rare distribution pattern (as designated by Rabinowitz 1981) because their known distribution within the state is limited to the LaPlatte and Missisquoi rivers. We focused on Stonecats in the LaPlatte River to predict the stability of the population. During 2012–2014, we captured Stonecats via backpack electrofishing; fish were PIT-tagged (>90 mm TL) and marked with visible implant elastomer. Among the 1,671 Stonecats that were captured, 1,252 were PIT-tagged. Only 156 (12%) of the PIT-tagged fish were recaptured, and only 22 of those individuals were recaptured more than once. The Pradel model in Program MARK was used to estimate apparent survival (Φ) and seniority, which were used to derive the rate of population change (λ) for the Stonecat encounter histories we studied. We examined a total of 64 models in our candidate set, with the following covariates: TL at first capture, maximum temperature, season, maximum discharge, and area sampled. Survival estimates were highest in the spring (range of daily Φ = 0.9993–0.9995) and increased with greater TL at first capture. We also estimated increases in capture probability with increasing area sampled. We derived an annual λ of 0.9794, which indicates a slightly decreasing population. However, our λ estimate contained uncertainty that was likely increased due to the low recapture rates. Additional years of data could increase the accuracy of the λ estimate. In the meantime, we have provided insight into Stonecat population parameters that were otherwise unknown.
NASA Astrophysics Data System (ADS)
Bovy, Jo; Hogg, David W.; Roweis, Sam T.
2011-06-01
We generalize the well-known mixtures of Gaussians approach to density estimation and the accompanying Expectation-Maximization technique for finding the maximum likelihood parameters of the mixture to the case where each data point carries an individual d-dimensional uncertainty covariance and has unique missing data properties. This algorithm reconstructs the error-deconvolved or "underlying" distribution function common to all samples, even when the individual data points are samples from different distributions, obtained by convolving the underlying distribution with the heteroskedastic uncertainty distribution of the data point and projecting out the missing data directions. We show how this basic algorithm can be extended with conjugate priors on all of the model parameters and a "split-and-merge" procedure designed to avoid local maxima of the likelihood. We demonstrate the full method by applying it to the problem of inferring the three-dimensional velocity distribution of stars near the Sun from noisy two-dimensional, transverse velocity measurements from the Hipparcos satellite.
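A minimal one-component version of this EM algorithm in Python (the paper's method generalizes it to K components, full covariances and missing-data projections); the data are simulated with known heteroskedastic noise:

import numpy as np

rng = np.random.default_rng(2)

# Simulate: underlying N(2.0, 1.5^2), plus per-point noise of variance S_i.
n = 2000
S = rng.uniform(0.2, 4.0, n)                # individual noise variances
x = rng.normal(2.0, 1.5, n) + rng.normal(0.0, np.sqrt(S))

# EM for the deconvolved mean m and variance V of the underlying Gaussian.
m, V = x.mean(), x.var()
for _ in range(200):
    b = m + (V / (V + S)) * (x - m)         # E-step: posterior means
    B = V * S / (V + S)                     # E-step: posterior variances
    m = b.mean()                            # M-step
    V = ((b - m) ** 2 + B).mean()
print(f"m = {m:.2f}, V = {V:.2f} (truth: 2.00, 2.25)")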
Burns, Douglas A.; Smith, Martyn J.; Freehafer, Douglas A.
2015-12-31
The application uses predictions of future annual precipitation from five climate models and two future greenhouse gas emissions scenarios and provides results that are averaged over three future periods—2025 to 2049, 2050 to 2074, and 2075 to 2099. Results are presented in ensemble form as the mean, median, maximum, and minimum values among the five climate models for each greenhouse gas emissions scenario and period. These predictions of future annual precipitation are substituted either for the precipitation variable or into a water balance equation for runoff to calculate potential future peak flows. This application is intended to be used only as an exploratory tool because (1) the regression equations on which the application is based have not been adequately tested outside the range of the current climate and (2) forecasting future precipitation with climate models and downscaling these results to a fine spatial resolution have a high degree of uncertainty. This report includes a discussion of the assumptions, uncertainties, and appropriate use of this exploratory application.
NASA Astrophysics Data System (ADS)
Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher, M.; Lerner, D.
2006-12-01
Characterization of uncertainty associated with groundwater quality models is often of critical importance, as for example in cases where environmental models are employed in risk assessment. Insufficient data, inherent variability and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models involving parametric variability and uncertainty of different natures. General MCS, or variants of MCS such as Latin Hypercube Sampling (LHS), treat variability and uncertainty as a single random entity, and the generated samples are treated as crisp values, with vagueness modelled as randomness. Also, when the models are used as purely predictive tools, uncertainty and variability lead to the need for assessment of the plausible range of model outputs. An improved systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates, and can aid in assessing how various possible model estimates should be weighed. The present study introduces Fuzzy Latin Hypercube Sampling (FLHS), a hybrid approach for incorporating cognitive and noncognitive uncertainties. Noncognitive uncertainty, such as physical randomness or statistical uncertainty due to limited information, can be described by its own probability density function (PDF), whereas cognitive uncertainty, such as estimation error, can be described by a membership function for its fuzziness, with confidence intervals given by α-cuts. An important property of this theory is its ability to merge the inexact data generated by the LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled with proper incorporation of uncertainty and variability. A fuzzified statistical summary of the model results produces indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty of input variables to model predictions. The feasibility of the method is demonstrated by assessing the propagation of parameter uncertainty in estimating the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.
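A compact sketch of the FLHS idea under stated assumptions: Latin Hypercube samples for the probabilistic (noncognitive) inputs combined with alpha-cut intervals of a triangular fuzzy number for a cognitive input; the toy transport response and all parameter values are invented:

import numpy as np
from scipy.stats import norm, qmc

# Latin Hypercube sample for the noncognitive inputs.
u = qmc.LatinHypercube(d=2, seed=3).random(100)
k_hyd = norm.ppf(u[:, 0], loc=1e-5, scale=2e-6)  # conductivity, m/s
src = norm.ppf(u[:, 1], loc=50.0, scale=5.0)     # source conc., mg/L

# Cognitive input as a triangular fuzzy number (a, b, c); an alpha-cut
# gives the interval of plausible values at membership level alpha.
def alpha_cut(a, b, c, alpha):
    return a + alpha * (b - a), c - alpha * (c - b)

for alpha in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut(0.1, 0.3, 0.6, alpha)     # fuzzy attenuation factor
    out = src * np.exp(-k_hyd * 1e5 * np.array([[lo], [hi]]))  # toy response
    print(f"alpha = {alpha}: well concentration in "
          f"[{out.min():.1f}, {out.max():.1f}] mg/L")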
Inverse modeling of geochemical and mechanical compaction in sedimentary basins
NASA Astrophysics Data System (ADS)
Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto
2015-04-01
We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. The processes we consider are mechanical compaction of the host rock and geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantifying a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build-up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbon withdrawal, and (e) formation of ore deposits. The main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system's dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analysis focuses on the calibration of model parameters using field cases from the literature. The quality of the parameter estimates is then analyzed as a function of the number, type and location of data.
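A minimal sketch of the PCE-surrogate-plus-Sobol building block, assuming two uniform inputs on [-1, 1] and an invented stand-in for the compaction model; orthonormal Legendre polynomials make the Sobol indices simple sums of squared coefficients:

import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(4)

def forward(x1, x2):                  # stand-in for the expensive model
    return x1 + 0.5 * x1 * x2 + 0.2 * x2 ** 2

X = rng.uniform(-1.0, 1.0, (200, 2))  # training design
y = forward(X[:, 0], X[:, 1])

def phi(n, x):                        # orthonormal Legendre: sqrt(2n+1) P_n
    c = np.zeros(n + 1); c[n] = 1.0
    return np.sqrt(2 * n + 1) * L.legval(x, c)

idx = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]
A = np.column_stack([phi(i, X[:, 0]) * phi(j, X[:, 1]) for i, j in idx])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Sobol indices from squared PCE coefficients (constant term excluded).
var = sum(c ** 2 for c, ij in zip(coef, idx) if ij != (0, 0))
S1 = sum(c ** 2 for c, (i, j) in zip(coef, idx) if i > 0 and j == 0) / var
S2 = sum(c ** 2 for c, (i, j) in zip(coef, idx) if j > 0 and i == 0) / var
print(f"S1 = {S1:.2f}, S2 = {S2:.2f}, interaction = {1 - S1 - S2:.2f}")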
Uncertainty in temperature response of current consumption-based emissions estimates
NASA Astrophysics Data System (ADS)
Karstensen, J.; Peters, G. P.; Andrew, R. M.
2014-09-01
Several studies have connected emissions of greenhouse gases to economic and trade data to quantify the causal chain from consumption to emissions and climate change. These studies usually combine data and models originating from different sources, making it difficult to estimate uncertainties in the end results. We estimate uncertainties in economic data, multi-pollutant emission statistics and metric parameters, and use Monte Carlo analysis to quantify contributions to uncertainty and to determine how uncertainty propagates to estimates of global temperature change from regional and sectoral territorial- and consumption-based emissions for the year 2007. We find that the uncertainties are sensitive to the emission allocations, the mix of pollutants included, the metric and its time horizon, and the level of aggregation of the results. Uncertainties in the final results are largely dominated by the climate sensitivity and the parameters associated with the warming effects of CO2. The economic data have a relatively small impact on uncertainty at the global and national level, while much higher uncertainties are found at the sectoral level. Our results suggest that consumption-based national emissions are not significantly more uncertain than the corresponding production-based emissions, since the largest uncertainties are due to the metric and emissions, which affect both perspectives equally. The two perspectives exhibit different sectoral uncertainties, due to changes in pollutant composition. We find global sectoral consumption uncertainties in the range of ±9-±27% using the global temperature potential with a 50 year time horizon, with metric uncertainties dominating. National level uncertainties are similar in both perspectives due to the dominance of CO2 over other pollutants. The consumption emissions of the top 10 emitting regions have a broad uncertainty range of ±9-±25%, with metric and emissions uncertainties contributing similarly. The absolute global temperature potential with a 50 year time horizon has much higher uncertainties, with considerable uncertainty overlap for regions and sectors, indicating that the ranking of countries is uncertain.
Ronald E. McRoberts
2005-01-01
Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...
Uncertainty estimation in the determination of metals in superficial water by ICP-OES
NASA Astrophysics Data System (ADS)
Faustino, Mainara G.; Marques, Joyce R.; Monteiro, Lucilena R.; Stellato, Thamiris B.; Soares, Sabrina M. V.; Silva, Tatiane B. S. C.; da Silva, Douglas B.; Pires, Maria Aparecida F.; Cotrim, Marycel E. B.
2016-07-01
From validation studies, it was possible to estimate the measurement uncertainty for several elements, namely Al, Ba, Ca, Cu, Cr, Cd, Fe, Mg, Mn, Ni and K, in water samples from the Guarapiranga Dam. These elements were analyzed by optical emission spectrometry with inductively coupled plasma (ICP-OES). The relative estimated uncertainties were between 3% and 15%. The greatest uncertainty contributions came from the analytical curve and the recovery method, which were related to element concentrations and the equipment response. The analyzed water samples were compared against CONAMA Resolution #357/2005.
Estimating model predictive uncertainty is imperative to informed environmental decision making and management of water resources. This paper applies the Generalized Sensitivity Analysis (GSA) to examine parameter sensitivity and the Generalized Likelihood Uncertainty Estimation...
Veerman, J Lennert; Barendregt, Jan J; Mackenbach, Johan P
2006-02-01
Consumption of fruits and vegetables is associated with a reduced risk of cardiovascular disease and cancer. The European Union Common Agricultural Policy keeps prices high by limiting the availability of fruits and vegetables. This policy is at odds with public health interests. We assess the potential health gain for the Dutch population of discontinuing EU withdrawal support for fruits and vegetables. The maximum effect of the reform was estimated by assuming that a quantity equivalent to the amount of produce withdrawn in recent years would be brought onto the market. For the calculation of the effect of consumption change on health we constructed a multi-state life table model in which consumption of fruits and vegetables is linked to ischaemic heart disease, stroke, and cancer of the oesophagus, stomach, colorectum, lung and breast. Uncertainty is quantified using Monte Carlo simulation. The reform would maximally increase the average consumption of fruits and vegetables by 1.80% (95% uncertainty interval 1.12-2.73), with an ensuing increase in life expectancy of 3.8 (2.2-5.9) days for men and 2.6 (1.5-4.2) days for women. The reform is also likely to decrease socio-economic inequalities in health. Ending EU withdrawal support for fruits and vegetables could result in a modest health gain for the Dutch population, though uncertainty in the estimates is high. A more comprehensive examination of the health effects of the EU agricultural policy could help to ensure health is duly considered in decision-making.
Arino, Yosuke; Akimoto, Keigo; Sano, Fuminori; Homma, Takashi; Oda, Junichiro; Tomoda, Toshimasa
2016-05-24
Although solar radiation management (SRM) might play a role as an emergency geoengineering measure, its potential risks remain uncertain, and hence there are ethical and governance issues in the face of SRM's actual deployment. By using an integrated assessment model, we first present one possible methodology for evaluating the value arising from retaining an SRM option given the uncertainty of climate sensitivity, and also examine sensitivities of the option value to SRM's side effects (damages). Reflecting the governance challenges surrounding immediate SRM deployment, we assume scenarios in which SRM could be deployed with only a limited degree of cooling (0.5 °C), only after 2050, when climate sensitivity uncertainty is assumed to be resolved, and only when the sensitivity is found to be high (T2x = 4 °C). We conduct a cost-effectiveness analysis with constraining temperature rise as the objective. The SRM option value originates from its rapid cooling capability, which would alleviate the mitigation requirement under climate sensitivity uncertainty and thereby reduce mitigation costs. According to our estimates, the option value during 1990-2049 for a +2.4 °C target (the lowest temperature target level for which there were feasible solutions in this model study) relative to preindustrial levels was in the range between $2.5 and $5.9 trillion, taking into account the maximum level of side effects shown in the existing literature. The result indicates that lower limits of the option values for temperature targets below +2.4 °C would be greater than $2.5 trillion.
Assessing Uncertainties in Surface Water Security: A Probabilistic Multi-model Resampling approach
NASA Astrophysics Data System (ADS)
Rodrigues, D. B. B.
2015-12-01
Various uncertainties are involved in the representation of processes that characterize interactions between societal needs, ecosystem functioning, and hydrological conditions. Here, we develop an empirical uncertainty assessment of water security indicators that characterize scarcity and vulnerability, based on a multi-model and resampling framework. We consider several uncertainty sources, including those related to: i) observed streamflow data; ii) hydrological model structure; iii) residual analysis; iv) the definition of the Environmental Flow Requirement method; v) the definition of critical conditions for water provision; and vi) the critical demand imposed by human activities. We estimate the overall uncertainty coming from the hydrological model by means of a residual bootstrap resampling approach, and by uncertainty propagation through different methodological arrangements applied to a 291 km² agricultural basin within the Cantareira water supply system in Brazil. Together, the two-component hydrograph residual analysis and the block bootstrap resampling approach result in a more accurate and precise estimate of the uncertainty (95% confidence intervals) in the simulated time series. We then compare the uncertainty estimates associated with the water security indicators across the multi-model framework and the different uncertainty estimation approaches. The method is general and can be easily extended, forming the basis for meaningful support to end-users facing water resource challenges by enabling them to incorporate a viable uncertainty analysis into a robust decision-making process.
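A minimal sketch of the block bootstrap step, with invented streamflow data; record length, block length and the error structure are assumptions, not the study's settings:

import numpy as np

rng = np.random.default_rng(5)

obs = rng.gamma(4.0, 2.0, 365)             # hypothetical daily streamflow
sim = obs * rng.normal(1.0, 0.08, 365)     # hypothetical model output
resid = obs - sim

# Resample residuals in blocks to preserve their autocorrelation, then
# rebuild an ensemble of plausible streamflow realizations.
def block_bootstrap(r, block=30, n_rep=1000):
    n = r.size
    reps = []
    for _ in range(n_rep):
        starts = rng.integers(0, n - block, size=int(np.ceil(n / block)))
        reps.append(np.concatenate([r[s:s + block] for s in starts])[:n])
    return np.array(reps)

ens = sim + block_bootstrap(resid)
lo, hi = np.percentile(ens, [2.5, 97.5], axis=0)  # daily 95% intervals
print(f"mean 95% interval width: {(hi - lo).mean():.2f}")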
Rapid Non-Gaussian Uncertainty Quantification of Seismic Velocity Models and Images
NASA Astrophysics Data System (ADS)
Ely, G.; Malcolm, A. E.; Poliannikov, O. V.
2017-12-01
Conventional seismic imaging typically provides a single estimate of the subsurface without any error bounds. Noise in the observed raw traces as well as the uncertainty of the velocity model directly impact the uncertainty of the final seismic image and its resulting interpretation. We present a Bayesian inference framework to quantify uncertainty in both the velocity model and seismic images, given noise statistics of the observed data. To estimate velocity model uncertainty, we combine the field expansion method, a fast frequency domain wave equation solver, with the adaptive Metropolis-Hastings algorithm. The speed of the field expansion method and its reduced parameterization allow us to perform the tens or hundreds of thousands of forward solves needed for non-parametric posterior estimation. We then migrate the observed data with the distribution of velocity models to generate uncertainty estimates of the resulting subsurface image. This procedure allows us to create both qualitative descriptions of seismic image uncertainty and put error bounds on quantities of interest, such as the dip angle of a subduction slab or the thickness of a stratigraphic layer.
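A stripped-down illustration of the Metropolis-Hastings building block, with a toy one-layer traveltime model standing in for the field expansion solver; all values are invented:

import numpy as np

rng = np.random.default_rng(6)

h, v_true, sigma = 2000.0, 2500.0, 0.01     # m, m/s, s (assumed)
t_obs = h / v_true + rng.normal(0.0, sigma, 20)

def log_post(v):                            # flat prior on 1500-4000 m/s
    if not 1500.0 < v < 4000.0:
        return -np.inf
    return -0.5 * np.sum((t_obs - h / v) ** 2) / sigma ** 2

# Random-walk Metropolis-Hastings over the layer velocity.
v, lp, samples = 2000.0, log_post(2000.0), []
for _ in range(20000):
    v_new = v + rng.normal(0.0, 25.0)
    lp_new = log_post(v_new)
    if np.log(rng.uniform()) < lp_new - lp:
        v, lp = v_new, lp_new
    samples.append(v)
post = np.array(samples[5000:])             # discard burn-in
print(f"v = {post.mean():.0f} +/- {post.std():.0f} m/s")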
PRECISE TULLY-FISHER RELATIONS WITHOUT GALAXY INCLINATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Obreschkow, D.; Meyer, M.
2013-11-10
Power-law relations between tracers of baryonic mass and rotational velocities of disk galaxies, so-called Tully-Fisher relations (TFRs), offer a wealth of applications in galaxy evolution and cosmology. However, measurements of rotational velocities require galaxy inclinations, which are difficult to measure, thus limiting the range of TFR studies. This work introduces a maximum likelihood estimation (MLE) method for recovering the TFR in galaxy samples with limited or no information on inclinations. The robustness and accuracy of this method is demonstrated using virtual and real galaxy samples. Intriguingly, the MLE reliably recovers the TFR of all test samples, even without using any inclination measurements—that is, assuming a random sin i-distribution for galaxy inclinations. Explicitly, this 'inclination-free MLE' recovers the three TFR parameters (zero-point, slope, scatter) with statistical errors only about 1.5 times larger than the best estimates based on perfectly known galaxy inclinations with zero uncertainty. Thus, given realistic uncertainties, the inclination-free MLE is highly competitive. If inclination measurements have mean errors larger than 10°, it is better not to use any inclinations than to consider the inclination measurements to be exact. The inclination-free MLE opens interesting perspectives for future H I surveys by the Square Kilometer Array and its pathfinders.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.
2017-06-01
This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.
Lucas, P Avilés; Aubineau-Lanièce, I; Lourenço, V; Vermesse, D; Cutarella, D
2014-01-01
The absorbed dose to water is the fundamental reference quantity for brachytherapy treatment planning systems and thermoluminescence dosimeters (TLDs) have been recognized as the most validated detectors for measurement of such a dosimetric descriptor. The detector response in a wide energy spectrum as that of an (192)Ir brachytherapy source as well as the specific measurement medium which surrounds the TLD need to be accounted for when estimating the absorbed dose. This paper develops a methodology based on highly sensitive LiF:Mg,Cu,P TLDs to directly estimate the absorbed dose to water in liquid water around a high dose rate (192)Ir brachytherapy source. Different experimental designs in liquid water and air were constructed to study the response of LiF:Mg,Cu,P TLDs when irradiated in several standard photon beams of the LNE-LNHB (French national metrology laboratory for ionizing radiation). Measurement strategies and Monte Carlo techniques were developed to calibrate the LiF:Mg,Cu,P detectors in the energy interval characteristic of that found when TLDs are immersed in water around an (192)Ir source. Finally, an experimental system was designed to irradiate TLDs at different angles between 1 and 11 cm away from an (192)Ir source in liquid water. Monte Carlo simulations were performed to correct measured results to provide estimates of the absorbed dose to water in water around the (192)Ir source. The dose response dependence of LiF:Mg,Cu,P TLDs with the linear energy transfer of secondary electrons followed the same variations as those of published results. The calibration strategy which used TLDs in air exposed to a standard N-250 ISO x-ray beam and TLDs in water irradiated with a standard (137)Cs beam provided an estimated mean uncertainty of 2.8% (k = 1) in the TLD calibration coefficient for irradiations by the (192)Ir source in water. The 3D TLD measurements performed in liquid water were obtained with a maximum uncertainty of 11% (k = 1) found at 1 cm from the source. Radial dose values in water were compared against published results of the American Association of Physicists in Medicine and the European Society for Radiotherapy and Oncology and no significant differences (maximum value of 3.1%) were found within uncertainties except for one position at 9 cm (5.8%). At this location the background contribution relative to the TLD signal is relatively small and an unexpected experimental fluctuation in the background estimate may have caused such a large discrepancy. This paper shows that reliable measurements with TLDs in complex energy spectra require a study of the detector dose response with the radiation quality and specific calibration methodologies which model accurately the experimental conditions where the detectors will be used. The authors have developed and studied a method with highly sensitive TLDs and contributed to its validation by comparison with results from the literature. This methodology can be used to provide direct estimates of the absorbed dose rate in water for irradiations with HDR (192)Ir brachytherapy sources.
Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio
2018-03-01
To provide a multi-stage model to calculate uncertainty in radiochromic film dosimetry with Monte-Carlo techniques. This new approach is applied to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 are exposed in two different Varian linacs. They are read with an EPSON V800 flatbed scanner. The Monte-Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis as the standard deviations and bias are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. Also, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images. The dose estimates of single-channel and multichannel algorithms show a Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than four Gy. A multi-stage model has been presented. With the aid of this model and the use of the Monte-Carlo techniques, the uncertainty of dose estimates for single-channel and multichannel algorithms are estimated. The application of the model together with Monte-Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Han, Paul K J; Klein, William M P; Lehman, Tom; Killam, Bill; Massett, Holly; Freedman, Andrew N
2011-01-01
To examine the effects of communicating uncertainty regarding individualized colorectal cancer risk estimates and to identify factors that influence these effects. Two Web-based experiments were conducted, in which adults aged 40 years and older were provided with hypothetical individualized colorectal cancer risk estimates differing in the extent and representation of expressed uncertainty. The uncertainty consisted of imprecision (otherwise known as "ambiguity") of the risk estimates and was communicated using different representations of confidence intervals. Experiment 1 (n = 240) tested the effects of ambiguity (confidence interval v. point estimate) and representational format (textual v. visual) on cancer risk perceptions and worry. Potential effect modifiers, including personality type (optimism), numeracy, and the information's perceived credibility, were examined, along with the influence of communicating uncertainty on responses to comparative risk information. Experiment 2 (n = 135) tested enhanced representations of ambiguity that incorporated supplemental textual and visual depictions. Communicating uncertainty led to heightened cancer-related worry in participants, exemplifying the phenomenon of "ambiguity aversion." This effect was moderated by representational format and dispositional optimism; textual (v. visual) format and low (v. high) optimism were associated with greater ambiguity aversion. However, when enhanced representations were used to communicate uncertainty, textual and visual formats showed similar effects. Both the communication of uncertainty and use of the visual format diminished the influence of comparative risk information on risk perceptions. The communication of uncertainty regarding cancer risk estimates has complex effects, which include heightening cancer-related worry-consistent with ambiguity aversion-and diminishing the influence of comparative risk information on risk perceptions. These responses are influenced by representational format and personality type, and the influence of format appears to be modifiable and content dependent.
Estimate of the uncertainty in measurement for the determination of mercury in seafood by TDA AAS.
Torres, Daiane Placido; Olivares, Igor R B; Queiroz, Helena Müller
2015-01-01
An approach is proposed for the estimate of the uncertainty in measurement that considers the individual sources related to the different steps of the method under evaluation, as well as the uncertainties estimated from the validation data, for the determination of mercury in seafood by thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS). The considered method has been fully optimized and validated in an official laboratory of the Ministry of Agriculture, Livestock and Food Supply of Brazil, in order to comply with national and international food regulations and quality assurance. The referred method has been accredited under the ISO/IEC 17025 norm since 2010. The approach of the present work for estimating the uncertainty in measurement was based on six sources of uncertainty for mercury determination in seafood by TDA AAS, following the validation process: linear least squares regression, repeatability, intermediate precision, the correction factor of the analytical curve, sample mass, and the standard reference solution. Those that most influenced the uncertainty in measurement were sample mass, repeatability, intermediate precision and the calibration curve. The obtained estimate of the uncertainty in measurement reached a value of 13.39%, which complies with European Regulation EC 836/2011. This figure represents a very realistic estimate of the routine conditions, since it fairly encompasses the dispersion between the value attributed to the sample and the value measured by the laboratory analysts. From this outcome, it is possible to infer that the validation data (based on the calibration curve, recovery and precision), together with the variation in sample mass, can offer a proper estimate of the uncertainty in measurement.
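The quadrature combination of relative standard uncertainties at the core of such an estimate can be sketched as follows; the component values are illustrative, not those reported in the study:

import math

# GUM-style budget: combine relative standard uncertainties (%) in
# quadrature; all component values below are invented for illustration.
components = {
    "calibration curve": 7.0,
    "repeatability": 6.0,
    "intermediate precision": 6.5,
    "recovery / correction factor": 4.0,
    "sample mass": 5.0,
    "standard reference solution": 1.5,
}
u_c = math.sqrt(sum(u ** 2 for u in components.values()))
print(f"combined: {u_c:.2f}%  expanded (k=2): {2 * u_c:.1f}%")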
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, S.
This document describes the contents of a digital database containing maximum potential aboveground biomass, land use, and estimated biomass and carbon data for 1980. The biomass data and carbon estimates are associated with woody vegetation in Tropical Africa. These data were collected to reduce the uncertainty associated with estimating historical releases of carbon from land use change. Tropical Africa is defined here as encompassing 22.7 × 10^6 km^2 of the earth's land surface and is comprised of countries that are located in tropical Africa (Angola, Botswana, Burundi, Cameroon, Cape Verde, Central African Republic, Chad, Congo, Benin, Equatorial Guinea, Ethiopia, Djibouti, Gabon, Gambia, Ghana, Guinea, Ivory Coast, Kenya, Liberia, Madagascar, Malawi, Mali, Mauritania, Mozambique, Namibia, Niger, Nigeria, Guinea-Bissau, Zimbabwe (Rhodesia), Rwanda, Senegal, Sierra Leone, Somalia, Sudan, Tanzania, Togo, Uganda, Burkina Faso (Upper Volta), Zaire, and Zambia). The database was developed using the GRID module in the ARC/INFO geographic information system. Source data were obtained from the Food and Agriculture Organization (FAO), the U.S. National Geophysical Data Center, and a limited number of biomass-carbon density case studies. These data were used to derive the maximum potential and actual (ca. 1980) aboveground biomass values at regional and country levels. The land-use data provided were derived from a vegetation map originally produced for the FAO by the International Institute of Vegetation Mapping, Toulouse, France.
Maximum likelihood resampling of noisy, spatially correlated data
NASA Astrophysics Data System (ADS)
Goff, J.; Jenkins, C.
2005-12-01
In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application, which runs the risk of erasing high variability components of the field in addition to the noise components. We present here an alternative to filtering: a newly developed methodology for correcting noise in data by finding the "best" value given the data value, its uncertainty, and the data values and uncertainties at proximal locations. The motivating rationale is that data points that are close to each other in space cannot differ by "too much", where how much is "too much" is governed by the field correlation properties. Data with large uncertainties will frequently violate this condition, and in such cases need to be corrected, or "resampled." The best solution for resampling is determined by the maximum of the likelihood function defined by the intersection of two probability density functions (pdf): (1) the data pdf, with mean and variance determined by the data value and squared uncertainty, respectively, and (2) the geostatistical pdf, whose mean and variance are determined by the kriging algorithm applied to proximal data values. A Monte Carlo sampling of the data probability space eliminates non-uniqueness, and weights the solution toward data values with lower uncertainties. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum likelihood resampling algorithm. The method is also applied to three marine geology/geophysics data examples: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) sidescan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckle noise.
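The core of the resampling rule has a closed form: the product of the two Gaussian pdfs peaks at the precision-weighted mean. A minimal sketch with made-up values (the full method adds kriging for the geostatistical pdf and Monte Carlo sampling of the data probability space):

import math

# Maximum of the likelihood formed by the data pdf N(x_d, s_d^2) and
# the geostatistical pdf N(x_k, s_k^2): the precision-weighted mean.
def resample(x_d, s_d, x_k, s_k):
    w_d, w_k = 1.0 / s_d ** 2, 1.0 / s_k ** 2
    x = (w_d * x_d + w_k * x_k) / (w_d + w_k)
    return x, math.sqrt(1.0 / (w_d + w_k))

# Noisy sounding vs. kriged estimate from proximal data (invented values).
print(resample(x_d=-42.0, s_d=3.0, x_k=-36.5, s_k=1.0))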
Reduction in maximum time uncertainty of paired time signals
Theodosiou, George E.; Dawson, John W.
1983-01-01
Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2 varying between two input terminals and representative of a series of single events, where t_1 <= t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20-800.
Remaining Useful Life Estimation in Prognosis: An Uncertainty Propagation Problem
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar; Goebel, Kai
2013-01-01
The estimation of remaining useful life is significant in the context of prognostics and health monitoring, and the prediction of remaining useful life is essential for online operations and decision-making. However, it is challenging to accurately predict the remaining useful life in practical aerospace applications due to the presence of various uncertainties that affect prognostic calculations, and in turn, render the remaining useful life prediction uncertain. It is challenging to identify and characterize the various sources of uncertainty in prognosis, understand how each of these sources of uncertainty affect the uncertainty in the remaining useful life prediction, and thereby compute the overall uncertainty in the remaining useful life prediction. In order to achieve these goals, this paper proposes that the task of estimating the remaining useful life must be approached as an uncertainty propagation problem. In this context, uncertainty propagation methods which are available in the literature are reviewed, and their applicability to prognostics and health monitoring are discussed.
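The abstract's central claim, that RUL prediction should be treated as an uncertainty propagation problem, can be illustrated with a toy Monte Carlo sketch; the linear degradation model and all distributions below are illustrative assumptions, not the paper's method.

```python
# Hedged sketch: propagate uncertainty in a degradation rate and in the
# current state estimate through a simple damage model to get an RUL
# distribution rather than a single number.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x_now, x_fail = 0.4, 1.0                     # current and failure damage levels
rate = rng.lognormal(mean=np.log(0.01), sigma=0.3, size=n)  # uncertain rate/hour
x0_err = rng.normal(0.0, 0.02, size=n)       # uncertain current-state estimate

rul = (x_fail - (x_now + x0_err)) / rate     # propagate samples through model
print(f"median RUL = {np.median(rul):.0f} h, "
      f"90% interval = [{np.percentile(rul, 5):.0f}, {np.percentile(rul, 95):.0f}] h")
```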
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; Ilie, Marcel; Shallhorn, Paul A.
2014-01-01
Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student-t distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-t distribution can encompass the exact solution.
Tracking initially unresolved thrusting objects in 3D using a single stationary optical sensor
NASA Astrophysics Data System (ADS)
Lu, Qin; Bar-Shalom, Yaakov; Willett, Peter; Granström, Karl; Ben-Dov, R.; Milgrom, B.
2017-05-01
This paper considers the problem of estimating the 3D states of a salvo of thrusting/ballistic endo-atmospheric objects using 2D Cartesian measurements from the focal plane array (FPA) of a single fixed optical sensor. Since the initial separations in the FPA are smaller than the resolution of the sensor, this results in merged measurements in the FPA, compounding the usual false-alarm and missed-detection uncertainty. We present a two-step methodology. First, we assume a Wiener process acceleration (WPA) model for the motion of the images of the projectiles in the optical sensor's FPA. We model the merged measurements with increased variance, and thence employ a multi-Bernoulli (MB) filter using the 2D measurements in the FPA. Second, using the set of associated measurements for each confirmed MB track, we formulate a parameter estimation problem, whose maximum likelihood estimate can be obtained via numerical search and can be used for impact point prediction. Simulation results illustrate the performance of the proposed method.
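A hedged sketch of the first step's motion model: the state transition and process-noise matrices of a continuous-time Wiener process acceleration (WPA) model in the standard Bar-Shalom form, with one Kalman time update. All numbers are illustrative.

```python
# Hedged sketch of a WPA model for one focal-plane axis; a merged
# measurement would then be handled in the update with an inflated
# measurement variance, as the abstract describes.
import numpy as np

def wpa_matrices(T, q):
    """State [pos, vel, acc]; transition F and process noise Q for WPA."""
    F = np.array([[1, T, T**2 / 2],
                  [0, 1, T],
                  [0, 0, 1]], dtype=float)
    Q = q * np.array([[T**5 / 20, T**4 / 8, T**3 / 6],
                      [T**4 / 8,  T**3 / 3, T**2 / 2],
                      [T**3 / 6,  T**2 / 2, T]])
    return F, Q

F, Q = wpa_matrices(T=0.1, q=1.0)
x = np.array([0.0, 1.0, 0.5])            # position, velocity, acceleration
P = np.eye(3)
x_pred, P_pred = F @ x, F @ P @ F.T + Q  # Kalman time update
print(x_pred)
```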
NASA Astrophysics Data System (ADS)
Liu, Di; Mishra, Ashok K.; Yu, Zhongbo
2016-07-01
This paper examines the combination of support vector machines (SVM) and the dual ensemble Kalman filter (EnKF) technique to estimate root zone soil moisture at different soil layers up to 100 cm depth. Multiple experiments are conducted in a data-rich environment to construct and validate the SVM model and to explore the effectiveness and robustness of the EnKF technique. It was observed that the performance of SVM relies more on the initial length of the training set than on other factors (e.g., cost function, regularization parameter, and kernel parameters). The dual EnKF technique proved to be efficient in improving SVM with observed data either at each time step or at flexible time steps. The EnKF technique can reach its maximum efficiency when the updating ensemble size approaches a certain threshold. It was observed that the SVM model performance for multi-layer soil moisture estimation can be influenced by the rainfall magnitude (e.g., dry and wet spells).
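A hedged sketch of a stochastic EnKF analysis step of the general kind used here to update modeled soil moisture with an observation; the layer count, ensemble size, and noise levels are illustrative assumptions.

```python
# Hedged sketch: stochastic (perturbed-observation) EnKF update of a
# multi-layer soil moisture state using a single surface observation.
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(X, y, H, r):
    """X: (n_state, n_ens) forecast ensemble; y: scalar observation;
    H: (1, n_state) observation operator; r: observation error variance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)           # ensemble anomalies
    Pf_Ht = A @ (H @ A).T / (n_ens - 1)             # P_f H^T
    S = (H @ A) @ (H @ A).T / (n_ens - 1) + r       # innovation covariance
    K = Pf_Ht / S                                   # Kalman gain (scalar obs)
    y_pert = y + rng.normal(0.0, np.sqrt(r), n_ens) # perturbed observations
    return X + K @ (y_pert - H @ X)

X = rng.normal(0.30, 0.05, size=(4, 50))  # 4 soil layers, 50 members
H = np.array([[1.0, 0.0, 0.0, 0.0]])      # observe the surface layer only
X_a = enkf_update(X, y=0.35, H=H, r=0.01**2)
print(X.mean(axis=1), X_a.mean(axis=1))
```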
NASA Technical Reports Server (NTRS)
Howell, L. W.
2001-01-01
A simple power law model consisting of a single spectral index α1 is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy Ek to a steeper spectral index α2 > α1 above Ek. The maximum likelihood procedure is developed for estimating these three spectral parameters of the broken power law energy spectrum from simulated detector responses. These estimates and their surrounding statistical uncertainty are being used to derive the requirements in energy resolution, calorimeter size, and energy response of a proposed sampling calorimeter for the Advanced Cosmic-ray Composition Experiment for the Space Station (ACCESS). This study thereby permits instrument developers to make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope.
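A hedged sketch of a maximum likelihood fit of a broken power law spectrum: an unbinned negative log-likelihood over simulated event energies, minimized numerically. The normalization is done by simple numerical quadrature, and all values are illustrative, not the ACCESS instrument parameters.

```python
# Hedged sketch: unbinned MLE of (alpha1, alpha2, Ek) for a broken power
# law pdf, continuous at the knee, on an energy range [Emin, Emax].
import numpy as np
from scipy.optimize import minimize

def bpl_pdf(E, a1, a2, Ek, Emin=1.0, Emax=1e4):
    s = np.where(E < Ek, E**-a1, Ek**(a2 - a1) * E**-a2)
    g = np.logspace(np.log10(Emin), np.log10(Emax), 4000)
    sg = np.where(g < Ek, g**-a1, Ek**(a2 - a1) * g**-a2)
    return s / np.trapz(sg, g)                      # normalize numerically

# Synthetic "detector" sample via an inverse-CDF lookup on a grid.
rng = np.random.default_rng(2)
grid = np.logspace(0, 4, 4000)
pdf = bpl_pdf(grid, 2.7, 3.1, 300.0)
cdf = np.cumsum(pdf * np.gradient(grid))
cdf /= cdf[-1]
E = np.interp(rng.uniform(size=20_000), cdf, grid)

nll = lambda p: -np.sum(np.log(bpl_pdf(E, *p)))     # unbinned neg. log-likelihood
fit = minimize(nll, x0=[2.5, 3.5, 200.0], method="Nelder-Mead")
print(fit.x)   # recovered (alpha1, alpha2, Ek); true values (2.7, 3.1, 300)
```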
Radiation exposure assessment for portsmouth naval shipyard health studies.
Daniels, R D; Taulbee, T D; Chen, P
2004-01-01
Occupational radiation exposures of 13,475 civilian nuclear shipyard workers were investigated as part of a retrospective mortality study. Estimates of annual, cumulative and collective doses were tabulated for future dose-response analysis. Record sets were assembled and amended through range checks, examination of distributions and inspection. Methods were developed to adjust for administrative overestimates and dose from previous employment. Uncertainties from doses below the recording threshold were estimated. Low-dose protracted radiation exposures from submarine overhaul and repair predominated. Cumulative doses are best approximated by a hybrid log-normal distribution with arithmetic mean and median values of 20.59 and 3.24 mSv, respectively. The distribution is highly skewed with more than half the workers having cumulative doses <10 mSv and >95% having doses <100 mSv. The maximum cumulative dose is estimated at 649.39 mSv from 15 person-years of exposure. The collective dose was 277.42 person-Sv with 96.8% attributed to employment at Portsmouth Naval Shipyard.
NASA Astrophysics Data System (ADS)
Kennedy, J. J.; Rayner, N. A.; Smith, R. O.; Parker, D. E.; Saunby, M.
2011-07-01
Changes in instrumentation and data availability have caused time-varying biases in estimates of global and regional average sea surface temperature. The sizes of the biases arising from these changes are estimated and their uncertainties evaluated. The estimated biases and their associated uncertainties are largest during the period immediately following the Second World War, reflecting the rapid and incompletely documented changes in shipping and data availability at the time. Adjustments have been applied to reduce these effects in gridded data sets of sea surface temperature and the results are presented as a set of interchangeable realizations. Uncertainties of estimated trends in global and regional average sea surface temperature due to bias adjustments since the Second World War are found to be larger than uncertainties arising from the choice of analysis technique, indicating that this is an important source of uncertainty in analyses of historical sea surface temperatures. Despite this, trends over the twentieth century remain qualitatively consistent.
Uncertainty in estimates of the number of extraterrestrial civilizations
NASA Technical Reports Server (NTRS)
Sturrock, P. A.
1980-01-01
An estimation of the number N of communicative civilizations is made by means of Drake's formula, which involves the combination of several quantities, each of which is to some extent uncertain. It is shown that the uncertainty in any quantity may be represented by a probability distribution function, even if that quantity is itself a probability. The uncertainty of current estimates of N derives principally from uncertainty in estimates of the lifetime of advanced civilizations. It is argued that this is due primarily to uncertainty concerning the existence of a Galactic Federation, which is in turn contingent upon uncertainty about whether the limitations of present-day physics are absolute or (in the event that there exists a yet undiscovered hyperphysics) transient. It is further argued that it is advantageous to consider explicitly these underlying assumptions in order to compare the probable numbers of civilizations operating radio beacons, permitting radio leakage, dispatching probes for radio surveillance, or dispatching vehicles for manned surveillance.
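A hedged sketch of the abstract's central idea, representing each Drake-formula factor by a probability distribution and propagating by Monte Carlo; every distribution below is an illustrative assumption, not Sturrock's.

```python
# Hedged sketch: Monte Carlo propagation of the Drake formula with each
# factor treated as a random variable rather than a point value.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
R_star = rng.uniform(1, 10, n)               # star formation rate per year
f_p    = rng.uniform(0.2, 1.0, n)            # fraction of stars with planets
n_e    = rng.uniform(0.5, 3.0, n)            # habitable planets per system
f_l    = rng.beta(2, 2, n)                   # probability life arises
f_i    = rng.beta(1, 3, n)                   # probability intelligence evolves
f_c    = rng.beta(1, 3, n)                   # probability of communication
L      = rng.lognormal(np.log(1e4), 1.5, n)  # civilization lifetime (years)

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"median N = {np.median(N):.0f}, 90% interval = "
      f"[{np.percentile(N, 5):.0f}, {np.percentile(N, 95):.0f}]")
```

Consistent with the abstract, the lifetime L dominates the spread of N in this kind of calculation because its distribution is by far the widest.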
Uncertainty of exploitation estimates made from tag returns
Miranda, L.E.; Brock, R.E.; Dorr, B.S.
2002-01-01
Over 6,000 crappies Pomoxis spp. were tagged in five water bodies to estimate exploitation rates by anglers. Exploitation rates were computed as the percentage of tags returned after adjustment for three sources of uncertainty: postrelease mortality due to the tagging process, tag loss, and the reporting rate of tagged fish. Confidence intervals around exploitation rates were estimated by resampling from the probability distributions of tagging mortality, tag loss, and reporting rate. Estimates of exploitation rates ranged from 17% to 54% among the five study systems. Uncertainty around estimates of tagging mortality, tag loss, and reporting resulted in 90% confidence intervals around the median exploitation rate as narrow as 15 percentage points and as broad as 46 percentage points. The greatest source of estimation error was uncertainty about tag reporting. Because the large investments required by tagging and reward operations produce imprecise estimates of the exploitation rate, it may be worth considering other approaches to estimating it or simply circumventing the exploitation question altogether.
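A hedged sketch of the resampling scheme described: the exploitation estimate is the tag-return fraction adjusted for tagging mortality, tag loss, and reporting rate, with a confidence interval from resampling those three adjustments. The beta distributions are illustrative assumptions.

```python
# Hedged sketch: exploitation rate from tag returns, with uncertainty from
# resampling the three adjustment factors.
import numpy as np

rng = np.random.default_rng(4)
tags, returns = 1200, 300
n = 50_000
mort   = rng.beta(10, 90, n)   # post-release tagging mortality, ~10%
loss   = rng.beta(5, 95, n)    # tag loss, ~5%
report = rng.beta(60, 40, n)   # reporting rate, ~60%

u = returns / (tags * (1 - mort) * (1 - loss) * report)
lo, med, hi = np.percentile(u, [5, 50, 95])
print(f"exploitation = {med:.2f} (90% CI {lo:.2f}-{hi:.2f})")
```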
NASA Technical Reports Server (NTRS)
Owens, Andrew; De Weck, Olivier L.; Stromgren, Chel; Goodliff, Kandyce; Cirillo, William
2017-01-01
Future crewed missions to Mars present a maintenance logistics challenge that is unprecedented in human spaceflight. Mission endurance – defined as the time between resupply opportunities – will be significantly longer than previous missions, and therefore logistics planning horizons are longer and the impact of uncertainty is magnified. Maintenance logistics forecasting typically assumes that component failure rates are deterministically known and uses them to represent aleatory uncertainty, or uncertainty that is inherent to the process being examined. However, failure rates cannot be directly measured; rather, they are estimated based on similarity to other components or statistical analysis of observed failures. As a result, epistemic uncertainty – that is, uncertainty in knowledge of the process – exists in failure rate estimates that must be accounted for. Analyses that neglect epistemic uncertainty tend to significantly underestimate risk. Epistemic uncertainty can be reduced via operational experience; for example, the International Space Station (ISS) failure rate estimates are refined using a Bayesian update process. However, design changes may re-introduce epistemic uncertainty. Thus, there is a tradeoff between changing a design to reduce failure rates and operating a fixed design to reduce uncertainty. This paper examines the impact of epistemic uncertainty on maintenance logistics requirements for future Mars missions, using data from the ISS Environmental Control and Life Support System (ECLS) as a baseline for a case study. Sensitivity analyses are performed to investigate the impact of variations in failure rate estimates and epistemic uncertainty on spares mass. The results of these analyses and their implications for future system design and mission planning are discussed.
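A hedged sketch of the Bayesian failure-rate refinement the abstract attributes to ISS practice, using the standard conjugate gamma-Poisson update; the prior parameters and observed experience are illustrative.

```python
# Hedged sketch: with a Gamma(alpha, beta) prior on a Poisson failure rate,
# observing n failures in T hours gives posterior Gamma(alpha + n, beta + T);
# accumulating operating hours shrinks the epistemic uncertainty.
import numpy as np

alpha0, beta0 = 2.0, 40_000.0          # prior: mean 5e-5 per hour, wide spread
failures, hours = 3, 120_000.0         # observed operating experience

alpha1, beta1 = alpha0 + failures, beta0 + hours   # conjugate posterior
mean = alpha1 / beta1
cv_prior = 1 / np.sqrt(alpha0)         # coefficient of variation of a Gamma
cv_post = 1 / np.sqrt(alpha1)
print(f"posterior mean rate = {mean:.2e}/h; CV {cv_prior:.2f} -> {cv_post:.2f}")
```

A design change would, as the abstract notes, effectively reset part of this accumulated evidence and widen the posterior again.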
Estimation of the uncertainty of analyte concentration from the measurement uncertainty.
Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F
2015-09-01
Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes significant at the extremes of the concentration range and that this is affected significantly by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.
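A hedged sketch of the numerical simulation approach: propagate assay response noise through the inverse of a four-parameter logistic (4PL) standard curve and read off the CV of the recovered concentration; the curve parameters and noise level are illustrative assumptions.

```python
# Hedged sketch: CV of the estimated analyte concentration from a 4PL
# standard curve, by simulation. The CV blows up at the extremes of the
# concentration range, as the abstract describes.
import numpy as np

a, d, c, b = 0.05, 2.0, 1.0, 1.2       # 4PL: y = d + (a - d)/(1 + (x/c)**b)

def forward(x):  return d + (a - d) / (1 + (x / c)**b)
def inverse(y):  return c * ((a - d) / (y - d) - 1)**(1 / b)

rng = np.random.default_rng(5)
for x_true in [0.01, 0.1, 1.0, 10.0, 100.0]:
    y = forward(x_true) + rng.normal(0, 0.01, 20_000)  # response noise
    u = (y - d) / (a - d)
    ok = (u > 0) & (u < 1)                             # invertible responses only
    x_hat = inverse(y[ok])
    print(f"x = {x_true:7.2f}: CV = {x_hat.std() / x_hat.mean():.2%}")
```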
Hoomans, Ties; Abrams, Keith R; Ament, Andre J H A; Evers, Silvia M A A; Severens, Johan L
2009-10-01
Decision making about resource allocation for guideline implementation to change clinical practice is inevitably undertaken in a context of uncertainty surrounding the cost-effectiveness of both clinical guidelines and implementation strategies. Adopting a total net benefit approach, a model was recently developed to overcome problems with the use of combined ratio statistics when analyzing decision uncertainty. To demonstrate the stochastic application of the model for informing decision making about the adoption of an audit and feedback strategy for implementing a guideline recommending intensive blood glucose control in type 2 diabetes in primary care in the Netherlands, an integrated Bayesian approach to decision modeling and evidence synthesis is adopted, using Markov Chain Monte Carlo simulation in WinBUGS. Data on model parameters are gathered from various sources, with effectiveness of implementation being estimated using pooled, random-effects meta-analysis. Decision uncertainty is illustrated using cost-effectiveness acceptability curves and the acceptability frontier. Decisions about whether to adopt intensified glycemic control and whether to adopt audit and feedback change with the maximum value that decision makers are willing to pay for health gain. Through simultaneously incorporating uncertain economic evidence on both guidance and implementation strategy, the cost-effectiveness acceptability curves and cost-effectiveness acceptability frontier show an increase in decision uncertainty concerning guideline implementation. The stochastic application in diabetes care demonstrates that the model provides a simple and useful tool for quantifying and exploring the (combined) uncertainty associated with decision making about adopting guidelines and implementation strategies and, therefore, for informing decisions about efficient resource allocation to change clinical practice.
NASA Astrophysics Data System (ADS)
Delottier, H.; Pryet, A.; Lemieux, J. M.; Dupuy, A.
2018-05-01
Specific yield and groundwater recharge of unconfined aquifers are both essential parameters for groundwater modeling and sustainable groundwater development, yet the collection of reliable estimates of these parameters remains challenging. Here, a joint approach combining an aquifer test with application of the water-table fluctuation (WTF) method is presented to estimate these parameters and quantify their uncertainty. The approach requires two wells: an observation well instrumented with a pressure probe for long-term monitoring and a pumping well, located in the vicinity, for the aquifer test. The derivative of observed drawdown levels highlights the necessity to represent delayed drainage from the unsaturated zone when interpreting the aquifer test results. Groundwater recharge is estimated with an event-based WTF method in order to minimize the transient effects of flow dynamics in the unsaturated zone. The uncertainty on groundwater recharge is obtained by the propagation of the uncertainties on specific yield (Bayesian inference) and groundwater recession dynamics (regression analysis) through the WTF equation. A major portion of the uncertainty on groundwater recharge originates from the uncertainty on the specific yield. The approach was applied to a site in Bordeaux (France). Groundwater recharge was estimated to be 335 mm with an associated uncertainty of 86.6 mm at 2σ. By the use of cost-effective instrumentation and parsimonious methods of interpretation, the replication of such a joint approach should be encouraged to provide reliable estimates of specific yield and groundwater recharge over a region of interest. This is necessary to reduce the predictive uncertainty of groundwater management models.
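A hedged sketch of the final propagation step: uncertainty on specific yield (here a posterior approximated as Gaussian) and on the recession correction pushed through the WTF relation R = Sy × (rise + recession correction); the numbers are illustrative, chosen only to land near the reported magnitudes.

```python
# Hedged sketch: Monte Carlo propagation of specific-yield and recession
# uncertainty through the water-table-fluctuation (WTF) equation.
import numpy as np

rng = np.random.default_rng(6)
n = 50_000
Sy  = rng.normal(0.10, 0.015, n)   # specific yield posterior (illustrative)
dh  = 2.8                          # summed event rises of the water table (m)
rec = rng.normal(0.55, 0.10, n)    # recession extrapolation correction (m)

R = Sy * (dh + rec) * 1000.0       # recharge in mm
print(f"recharge = {R.mean():.0f} mm +/- {2 * R.std():.0f} mm (2 sigma)")
```

As in the abstract, the relative spread of Sy dominates the recharge uncertainty in this setup.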
Uncertainty Estimation Cheat Sheet for Probabilistic Risk Assessment
NASA Technical Reports Server (NTRS)
Britton, Paul T.; Al Hassan, Mohammad; Ring, Robert W.
2017-01-01
"Uncertainty analysis itself is uncertain, therefore, you cannot evaluate it exactly," Source Uncertain Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This paper will explain common measures of centrality and dispersion; and-with examples-will provide guidelines for how they may be estimated to ensure effective technical contributions to decision-making.
Lognormal Uncertainty Estimation for Failure Rates
NASA Technical Reports Server (NTRS)
Britton, Paul T.; Al Hassan, Mohammad; Ring, Robert W.
2017-01-01
"Uncertainty analysis itself is uncertain, therefore, you cannot evaluate it exactly," Source Uncertain. Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This presentation will explain common measures of centrality and dispersion; and-with examples-will provide guidelines for how they may be estimated to ensure effective technical contributions to decision-making.
William Salas; Steve Hagen
2013-01-01
This presentation will provide an overview of an approach for quantifying uncertainty in spatial estimates of carbon emission from land use change. We generate uncertainty bounds around our final emissions estimate using a randomized, Monte Carlo (MC)-style sampling technique. This approach allows us to combine uncertainty from different sources without making...
Uncertainty in temperature response of current consumption-based emissions estimates
NASA Astrophysics Data System (ADS)
Karstensen, J.; Peters, G. P.; Andrew, R. M.
2015-05-01
Several studies have connected emissions of greenhouse gases to economic and trade data to quantify the causal chain from consumption to emissions and climate change. These studies usually combine data and models originating from different sources, making it difficult to estimate uncertainties along the entire causal chain. We estimate uncertainties in economic data, multi-pollutant emission statistics, and metric parameters, and use Monte Carlo analysis to quantify contributions to uncertainty and to determine how uncertainty propagates to estimates of global temperature change from regional and sectoral territorial- and consumption-based emissions for the year 2007. We find that the uncertainties are sensitive to the emission allocations, mix of pollutants included, the metric and its time horizon, and the level of aggregation of the results. Uncertainties in the final results are largely dominated by the climate sensitivity and the parameters associated with the warming effects of CO2. Based on our assumptions, which exclude correlations in the economic data, the uncertainty in the economic data appears to have a relatively small impact on uncertainty at the national level in comparison to emissions and metric uncertainty. Much higher uncertainties are found at the sectoral level. Our results suggest that consumption-based national emissions are not significantly more uncertain than the corresponding production-based emissions since the largest uncertainties are due to metric and emissions which affect both perspectives equally. The two perspectives exhibit different sectoral uncertainties, due to changes of pollutant compositions. We find global sectoral consumption uncertainties in the range of ±10 to ±27 % using the Global Temperature Potential with a 50-year time horizon, with metric uncertainties dominating. National-level uncertainties are similar in both perspectives due to the dominance of CO2 over other pollutants. The consumption emissions of the top 10 emitting regions have a broad uncertainty range of ±9 to ±25 %, with metric and emission uncertainties contributing similarly. The absolute global temperature potential (AGTP) with a 50-year time horizon has much higher uncertainties, with considerable uncertainty overlap for regions and sectors, indicating that the ranking of countries is uncertain.
NASA Technical Reports Server (NTRS)
Genovese, Christopher R.; Stark, Philip B.; Thompson, Michael J.
1995-01-01
Observed solar p-mode frequency splittings can be used to estimate angular velocity as a function of position in the solar interior. Formal uncertainties of such estimates depend on the method of estimation (e.g., least-squares), the distribution of errors in the observations, and the parameterization imposed on the angular velocity. We obtain lower bounds on the uncertainties that do not depend on the method of estimation; the bounds depend on an assumed parameterization, but the fact that they are lower bounds for the 'true' uncertainty does not. Ninety-five percent confidence intervals for estimates of the angular velocity from 1986 Big Bear Solar Observatory (BBSO) data, based on a 3659 element tensor-product cubic-spline parameterization, are everywhere wider than 120 nHz, and exceed 60,000 nHz near the core. When compared with estimates of the solar rotation, these bounds reveal that useful inferences based on pointwise estimates of the angular velocity using 1986 BBSO splitting data are not feasible over most of the Sun's volume. The discouraging size of the uncertainties is due principally to the fact that helioseismic measurements are insensitive to changes in the angular velocity at individual points, so estimates of point values based on splittings are extremely uncertain. Functionals that measure distributed 'smooth' properties are, in general, better constrained than estimates of the rotation at a point. For example, the uncertainties in estimated differences of average rotation between adjacent blocks of about 0.001 solar volumes across the base of the convective zone are much smaller, and one of several estimated differences we compute appears significant at the 95% level.
NASA Astrophysics Data System (ADS)
Kärhä, Petri; Vaskuri, Anna; Mäntynen, Henrik; Mikkonen, Nikke; Ikonen, Erkki
2017-08-01
Spectral irradiance data are often used to calculate colorimetric properties, such as color coordinates and color temperatures of light sources by integration. The spectral data may contain unknown correlations that should be accounted for in the uncertainty estimation. We propose a new method for estimating uncertainties in such cases. The method goes through all possible scenarios of deviations using Monte Carlo analysis. Varying spectral error functions are produced by combining spectral base functions, and the distorted spectra are used to calculate the colorimetric quantities. Standard deviations of the colorimetric quantities at different scenarios give uncertainties assuming no correlations, uncertainties assuming full correlation, and uncertainties for an unfavorable case of unknown correlations, which turn out to be a significant source of uncertainty. With 1% standard uncertainty in spectral irradiance, the expanded uncertainty of the correlated color temperature of a source corresponding to the CIE Standard Illuminant A may reach as high as 37.2 K in unfavorable conditions, when calculations assuming full correlation give zero uncertainty, and calculations assuming no correlations yield the expanded uncertainties of 5.6 K and 12.1 K, with wavelength steps of 1 nm and 5 nm used in spectral integrations, respectively. We also show that there is an absolute limit of 60.2 K in the error of the correlated color temperature for Standard Illuminant A when assuming 1% standard uncertainty in the spectral irradiance. A comparison of our uncorrelated uncertainties with those obtained using analytical methods by other research groups shows good agreement. We re-estimated the uncertainties for the colorimetric properties of our 1 kW photometric standard lamps using the new method. The revised uncertainty of color temperature is a factor of 2.5 higher than the uncertainty assuming no correlations.
Li, Zhengpeng; Liu, Shuguang; Zhang, Xuesong; West, Tristram O.; Ogle, Stephen M.; Zhou, Naijun
2016-01-01
Quantifying spatial and temporal patterns of carbon sources and sinks and their uncertainties across agriculture-dominated areas remains challenging for understanding regional carbon cycles. Characteristics of local land cover inputs could impact regional carbon estimates, but the effect has not been fully evaluated in the past. Within the North American Carbon Program Mid-Continent Intensive (MCI) Campaign, three models were developed to estimate carbon fluxes on croplands: an inventory-based model, the Environmental Policy Integrated Climate (EPIC) model, and the General Ensemble biogeochemical Modeling System (GEMS) model. They all provided estimates of three major carbon fluxes on cropland: net primary production (NPP), net ecosystem production (NEP), and soil organic carbon (SOC) change. Using data mining and spatial statistics, we studied the spatial distribution of the carbon flux uncertainties and the relationships between the uncertainties and land cover characteristics. Results indicated that uncertainties for all three carbon fluxes were not randomly distributed, but instead formed multiple clusters within the MCI region. We investigated the impacts of three land cover characteristics on the flux uncertainties: cropland percentage, cropland richness and cropland diversity. The results indicated that cropland percentage significantly influenced the uncertainties of NPP and NEP, but not the uncertainties of SOC change. Greater uncertainties of NPP and NEP were found in counties with small cropland percentage than in counties with large cropland percentage. Cropland species richness and diversity also showed negative correlations with the model uncertainties. Our study demonstrated that land cover characteristics contributed to the uncertainties of regional carbon flux estimates. The approaches we used in this study can be applied to other ecosystem models to identify areas with high uncertainties and where models can be improved to reduce overall uncertainties for regional carbon flux estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, J.; Moteabbed, M.; Paganetti, H., E-mail: hpaganetti@mgh.harvard.edu
2015-01-15
Purpose: Theoretical dose–response models offer the possibility to assess second cancer induction risks after external beam therapy. The parameters used in these models are determined with limited data from epidemiological studies. Risk estimations are thus associated with considerable uncertainties. This study aims at illustrating uncertainties when predicting the risk for organ-specific second cancers in the primary radiation field, illustrated by choosing selected treatment plans for brain cancer patients. Methods: A widely used risk model was considered in this study. The uncertainties of the model parameters were estimated with reported data of second cancer incidences for various organs. Standard error propagation was then applied to assess the uncertainty in the risk model. Next, second cancer risks of five pediatric patients treated for cancer in the head and neck regions were calculated. For each case, treatment plans for proton and photon therapy were designed to estimate the uncertainties (a) in the lifetime attributable risk (LAR) for a given treatment modality and (b) when comparing risks of two different treatment modalities. Results: Uncertainties in excess of 100% of the risk were found for almost all organs considered. When applied to treatment plans, the calculated LAR values have uncertainties of the same magnitude. A comparison between cancer risks of different treatment modalities, however, does allow statistically significant conclusions. In the studied cases, the patient-averaged LAR ratio of proton and photon treatments was 0.35, 0.56, and 0.59 for brain carcinoma, brain sarcoma, and bone sarcoma, respectively. Their corresponding uncertainties were estimated to be potentially below 5%, depending on uncertainties in dosimetry. Conclusions: The uncertainty in the dose–response curve in cancer risk models makes it currently impractical to predict the risk for an individual external beam treatment. On the other hand, the ratio of absolute risks between two modalities is less sensitive to the uncertainties in the risk model and can provide statistically significant estimates.
NASA Astrophysics Data System (ADS)
Bu, Xiangwei; Wu, Xiaoyan; Huang, Jiaqi; Wei, Daozhi
2016-11-01
This paper investigates the design of a novel estimation-free prescribed performance non-affine control strategy for the longitudinal dynamics of an air-breathing hypersonic vehicle (AHV) via back-stepping. The proposed control scheme is capable of guaranteeing tracking errors of velocity, altitude, flight-path angle, pitch angle and pitch rate with prescribed performance. By prescribed performance, we mean that the tracking error is limited to a predefined arbitrarily small residual set, with convergence rate no less than a certain constant, exhibiting maximum overshoot less than a given value. Unlike traditional back-stepping designs, there is no need of an affine model in this paper. Moreover, both the tedious analytic and numerical computations of time derivatives of virtual control laws are completely avoided. In contrast to estimation-based strategies, the presented estimation-free controller possesses much lower computational costs, while successfully eliminating the potential problem of parameter drifting. Owing to its independence on an accurate AHV model, the studied methodology exhibits excellent robustness against system uncertainties. Finally, simulation results from a fully nonlinear model clarify and verify the design.
NASA Astrophysics Data System (ADS)
Poppeliers, C.; Preston, L. A.
2017-12-01
Measurements of seismic surface wave dispersion can be used to infer the structure of the Earth's subsurface. Typically, to identify group- and phase-velocity, a series of narrow-band filters is applied to surface wave seismograms. Frequency-dependent arrival times of surface waves can then be identified from the resulting suite of narrow-band seismograms. The frequency-dependent velocity estimates are then inverted for subsurface velocity structure. However, this technique provides no way to estimate the uncertainty of the measured surface wave velocities, and consequently there is no estimate of uncertainty on, for example, tomographic results. For the work here, we explore using the multiwavelet transform (MWT) as an alternate method to estimate surface wave speeds. The MWT decomposes a signal similarly to the conventional filter bank technique, but with two primary advantages: (1) the time-frequency localization is optimized with respect to the time-frequency tradeoff, and (2) we can use the MWT to estimate the uncertainty of the resulting surface wave group- and phase-velocities. The uncertainties of the surface wave speed measurements can then be propagated into tomographic inversions to provide uncertainties of resolved Earth structure. As proof of concept, we apply our technique to four seismic ambient noise correlograms that were collected from the University of Nevada Reno seismic network near the Nevada National Security Site. We invert the estimated group- and phase-velocities, as well as their uncertainties, for 1-D Earth structure for each station pair. These preliminary results generally agree with 1-D velocities obtained from inverting dispersion curves estimated from a conventional Gaussian filter bank.
Tissue resistivity estimation in the presence of positional and geometrical uncertainties.
Baysal, U; Eyüboğlu, B M
2000-08-01
Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with the results of conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3% compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.
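A hedged sketch of a statistically constrained linear minimum mean squared error estimator of the general form described, for a generic linearized model y = Ax + n rather than the thorax geometry; the covariances and dimensions are illustrative.

```python
# Hedged sketch: the optimal linear MMSE inverse matrix
# B = Cx A^T (A Cx A^T + Cn)^-1 maps measurements to unknowns given prior
# and noise covariances, compared against an unconstrained least squares fit.
import numpy as np

rng = np.random.default_rng(7)
n_x, n_y = 5, 12
A  = rng.normal(size=(n_y, n_x))          # linearized forward model
Cx = np.diag(rng.uniform(0.5, 2.0, n_x))  # prior covariance of unknowns
Cn = 0.05 * np.eye(n_y)                   # instrumentation noise covariance

B = Cx @ A.T @ np.linalg.inv(A @ Cx @ A.T + Cn)   # MMSE inverse matrix

x_true = rng.multivariate_normal(np.zeros(n_x), Cx)
y = A @ x_true + rng.multivariate_normal(np.zeros(n_y), Cn)
x_mmse = B @ y
x_lse  = np.linalg.lstsq(A, y, rcond=None)[0]     # conventional LSE
print(np.abs(x_mmse - x_true).mean(), np.abs(x_lse - x_true).mean())
```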
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-08-23
The estimation of health impacts often involves uncertain input variables and assumptions that have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. The magnitude of the health effect costs depended mostly on the discount rate, the exposure-response coefficient, and the plausibility of cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties. When estimating life-expectancy, the estimates used for the cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without any major effect on the results.
Funamizu, Akihiro; Ito, Makoto; Doya, Kenji; Kanzaki, Ryohei; Takahashi, Hirokazu
2012-01-01
The estimation of reward outcomes for action candidates is essential for decision making. In this study, we examined whether and how the uncertainty in reward outcome estimation affects the action choice and learning rate. We designed a choice task in which rats selected either the left-poking or right-poking hole and received a reward of a food pellet stochastically. The reward probabilities of the left and right holes were chosen from six settings (high, 100% vs. 66%; mid, 66% vs. 33%; low, 33% vs. 0% for the left vs. right holes, and the opposites) and were switched every 20–549 trials. We used Bayesian Q-learning models to estimate the time course of the probability distribution of action values and tested whether they better explain the behaviors of rats than standard Q-learning models that estimate only the mean of action values. Model comparison by cross-validation revealed that a Bayesian Q-learning model with an asymmetric update for reward and non-reward outcomes fit the choice time course of the rats best. In the action-choice equation of the Bayesian Q-learning model, the estimated coefficient for the variance of action value was positive, meaning that rats were uncertainty seeking. Further analysis of the Bayesian Q-learning model suggested that the uncertainty facilitated the effective learning rate. These results suggest that the rats consider uncertainty in action-value estimation and that they have an uncertainty-seeking action policy and uncertainty-dependent modulation of the effective learning rate.
NASA Astrophysics Data System (ADS)
Aulenbach, B. T.; Burns, D. A.; Shanley, J. B.; Yanai, R. D.; Bae, K.; Wild, A.; Yang, Y.; Dong, Y.
2013-12-01
There are many sources of uncertainty in estimates of streamwater solute flux. Flux is the product of discharge and concentration (summed over time), each of which has measurement uncertainty of its own. Discharge can be measured almost continuously, but concentrations are usually determined from discrete samples, which increases uncertainty dependent on sampling frequency and how concentrations are assigned for the periods between samples. Gaps between samples can be estimated by linear interpolation or by models that use the relations between concentration and continuously measured or known variables such as discharge, season, temperature, and time. For this project, developed in cooperation with QUEST (Quantifying Uncertainty in Ecosystem Studies), we evaluated uncertainty for three flux estimation methods and three different sampling frequencies (monthly, weekly, and weekly plus event). The constituents investigated were dissolved NO3, Si, SO4, and dissolved organic carbon (DOC), solutes whose concentration dynamics exhibit strongly contrasting behavior. The evaluation was completed for a 10-year period at five small, forested watersheds in Georgia, New Hampshire, New York, Puerto Rico, and Vermont. Concentration regression models were developed for each solute at each of the three sampling frequencies for all five watersheds. Fluxes were then calculated using (1) a linear interpolation approach, (2) a regression-model method, and (3) the composite method, which combines the regression-model method for estimating concentrations with the linear interpolation method for correcting model residuals to the observed sample concentrations. We considered the best estimates of flux to be those derived using the composite method at the highest sampling frequency. We also evaluated the importance of sampling frequency and estimation method on flux estimate uncertainty; flux uncertainty was dependent on the variability characteristics of each solute and varied for different reporting periods (e.g., 10-year study period vs. annual vs. monthly). The usefulness of the two regression-model-based flux estimation approaches depended upon the amount of variance in concentrations the regression models could explain. Our results can guide the development of optimal sampling strategies by weighing sampling frequency against improvements in uncertainty in stream flux estimates for solutes with particular characteristics of variability. The appropriate flux estimation method depends on a combination of sampling frequency and the strength of the concentration regression models. Sites: Biscuit Brook (Frost Valley, NY), Hubbard Brook Experimental Forest and LTER (West Thornton, NH), Luquillo Experimental Forest and LTER (Luquillo, Puerto Rico), Panola Mountain (Stockbridge, GA), Sleepers River Research Watershed (Danville, VT)
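A hedged sketch of the three flux-estimation methods on synthetic data (which stands in for the watershed records): linear interpolation of sampled concentrations, a concentration-discharge regression, and the composite method that adds interpolated model residuals back to the regression prediction.

```python
# Hedged sketch: compare interpolation, regression-model, and composite
# flux estimates against a known synthetic "truth".
import numpy as np

rng = np.random.default_rng(8)
t = np.arange(365.0)                           # daily time steps
Q = 1.0 + 0.5 * np.sin(2 * np.pi * t / 365)    # discharge
c_true = 2.0 + 0.8 * np.log(Q) + rng.normal(0, 0.05, t.size)

samp = np.arange(0, 365, 7)                    # weekly sampling
c_interp = np.interp(t, samp, c_true[samp])    # (1) linear interpolation

coef = np.polyfit(np.log(Q[samp]), c_true[samp], 1)
c_model = np.polyval(coef, np.log(Q))          # (2) regression on discharge

resid = c_true[samp] - c_model[samp]
c_comp = c_model + np.interp(t, samp, resid)   # (3) composite method

for name, c in [("interp", c_interp), ("model", c_model), ("composite", c_comp)]:
    flux = np.sum(c * Q)                       # annual flux (conc x discharge)
    print(name, f"{flux:.1f}", "true:", f"{np.sum(c_true * Q):.1f}")
```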
Xiang, G.; Ferson, S.; Ginzburg, L.; Longpré, L.; Mayorga, E.; Kosheleva, O.
2013-01-01
To preserve privacy, the original data points (with exact values) are replaced by boxes containing each (inaccessible) data point. This privacy-motivated uncertainty leads to uncertainty in the statistical characteristics computed based on this data. In a previous paper, we described how to minimize this uncertainty under the assumption that we use the same standard statistical estimates for the desired characteristics. In this paper, we show that we can further decrease the resulting uncertainty if we allow fuzzy-motivated weighted estimates, and we explain how to optimally select the corresponding weights.
Climate Projections and Uncertainty Communication.
Joslyn, Susan L; LeClerc, Jared E
2016-01-01
Lingering skepticism about climate change might be due in part to the way climate projections are perceived by members of the public. Variability between scientists' estimates might give the impression that scientists disagree about the fact of climate change rather than about details concerning the extent or timing. Providing uncertainty estimates might clarify that the variability is due in part to quantifiable uncertainty inherent in the prediction process, thereby increasing people's trust in climate projections. This hypothesis was tested in two experiments. Results suggest that including uncertainty estimates along with climate projections leads to an increase in participants' trust in the information. Analyses explored the roles of time, place, demographic differences (e.g., age, gender, education level, political party affiliation), and initial belief in climate change. Implications are discussed in terms of the potential benefit of adding uncertainty estimates to public climate projections. Copyright © 2015 Cognitive Science Society, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Sheng-Quan; Wu, Xing-Gang; Brodsky, Stanley J.
We present improved perturbative QCD (pQCD) predictions for Higgs boson hadroproduction at the LHC by applying the principle of maximum conformality (PMC), a procedure which resums the pQCD series using the renormalization group (RG), thereby eliminating the dependence of the predictions on the choice of the renormalization scheme while minimizing sensitivity to the initial choice of the renormalization scale. In previous pQCD predictions for Higgs boson hadroproduction, it has been conventional to assume that the renormalization scale μ_r of the QCD coupling α_s(μ_r) is the Higgs mass and then to vary this choice over the range (1/2)m_H < μ_r < 2m_H in order to estimate the theory uncertainty. However, this error estimate is only sensitive to the nonconformal β terms in the pQCD series, and thus it fails to correctly estimate the theory uncertainty in cases where a pQCD series has large higher-order contributions, as is the case for Higgs boson hadroproduction. Furthermore, this ad hoc choice of scale and range gives pQCD predictions which depend on the renormalization scheme being used, in contradiction to basic RG principles. In contrast, after applying the PMC, we obtain next-to-next-to-leading-order RG-resummed pQCD predictions for Higgs boson hadroproduction which are renormalization-scheme independent and have minimal sensitivity to the choice of the initial renormalization scale. Taking m_H = 125 GeV, the PMC predictions for the pp → HX Higgs inclusive hadroproduction cross sections for various LHC center-of-mass energies are σ_Incl|7 TeV = 21.21 (+1.36/−1.32) pb, σ_Incl|8 TeV = 27.37 (+1.65/−1.59) pb, and σ_Incl|13 TeV = 65.72 (+3.46/−3.0) pb. We also predict the fiducial cross section σ_fid(pp → H → γγ): σ_fid|7 TeV = 30.1 (+2.3/−2.2) fb, σ_fid|8 TeV = 38.3 (+2.9/−2.8) fb, and σ_fid|13 TeV = 85.8 (+5.7/−5.3) fb. The error limits in these predictions include the small residual high-order renormalization-scale dependence plus the uncertainty from the factorization scale. The PMC predictions show better agreement with the ATLAS measurements than the LHC Higgs Cross Section Working Group predictions which are based on conventional renormalization-scale setting.
Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.
2017-11-15
In Hezaveh et al. (2017) we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single hyperparameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that neural networks can be a fast alternative to Monte Carlo Markov Chains for parameter uncertainty estimation in many practical applications, allowing more than seven orders of magnitude improvement in speed.
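A hedged sketch of the underlying Monte Carlo dropout idea: keep dropout active at inference and use the spread of stochastic forward passes as the uncertainty estimate. A toy numpy MLP with random weights stands in for the trained lensing network.

```python
# Hedged sketch: MC dropout for predictive uncertainty on a toy network.
import numpy as np

rng = np.random.default_rng(9)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

def forward(x, p_drop=0.2):
    h = np.maximum(0.0, W1 @ x + b1)               # ReLU hidden layer
    mask = rng.uniform(size=h.shape) > p_drop      # dropout stays ON at test time
    h = h * mask / (1.0 - p_drop)
    return (W2 @ h + b2)[0]

x = rng.normal(size=8)                             # one "observation"
samples = np.array([forward(x) for _ in range(1000)])
print(f"prediction = {samples.mean():.3f} +/- {samples.std():.3f}")
# The dropout rate is the single hyperparameter the paper tunes so that the
# coverage of such intervals matches their nominal confidence levels.
```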
NASA Technical Reports Server (NTRS)
Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.
2002-01-01
The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
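A hedged sketch of the sampling-uncertainty evaluation: subsample a rain-rate series at a fixed interval, as a satellite would, and compare the subsampled average against the full record; the synthetic intermittent rainfall is an illustrative stand-in for the radar data.

```python
# Hedged sketch: relative sampling error of time-averaged rain rate as a
# function of the sampling interval, averaged over all sampling phases.
import numpy as np

rng = np.random.default_rng(10)
hours = 30 * 24
rain = rng.gamma(0.5, 2.0, hours) * (rng.uniform(size=hours) < 0.1)  # spiky field
truth = rain.mean()

for dt in [1, 3, 6, 12]:                            # sampling interval (h)
    errs = [abs(rain[off::dt].mean() - truth) / truth
            for off in range(dt)]                   # all phases of the sampling
    print(f"dt = {dt:2d} h: mean relative sampling error = {np.mean(errs):.1%}")
```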
Section summary: Uncertainty and design considerations
Stephen Hagen
2013-01-01
Well planned sampling designs and robust approaches to estimating uncertainty are critical components of forest monitoring. The importance of uncertainty estimation increases as deforestation and degradation issues become more closely tied to financing incentives for reducing greenhouse gas emissions in the forest sector. Investors like to know risk and risk is tightly...
The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) (also known as Calibration, Optimization and Sensitivity and Uncertainty (CUSO)) was developed in a joint effort between several members of both ...
NASA Astrophysics Data System (ADS)
Munoz-Jaramillo, Andres
2017-08-01
Data products in heliospheric physics are very often provided without clear estimates of uncertainty. From helioseismology in the solar interior, all the way to in situ solar wind measurements beyond 1 AU, uncertainty estimates are typically hard for users to find (buried inside long documents that are separate from the data products), or simply non-existent. There are two main reasons why uncertainty measurements are hard to find: (1) understanding instrumental systematic errors is given a much higher priority inside instrument teams, and (2) the desire to perfectly understand all sources of uncertainty postpones indefinitely the actual quantification of uncertainty in our measurements. Using the cross-calibration of 200 years of sunspot area measurements as a case study, in this presentation we will discuss the negative impact that inadequate measurements of uncertainty have on users, through the appearance of toxic and unnecessary controversies, and on data providers, through the creation of unrealistic expectations regarding the information that can be extracted from their data. We will discuss how empirical estimates of uncertainty represent a very good alternative to not providing any estimates at all, and finalize by discussing the bare essentials that should become our standard practice for future instruments and surveys.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
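The stated bound, maximum seismic moment equal to the injected volume times the modulus of rigidity, converts directly to a moment magnitude; here is a minimal sketch using the standard Hanks-Kanamori relation and a typical crustal rigidity of 3 × 10^10 Pa (an assumed value, not taken from the abstract).

```python
# Hedged sketch: upper-bound moment magnitude from injected fluid volume,
# M0_max = G * V, Mw = (2/3) * (log10(M0) - 9.1).
import numpy as np

def max_magnitude(volume_m3, rigidity_pa=3e10):
    m0_max = rigidity_pa * volume_m3      # upper bound on seismic moment (N m)
    return (2.0 / 3.0) * (np.log10(m0_max) - 9.1)

for v in [1e4, 1e5, 1e6]:                 # injected volumes (m^3)
    print(f"V = {v:.0e} m^3 -> Mw_max ~ {max_magnitude(v):.1f}")
```

Consistent with the abstract, only the largest injected volumes (of the order of 10^6 m^3, typical of wastewater disposal) push the bound toward magnitude 5.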
NASA Astrophysics Data System (ADS)
Hobbs, J.; Turmon, M.; David, C. H.; Reager, J. T., II; Famiglietti, J. S.
2017-12-01
NASA's Western States Water Mission (WSWM) combines remote sensing of the terrestrial water cycle with hydrological models to provide high-resolution state estimates for multiple variables. The effort includes both land surface and river routing models that are subject to several sources of uncertainty, including errors in the model forcing and model structural uncertainty. Computational and storage constraints prohibit extensive ensemble simulations, so this work outlines efficient but flexible approaches for estimating and reporting uncertainty. Calibrated by remote sensing and in situ data where available, we illustrate the application of these techniques in producing state estimates with associated uncertainties at kilometer-scale resolution for key variables such as soil moisture, groundwater, and streamflow.
Multivariate Probabilistic Analysis of an Hydrological Model
NASA Astrophysics Data System (ADS)
Franceschini, Samuela; Marani, Marco
2010-05-01
Model predictions based on rainfall measurements and hydrological model results are often limited by the systematic error of measuring instruments, by the intrinsic variability of the natural processes and by the uncertainty of the mathematical representation. We propose a means to identify such sources of uncertainty and to quantify their effects based on point-estimate approaches, as a valid alternative to cumbersome Monte Carlo methods. We present uncertainty analyses of the hydrologic response to selected meteorological events, in the mountain streamflow-generating portion of the Brenta basin at Bassano del Grappa, Italy. The Brenta river catchment has a relatively uniform morphology and quite a heterogeneous rainfall pattern. In the present work, we evaluate two sources of uncertainty: data uncertainty (the uncertainty due to data handling and analysis) and model uncertainty (the uncertainty related to the formulation of the model). We thus evaluate the effects of the measurement error of tipping-bucket rain gauges, the uncertainty in estimating spatially distributed rainfall through block kriging, and the uncertainty associated with estimated model parameters. To this end, we coupled a deterministic model based on the geomorphological theory of the hydrologic response to probabilistic methods. In particular, we compare the results of Monte Carlo simulations (MCS) to the results obtained, in the same conditions, using Li's point estimate method (LiM). The LiM is a probabilistic technique that approximates the continuous probability distribution function of the considered stochastic variables by means of discrete points and associated weights. This makes it possible to reproduce results satisfactorily with only a few evaluations of the model function. The comparison between the LiM and MCS results highlights the pros and cons of using an approximating method. LiM is less computationally demanding than MCS, but has limited applicability, especially when the model response is highly nonlinear. Higher-order approximations can provide more accurate estimations, but reduce the numerical advantage of the LiM. The results of the uncertainty analysis identify the main sources of uncertainty in the computation of river discharge. In this particular case, the spatial variability of rainfall and the uncertainty of the model parameters are shown to have the greatest impact on discharge evaluation. This, in turn, highlights the need to support any estimated hydrological response with probability information and risk analysis results in order to provide a robust, systematic framework for decision making.
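Li's method itself is not reproduced here, but the general point-estimate idea, replacing a continuous distribution by a few weighted points, can be illustrated with Rosenblueth's classical two-point scheme. Everything in the sketch (model, means, standard deviations) is hypothetical.

```python
# Rosenblueth's two-point estimate: evaluate the model at all 2^n
# combinations of mean +/- one standard deviation, with equal weights,
# to approximate output mean and variance with very few model runs.
import itertools
import numpy as np

def rosenblueth_2pem(model, means, stds):
    """Approximate mean/std of model(x) from 2^n weighted sign points."""
    n = len(means)
    outputs, weights = [], []
    for signs in itertools.product([-1.0, 1.0], repeat=n):
        x = [m + s * sd for m, s, sd in zip(means, signs, stds)]
        outputs.append(model(x))
        weights.append(2.0 ** -n)
    outputs, weights = np.array(outputs), np.array(weights)
    mean = np.sum(weights * outputs)
    var = np.sum(weights * (outputs - mean) ** 2)
    return mean, np.sqrt(var)

# Hypothetical rainfall-runoff-like response, nonlinear in its inputs
model = lambda x: 0.8 * x[0] ** 1.3 / (1.0 + 0.05 * x[1])
pem_mean, pem_std = rosenblueth_2pem(model, means=[50.0, 10.0], stds=[8.0, 2.0])

# Monte Carlo reference: 4 model runs above versus 100,000 here
rng = np.random.default_rng(0)
mc = model([rng.normal(50, 8, 100_000), rng.normal(10, 2, 100_000)])
print(pem_mean, pem_std, mc.mean(), mc.std())
```

With four model evaluations instead of thousands, the two moments come out close to the Monte Carlo reference, which is the trade-off the abstract discusses.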
NASA Astrophysics Data System (ADS)
Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran
2016-09-01
In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate uncertainty of parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of 24 parameters used in HMS, three flood events were used for calibration and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions (L1-L4), namely Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining three (L5-L7) fall in the formal category. L5 focuses on the relationship between traditional least squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function, and vary across likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have an almost similar effect on the sensitivity of parameters. The 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under formal likelihood functions L5 and L7, but likelihood function L5 may result in biased and unreliable estimation of parameters due to violation of the residual-error assumptions. Thus, likelihood function L7 provides a credible posterior distribution of model parameters and can therefore be employed for further applications.
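Two of the informal objective functions named above have compact standard definitions. A minimal sketch, assuming the usual textbook forms of Nash-Sutcliffe efficiency and Willmott's index of agreement (the study's exact variants may differ):

```python
# Standard forms of two informal likelihoods mentioned above.
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def index_of_agreement(obs, sim):
    """Willmott's d: 1 - sum((obs-sim)^2) / potential-error term."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    num = np.sum((obs - sim) ** 2)
    den = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

obs = [12.0, 30.0, 55.0, 41.0, 18.0]   # observed flood flows (m3/s), invented
sim = [10.0, 33.0, 50.0, 45.0, 20.0]   # simulated flows, invented
print(nash_sutcliffe(obs, sim), index_of_agreement(obs, sim))
```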
Autonomous frequency domain identification: Theory and experiment
NASA Technical Reports Server (NTRS)
Yam, Yeung; Bayard, D. S.; Hadaegh, F. Y.; Mettler, E.; Milman, M. H.; Scheid, R. E.
1989-01-01
The analysis, design, and on-orbit tuning of robust controllers require more information about the plant than simply a nominal estimate of the plant transfer function. Information is also required concerning the uncertainty in the nominal estimate, or more generally, the identification of a model set within which the true plant is known to lie. The identification methodology that was developed and experimentally demonstrated makes use of a simple but useful characterization of the model uncertainty based on the output error. This is a characterization of the additive uncertainty in the plant model, which has found considerable use in many robust control analysis and synthesis techniques. The identification process is initiated by a stochastic input u which is applied to the plant p, giving rise to the output y. Spectral estimation (ĥ = P_uy/P_uu) is used as an estimate of p, and the model order is estimated using the product moment matrix (PMM) method. A parametric model p̂ is then determined by curve fitting the spectral estimate to a rational transfer function. The additive uncertainty δ_m = p − p̂ is then estimated by the cross-spectral estimate δ̂ = P_ue/P_uu, where e = y − ŷ is the output error and ŷ = p̂u is the computed output of the parametric model subjected to the actual input u. The experimental results demonstrate that the curve-fitting algorithm produces the reduced-order plant model which minimizes the additive uncertainty. The nominal transfer function estimate p̂ and the estimate δ̂ of the additive uncertainty δ_m are subsequently available to be used for optimization of robust controller performance and stability.
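The spectral steps above map directly onto standard cross-spectral estimators. A sketch using scipy, with a hypothetical fourth-order plant and a deliberately reduced second-order model standing in for the curve-fit parametric model:

```python
# Estimate the plant as h = P_uy / P_uu, then estimate the additive
# uncertainty of a crude parametric model from the cross-spectrum of
# the input with the output error. Plant and model are hypothetical.
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs, n = 100.0, 200_000
u = rng.standard_normal(n)                 # stochastic input

b_true, a_true = signal.butter(4, 0.2)     # "true" plant p
y = signal.lfilter(b_true, a_true, u)

b_fit, a_fit = signal.butter(2, 0.2)       # reduced-order model p-hat
y_hat = signal.lfilter(b_fit, a_fit, u)
e = y - y_hat                              # output error

f, p_uu = signal.welch(u, fs=fs, nperseg=1024)
_, p_uy = signal.csd(u, y, fs=fs, nperseg=1024)
_, p_ue = signal.csd(u, e, fs=fs, nperseg=1024)

h_hat = p_uy / p_uu     # nonparametric plant estimate
delta = p_ue / p_uu     # estimated additive uncertainty p - p_hat
print(np.max(np.abs(delta)))
```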
Griscom, Bronson W; Ellis, Peter W; Baccini, Alessandro; Marthinus, Delon; Evans, Jeffrey S; Ruslandi
2016-01-01
Forest conservation efforts are increasingly being implemented at the scale of sub-national jurisdictions in order to mitigate global climate change and provide other ecosystem services. We see an urgent need for robust estimates of historic forest carbon emissions at this scale, as the basis for credible measures of climate and other benefits achieved. Despite the arrival of a new generation of global datasets on forest area change and biomass, confusion remains about how to produce credible jurisdictional estimates of forest emissions. We demonstrate a method for estimating the relevant historic forest carbon fluxes within the Regency of Berau in eastern Borneo, Indonesia. Our method integrates best available global and local datasets, and includes a comprehensive analysis of uncertainty at the regency scale. We find that Berau generated 8.91 ± 1.99 million tonnes of net CO2 emissions per year during 2000-2010. Berau is an early frontier landscape where gross emissions are 12 times higher than gross sequestration. Yet most (85%) of Berau's original forests are still standing. The majority of net emissions were due to conversion of native forests to unspecified agriculture (43% of total), oil palm (28%), and fiber plantations (9%). Most of the remainder was due to legal commercial selective logging (17%). Our overall uncertainty estimate offers an independent basis for assessing three other estimates for Berau. Two other estimates were above the upper end of our uncertainty range. We emphasize the importance of including an uncertainty range for all parameters of the emissions equation to generate a comprehensive uncertainty estimate-which has not been done before. We believe comprehensive estimates of carbon flux uncertainty are increasingly important as national and international institutions are challenged with comparing alternative estimates and identifying a credible range of historic emissions values.
Impact of uncertainty in soil, climatic, and chemical information in a pesticide leaching assessment
NASA Astrophysics Data System (ADS)
Loague, Keith; Green, Richard E.; Giambelluca, Thomas W.; Liang, Tony C.; Yost, Russell S.
1990-01-01
A simple mobility index, when combined with a geographic information system, can be used to generate rating maps which indicate qualitatively the potential for various organic chemicals to leach to groundwater. In this paper we investigate the magnitude of uncertainty associated with pesticide mobility estimates as a result of data uncertainties. Our example is for the Pearl Harbor Basin, Oahu, Hawaii. The two pesticides included in our analysis are atrazine (2-chloro-4-ethylamino-6-isopropylamino-s-triazine) and diuron [3-(3,4-dichlorophenyl)-1,1-dimethylurea]. The mobility index used here is known as the Attenuation Factor (AF); it requires soil, hydrogeologic, climatic, and chemical information as input data. We employ first-order uncertainty analysis to characterize the uncertainty in estimates of AF resulting from uncertainties in the various input data. Soils in the Pearl Harbor Basin are delineated at the order taxonomic category for this study. Our results show that there can be a significant amount of uncertainty in estimates of pesticide mobility for the Pearl Harbor Basin. This information needs to be considered if future decisions concerning chemical regulation are to be based on estimates of pesticide mobility determined from simple indices.
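First-order uncertainty analysis propagates input variances through the first derivatives of the index. The sketch below implements the generic Taylor-series formula with a placeholder index, not the actual AF expression from the paper:

```python
# Generic first-order (Taylor-series) uncertainty propagation:
# Var[f(X)] ~ sum_i (df/dx_i)^2 Var[x_i], derivatives via central
# finite differences. The leaching index is a placeholder.
import numpy as np

def first_order_variance(f, x, var_x, h=1e-6):
    x = np.asarray(x, float)
    grad = np.empty_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = h * max(1.0, abs(x[i]))
        grad[i] = (f(x + dx) - f(x - dx)) / (2.0 * dx[i])
    return np.sum(grad ** 2 * np.asarray(var_x, float))

# Placeholder index: decays with travel depth p[0] and sorption p[2],
# grows with recharge p[1] -- purely illustrative
index = lambda p: np.exp(-p[0] * p[2] / p[1])

x0 = np.array([2.0, 0.5, 0.1])          # nominal depth, recharge, sorption
var0 = np.array([0.25, 0.01, 0.0004])   # assumed input variances
v = first_order_variance(index, x0, var0)
print(f"index = {index(x0):.3f} +/- {np.sqrt(v):.3f}")
```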
Gupta, Himanshu; Schiros, Chun G; Sharifov, Oleg F; Jain, Apurva; Denney, Thomas S
2016-08-31
The recently released American College of Cardiology/American Heart Association (ACC/AHA) guideline recommends the Pooled Cohort equations for evaluating the atherosclerotic cardiovascular risk of individuals. The impact of uncertainties in the clinical input variables on estimates of ten-year cardiovascular risk based on the ACC/AHA guidelines is not known. Using the publicly available National Health and Nutrition Examination Survey dataset (2005-2010), we computed maximum and minimum ten-year cardiovascular risks by assuming clinically relevant variations/uncertainties in input age (0-1 year) and ±10% variation in total cholesterol, high-density lipoprotein cholesterol, and systolic blood pressure, and by assuming a uniform distribution of the variance of each variable. We analyzed the changes in risk category compared to the actual inputs at the 5% and 7.5% risk limits, as these limits define the thresholds for consideration of drug therapy in the new guidelines. The new pooled cohort equations for risk estimation were implemented in a custom software package. Based on our input variances, changes in risk category were possible in up to 24% of the population cohort at both the 5% and 7.5% risk boundary limits. This trend was consistently noted across all subgroups except in African American males, where most of the cohort had ≥7.5% baseline risk regardless of the variation in the variables. The uncertainties in the input variables can alter the risk categorization. The impact of these variances on the ten-year risk needs to be incorporated into the patient/clinician discussion and clinical decision making. Incorporating good clinical practices for the measurement of critical clinical variables and robust standardization of laboratory parameters to more stringent reference standards is extremely important for successful implementation of the new guidelines. Furthermore, the ability to customize the risk calculator inputs to better represent unique clinical circumstances specific to individual needs would be highly desirable in future versions of the risk calculator.
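The max/min sweep described above can be reproduced schematically. The risk function below is a hypothetical logistic stand-in; the actual Pooled Cohort equation coefficients are not reproduced here:

```python
# Sweep each input over its assumed uncertainty range, recompute risk
# at every corner combination, and flag category changes at the 5% and
# 7.5% limits. Risk model is a hypothetical illustration only.
import itertools
import math

def risk(age, total_chol, hdl, sbp):
    """Hypothetical, monotone ten-year risk model (not the Pooled
    Cohort equations)."""
    score = 0.06 * age + 0.008 * total_chol - 0.02 * hdl + 0.01 * sbp - 8.0
    return 1.0 / (1.0 + math.exp(-score))

def risk_range(age, tc, hdl, sbp):
    corners = itertools.product(
        (age, age + 1.0),                      # 0-1 year age uncertainty
        (0.9 * tc, 1.1 * tc), (0.9 * hdl, 1.1 * hdl),
        (0.9 * sbp, 1.1 * sbp))                # +/-10% lab/BP variation
    risks = [risk(*c) for c in corners]
    return min(risks), max(risks)

lo, hi = risk_range(age=55, tc=210, hdl=50, sbp=130)
for threshold in (0.05, 0.075):
    print(threshold, lo < threshold <= hi)     # True: category can flip
```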
Using Latent Class Analysis to Model Temperament Types.
Loken, Eric
2004-10-01
Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks was used for model selection. Results show at least three types of infant temperament, with patterns consistent with those identified by previous researchers who classified the infants using a theoretically based system. Multiple imputation of group memberships is proposed as an alternative to assigning subjects to the latent class with maximum posterior probability in order to reflect variance due to uncertainty in the parameter estimation. Latent class membership at four months of age predicted longitudinal outcomes at four years of age. The example illustrates issues relevant to all mixture models, including estimation, multi-modality, model selection, and comparisons based on the latent group indicators.
Optimization of Crew Shielding Requirement in Reactor-Powered Lunar Surface Missions
NASA Technical Reports Server (NTRS)
Barghouty, A. F.
2007-01-01
On the surface of the moon and not only during heightened solar activities the radiation environment is such that crew protection will be required for missions lasting in excess of six months. This study focuses on estimating the optimized crew shielding requirement for lunar surface missions with a nuclear option. Simple, transport-simulation based dose-depth relations of the three radiation sources (galactic, solar, and fission) are employed in a one-dimensional optimization scheme. The scheme is developed to estimate the total required mass of lunar regolith separating reactor from crew. The scheme was applied to both solar maximum and minimum conditions. It is shown that savings of up to 30% in regolith mass can be realized. It is argued, however, that inherent variation and uncertainty mainly in lunar regolith attenuation properties in addition to the radiation quality factor can easily defeat this and similar optimization schemes.
NASA Astrophysics Data System (ADS)
Morlot, Thomas; Perret, Christian; Favre, Anne-Catherine; Jalbert, Jonathan
2014-09-01
A rating curve is used to indirectly estimate the discharge in rivers based on water level measurements. The discharge values obtained from a rating curve include uncertainties related to the direct stage-discharge measurements (gaugings) used to build the curves, the quality of fit of the curve to these measurements and the constant changes in the river bed morphology. Moreover, the uncertainty of discharges estimated from a rating curve increases with the “age” of the rating curve. The level of uncertainty at a given point in time is therefore particularly difficult to assess. A “dynamic” method has been developed to compute rating curves while calculating associated uncertainties, thus making it possible to regenerate streamflow data with uncertainty estimates. The method is based on historical gaugings at hydrometric stations. A rating curve is computed for each gauging and a model of the uncertainty is fitted for each of them. The model of uncertainty takes into account the uncertainties in the measurement of the water level, the quality of fit of the curve, the uncertainty of gaugings and the increase of the uncertainty of discharge estimates with the age of the rating curve computed with a variographic analysis (Jalbert et al., 2011). The presented dynamic method can answer important questions in the field of hydrometry such as “How many gaugings a year are required to produce streamflow data with an average uncertainty of X%?” and “When and in what range of water flow rates should these gaugings be carried out?”. The Rocherousse hydrometric station (France, Haute-Durance watershed, 946 km2) is used as an example throughout the paper. Other stations are used to illustrate specific points.
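The starting point of such a method, fitting a rating curve to gaugings and quantifying its scatter, can be sketched as follows. The aging term from the variographic analysis is not reproduced, and the gaugings are invented:

```python
# Fit a power-law rating curve Q = a * (h - h0)^b to gaugings and use
# the residual scatter in log space as a first per-curve uncertainty
# estimate. Data are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def rating(h, a, h0, b):
    return a * np.clip(h - h0, 1e-6, None) ** b

stage = np.array([0.42, 0.55, 0.71, 0.90, 1.15, 1.40, 1.80])  # m
q = np.array([1.1, 2.0, 3.6, 6.2, 10.5, 15.8, 27.0])          # m3/s

popt, _ = curve_fit(rating, stage, q, p0=(10.0, 0.2, 1.8), maxfev=10_000)
resid = np.log(q) - np.log(rating(stage, *popt))
sigma_rel = np.std(resid, ddof=3)   # ~relative uncertainty of the curve

h_new = 1.0
q_new = rating(h_new, *popt)
print(f"Q({h_new} m) = {q_new:.1f} m3/s (+/- {100 * sigma_rel:.0f}% approx.)")
```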
Determination of Uncertainties for the New SSME Model
NASA Technical Reports Server (NTRS)
Coleman, Hugh W.; Hawk, Clark W.
1996-01-01
This report discusses the uncertainty analysis performed in support of a new test analysis and performance prediction model for the Space Shuttle Main Engine. The new model utilizes uncertainty estimates for experimental data and for the analytical model to obtain the most plausible operating condition for the engine system. This report discusses the development of the data sets and uncertainty estimates to be used in the development of the new model. It also presents the application of uncertainty analysis to analytical models, including the conservation of mass and energy balance relations, and a new methodology for assessing the uncertainty associated with linear regressions.
Puncher, M; Zhang, W; Harrison, J D; Wakeford, R
2017-06-26
Assessments of risk to a specific population group resulting from internal exposure to a particular radionuclide can be used to assess the reliability of the appropriate International Commission on Radiological Protection (ICRP) dose coefficients used as a radiation protection device for the specified exposure pathway. An estimate of the uncertainty on the associated risk is important for informing judgments on reliability; a derived uncertainty factor, UF, is an estimate of the 95% probable geometric difference between the best risk estimate and the nominal risk and is a useful tool for making this assessment. This paper describes the application of parameter uncertainty analysis to quantify uncertainties resulting from internal exposures to radioiodine by members of the public, specifically 1-, 10- and 20-year-old females from the population of England and Wales. Best estimates of thyroid cancer incidence risk (lifetime attributable risk) are calculated for ingestion or inhalation of 129I and 131I, accounting for uncertainties in biokinetic model and cancer risk model parameter values. These estimates are compared with the equivalent ICRP-derived nominal age-, sex- and population-averaged estimates of excess thyroid cancer incidence to obtain UFs. Derived UF values for ingestion or inhalation of 131I for 1-, 10- and 20-year-olds are around 28, 12 and 6, respectively, when compared with ICRP Publication 103 nominal values, and 9, 7 and 14, respectively, when compared with ICRP Publication 60 values. Broadly similar results were obtained for 129I. The uncertainties on risk estimates are largely determined by uncertainties on risk model parameters rather than uncertainties on biokinetic model parameters. An examination of the sensitivity of the results to the risk models and populations used in the calculations shows variations in the central estimates of risk of a factor of around 2-3. It is assumed that the direct proportionality of excess thyroid cancer risk and dose observed at low to moderate acute doses and incorporated in the risk models also applies to very small doses received at very low dose rates; the uncertainty in this assumption is considerable, but largely unquantifiable. The UF values illustrate the need for an informed approach to the use of ICRP dose and risk coefficients.
Friesz, Paul J.
2012-01-01
Three river basins in central Rhode Island-the Hunt River, the Annaquatucket River, and the Pettaquamscutt River-contain 15 production wells clustered in 4 pumping centers from which drinking water is withdrawn. These high-capacity production wells, operated by three water suppliers, are screened in coarse-grained deposits of glacial origin. The risk of contaminating water withdrawn by these well centers may be reduced if the areas contributing recharge to the well centers are delineated and these areas protected from land uses that may affect the water quality. The U.S. Geological Survey, in cooperation with the Rhode Island Department of Health, began an investigation in 2009 to improve the understanding of groundwater flow and delineate areas contributing recharge to the well centers as part of an effort to protect the source of water to these well centers. A groundwater-flow model was calibrated by inverse modeling using nonlinear regression to obtain the optimal set of parameter values, which provide a single, best representation of the area contributing recharge to a well center. Summary statistics from the calibrated model were used to evaluate the uncertainty associated with the predicted areas contributing recharge to the well centers. This uncertainty analysis was done so that the contributing areas to the well centers would not be underestimated, thereby leaving the well centers inadequately protected. The analysis led to contributing areas expressed as a probability distribution (probabilistic contributing areas) that differ from a single or deterministic contributing area. Groundwater flow was simulated in the surficial deposits and the underlying bedrock in the 47-square-mile study area. Observations (165 groundwater levels and 7 base flows) provided sufficient information to estimate parameters representing recharge and horizontal hydraulic conductivity of the glacial deposits and hydraulic conductance of streambeds. The calibrated value for recharge to valley-fill deposits was 27.3 inches per year (in/yr) and to upland till deposits was 18.7 in/yr. Calibrated values for horizontal hydraulic conductivity of the valley-fill deposits ranged from 20 to 480 feet per day (ft/d) and of the upland till deposits was 16.2 ft/d. Calibrated values of streambed hydraulic conductance ranged from 10,000 to 52,000 feet squared per day. Values of recharge and horizontal hydraulic conductivity of the valley-fill deposits were the most precisely estimated, whereas the horizontal hydraulic conductivity of till deposits was the least precisely estimated. Simulated areas contributing recharge to the well centers on the basis of the calibrated model ranged from 0.19 to 1.12 square miles (mi2) and covered a total area of 2.79 mi2 for average well center withdrawal rates during 2004-08 (235 to 1,858 gallons per minute (gal/min)). Simulated areas contributing recharge for the maximum well center pumping capacities (800 to 8,500 gal/min) ranged from 0.37 to 3.53 mi2 and covered a total area of 7.99 mi2 in the modeled area. Simulated areas contributing recharge extend upgradient of the well centers to upland till and to groundwater divides. Some areas contributing recharge include small, isolated areas remote from the well centers. 
Relatively short groundwater traveltimes from recharging locations to discharging wells indicated the wells are vulnerable to contamination from land-surface activities: median traveltimes ranged from 2.9 to 5.0 years for the well centers, and 78 to 93 percent of the traveltimes were 10 years or less for the well centers. Land cover in the areas contributing recharge includes a substantial amount of urban land use for the two well centers in the Hunt River Basin, agriculture and sand and gravel mining uses for the well center in the Annaquatucket River Basin, and, for the well center in the Pettaquamscutt River Basin, land use is primarily undeveloped. Model-prediction uncertainty was evaluated using a Monte Carlo analysis. The parameter variance-covariance matrix from nonlinear regression was used to create parameter sets that reflect the uncertainty of the parameter estimates and the correlation among parameters. The remaining parameters representing the glacial deposits (vertical anisotropy of valley-fill deposits and of till deposits, maximum groundwater evapotranspiration, and hydraulic conductance for head-dependent cells representing a groundwater divide) that could not be estimated with nonlinear regression were incorporated into the variance-covariance matrix using prior information on parameters. Thus the uncertainty analysis was an outcome of calibrating the parameters to available observations and to information that the modeler provided. A water budget and model-fit statistical criteria were used to assess parameter sets so that prediction uncertainty was not overestimated. Because of the effects of parameter uncertainty, the size of the probabilistic contributing areas for each well center for both average and maximum pumping rates was larger than the size of the deterministic contributing areas for the well center. Thus, some areas not in the deterministic contributing area may actually be in the contributing area, including additional areas of urban and agricultural land use. Generally, areas closest to the well centers with short groundwater traveltimes are associated with higher probabilities, whereas areas distant from the well centers with long groundwater traveltimes are associated with lower probabilities. The deterministic contributing areas generally corresponded to areas associated with high probabilities (greater than 50 percent). Areas associated with low probabilities extended long distances along groundwater divides in the uplands remote from the well centers.
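The Monte Carlo step described above can be sketched schematically: draw parameter sets from the multivariate normal implied by the regression estimates and their variance-covariance matrix, screen implausible sets, and accumulate cell-wise frequencies. The model call below is a placeholder for the MODFLOW simulation, and all numbers are illustrative:

```python
# Probabilistic contributing area from regression-based parameter
# uncertainty. The contributing-area function stands in for a full
# groundwater-model run.
import numpy as np

rng = np.random.default_rng(7)
theta_hat = np.array([27.3, 18.7, 120.0])   # recharge, till recharge, K
cov = np.diag([2.0, 3.5, 900.0])            # from nonlinear regression

def contributing_area(theta, shape=(50, 50)):
    """Placeholder: boolean grid of cells contributing to the wells."""
    radius = 5 + theta[2] / 40.0
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return (xx - 25) ** 2 + (yy - 25) ** 2 < radius ** 2

counts = np.zeros((50, 50))
n_kept = 0
for _ in range(1000):
    theta = rng.multivariate_normal(theta_hat, cov)
    if np.any(theta <= 0):                  # crude plausibility screen
        continue
    counts += contributing_area(theta)
    n_kept += 1

probability = counts / n_kept               # probabilistic contributing area
print((probability > 0.5).sum(), "cells with > 50% probability")
```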
Uncertainty information in climate data records from Earth observation
NASA Astrophysics Data System (ADS)
Merchant, C. J.
2017-12-01
How to derive and present uncertainty in climate data records (CDRs) has been debated within the European Space Agency Climate Change Initiative, in search of common principles applicable across a range of essential climate variables. Various points of consensus have been reached, including the importance of improving provision of uncertainty information and the benefit of adopting international norms of metrology for language around the distinct concepts of uncertainty and error. Providing an estimate of standard uncertainty per datum (or the means to readily calculate it) emerged as baseline good practice, and should be highly relevant to users of CDRs when the uncertainty in data is variable (the usual case). Given this baseline, the role of quality flags is clarified as being complementary to and not repetitive of uncertainty information. Data with high uncertainty are not poor quality if a valid estimate of the uncertainty is available. For CDRs and their applications, the error correlation properties across spatio-temporal scales present important challenges that are not fully solved. Error effects that are negligible in the uncertainty of a single pixel may dominate uncertainty in the large-scale and long-term. A further principle is that uncertainty estimates should themselves be validated. The concepts of estimating and propagating uncertainty are generally acknowledged in geophysical sciences, but less widely practised in Earth observation and development of CDRs. Uncertainty in a CDR depends in part (and usually significantly) on the error covariance of the radiances and auxiliary data used in the retrieval. Typically, error covariance information is not available in the fundamental CDR (FCDR) (i.e., with the level-1 radiances), since provision of adequate level-1 uncertainty information is not yet standard practice. Those deriving CDRs thus cannot propagate the radiance uncertainty to their geophysical products. The FIDUCEO project (www.fiduceo.eu) is demonstrating metrologically sound methodologies addressing this problem for four key historical CDRs. FIDUCEO methods of uncertainty analysis (which also tend to lead to improved FCDRs and CDRs) could support coherent treatment of uncertainty across FCDRs to CDRs and higher level products for a wide range of essential climate variables.
NASA Astrophysics Data System (ADS)
Schwabe, O.; Shehab, E.; Erkoyuncu, J.
2015-08-01
The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates, based on a literature review, an evaluation of publicly funded projects such as those in the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and on the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and the number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of theoretically grounded guidance for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis for future work in this field.
Uncertainty in predicting soil hydraulic properties at the hillslope scale with indirect methods
NASA Astrophysics Data System (ADS)
Chirico, G. B.; Medina, H.; Romano, N.
2007-02-01
Several hydrological applications require the characterisation of soil hydraulic properties at large spatial scales. Pedotransfer functions (PTFs) are being developed as simplified methods to estimate soil hydraulic properties as an alternative to direct measurements, which are unfeasible in most practical circumstances. The objective of this study is to quantify the uncertainty in PTF spatial predictions at the hillslope scale as related to the sampling density, due to: (i) the error in estimated soil physico-chemical properties and (ii) PTF model error. The analysis is carried out on a 2-km-long experimental hillslope in South Italy. The method adopted is based on a stochastic generation of patterns of soil variables using sequential Gaussian simulation, conditioned to the observed sample data. The following PTFs are applied: Vereecken's PTF [Vereecken, H., Diels, J., van Orshoven, J., Feyen, J., Bouma, J., 1992. Functional evaluation of pedotransfer functions for the estimation of soil hydraulic properties. Soil Sci. Soc. Am. J. 56, 1371-1378] and the HYPRES PTF [Wösten, J.H.M., Lilly, A., Nemes, A., Le Bas, C., 1999. Development and use of a database of hydraulic properties of European soils. Geoderma 90, 169-185]. The two PTFs reliably estimate the soil water retention characteristic even for a relatively coarse sampling resolution, with prediction uncertainties comparable to the uncertainties in direct laboratory or field measurements. The uncertainty of soil water retention prediction due to model error is as significant as, or more significant than, the uncertainty associated with the estimated input, even for a relatively coarse sampling resolution. Prediction uncertainties are much more important when PTFs are applied to estimate the saturated hydraulic conductivity. In this case, model error dominates the overall prediction uncertainty, making the effect of input error negligible.
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
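A bootstrap confidence interval for an efficiency gain can be sketched as follows, assuming batch-level scores are available from each run. The data here are synthetic stand-ins, and simple percentile intervals are used rather than the shortest interval of the study:

```python
# Resample per-batch scores of the two estimators, recompute the
# efficiency gain (variance x time ratio) for each resample, and take
# percentiles as a confidence interval.
import numpy as np

rng = np.random.default_rng(3)
dose_conv = rng.lognormal(0.0, 0.8, size=400)   # conventional MC batches
dose_cs = rng.lognormal(0.0, 0.3, size=400)     # correlated-sampling batches
t_conv, t_cs = 1.0, 1.4                         # relative CPU time per batch

def gain(a, b):
    return (np.var(a, ddof=1) * t_conv) / (np.var(b, ddof=1) * t_cs)

boot = []
n = dose_conv.size
for _ in range(5000):
    ia = rng.integers(0, n, n)
    ib = rng.integers(0, n, n)
    boot.append(gain(dose_conv[ia], dose_cs[ib]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"gain = {gain(dose_conv, dose_cs):.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```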
NASA Astrophysics Data System (ADS)
Osibanjo, Olabosipo O.
The objectives of this work are to calculate surface fluxes for rolling terrain using observational data collected during one week in September 2014 from a monitoring site in Echo, Oregon, and to investigate the log law in the ABL. The site is located in the Columbia Basin, with rolling terrain, irrigated farmland, and over 100 wind turbines. The 10 m tower was placed in a small valley depression to isolate nighttime temperature inversions. This thesis presents observations of momentum, sensible heat, moisture, and CO2 fluxes from data collected at a sampling frequency of 10 Hz at four heights. Results show a strong correlation between temperature inversions and CO2 flux. A log layer could not be identified, as the estimated von Karman constant (~0.62) is far from the accepted value of 0.41. The impact of the irrigated farmland near the measurement site was observed in the latent heat flux, where the advection of moisture was evident in the tower moisture gradient. A strong relationship was also observed between fluxes of sensible heat, latent heat, CO2, and atmospheric stability. The average nighttime CO2 concentration observed was ~407 ppm, and the daytime average ~388 ppm, compared to the 2013 global average CO2 concentration of 395 ppm. The maximum CO2 concentration (~485 ppm) was observed on the night with the strongest temperature inversion. There are a few sources of uncertainty in the measurements. The manufacturer of the eddy covariance instruments (EC150) quotes an uncertainty of +/- 0.1°C for temperatures between 0°C and 40°C. Error bars were generated on the estimated surface sensible heat flux using the standard deviation and mean values. Under the most stable atmospheric conditions, uncertainty (assumed to be the variability in the flux estimates) was close to the minimum (~+/- 5 W m-2). (Abstract shortened by ProQuest.)
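The log-law check described above amounts to regressing mean wind speed on ln(z) and comparing the implied von Karman constant with 0.41. A minimal sketch with illustrative numbers (not the thesis data):

```python
# Log law of the wall: u(z) = (u*/kappa) * ln(z/z0). Fit a line to
# u vs ln(z) at the four heights; kappa follows from u* / slope.
import numpy as np

z = np.array([1.5, 3.0, 6.0, 10.0])     # measurement heights (m), invented
u = np.array([2.1, 2.6, 3.0, 3.4])      # mean wind speeds (m/s), invented
u_star = 0.28                           # friction velocity from EC (m/s)

slope, intercept = np.polyfit(np.log(z), u, 1)
kappa = u_star / slope                  # estimated von Karman constant
z0 = np.exp(-intercept / slope)         # implied roughness length
print(f"kappa = {kappa:.2f}, z0 = {z0:.3f} m")
```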
NASA Astrophysics Data System (ADS)
Arnaud, Patrick; Cantet, Philippe; Odry, Jean
2017-11-01
Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with the use of a statistical law with two parameters (here generalised extreme value Type I distribution) and clearly lower than those associated with the use of a three-parameter law (here generalised extreme value Type II distribution). For extreme flood quantiles, the uncertainties are mostly due to the rainfall generator because of the progressive saturation of the hydrological model.
Characterizing Epistemic Uncertainty for Launch Vehicle Designs
NASA Technical Reports Server (NTRS)
Novack, Steven D.; Rogers, Jim; Al Hassan, Mohammad; Hark, Frank
2016-01-01
NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk, and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counter intuitive results, and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods, such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty, are rendered obsolete, since sensitivity to uncertainty changes are not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis of the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper describes how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.
A review of the generalized uncertainty principle.
Tawfik, Abdel Nasser; Diab, Abdel Magied
2015-12-01
Based on string theory, black hole physics, doubly special relativity and some 'thought' experiments, minimal distance and/or maximum momentum are proposed. As alternatives to the generalized uncertainty principle (GUP), the modified dispersion relation, the space noncommutativity, the Lorentz invariance violation, and the quantum-gravity-induced birefringence effects are summarized. The origin of minimal measurable quantities and the different GUP approaches are reviewed and the corresponding observations are analysed. Bounds on the GUP parameter are discussed and implemented in the understanding of recent PLANCK observations of cosmic inflation. The higher-order GUP approaches predict minimal length uncertainty with and without maximum momenta. Possible arguments against the GUP are discussed; for instance, the concern about its compatibility with the equivalence principles, the universality of gravitational redshift and the free fall and law of reciprocal action are addressed.
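A commonly quoted GUP form, shown here as a representative expression rather than the review's own notation, modifies the Heisenberg relation with a momentum-dependent term and implies a minimal measurable length:

```latex
% Representative GUP with a minimal-length parameter beta:
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}\left[\,1 + \beta\,(\Delta p)^{2}\right],
\qquad
\Delta x_{\min} \;=\; \hbar\sqrt{\beta}.
```

Minimizing the right-hand side over Δp (the minimum occurs at Δp = 1/√β) yields the minimal position uncertainty ħ√β, which is the sense in which the GUP encodes a minimal measurable length.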
Reduction in maximum time uncertainty of paired time signals
Theodosiou, G.E.; Dawson, J.W.
1981-02-11
Reduction in the maximum time uncertainty (tmax - tmin) of a series of paired time signals t1 and t2 varying between two input terminals and representative of a series of single events, where t1 ≤ t2 and t1 + t2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal-selecting means and one or more time delays to increase the minimum value (tmin) of the first signal t1 closer to tmax and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20 to 800.
An improved non-Markovian degradation model with long-term dependency and item-to-item uncertainty
NASA Astrophysics Data System (ADS)
Xi, Xiaopeng; Chen, Maoyin; Zhang, Hanwen; Zhou, Donghua
2018-05-01
It is widely noted in the literature that degradation is usually simplified into a memoryless Markovian process for the purpose of predicting the remaining useful life (RUL). However, long-term dependency actually exists in the degradation processes of some industrial systems, including electromechanical equipment, oil tankers, and large blast furnaces. This implies the new degradation state depends not only on the current state, but also on the historical states. Such dynamic systems cannot be accurately described by traditional Markovian models. Here we present an improved non-Markovian degradation model with both long-term dependency and item-to-item uncertainty. As a typical non-stationary process with dependent increments, fractional Brownian motion (FBM) is utilized to simulate the fractal diffusion of practical degradation processes. The uncertainty among multiple items can be represented by a random variable of the drift. Based on this model, the unknown parameters are estimated through the maximum likelihood (ML) algorithm, while a closed-form solution to the RUL distribution is further derived using a weak convergence theorem. The practicability of the proposed model is fully verified by two real-world examples. The results demonstrate that the proposed method can effectively reduce the prediction error.
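The model family described above can be sketched directly: a random drift for item-to-item uncertainty plus an FBM term for long-term dependence. FBM is generated exactly (for modest n) from the Cholesky factor of its covariance; all parameter values are illustrative:

```python
# Degradation paths X(t) = x0 + drift*t + sigma*B_H(t), with the drift
# drawn per item and B_H a fractional Brownian motion (H > 0.5 gives
# positively correlated increments, i.e. long-term dependence).
import numpy as np

def fbm_cholesky(n, hurst, rng):
    """Exact FBM sample on t = 1..n via Cholesky of the FBM covariance."""
    t = np.arange(1, n + 1, dtype=float)
    s, tt = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * hurst) + tt ** (2 * hurst)
                 - np.abs(s - tt) ** (2 * hurst))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

rng = np.random.default_rng(5)
n, hurst = 200, 0.75
t = np.arange(1, n + 1)

paths = []
for _ in range(3):                       # three "items"
    drift = rng.normal(0.05, 0.01)       # unit-to-unit drift uncertainty
    paths.append(1.0 + drift * t + 0.3 * fbm_cholesky(n, hurst, rng))

threshold = 12.0                         # failure threshold, illustrative
for i, x in enumerate(paths):
    hit = int(np.argmax(x >= threshold)) if np.any(x >= threshold) else None
    print(f"item {i}: first passage at t = {hit}")
```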
Dealing with uncertainty in the probability of overtopping of a flood mitigation dam
NASA Astrophysics Data System (ADS)
Michailidi, Eleni Maria; Bacchi, Baldassare
2017-05-01
In recent years, copula multivariate functions have been used to model, probabilistically, the most important variables of flood events: discharge peak, flood volume and duration. However, in most cases the sampling uncertainty, from which small samples suffer, is neglected. In this paper, considering a real reservoir controlled by a dam as a case study, we apply a structure-based approach to estimate the probability of reaching specific reservoir levels, taking into account the key components of an event (flood peak, volume, hydrograph shape) and of the reservoir (rating curve, volume-water depth relation). Additionally, we improve information about the peaks from historical data and reports through a Bayesian framework, allowing the incorporation of supplementary knowledge from different sources and its associated error. As shown here, the extra information can result in a very different inferred parameter set, and consequently this is reflected as a strong variability of the reservoir level associated with a given return period. Most importantly, the sampling uncertainty is accounted for in both cases (single-site, and multi-site with historical information), and Monte Carlo confidence intervals for the maximum water level are calculated. It is shown that water levels of specific return periods overlap in many cases, making risk assessment deceptive unless confidence intervals are provided.
Study of the uncertainty in estimation of the exposure of non-human biota to ionising radiation.
Avila, R; Beresford, N A; Agüero, A; Broed, R; Brown, J; Iospje, M; Robles, B; Suañez, A
2004-12-01
Uncertainty in estimations of the exposure of non-human biota to ionising radiation may arise from a number of sources including values of the model parameters, empirical data, measurement errors and biases in the sampling. The significance of the overall uncertainty of an exposure assessment will depend on how the estimated dose compares with reference doses used for risk characterisation. In this paper, we present the results of a study of the uncertainty in estimation of the exposure of non-human biota using some of the models and parameters recommended in the FASSET methodology. The study was carried out for semi-natural terrestrial, agricultural and marine ecosystems, and for four radionuclides (137Cs, 239Pu, 129I and 237Np). The parameters of the radionuclide transfer models showed the highest sensitivity and contributed the most to the uncertainty in the predictions of doses to biota. The most important ones were related to the bioavailability and mobility of radionuclides in the environment, for example soil-to-plant transfer factors, the bioaccumulation factors for marine biota and the gut uptake fraction for terrestrial mammals. In contrast, the dose conversion coefficients showed low sensitivity and contributed little to the overall uncertainty. Radiobiological effectiveness contributed to the overall uncertainty of the dose estimations for alpha emitters although to a lesser degree than a number of transfer model parameters.
NASA Astrophysics Data System (ADS)
Di Vittorio, A. V.; Mao, J.; Shi, X.; Chini, L.; Hurtt, G.; Collins, W. D.
2018-01-01
Previous studies have examined land use change as a driver of global change, but the translation of land use change into land cover conversion has been largely unconstrained. Here we quantify the effects of land cover conversion uncertainty on the global carbon and climate system using the integrated Earth System Model. Our experiments use identical land use change data and vary land cover conversions to quantify associated uncertainty in carbon and climate estimates. Land cover conversion uncertainty is large, constitutes a 5 ppmv range in estimated atmospheric CO2 in 2004, and generates carbon uncertainty that is equivalent to 80% of the net effects of CO2 and climate and 124% of the effects of nitrogen deposition during 1850-2004. Additionally, land cover uncertainty generates differences in local surface temperature of over 1°C. We conclude that future studies addressing land use, carbon, and climate need to constrain and reduce land cover conversion uncertainties.
Agriculture-driven deforestation in the tropics from 1990-2015: emissions, trends and uncertainties
NASA Astrophysics Data System (ADS)
Carter, Sarah; Herold, Martin; Avitabile, Valerio; de Bruin, Sytze; De Sy, Veronique; Kooistra, Lammert; Rufino, Mariana C.
2018-01-01
Limited data exist on emissions from agriculture-driven deforestation, and the available data are typically uncertain. In this paper, we provide comparable estimates of emissions from both all deforestation and agriculture-driven deforestation, with uncertainties, for 91 countries across the tropics between 1990 and 2015. Uncertainties associated with the input datasets (activity data and emissions factors) were used to combine the datasets, so that the most certain datasets contribute the most. This method utilizes all the input data while minimizing the uncertainty of the emissions estimate. The uncertainty of the input datasets was influenced by the quality of the data, the sample size (for sample-based datasets), and the extent to which the timeframe of the data matches the period of interest. The area of deforestation and the agriculture-driver factor (the extent to which agriculture drives deforestation) were the most uncertain components of the emissions estimates, so improvements in these inputs will provide the greatest reductions in the uncertainty of the emissions estimates. Over the period of the study, Latin America had the highest proportion of deforestation driven by agriculture (78%) and Africa the lowest (62%). Latin America had the highest emissions from agriculture-driven deforestation, which peaked at 974 ± 148 Mt CO2 yr-1 in 2000-2005. Africa saw a continuous increase in emissions between 1990 and 2015 (from 154 ± 21 to 412 ± 75 Mt CO2 yr-1), so mitigation initiatives could be prioritized there. Uncertainties for emissions from agriculture-driven deforestation average ± 62.4% over 1990-2015, and were highest in Asia and lowest in Latin America. Uncertainty information is crucial for transparency when reporting and gives credibility to related mitigation initiatives. We demonstrate that uncertainty data can also be useful when combining multiple open datasets, so we recommend that new data providers include this information.
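The abstract does not spell out the weighting formula, but the description ("most certain datasets contribute the most", "minimizing the uncertainty of the emissions estimate") matches standard inverse-variance weighting. A sketch under that assumption, with hypothetical dataset values:

```python
import numpy as np

def combine_estimates(means, sds):
    """Inverse-variance weighted combination: the datasets with the smallest
    uncertainty contribute the most, and the combined variance is minimized."""
    means, sds = np.asarray(means, float), np.asarray(sds, float)
    w = 1.0 / sds**2
    mean = np.sum(w * means) / np.sum(w)
    sd = np.sqrt(1.0 / np.sum(w))
    return mean, sd

# Illustrative deforestation-area estimates (Mha) from three input datasets.
mean, sd = combine_estimates([2.1, 1.8, 2.4], [0.4, 0.2, 0.6])
print(f"combined: {mean:.2f} ± {sd:.2f} Mha")
```

Among linear combinations of unbiased estimates, the inverse-variance weights are the ones that minimize the variance of the result, which is consistent with the stated goal of using all the input data while minimizing the uncertainty of the combined estimate.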
Simon, Steven L; Hoffman, F Owen; Hofer, Eduard
2015-01-01
Retrospective dose estimation, particularly dose reconstruction that supports epidemiological investigations of health risk, relies on various strategies that include models of physical processes and exposure conditions with detail ranging from simple to complex. Quantification of dose uncertainty is an essential component of assessments for health risk studies since, as is well understood, it is impossible to retrospectively determine the true dose for each person. To address uncertainty in dose estimation, numerical simulation tools have become commonplace, and there is now a better understanding of what is required of models used to estimate cohort doses (in the absence of direct measurement) for evaluating dose response. For dose-response algorithms to derive the best, unbiased estimate of health risk, we need to understand the type, magnitude and interrelationships of the uncertainties in the model assumptions, parameters and input data used in the associated dose estimation models. Heretofore, uncertainty analysis of dose estimates did not always properly distinguish between categories of errors: uncertainty that is specific to each subject (unshared error), and uncertainty arising from a lack of knowledge about parameter values that are shared, to varying degrees, by subsets of the cohort. While mathematical propagation of errors by Monte Carlo simulation has been used for years to estimate the uncertainty of an individual subject's dose, it was almost always conducted without consideration of dependencies between subjects. In retrospect, such simple analyses are not suitable for studies with complex dose models, particularly when important input data are missing or otherwise unavailable. The dose estimation strategy presented here is a simulation method that corrects these deficiencies of analytical or simple Monte Carlo error propagation and is termed, for its capability to maintain separation between shared and unshared errors, the two-dimensional Monte Carlo (2DMC) procedure. Simply put, the 2DMC method simulates alternative, possibly true, sets (or vectors) of doses for an entire cohort rather than the single set that emerges when each individual's dose is estimated independently of other subjects. Moreover, estimated doses within each simulated vector maintain proper inter-relationships, such that the estimated doses for members of a cohort subgroup who share common lifestyle attributes and sources of uncertainty are properly correlated. The 2DMC procedure simulates inter-individual variability of possibly true doses within each dose vector and captures the influence of uncertainty in the values of dosimetric parameters across multiple realizations of possibly true vectors of cohort doses. The primary characteristic of the 2DMC approach, and its principal strength, is the proper separation between uncertainties shared by members of the entire cohort or of defined cohort subsets, and uncertainties that are individual-specific and therefore unshared.
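A minimal numerical sketch of the shared/unshared separation at the heart of 2DMC, assuming hypothetical lognormal error magnitudes (the actual dose models and parameter distributions are far more detailed):

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_vectors = 100, 500

# Outer dimension: shared uncertainty -- one draw per realization, applied to
# the whole cohort (e.g., a calibration factor common to everyone).
shared_bias = rng.lognormal(mean=0.0, sigma=0.3, size=n_vectors)

# Inner dimension: unshared, subject-specific uncertainty -- independent
# draws for each subject within each realization.
unshared = rng.lognormal(mean=0.0, sigma=0.2, size=(n_vectors, n_subjects))

base_dose = rng.uniform(5.0, 50.0, size=n_subjects)  # nominal doses, illustrative

# Each row is one "possibly true" dose vector for the entire cohort; doses
# within a row are correlated through the shared factor.
dose_vectors = shared_bias[:, None] * unshared * base_dose[None, :]
print(dose_vectors.shape)  # (500, 100): 500 cohort-wide realizations
```

Fitting the dose-response model once per row and summarizing across rows, rather than pooling all draws into a single per-subject distribution, is what preserves the effect of shared errors on the risk estimate.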
NASA Astrophysics Data System (ADS)
Swarnkar, Somil; Malini, Anshu; Tripathi, Shivam; Sinha, Rajiv
2018-04-01
High soil erosion and excessive sediment load are serious problems in several Himalayan river basins. To apply mitigation procedures, precise estimation of soil erosion and sediment yield with associated uncertainties is needed. Here, the revised universal soil loss equation (RUSLE) and sediment delivery ratio (SDR) equations are used to estimate the spatial pattern of soil erosion (SE) and sediment yield (SY) in the Garra River basin, a small Himalayan tributary of the River Ganga. A methodology is proposed for quantifying and propagating uncertainties in the SE, SDR and SY estimates. Expressions for uncertainty propagation are derived by first-order uncertainty analysis, making the method viable even for large river basins. The methodology is applied to investigate the relative importance of different RUSLE factors in estimating the magnitude and uncertainties in SE over two distinct morphoclimatic regimes of the Garra River basin, namely the upper mountainous region and the lower alluvial plains. Our results suggest that average SE in the basin is very high (23 ± 4.7 t ha⁻¹ yr⁻¹), with higher values in the upper mountainous region (92 ± 15.2 t ha⁻¹ yr⁻¹) than in the lower alluvial plains (19.3 ± 4 t ha⁻¹ yr⁻¹). Furthermore, the topographic steepness (LS) and crop practice (CP) factors exhibit higher uncertainties than the other RUSLE factors. The annual average SY is estimated at two locations in the basin: Nanak Sagar Dam (NSD) for the period 1962-2008 and Husepur gauging station (HGS) for 1987-2002. The SY at NSD and HGS is estimated to be 6.9 ± 1.2 × 10⁵ t yr⁻¹ and 6.7 ± 1.4 × 10⁶ t yr⁻¹, respectively, and the estimated 90% intervals contain the observed values of 6.4 × 10⁵ t yr⁻¹ and 7.2 × 10⁶ t yr⁻¹, respectively. The study demonstrates the usefulness of the proposed methodology for quantifying uncertainty in SE and SY estimates at ungauged basins.
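The paper's exact propagation expressions are not reproduced here, but for a multiplicative model such as RUSLE (SE = R · K · LS · C · P), first-order analysis reduces to summing squared relative uncertainties: (sd_SE/SE)² ≈ Σᵢ (sdᵢ/xᵢ)². A sketch with illustrative factor values:

```python
import numpy as np

# Illustrative (value, standard deviation) pairs for the RUSLE factors.
factors = {
    "R":  (1200.0, 120.0),  # rainfall erosivity
    "K":  (0.030, 0.004),   # soil erodibility
    "LS": (4.0, 0.8),       # topographic steepness (often most uncertain)
    "C":  (0.25, 0.05),     # cover/crop practice
    "P":  (0.9, 0.05),      # support practice
}

# SE is the product of the factors; first-order propagation for a product
# sums the squared relative uncertainties of the inputs.
se = np.prod([v for v, _ in factors.values()])
rel_var = sum((sd / v) ** 2 for v, sd in factors.values())
sd_se = se * np.sqrt(rel_var)
print(f"SE = {se:.1f} ± {sd_se:.1f} t/ha/yr")
```

Because only the relative variances enter the sum, the factors with the largest relative uncertainty (here LS and the crop practice term, consistent with the abstract) dominate the uncertainty in SE.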
Park, Eun Sug; Hopke, Philip K; Oh, Man-Suk; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford H
2014-07-01
There has been increasing interest in assessing health effects associated with multiple air pollutants emitted by specific sources. A major difficulty in achieving this goal is that pollution source profiles are unknown and source-specific exposures cannot be measured directly; rather, they must be estimated by decomposing ambient measurements of multiple air pollutants. This estimation process, called multivariate receptor modeling, is challenging because of the unknown number of sources and unknown identifiability conditions (model uncertainty). The uncertainty in source-specific exposures (source contributions), as well as the uncertainty in the number of major pollution sources and the identifiability conditions, has been largely ignored in previous studies. This paper presents a multipollutant approach that can deal with model uncertainty in multivariate receptor models while simultaneously accounting for parameter uncertainty in estimated source-specific exposures when assessing source-specific health effects. The methods are applied to daily ambient measurements of the chemical composition of fine particulate matter (PM2.5), weather data, and counts of cardiovascular deaths from 1995 to 1997 for Phoenix, AZ, USA. Our approach for evaluating source-specific health effects yields not only estimates of source contributions, their uncertainties, and the associated health effects estimates, but also estimates of model uncertainty (posterior model probabilities) that previous studies have ignored. The results from our methods agreed in general with those from previously conducted workshops/studies on the source apportionment of PM health effects in terms of the number of major contributing sources, estimated source profiles, and contributions. However, some of the adverse source-specific health effects identified in previous studies were not statistically significant in our analysis, probably because we incorporated into the estimation of the health effects parameters the uncertainty in estimated source contributions that previous studies ignored. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
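As a rough illustration of the decomposition step only: ambient concentrations X are factored into nonnegative source contributions G and source profiles F, with the number of sources itself unknown. Plain NMF is used below as a stand-in; the paper's Bayesian treatment additionally yields posterior model probabilities and parameter uncertainties, which this sketch does not attempt.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
# Hypothetical ambient data: 365 days x 10 chemical species (nonnegative).
X = rng.gamma(shape=2.0, scale=1.0, size=(365, 10))

# Decompose X ≈ G @ F: rows of F are source profiles, columns of G are
# daily source contributions. The number of sources q is uncertain, so
# several candidate models are fit and compared.
for q in (3, 4, 5):
    model = NMF(n_components=q, init="nndsvda", max_iter=500, random_state=0)
    G = model.fit_transform(X)   # source contributions (exposures)
    F = model.components_        # source profiles
    print(q, f"reconstruction error = {model.reconstruction_err_:.2f}")
```

In the paper's framework, rather than picking one q by reconstruction error, the candidate models are weighted by posterior model probabilities so that the health-effect estimates reflect the uncertainty about q and the identifiability conditions.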
Modeling of structural uncertainties in Reynolds-averaged Navier-Stokes closures
NASA Astrophysics Data System (ADS)
Emory, Michael; Larsson, Johan; Iaccarino, Gianluca
2013-11-01
Estimation of the uncertainty in numerical predictions by Reynolds-averaged Navier-Stokes closures is a vital step in building confidence in such predictions. An approach to model-form uncertainty quantification that does not assume the eddy-viscosity hypothesis to be exact is proposed. The methodology for estimation of uncertainty is demonstrated for plane channel flow, for a duct with secondary flows, and for the shock/boundary-layer interaction over a transonic bump.
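The "approach that does not assume the eddy-viscosity hypothesis to be exact" is, in this line of work, an eigenvalue perturbation of the Reynolds stress anisotropy toward the limiting states of realizable turbulence. A sketch of that idea with illustrative anisotropy values, not the paper's implementation:

```python
import numpy as np

# Limiting states of the anisotropy eigenvalues (one-, two- and
# three-component turbulence), i.e. the corners of the barycentric triangle.
CORNERS = {
    "1C": np.array([2/3, -1/3, -1/3]),
    "2C": np.array([1/6,  1/6, -1/3]),
    "3C": np.array([0.0,  0.0,  0.0]),
}

def perturb_anisotropy(b, corner, delta):
    """Move the eigenvalues of the anisotropy tensor b a fraction delta toward
    a limiting state, keeping the eigenvectors fixed. The spread of predictions
    over such perturbed closures brackets the structural uncertainty."""
    lam, v = np.linalg.eigh(b)                 # eigenvalues in ascending order
    lam_star = (1 - delta) * lam + delta * np.sort(CORNERS[corner])
    return v @ np.diag(lam_star) @ v.T

# Example: a trace-free anisotropy state perturbed 30% toward the 1C limit.
b = np.diag([0.2, -0.05, -0.15])
print(perturb_anisotropy(b, "1C", 0.3))
```

Because the perturbation acts on the eigenvalues while preserving the zero trace, each perturbed state remains a realizable Reynolds stress, and re-running the flow solver with a few such states (e.g., toward each corner) yields interval estimates for quantities of interest without assuming the baseline eddy-viscosity closure is exact.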